
Dissertations / Theses on the topic 'Wavelet coefficients'


Consult the top 50 dissertations / theses for your research on the topic 'Wavelet coefficients.'


1

Er, Chiangkai. "Speech recognition by clustering wavelet and PLP coefficients." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/42742.

2

Al-Jawad, Naseer. "Exploiting statistical properties of wavelet coefficients for image/video processing and analysis tasks." Thesis, University of Buckingham, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601354.

Abstract:
In this thesis the statistical properties of the high-frequency sub-bands of the wavelet transform are exploited in three different applications: feature-preserving image/video compression, face-biometric content-based video retrieval, and facial feature extraction for face verification and recognition. The central idea of the thesis has also been used in watermarking, where the watermark can be hidden automatically near the significant features in the wavelet sub-bands (Dietze 2005), and in image compression, where special integer compression is applied on low-constrained devices (Ehlers 2008). In image quality measurement, the Laplace Distribution Histogram (LDH) is likewise used: the theoretical LDH of any high-frequency wavelet sub-band matches the histogram produced from the same sub-band of a high-quality picture, whereas a noisy or blurred image yields an LDH that must be fitted to the theoretical one (Wang and Simoncelli 2005). While some research has used features of the high-frequency wavelet sub-bands implicitly, this thesis focuses explicitly on the statistical properties of the sub-bands of the multi-resolution wavelet transform. The fact that the coefficients of each high-frequency wavelet sub-band follow a Laplace distribution (LD, also called the generalised Gaussian distribution) has been noted in the literature; here the relation between the statistical properties of the high-frequency sub-bands and feature extraction is firmly established. The LDH has two tails, which make its shape either symmetrical or skewed to the left or right; this symmetry or skewing is normally around the mean, which is theoretically equal to zero. In this study we pay close attention to these tails: they represent the significant image features, which can be mapped from the wavelet domain to the spatial domain. The features can then be maintained, accessed, and modified very easily using a suitable threshold.
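The tail-thresholding idea lends itself to a compact illustration. Below is a minimal Python sketch (assuming NumPy, PyWavelets, and SciPy): fit a Laplace distribution to one high-frequency sub-band and flag the coefficients lying in the histogram tails as candidate features. The random stand-in image and the three-scale-unit cutoff are illustrative assumptions, not the thesis's actual rule.

```python
# Minimal sketch: wavelet detail sub-bands are approximately Laplace-distributed,
# and the histogram tails mark significant image features. The input image and
# the 3*scale cutoff below are illustrative assumptions.
import numpy as np
import pywt
from scipy import stats

image = np.random.rand(256, 256)             # stand-in for a real grayscale image
cA, (cH, cV, cD) = pywt.dwt2(image, 'db2')   # one-level 2-D DWT

loc, scale = stats.laplace.fit(cH.ravel())   # fitted mean is theoretically ~0
feature_mask = np.abs(cH - loc) > 3 * scale  # coefficients in the tails ~ features
print(feature_mask.mean())                   # fraction of the sub-band flagged
```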
3

Al-Jawad, Naseer. "Exploiting Statistical Properties of Wavelet Coefficients for Image/Video Processing and Analysis Tasks." Thesis, University of Exeter, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515492.

4

Chrápek, Tomáš. "Potlačování šumu v řeči založené na waveletové transformaci a rozeznávání znělosti segmentů" [Speech noise suppression based on the wavelet transform and voicing recognition of segments]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217506.

Abstract:
The wavelet transform is a modern signal processing tool that owes its success mainly to unique properties such as the ability to capture very fast changes in the processed signal. The theoretical part of this work introduces wavelet theory: wavelet types, the wavelet transform, and its application in signal denoising systems. A central problem of speech denoising is identified: degradation of the speech signal when denoising unvoiced segments, which occurs because unvoiced speech and noise have very similar characteristics. The solution is to treat voiced and unvoiced segments of the speech differently. The main goal of this diploma thesis was to create an application implementing speech denoising using the wavelet transform, with particular attention paid to treating voiced and unvoiced segments differently. The application is programmed as a graphical user interface (GUI) in the MATLAB environment, which lets users test the presented procedures comfortably. The work presents the achieved results and discusses them against the general requirements posed on an application of this type. The most important conclusion of the thesis is that a trade-off has to be made between sufficient denoising and keeping the speech intelligible.
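As a rough illustration of treating voiced and unvoiced segments differently, here is a minimal Python sketch using PyWavelets. The zero-crossing-rate voicing test and the threshold constants are illustrative assumptions, not the thesis's algorithm.

```python
# Minimal sketch of wavelet speech denoising with a lower (gentler) threshold
# on unvoiced segments, to avoid stripping unvoiced speech along with the noise.
import numpy as np
import pywt

def is_voiced(seg):
    zcr = np.mean(np.abs(np.diff(np.sign(seg)))) / 2   # zero-crossing rate
    return zcr < 0.1                                   # unvoiced speech/noise crosses often

def denoise_segment(seg, voiced, wavelet='db4', level=4):
    coeffs = pywt.wavedec(seg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate, finest scale
    k = 3.0 if voiced else 1.5                         # illustrative constants
    thr = k * sigma * np.sqrt(2 * np.log(len(seg)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(seg)]

noisy = np.random.randn(4096)                          # stand-in for noisy speech
frames = noisy.reshape(-1, 512)
clean = np.concatenate([denoise_segment(f, is_voiced(f)) for f in frames])
```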
5

Janajreh, Isam Mustafa II. "Wavelet Analysis of Extreme Wind Loads on Low-Rise Structures." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30414.

Abstract:
Over the past thirty years, extensive research has been conducted with the objective of reducing wind damage to structures. Wind tunnel simulations of wind loads have been the major source of building codes. However, a simple comparison of pressure coefficients measured in wind tunnel simulations with full-scale measurements shows that the simulations, in general, underpredict extreme negative pressure coefficients. One obvious reason is the lack of consensus on wind tunnel simulation parameters. The wind in the atmospheric surface layer is highly turbulent. In simulating wind loads on structures, one needs to reproduce this turbulent character besides satisfying geometric and dynamic similitude. Turbulence parameters considered in many simulations include turbulence intensities, integral length scales, surface roughness, and the frequency spectrum. One problem with these parameters is that they vary over time in the atmospheric boundary layer, and their averaged values, usually adopted in wind tunnel simulations, cannot be used to simulate pressure peaks. In this work, we show how wavelet analysis and time-scale representation can be used to establish an intermittency factor that characterizes energetic turbulence events in atmospheric flows. Moreover, we relate these events to the occurrence of extreme negative peak pressures.
Ph. D.
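A wavelet-based intermittency factor of the kind described above can be sketched in a few lines of Python using PyWavelets' continuous transform. The synthetic velocity record and the factor-of-3 energy threshold are illustrative assumptions, not the dissertation's definition.

```python
# Minimal sketch of a wavelet intermittency factor: flag instants whose local
# wavelet energy exceeds a multiple of the scale's mean energy.
import numpy as np
import pywt

u = np.random.randn(2**14)                   # stand-in for a wind velocity record
coefs, freqs = pywt.cwt(u, np.arange(1, 64), 'morl')

energy = coefs**2
events = energy > 3 * energy.mean(axis=1, keepdims=True)   # energetic events
intermittency = events.mean(axis=1)          # intermittency factor per scale
print(intermittency[:5])
```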
6

Konczi, Róbert. "Digitální hudební efekt založený na waveletové transformaci jako plug-in modul" [A digital music effect based on the wavelet transform as a plug-in module]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218981.

Abstract:
This work deals with the theory of the wavelet transform and Mallat's algorithm. It also covers the programming of VST plug-in modules and describes the development of a plug-in that applies a music effect by modifying the wavelet transform coefficients.
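The core of such a plug-in, stripped of all VST scaffolding, can be sketched in Python with PyWavelets: decompose with Mallat's algorithm, scale the coefficients, and resynthesize. The per-band gains are illustrative assumptions standing in for the actual effect parameters.

```python
# Minimal sketch of the plug-in's core: Mallat decomposition, coefficient
# modification, resynthesis. The per-band gains are illustrative assumptions.
import numpy as np
import pywt

def wavelet_effect(audio, gains, wavelet='db8'):
    coeffs = pywt.wavedec(audio, wavelet, level=len(gains) - 1)
    coeffs = [g * c for g, c in zip(gains, coeffs)]    # modify the coefficients
    return pywt.waverec(coeffs, wavelet)[:len(audio)]  # inverse transform

audio = np.random.randn(8192)                # stand-in for one audio buffer
out = wavelet_effect(audio, gains=[1.0, 0.5, 1.2, 1.5, 2.0])
```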
7

Kato, Jien, Toyohide Watanabe, Sebastien Joga, Jens Rittscher, and Andrew Blake. "An HMM-based segmentation method for traffic monitoring movies." IEEE, 2002. http://hdl.handle.net/2237/6744.

8

Stamos, Dimitrios Georgios. "Experimental Analysis of the Interaction of Water Waves With Flexible Structures." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27567.

Abstract:
An experimental investigation of the interaction of water waves with flexible structures acting as breakwaters was carried out. Wave profiles, mapped out by water-level measuring transducers, were studied to provide information on the performance of different breakwater models. A new signal analysis procedure for determining reflection coefficients, based on wavelet theory, was developed and compared to a conventional method. The reliability of using wavelet analysis to separate a partial standing wave into incident and reflected wave components was verified with a numerical example, and also by the small variance in the estimates of the incident wave height from independent experimental measurements. Different geometries of rigid and flexible structures were constructed and examined, and reflection, transmission, and energy loss coefficients were obtained for them. The influence of various properties of the models, such as the width and the internal pressure, on the effectiveness in reflecting or absorbing the incident wave energy was determined. Various factors which affect the performance of the breakwater, including the water depth, the wave length, and the wave amplitude, were measured and documented. Suspended and bottom-mounted models were considered. The flow field over and near a hemi-cylindrical breakwater model was also examined using a flow visualization technique, and an overall comparison among the models is provided. The results showed that the rectangular models, rigid and flexible, are the most effective structures for dissipating wave energy. The flow visualization indicated that the flow conforms to the circular geometry of a hemi-cylindrical breakwater model, yielding no flow separation.
Ph. D.
9

Morand, Claire. "Segmentation spatio-temporelle et indexation vidéo dans le domaine des représentations hiérarchiques" [Spatio-temporal segmentation and video indexing in the hierarchical representation domain]. Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13888/document.

Abstract:
This thesis proposes a scalable, object-based indexing solution for HD video streams compressed with Motion JPEG2000. In this context we work in the hierarchical transform domain of the Daubechies 9/7 wavelets, and the scalable representation calls for multiresolution methods proceeding from low to high resolution. The first part of the manuscript defines a method for automatic extraction of objects having their own motion, based on combining a robust global motion estimation with a morphological color segmentation at low resolution; the result is then refined following the data order of the scalable stream. The second part defines a descriptor for the extracted objects, based on multiresolution histograms of the wavelet coefficients. Finally, the performance of the proposed indexing method is evaluated in the context of scalable content-based video retrieval queries.
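The descriptor in the second part can be sketched as follows in Python with PyWavelets. The bin count and the use of the 'bior4.4' filter as a stand-in for the 9/7 wavelet are assumptions for illustration.

```python
# Minimal sketch of an object descriptor built from multiresolution histograms
# of wavelet coefficients; bin count and the 'bior4.4' filter (a 9/7-type
# biorthogonal wavelet) are assumptions for illustration.
import numpy as np
import pywt

def multiscale_histogram_descriptor(region, levels=3, bins=16):
    coeffs = pywt.wavedec2(region, 'bior4.4', level=levels)
    hists = []
    for detail in coeffs[1:]:                # (cH, cV, cD) at each resolution
        for band in detail:
            h, _ = np.histogram(band, bins=bins, density=True)
            hists.append(h)
    return np.concatenate(hists)             # one vector per extracted object

region = np.random.rand(128, 128)            # stand-in for a segmented object
descriptor = multiscale_histogram_descriptor(region)
```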
10

Zhao, Fangwei. "Multiresolution analysis of ultrasound images of the prostate." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0028.

Abstract:
[Truncated abstract] Transrectal ultrasound (TRUS) has become the urologist's primary tool for diagnosing and staging prostate cancer due to its real-time and non-invasive nature, low cost, and minimal discomfort. However, the interpretation of a prostate ultrasound image depends critically on the experience and expertise of a urologist and is still difficult and subjective. To overcome the subjective interpretation and facilitate objective diagnosis, computer-aided analysis of ultrasound images of the prostate would be very helpful, as it may improve diagnostic accuracy by providing a more reproducible interpretation of the images. This thesis is an attempt to address several key elements of computer-aided analysis of ultrasound images of the prostate. Specifically, it addresses the following tasks: 1. modelling B-mode ultrasound image formation and statistical properties; 2. reducing ultrasound speckle; and 3. extracting the prostate contour. Speckle refers to the granular appearance that compromises the image quality and resolution in optics, synthetic aperture radar (SAR), and ultrasound. Due to the existence of speckle, the appearance of a B-mode ultrasound image does not necessarily relate to the internal structure of the object being scanned. A computer simulation of B-mode ultrasound imaging is presented, which provides not only an insight into the nature of speckle but also a viable test-bed for any ultrasound speckle reduction method. Motivated by analysis of the statistical properties of the simulated images, the generalised Fisher-Tippett distribution is empirically proposed for analysing the statistical properties of ultrasound images of the prostate. A speckle reduction scheme is then presented, based on Mallat and Zhong's dyadic wavelet transform (MZDWT), on modelling the statistical properties of the wavelet coefficients, and on exploiting their inter-scale correlation. Specifically, the squared modulus of the component wavelet coefficients is modelled as a two-state Gamma mixture. Inter-scale correlation is exploited by taking the harmonic mean of the posterior probability functions, which are derived from the Gamma mixture. This noise reduction scheme is applied to both simulated and real ultrasound images, and its performance is quite satisfactory in that the important features of the original noise-corrupted image are preserved while most of the speckle noise is removed successfully. It is also evaluated both qualitatively and quantitatively by comparison with median, Wiener, and Lee filters, and the results reveal that it surpasses all these filters. A novel contour extraction scheme (CES), which fuses MZDWT and snakes, is proposed on the basis of multiresolution analysis (MRA). Extraction of the prostate contour is placed in a multi-scale framework provided by MZDWT. Specifically, the external potential functions of the snake are designated as the modulus of the wavelet coefficients at different scales, and thus are "switchable". Such a multi-scale snake, which deforms and migrates from coarse to fine scales, eventually extracts the contour of the prostate.
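The inter-scale fusion step can be illustrated with a small Python sketch: compute the posterior probability of "signal" under a two-state Gamma mixture at each scale, then take the harmonic mean across scales. The mixture parameters and stand-in data are illustrative assumptions; the thesis fits the mixture to MZDWT coefficients.

```python
# Minimal sketch: posterior probability of "signal" under a two-state Gamma
# mixture on squared wavelet moduli, fused across scales by a harmonic mean.
import numpy as np
from scipy import stats

def signal_posterior(m2, w, noise_params, signal_params):
    p_noise = (1 - w) * stats.gamma.pdf(m2, *noise_params)
    p_sig = w * stats.gamma.pdf(m2, *signal_params)
    return p_sig / (p_sig + p_noise + 1e-12)

m2_scale1 = np.random.gamma(1.0, 1.0, 1000)    # stand-in squared moduli, scale 1
m2_scale2 = np.random.gamma(1.0, 1.0, 1000)    # stand-in squared moduli, scale 2

p1 = signal_posterior(m2_scale1, 0.2, (1.0, 0, 1.0), (3.0, 0, 4.0))
p2 = signal_posterior(m2_scale2, 0.2, (1.0, 0, 1.0), (3.0, 0, 4.0))
p_joint = 2.0 / (1.0 / (p1 + 1e-12) + 1.0 / (p2 + 1e-12))   # harmonic mean
```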
11

Combrexelle, Sébastien. "Multifractal analysis for multivariate data with application to remote sensing." PhD thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/16477/1/Combrexelle.pdf.

Abstract:
Texture characterization is a central element in many image processing applications. Texture analysis can be embedded in the mathematical framework of multifractal analysis, enabling the study of the fluctuations in regularity of image intensity and providing practical tools for their assessment, the wavelet coefficients or wavelet leaders. Although successfully applied in various contexts, multifractal analysis suffers at present from two major limitations. First, the accurate estimation of multifractal parameters for image texture remains a challenge, notably for small sample sizes. Second, multifractal analysis has so far been limited to the analysis of a single image, while the data available in applications are increasingly multivariate. The main goal of this thesis is to develop practical contributions to overcome these limitations. The first limitation is tackled by introducing a generic statistical model for the logarithm of wavelet leaders, parametrized by the multifractal parameters of interest. This statistical model enables us to counterbalance the variability induced by small sample sizes and to embed the estimation in a Bayesian framework, yielding robust and accurate estimation procedures that are effective for both small and large images. The multifractal analysis of multivariate images is then addressed by generalizing this Bayesian framework to hierarchical models able to account for the assumption that multifractal properties evolve smoothly in the dataset. This is achieved via the design of suitable priors relating the dynamical properties of the multifractal parameters of the different components of the dataset. Different priors are investigated and compared by means of numerical simulations conducted on synthetic multivariate multifractal images. This work is completed by an investigation of the potential benefit of multifractal analysis and the proposed Bayesian methodology for remote sensing, via the example of hyperspectral imaging.
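A simplified version of the log-cumulant estimation underlying this kind of multifractal analysis is sketched below in Python. For brevity it uses plain DWT detail coefficients instead of true wavelet leaders (local suprema of coefficients across finer scales), which is a simplifying assumption.

```python
# Minimal sketch of log-cumulant estimation: the mean and variance of the
# log-magnitudes scale linearly in the scale index, giving c1 and c2. True
# wavelet leaders are replaced here by DWT detail coefficients (assumption).
import numpy as np
import pywt

x = np.cumsum(np.random.randn(2**14))          # stand-in for a rough texture line
coeffs = pywt.wavedec(x, 'db3', level=8)

js, C1, C2 = [], [], []
for j, d in enumerate(coeffs[:0:-1], start=1): # finest scale first
    log_l = np.log(np.abs(d) + 1e-12)
    js.append(j); C1.append(log_l.mean()); C2.append(log_l.var())

c1 = np.polyfit(js, C1, 1)[0] / np.log(2)      # first log-cumulant
c2 = np.polyfit(js, C2, 1)[0] / np.log(2)      # second log-cumulant (multifractality)
print(c1, c2)
```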
12

Zátyik, Ján. "Směrové reprezentace obrazů" [Directional representations of images]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218921.

Abstract:
Various methods describe an image using specific shapes called bases or frames, with which the image can be transformed into a representation by transformation coefficients. The aim is to describe the image with a small number of coefficients, a so-called sparse representation, which can be used, for example, for image compression. However, bases are not able to describe all the shapes that may appear in an image, and this limitation increases the number of transformation coefficients needed to describe it. The aim of this thesis is to study the general principle of calculating the transformation coefficients and to compare classical methods of image analysis with some of the newer ones: it compares the effectiveness of the methods in reconstructing an image from a limited number of coefficients and from a noisy image, and it compares an interpolation method combining the characteristics of two different transformations with bicubic interpolation. The theoretical part describes the transformation methods from the aspects of multiresolution, localization in the time and frequency domains, redundancy, and directionality, and gives examples of the transformations on a particular image. The practical part compares the efficiency of the Fourier, wavelet, contourlet, ridgelet, Radon, wavelet packet, and WaveAtom transforms in image reconstruction from a limited number of the most significant transformation coefficients, as well as their ability to denoise images using thresholding techniques applied to the transformation coefficients. The last section deals with image interpolation by combining two methods and compares the results with classical bicubic interpolation.
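The reconstruction-from-few-coefficients experiment can be sketched in Python with PyWavelets: keep only the largest-magnitude coefficients and measure the error. The wavelet choice and the 5% keep ratio are illustrative assumptions.

```python
# Minimal sketch: reconstruct an image from only the largest-magnitude wavelet
# coefficients and measure the error. Wavelet and keep ratio are assumptions.
import numpy as np
import pywt

image = np.random.rand(256, 256)                     # stand-in for a test image
coeffs = pywt.wavedec2(image, 'db4', level=4)
arr, slices = pywt.coeffs_to_array(coeffs)           # flatten for global sorting

k = int(0.05 * arr.size)                             # keep the 5% largest coefficients
thr = np.partition(np.abs(arr).ravel(), -k)[-k]
arr[np.abs(arr) < thr] = 0.0

rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'db4')
print(np.mean((image - rec[:256, :256])**2))         # reconstruction MSE
```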
13

Zarjam, Pega. "EEG Data acquisition and automatic seizure detection using wavelet transforms in the newborn EEG." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15795/.

Abstract:
This thesis deals with the problem of newborn seizure detection from electroencephalogram (EEG) signals. The ultimate goal is to design an automated seizure detection system to assist medical personnel in timely seizure detection. Seizure detection is vital, as neurological diseases or dysfunctions in newborn infants are often first manifested by seizure, and prolonged seizures can result in impaired neuro-development or even fatality. The EEG has proved superior to clinical examination of newborns in early detection and prognostication of brain dysfunction. However, long-term acquisition of newborn EEG signals is considerably more difficult than for adults and children: the number of electrodes attached to the skin is limited by the size of the head, newborn EEGs vary from day to day, and newborns do not tolerate the recording situation well. Also, the movement of the newborn can create artifacts in the recording and thus strongly affect electrical seizure recognition. Most existing methods for neonates are either time or frequency based and therefore do not account for the non-stationary nature of the EEG signal. Thus, notwithstanding the plethora of existing methods, this thesis applies the discrete wavelet transform (DWT) to account for the non-stationarity of the EEG signals. First, two methods for seizure detection in neonates are proposed. The detection schemes are based on observing the changing behaviour of a number of statistical quantities of the wavelet coefficients (WC) of the EEG signal at different scales. In the first method, the variance and mean of the WC are used as a feature set to classify the EEG data into seizure and non-seizure; the tests give an average seizure detection rate (SDR) of 97.4%. In the second method, the number of zero-crossings and the average distance between adjacent extrema of the WC of certain scales are extracted to form a feature set; the tests give an average SDR of 95.2%. Both proposed feature sets are simple to implement and have a high detection rate and a low false alarm rate. Then, to reduce the complexity of the proposed schemes, two optimisation methods are used to reduce the number of selected features. First, the mutual information feature selection (MIFS) algorithm is applied to select the optimum feature subset; an optimal subset of 9 features provides an SDR of 94%. Compared to the full feature set, the optimal feature set clearly reduces the system complexity significantly. The drawback of the MIFS algorithm is that it ignores the interaction between features. To overcome this, an alternative algorithm, the mutual information evaluation function (MIEF), is used. The MIEF evaluates a set of candidate features extracted from the WC to select an informative feature subset; it is based on measuring the information gain and takes the interaction between features into consideration. The performance of the resulting features is evaluated and compared to that of the features obtained using the MIFS algorithm: the MIEF algorithm selects an optimal set of 10 features giving an average SDR of 96.3%, and an average SDR of 93.5% can be obtained with only 4 features. In comparison with the first two methods, the optimal feature subsets improve the system performance and significantly reduce the system complexity for implementation purposes.
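The two feature sets described above reduce to a few lines of Python per scale; a minimal sketch follows. The wavelet, decomposition level, and window length are illustrative assumptions.

```python
# Minimal sketch of the two per-scale feature sets: (variance, mean) and
# (zero-crossings, mean distance between adjacent extrema).
import numpy as np
import pywt

def wc_features(eeg_window, wavelet='db4', level=5):
    feats = []
    for d in pywt.wavedec(eeg_window, wavelet, level=level)[1:]:
        zc = np.sum(np.diff(np.sign(d)) != 0)                 # zero-crossings
        ext = np.where(np.diff(np.sign(np.diff(d))) != 0)[0]  # extrema positions
        dist = np.mean(np.diff(ext)) if len(ext) > 1 else 0.0
        feats += [d.var(), d.mean(), zc, dist]
    return np.array(feats)                     # feed to the classifier

window = np.random.randn(1024)                 # stand-in for one EEG window
print(wc_features(window))
```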
14

Zarjam, Peggy. "EEG Data acquisition and automatic seizure detection using wavelet transforms in the newborn EEG." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15795/1/Pega_Zarjam_Thesis.pdf.

15

Morais, Edemerson Solano Batista de. "Estudo de Fractalidade e Evolução Dinâmica de Sistemas Complexos" [A study of fractality and dynamical evolution of complex systems]. Universidade Federal do Rio Grande do Norte, 2007. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18610.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
In this work, some complex systems are studied using two distinct procedures. In the first part, we study the use of the wavelet transform in the analysis and (multi)fractal characterization of time series. We test the reliability of the Wavelet Transform Modulus Maxima (WTMM) method with respect to the multifractal formalism by computing the singularity spectrum of time series whose fractality is well known a priori. Next, we use the WTMM method to study the fractality of lung crackle sounds, a biological time series. Since crackles are produced by the opening of initially closed pulmonary airways (bronchi, bronchioles, and alveoli), we can extract information about the cascading airway-opening phenomenon of the whole lung. Because this phenomenon is associated with the architecture of the pulmonary tree, which displays fractal geometry, the analysis and fractal characterization of this noise may provide important parameters for comparing healthy lungs with lungs affected by disorders that alter the geometry of the pulmonary tree, such as the obstructive and parenchymal degenerative diseases that occur, for example, in pulmonary emphysema. In the second part, we study a site percolation model on square lattices in which the percolating cluster grows under a control rule corresponding to an automatic search method. In this percolation model, which displays characteristics of self-organized criticality, the automatic search does not use Leath's algorithm; it uses the control rule pt+1 = pt + k(Rc − Rt), where p is the percolation probability, k is a kinetic parameter with 0 < k < 1, and R is the fraction of percolating finite square lattices of side L. This rule provides a time series corresponding to the dynamical evolution of the system, in particular of the percolation probability p, whose scaling we analyze. The model allows the automatic search method for site percolation on square lattices to be studied in its own right, by evaluating the dynamics of its parameters as the system approaches the critical point. We show that the time elapsed until the system reaches the critical point and tcor, the time required for the system to lose its correlations, both scale inversely with k, the kinetic parameter of the control rule. We also verify that the system subsequently exhibits two distinct time scales: one in which it shows 1/f noise, indicating strong correlation, and another in which it shows white noise, indicating that the correlation has been lost. Over large time intervals the dynamics of the system is ergodic.
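The control rule quoted above can be simulated directly; a minimal Python sketch follows. It uses scipy.ndimage for cluster labeling rather than the thesis's procedure, and the lattice size, k, and the number of runs per step are illustrative assumptions.

```python
# Minimal sketch of the automatic-search rule p_{t+1} = p_t + k(Rc - Rt) for
# site percolation on an L x L lattice.
import numpy as np
from scipy import ndimage

def percolates(p, L, rng):
    grid = rng.random((L, L)) < p
    labels, _ = ndimage.label(grid)                       # 4-connected clusters
    top = set(labels[0][labels[0] > 0])
    bottom = set(labels[-1][labels[-1] > 0])
    return bool(top & bottom)                             # spanning cluster?

rng = np.random.default_rng(0)
L, k, Rc = 64, 0.01, 0.5
p = 0.2
for t in range(500):
    Rt = np.mean([percolates(p, L, rng) for _ in range(10)])
    p += k * (Rc - Rt)                                    # the control rule
print(p)                                                  # drifts toward p_c ~ 0.593
```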
16

Anantharaman, B. "Compressed Domain Processing of MPEG Audio." Thesis, Indian Institute of Science, 2001. https://etd.iisc.ac.in/handle/2005/3914.

Abstract:
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high-quality digital audio. However, compression complicates the processing of audio in many applications: if a compressed audio signal is to be processed, the direct method is to decode it, process the decoded signal, and re-encode it, which is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with processing MPEG compressed audio directly. Its main contributions are: a) extracting wavelet coefficients in the MPEG compressed domain; b) wavelet-based pitch extraction in the MPEG compressed domain; c) time scale modification of MPEG audio; d) watermarking of MPEG audio. The first contribution is a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure which arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexities of extracting several levels of wavelet coefficients after decoding the compressed signal and directly from the output of the MPEG analysis filter bank are compared; the proposed technique is found to be computationally efficient for extracting higher levels of wavelet coefficients. Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed; for example, one may wish to listen to a particular speaker, or to male or female audio segments, in a multimedia document, and pitch is one of the most basic and important features for this. Pitch is essentially the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients, so pitch can be calculated by finding the time interval between two successive maxima in the wavelet coefficients. It is shown that the computational complexity of extracting pitch in the compressed domain is less than 7% of that of uncompressed-domain processing. An algorithm for extracting pitch in the compressed domain is proposed, and its results for synthetic signals and for words uttered by male and female speakers are reported. In a number of important applications, one needs to modify an audio signal to render it more useful than the original; typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting an audio sequence to a given video sequence. In this thesis, time scale modifications are obtained in the subband domain, such that when the modified subband signals are fed to the MPEG synthesis filter bank, the desired time scale modification of the decoded signal is achieved. This is done by means of sinusoidal modeling [1]: each subband signal is modeled in terms of parameters such as amplitude, phase, and frequency, and is subsequently synthesized from these parameters with Ls = k La, where Ls is the length of the synthesis window, k is the time scale factor, and La is the length of the analysis window. As the PCM version of the time-scaled signal is not available, psychoacoustic-model-based bit allocation cannot be used, so a new bit allocation is done using a subband coding algorithm. This method has been satisfactorily tested for time scale expansion and compression of speech and music signals. The recent growth of multimedia systems has increased the need for protecting digital media, and digital watermarking has been proposed as a method for protecting digital documents. The watermark must be added to the signal in such a way that it does not cause audible distortion. However, the idea behind lossy MPEG encoders is to remove, or render insignificant, those portions of the signal which do not affect human hearing; this renders the watermark insignificant as well, and proving ownership of a compressed audio signal therefore becomes difficult. Existing compressed-domain methods merely change the bits or the scale factors according to a key; though simple, these methods are not robust to attacks, and they require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on the spread spectrum technique which does not require the original signal during verification and is shown to be more robust than existing methods. In our method the watermark is spread across many subband samples, and two factors need to be considered: a) the watermark is to be embedded only in those subbands where the added noise remains inaudible; b) the watermark should be added to subbands with sufficient bit allocation, so that it does not become insignificant for lack of bits. Embedding the watermark in the lower subbands would cause distortion, and embedding it in the higher subbands would prove futile since the bit allocation there is practically zero. Considering all these factors, one can introduce noise to samples across many frames corresponding to subbands 4 to 8. In the verification process, it is sufficient to have the key/code and the possibly attacked signal. The method has been satisfactorily tested for robustness to scale factor changes, LSB changes, and MPEG decoding and re-encoding.
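The wavelet pitch idea, separated from the MPEG compressed-domain machinery, can be sketched in Python: glottal closures produce transients, hence local maxima in the wavelet coefficients, and pitch is the interval between successive maxima. The synthetic impulse train and the naive peak picking are illustrative assumptions.

```python
# Minimal sketch of wavelet pitch extraction from transient-induced maxima.
import numpy as np
import pywt

fs = 8000
speech = np.zeros(4096)
speech[::80] = 1.0                            # one impulse per glottal closure (100 Hz)

d = np.abs(pywt.wavedec(speech, 'db2', level=4)[-2])   # level-2 detail band
peaks = [i for i in range(1, len(d) - 1)
         if d[i] > d[i - 1] and d[i] >= d[i + 1] and d[i] > 0.3 * d.max()]
period = np.mean(np.diff(peaks)) * (len(speech) / len(d)) / fs
print(1.0 / period)                           # estimated pitch in Hz (~100)
```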
17

Anantharaman, B. "Compressed Domain Processing of MPEG Audio." Thesis, Indian Institute of Science, 2001. http://hdl.handle.net/2005/68.

18

Garboan, Adriana. "Traçage de contenu vidéo : une méthode robuste à l'enregistrement en salle de cinéma" [Video content tracing: a method robust to in-theater camcorder recording]. Thesis, Paris, ENMP, 2012. http://www.theses.fr/2012ENMP0097/document.

Abstract:
A sine qua non component of multimedia content distribution on the Internet, video fingerprinting techniques allow the identification of content based on digital signatures (fingerprints) computed from the content itself. The signatures have to be invariant to content transformations such as filtering, compression, geometric modifications, and spatio-temporal sub-sampling/cropping; in practice, all of these transformations are non-linearly combined by the live camcorder recording use case. The limitations of the state of the art in video fingerprinting can be identified at three levels: (1) the uniqueness of the fingerprint is handled solely by heuristic procedures; (2) fingerprint matching is not constructed on mathematical grounds, resulting in a lack of robustness to live camcorder recording distortions; and (3) very few, if any, fully scalable mono-modal methods exist. The main contribution of the present thesis is to specify, design, implement, and validate a new video fingerprinting method, TrackART, able to overcome these limitations. To ensure a unique and mathematical representation of the video content, the fingerprint is represented by a set of wavelet coefficients, selected according to a statistical criterion on their properties. To grant the fingerprints robustness to the mundane or malicious distortions which appear in practical use cases, fingerprint matching is based on a repeated Rho test on correlation. To make the method efficient for large-scale databases, a localization algorithm based on a bag-of-visual-words representation (Sivic and Zisserman, 2003) is employed. An additional synchronization mechanism able to correct the time-variant jitter induced by live camcorder recording was also designed. The TrackART method was validated in an industrial partnership with professional players in cinematographic special effects (Mikros Image) and with the French Cinematography Authority (CST - Commission Supérieure Technique de l'Image et du Son). The reference video database consists of 14 hours of video content; the query dataset consists of 25 hours of replica content obtained by applying nine types of distortion to a third of the reference videos. The performance of TrackART was objectively assessed in the context of live camcorder recording: a probability of false alarm lower than 16×10⁻⁶, a probability of missed detection lower than 0.041, and precision and recall equal to 0.93. These results represent an advance over the state of the art, which exhibits no video fingerprinting method robust to live camcorder recording, and validate a first proof of concept of the developed statistical methodology.
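A toy version of correlation-based fingerprint matching in the spirit of the repeated Rho test is sketched below in Python. The coefficient-selection rule, frame size, and significance cutoff are illustrative assumptions, not TrackART's actual design.

```python
# Toy fingerprint matching: select energetic wavelet coefficients as the
# fingerprint, then test a query frame via a correlation significance test.
import numpy as np
import pywt
from scipy import stats

def fingerprint(frame, n_coeffs=256):
    arr, _ = pywt.coeffs_to_array(pywt.wavedec2(frame, 'db2', level=3))
    idx = np.argsort(np.abs(arr).ravel())[-n_coeffs:]  # most energetic coefficients
    return arr.ravel()[idx], idx

ref = np.random.rand(144, 176)                 # stand-in for a reference frame
query = ref + 0.1 * np.random.randn(144, 176)  # distorted replica of the frame

fp_ref, idx = fingerprint(ref)
arr_q, _ = pywt.coeffs_to_array(pywt.wavedec2(query, 'db2', level=3))
rho, pval = stats.pearsonr(fp_ref, arr_q.ravel()[idx])
print(rho, pval < 1e-6)                        # declare a match if significant
```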
19

Urbánek, Pavel. "Komprese obrazu pomocí vlnkové transformace" [Image compression using the wavelet transform]. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236385.

Abstract:
This thesis focuses on image compression using the wavelet transform. The first part provides the reader with background on image compression, presents well-known contemporary algorithms, and examines wavelet compression and the subsequent encoding schemes in detail; both the JPEG and JPEG 2000 standards are introduced. The second part analyzes and describes the implementation of an image compression tool, including innovations and optimizations. The third part is dedicated to comparison and evaluation of the results.
20

Kubánková, Anna. "Automatická klasifikace digitálních modulací" [Automatic classification of digital modulations]. Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-233424.

Abstract:
This dissertation thesis deals with a new method for digital modulation recognition. The history and present state of the topic are summarized in the introduction, and present methods are described together with their characteristic properties; recognition by means of artificial neural networks is presented in more detail. After setting out the objective of the thesis, the digital modulations chosen for recognition are described theoretically: FSK, MSK, BPSK, QPSK, and QAM-16, which are widely used in modern communication systems. The designed method is based on the analysis of module and phase spectrograms of the modulated signals, whose properties are examined through their histograms: these provide the number of carrier frequencies in the signal, used for FSK and MSK recognition, and the number of phase states, by which BPSK, QPSK, and QAM-16 are classified. Spectrograms in which the characteristic attributes of the modulations are visible are obtained with the segment length equal to the symbol length. It was found that a modulation with known symbol length can be recognized correctly at a signal-to-noise ratio of at least 0 dB; it is therefore necessary to detect the symbol length prior to the spectrogram calculation. Four methods were designed for this purpose: the autocorrelation function, cepstrum analysis, the wavelet transform, and LPC coefficients. These methods were algorithmized and analyzed on signals disturbed by white Gaussian noise and phase noise, and on signals passed through a multipath fading channel. Detection by means of cepstrum analysis proved the most suitable and reliable. Finally, the new modulation recognition method was verified on signals passed through a channel with properties close to a real one.
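Symbol-length detection via cepstrum analysis, the variant the thesis found most reliable, can be sketched in Python as follows. The BPSK test signal and the quefrency search range are illustrative assumptions.

```python
# Minimal sketch of symbol-length detection via cepstrum analysis: the
# symbol-rate ripple of the BPSK spectrum yields a cepstral peak at 1/rs.
import numpy as np

fs, rs, fc, n = 8000, 100, 1000, 8192          # sampling, symbol, carrier rates
sps = fs // rs                                 # samples per symbol
bits = np.repeat(np.random.choice([-1, 1], n // sps + 1), sps)[:n]
t = np.arange(n) / fs
x = bits * np.cos(2 * np.pi * fc * t)          # BPSK test signal

cep = np.abs(np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)))
lo, hi = int(0.002 * fs), int(0.05 * fs)       # search 2-50 ms symbol lengths
symbol_len = (lo + np.argmax(cep[lo:hi])) / fs
print(symbol_len)                              # expected ~0.01 s for 100 Bd
```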
21

Gossler, Fabrício Ely [UNESP]. "Wavelets e polinômios com coeficientes de Fibonacci" [Wavelets and polynomials with Fibonacci coefficients]. Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/148776.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
There exist different types of wavelet functions that can be used in the wavelet transform. In most cases, the wavelet function chosen for the analysis of a given signal is the one that best fits it in the time-frequency domain. Many types of wavelet functions can be chosen for particular applications, some of which belong to specific sets called wavelet families, such as the Haar, Daubechies, Symlets, Morlet, Meyer, and Gaussian families. In this work a new family of wavelet functions generated from Fibonacci-coefficient polynomials (FCPs) is presented. This family is called Golden, and each member is obtained as the n-th derivative of the quotient of two distinct FCPs. The Golden wavelets were deduced from the observation that, in some cases, the n-th derivative of the quotient of two distinct FCPs results in a function with the characteristics of a short-duration wave. As an application, some of the wavelets presented in this work are used for cardiac arrhythmia classification in electrocardiogram signals extracted from the MIT-BIH arrhythmia database.
CNPq: 130123/2015-3
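For intuition only, a minimal sketch of the construction named in the abstract: the n-th derivative of a quotient of two Fibonacci-coefficient polynomials. The particular polynomial form (coefficients F_1, F_2, ...) and the choice of numerator and denominator are assumptions, not the thesis's definition:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')

def fib_poly(m):
    # Polynomial with the first m Fibonacci numbers as coefficients (assumed form)
    return sum(sp.fibonacci(k + 1) * x**k for k in range(m))

n = 2  # derivative order
# 1 + x + 2x^2 has no real roots, so the quotient is smooth on the whole real line
psi = sp.diff(fib_poly(2) / fib_poly(3), x, n)
psi_fn = sp.lambdify(x, psi, 'numpy')

t = np.linspace(-20.0, 20.0, 4096)
vals = psi_fn(t)
print(float(np.sum(vals) * (t[1] - t[0])))   # near-zero integral: a wavelet-like shape
```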
APA, Harvard, Vancouver, ISO, and other styles
22

Gossler, Fabrício Ely. "Wavelets e polinômios com coeficientes de Fibonacci /." Ilha Solteira, 2016. http://hdl.handle.net/11449/148776.

Full text
Abstract:
Advisor: Francisco Villarreal Alvarado
There exist different types of wavelet functions that can be used in the Wavelet Transform. In most cases, the wavelet function chosen for the analysis of a given signal is the one that best adjusts to it in the time-frequency domain. Many types of wavelet functions can be chosen for certain applications, some of which belong to specific sets called wavelet families, such as the Haar, Daubechies, Symlets, Morlet, Meyer, and Gaussian families. In this work a new family of wavelet functions generated from Fibonacci-coefficient polynomials (FCPs) is presented. This family is called Golden, and each member is obtained as the n-th derivative of the quotient between two distinct FCPs. The Golden wavelets were deduced from the observation that, in some cases, the n-th derivative of the quotient between two distinct FCPs results in a function that has the characteristics of a short-duration wave. As an application, some of the wavelets presented in the course of this work are used for cardiac arrhythmia classification in electrocardiogram signals extracted from the MIT-BIH arrhythmia database.
Master's
APA, Harvard, Vancouver, ISO, and other styles
23

Ge, Zhongfu. "Analysis of surface pressure and velocity fluctuations in the flow over surface-mounted prisms." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/25965.

Full text
Abstract:
The full-scale value of the Reynolds number associated with wind loads on structures is of the order of 10^7. This is further complicated by the high levels of turbulence fluctuations associated with strong winds. On the other hand, numerical and wind tunnel simulations are usually carried out at smaller values of Re. Consequently, the validation of these simulations should only be based on physical phenomena derived with tools capable of their identification. In this work, two physical aspects related to extreme wind loads on low-rise structures are examined. The first includes the statistical properties and prediction of pressure peaks; the second involves the identification of linear and nonlinear relations between pressure peaks and the associated velocity fluctuations. The first part of this thesis is concerned with the statistical properties of surface pressure time series and their variations under different incident flow conditions. Various statistical tools, including space-time correlation, conditional sampling, the probability plot and the probability plot correlation coefficient, are used to characterize pressure peaks measured on the top surface of a surface-mounted prism. The results show that the Gamma distribution generally provides the best statistical description of the pressure time series, and that the method of moments is sufficient for determining its parameters. Additionally, the shape parameter of the Gamma distribution can be directly related to the incident flow conditions. As for the prediction of pressure peaks, the results show that the probability of non-exceedance is best derived from the Gumbel distribution. Two approaches for peak prediction, based on analysis of the parent pressure time series and of observed peaks, are presented; prediction based on the parent time series yields more conservative estimates of the probability of non-exceedance. The second part of this thesis is concerned with determining the linear and nonlinear relations between pressure peaks and the velocity field. Validated with analytical test signals, the wavelet-based analysis proves effective and accurate in detecting intermittent linear and nonlinear relations between the pressure and velocity fluctuations. In particular, intermittent linear and nonlinear velocity-pressure relations are observed over the nondimensional frequency range fH/U < 0.32. These results provide the basis for the flow parameters and characteristics required in the simulation of wind loads on structures.
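A minimal sketch of the method-of-moments Gamma fit that the abstract finds sufficient for the parent pressure series (illustrative synthetic data; the sign convention for suction peaks is ignored here):

```python
import numpy as np
from scipy import stats

def gamma_mom(x):
    # Method of moments: shape = mean^2 / variance, scale = variance / mean
    m, v = np.mean(x), np.var(x)
    return m * m / v, v / m

rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=2.0, size=10_000)   # stand-in "pressure" series
shape, scale = gamma_mom(sample)
# Probability of non-exceedance of a given peak level under the fitted Gamma
print(shape, scale, stats.gamma.cdf(15.0, shape, scale=scale))
```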
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Sansonnet, Laure. "Inférence non-paramétrique pour des interactions poissoniennes." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00835427.

Full text
Abstract:
The purpose of this thesis is to study various problems of nonparametric statistics in the framework of a model of Poissonian interactions. Such models are used, for example, in neuroscience to analyze the interactions between two neurons through their emission of action potentials during the recording of brain activity, or in genomics to study the favored or avoided distances between two motifs along the genome. In this framework, we introduce a so-called reproduction function that quantifies the preferential positions of the motifs and that can be modeled by the intensity of a Poisson process. First, we consider the estimation of this function, which is assumed to be very localized. We propose an adaptive estimation procedure based on the thresholding of wavelet coefficients that is optimal from the oracle and minimax points of view. Simulations and a genomic application to real data from the bacterium E. coli show the good practical behavior of our procedure. We then address the associated testing problems, which consist in testing whether the reproduction function is null. For this, we construct a testing procedure that is optimal from the minimax point of view over weak Besov spaces and that also performs well in practice. Finally, we extend this work with the study of a high-dimensional discrete version of the previous model, for which we propose an adaptive Lasso-type procedure.
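To make the estimation step concrete, a toy version of wavelet-threshold intensity estimation for a very localized reproduction function (a sketch only: the thesis derives a data-driven threshold tailored to the Poisson setting, whereas this uses the generic universal threshold on binned counts):

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
n_bins = 1024
t = np.linspace(0.0, 1.0, n_bins)
intensity = 50.0 * np.exp(-((t - 0.3) / 0.01) ** 2)   # localized "reproduction" bump
counts = rng.poisson(intensity)                        # binned Poisson observations

coeffs = pywt.wavedec(counts.astype(float), 'haar', level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # robust scale from finest details
thr = sigma * np.sqrt(2.0 * np.log(n_bins))            # universal threshold (assumption)
den = [coeffs[0]] + [pywt.threshold(c, thr, mode='hard') for c in coeffs[1:]]
estimate = pywt.waverec(den, 'haar')                   # denoised intensity estimate
```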
APA, Harvard, Vancouver, ISO, and other styles
25

Mucha, Martin. "Moderní směrové způsoby reprezentace obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220200.

Full text
Abstract:
Transformation methods are used to describe an image in terms of defined shapes, called bases or frames. With these shapes it is possible to transform the image via the computed transformation coefficients and to work with it further: to denoise the image, reconstruct it, transform it, and so on. There are several types of image-processing transforms, and this field has seen significant development. This study focuses on analyzing the characteristics of the well-known transforms, such as the Fourier and wavelet transforms. For comparison, newer chosen transforms are also described: Ripplet, Curvelet, Surelet, Tetrolet, Contourlet and Shearlet. Functional toolboxes were used to compare the individual methods and their characteristics; these toolboxes were modified to allow limiting the number of transformation coefficients for their potential use in subsequent reconstruction.
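The coefficient limitation mentioned at the end of the abstract can be mimicked, for any separable wavelet, by keeping only the k largest-magnitude coefficients before reconstruction; a generic sketch (wavelet and level are placeholders):

```python
import numpy as np
import pywt

def keep_k_largest(img, k, wavelet='db4', level=3):
    """Reconstruct an image from only its k largest-magnitude wavelet coefficients."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    cutoff = np.partition(np.abs(arr).ravel(), arr.size - k)[arr.size - k]
    arr[np.abs(arr) < cutoff] = 0.0                   # zero out the small coefficients
    return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'),
                         wavelet)
```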
APA, Harvard, Vancouver, ISO, and other styles
26

Montoril, Michel Helcias. "Modelos de regressão com coeficientes funcionais para séries temporais." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-04042013-215702/.

Full text
Abstract:
In this thesis, we study the fitting of functional-coefficient regression models for time series using splines, classical wavelets, and warped wavelets. We consider models with independent and with correlated errors. For the three estimation approaches, we obtain rates of convergence to zero for average distances between the functions of the model and the estimators proposed in this work. In the case of the (warped) wavelet approaches, we also obtain asymptotic results in more specific situations, in which the functions of the model belong to Sobolev and Besov spaces. Moreover, Monte Carlo simulation studies and applications to real data sets are presented. Through these numerical results, we compare the three proposed estimation approaches with other approaches known in the literature and verify satisfactory performance, in the sense that the proposed approaches provide competitive results compared to those from methodologies already used in the literature.
APA, Harvard, Vancouver, ISO, and other styles
27

Scipioni, Angel. "Contribution à la théorie des ondelettes : application à la turbulence des plasmas de bord de Tokamak et à la mesure dimensionnelle de cibles." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10125.

Full text
Abstract:
The necessary scale-based representation of the world leads us to explain why wavelet theory is the best-suited formalism. Its performance is compared to other tools: the rescaled-range (R/S) method and the empirical mode decomposition (EMD) method. The great diversity of analyzing bases in wavelet theory leads us to propose a morphological approach to the analysis. The study is organized into three parts. The first chapter is dedicated to the constituent elements of wavelet theory. A surprising link is established between the notion of recurrence and scale analysis (Daubechies polynomials) via Pascal's triangle. A general analytical expression for the Daubechies filter coefficients is then proposed in terms of the polynomial roots. The second chapter is the first application domain: edge plasmas of tokamak fusion reactors. We describe how, for the first time on experimental signals, the Hurst coefficient has been measured with a wavelet-based least-squares estimator. We then detail, starting from fractional-Brownian-motion (fBm) processes, how we established an original synthesis model that perfectly reproduces the mixed fBm and fGn statistics characterizing an edge plasma. Finally, we set out the reasons that led us to observe the absence of a link between high values of the Hurst coefficient and supposed long-range correlations. The third chapter concerns the second application domain, the analysis of the backscattered echo of an immersed target insonified by an ultrasonic plane wave. We explain how a morphological approach coupled with a scale analysis allowed us to extract the size information from the echo.
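A compact sketch of a wavelet least-squares Hurst estimator of the kind described (the fGn scaling convention Var(d_j) proportional to 2^{j(2H-1)} is assumed; the thesis's estimator details may differ):

```python
import numpy as np
import pywt

def hurst_wavelet(x, wavelet='db3', max_level=8):
    """Least-squares fit of log2 detail variance vs. level, j = 1 the finest."""
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    details = coeffs[:0:-1]                      # reorder so level 1 comes first
    js, logv = [], []
    for j, d in enumerate(details, start=1):
        if d.size > 8:                           # skip levels with too few coefficients
            js.append(j)
            logv.append(np.log2(np.var(d)))
    slope, _ = np.polyfit(js, logv, 1)
    return (slope + 1.0) / 2.0                   # fGn convention

rng = np.random.default_rng(2)
print(hurst_wavelet(rng.standard_normal(2 ** 14)))   # white noise: H close to 0.5
```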
APA, Harvard, Vancouver, ISO, and other styles
28

Лавриненко, Олександр Юрійович, Александр Юрьевич Лавриненко, and Oleksandr Lavrynenko. "Методи підвищення ефективності семантичного кодування мовних сигналів." Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/52212.

Full text
Abstract:
The thesis addresses a relevant scientific and practical problem in telecommunication systems: increasing the throughput of the channel used to transmit semantic speech data by encoding those data efficiently. The question of semantic coding efficiency is formulated as follows: at what minimum rate can the semantic features of speech signals be encoded for a given probability of error-free recognition? This question is answered in the research, an urgent scientific and technical task given the growing trend of remote interaction between people and robotic equipment through speech, where the reliability of such systems depends directly on the effectiveness of the semantic coding of speech signals. The thesis investigates the well-known method of semantic coding of speech signals based on mel-frequency cepstral coefficients, which consists in finding the average values of the coefficients of the discrete cosine transform of the logarithm of the energy of the discrete Fourier transform spectrum processed by a triangular filter bank on the mel scale. The problem is that this method does not satisfy the condition of adaptivity; the main scientific hypothesis of the study was therefore formulated: the efficiency of semantic coding of speech signals can be increased by using an adaptive empirical wavelet transform followed by Hilbert spectral analysis. Coding efficiency means a decrease in the information transmission rate for a given probability of error-free recognition of the semantic features of speech signals, which significantly reduces the required passband and thereby increases the throughput of the communication channel. In proving the formulated hypothesis, the following results were obtained: 1) for the first time, a method of semantic coding of speech signals based on the empirical wavelet transform was developed; it differs from existing methods by constructing a set of adaptive band-pass Meyer wavelet filters followed by Hilbert spectral analysis to find the instantaneous amplitudes and frequencies of the intrinsic mode functions, which makes it possible to determine the semantic features of speech signals and increase the efficiency of their coding; 2) for the first time, the adaptive empirical wavelet transform was proposed for multiscale analysis and semantic coding of speech signals, which increases the efficiency of spectral analysis by decomposing the high-frequency speech oscillation into its low-frequency components, namely the intrinsic empirical modes; 3) the method of semantic coding of speech signals based on mel-frequency cepstral coefficients was further developed using the basic principles of adaptive spectral analysis via the empirical wavelet transform, which increases the efficiency of this method.
Experimental research conducted in MATLAB R2020b showed that the developed method of semantic coding of speech signals based on the empirical wavelet transform reduces the encoding rate from 320 to 192 bit/s and the required passband from 40 to 24 Hz with a probability of error-free recognition of about 0.96 (96%) at a signal-to-noise ratio of 48 dB, so its efficiency increases by a factor of 1.6 compared with the existing method. The results obtained in the thesis can be used to build systems for remote interaction between people and robotic equipment using speech technologies, such as speech recognition and synthesis, voice control of technical objects, low-rate coding of speech information, and voice translation from foreign languages.
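Since the investigated baseline is the standard MFCC chain (power spectrum, mel filter bank, logarithm, DCT), a single-frame sketch may help; the filter count and the mel formula are the usual textbook choices, not values taken from the thesis:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs, n_filters=26, n_ceps=13):
    """MFCCs of one windowed frame: |DFT|^2 -> mel filter bank -> log -> DCT."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    n_fft = len(frame)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    energies = np.zeros(n_filters)
    for i in range(n_filters):                       # triangular filters on the mel scale
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            energies[i] += spec[k] * (k - l) / max(c - l, 1)
        for k in range(c, r):
            energies[i] += spec[k] * (r - k) / max(r - c, 1)
    return dct(np.log(energies + 1e-12), norm='ortho')[:n_ceps]
```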
APA, Harvard, Vancouver, ISO, and other styles
29

Sláma, Adam. "Software pro manuální ostření kamery s rozlišením 4K." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400900.

Full text
Abstract:
This Master's thesis focuses on the analysis of methods currently used to determine the degree of image focus. This analysis informed the development of a program that evaluates the degree of image focus as a percentage, works in real time, and cooperates with a camera capable of 4K image resolution with manually focused lenses. The application is also capable of finding a pre-defined image under certain circumstances, which is used to make image focusing more effective. Another option is a method that searches for the most suitable focusing area in the center of the image. A detailed description of these methods and of the program itself is included in the thesis. The final part of the thesis contains the records and results of measurement tests.
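One widely used focus measure of the kind such programs rely on is the variance of the Laplacian; a minimal sketch (not the thesis's algorithm; the percentage output requires normalizing against a reference value, which is application-specific):

```python
import numpy as np
from scipy.ndimage import laplace

def focus_score(gray):
    """Variance-of-Laplacian sharpness: higher means better focused."""
    return float(np.var(laplace(gray.astype(np.float64))))

def focus_percent(gray, reference_score):
    # reference_score: score of a known well-focused frame (assumed available)
    return 100.0 * min(focus_score(gray) / reference_score, 1.0)
```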
APA, Harvard, Vancouver, ISO, and other styles
30

Vedreño, Santos Francisco Jose. "Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/34177.

Full text
Abstract:
Traditionally, fault detection in electric machines relies on the Fast Fourier Transform, since most faults can be reliably diagnosed with it if the machines operate under stationary conditions for a reasonable time interval. However, for applications in which the machines operate under fluctuating load and speed conditions (non-stationary conditions), such as wind generators, the Fast Fourier Transform must be replaced by other techniques. This thesis develops a new methodology for the diagnosis of squirrel-cage and wound-rotor induction machines operating under non-stationary conditions, based on the analysis of the fault components of the currents in the slip-frequency plane. The technique is applied to the diagnosis of stator and rotor asymmetries as well as to the mixed-eccentricity fault. Diagnosing electric machines in the slip-frequency domain gives the methodology a universal character, since it can diagnose machines regardless of their characteristics, of how the machine speed varies, and of the operating mode (motor or generator). The development of the methodology comprises the following stages: (i) characterization of the evolution of the fault components of stator asymmetry, rotor asymmetry and mixed eccentricity for squirrel-cage and wound-rotor induction machines as a function of the speed (slip) and of the supply frequency of the grid to which the machine is connected; (ii) given the importance of signal processing, an introduction to its basic concepts, followed by a review of current signal processing techniques for the diagnosis of electric machines; (iii) extraction of the fault components using three different filtering techniques: filters based on the Discrete Wavelet Transform, filters based on the Wavelet Packet Transform, and a new filtering technique proposed in this thesis, Spectral Filtering. The first two techniques extract the fault components in the time domain, while the new technique performs the extraction in the frequency domain; (iv) in some cases, the extraction of the fault components entails shifting their frequency, which is carried out with two techniques: the Frequency Shifting Theorem and the Hilbert Transform; (v) unlike other existing techniques, the proposed methodology does not rely exclusively on computing the energy of the fault components but also studies the evolution of their instantaneous frequency, computed with two different techniques (the Hilbert Transform and the Teager-Kaiser operator), against the slip. Plotting the instantaneous frequency against the slip removes the possibility of false-positive diagnoses, improving the accuracy and quality of the diagnosis; it also enables qualitative diagnoses that are fast and computationally light.
(vi) Finally, owing to the importance of the automation of industrial processes and to avoid the possible divergence present in qualitative diagnosis, three objective diagnosis parameters are developed: the energy parameter, the similarity coefficient and the regression parameters. The energy parameter quantifies the severity of the fault according to its value and is computed in the time domain and in the frequency domain (a consequence of extracting the fault components in the frequency domain). The similarity coefficient and the regression parameters are objective parameters that allow false-positive diagnoses to be discarded, increasing the robustness of the proposed methodology. The proposed diagnosis methodology is validated experimentally for stator and rotor asymmetry faults and for the mixed-eccentricity fault in squirrel-cage and wound-rotor induction machines fed from the grid and from frequency converters under stochastic non-stationary conditions.
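For the instantaneous-frequency step of stage (v), a sketch of the Hilbert-transform route (the Teager-Kaiser alternative and the slip axis are omitted; the fault component is assumed to be already isolated by one of the filtering stages):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_amp_freq(component, fs):
    """Analytic-signal envelope and instantaneous frequency of a fault component."""
    z = hilbert(component)
    amp = np.abs(z)
    phase = np.unwrap(np.angle(z))
    freq = np.diff(phase) * fs / (2.0 * np.pi)   # Hz, one sample shorter than input
    return amp, freq
```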
Vedreño Santos, FJ. (2013). Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34177
Thesis
APA, Harvard, Vancouver, ISO, and other styles
31

Lue, Ming-Sun, and 呂明山. "EEG Feature Analysis based on Wavelet Coefficients." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/25176893352533451644.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Control Engineering
85
Most signals in nature have time-varying spectral properties, so time-frequency representations are widely used. In this thesis, we introduce a time-frequency representation method, the wavelet transform (WT), to analyze EEG signals. A signal processed by the wavelet transform results in a two-dimensional data array with a large number of parameters. From this large set of parameters, we would like to obtain a few substantial coefficients using quantitative approaches. The purpose of this thesis is therefore to introduce some quantitative methods and to explore the characteristics of the EEG signal from various viewpoints, in addition to transforming the original signal by the wavelet transform. We apply the fractal dimension, mutual information, and cross-correlation methods to the analysis of wavelet coefficients at different scales. From our experiments, the fractal dimension and cross-correlation methods provide a way to qualitatively and quantitatively characterize the EEGs. The mutual information method, however, suppresses the temporal information and is not a feasible tool for quantifying the WT coefficients.
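As one concrete reading of the cross-scale correlation analysis (a sketch only; the stationary wavelet transform is used here so that all levels keep the signal length, which may differ from the thesis's setup):

```python
import numpy as np
import pywt

def scale_correlations(eeg, wavelet='db4', level=5):
    """Correlation matrix between undecimated detail sequences at different scales."""
    n = (len(eeg) // 2 ** level) * 2 ** level    # SWT needs a multiple of 2^level
    details = [d for _, d in pywt.swt(np.asarray(eeg[:n], dtype=float),
                                      wavelet, level=level)]
    return np.corrcoef(np.vstack(details))
```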
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Tsung-Cheng, and 楊宗正. "Fast Restricted Quadtree Triangulation Using Effective Wavelet Coefficients." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/45168965689640813271.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Information Management
88
Triangular meshes are extensively applied in image compression and virtual reality: they provide high compression ratios, good image quality, fast computation, and real-time rendering. In this paper, we present an efficient method to construct a triangular mesh. First, we use the wavelet transform to obtain the wavelet coefficients, then analyze and rearrange them according to their spatio-frequency characteristics to get the effective wavelet coefficients. Second, in order to prevent cracks and obtain better quality, we restrict and regulate the effective wavelet coefficients. Third, based on the restricted quadtree model, we build the triangular mesh from the effective wavelet coefficients. Experimental results show that our method outperforms several previous works for both 3D visualization of terrain data and compression of image data.
APA, Harvard, Vancouver, ISO, and other styles
33

Tseng, Chien-Chang, and 曾建昌. "Rotation Invariant Color Texture Image Retrieval Using Wavelet Coefficients." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/3ad354.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (Master's and PhD Program)
90
With the development of the World Wide Web (WWW) and fast computer technology, the use of visual information has become routine in scientific and commercial applications. Every day, large numbers of people use the Internet for searching and browsing through diverse multimedia databases. Due to the limitations of search based on textual annotation, content-based retrieval has received more attention in recent years. It provides methods to query image databases using image features as the basis for the queries; these features include the color, texture, and shape of objects and regions. The wavelet transform, because of its space-frequency localization, is preferred in many image and audio processing applications. It provides a good multiresolution analytical tool for texture classification and can achieve a high accuracy rate. Considering the relationship between subbands after the wavelet transform, this thesis suggests combining the HL and LH subbands into a single feature, which decreases the number of feature vector components and makes the feature value rotation-invariant. By taking advantage of database indexing, similar images can be searched with the input image as the query condition, reducing the query response time. In addition, combining color and texture features to emphasize the color distribution of an image is shown to further improve the efficiency of the retrieval system.
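The HL/LH merging idea can be sketched directly: a 90-degree rotation swaps the HL and LH subbands, so summing their energies yields a feature unchanged by that rotation (the wavelet and decomposition depth are placeholders):

```python
import numpy as np
import pywt

def texture_features(gray, wavelet='db2', level=3):
    """Per-level subband energies with HL and LH merged into one feature."""
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=level)
    feats = []
    for cH, cV, cD in coeffs[1:]:
        feats.append(np.mean(cH ** 2) + np.mean(cV ** 2))  # merged HL+LH energy
        feats.append(np.mean(cD ** 2))                     # HH energy
    return np.asarray(feats)
```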
APA, Harvard, Vancouver, ISO, and other styles
34

Tzeng, Yu-Quan, and 曾昱筌. "A blind wavelet-based watermarking with detail-subband coefficients prediction." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/02642324963306693305.

Full text
Abstract:
Master's thesis
Chang Gung University
Graduate Institute of Electrical Engineering
96
Recently, the wavelet transform has been widely used in multimedia signal processing applications, and digital watermarking is involved to provide security solutions. This study presents a blind wavelet-based watermarking method that cooperates with the Human Visual System (HVS), embedding watermarks into detail-subband coefficients. Since imperceptibility is the most significant issue in watermarking, the approximation band is left unchanged, while the detail subbands are modified to carry information. The perceptual embedding weights for all subbands are determined based on the Just Noticeable Distortion (JND) criterion, and the strength of the modification is investigated to provide a compromise between robustness and image quality. In the decoder, the Least-Mean-Square (LMS) algorithm is used to predict the original detail-subband coefficients and then extract the embedded watermarks. As documented in the experimental results, the proposed method provides good robustness and excellent image quality.
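The decoder-side prediction can be pictured with a one-step adaptive predictor over a coefficient sequence; a sketch using the normalized LMS variant for step-size stability (the order, step size, and coefficient scanning order are assumptions):

```python
import numpy as np

def nlms_predict(d, order=4, mu=0.5):
    """One-step NLMS prediction of a detail-coefficient sequence d."""
    d = np.asarray(d, dtype=float)
    w = np.zeros(order)
    pred = np.zeros(len(d))
    for n in range(order, len(d)):
        x = d[n - order:n][::-1]
        pred[n] = w @ x
        e = d[n] - pred[n]
        w += mu * e * x / (x @ x + 1e-12)   # normalized update keeps adaptation stable
    return pred
```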
APA, Harvard, Vancouver, ISO, and other styles
35

Rahman, S. M. Mahbubur. "Probabilistic modeling of wavelet coefficients for processing of image and video signals." Thesis, 2009. http://spectrum.library.concordia.ca/976389/1/NR63363.pdf.

Full text
Abstract:
Statistical estimation and detection techniques are widely used in signal processing, including wavelet-based image and video processing. The probability density function (PDF) of the wavelet coefficients of image and video signals plays a key role in the development of techniques for such processing. Due to their fixed number of parameters, the conventional PDFs for the estimators and detectors usually ignore higher-order moments; consequently, estimators and detectors designed using such PDFs do not provide a satisfactory performance. This thesis is concerned with first developing a probabilistic model that is capable of incorporating an appropriate number of parameters that depend on higher-order moments of the wavelet coefficients. This model is then used as the prior to propose certain estimation and detection techniques for denoising and watermarking of image and video signals. Towards developing the probabilistic model, the Gauss-Hermite series expansion is chosen, since the wavelet coefficients have non-compact support and their empirical density function shows a resemblance to the standard Gaussian function. A modification is introduced in the series expansion so that only a finite number of terms can be used for modeling the wavelet coefficients without the resulting PDF becoming negative. The parameters of the resulting PDF, called the modified Gauss-Hermite (MGH) PDF, are evaluated in terms of the higher-order sample moments. It is shown that the MGH PDF fits the empirical density function better than the existing PDFs that use a limited number of parameters. The proposed MGH PDF is used as the prior of image and video signals in designing maximum a posteriori and minimum mean-squared-error estimators for denoising of image and video signals and a log-likelihood-ratio-based detector for watermarking of image signals. The performance of the estimation and detection techniques is then evaluated in terms of the commonly used metrics. It is shown through extensive experiments that the estimation and detection techniques developed utilizing the proposed MGH PDF perform substantially better than those that utilize the conventional PDFs. These results confirm that the superior fit of the MGH PDF to the empirical density function, resulting from the flexibility of the MGH PDF in choosing the number of parameters, which are functions of higher-order moments of the data, leads to the better performance. Thus, the proposed MGH PDF should play a significant role in wavelet-based image and video signal processing.
APA, Harvard, Vancouver, ISO, and other styles
36

Yung-Chuan, Liao, and 廖永傳. "Group Testing Using Alternative Class Partitions for Embedded Coding of Wavelet Coefficients." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/19812625532602085117.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
93
Progressive image compression is widely used in many domains. Group testing is a new concept in progressive image compression and has been proved suitable for image compression. Hong and Ladner proposed an image compression algorithm, called GTW, that is primarily based on the concept of group testing. Although no arithmetic coding is involved in this algorithm, GTW performs competitively with SPIHT's arithmetic-coding variant in terms of rate-distortion performance. We study a series of related papers and analyze the characteristics of wavelet images. We provide a new method to partition the coefficients of the GTW algorithm, and we further improve the rate-distortion performance with a novel coding algorithm that is more suitable for wavelet coefficients. Our proposed scheme achieves better rate-distortion performance than the enhanced SPIHT and GTW algorithms. The experimental results show that our algorithm typically provides better image quality, by about 0.4 to 0.7 dB over SPIHT and 0.1 to 0.2 dB over GTW, respectively.
APA, Harvard, Vancouver, ISO, and other styles
37

Cheng, Kuei-Hung, and 鄭貴鴻. "Progressive Wavelet Coefficients Codec System Design and Its Hardware Design and Implementation." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/14670550100519572405.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate School of Electronic and Information Engineering (Master's Program)
92
In this paper, a novel shape-adaptive zerotree coding scheme and its hardware realization are presented for discrete wavelet transform (DWT) based image compression. The shape-adaptive scheme completely eliminates the coding overhead of pixels outside a video object to save bit rate. The proposed scheme employs a bottom-up, breadth-first scanning order, and each coefficient is examined only once. It avoids the expensive list buffers required by SPIHT, reducing the memory complexity of the hardware implementation. The proposed coding scheme is also extended to the 3-D case to handle medical volumetric data such as MRI. Simulation results indicate that, at given bit rates, the plain version (with the shape-adaptive feature disabled) of the proposed scheme performs slightly better than SPIHT, and an additional 4 dB PSNR performance edge over SPIHT is obtained by shape-adaptive processing. A hardwired design was also developed and verified on an FPGA; it achieves a sustained processing rate of 20 frames/sec at a 1024x1024 frame size.
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Wen-feng, and 林文峰. "Image Compression - The Scalar and Vector Quantization of The Wavelet Transform Coefficients." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/58388960763484966692.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Department of Electronic Engineering
82
This thesis mainly focuses on the quantization of the wavelet transform coefficients of an image; the possibility of image compression through a wavelet transform is also presented. Both scalar and vector quantization techniques are used to quantize the wavelet transform coefficients. For scalar quantization, a uniform quantizer is used, and the quantized coefficients are encoded by the Huffman procedure to further reduce the bit rate. For vector quantization, the codebook is generated by the fast Pairwise-Nearest-Neighbor (PNN) algorithm in the wavelet transform domain; if the size of the codebook is small enough, a high compression ratio is attainable. The purpose of image compression is to achieve a high compression ratio while maintaining acceptable visual quality of the reconstructed images. Our simulation study shows that different quantization levels can be used for the scalar quantization of the wavelet transform coefficients at different scales. For vector quantization, the global-codebook approach is effective in reducing the bit rate of the compressed data. The bit rate and peak signal-to-noise ratio (PSNR) of the reconstructed 512x512 Lena image are 0.31 bits/pixel (bpp) and 28.04 dB, respectively, for scalar quantization; the corresponding results for vector quantization are 0.21 bpp and 30.75 dB. These results show that image compression through a wavelet transform can achieve good visual quality and a high compression ratio simultaneously.
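The scalar branch reduces to a mid-tread uniform quantizer followed by entropy coding; a sketch (the per-scale step sizes are placeholders for the level-dependent choices the thesis studies):

```python
import numpy as np

def uniform_quantize(coeffs, step):
    """Mid-tread uniform quantizer: integer indices for Huffman coding,
    plus the dequantized values used at reconstruction."""
    idx = np.round(np.asarray(coeffs, dtype=float) / step).astype(np.int64)
    return idx, idx * step

# Coarser steps at finer scales exploit their lower visual weight (values illustrative)
steps_per_scale = {1: 16.0, 2: 8.0, 3: 4.0}
```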
APA, Harvard, Vancouver, ISO, and other styles
39

Sun, Jingjing. "Fabric wrinkle characterization and classification using modified wavelet coefficients and support-vector-machine classifiers." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5634.

Full text
Abstract:
Wrinkling caused by wearing and laundry procedures is one of the most important performance properties of a fabric. Visual examination performed by trained experts is the routine wrinkle evaluation method in the textile industry; however, this subjective evaluation is time-consuming, and the need for objective, automatic and efficient methods of wrinkle evaluation has been increasing remarkably in recent years. In the present thesis, a wavelet-transform-based image analysis method was developed to measure 2D fabric surface data captured by an infrared imaging system. After decomposing the fabric image with the Haar wavelet transform, five parameters were defined based on modified wavelet coefficients to describe wrinkling features such as orientation, hardness, density and contrast. The wrinkle parameters provide useful information for textile, appliance, and detergent manufacturers who study the wrinkling behavior of fabrics. A Support-Vector-Machine-based classification scheme was developed for automatic wrinkle rating, with both linear and radial-basis-function (RBF) kernels used to achieve higher rating accuracy. The effectiveness of this evaluation method was tested on 300 images of five selected fabric types with different fiber contents, weave structures, colors and laundering cycles. The results show agreement between the proposed wavelet-based automatic assessment and experts' visual ratings.
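The rating stage maps the five wrinkle parameters to expert grades with an SVM; a sketch with stand-in data (the real features and labels come from the infrared images and the visual ratings):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))        # five wrinkle parameters per image (stand-in)
y = rng.integers(1, 6, size=300)     # expert wrinkle grades 1-5 (stand-in)

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
clf.fit(X[:250], y[:250])
print(clf.score(X[250:], y[250:]))   # held-out rating accuracy
```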
APA, Harvard, Vancouver, ISO, and other styles
40

Tseng, Chien Tu, and 曾建篤. "High Resolution Wavelet Transform Coefficients And its Application To Resolution Enhancement of Digital Images." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/44828898364997410058.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Computer Science and Information Engineering (Master's Program)
88
Pixel-domain interpolation is the typical method for resolution enhancement of digital images. After enlarging an image by interpolation, sharp edges usually become aliased; by detecting the edges and compensating, the profiles of the interpolated edges can be made much smoother. However, the quality of the edge detection and compensation method strongly affects the quality of the enhancement, and the computation is complicated. A new approach to resolution enhancement is proposed that uses the multi-resolution analysis property of the Discrete Wavelet Transform. By estimating the higher-resolution wavelet coefficients, the resolution can be doubled through the synthesis operation of the Discrete Wavelet Transform. The estimation is done using neural networks combined with a simple edge classification method to improve the estimation accuracy. Both 1-D and 2-D cases are shown in this thesis. The experiments show that the enlarged images are clear and sharp and that some details are preserved in the processing. However, parts of the edges are unduly sharp and spurious noise is generated; furthermore, the training of the neural networks is very slow due to the huge number of training samples. It is desired to overcome these problems in the future.
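The synthesis-side enlargement can be sketched in a few lines: the given image plays the role of the approximation band and the unknown detail bands are filled in (zeros below; the thesis instead estimates them with edge-classified neural networks):

```python
import numpy as np
import pywt

def upscale2x(img, wavelet='haar'):
    """Double the resolution via one inverse 2-D DWT step."""
    a = np.asarray(img, dtype=float) * 2.0   # 2x compensates orthonormal Haar scaling
    z = np.zeros_like(a)                     # unknown detail bands set to zero here
    return pywt.idwt2((a, (z, z, z)), wavelet)
```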
APA, Harvard, Vancouver, ISO, and other styles
41

LEE, CHUNG-CHI, and 李宗其. "Region-based Image Retrieval Using Watershed Transformation and Region Adjacency Graph for Wavelet Coefficients." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/76790220750023277841.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
89
Traditional region-based image retrieval systems often use only the dominant features within each region and ignore the useful relationships of neighboring regions. In this paper, we propose a new color region-based image retrieval system that uses these region relationships. An image is first processed with the wavelet transform to divide it into several subbands and extract the important texture information. A new color watershed transformation, applied to both the luminance and the chromatic wavelet coefficients, is performed to accurately segment the image into its important regions. The region adjacency graph (RAG) is then used as the representation of the regions and their spatial relationships in the segmented image: a node denotes one region and an edge represents the spatial relationship of two neighboring regions. Hence, region features such as the wavelet coefficients of an image can be recorded in the corresponding nodes of a RAG, while the features of adjacent regions are recorded in the edges. The image retrieval problem is thus reduced to subgraph isomorphism, which is used to verify the similarity between two graphs; a simple, heuristic subgraph isomorphism algorithm compares the query image's RAG with the RAGs in the image database. In experiments, several query results from a test database containing various kinds of images are used to evaluate the performance of the proposed system.
APA, Harvard, Vancouver, ISO, and other styles
42

Xin-MingChen and 陳新明. "ECG Compression Algorithm Based on Best k-coefficients Sparse Decomposition and Discrete Wavelet Transform." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5qr42c.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yu, Kuan-Chi, and 尤冠几. "The Novel Super-Resolution Technology Based on the Wavelet Coefficients Prediction for the Ecological images." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/04423930813232945675.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Computer Science and Information Engineering (Master's Program)
103
Recently, super-resolution technologies are often used to assist image recognition. They aim at rebuilding low-resolution images into high-resolution images. Conventional methods apply linear or non-linear interpolation to obtain the high-resolution images; since then, many super-resolution technologies based on the DFT (Discrete Fourier Transform) or DWT (Discrete Wavelet Transform) and learning-based approaches have been proposed. However, the above-mentioned methods cannot always reconstruct high-resolution images with satisfactory PSNR and SSIM evaluation scores. This paper proposes a novel learning-based super-resolution scheme with wavelet coefficient prediction to rebuild low-resolution images into high-resolution images with high PSNR and SSIM evaluation scores. The experimental results show that the reconstructed high-resolution license plate images reach PSNR 48 dB and SSIM 0.99, and the reconstructed high-resolution ecological images reach PSNR 33 dB and SSIM 0.98.
APA, Harvard, Vancouver, ISO, and other styles
44

Ma, Cheng-Yang, and 馬政揚. "The Novel Super-resolution Technology Based on the Wavelet Coefficients Prediction for the License Plate Images." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/nbd329.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Computer Science and Information Engineering (Master's Program)
101
Super-resolution technology is applied in this research as the main method. The concept of super-resolution is to rebuild a high-resolution image from a low-resolution image by algorithms. Compensation of high-frequency wavelet coefficients and learning-based super-resolution technology are applied in this study to optimize license plate images from low resolution to high resolution. By using the data characteristics that appear after the wavelet transform, we can rebuild the image into a high-resolution one with high-band information. With the wavelet transform, local image features retain more detailed information in the high bands, and the tree structure strengthens the correlation between wavelet coefficients. Based on this correlation, we can use high-band wavelet coefficients as a prediction base to improve license plate image quality. The method in this research can effectively improve the quality of images enlarged by interpolation, with the PSNR (Peak Signal-to-Noise Ratio) used as a reference to judge image quality. The experimental results show that the rebuilt high-resolution license plate images have a PSNR on average 4 dB higher than interpolation. Keywords: license plate, super-resolution, wavelet.
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Guan-Zhou, and 陳冠州. "SOPC Implementation of a Bit-plane based Modified SPIHT Algorithm for 1-D Wavelet Coefficients Coding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63116298028641886409.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
101
The SPIHT scheme can be very efficient for data with inherent hierarchical self-similarities. However, the scheme exploits these self-similarities through dynamic data structures, which imposes practical limitations on hardware implementation, especially for large data sequences. A Modified Set Partitioning In Hierarchical Trees (MSPIHT) algorithm was proposed to solve these problems. Unlike SPIHT, MSPIHT uses bit-plane and flag concepts, which reduce memory requirements and speed up the coding process. Besides, the three lists of the SPIHT coding process (LIP, LSP, and LIS) are combined into one step to reduce the complexity of the MSPIHT coding process, and the search time for descendant coefficients is reduced by using a check bit. Compared with SPIHT, MSPIHT has a more regular coding process, lower coding complexity, and shorter coding time. In this study, we used the ALTERA DE2-115 as a platform to implement MSPIHT coding using an SOPC. According to the experimental results, the hardware implementation works correctly; furthermore, the MSPIHT encoding process is 70-90 times faster than SPIHT and reduces memory requirements by 40%.
APA, Harvard, Vancouver, ISO, and other styles
46

Phan, Quan. "Design of vibration inspired bi-orthogonal wavelets for signal analysis." Thesis, 2012. http://hdl.handle.net/1911/71679.

Full text
Abstract:
In this thesis, a method to calculate scaling function coefficients for a new bi-orthogonal wavelet family derived directly from an impulse response waveform is presented. In the literature, the Daubechies wavelets (DB wavelets) and the Morlet wavelet are the most commonly used wavelets for the dyadic wavelet transform (DWT) and the continuous wavelet transform (CWT), respectively. For a specific vibration signal processing application, a wavelet basis that is similar to, or derived directly from, the signal being studied proves superior to the commonly used wavelet bases. To ensure that a wavelet basis has a direct relationship to the signal being studied, a new formula is proposed to calculate coefficients that capture the characteristics of an impulse response waveform. The calculated coefficients are then used to develop a new bi-orthogonal wavelet family.
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Chia-Chou, and 劉佳洲. "On the Application of the De-noising Method of Stationary Wavelet Coefficients Threshold to Filter Out Noise in Digital Hearing Aids." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43887541206717616847.

Full text
Abstract:
Master's thesis
National Taiwan University
Institute of Engineering Science and Ocean Engineering
98
For a long time, improving the hearing of the hearing-impaired has been what researchers and medical professionals strive to achieve. Because there are currently over 200 million deaf or hard-of-hearing people worldwide, researchers and medical professionals realize the importance of this goal. Fortunately, technology, from early analog hearing aids to today's mainstream digital hearing aids, has brought about flourishing digital signal processing techniques of various kinds. The function of current hearing aids is no longer restricted to simple voice amplification, which lets the hearing-impaired hear directly, but can also satisfy the different needs of different users with different sound-signal processing. In fact, the development of hearing aids still has room for improvement. In this thesis, white noise is added to a clean voice signal to produce a noisy voice signal. First, the discrete wavelet transform is used to cut the voice bandwidth into nine different sub-bands. Second, the stationary wavelet transform is used to cut the voice bandwidth into nine different sub-bands. Third, the wavelet packet transform is used to cut the voice bandwidth into eight identical sub-bands. Wavelet de-noising is used to filter out high-frequency noise. After the voice signal has been de-noised, compensation is applied for four different types of hearing loss: 40 dB uniform hearing loss, mild low-frequency hearing loss, moderate high-frequency hearing loss, and severe high-frequency hearing loss. Finally, volume saturation limits the final output energy of the speech to a fixed level. This thesis simulates voice signal processing with the wavelet transform; the verification process shows that it can effectively filter out white noise and compensate for the four types of hearing loss, achieving the basic functions of a digital hearing aid.
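For the de-noising stage, a sketch of stationary-wavelet soft thresholding against white noise (the nine-band split and the hearing-loss compensation gains of the thesis are not reproduced here):

```python
import numpy as np
import pywt

def swt_denoise(x, wavelet='db8', level=5):
    """Soft-threshold SWT detail bands with the universal threshold and invert."""
    n = (len(x) // 2 ** level) * 2 ** level              # SWT length requirement
    coeffs = pywt.swt(np.asarray(x[:n], dtype=float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745    # noise scale, finest details
    thr = sigma * np.sqrt(2.0 * np.log(n))
    den = [(cA, pywt.threshold(cD, thr, mode='soft')) for cA, cD in coeffs]
    return pywt.iswt(den, wavelet)
```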
APA, Harvard, Vancouver, ISO, and other styles
48

(6642491), Jingzhao Dai. "SPARSE DISCRETE WAVELET DECOMPOSITION AND FILTER BANK TECHNIQUES FOR SPEECH RECOGNITION." Thesis, 2019.

Find full text
Abstract:

Speech recognition is widely applied to transcription from speech to text, voice-driven commands, human-machine interfaces, and so on [1]-[8]. It has increasingly proliferated into human lives in the modern age. To improve the accuracy of speech recognition, various algorithms such as artificial neural networks and hidden Markov models have been developed [1], [2].

In this thesis work, the tasks of speech recognition with various classifiers are investigated. The classifiers employed include the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF) and convolutional neural network (CNN). Two novel feature extraction methods, sparse discrete wavelet decomposition (SDWD) and bandpass filtering (BPF) based on the Mel filter banks [9], are developed and proposed. To accommodate the diversity of classification algorithms, both one-dimensional (1D) and two-dimensional (2D) features are obtained. The 1D features are arrays of power coefficients in frequency bands, used for training the SVM, KNN and RF classifiers, while the 2D features capture both the frequency content and its temporal variation: they consist of the power values in the decomposed bands across consecutive speech frames. Most importantly, the 2D features, with geometric transformation, are adopted to train the CNN.

Speech recordings of both male and female speakers are taken from a recorded data set as well as a standard data set. Firstly, the proposed feature extraction methods are applied to recordings with little noise and clear pronunciation. After many trials and experiments on this data set, a high recognition accuracy is achieved. Then, the feature extraction methods are further applied to the standard recordings, which have random characteristics with ambient noise and unclear pronunciation. Many experimental results validate the effectiveness of the proposed feature extraction techniques.

APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Wen-Li, and 李文禮. "Applications of Wavelet Coefficient Estimation in Medical Image Enhancement." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/19027423592773343441.

Full text
Abstract:
博士
國立東華大學
電機工程學系
99
Medical images provide clinical information for facilitating diagnostic accuracy and the treatment process. Clear image detail is essential and provides better information for visualization. When the information in medical images is not satisfactory to viewers, image enhancement can be used to improve visual perception of the images. However, over-enhancement or under-enhancement can happen in some images. Moreover, the information obtained by visual inspection of the enhanced images can vary from viewer to viewer, because the preference of visual perception is individual. It would be beneficial for clinical practice if viewers could select the optimal enhanced images for their medical use. We propose three wavelet-based methods specifically to improve visibility in digitized medical images in terms of resolution enhancement, detail enhancement, and texture enhancement; wavelet coefficient estimation is the core technique employed in all three methods. Our proposed wavelet-based interpolation method enables arbitrary resizing of medical images and reduces the influence of image blurring. Our proposed detail enhancement scheme can sharpen images and reveal hidden information, so visibility in medical images can be improved. We also successfully integrate these two methods, wavelet-based interpolation and the detail enhancement scheme, to achieve resolution enhancement and detail enhancement simultaneously. Furthermore, we propose a texture enhancement scheme to increase the definition of texture in noise-corrupted sonograms without eliminating speckles. Experimental results show that our proposed methods outperform other schemes commonly used for medical image enhancement in terms of subjective assessments and objective evaluations. Our proposed methods allow scalable selection of the level of image enhancement to meet viewers' visual preferences.
APA, Harvard, Vancouver, ISO, and other styles
50

Lembono, Buwono, and 林國祥. "Investigation of Wavelet Coefficient of Electrocardiograph based on Image Processing Method." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/80520984867551583877.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
96
The aim of this research was to quantify the continuous wavelet transform (CWT) coefficients of raw ECG data. The methods employed in this thesis include invariant-moment analysis, singular value decomposition (SVD), correlation coefficients, and analysis of variance (ANOVA). The study included 17 subjects: 8 experimental subjects with Zen-meditation experience and 9 control subjects in the same age range but without any meditation experience. According to our results, the seven invariant moment values in the control group tended to decrease, while the experimental group showed a tendency to increase. SVD analysis gives another perspective: the correlation coefficients between the major components of both groups were high, although one result from the control group was only moderately correlated. In the ANOVA, differences appeared more significant in the control group than in the experimental group. Thus, we may preliminarily suggest that the ECG waveform patterns of the experimental group behave more stably than those of the control group under certain conditions.
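A sketch of the CWT-plus-SVD part of the pipeline (the Morlet wavelet, scale range, and the use of the scalogram magnitude are assumptions; the invariant-moment and ANOVA steps are omitted):

```python
import numpy as np
import pywt

def scalogram_svd(ecg, fs, n_scales=64):
    """CWT scalogram of an ECG segment and its leading singular structure."""
    scales = np.arange(1, n_scales + 1)
    coef, _ = pywt.cwt(np.asarray(ecg, dtype=float), scales, 'morl',
                       sampling_period=1.0 / fs)
    U, S, Vt = np.linalg.svd(np.abs(coef), full_matrices=False)
    return U[:, 0], S        # leading component for cross-subject correlation
```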
APA, Harvard, Vancouver, ISO, and other styles