Dissertations / Theses on the topic 'Wavelet coefficients'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Wavelet coefficients.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Er, Chiangkai. "Speech recognition by clustering wavelet and PLP coefficients." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/42742.
Full text
Al-Jawad, Naseer. "Exploiting statistical properties of wavelet coefficients for image/video processing and analysis tasks." Thesis, University of Buckingham, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601354.
Full text
Al-Jawad, Naseer. "Exploiting Statistical Properties of Wavelet Coefficients for Image/Video Processing and Analysis Tasks." Thesis, University of Exeter, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515492.
Full text
Chrápek, Tomáš. "Potlačování šumu v řeči založené na waveletové transformaci a rozeznávání znělosti segmentů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217506.
Full text
Janajreh, Isam Mustafa II. "Wavelet Analysis of Extreme Wind Loads on Low-Rise Structures." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30414.
Full text
Ph. D.
Konczi, Róbert. "Digitální hudební efekt založený na waveletové transformaci jako plug-in modul." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218981.
Full text
Kato, Jien, Toyohide Watanabe, Sebastien Joga, Rittscher Jens, Blake Andrew, ジェーン 加藤, and 豊英 渡邉. "An HMM-based segmentation method for traffic monitoring movies." IEEE, 2002. http://hdl.handle.net/2237/6744.
Full text
Stamos, Dimitrios Georgios. "Experimental Analysis of the Interaction of Water Waves With Flexible Structures." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27567.
Full text
Ph. D.
Morand, Claire. "Segmentation spatio-temporelle et indexation vidéo dans le domaine des représentations hiérarchiques." Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13888/document.
Full text
This thesis aims at proposing a solution for scalable object-based indexing of HD video streams compressed with MJPEG2000. In this context, on the one hand, we work in the hierarchical transform domain of the 9/7 Daubechies wavelets and, on the other hand, the scalable representation calls for multiscale methods, from low to high resolution. The first part of this manuscript is dedicated to a method for automatic extraction of objects having their own motion. It is based on a combination of robust global motion estimation with a morphological color segmentation at low resolution. The result is then refined following the data order of the scalable stream. The second part defines an object descriptor based on multiscale histograms of the wavelet coefficients. Finally, the performance of the proposed method is evaluated in the context of scalable content-based queries.
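The object descriptor mentioned in this abstract boils down to histogramming detail coefficients at each decomposition level. Below is a minimal Python sketch of that idea using PyWavelets; the wavelet ("bior4.4", the 9/7 pair), the number of levels and the bin count are illustrative assumptions, not the thesis' exact settings.

```python
import numpy as np
import pywt

def multiscale_histograms(image, wavelet="bior4.4", levels=3, bins=32):
    """Histogram the detail coefficients of each DWT level as a simple object descriptor."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    descriptor = []
    for detail in coeffs[1:]:                          # skip the approximation band
        stacked = np.concatenate([d.ravel() for d in detail])
        hist, _ = np.histogram(stacked, bins=bins, density=True)
        descriptor.append(hist)
    return np.concatenate(descriptor)

# Example: compare two image regions by descriptor distance
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(np.linalg.norm(multiscale_histograms(a) - multiscale_histograms(b)))
```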
Zhao, Fangwei. "Multiresolution analysis of ultrasound images of the prostate." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0028.
Full text
Combrexelle, Sébastien. "Multifractal analysis for multivariate data with application to remote sensing." Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/16477/1/Combrexelle.pdf.
Full text
Zátyik, Ján. "Směrové reprezentace obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218921.
Full text
Zarjam, Pega. "EEG Data acquisition and automatic seizure detection using wavelet transforms in the newborn EEG." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15795/.
Full text
Zarjam, Peggy. "EEG Data acquisition and automatic seizure detection using wavelet transforms in the newborn EEG." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15795/1/Pega_Zarjam_Thesis.pdf.
Full text
Morais, Edemerson Solano Batista de. "Estudo de Fractalidade e Evolução Dinâmica de Sistemas Complexos." Universidade Federal do Rio Grande do Norte, 2007. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18610.
Full text
Conselho Nacional de Desenvolvimento Científico e Tecnológico
In this work, the study of some complex systems is carried out using two distinct procedures. In the first part, we study the use of the wavelet transform in the analysis and characterization of (multi)fractal time series. We test the reliability of the Wavelet Transform Modulus Maxima (WTMM) method with respect to the multifractal formalism through the calculation of the singularity spectrum of time series whose fractality is well known a priori. Next, we use the WTMM method to study the fractality of lung crackle sounds, a biological time series. Since crackles are produced by the opening of pulmonary airways (bronchi, bronchioles and alveoli) that were initially closed, we can obtain information on the cascade of airway openings across the whole lung. As this phenomenon is associated with the architecture of the pulmonary tree, which displays fractal geometry, the analysis and fractal characterization of this sound may provide important parameters for comparison between healthy lungs and those affected by disorders that alter the geometry of the lung tree, such as obstructive and parenchymal degenerative diseases, as occurs, for example, in pulmonary emphysema. In the second part, we study a site percolation model on square lattices in which the percolating cluster grows governed by a control rule corresponding to an automatic search method. In this percolation model, which has characteristics of self-organized criticality, the automatic search does not use Leath's algorithm; it uses the control rule pt+1 = pt + k(Rc − Rt), where p is the percolation probability, k is a kinetic parameter with 0 < k < 1, and R is the fraction of finite LxL square lattices that percolate. This rule provides a time series corresponding to the dynamical evolution of the system, in particular of the percolation probability p. We then perform a scaling analysis of the signal obtained in this way. The model allows the automatic search method used for site percolation on square lattices to be studied in its own right, evaluating the dynamics of its parameters as the system approaches the critical point. It shows that the scaling of the time elapsed until the system reaches the critical point, and of tcor, the time required for the system to lose its correlations, are both inversely proportional to k, the kinetic parameter of the control rule. We also verify that the system has two distinct time scales afterwards: one in which the system shows 1/f-type noise, indicating that it is strongly correlated, and another in which it shows white noise, indicating that the correlation has been lost. For large time intervals the dynamics of the system shows ergodicity.
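The automatic-search rule quoted in this abstract is simple enough to simulate directly. The sketch below is an illustrative Python loop under assumed stand-ins: a Bernoulli site lattice, a cluster-spanning test via scipy.ndimage.label, and a target fraction Rc = 0.5 (roughly the spanning probability of a square lattice at criticality); it is not the thesis' code.

```python
import numpy as np
from scipy.ndimage import label

def percolates(p, L, rng):
    """One LxL site lattice at occupation probability p; check for a top-to-bottom spanning cluster."""
    grid = rng.random((L, L)) < p
    labels, _ = label(grid)                      # 4-connected cluster labelling
    top = set(labels[0][labels[0] > 0])
    bottom = set(labels[-1][labels[-1] > 0])
    return len(top & bottom) > 0

def automatic_search(k=0.1, Rc=0.5, L=64, n_lattices=50, steps=200, seed=0):
    """Control rule p_{t+1} = p_t + k*(Rc - R_t), with R_t the fraction of percolating lattices."""
    rng = np.random.default_rng(seed)
    p, history = 0.4, []
    for _ in range(steps):
        R = np.mean([percolates(p, L, rng) for _ in range(n_lattices)])
        p += k * (Rc - R)
        history.append(p)
    return np.array(history)

# p drifts toward and then fluctuates around the critical occupation (~0.593 for 2-D site percolation)
print(automatic_search()[-5:])
```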
Anantharaman, B. "Compressed Domain Processing of MPEG Audio." Thesis, Indian Institute of Science, 2001. https://etd.iisc.ac.in/handle/2005/3914.
Full text
Anantharaman, B. "Compressed Domain Processing of MPEG Audio." Thesis, Indian Institute of Science, 2001. http://hdl.handle.net/2005/68.
Full text
Garboan, Adriana. "Traçage de contenu vidéo : une méthode robuste à l’enregistrement en salle de cinéma." Thesis, Paris, ENMP, 2012. http://www.theses.fr/2012ENMP0097/document.
Full text
A sine qua non component of multimedia content distribution on the Internet, video fingerprinting techniques allow the identification of content based on digital signatures (fingerprints) computed from the content itself. The signatures have to be invariant to content transformations like filtering, compression, geometric modifications, and spatio-temporal sub-sampling/cropping. In practice, all these transformations are non-linearly combined in the live camcorder recording use case. The state-of-the-art limitations for video fingerprinting can be identified at three levels: (1) the uniqueness of the fingerprint is dealt with solely by heuristic procedures; (2) the fingerprint matching is not built on mathematical grounds, resulting in a lack of robustness to live camcorder recording distortions; (3) very few, if any, fully scalable mono-modal methods exist. The main contribution of the present thesis is to specify, design, implement and validate a new video fingerprinting method, TrackART, able to overcome these limitations. In order to ensure a unique and mathematical representation of the video content, the fingerprint is represented by a set of wavelet coefficients. In order to grant the fingerprint robustness to the mundane or malicious distortions which appear in practical use cases, the fingerprint matching is based on a repeated Rho test on correlation. In order to make the method efficient for large-scale databases, a localization algorithm based on a bag-of-visual-words representation (Sivic and Zisserman, 2003) is employed. An additional synchronization mechanism able to address the time-variant distortions induced by live camcorder recording was also designed. The TrackART method was validated in industrial partnership with professional players in cinematography special effects (Mikros Image) and with the French Cinematography Authority (CST - Commission Supérieure Technique de l'Image et du Son). The reference video database consists of 14 hours of video content. The query dataset consists of 25 hours of replica content obtained by applying nine types of distortions to a third of the reference video content. The performance of the TrackART method was objectively assessed in the context of live camcorder recording: probability of false alarm lower than 16×10⁻⁶, probability of missed detection lower than 0.041, precision and recall equal to 0.93. These results represent an advancement over the state of the art, which does not offer any video fingerprinting method robust to live camcorder recording, and validate a first proof of concept for the developed statistical methodology.
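The two ingredients named in this abstract, a wavelet-coefficient fingerprint and a correlation test for matching, can be illustrated with the toy sketch below. It is not TrackART; the wavelet, decomposition level and decision threshold are arbitrary assumptions.

```python
import numpy as np
import pywt
from scipy.stats import pearsonr

def fingerprint(frame, wavelet="bior4.4", level=3):
    """Illustrative fingerprint: normalized coarse (approximation) wavelet coefficients."""
    approx = pywt.wavedec2(frame, wavelet=wavelet, level=level)[0].ravel()
    return (approx - approx.mean()) / approx.std()

def matches(fp_query, fp_reference, rho_threshold=0.8):
    """Declare a match when the correlation between two fingerprints exceeds a threshold."""
    rho, _ = pearsonr(fp_query, fp_reference)
    return rho > rho_threshold

reference = np.random.rand(288, 352)
replica = reference + 0.05 * np.random.randn(288, 352)          # a mildly distorted copy
print(matches(fingerprint(replica), fingerprint(reference)))                    # True
print(matches(fingerprint(np.random.rand(288, 352)), fingerprint(reference)))   # False
```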
Urbánek, Pavel. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236385.
Full text
Kubánková, Anna. "Automatická klasifikace digitálních modulací." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-233424.
Full text
Gossler, Fabrício Ely [UNESP]. "Wavelets e polinômios com coeficientes de Fibonacci." Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/148776.
Full text
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
There are different types of wavelet functions that can be used in the wavelet transform. In most cases, the wavelet function chosen for the analysis of a given signal will be the one that best fits the signal in the time-frequency domain. Many types of wavelet functions can be chosen for certain applications, some of which belong to specific sets called wavelet families, such as the Haar, Daubechies, Symlet, Morlet, Meyer and Gaussian families. In this work, a new family of wavelet functions generated from Fibonacci-coefficient polynomials (FCPs) is presented. This family is called Golden, and each member is obtained as the n-th derivative of the quotient of two distinct FCPs. The Golden wavelets were deduced from the observation that, in some cases, the n-th derivative of the quotient of two distinct FCPs results in a function that has the characteristics of a short-duration wave. As an application, some of the wavelets presented in the course of this work are used for cardiac arrhythmia classification in electrocardiogram signals extracted from the MIT-BIH arrhythmia database.
CNPq: 130123/2015-3
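As a quick illustration of the construction described in the abstract above, the snippet below builds two Fibonacci-coefficient polynomials, takes a derivative of their quotient with SymPy, and samples the result; the particular polynomial degrees and derivative order are arbitrary choices, not the ones used in the thesis.

```python
import numpy as np
import sympy as sp

def fibonacci_coefficient_poly(degree, x):
    """P(x) = F1 + F2*x + F3*x**2 + ... with Fibonacci numbers as coefficients."""
    fibs = [1, 1]
    while len(fibs) <= degree:
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f * x**i for i, f in enumerate(fibs[:degree + 1]))

x = sp.symbols("x")
p, q = fibonacci_coefficient_poly(3, x), fibonacci_coefficient_poly(6, x)
candidate = sp.diff(p / q, x, 2)            # 2nd derivative of the quotient of two distinct FCPs
psi = sp.lambdify(x, candidate, "numpy")    # numeric version for sampling or filtering

t = np.linspace(-5, 5, 1001)
values = psi(t)
print(candidate)                            # closed-form expression of the candidate wavelet
print(values.min(), values.max())           # a localized, short-duration oscillation
```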
Gossler, Fabrício Ely. "Wavelets e polinômios com coeficientes de Fibonacci /." Ilha Solteira, 2016. http://hdl.handle.net/11449/148776.
Full text
Mestre
Ge, Zhongfu. "Analysis of surface pressure and velocity fluctuations in the flow over surface-mounted prisms." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/25965.
Full text
Ph. D.
Sansonnet, Laure. "Inférence non-paramétrique pour des interactions poissoniennes." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00835427.
Full text
Mucha, Martin. "Moderní směrové způsoby reprezentace obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220200.
Full text
Montoril, Michel Helcias. "Modelos de regressão com coeficientes funcionais para séries temporais." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-04042013-215702/.
Full text
In this thesis, we study the fitting of functional-coefficient regression models for time series by splines, wavelets and warped wavelets. We consider models with independent and correlated errors. For the three estimation approaches, we obtain rates of convergence to zero for average distances between the functions of the model and the estimators proposed in this work. In the case of the (warped) wavelet approach, we also obtain asymptotic results in more specific situations, in which the functions of the model belong to Sobolev and Besov spaces. Moreover, Monte Carlo simulation studies and applications to real data sets are presented. Through these numerical results, we compare the three estimation approaches proposed here with other approaches known in the literature and verify that the proposed approaches provide competitive results.
Scipioni, Angel. "Contribution à la théorie des ondelettes : application à la turbulence des plasmas de bord de Tokamak et à la mesure dimensionnelle de cibles." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10125.
Full text
The necessary scale-based representation of the world leads us to explain why wavelet theory is the best-suited formalism. Its performance is compared to other tools: R/S analysis and the empirical mode decomposition method (EMD). The great diversity of analyzing bases in wavelet theory leads us to propose a morphological approach to the analysis. The study is organized into three parts. The first chapter is dedicated to the constituent elements of wavelet theory. We then show the surprising link between the concept of recurrence and scale analysis (Daubechies polynomials) by using Pascal's triangle. A general analytical expression for Daubechies' filter coefficients is then proposed from the polynomial roots. The second chapter covers the first application domain: edge plasmas of tokamak fusion reactors. We describe how, for the first time on experimental signals, the Hurst coefficient has been measured by a wavelet-based estimator. We detail, starting from fBm-like processes (fractional Brownian motion), how we have established an original model perfectly reproducing the joint fBm and fGn statistics that characterize magnetized plasmas. Finally, we point out the reasons why there is no link between high values of the Hurst coefficient and possible long-range correlations. The third chapter is dedicated to the second application domain, the analysis of the backscattered echo of an immersed target insonified by an ultrasonic plane wave. We explain how a morphological approach associated with scale analysis can extract the diameter information.
Лавриненко, Олександр Юрійович, Александр Юрьевич Лавриненко, and Oleksandr Lavrynenko. "Методи підвищення ефективності семантичного кодування мовних сигналів." Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/52212.
Full text
The thesis addresses an actual scientific and practical problem in telecommunication systems, namely increasing the throughput of the semantic speech data transmission channel by encoding the data efficiently. The question of increasing the efficiency of semantic coding is therefore formulated as follows: at what minimum rate can the semantic features of speech signals be encoded while keeping a given probability of error-free recognition? This is the question answered in this research, which is an urgent scientific and technical task given the growing trend towards remote interaction between humans and robotic technology through speech, where the accuracy of such systems depends directly on the effectiveness of the semantic coding of speech signals. The thesis investigates the well-known method of increasing the efficiency of semantic coding of speech signals based on mel-frequency cepstral coefficients, which consists in averaging the coefficients of the discrete cosine transform of the log energy of the discrete Fourier transform spectrum processed by a triangular filter bank on the mel scale. The problem is that this method does not satisfy the condition of adaptivity, so the main scientific hypothesis of the study was formulated: the efficiency of semantic coding of speech signals can be increased by using an adaptive empirical wavelet transform followed by Hilbert spectral analysis. Coding efficiency here means a decrease in the information transmission rate for a given probability of error-free recognition of the semantic features of speech signals, which significantly reduces the required passband and thereby increases the bandwidth of the communication channel. In proving the formulated scientific hypothesis, the following results were obtained: 1) for the first time, a method of semantic coding of speech signals based on the empirical wavelet transform was developed; it differs from existing methods by constructing a set of adaptive Meyer bandpass wavelet filters and then applying Hilbert spectral analysis to find the instantaneous amplitudes and frequencies of the intrinsic empirical mode functions, which determine the semantic features of the speech signal and increase the efficiency of its coding; 2) for the first time, the adaptive empirical wavelet transform is applied to multiscale analysis and semantic coding of speech signals, which increases the efficiency of spectral analysis by decomposing high-frequency speech oscillations into their low-frequency components, namely the intrinsic empirical modes; 3) the method of semantic coding of speech signals based on mel-frequency cepstral coefficients was further developed by using the basic principles of adaptive spectral analysis with the empirical wavelet transform, which increases its efficiency.
Experiments conducted in MATLAB R2020b showed that the developed method of semantic coding of speech signals based on the empirical wavelet transform reduces the coding rate from 320 to 192 bit/s and the required passband from 40 to 24 Hz, with a probability of error-free recognition of about 0.96 (96%) at a signal-to-noise ratio of 48 dB; its efficiency is thus 1.6 times higher than that of the existing method. The results obtained in the thesis can be used to build systems for remote interaction between people and robotic equipment using speech technologies, such as speech recognition and synthesis, voice control of technical objects, low-bit-rate coding of speech information, voice translation from foreign languages, etc.
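The core of the pipeline described above, splitting the signal into band-limited modes and then taking instantaneous amplitude and frequency via the Hilbert transform, can be sketched as follows. For simplicity the adaptive empirical wavelet filter bank is replaced here by fixed Butterworth bandpass filters; this is an illustrative stand-in, not the thesis' algorithm.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_modes(signal, fs, edges):
    """Split a signal into band-limited modes using fixed bandpass filters."""
    modes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        modes.append(sosfiltfilt(sos, signal))
    return modes

def hilbert_features(mode, fs):
    """Instantaneous amplitude and frequency of one mode via the analytic signal."""
    analytic = hilbert(mode)
    amplitude = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    return amplitude.mean(), inst_freq.mean()

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
for m in band_modes(speech_like, fs, edges=[100, 700, 2000, 3500]):
    print(hilbert_features(m, fs))    # (mean amplitude, mean instantaneous frequency) per band
```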
Sláma, Adam. "Software pro manuální ostření kamery s rozlišením 4K." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400900.
Full textVedreño, Santos Francisco Jose. "Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/34177.
Full text
Vedreño Santos, FJ. (2013). Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34177
TESIS
Lue, Ming-Sun, and 呂明山. "EEG Feature Analysis based on Wavelet Coefficients." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/25176893352533451644.
Full text
國立交通大學
控制工程系
85
Most signals in nature have spectral properties that are time-varying, so time-frequency representations are widely used. In this thesis, we introduce a time-frequency representation method, the wavelet transform (WT), to analyze EEG signals. A signal processed by the wavelet transform results in a two-dimensional data array with a large number of parameters. From this large set of parameters, we would like to obtain a few substantial coefficients using quantitative approaches. The purpose of this thesis is therefore to introduce some quantitative methods and to explore the characteristics of the EEG signal from various viewpoints, in addition to transforming the original signal with the wavelet transform. In the thesis, we apply fractal dimension, mutual information and cross-correlation methods to the analysis of wavelet coefficients at different scales. From our experiments, the fractal dimension and cross-correlation methods provide a way to qualitatively and quantitatively characterize the EEGs. The mutual information method, however, suppresses the temporal information and is not a feasible tool for quantifying the WT coefficients.
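One of the quantitative measures mentioned, cross-correlation between wavelet scales, is easy to illustrate. The sketch below decomposes a toy signal with PyWavelets and correlates the detail coefficients of neighbouring scales; the wavelet choice, level count and crude dyadic alignment are assumptions for illustration only.

```python
import numpy as np
import pywt

def scale_correlations(signal, wavelet="db4", level=4):
    """Correlate detail coefficients of neighbouring DWT scales (coarse scale upsampled by repetition)."""
    details = pywt.wavedec(signal, wavelet, level=level)[1:]   # coarsest detail first, finest last
    corrs = []
    for coarse, fine in zip(details[:-1], details[1:]):
        coarse_up = np.repeat(coarse, 2)[: len(fine)]          # crude alignment of dyadic scales
        corrs.append(np.corrcoef(coarse_up, fine)[0, 1])
    return corrs

t = np.linspace(0, 1, 1024)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(scale_correlations(eeg_like))
```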
Yang, Tsung-Cheng, and 楊宗正. "Fast Restricted Quadtree Triangulation Using Effective Wavelet Coefficients." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/45168965689640813271.
Full text
國立臺灣科技大學
資訊管理系
88
Triangular meshes have been extensively applied in image compression and virtual reality; they can provide high compression ratios, good image quality, fast computation, and real-time rendering. In this thesis, we present an efficient method to construct a triangular mesh. First, we use the wavelet transform to obtain the wavelet coefficients, then analyze and rearrange them according to their spatio-frequency characteristics to get the effective wavelet coefficients. Second, in order to prevent cracks and obtain better quality, we restrict and regulate the effective wavelet coefficients. Third, based on the restricted quadtree model, we build the triangular mesh from the effective wavelet coefficients. Experimental results show that our method outperforms several previous works for both 3D visualization of terrain data and compression of image data.
Tseng, Chien-Chang, and 曾建昌. "Rotation Invariant Color Texture Image Retrieval Using Wavelet Coefficients." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/3ad354.
Full text
國立成功大學
電機工程學系碩博士班
90
With the development of the World Wide Web (WWW) and fast computer technology, the use of visual information has become routine in scientific and commercial applications. Every day, large numbers of people use the Internet to search and browse diverse multimedia databases. Due to the limitations of search based on textual annotation, content-based retrieval has received more attention in recent years. It provides methods to query image databases using image features as the basis for the queries; these features include the color, texture and shape of objects and regions. The wavelet transform, because of its space-frequency localization characteristics, is preferred in many image and audio processing applications. It provides good multiresolution analysis tools for texture classification and can achieve a high accuracy rate. Considering the relationship between subbands after the wavelet transform, this thesis suggests combining the HL and LH subbands into a single feature, which decreases the number of feature values and gives the feature rotation-invariant capability. By taking advantage of database indexing, similar images can be retrieved for an input query image with a reduced query response time. Moreover, combining color and texture features to emphasize the color distribution of the image further improves the efficiency of the retrieval system.
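The HL/LH combination mentioned above can be illustrated in a few lines: summing the energies of the two orientation-selective detail subbands yields a single per-level feature that no longer distinguishes horizontal from vertical detail. This is a minimal sketch; the exact feature definition in the thesis may differ.

```python
import numpy as np
import pywt

def merged_detail_energy(image, wavelet="db2", levels=3):
    """Per level: HL and LH energies merged into one value, plus the HH (diagonal) energy."""
    features, current = [], image
    for _ in range(levels):
        current, (cH, cV, cD) = pywt.dwt2(current, wavelet)
        features.append((np.mean(cH ** 2) + np.mean(cV ** 2), np.mean(cD ** 2)))
    return np.array(features)

texture = np.random.rand(128, 128)
print(np.round(merged_detail_energy(texture), 4))
print(np.round(merged_detail_energy(np.rot90(texture)), 4))   # nearly identical after merging HL and LH
```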
Tzeng, Yu-Quan, and 曾昱筌. "A blind wavelet-based watermarking with detail-subband coefficients prediction." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/02642324963306693305.
Full text
長庚大學
電機工程學研究所
96
Recently, the wavelet transform has been widely used in multimedia signal processing applications. To provide a security solution, digital watermarking is involved. This study presents a blind wavelet-based watermarking scheme that cooperates with the Human Visual System (HVS) to embed watermarks into detail-subband coefficients. Since imperceptibility is the most significant issue in watermarking, the approximation band is left unchanged, while the detail subbands are modified to carry information. The perceptual embedding weights for all subbands are determined based on the Just Noticeable Distortion (JND) criterion, and the strength of the modification is investigated to provide a compromise between robustness and image quality. In the decoder, the Least-Mean-Square (LMS) algorithm is used to predict the original detail-subband coefficients and then extract the embedded watermarks. As documented in the experimental results, the proposed method provides good robustness and excellent image quality.
Rahman, S. M. Mahbubur. "Probabilistic modeling of wavelet coefficients for processing of image and video signals." Thesis, 2009. http://spectrum.library.concordia.ca/976389/1/NR63363.pdf.
Full text
Yung-Chuan, Liao, and 廖永傳. "Group Testing Using Alternative Class Partitions for Embedded Coding of Wavelet Coefficients." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/19812625532602085117.
Full text國立暨南國際大學
資訊工程學系
93
Progressive image compression is widely used in many domains. Group testing is a new concept in progressive image compression and has been shown to be suitable for image coding. Hong and Ladner proposed an image compression algorithm, called GTW, that is primarily based on the concept of group testing. Although no arithmetic coding is involved in this algorithm, GTW performs competitively with SPIHT's arithmetic-coding variant in terms of rate-distortion performance. We study a series of related papers and analyze the characteristics of wavelet-transformed images. We provide a new method to partition the coefficients of the GTW algorithm. Furthermore, we improve the rate-distortion performance of our algorithm by using a novel coding algorithm that is better suited to wavelet coefficients. Our proposed scheme achieves better rate-distortion performance than the enhanced SPIHT or GTW algorithms. The experimental results show that our algorithm typically provides image quality about 0.4 to 0.7 dB above SPIHT and 0.1 to 0.2 dB above GTW.
Cheng, Kuei-Hung, and 鄭貴鴻. "Progressive Wavelet Coefficients Codec System Design and Its Hardware Design and Implementation." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/14670550100519572405.
Full text
國立雲林科技大學
電子與資訊工程研究所碩士班
92
In this thesis, a novel shape-adaptive zerotree coding scheme and its hardware realization are presented for discrete wavelet transform (DWT) based image compression. The shape-adaptive scheme completely eliminates the coding overhead of pixels outside a video object, saving bit rate. The proposed scheme employs a bottom-up, breadth-first scanning order in which each coefficient is examined only once. It avoids the expensive list buffers required by SPIHT, reducing the memory complexity of the hardware implementation. The proposed coding scheme is also extended to the 3-D case to handle medical volumetric data such as MRI. Simulation results indicate that, at given bit rates, the plain version (with the shape-adaptive feature disabled) of the proposed scheme performs slightly better than SPIHT, and an additional 4 dB PSNR performance edge over SPIHT can be obtained by shape-adaptive processing. A hardwired design is also developed and verified on an FPGA; it achieves a sustained processing rate of 20 frames/sec for 1024x1024 frames.
Lin, Wen-feng, and 林文峰. "Image Compression - The Scalar and Vector Quantization of The Wavelet Transform Coefficients." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/58388960763484966692.
Full text
中原大學
電子工程學系
82
This thesis mainly focuses on the quantization of the wavelet transform coefficients of an image, and the possibility of image compression through a wavelet transform is presented. Both scalar and vector quantization techniques are used to quantize the wavelet transform coefficients. For scalar quantization, a uniform quantizer is used, and the quantized coefficients are encoded by a Huffman procedure to further reduce the bit rate. For vector quantization, the codebook is generated by a fast Pairwise-Nearest-Neighbor (PNN) algorithm in the wavelet transform domain; if the size of the codebook is small enough, a high compression ratio is attainable. The purpose of image compression is to achieve a high compression ratio while maintaining acceptable visual quality of the reconstructed images. Our simulation study shows that different quantization levels can be used for the scalar quantization of the wavelet transform coefficients at different scales. For vector quantization, the global-codebook approach is effective in reducing the bit rate of the compressed data. The bit rate and peak signal-to-noise ratio (PSNR) of the reconstructed 512x512 Lena image are 0.31 bits/pixel (bpp) and 28.04 dB, respectively, for scalar quantization; the corresponding results for vector quantization are 0.21 bpp and 30.75 dB. These results show that image compression through a wavelet transform can achieve good visual quality and a high compression ratio simultaneously.
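A per-scale uniform quantizer of the kind described here is only a few lines of NumPy. The step sizes below are arbitrary illustrative values and the entropy-coding stage (Huffman) is omitted.

```python
import numpy as np
import pywt

def quantize_subbands(image, wavelet="db4", level=3, base_step=8.0):
    """Uniform scalar quantization of DWT coefficients, with coarser steps for finer detail scales."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [np.round(coeffs[0] / base_step) * base_step]           # approximation band
    for depth, details in enumerate(coeffs[1:], start=1):         # coarsest detail level first
        step = base_step * 2 ** (depth - 1)                       # finest level gets the largest step
        out.append(tuple(np.round(d / step) * step for d in details))
    return out

image = np.random.rand(256, 256) * 255
reconstruction = pywt.waverec2(quantize_subbands(image), "db4")[:256, :256]
mse = np.mean((image - reconstruction) ** 2)
print(10 * np.log10(255 ** 2 / mse))    # PSNR of the reconstruction from quantized coefficients
```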
Sun, Jingjing. "Fabric wrinkle characterization and classification using modified wavelet coefficients and support-vector-machine classifiers." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5634.
Full text
Tseng, Chien Tu, and 曾建篤. "High Resolution Wavelet Transform Coefficients And its Application To Resolution Enhancement of Digital Images." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/44828898364997410058.
Full text
中華大學
資訊工程學系碩士班
88
Pixel-domain interpolation is the typical method for resolution enhancement of digital images. After enlarging an image with interpolation, sharp edges usually become aliased. By detecting and compensating the edges, the profiles of the interpolated edges can be made much smoother; however, the quality of the edge detection and compensation method strongly affects the quality of the enhancement, and the computation is complicated. A new approach to resolution enhancement using the multiresolution-analysis property of the discrete wavelet transform is proposed. By estimating the higher-resolution wavelet coefficients, the resolution can be doubled through the synthesis operation of the discrete wavelet transform. The estimation is done using neural networks combined with a simple edge classification method to improve the estimation accuracy. Both 1-D and 2-D cases are shown in this thesis. The experiments show that the enlarged images are clear and sharp, and some details are preserved in the processing. However, parts of the edges are unduly sharp and spurious noise is generated. Furthermore, the training of the neural networks is very slow due to the huge number of training samples. It is desired to overcome these problems in the future.
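The synthesis step that doubles the resolution can be sketched as follows: the low-resolution image is treated as the approximation band, the missing detail bands are filled with an estimate (here simply zeros in place of the neural-network prediction described above), and one inverse DWT step produces the enlarged image. The Haar wavelet is used only so that the output size is exactly doubled.

```python
import numpy as np
import pywt

def wavelet_upscale(low_res):
    """Double the resolution with one inverse DWT step, using the input as the approximation band."""
    details = np.zeros_like(low_res)               # stand-in for the predicted HL/LH/HH coefficients
    return pywt.idwt2((low_res * 2.0, (details, details, details)), "haar")   # x2 undoes Haar scaling

low_res = np.random.rand(64, 64)
print(wavelet_upscale(low_res).shape)              # (128, 128)
```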
LEE, CHUNG-CHI, and 李宗其. "Region-based Image Retrieval Using Watershed Transformation and Region Adjacency Graph for Wavelet Coefficients." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/76790220750023277841.
Full text
國立中正大學
資訊工程研究所
89
Traditional region-based image retrieval systems often use only the dominant features within each region and ignore the useful relationships between neighboring regions. In this thesis, we propose a new color region-based image retrieval system that uses these region relationships. An image is first processed with the wavelet transform to divide it into several subbands and to extract the important texture information. A new color watershed transformation, applied not only to the luminance wavelet coefficients but also to the chromatic wavelet coefficients, is performed to accurately segment the image into several important regions. Then, the region adjacency graph (RAG) is used to represent the regions and their spatial relationships in the segmented image. In the RAG, a node denotes one region and an edge represents the spatial relationship of two neighboring regions. Hence, the features of regions, such as the wavelet coefficients, are recorded in the corresponding nodes of the RAG, while the features of adjacent regions are recorded in the edges. The image retrieval problem is thus reduced to subgraph isomorphism, which is used to verify the similarity between two graphs. A simple heuristic subgraph isomorphism algorithm is applied to compare the query image's RAG with the RAGs in the image database. In experiments, several query results from a test database containing various kinds of images are used to evaluate the performance of the proposed system.
Chen, Xin-Ming, and 陳新明. "ECG Compression Algorithm Based on Best k-coefficients Sparse Decomposition and Discrete Wavelet Transform." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5qr42c.
Full textYu, Kuan-Chi, and 尤冠几. "The Novel Super-Resolution Technology Based on the Wavelet Coefficients Prediction for the Ecological images." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/04423930813232945675.
Full text
中華大學
資訊工程學系碩士班
103
Recently, super-resolution technologies have often been used to assist image recognition. Super-resolution aims at rebuilding low-resolution images into high-resolution images. Conventional methods apply linear or non-linear interpolation to obtain the high-resolution image, and many super-resolution techniques based on the DFT (Discrete Fourier Transform), the DWT (Discrete Wavelet Transform) and learning-based approaches have been proposed to reconstruct high-resolution images with satisfactory PSNR and SSIM evaluation scores. This paper proposes a novel learning-based super-resolution scheme with wavelet-coefficient prediction to rebuild low-resolution images into high-resolution images with high PSNR and SSIM evaluation scores. The experimental results show that the reconstructed high-resolution license plate images achieve a PSNR of 48 dB and an SSIM of 0.99, and the reconstructed high-resolution ecological images achieve a PSNR of 33 dB and an SSIM of 0.98.
Ma, Cheng-Yang, and 馬政揚. "The Novel Super-resolution Technology Based on the Wavelet Coefficients Prediction for the License Plate Images." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/nbd329.
Full text
中華大學
資訊工程學系碩士班
101
Super-resolution technology is applied in this research as the main method. The concept of super-resolution is to rebuild a high-resolution image from a low-resolution image by means of algorithms. Compensation of high-frequency wavelet coefficients and learning-based super-resolution are applied in this study to enhance license plate images from low resolution to high resolution. Using the characteristics of the data after the wavelet transform, we can rebuild the image at high resolution in the high-frequency bands. Through the wavelet transform, local image features retain more detailed information in the high bands, and the tree structure strengthens the correlation between wavelet coefficients. Based on this correlation, we can use the high-band wavelet coefficients as a prediction basis to improve license plate image quality. The method in this research can effectively improve the quality of images enlarged by interpolation, with the PSNR (Peak Signal-to-Noise Ratio) used as a reference to judge image quality. The experimental results show that the rebuilt high-resolution license plate images have a PSNR on average 4 dB higher than interpolation. Keywords: license plate, super-resolution, wavelet.
Chen, Guan-Zhou, and 陳冠州. "SOPC Implementation of a Bit-plane based Modified SPIHT Algorithm for 1-D Wavelet Coefficients Coding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63116298028641886409.
Full text
國立高雄第一科技大學
電腦與通訊工程研究所
101
The SPIHT scheme can be very efficient for data with inherent hierarchical self-similarities. However, because it exploits these self-similarities through dynamic data structures, it imposes practical limitations on hardware implementation, especially for large data sequences. A Modified Set Partitioning In Hierarchical Trees (MSPIHT) algorithm was proposed to solve these problems. Unlike SPIHT, MSPIHT uses bit-plane and flag concepts, which reduce memory requirements and speed up the coding process. Moreover, the three lists of the SPIHT coding process (LIP, LSP, and LIS) are combined into one pass to simplify the MSPIHT coding process, and the search time for descendant coefficients is reduced by using a check bit. Compared with SPIHT, MSPIHT has a more regular coding process, lower coding complexity, and shorter coding time. In this study, we used an Altera DE2-115 board as the platform to implement MSPIHT coding as an SOPC. The experimental results show that the hardware implementation works correctly; furthermore, the MSPIHT encoding process is 70-90 times faster than SPIHT and reduces the memory requirement by 40%.
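The bit-plane idea that MSPIHT builds on is easy to show on its own: quantized coefficient magnitudes are transmitted one bit plane at a time, most significant plane first, so early planes already locate the large coefficients. The sketch below shows only this representation, not the MSPIHT significance/flag machinery.

```python
import numpy as np

def bit_planes(coeffs, num_planes=8):
    """Split integer coefficient magnitudes into bit planes, most significant plane first."""
    mags = np.abs(coeffs).astype(np.int64)
    signs = (coeffs < 0).astype(np.uint8)
    planes = [((mags >> b) & 1).astype(np.uint8) for b in range(num_planes - 1, -1, -1)]
    return signs, planes

coeffs = np.array([37, -5, 0, 18, -120, 3])
signs, planes = bit_planes(coeffs)
print("signs:", signs)
for i, plane in enumerate(planes):
    print(f"plane {i}:", plane)    # plane 0 is the MSB plane; -120 appears first, small values later
```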
Phan, Quan. "Design of vibration inspired bi-orthogonal wavelets for signal analysis." Thesis, 2012. http://hdl.handle.net/1911/71679.
Full text
Liu, Chia-Chou, and 劉佳洲. "On the Application of the De-noising Method of Stationary Wavelet Coefficients Threshold to Filter Out Noise in Digital Hearing Aids." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43887541206717616847.
Full text
臺灣大學
工程科學及海洋工程學研究所
98
For a long time, improving the hearing of the hearing-impaired has been a goal that researchers and medical professionals have struggled to achieve, and with over 200 million deaf or hard-of-hearing people worldwide they realize its importance. Fortunately, technology, from early analog hearing aids to today's mainstream digital hearing aids, has brought about a variety of flourishing digital signal processing techniques. The function of current hearing aids is no longer restricted to simple voice amplification, which lets the hearing-impaired hear directly; they can also satisfy the different needs of different users with different sound signal processing. Even so, the development of hearing aids still has room for improvement. In this thesis, white noise is added to a clean speech signal to produce a noisy speech signal. First, the discrete wavelet transform is used to divide the speech bandwidth into nine different sub-bands. Second, the stationary wavelet transform is used to divide the speech bandwidth into nine different sub-bands. Third, the wavelet packet transform is used to divide the speech bandwidth into eight identical sub-bands. Wavelet de-noising is used to filter out high-frequency noise. After the speech signal has been de-noised, it is compensated for four different types of hearing loss: 40 dB uniform hearing loss, mild low-frequency hearing loss, moderate high-frequency hearing loss, and severe high-frequency hearing loss. Finally, saturation limiting restricts the final output speech energy to a fixed level. This thesis simulates speech signal processing with the wavelet transform; the verification shows that white noise can be effectively filtered out and the four types of hearing loss compensated, achieving the basic functions of digital hearing aids.
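The denoising step described here, thresholding stationary wavelet coefficients, looks roughly like the sketch below (PyWavelets' swt/iswt with a universal soft threshold). The wavelet, level and threshold rule are illustrative choices, not the thesis' exact configuration.

```python
import numpy as np
import pywt

def swt_denoise(noisy, wavelet="db8", level=3):
    """Soft-threshold the detail coefficients of a stationary wavelet transform."""
    coeffs = pywt.swt(noisy, wavelet, level=level)               # [(cA_n, cD_n), ..., (cA_1, cD_1)]
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745            # noise estimate from finest details
    thr = sigma * np.sqrt(2 * np.log(noisy.size))                # universal threshold
    denoised = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(denoised, wavelet)

fs = 8000
t = np.arange(1024) / fs                                         # length must divide by 2**level
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
print(np.mean((noisy - clean) ** 2), np.mean((swt_denoise(noisy) - clean) ** 2))
```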
Dai, Jingzhao. "Sparse Discrete Wavelet Decomposition and Filter Bank Techniques for Speech Recognition." Thesis, 2019.
Find full text
Speech recognition is widely applied to translation from speech to related text, voice-driven commands, human-machine interfaces and so on [1]-[8]. It has increasingly proliferated into people's lives in the modern age. To improve the accuracy of speech recognition, various algorithms such as artificial neural networks and hidden Markov models have been developed [1], [2].
In this thesis work, speech recognition with various classifiers is investigated. The classifiers employed include the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF) and convolutional neural network (CNN). Two novel feature extraction methods, sparse discrete wavelet decomposition (SDWD) and bandpass filtering (BPF) based on Mel filter banks [9], are developed and proposed. To meet the diversity of classification algorithms, both one-dimensional (1D) and two-dimensional (2D) features are obtained. The 1D features are arrays of power coefficients in frequency bands, dedicated to training the SVM, KNN and RF classifiers, while the 2D features capture both the frequency-domain content and its temporal variation; in fact, a 2D feature consists of the power values in the decomposed bands versus consecutive speech frames. Most importantly, the 2D features with geometric transformations are adopted to train the CNN.
Speech recordings of both male and female speakers are taken from our recorded data set as well as from a standard data set. Firstly, the proposed feature extraction methods are applied to recordings with little noise and clear pronunciation; after many trials and experiments on this data set, a high recognition accuracy is achieved. Then, the feature extraction methods are further applied to the standard recordings, which have random characteristics with ambient noise and unclear pronunciation. Many experimental results validate the effectiveness of the proposed feature extraction techniques.
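The 1D/2D features described above, band powers per frame and their stacking over consecutive frames, can be sketched with plain NumPy as follows; the frame length, hop size and band edges are illustrative assumptions.

```python
import numpy as np

def band_power_features(signal, fs, band_edges, frame_len=400, hop=160):
    """2D feature: per-frame log power in each frequency band; one row per speech frame."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    freqs = np.fft.rfftfreq(frame_len, d=1 / fs)
    rows = []
    for i in range(n_frames):
        frame = signal[i * hop: i * hop + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        rows.append([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(band_edges[:-1], band_edges[1:])])
    return np.log(np.array(rows) + 1e-12)          # frames x bands

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
utterance = np.sin(2 * np.pi * 500 * t) + 0.2 * np.random.randn(t.size)
feat_2d = band_power_features(utterance, fs, band_edges=[0, 300, 1000, 3000, 8000])
feat_1d = feat_2d.mean(axis=0)                     # collapsed over frames for SVM/KNN/RF
print(feat_2d.shape, feat_1d.shape)
```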
Lee, Wen-Li, and 李文禮. "Applications of Wavelet Coefficient Estimation in Medical Image Enhancement." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/19027423592773343441.
Full text
國立東華大學
電機工程學系
99
Medical images provide clinical information for facilitating diagnostic accuracy and the treatment process. Clear image detail is essential and provides better information for visualization. When the information in medical images is not satisfactory to viewers, image enhancement can be used to improve visual perception of the images. However, over-enhancement or under-enhancement can happen in some images. Moreover, the information obtained via visual inspection of the enhanced images can vary from viewer to viewer, because preferences in visual perception are individual. It would be beneficial for clinical practice if viewers could select the optimal enhanced images for their medical use. We propose three wavelet-based methods specifically to improve visibility in digitized medical images in terms of resolution enhancement, detail enhancement and texture enhancement; wavelet coefficient estimation is the core technique employed in all three. Our wavelet-based interpolation method enables arbitrary resizing of medical images and reduces the influence of image blurring. Our detail enhancement scheme sharpens the image and reveals hidden information, so visibility in medical images can be improved. We also successfully integrate these two methods, wavelet-based interpolation and the detail enhancement scheme, to achieve resolution enhancement and detail enhancement simultaneously. Furthermore, we propose a texture enhancement scheme to increase the definition of texture in noise-corrupted sonograms without eliminating speckles. Experimental results show that our methods outperform other schemes commonly used for medical image enhancement in terms of both subjective assessments and objective evaluations, and they allow scalable selection of the level of image enhancement to meet viewers' visual preferences.
Lembono, Buwono, and 林國祥. "Investigation of Wavelet Coefficient of Electrocardiograph based on Image Processing Method." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/80520984867551583877.
Full text
國立交通大學
電機與控制工程系所
96
The aim of this research was to quantify the continuous wavelet transform (CWT) coefficients of raw ECG data. The methods employed in this thesis include invariant-moment analysis, singular value decomposition (SVD), correlation coefficients, and analysis of variance (ANOVA). The study included 17 subjects: 8 experimental subjects with Zen-meditation experience and 9 control subjects in the same age range but without any meditation experience. According to our results, the seven invariant-moment values in the control group tended to decrease, while those in the experimental group tended to increase. SVD analysis gives another perspective: the correlation coefficients between the major components of both groups showed high correlation, although one result from the control group was only moderately correlated. In the ANOVA, differences appeared to be more significant in the control group than in the experimental group. Thus, we may preliminarily suggest that the ECG waveform patterns of the experimental group behave more stably than those of the control group under certain conditions.