Dissertations / Theses on the topic 'Adaptive signal processing'

Consult the top 50 dissertations / theses for your research on the topic 'Adaptive signal processing.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Östlund, Nils. "Adaptive signal processing of surface electromyogram signals." Doctoral thesis, Umeå universitet, Strålningsvetenskaper, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-743.

Full text
Abstract:
Electromyography is the study of muscle function through the electrical signals from the muscles. In surface electromyography the electrical signal is detected on the skin. The signal arises from ion exchanges across the muscle fibres’ membranes. The ion exchange in a motor unit, which is the smallest unit of excitation, produces a waveform that is called an action potential (AP). When a sustained contraction is performed the motor units involved in the contraction will repeatedly produce APs, which result in AP trains. A surface electromyogram (EMG) signal consists of the superposition of many AP trains generated by a large number of active motor units. The aim of this dissertation was to introduce and evaluate new methods for analysis of surface EMG signals. An important aspect is to consider where to place the electrodes during the recording so that they are not located over the zone where the neuromuscular junctions are located. A method that could estimate the location of this zone was presented in one study. The mean frequency of the EMG signal is often used to estimate muscle fatigue. For signals with low signal-to-noise ratio it is important to limit the integration intervals in the mean frequency calculations. Therefore, a method that improved the maximum frequency estimation was introduced and evaluated in comparison with existing methods. The main methodological work in this dissertation was concentrated on finding single motor unit AP trains from EMG signals recorded with several channels. In two studies single motor unit AP trains were enhanced by using filters that maximised the kurtosis of the output. The first of these studies used a spatial filter, and in the second study the technique was expanded to include filtration in time. The introduction of time filtration resulted in improved performance, and when the method was evaluated against other methods that use spatial and/or temporal filtration it gave the best performance. In the last study of this dissertation this technique was used to compare AP firing rates and conduction velocities between fibromyalgia patients and a control group of healthy subjects. In conclusion, this dissertation has resulted in new methods that improve the analysis of EMG signals, and as a consequence the methods can simplify physiological research projects.
APA, Harvard, Vancouver, ISO, and other styles
2

Östlund, Nils. "Adaptive signal processing of surface electromyogram signals /." Umeå : Department of Radiation Sciences, Umeå University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chan, M. K. "Adaptive signal processing algorithms for non-Gaussian signals." Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jahanchahi, Cyrus. "Quaternion valued adaptive signal processing." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/24165.

Full text
Abstract:
Recent developments in sensor technology, human-centered computing and robotics have brought to light new classes of multidimensional data which are naturally represented as three- or four-dimensional vector-valued processes. Such signals are readily modeled as real vectors in R^3 and R^4; however, it has become apparent that there are advantages in processing such data in division algebras, in particular the quaternion domain. Progress in the statistics of quaternion variables, particularly augmented statistics and widely linear modeling, has opened up a new front of research in vector sensor modeling; however, there are several key problems that need to be addressed in order to exploit the full power of quaternions in statistical signal processing. The principal problem lies in the lack of a mathematical framework, such as the CR-calculus in the complex domain, for the differentiation of non-holomorphic functions. Since most functions (including typical cost functions) in the quaternion domain are non-holomorphic, as defined by the Cauchy-Riemann-Fueter (CRF) condition, this presents a severe obstacle to solving optimisation problems and developing adaptive filtering algorithms in the quaternion domain. To this end, we develop the HR-calculus, an extension of the CR-calculus, allowing the differentiation of non-holomorphic functions. This is followed by the introduction of the I-gradient, enabling generic extensions of complex-valued algorithms to be derived. Using this unified framework we introduce the quaternion least mean square (QLMS), quaternion recursive least squares (QRLS), quaternion affine projection algorithm (QAPA) and quaternion Kalman filter. These estimators are made optimal for the processing of noncircular data by proposing widely linear extensions of their standard versions. Convergence and steady-state properties of these adaptive estimators are analysed and validated experimentally via simulations on both synthetic and real-world signals.
APA, Harvard, Vancouver, ISO, and other styles
5

Carini, Alberto. "Adaptive and nonlinear signal processing." Doctoral thesis, Università degli studi di Trieste, 1997. http://thesis2.sba.units.it/store/handle/item/13000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Testoni, Nicola <1980>. "Adaptive multiscale biological signal processing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/1122/1/Tesi_Testoni_Nicola.pdf.

Full text
Abstract:
Biological processes are very complex mechanisms, most of them being accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the propelling factors behind the advances in medicine and the biosciences recorded in the recent past. It is a fact that the instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how it is possible to attenuate, and ideally to remove, these effects, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity will be examined and compared to those in the literature tackling the same problems; results will be drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective field of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is not dependent only on designer choices, but driven by input signal characteristics too. Performance comparisons against the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted the very good results achieved by the proposed ultrasound image deconvolution and restoration algorithms: axial resolutions up to 5 times better than those of algorithms in the literature are possible. Concerning extracellular recordings, comparison of the proposed denoising technique with other signal processing algorithms showed an improvement over the state of the art of almost 4 dB.
APA, Harvard, Vancouver, ISO, and other styles
7

Testoni, Nicola <1980>. "Adaptive multiscale biological signal processing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/1122/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Figueroa Toro, Miguel E. "Adaptive signal processing and correlational learning in mixed-signal VLSI /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wyrsch, Sigisbert. "Adaptive subband signal processing for hearing instruments /." Zürich, 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hermand, Jean-Pierre. "Environmentally-Adaptive Signal Processing in Ocean Acoustics." Doctoral thesis, Université Libre de Bruxelles, 1993. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Pazaitis, Dimitrios I. "Performance improvement in adaptive signal processing algorithms." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Yaminysharif, Mohammad. "Accelerated gradient techniques and adaptive signal processing." Thesis, University of Strathclyde, 1987. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21496.

Full text
Abstract:
The main objective of this thesis is to demonstrate the application of the accelerated gradient techniques to various fields of adaptive signal processing. A variety of adaptive algorithms based on the accelerated gradient techniques are developed and analysed in terms of the convergence speed, computational complexity and numerical stability. Extensive simulation results are presented to demonstrate the performance of the proposed algorithms when applied to the fields of adaptive noise cancelling, broad band adaptive array processing and narrow band adaptive spectral estimation. These results are very encouraging in terms of convergence speed and numerical stability of the developed algorithms. The proposed algorithms appear to be attractive alternatives to the conventional recursive least squares algorithms. In addition, the thesis includes a review chapter in which the conventional approaches (ranging from the least mean squares algorithm to the computationally demanding recursive least squares algorithm) to three types of minimization problems (namely unconstrained, linearly constrained and quadratically constrained) are discussed.
APA, Harvard, Vancouver, ISO, and other styles
13

Esparcia, Alcázar Anna Isabel. "Genetic programming for adaptive digital signal processing." Thesis, University of Glasgow, 1998. http://theses.gla.ac.uk/4780/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Kanagasabapathy, Shri. "Distributed adaptive signal processing for frequency estimation." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/49783.

Full text
Abstract:
It is widely recognised that future smart grids will heavily rely upon intelligent communication and signal processing as enabling technologies for their operation. Traditional tools for power system analysis, which have been built from a circuit theory perspective, are a good match for balanced system conditions. However, the unprecedented changes that are imposed by smart grid requirements, are pushing the limits of these old paradigms. To this end, we provide new signal processing perspectives to address some fundamental operations in power systems such as frequency estimation, regulation and fault detection. Firstly, motivated by our finding that any excursion from nominal power system conditions results in a degree of non-circularity in the measured variables, we cast the frequency estimation problem into a distributed estimation framework for noncircular complex random variables. Next, we derive the required next generation widely linear, frequency estimators which incorporate the so-called augmented data statistics and cater for the noncircularity and a widely linear nature of system functions. Uniquely, we also show that by virtue of augmented complex statistics, it is possible to treat frequency tracking and fault detection in a unified way. To address the ever shortening time-scales in future frequency regulation tasks, the developed distributed widely linear frequency estimators are equipped with the ability to compensate for the fewer available temporal voltage data by exploiting spatial diversity in wide area measurements. This contribution is further supported by new physically meaningful theoretical results on the statistical behavior of distributed adaptive filters. Our approach avoids the current restrictive assumptions routinely employed to simplify the analysis by making use of the collaborative learning strategies of distributed agents. The efficacy of the proposed distributed frequency estimators over standard strictly linear and stand-alone algorithms is illustrated in case studies over synthetic and real-world three-phase measurements. An overarching theme in this thesis is the elucidation of underlying commonalities between different methodologies employed in classical power engineering and signal processing. By revisiting fundamental power system ideas within the framework of augmented complex statistics, we provide a physically meaningful signal processing perspective of three-phase transforms and reveal their intimate connections with spatial discrete Fourier transform (DFT), optimal dimensionality reduction and frequency demodulation techniques. Moreover, under the widely linear framework, we also show that the two most widely used frequency estimators in the power grid are in fact special cases of frequency demodulation techniques. Finally, revisiting classic estimation problems in power engineering through the lens of non-circular complex estimation has made it possible to develop a new self-stabilising adaptive three-phase transformation which enables algorithms designed for balanced operating conditions to be straightforwardly implemented in a variety of real-world unbalanced operating conditions. This thesis therefore aims to help bridge the gap between signal processing and power communities by providing power system designers with advanced estimation algorithms and modern physically meaningful interpretations of key power engineering paradigms in order to match the dynamic and decentralised nature of the smart grid.
APA, Harvard, Vancouver, ISO, and other styles
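As a rough illustration of the connection drawn in the abstract above between three-phase transforms and complex-valued frequency estimation, the following sketch maps balanced three-phase voltages to a single complex signal via the Clarke (alpha-beta) transform and estimates the frequency from the phase increment between consecutive samples. It is only a minimal example with made-up signal parameters, not the widely linear, distributed estimators developed in the thesis.

```python
import numpy as np

def clarke_transform(va, vb, vc):
    """Map three-phase samples to the complex alpha-beta voltage v = v_alpha + j*v_beta."""
    v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    v_beta = (1.0 / np.sqrt(3.0)) * (vb - vc)
    return v_alpha + 1j * v_beta

def estimate_frequency(v, fs):
    """Estimate frequency from the average phase increment of the complex voltage."""
    phase_inc = np.angle(v[1:] * np.conj(v[:-1]))   # radians per sample
    return np.mean(phase_inc) * fs / (2.0 * np.pi)

# Synthetic balanced 50 Hz system sampled at 1 kHz (illustrative values only)
fs, f0 = 1000.0, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
va = np.cos(2 * np.pi * f0 * t)
vb = np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)

v = clarke_transform(va, vb, vc)
print(estimate_frequency(v, fs))   # approximately 50.0
```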
15

Yang, Ho. "Partially adaptive space-time processing." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/13028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Cetin, Ediz. "Unsupervised adaptive signal processing techniques for wireless receivers." Thesis, University of Westminster, 2002. https://westminsterresearch.westminster.ac.uk/item/93q55/unsupervised-adaptive-signal-processing-techniques-for-wireless-receivers.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Krishnan, Sridhar. "Adaptive signal processing techniques for analysis of knee joint vibroarthrographic signals." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0016/NQ47897.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Picciolo, Michael L. "Robust Adaptive Signal Processors." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26993.

Full text
Abstract:
Standard open-loop linear adaptive signal processing algorithms derived from the least squares minimization criterion require estimates of the N-dimensional input interference and noise statistics. Often, estimated statistics are biased by contaminant data (such as outliers and non-stationary data) that do not fit the dominant distribution, which is often modeled as Gaussian. In particular, convergence of sample covariance matrices used in block processed adaptive algorithms, such as the Sample Matrix Inversion (SMI) algorithm, is known to be affected significantly by outliers, causing undue bias in subsequent adaptive weight vectors. The convergence measure of effectiveness (MOE) of the benchmark SMI algorithm is known to be relatively fast (order K = 2N training samples) and independent of the (effective) rank of the external interference covariance matrix, making it a useful method in practice for non-contaminated data environments. Novel robust adaptive algorithms are introduced here that perform better than SMI algorithms in contaminated data environments, while some retain its valuable convergence independence feature. Convergence performance is shown to be commensurate with SMI in non-contaminated environments as well. The robust algorithms are based on the Gram Schmidt Cascaded Canceller (GSCC) structure where novel building block algorithms are derived for it and analyzed using the theory of Robust Statistics. Coined M-cancellers after the M-estimates of Huber, these novel cascaded cancellers combine robustness and statistical estimation efficiency in order to provide good adaptive performance in both contaminated and non-contaminated data environments. Additionally, a hybrid processor is derived by combining the Multistage Wiener Filter (MWF) and Median Cascaded Canceller (MCC) algorithms. Both simulated data and measured Space-Time Adaptive Processing (STAP) airborne radar data are used to show performance enhancements. The STAP application area is described in detail in order to further motivate research into robust adaptive processing.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
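For readers unfamiliar with the benchmark referenced in the abstract above, the following minimal sketch shows the plain Sample Matrix Inversion (SMI) weight computation from K training snapshots, with optional diagonal loading. It illustrates the non-robust baseline only, with made-up array data, not the robust M-cancellers developed in the dissertation.

```python
import numpy as np

def smi_weights(snapshots, steering, loading=0.0):
    """Sample Matrix Inversion (SMI): adaptive weights from K training snapshots.

    snapshots: (N, K) complex array of interference-plus-noise training data.
    steering:  (N,) desired-signal steering vector.
    loading:   optional diagonal loading to stabilise the covariance estimate.
    """
    N, K = snapshots.shape
    R_hat = snapshots @ snapshots.conj().T / K + loading * np.eye(N)
    w = np.linalg.solve(R_hat, steering)
    return w / (steering.conj() @ w)      # normalise for unit gain on the steering vector

# Toy example: N = 8 sensors, K = 2N snapshots of white noise plus one strong jammer
rng = np.random.default_rng(0)
N, K = 8, 16
jammer = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(30)))
X = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
X += 10 * np.outer(jammer, rng.standard_normal(K) + 1j * rng.standard_normal(K))
s = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(0)))   # look direction at broadside
w = smi_weights(X, s, loading=1e-3)
print(abs(w.conj() @ jammer))   # small compared with the unit response in the look direction
```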
19

Minelly, Shona. "Signal processing of His Purkinje System electrocardiograms." Thesis, University of Kent, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Owens, Peter. "Advanced signal processing of high resolution electrocardiograms." Thesis, University of Sussex, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Fabrizio, Giuseppe Aureliano. "Space-time characterisation and adaptive processing of ionospherically-propagated HF signals /." Title page, table of contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phf129.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Seliktar, Yaron. "Space-time adaptive monopulse processing." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/13075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Famorzadeh, Shahram. "BEEHIVE : an adaptive, distributed, embedded signal processing environment." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/14803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Dugger, Jeffery Don. "Adaptive Analog VLSI Signal Processing and Neural Networks." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5294.

Full text
Abstract:
Research presented in this thesis provides a substantial leap from the study of interesting device physics to fully adaptive analog networks and lays a solid foundation for future development of large-scale, compact, low-power adaptive parallel analog computation systems. The investigation described here started with observation of this potential learning capability and led to the first derivation and characterization of the floating-gate pFET correlation learning rule. Starting with two synapses sharing the same error signal, we progressed from phase correlation experiments through correlation experiments involving harmonically related sinusoids, culminating in learning the Fourier series coefficients of a square wave [Dugger 2000]. Extending these earlier two-input node experiments to the general case of correlated inputs required dealing with weight decay naturally exhibited by the learning rule. We introduced a source-follower floating-gate synapse as an improvement over our earlier source-degenerated floating-gate synapse in terms of relative weight decay [Dugger 2004]. A larger network of source-follower floating-gate synapses was fabricated and an FPGA-controlled testboard was designed and built. This more sophisticated system provides an excellent framework for exploring applications to multi-input, multi-node adaptive filtering applications. Adaptive channel equalization provided a practical test-case illustrating the use of these adaptive systems in solving real-world problems. The same system could easily be applied to noise and echo cancellation in communication systems and system identification tasks in optimal control problems. We envision the commercialization of these adaptive analog VLSI systems as practical products within a couple of years.
APA, Harvard, Vancouver, ISO, and other styles
25

Lynch, Michael Richard. "Adaptive techniques in signal processing and connectionist models." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/244884.

Full text
Abstract:
This thesis covers the development of a series of new methods and the application of adaptive filter theory which are combined to produce a generalised adaptive filter system which may be used to perform such tasks as pattern recognition. Firstly, the relevant background adaptive filter theory is discussed in Chapter 1 and methods and results which are important to the rest of the thesis are derived or referenced. Chapter 2 of this thesis covers the development of a new adaptive algorithm which is designed to give faster convergence than the LMS algorithm but unlike the Recursive Least Squares family of algorithms it does not require storage of a matrix with n² elements, where n is the number of filter taps. In Chapter 3 a new extension of the LMS adaptive notch filter is derived and applied which gives an adaptive notch filter the ability to lock and track signals of varying pitch without sacrificing notch depth. This application of the LMS filter is of interest as it demonstrates a time varying filter solution to a stationary problem. The LMS filter is next extended to the multidimensional case which allows the application of LMS filters to image processing. The multidimensional filter is then applied to the problem of image registration and this new application of the LMS filter is shown to have significant advantages over current image registration methods. A consideration of the multidimensional LMS filter as a template matcher and pattern recogniser is given. In Chapter 5 a brief review of statistical pattern recognition is given, and in Chapter 6 a review of relevant connectionist models. In Chapter 7 the generalised adaptive filter is derived. This is an adaptive filter with the ability to model non-linear input-output relationships. The Volterra functional analysis of non-linear systems is given and this is combined with adaptive filter methods to give a generalised non-linear adaptive digital filter. This filter is then considered as a linear adaptive filter operating in a non-linearly extended vector space. This new filter is shown to have desirable properties as a pattern recognition system. The performance and properties of the new filter are compared with current connectionist models and results are demonstrated in Chapter 8. In Chapter 9 further mathematical analysis of the networks leads to suggested methods to greatly reduce network complexity for a given problem by choosing suitable pattern classification indices and allowing it to define its own internal structure. In Chapter 10 robustness of the network to imperfections in its implementation is considered. Chapter 11 finishes the thesis with some conclusions and suggestions for future work.
APA, Harvard, Vancouver, ISO, and other styles
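The core idea of the generalised adaptive filter described above, a linear adaptive filter operating in a non-linearly extended vector space, can be illustrated with a second-order Volterra expansion driven by an ordinary LMS update. The sketch below is a simplified stand-in with arbitrary parameters, not the thesis's formulation.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_expand(taps):
    """Extend a tap vector with second-order Volterra (pairwise product) terms."""
    quad = [taps[i] * taps[j] for i, j in combinations_with_replacement(range(len(taps)), 2)]
    return np.concatenate([taps, quad])

def volterra_lms(x, d, n_taps=4, mu=0.05):
    """LMS adaptation carried out in the non-linearly extended vector space."""
    dim = n_taps + n_taps * (n_taps + 1) // 2
    w = np.zeros(dim)
    y = np.zeros_like(d)
    for n in range(n_taps - 1, len(x)):
        u = volterra_expand(x[n - n_taps + 1:n + 1][::-1])   # current and past samples, newest first
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * u
    return w, y

# Toy example: identify a mildly non-linear channel d[n] = x[n] - 0.5*x[n-1] + 0.2*x[n]*x[n-1]
rng = np.random.default_rng(1)
x = 0.5 * rng.standard_normal(5000)
d = x - 0.5 * np.roll(x, 1) + 0.2 * x * np.roll(x, 1)
w, y = volterra_lms(x, d)
print(np.mean((d[2000:] - y[2000:]) ** 2))   # small residual error once the filter has converged
```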
26

Price, Emma J. "The use of residuals for adaptive signal processing." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Maji, Suman Kumar. "Multiscale methods in signal processing for adaptive optics." PhD thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00909085.

Full text
Abstract:
In this thesis, we introduce a new approach to wavefront phase reconstruction in Adaptive Optics (AO) from the low-resolution gradient measurements provided by a wavefront sensor, using a non-linear approach derived from the Microcanonical Multiscale Formalism (MMF). MMF comes from established concepts in statistical physics and is naturally suited to the study of multiscale properties of complex natural signals, mainly due to the precise numerical estimation of geometrically localized critical exponents, called the singularity exponents. These exponents quantify the degree of predictability, locally, at each point of the signal domain, and they provide information on the dynamics of the associated system. We show that multiresolution analysis carried out on the singularity exponents of a high-resolution turbulent phase (obtained by model or from data) allows the low-resolution gradients (obtained from the wavefront sensor) to be propagated along the scales to a higher resolution. We compare our results with those obtained by linear approaches, which allows us to offer an innovative approach to wavefront phase reconstruction in Adaptive Optics.
APA, Harvard, Vancouver, ISO, and other styles
28

Javidi, Soroush. "Adaptive signal processing algorithms for noncircular complex data." Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/6328.

Full text
Abstract:
The complex domain provides a natural processing framework for a large class of signals encountered in communications, radar, biomedical engineering and renewable energy. Statistical signal processing in C has traditionally been viewed as a straightforward extension of the corresponding algorithms in the real domain R, however, recent developments in augmented complex statistics show that, in general, this leads to under-modelling. This direct treatment of complex-valued signals has led to advances in so called widely linear modelling and the introduction of a generalised framework for the differentiability of both analytic and non-analytic complex and quaternion functions. In this thesis, supervised and blind complex adaptive algorithms capable of processing the generality of complex and quaternion signals (both circular and noncircular) in both noise-free and noisy environments are developed; their usefulness in real-world applications is demonstrated through case studies. The focus of this thesis is on the use of augmented statistics and widely linear modelling. The standard complex least mean square (CLMS) algorithm is extended to perform optimally for the generality of complex-valued signals, and is shown to outperform the CLMS algorithm. Next, extraction of latent complex-valued signals from large mixtures is addressed. This is achieved by developing several classes of complex blind source extraction algorithms based on fundamental signal properties such as smoothness, predictability and degree of Gaussianity, with the analysis of the existence and uniqueness of the solutions also provided. These algorithms are shown to facilitate real-time applications, such as those in brain computer interfacing (BCI). Due to their modified cost functions and the widely linear mixing model, this class of algorithms perform well in both noise-free and noisy environments. Next, based on a widely linear quaternion model, the FastICA algorithm is extended to the quaternion domain to provide separation of the generality of quaternion signals. The enhanced performances of the widely linear algorithms are illustrated in renewable energy and biomedical applications, in particular, for the prediction of wind profiles and extraction of artifacts from EEG recordings.
APA, Harvard, Vancouver, ISO, and other styles
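As a hedged illustration of the widely linear (augmented) modelling discussed above, the sketch below implements an augmented complex LMS update, y = h^H x + g^H x*, and applies it to a toy noncircular signal that a strictly linear filter cannot model exactly. Parameter values and the test signal are made up; this is not the exact algorithm derived in the thesis.

```python
import numpy as np

def aclms(x, d, n_taps=4, mu=0.01):
    """Widely linear (augmented) complex LMS: y = h^H x + g^H conj(x)."""
    h = np.zeros(n_taps, dtype=complex)
    g = np.zeros(n_taps, dtype=complex)
    y = np.zeros(len(d), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]                   # regressor, most recent sample first
        y[n] = h.conj() @ u + g.conj() @ u.conj()
        e = d[n] - y[n]
        h += mu * u * np.conj(e)                    # update of the strictly linear part
        g += mu * u.conj() * np.conj(e)             # update of the conjugate (augmented) part
    return h, g, y

# Toy example: a noncircular (improper) signal with d = conj(x) delayed by one sample
rng = np.random.default_rng(2)
x = rng.standard_normal(4000) + 0.2j * rng.standard_normal(4000)   # unequal real/imag power: improper
d = np.conj(np.roll(x, 1))
h, g, y = aclms(x, d)
print(np.mean(np.abs(d[1000:] - y[1000:]) ** 2))   # small steady-state error; the conjugate regressor captures the relation
```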
29

Ranganathan, Raghuram. "Novel complex adaptive signal processing techniques employing optimally derived time-varying convergence factors with applications in digital signal processing and wireless communications." Orlando, Fla. : University of Central Florida, 2008. http://purl.fcla.edu/fcla/etd/CFE0002431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Al-Lawzi, Mahmod Jasim Mohammed. "The development of adaptive signal processing algorithms for the recovery of periodic signals." Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.484187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hayward, Stephen David. "Adaptive sensor array processing in non-stationary signal environments." Thesis, University of Birmingham, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hong, John Hyunchul. "Optical computing for adaptive signal processing and associative memories /." Diss., Pasadena, Calif. : California Institute of Technology, 1987. http://resolver.caltech.edu/CaltechETD:etd-06142006-094757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Teyan. "Novel adaptive signal processing techniques for underwater acoustic communications." Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/1925/.

Full text
Abstract:
The underwater acoustic channel is characterized by time-varying multipath propagation with large delay spreads of up to hundreds of milliseconds, which introduces severe intersymbol interference (ISI) in digital communication system. Many of the existing channel estimation and equalization techniques used in radio frequency wireless communication systems might be practically inapplicable to underwater acoustic communication due to their high computational complexity. The recursive least squares (RLS)-dichotomous coordinate descent (DCD) algorithm has been recently proposed and shown to perform closely to the classical RLS algorithm while having a significantly lower complexity. It is therefore a highly promising channel estimation algorithm for underwater acoustic communications. However, predicting the convergence performance of the RLS-DCD algorithm is an open issue. Known approaches are found not applicable, as in the RLS-DCD algorithm, the normal equations are not exactly solved at every time instant and the sign function is involved at every update of the filter weights. In this thesis, we introduce an approach for convergence analysis of the RLS-DCD algorithm based on computations with only deterministic correlation quantities. Equalization is a well known method for combatting the ISI in communication channels. Coefficients of an adaptive equalizer can be computed without explicit channel estimation using the channel output and known pilot signal. Channel-estimate (CE) based equalizers which re-compute equalizer coefficients for every update of the channel estimate, can outperform equalizers with the direct adaptation. However, the computational complexity of CE based equalizers for channels with large delay spread, such as the underwater acoustic channel, is an open issue. In this thesis, we propose a low-complexity CE based adaptive linear equalizer, which exploits DCD iterations for computation of equalizer coefficients. The proposed technique has as low complexity as O(Nu(K+M)) operations per sample, where K and M are the equalizer and channel estimator length, respectively, and Nu is the number of iterations such that Nu << K and Nu << M. Moreover, when using the RLS-DCD algorithm for channel estimation, the computation of equalizer coefficients is multiplication-free and division-free, which makes the equalizer attractive for hardware design. Simulation results show that the proposed adaptive equalizer performs close to the minimum mean-square-error (MMSE) equalizer with perfect knowledge of the channel. Decision feedback equalizers (DFEs) can outperform LEs, provided that the effect of decision errors on performance is negligible. However, the complexity of existing CE based DFEs normally grows squarely with the feedforward filter (FFF) length K. In multipath channels with large delay spread and long precursor part, such as in underwater acoustic channels, the FFF length K needs to be large enough to equalize the precursor part, and it is usual that K > M. Reducing the complexity of CE based DFEs in such scenarios is still an open issue. In this thesis, we derive two low complexity approaches for computing CE based DFE coefficients. The proposed DFEs operate together with partial-update channel estimators, such as the RLS-DCD channel estimator, and exploit complex-valued DCD iterations to efficiently compute the DFE coefficients. 
In the first approach, the proposed DFE has a complexity of O(Nu l log 2l) real multiplications per sample, where l is the equalizer delay and Nu is the number of iterations such that Nu << l. In the second proposed approach, DFE has a complexity as low as O(Nu K)+O(Nu B) + O(Nu M) operations per sample, where B is the feedback filter (FBF) length and Nu << M. Moreover, when the channel estimator also exploits the DCD iterations, e.g. such as in the RLS-DCD adaptive filter, the second approach is multiplication-free and division-free, which makes the equalizer attractive for hardware implementation. Simulation results show that the proposed DFEs perform close to the RLS CE based DFE, where the CE is obtained using the classical RLS adaptive filter and the equalizer coefficients are computed according to the MMSE criterion. Localization is an important problem for many underwater communication systems, such as underwater sensor networks. Due to the characteristics of the underwater acoustic channel, localization of underwater acoustic sources is challenging and needs to be accurate and computationally efficient. The matched-phase coherent broadband matched-field (MF) processor has been previously proposed and shown to outperform other advanced broadband MF processors for underwater acoustic source localization. It has been previously proposed to search the matched phases using the simulated annealing, which is well known for its ability for solving global optimization problems while having high computational complexity. This prevents simultaneous processing of many frequencies, and thus, limits the processor performance. In this thesis, we introduce a novel iterative technique based on coordinate descent optimization, the phase descent search (PDS), for searching the matched phases. We show that the PDS algorithm obtains matched phases similar to that obtained by the simulated annealing, and has significantly lower complexity. Therefore, it enables to search phases for a large number of frequencies and significantly improves the processor performance. The proposed processor is applied to experimental data for locating a moving acoustic source and shown to provide accurate localization of the source well matched to GPS measurements.
APA, Harvard, Vancouver, ISO, and other styles
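The abstract above centres on dichotomous coordinate descent (DCD) iterations, which solve the normal equations using coordinate updates whose step sizes are powers of two. The following sketch of a cyclic DCD solver for R h = b is a simplified, real-valued illustration under assumed parameter names (Nu as the update budget, following the abstract's notation), not the complex-valued RLS-DCD recursion of the thesis.

```python
import numpy as np

def dcd_solve(R, b, H=1.0, n_bits=15, Nu=32):
    """Cyclic dichotomous coordinate descent: approximately solve R h = b.

    Each update changes one coefficient by +/- a power-of-two step, so a fixed-point
    implementation needs no multiplications or divisions.  Nu caps the number of updates.
    """
    N = len(b)
    h = np.zeros(N)
    r = b.copy()                      # residual vector r = b - R h
    alpha = H                         # step size, halved bit by bit
    updates = 0
    for _ in range(n_bits):
        alpha /= 2.0
        improved = True
        while improved and updates < Nu:
            improved = False
            for k in range(N):
                if abs(r[k]) > (alpha / 2.0) * R[k, k]:
                    s = 1.0 if r[k] > 0 else -1.0
                    h[k] += s * alpha
                    r -= s * alpha * R[:, k]
                    updates += 1
                    improved = True
                    if updates >= Nu:
                        break
        if updates >= Nu:
            break
    return h

# Toy check against a direct solver (illustrative sizes only)
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 8))
R = A.T @ A / 50
b = R @ np.array([0.5, -0.25, 0.125, 0.0, 0.75, -0.5, 0.25, 0.0])
print(np.max(np.abs(dcd_solve(R, b, Nu=200) - np.linalg.solve(R, b))))  # small error versus the direct solution
```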
34

Baykal, Buyurman. "Underdetermined recursive least-squares adaptive filtering." Thesis, Imperial College London, 1995. http://hdl.handle.net/10044/1/7790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Schoenig, Gregory Neumann. "Contributions to Robust Adaptive Signal Processing with Application to Space-Time Adaptive Radar." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/26972.

Full text
Abstract:
Classical adaptive signal processors typically utilize assumptions in their derivation. The presence of adequate Gaussian and independent and identically distributed (i.i.d.) input data is central among such assumptions. However, classical processors have a tendency to suffer a degradation in performance when assumptions like these are violated. Worse yet, such degradation is not guaranteed to be proportional to the level of deviation from the assumptions. This dissertation proposes new signal processing algorithms based on aspects of modern robustness theory, including methods to enable adaptivity of presently non-adaptive robust approaches. The contributions presented are the result of research performed jointly in two disciplines, namely robustness theory and adaptive signal processing. This joint consideration of robustness and adaptivity enables improved performance in assumption-violating scenarios, scenarios in which classical adaptive signal processors fail. Three contributions are central to this dissertation. First, a new adaptive diagnostic tool for high-dimension data is developed and shown to be robust under problematic contamination. Second, a robust data-pre-whitening method is presented based on the new diagnostic tool. Finally, a new suppression-based robust estimator is developed for use with complex-valued adaptive signal processing data. To exercise the proposals and compare their performance to state-of-the-art methods, data sets commonly used in statistics as well as Space-Time Adaptive Processing (STAP) radar data, both real and simulated, are processed, and performance is subsequently computed and displayed. The new algorithms are shown to outperform their state-of-the-art counterparts from both a signal-to-interference plus noise ratio (SINR) convergence rate and target detection perspective.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
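The robustness theory invoked above builds on M-estimation. As a generic, illustrative example only (not one of the dissertation's algorithms), the sketch below computes a Huber M-estimate of location by iteratively reweighted averaging and compares it with the sample mean on contaminated toy data.

```python
import numpy as np

def huber_location(x, c=1.345, n_iter=30):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    mu = np.median(x)
    scale = np.median(np.abs(x - np.median(x))) / 0.6745   # MAD-based robust scale estimate
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = c / np.maximum(np.abs(r), c)                    # Huber weights: 1 inside +/-c, downweighted outside
        mu = np.sum(w * x) / np.sum(w)
    return mu

# Toy example: Gaussian data contaminated by a few large outliers
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(50.0, 1.0, 5)])
print(np.mean(x), huber_location(x))   # the sample mean is pulled toward 50, the M-estimate is not
```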
36

Spalding, Scott A. Jr. "Adaptive OFDM Radar Signal Design." Miami University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=miami1335728143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Vartak, Aniket. "Biosignal processing challenges in emotion recognition for adaptive learning." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2667.

Full text
Abstract:
User-centered, computer-based learning is an emerging field of interdisciplinary research. Research in diverse areas such as psychology, computer science, neuroscience and signal processing is making contributions that promise to take this field to the next level. Learning systems built using contributions from these fields could be used in actual training and education instead of just laboratory proofs of concept. One of the important advances in this research is the detection and assessment of the cognitive and emotional state of the learner using such systems. This capability moves development beyond the use of traditional user performance metrics to include system intelligence measures that are based on current neuroscience theories. These advances are of paramount importance in the success and widespread use of learning systems that are automated and intelligent. Emotion is considered an important aspect of how learning occurs, and yet estimating it and making adaptive adjustments are not part of most learning systems. In this research we focus on one specific aspect of constructing an adaptive and intelligent learning system, that is, estimation of the emotion of the learner as he/she is using the automated training system. The challenge starts with the definition of emotion and its utility in human life. The next challenge is to measure the co-varying factors of emotions in a non-invasive way, and to find consistent features from these measures that are valid across a wide population. In this research we use four physiological sensors that are non-invasive, and establish a methodology for utilizing the data from these sensors using different signal processing tools. A validated set of visual stimuli used worldwide in the research of emotion and attention, called the International Affective Picture System (IAPS), is used. A dataset is collected from the sensors in an experiment designed to elicit emotions from these validated visual stimuli. We describe a novel wavelet method to calculate a hemispheric asymmetry metric using electroencephalography data. This method is tested against the typically used power spectral density method. We show an overall improvement in accuracy in classifying specific emotions using the novel method. We also show distinctions between different discrete emotions from the autonomic nervous system activity using electrocardiography, electrodermal activity and pupil diameter changes. Findings from different features from these sensors are used to give guidelines for using each of the individual sensors in the adaptive learning environment.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
38

Myjak, Mitchell John. "A medium-grain reconfigurable architecture for digital signal processing." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Dissertations/Spring2006/m%5Fmyjak%5F042706.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Fertig, Louis B. "Dual forms for constrained adaptive filtering." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Howe, G. S. "A real-time adaptive beamformer for underwater telemetry." Thesis, University of Newcastle Upon Tyne, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Lampl, Tanja. "Implementation of adaptive filtering algorithms for noise cancellation." Thesis, Högskolan i Gävle, Avdelningen för elektroteknik, matematik och naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-33277.

Full text
Abstract:
This paper deals with the implementation and performance evaluation of adaptive filtering algorithms for noise cancellation without a reference signal. Noise cancellation is a technique of estimating a desired signal from a noise-corrupted observation. If the signal and noise characteristics are unknown or change continuously over time, the need for an adaptive filter arises. In contrast to conventional digital filter design techniques, adaptive filters do not have constant filter parameters; they have the capability to continuously adjust their coefficients to their operating environment. To design an adaptive filter that produces an optimum estimate of the desired signal from the noisy environment, different adaptive filtering algorithms are implemented and compared to each other. The least mean square (LMS), the normalized least mean square (NLMS) and the recursive least squares (RLS) algorithms are investigated. Three performance criteria are used in the study of these algorithms: the rate of convergence, the error performance and the signal-to-noise ratio (SNR). The implementation results show that the adaptive noise cancellation application benefits more from the use of the NLMS algorithm instead of the LMS or RLS algorithm.
APA, Harvard, Vancouver, ISO, and other styles
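One way to cancel noise without a separate reference signal, in the spirit of the comparison above, is the adaptive line enhancer, in which a delayed copy of the input drives the filter. The sketch below uses the NLMS update (one of the three algorithms compared in the thesis) with made-up filter length, step size and test signal; it illustrates the technique, not the thesis's implementation.

```python
import numpy as np

def nlms_line_enhancer(x, n_taps=32, mu=0.1, delay=1, eps=1e-6):
    """Adaptive line enhancer: NLMS prediction of the correlated (tonal) part of x.

    A delayed copy of the input acts as the filter input, so no separate reference
    signal is needed; the output tracks the predictable component and the error
    retains the uncorrelated noise.
    """
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    e = np.zeros_like(x)
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps + 1:n - delay + 1][::-1]   # delayed regressor, newest sample first
        y[n] = w @ u
        e[n] = x[n] - y[n]
        w += mu * e[n] * u / (eps + u @ u)                  # NLMS: step normalised by input power
    return y, e

# Toy example: sinusoid buried in white noise (values are illustrative only)
rng = np.random.default_rng(5)
t = np.arange(8000)
clean = np.sin(2 * np.pi * 0.05 * t)
noisy = clean + rng.normal(0.0, 1.0, len(t))
enhanced, residual = nlms_line_enhancer(noisy)
print(np.mean((enhanced[4000:] - clean[4000:]) ** 2))   # well below the input noise power of 1.0
```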
42

Perumalla, Calvin A. "Machine Learning and Adaptive Signal Processing Methods for Electrocardiography Applications." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6926.

Full text
Abstract:
This dissertation is directed towards improving the state of art cardiac monitoring methods and automatic diagnosis of cardiac anomalies through modern engineering approaches such as adaptive signal processing, and machine learning methods. The dissertation will describe the invention and associated methods of a cardiac rhythm monitor dubbed the Integrated Vectorcardiogram (iVCG). In addition, novel machine learning approaches are discussed to improve diagnoses and prediction accuracy of cardiac diseases. It is estimated that around 17 million people in the world die from cardiac related events each year. It has also been shown that many of such deaths can be averted with long-term continuous monitoring and actuation. Hence, there is a growing need for better cardiac monitoring solutions. Leveraging the improvements in computational power, communication bandwidth, energy efficiency and electronic chip size in recent years, the Integrated Vectorcardiogram (iVCG) was invented as an answer to this problem. The iVCG is a miniaturized, integrated version of the Vectorcardiogram that was invented in the 1930s. The Vectorcardiogram provides full diagnostic quality cardiac information equivalent to that of the gold standard, 12-lead ECG, which is restricted to in-office use due to its bulky, obtrusive form. With the iVCG, it is possible to provide continuous, long-term, full diagnostic quality information, while being portable and unobtrusive to the patient. Moreover, it is possible to leverage this ‘Big Data’ and create machine learning algorithms to deliver better patient outcomes in the form of patient specific machine diagnosis and timely alerts. First, we present a proof-of-concept investigation for a miniaturized vectorcardiogram, the iVCG system for ambulatory on-body applications that continuously monitors the electrical activity of the heart in three dimensions. We investigate the minimum distance between a pair of leads in the X, Y and Z axes such that the signals are distinguishable from the noise. The target dimensions for our prototype iVCG are 3x3x2 cm and based on our experimental results we show that it is possible to achieve these dimensions. Following this, we present a solution to the problem of transforming the three VCG component signals to the familiar 12-lead ECG for the convenience of cardiologists. The least squares (LS) method is employed on the VCG signals and the reference (training) 12-lead ECG to obtain a 12x3 transformation matrix to generate the real-time ECG signals from the VCG signals. The iVCG is portable and worn on the chest of the patient and although a physician or trained technician will initially install it in the appropriate position, it is prone to subsequent rotation and displacement errors introduced by the patient placement of the device. We characterize these errors and present a software solution to correct the effect of the errors on the iVCG signals. We also describe the design of machine learning methods to improve automatic diagnosis and prediction of various heart conditions. Methods very similar to the ones described in this dissertation can be used on the long term, full diagnostic quality ‘Big Data’ such that the iVCG will be able to provide further insights into the health of patients. The iVCG system is potentially breakthrough and disruptive technology allowing long term and continuous remote monitoring of patient’s electrical heart activity. 
The implications are profound and include 1) providing a less expensive device compared to the 12-lead ECG system (the “gold standard”); 2) providing continuous, remote tele-monitoring of patients; 3) the replacement of the current Holter short-term monitoring system; 4) improved and economical ICU cardiac monitoring; 5) the ability for patients to be sent home earlier from a hospital, since physicians will have continuous remote monitoring of the patients.
APA, Harvard, Vancouver, ISO, and other styles
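The abstract above mentions fitting a 12x3 transformation matrix by least squares so that the three VCG components reproduce the 12-lead ECG. A minimal sketch of such a fit is shown below, using synthetic stand-in data rather than real recordings.

```python
import numpy as np

def fit_vcg_to_ecg(vcg_train, ecg_train):
    """Least-squares fit of a 12x3 matrix T such that ecg ≈ vcg @ T.T (one sample per row)."""
    T_transposed, *_ = np.linalg.lstsq(vcg_train, ecg_train, rcond=None)   # shape (3, 12)
    return T_transposed.T                                                  # shape (12, 3)

# Synthetic stand-in data: a fixed 12x3 mixing plus a little measurement noise
rng = np.random.default_rng(6)
true_T = rng.standard_normal((12, 3))
vcg = rng.standard_normal((2000, 3))                 # 2000 samples of the 3 VCG components
ecg = vcg @ true_T.T + 0.01 * rng.standard_normal((2000, 12))

T = fit_vcg_to_ecg(vcg, ecg)
reconstructed_ecg = vcg @ T.T                        # reconstruction of the 12 leads from the VCG
print(np.max(np.abs(T - true_T)))                    # close to zero for this synthetic example
```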
43

Tanrikulu, Oguz. "Adaptive signal processing algorithms with accelerated convergence and noise immunity." Thesis, Imperial College London, 1995. http://hdl.handle.net/10044/1/7877.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lin, Lu. "Adaptive signal processing in subbands using sigma-delta modulation technique." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6532.

Full text
Abstract:
In this thesis, the use of subbanding and sigma-delta modulation in interference/noise cancellation is intensively studied and a sigma-delta modulated subbanded adaptive interference/noise cancellation system is proposed. The filter bank is fully sigma-delta modulated. The output signal from the filter bank is then used to produce the input to the adaptive filter. The adaptive filter is partially sigma-delta modulated. The output is demodulated at the final stage. Maintaining the sigma-delta modulated signal representation throughout the system results in considerable savings in complexity. The performance of the proposed system is studied and compared to the regular non-sigma-delta-modulated case regarding complexity, convergence speed and steady-state error. The effect of the oversampling rate used in the sigma-delta modulation as well as the quality of the demodulator is also considered. It is shown that in the case of interference cancellation a comb filter is sufficient, while in the case of noise cancellation a good-quality demodulator is essential. The thesis concludes by highlighting the tradeoffs between the hardware complexity reduction and the overall system performance.
APA, Harvard, Vancouver, ISO, and other styles
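As a small illustration of the sigma-delta modulation on which the entry above builds, the sketch below implements a first-order, one-bit sigma-delta modulator and recovers an oversampled sine wave with a simple moving-average (comb-like) filter. The oversampling ratio and signal are arbitrary; the thesis's subbanded system is considerably more elaborate.

```python
import numpy as np

def sigma_delta_first_order(x):
    """First-order sigma-delta modulator: integrate the quantisation error, quantise to one bit."""
    bits = np.zeros(len(x))
    integrator = 0.0
    feedback = 0.0
    for n, sample in enumerate(x):
        integrator += sample - feedback         # accumulate the difference between input and fed-back bit
        bits[n] = 1.0 if integrator >= 0.0 else -1.0
        feedback = bits[n]
    return bits

# Toy example: oversampled sine wave; a moving-average (comb-like) filter recovers it
osr = 64                                        # oversampling ratio
t = np.arange(0, 8 * np.pi, 2 * np.pi / (100 * osr))
x = 0.5 * np.sin(t)
bits = sigma_delta_first_order(x)
demod = np.convolve(bits, np.ones(osr) / osr, mode="same")
print(np.mean((demod - x) ** 2))                # much smaller than the raw one-bit error power
```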
45

White, Paul Robert. "Adaptive signal processing and its application to infrared detector systems." Thesis, University of Southampton, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Scott, Iain. "Partially adaptive array signal processing with application to airborne radar." Thesis, University of Edinburgh, 1995. http://hdl.handle.net/1842/12912.

Full text
Abstract:
An adaptive array is a signal processor used in conjunction with a set of antennae to provide a versatile form of spatial filtering. The processor combines spatial samples of a propagating field with a variable set of weights, typically chosen to reject interfering signals and noise. In radar, the spatial filtering capability of the array facilitates cancellation of hostile jamming signals and aids in the suppression of clutter. In many applications, the practical usefulness of an adaptive array is limited by the complexity associated with computing the adaptive weights. In a partially adaptive beamformer only a subset of the available degrees of freedom are used adaptively, where adaptive degree of freedom denotes the number of unconstrained or free weights that must be computed. The principal benefits associated with reducing the number of adaptive degrees of freedom are reduced computational burden and improved adaptive convergence rate. The computational cost of adaptive algorithms is generally either directly proportional to the number of adaptive weights or to the square or cube of the number of adaptive weights. In radar it is often mandatory that the number of adaptive weights be reduced with large antenna arrays because of the algorithms computational requirement. The number of data vectors needed for the adaptive weights to converge to their optimal values is also proportional to the number of adaptive weights. Thus, in some applications, adaptive response requirements dictate reductions in the number of adaptive weights. Both of these aspects are investigated in this thesis. The primary disadvantage of reducing the number of adaptive weights is a degradation in the steady-state interference cancellation capability. This degradation is a function of which adaptive degrees of freedom are utilised and is the motivation for the partially adaptive design techniques detailed in this thesis. A new technique for selecting adaptive degrees of freedom is proposed. This algorithm sequentially selects adaptive weights based on an output mean square error criterion. It is demonstrated through simulation that for a given partially adaptive dimension this approach leads to improved steady-state performance, in mean square error terms, over popular eigenstructure approaches. Additionally, the adaptive structure which results from this design method is computationally efficient, yielding a reduction of around 80% in the number of both complex multiplications and additions.
APA, Harvard, Vancouver, ISO, and other styles
47

Chambers, Jonathon Arthur. "Digital signal processing algorithms and structures for adaptive line enhancing." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/47797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Jen Mei. "Multistage adaptive filtering in a multirate digital signal processing system." Thesis, Massachusetts Institute of Technology, 1993. https://hdl.handle.net/1721.1/127935.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1993.
Includes bibliographical references (leaves 101-104).
by Jen Mei Chen.
APA, Harvard, Vancouver, ISO, and other styles
49

Lorente Giner, Jorge. "Adaptive signal processing for multichannel sound using high performance computing." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/58427.

Full text
Abstract:
The field of audio signal processing has undergone a major development in recent years. Both the consumer and professional marketplaces continue to show growth in audio applications such as immersive audio schemes that offer an optimal listening experience, intelligent noise reduction in cars or improvements in audio teleconferencing or hearing aids. The development of these applications has a common interest in increasing or improving the number of discrete audio channels, the quality of the audio or the sophistication of the algorithms. This often gives rise to problems of high computational cost, even when using common signal processing algorithms, mainly due to the application of these algorithms to multiple signals with real-time requirements. The field of High Performance Computing (HPC) based on low-cost hardware elements is the bridge needed between the computing problems and the real multimedia signals and systems that lead to user applications. In this sense, the present thesis goes a step further in the development of these systems by using the computational power of General Purpose Graphics Processing Units (GPGPUs) to exploit the inherent parallelism of signal processing for multichannel audio applications. The increase of the computational capacity of processing devices has historically been linked to the number of transistors in a chip. However, nowadays the improvements in computational capacity are mainly achieved by increasing the number of processing units and using parallel processing. The Graphics Processing Units (GPUs), which now have thousands of computing cores, are a representative example. GPUs were traditionally used for graphics or image processing, but new releases of GPU programming environments such as CUDA have allowed the use of GPUs for general processing applications. Hence, the use of GPUs is being extended to a wide variety of computation-intensive applications, among which audio processing is included. However, the data transactions between the CPU and the GPU and vice versa have called into question the viability of GPUs for audio applications in which real-time interaction between microphones and loudspeakers is required. This is the case for adaptive filtering applications, where efficient use of parallel computation is not straightforward. For these reasons, up to the beginning of this thesis, very few publications had dealt with the GPU implementation of real-time acoustic applications based on adaptive filtering. Therefore, this thesis aims to demonstrate that GPUs are totally valid tools to carry out audio applications based on adaptive filtering that require high computational resources. To this end, different adaptive applications in the field of audio processing are studied and performed using GPUs. This manuscript also analyzes and solves possible limitations in each GPU-based implementation both from the acoustic point of view and from the computational point of view.
Lorente Giner, J. (2015). Adaptive signal processing for multichannel sound using high performance computing [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/58427
APA, Harvard, Vancouver, ISO, and other styles
50

Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16200/1/Luke_Rankine_Thesis.pdf.

Full text
Abstract:
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The difficulty of detecting clinical seizures, which involves observing the physical manifestations characteristic of newborn seizures, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizures. The high incidence of newborn seizures has resulted in considerable mortality and morbidity rates in the neonate. Accurate and rapid diagnosis of neonatal seizures is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizures. This thesis focuses on the development of algorithms for the automatic detection of newborn EEG seizures using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of nonseizure and seizure EEG, which are not always readily available and are often hard to acquire. This has led to the proposition of realistic models of newborn EEG that can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to account for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we investigate two prominent atomic decomposition techniques, matching pursuit (MP) and basis pursuit, for their possible use in an automatic seizure detection algorithm. In our investigation, it was shown that matching pursuit generally provided the sparsest (i.e. most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels. For this reason, we chose MP as the preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the degree of correlation between signal structures and the decomposition dictionary, was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transitions (i.e. from the nonseizure to the seizure state) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state, based on time-frequency analysis of newborn EEG seizures. It was shown that, using the new coherent time-frequency dictionary and the change in structural complexity, the transition from nonseizure to seizure states can be detected in both synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure. Therefore, the automatic detection of spikes can be fundamental to the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram to detect repetitive spikes.
However, it was demonstrated that this constraint is less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The method used in this thesis for adapting the window length of the adaptive spectrogram was the maximum correlation criterion. It was observed that, at the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Therefore, spike detection directly from the adaptive window optimization method was demonstrated and was also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance with real EEG data. A comparison of the proposed algorithm with four well-documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm performs significantly better than the existing algorithms: it achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, whereas the leading existing algorithm produced a GDR of 62% and an FDR of 16%. In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of sophisticated algorithms based on adaptive time-frequency signal processing techniques to the problem of automatic newborn EEG seizure detection.
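To make the matching-pursuit idea described above more concrete, here is a minimal sketch (not the algorithm from the thesis): a plain matching pursuit over a small Gabor-style dictionary, with the number of atoms needed to capture a fixed fraction of an epoch's energy used as a crude stand-in for the structural-complexity idea. All function names, dictionary parameters and thresholds are assumptions for illustration.

# Illustrative sketch only -- not the algorithm from the thesis. Plain
# matching pursuit over a real Gabor-style dictionary; the per-epoch atom
# count is a crude stand-in for a structural-complexity measure.
import numpy as np

def gabor_dictionary(n, scales=(16, 32, 64), n_freqs=16, n_shifts=16):
    """Return an (n_atoms, n) matrix of unit-norm windowed cosine atoms."""
    t = np.arange(n)
    atoms = []
    for s in scales:
        for f in np.linspace(0.5, 8.0, n_freqs):           # cycles per window
            for u in np.linspace(0, n, n_shifts, endpoint=False):
                g = np.exp(-0.5 * ((t - u) / s) ** 2) * np.cos(2 * np.pi * f * (t - u) / s)
                norm = np.linalg.norm(g)
                if norm > 1e-12:
                    atoms.append(g / norm)
    return np.array(atoms)

def mp_complexity(x, dictionary, energy_fraction=0.95, max_iter=200):
    """Number of matching-pursuit atoms needed to reach the energy fraction."""
    residual = x.astype(float)
    target = (1.0 - energy_fraction) * np.sum(residual ** 2)
    for k in range(1, max_iter + 1):
        c = dictionary @ residual                  # correlations with all atoms
        best = np.argmax(np.abs(c))
        residual = residual - c[best] * dictionary[best]   # remove best atom
        if np.sum(residual ** 2) <= target:
            return k
    return max_iter

# Toy usage: complexity per 1-second epoch of a signal x sampled at 256 Hz.
fs = 256
x = np.random.default_rng(1).standard_normal(fs * 10)
D = gabor_dictionary(fs)
complexity = [mp_complexity(x[i:i + fs], D) for i in range(0, len(x), fs)]

A jump in the per-epoch atom count between consecutive epochs would then suggest a change in signal structure, i.e. a candidate nonseizure-to-seizure transition, which is the spirit (though not the detail) of the detection methodology summarised above.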
APA, Harvard, Vancouver, ISO, and other styles
