Dissertations / Theses on the topic 'Signal processing'


Consult the top 50 dissertations / theses for your research on the topic 'Signal processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Östlund, Nils. "Adaptive signal processing of surface electromyogram signals." Doctoral thesis, Umeå universitet, Strålningsvetenskaper, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Electromyography is the study of muscle function through the electrical signals from the muscles. In surface electromyography the electrical signal is detected on the skin. The signal arises from ion exchanges across the muscle fibres’ membranes. The ion exchange in a motor unit, which is the smallest unit of excitation, produces a waveform that is called an action potential (AP). When a sustained contraction is performed, the motor units involved in the contraction will repeatedly produce APs, which result in AP trains. A surface electromyogram (EMG) signal consists of the superposition of many AP trains generated by a large number of active motor units. The aim of this dissertation was to introduce and evaluate new methods for analysis of surface EMG signals. An important aspect is to consider where to place the electrodes during the recording, so that they do not lie over the zone where the neuromuscular junctions are located. A method that could estimate the location of this zone was presented in one study. The mean frequency of the EMG signal is often used to estimate muscle fatigue. For signals with low signal-to-noise ratio it is important to limit the integration intervals in the mean frequency calculations. Therefore, a method that improved the maximum frequency estimation was introduced and evaluated in comparison with existing methods. The main methodological work in this dissertation was concentrated on finding single motor unit AP trains from EMG signals recorded with several channels. In two studies single motor unit AP trains were enhanced by using filters that maximised the kurtosis of the output. The first of these studies used a spatial filter, and in the second study the technique was expanded to include filtration in time. The introduction of time filtration resulted in improved performance, and when the method was evaluated in comparison with other methods that use spatial and/or temporal filtration, it gave the best performance among them. In the last study of this dissertation this technique was used to compare AP firing rates and conduction velocities between fibromyalgia patients and a control group of healthy subjects. In conclusion, this dissertation has resulted in new methods that improve the analysis of EMG signals, and as a consequence the methods can simplify physiological research projects.
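
The mean-frequency fatigue index mentioned in this abstract is simply the centroid of the EMG power spectrum, optionally computed over a restricted band when the signal-to-noise ratio is low. Below is a minimal sketch of that computation, not Östlund's method; the array name `emg`, the sampling rate `fs` and the band limit `f_max` are hypothetical.

```python
import numpy as np
from scipy.signal import welch

def emg_mean_frequency(emg, fs, f_max=None):
    """Mean (centroid) frequency of the EMG power spectrum.

    f_max optionally limits the integration interval, which helps
    for low-SNR signals, as discussed in the abstract above.
    """
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    if f_max is not None:
        keep = f <= f_max
        f, pxx = f[keep], pxx[keep]
    return np.sum(f * pxx) / np.sum(pxx)

# Hypothetical usage: 10 s of surface EMG sampled at 1 kHz.
fs = 1000.0
rng = np.random.default_rng(0)
emg = rng.standard_normal(int(10 * fs))
print(emg_mean_frequency(emg, fs, f_max=350.0))
```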
2

Östlund, Nils. "Adaptive signal processing of surface electromyogram signals /." Umeå : Department of Radiation Sciences, Umeå University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, Li 1975. "Distributed signal processing." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Eldar, Yonina Chana 1973. "Quantum signal processing." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2002.
Includes bibliographical references (p. 337-346).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Quantum signal processing (QSP) as formulated in this thesis, borrows from the formalism and principles of quantum mechanics and some of its interesting axioms and constraints, leading to a novel paradigm for signal processing with applications in areas ranging from frame theory, quantization and sampling methods to detection, parameter estimation, covariance shaping and multiuser wireless communication systems. The QSP framework is aimed at developing new or modifying existing signal processing algorithms by drawing a parallel between quantum mechanical measurements and signal processing algorithms, and by exploiting the rich mathematical structure of quantum mechanics, but not requiring a physical implementation based on quantum mechanics. This framework provides a unifying conceptual structure for a variety of traditional processing techniques, and a precise mathematical setting for developing generalizations and extensions of algorithms. Emulating the probabilistic nature of quantum mechanics in the QSP framework gives rise to probabilistic and randomized algorithms. As an example we introduce a probabilistic quantizer and derive its statistical properties. Exploiting the concept of generalized quantum measurements we develop frame-theoretical analogues of various quantum-mechanical concepts and results, as well as new classes of frames including oblique frame expansions, that are then applied to the development of a general framework for sampling in arbitrary spaces. Building upon the problem of optimal quantum measurement design, we develop and discuss applications of optimal methods that construct a set of vectors.
(cont.) We demonstrate that, even for problems without inherent inner product constraints, imposing such constraints in combination with least-squares inner product shaping leads to interesting processing techniques that often exhibit improved performance over traditional methods. In particular, we formulate a new viewpoint toward matched filter detection that leads to the notion of minimum mean-squared error covariance shaping. Using this concept we develop an effective linear estimator for the unknown parameters in a linear model, referred to as the covariance shaping least-squares estimator. Applying this estimator to a multiuser wireless setting, we derive an efficient covariance shaping multiuser receiver for suppressing interference in multiuser communication systems.
by Yonina Chana Eldar.
Ph.D.
5

Chan, M. K. "Adaptive signal processing algorithms for non-Gaussian signals." Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bland, Denise. "Alias-free signal processing of nonuniformly sampled signals." Thesis, University of Westminster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hannaske, Roland. "Fast Digitizing and Digital Signal Processing of Detector Signals." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-27888.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A fast-digitizer data acquisition system recently installed at the neutron time-of-flight experiment nELBE, which is located at the superconducting electron accelerator ELBE of Forschungszentrum Dresden-Rossendorf, is tested with two different detector types. Preamplifier signals from a high-purity germanium detector are digitized, stored and finally processed. For a precise determination of the energy of the detected radiation, the moving-window deconvolution algorithm is used to compensate for the ballistic deficit and different shaping algorithms are applied. The energy resolution is determined in an experiment with γ-rays from a 22Na source and is compared to the energy resolution achieved with analogously processed signals. On the other hand, signals from the photomultipliers of barium fluoride and plastic scintillation detectors are digitized. These signals have risetimes of a few nanoseconds only. The moment of interaction of the radiation with the detector is determined by methods of digital signal processing. Therefore, different timing algorithms are implemented and tested with data from an experiment at nELBE. The time resolutions achieved with these algorithms are compared to each other as well as to reference values coming from analog signal processing. In addition to these experiments, some properties of the digitizing hardware are measured and a program for the analysis of stored, digitized data is developed. The analysis of the signals shows that the energy resolution achieved with the 10-bit digitizer system used here is not competitive with a 14-bit peak-sensing ADC, although the ballistic deficit can be fully corrected. However, digital methods give better results in sub-ns timing than analog signal processing.
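
For readers unfamiliar with the moving-window deconvolution (MWD) named in the abstract, the sketch below shows one common textbook formulation for preamplifier pulses with an exponential decay, followed by a moving average that yields the usual trapezoidal shape. It is an illustrative sketch only, not the nELBE implementation; `m`, `l` and `tau` (decay constant in samples) are hypothetical parameters.

```python
import numpy as np

def moving_window_deconvolution(v, m, tau):
    """One common formulation of the MWD step for pulses with an
    exponential decay of time constant `tau` (in samples):

        d[n] = v[n] - v[n-m] + (1/tau) * sum_{k=n-m}^{n-1} v[k]

    `m` is the window length.
    """
    v = np.asarray(v, dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(v)))   # csum[k] = sum of v[:k]
    d = np.zeros_like(v)
    for n in range(m, len(v)):
        window_sum = csum[n] - csum[n - m]          # sum of v[n-m : n]
        d[n] = v[n] - v[n - m] + window_sum / tau
    return d

def trapezoid(v, m, l, tau):
    """MWD followed by an l-sample moving average (l <= m)."""
    d = moving_window_deconvolution(v, m, tau)
    return np.convolve(d, np.ones(l) / l, mode="same")
```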
8

Case, David Robert. "Real-time signal processing of multi-path video signals." Thesis, University of Salford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Haghighi-Mood, Ali. "Analysis of phonocardiographic signals using advanced signal processing techniques." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hannaske, Roland. "Fast Digitizing and Digital Signal Processing of Detector Signals." Forschungszentrum Dresden-Rossendorf, 2009. https://hzdr.qucosa.de/id/qucosa%3A21615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A fast-digitizer data acquisition system recently installed at the neutron time-of-flight experiment nELBE, which is located at the superconducting electron accelerator ELBE of Forschungszentrum Dresden-Rossendorf, is tested with two different detector types. Preamplifier signals from a high-purity germanium detector are digitized, stored and finally processed. For a precise determination of the energy of the detected radiation, the moving-window deconvolution algorithm is used to compensate for the ballistic deficit and different shaping algorithms are applied. The energy resolution is determined in an experiment with γ-rays from a 22Na source and is compared to the energy resolution achieved with analogously processed signals. On the other hand, signals from the photomultipliers of barium fluoride and plastic scintillation detectors are digitized. These signals have risetimes of a few nanoseconds only. The moment of interaction of the radiation with the detector is determined by methods of digital signal processing. Therefore, different timing algorithms are implemented and tested with data from an experiment at nELBE. The time resolutions achieved with these algorithms are compared to each other as well as to reference values coming from analog signal processing. In addition to these experiments, some properties of the digitizing hardware are measured and a program for the analysis of stored, digitized data is developed. The analysis of the signals shows that the energy resolution achieved with the 10-bit digitizer system used here is not competitive with a 14-bit peak-sensing ADC, although the ballistic deficit can be fully corrected. However, digital methods give better results in sub-ns timing than analog signal processing.
11

Ahlström, Christer. "Nonlinear phonocardiographic Signal Processing." Doctoral thesis, Linköpings universitet, Fysiologisk mätteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this thesis work has been to develop signal analysis methods for a computerized cardiac auscultation system, the intelligent stethoscope. In particular, the work focuses on classification and interpretation of features derived from the phonocardiographic (PCG) signal by using advanced signal processing techniques. The PCG signal is traditionally analyzed and characterized by morphological properties in the time domain, by spectral properties in the frequency domain or by nonstationary properties in a joint time-frequency domain. The main contribution of this thesis has been to introduce nonlinear analysis techniques based on dynamical systems theory to extract more information from the PCG signal. Specifically, Takens' delay embedding theorem has been used to reconstruct the underlying system's state space based on the measured PCG signal. This processing step provides a geometrical interpretation of the dynamics of the signal, whose structure can be utilized for both system characterization and classification as well as for signal processing tasks such as detection and prediction. In this thesis, the PCG signal's structure in state space has been exploited in several applications. Change detection based on recurrence time statistics was used in combination with nonlinear prediction to remove obscuring heart sounds from lung sound recordings in healthy test subjects. Sample entropy and mutual information were used to assess the severity of aortic stenosis (AS) as well as mitral insufficiency (MI) in dogs. A large number of, partly nonlinear, features was extracted and used for distinguishing innocent murmurs from murmurs caused by AS or MI in patients with probable valve disease. Finally, novel work related to very accurate localization of the first heart sound by means of ECG-gated ensemble averaging was conducted. In general, the presented nonlinear processing techniques have shown considerably improved results in comparison with other PCG based techniques. In modern health care, auscultation has found its main role in primary care or in home health care, when deciding if special care and more extensive examinations are required. Making a decision based on auscultation is, however, difficult, which is why a simple tool able to screen and assess murmurs would be both time- and cost-saving while relieving many patients from needless anxiety. In the emerging field of telemedicine and home care, an intelligent stethoscope with decision support abilities would be of great value.
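
The state-space reconstruction step this abstract builds on is Takens' delay embedding, which maps a scalar signal to points in R^m. A minimal sketch follows; the embedding dimension `m` and delay `tau` are hypothetical parameters, and the function is illustrative rather than the thesis's processing chain.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Takens delay embedding: each row is
    [x[i], x[i+tau], ..., x[i+(m-1)*tau]]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("signal too short for this (m, tau)")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

# Hypothetical usage on a phonocardiogram segment `pcg`:
# states = delay_embed(pcg, m=5, tau=10)   # one point in R^5 per time step
```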
12

Borga, Magnus. "Learning Multidimensional Signal Processing." Doctoral thesis, Linköpings universitet, Bildbehandling, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The subject of this dissertation is to show how learning can be used for multidimensional signal processing, in particular computer vision. Learning is a wide concept, but it can generally be defined as a system’s change of behaviour in order to improve its performance in some sense. Learning systems can be divided into three classes: supervised learning, reinforcement learning and unsupervised learning. Supervised learning requires a set of training data with correct answers and can be seen as a kind of function approximation. A reinforcement learning system does not require a set of answers. It learns by maximizing a scalar feedback signal indicating the system’s performance. Unsupervised learning can be seen as a way of finding a good representation of the input signals according to a given criterion. In learning and signal processing, the choice of signal representation is a central issue. For high-dimensional signals, dimensionality reduction is often necessary. It is then important not to discard useful information. For this reason, learning methods based on maximizing mutual information are particularly interesting. A properly chosen data representation allows local linear models to be used in learning systems. Such models have the advantage of having a small number of parameters and can for this reason be estimated by using relatively few samples. An interesting method that can be used to estimate local linear models is canonical correlation analysis (CCA). CCA is strongly related to mutual information. The relation between CCA and three other linear methods is discussed. These methods are principal component analysis (PCA), partial least squares (PLS) and multivariate linear regression (MLR). An iterative method for CCA, PCA, PLS and MLR, in particular low-rank versions of these methods, is presented. A novel method for learning filters for multidimensional signal processing using CCA is presented. By presenting signals to the system in pairs, the filters can be adapted to detect certain features and to be invariant to others. A new method for local orientation estimation has been developed using this principle. This method is significantly less sensitive to noise than previously used methods. Finally, a novel stereo algorithm is presented. This algorithm uses CCA and phase analysis to detect the disparity in stereo images. The algorithm adapts filters in each local neighbourhood of the image in a way which maximizes the correlation between the filtered images. The adapted filters are then analysed to find the disparity. This is done by a simple phase analysis of the scalar product of the filters. The algorithm can even handle cases where the images have different scales. The algorithm can also handle depth discontinuities and give multiple depth estimates for semi-transparent images.
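
The canonical correlation analysis at the core of this abstract can be computed in batch form by whitening each data block and taking the SVD of the whitened cross-covariance; the singular values are the canonical correlations. The sketch below shows that standard computation, not Borga's low-rank iterative algorithm, and assumes `X` and `Y` hold samples in rows.

```python
import numpy as np

def cca(X, Y, eps=1e-10):
    """Canonical correlations between data sets X (n x p) and Y (n x q)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)
    # Whiten each block, then SVD of the whitened cross-covariance.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    A = Wx.T @ U          # canonical directions for X
    B = Wy.T @ Vt.T       # canonical directions for Y
    return s, A, B        # s holds the canonical correlations
```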
13

Ghauri, Farzan Naseer. "Hybrid Photonic Signal Processing." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis proposes research on novel hybrid photonic signal processing systems in the areas of optical communications, test and measurement, RF signal processing and extreme environment optical sensors. It will be shown that use of innovative hybrid techniques allows design of photonic signal processing systems with superior performance parameters and enhanced capabilities. These applications can be divided into domains of analog-digital hybrid signal processing applications and free-space--fiber-coupled hybrid optical sensors. The analog-digital hybrid signal processing applications include a high-performance analog-digital hybrid MEMS variable optical attenuator that can simultaneously provide high dynamic range as well as high resolution attenuation controls; an analog-digital hybrid MEMS beam profiler that allows high-power watt-level laser beam profiling and also provides both submicron-level high resolution and wide area profiling coverage; and all-optical transversal RF filters that operate on the principle of broadband optical spectral control using MEMS and/or Acousto-Optic tunable Filters (AOTF) devices which can provide continuous, digital or hybrid signal time delay and weight selection. The hybrid optical sensors presented in the thesis are extreme environment pressure sensors and dual temperature-pressure sensors. The sensors employ hybrid free-space and fiber-coupled techniques for remotely monitoring a system under simultaneous extremely high temperatures and pressures.
Ph.D.
Optics and Photonics
Optics PhD
14

Holt, A. G. J. "Studies in signal processing." Thesis, University of Newcastle Upon Tyne, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Wang, Xudong. "Microwave Photonic Signal Processing." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/10087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A new single-wavelength coherence-free microwave photonic notch filter is presented. The concept is based on a dual Sagnac loop structure that functions with a new principle in which the two loops operate with different free spectral ranges. Experimental results demonstrate a notch filter with a narrow notch width, a flat passband, and high stopband attenuation of over 40 dB. A new multiple-tap microwave photonic notch filter structure that can simultaneously realise a frequency-independent group delay together with a narrow notch filter response and large free spectral range is presented. The concept is based on using multiple wavelengths circulating in a Sagnac loop. Experimental results demonstrate a notch filter with a flat passband, a narrow notch width, a high rejection level of over 40 dB, and an extremely low group delay ripple of less than ±25 ps. A new photonic microwave phase shifting structure that can realise a continuous 0° to 360° phase shift with only a small frequency-dependent amplitude and phase variation over a wide frequency range is presented. It is based on controlling the wavelengths of two phase modulated optical signals into an optical filter with a nonlinear phase response. The new photonic microwave phase shifter has been experimentally verified showing the continuous 0° to 360° phase shifting operation with less than 3 dB amplitude variation over a wide frequency range. A new microwave photonic phase shifter structure is presented. It is based on the conversion of the optical carrier phase shift into an RF signal phase shift via controlling the carrier wavelength of a single-sideband RF modulated optical signal into a fibre Bragg grating. Experimental results demonstrate a continuous 0° to 360° phase shift with low amplitude variation of < 2 dB and low phase deviation of < 5° over a wideband microwave range.
16

Clarke, Rupert Benjamin. "Signal processing for magnetoencephalography." Thesis, University of York, 2010. http://etheses.whiterose.ac.uk/2407/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magnetoencephalography (MEG) is a non-invasive technology for imaging human brain function. Contemporary methods of analysing MEG data include dipole fitting, minimum norm estimation (MNE) and beamforming. These are concerned with localising brain activity, but in isolation they do not provide concrete evidence of interaction among brain regions. Since cognitive neuroscience demands answers to this type of question, a novel signal processing framework has been developed consisting of three stages. The first stage uses conventional MNE to separate a small number of underlying source signals from a large data set. The second stage is a novel time-frequency analysis consisting of a recursive filter bank. Finally, the filtered outputs from different brain regions are compared using a unique partial cross-correlation analysis that accounts for propagation time. The output from this final stage could be used to construct conditional independence graphs depicting the internal networks of the brain. In the second processing stage, a complementary pair of high- and low-pass filters is iteratively applied to a discrete time series. The low-pass output is critically sampled at each stage, which both removes redundant information and effectively scales the filter coefficients in time. The approach is similar to the Fast Wavelet Transform (FWT), but features a more sophisticated resampling step. This, in combination with the filter design procedure, leads to a finer frequency resolution than the FWT. The subsequent correlation analysis is unusual in that a latency estimation procedure is included to establish the probable transmission delays between regions of interest. This test statistic does not follow the same distribution as conventional correlation measures, so an empirical model has been developed to facilitate hypothesis testing.
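
The final stage compares band-limited region signals with a correlation that accounts for propagation time. A minimal sketch of the underlying idea, estimating the lag that maximises the normalised cross-correlation between two equal-length signals, is shown below; it is not the thesis's partial cross-correlation statistic, and `max_lag` is a hypothetical search range in samples.

```python
import numpy as np

def best_lag(a, b, max_lag):
    """Lag (in samples) of b relative to a that maximises the
    normalised cross-correlation, searched over |lag| <= max_lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    best, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.dot(a[lag:], b[:n - lag]) / (n - lag)
        else:
            r = np.dot(a[:n + lag], b[-lag:]) / (n + lag)
        if r > best_r:
            best, best_r = lag, r
    return best, best_r
```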
17

Li, Jian. "Array signal processing for polarized signals and signals with known waveforms /." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487687485808063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

EASTON, ROGER LEE JR. "TWO-DIMENSIONAL SIGNAL PROCESSING IN RADON SPACE (OPTICAL SIGNAL, IMAGE PROCESSING, FOURIER TRANSFORMS)." Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation considers a method for processing two-dimensional (2-D) signals (e.g. imagery) by transformation to a coordinate space where the 2-D operation separates into orthogonal 1-D operations. After processing, the 2-D output is reconstructed by a second coordinate transformation. This approach is based on the Radon transform, which maps a two-dimensional Cartesian representation of a signal into a series of one-dimensional signals by line-integral projection. The mathematical principles of this transformation are well-known as the basis for medical computed tomography. This approach can process signals more rapidly than conventional digital processing and more flexibly and precisely than optical techniques. A new formulation of the Radon transform is introduced that employs a new transformation--the central-slice transform--to symmetrize the operations between the Cartesian and Radon representations of the signal and to aid in analyzing operations that may be susceptible to solution in this manner. It is well-known that 2-D Fourier transforms and convolutions can be performed by 1-D operations after Radon transformation, as proven by the central-slice and filter theorems. Demonstrations of these operations via Radon transforms are described. An optical system has been constructed to derive the line-integral projections of 2-D transmissive or reflective input data. Fourier transforms of the projections are derived by a surface-acoustic-wave chirp Fourier transformer, and filtering is performed in a surface-acoustic-wave convolver. Reconstruction of the processed 2-D signal is performed optically. The system can process 2-D imagery at approximately 5 frames/second, though rates to 30 frames/second are achievable if a faster image rotator is added. Other signal processing operations in Radon space are demonstrated, including Labeyrie stellar speckle interferometry, the Hartley transform, and the joint coordinate-frequency representations such as the Wigner distribution function. Other operations worthy of further study include derivation of the 2-D cepstrum, and several spectrum estimation algorithms.
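
The central-slice (projection-slice) theorem that this dissertation relies on can be checked numerically without interpolation for an axis-aligned projection: summing the image along one axis and taking a 1-D FFT reproduces the corresponding central row of the 2-D FFT. The sketch below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64))

# Line-integral projection along the vertical axis (the 0-degree projection).
projection = image.sum(axis=0)

# Projection-slice theorem: the 1-D FFT of the projection equals the
# central slice (row 0) of the 2-D FFT of the image.
slice_from_projection = np.fft.fft(projection)
central_slice = np.fft.fft2(image)[0, :]

print(np.allclose(slice_from_projection, central_slice))   # True
```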
19

Catelli, Ezio. "Representation functions in Signal Processing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13530/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this work is to present the windowed transform following a mathematical modelling approach. The theoretical part covers the fundamental content, of specific interest for the treatment in the field of signal analysis, of the short-time Fourier transform and the Wigner-Ville distribution. The practical part presents examples worked out on a computer.
20

Gudmundson, Erik. "Signal Processing for Spectroscopic Applications." Doctoral thesis, Uppsala universitet, Avdelningen för systemteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Spectroscopic techniques allow for studies of materials and organisms on the atomic and molecular level. Examples of such techniques are nuclear magnetic resonance (NMR) spectroscopy—one of the principal techniques to obtain physical, chemical, electronic and structural information about molecules—and magnetic resonance imaging (MRI)—an important medical imaging technique for, e.g., visualization of the internal structure of the human body. The less well-known spectroscopic technique of nuclear quadrupole resonance (NQR) is related to NMR and MRI but with the difference that no external magnetic field is needed. NQR has found applications in, e.g., detection of explosives and narcotics. The first part of this thesis is focused on detection and identification of solid and liquid explosives using both NQR and NMR data. Methods allowing for uncertainties in the assumed signal amplitudes are proposed, as well as methods for estimation of model parameters that allow for non-uniform sampling of the data. The second part treats two medical applications. Firstly, new, fast methods for parameter estimation in MRI data are presented. MRI can be used for, e.g., the diagnosis of anomalies in the skin or in the brain. The presented methods allow for a significant decrease in computational complexity without loss in performance. Secondly, the estimation of blood flow velocity using medical ultrasound scanners is addressed. Information about anomalies in the blood flow dynamics is an important tool for the diagnosis of, for example, stenosis and atherosclerosis. The presented methods make no assumption on the sampling schemes, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions.
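
A common building block behind estimators that "allow for non-uniform sampling", as in this abstract, is a least-squares fit of the amplitude and phase of a component with known frequency, which needs no uniform time grid. The sketch below shows that generic idea, not the thesis's estimators; `t`, `y` and `f0` are hypothetical inputs.

```python
import numpy as np

def fit_known_frequency(t, y, f0):
    """Least-squares amplitude/phase of a sinusoid of known frequency f0
    observed at arbitrary (possibly non-uniform) sample times t."""
    A = np.column_stack([np.cos(2 * np.pi * f0 * t),
                         np.sin(2 * np.pi * f0 * t)])
    (c, s), *_ = np.linalg.lstsq(A, y, rcond=None)
    amplitude = np.hypot(c, s)
    phase = np.arctan2(-s, c)      # y ~ amplitude * cos(2*pi*f0*t + phase)
    return amplitude, phase
```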
21

Han, Yichen. "All-optical Microwave Signal Processing." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Microwave signal processing in the optical domain is investigated in this thesis. Two signal processors, an all-optical fractional Hilbert transformer and an all-optical microwave differentiator, are investigated and experimentally demonstrated. Specifically, the photonic-assisted fractional Hilbert transformer with tunable fractional order is implemented based on a temporal pulse shaping system incorporating a phase modulator. By applying a step function to the phase modulator to introduce a phase jump, a real-time fractional Hilbert transformer with a tunable fractional order is achieved. The microwave bandpass differentiator is implemented based on a finite impulse response (FIR) photonic microwave delay-line filter with nonuniformly-spaced taps. A microwave bandpass differentiator based on a six-tap nonuniformly-spaced photonic microwave delay-line filter with all-positive coefficients is designed, simulated, and experimentally demonstrated. The reconfigurability of the microwave bandpass differentiator is experimentally investigated. The employment of the differentiator to perform differentiation of a bandpass microwave signal is also experimentally demonstrated.
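
The RF response of a delay-line filter with nonuniformly spaced taps, as used for the differentiator above, is just the sum of the tap weights at their individual delays, H(f) = sum_k a_k exp(-j 2 pi f tau_k). The sketch below evaluates that response; the particular weights and delays are illustrative placeholders, not the values designed in the thesis.

```python
import numpy as np

def delay_line_response(freqs_hz, weights, delays_s):
    """RF frequency response H(f) = sum_k a_k * exp(-j*2*pi*f*tau_k)
    of a delay-line filter with (possibly nonuniform) tap delays."""
    f = np.asarray(freqs_hz)[:, None]
    tau = np.asarray(delays_s)[None, :]
    return (np.asarray(weights)[None, :] * np.exp(-2j * np.pi * f * tau)).sum(axis=1)

# Hypothetical six all-positive taps with nonuniform spacing (values illustrative only).
weights = np.array([0.05, 0.20, 0.50, 0.50, 0.20, 0.05])
delays = np.array([0.0, 0.9, 2.1, 3.0, 4.2, 5.1]) * 1e-10    # seconds
freqs = np.linspace(0.0, 20e9, 2001)
H = delay_line_response(freqs, weights, delays)
magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)
```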
22

Sabbar, Bayan M. "High resolution array signal processing." Thesis, Loughborough University, 1987. https://dspace.lboro.ac.uk/2134/27193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study is concerned with the processing of signals received by an array of sensor elements which may range from acoustic transducers in a sonar system to microwave horns in a radar system. The main aim of the work is to devise techniques for resolving the signals arriving from closely spaced sources in order to determine the presence and direction of these sources.
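
One standard subspace approach to resolving closely spaced sources with a sensor array, offered here only as background and not necessarily the technique devised in this thesis, is the MUSIC pseudospectrum. The sketch assumes a uniform linear array with half-wavelength spacing and a known number of sources.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d_over_lambda=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    X: complex snapshots, shape (n_sensors, n_snapshots)."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]     # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_sensors) * np.sin(theta))
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)                    # peaks indicate source directions
```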
23

Bhattacharya, Dipankar. "Neural networks for signal processing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21924.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Davison, Alan Stephen. "All-optical signal processing devices." Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Aggoun, Amar. "DPCM video signal/image processing." Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Stewart, K. A. "Inverse problems in signal processing." Thesis, University of Strathclyde, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Boufounos, Petros T. 1977. "Signal processing for DNA sequencing." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/17536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 83-86).
DNA sequencing is the process of determining the sequence of chemical bases in a particular DNA molecule, nature's blueprint of how life works. The advancement of biological science has created a vast demand for sequencing methods, which needs to be addressed by automated equipment. This thesis tries to address one part of that process, known as base calling: it is the conversion of the electrical signal (the electropherogram) collected by the sequencing equipment to a sequence of letters drawn from {A, T, C, G} that corresponds to the sequence in the molecule sequenced. This work formulates the problem as a pattern recognition problem, and observes its striking resemblance to the speech recognition problem. We, therefore, propose combining Hidden Markov Models and Artificial Neural Networks to solve it. In the formulation we derive an algorithm for training both models together. Furthermore, we devise a method to create very accurate training data, requiring minimal hand-labeling. We compare our method with the de facto standard, PHRED, and produce comparable results. Finally, we propose alternative HMM topologies that have the potential to significantly improve the performance of the method.
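
The decoding backbone of any HMM-based base caller of this kind is the Viterbi algorithm. A minimal log-domain sketch follows; it is generic, with the per-frame emission log-probabilities assumed to come from a neural network or any other model, and is not the joint HMM/ANN training procedure derived in the thesis.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely state path of an HMM.
    log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
    log_B: (T, S) per-frame emission log-probs (e.g. from a neural net)."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    backpointer = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A        # scores[i, j]: best path ending i -> j
        backpointer[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = backpointer[t, path[t]]
    return path
```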
by Petros T. Boufounos.
M.Eng.and S.B.
28

Vasconcellos, Brett W. (Brett William) 1977. "Parallel signal-processing for everyone." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 65-67).
We designed, implemented, and evaluated a signal-processing environment that runs on a general-purpose multiprocessor system, allowing easy prototyping of new algorithms and integration with applications. The environment allows the composition of modules implementing individual signal-processing algorithms into a functional application, automatically optimizing their performance. We decompose the problem into four independent components: signal processing, data management, scheduling, and control. This simplifies the programming interface and facilitates transparent parallel signal processing. For the tested applications, our system both runs efficiently on single-processor systems and achieves near-linear speedups on symmetric-multiprocessor (SMP) systems.
by Brett W. Vasconcellos.
M.Eng.
29

Baran, Thomas A. (Thomas Anthony). "Conservation in signal processing systems." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/74991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 205-209).
Conservation principles have played a key role in the development and analysis of many existing engineering systems and algorithms. In electrical network theory for example, many of the useful theorems regarding the stability, robustness, and variational properties of circuits can be derived in terms of Tellegen's theorem, which states that a wide range of quantities, including power, are conserved. Conservation principles also lay the groundwork for a number of results related to control theory, algorithms for optimization, and efficient filter implementations, suggesting potential opportunity in developing a cohesive signal processing framework within which to view these principles. This thesis makes progress toward that goal, providing a unified treatment of a class of conservation principles that occur in signal processing systems. The main contributions in the thesis can be broadly categorized as pertaining to a mathematical formulation of a class of conservation principles, the synthesis and identification of these principles in signal processing systems, a variational interpretation of these principles, and the use of these principles in designing and gaining insight into various algorithms. In illustrating the use of the framework, examples related to linear and nonlinear signal-flow graph analysis, robust filter architectures, and algorithms for distributed control are provided.
by Thomas A. Baran.
Ph.D.
30

Jahanchahi, Cyrus. "Quaternion valued adaptive signal processing." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/24165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent developments in sensor technology, human centered computing and robotics have brought to light new classes of multidimensional data which are naturally represented as three- or four-dimensional vector-valued processes. Such signals are readily modeled as real vectors in R3 and R4, however, it has become apparent that there are advantages in processing such data in division algebras - the quaternion domain. The progress in the statistics of quaternion variables, particularly augmented statistics and widely linear modeling, has opened up a new front of research in vector sensor modeling, however, there are several key problems that need to be addressed in order to exploit the full power of quaternions in statistical signal processing. The principal problem lies in the lack of a mathematical framework, such as the CR-calculus in the complex domain, for the differentiation of non-holomorphic functions. Since most functions (including typical cost functions) in the quaternion domain are non-holomorphic, as defined by the Cauchy-Riemann-Fueter (CRF) condition, this presents a severe obstacle to solving optimisation problems and developing adaptive filtering algorithms in the quaternion domain. To this end, we develop the HR-calculus, an extension of the CR-calculus, allowing the differentiation of non-holomorphic functions. This is followed by the introduction of the I-gradient, enabling generic extensions of complex valued algorithms to be derived. Using this unified framework we introduce the quaternion least mean square (QLMS), quaternion recursive least squares (QRLS), quaternion affine projection algorithm (QAPA) and quaternion Kalman filter. These estimators are made optimal for the processing of noncircular data, by proposing widely linear extensions of their standard versions. Convergence and steady state properties of these adaptive estimators are analysed and validated experimentally via simulations on both synthetic and real world signals.
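
For orientation, the recursion that QLMS generalises to the quaternion domain is the standard real-valued least mean square filter sketched below. This is the ordinary LMS only, not the quaternion algorithm, which additionally relies on the HR-calculus described in the abstract; the step size `mu` and filter `order` are hypothetical parameters.

```python
import numpy as np

def lms(x, d, order, mu):
    """Standard real-valued LMS: adapt weights w so that w @ x_n tracks d[n]."""
    w = np.zeros(order)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        x_n = x[n - order:n][::-1]     # most recent samples first
        y[n] = w @ x_n
        e[n] = d[n] - y[n]
        w = w + mu * e[n] * x_n        # gradient-descent weight update
    return y, e, w
```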
31

Conocimiento, Dirección de Gestión del. "IEEE Transactions on Signal Processing." IEEE, 2004. http://hdl.handle.net/10757/655314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Soykan, Orhan. "Signal processing for sensor arrays." Case Western Reserve University School of Graduate Studies / OhioLINK, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=case1054833033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Dalan. "Parallel architectures for signal processing." Thesis, University of Aberdeen, 1991. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU034219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents the development of parallel architectures and algorithms for signal processing techniques, particularly for application to ultrasonic surface texture measurement. The background and context of this project is the real need to perform high speed signal processing on ultrasonic echoes used to extract information on texture properties of surfaces. Earlier investigation provided a solution by the nonlinear Maximum Entropy Method (MEM) which needs to be implemented at high speed and high performance. A review of parallel architectures for signal processing and digital signal processors is given. The aim is to introduce ways in which signal processing algorithms can be implemented at high speed. Both hardware and software have been developed in the project, and the signal processing system and parallel implementations of the algorithms are presented in detail. The signal processing system employs a parallel architecture using transputers. A feature of the design is that a floating-point digital signal processor is incorporated into a transputer array so that the performance of the system can be significantly enhanced. The design, testing and construction of the hardware system are discussed in detail. An investigation of some parallel DSP algorithms, including matrix multiplication, the Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT), and their implementations based on the transputer array are discussed in order to choose an appropriate FFT implementation for our application. Several implementations of the deconvolution algorithms, including the Wiener-Hopf filter, the Maximum Entropy Method (MEM) and Projection Onto Convex Sets (POCS) are developed, which can benefit from the use of concurrency. A development of the MEM implementation based on the transputer array is to use the DSP as a subsystem for FFT calculations; this dual-system environment provides a significant resource to be used to process ultrasonic echoes to determine surface roughness. Finally, the performance of the Projection Onto Convex Sets (POCS) algorithm in the field of ultrasonic surface determination and comparison with the Wiener-Hopf filter and the MEM are presented using simulated and real data. It is concluded that the parallel architecture provides a valuable contribution to high speed implementations of signal processing techniques.
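
Of the deconvolution algorithms compared in this abstract, the Wiener filter has a simple closed form in the frequency domain. The sketch below shows that textbook form only (not the parallel transputer implementation), assuming a known pulse `h` and a scalar noise-to-signal ratio `nsr` standing in for the full spectral model.

```python
import numpy as np

def wiener_deconvolve(y, h, nsr=0.01):
    """Frequency-domain Wiener deconvolution of y = h * x + noise.
    `nsr` is a scalar noise-to-signal power ratio (a simplifying assumption)."""
    n = len(y)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y, n)
    X_hat = Y * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.fft.irfft(X_hat, n)
```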
34

Liu, Bo. "Integrated Microwave Photonics Signal Processing." Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/21633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Microwave photonics (MWP) has emerged in recent years as a promising technology for processing high-frequency microwave signals. It offers several advantages over electronic signal processing, including high speed, low loss, wide frequency range, light weight and immunity to electromagnetic interference. Because of these advantages, integrated MWP circuits can be used in many applications such as filters, phase shifters and time delay devices. Moreover, with the development of complementary metal-oxide semiconductor technology, MWP circuits can be integrated on a compact silicon-on-insulator platform. In this thesis, a new tunable single passband MWP filter based on on-chip silicon photonics technology and integrated MWP technology is designed. The new method greatly improves the selectivity of the filter by employing a dual-parallel Mach–Zehnder modulator (DPMZM). It simultaneously achieves the generation of the phase-modulated signal and compensation for the undesired phase. The results show that the designed single passband MWP filter, based on a DPMZM and an SOI single ring resonator, has a narrowband radio frequency response, where an average 10-dB bandwidth of 5.12 GHz is achieved. Another challenge for photonic circuit integration is coupling light from optical fibers into photonic chips because of the spot size difference between the fiber optical mode and the waveguide mode. In this thesis, a simple solution is designed to achieve a horizontal integration of a fiber-chip spot size converting edge coupler, which only requires an inverse taper and a linear mode expander to couple light from a fiber and laterally expand the mode. Optimizing the inverse taper parameters yields a 90% coupling efficiency from fiber to coupler output end for both the transverse electric and the transverse magnetic polarizations, which can be used for horizontal integration with a 50:50 polarization splitter.
35

CARINI, ALBERTO. "ADAPTIVE AND NONLINEAR SIGNAL PROCESSING." Doctoral thesis, Università degli studi di Trieste, 1997. http://thesis2.sba.units.it/store/handle/item/13000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Testoni, Nicola <1980&gt. "Adaptive multiscale biological signal processing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/1122/1/Tesi_Testoni_Nicola.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Biological processes are very complex mechanisms, most of them being accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly retained as one of the propelling factors in the advancements in medicine and biosciences recorded in the recent past. It is a fact that the instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how it is possible to attenuate, and ideally to remove, these effects, with particular attention toward ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity will be examined and compared to ones in the literature tackling the same problems; results will be drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective field of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is not dependent only on designer choices, but driven by input signal characteristics too. Performance comparisons following the state of the art concerning image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoring algorithms: axial resolutions up to 5 times better than those of algorithms in the literature are possible. Concerning extracellular recordings, the results of the proposed denoising technique compared to other signal processing algorithms pointed out an improvement of the state of the art of almost 4 dB.
37

Testoni, Nicola <1980&gt. "Adaptive multiscale biological signal processing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/1122/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Biological processes are very complex mechanisms, most of them being accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly retained as one of the propelling factors in the advancements in medicine and biosciences recorded in the recent past. It is a fact that the instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how it is possible to attenuate, and ideally to remove, these effects, with particular attention toward ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity will be examined and compared to ones in the literature tackling the same problems; results will be drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective field of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is not dependent only on designer choices, but driven by input signal characteristics too. Performance comparisons following the state of the art concerning image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoring algorithms: axial resolutions up to 5 times better than those of algorithms in the literature are possible. Concerning extracellular recordings, the results of the proposed denoising technique compared to other signal processing algorithms pointed out an improvement of the state of the art of almost 4 dB.
38

Neuman, Bartosz P. "Signal processing in diffusion MRI : high quality signal reconstruction." Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/27691/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magnetic Resonance Imaging (MRI) is a medical imaging technique which is especially sensitive to different soft tissues, producing a good contrast between them. It allows for in vivo visualisation of internal structures in detail and became an indispensable tool in diagnosing and monitoring brain-related diseases and pathologies. Amongst others, MRI can be used to measure random incoherent motion of water molecules, which in turn allows structural information to be inferred. One of the main challenges in processing and analysing four dimensional diffusion MRI images is low signal quality. To improve the signal quality, either denoising algorithms or angular and spatial regularisation are utilised. A regularisation method based on the Laplace–Beltrami smoothing operator was successfully applied to the diffusion signal. In this thesis, a new regularisation strength selection scheme for diffusion signal regularisation is introduced. A mathematical model of the diffusion signal is used in Monte Carlo simulations, and a regularisation strength that optimally reconstructs the diffusion signal is sought. The regularisation values found in this research show a different trend than the currently used L-curve analysis, and further improve reconstruction accuracy. Additionally, as an alternative to regularisation methods, a backward elimination regression for spherical harmonics is proposed. Instead of using the regularisation term as a low-pass filter, a statistical t-test classifies regression terms as reliable or corrupted. Four algorithms that use this information are further introduced. As a result, a selective filtering is constructed that retains the angular sharpness of the signal, while at the same time reducing the corruptive effect of measurement noise. Finally, a statistical approach for estimating the diffusion signal is investigated. Based on the physical properties of water diffusion, a prior knowledge for the diffusion signal is constructed. The spherical harmonic transform is then formulated as a Bayesian regression problem. The diffusion signal reconstructed with the addition of such prior knowledge is accurate, noise resilient, and of high quality.
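
The Laplace–Beltrami regularisation this abstract builds on penalises spherical-harmonic coefficients in proportion to l^2(l+1)^2 and has a closed-form regularised least-squares solution. The sketch below shows that generic fit, assuming the design matrix `B` (samples by basis functions) and the harmonic order `l_per_coeff` of each coefficient are already given; it is not the thesis's strength-selection or backward-elimination scheme.

```python
import numpy as np

def sh_fit_laplace_beltrami(B, s, l_per_coeff, lam):
    """Regularised least-squares spherical-harmonic fit
        c = (B^T B + lam * L)^(-1) B^T s,
    where L = diag(l^2 (l+1)^2) penalises high-order (fast-varying) terms."""
    l = np.asarray(l_per_coeff, dtype=float)
    L = np.diag(l ** 2 * (l + 1) ** 2)
    return np.linalg.solve(B.T @ B + lam * L, B.T @ s)
```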
39

Kwan, Ching Chung. "Digital signal processing techniques for on-board processing satellites." Thesis, University of Surrey, 1990. http://epubs.surrey.ac.uk/754893/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In on-board processing satellite systems in which FDMA/SCPC access schemes are employed, transmultiplexers are required for the frequency demultiplexing of the SCPC signals. Digital techniques for the implementation of the transmultiplexer for such applications were examined in this project. The signal processing in the transmultiplexer operations involved many parameters which could be optimized in order to reduce the hardware complexity whilst satisfying the level of performance required of the system. An approach for the assessment of the relationship between the various parameters and the system performance was devised, which allowed the hardware requirements of practical system specifications to be estimated. For systems involving signals of different bandwidths a more flexible implementation of the transmultiplexer is required, and two computationally efficient methods, the DFT convolution and the analysis/synthesis filter bank, were investigated. These methods gave greater flexibility to the input frequency plan of the transmultiplexer, at the expense of increased computational requirements. Filters were then designed to exploit specific properties of the flexible transmultiplexer methods, resulting in considerable improvement in their efficiencies. Hardware implementation of the flexible transmultiplexer was considered, and an efficient multi-processor architecture in combination with parallel processing software algorithms for the signal processing operations was designed. Finally, an experimental model of the payload for a land-mobile satellite system proposal, T-SAT, was constructed using general-purpose digital signal processors and the merits of the on-board processing architecture were demonstrated.
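
As background to the analysis/synthesis filter bank mentioned above, the crudest digital channelizer is a critically sampled DFT filter bank, which can be obtained from an STFT with a rectangular window and no overlap. The sketch below shows only that simplified form; a practical transmultiplexer would use a longer polyphase prototype filter, as the parameter trade-offs in this abstract imply.

```python
import numpy as np
from scipy.signal import stft

def dft_filter_bank(x, fs, n_channels):
    """Crude critically sampled DFT filter bank: each row of Z is one
    frequency channel, decimated by n_channels."""
    f, t, Z = stft(x, fs=fs, window="boxcar", nperseg=n_channels,
                   noverlap=0, boundary=None, padded=False,
                   return_onesided=False)
    return f, Z
```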
40

Ghaderi, Foad. "Signal processing techniques for extracting signals with periodic structure : applications to biomedical signals." Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/55183/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this dissertation some advanced methods for extracting sources from single and multichannel data are developed and utilized in biomedical applications. It is assumed that the sources of interest have periodic structure and therefore, the periodicity is exploited in various forms. The proposed methods can even be used for the cases where the signals have hidden periodicities, i.e., the periodic behaviour is not detectable from their time representation or even from the Fourier transform of the signal. For the case of single channel recordings a method based on singular spectrum analysis (SSA) of the signal is proposed. The proposed method is utilized in localizing heart sounds in respiratory signals, which is an essential pre-processing step in most of the heart sound cancellation methods. Artificially mixed and real respiratory signals are used for evaluating the method. It is shown that the performance of the proposed method is superior to those of the other methods in terms of false detection. Moreover, the execution time is significantly lower than that of the method ranked second in performance. For multichannel data, the problem is tackled using two approaches. First, it is assumed that the sources are periodic and the statistical characteristics of periodic sources are exploited in developing a method to effectively choose the appropriate delays in which the diagonalization takes place. In the second approach it is assumed that the sources of interest are cyclostationary. Necessary and sufficient conditions for extractability of the sources are mathematically proved and the extraction algorithms are proposed. The ballistocardiogram (BCG) artifact is considered as the sum of a number of independent cyclostationary components having the same cycle frequency. The proposed method, called cyclostationary source extraction (CSE), is able to extract these components without much destructive effect on the background electroencephalogram (EEG).
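
The single-channel method above is built on singular spectrum analysis, whose basic decomposition/reconstruction step is sketched below: embed the signal into a Hankel trajectory matrix, keep selected singular components, and reconstruct by anti-diagonal averaging. This is the generic SSA step only, not the thesis's heart-sound detection rule; `window` and `components` are hypothetical choices.

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Basic singular spectrum analysis: embed x into a Hankel trajectory
    matrix, keep the listed singular components, and reconstruct by
    anti-diagonal (Hankel) averaging."""
    x = np.asarray(x, dtype=float)
    n, k = len(x), len(x) - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return recon / counts
```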
41

Mabrouk, Mohamed Hussein Emam Mabrouk. "Signal Processing of UWB Radar Signals for Human Detection Behind Walls." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/31945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Non-contact life detection is a significant component of both civilian and military rescue applications. As a consequence, this interest has resulted in a very active area of research. The primary goal of this research is reliable detection of a human breathing signal. Additional goals of this research are to carry out detection under realistic conditions, to distinguish between two targets, to determine human breathing rate and estimate the posture. Range gating and Singular Value Decomposition (SVD) have been used to remove clutter in order to detect human breathing under realistic conditions. However, information about the target range, or about which principal component contains the target information, may be unknown. DFT and Short Time Fourier Transform (STFT) algorithms have been used to detect the human breathing and discriminate between two targets. However, the algorithms result in many false alarms because they detect breathing when no target exists. The unsatisfactory performance of the DFT-based estimators in human breathing rate estimation is due to the fact that the second harmonic of the breathing signal has a higher magnitude than the first harmonic. Human posture estimation has been performed by measuring the distance of the chest displacements from the ground. This requires multiple UWB receivers and a more complex system. In this thesis, monostatic UWB radar is used. Initially, the SVD method was combined with the skewness test to detect targets, discriminate between two targets, and reduce false alarms. Then, a novel human breathing rate estimation algorithm was proposed using the zero-crossing method. Subsequently, a novel method was proposed to distinguish between human postures based on the ratios between the magnitudes of different human breathing frequency harmonics. It was noted that the ratios depend on the abdomen displacements and higher harmonic ratios were observed when the human target was sitting or standing. The theoretical analysis shows that the distributions of the skewness values of the correlator output for the target and the clutter signals in a single range-bin do not overlap. The experimental results on human breathing detection, breathing rate, and human posture estimation show that the proposed methods improve performance in human breathing detection and rate estimation.
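
The zero-crossing idea behind the breathing-rate estimator is simple to state: each breathing cycle produces two zero crossings of the (detrended) respiration waveform, so the rate follows from counting crossings over the observation time. A minimal sketch, assuming a bandpassed chest-displacement signal `resp` at sampling rate `fs` (hypothetical names), is shown below; it is not the full radar processing chain of the thesis.

```python
import numpy as np

def breathing_rate_bpm(resp, fs):
    """Breathing rate from zero crossings: one breathing cycle contains
    two zero crossings, so rate = crossings / (2 * duration)."""
    resp = np.asarray(resp, dtype=float)
    resp = resp - resp.mean()                     # remove any DC offset
    crossings = np.sum(np.signbit(resp[:-1]) != np.signbit(resp[1:]))
    duration_s = len(resp) / fs
    return 60.0 * crossings / (2.0 * duration_s)
```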
42

Krishnan, Sridhar. "Adaptive signal processing techniques for analysis of knee joint vibroarthrographic signals." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0016/NQ47897.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Andrikogiannopoulos, Nikolas I. "RF phase modulation of optical signals and optical/electrical signal processing." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/42930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 125-127).
Analog RF phase modulation of optical signals has been a topic of interest for many years, with most work focusing on Intensity Modulation Direct Detection (IMDD). The virtues of coherent detection combined with the advantages of frequency modulation, however, have not been explored thoroughly. By employing Frequency Modulation Coherent Detection (FMCD), the wide transmission bandwidth of optical fiber can be traded for higher signal-to-noise performance. In this thesis, we derive the FM gain over AM modulation, i.e., the maximum achievable signal-to-noise ratio (obtained by spreading the signal's spectrum) for a given carrier-to-noise ratio. We then employ FMCD in a remote-antenna scheme in which optical components and subsystems perform signal processing such as nulling of interfering signals. The performance of optical processing with different modulation schemes is compared, and some important conclusions are reported relating to the use of conventional FMCD, FMCD with an optical discriminator (FMCD O-D), and IMDD. Specifically, conventional FMCD is shown to be superior, while FMCD O-D is shown to be inferior (offering the same performance as IMDD) because of the use of an optical discriminator. Finally, the remote antenna scheme is generalized to N antennas and N users.
by Nikolas I. Andrikogiannopoulos.
S.M.
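The bandwidth-for-SNR exchange mentioned in the abstract follows the classical FM analysis. As a textbook illustration only (tone modulation above threshold, not the thesis's own derivation), the post-detection SNR grows with the modulation index beta relative to the carrier-to-noise ratio measured in the Carson bandwidth:

```latex
% Classical tone-modulation FM detection gain (above threshold), with
% \beta = \Delta f / W and CNR measured in the Carson bandwidth
% B_T = 2(\beta + 1)W; shown only as a standard illustration of the
% bandwidth-for-SNR trade discussed in the abstract.
\mathrm{SNR}_{\mathrm{out}} \approx 3\,\beta^{2}(\beta + 1)\,\mathrm{CNR},
\qquad \mathrm{CNR} = \frac{A_c^{2}}{2 N_0 B_T}
```

Increasing beta widens the transmission bandwidth B_T but multiplies the detection gain, which is the exchange that FMCD exploits.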
44

Kbayer, Nabil. "Advanced Signal Processing Methods for GNSS Positioning with NLOS/Multipath Signals." Thesis, Toulouse, ISAE, 2018. http://www.theses.fr/2018ESAE0017/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent advances in satellite navigation (GNSS) have led to a proliferation of geolocation applications in urban environments. In such environments, GNSS applications suffer severe degradation due to the reception of satellite signals along non-line-of-sight (NLOS) paths and as multipath (MP). This thesis proposes an original methodology for the constructive use of degraded MP/NLOS signals, by applying advanced signal processing techniques or with the assistance of a 3D simulator of GNSS signal propagation. First, we established the maximum achievable level of positioning accuracy of a stand-alone GNSS system in the presence of MP/NLOS conditions by studying lower bounds on estimation in the presence of MP/NLOS signals. To further improve this level of accuracy, we proposed compensating the NLOS errors by using a 3D GNSS signal simulator to predict the MP/NLOS biases and integrate them as observations in the position estimation, either by correcting the degraded measurements or by selecting a position from a grid of candidate positions. Applying the proposed approaches in a deep urban environment shows a clear improvement in positioning performance under these conditions.
Recent trends in Global Navigation Satellite System (GNSS) applications in urban environments have led to a proliferation of studies in this field that seek to mitigate the adverse effect of non-line-of-sight (NLOS) reception. For such harsh urban settings, this dissertation proposes an original methodology for the constructive use of degraded MP/NLOS signals, instead of their elimination, by applying advanced signal processing techniques or by using additional information from a 3D GNSS simulator. First, we studied different signal processing frameworks, namely robust estimation and regularized estimation, to tackle this GNSS problem without using external information. Then, we established the maximum achievable level (lower bounds) of GNSS stand-alone positioning accuracy in the presence of MP/NLOS conditions. To further enhance this accuracy level, we proposed compensating for the MP/NLOS errors using a 3D GNSS signal propagation simulator to predict the biases and integrate them as observations in the estimation method, either by correcting degraded measurements or by scoring an array of candidate positions. In addition, new metrics on the maximum acceptable error in the MP/NLOS bias predictions obtained from GNSS simulations have been established. Experimental results using real GNSS data in a deep urban environment show that using this additional information provides a good positioning performance enhancement, despite the intensive computational load of 3D GNSS simulation.
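As a hedged sketch of the "scoring an array of candidate positions" idea described above, the snippet below corrects measured pseudoranges with simulator-predicted MP/NLOS biases at each candidate position and scores the candidate by its residual norm. The interface (a `predicted_bias` array supplied per candidate and satellite, with a common clock term absorbed by de-meaning) is an illustrative assumption, not the thesis's actual implementation.

```python
import numpy as np

def score_candidates(pseudoranges, sat_positions, candidates, predicted_bias):
    """Score candidate receiver positions against measured pseudoranges,
    after removing the MP/NLOS bias predicted by a 3D simulator.

    pseudoranges   : (M,) measured pseudoranges [m]
    sat_positions  : (M, 3) satellite positions [m]
    candidates     : (K, 3) candidate receiver positions [m]
    predicted_bias : (K, M) simulator-predicted MP/NLOS bias per
                     candidate/satellite pair (hypothetical input)
    """
    scores = np.empty(len(candidates))
    for k, p in enumerate(candidates):
        geometric = np.linalg.norm(sat_positions - p, axis=1)
        residual = pseudoranges - geometric - predicted_bias[k]
        residual -= residual.mean()   # absorb the common receiver-clock term
        scores[k] = np.sum(residual ** 2)
    return scores

# The candidate with the smallest score is taken as the position estimate:
# best = candidates[np.argmin(score_candidates(pr, sats, grid, bias))]
```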
45

Figueroa, Toro Miguel E. "Adaptive signal processing and correlational learning in mixed-signal VLSI /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Limin. "The ECG signal processing by ADSP-21062 digital signal processor." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=840.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains vi, 110 p. : ill. (some col.) Includes abstract. Includes bibliographical references (p. 66-68).
47

Zhao, Wentao. "Genomic applications of statistical signal processing." [College Station, Tex.]: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Karsikas, M. (Mari). "New methods for vectorcardiographic signal processing." Doctoral thesis, Oulun yliopisto, 2011. http://urn.fi/urn:isbn:9789514296086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Vectorcardiography (VCG) determines the direction and magnitude of the heart's electrical forces. Interpretation of digital three-dimensional vectorcardiography in clinical applications requires robust methods and novel approaches for calculating vectorcardiographic features. This dissertation aimed to develop new methods for vectorcardiographic signal processing. The robustness of selected pre-processing and feature extraction algorithms was improved, novel methods for detecting injured myocardial tissue from the electrocardiogram (ECG) were devised, and the dynamic behaviour of vectorcardiographic features was determined. The main results of the dissertation are: (1) The digitizing process and proper filtering did not produce significant distortions, from a diagnostic viewpoint, in dipolar Singular Value Decomposition (SVD)-based ECG parameters, whereas non-dipolar parameters were very sensitive to the pre-processing operations. (2) A novel method for estimating the severity of myocardial infarction (MI) was developed by combining an action potential based computer model with 12-lead ECG patient data. Using the method it is possible to calculate an approximate estimate of the maximum troponin value and therefore the severity of the MI. In addition, the size and location of the myocardial infarction were found to affect diagnostically significant Total-cosine-R-to-T (TCRT) parameter changes, both in the simulations and in the patient study. (3) Furthermore, the results showed that carefully targeted improvements to the basic TCRT algorithm can clearly decrease the number of algorithm-based failures and therefore improve the diagnostic value of TCRT in different patient data. (4) Finally, a method for calculating beat-to-beat vectorcardiographic features during exercise was developed. It was observed that breathing affects the beat-to-beat variability of all the QRS/T angle measures, and the trend of the TCRT parameter during exercise was found to be negative. Further, the results of the thesis clearly showed that the QRS/T angle measures exhibit a strong correlation with heart rate in individual subjects. The results of the dissertation highlight the importance of robust algorithms in VCG analysis and should be taken into account in further studies so that vectorcardiography can be utilized more effectively in clinical applications.
Vectorcardiography (VCG) describes the direction and magnitude of the heart's electrical activity at different phases of the heartbeat. Successful interpretation of the vectorcardiogram in clinical applications requires reliable methods and new approaches for computing vectorcardiographic features. The aim of this dissertation was to develop new vectorcardiographic signal processing methods. The work improved the reliability of certain electrocardiogram (ECG) pre-processing steps and feature extraction algorithms, developed new methods for identifying injured myocardial tissue from the ECG signal, and studied the dynamic behaviour of vectorcardiographic features. The main results of the dissertation can be summarised as follows: (1) The digitisation process of paper ECG recordings and proper filtering of the ECG signal do not introduce diagnostically significant distortions into the so-called dipolar ECG parameters based on singular value decomposition (SVD); however, the more sensitive non-dipolar parameters are sensitive to these pre-processing steps. (2) A new method was developed for estimating the severity of myocardial infarction from the 12-lead ECG signal using an action potential based computer model. Using the method, it is possible to compute a rough estimate of the clinically used maximum troponin value, which reflects the amount of damage in the myocardial tissue. In addition, the size and location of the myocardial infarction were found to affect the diagnostically significant Total-cosine-R-to-T (TCRT) parameter, which describes the relationship between de- and repolarisation. (3) The results also showed that a few small improvements to the original TCRT algorithm can significantly reduce distortions arising in the computation of the parameter and thus improve the diagnostic value of TCRT in different patient datasets. (4) Fourth, a method was developed for computing vectorcardiographic features dynamically, beat by beat. Breathing was observed to cause significant beat-to-beat variation during exercise testing, and the TCRT parameter, as well as the other measures describing the relationship between de- and repolarisation, showed a clear correlation with heart rate. The results of the dissertation emphasise the importance of reliable algorithms in vectorcardiographic analysis, and taking them into account in further studies will help vectorcardiography to be exploited more effectively in clinical applications.
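As a minimal illustration of the TCRT parameter discussed in the abstracts above, the sketch below computes the mean cosine between the dominant QRS-loop vectors and the maximum T-wave vector in a common three-dimensional (e.g. SVD-derived) space. The rule used to select the "dominant" QRS samples is an assumption for the sketch, not the thesis's exact definition.

```python
import numpy as np

def tcrt(qrs_loop, t_loop):
    """Total-cosine-R-to-T (TCRT): mean cosine of the angles between the
    dominant QRS-loop vectors and the maximum T-wave vector, both given
    as (n_samples, 3) trajectories in the same 3-D space."""
    # Unit vector at the point of maximum T-wave magnitude
    t_mag = np.linalg.norm(t_loop, axis=1)
    e_t = t_loop[np.argmax(t_mag)]
    e_t = e_t / np.linalg.norm(e_t)

    # Dominant QRS samples: here, those above half the peak vector
    # magnitude (an illustrative selection rule)
    qrs_mag = np.linalg.norm(qrs_loop, axis=1)
    dominant = qrs_loop[qrs_mag > 0.5 * qrs_mag.max()]

    cosines = (dominant @ e_t) / np.linalg.norm(dominant, axis=1)
    return cosines.mean()
```

A large positive TCRT indicates similar QRS and T-wave orientations, while negative values indicate a wide QRS/T angle, which is the behaviour the exercise-test results above describe.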
49

Yung, Sheung Kai. "Signal processing in local sensor validation." Thesis, University of Oxford, 1992. https://ora.ox.ac.uk/objects/uuid:974f513e-a556-4503-bae8-91460f10d3e3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Sensor integrity plays a crucial role in automatic control and system monitoring, both in achieving performance and in guaranteeing safety. Conventional approaches to sensor failure detection demand precise process models and abundant central computing power. This thesis describes the development and evaluation of a novel local sensor validation scheme which is independent of the underlying process and is applicable to a wide variety of sensors. A signal-based in-situ sensor validation scheme is proposed. Typical sensor failures are classified according to their signal patterns. To avoid ambiguity between genuine failures and legitimate measurand variations, a pair of decomposition filters is designed to partition the sensor output, and attention is focused on characteristics beyond the measurement signal bandwidth, which is the only essential process-related variable required. In addition, the application of decimating filters is explored, both as a relief to the analog anti-aliasing filter and as an enhancement to signal discretization. An expression is derived relating the oversampling rate to the attainable improvement in signal resolution. Based on a period of failure-free observation, a whitening filter is identified by modelling the decomposed sensor signal as a stochastic time series. Significant progress is achieved by a deliberate injection of band-limited random noise to ensure signal stationarity and to avoid inadmissible leakage of the measurement signal into the innovation sequence. The adopted failure detection strategy is primarily innovation-based: pertinent sensor signal information is extracted recursively by a collection of efficient and robust signal processing algorithms, its validity is continuously monitored by statistical tests, and a series of precursory failure alarms are formulated on those tests. Any aberration detected is then diagnosed under the supervision of a simple rule-based system. The practicality, efficacy and flexibility of the proposed scheme are successfully demonstrated by a bench-top thermocouple experiment and extensive synthetic simulations.
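The whitening-filter and innovation-monitoring idea described above can be sketched minimally as follows, assuming a simple least-squares AR fit on a failure-free segment and a variance-ratio alarm; the model order and alarm factor are illustrative choices, not the scheme's actual statistical tests.

```python
import numpy as np

def fit_ar(x, order=4):
    """Least-squares AR(order) fit to a failure-free training segment,
    so that x[n] ~= a[0]*x[n-1] + ... + a[order-1]*x[n-order]."""
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def innovations(x, a):
    """Whitening-filter output: one-step prediction errors under the AR model."""
    order = len(a)
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    return x[order:] - X @ a

def variance_alarm(train, test, a, factor=3.0):
    """Raise an alarm when the test-segment innovation variance departs from
    the training-segment variance by more than `factor` (illustrative test)."""
    v0 = np.var(innovations(train, a))
    v1 = np.var(innovations(test, a))
    return v1 > factor * v0 or v1 < v0 / factor
```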
50

Wallace, Angus Keith. "Epilepsy research using nonlinear signal processing." Flinders University. Computer Science, Engineering and Mathematics, 2008. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20081124.210552.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis applies several standard nonlinear quantifiers to EEG analysis to examine both human primary generalised epilepsy (PGE) and rat models of human epilepsy. We analysed rat EEG and then used the analysed data, in parallel with an impedance recording, to better understand the events occurring during experiments. Next, the nonlinear analysis of the EEG was used to attempt to model the behaviour of the impedance data. This modelling did not yield a useful predictive tool, so we recommend the continued recording of impedance data as a means of augmenting EEG recordings. The analyses were also applied to human data and showed differences between the PGE and control groups in apparently normal EEG. We then attempted to use these differences to detect the presence of PGE in an unclassified subject, as a diagnostic tool, using a feed-forward neural network. We found that the inter-group differences were exploitable and facilitated the diagnosis of PGE in previously unseen subjects; the extent to which this is useful as a diagnostic tool should be assessed in further trials. Finally, the analyses were used to examine data from a paralysed human subject in an attempt to identify the mental task being performed by that subject. This was not successful, suggesting that the analyses that were useful in discriminating between PGE and control subjects were not useful in detecting the mental state of the subject. It was also apparent that the presence of EMG (in an unparalysed state) assisted task classification.
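The abstract does not name the nonlinear quantifiers used, so purely as an illustration of the kind of feature such an analysis computes, the sketch below implements one widely used nonlinear EEG quantifier, sample entropy, for a single channel segment; the parameters m and r are conventional defaults, not the thesis's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal: negative log of the conditional
    probability that sequences matching for m points also match for m+1
    points, with tolerance r times the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(dim):
        # All overlapping templates of length `dim`
        templ = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templ)):
            # Chebyshev distance to all later templates (self-match excluded)
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Features of this kind, computed per channel and segment, could then feed a feed-forward classifier of the sort the abstract describes.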
