Dissertations / Theses on the topic 'Signal processing- models'

Consult the top 50 dissertations / theses for your research on the topic 'Signal processing- models.'

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1. Xu, Luzhou. "Growth curve models in signal processing applications." [Gainesville, Fla.]: University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015020.

2. Lynch, Michael Richard. "Adaptive techniques in signal processing and connectionist models." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/244884.
Abstract:
This thesis covers the development of a series of new methods and the application of adaptive filter theory, which are combined to produce a generalised adaptive filter system which may be used to perform such tasks as pattern recognition. Firstly, the relevant background adaptive filter theory is discussed in Chapter 1, and methods and results which are important to the rest of the thesis are derived or referenced. Chapter 2 of this thesis covers the development of a new adaptive algorithm which is designed to give faster convergence than the LMS algorithm but, unlike the Recursive Least Squares family of algorithms, does not require storage of a matrix with n² elements, where n is the number of filter taps. In Chapter 3 a new extension of the LMS adaptive notch filter is derived and applied which gives an adaptive notch filter the ability to lock and track signals of varying pitch without sacrificing notch depth. This application of the LMS filter is of interest as it demonstrates a time-varying filter solution to a stationary problem. The LMS filter is next extended to the multidimensional case, which allows the application of LMS filters to image processing. The multidimensional filter is then applied to the problem of image registration, and this new application of the LMS filter is shown to have significant advantages over current image registration methods. A consideration of the multidimensional LMS filter as a template matcher and pattern recogniser is given. In Chapter 5 a brief review of statistical pattern recognition is given, and in Chapter 6 a review of relevant connectionist models. In Chapter 7 the generalised adaptive filter is derived. This is an adaptive filter with the ability to model non-linear input-output relationships. The Volterra functional analysis of non-linear systems is given and combined with adaptive filter methods to give a generalised non-linear adaptive digital filter. This filter is then considered as a linear adaptive filter operating in a non-linearly extended vector space. The new filter is shown to have desirable properties as a pattern recognition system. The performance and properties of the new filter are compared with current connectionist models and results are demonstrated in Chapter 8. In Chapter 9 further mathematical analysis of the networks leads to suggested methods to greatly reduce network complexity for a given problem by choosing suitable pattern classification indices and allowing the network to define its own internal structure. In Chapter 10 robustness of the network to imperfections in its implementation is considered. Chapter 11 finishes the thesis with some conclusions and suggestions for future work.
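
For readers who want the baseline being improved upon, here is a minimal sketch of the standard LMS weight update; the tap count, step size and signal names are illustrative and are not taken from the thesis.

    import numpy as np

    def lms_filter(x, d, n_taps=8, mu=0.01):
        # Standard LMS: adapt the weights w so that w @ x_window tracks d.
        # Unlike RLS, only n_taps weights are stored: no matrix with n^2 elements.
        w = np.zeros(n_taps)
        y = np.zeros(len(x))                  # filter output
        e = np.zeros(len(x))                  # error signal driving the adaptation
        for n in range(n_taps, len(x)):
            x_win = x[n - n_taps:n][::-1]     # most recent samples first
            y[n] = w @ x_win
            e[n] = d[n] - y[n]
            w += 2 * mu * e[n] * x_win        # stochastic-gradient weight update
        return y, e, w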

3. Rao, Tandhoni. "Noncausal methods and models for image." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/13344.

4. Bengtsson, Mats. "Antenna array signal processing for high rank data models." Doctoral thesis, KTH, Signaler, sensorer och system, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2903.

5. Noland, Katy C. "Computational tonality estimation: signal processing and hidden Markov models." Thesis, Queen Mary, University of London, 2009. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8492.
Abstract:
This thesis investigates computational musical tonality estimation from an audio signal. We present a hidden Markov model (HMM) in which relationships between chords and keys are expressed as probabilities of emitting observable chords from a hidden key sequence. The model is tested first using symbolic chord annotations as observations, and gives excellent global key recognition rates on a set of Beatles songs. The initial model is extended for audio input by using an existing chord recognition algorithm, which allows it to be tested on a much larger database. We show that a simple model of the upper partials in the signal improves percentage scores. We also present a variant of the HMM which has a continuous observation probability density, but show that the discrete version gives better performance. Then follows a detailed analysis of the effects on key estimation and computation time of changing the low level signal processing parameters. We find that much of the high frequency information can be omitted without loss of accuracy, and significant computational savings can be made by applying a threshold to the transform kernels. Results show that there is no single ideal set of parameters for all music, but that tuning the parameters can make a difference to accuracy. We discuss methods of evaluating more complex tonal changes than a single global key, and compare a metric that measures similarity to a ground truth to metrics that are rooted in music retrieval. We show that the two measures give different results, and so recommend that the choice of evaluation metric is determined by the intended application. Finally we draw together our conclusions and use them to suggest areas for continuation of this research, in the areas of tonality model development, feature extraction, evaluation methodology, and applications of computational tonality estimation.
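
To make the emission idea concrete, here is a toy Viterbi decoder over a hidden key sequence emitting observed chords; the two-key, three-chord setup and all probabilities are invented for illustration and are not the thesis's trained model.

    import numpy as np

    # Toy setup: 2 hidden keys, 3 observable chord symbols (probabilities invented).
    trans = np.array([[0.95, 0.05],     # P(next key | current key): keys persist
                      [0.05, 0.95]])
    emit = np.array([[0.6, 0.3, 0.1],   # P(chord | key 0)
                     [0.1, 0.3, 0.6]])  # P(chord | key 1)
    prior = np.array([0.5, 0.5])

    def viterbi(chords):
        # Most likely hidden key sequence for a list of observed chord indices.
        logp = np.log(prior) + np.log(emit[:, chords[0]])
        back = []
        for c in chords[1:]:
            cand = logp[:, None] + np.log(trans)   # score of each key transition
            back.append(np.argmax(cand, axis=0))   # best predecessor per key
            logp = cand.max(axis=0) + np.log(emit[:, c])
        path = [int(np.argmax(logp))]
        for b in reversed(back):                   # trace the best path backwards
            path.append(int(b[path[-1]]))
        return path[::-1]

    print(viterbi([0, 0, 1, 2, 2]))  # -> [0, 0, 0, 1, 1]: a key change mid-sequence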

6. Said, Maya Rida. "Signal processing in biological cells: proteins, networks, and models." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30165.
Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 202-210).
This thesis introduces systematic engineering principles to model, at different levels of abstraction, the information processing in biological cells in order to understand the algorithms implemented by the signaling pathways that perform the processing. An example of how to emulate one of these algorithms in other signal processing contexts is also presented. At a high modeling level, the focus is on the network topology rather than the dynamical properties of the components of the signaling network. In this regime, we examine and analyze the distribution and properties of the network graph. Specifically, we present a global network investigation of the genotype/phenotype data-set recently developed for the yeast Saccharomyces cerevisiae from exposure to DNA damaging agents, enabling explicit study of how protein-protein interaction network characteristics may be associated with phenotypic functional effects. The properties of several functional yeast networks are also compared, and a simple method to combine gene expression data with network information is proposed to better predict pathophysiological behavior. At a low level of modeling, the thesis introduces a new framework for modeling cellular signal processing based on interacting Markov chains. This framework provides a unified way to simultaneously capture the stochasticity of signaling networks in individual cells while computing a deterministic solution which provides average behavior. The use of this framework is demonstrated on two classical signaling networks: the mitogen-activated protein kinase cascade and the bacterial chemotaxis pathway. The prospects of using cell biology as a metaphor for signal processing are also considered in a preliminary way by presenting a surface mapping algorithm based on bacterial chemotaxis.

7. Marmin, Arthur. "Rational models optimized exactly for solving signal processing problems." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG017.
Abstract:
A wide class of nonconvex optimization problems is represented by rational optimization problems. The latter appear naturally in many areas such as signal processing or chemical engineering. However, finding the global optima of such problems is intricate. A recent approach called Lasserre's hierarchy provides a sequence of convex problems that has the theoretical guarantee to converge to the global optima. Nevertheless, this approach is computationally challenging due to the high dimensions of the convex relaxations. In this thesis, we tackle this challenge for various signal processing problems. First, we formulate the reconstruction of sparse signals as a rational optimization problem. We show that the latter has a structure that we can exploit in order to reduce the complexity of the associated relaxations. We thus solve several practical problems, such as the reconstruction of chromatography signals. We also extend our method to the reconstruction of various types of signal corrupted by different noise models. In a second part, we study the convex relaxations generated by our problems, which take the form of high-dimensional semi-definite programming problems. We consider several algorithms, mainly based on proximal operators, to solve those high-dimensional problems efficiently. The last part of this thesis is dedicated to the link between polynomial optimization and symmetric tensor decomposition. Indeed, both can be seen as an instance of the moment problem. We thereby propose a detection method as well as a decomposition algorithm for symmetric tensors based on the tools used in polynomial optimization. In parallel, we suggest a robust extraction method for polynomial optimization based on tensor decomposition algorithms. Those methods are illustrated on signal processing problems.
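
As background, the rational optimization problems referred to here can be written in the following generic form (our notation, not the thesis's); assuming q > 0 on the feasible set K,

    \min_{x \in K} \frac{p(x)}{q(x)}
        = \sup \{ \lambda \in \mathbb{R} : p(x) - \lambda\, q(x) \ge 0 \ \text{for all } x \in K \},

and Lasserre's hierarchy replaces the global nonnegativity constraint with a sequence of increasingly large semidefinite (sum-of-squares) relaxations whose optima converge to the global one.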

8. Archer, Cynthia. "A framework for representing non-stationary data with mixtures of linear models." 2002. http://content.ohsu.edu/u?/etd,585.

9. Liu, Li. "Ground vehicle acoustic signal processing based on biological hearing models." College Park, Md.: University of Maryland, 1999. http://techreports.isr.umd.edu/reports/1999/MS%5F99-6.pdf.
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 1999.
Thesis research directed by Institute for Systems Research. "M.S. 99-6." Includes bibliographical references (leaves 75-78). Available also online as a PDF file via the World Wide Web.

10. Boman, Katarina. "Low-angle estimation: Models, methods and bounds." Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-85998.
Abstract:
In this work we study the performance of elevation estimators and lower bounds on the estimation error variance for a low-angle target in a smooth sea scenario using an array antenna. The article is structured around some key assumptions on multipath knowledge, signal parameterization and noise covariance, giving the reader a framework in which Maximum Likelihood estimators exploiting different a priori information can be found. The crucial factor that determines the estimator accuracy is the multipath modeling, and there are three alternative levels of knowledge that can be used: 1) two unknown target locations; 2) the target and its corresponding sea-reflection are related via simple geometry; 3) the sea-reflection coefficient is known as a function of grazing angle. A compact expression for the Cramér–Rao lower bound is derived, including all special cases of the key assumptions. We prove that the Cramér–Rao bound is highly dependent on the multipath model, while it is the same for the different signal parameterizations and independent of the noise covariance. However, the Cramér–Rao bound is sometimes too optimistic and not achievable. The tighter Barankin bound is derived to predict the threshold behavior seen at low SNR. At high SNR the Barankin bound coincides with the Cramér–Rao bound. Simulations show that the Maximum Likelihood methods are statistically efficient and achieve the theoretical lower bound on error variance when the SNR is high enough. The bounds are also useful tools to design an improved array structure that can give better performance than the standard uniform linear array structure. The influence of the number of sensors and the number of snapshots on the error variance is also studied, showing the rate of improvement with more sensors or snapshots. Finally we discuss the use of multiple frequencies, which is mainly a tool for suppressing ambiguities. We show for which signal models it provides improved performance.
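
For reference, the Cramér–Rao lower bound mentioned throughout is the standard information inequality (generic form, not the thesis's compact expression): for any unbiased estimator of the parameter vector θ,

    \operatorname{cov}(\hat{\theta}) \succeq \mathcal{I}(\theta)^{-1},
    \qquad
    [\mathcal{I}(\theta)]_{ij} = -\,\mathbb{E}\!\left[ \frac{\partial^2 \ln p(\mathbf{y};\theta)}{\partial \theta_i \, \partial \theta_j} \right],

so the multipath assumptions enter the bound only through the likelihood p(y; θ), which is why the three modeling levels above give different bounds.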

11. Mahata, Kaushik. "Identification of dynamic errors-in-variables models." Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86301.
Abstract:
The problem of identifying dynamic errors-in-variables models is of fundamental interest in many areas such as process control, array signal processing and astronomical data reduction. In recent years, this field has received increased attention from the research community. In this thesis, some time domain and frequency domain approaches for identifying errors-in-variables models are studied. The first chapter gives an overview of various methods for identifying dynamic errors-in-variables systems. Several approaches are classified, and a qualitative comparison of different existing methods is also presented. The second chapter deals with instrumental variables (IV) based approaches. The least squares and the total least squares methods of solving the Yule–Walker equation are of central interest here. The methods are compared from the viewpoint of asymptotic performance, numerical robustness and computation. The method presented in the third chapter uses prefiltered data. The input-output data is passed through a pair of user-defined prefilters, and the output data from the prefilters is subjected to a least-squares-like algorithm. Compared to the IV approach, the proposed method shows a significant improvement in the small-sample properties of the MA parameter estimates, without any increase in the computational load. In the fourth chapter, we show that the two-dimensional process composed of the input-output data admits a finite-order ARMA representation. Then we propose a parametric identification algorithm and another non-parametric identification method based on the ARMA representation.
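
To make the least squares versus total least squares comparison concrete, here is the textbook SVD construction of the TLS solution of an overdetermined system Ax ≈ b; it is a generic sketch, not the estimator studied in the thesis.

    import numpy as np

    def tls(A, b):
        # Total least squares via the SVD of the augmented matrix [A | b].
        # Unlike ordinary least squares, TLS permits errors in A as well as in b,
        # which is the errors-in-variables situation.
        n = A.shape[1]
        Z = np.column_stack([A, b])
        _, _, Vt = np.linalg.svd(Z)
        v = Vt[-1]                # right singular vector of the smallest singular value
        return -v[:n] / v[n]

    A_true = np.random.randn(100, 3)
    x_true = np.array([1.0, -2.0, 0.5])
    b = A_true @ x_true + 0.05 * np.random.randn(100)
    A_obs = A_true + 0.05 * np.random.randn(100, 3)   # noisy regressors
    print(tls(A_obs, b), np.linalg.lstsq(A_obs, b, rcond=None)[0])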

12. Webb, M. R. "Millimetre wave quasi-optical signal processing systems." Thesis, University of St Andrews, 1993. http://hdl.handle.net/10023/2827.
Abstract:
The development of spatial signal processing techniques at millimetre wavelengths represents an area of science and technology that is new. At optical wavelengths, spatial signal processing techniques are well developed and are being applied to a variety of situations. In particular they are being used in pattern recognition systems with a great deal of success. At millimetre wavelengths, the kind of technology used for signal transport and processing is typically either waveguide based or quasi-optically based, or some hybrid of the two. It is the use of quasi-optical methods that opens up the possibility of applying some of the spatial signal processing techniques that up to the present time have almost exclusively been used at optical wavelengths. A generic device that opens up this dimension of spatial signal processing to millimetre wave quasi-optical systems is at the heart of the work described within this thesis. The device could be suitably called a millimetre wave quasi-optical spatial light modulator (SLM), and is identical in operation to the spatial light modulators used in many optical signal processing systems. Within this thesis both a theoretical and an experimental analysis of a specific millimetre wave quasi-optical spatial light modulator is undertaken. This thesis thus represents an attempt to open up this new area of research and development, and to establish for it a helpful theoretical and experimental foundation. It is an area that involves a heterogeneous mix of various technologies, and it is an area that is full of potential. The development of the experimental method for measuring the beam patterns produced by millimetre wave quasi-optical spatial light modulators involved the separate development of two other components. Firstly, a sensitive, low-cost millimetre wave pyroelectric detector has been developed and characterised. Secondly, a high performance quasi-optical Faraday rotator (a polarisation rotator) has been developed and characterised. The polarisation state of a quasi-optical beam is the parameter most often exploited for signal processing applications in millimetre wave quasi-optical systems, and thus a high performance polarisation rotator has readily found many opportunities for use.

13. Fabrizio, Giuseppe Aureliano. "Space-time characterisation and adaptive processing of ionospherically-propagated HF signals." Title page, table of contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phf129.pdf.

14. Dickins, Glenn. "Applications of Continuous Spatial Models in Multiple Antenna Signal Processing." The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080702.222814.
Abstract:
This thesis covers the investigation and application of continuous spatial models for multiple antenna signal processing. The use of antenna arrays for advanced sensing and communications systems has been facilitated by the rapid increase in the capabilities of digital signal processing systems. The wireless communications channel will vary across space as different signal paths from the same source combine and interfere. This creates a level of spatial diversity that can be exploited to improve the robustness and overall capacity of the wireless channel. Conventional approaches to using spatial diversity have centered on smart, adaptive antennas and spatial beam forming. Recently, the more general theory of multiple input, multiple output (MIMO) systems has been developed to utilise the independent spatial communication modes offered in a scattering environment.

Underlying any multiple antenna system is the basic physics of electromagnetic wave propagation. Whilst a MIMO system may present a set of discrete inputs and outputs, each antenna element must interact with the underlying continuous spatial field. Since an electromagnetic disturbance will propagate through space, the field at different positions in the space will be interrelated. In this way, each position in the field cannot assume an arbitrary independent value and the nature of wave propagation places a constraint on the allowable complexity of a wave-field over space. To take advantage of this underlying physical constraint, it is necessary to have a model that incorporates the continuous nature of the spatial wave-field.

This thesis investigates continuous spatial models for the wave-field. The wave equation constraint is introduced by considering a natural basis expansion for the space of physically valid wave-fields. This approach demonstrates that a wave-field over a finite spatial region has an effective finite dimensionality. The optimal basis for representing such a field is dependent on the shape of the region of interest and the angular power distribution of the incident field. By applying the continuous spatial model to the problem of direction of arrival estimation, it is shown that the spatial region occupied by the receiver places a fundamental limit on the number and accuracy with which sources can be resolved. Continuous spatial models also provide a parsimonious representation for modelling the spatial communications channel independent of specific antenna array configurations. The continuous spatial model is also applied to consider limits to the problem of wireless source direction and range localisation.

15. Mahata, Kaushik. "Estimation Using Low Rank Signal Models." Doctoral thesis, Uppsala University, Department of Information Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3844.
Abstract:

Designing estimators based on low rank signal models is a common practice in signal processing. Some of these estimators are designed to use a single low rank snapshot vector, while others employ multiple snapshots. This dissertation deals with both these cases in different contexts.

Separable nonlinear least squares is a popular tool to extract parameter estimates from a single snapshot vector. Asymptotic statistical properties of the separable non-linear least squares estimates are explored in the first part of the thesis. The assumptions imposed on the noise process and the data model are general. Therefore, the results are useful in a wide range of applications. Sufficient conditions are established for consistency, asymptotic normality and statistical efficiency of the estimates. An expression for the asymptotic covariance matrix is derived and it is shown that the estimates are circular. The analysis is extended also to the constrained separable nonlinear least squares problems.
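
For context, a separable nonlinear least squares problem has the generic form below (our notation, not the thesis's); the linear parameters β can be eliminated by projection, leaving a smaller nonlinear problem in α:

    \min_{\alpha,\,\beta} \left\| \mathbf{y} - \mathbf{A}(\alpha)\,\beta \right\|^2
        = \min_{\alpha} \left\| \left( \mathbf{I} - \mathbf{A}(\alpha)\,\mathbf{A}^{\dagger}(\alpha) \right) \mathbf{y} \right\|^2 .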

Nonparametric estimation of the material functions from wave propagation experiments is the topic of the second part. This is a typical application where a single snapshot vector is employed. Numerical and statistical properties of the least squares algorithm are explored in this context. Boundary conditions in the experiments are used to achieve superior estimation performance. Subsequently, a subspace based estimation algorithm is proposed. The subspace algorithm is not only computationally efficient, but is also equivalent to the least squares method in accuracy.

Estimation of the frequencies of multiple real valued sine waves is the topic in the third part, where multiple snapshots are employed. A new low rank signal model is introduced. Subsequently, an ESPRIT-like method named R-Esprit and a weighted subspace fitting approach are developed based on the proposed model. When compared to ESPRIT, R-Esprit is not only computationally more economical but also equivalent in performance. The weighted subspace fitting approach shows significant improvement in the resolution threshold. It is also robust to additive noise.

16. Cai, Qin. "Detecting Chaotic Signals with Nonlinear Models." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4564.
Abstract:
In this thesis we apply chaotic dynamic data analysis to the area of discrete-time signal processing. A newly developed Hidden Filter Hidden Markov Model is introduced for the detection of chaotic signals. Numerical experiments have verified that this novel nonlinear model outperforms a linear AR model in detecting chaotic signals buried in noise having similar power spectra. A simple Histogram Model is proposed which can also be used for detection on data sets with chaotic behavior. Receiver Operating Characteristics for a variety of noise levels and model classes are reported.
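
Since Receiver Operating Characteristics are central to the experiments, the sketch below shows how an ROC curve is traced by sweeping a threshold over detection scores (for instance, per-model log-likelihoods); the Gaussian score distributions are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    scores_h1 = rng.normal(1.0, 1.0, 5000)   # detector scores: chaotic signal present
    scores_h0 = rng.normal(0.0, 1.0, 5000)   # detector scores: noise only

    # Each threshold yields one (false alarm rate, detection rate) point.
    thresholds = np.linspace(-4.0, 5.0, 200)
    p_fa = np.array([(scores_h0 > t).mean() for t in thresholds])
    p_d = np.array([(scores_h1 > t).mean() for t in thresholds])
    print(f"AUC ~ {np.trapz(p_d[::-1], p_fa[::-1]):.3f}")   # area under the ROC curve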

17. Clark, Laurence. "A distributed information processing model of bacterial chemotaxis." Thesis, University of Liverpool, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367071.

18. Sun, David. "Optical signal processors: novel computer models and experiments." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/42233.
Abstract:
The concept of incorporating optical fiber delay lines as signal processors is investigated. These signal processors are discrete-time filters implemented using optical fiber delay lines in the form of optical fiber couplers. Advantages of such a filter include the ability to eliminate the need for electro-optical conversion, or at least to postpone the conversion to a point where the electronics may be simplified. In addition, these optical signal processors exhibit all the advantages afforded traditional optical fiber systems. Furthermore, these devices allow high-speed signal processing rates greater than 1 GHz, which rival the performance of the most advanced electronic counterparts available today. In this thesis, optical signal processors are investigated in detail. First, the fundamental signal processing concepts and the optical component background needed for the analysis of the devices are developed. Subsequently, various configurations of optical signal processors are discussed and novel computer models for these configurations using Z-transform theory are presented. Lastly, experimental verification of the rudimentary models is presented, together with a unique approach that allows the creation of optical signal processors with a flexibility never achievable before.
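
In the Z-transform view mentioned above, a tapped fiber delay line with coupling weights a_k acts as a discrete-time FIR filter with H(z) = Σ_k a_k z^(-k); the snippet below evaluates such a response for invented coefficients, purely to illustrate the modeling idea.

    import numpy as np
    from scipy.signal import freqz

    taps = [0.5, 0.3, 0.2]            # illustrative coupler weights a_k
    w, h = freqz(taps, worN=512)      # frequency response of H(z) = sum_k a_k z^{-k}
    print(np.abs(h[:3]))              # magnitude response near DC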

19. Yu, Kai. "Multiple-Input Multiple-Output Radio Propagation Channels: Characteristics and Models." Doctoral thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138.

20. Yang, Yang. "2D signal processing: efficient models for spectral compressive sensing & single image reflection suppression." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6667.
Abstract:
Two efficient models in two-dimensional signal processing are proposed in the thesis. The first model deals with large scale spectral compressive sensing in continuous domain, which aims to recover a 2D spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500×500, whereas traditional approaches only handle signals of size around 20×20. The second model deals with the problem of single image reflection suppression. Removing the undesired reflection from images taken through glass is of great importance in computer vision. It serves as a means to enhance the image quality for aesthetic purposes as well as to preprocess images in machine learning and pattern recognition applications. We propose a convex model to suppress the reflection from a single input image. Our model implies a partial differential equation with gradient thresholding, which is solved efficiently using the Discrete Cosine Transform. Extensive experiments on synthetic and real-world images demonstrate that our approach achieves desirable reflection suppression results and dramatically reduces the execution time compared to the state of the art.
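
To illustrate the DCT-based PDE solve mentioned in the abstract, here is a generic sketch that solves a Poisson equation lap(u) = f with Neumann boundaries by pointwise division in the 2D DCT domain; it shows the mechanism only and is not the thesis's reflection-suppression model.

    import numpy as np
    from scipy.fft import dctn, idctn

    def poisson_dct(f):
        # Solve lap(u) = f with Neumann boundary conditions: the discrete Laplacian
        # is diagonalized by the DCT-II basis, so the solve is a pointwise division.
        m, n = f.shape
        F = dctn(f, type=2, norm='ortho')
        ii, jj = np.meshgrid(np.arange(m), np.arange(n), indexing='ij')
        lam = 2 * (np.cos(np.pi * ii / m) - 1) + 2 * (np.cos(np.pi * jj / n) - 1)
        lam[0, 0] = 1.0               # avoid division by zero at the DC coefficient
        U = F / lam
        U[0, 0] = 0.0                 # the mean of u is unconstrained; pin it to zero
        return idctn(U, type=2, norm='ortho')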

21. Meyer, Georg. "Models of neurons in the ventral cochlear nucleus: signal processing and speech recognition." Thesis, Keele University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334715.

22. Zhang, Zhiguo (張治國). "On bandwidth and scale selection in processing of time-varying signals with applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39707465.

23. Wang, Xiaofeng. "Simulation models for rolling bearing vibration generation and fault detection via neural networks." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362159.

24. Wong, Hau San. "Adaptive image processing using computational intelligence techniques." Thesis, The University of Sydney, 1998. https://hdl.handle.net/2123/27658.
Abstract:
In this thesis, we illustrate the essential aspects of the adaptive image processing problem in terms of two applications: the adaptive assignment of the regularization parameters in image restoration, and the adaptive characterization of edges in feature detection applications. These two problems are representative of the general adaptive image processing paradigm in that the three requirements for its successful implementation, namely the segmentation of an image into its main feature types, the characterization of each of these features, and the optimization of the image model parameters corresponding to the individual features, are present. In view of these requirements, we have adopted the three main approaches within the class of computational intelligence algorithms, namely neural network techniques, fuzzy set theory, and evolutionary computation, for solving the adaptive image processing problem. This is in view of the direct correspondence between some of the above requirements and the particular capabilities of specific computational intelligence approaches. We first applied neural network techniques to the adaptive regularization problem in image restoration. Instead of the usual approach of selecting the regularization parameter values by trial and error, we adopt a learning approach by treating the parameters in various local image regions as network weights of a model-based neural network with hierarchical architecture (HMBNN), such that they are adjustable through the supply of training examples specifying the desired image quality. In addition, we also applied the HMBNN to the problem

25. Selén, Yngve. "Model selection." Uppsala: Univ., Dept. of Information Technology, 2004. http://www.it.uu.se/research/reports/lic/2004-003/.

26. Perry, Stuart William. "Adaptive image restoration: perception based neural network models and algorithms." 1998. http://hdl.handle.net/2123/389.
Abstract:
Thesis (Ph. D.)--University of Sydney, 1999.
Title from title screen (viewed Apr. 16, 2008). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Electrical and Information Engineering, Faculty of Engineering. Degree awarded 1999; thesis submitted 1998. Includes bibliography. Also available in print form.

27. Lannerhed, Petter. "Structural Diagnosis Implementation of Dymola Models using Matlab Fault Diagnosis Toolbox." Thesis, Linköpings universitet, Fordonssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138753.
Abstract:
Models are of great interest in many fields of engineering as they enable prediction of a system's behaviour, given an initial mode of the system. However, in the field of model-based diagnosis the models are used in a reverse manner, as they are combined with observations of the system's behaviour in order to estimate the system mode. This thesis describes computation of diagnostic systems based on models implemented in Dymola, a program that uses the language Modelica. The Dymola models are translated to Matlab, where an application called the Fault Diagnosis Toolbox (FDT) is applied. The FDT has functionality for pinpointing minimal overdetermined sets of equations, MSOs, which is developed further in this thesis. It is shown that the implemented algorithm has exponential time complexity with regard to the level to which the system is overdetermined, also known as the degree of redundancy. The MSOs are used to generate residuals, which are functions that are equal to zero given that the system is fault-free. Residual generation in Dymola is added to the original methods of the FDT, and the results of the Dymola methods are compared to the original FDT methods when given identical data. Based on these tests it is concluded that adding the Dymola methods to the FDT results in higher accuracy, as well as a new way to compute optimal observer gain. The FDT methods are applied to two models: one based on a system of JAS 39 Gripen, SECS, which stands for Secondary Environmental Control System, and a simpler model, a Two Tank System. It is validated that the computational properties of the developed methods in Dymola and Matlab differ and that there are therefore benefits to adding the Dymola implementations to the current FDT methods. Furthermore, the investigation of the potential isolability based on the current setup of sensors in SECS shows that full isolability is achievable by adding 2 mass flow sensors, and that the isolability is not limited by causality constraints. One of the found MSOs is solvable in Dymola when given data from a fault-free simulation. However, if the simulation is not fault-free, the same MSO results in a singular equation system. By utilizing MSOs that had no reaction to any modelled faults, certain non-monitored faults are isolated from the monitored ones and the risk of false alarms is therefore reduced. Some residuals are generated as observers, and a new method for constructing observers is found during the thesis by using Lannerhed's theorem in combination with Pontryagin's Minimum Principle. This method enables evaluation of observer-based residuals in Dymola without selection of a specific operating point, as well as evaluation of observers based on high-index Differential Algebraic Equations, DAEs. The method also results in completely different behaviour of the estimation error compared to the method already implemented in the FDT. For example, one of the new observer implementations achieves both an estimation error that converges faster towards zero when no faults are implemented in the monitored system, and a sharper reaction to implemented faults.

28. Stoffell, Kevin M. "Implementation of a Quadrature Mirror Filter Bank on an SRC reconfigurable computer for real-time signal processing." Thesis, Monterey, Calif.: Naval Postgraduate School; available from National Technical Information Service, Springfield, Va., 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FStoffell.pdf.
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Douglas J. Fouts. "September 2006." Includes bibliographical references (p. 111-112). Also available in print.

29. Hong, Jung. "Statistical Parametric Models and Inference for Biomedical Signal Processing: Applications in Speech and Magnetic Resonance Imaging." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10074.
Abstract:
In this thesis, we develop statistical methods for extracting significant information from biomedical signals. Biomedical signals are not only generated from a complex system but also affected by various random factors during their measurement. The biomedical signals may then be studied in two aspects: the observational noise that biomedical signals experience, and the intrinsic nature that noise-free signals possess. We study Magnetic Resonance (MR) images and speech signals as applications in the one- and two-dimensional signal representation. In MR imaging, we study how observational noise can be effectively modeled and then removed. Magnitude MR images suffer from Rician-distributed signal-dependent noise. Observing that the squared-magnitude MR image follows a scaled non-central Chi-square distribution on two degrees of freedom, we optimize the parameters involved in the proposed Rician-adapted Non-local Mean (RNLM) estimator by minimizing the Chi-square unbiased risk estimate in the minimum mean square error sense. A linear expansion of RNLMs is considered in order to achieve global optimality of the parameters without data-dependency. Parallel computations and convolution operations are considered as acceleration techniques. Experiments show the proposed method compares favorably with benchmark denoising algorithms. Parametric modelings of noise-free signals are studied for robust speech applications. Voiced speech signals are often described by the harmonic model, with the fundamental frequency commonly assumed to be a smooth function of time. As an important feature in various speech applications, pitch, the perceived tone, is obtained by way of estimating the fundamental frequency. In this thesis, two model-based pitch estimation schemes are introduced. In the first, an iterative Auto Regressive Moving Average technique estimates harmonically tied sinusoidal components in noisy speech signals. Dynamic programming implements the smoothness of the fundamental frequency. The second introduces the Continuous-time Voiced Speech (CVS) model, which models the smooth fundamental frequency as a linear combination of block-wise continuous polynomial bases. The model parameters are obtained via a convex optimization with constraints, providing an estimate of the instantaneous fundamental frequency. Experiments validate the robustness and accuracy of the proposed methods compared with some current state-of-the-art pitch estimation algorithms.
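
For readers unfamiliar with the noise model, magnitude MR data arise as the modulus of a complex Gaussian measurement, which is exactly what produces the Rician distribution; a small simulation sketch with invented signal and noise levels:

    import numpy as np

    rng = np.random.default_rng(1)
    s, sigma, n = 3.0, 1.0, 100_000   # true signal, noise level, sample count
    # Magnitude of signal plus complex Gaussian noise is Rician distributed;
    # its square follows a scaled non-central chi-square on 2 degrees of freedom.
    mag = np.abs(s + sigma * (rng.normal(size=n) + 1j * rng.normal(size=n)))
    print(np.mean(mag**2), s**2 + 2 * sigma**2)   # E[M^2] = s^2 + 2*sigma^2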

30. Arnold, Dieter M. "Computing information rates of finite-state models with application to magnetic recording." Konstanz: Hartung-Gorre, 2003. http://www.loc.gov/catdir/toc/fy0609/2005530413.html.

31. Hymel, Shawn. "Massively Parallel Hidden Markov Models for Wireless Applications." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/36017.
Abstract:
Cognitive radio is a growing field in communications which allows a radio to automatically configure its transmission or reception properties in order to reduce interference, provide better quality of service, or allow for more users in a given spectrum. Such processes require several complex features that are currently being utilized in cognitive radio. Two such features, spectrum sensing and identification, have been implemented in numerous ways; however, they generally suffer from high computational complexity. Additionally, Hidden Markov Models (HMMs) are a widely used mathematical modeling tool in various fields of engineering and science. In electrical and computer engineering, HMMs are used in several areas, including speech recognition, handwriting recognition, artificial intelligence and queuing theory, and are used to model fading in communication channels. The research presented in this thesis proposes a new approach to spectrum identification using a parallel implementation of Hidden Markov Models. Algorithms involving HMMs are usually implemented in the traditional serial manner, which leads to prohibitively long runtimes. In this work, we study their use in parallel implementations and compare our approach to traditional serial implementations. Timing and power measurements are taken and used to show that the parallel implementation can achieve well over 100× speedup in certain situations. To demonstrate the utility of this new parallel algorithm using graphics processing units (GPUs), a new method for signal identification is proposed for both serial and parallel implementations using HMMs. The method achieved high recognition at -10 dB Eb/N0. HMMs can benefit from parallel implementation in certain circumstances, specifically in models that have many states or when multiple models are used in conjunction.
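
The reported speedups come from updating all HMM states (and several models) at once; the numpy sketch below shows the per-step vectorization of the scaled forward recursion that a GPU implementation parallelizes further (toy dimensions and parameters, not the thesis's GPU code).

    import numpy as np

    def forward_loglik(obs, prior, trans, emit):
        # Scaled forward pass: the sum over states is a single vectorized product.
        alpha = prior * emit[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()                      # rescale to avoid underflow
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]  # all states updated in one step
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return loglik

    prior = np.full(3, 1/3)                       # toy 3-state model
    trans = np.full((3, 3), 1/3)
    emit = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
    print(forward_loglik([0, 1, 2, 2], prior, trans, emit))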

32. Jonsson, Patrik. "Surface Status Classification, Utilizing Image Sensor Technology and Computer Models." Doctoral thesis, Mittuniversitetet, Avdelningen för elektronikkonstruktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-24828.
Abstract:
There is a great need to develop systems that can continuously provide correct information about road surface status depending on the prevailing weather conditions. This will minimize accidents and optimize transportation. In this thesis different methods for the determination of the road surface status have been studied and analyzed, and suggestions for new technology are proposed. Information about the road surface status is traditionally obtained from various sensors mounted directly in the road surface. This information must then be analyzed to create automated warning systems for road users and road maintenance personnel. The purpose of this thesis is to investigate how existing technologies can be used to obtain a more accurate description of the current road conditions. Furthermore, the aim is to develop non-contact technologies able to determine and classify road conditions over a larger area, since there is no system available today that can identify differences in road surface status in the wheel tracks and between the wheel tracks. Literature studies have been carried out to find the latest state-of-the-art research and technology, and the research work is mainly based on empirical studies. A large part of the research has involved planning and setting up laboratory experiments to test and verify hypotheses that have emerged from the literature studies. Initially a few traditional road-mounted sensors were analyzed regarding their ability to determine the road conditions and the impact on their measured values when the sensors were exposed to contamination agents such as glycol and oil. Furthermore, non-contact methods for determining the status of the road surface have been studied. Images from cameras working in the visible range, together with data from the Swedish Transport Administration road weather stations, have been used to develop computerized road status classification models that can distinguish between a dry, wet, icy and snowy surface. Field observations have also been performed to get the ground truth for developing these models. In order to improve the ability to accurately distinguish between different surface statuses, measurement systems involving sensors working in the Near-Infrared (NIR) range have been utilized. In this thesis a new imaging method for determining road conditions with NIR camera technology is developed and described. This method was tested in a field study performed during the winter of 2013-2014 with successful results. The results show that some traditional sensors could be used even with future user-friendly de-icing chemicals. The findings from using visual camera systems and meteorological parameters to determine the road status showed that they provide previously unknown information about road conditions. It was discovered that certain road conditions such as black ice are not always detectable using this technology. Therefore, research was performed in the NIR region, where it proved to be possible to detect and distinguish different road conditions, such as black ice. NIR camera technology was introduced in the research since the aim of the thesis was to find a method that provides information on the status of the road over a larger area.
The results show that if several images taken in different spectral bands are analyzed with the support of advanced computer models, it is possible to distinguish between a dry, wet, icy and snowy surface. This resulted in the development of a NIR camera system that can distinguish between different surface statuses. Finally, two of these prototype systems for road condition classification were evaluated. These systems were installed at E14 on both sides of the border between Sweden and Norway. The results of these field tests show that this new road status classification, based on NIR imaging spectral analysis, provides new information about the status of the road surface compared to what can be obtained from existing measurement systems, particularly for detecting differences in and between the wheel tracks.

33. Wigren, Anna. "Exploiting conjugacy in state-space models with sequential Monte Carlo." Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429236.
Abstract:
Many processes we encounter in our daily lives are dynamical systems that can be described mathematically using state-space models. Exact inference of both states and parameters in these models is, in general, intractable. Instead, approximate methods, such as sequential Monte Carlo and Markov chain Monte Carlo, are used to infer quantities of interest. However, sample-based inference inherently introduces variance in the estimates. In this thesis we explore different aspects of how conjugacy relations in a model can improve the performance of sequential Monte Carlo-based inference methods. A conjugacy relation between the prior distribution and the likelihood implies that the posterior distribution has the same distributional form as the prior, allowing for analytic updates in place of numerical integration. In Paper I we consider state inference in state-space models where the transition density is intractable. By adding artificial noise conjugate to the observation density we can design an efficient proposal for sequential Monte Carlo inference that can reduce the variance of the state estimates. Conjugacy can also be utilized in the setting of parameter inference. In Paper II we show that the performance of particle Gibbs-type samplers, in terms of the autocorrelation of the samples, can be improved when conjugacy relations allow for marginalizing out the dependence on parameters in the state update. Despite enabling analytical evaluation of integrals, the derivation and implementation of conjugacy updates are cumbersome in all but the simplest cases, which limits their usefulness in practice. Recently, the emerging field of probabilistic programming has changed this by providing a framework for automating inference in probabilistic models, including identifying and utilizing conjugacy relations. In Paper II we make use of probabilistic programming to automatically exploit conjugacy in an epidemiological state-space model describing the spread of dengue fever.
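
As a minimal example of the kind of conjugacy being exploited, a Gaussian prior is conjugate to a Gaussian likelihood, so the posterior is available in closed form rather than by sampling; the scalar update below uses textbook formulas and is not taken from the papers.

    def gaussian_conjugate_update(mu0, var0, y, var_y):
        # Posterior of x ~ N(mu0, var0) after observing y = x + e, e ~ N(0, var_y).
        # Conjugacy makes the posterior Gaussian again: no numerical integration.
        k = var0 / (var0 + var_y)        # gain: how much to trust the observation
        return mu0 + k * (y - mu0), (1 - k) * var0

    print(gaussian_conjugate_update(0.0, 1.0, 2.5, 0.5))   # -> (1.666..., 0.333...)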

34. Li, Xiao (李驍). "Channel estimation and timing synchronization in cooperative communication systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42841835.

35. Selli, Giuseppe. "BGA footprints modeling and physics based via models validation for power and signal integrity applications." Diss., Rolla, Mo.: University of Missouri-Rolla, 2007. http://scholarsmine.umr.edu/thesis/pdf/Selli_09007dcc8040f1b6.pdf.
Abstract:
Thesis (Ph. D.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed December 7, 2007). Includes bibliographical references.

36. Sayeed, Taufique. "Methods and models in signal processing for gait analysis using waist-worn accelerometer: a contribution to Parkinson's disease." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/317390.
Abstract:
Parkinson's disease (PD) is a neurodegenerative disease that predominantly alters patients' motor performance and compromises the speed, automaticity and fluidity of natural movements. After some years, patients fluctuate between periods in which they can move almost normally for some hours (ON state) and periods with motor disorders (OFF state). Reduced step length and inability to step are important symptoms associated with PD. Monitoring patients' step length helps to infer patients' motor state fluctuations during daily life and, therefore, enables neurologists to track the evolution of the disease and improve the medication regimen. In this sense, MEMS accelerometers can be used to detect steps and to estimate step length outside the laboratory setting during unconstrained daily life activities. This thesis presents the original contributions of the author in the field of human movement analysis based on MEMS accelerometers, specifically on step detection and step length estimation in patients with Parkinson's disease. In this thesis, a user-friendly position, the lateral side of the waist, is selected to locate a triaxial accelerometer. The position was selected to enhance comfort and acceptability. Assuming this position, first, a new method for step detection was developed for the signals captured by the accelerometer at this location. The method is validated on healthy persons and patients with Parkinson's disease and compared to current state-of-the-art methods, performing better than the existing ones. Second, current methods of selected step length estimators, originally developed for signals from the lower back close to the L4-L5 region, are modified in order to be adapted to the new sensor position. Results obtained from 25 PD patients are discussed and the effects of calibrating in each motor state are compared. A generic correction factor is also proposed and compared with the best method, to be used instead of individual calibration. Despite variable gait speed and different motor states, the new step detection method achieved an overall accuracy of 96.76% in detecting steps. Comparing the original and adapted methods, the adapted methods perform better than the original ones. The best one multiplies individual correction factors that consider left and right step length separately, providing an average error of 0.033 m. Finally, an adapted inverted pendulum (IP) model based step length estimator is proposed using the signals from the left lateral side of the waist. The model considers the vertical displacement of the waist as an inverted pendulum during the right step. For the left step, the displacement during the single support and double support phases is considered as an inverted pendulum and a standard pendulum, respectively. Results obtained from 25 PD patients are discussed. The validity and reliability of the new model are compared with three existing estimators. Experimental results show that ICE-CETpD estimates step length with higher accuracy than the three best contenders taken from the literature. The mean errors of this method during the OFF and ON states are 0.021 m and 0.029 m, respectively. The RMSE, with standard deviation in parentheses, is 0.029 m (0.02) during the OFF state and 0.038 m (0.027) during the ON state. The intra-class correlations of the proposed estimator with the reference step length are above 0.9 during both motor states. The calibration of the model parameters in each motor state is tested, and it is found that training sessions done with patients in the ON state provide more accurate results than in the OFF state. Given that training is in the ON state, the advantage of this approach is that patients would not need to attend without medication in order to train the method.
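
For context, inverted-pendulum step length estimators of the kind adapted here build on the classical relation between the vertical excursion h of the waist-level centre of mass during a step and the step length; in the usual notation,

    \text{step length} = 2\sqrt{2\,l\,h - h^{2}},

where l is the pendulum (leg) length, so a step length estimate follows from h, which is obtained by double integration of the vertical acceleration.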
APA, Harvard, Vancouver, ISO, and other styles
37

Lockowandt, Karin. "Parsing and Validation of Modelica Models Utilising Fault Diagnosis." Thesis, Linköpings universitet, Fordonssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-137542.

Full text
Abstract:
Models have become an indispensable tool within most industrial sectors and are used, for example, to reduce costs and enhance the performance of a system. Software support for modelling is extensive, and the programming language Modelica is prominent, especially for multi-domain models. Dymola, a commercial program, is built on Modelica and is foremost used for simulation purposes, but many applications for which models are useful are not supported by Dymola. Instead, other tools, e.g. Matlab, could be used to exploit the full potential of a model, which means that it would first need to be translated. This master's thesis examines one possible way to accomplish this. Specifically, the possibility of translating Modelica models via an XML file, generated by Dymola, is examined. The structure and content of this file are explored, and based thereupon a software tool is implemented in Python which successfully translates the models constituting the basis for this thesis. Specifically, the method was developed on a model of a sub-system of the Saab 39 Gripen aircraft. Besides porting models between different languages, it is of great interest to determine how well a model describes the system on which it is based. Hence a new method for model validation is developed using the Matlab Fault Diagnosis Toolbox, which also determines the Matlab syntax of the Modelica translation. The novelty of the developed method, compared to traditional model validation methods, is that it is equation based: it is meant to point out specifically which equations are poorly fitted to validation data. On a simple example model the method was successfully used to isolate a poorly fitted equation. This is accomplished by introducing faults to the equations and generating residuals based on sets of over-determined equations. As a measure of the modelling error, the estimation error of the simulated residuals is used; the residuals are weighted together depending on their fault properties.
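As a toy illustration of the equation-based idea (not the thesis's Matlab Fault Diagnosis Toolbox implementation), the following Python sketch shows how a consistently large residual from an over-determined set of equations can isolate a poorly fitted one; all signals and parameter values are invented:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
y_meas = 2.0 * t + rng.normal(0.0, 0.05, t.size)   # "measurement" of the system

y_eq1 = 2.0 * t          # equation 1: well-calibrated model prediction
y_eq2 = 2.4 * t          # equation 2: poorly fitted (faulty parameter)

for name, y_model in [("eq1", y_eq1), ("eq2", y_eq2)]:
    r = y_meas - y_model                             # residual of this equation
    print(name, "RMS residual:", np.sqrt(np.mean(r ** 2)))
# eq2's much larger RMS residual points it out as the poorly fitted equation.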
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Xin. "Ensemble methods in large vocabulary continuous speech recognition." Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/5797.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2008.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on August 28, 2008). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
39

Santos, Diliana Maria Barradas Rebelo dos. "Human activity recognition for an intelligent knee orthosis." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8493.

Full text
Abstract:
Dissertation submitted to obtain the Master's degree in Biomedical Engineering
Activity recognition with body-worn sensors is a large and growing field of research. In this thesis we evaluate the possibility of recognizing human activities based on data from biosignal sensors placed solely on or under an existing passive knee orthosis, which will provide the information needed to integrate sensors into the orthosis in the future. The development of active orthotic knee devices will allow people to ambulate in a more natural, efficient and less painful manner than they might with a traditional orthosis. Thus, the term 'active orthosis' refers to a device intended to increase the ambulatory ability of a person suffering from a knee pathology by applying corrective forces only when necessary, thereby making the device usable over longer periods of time. The contribution of this work is the evaluation of the ability to recognize activities under these restrictions on sensor placement, as well as a proof of concept for the development of an activity recognition system for an intelligent orthosis. We use accelerometers and a goniometer placed on the orthosis, and electromyography (EMG) sensors placed on the skin under the orthosis, to measure motion and muscle activity respectively. We segment signals into motion primitives semi-automatically and apply Hidden Markov Models (HMM) to classify the isolated motion primitives. We discriminate between seven activities, for example walking up stairs and ascending a hill. In a user study with six participants, we evaluate the system's performance for each of the biosignal modalities alone as well as for their combinations. For the best performing combination, we reach an average person-dependent accuracy of 98% and a person-independent accuracy of 79%.
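A minimal sketch of the HMM classification scheme described above, assuming the third-party hmmlearn package (the thesis does not name its HMM implementation); state counts and feature dimensions are placeholders:

import numpy as np
from hmmlearn import hmm   # assumed library choice, not from the thesis

def train_activity_models(segments_by_activity, n_states=5):
    # segments_by_activity: dict mapping activity name -> list of (T_i, D) arrays,
    # each array being one segmented motion primitive of that activity
    models = {}
    for activity, segs in segments_by_activity.items():
        X = np.vstack(segs)
        lengths = [len(s) for s in segs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)                  # one HMM trained per activity
        models[activity] = m
    return models

def classify(models, segment):
    # assign the isolated primitive to the activity whose HMM scores it highest
    return max(models, key=lambda a: models[a].score(segment))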
APA, Harvard, Vancouver, ISO, and other styles
40

Ablin, Pierre. "Exploration of multivariate EEG /MEG signals using non-stationary models." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT051.

Full text
Abstract:
Independent Component Analysis (ICA) models a set of signals as linear combinations of independent sources. This analysis method plays a key role in electroencephalography (EEG) and magnetoencephalography (MEG) signal processing. Applied to such signals, it allows interesting brain sources to be isolated, located, and separated from artifacts. ICA belongs to the toolbox of many neuroscientists and is part of the processing pipeline of many research articles. Yet the most widely used algorithms date back to the 90s. They are often quite slow and stick to the standard ICA model, without more advanced features. The goal of this thesis is to develop practical ICA algorithms to help neuroscientists. We follow two axes. The first one is speed. We consider the optimization problems solved by two of the ICA algorithms most widely used by practitioners: Infomax and FastICA. We develop a novel technique based on preconditioning the L-BFGS algorithm with Hessian approximations. The resulting algorithm, Picard, is tailored for real data applications, where the independence assumption is never entirely true. On M/EEG data, it converges faster than the 'historical' implementations. Another possibility to accelerate ICA is to use incremental methods, which process a few samples at a time instead of the whole dataset. Such methods have gained huge interest in recent years due to their ability to scale well to very large datasets. We propose an incremental algorithm for ICA with important descent guarantees. As a consequence, the proposed algorithm is simple to use and does not have a critical and hard-to-tune parameter like a learning rate. In a second axis, we propose to incorporate noise in the ICA model. Such a model is notoriously hard to fit under the standard non-Gaussian hypothesis of ICA, and would render estimation extremely long. Instead, we rely on a spectral diversity assumption, which leads to a practical algorithm, SMICA. The noise model opens the door to new possibilities, like finer estimation of the sources and the use of ICA as a statistically sound dimension reduction technique. Thorough experiments on M/EEG datasets demonstrate the usefulness of this approach. All algorithms developed in this thesis are open-sourced and available online. The Picard algorithm is included in the largest M/EEG processing Python library, MNE, and the Matlab library EEGLAB.
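The Picard algorithm mentioned above is distributed as the open-source python-picard package; a minimal usage sketch on simulated data (the mixing setup here is invented for illustration):

import numpy as np
from picard import picard   # pip install python-picard

rng = np.random.RandomState(0)
S = rng.laplace(size=(4, 10000))      # non-Gaussian sources
A = rng.randn(4, 4)                   # mixing matrix
X = A @ S                             # observed (n_channels, n_samples) signals

K, W, Y = picard(X, random_state=0)   # K: whitening, W: unmixing, Y: sources
# The estimated sources Y recover S up to permutation and scaling.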
APA, Harvard, Vancouver, ISO, and other styles
41

Nilsson, Mats. "Building Reconstruction of Digital Height Models with the Markov Chain Monte Carlo Method." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148886.

Full text
Abstract:
Data about the earth is increasing in value and demand from customers, but it is difficult to produce accurately and cheaply. This thesis examines whether it is possible to take low-resolution, distorted 3D data and increase the accuracy of building geometry by performing building reconstruction. Building reconstruction is performed with a Markov chain Monte Carlo method where building primitives are placed iteratively until a good fit is found. The digital height model and pixel classification used are produced by Vricon. The method is able to correctly place primitive models, but often overestimates their dimensions by about 15%.
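A minimal Metropolis-Hastings sketch in the spirit of the method: one rectangular primitive (x, y, w, h, z) is repeatedly perturbed and accepted or rejected against a digital height model. The synthetic height model, score function and proposal scales are invented for illustration and are not Vricon data:

import numpy as np

rng = np.random.default_rng(0)
dhm = np.zeros((100, 100)); dhm[30:60, 40:80] = 10.0   # synthetic height model

def log_score(p):
    x, y, w, h, z = int(p[0]), int(p[1]), int(p[2]), int(p[3]), p[4]
    model = np.zeros_like(dhm)
    model[y:y + h, x:x + w] = z
    return -np.sum((dhm - model) ** 2)        # negative SSE as a log-"posterior"

p = np.array([20.0, 20.0, 20.0, 20.0, 5.0])   # initial primitive (x, y, w, h, z)
for _ in range(5000):
    q = p + rng.normal(0.0, [2.0, 2.0, 2.0, 2.0, 0.5])   # random-walk proposal
    if q[0] < 0 or q[1] < 0 or q[2] < 1 or q[3] < 1 \
            or q[0] + q[2] > 100 or q[1] + q[3] > 100:
        continue                              # reject out-of-bounds primitives
    if np.log(rng.random()) < log_score(q) - log_score(p):
        p = q                                 # Metropolis-Hastings acceptance
print(p)   # typically moves toward x=40, y=30, w=40, h=30, z=10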
APA, Harvard, Vancouver, ISO, and other styles
42

洪觀宇 and Roy Hung. "Time domain analysis and synthesis of cello tones based on perceptual quality and playing gestures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31215348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Di, Mauro Mario. "Statistical models for the characterization, identification and mitigation of distributed attacks in data networks." Doctoral thesis, Universita degli studi di Salerno, 2018. http://hdl.handle.net/10556/3088.

Full text
Abstract:
2016 - 2017
The thesis focuses on statistical approaches to model, mitigate, and prevent distributed network attacks. When dealing with distributed network attacks (and, more generally, with cyber-security problems), three fundamental phases/issues emerge distinctly. The first issue concerns threat propagation across the network, which entails an "avalanche" effect, with the number of infected nodes increasing exponentially as time elapses. The second issue regards the design of proper mitigation strategies (e.g., threat detection, attacker identification) aimed at containing the propagation phenomenon. Finally (and this is the third issue), it is also desirable to act on the system infrastructure to guarantee a conservative design by adding some controlled degree of redundancy, in order to face those cases where the attacker has not yet been defeated. The contributions of the present thesis address the aforementioned issues, namely, propagation, mitigation and prevention of distributed network attacks. A brief summary of the main contributions is reported below. The first contribution concerns the adoption of Kendall's birth-and-death process as an analytical model for threat propagation. Such a model exhibits two main properties: i) it is a stochastic model (a desirable requirement to embody the complexity of real-world networks), whereas many models are purely deterministic; ii) it is able to capture the essential features of threat propagation through a few parameters with a clear physical meaning. By exploiting the remarkable properties of Kendall's model, the exact solution for the optimal resource allocation problem (namely, the optimal mitigation policy) is provided for both the condition of perfectly known parameters and that of unknown parameters (the latter case being solved through a Maximum-Likelihood estimator). The second contribution pertains to the formalization of a novel kind of randomized Distributed Denial of Service (DDoS) attack. In particular, a botnet (a network of malicious entities) is able to emulate normal traffic by picking messages from a dictionary of admissible requests. Such a model makes it possible to quantify the botnet's "learning ability" and to ascertain the real nature of users (normal or bot) via an indicator referred to as the Message Innovation Rate (MIR). Exploiting the considered model, an algorithm that identifies a botnet (possibly) hidden in the network is devised. The results are then extended to the case of a multi-cluster environment, where different botnets are concurrently present in the network, and an algorithm to identify the different clusters is conceived. The third contribution concerns the formalization of the network resilience problem and the consequent design of a prevention strategy. Two statistical frameworks are proposed to model the high-availability requirements of network infrastructures, namely, the Stochastic Reward Network (SRN) and the Universal Generating Function (UGF) frameworks. In particular, since dealing with multidimensional quantities is crucial in the network environment, an extension of the classic UGF framework, called Multi-dimensional UGF (MUGF), is devised. [edited by author]
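A minimal Gillespie-style simulation of Kendall's birth-and-death process as used here for threat propagation: "births" are new infections at rate lam per infected node, "deaths" are recoveries/cleanups at rate mu per infected node. The parameter values are illustrative, not taken from the thesis:

import numpy as np

def simulate(lam=0.5, mu=0.3, n0=1, t_max=20.0, seed=0):
    rng = np.random.default_rng(seed)
    t, n, traj = 0.0, n0, [(0.0, n0)]
    while t < t_max and n > 0:
        rate = (lam + mu) * n                 # total event rate with n infected
        t += rng.exponential(1.0 / rate)      # exponential waiting time to next event
        n += 1 if rng.random() < lam / (lam + mu) else -1   # birth or death
        traj.append((t, n))
    return traj

# With lam > mu the infected population grows exponentially on average,
# reproducing the "avalanche" effect described in the abstract.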
XVI n.s.
APA, Harvard, Vancouver, ISO, and other styles
44

Al-Muhtaseb, Husni A., Sabri A. Mahmoud, and Rami S. R. Qahwaji. "Recognition of off-line printed Arabic text using Hidden Markov Models." Elsevier, 2008. http://hdl.handle.net/10454/4105.

Full text
Abstract:
This paper describes a technique for automatic recognition of off-line printed Arabic text using Hidden Markov Models. In this work different sizes of overlapping and non-overlapping hierarchical windows are used to generate 16 features from each vertical sliding strip. Eight different Arabic fonts were used for testing (viz. Arial, Tahoma, Akhbar, Thuluth, Naskh, Simplified Arabic, Andalus, and Traditional Arabic). It was experimentally proven that different fonts have their highest recognition rates at different numbers of states (5 or 7) and codebook sizes (128 or 256). Arabic text is cursive, and each character may have up to four different shapes based on its location in a word. This research work considered each shape as a different class, resulting in a total of 126 classes (compared to 28 Arabic letters). The achieved average recognition rates were between 98.08% and 99.89% for the eight experimental fonts. The main contributions of this work are the novel hierarchical sliding window technique using only 16 features for each sliding window, considering each shape of Arabic characters as a separate class, bypassing the need for segmenting Arabic text, and its applicability to other languages.
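One plausible reading of the hierarchical sliding-window features is sketched below; the exact window layout, strip width and overlap in the paper may differ, so this should be treated as an assumption-laden illustration rather than the authors' implementation:

import numpy as np

def strip_features(line_img, x, strip_w=4):
    # 16 features for the vertical strip of a binary text-line image at column x
    strip = line_img[:, x:x + strip_w]
    H = strip.shape[0]
    feats = []
    for n_win in (1, 2, 4, 8):                       # hierarchy of window counts
        for w in np.array_split(strip, n_win, axis=0):
            feats.append(w.sum())                    # ink count per window: 15 features
    feats.append(strip[H // 4: 3 * H // 4].sum())    # one overlapping centre window -> 16
    return np.asarray(feats, dtype=float)

# Sliding the strip across the line yields the observation sequence fed to the HMM:
# X = [strip_features(line_img, x) for x in range(0, line_img.shape[1] - 4, 4)]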
APA, Harvard, Vancouver, ISO, and other styles
45

Rastgoufard, Rastin. "The Interacting Multiple Models Algorithm with State-Dependent Value Assignment." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1477.

Full text
Abstract:
The value of a state is a measure of its worth, so that, for example, waypoints have high value and regions inside of obstacles have very small value. We propose two methods of incorporating world information as state-dependent modifications to the interacting multiple models (IMM) algorithm, and then we use a game's player-controlled trajectories as ground truths to compare the normal IMM algorithm to versions with our proposed modifications. The two methods involve modifying the model probabilities in the update step and modifying the transition probability matrix in the mixing step based on the assigned values of different target states. The state-dependent value assignment modifications are shown experimentally to perform better than the normal IMM algorithm in both estimating the target's current state and predicting the target's next state.
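A sketch of the second modification described above: scaling the columns of the IMM transition probability matrix (TPM) by the assigned value of each destination model's predicted state, then renormalising; the standard mixing-probability computation is included for reference. The value function is a placeholder and the filter internals are omitted:

import numpy as np

def value_weighted_tpm(tpm, predicted_states, value):
    # Weight each TPM column by the value of the destination model's predicted
    # state, then renormalise rows so they remain probability distributions.
    v = np.array([value(x) for x in predicted_states])
    M = tpm * v[np.newaxis, :]
    return M / M.sum(axis=1, keepdims=True)

def mixing_probabilities(tpm, mu):
    # Standard IMM mixing step: mu_{i|j} = tpm[i, j] * mu[i] / c[j],
    # where c[j] = sum_i tpm[i, j] * mu[i] are the predicted model probabilities.
    c = tpm.T @ mu
    return (tpm * mu[:, np.newaxis]) / c[np.newaxis, :], c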
APA, Harvard, Vancouver, ISO, and other styles
46

Kelsall, A. "Flexible Shape Models for Marine Animal Detection in Underwater Images." Thesis, Honours thesis, University of Tasmania, 2005. https://eprints.utas.edu.au/248/1/afkThesis_FINAL.pdf.

Full text
Abstract:
Many industries are benefiting from computer automation; however, the area of image analysis is still limited. The process of finding a potential object in an image is hard in itself, let alone classifying it. Automating these tasks would significantly reduce the time they take to complete, thus allowing much more data to be processed. This becomes a problem when data is collected faster than it can be analysed. Images and video sequences are captured for different purposes and need to be manually processed in order to discover their contents. The fishing industry is a perfect example of this. A fish farm needs to know the average size of the fish in a ring. At present, this involves either manually taking a sample of fish from the ring and measuring them, or taking a series of stereoscopic images and manually tracing a sample of fish. By using active shape models, the process of tracing a fish sample can be automated. The Active Shape Model (ASM) Toolkit is an implementation of active appearance models, an advanced type of active shape model. The wrapper application that was written as part of this research allows a more streamlined process to input region data into the ASM Toolkit for searching. Once a sample has been matched, it is possible to use the key points around it as the basis for further calculations, such as its size and weight. The ASM Toolkit and the wrapper program demonstrate how the process of identifying a fish in an image can be automated and that it is possible to calculate the size and weight of fish. In an ideal manual test, the most effective model matched 68% of samples, and in the automated test it matched 50% of the samples. If the program can run over several days collecting appropriate samples, the model will be able to match enough fish to estimate the average size and weight within a ring. It is shown that the types of samples used in training the model affect performance more than the number of samples used.
APA, Harvard, Vancouver, ISO, and other styles
47

Vono, Maxime. "Asymptotically exact data augmentation : models and Monte Carlo sampling with applications to Bayesian inference." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0098.

Full text
Abstract:
Numerous machine learning and signal/image processing tasks can be formulated as statistical inference problems. As an archetypal example, recommendation systems rely on the completion of a partially observed user/item matrix, which can be conducted via the joint estimation of latent factors and activation coefficients. More formally, the object to be inferred is usually defined as the solution of a variational or stochastic optimization problem. In particular, within a Bayesian framework, this solution is defined as the minimizer of a cost function referred to as the posterior loss. In the simple case where this function is chosen as quadratic, the Bayesian estimator is known to be the posterior mean, which minimizes the mean square error and is defined as an integral with respect to the posterior distribution. In most real-world applicative contexts, computing such integrals is not straightforward. One alternative lies in making use of Monte Carlo integration, which consists in approximating any expectation under the posterior distribution by an empirical average involving samples from the posterior. This so-called Monte Carlo integration requires the availability of efficient algorithmic schemes able to generate samples from a desired posterior distribution. A huge literature dedicated to random variable generation has proposed various Monte Carlo algorithms. For instance, Markov chain Monte Carlo (MCMC) methods, particular instances of which are the famous Gibbs sampler and the Metropolis-Hastings algorithm, define a wide class of algorithms which allow a Markov chain to be generated with the desired stationary distribution. Despite their seeming simplicity and genericity, conventional MCMC algorithms may be computationally inefficient for large-scale, distributed and/or highly structured problems. The main objective of this thesis consists in introducing new models and related MCMC approaches to alleviate these issues. The intractability of the posterior distribution is tackled by proposing a class of approximate but asymptotically exact augmented (AXDA) models. Then, two Gibbs samplers targeting approximate posterior distributions based on the AXDA framework are proposed, and their benefits are illustrated on challenging signal processing, image processing and machine learning problems. A detailed theoretical study of the convergence rates associated with one of these two Gibbs samplers is also conducted and reveals explicit dependences with respect to the dimension, the condition number of the negative log-posterior and the prescribed precision. In this work, we also pay attention to the feasibility of the sampling steps involved in the proposed Gibbs samplers. Since one of these steps requires sampling from a possibly high-dimensional Gaussian distribution, we review and unify existing approaches by introducing a framework which stands for the stochastic counterpart of the celebrated proximal point algorithm. This strong connection between simulation and optimization is not isolated in this thesis. Indeed, we also show that the derived Gibbs samplers share tight links with quadratic penalty methods and that the AXDA framework yields a class of envelope functions related to the Moreau envelope.
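A toy one-dimensional sketch of the AXDA idea: the target exp(-f(x) - g(x)) is split with an auxiliary variable z and a Gaussian coupling of width rho, so both Gibbs conditionals become Gaussian. The quadratic potentials and the value of rho are invented for the example; the AXDA construction in the thesis is far more general:

import numpy as np

rng = np.random.default_rng(0)
a, b, rho = 1.0, 2.0, 0.1        # f(x) = a*x^2/2, g(z) = b*z^2/2, coupling width rho

x, z, samples = 0.0, 0.0, []
for _ in range(10000):
    # x | z ~ N(mean, var) with precision a + 1/rho (Gaussian conditional)
    var_x = 1.0 / (a + 1.0 / rho)
    x = rng.normal(var_x * z / rho, np.sqrt(var_x))
    # z | x is Gaussian by the same argument, with precision b + 1/rho
    var_z = 1.0 / (b + 1.0 / rho)
    z = rng.normal(var_z * x / rho, np.sqrt(var_z))
    samples.append(x)
# As rho -> 0 the marginal of x approaches the exact target N(0, 1/(a + b)),
# which is the "asymptotically exact" property of the augmentation.
print(np.var(samples), 1.0 / (a + b / (b * rho + 1.0)))   # empirical vs. exact variance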
APA, Harvard, Vancouver, ISO, and other styles
48

Barry, Brendan(Brendan Cael). "Distributional models of ocean carbon export." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122321.

Full text
Abstract:
Thesis: Ph. D., Joint Program in Physical Oceanography (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 137-153).
Each year, surface ocean ecosystems export sinking particles containing gigatons of carbon into the ocean's interior. This particle flux connects the entire ocean microbiome and constitutes a fundamental aspect of marine microbial ecology and biogeochemical cycles. Particle flux is also variable and intricately complex, impeding its mechanistic or quantitative description. In this thesis we pair compilations of available data with novel mathematical models to explore the relationships between particle flux and other key variables - temperature, net primary production, and depth. Particular use is made of (probability) distributional descriptions of quantities that are known to vary appreciably. First, using established thermodynamic dependencies for primary production and respiration, a simple mechanistic model is developed relating export efficiency (i.e. the fraction of primary production that is exported out of the surface ocean via particle flux) to temperature.
The model accounts for the observed variability in export efficiency due to temperature without idealizing out the remaining variability that evinces particle flux's complexity. This model is then used to estimate the metabolically-driven change in average export efficiency over the era of long-term global sea surface temperature records, and it is shown that the underlying mechanism may help explain glacial-interglacial atmospheric carbon dioxide drawdown. The relationship between particle flux and net primary production is then explored. Given that these are inextricable but highly variable and measured on different effective scales, it is hypothesized that a quantitative relationship emerges between collections of the two measurements - i.e. that they can be related not measurement-by-measurement but rather via their probability distributions.
It is shown that on large spatial or temporal scales both are consistent with lognormal distributions, as expected if each is considered as the collective result of many subprocesses. A relationship is then derived between the log-moments of their distributions and agreement is found between independent estimates of this relationship, suggesting that upper ocean particle flux is predictable from net primary production on large spatiotemporal scales. Finally, the attenuation of particle flux with depth is explored. It is shown that while several particle flux-versus-depth models capture observations equivalently, these carry very different implications mechanistically and for magnitudes of export out of the surface ocean. A model is then proposed for this relationship that accounts for measurements of both the flux profile and of the settling velocity distribution of particulate matter, and is thus more consistent with and constrained by empirical knowledge.
Possible future applications of these models are discussed, as well as how they could be tested and/or constrained observationally.
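A numerical illustration of the log-moment relationship described above: if particle flux is the product of (approximately independent) lognormal factors, the log-means and log-variances simply add. The factorisation into export efficiency times production and all parameter values are assumptions made for this demonstration, not estimates from the thesis:

import numpy as np

rng = np.random.default_rng(1)
P = rng.lognormal(mean=3.0, sigma=0.8, size=100_000)    # net primary production
e = rng.lognormal(mean=-2.0, sigma=0.4, size=100_000)   # export efficiency
F = e * P                                               # particle flux as their product

print(np.mean(np.log(F)), 3.0 - 2.0)                    # ~ mu_P + mu_e = 1.0
print(np.var(np.log(F)), 0.8**2 + 0.4**2)               # ~ sigma_P^2 + sigma_e^2 = 0.8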
by Brendan Barry. Ph.D., Joint Program in Physical Oceanography (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution).
APA, Harvard, Vancouver, ISO, and other styles
49

Bengtsson, Jerker. "Models and Methods for Development of DSP Applications on Manycore Processors." Doctoral thesis, Högskolan i Halmstad, Centrum för forskning om inbyggda system (CERES), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-14706.

Full text
Abstract:
Advanced digital signal processing systems require specialized high-performance embedded computer architectures. The term high-performance translates to large amounts of data and computations per time unit. The term embedded further implies requirements on physical size and power efficiency. Thus the requirements are of both a functional and a non-functional nature. This thesis addresses the development of high-performance digital signal processing systems relying on manycore technology. We propose building two-level hierarchical computer architectures for this domain of applications. Further, we outline a tool flow based on methods and analysis techniques for automated, multi-objective mapping of such applications on distributed-memory manycore processors. In particular, the focus is put on how to provide a means for tunable strategies for mapping task graphs on array-structured distributed-memory manycores with respect to given application constraints. We argue for code mapping strategies based on predicted execution performance, which can be used in an auto-tuning feedback loop or to guide manual tuning directed by the programmer. Automated parallelization, optimisation and mapping to a manycore processor benefit from the use of a concurrent programming model as the starting point. Such a model allows the programmer to express different types and granularities of parallelism as well as computation characteristics of importance in the addressed class of applications. The programming model should also abstract away machine-dependent hardware details. The analytical study of WCDMA baseband processing in radio base stations, presented in this thesis, suggests dataflow models as a good match to the characteristics of the application and as an execution model abstracting computations on a manycore. Construction of portable tools further requires a manycore machine model and an intermediate representation. The models are needed in order to decouple the algorithms used to transform and map application software from the hardware. We propose a manycore machine model that captures common hardware resources, as well as resource-dependent performance metrics for parallel computation and communication. Further, we have developed a multifunctional intermediate representation, which can be used as a source for code generation and for dynamic execution analysis. Finally, we demonstrate how we can dynamically analyse execution using abstract interpretation on the intermediate representation. It is shown that the performance predictions can be used to accurately rank different mappings by best throughput or shortest end-to-end computation latency.
APA, Harvard, Vancouver, ISO, and other styles
50

Loscos, Àlex. "Spectral processing of the singing voice." Doctoral thesis, Universitat Pompeu Fabra, 2007. http://hdl.handle.net/10803/7542.

Full text
Abstract:
This dissertation is centered on the digital processing of the singing voice, more concretely on the analysis, transformation and synthesis of this type of voice in the spectral domain, with special emphasis on those techniques relevant for music applications.

The thesis presents new formulations and procedures for both describing and transforming those attributes of the singing voice that can be regarded as voice-specific. The thesis includes, among others, algorithms for rough and growl analysis and transformation, breathiness estimation and emulation, pitch detection and modification, nasality identification, voice-to-melody conversion, voice beat onset detection, singing voice morphing, and voice-to-instrument transformation; some of them are exemplified with concrete applications.
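As a small illustration of the kind of analysis involved, here is a minimal autocorrelation pitch detector in Python; this is a generic textbook method, not the algorithm developed in the thesis, and the frame size and search range are arbitrary choices:

import numpy as np

def detect_pitch(frame, fs, fmin=80.0, fmax=1000.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # autocorrelation
    lo, hi = int(fs / fmax), int(fs / fmin)       # lag range for the vocal register
    lag = lo + np.argmax(ac[lo:hi])               # best-correlated lag in range
    return fs / lag                               # fundamental frequency in Hz

fs = 44100
t = np.arange(2048) / fs
frame = np.sin(2 * np.pi * 220.0 * t)             # synthetic 220 Hz "voice" frame
print(round(detect_pitch(frame, fs), 1))          # approximately 220 Hz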
APA, Harvard, Vancouver, ISO, and other styles