Dissertations / Theses on the topic 'Time series analysis'

To see the other types of publications on this topic, follow the link: Time series analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Time series analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

CHEMLA, ROMEU SANTOS AXEL CLAUDE ANDRE'. "MANIFOLD REPRESENTATIONS OF MUSICAL SIGNALS AND GENERATIVE SPACES." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/700444.

Full text
Abstract:
Among the diverse research fields within computer music, synthesis and generation of audio signals epitomize the cross-disciplinarity of this domain, jointly nourishing both scientific and artistic practices since its creation. Inherent in computer music since its genesis, audio generation has inspired numerous approaches, evolving both with musical practices and scientific/technical advances. Moreover, some synthesis processes also naturally handle the reverse process, named analysis, such that synthesis parameters can also be partially or totally extracted from actual sounds, providing an alternative representation of the analyzed audio signals. On top of that, the recent rise of machine learning algorithms has earnestly questioned the field of scientific research, bringing powerful data-centred methods that raised several epistemological questions amongst researchers, in spite of their efficiency. In particular, a family of machine learning methods, called generative models, focuses on the generation of original content using features extracted from an existing dataset. In that case, such methods not only questioned previous approaches to generation, but also the way of integrating these methods into existing creative processes. While these new generative frameworks are progressively being introduced in the domain of image generation, the application of such generative techniques in audio synthesis is still marginal. In this work, we aim to propose a new audio analysis-synthesis framework based on these modern generative models, enhanced by recent advances in machine learning. We first review existing approaches, both in sound synthesis and in generative machine learning, and focus on how our work inserts itself in both practices and what can be expected from their collation. Subsequently, we focus a little more on generative models, and on how modern advances in the domain can be exploited to allow us to learn complex sound distributions, while being sufficiently flexible to be integrated in the creative flow of the user. We then propose an inference / generation process, mirroring the analysis/synthesis paradigms that are natural in the audio processing domain, using latent models that are based on a continuous higher-level space, which we use to control the generation. We first provide preliminary results of our method applied to spectral information, extracted from several datasets, and evaluate the obtained results both qualitatively and quantitatively. Subsequently, we study how to make these methods more suitable for learning audio data, tackling successively three different aspects. First, we propose two different latent regularization strategies specifically designed for audio, based on signal / symbol translation and on perceptual constraints. Then, we propose different methods to address the inner temporality of musical signals, based on the extraction of multi-scale representations and on prediction, which allow the obtained generative spaces to also model the dynamics of the signal. As a last chapter, we shift from a scientific approach to a more research & creation-oriented point of view: first, we describe the architecture and the design of our open-source library, vsacids, aiming to be used by expert and non-expert music makers as an integrated creation tool. 
Then, we propose a first musical use of our system through the creation of a real-time performance, called aego, based jointly on our framework vsacids and on an explorative agent using reinforcement learning, trained during the performance. Finally, we draw some conclusions on the different ways to improve and reinforce the proposed generation method, as well as on possible further creative applications.
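The latent inference/generation process described in this abstract can be pictured with a variational autoencoder over spectral frames, which is one common way to realize a continuous, controllable latent space. The sketch below is an illustrative assumption (layer sizes, data, and training loop are made up) and not the thesis's actual models or its vsacids library.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralVAE(nn.Module):
    """Toy VAE mapping spectral frames to a low-dimensional latent space."""
    def __init__(self, n_bins=513, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_bins), nn.Softplus())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")                    # spectral reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())    # latent regularisation
    return recon + kld

# Placeholder data: a batch of magnitude-spectrum frames.
frames = torch.rand(64, 513)
model = SpectralVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                                  # a few illustrative training steps
    x_hat, mu, logvar = model(frames)
    loss = vae_loss(frames, x_hat, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# "Synthesis": decode a point chosen in the latent space.
with torch.no_grad():
    new_frame = model.dec(torch.zeros(1, 16))
```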
APA, Harvard, Vancouver, ISO, and other styles
2

Pope, Kenneth James. "Time series analysis." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yin, Jiang Ling. "Financial time series analysis." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gore, Christopher Mark. "A time series classifier." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/Gore_09007dcc804e6461.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2008.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed April 29, 2008). Includes bibliographical references (p. 53-55).
APA, Harvard, Vancouver, ISO, and other styles
5

Bastos, Camila Bianka Silva. "Estudo dos impactos de um sistema fotovoltaico conectado à rede elétrica utilizando análises QSTS." Universidade do Estado de Santa Catarina, 2015. http://tede.udesc.br/handle/handle/2081.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This dissertation presents a study of the operation of two different three-phase grid-connected test grids with the connection of a 1 MWp photovoltaic system. Two analysis methods are used to evaluate the impacts of this photovoltaic system: conventional static analysis and the analysis known as Quasi-Static Time-Series (QSTS) analysis. Although every grid has unique characteristics, it is important to use test grids, which simulate real grid characteristics, to analyze the kinds of problems that can occur and then look for alternatives if necessary. The impacts evaluated relate to system losses, minimized through a study of the allocation of generation on the grid, the voltage profile, and the tap position curve when automatic load tap changers are used. It was verified that the photovoltaic system interconnection point is the most influenced one after its connection to the grid. The Quasi-Static Time-Series analysis allows the correct evaluation of the load-generation interaction, running the time-series power flow on estimated data for the load and irradiance curves over 168 hours. The conventional static analysis only considers critical operating conditions, such as minimum and maximum load with no generation or maximum generation, and does not evaluate the different scenarios that occur in reality. Photovoltaic systems can bring many advantages to electric systems, such as improvement of the voltage profile at the final consumer, reduction of line losses, and reduction of environmental impacts. However, with the increase of distributed photovoltaic generation on the electrical grid, it is necessary to be aware of the impacts that this may cause by performing interconnection studies.
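To illustrate the difference between a static snapshot study and a quasi-static time-series (QSTS) study, the toy loop below repeats a crude two-bus voltage calculation over hypothetical 168-hour load and irradiance curves. The feeder, parameters, and profiles are invented for illustration and are not the test grids or tools used in the dissertation.

```python
import numpy as np

# Hypothetical 168-hour load and solar irradiance profiles (per-unit).
hours = np.arange(168)
load_pu = 0.6 + 0.3 * np.sin(2 * np.pi * (hours % 24) / 24 - np.pi / 2) ** 2
irradiance_pu = np.clip(np.sin(np.pi * ((hours % 24) - 6) / 12), 0, None)

V_source = 1.0      # per-unit source voltage
R_line = 0.05       # per-unit line resistance (toy value)
P_load_max = 1.0    # per-unit peak load
P_pv_max = 0.8      # per-unit PV rating

voltages = []
for t in range(168):
    # Net active power drawn at the PV connection point at this hour.
    p_net = P_load_max * load_pu[t] - P_pv_max * irradiance_pu[t]
    # Crude voltage-drop approximation for a two-bus feeder.
    v_bus = V_source - R_line * p_net
    voltages.append(v_bus)

voltages = np.array(voltages)
print("min / max bus voltage over the week:", voltages.min(), voltages.max())
```

A static study would only evaluate the extreme combinations (for example maximum load with zero generation), whereas the loop above traces the voltage through every hour of the week.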
APA, Harvard, Vancouver, ISO, and other styles
6

Ishida, Isao. "Essays on financial time series /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3153696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lam, Vai Iam. "Time domain approach in time series analysis." Thesis, University of Macau, 2000. http://umaclib3.umac.mo/record=b1446633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Tak-wai Wilson. "Forecasting of tide heights : an application of smoothness priors in time series modelling /." [Hong Kong] : University of Hong Kong, 1991. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13154357.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Malan, Karien. "Stationary multivariate time series analysis." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-06132008-173800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Huang, Naijing. "Essays in time series analysis." Thesis, Boston College, 2015. http://hdl.handle.net/2345/bc-ir:104627.

Full text
Abstract:
Thesis advisor: Zhijie Xiao
My dissertation has three chapters. The first chapter is about estimation and inference for DSGE models; the second chapter is about testing financial contagion among stock markets; and in the last chapter, I propose a new econometric method to forecast inflation intervals. The first chapter studies proper inference and asymptotically accurate structural break tests for parameters in Dynamic Stochastic General Equilibrium (DSGE) models in a maximum likelihood framework. Two empirically relevant issues may invalidate the conventional inference procedures and structural break tests for parameters in DSGE models: (i) weak identification and (ii) moderate parameter instability. The DSGE literature focuses on the weak identification issue but ignores the impact of moderate parameter instability. This paper contributes to the literature by considering the joint impact of the two issues in the DSGE framework. The main results are: in a weakly identified DSGE model, (i) moderate instability in weakly identified parameters does not affect the validity of standard inference procedures or structural break tests; (ii) however, if strongly identified parameters feature moderate time variation, the asymptotic distributions of the test statistics deviate from the standard ones and are no longer nuisance-parameter free, which renders standard inference procedures and structural break tests invalid and provides practitioners with misleading inference results; (iii) as long as the strongly identified parameters are concentrated out, their instability effect disappears as the sample size goes to infinity, which recovers the power of conventional inference procedures and structural break tests for weakly identified parameters. To illustrate these results, I simulate and estimate a modified version of the Hansen (1985) Real Business Cycle model and find that the theoretical results provide reasonable guidance for finite-sample inference of the parameters in the model. I show that confidence intervals that incorporate weak identification and moderate parameter instability reduce the biases of confidence intervals that ignore those effects. While I focus on DSGE models in this paper, all of the theoretical results can be applied to any linear dynamic models or nonlinear GMM models. In the second chapter, regarding the asymmetric and leptokurtic behavior of financial data, we propose a new contagion test in the quantile regression framework that is robust to model misspecification. Unlike conventional correlation-based tests, the proposed quantile contagion test allows us to investigate stock market contagion at various quantiles, not only at the mean. We show that the quantile contagion test can detect a contagion effect that is possibly ignored by correlation-based tests. A wide range of simulation studies shows that the proposed test is superior to the correlation-based tests in terms of size and power. We compare our test with correlation-based tests using three real data sets: the 1994 Tequila crisis, the 1997 Asia crisis, and the 2001 Argentina crisis. Empirical results show substantial differences between the two types of tests. In the third chapter, I use a quantile Bayesian approach to produce interval forecasts for inflation in a semi-parametric framework. This method introduces a Bayesian solution to the quantile framework for two reasons: 1. It yields more efficient quantile estimates when an informative prior is used (He and Yang (2012)); 2. 
We use a Markov Chain Monte Carlo (MCMC) algorithm to generate samples of the posterior distribution for unknown parameters and take the mean or mode as the estimates. This MCMC estimator takes advantage of numerical integration over standard numerical-differentiation-based optimization, especially when the likelihood function is complicated and multi-modal. Simulation results show better interval forecasting performance for the quantile Bayesian approach than for the commonly used parametric approach.
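The second chapter's contagion test is built on quantile regression. As a rough, hypothetical schematic of the underlying idea, estimating how one market's returns enter another's conditional quantiles inside and outside a crisis window, the fragment below uses statsmodels' QuantReg on simulated data; it is not the test statistic developed in the dissertation.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 1000
crisis = (np.arange(n) > 800).astype(float)              # hypothetical crisis window
r_source = rng.standard_t(df=4, size=n)                  # returns of the "source" market
# Target market: the dependence on the source strengthens during the crisis.
r_target = 0.2 * r_source + 0.5 * crisis * r_source + rng.standard_t(df=4, size=n)

X = sm.add_constant(np.column_stack([r_source, crisis * r_source]))
for q in (0.05, 0.5, 0.95):
    res = QuantReg(r_target, X).fit(q=q)
    coef = np.asarray(res.params)[2]                     # crisis-interaction coefficient
    print(f"quantile {q}: interaction coefficient = {coef:.3f}")
```

A correlation-based test looks only at the centre of the joint distribution; inspecting several quantiles, as above, is what allows tail-specific contagion to show up.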
Thesis (PhD) — Boston College, 2015
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Economics
APA, Harvard, Vancouver, ISO, and other styles
11

Alagon, J. "Discriminant analysis for time series." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Warnes, Alexis. "Diagnostics in time series analysis." Thesis, Durham University, 1994. http://etheses.dur.ac.uk/5159/.

Full text
Abstract:
The portmanteau diagnostic test for goodness of model fit is studied. It is found that the true variances of the estimated residual autocorrelation function are potentially deflated considerably below their asymptotic level, and exhibit high correlations with each other. This suggests a new portmanteau test, ignoring the first p + q residual autocorrelation terms and hence approximating the asymptotic chi-squared distribution more closely. Simulations show that this alternative portmanteau test produces greater accuracy in its estimated significance levels, especially in small samples. Theory and discussions follow, pertaining to both the Dynamic Linear Model and the Bayesian method of forecasting. The concept of long-term equivalence is defined. The difficulties with the discounting approach in the DLM are then illustrated through an example, before deriving equations for the step-ahead forecast distribution which could, instead, be used to estimate the evolution variance matrix W_t. Non-uniqueness of W in the constant time series DLM is the principal drawback with this idea; however, it is proven that in any class of long-term equivalent models only p degrees of freedom can be fixed in W, leading to a potentially diagonal form for this matrix. The bias in the k-th step-ahead forecast error produced by any TSDLM variance (mis)specification is calculated. This yields the variances and covariances of the forecast error distribution; given sample estimates of these, it proves possible to solve equations arising from these calculations both for V and p elements of W. Simulations, and a "head-to-head" comparison, for the frequently-applied steady model illustrate the accuracy of the predictive calculations, both in the convergence properties of the sample (co)variances, and the estimates Ṽ and Ŵ. The method is then applied to a 2-dimensional constant TSDLM. Further simulations illustrate the success of the approach in producing accurate on-line estimates for the true variance specifications within this widely-used model.
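The modified portmanteau statistic sums squared residual autocorrelations while skipping the first p + q lags. A minimal sketch of such a statistic, using a Ljung-Box-style weighting as an assumed form rather than the thesis's exact definition:

```python
import numpy as np
from scipy.stats import chi2

def residual_acf(resid, max_lag):
    resid = resid - resid.mean()
    denom = np.sum(resid ** 2)
    return np.array([np.sum(resid[k:] * resid[:-k]) / denom
                     for k in range(1, max_lag + 1)])

def truncated_portmanteau(resid, p, q, m):
    """Portmanteau statistic using residual autocorrelations at lags p+q+1 .. m only."""
    n = len(resid)
    r = residual_acf(resid, m)
    lags = np.arange(p + q + 1, m + 1)
    stat = n * (n + 2) * np.sum(r[lags - 1] ** 2 / (n - lags))
    dof = m - p - q                      # number of autocorrelations actually used
    return stat, chi2.sf(stat, dof)

# Example on white-noise "residuals", standing in for a hypothetical ARMA(1,1) fit.
rng = np.random.default_rng(1)
resid = rng.standard_normal(200)
stat, pval = truncated_portmanteau(resid, p=1, q=1, m=20)
print(stat, pval)
```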
APA, Harvard, Vancouver, ISO, and other styles
13

Chan, Hon Tsang. "Discriminant analysis of time series." Thesis, University of Newcastle Upon Tyne, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Fulcher, Benjamin D. "Highly comparative time-series analysis." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:642b65cf-4686-4709-9f9d-135e73cfe12e.

Full text
Abstract:
In this thesis, a highly comparative framework for time-series analysis is developed. The approach draws on large, interdisciplinary collections of over 9000 time-series analysis methods, or operations, and over 30 000 time series, which we have assembled. Statistical learning methods were used to analyze structure in the set of operations applied to the time series, allowing us to relate different types of scientific methods to one another, and to investigate redundancy across them. An analogous process applied to the data allowed different types of time series to be linked based on their properties, and in particular to connect time series generated by theoretical models with those measured from relevant real-world systems. In the remainder of the thesis, methods for addressing specific problems in time-series analysis are presented that use our diverse collection of operations to represent time series in terms of their measured properties. The broad utility of this highly comparative approach is demonstrated using various case studies, including the discrimination of pathological heart beat series, classification of Parkinsonian phonemes, estimation of the scaling exponent of self-affine time series, prediction of cord pH from fetal heart rates recorded during labor, and the assignment of emotional content to speech recordings. Our methods are also applied to labeled datasets of short time-series patterns studied in temporal data mining, where our feature-based approach exhibits benefits over conventional time-domain classifiers. Lastly, a feature-based dimensionality reduction framework is developed that links dependencies measured between operations to the number of free parameters in a time-series model that could be used to generate a time-series dataset.
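The feature-based approach above represents each time series by a vector of summary statistics and feeds those vectors to an ordinary classifier. A toy sketch of that idea, with a handful of ad hoc features rather than the thesis's library of thousands of operations:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def features(x):
    """A few simple 'operations' summarising one time series."""
    diffs = np.diff(x)
    return np.array([x.mean(), x.std(),
                     np.corrcoef(x[:-1], x[1:])[0, 1],   # lag-1 autocorrelation
                     np.mean(np.abs(diffs)),             # mean absolute change
                     np.max(x) - np.min(x)])             # range

rng = np.random.default_rng(0)
# Two synthetic classes: white noise versus a smoother AR(1) process.
series, labels = [], []
for _ in range(100):
    series.append(rng.standard_normal(200)); labels.append(0)
    ar = np.zeros(200)
    for t in range(1, 200):
        ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()
    series.append(ar); labels.append(1)

X = np.vstack([features(s) for s in series])
y = np.array(labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```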
APA, Harvard, Vancouver, ISO, and other styles
15

Hwang, Peggy May T. "Factor analysis of time series /." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487944660933305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Alexander, Miranda Abhilash. "Spectral factor model for time series learning." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209812.

Full text
Abstract:
Today's computerized processes generate massive amounts of streaming data. In many applications, data is collected for modeling the processes. The process model is hoped to drive objectives such as decision support, data visualization, business intelligence, automation and control, pattern recognition and classification, etc. However, we face significant challenges in data-driven modeling of processes. Apart from the errors, outliers and noise in the data measurements, the main challenge is due to a large dimensionality, which is the number of variables each data sample measures. The samples often form a long temporal sequence called a multivariate time series where any one sample is influenced by the others.

We wish to build a model that will ensure robust generation, reviewing, and representation of new multivariate time series that are consistent with the underlying process.

In this thesis, we adopt a modeling framework to extract characteristics from multivariate time series that correspond to dynamic variation-covariation common to the measured variables across all the samples. Those characteristics of a multivariate time series are named its 'commonalities' and a suitable measure for them is defined. What makes the multivariate time series model versatile is the assumption regarding the existence of a latent time series of known or presumed characteristics and much lower dimensionality than the measured time series; the result is the well-known 'dynamic factor model'.

Original variants of existing methods for estimating the dynamic factor model are developed: The estimation is performed using the frequency-domain equivalent of the dynamic factor model named the 'spectral factor model'. To estimate the spectral factor model, ideas are sought from the asymptotic theory of spectral estimates. This theory is used to attain a probabilistic formulation, which provides maximum likelihood estimates for the spectral factor model parameters. Then, maximum likelihood parameters are developed with all the analysis entirely in the spectral-domain such that the dynamically transformed latent time series inherits the commonalities maximally.

The main contribution of this thesis is a learning framework using the spectral factor model. We term learning as the ability of a computational model of a process to robustly characterize the data the process generates for purposes of pattern matching, classification and prediction. Hence, the spectral factor model could be claimed to have learned a multivariate time series if the latent time series when dynamically transformed extracts the commonalities reliably and maximally. The spectral factor model will be used for mainly two multivariate time series learning applications: First, real-world streaming datasets obtained from various processes are to be classified; in this exercise, human brain magnetoencephalography signals obtained during various cognitive and physical tasks are classified. Second, the commonalities are put to test by asking for reliable prediction of a multivariate time series given its past evolution; share prices in a portfolio are forecasted as part of this challenge.

For both spectral factor modeling and learning, an analytical solution as well as an iterative solution are developed. While the analytical solution is based on low-rank approximation of the spectral density function, the iterative solution is based on the expectation-maximization algorithm. For the human brain signal classification exercise, a strategy for comparing similarities between the commonalities for various classes of multivariate time series processes is developed. For the share price prediction problem, a vector autoregressive model whose parameters are enriched with the maximum likelihood commonalities is designed. In both these learning problems, the spectral factor model gives commendable performance with respect to competing approaches.
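At its core, the spectral factor model looks for low-rank structure in the spectral density matrix of the multivariate series. The sketch below shows only that low-rank step, a crude segment-averaged spectral estimate followed by an eigendecomposition per frequency; it is a simplification under assumed choices, not the maximum likelihood estimator derived in the thesis.

```python
import numpy as np

def spectral_density_matrix(X, n_segments=8):
    """Crude Welch-style estimate of the spectral density matrix of a
    (T x d) multivariate series; returns an array of shape (n_freq, d, d)."""
    T, d = X.shape
    seg_len = T // n_segments
    F = None
    for s in range(n_segments):
        seg = X[s * seg_len:(s + 1) * seg_len]
        seg = seg - seg.mean(axis=0)
        Z = np.fft.rfft(seg, axis=0)                       # (n_freq, d)
        S_seg = np.einsum('fi,fj->fij', Z, Z.conj()) / seg_len
        F = S_seg if F is None else F + S_seg
    return F / n_segments

def leading_factors(S, k=1):
    """Keep the k leading eigenvectors of the spectral density at each frequency."""
    out = []
    for f in range(S.shape[0]):
        w, V = np.linalg.eigh(S[f])                        # Hermitian eigendecomposition
        out.append(V[:, -k:])                              # columns for the largest eigenvalues
    return out

rng = np.random.default_rng(0)
common = np.cumsum(rng.standard_normal(1024))              # one latent driver
X = np.column_stack([common + rng.standard_normal(1024) for _ in range(4)])
S = spectral_density_matrix(X)
loadings = leading_factors(S, k=1)
print(len(loadings), loadings[0].shape)
```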

Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
17

Michel, Jonathan R. "Essays in Nonlinear Time Series Analysis." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555001297904158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Schwill, Stephan. "Entropy analysis of financial time series." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/entropy-analysis-of-financial-time-series(7e0c84fe-5d0b-41bc-96c6-5e41ffa5b8fe).html.

Full text
Abstract:
This thesis applies entropy as a model-independent measure to address research questions concerning the dynamics of various financial time series. The thesis consists of three main studies as presented in chapters 3, 4 and 5. Chapters 3 and 4 apply an entropy measure to conduct a bivariate analysis of drawdowns and drawups in foreign exchange rates. Chapter 5 investigates the dynamics of investment strategies of hedge funds using entropy of realised volatility in a conditioning model. In all three studies, methods from information theory are applied in novel ways to financial time series. As Information Theory and its central concept of entropy are not widely used in the economic sciences, a methodology chapter is included as chapter 2, giving an overview of the theoretical background and statistical features of the entropy measures used in the three main studies. In the first two studies the focus is on mutual information and transfer entropy. Both measures are used to identify dependencies between two exchange rates. The chosen measures generalise, in a well-defined manner, correlation and Granger causality. A different entropy measure, the approximate entropy, is used in the third study to analyse the serial structure of S&P realised volatility. The study of drawdowns and drawups has so far concentrated on their univariate characteristics. Encoding the drawdown information of a time series into a time series of discrete values, Chapter 3 uses entropy measures to analyse the correlation and cross correlations of drawdowns and drawups. The method to encode the drawdown information is explained and applied to daily and hourly EUR/USD and GBP/USD exchange rates from 2001 to 2012. For the daily series, we find evidence of dependence among the largest draws (i.e. 5% and 95% quantiles), but it is not as strong as the correlation between the daily returns of the same pair of FX rates. There is also dependence between lead/lagged values of these draws. Similar and stronger findings were found among the hourly data. We further use transfer entropy to examine the spill-over and lead-lag information flow between drawups/drawdowns of the two exchange rates. Such information flow is indeed detectable in both daily and hourly data. The amount of information transferred is considerably higher for the hourly than the daily data. Both daily and hourly series show clear evidence of information flowing from EUR/USD to GBP/USD and, slightly stronger, in the reverse direction. Robustness tests, using effective transfer entropy, show that the information measured is not due to noise. Chapter 4 uses state space models of volatility to investigate volatility spill-overs between exchange rates. Our use of entropy-related measures in the investigation of dependencies between two state space series is novel. A set of five daily exchange rates from emerging and developed economies against the dollar over the period 1999 to 2012 is used. We find that, among the currency pairs, the co-movement of EUR/USD and CHF/USD volatility states shows the strongest observed relationship. With the use of transfer entropy, we find evidence for information flows between the volatility state series of AUD, CAD and BRL. Chapter 5 uses the entropy of S&P realised volatility in detecting changes of volatility regime in order to re-examine the theme of market volatility timing of hedge funds. 
A one-factor model is used, conditioned on information about the entropy of market volatility, to measure the dynamics of hedge funds' equity exposure. On a cross-section of around 2500 hedge funds with a focus on the US equity markets, we find that, over the period from 2000 to 2014, hedge funds adjust their exposure dynamically in response to changes in volatility regime. This adds to the literature on the volatility timing behaviour of hedge fund managers, but uses entropy as a model-independent measure of the volatility regime. Finally, chapter 6 summarises and concludes with some suggestions for future research.
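Chapters 3 and 4 rely on encoding series into discrete symbols and measuring directed dependence with transfer entropy. A bare-bones plug-in estimate between two symbol sequences is sketched below; the quantile binning is an illustrative choice, not the drawdown/drawup encoding developed in the thesis.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=3):
    """Plug-in estimate of transfer entropy from x to y (lag 1), in bits."""
    # Discretise each series into quantile bins.
    xq = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yq = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(yq[1:], yq[:-1], xq[:-1]))
    pairs_yy = Counter(zip(yq[1:], yq[:-1]))
    pairs_yx = Counter(zip(yq[:-1], xq[:-1]))
    singles_y = Counter(yq[:-1])
    n = len(yq) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_xyz = c / n
        # p(y1 | y0, x0) / p(y1 | y0), written with joint frequencies.
        te += p_xyz * np.log2((p_xyz * singles_y[y0] / n) /
                              ((pairs_yy[(y1, y0)] / n) * (pairs_yx[(y0, x0)] / n)))
    return te

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.roll(x, 1) * 0.6 + 0.8 * rng.standard_normal(5000)   # y depends on lagged x
print("TE x->y:", transfer_entropy(x, y), " TE y->x:", transfer_entropy(y, x))
```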
APA, Harvard, Vancouver, ISO, and other styles
19

Morrill, Jeffrey P., and Jonathan Delatizky. "REAL-TIME RECOGNITION OF TIME-SERIES PATTERNS." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608854.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper describes a real-time implementation of the pattern recognition technology originally developed by BBN [Delatizky et al] for post-processing of time-sampled telemetry data. This makes it possible to monitor a data stream for a characteristic shape, such as an arrhythmic heartbeat or a step-response whose overshoot is unacceptably large. Once programmed to recognize patterns of interest, it generates a symbolic description of a time-series signal in intuitive, object-oriented terms. The basic technique is to decompose the signal into a hierarchy of simpler components using rules of grammar, analogous to the process of decomposing a sentence into phrases and words. This paper describes the basic technique used for pattern recognition of time-series signals and the problems that must be solved to apply the techniques in real time. We present experimental results for an unoptimized prototype demonstrating that 4000 samples per second can be handled easily on conventional hardware.
APA, Harvard, Vancouver, ISO, and other styles
20

Rivera, Pablo Marshall. "Analysis of a cross-section of time series using structural time series models." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/13/.

Full text
Abstract:
This study deals with multivariate structural time series models, and in particular with the analysis and modelling of cross-sections of time series. In this context, no cause and effect relationships are assumed between the time series, although they are subject to the same overall environment. The main motivations in the analysis of cross-sections of time series are (i) the gains in efficiency in the estimation of the irregular, trend and seasonal components; and (ii) the analysis of models with common effects. The study contains essentially two parts. The first one considers models with a general specification for the correlation of the irregular, trend and seasonal components across the time series. Four structural time series models are presented, and the estimation of the components of the time series, as well as the estimation of the parameters which define these components, is discussed. The second part of the study deals with dynamic error components models where the irregular, trend and seasonal components are generated by common, as well as individual, effects. The extension to models for multivariate observations of cross-sections is also considered. Several applications of the methods studied are presented. Particularly relevant is an econometric study of the demand for energy in the U.K.
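A univariate structural time series model decomposes a series into trend, seasonal and irregular components in state space form. As a hedged single-series illustration (the cross-sectional models with common effects studied in this thesis go beyond it), statsmodels' UnobservedComponents can fit such a decomposition:

```python
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(0)
t = np.arange(240)
# Synthetic monthly series: trend + seasonal + irregular (placeholder for real data).
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(240)

# Local linear trend plus a period-12 seasonal, estimated by maximum likelihood.
model = UnobservedComponents(y, level="local linear trend", seasonal=12)
res = model.fit(disp=False)
trend = res.level.smoothed        # smoothed estimate of the level/trend component
seasonal = res.seasonal.smoothed  # smoothed estimate of the seasonal component
print(res.summary())
```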
APA, Harvard, Vancouver, ISO, and other styles
21

Mok, Ching-wah. "A comparison of two approaches to time series forecasting." Hong Kong : University of Hong Kong, 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20666342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Reiss, Joshua D. "The analysis of chaotic time series." Diss., Full text available online (restricted access), 2001. http://images.lib.monash.edu.au/ts/theses/reiss.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Healey, J. J. "Qualitative analysis of experimental time series." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

謝永然 and Wing-yin Tse. "Time series analysis in inventory management." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31977510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Yiu, Fu-keung, and 饒富強. "Time series analysis of financial index." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31267804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Dunne, Peter Gerard. "Essays in financial time-series analysis." Thesis, Queen's University Belfast, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Brunsdon, T. M. "Time series analysis of compositional data." Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Correia, Maria Inês Costa. "Cluster analysis of financial time series." Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/21016.

Full text
Abstract:
Master's in Mathematical Finance
This thesis applies the Signature method as a measure of similarity between two time-series objects, using the Signature properties of order 2, and its application to Asymmetric Spectral Clustering. The method is compared with a more traditional clustering approach where similarities are measured using Dynamic Time Warping, developed to work with time-series data. The intention is to consider the traditional approach as a benchmark and compare it to the Signature method through computation times, performance, and applications. These methods are applied to a financial time series data set of Mutual Exchange Funds from Luxembourg. After the literature review, we introduce the Dynamic Time Warping method and the Signature method. We continue with the explanation of traditional clustering approaches, namely k-Means, and asymmetric clustering techniques, namely the k-Axes algorithm, developed by Atev (2011). The last chapter is dedicated to practical research where the previous methods are applied to the data set. Results confirm that the Signature method has indeed potential for machine learning and prediction, as suggested by Levin, Lyons, and Ni (2013).
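Dynamic Time Warping, the benchmark similarity measure mentioned above, aligns two series by minimising cumulative pointwise cost over admissible warping paths. A compact dynamic-programming sketch of a generic DTW distance (not the code used in the thesis):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two hypothetical normalised price paths, one a time-shifted copy of the other.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(100))
y = np.concatenate([x[5:], x[-1] + np.zeros(5)])
print(dtw_distance(x, y), dtw_distance(x, rng.standard_normal(100)))
```

The pairwise distances produced this way can then feed any clustering routine, which is the role DTW plays as the benchmark in the comparison above.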
APA, Harvard, Vancouver, ISO, and other styles
29

Åkerlund, Agnes. "Time-Series Analysis of Pulp Prices." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-39726.

Full text
Abstract:
The pulp and paper industry has a significant role in Europe's economy and society, and its significance is still growing. The pulp market and the customers' requirements are highly affected by the pulp market prices and the requested kind of pulp, i.e., Elementary Chlorine Free (ECF) or Total Chlorine Free (TCF). There is a need to predict different market aspects, of which the market price is one, to gain a better understanding of a business situation. Understanding market dynamics can support organizations in optimizing their processes and production. Forecasting future pulp prices has not been done recently, but it would help businesses make more informed decisions about where to sell their product. The existing studies on the pulp industry and market price forecasting were completed over 20 years ago, and the market has changed since then in terms of, e.g., demand and production volume. There is a research gap within the pulp industry from a market price perspective. The pulp market is similar to, e.g., the energy industry in some aspects, and time-series analysis has been used to forecast electricity prices to support decision making by electricity producers and retailers. Autoregressive Integrated Moving Average (ARIMA) is one time-series analysis method that is used when data are collected with a constant frequency and when the average is not constant. The Holt-Winters model is a well-known and simple time-series analysis method. In this thesis, time-series analysis is used to predict the weekly market price for pulp over the three upcoming months, with the research question "With what accuracy can time-series analysis be used to forecast the European PIX price on pulp on a week-ahead basis?". The research method in this thesis is a case study where data are collected through documents. First, articles are studied to gain understanding of the problem area, leading to the use of time-series analysis as the artefact and a case study. Then, historical data are collected from the organization FOEX Fastmarkets, where a new market price of pulp has been released every Tuesday since September 1996. The dataset has a total of 1200 data points. After data cleaning, it is merged to 1196 data points that are used for the analysis. To evaluate the results from the time-series analysis models ARIMA and Holt-Winters, Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used. The software RStudio is used for programming. The results show that the ARIMA model provides the most accurate results. The mean value for MAE is 16.59 for ARIMA and 44.61 for Holt-Winters. The mean value for MAPE is 1.99% for ARIMA and 5.37% for Holt-Winters.
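As a rough sketch of the comparison described above, the fragment below fits ARIMA and Holt-Winters models to a simulated weekly price series with statsmodels and scores the held-out forecasts with MAE and MAPE. The model orders, options and data are placeholder assumptions, not the specifications selected in the thesis.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
# Placeholder weekly price series (random walk with drift), 1196 points as in the thesis data.
prices = 600 + np.cumsum(0.2 + 2.0 * rng.standard_normal(1196))
train, test = prices[:-13], prices[-13:]            # hold out roughly three months of weeks

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))
hw_fc = ExponentialSmoothing(train, trend="add").fit().forecast(len(test))

def mae(actual, forecast):
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    return 100 * np.mean(np.abs((actual - forecast) / actual))

for name, fc in [("ARIMA", arima_fc), ("Holt-Winters", hw_fc)]:
    print(f"{name}: MAE={mae(test, fc):.2f}  MAPE={mape(test, fc):.2f}%")
```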
APA, Harvard, Vancouver, ISO, and other styles
30

Khalfaoui, Rabeh. "Wavelet analysis of financial time series." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM1083.

Full text
Abstract:
This thesis deals with the contribution of wavelet methods to modeling economic and financial time series and consists of two parts: univariate time series and multivariate time series. In the first part (chapters 2 and 3), we consider the univariate case. First, we examine the class of non-stationary long memory processes. A simulation study is carried out in order to compare the performance of some semi-parametric estimation methods for the fractional differencing parameter. We also examine long memory in volatility using FIGARCH models to model energy data. Results show that the Exact Local Whittle estimation method of Shimotsu and Phillips [2005] performs best and that oil volatility exhibits strong evidence of long memory. Next, we analyze the market risk of univariate stock market returns, measured by systematic risk (beta) at different time horizons. Results show that beta is not stable, due to the multi-trading strategies of investors. Results based on VaR analysis show that risk is more concentrated at higher frequencies. The second part (chapters 4 and 5) deals with estimation of the conditional variance and correlation of multivariate time series. We consider two classes of time series: stationary time series (returns) and non-stationary time series (levels). We develop a novel approach, which combines wavelet multi-resolution analysis and multivariate GARCH models, i.e. the wavelet-based multivariate GARCH approach. Then, to evaluate the volatility forecasts, we compare the performance of several multivariate models using criteria such as loss functions, VaR estimation and hedging strategies.
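The multi-horizon beta in the first part comes from decomposing asset and market returns on a wavelet basis and computing a covariance-to-variance ratio scale by scale. A minimal sketch with PyWavelets, where the wavelet, depth and data are assumptions rather than the thesis's choices:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
market = rng.standard_normal(2048)
asset = 0.9 * market + rng.standard_normal(2048)     # true beta of 0.9 at all scales

wavelet, level = "db4", 5
m_coeffs = pywt.wavedec(market, wavelet, level=level)
a_coeffs = pywt.wavedec(asset, wavelet, level=level)

# Scale-by-scale beta from the detail coefficients (index 0 holds the approximation).
for j in range(1, level + 1):
    dm, da = m_coeffs[j], a_coeffs[j]
    beta_j = float(np.cov(da, dm)[0, 1] / np.cov(dm))
    print(f"detail level {level + 1 - j}: beta = {beta_j:.2f}")
```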
APA, Harvard, Vancouver, ISO, and other styles
31

Yiu, Fu-keung. "Time series analysis of financial index /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18003047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Guthrey, Delparde Raleigh. "Time series analysis of ozone data." CSUSB ScholarWorks, 1998. https://scholarworks.lib.csusb.edu/etd-project/1788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

ZANETTI, CHINI EMILIO. "Essays in nonlinear time series analysis." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2013. http://hdl.handle.net/2108/203343.

Full text
Abstract:
This paper introduces a variant of the smooth transition autoregression (STAR). The proposed model is able to parametrize the asymmetry in the tails of the transition equation by using a particular generalization of the logistic function. The null hypothesis of symmetric adjustment toward a new regime is tested by building two different LM-type tests. The first one maintains the original parametrization, while the second one is based on a third-order expanded auxiliary regression. Three diagnostic tests for no error autocorrelation, no additive asymmetry and parameter constancy are also discussed. The empirical size and power of the new symmetry tests, as well as of the diagnostic tests, are investigated by an extensive Monte Carlo experiment. An empirical application of the resulting generalized STAR (GSTAR) model to four economic time series reveals that the asymmetry in the transition between two regimes is a feature to be considered for economic analysis.
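A generalized logistic transition function lets the two tails of the transition approach their regimes at different speeds. The sketch below shows one such parametrization and a simulated two-regime autoregression built on it; the exact functional form and tests used in the thesis may differ.

```python
import numpy as np

def generalized_logistic(s, gamma=2.0, c=0.0, nu=0.5):
    """Transition function in [0, 1]; nu != 1 makes the two tails asymmetric."""
    return (1.0 + np.exp(-gamma * (s - c))) ** (-nu)

def simulate_gstar(n=500, phi_low=0.3, phi_high=0.9, **kw):
    rng = np.random.default_rng(0)
    y = np.zeros(n)
    for t in range(1, n):
        G = generalized_logistic(y[t - 1], **kw)      # regime weight from the lagged value
        phi = (1 - G) * phi_low + G * phi_high        # smooth mix of the two AR regimes
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

y = simulate_gstar(nu=0.3)
print(y[:5])
```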
APA, Harvard, Vancouver, ISO, and other styles
34

Sorice, Domenico <1995>. "Random forests in time series analysis." Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/17482.

Full text
Abstract:
Machine learning algorithms are becoming more relevant in many fields, from neuroscience to biostatistics, due to their adaptability and their ability to learn from data. In recent years, these techniques have become popular in economics and have found applications in policymaking, financial forecasting, and portfolio optimization. The aim of this dissertation is two-fold. First, I review the Classification and Regression Tree (CART) and Random Forest methods proposed by [Breiman, 1984] and [Breiman, 2001], and study the effectiveness of these algorithms in time series analysis; the Random Forest is an ensemble machine learning algorithm built on CART, and I test the performance of both on a variety of applications. Second, I implement an application on financial data: I use the Random Forest algorithm to estimate a factor model based on macroeconomic variables, with the aim of verifying whether the Random Forest is able to capture part of the non-linear relationship between the factors considered and the index return.
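As a rough illustration of the kind of exercise the dissertation describes (not its actual data or specification), a random forest can be fit on lagged values of a series and evaluated out of sample; this sketch assumes scikit-learn and a toy nonlinear AR(1) process:

```python
# Sketch: one-step-ahead forecasting of a time series with a random forest on lagged regressors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):                              # toy nonlinear AR(1) data
    y[t] = 0.6 * y[t - 1] + 0.3 * np.tanh(y[t - 1]) + rng.standard_normal()

lags = 3
X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])   # lagged regressors
target = y[lags:]

split = 400                                          # simple train/test split in time order
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X[:split], target[:split])
pred = rf.predict(X[split:])
print("out-of-sample RMSE:", np.sqrt(np.mean((pred - target[split:]) ** 2)))
```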
APA, Harvard, Vancouver, ISO, and other styles
35

Mazel, David S. "Fractal modeling of time-series data." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/13916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Tak-wai Wilson, and 李德煒. "Forecasting of tide heights: an application of smoothness priors in time series modelling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B3121048X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Whitcher, Brandon. "Assessing nonstationary time series using wavelets /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/8957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hossain, Md Jobayer. "Analysis of nonstationary time series with time varying frequencies." Ann Arbor, Mich. : ProQuest, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3220410.

Full text
Abstract:
Thesis (Ph.D. in Statistical Science)--S.M.U.
Title from PDF title page (viewed July 6, 2007). Source: Dissertation Abstracts International, Volume: 67-05, Section: B, page: 2641. Advisers: Wayne A. Woodward; Henry L. Gray. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
39

Xiong, Yimin. "Time series clustering using ARMA models /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20XIONG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 49-55). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
40

Cheung, King Chau. "Modelling multiple time series with missing observations." Thesis, Canberra, ACT : The Australian National University, 1993. http://hdl.handle.net/1885/133887.

Full text
Abstract:
This thesis introduces an approach to the state space modelling of time series that may possess missing observations. The procedure starts by estimating the autocovariance sequence using an idea proposed by Parzen (1963) and Stoffer (1986). Successive Hankel matrices are obtained via autoregressive approximations. The rank of the Hankel matrix is determined by a singular value decomposition in conjunction with an appropriate model selection criterion. An internally balanced state space realisation of the selected Hankel matrix provides an initial estimate for maximum likelihood estimation. Finally, theoretical evaluation of the Fisher information matrix with missing observations is considered. The methodology is illustrated by applying the implied algorithm to real data: we model the white blood cell counts of a patient who has leukaemia, with the objective of describing the dynamic behaviour of the counts.
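The Hankel/SVD step mentioned above can be sketched in a few lines; the code below is only illustrative (toy AR(2) data, a simple sample autocovariance, and an arbitrary block size), not the thesis's algorithm:

```python
# Sketch: stack estimated autocovariances into a Hankel matrix and inspect its
# singular values to suggest a state-space order.
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(2)
n = 400
x = np.zeros(n)
for t in range(2, n):                                   # toy AR(2) data
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

def autocov(x, lag):
    xc = x - x.mean()
    return np.dot(xc[:len(xc) - lag], xc[lag:]) / len(xc)

m = 10                                                  # Hankel block size (assumption)
gammas = [autocov(x, k) for k in range(1, 2 * m)]       # gamma_1 ... gamma_{2m-1}
H = hankel(gammas[:m], gammas[m - 1:])                  # H[i, j] = gamma_{i + j + 1}
sv = np.linalg.svd(H, compute_uv=False)
print("singular values:", np.round(sv, 3))              # a sharp drop suggests the state order
```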
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Zhao, and 劉釗. "On mixture double autoregressive time series models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/196465.

Full text
Abstract:
Conditional heteroscedastic models are an important class of time series models that have been widely investigated and continuously extended in time series analysis. These models play an important role in describing real-world phenomena, e.g. the behaviour of financial markets. This thesis proposes a mixture double autoregressive model, bringing the flexibility of mixture models to the double autoregressive model, a conditional heteroscedastic model recently proposed by Ling (2004). Probabilistic properties including strict stationarity and higher order moments are derived for this new model and, to make it more flexible, a logistic mixture double autoregressive model is further introduced to take into account time-varying mixing proportions. Inference tools including maximum likelihood estimation, an EM algorithm for computing the estimator and an information criterion for model selection are carefully studied for the logistic mixture double autoregressive model. The shape-changing behaviour of the multimodal conditional distributions is an important feature of this new type of model, and the conditional heteroscedasticity of the time series is also well captured. Monte Carlo experiments give further support to these two new models, and the analysis of an empirical example based on our new models as well as other mainstream ones is also reported.
Statistics and Actuarial Science
Master of Philosophy
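For intuition about the class of processes this thesis studies, the sketch below simulates a two-component mixture double autoregressive (DAR) process, where at each step a regime is drawn with some probability and the conditional variance depends on the squared lag. Parameter values and the simple i.i.d. mixing are illustrative assumptions, not taken from the thesis:

```python
# Sketch: simulate a two-component mixture DAR(1) process,
# x_t = phi * x_{t-1} + e_t * sqrt(omega + alpha * x_{t-1}^2), component drawn each step.
import numpy as np

rng = np.random.default_rng(3)

def simulate_mixture_dar(n, p, params1, params2):
    """params = (phi, omega, alpha); returns a simulated path of length n."""
    x = np.zeros(n)
    for t in range(1, n):
        phi, omega, alpha = params1 if rng.random() < p else params2
        x[t] = phi * x[t - 1] + rng.standard_normal() * np.sqrt(omega + alpha * x[t - 1] ** 2)
    return x

x = simulate_mixture_dar(1000, p=0.7, params1=(0.2, 0.5, 0.3), params2=(-0.4, 1.0, 0.6))
print("sample kurtosis:", np.mean((x - x.mean()) ** 4) / np.var(x) ** 2)   # heavy tails expected
```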
APA, Harvard, Vancouver, ISO, and other styles
42

Cheung, Chung-pak, and 張松柏. "Multivariate time series analysis on airport transportation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31976499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Leonardi, Mary L. "Prediction and geometry of chaotic time series." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA333449.

Full text
Abstract:
Thesis (M.S. in Applied Mathematics)--Naval Postgraduate School, June 1997.
Thesis advisors, Christopher Frenzen, Philip Beaver. Includes bibliographical references (p. 103-104). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
44

Thyer, Mark Andrew. "Modelling long-term persistence in hydrological time series." Diss., 2000. http://www.newcastle.edu.au/services/library/adt/public/adt-NNCU20020531.035349/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Koller, Stefan. "Applications of Time Series Analysis for Finance." St. Gallen, 2007. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/05604814001/$FILE/05604814001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Mui, Chi Seong. "Frequency domain approach to time series analysis." Thesis, University of Macau, 2000. http://umaclib3.umac.mo/record=b1446676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Purutcuoglu, Vilda. "Unit Root Problems In Time Series Analysis." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12604701/index.pdf.

Full text
Abstract:
In time series models, autoregressive processes are among the most popular stochastic processes and are stationary under certain conditions. In this study we consider nonstationary autoregressive models of order one with iid random errors. One important nonstationary time series model is the unit root process in AR(1), which implies that a shock to the system has a permanent effect through time; testing for a unit root is therefore a very important problem. However, under nonstationarity, no estimator of the autoregressive coefficient has a known exact distribution, and the usual t-statistic is not accurate even if the sample size is very large. Hence, the Wiener process is invoked to obtain the asymptotic distribution of the LSE under normality, and the first four moments of the LSE under normality have been worked out for large n. In 1998, Tiku and Wong proposed new test statistics whose type I error and power values are calculated by using three-moment chi-square or four-moment F approximations. The test statistics are based on the modified maximum likelihood estimators and the least squares estimators, respectively. They evaluated the type I errors and the power of these tests for a family of symmetric distributions (scaled Student's t). In this thesis, we extend this work to skewed distributions, namely gamma and generalized logistic.
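The inferential difficulty the abstract alludes to can be seen with standard Dickey-Fuller-style arithmetic (this sketch is not the thesis's modified tests): simulate an AR(1) with rho = 1, estimate rho by least squares, and form the usual t-type statistic, which follows a nonstandard distribution under the unit root.

```python
# Sketch: least-squares estimate of the AR(1) coefficient and the t-type statistic
# for rho = 1 on a simulated unit-root series.
import numpy as np

rng = np.random.default_rng(4)
n = 500
e = rng.standard_normal(n)
y = np.cumsum(e)                                   # unit root: y_t = y_{t-1} + e_t

y_lag, y_cur = y[:-1], y[1:]
rho_hat = np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag)        # LSE of the AR coefficient
resid = y_cur - rho_hat * y_lag
s2 = np.dot(resid, resid) / (len(y_cur) - 1)
t_stat = (rho_hat - 1.0) / np.sqrt(s2 / np.dot(y_lag, y_lag))
print(f"rho_hat = {rho_hat:.4f}, t-statistic for rho = 1: {t_stat:.3f}")
# Under the unit root this statistic follows the nonstandard Dickey-Fuller
# distribution rather than Student's t, which motivates the tests discussed above.
```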
APA, Harvard, Vancouver, ISO, and other styles
48

Glover, James N. "Time series analysis near a fixed point." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Al-Wasel, Ibrahim A. "Spectral analysis for replicated biomedical time series." Thesis, Lancaster University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Manrique, Garcia Aurora. "Econometric analysis of limited dependent time series." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
