Dissertations / Theses on the topic 'Time series'

Consult the top 50 dissertations / theses for your research on the topic 'Time series.'

1

Rajan, Jebu Jacob. "Time series classification." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339538.

2

Pope, Kenneth James. "Time series analysis." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318445.

3

Yin, Jiang Ling. "Financial time series analysis." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492929.

4

Gore, Christopher Mark. "A time series classifier." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/Gore_09007dcc804e6461.pdf.

Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2008.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed April 29, 2008). Includes bibliographical references (p. 53-55).
5

NETO, ANSELMO CHAVES. "BOOTSTRAP IN TIME SERIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1991. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8324@1.

Abstract:
B. Efron's bootstrap, which could hardly be imagined without modern computing, can solve many problems without assuming Gaussianity of the data. This work presents this computationally intensive technique in the context of time series, within the Box-Jenkins methodology. As is well known, this methodology relies on some asymptotic results, so the identification stage of model building can run into trouble in certain regions of the parameter space, which we characterise. The bootstrap is proposed as an alternative, and a comparative simulation study is presented. We construct the bootstrap distributions of the sample autocorrelation and sample partial autocorrelation functions, as well as the bootstrap distribution of the non-linear least squares estimator of the coefficients of ARMA(p, q) models; as a consequence, a non-parametric measure of the accuracy of the estimates becomes available. The simulation study of the non-linear least squares estimator of the coefficients focuses on the border of the stationarity and invertibility region.
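To make the residual-resampling idea concrete, the following is a minimal Python sketch assuming an AR(1) model fitted by ordinary least squares; the thesis works with general ARMA(p, q) models and the non-linear LS estimator of their coefficients, so every function name and parameter below is illustrative only. Percentiles of the returned array provide the kind of non-parametric accuracy measure mentioned in the abstract.

    import numpy as np

    def sample_acf(x, k):
        """Sample autocorrelation of x at lag k >= 1."""
        x = x - x.mean()
        return np.dot(x[:-k], x[k:]) / np.dot(x, x)

    def bootstrap_acf(x, lag=1, n_boot=999, seed=0):
        """Bootstrap distribution of the lag-k sample autocorrelation,
        resampling the residuals of a least-squares AR(1) fit."""
        rng = np.random.default_rng(seed)
        phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # AR(1) fit
        resid = x[1:] - phi * x[:-1]
        resid = resid - resid.mean()
        stats = np.empty(n_boot)
        for b in range(n_boot):
            e = rng.choice(resid, size=len(x), replace=True)
            xb = np.zeros(len(x))
            for t in range(1, len(x)):  # rebuild a series from the fitted model
                xb[t] = phi * xb[t - 1] + e[t]
            stats[b] = sample_acf(xb, lag)
        return stats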
6

AGUIAR, JOSE LUIZ DO NASCIMENTO DE. "TIME SERIES SIMILARITY MEASURES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=27789@1.

Abstract:
Nowadays, a very important task in data mining is understanding how to extract the most informative data from a very large amount of data. Since every field of knowledge produces masses of data that must be reduced to their most representative information, the time series approach is a very powerful way to represent and extract this information (12, 22). We nevertheless need an appropriate tool to infer the most significant data from these time series; to that end, we can use similarity measures to quantify how similar two time series are. In this work we study several distance-based similarity methods and apply them within several clustering algorithms, in order to assess whether some combination of distance-based similarity method and clustering algorithm outperforms all the others considered in this study, or whether one distance-based similarity method performs better than the rest.
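As a concrete illustration of pairing a distance-based similarity measure with a clustering algorithm, here is a minimal Python sketch using Euclidean distance and average-linkage hierarchical clustering; the synthetic data, the z-normalisation and the choice of two clusters are placeholder assumptions, not the thesis's experimental setup.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    series = rng.standard_normal((6, 100)).cumsum(axis=1)  # 6 toy series
    # z-normalise so the distance compares shape rather than level
    series = (series - series.mean(axis=1, keepdims=True)) \
             / series.std(axis=1, keepdims=True)

    dists = pdist(series, metric="euclidean")  # pairwise distances
    labels = fcluster(linkage(dists, method="average"),
                      t=2, criterion="maxclust")
    print(labels)  # cluster assignment of each series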
7

Yin, Yong. "Outliers in Time Series /." Connect to resource, 1995. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1262638388.

8

Rana, Md Mashud. "Energy time series prediction." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11745.

Abstract:
Reliable operations and economical utilization of power systems require electricity load forecasting at a wide range of forecasting horizons. The objective of this thesis is two-fold: developing accurate prediction models for electricity load forecasting, and quantifying the load forecasting uncertainty. At first, we consider the task of feature selection for electricity load forecasting. We propose a two-step approach - identifying a set of candidate features based on the data characteristics and then selecting a subset of them using four different methods. We evaluate the performance of these methods using state-of-the-art prediction algorithms. The results show that all feature selection methods are able to identify small subsets of highly relevant features for electricity load forecasting. We then present a generic approach for very short term electricity load forecasting. It combines multilevel wavelet packet transform, a non-linear feature selection method based on mutual information, and machine learning prediction algorithms. The evaluation shows that the proposed approach is robust and outperforms several non-wavelet based approaches. We also propose a novel approach for forecasting the daily load profile. The proposed approach uses mutual information for feature selection and an ensemble of neural networks for building a prediction model. The evaluation using two years of electricity load data for Australia, Portugal and Spain shows that it provides accurate predictions. Finally, we present LUBEX, a neural networks based approach for forecasting prediction intervals to quantify the uncertainty associated with electricity load prediction. LUBEX extends an existing method (LUBE) by including an advanced feature selection method and using an ensemble of neural networks. A comprehensive evaluation using 24 different case studies shows that LUBEX is able to generate high quality prediction intervals.
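A minimal sketch of mutual-information-based selection of lag features for load forecasting, in the spirit of the feature selection step described above (the thesis combines such selection with wavelet transforms, neural network ensembles and interval forecasts); the synthetic load series, the 48-lag candidate window and the cut-off of 10 features are illustrative assumptions.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(0)
    load = rng.random(1000).cumsum()  # stand-in for an electricity load series
    lags = 48
    # Candidate features: the 48 values preceding each target observation.
    X = np.column_stack([load[i:i - lags] for i in range(lags)])
    y = load[lags:]

    mi = mutual_info_regression(X, y, random_state=0)
    top = np.argsort(mi)[::-1][:10]  # 10 most informative columns
    print("selected lags:", sorted(lags - top))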
9

Grubb, Howard John. "Multivariate time series modelling." Thesis, University of Bath, 1990. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280803.

10

Ahsan, Ramoza. "Time Series Data Analytics." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/529.

Abstract:
Given the ubiquity of time series data, and the exponential growth of databases, there has recently been an explosion of interest in time series data mining. Finding similar trends and patterns among time series data is critical for many applications ranging from financial planning and weather forecasting to stock analysis and policy making. With time series being high-dimensional objects, detection of similar trends, especially at the granularity of subsequences or among time series of different lengths and temporal misalignments, incurs prohibitively high computation costs. Finding trends using non-metric correlation measures further compounds the complexity, as traditional pruning techniques cannot be directly applied. My dissertation addresses these challenges while meeting the need to achieve near real-time responsiveness. First, for retrieving exact similarity results using Lp-norm distances, we design a two-layered time series index for subsequence matching. Time series relationships are compactly organized in a directed acyclic graph embedded with similarity vectors capturing subsequence similarities. Powerful pruning strategies leveraging the graph structure greatly reduce the number of time series as well as subsequence comparisons, resulting in a speed-up of several orders of magnitude. Second, to support a rich diversity of correlation analytics operations, we compress time series into Euclidean-based clusters augmented by a compact overlay graph encoding correlation relationships. Such a framework supports a rich variety of operations including retrieving positive or negative correlations, self correlations and finding groups of correlated sequences. Third, to support flexible similarity specification using computationally expensive warped distances such as Dynamic Time Warping, we design data reduction strategies leveraging the inexpensive Euclidean distance with subsequent time warped matching on the reduced data. This facilitates the comparison of sequences of different lengths and with flexible alignment while still keeping response times within a few seconds. Comprehensive experimental studies using real-world and synthetic datasets demonstrate the efficiency, effectiveness and quality of the results achieved by our proposed techniques as compared to the state-of-the-art methods.
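The third contribution's strategy, an inexpensive Euclidean pass followed by exact time-warped matching on the reduced candidate set, can be sketched as follows for equal-length series; note that this naive filter is illustrative only and, unlike the dissertation's pruning strategies, does not guarantee that the true DTW nearest neighbour survives the first pass.

    import numpy as np

    def dtw(a, b):
        """Classic O(len(a) * len(b)) dynamic time warping distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def nearest(db, q, k=5):
        """Euclidean pre-filter, then exact DTW on the k best candidates."""
        eu = [np.linalg.norm(x - q) for x in db]  # cheap first pass
        candidates = np.argsort(eu)[:k]           # data reduction step
        return min(candidates, key=lambda i: dtw(db[i], q))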
11

Milton, Robert. "Time-series in distributed real-time databases." Thesis, University of Skövde, Department of Computer Science, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-827.

Abstract:

In a distributed real-time environment where it is imperative to make correct decisions, it is important to have all facts available in order to make the most accurate decision in a given situation. An example of such an environment is an Unmanned Aerial Vehicle (UAV) system, where several UAVs cooperate to carry out a task and the data recorded are analyzed after the completion of the mission. This project aims to define and implement a time series architecture for use together with a distributed real-time database, providing the ability to store temporal data. The result of this project is a time series (TS) architecture that uses DeeDS, a distributed real-time database, for storage. The TS architecture is used by an application modelled on a UAV scenario for storing temporal data produced by a simulator. The TS architecture solves the problem of storing temporal data for applications using DeeDS. It is also useful as a foundation for integrating time series in DeeDS itself, since it is designed for space efficiency and real-time requirements.

12

Lam, Vai Iam. "Time domain approach in time series analysis." Thesis, University of Macau, 2000. http://umaclib3.umac.mo/record=b1446633.

13

Morrill, Jeffrey P., and Jonathan Delatizky. "REAL-TIME RECOGNITION OF TIME-SERIES PATTERNS." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608854.

Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper describes a real-time implementation of the pattern recognition technology originally developed by BBN [Delatizky et al] for post-processing of time-sampled telemetry data. This makes it possible to monitor a data stream for a characteristic shape, such as an arrhythmic heartbeat or a step-response whose overshoot is unacceptably large. Once programmed to recognize patterns of interest, it generates a symbolic description of a time-series signal in intuitive, object-oriented terms. The basic technique is to decompose the signal into a hierarchy of simpler components using rules of grammar, analogous to the process of decomposing a sentence into phrases and words. This paper describes the basic technique used for pattern recognition of time-series signals and the problems that must be solved to apply the techniques in real time. We present experimental results for an unoptimized prototype demonstrating that 4000 samples per second can be handled easily on conventional hardware.
14

Rivera, Pablo Marshall. "Analysis of a cross-section of time series using structural time series models." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/13/.

Abstract:
This study deals with multivariate structural time series models and, in particular, with the analysis and modelling of cross-sections of time series. In this context, no cause-and-effect relationships are assumed between the time series, although they are subject to the same overall environment. The main motivations in the analysis of cross-sections of time series are (i) the gains in efficiency in the estimation of the irregular, trend and seasonal components; and (ii) the analysis of models with common effects. The study contains essentially two parts. The first considers models with a general specification for the correlation of the irregular, trend and seasonal components across the time series. Four structural time series models are presented, and the estimation of the components of the time series, as well as the estimation of the parameters which define these components, is discussed. The second part of the study deals with dynamic error components models where the irregular, trend and seasonal components are generated by common, as well as individual, effects. The extension to models for multivariate observations of cross-sections is also considered. Several applications of the methods studied are presented; particularly relevant is an econometric study of the demand for energy in the U.K.
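For reference, the univariate basic structural model that such cross-sectional extensions build on decomposes each series into trend, slope, seasonal and irregular components; the following is the standard textbook formulation, given here for orientation rather than quoted from the thesis.

    \begin{align*}
      y_t      &= \mu_t + \gamma_t + \varepsilon_t,
                  & \varepsilon_t &\sim \mathrm{NID}(0,\sigma^2_\varepsilon)\\
      \mu_t    &= \mu_{t-1} + \beta_{t-1} + \eta_t,
                  & \eta_t &\sim \mathrm{NID}(0,\sigma^2_\eta)\\
      \beta_t  &= \beta_{t-1} + \zeta_t,
                  & \zeta_t &\sim \mathrm{NID}(0,\sigma^2_\zeta)\\
      \gamma_t &= -\textstyle\sum_{j=1}^{s-1}\gamma_{t-j} + \omega_t,
                  & \omega_t &\sim \mathrm{NID}(0,\sigma^2_\omega)
    \end{align*}

In the cross-sectional setting studied here, each component additionally carries correlation (or common effects) across the series in the panel.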
15

Cuevas, Tello Juan Carlos. "Estimating time delays between irregularly sampled time series." Thesis, University of Birmingham, 2007. http://etheses.bham.ac.uk//id/eprint/88/.

Abstract:
The time delay estimation between time series is a real-world problem in gravitational lensing, an area of astrophysics. Lensing is the most direct method of measuring the distribution of matter, which is often dark, and the accurate measurement of time delays sets the scale for measuring distances over cosmological scales. For our purposes, this means that we have to estimate a time delay between two or more noisy and irregularly sampled time series. Estimates have been made in the astrophysics literature using statistical methods such as interpolation, dispersion analysis, the discrete correlation function, Gaussian processes and Bayesian methods, among others. Instead, this thesis proposes a kernel-based approach to estimating the time delay, inspired by kernel methods in the context of statistical and machine learning. Moreover, our methodology is evolved to perform model selection, regularisation and time delay estimation globally and simultaneously. Experimental results show that this approach is one of the most accurate methods in the presence of gaps (missing data) and distinct noise levels. Results on artificial and real data are shown.
16

Xu, Mengyuan Tracy. "Filtering non-stationary time series by time deformation." Ann Arbor, Mich. : ProQuest, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3309151.

Abstract:
Thesis (Ph.D. in Statistical Science)--S.M.U.
Title from PDF title page (viewed Mar. 16, 2009). Source: Dissertation Abstracts International, Volume: 69-04, Section: B, page: 2402. Advisers: Wayne A. Woodward; Henry L. Gray. Includes bibliographical references.
17

Saffell, Matthew John. "Knowledge discovery for time series /." Full text open access at:, 2005. http://content.ohsu.edu/u?/etd,247.

18

Rekdal, Espen Ekornes. "Metric Indexing in Time Series." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10487.

19

Andreassen, Børge Solli. "Wavelets and irregular time series." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-19011.

Abstract:
In this thesis we study time series containing pressure measurements from a three-phase flow pipeline at the Ekofisk oil field. The pipeline transports a mixture of oil, water and gas from 15 wells for approximately 2.5 km to a production facility. Our aim is to develop techniques that allow the detection and (to some extent) prediction of "non-standard" behaviour in the system (sharp pressure changes and other types of instability). To achieve this aim we perform a scalewise decomposition of the input signal/time series and investigate the behaviour of each scale separately. We introduce the Sliding Window Wavelet Transform (SWWT) method. The method evaluates the variability on different scales within a time interval of characteristic length (a window) and then traces these characteristics as the window slides in time. We use the discrete wavelet transform (DWT) to obtain the scalewise decomposition within the window. Using orthonormal discrete wavelets, we show that the variability of such sequences can be decomposed into the corresponding scales. Based on this, a thresholding algorithm is applied, characterising the state of the system at any given time. The results we find are promising, and we show that different parameters in the thresholding algorithm extract different types of special events. We also show that in some cases this approach allows us to predict special events before they actually occur. While we investigate one particular system in this thesis, the procedures developed can be applied to other complicated systems where instability in system parameters is important.
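A minimal Python sketch of the sliding-window, scalewise-variability idea, using PyWavelets for the orthonormal DWT; the Daubechies-4 wavelet, the window length and step, and the median-based threshold rule are illustrative assumptions rather than the thesis's actual algorithm.

    import numpy as np
    import pywt  # PyWavelets

    def scalewise_energy(window, wavelet="db4", level=3):
        """Energy of the DWT detail coefficients at each scale."""
        coeffs = pywt.wavedec(window, wavelet, level=level)
        return np.array([np.sum(d ** 2) for d in coeffs[1:]])  # skip approximation

    def swwt_flags(x, win=256, step=32, factor=5.0):
        """Slide a window over x and flag windows whose per-scale energy
        exceeds `factor` times the running median of past windows."""
        history, flags = [], []
        for start in range(0, len(x) - win + 1, step):
            e = scalewise_energy(x[start:start + win])
            history.append(e)
            med = np.median(np.array(history), axis=0)
            flags.append(bool(np.any(e > factor * med)))
        return flags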
20

Matus, Castillejos Abel, and n/a. "Management of Time Series Data." University of Canberra. Information Sciences & Engineering, 2006. http://erl.canberra.edu.au./public/adt-AUC20070111.095300.

Abstract:
Every day large volumes of data are collected in the form of time series. Time series are collections of events or observations, predominantly numeric in nature, sequentially recorded on a regular or irregular time basis. Time series are becoming increasingly important in nearly every organisation and industry, including banking, finance, telecommunication, and transportation. Banking institutions, for instance, rely on the analysis of time series for forecasting economic indices, elaborating financial market models, and registering international trade operations. More and more time series are being used in this type of investigation and becoming a valuable resource in today's organisations. This thesis investigates and proposes solutions to some current and important issues in time series data management (TSDM), using Design Science Research Methodology. The thesis presents new models for mapping time series data to relational databases which optimise the use of disk space, can handle different time granularities, status attributes, and facilitate time series data manipulation in a commercial Relational Database Management System (RDBMS). These new models provide a good solution for current time series database applications with RDBMS and are tested with a case study and prototype with financial time series information. Also included is a temporal data model for illustrating time series data lifetime behaviour based on a new set of time dimensions (confidentiality, definitiveness, validity, and maturity times), specially targeted at managing time series data, which are introduced to correctly represent the different statuses of time series data in a timeline. The proposed temporal data model gives a clear and accurate picture of the time series data lifecycle. Formal definitions of these time series dimensions are also presented. In addition, a time series grouping mechanism in an extensible commercial relational database system is defined, illustrated, and justified. The extension consists of a new data type and its corresponding rich set of routines that support modelling and operating time series information within a higher level of abstraction. It extends the capability of the database server to organise and manipulate time series into groups. Thus, this thesis presents a new data type that is referred to as GroupTimeSeries, and its corresponding architecture and support functions and operations. Implementation options for the GroupTimeSeries data type in relational based technologies are also presented. Finally, a framework for TSDM with enough expressiveness of the main requirements of time series applications and the management of that data is defined. The framework aims at providing initial domain know-how and requirements of time series data management, avoiding the impracticability of designing a TSDM system on paper from scratch. Many aspects of time series applications, including the way time series data are organised at the conceptual level, are addressed. The central abstractions for the proposed domain-specific framework are the notions of business sections, groups of time series, and the time series itself. The framework integrates comprehensive specification regarding structural and functional aspects for time series data management. A formal framework specification using conceptual graphs is also explored.
21

Malan, Karien. "Stationary multivariate time series analysis." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-06132008-173800.

22

Pawson, Ian Alexander. "Characterization of chaotic time series." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406744.

23

Alagon, J. "Discriminant analysis for time series." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375222.

24

Yildirim, Dilem. "Modelling Nonlinear Nonstationary Time Series." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502993.

25

Ruiz, Ortega Esther. "Heteroscedasticity in financial time series." Thesis, London School of Economics and Political Science (University of London), 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308386.

Abstract:
This thesis deals with two different topics, both related to modelling time-varying variances in high frequency financial time series. The first topic concerns the estimation of unobserved component models with autoregressive conditional heteroscedastic (ARCH) effects. The second topic concerns the quasi-maximum likelihood estimation of stochastic variance processes, which are an alternative to ARCH processes for modelling conditionally heteroscedastic time series. The motivation for the work is the increasing interest in the financial area in modelling volatility. In financial markets, many decisions are based on the volatility of a specific stock or index, which is closely related to the variance. Therefore, it is important to develop good statistical models able to describe time-varying variances.
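For orientation, the two model classes compared in the thesis can be written in their simplest generic forms (these equations are standard and are not quoted from the thesis): an ARCH(1) model and a log-normal stochastic variance model,

    \begin{align*}
      \text{ARCH(1):} \quad & y_t = \sigma_t \epsilon_t,
          & \sigma_t^2 &= \alpha_0 + \alpha_1 y_{t-1}^2,\\
      \text{SV:} \quad & y_t = \sigma_t \epsilon_t,
          & \log \sigma_t^2 &= \gamma + \phi \log \sigma_{t-1}^2 + \eta_t,
    \end{align*}

with \epsilon_t \sim \mathrm{NID}(0,1) and, in the SV case, an independent disturbance \eta_t \sim \mathrm{NID}(0,\sigma_\eta^2); it is the latent variance equation that makes quasi-maximum likelihood estimation attractive for the SV model.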
26

Warnes, Alexis. "Diagnostics in time series analysis." Thesis, Durham University, 1994. http://etheses.dur.ac.uk/5159/.

Abstract:
The portmanteau diagnostic test for goodness of model fit is studied. It is found that the true variances of the estimated residual autocorrelation function are potentially deflated considerably below their asymptotic level, and exhibit high correlations with each other. This suggests a new portmanteau test, ignoring the first p + q residual autocorrelation terms and hence approximating the asymptotic chi-squared distribution more closely. Simulations show that this alternative portmanteau test produces greater accuracy in its estimated significance levels, especially in small samples. Theory and discussions follow, pertaining to both the Dynamic Linear Model and the Bayesian method of forecasting. The concept of long-term equivalence is defined. The difficulties with the discounting approach in the DLM are then illustrated through an example, before deriving equations for the step-ahead forecast distribution which could, instead, be used to estimate the evolution variance matrix W_t. Non-uniqueness of W in the constant time series DLM is the principal drawback with this idea; however, it is proven that in any class of long-term equivalent models only p degrees of freedom can be fixed in W, leading to a potentially diagonal form for this matrix. The bias in the k-th step-ahead forecast error produced by any TSDLM variance (mis)specification is calculated. This yields the variances and covariances of the forecast error distribution; given sample estimates of these, it proves possible to solve equations arising from these calculations both for V and p elements of W. Simulations, and a "head-to-head" comparison, for the frequently-applied steady model illustrate the accuracy of the predictive calculations, both in the convergence properties of the sample (co)variances, and the estimates Ṽ and Ŵ. The method is then applied to a 2-dimensional constant TSDLM. Further simulations illustrate the success of the approach in producing accurate on-line estimates for the true variance specifications within this widely-used model.
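A minimal Python sketch of the suggested modification: apply Ljung-Box-style weights to the residual autocorrelations at lags p + q + 1 through p + q + m only, and refer the statistic to a chi-squared distribution with m degrees of freedom; the exact statistic studied in the thesis may differ in detail.

    import numpy as np
    from scipy.stats import chi2

    def residual_acf(resid, max_lag):
        """Sample autocorrelations of the residuals at lags 1..max_lag."""
        r = resid - resid.mean()
        denom = np.dot(r, r)
        return np.array([np.dot(r[:-k], r[k:]) / denom
                         for k in range(1, max_lag + 1)])

    def modified_portmanteau(resid, p, q, m=20):
        """Q-statistic that skips the first p + q residual autocorrelations."""
        n = len(resid)
        acf = residual_acf(resid, p + q + m)
        lags = np.arange(p + q + 1, p + q + m + 1)
        Q = n * (n + 2) * np.sum(acf[p + q:] ** 2 / (n - lags))
        return Q, chi2.sf(Q, df=m)  # statistic and p-value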
27

Chinipardaz, Rahim. "Discrimination of time series data." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.481472.

28

Chan, Hon Tsang. "Discriminant analysis of time series." Thesis, University of Newcastle Upon Tyne, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315614.

29

AZEVEDO, RONALDO. "GRANGER CAUSALITY IN TIME SERIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1991. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8782@1.

Abstract:
In this work we revisit causality in the sense of Granger, applied to bivariate time series in both the time and frequency domains. A computer program written in Pascal is used to build causality/feedback models from real and simulated cases; these models are then analysed in the spectral domain, with particular emphasis on the discussion of coherence and the phase of causality.
30

VALENTIM, CAIO DIAS. "DATA STRUCTURES FOR TIME SERIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=21522@1.

Abstract:
Time series are important tools for the analysis of events that occur in different fields of human knowledge, such as medicine, physics, meteorology and finance. A common task in analysing time series is to search for infrequent events, as these usually reflect facts of interest about the domain of the series. In this study, we develop techniques for the detection of rare events in time series. Technically, a time series A = (a1, a2, ..., an) is a sequence of real values indexed by the integers from 1 to n. Given an integer t and a real number d, we say that a pair of time indexes i and j is a (t, d)-event in A if and only if 0 < j - i <= t and aj - ai >= d. In this case, i is said to be the beginning of the event and j its end. The parameters t and d control, respectively, the time window in which the event can occur and the magnitude of the variation in the series. We focus on two types of queries related to (t, d)-events: What are the (t, d)-events in a series A? And what are the indexes in A that are the beginning of at least one (t, d)-event? Throughout this study we discuss, from both theoretical and practical points of view, several data structures and algorithms to answer these two queries.
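The (t, d)-event definition above translates directly into a naive detector for the second query; this quadratic scan is shown only to make the definition concrete, whereas the thesis develops data structures that answer both queries far more efficiently.

    def td_event_starts(a, t, d):
        """Indexes i that begin at least one (t, d)-event, i.e. for which
        some j satisfies 0 < j - i <= t and a[j] - a[i] >= d."""
        n = len(a)
        return [i for i in range(n)
                if any(a[j] - a[i] >= d
                       for j in range(i + 1, min(n, i + t + 1)))]

    print(td_event_starts([1.0, 1.2, 3.5, 0.8, 2.9], t=2, d=2.0))  # [0, 1, 3]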
31

Buonocore, Riccardo Junior. "Complexity in financial time-series." Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/complexity-in-financial-timeseries(7c54cd37-fd3a-475b-83c1-539a55b4e3f9).html.

Abstract:
Many aspects contribute to making financial markets one of the most challenging systems to understand. The aim of this thesis is to study some aspects of their complexity by focusing on univariate and multivariate properties of log-return time-series, namely multifractality and cross-dependence. We start by performing a thorough analysis of the scaling properties of synthetic time-series with different known scaling properties. This enables us to do two things: find the presence of a strong bias in the estimation of the scaling exponents, and interpret measurements on real data, which leads us to uncover the true source of the multifractal behaviour of financial log-prices, long debated in the literature. We address the presence of the bias by proposing a method that manages to filter it out, and we validate the method by applying it to synthetic time-series with known scaling properties and to empirical ones. We also find that this bias is due to the stability under aggregation of the log-returns, which, owing to their long memory, are processes that under high aggregation tend to a random variable displaying an exact multifractal scaling. Finally, we focus on linking the scaling properties of log-returns to their cross-correlation properties within a given market, finding an intriguing non-linear relationship between the two quantities.
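For context, the multifractal scaling usually tested on log-returns is stated through the moment scaling of returns aggregated over a horizon Δ; this generic definition is standard in the literature rather than quoted from the thesis.

    % r_Delta(t) denotes the log-return over horizon Delta
    \[
      \mathbb{E}\bigl[\,\lvert r_{\Delta}(t)\rvert^{q}\,\bigr]
        \;\propto\; \Delta^{\zeta(q)},
    \]
    % multifractal: \zeta(q) non-linear in q;
    % monofractal:  \zeta(q) = Hq for a single exponent H.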
32

Huang, Naijing. "Essays in time series analysis." Thesis, Boston College, 2015. http://hdl.handle.net/2345/bc-ir:104627.

Abstract:
Thesis advisor: Zhijie Xiao
I have three chapters in my dissertation. The first chapter is about estimation and inference for DSGE models; the second chapter is about testing financial contagion among stock markets; and in the last chapter I propose a new econometric method to forecast inflation intervals. The first chapter studies proper inference and asymptotically accurate structural break tests for parameters in Dynamic Stochastic General Equilibrium (DSGE) models in a maximum likelihood framework. Two empirically relevant issues may invalidate the conventional inference procedures and structural break tests for parameters in DSGE models: (i) weak identification and (ii) moderate parameter instability. The DSGE literature focuses on dealing with the weak identification issue but ignores the impact of moderate parameter instability. This paper contributes to the literature by considering the joint impact of the two issues in the DSGE framework. The main results are: in a weakly identified DSGE model, (i) moderate instability from weakly identified parameters does not affect the validity of standard inference procedures or structural break tests; (ii) however, if strongly identified parameters feature moderate time variation, the asymptotic distributions of test statistics deviate from standard ones and are no longer nuisance-parameter free, which renders standard inference procedures and structural break tests invalid and provides practitioners with misleading inference results; (iii) as long as I concentrate out strongly identified parameters, their instability impact disappears as the sample size goes to infinity, which recovers the power of conventional inference procedures and structural break tests for weakly identified parameters. To illustrate my results, I simulate and estimate a modified version of the Hansen (1985) Real Business Cycle model and find that my theoretical results provide reasonable guidance for finite sample inference of the parameters in the model. I show that confidence intervals that incorporate weak identification and moderate parameter instability reduce the biases of confidence intervals that ignore those effects. While I focus on DSGE models in this paper, all of my theoretical results can be applied to any linear dynamic models or nonlinear GMM models. In the second chapter, in view of the asymmetric and leptokurtic behavior of financial data, we propose a new contagion test in the quantile regression framework that is robust to model misspecification. Unlike conventional correlation-based tests, the proposed quantile contagion test allows us to investigate stock market contagion at various quantiles, not only at the mean. We show that the quantile contagion test can detect a contagion effect that is possibly ignored by correlation-based tests. A wide range of simulation studies show that the proposed test is superior to the correlation-based tests in terms of size and power. We compare our test with correlation-based tests using three real data sets: the 1994 Tequila crisis, the 1997 Asia crisis, and the 2001 Argentina crisis. Empirical results show substantial differences between the two types of tests. In the third chapter, I use a Quantile Bayesian approach to produce interval forecasts for inflation in a semi-parametric framework. This new method introduces a Bayesian solution to the quantile framework for two reasons: 1. It enables us to get more efficient quantile estimates when an informative prior is used (He and Yang (2012)); 2. We use a Markov Chain Monte Carlo (MCMC) algorithm to generate samples of the posterior distribution for unknown parameters and take the mean or mode as the estimates. This MCMC estimator takes advantage of numerical integration over standard numerical-differentiation-based optimization, especially when the likelihood function is complicated and multi-modal. Simulation results find better interval forecasting performance for the Quantile Bayesian approach than for the commonly used parametric approach.
Thesis (PhD) — Boston College, 2015
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Economics
33

Lee, Seonhwi. "Essays in financial time series." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/18569.

Abstract:
This thesis consists of three essays on topics in financial time series with particular emphases on specification testing, structural breaks and long memory. The first essay develops an asymptotically valid specification testing framework for the Realised GARCH model of Hansen et al. (2012). The misspecification tests account for the joint dependence between return and the realised measure of volatility and thus extend the existing literature for testing the adequacy of GARCH models. The testing procedure is constructed based on the conditional moment principle and the first-order asymptotic theory. Our Monte Carlo results reveal good finite sample size and power properties. In the second essay, a Monte Carlo experiment is conducted to investigate the relative out-of-sample predictive ability of a class of conditional variance models when either a structural break or long memory is allowed. Our Monte Carlo results reveal that if the true volatility process is stationary short memory and its persistence level is not too high, but is contaminated by a structural break, the presence of the structural break is of importance in choosing a proper size of estimation window in the short-run forecast. If the persistence level is very high, spurious long memory may often dominate the true structural break in the longer-run forecast. For data generation processes without any structural break, the forecasting models, which can characterise the properties of the true conditional variance process, are favourable. In the last essay, we analyse the properties of the S&P 500 stock index return volatility process using historical and realised measures of volatility. We investigate a true property of the stochastic volatility processes by means of econometric tests, which may disentangle true or spurious long memory. The realised variance and realised kernel of the US stock market return exhibit true long memory. However, the historical volatility process shows some evidence of spurious long memory. We examine relative out-of-sample performance of one-day-ahead forecasts, with emphasis on the predictive content of structural changes and long memory. A class of ARFIMA models consistently produces the best-performing forecasts compared to a class of GARCH models. Among the GARCH models, it is shown that a rolling window GARCH forecast and GARCH forecasts which account for breaks outperform the long memory-based GARCH models even with the long memory proxy process.
34

Fulcher, Benjamin D. "Highly comparative time-series analysis." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:642b65cf-4686-4709-9f9d-135e73cfe12e.

Abstract:
In this thesis, a highly comparative framework for time-series analysis is developed. The approach draws on large, interdisciplinary collections of over 9000 time-series analysis methods, or operations, and over 30 000 time series, which we have assembled. Statistical learning methods were used to analyze structure in the set of operations applied to the time series, allowing us to relate different types of scientific methods to one another, and to investigate redundancy across them. An analogous process applied to the data allowed different types of time series to be linked based on their properties, and in particular to connect time series generated by theoretical models with those measured from relevant real-world systems. In the remainder of the thesis, methods for addressing specific problems in time-series analysis are presented that use our diverse collection of operations to represent time series in terms of their measured properties. The broad utility of this highly comparative approach is demonstrated using various case studies, including the discrimination of pathological heart beat series, classification of Parkinsonian phonemes, estimation of the scaling exponent of self-affine time series, prediction of cord pH from fetal heart rates recorded during labor, and the assignment of emotional content to speech recordings. Our methods are also applied to labeled datasets of short time-series patterns studied in temporal data mining, where our feature-based approach exhibits benefits over conventional time-domain classifiers. Lastly, a feature-based dimensionality reduction framework is developed that links dependencies measured between operations to the number of free parameters in a time-series model that could be used to generate a time-series dataset.
35

Djennad, Abdelmadjid. "Generalized structural time series model." Thesis, London Metropolitan University, 2014. http://repository.londonmet.ac.uk/672/.

Abstract:
A new class of univariate time series models is developed: the Generalized Structural (GEST) time series model. The GEST model extends Gaussian structural time series models by allowing the distribution of the dependent variable to come from any parametric distribution, including highly skew and/or kurtotic distributions. Furthermore, the GEST model expands the systematic part of time series models to allow the explicit modelling of any or all of the distribution parameters as structural terms and (smoothed) functions of independent variables. The proposed GEST model primarily addresses the difficulty of modelling time-varying skewness and kurtosis (beyond location and dispersion time series models). The originality of the thesis starts from Chapter 6, and in particular Chapters 7 and 8, with applications of the GEST model in Chapter 9. Chapters 2 and 3 contain the literature review of non-Gaussian time series models; Chapter 4 is a reproduction of Chapter 17 in Pawitan (2001), which contains an alternative method for estimating the hyperparameters instead of using the Kalman filter; and Chapter 5 is an application of Chapter 4 to smoothing Gaussian structural time series models.
36

Barsk, Viktor. "Time Series Search Using Traits." Thesis, Umeå universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-128580.

Abstract:
Time series data occur in many real-world applications. For example, a system might have a database with a large number of time series, and a user could have a query like "Find all stocks that behave similarly to stock A." The meaning of "similarly" can vary between different users, use cases and domains. The goal of this thesis is to develop a method for time series search that can search based on domain-specific patterns. We call these domain-specific patterns traits. We have chosen to apply a trait-based approach on top of an interest-point-based search method. First the search is conducted using an interest point method, and then the results are ranked using the traits. The traits are extracted from sections of the time series and converted to a string representing their structure. The strings are then compared using Levenshtein distance to rank the search results. We have developed two types of traits. The new time series search method can be useful in many applications where a user is not looking for point-wise similarity but rather at the general structure and some specific patterns. A trait-based approach translates better to how a user perceives time series search. The method can also yield more relevant results, since it can find results that a classic point-wise search would rule out.
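A minimal Python sketch of the ranking step: a standard dynamic-programming Levenshtein distance applied to trait strings. The up/down trait encoding and the stock names are invented for illustration; the thesis's trait extraction and interest-point search are more involved.

    def levenshtein(s, t):
        """Classic dynamic-programming edit distance between two strings."""
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, start=1):
            curr = [i]
            for j, ct in enumerate(t, start=1):
                cost = 0 if cs == ct else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    # Rank candidates by how close their trait strings are to the query's.
    query_traits = "UUDDU"  # hypothetical up/down trait encoding
    candidates = {"stock_A": "UUDDU", "stock_B": "UDUDU", "stock_C": "DDDUU"}
    ranked = sorted(candidates,
                    key=lambda k: levenshtein(query_traits, candidates[k]))
    print(ranked)  # stock_A first: identical trait string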
37

Hwang, Peggy May T. "Factor analysis of time series /." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487944660933305.

38

Sakarya, Neslihan. "Essays in time series econometrics." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu149187075883834.

39

Matam, Basava R. "Watermarking biomedical time series data." Thesis, Aston University, 2009. http://publications.aston.ac.uk/15351/.

Abstract:
This thesis addresses the problem of information hiding in low dimensional digital data focussing on issues of privacy and security in Electronic Patient Health Records (EPHRs). The thesis proposes a new security protocol based on data hiding techniques for EPHRs. This thesis contends that embedding of sensitive patient information inside the EPHR is the most appropriate solution currently available to resolve the issues of security in EPHRs. Watermarking techniques are applied to one-dimensional time series data such as the electroencephalogram (EEG) to show that they add a level of confidence (in terms of privacy and security) in an individual’s diverse bio-profile (the digital fingerprint of an individual’s medical history), ensure belief that the data being analysed does indeed belong to the correct person, and also that it is not being accessed by unauthorised personnel. Embedding information inside single channel biomedical time series data is more difficult than the standard application for images due to the reduced redundancy. A data hiding approach which has an in built capability to protect against illegal data snooping is developed. The capability of this secure method is enhanced by embedding not just a single message but multiple messages into an example one-dimensional EEG signal. Embedding multiple messages of similar characteristics, for example identities of clinicians accessing the medical record helps in creating a log of access while embedding multiple messages of dissimilar characteristics into an EPHR enhances confidence in the use of the EPHR. The novel method of embedding multiple messages of both similar and dissimilar characteristics into a single channel EEG demonstrated in this thesis shows how this embedding of data boosts the implementation and use of the EPHR securely.
40

Azevedo, Joao Vale E. "Essays in time series econometrics /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

41

Ishida, Isao. "Essays on financial time series /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3153696.

42

Cassisi, Carmelo. "Geophysical time series data mining." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1366.

Abstract:
The process of automatic extraction, recognition, description and classification of patterns from huge amounts of data plays an important role in modern volcano monitoring techniques. In particular, the ability of certain systems to recognize different volcano states can help researchers to better understand the complex dynamics underlying the geophysical system. Geophysical data are automatically measured and recorded by geophysical instruments, and their interpretation is very important for the investigation of the earth's behavior. The fundamental task of volcano monitoring is to follow volcanic activity and promptly recognize any changes. To achieve such goals, different geophysical techniques (i.e. seismology, ground deformation, remote sensing, magnetic and electromagnetic studies, gravimetry) are used to obtain precise measurements of the variations induced by an evolving magmatic system. To properly exploit the wealth of such heterogeneous data, algorithms and techniques of data mining are fundamental tools. This thesis can be considered a detailed report on the application of the data mining discipline in the geophysical area. After introducing the basic concepts and the most important techniques constituting the state of the art in the data mining field, we apply several methods that achieve important results in extracting unknown recurrent patterns from seismic and infrasonic signals, and we show the implementation of systems that serve as efficient tools for monitoring purposes.
43

Combettes, Sylvain. "Symbolic representations of time series." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM002.

Abstract:
The objectives of this thesis are to define novel symbolic representations and distance measures that are suited to time series that can be multivariate and non-stationary. In addition, they should preserve the time information, be interpretable, and be fast to compute. We review symbolic representations of time series (which transform a real-valued series into a shorter discrete-valued series), as well as distance measures on time series, strings, and symbolic sequences (which result from a symbolization process). We propose two contributions: ASTRIDE for data sets of univariate time series, and d_{symb} for data sets of multivariate time series. We also developed the d_{symb} playground, an online interactive tool that allows users to apply d_{symb} to their uploaded data. ASTRIDE and d_{symb} are data-driven: they use change-point detection for the segmentation step, then either quantiles or a K-means clustering algorithm for the quantization step. Finally, they apply the general edit distance with custom costs between the resulting symbolic sequences. We show the performance of ASTRIDE compared to 4 other symbolic representations on reconstruction and, when applicable, on classification tasks. For d_{symb}, experiments show how interpretable the symbolization is. Moreover, compared to 9 elastic distances on a clustering task, d_{symb} achieves competitive performance while being several orders of magnitude faster.
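A toy Python sketch of the segment-then-quantize style of symbolization described above; note the flagged assumptions: ASTRIDE uses change-point detection for the segmentation step, whereas this sketch uses fixed-length segments and a four-letter alphabet purely for brevity.

    import numpy as np

    def symbolize(x, n_segments=8, alphabet="abcd"):
        """Segment the series, average each segment, and bin the averages
        into (roughly) equiprobable quantile buckets."""
        segments = np.array_split(np.asarray(x, dtype=float), n_segments)
        means = np.array([s.mean() for s in segments])
        edges = np.quantile(means, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
        return "".join(alphabet[np.searchsorted(edges, m)] for m in means)

    rng = np.random.default_rng(0)
    print(symbolize(rng.standard_normal(200).cumsum()))  # 8-symbol string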
44

Kim, Doo Young. "Statistical Modeling of Carbon Dioxide and Cluster Analysis of Time Dependent Information: Lag Target Time Series Clustering, Multi-Factor Time Series Clustering, and Multi-Level Time Series Clustering." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6277.

Abstract:
The current study consists of three major parts: statistical modeling, the connection between statistical modeling and cluster analysis, and new proposed methods for clustering time dependent information. First, we perform a statistical modeling of Carbon Dioxide (CO2) emissions in South Korea in order to identify the attributable variables, including interaction effects. One of the hottest issues on earth in the 21st century is global warming, which is driven by the interaction between atmospheric temperature and CO2 in the atmosphere. To confront this global problem, we first need to verify what causes it; then we can find out how to solve it. We therefore find and rank the attributable variables and their interactions based on their semipartial correlations, and compare our findings with results from the United States and the European Union. This comparison shows that the number one contributing variable in both South Korea and the United States is liquid fuels, while it ranks number 8 in the EU. This provides evidence in support of regional, rather than global, policies to control CO2 at an optimal level in our atmosphere. Second, we study the regional behavior of atmospheric CO2 in the United States. Utilizing a longitudinal transitional modeling scheme, we calculate transitional probabilities based on effects from the five end-use sectors that produce most of the CO2 in our atmosphere, that is, the commercial, electric power, industrial, residential, and transportation sectors. Then, using those transitional probabilities, we perform a hierarchical clustering procedure to classify the regions with similar characteristics based on the nine US climate regions. This study suggests that our elected officials can proceed to legislate regional policies by end-use sector in order to maintain the optimal level of atmospheric CO2 required by global consensus. Third, we propose new methods to cluster time dependent information. It is almost impossible to find data that are not time dependent among the floods of information we have nowadays, and the importance of mining time dependent information needs no emphasis. The first method we propose is called "Lag Target Time Series Clustering" (LTTC), which identifies the actual level of time dependence among the clustering objects. The second method is "Multi-Factor Time Series Clustering" (MFTC), which allows us to consider distance in multi-dimensional space by including multiple pieces of information at a time. The last method is "Multi-Level Time Series Clustering" (MLTC), which is especially important when short-term-varying time series responses are to be clustered; that is, we extract only the pure lag effect from LTTC. The new methods we propose give excellent results when applied to time dependent clustering. Finally, we develop an appropriate algorithm, driven by the analytical structure of the proposed methods, to cluster financial information from the ten business sectors of the New York Stock Exchange, using 497 of the stocks that constitute the S&P 500. We illustrate the usefulness of the study by structuring a diversified financial portfolio.
APA, Harvard, Vancouver, ISO, and other styles
45

Qiang, Fu. "Bayesian multivariate time series models for forecasting European macroeconomic series." Thesis, University of Hull, 2000. http://hydra.hull.ac.uk/resources/hull:8068.

Full text
Abstract:
Research on and debate about 'wise use' of explicitly Bayesian forecasting procedures has been widespread and often heated. This situation has come about partly in response to dissatisfaction with the poor forecasting performance of conventional methods and partly in view of developments in computational capacity and macro-data availability. Experience with Bayesian econometric forecasting schemes is still rather limited, but it seems to be an attractive alternative to subjectively adjusted statistical models [see, for example, Phillips (1995a), Todd (1984) and West & Harrison (1989)]. It provides effective standards of forecasting performance and has demonstrated success in forecasting macroeconomic variables. There would therefore seem to be a case for seeking additional insights into the important role of such methods in achieving objectives within the macroeconomics profession. The primary concerns of this study, motivated by the apparent deterioration of mainstream macroeconometric forecasts of the world economy in recent years [Wallis (1989), pp.34-43], are threefold. The first is to formalize a thorough, yet simple, methodological framework for empirical macroeconometric modelling in a Bayesian spirit. The second is to investigate whether improved forecasting accuracy is feasible within a European-based multicountry context. This is conducted with particular emphasis on the construction and implementation of Bayesian vector autoregressive (BVAR) models that incorporate both a priori and cointegration restrictions. The third is to extend the approach and apply it to the joint modelling of system-wide interactions amongst national economies. The intention is to generate more accurate answers to a variety of practical questions about the future path towards a united Europe. The use of BVARs has advanced considerably. In particular, the value of joint modelling with time-varying parameters and much more sophisticated prior distributions has been stressed in the econometric methodology literature; see, e.g., Doan et al. (1984), Kadiyala and Karlsson (1993, 1997), Litterman (1986a), and Phillips (1995a, 1995b). Although trade-linked multicountry macroeconomic models may not be able to clarify all the structural and finer economic characteristics of each economy, they do provide a flexible and adaptable framework for the analysis of global economic issues. In this thesis, the forecasting record for the main European countries is examined using the 'post mortem' of IMF, OECD and EEC sources. The formulation, estimation and selection of BVAR forecasting models, carried out using the Microfit, MicroTSP, PcGive and RATS packages, are reported. Practical applications of BVAR models address, in particular, whether combinations of forecasts outperform the forecasts of a single model, and whether the recent failures of multicountry forecasts can be attributed to an increase in the 'internal volatility' of the world economic environment. See Artis and Holly (1992), and Barrell and Pain (1992, p.3). The research undertaken consolidates existing empirical and theoretical knowledge of BVAR modelling. It provides a unified coverage of economic forecasting applications and develops a common, effective and progressive methodology for the European economies.
The empirical results show that in simulated 'out-of-sample' forecasting exercises, the gains in forecast accuracy from imposing prior and long-run constraints are statistically significant, especially for small estimation sample sizes and long forecast horizons.
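As a rough illustration of the BVAR shrinkage idea described above, the sketch below estimates a VAR(1) whose coefficients are pulled toward a random-walk prior mean, a crude stand-in for the Litterman/Minnesota prior the abstract alludes to. The conjugate-normal setup, the tightness value lam, and the toy data are all assumptions made for the example, not the thesis's specification.

import numpy as np

rng = np.random.default_rng(1)
T, k = 120, 3                                   # observations, variables
Y = rng.standard_normal((T, k)).cumsum(axis=0)  # toy macro-like random walks

X, y = Y[:-1], Y[1:]   # VAR(1): y_t = y_{t-1} B + e_t
lam = 0.2              # prior tightness: smaller = heavier shrinkage

B_prior = np.eye(k)    # random-walk prior mean for the coefficient matrix
# Posterior mean under B ~ N(B_prior, lam^2 I) with unit noise variance:
#   B_post = (X'X + I/lam^2)^{-1} (X'y + B_prior/lam^2)
P = X.T @ X + np.eye(k) / lam**2
B_post = np.linalg.solve(P, X.T @ y + B_prior / lam**2)

forecast = Y[-1] @ B_post   # one-step-ahead point forecast
print(forecast)

With heavy shrinkage (small lam) the forecast approaches a pure random walk, which is one intuition for why such priors help at long horizons and with short estimation samples.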
APA, Harvard, Vancouver, ISO, and other styles
46

Michel, Jonathan R. "Essays in Nonlinear Time Series Analysis." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555001297904158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hossain, Md Jobayer. "Analysis of nonstationary time series with time varying frequencies." Ann Arbor, Mich. : ProQuest, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3220410.

Full text
Abstract:
Thesis (Ph.D. in Statistical Science)--S.M.U.
Title from PDF title page (viewed July 6, 2007). Source: Dissertation Abstracts International, Volume: 67-05, Section: B, page: 2641. Advisers: Wayne A. Woodward; Henry L. Gray. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
48

Mercurio, Danilo. "Adaptive estimation for financial time series." Doctoral thesis, [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972597263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lundbergh, Stefan. "Modelling economic high-frequency time series." Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Sjolander, Morne Rowan. "Time series models for paired comparisons." Thesis, Nelson Mandela Metropolitan University, 2011. http://hdl.handle.net/10948/d1012858.

Full text
Abstract:
The method of paired comparisons is a technique used to rank a set of objects with respect to an abstract or immeasurable property. To do this, the objects are compared two at a time. The results are input into a model, resulting in numbers known as weights being assigned to the objects; the weights are then used to rank the objects. The method of paired comparisons was first used for psychometric investigations. Various other applications of the method also exist, for example economic applications and applications in sports statistics. This study involves making paired comparison models time-dependent. Not much research has been done in this area. Three new time series models for paired comparisons are created. Simulations are done to support the evidence obtained, and theoretical as well as practical examples are given to illustrate the results and to verify the efficiency of the new models. A literature study is given on the method of paired comparisons, as well as on the areas in which we apply our models. Our first two time series models for paired comparisons are the Linear-Trend Bradley-Terry Model and the Sinusoidal Bradley-Terry Model. We use the maximum likelihood approach to solve these models, and we test them using exact and randomly simulated data for various time periods and various numbers of objects. We adapt the Linear-Trend Bradley-Terry Model to obtain our third time series model for paired comparisons, the Log Linear-Trend Bradley-Terry Model. The daily maximum and minimum temperatures were obtained for Port Elizabeth, Uitenhage and Coega for 2005 until 2009. To evaluate the performance of the Linear-Trend Bradley-Terry Model and the Sinusoidal Bradley-Terry Model in estimating missing temperature data, we artificially remove observations of temperature from Coega’s temperature dataset for 2006 until 2008 and use various forms of these models to estimate the missing data points. The exchange rates for 2005 until 2008 between the Rand, Dollar, Euro, Pound and Yen were obtained, and various forms of our Log Linear-Trend Bradley-Terry Model are used to forecast the exchange rate one day ahead for each month in 2006 until 2008. One of the features of this study is that we apply our time series models for paired comparisons to areas comprising non-standard paired comparisons, and we want to encourage the use of the method of paired comparisons in a broader sense than it is traditionally used for. The results of this study can be applied in various other areas: for example, in sports statistics, to rank the strength of sports players and predict their future scores; or in physics, to calculate weather risks of electricity generation, particularly risks related to nuclear power plants. It is hoped that this research will open the door to much more research combining time series analysis with the method of paired comparisons.
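As a hedged illustration of how a time-varying Bradley-Terry model can be fitted by maximum likelihood, the sketch below assumes each object's log-weight drifts linearly in time, log w_i(t) = a_i + b_i t, and maximises the resulting likelihood numerically. This parameterisation, the toy data, and the ridge penalty used for identifiability are assumptions made for the example, not the thesis's exact formulation.

import numpy as np
from scipy.optimize import minimize

# Each comparison: (winner index, loser index, time of comparison).
comparisons = [(0, 1, 0), (1, 0, 1), (1, 2, 1), (0, 2, 2), (1, 0, 2)]
n = 3  # number of objects

def neg_log_lik(theta):
    a, b = theta[:n], theta[n:]
    ll = 0.0
    for w, l, t in comparisons:
        # Bradley-Terry: P(w beats l at time t) = w_w(t) / (w_w(t) + w_l(t)),
        # which in log-weight form is the sigmoid of the difference d below.
        d = (a[w] + b[w] * t) - (a[l] + b[l] * t)
        ll += -np.logaddexp(0.0, -d)  # log sigmoid(d), numerically stable
    return -ll

# The likelihood is invariant to shifting all a's (or all b's) by a constant;
# a small ridge penalty pins down one solution for identifiability.
res = minimize(lambda th: neg_log_lik(th) + 1e-3 * th @ th,
               x0=np.zeros(2 * n), method="BFGS")
a_hat, b_hat = res.x[:n], res.x[n:]
print("trend in log-weights:", b_hat)

With this setup, a positive b_hat[i] indicates an object whose estimated strength grows over the observation period, which is the kind of drift a linear-trend paired comparison model is designed to capture.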
APA, Harvard, Vancouver, ISO, and other styles