Dissertations / Theses on the topic 'Non-Autoregressive'

To see the other types of publications on this topic, follow the link: Non-Autoregressive.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 33 dissertations / theses for your research on the topic 'Non-Autoregressive.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Clayton, Maya. "Econometric forecasting of financial assets using non-linear smooth transition autoregressive models." Thesis, University of St Andrews, 2011. http://hdl.handle.net/10023/1898.

Full text
Abstract:
Following the debate in empirical finance research on the presence of non-linear predictability in stock market returns, this study examines the forecasting abilities of non-linear STAR-type models. A non-linear model methodology is applied to daily returns of the FTSE, S&P, DAX and Nikkei indices. The research is then extended to long-horizon forecastability of the four series, including monthly returns and a buy-and-sell strategy for three-, six- and twelve-month holding periods using a non-linear error-correction framework. The recursive out-of-sample forecast is performed using the present value model equilibrium methodology, whereby stock returns are forecast using macroeconomic variables, in particular the dividend yield and the price-earnings ratio. The forecasting exercise revealed the presence of non-linear predictability for all data periods considered, and confirmed an improvement in predictability for long-horizon data. Finally, the present value model approach is applied to the housing market, whereby house price returns are forecast using a price-earnings ratio as a measure of fundamental price levels. The findings revealed that the UK housing market appears to be characterised by asymmetric non-linear dynamics, with a clear preference for the asymmetric ESTAR model in terms of forecasting accuracy.
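For readers unfamiliar with the STAR family the abstract refers to, the exponential smooth transition autoregressive (ESTAR) mechanism can be sketched in a few lines. This is a minimal simulation; the transition specification and every parameter value below are illustrative assumptions, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_estar(n, phi1=0.9, phi2=-0.5, gamma=2.0, c=0.0, sigma=0.02):
    """Simulate an ESTAR(1) process
        y_t = phi1*y_{t-1} + phi2*y_{t-1}*G(y_{t-1}) + eps_t
    with exponential transition G(z) = 1 - exp(-gamma*(z - c)**2)."""
    y = np.zeros(n)
    for t in range(1, n):
        G = 1.0 - np.exp(-gamma * (y[t - 1] - c) ** 2)
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 1] * G + sigma * rng.standard_normal()
    return y

# Near the equilibrium c the transition G is ~0 and the series behaves like a
# persistent AR(1); far from c, G -> 1 and the effective root phi1 + phi2 pulls
# large deviations back faster: asymmetric, globally mean-reverting dynamics.
returns = simulate_estar(2000)
```

The asymmetry between small and large deviations is what makes ESTAR attractive for series, such as house price returns, whose reversion speed depends on how far prices sit from fundamentals.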
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Ka-yee. "Bayes and empirical Bayes estimation for the panel threshold autoregressive model and non-Gaussian time series." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B30706166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Ka-yee, and 廖家怡. "Bayes and empirical Bayes estimation for the panel threshold autoregressive model and non-Gaussian time series." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30706166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nyman, Nick, and Smura Michel Postigo. "Examining how unforeseen events affect accuracy and recovery of a non-linear autoregressive neural network in stock market prognoses." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186435.

Full text
Abstract:
This report studies how a non-linear autoregressive neural network for stock market value prognoses is affected by unforeseen events. The study attempts to determine the recovery period of such a network after an event, and whether the magnitude of the event affects that period. Tests of one-day prognoses' deviations from the observed value are carried out on five real stock events and on four simulated data sets that exclude the noisy surroundings of the stock market and isolate different kinds of events. The study concludes that magnitude has no discernible impact on recovery: a sudden event allows recovery within days regardless of its magnitude or of any change in the price development rate. Less sudden events, however, extend the recovery period. Noise such as surrounding micro-events, aftershocks, or lingering instability of stock prices affects accuracy and recovery time significantly.
APA, Harvard, Vancouver, ISO, and other styles
5

Krisztin, Tamás. "Semi-parametric spatial autoregressive models in freight generation modeling." Elsevier, 2018. https://publish.fid-move.qucosa.de/id/qucosa%3A72336.

Full text
Abstract:
This paper proposes a spatial autoregressive model framework for freight generation, combined with non-linear semi-parametric techniques. We demonstrate the capabilities of the model in a series of Monte Carlo studies. Moreover, through an applied analysis of European NUTS-2 regions, we provide evidence of non-linearities in freight generation: spatial dependence is significant, as are non-linearities related to manufacturing employment rates and regional infrastructure capabilities. The non-linear impacts are most significant in the agricultural freight generation sector.
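The spatial-lag structure underlying such a model can be illustrated with a toy simulation. The ring-shaped contiguity matrix and all parameter values are hypothetical stand-ins, not the paper's NUTS-2 data; the point is only the reduced form y = (I − ρW)⁻¹(Xβ + ε) of a spatial autoregressive (SAR) model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # hypothetical number of regions

# Row-normalised "ring" contiguity matrix: each region borders two neighbours,
# so each row has two entries of 0.5 and sums to one.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

rho, beta = 0.4, 2.0           # illustrative spatial-lag and slope parameters
x = rng.standard_normal(n)     # stand-in covariate, e.g. manufacturing employment
eps = 0.1 * rng.standard_normal(n)

# Structural form: y = rho*W@y + beta*x + eps.
# Solving (I - rho*W) y = beta*x + eps gives the reduced form directly.
y = np.linalg.solve(np.eye(n) - rho * W, beta * x + eps)
```

Because the inverse (I − ρW)⁻¹ spreads each region's shock across its neighbours, a covariate change in one region spills over into the freight generated by nearby regions, which is exactly the dependence the Monte Carlo studies probe.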
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Yuefeng. "Essays on modelling house prices." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16242.

Full text
Abstract:
Housing prices are of crucial importance in financial stability management. The severe financial crises that originated in the housing market in the US and subsequently spread throughout the world highlighted the crucial role that the housing market plays in preserving financial stability. After the severe housing market crash, many financial institutions in the US suffered from high default rates, severe liquidity shortages, and even bankruptcy. Against this background, researchers have sought to use econometric models to capture and forecast prices of homes. Available empirical research indicates that nonlinear models may be suitable for modelling price cycles. Accordingly, this thesis focuses primarily on using nonlinear models to empirically investigate cyclical patterns in housing prices. More specifically, the content of this thesis can be summarised in three essays which complement the existing literature on price modelling by using nonlinear models. The first essay contributes to the literature by testing the ability of regime switching models to capture and forecast house prices. The second essay examines the impact of banking factors on house price fluctuations. To account for house price characteristics, the regime switching model and generalised autoregressive conditionally heteroscedastic (GARCH) in-mean model have been used. The final essay investigates the effect of structural breaks on the unit root test and shows that a time-varying GARCH in-mean model can be used to estimate the housing price cycle in the UK.
APA, Harvard, Vancouver, ISO, and other styles
7

Cugliari, Jairo. "Prévision non paramétrique de processus à valeurs fonctionnelles : application à la consommation d’électricité." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112234/document.

Full text
Abstract:
This thesis addresses the problem of predicting a functional-valued stochastic process. We first explore the model proposed by Antoniadis et al. (2006) in the context of a practical application, the French electrical power demand, where the hypothesis of stationarity may fail. The departure from stationarity is twofold: an evolving mean level, and the existence of groups that may be seen as classes of stationarity. We explore corrections that enhance prediction performance by taking these nonstationary features into account. In particular, to handle the existence of groups, we constrain the model to use only the data belonging to the same group as the last available observation. If the grouping is known, a simple post-treatment suffices to obtain better prediction performance. If the grouping is unknown, we propose to discover it using cluster analysis. The infinite dimension of the not-necessarily-stationary trajectories has to be taken into account by the clustering algorithm, and we propose two wavelet-based strategies for this. The first uses feature extraction through the discrete wavelet transform, combined with a feature-selection algorithm, to choose the significant features used in a classical clustering algorithm. The second clusters the functions directly by means of a dissimilarity measure on their continuous wavelet spectra. The third part of the thesis explores an alternative prediction model that incorporates exogenous information, within the framework of autoregressive Hilbertian processes. We propose a new class of processes that we call Conditional Autoregressive Hilbertian (CARH), and develop the equivalent of the projection and resolvent classes of estimators to predict such processes.
APA, Harvard, Vancouver, ISO, and other styles
8

Hili, Ouagnina. "Contribution à l'estimation des modèles de séries temporelles non linéaires." Université Louis Pasteur (Strasbourg) (1971-2008), 1995. http://www.theses.fr/1995STR13169.

Full text
Abstract:
The aim of this thesis is to carry out statistical inference for a general class of non-linear time series models. Our contribution consists first in determining conditions that ensure the existence of a stationary law, the existence of the moments of that stationary law, and the strong mixing property of such models. We then establish the asymptotic properties of the minimum Hellinger distance estimator of the parameter of interest. The robustness of this estimator is also considered. Finally, via the least squares method, we examine the asymptotic properties of the estimators of the coefficients of threshold autoregressive models.
APA, Harvard, Vancouver, ISO, and other styles
9

Caron, Nathalie. "Approches alternatives d'une théorie non informative des tests bayésiens." Rouen, 1994. http://www.theses.fr/1994ROUES028.

Full text
Abstract:
The aim of this thesis is to compare the classical answers (p-values) and the Bayesian answers in the setting of a two-sided test, and to introduce new noninformative Bayesian answers. It therefore includes a presentation of the classical and Bayesian answers to the different types of tests from the viewpoint of decision theory. The second chapter is devoted to comparing the classical and Bayesian answers in the one-dimensional setting, making explicit the criteria used to define an objective Bayesian answer. In chapters three and six, two new noninformative Bayesian answers are developed. The remaining chapters generalise the one-dimensional setting: chapters 4, 5 and 7 extend the results to the multidimensional setting, to tests with nuisance parameters, and to observations correlated through an autoregressive model, respectively.
APA, Harvard, Vancouver, ISO, and other styles
10

Korale, Asoka Jeevaka Maligaspe. "Non-stationary adaptive signal prediction with error bounds." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Arotiba, Gbenga Joseph. "Pricing American Style Employee Stock Options having GARCH Effects." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_3057_1298615964.

Full text
Abstract:

We investigate some simulation-based approaches to the valuation of employee stock options. The mathematical models that deal with the valuation of such options include the work of Jennergren and Naeslund [L.P Jennergren and B. Naeslund, A comment on valuation of executive stock options and the FASB proposal, Accounting Review 68 (1993) 179-183]. They used the Black and Scholes [F. Black and M. Scholes, The pricing of options and corporate liabilities, Journal of Political Economy 81 (1973) 637-659] framework and extended the partial differential equation to an option that includes early exercise. Other major works relevant to this mini-thesis are Hemmer et al. [T Hemmer, S. Matsunaga and T Shevlin, The influence of risk diversification on the early exercise of employee stock options by executive officers, Journal of Accounting and Economics 21(1) (1996) 45-68] and Baril et al. [C. Baril, L. Betancourt, J. Briggs, Valuing employee stock options under SFAS 123 R using the Black-Scholes-Merton and lattice model approaches, Journal of Accounting Education 25 (1-2) (2007) 88-101]. The underlying assets are studied under GARCH (generalised autoregressive conditional heteroskedasticity) effects. Particular emphasis is placed on American-style employee stock options.
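The GARCH effects mentioned above can be made concrete with a minimal GARCH(1,1) return simulator, the usual starting point for simulation-based option valuation. The parameter values are typical illustrative choices, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_garch11(n, omega=1e-5, alpha=0.08, beta=0.90):
    """Simulate returns r_t = sigma_t * z_t, z_t ~ N(0,1), with
        sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
    Requires alpha + beta < 1 for a finite unconditional variance."""
    r = np.zeros(n)
    sig2 = np.full(n, omega / (1.0 - alpha - beta))  # start at unconditional var.
    for t in range(1, n):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
        r[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return r, sig2

r, sig2 = simulate_garch11(10_000)
```

In a simulation-based valuation, paths like `r` would drive the underlying asset price, and the option payoff would be averaged across many such paths; volatility clustering in `sig2` is what distinguishes this from constant-volatility Black-Scholes simulation.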

APA, Harvard, Vancouver, ISO, and other styles
12

Jin, Fei. "Essays in Spatial Econometrics: Estimation, Specification Test and the Bootstrap." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365612737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Sànchez, Pérez Andrés. "Agrégation de prédicteurs pour des séries temporelles, optimalité dans un contexte localement stationnaire." Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0051/document.

Full text
Abstract:
This thesis gathers our results on the prediction of dependent time series. The work is divided into three main chapters, each tackling a different problem. The first concerns the aggregation of predictors of causal Bernoulli shifts, using a Bayesian approach. The second treats the aggregation of predictors of what we define as sub-linear processes; locally stationary, time-varying autoregressive processes receive particular attention, and we investigate an adaptive prediction scheme for them. In the last main chapter we study the linear regression problem for a general class of locally stationary processes.
APA, Harvard, Vancouver, ISO, and other styles
14

Gomes, Leonaldo da Silva. "Redes Neurais Aplicadas à Inferência dos Sinais de Controle de Dosagem de Coagulantes em uma ETA por Filtração Rápida." Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=8105.

Full text
Abstract:
Considering the importance of chemical coagulation control for water treatment by direct filtration, this work proposes the application of artificial neural networks for the inference of the dosage control signals of the principal and auxiliary coagulants in the chemical coagulation process of a water treatment plant using direct filtration. To that end, a comparative analysis was made of models based on the following neural networks: focused time-lagged feedforward network (FTLFN), distributed time-lagged feedforward network (DTLFN), Elman recurrent network (ERN), and non-linear autoregressive network with exogenous inputs (NARX). In this comparison, the NARX-based model showed the best results, demonstrating the potential of the model for use in real cases, which will contribute to the viability of projects of this nature in small water treatment plants.
APA, Harvard, Vancouver, ISO, and other styles
15

Dupré, la Tour Tom. "Nonlinear models for neurophysiological time series." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT018/document.

Full text
Abstract:
Strong neural oscillations are observed in neurophysiological time series from the mammalian brain, so the standard processing tools are centred on narrow-band linear filtering. As this approach is too reductive, we propose new methods to represent these signals. We first focus on phase-amplitude coupling (PAC), an amplitude modulation of a high-frequency band time-locked to a specific phase of a slow neural oscillation. We propose driven autoregressive (DAR) models to capture PAC in a probabilistic framework. Giving the signal a proper probabilistic model enables model selection through its likelihood, which constitutes a major improvement in PAC estimation. We first present different parametrisations of DAR models, with fast inference algorithms and stability discussions. We then show how to use DAR models for PAC analysis, demonstrating the advantage of the model-based approach on three empirical datasets. Next, we explore several extensions of DAR models: estimating the driving signal from the data, PAC in multivariate signals, and spectro-temporal receptive fields. Finally, we adapt convolutional sparse coding (CSC) models to neurophysiological time series, extending them to heavy-tailed noise distributions and multivariate decompositions. We develop efficient inference algorithms for each formulation, and show that rich signal representations are obtained in an unsupervised way.
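The phase-amplitude coupling the thesis models can be illustrated with the classical filter-Hilbert measure (a mean-vector modulation index in the style of Canolty et al.), which is the non-parametric baseline the DAR approach improves upon. The signal, frequencies, and brick-wall FFT filters below are illustrative constructions, not the thesis's method or data.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 500.0, 20.0                       # sampling rate (Hz) and duration (s)
t = np.arange(0.0, dur, 1.0 / fs)

# Synthetic PAC signal: an 80 Hz rhythm whose amplitude is driven by the
# phase of a 6 Hz oscillation, plus a little white noise.
slow_phase = 2 * np.pi * 6.0 * t
envelope = 1.0 + 0.8 * np.cos(slow_phase)   # amplitude locked to the slow phase
x = np.cos(slow_phase) + envelope * np.cos(2 * np.pi * 80.0 * t)
x = x + 0.1 * rng.standard_normal(t.size)

def analytic(sig):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert)."""
    n = sig.size
    X = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def bandpass(sig, lo, hi):
    """Crude zero-phase brick-wall band-pass filter in the frequency domain."""
    X = np.fft.rfft(sig)
    f = np.fft.rfftfreq(sig.size, 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, sig.size)

phase = np.angle(analytic(bandpass(x, 4.0, 8.0)))    # slow-band phase
amp = np.abs(analytic(bandpass(x, 70.0, 90.0)))      # fast-band amplitude

# Mean-vector modulation index: near 0 when amplitude ignores phase,
# clearly positive when amplitude is locked to a preferred phase.
mi = np.abs(np.mean(amp * np.exp(1j * phase)))
```

A permutation test (recomputing `mi` with a time-shuffled `phase`) would give the null distribution; the thesis's point is that a likelihood-based DAR model replaces these filtering choices with model selection.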
APA, Harvard, Vancouver, ISO, and other styles
16

Lopez, Marcano Juan L. "Classification of ADHD and non-ADHD Using AR Models and Machine Learning Algorithms." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73688.

Full text
Abstract:
As of 2016, the diagnosis of ADHD in the US is controversial. Diagnosis is based on subjective observations, and treatment is usually done through stimulants, which can have negative long-term side effects. Evidence shows that the probability of diagnosing a child with ADHD depends not only on the observations of parents, teachers, and behavioural scientists, but also on state-level special-education policies. In light of these facts, unbiased, quantitative methods are needed for the diagnosis of ADHD. This problem has been tackled since the 1990s, and has resulted in methods that have not made it past the research stage and methods whose claimed performance could not be reproduced. This work proposes a combination of machine learning algorithms and signal processing techniques applied to EEG data in order to classify subjects with and without ADHD with high accuracy and confidence. More specifically, the K-nearest neighbour (KNN) algorithm and Gaussian-mixture-model-based universal background models (GMM-UBM), along with autoregressive (AR) model features, are investigated and evaluated for the classification problem at hand. In this effort, classical KNN and GMM-UBM were also modified to account for uncertainty in diagnoses. The major findings reported in this work include classification performance as high as, if not higher than, that of the best-performing algorithms found in the literature, and the observation that activities requiring attention help the discrimination of ADHD and non-ADHD subjects: mixing in EEG data from periods of rest or with eyes closed degrades classification performance, to the point of approximating guessing when only resting EEG data is used.
Master of Science
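The AR-features-plus-KNN pipeline described above can be sketched end to end on synthetic data. The two AR(2) "conditions" below are hypothetical stand-ins for ADHD and non-ADHD EEG epochs (the thesis uses recorded EEG), and the lag order and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def ar_features(x, p=4):
    """Least-squares AR(p) coefficients of a 1-D signal, used as a feature vector."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def simulate(phi, n=500):
    """Simulate a stable AR(2) 'epoch' with coefficients phi = (phi1, phi2)."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.standard_normal()
    return x

# Two synthetic conditions with different spectral (AR) signatures.
train = [(ar_features(simulate([0.5, -0.3])), 0) for _ in range(20)] + \
        [(ar_features(simulate([1.2, -0.5])), 1) for _ in range(20)]

def knn_predict(f, k=3):
    """Majority vote among the k training epochs nearest in AR-coefficient space."""
    nearest = sorted(train, key=lambda fv: np.linalg.norm(fv[0] - f))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

test_acc = np.mean([knn_predict(ar_features(simulate([1.2, -0.5]))) == 1
                    for _ in range(20)])
```

The design choice mirrors the abstract: AR coefficients compress each epoch's spectral shape into a few numbers, and a simple distance-based classifier then separates the conditions.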
APA, Harvard, Vancouver, ISO, and other styles
17

Kamanu, Timothy Kevin Kuria. "Location-based estimation of the autoregressive coefficient in ARX(1) models." Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_9551_1186751947.

Full text
Abstract:

In recent years, two estimators have been proposed to correct the bias exhibited by the least-squares (LS) estimator of the lagged dependent variable (LDV) coefficient in dynamic regression models when the sample is finite. They have been termed 'mean-unbiased' and 'median-unbiased' estimators. Relative to other similar procedures in the literature, the two location-based estimators have the advantage that they offer an exact and uniform methodology for LS estimation of the LDV coefficient in a first-order autoregressive model with or without exogenous regressors, i.e. ARX(1).

However, no attempt has been made to accurately establish and/or compare the statistical properties among these estimators, or relative to those of the LS estimator when the LDV coefficient is restricted to realistic values. Neither has there been an attempt to compare their performance in terms of mean squared error (MSE) when various forms of the exogenous regressors are considered. Furthermore, only implicit confidence intervals have been given for the 'median-unbiased' estimator; explicit confidence bounds that are directly usable for inference are not available for either estimator. In this study a new estimator of the LDV coefficient is proposed: the 'most-probably-unbiased' estimator. Its performance properties vis-a-vis the existing estimators are determined and compared when the parameter space of the LDV coefficient is restricted. In addition, the following new results are established: (1) an explicit computable form for the density of the LS estimator is derived for the first time and an efficient method for its numerical evaluation is proposed; (2) the exact bias, mean, median and mode of the distribution of the LS estimator are determined in three specifications of the ARX(1) model; (3) the exact variance and MSE of the LS estimator are determined; (4) the standard errors associated with the determination of the same quantities when simulation rather than numerical integration is used are established, and the two methods are compared in terms of computational time and effort; (5) an exact method of evaluating the density of the three estimators is described; (6) their exact bias, mean, variance and MSE are determined and analysed; and finally, (7) a method of obtaining explicit exact confidence intervals from the distribution functions of the estimators is proposed.

The discussion and results show that the estimators are still biased in the usual sense, 'in expectation'. However, the bias is substantially reduced compared to that of the LS estimator. The findings are important in the specification of time-series regression models, point and interval estimation, decision theory, and simulation.
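The small-sample LS bias that motivates these estimators is easy to exhibit by Monte Carlo. The sketch below, with illustrative sample size and coefficient, shows the well-known downward bias of the LS estimate of the LDV coefficient in an AR(1) with intercept; it is a demonstration of the problem, not of the thesis's corrected estimators.

```python
import numpy as np

rng = np.random.default_rng(5)

def ls_ar1(y):
    """Least-squares estimate of phi in y_t = c + phi*y_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    return np.linalg.lstsq(X, y[1:], rcond=None)[0][1]

phi, n_obs, n_rep = 0.9, 30, 5000   # persistent coefficient, small sample
est = np.empty(n_rep)
for r in range(n_rep):
    y = np.zeros(n_obs)
    for t in range(1, n_obs):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    est[r] = ls_ar1(y)

# With T = 30 the average LS estimate falls well below the true phi = 0.9
# (roughly in line with the classical -(1 + 3*phi)/T approximation).
bias = est.mean() - phi
```

Location-based corrections of the kind studied in the thesis shift the estimator so that its mean (or median) over such replications lands back on the true value.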

APA, Harvard, Vancouver, ISO, and other styles
18

Rabah-Romdhane, Zohra. "Etudes sur le cycle économique. Une approche par les modèles à changements de régime." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0322.

Full text
Abstract:
L'ampleur de la Grande Récession a suscité un regain d'intérêt pour l'analyse conjoncturelle, plus particulièrement du cycle économique. Notre thèse participe de ce renouveau d'attention pour l'étude des fluctuations économiques.Après une présentation générale des modèles à changements de régime dans le chapitre 1, le chapitre suivant propose une chronologie du cycle des affaires de l'économie française sur la période 1970-2009. Trois méthodes de datation sont utilisées à cette fin : la règle des deux trimestres consécutifs de croissance négative, l'approche non paramétrique de Bry et Boschan (1971) et le modèle markovien à changements de régime de Hamilton (1989). Les résultats montrent que l'existence de ruptures structurelles peut empêcher ce dernier modèle d'identifier correctement les points de retournement cycliques. Cependant, quandces ruptures sont prises en considération, le calendrier des récessions françaises obtenu à l'aide du modèle d'Hamilton coïncide largement avec celui obtenu par les deux autres méthodes. Le chapitre 3 développe une analyse de la non-linéarité dans le modèle à changements de régime en utilisant un ensemble de tests non-standards. Une étude par simulation Monte Carlo révèle qu'un test récemment proposé par Carrasco, Hu et Ploberger (2013) présente une faible puissance pour des processus générateurs des données empiriquement pertinents et ce, lorsqu'on tient compte de l'autocorrélation sous l'hypothèse nulle. En revanche, untest "bootstrap" paramétrique basé sur le rapport des vraisemblances a, pour sa part une puissance plus élevée, ce qui traduit l'existence probable de non-linéarités significatives dans le PIB réel trimestriel de la France et des Etats-Unis. Quand il s'agit de tester un changement de régime en moyenne ou en constante, il est important de tenir compte de l'autocorrélation sous l'hypothèse nulle de linéarité. 
En effet, dans le cas contraire, un rejet de la linéarité pourrait simplement refléter une mauvaise spécification de la persistance des données, plutôt que d'une non-linéarité inhérente.Le chapitre 4 examine une question importante : la considération de ruptures structurelles dans les séries améliore-t-elle la performance prédictive du modèle markovien relativement à son homologue linéaire ? La démarche adoptée pour y répondre consiste à combiner les prévisions obtenues pour différentes périodes d'estimation. Voici le principal résultat dû à l'application de cette démarche : la prise en compte des données provenant des intervalles de temps précédant les ruptures structurelles et la "Grande Modération" améliore les prévisions basées sur des données tirées exclusivement de ces épisodes. De la sorte, les modèles à changements de régime s'avèrent capables de prédire la probabilité d'événements tels que la Grande Récession, avec plus de précision que ses homologues linéaires.Les conclusions générales synthétisent les principaux acquis de la thèse et évoqueplusieurs perspectives de recherche future
The severity of the Great Recession has renewed interest in the analysis of business cycles. Our thesis pertains to this revival of attention for the study of cyclical fluctuations. After reviewing regime-switching models in Chapter one, the following chapter suggests a chronology of the classical business cycle in the French economy for the 1970-2009 period. To that end, three dating methodologies are used: the rule of thumb of two consecutive quarters of negative growth, the non-parametric approach of Bry and Boschan (1971), and the Markov-switching approach of Hamilton (1989). The results show that omitted structural breaks may prevent the Markov-switching approach from capturing business-cycle fluctuations. However, when such breaks are allowed for, the timing of the French recessions provided by the Markov-switching model closely matches that derived from the rule-based approaches. Chapter 3 performs a nonlinearity analysis in Markov-switching modelling using a set of non-standard tests. Monte Carlo analysis reveals that a recently proposed test for Markov switching by Carrasco, Hu, and Ploberger (2013) has low power for empirically relevant data generating processes when allowing for serial correlation under the null. By contrast, a parametric bootstrap likelihood ratio (LR) test of Markov switching has higher power in the same setting, providing stronger support for nonlinearity in quarterly French and U.S. real GDP.
When testing for Markov switching in the mean or intercept of an autoregressive process, it is important to allow for serial correlation under the null hypothesis of linearity. Otherwise, a rejection of linearity could merely reflect misspecification of the persistence properties of the data rather than any inherent nonlinearity. Chapter 4 examines whether controlling for structural breaks improves the forecasting performance of Markov-switching models as compared to their linear counterparts. The approach taken to address this issue is to combine forecasts across different estimation windows. The outcome of applying this approach shows that including data from periods preceding structural breaks, and particularly the "Great Moderation", improves upon forecasts based on data drawn exclusively from these episodes. Accordingly, Markov-switching models forecast the probability of events such as the Great Recession more accurately than their linear counterparts. The general conclusions summarize the main results of the thesis and suggest several directions for future research
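The rule-of-thumb dating method mentioned in the abstract above is simple enough to sketch. The Python snippet below (with made-up growth figures, not actual French GDP data) flags the quarters belonging to runs of at least two consecutive quarters of negative growth:

```python
import numpy as np

def recession_quarters(growth):
    """Flag quarters belonging to a recession under the rule of thumb:
    a recession is any run of two or more consecutive quarters of
    negative growth."""
    growth = np.asarray(growth, dtype=float)
    neg = growth < 0
    flags = np.zeros(len(growth), dtype=bool)
    i = 0
    while i < len(neg):
        if neg[i]:
            j = i
            while j < len(neg) and neg[j]:
                j += 1
            if j - i >= 2:  # run of at least two negative quarters
                flags[i:j] = True
            i = j
        else:
            i += 1
    return flags

# Quarterly growth rates (illustrative numbers only)
g = [0.5, -0.2, -0.4, 0.1, -0.3, 0.2, -0.1, -0.2, -0.5]
print(recession_quarters(g).astype(int).tolist())  # → [0, 1, 1, 0, 0, 0, 1, 1, 1]
```

Note that the single negative quarter (index 4) is not flagged, which is exactly why this rule can disagree with parametric dating approaches.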
APA, Harvard, Vancouver, ISO, and other styles
19

Olivier, Adelaïde. "Analyse statistique des modèles de croissance-fragmentation." Thesis, Paris 9, 2015. http://www.theses.fr/2015PA090047/document.

Full text
Abstract:
This theoretical study is designed in close connection with a field of application: the aim is to model the growth of a population of cells that divide according to an unknown division rate, a function of a so-called structuring variable, with the age and the size of the cells being the two paradigmatic examples studied. The related mathematical field lies at the interface of statistics of processes, nonparametric estimation and the analysis of partial differential equations. The three objectives of this work are the following: to reconstruct the division rate (as a function of age or size) under different observation schemes (in genealogical or continuous time); to study the transmission of a general biological trait from one cell to another and to study the trait of a typical cell; and to compare the growth of different cell populations through the Malthus parameter (after introducing variability in the growth rate, for instance)
This work is concerned with growth-fragmentation models, implemented for investigating the growth of a population of cells which divide according to an unknown splitting rate, depending on a structuring variable, with age and size being the two paradigmatic examples. The mathematical framework includes statistics of processes, nonparametric estimation and the analysis of partial differential equations. The three objectives of this work are the following: to obtain a nonparametric estimate of the division rate (as a function of age or size) for different observation schemes (genealogical or continuous); to study the transmission of a biological feature from one cell to another and the feature of a typical cell; and to compare different populations of cells through their Malthus parameter, which governs the global growth (when introducing variability in the growth rate among cells, for instance)
20

Relvas, Carlos Eduardo Martins. "Modelos parcialmente lineares com erros simétricos autoregressivos de primeira ordem." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-28052013-182956/.

Full text
Abstract:
In this work, we present the symmetric partially linear AR(1) models, which generalize partially linear models to allow for autocorrelated errors following an AR(1) autocorrelation structure and for errors following a symmetric distribution instead of the normal. Among the symmetric distributions, we may consider distributions with heavier tails than the normal, controlling the kurtosis and down-weighting outlying observations in the estimation process. Parameter estimation is carried out by a penalized likelihood criterion, which uses the score functions and the Fisher information matrix, all of which are derived in this work. The effective number of degrees of freedom and asymptotic results are also presented, as well as diagnostic procedures, notably the normal curvature of local influence under different perturbation schemes and residual analysis. An application with real data is presented as illustration.
In this master's dissertation, we present the symmetric partially linear models with AR(1) errors, which generalize the normal partially linear models to contain AR(1) autocorrelated errors following a symmetric distribution instead of the normal distribution. Among the symmetric distributions, we can consider distributions with heavier tails than the normal, controlling the kurtosis and down-weighting outlying observations in the estimation process. The parameter estimation is carried out through penalized likelihood, using the score functions and the expected Fisher information, both of which are derived in this work. The effective degrees of freedom and asymptotic results are also presented, as well as diagnostic procedures, including the normal curvature of local influence under different perturbation schemes and residual analysis. An application with real data is given for illustration.
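The AR(1) error structure described above can be illustrated with a minimal sketch. The classic Cochrane-Orcutt iteration below (plain numpy, normal errors, synthetic data) is not the penalized symmetric-likelihood estimator of the dissertation, but it shows the basic idea of alternating between estimating the linear part and the autocorrelation parameter of the errors:

```python
import numpy as np

def cochrane_orcutt(y, X, n_iter=50, tol=1e-8):
    """Classic Cochrane-Orcutt iteration for y = X beta + u,
    u_t = rho * u_{t-1} + e_t.  Alternates OLS on quasi-differenced
    data with an AR(1) fit to the residuals."""
    rho = 0.0
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        # quasi-difference the data with the current rho
        ys = y[1:] - rho * y[:-1]
        Xs = X[1:] - rho * X[:-1]
        beta_new = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        u = y - X @ beta_new
        rho_new = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])
        if abs(rho_new - rho) < tol and np.allclose(beta_new, beta):
            beta, rho = beta_new, rho_new
            break
        beta, rho = beta_new, rho_new
    return beta, rho

# synthetic example: beta = (1, 2), rho = 0.6
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal(scale=0.5)
y = X @ np.array([1.0, 2.0]) + u
beta, rho = cochrane_orcutt(y, X)
print(np.round(beta, 2), round(rho, 2))
```

With heavy-tailed symmetric errors the dissertation's approach additionally down-weights outliers; this sketch assumes Gaussian innovations throughout.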
21

Van, Heerden Petrus Marthinus Stephanus. "The relationship between the forward– and the realized spot exchange rate in South Africa / Petrus Marthinus Stephanus van Heerden." Thesis, North-West University, 2010. http://hdl.handle.net/10394/4511.

Full text
Abstract:
The inability to effectively hedge against unfavourable exchange rate movements, using the current forward exchange rate as the only guideline, is a key inhibiting factor of international trade. Market participants use the current forward exchange rate quoted in the market to make decisions regarding future exchange rate changes. However, the current forward exchange rate is not solely determined by the interaction of demand and supply, but is also a mechanistic estimation, which is based on the current spot exchange rate and the carry cost of the transaction. Results of various studies, including this study, demonstrate that the current forward exchange rate differs substantially from the realized future spot exchange rate. This phenomenon is known as the exchange rate puzzle. This study contributes to the dynamics of modelling exchange rate theories by developing an exchange rate model that has the ability to explain the realized future spot exchange rate and the exchange rate puzzle. The exchange rate model is based only on current (time t) economic fundamentals and includes an alternative approach of incorporating the impact of the interaction of two international financial markets into the model. This study derived a unique exchange rate model, which proves that the exchange rate puzzle is a pseudo problem. The pseudo problem is based on the generally accepted fallacy that current non-stationary, level time series data cannot be used to model exchange rate theories, because of the incorrect assumption that all the available econometric methods yield statistically insignificant results due to spurious regressions. Empirical evidence conclusively shows that using non-stationary, level time series data of current economic fundamentals can statistically significantly explain the realized future spot exchange rate and, therefore, that the exchange rate puzzle can be solved.
This model will give market participants in the foreign exchange market a better indication of expected future exchange rates, which will considerably reduce the dependence on the mechanistically derived forward points. The newly derived exchange rate model will also have an influence on the demand and supply of forward exchange, resulting in forward points that are a more accurate prediction of the realized future exchange rate.
Thesis (Ph.D. (Risk management))--North-West University, Potchefstroom Campus, 2011.
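The "mechanistic estimation" of the forward rate mentioned in the abstract, based on the current spot rate and the carry cost, can be sketched via covered interest parity. The numbers below are purely illustrative, not actual ZAR market data:

```python
def forward_rate(spot, r_domestic, r_foreign, days, basis=365):
    """Mechanistic forward rate implied by covered interest parity:
    F = S * (1 + r_d * t) / (1 + r_f * t), with t in years.
    The difference F - S is the carry-cost term quoted as 'forward points'."""
    t = days / basis
    return spot * (1 + r_domestic * t) / (1 + r_foreign * t)

# Illustrative figures: spot 7.50 (domestic per foreign unit),
# 7% domestic rate, 2% foreign rate, 90-day forward
f = forward_rate(7.50, 0.07, 0.02, 90)
points = f - 7.50
print(round(f, 4), round(points, 4))
```

The thesis's point is precisely that this mechanistic F is a poor predictor of the realized future spot rate, which is why a fundamentals-based model is proposed instead.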
22

Kahaei, Mohammad Hossein. "Performance analysis of adaptive lattice filters for FM signals and alpha-stable processes." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36044/7/36044_Digitised_Thesis.pdf.

Full text
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property of adaptive lattice filters: the polynomial order reducing property. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which yields desirable results for finite variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concept of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
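The least-mean P-norm update underlying the proposed lattice algorithms can be sketched in its simpler transversal (non-lattice) form. The snippet below is an illustration of the p-norm stochastic gradient update applied to a synthetic system identification task, not the thesis's lattice implementation, and the signals and step size are made-up assumptions:

```python
import numpy as np

def lmp_filter(x, d, n_taps=2, mu=0.01, p=1.5):
    """Least-mean p-norm (LMP) adaptive transversal filter.
    Minimises E|e|^p via the stochastic gradient update
        w <- w + mu * p * |e|^(p-1) * sign(e) * x_vec.
    For p = 2 this reduces to the ordinary LMS algorithm; p < 2
    tempers the influence of large (impulsive) errors."""
    w = np.zeros(n_taps)
    for t in range(n_taps - 1, len(x)):
        x_vec = x[t - n_taps + 1:t + 1][::-1]  # [x[t], x[t-1], ...]
        e = d[t] - w @ x_vec
        w += mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x_vec
    return w

rng = np.random.default_rng(1)
x = rng.normal(size=20000)
true_w = np.array([0.8, -0.3])
# desired signal: x passed through an unknown 2-tap system plus light noise
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.normal(size=len(x))
w = lmp_filter(x, d)
print(np.round(w, 2))
```

With genuinely alpha-stable inputs the step-size and criterion choices matter far more; this Gaussian example only demonstrates the mechanics of the update.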
23

Petitjean, Julien. "Contributions au traitement spatio-temporel fondé sur un modèle autorégressif vectoriel des interférences pour améliorer la détection de petites cibles lentes dans un environnement de fouillis hétérogène Gaussien et non Gaussien." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14157/document.

Full text
Abstract:
This thesis deals with space-time adaptive processing in the radar domain. To increase detection performance, this approach consists in maximizing the ratio between the power of the target and that of the interference, namely thermal noise and clutter. Many variants of this algorithm exist; one of them is based on a vector autoregressive modelling of the interference. Its main difficulty lies in estimating the autoregressive matrices from training data, and this point is the focus of our research. In particular, our contribution covers two aspects. On the one hand, when the thermal noise is assumed negligible compared with the non-Gaussian clutter, the autoregressive matrices are estimated using the fixed-point method. The algorithm is thus robust to the non-Gaussian distribution of the clutter. On the other hand, we propose a new interference model distinguishing thermal noise from clutter: the clutter is considered a Gaussian vector autoregressive process disturbed by white thermal noise. New techniques for estimating the autoregressive matrices are therefore proposed. The first is a blind block estimation based on the errors-in-variables technique, so that the estimation of the autoregressive matrices remains robust for a low ratio between target power and clutter power (< 5 dB). Recursive methods were then developed, based on Kalman-type approaches (extended Kalman filtering and sigma-point filtering, UKF and CDKF) as well as on the H∞ filter. A comparative study on synthetic and real data, with Gaussian and non-Gaussian clutter, is carried out to assess the relevance of the various estimators in terms of detection probability
This dissertation deals with space-time adaptive processing in the radar field. To improve detection performance, this approach consists in maximizing the ratio between the power of the target and that of the interference, i.e. the thermal noise and the clutter. Several variants of this algorithm exist; one of them is based on multichannel autoregressive modelling of the interference. Its main difficulty lies in the estimation of the autoregressive matrices from training data, and this guides our research. Our contribution is twofold. On the one hand, when thermal noise is considered negligible, the autoregressive matrices are estimated with the fixed-point method. Thus, the algorithm is robust against non-Gaussian clutter. On the other hand, a new modelling of the interference is proposed in which the clutter and the thermal noise are separated: the clutter is considered a multichannel autoregressive process which is Gaussian and disturbed by the white thermal noise. New estimation algorithms are developed accordingly. The first one is a blind estimation based on the errors-in-variables method. Then, recursive approaches based on extensions of the Kalman filter are proposed: the extended Kalman filter, the sigma-point Kalman filters (UKF and CDKF), and the H∞ filter. A comparative study on synthetic and real data with Gaussian and non-Gaussian clutter is carried out to show the relevance of the different algorithms in terms of detection probability
24

Ahmed, Mohamed Salem. "Contribution à la statistique spatiale et l'analyse de données fonctionnelles." Thesis, Lille 3, 2017. http://www.theses.fr/2017LIL30047/document.

Full text
Abstract:
This thesis deals with statistical inference for spatial and/or functional data. We are interested in estimating unknown parameters of certain models from samples obtained through random or non-random (stratified) sampling, composed of independent or spatially dependent variables. The specificity of the proposed methods lies in the fact that they take into account the nature of the sample under study (a stratified sample or one composed of spatially dependent data). First, we study data valued in an infinite-dimensional space, so-called "functional data". We begin with functional binary choice models in a context of endogenous stratified sampling (case-control or choice-based sampling). The specificity of this study is that the proposed method takes the sampling scheme into consideration. We describe a conditional likelihood function under the considered sampling and a dimension-reduction strategy in order to introduce a conditional maximum likelihood estimation of the model. We study the asymptotic properties of the proposed estimators as well as their application to simulated and real data. We then turn to a functional spatial autoregressive linear model. The particularity of the model lies in the functional nature of the explanatory variable and in the spatial dependence structure of the variables of the considered sample. The estimation procedure we propose consists in reducing the infinite dimension of the functional explanatory variable and in maximizing a quasi-likelihood associated with the model.
We establish the consistency, asymptotic normality and numerical performance of the proposed estimators. In the second part of the thesis, we address regression and prediction problems for real-valued dependent variables. We begin by generalizing the k-nearest neighbors (k-NN) method in order to predict a spatial process at unobserved sites in the presence of spatial covariates. The specificity of the proposed predictor is that it accounts for heterogeneity in the covariate used. We establish the almost complete convergence, with rate, of the predictor and give numerical results using simulated and environmental data. We then generalize the partially linear probit model for independent data to spatial data. We use a linear spatial process to model the disturbances of the considered process, allowing more flexibility and encompassing several types of spatial dependence. We propose a semiparametric estimation approach based on a weighted likelihood and the generalized method of moments, and study its asymptotic properties and numerical performance. A study of the risk factors for UADT (upper aerodigestive tract) cancers in the north region of France using spatial binary choice models concludes our contribution
This thesis is about statistical inference for spatial and/or functional data. Indeed, we are interested in the estimation of unknown parameters of some models from random or non-random (stratified) samples composed of independent or spatially dependent variables. The specificity of the proposed methods lies in the fact that they take the nature of the considered sample (stratified or spatial) into consideration. We begin by studying data valued in a space of infinite dimension, so-called "functional data". First, we study a functional binary choice model explored in a case-control or choice-based sample design context. The specificity of this study is that the proposed method takes the sampling scheme into account. We describe a conditional likelihood function under the sampling distribution and a dimension-reduction strategy to define a feasible conditional maximum likelihood estimator of the model. Asymptotic properties of the proposed estimators as well as their application to simulated and real data are given. Secondly, we explore a functional linear autoregressive spatial model whose particularity lies in the functional nature of the explanatory variable and the structure of the spatial dependence. The estimation procedure consists of reducing the infinite dimension of the functional variable and maximizing a quasi-likelihood function. We establish the consistency and asymptotic normality of the estimator. The usefulness of the methodology is illustrated via simulations and an application to some real data.
In the second part of the thesis, we address some estimation and prediction problems for real random spatial variables. We start by generalizing the k-nearest neighbors method, namely k-NN, to predict a spatial process at non-observed locations using some covariates. The specificity of the proposed k-NN predictor lies in the fact that it is flexible and allows for a degree of heterogeneity in the covariate. We establish the almost complete convergence, with rates, of the spatial predictor, whose performance is demonstrated by an application to simulated and environmental data. In addition, we generalize the partially linear probit model for independent data to the spatial case. We use a linear process for the disturbances, allowing various spatial dependencies, and propose a semiparametric estimation approach based on weighted likelihood and generalized method of moments. We establish the consistency and asymptotic distribution of the proposed estimators and investigate the finite sample performance of the estimators on simulated data. We end with an application of spatial binary choice models to identify UADT (upper aerodigestive tract) cancer risk factors in the north region of France, which displays the highest rates of such cancer incidence and mortality in the country
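A minimal sketch of k-NN spatial prediction with a covariate, in the spirit of the predictor described above (though without its heterogeneity-adaptive features). The data, the field, and the distance weighting below are synthetic assumptions:

```python
import numpy as np

def knn_spatial_predict(sites, values, covars, new_site, new_covar, k=5, w=1.0):
    """Minimal k-NN spatial predictor: average the values of the k
    observed sites closest to the target in a joint (location, covariate)
    metric.  w trades off spatial distance against covariate distance."""
    d_space = np.linalg.norm(sites - new_site, axis=1)
    d_covar = np.abs(covars - new_covar)
    idx = np.argsort(d_space + w * d_covar)[:k]
    return values[idx].mean()

rng = np.random.default_rng(2)
sites = rng.uniform(0, 10, size=(500, 2))       # observed locations
covars = rng.uniform(0, 1, size=500)            # one covariate per site
# a smooth spatial field plus a covariate effect (synthetic example)
values = np.sin(sites[:, 0]) + sites[:, 1] / 10 + 2 * covars
pred = knn_spatial_predict(sites, values, covars, np.array([5.0, 5.0]), 0.5)
true = np.sin(5.0) + 0.5 + 1.0
print(round(pred, 2), round(true, 2))
```

The thesis's predictor additionally lets the covariate weighting adapt locally; here w is a fixed global constant.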
25

Shakibi, Babak. "Resolution Enhancement of Ultrasonic Signals using Autoregressive Spectral Extrapolation." Thesis, 2011. http://hdl.handle.net/1807/29619.

Full text
Abstract:
Time of Flight Diffraction (TOFD) is one of the most accurate ultrasonic methods for crack detection and sizing in pipeline girth welds. Its performance, however, is limited by the temporal resolution of the signal. In this thesis, we develop a signal processing method based on autoregressive spectral extrapolation to improve the temporal resolution of ultrasonic signals. The original method cannot be used in industrial applications since its performance is highly dependent on the selection of a number of free parameters. This method is modified by optimizing its various steps and limiting the number of free parameters, and an automated algorithm for selecting values of the remaining free parameters is proposed based on the analysis of a large set of synthetic signals. The performance of the final algorithm is evaluated using experimental data; it is shown that the uncertainty in crack sizing accuracy can be reduced by as much as 80%. Furthermore, the proposed method is shown to be capable of resolving overlapping echoes; therefore, smaller cracks, whose echoes are not clearly resolved in the raw signal, can be detected and sized in the enhanced signal.
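The core idea of autoregressive spectral extrapolation, fitting an AR model to the reliable in-band portion of the spectrum and extrapolating it beyond the transducer bandwidth to sharpen time resolution, can be sketched as follows. The AR fit here is a plain least-squares one rather than the optimized procedure of the thesis, and the "spectrum" is a synthetic complex exponential (which a low-order AR model predicts exactly):

```python
import numpy as np

def ar_extrapolate(x, order, n_extra):
    """Fit an AR model to the sequence x by least squares and use it to
    extrapolate n_extra further samples.  In AR spectral extrapolation
    the 'sequence' is the reliable in-band portion of the spectrum and
    the extrapolated samples extend it beyond the measured bandwidth."""
    x = np.asarray(x, dtype=complex)
    # design matrix of lagged samples: x[t] ~ sum_k a[k] * x[t-1-k]
    rows = [x[t - 1::-1][:order] for t in range(order, len(x))]
    A = np.array(rows)
    b = x[order:]
    a = np.linalg.lstsq(A, b, rcond=None)[0]
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, out[-1:-order - 1:-1]))
    return np.array(out)

# band-limited 'spectrum': a single complex exponential
n = np.arange(32)
spec = np.exp(1j * 0.3 * n)
ext = ar_extrapolate(spec, order=2, n_extra=16)
err = np.max(np.abs(ext[32:] - np.exp(1j * 0.3 * np.arange(32, 48))))
print(err < 1e-8)
```

Real ultrasonic spectra need careful choices of model order, fit band, and regularization, which is exactly the free-parameter selection problem the thesis automates.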
26

Nunes, Diana Catherina Manaig. "Modelling granules size distribution produced on a continuous manufacturating line with non-linear autoregressive artificial neural networks." Master's thesis, 2018. http://hdl.handle.net/10451/40066.

Full text
Abstract:
Master's thesis, Pharmaceutical Engineering, Universidade de Lisboa, Faculdade de Farmácia, 2018
Particle size is a critical quality parameter in several pharmaceutical unit operations. An adequate particle size distribution is essential to ensure optimal manufacturability which, in turn, has an important impact on the safety, efficacy and quality of the end product. Thus, the monitoring and control of the particle size via in-process size measurements is crucial to the pharmaceutical industry. Currently, a wide range of techniques are available for the determination of particle size distribution, however a technique that enables relevant real-time process data is highly preferable, as a better understanding and control over the process is offered. The pharmaceutical industry follows the “technology-push model” as it depends on scientific and technological advances. Hence, optimization of product monitoring technologies for drug products have been receiving more attention as it helps to increase profitability. An increasing interest in the usage of virtual instruments as an alternative to physical instruments has arisen in recent years. A software sensor utilizes information collected from a process operation to estimate values of some property of interest, typically difficult to measure experimentally. One of the most significant benefits of the computational approach is the possibility to adapt the measuring system through several optimization solutions. The present thesis focuses on the development of a mathematical dynamic model capable of predicting particle size distribution in-real time. For this purpose, multivariate data coming from univariate sensors placed in multiple locations of the continuous production line, ConsiGmaTM-25, was utilized to determine the size distribution (d50) of granules evaluated at a specific site within the line. The ConsiGmaTM-25 system is a continuous granulation line developed by GEA Pharma. 
It consists of three modules: a continuous twin-screw granulation module, a six-segmented cell fluid bed dryer and a product control unit. In the continuous granulation module, granules are produced inside the twin-screw granulator via mixing of the powder and the granulation liquid (water) fed into the granulation barrel. Once the granulation operation is finalized, the produced granules are pneumatically transferred to the fluid bed dryer module. In the dryer module, the granules are relocated to one specific dryer cell, where drying is performed for a pre-defined period of time. The dry granules are subsequently transported to the product control hopper with an integrated mill situated in the product control unit. The granules are milled, and the resulting product is gravitationally discharged and can undergo further processing steps, such as blending, tableting and coating. The size distribution (d50) of the granules to be determined in this work was assessed inside dryer cell no.4, located in the dryer module. The size distribution was measured every ten seconds by a focused beam reflectance measurement technique. A non-linear autoregressive network with exogenous inputs was developed to achieve accurate predictions of granule size distribution values. The development of the predictive model consisted of the implementation of an optimization strategy in terms of topology, inputs, delays and training methodology. The network was trained against the d50 obtained from the particle size distribution collected in situ by the focused beam reflectance measurement technique mentioned above. The model presented the ability to predict the d50 value from the beginning to the end of the several drying cycles. The accuracy of the artificial neural network was determined by a root mean squared error of prediction of 6.9%, which demonstrated its capability to produce results close to the experimental data of the cycles/runs included in the testing set.
The predictive ability of the neural network, however, could not be extended to drying cycles that presented irregular fluctuations. Due to the importance of precise monitoring of the size distribution within pharmaceutical operations, a future adjustment of the optimization strategy is of great interest. In the future, a higher number of experimental runs/cycles can be used during the training process to enable the network to identify and predict atypical cases more easily. In addition, a more realistic optimization strategy could be performed for all process parameters simultaneously through the implementation of a genetic algorithm, for example. Changes in terms of network topology can also be considered.
Particle size is a critical quality attribute in several unit operations of the pharmaceutical industry. An adequate particle size distribution is essential to ensure ideal manufacturing conditions, which in turn has a significant impact on the safety, efficacy and quality of the final product. Monitoring and controlling particle size through in-process measurements is therefore considered crucial for the industry. A wide range of techniques is currently available for determining the particle size distribution; however, a technique that provides relevant data in real time is highly preferable, since it yields better understanding and control of the process. The pharmaceutical industry depends heavily on scientific and technological advances, and in recent years there has been growing interest in using virtual instruments as an alternative to physical instrumentation for product monitoring. A soft sensor uses the information contained in a given data set to produce adequate measurements of a property of interest, and one of the most important advantages of this computational approach is the possibility of adapting the measurement system through a variety of optimisation methods. This thesis focuses on the development of a dynamic mathematical model capable of predicting the particle size distribution in real time. To that end, multivariate data generated every second by sensors located at multiple points of the ConsiGma™-25 continuous production line were used to determine the granule size distribution (d50) evaluated at a specific point of the line.
The ConsiGma™-25 system is a continuous granule production line that can be divided into three main modules: a continuous granulator, a fluid-bed dryer and a product conditioning unit. In the granulation module, granules are produced by mixing powder with water (the granulation liquid). Once this unit operation is complete, the granules are pneumatically transferred to the fluid-bed dryer, where they are fed into one of six drying cells and dried for a predefined period of time. The resulting dry granules are then transferred to the product conditioning unit, which integrates a mill responsible for the milling operation. The milled material is discharged by gravity and can be further processed through operations such as blending, compression or coating. The granule size distribution (d50) determined in this work was measured every ten seconds by focused beam reflectance measurement. A total of sixteen runs carried out in August were used. For each run, data on process parameters such as pressures, temperatures and air flows, among others, as well as the granule size distribution (d50), were made available. Owing to the time discrepancy between the process data and the granule d50 values, several data-processing steps were executed, essentially in three distinct phases: alignment, filtering and organisation/fragmentation. Once processing was complete, the data were used to develop the predictive model (a neural network).
A non-linear autoregressive neural network with three exogenous inputs was developed to predict the granule size distribution (d50). Developing the predictive model consisted of implementing an optimisation strategy in terms of topology, delays, input data, run selection and training methodology. For each process variable (input), a delay was assigned based on assumptions grounded in residence-time studies of the three modules of the continuous line. The input data were selected from the output of a mathematical model developed to identify the set of variables yielding the lowest mean squared error in predicting the property of interest, d50. To enable network training, the fragmented data were divided into two main sets: training and test. The network was trained and validated on the training data, and the test data were then used to assess the predictive capability of the optimised model. The model was able to predict the d50 value over the various drying cycles. The accuracy of the neural network corresponded to a mean squared prediction error of 6.9%, demonstrating its ability to produce results close to the experimental data included in the test set. The predictive capability of the network, however, did not extend to atypical cases. Given the importance of accurately monitoring the size distribution in pharmaceutical operations, a future change to the implemented optimisation strategy is highly advisable: using a larger number of drying cycles/runs during training may enable the network to identify and predict atypical cases more readily.
Additionally, a more realistic optimisation strategy could be executed for all process parameters simultaneously by implementing a genetic algorithm, and changes to the network topology could also be considered.
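This abstract describes a non-linear autoregressive network with exogenous inputs (NARX) that maps delayed process variables to the granule d50. As a rough illustration of that structure only, not the thesis's actual model, the sketch below builds a lagged regressor matrix with one hypothetical delay per exogenous input and fits a small fixed-random-weight tanh network on synthetic data; all names, delays and sizes are illustrative assumptions.

```python
import numpy as np

def narx_features(y, exo, y_lags=2, exo_delays=(3, 5, 7)):
    """Stack lagged target values with per-variable delayed exogenous inputs.

    y          : (T,) target series (standing in for granule d50)
    exo        : (T, k) exogenous process variables
    exo_delays : one assumed delay per exogenous input, mimicking the
                 residence-time-based delays described in the abstract
    """
    start = max(y_lags, max(exo_delays))
    rows = []
    for t in range(start, len(y)):
        lagged_y = [y[t - j] for j in range(1, y_lags + 1)]
        delayed_x = [exo[t - d, i] for i, d in enumerate(exo_delays)]
        rows.append(lagged_y + delayed_x)
    return np.array(rows), y[start:]

rng = np.random.default_rng(0)

# Synthetic stand-in data: three exogenous inputs driving a noisy target.
T = 400
exo = rng.normal(size=(T, 3))
y = np.zeros(T)
for t in range(7, T):
    y[t] = 0.6 * y[t - 1] + 0.3 * np.tanh(exo[t - 3, 0]) + 0.05 * rng.normal()

X, target = narx_features(y, exo)

# One hidden tanh layer with fixed random weights plus an intercept, solved
# by least squares -- a cheap stand-in for full backpropagation training.
W = rng.normal(size=(X.shape[1], 16))
H = np.column_stack([np.tanh(X @ W), np.ones(len(X))])
beta, *_ = np.linalg.lstsq(H, target, rcond=None)
pred = H @ beta
rmse = np.sqrt(np.mean((pred - target) ** 2))
print(f"in-sample RMSE: {rmse:.4f}")
```

The random-feature readout keeps the example short; the thesis instead optimises topology, delays, and training methodology for a fully trained network.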
APA, Harvard, Vancouver, ISO, and other styles
27

LEE, SONG-SIOU, and 李松修. "The Non-linear Adjustment of Taiwan Stock Index Returns and Macroeconomic Variables: Using Smooth Transition Autoregressive STAR-ANSTGARCH Model." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/54661775463446713392.

Full text
Abstract:
Master's thesis
National Taipei University
Department of Business Administration
100 (ROC year, i.e. 2011-12)
This paper investigates how TAIEX stock returns adjusted along their time path, with and without macroeconomic variables, using monthly data from June 1996 to June 2011. Employing the smooth transition autoregressive (STAR) model and the ANSTGARCH model to depict the asymmetric and nonlinear behaviour of TAIEX returns, the findings are as follows. 1. TAIEX returns adjust along a non-linear path. 2. The LSTAR model outperforms the ESTAR model in capturing the TAIEX return adjustment process. 3. The STAR-ANSTGARCH model properly estimates the asymmetric and nonlinear behaviour of the conditional mean and variance of TAIEX returns. 4. The Taiwan coincident indicator effectively explains both the conditional mean and the conditional variance of TAIEX returns.
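The LSTAR-versus-ESTAR comparison in this abstract hinges on the shape of the transition function: a logistic transition separates regimes by the sign of the deviation from the threshold, while an exponential transition separates them by its magnitude. A minimal sketch with illustrative parameters, not estimates from the thesis:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """LSTAR: monotone in s, so dynamics differ below vs above the threshold c."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def exponential_transition(s, gamma, c):
    """ESTAR: symmetric in |s - c|, so dynamics differ with distance from c."""
    return 1.0 - np.exp(-gamma * (s - c) ** 2)

def star_prediction(y_lag, s, phi_low, phi_high, gamma=2.0, c=0.0,
                    transition=logistic_transition):
    """One-step STAR forecast: a smooth mix of two AR(1) regimes."""
    G = transition(s, gamma, c)
    return (1.0 - G) * phi_low * y_lag + G * phi_high * y_lag

s = np.array([-2.0, 0.0, 2.0])
print(logistic_transition(s, 2.0, 0.0))     # monotone: distinguishes sign
print(exponential_transition(s, 2.0, 0.0))  # symmetric: distinguishes magnitude
```

As gamma grows, the logistic transition approaches a hard two-regime threshold model, which is why STAR models nest threshold autoregressions as a limiting case.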
28

Huang, Hsiao-Yun, and 黃筱雲. "A Study of Non-linear Co-relation between Mutual Fund’s Holding Rate and Stock Price under Different Stock Market Conditions - Panel Threshold Autoregressive Model Approach." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/57443962123975402196.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Banking and Finance, Executive Master's Program
95 (ROC year, i.e. 2006-07)
The purpose of this paper is to study the "threshold effect" between the proportion of mutual fund holdings in electronics companies and their stock prices by applying the panel threshold autoregressive model. It examines whether there are one or more optimal thresholds at which the relation between the proportion of stock holdings and stock prices reverses. From the empirical results, we conclude that investors can increase their stock holdings when the proportion of mutual fund holdings starts to rise from a very low level. Even if stock prices initially fall when mutual fund holdings begin to increase in a falling market, stock prices eventually rise from a long-term perspective. On the other hand, the results are less conclusive about whether stock prices rise or fall when the market is consolidating. Finally, transaction volume is positively related to stock prices and is therefore a useful indicator, whereas the Nasdaq index and the exchange rate are less useful.
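Threshold models of this kind are typically estimated by searching for the split point that minimizes the sum of squared residuals of a two-regime fit over a grid of candidate thresholds. The toy single-equation sketch below illustrates that search mechanics on synthetic data with an assumed true threshold of 0.4; it is not the thesis's panel estimator.

```python
import numpy as np

def threshold_ssr(x, y, c):
    """Sum of squared residuals of a two-regime linear fit split at threshold c."""
    ssr = 0.0
    for mask in (x <= c, x > c):
        if mask.sum() < 2:
            return np.inf            # regime too small to fit
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        r = y[mask] - X @ beta
        ssr += r @ r
    return ssr

def estimate_threshold(x, y, grid):
    """Grid-search the threshold value that minimizes the two-regime SSR."""
    ssrs = [threshold_ssr(x, y, c) for c in grid]
    return grid[int(np.argmin(ssrs))]

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 500)            # e.g. a holding-ratio-like variable
# Synthetic regimes with a level shift at the (assumed) true threshold 0.4.
y = np.where(x <= 0.4, 0.5 + 2.0 * x, -0.5 + 2.0 * x)
y = y + rng.normal(scale=0.1, size=500)
grid = np.linspace(0.1, 0.9, 81)
c_hat = estimate_threshold(x, y, grid)
print(f"estimated threshold: {c_hat:.2f}")
```

A panel version repeats this fit within each cross-sectional unit before summing the residuals, but the grid search over candidate thresholds is the same.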
29

Zhang, Kai. "Does purchasing power parity hold between European countries? : investigation using non-linear STAR model." Master's thesis, 2012. http://hdl.handle.net/10400.14/15429.

Full text
Abstract:
This paper investigates whether purchasing power parity (PPP) holds between European countries. PPP is a fundamental building block of international economics: its study not only helps economists understand exchange rate behaviour, but also assists monetary policymakers in establishing sensible exchange rate policies. To examine the validity of PPP in Europe, the paper starts with a general introduction to the importance of exchange rates, followed by an in-depth overview of PPP studies and developments in econometric technique since the 1970s. We applied the non-linear Exponential Smooth Transition Autoregressive (ESTAR) framework, through a new test (the KSS test), alongside the ADF test, to the real exchange rates between 9 European countries. The results demonstrate that the new test gives more support to PPP than the ADF test: while the linear ADF test rejects a unit root in only 6 cases, the new test rejects it in 13 of 36 cases, in favour of PPP and offering evidence of non-linear mean reversion in real exchange rates.
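The KSS test referred to here regresses the differenced series on the cubed lagged level, Δy_t = δ·y_{t-1}³ + Σᵢ ρᵢ·Δy_{t-i} + ε_t, and compares the t-statistic on δ with the nonstandard critical values tabulated by Kapetanios, Shin and Snell (2003). A minimal numpy sketch of the demeaned case; the simulated ESTAR series and all settings are illustrative, not the paper's data:

```python
import numpy as np

def kss_statistic(y, p=1):
    """t-statistic on delta in  dy_t = delta*y_{t-1}^3 + sum_i rho_i*dy_{t-i} + e_t.

    Under the null y has a unit root; under the alternative it is a globally
    stationary ESTAR process.  The statistic is compared with the nonstandard
    critical values tabulated by Kapetanios, Shin and Snell (2003).
    """
    y = y - y.mean()                 # demeaned case of the test
    dy = np.diff(y)
    ycube = y[:-1] ** 3              # y_{t-1}^3, aligned with dy
    rows, resp = [], []
    for t in range(p, len(dy)):
        rows.append([ycube[t]] + [dy[t - i] for i in range(1, p + 1)])
        resp.append(dy[t])
    X, z = np.array(rows), np.array(resp)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(z) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])

# Simulate a globally stationary ESTAR series: near unit root around zero,
# increasingly mean-reverting as deviations grow (parameters illustrative).
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    g = 1.0 - np.exp(-0.5 * y[t - 1] ** 2)
    y[t] = (1.0 - 0.6 * g) * y[t - 1] + rng.normal(scale=0.5)

stat = kss_statistic(y)
print(f"KSS t-statistic: {stat:.2f}")
```

The cubic regressor is the first-order Taylor-type approximation of the ESTAR alternative around the unit-root null, which is what gives the test its power against non-linear mean reversion.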
30

Ngwangwa, Harry Magadhlela. "Road surface profile monitoring based on vehicle response and artificial neural network simulation." Thesis, 2015. http://hdl.handle.net/2263/43788.

Full text
Abstract:
Road damage identification is still largely based on visual inspection methods and profilometer data. Visual inspection methods rely heavily on expert knowledge, which is often very subjective, and they interfere with traffic flow because traffic must be redirected to alternative routes during inspection. In addition, accurate high-speed profilometers, such as scanning vehicles, are extremely expensive, often requiring strong economic justification for their acquisition, while low-cost profilometers are very slow, typically operating at or below walking speed, which makes their use labour-intensive on large networks. This study aims to develop a road damage identification methodology for both paved and unpaved roads based on modelling the road-vehicle interaction system with an artificial neural network. The network is created and trained with vehicle acceleration data as inputs and road profiles as targets; the trained network is then used to reconstruct road profiles when simulated with vertical vehicle accelerations. The simulation process is very fast and can often be completed in a very short time, making real-time implementation possible. Three case studies demonstrate the feasibility of the methodology. In field tests carried out on mine vehicles with crudely measured road profiles, the majority of the tested roads were reconstructed to within a fitting accuracy of less than 40% at a correlation level greater than 55%, which was found to be practically acceptable in this study given the limitations imposed by the sizes of the haul trucks and their tyres, the quality of the road profiles, and the lack of control over vehicle operation.
Thesis (PhD)--University of Pretoria, 2015.
Mechanical and Aeronautical Engineering
Unrestricted
31

Costa, Sofia Martinho de Almeida. "Cross-sectional modeling of bank deposits." Master's thesis, 2019. http://hdl.handle.net/10362/91822.

Full text
Abstract:
The dynamics of liquidity risk is an important issue for banks' activity. It can be approached by studying the evolution of banks' client deposits, in order to mitigate the probability of bankruptcy and to manage banks' resources efficiently. A sound liquidity risk model is also an important component of any liquidity stress-testing methodology. In this research, we aim to develop a model that can help banks manage their activity properly by explaining the evolution of client deposits through time. For this purpose, we considered momentum, a tool frequently used in finance to clarify observed trends, and obtained an AR(2) model that was then used, in the R software, to simulate trajectories of possible evolutions of the deposits. Another feature we considered was panel data: by including different banks in our sample, the simulations generate varied trajectories, covering both good and bad scenarios, which is useful for stress-testing purposes. The model most often referred to in the literature is the AR(1) model with a single time series, which often does not generate distress episodes. To validate our model we performed several tests, including tests of the normality and autocorrelation of its residuals. Furthermore, we compared the model most used in the literature against two different individual banks. We simulated trajectories for all cases and evaluated them with indicators such as the maximum drawdown and density plots. When simulating trajectories for banks' deposits, the panel data model gives more realistic scenarios, including episodes of financial distress, showing much higher drawdowns and density plots spanning a wide range of possible values, corresponding to booms and financial crises. Our methodology is therefore more suitable for planning the management of banks' resources, as well as for conducting liquidity stress tests.
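The simulation step described here, fitting an AR(2) and generating deposit trajectories that are then summarized by maximum drawdown, can be sketched as follows. The thesis works in R; this is a Python stand-in, and the intercept, coefficients and horizon are illustrative assumptions, not the thesis's estimates.

```python
import numpy as np

def simulate_ar2(c, phi1, phi2, sigma, n_steps, n_paths, y0, rng):
    """Simulate AR(2) trajectories: y_t = c + phi1*y_{t-1} + phi2*y_{t-2} + eps_t."""
    paths = np.full((n_paths, n_steps), float(y0))
    for t in range(2, n_steps):
        eps = rng.normal(scale=sigma, size=n_paths)
        paths[:, t] = c + phi1 * paths[:, t - 1] + phi2 * paths[:, t - 2] + eps
    return paths

def max_drawdown(path):
    """Largest peak-to-trough fall along one trajectory."""
    running_peak = np.maximum.accumulate(path)
    return np.max(running_peak - path)

rng = np.random.default_rng(42)
# Stationary, momentum-like coefficients with long-run mean 100 (illustrative).
paths = simulate_ar2(c=10.0, phi1=1.2, phi2=-0.3, sigma=1.0,
                     n_steps=120, n_paths=1000, y0=100.0, rng=rng)
drawdowns = np.array([max_drawdown(p) for p in paths])
print(f"median maximum drawdown across paths: {np.median(drawdowns):.2f}")
```

In a panel version, each bank contributes its own fitted dynamics, so the pooled fan of trajectories covers both benign paths and distress episodes with deep drawdowns.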
32

Grégoire, Gabrielle. "Sur les modèles non-linéaires autorégressifs à transition lisse et le calcul de leurs prévisions." Thèse, 2019. http://hdl.handle.net/1866/22550.

Full text
33

Lemyre, Gabriel. "Modèles de Markov à variables latentes : matrice de transition non-homogène et reformulation hiérarchique." Thesis, 2021. http://hdl.handle.net/1866/25476.

Full text
Abstract:
This master's thesis is centered on hidden Markov models, a family of models in which an unobserved Markov chain dictates the behaviour of an observable stochastic process through which a noisy version of the latent chain is observed. These bivariate stochastic processes, which can be seen as a natural generalization of mixture models, have shown their ability to capture the varying dynamics of many time series and, more specifically in finance, to reproduce most of the stylized facts of financial returns. In particular, we are interested in discrete-time Markov chains with finite state spaces, with the objective of studying the contribution of their hierarchical formulations, and of relaxing the homogeneity hypothesis on the transition matrix, to the quality of the fit and of the predictions, as well as to the capacity to reproduce the stylized facts. We therefore present two hierarchical structures: the first allows new interpretations of the relationships between the states of the chain, and the second additionally allows a more parsimonious parameterization of the transition matrix. We also present three non-homogeneous extensions, two of which have transition probabilities that depend on observed explanatory variables, while in the third the probabilities depend on another latent variable. We analyze the goodness of fit and the predictive power of our models on the series of log returns of the S&P 500 and of the Canada-United States exchange rate (CADUSD). We also illustrate the models' capacity to reproduce the stylized facts, and present interpretations of the estimated parameters for the hierarchical and non-homogeneous models. In general, our results seem to confirm the potential contribution of hierarchical structures and non-homogeneous models to these measures of performance. 
In particular, these results seem to suggest that incorporating non-homogeneous dynamics into a hierarchical structure may allow a more faithful reproduction of the stylized facts, including the slow decay of the autocorrelation of squared and absolute centered returns, and better predictive power, while preserving the interpretability of the estimated parameters.
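One of the non-homogeneous variants described above, transition probabilities driven by an observed covariate, can be sketched as a two-state Gaussian HMM whose persistence probabilities follow a logistic link; the forward filter below computes filtered state probabilities and the log-likelihood. This is a generic illustration of the mechanism, and every parameter value is an assumption, not an estimate from the thesis.

```python
import numpy as np

def nonhomogeneous_transition(z, a, b):
    """2x2 transition matrix whose persistence depends on covariate z (logit link)."""
    stay0 = 1.0 / (1.0 + np.exp(-(a[0] + b[0] * z)))
    stay1 = 1.0 / (1.0 + np.exp(-(a[1] + b[1] * z)))
    return np.array([[stay0, 1.0 - stay0],
                     [1.0 - stay1, stay1]])

def forward_filter(y, z, mu, sigma, a, b):
    """Filtered state probabilities P(S_t | y_1..y_t) and the log-likelihood."""
    alpha = np.array([0.5, 0.5])        # flat initial state distribution
    loglik = 0.0
    filtered = []
    for t in range(len(y)):
        if t > 0:
            alpha = nonhomogeneous_transition(z[t], a, b).T @ alpha
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        alpha = alpha * dens
        norm = alpha.sum()
        loglik += np.log(norm)
        alpha = alpha / norm
        filtered.append(alpha)
    return np.array(filtered), loglik

rng = np.random.default_rng(7)
T = 300
z = rng.normal(size=T)                  # observed covariate driving transitions
mu = np.array([0.05, -0.10])            # illustrative state means
sigma = np.array([0.5, 1.5])            # illustrative state volatilities
a = np.array([2.0, 1.0])                # baseline persistence (logit scale)
b = np.array([0.5, -0.5])               # covariate effect on persistence

# Simulate states and observations from the model, then filter them back.
s = np.zeros(T, dtype=int)
y = np.zeros(T)
for t in range(T):
    if t > 0:
        P = nonhomogeneous_transition(z[t], a, b)
        s[t] = rng.choice(2, p=P[s[t - 1]])
    y[t] = rng.normal(mu[s[t]], sigma[s[t]])

filtered, loglik = forward_filter(y, z, mu, sigma, a, b)
print(f"log-likelihood: {loglik:.1f}")
```

Setting b to zero recovers the homogeneous HMM, which is exactly the restriction the thesis relaxes.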