Dissertations / Theses on the topic 'Forecasting function'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Forecasting function.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Abdullah, Rozi. "Rainfall forecasting algorithms for real time flood forecasting." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296151.

Full text
Abstract:
A fast catchment response usually leads to a shorter lag time, and under these conditions the forecast lead time obtained from a rainfall-runoff model or correlation between upstream and downstream flows may be infeasible for flood warning purposes. Additional lead time can be obtained from short-term quantitative rainfall forecasts that extend the flood warning time and increase the economic viability of a flood forecasting system. For this purpose, algorithms which forecast quantitative rainfall amounts up to six hours ahead have been developed, based on lumped and distributed approaches. The lumped forecasting algorithm includes the essential features of storm dynamics, such as rainband and raincell movements, which are represented within the framework of a linear transfer function model. The dynamics of a storm are readily captured by radar data. A space-time rainfall model is used to generate synthetic radar data with known features, e.g. rainband and raincell velocities. This enables the algorithm to be assessed under ideal conditions, since errors are present in observed radar data. The transfer function algorithm can be summarised as follows. The dynamics of the rainbands and raincells are incorporated as inputs into the transfer function model. The algorithm employs simple spatial cross-correlation techniques to estimate the rainband and raincell velocities. The translated rainbands and raincells then form the auxiliary inputs to the transfer function. An optimal predictor based on minimum square error is then derived from the transfer function model, and its parameters are estimated from the auxiliary inputs and observed radar data in real time using a recursive least squares algorithm. While the transfer-function algorithm forecasts areal rainfalls, a distributed approach which performs rainfall forecasting at a fine spatial resolution (referred to as the advection equation algorithm) is also evaluated in this thesis. The algorithm expresses the space-time rainfall on a Cartesian coordinate system via a partial differential advection equation. A simple explicit finite difference solution scheme is applied to the equation. A comparison of model parameter estimates is undertaken using a square root information filter data processing algorithm, and single-input single-output and multiple-input multiple-output least squares algorithms.
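The on-line parameter estimation at the heart of the lumped algorithm above can be illustrated with a short recursive least squares (RLS) sketch for a transfer-function predictor. This is a generic illustration of the technique, not the thesis's code: the regressor layout, the forgetting factor and the synthetic rainfall and advected-radar series are all assumptions.

    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.99):
        """One recursive least squares step for a transfer-function model.
        theta: parameter vector, P: inverse-covariance matrix,
        phi: regressors [y_{t-1}, y_{t-2}, u_{t-1}], y: new observation,
        lam: forgetting factor (< 1 discounts old data)."""
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        err = y - phi @ theta                    # one-step-ahead prediction error
        theta = theta + k * err                  # parameter update
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update
        return theta, P

    # Toy usage: two autoregressive terms on areal rainfall plus one auxiliary input
    rng = np.random.default_rng(0)
    y_obs = np.abs(rng.normal(1.0, 0.3, 200))              # observed areal rainfall
    u_aux = np.roll(y_obs, 1) + rng.normal(0, 0.05, 200)   # translated-radar input

    theta, P = np.zeros(3), np.eye(3) * 1e3
    for t in range(2, len(y_obs)):
        phi = np.array([y_obs[t - 1], y_obs[t - 2], u_aux[t - 1]])
        theta, P = rls_update(theta, P, phi, y_obs[t])
    print("estimated transfer-function parameters:", theta)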
APA, Harvard, Vancouver, ISO, and other styles
2

Burger, S. (Stephan). "Managing the forecasting function within the fast moving consumer goods industry." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53494.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: Forecasting the future has always been one of man's strongest desires. The aim to determine the future has resulted in scientifically based forecasting models of human health, behaviour, economics, weather, etc. The main purpose of forecasting is to reduce the range of uncertainty within which management decisions must be made. Forecasts are only effective if they are utilized by those who have decision-making authority. Forecasts need to be understood and appreciated by decision makers so that they find their way into the management of the firm. Companies still predominantly rely on judgemental forecasting methods, most often on an informal basis. There is a large literature base that points to the numerous biases inherent in judgemental forecasting. Most companies know that their forecasts are incorrect but don't know what to do about it and choose to ignore the issue, hoping that the problem will solve itself. The collaborative forecasting process attempts to use history as a baseline, supplemented with current knowledge about specific trends, events and other items. This approach integrates the knowledge and information that exists internally and externally into a single, more accurate forecast that supports the entire supply chain. Demand forecasting is not just a matter of duplicating or predicting history into the future. It is important that one person should lead and manage the process, and accountability needs to be established. An audit of the writer's own organization indicated that no formal forecasting process was present. The company's forecasting process was very political, since values were entered just to add up to the required targets. The real gap was never fully understood. Little knowledge of statistical analysis and forecasting existed within the marketing department, which is accountable for the forecast. The forecasting method was therefore a top-down approach and was never really checked against a bottom-up approach. It was decided to learn more about the new demand planning process prescribed by the head office, and to start implementing the approach. The approach is a form of collaborative forecasting which aims to involve all stakeholders when generating the forecast, therefore applying a bottom-up approach. Statistical forecasting was applied to see how accurate the output was versus that of the old way of forecasting. The statistical forecast approach performed better with product groups where little had changed from previous years, while the old way performed better where new activities were planned or known by the marketing team. This indicates that statistical forecasting is very important for creating the starting point or baseline forecast, but requires qualitative input from all stakeholders. Statistical forecasting is therefore not the solution to improved forecasting, but rather part of the solution to create robust forecasts.
AFRIKAANSE OPSOMMING: Vooruitskatting van die toekoms was nog altyd een van die mens se grootste begeertes. Die doel om die toekoms te bepaal het gelei tot wiskundige gebaseerde modelle van die mens se gesondheid, gedrag, ekonomie, weer, ens. The hoofdoel van vooruitskatting is om die reeks van risikos te verminder waarbinne bestuur besluite moet neem. Vooruitskattings is slegs effektief as dit gebruik word deur hulle wat besluitnemingsmag het. Vooruitskattings moet verstaan en gewaardeer word deur die besluitnemers sodat dit die weg kan vind na die bestuur van die firma. Maatskappye vertrou nog steeds hoofsaaklik op eie oordeel vooruitskatting metodes, en meestal op 'n informele basis. Daar is 'n uitgebreide literatuurbasis wat daarop dui dat heelwat sydigheid betrokke is by vooruitskattings wat gebaseer is op eie oordeel. Baie organisasies weet dat hulle vooruitskattings verkeerd is, maar weet nie wat daaromtrent te doen nie en kies om die probleem te ignoreer, met die hoop dat die probleem vanself sal oplos. Die geïntegreerde vooruitskattingsproses probeer om die verlede te gebruik as 'n basis, maar voeg huidige kennis rakende spesifieke neigings, gebeurtenisse, en ander items saam. Hierdie benadering integreer die kennis en informasie wat intern en ekstern bestaan in 'n enkele, meer akkurate vooruitskatting wat die hele verskaffingsketting ondersteun. Vraagvooruitskatting is nie alleen 'n duplisering of vooruitskatting van die verlede in die toekoms in nie. Dit is belangrik dat een persoon die proses moet lei en bestuur. Verantwoordelikhede moet vasgestel word. 'n Oudit op die skrywer se organisasie het getoon dat geen formele vooruitskattingsprosesse bestaan het nie. Die maatskappy se vooruitskattingsproses was hoogs gepolitiseerd, want getalle was vasgestel wat in lyn was met die nodige teikens. Die ware gaping was nooit werklik begryp nie. Min kennis was aanwesig rakende statistiese analises en vooruitskatting binne die bemarkingsdepartement wat verantwoordelik is vir die vooruitskatting. Die vooruitskatting is dus eerder gedoen op 'n globale vlak en nie noodwendig getoets deur die vooruitskatting op te bou uit detail nie. Daar is besluit om meer te leer rakende die nuwe vraagbeplanningsproses, wat voorgeskryf is deur hoofkantoor, en om die metode te begin implementeer. Die metode is 'n vorm van 'n geïntegreerde model wat beoog om alle aandeelhouers te betrek wanneer die vooruitskatting gedoen word, dus die vooruitskatting opbou met detail. Statistiese vooruitskatting was toegepas om te sien hoe akkuraat die uitset was teenoor die ou manier van vooruitskatting. Die statistiese proses het beter gevaar waar die produkgroepe min verandering van vorige jare ervaar het, terwyl die ou manier beter gevaar het waar bemarking self die nuwe aktiwiteite beplan het of bewus was daarvan. Dit bewys dat statistiese vooruitskatting baie belangrik is om die basis vooruitskatting te skep, maar dit benodig kwalitatiewe insette van all aandeelhouers. Statistiese vooruitskattings is dus nie die oplossing vir beter vooruitskattings nie, maar deel van die oplossing om kragtige vooruitskattings te skep.
APA, Harvard, Vancouver, ISO, and other styles
3

Mosmann, Gabriela. "Axiomatic systemic risk measures forecasting." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/178875.

Full text
Abstract:
Neste trabalho, aprofundamos o estudo sobre risco sistêmico via funções de agregação. Consideramos três carteiras diferentes como proxy para um sistema econômico, estas carteiras são consistidas por duas funções de agregação, baseadas em todos as ações do E.U.A, e um índice de mercado. As medidas de risco aplicadas são Value at Risk (VaR), Expected Shortfall (ES) and Expectile Value at Risk (EVaR), elas são previstas através do modelo GARCH clássico unido com nove funções de distribuição de probabilidade diferentes e mais por um método não paramétrico. As previsões são avaliadas por funções de perda e backtests de violação. Os resultados indicam que nossa abordagem pode gerar uma função de agregação adequada para processar o risco de um sistema previamente selecionado.
In this work, we deepen the study of systemic risk measurement via aggregation functions. We consider three different portfolios as a proxy for an economic system; these portfolios consist of two aggregation functions, based on all U.S. stocks, and a market index. The risk measures applied are Value at Risk (VaR), Expected Shortfall (ES) and Expectile Value at Risk (EVaR), and they are forecasted via the classical GARCH model along with nine probability distribution functions and also by a nonparametric approach. The forecasts are evaluated by loss functions and violation backtests. Results indicate that our approach can generate an adequate aggregation function to process the risk of a previously selected system.
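As a rough illustration of how the one-step-ahead risk measures named above follow from a fitted conditional-variance model, the sketch below runs a GARCH(1,1) recursion and converts the forecast variance into Gaussian VaR and ES. The parameter values, the Gaussian tail and the simulated portfolio returns are placeholders, not the thesis's estimated models.

    import numpy as np
    from scipy.stats import norm

    def garch11_next_var(returns, omega, alpha, beta):
        """One-step-ahead conditional variance from a GARCH(1,1) recursion."""
        sigma2 = np.var(returns)                 # initialise at the sample variance
        for r in returns:
            sigma2 = omega + alpha * r**2 + beta * sigma2
        return sigma2

    rng = np.random.default_rng(1)
    returns = rng.normal(0, 0.01, 1000)          # placeholder aggregated-portfolio returns
    omega, alpha, beta = 1e-6, 0.08, 0.90        # assumed, not estimated, parameters

    sigma = np.sqrt(garch11_next_var(returns, omega, alpha, beta))
    p = 0.01                                     # 1% tail level
    var_1pct = -sigma * norm.ppf(p)              # Value at Risk, quoted as a positive loss
    es_1pct = sigma * norm.pdf(norm.ppf(p)) / p  # Expected Shortfall under normality
    print(f"one-day 1% VaR: {var_1pct:.4%}, ES: {es_1pct:.4%}")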
APA, Harvard, Vancouver, ISO, and other styles
4

Kattekola, Sravanthi. "Weather Radar image Based Forecasting using Joint Series Prediction." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1238.

Full text
Abstract:
Accurate rainfall forecasting using weather radar imagery has always been a crucial and predominant task in the field of meteorology [1], [2], [3] and [4]. Competitive Radial Basis Function Neural Networks (CRBFNN) [5] is one of the methods used for weather radar image based forecasting. Recently, an alternative CRBFNN-based approach [6] was introduced to model precipitation events. The difference between the techniques presented in [5] and [6] lies in the approach used to model the rainfall image. Overall, it was shown that the modified CRBFNN approach [6] is more computationally efficient than the CRBFNN approach [5]. However, both techniques [5] and [6] share the same prediction stage. In this thesis, a different GRBFNN approach is presented for forecasting Gaussian envelope parameters. The proposed method investigates the concept of parameter dependency among Gaussian envelopes. Experimental results are also presented to illustrate the advantage of joint parameter prediction over independent series prediction.
APA, Harvard, Vancouver, ISO, and other styles
5

Ford, Debra M. "Forecasting tropical cyclone recurvature using an empirical othogonal [sic] function representation of vorticity fields." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA238489.

Full text
Abstract:
Thesis (M.S. in Meteorology and Oceanography)--Naval Postgraduate School, September 1990.
Thesis Advisor(s): Elsberry, Russell L. ; Harr, Patrick A. "September 1990." Description based on title screen as viewed on December 16, 2009. DTIC Identifier(s): EOF (empirical orthogonal functions). Author(s) subject terms: Tropical cyclones, recurvature, empirical orthogonal functions. Includes bibliographical references (p. 73-74). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
6

Boulougari, Andromachi. "Application of a power-exponential function based model to mortality rates forecasting." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39921.

Full text
Abstract:
The modelling of a law of mortality has been of consistent interest to a large majority of researchers, and many models have been proposed over the years. The first aim of this thesis is to systematically evaluate a selection of models --- Modified Perks, Heligman-Pollard and Power-exponential --- to determine their relative strengths and weaknesses with regard to forecasting the mortality rate using the Lee-Carter model. The second aim is to fit mortality data from the USA, Sweden and Greece to the selected models, using numerical curve-fitting techniques with the non-linear least squares method. The results indicate that the Heligman-Pollard model performs better, especially when the phenomenon of the 'accident hump' occurs during adulthood.
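The Lee-Carter forecasting step referred to above rests on a rank-one decomposition of the log-mortality surface followed by a random-walk-with-drift forecast of the period index. A minimal sketch on a small synthetic matrix of central death rates (not the USA, Swedish or Greek data used in the thesis) is:

    import numpy as np

    rng = np.random.default_rng(2)
    ages, years = 71, 40
    # Synthetic log central death rates: age effect + downward time trend + noise
    log_m = (-8 + 0.08 * np.arange(ages))[:, None] \
            + (-0.015 * np.arange(years))[None, :] \
            + rng.normal(0, 0.02, (ages, years))

    # Lee-Carter: log m[x, t] = a[x] + b[x] * k[t] + error
    a = log_m.mean(axis=1)                          # average age pattern
    U, s, Vt = np.linalg.svd(log_m - a[:, None])    # rank-one part via SVD
    b = U[:, 0] / U[:, 0].sum()                     # normalised so b sums to one
    k = s[0] * Vt[0] * U[:, 0].sum()                # period mortality index

    # Forecast k as a random walk with drift and rebuild future death rates
    drift = (k[-1] - k[0]) / (len(k) - 1)
    k_future = k[-1] + drift * np.arange(1, 11)     # ten-year-ahead point forecast
    m_future = np.exp(a[:, None] + b[:, None] * k_future[None, :])
    print(m_future.shape)                           # (71, 10) forecast death rates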
APA, Harvard, Vancouver, ISO, and other styles
7

Weller, Jennifer N. "Bayesian Inference In Forecasting Volcanic Hazards: An Example From Armenia." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Carriere, Thomas. "Towards seamless value-oriented forecasting and data-driven market valorisation of photovoltaic production." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM019.

Full text
Abstract:
La décarbonation de la production d’électricité à échelle mondiale est un élément de réponse clé face aux pressions exercées par les différents enjeux environnementaux. Par ailleurs, la baisse des coûts de la filière photovoltaïque (PV) ouvre la voie à une augmentation significative de la production PV dans le monde. L’objectif principal de cette thèse est alors de maximiser le revenu d’un producteur d’énergie PV sous incertitude des prix de marché et de la production. Pour cela, un modèle de prévision probabiliste de la production PV à court (5 minutes) et moyen (24 heures) terme est proposé. Ce modèle est couplé à une méthode de participation au marché maximisant l’espérance du revenu. Dans un second temps, le couplage entre une centrale PV et une batterie est étudié, et une analyse de sensibilité des résultats est réalisée pour étudier la rentabilité et le dimensionnement de tels systèmes. Une méthode de participation alternative est proposée, pour lequel un réseau de neurones artificiel apprend à participer avec ou sans batterie au marché de l’électricité, ce qui permet de simplifier le processus de valorisation de l'énergie PV en diminuant le nombre de modèles requis
The decarbonization of electricity production on a global scale is a key element in responding to the pressures of different environmental issues. In addition, the decrease in the costs of the photovoltaic (PV) sector is paving the way for a significant increase in PV production worldwide. The main objective of this thesis is then to maximize the income of a PV energy producer under uncertainty of market prices and production. For this purpose, a probabilistic forecast model of short-term (5 minutes) and medium-term (24 hours) PV production is proposed. This model is coupled with a market participation method that maximizes the expected income. In a second step, the coupling between a PV plant and a battery is studied, and a sensitivity analysis of the results is carried out to study the profitability and sizing of such systems. An alternative participation method is proposed, in which an artificial neural network learns to participate with or without batteries in the electricity market, thus simplifying the process of PV energy valuation by reducing the number of models required.
APA, Harvard, Vancouver, ISO, and other styles
9

Schweim, Jarrett Joshua. "Do any of a set of Lower Extremity Functional Assessment tests predict in the incidence of injury among a Cohort of collegiate freshmen football players? A Pilot Study." Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1243851951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Alves, Jose Henrique Gomes de Mattos, Mathematics, UNSW. "A Saturation-Dependent Dissipation Source Function for Wind-Wave Modelling Applications." Awarded by: University of New South Wales. Mathematics, 2000. http://handle.unsw.edu.au/1959.4/17786.

Full text
Abstract:
This study reports on a new formulation of the spectral dissipation source term Sds for wind-wave modelling applications. This new form of Sds features a nonlinear dependence on the local wave spectrum, expressed in terms of the azimuthally integrated saturation parameter B(k)=k^4 F(k). The basic form of this saturation-dependent Sds is based on a new framework for the onset of deep-water wave breaking due to the nonlinear modulation of wave groups. The new form of Sds is successfully validated through numerical experiments that include exact nonlinear computations of fetch-limited wind-wave evolution and hindcasts of two-dimensional wave fields made with an operational wind-wave model. The newly-proposed form of Sds generates integral spectral parameters that agree more closely with observations when compared to other dissipation source terms used in state-of-the-art wind-wave models. It also provides more flexibility in controlling properties of the wave spectrum within the high wavenumber range. Tests using a variety of wind speeds, three commonly-used wind input source functions and two alternative full-development evolution limits further demonstrate the robustness and flexibility of the new saturation-dependent dissipation source term. Finally, improved wave hindcasts obtained with an implementation of the new form of Sds in a version of the WAM model demonstrate its potential usefulness in operational wind-wave forecasting applications.
APA, Harvard, Vancouver, ISO, and other styles
11

Paretkar, Piyush S. "Short-Term Forecasting of Power Flows over Major Pacific Northwestern Interties: Using Box and Jenkins ARIMA Methodology." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/35392.

Full text
Abstract:
The deregulation of the electricity sector in the US has led to a tremendous increase in inter-regional wholesale electricity trade between neighboring utilities or regions. For instance, generation-deficit regions may choose to import power from surplus regions; thus the wholesale electricity market prices in a region are also affected by the dynamics of its electricity trade with other regions. Valuable insights into such imports/exports ahead of time have become crucial market intelligence for academics and market players associated with the industry. In this thesis, the task of short-term forecasting of the power flows over three major transmission interties of the Pacific Northwest region, namely the Pacific AC Intertie, the Pacific DC Intertie and the Northern Intertie, is successfully accomplished. The Pacific AC and Pacific DC interties connect the Pacific Northwest region of the US with the state of California. The Northern Intertie is the only intertie connecting the British Columbia region in Canada with the Pacific Northwest US. Box-Jenkins ARIMA (Auto Regressive Integrated Moving Average) and transfer function methodologies are used as the statistical tools to identify the forecasting models in this thesis. The data requirement for all of the models is restricted to publicly available data.
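A hedged sketch of the kind of model described above, using statsmodels' SARIMAX with one exogenous regressor standing in for a transfer-function input; the (1,1,1) order, the synthetic flow series and the load proxy are illustrative assumptions, not the models identified in the thesis.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Synthetic hourly intertie flow and one exogenous driver (a hypothetical load proxy)
    rng = np.random.default_rng(3)
    n = 500
    load_proxy = 10 + np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 0.1, n)
    flow = 3000 + 150 * load_proxy + 0.1 * rng.normal(0, 50, n).cumsum()

    y, x = pd.Series(flow), pd.Series(load_proxy)

    # ARIMA(1,1,1) with an exogenous input, a simple stand-in for a transfer-function model
    result = SARIMAX(y, exog=x, order=(1, 1, 1)).fit(disp=False)

    # 24-step-ahead forecast, supplying assumed future values of the exogenous driver
    x_future = pd.Series(10 + np.sin(np.arange(n, n + 24) * 2 * np.pi / 24))
    forecast = result.forecast(steps=24, exog=x_future)
    print(forecast.head())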
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
12

Ricci, Lorenzo. "Essays on tail risk in macroeconomics and finance: measurement and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/242122.

Full text
Abstract:
This thesis is composed of three chapters that propose some novel approaches to tail risk in financial markets and to forecasting in finance and macroeconomics. The first part of this dissertation focuses on financial market correlations and introduces a simple measure of tail correlation, TailCoR, while the second contribution addresses the issue of identification of non-normal structural shocks in Vector Autoregressions, which are common in finance. The third part belongs to the vast literature on predictions of economic growth; the problem is tackled using a Bayesian Dynamic Factor model to predict Norwegian GDP.

Chapter I: TailCoR. The first chapter introduces a simple measure of tail correlation, TailCoR, which disentangles linear and non-linear correlation. The aim is to capture all features of financial market co-movement when extreme events (i.e. financial crises) occur. Indeed, tail correlations may arise because asset prices are either linearly correlated (i.e. the Pearson correlations are different from zero) or non-linearly correlated, meaning that asset prices are dependent at the tail of the distribution. Since it is based on quantiles, TailCoR has three main advantages: i) it is not based on asymptotic arguments, ii) it is very general as it applies with no specific distributional assumption, and iii) it is simple to use. We show that TailCoR also disentangles easily between linear and non-linear correlations. The measure has been successfully tested on simulated data. Several extensions, useful for practitioners, are presented, like downside and upside tail correlations. In our empirical analysis, we apply this measure to eight major US banks for the period 2003-2012. For comparison purposes, we compute the upper and lower exceedance correlations and the parametric and non-parametric tail dependence coefficients. On the overall sample, results show that both the linear and non-linear contributions are relevant. The results suggest that co-movement increases during the financial crisis because of both the linear and non-linear correlations. Furthermore, the increase of TailCoR at the end of 2012 is mostly driven by the non-linearity, reflecting the risks of tail events and their spillovers associated with the European sovereign debt crisis.

Chapter II: On the identification of non-normal shocks in structural VAR. The second chapter deals with the structural interpretation of the VAR using the statistical properties of the innovation terms. In general, financial markets are characterized by non-normal shocks. Under non-Gaussianity, we introduce a methodology based on the reduction of tail dependency to identify the non-normal structural shocks. Borrowing from statistics, the methodology can be summarized in two main steps: i) decorrelate the estimated residuals and ii) rotate the uncorrelated residuals in order to get a vector of independent shocks using a tail dependency matrix. We do not label the shocks a priori, but post-estimation, on the basis of economic judgement. Furthermore, we show how our approach allows the identification of all the shocks using a Monte Carlo study. In some cases, the method can turn out to be more effective when the number of tail events is larger. Therefore, the frequency of the series and the degree of non-normality are relevant to achieve accurate identification. Finally, we apply our method to two different VARs, both estimated on US data: i) a monthly trivariate model which studies the effects of oil market shocks, and ii) a VAR that focuses on the interaction between monetary policy and the stock market. In the first case, we validate the results obtained in the economic literature. In the second case, we cannot confirm the validity of an identification scheme based on a combination of short- and long-run restrictions which is used in part of the empirical literature.

Chapter III: Nowcasting Norway. The third chapter consists of predictions of Norwegian Mainland GDP. Policy institutions have to set their policies without knowledge of the current economic conditions. We estimate a Bayesian dynamic factor model (BDFM) on a panel of macroeconomic variables (all followed by market operators) from 1990 until 2011. First, the BDFM is an extension to the Bayesian framework of the dynamic factor model (DFM). The difference is that, compared with a DFM, there is more dynamics in the BDFM, introduced in order to accommodate the dynamic heterogeneity of different variables. However, in order to introduce more dynamics, the BDFM requires the estimation of a large number of parameters, which can easily lead to volatile predictions due to estimation uncertainty. This is why the model is estimated with Bayesian methods, which, by shrinking the factor model toward a simple naive prior model, are able to limit estimation uncertainty. The second aspect is the use of a small dataset. A common feature of the literature on DFMs is the use of large datasets. However, there is a literature that has shown how, for the purpose of forecasting, DFMs can be estimated on a small number of appropriately selected variables. Finally, through a pseudo real-time exercise, we show that the BDFM performs well both in terms of point forecasts and in terms of density forecasts. Results indicate that our model outperforms standard univariate benchmark models, that it performs as well as the Bloomberg Survey, and that it outperforms the predictions published by the Norges Bank in its monetary policy report.
Doctorat en Sciences économiques et de gestion
APA, Harvard, Vancouver, ISO, and other styles
13

Sulemana, Hisham. "Comparison of mortality rate forecasting using the Second Order Lee–Carter method with different mortality models." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-43563.

Full text
Abstract:
Mortality information is very important for national planning and the health of a country. Mortality rate forecasting is a basic contribution to the projection of the financial improvement of pension plans, well-being and social strategy planning. In the first part of the thesis, we fit the selected mortality rate models, namely the Power-exponential function based model, the Modified Perks model and the Heligman and Pollard (HP4) model, to the data obtained from the Human Mortality Database [22] for the male population ages 1–70 of the USA, Japan and Australia. We observe that the Heligman and Pollard (HP4) model performs well and fits the data better than the Power-exponential function based model and the Modified Perks model. The second part is to systematically compare the quality of the mortality rate forecasting using the second order Lee–Carter method with the selected mortality rate models. The results indicate that the Power-exponential function based model and the Heligman and Pollard (HP4) model give a more reliable forecast, depending on the individual country.
APA, Harvard, Vancouver, ISO, and other styles
14

Altran, Alessandra Bonato. "Sistema inteligente para previsão de carga multinodal em sistemas elétricos de potência /." Ilha Solteira : [s.n.], 2010. http://hdl.handle.net/11449/100304.

Full text
Abstract:
Resumo: A previsão de carga, em sistemas de energia elétrica, constitui-se numa atividade de grande importância, tendo em vista que a maioria dos estudos realizados (fluxo de potência, despacho econômico, planejamento da expansão, compra e venda de energia, etc.) somente poderá ser efetivada se houver a disponibilidade de uma boa estimativa da carga a ser atendida. Deste modo, visando contribuir para que o planejamento e operação dos sistemas de energia elétrica ocorram de forma segura, confiável e econômica, foi desenvolvida uma metodologia para previsão de carga, a previsão multinodal, que pode ser entendida como um sistema inteligente que considera vários pontos da rede elétrica durante a realização da previsão. O sistema desenvolvido conta com o uso de uma rede neural artificial composta por vários módulos, sendo esta do tipo perceptron multicamadas, cujo treinamento é baseado no algoritmo retropropagação. Porém, foi realizada uma modificação na função de ativação da rede, em substituição à função usual, a função sigmoide, foram utilizadas as funções de base radial. Tal metodologia foi aplicada ao problema de previsão de cargas elétricas a curto-prazo (24 horas à frente)
Abstract: Load forecasting in electric power systems is a very important activity, since most of the studies carried out (power flow, economic dispatch, expansion planning, purchase and sale of energy, etc.) are extremely dependent on a good estimate of the load. Thus, to contribute to safe, reliable and economic operation and planning of electric power systems, this work develops an intelligent system for multinodal electric load forecasting that considers several points of the network. The multinodal system is based on an artificial neural network composed of several modules. The neural network is a multilayer perceptron trained by backpropagation, in which the traditional sigmoid activation is replaced by radial basis functions. The methodology is applied to forecast loads 24 hours in advance.
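The activation-function modification described above can be sketched as a single forward pass of a backpropagation-style network in which the hidden layer uses a Gaussian radial basis function instead of the sigmoid; the Gaussian form, the layer sizes and the random weights and data are assumptions for illustration only.

    import numpy as np

    def gaussian_rbf(z, sigma=1.0):
        """Radial basis activation used in place of the usual sigmoid."""
        return np.exp(-(z ** 2) / (2.0 * sigma ** 2))

    def forward(x, W1, b1, W2, b2):
        """Forward pass of a single-hidden-layer network with RBF activations."""
        hidden = gaussian_rbf(x @ W1 + b1)   # hidden layer: RBF instead of sigmoid
        return hidden @ W2 + b2              # linear output: 24-hour-ahead load profile

    # Illustrative dimensions: 24 hourly lags in, 24 hourly forecasts out, 10 hidden units
    rng = np.random.default_rng(4)
    W1, b1 = rng.normal(0, 0.1, (24, 10)), np.zeros(10)
    W2, b2 = rng.normal(0, 0.1, (10, 24)), np.zeros(24)

    past_day = rng.uniform(0.4, 0.9, (1, 24))       # normalised load of the previous day
    print(forward(past_day, W1, b1, W2, b2).shape)  # (1, 24) forecast profile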
Orientador: Carlos Roberto Minussi
Coorientador: Francisco Villarreal Alvarado
Banca: Anna Diva Plasencia Lotufo
Banca: Maria do Carmo Gomes da Silveira
Banca: Gelson da Cruz Junior
Banca: Edmárcio Antonio Belati
Doutor
APA, Harvard, Vancouver, ISO, and other styles
15

Borges, Bruna Kasprzak. "Avaliação da habilidade preditiva entre modelos Garch multivariados : uma análise baseada no critério Model Confidence Set." Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/70011.

Full text
Abstract:
Esta dissertação analisa a questão da seleção de modelos GARCH multivariados em termos da perfomance de previsão da matriz de covariância condicional. A aplicação empírica é realizada com 7 retornos de índices de ações envolvendo um conjunto de 34 especificações de modelos para os quais computamos as previsões da variância condicional um passo a frente para uma amostra com 60 observações para cada especificação dos modelos GARCH multivariados. A comparação entre os modelos é baseada no procedimento Model Confidence Set (MCS) avaliado através de duas funções perdas robustas a proxies de volatilidade imperfeitas. O MCS é um procedimento que permite comparar vários modelos simultaneamente em termos de sua habilidade preditiva e determinar um conjunto de modelos estatisticamente semelhantes em termos de previsão, dado um nível de confiança.
This paper considers the question of the selection of multivariate GARCH models in terms of covariance matrix forecasting. In the empirical application we consider 7 series of returns and compare a set of 34 model specifications based on one-step-ahead conditional variance forecasts over a sample of 60 observations. The comparison between models is performed with the Model Confidence Set (MCS) procedure, evaluated using two loss functions that are robust against imperfect volatility proxies. The MCS is a procedure that allows the comparison of several models simultaneously in terms of forecasting accuracy and the determination of a set of statistically equivalent models, given a confidence level.
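The two proxy-robust loss functions commonly used in this kind of comparison are the squared-error and QLIKE losses; a minimal univariate sketch of evaluating two competing variance forecasts against a noisy proxy is below. The series are synthetic placeholders, and the MCS test itself (with its bootstrap p-values) is not reproduced here.

    import numpy as np

    def mse_loss(proxy_var, forecast_var):
        """Squared-error loss between a volatility proxy and a variance forecast."""
        return (proxy_var - forecast_var) ** 2

    def qlike_loss(proxy_var, forecast_var):
        """QLIKE loss; like MSE it stays well behaved with noisy volatility proxies."""
        ratio = proxy_var / forecast_var
        return ratio - np.log(ratio) - 1.0

    # Synthetic example: squared returns as an (imperfect) proxy, two competing forecasts
    rng = np.random.default_rng(5)
    true_var = 0.01 * np.exp(rng.normal(0, 0.2, 60))
    proxy = true_var * rng.chisquare(1, 60)            # noisy proxy, e.g. squared returns
    forecast_a = true_var * 1.05                       # slightly biased model A
    forecast_b = np.full(60, true_var.mean())          # constant-variance model B

    for name, f in [("A", forecast_a), ("B", forecast_b)]:
        print(name, mse_loss(proxy, f).mean(), qlike_loss(proxy, f).mean())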
APA, Harvard, Vancouver, ISO, and other styles
16

Paduru, Anirudh. "Fast Algorithm for Modeling of Rain Events in Weather Radar Imagery." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1097.

Full text
Abstract:
Weather radar imagery is important for several remote sensing applications, including tracking of storm fronts and radar echo classification. In particular, tracking of precipitation events is useful for both forecasting and classification of rain/non-rain events, since non-rain events usually appear to be static compared to rain events. Recent weather radar imaging-based forecasting approaches [3] consider that precipitation events can be modeled as a combination of localized functions using Radial Basis Function Neural Networks (RBFNNs). Tracking of rain events can be performed by tracking the parameters of these localized functions. The RBFNN-based techniques used in forecasting are not only computationally expensive but also only moderately effective in modeling small-size precipitation events. In this thesis, an existing RBFNN technique [3] was implemented to verify its computational efficiency and forecasting effectiveness. The feasibility of effectively modeling precipitation events using RBFNNs was evaluated, and several modifications to the existing technique have been proposed.
APA, Harvard, Vancouver, ISO, and other styles
17

Altran, Alessandra Bonato [UNESP]. "Sistema inteligente para previsão de carga multinodal em sistemas elétricos de potência." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/100304.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
A previsão de carga, em sistemas de energia elétrica, constitui-se numa atividade de grande importância, tendo em vista que a maioria dos estudos realizados (fluxo de potência, despacho econômico, planejamento da expansão, compra e venda de energia, etc.) somente poderá ser efetivada se houver a disponibilidade de uma boa estimativa da carga a ser atendida. Deste modo, visando contribuir para que o planejamento e operação dos sistemas de energia elétrica ocorram de forma segura, confiável e econômica, foi desenvolvida uma metodologia para previsão de carga, a previsão multinodal, que pode ser entendida como um sistema inteligente que considera vários pontos da rede elétrica durante a realização da previsão. O sistema desenvolvido conta com o uso de uma rede neural artificial composta por vários módulos, sendo esta do tipo perceptron multicamadas, cujo treinamento é baseado no algoritmo retropropagação. Porém, foi realizada uma modificação na função de ativação da rede, em substituição à função usual, a função sigmoide, foram utilizadas as funções de base radial. Tal metodologia foi aplicada ao problema de previsão de cargas elétricas a curto-prazo (24 horas à frente)
Load forecasting in electric power systems is a very important activity, since most of the studies carried out (power flow, economic dispatch, expansion planning, purchase and sale of energy, etc.) are extremely dependent on a good estimate of the load. Thus, to contribute to safe, reliable and economic operation and planning of electric power systems, this work develops an intelligent system for multinodal electric load forecasting that considers several points of the network. The multinodal system is based on an artificial neural network composed of several modules. The neural network is a multilayer perceptron trained by backpropagation, in which the traditional sigmoid activation is replaced by radial basis functions. The methodology is applied to forecast loads 24 hours in advance.
APA, Harvard, Vancouver, ISO, and other styles
18

Koller, Simon. "Multiple Time Series Analysis of Freight Rate Indices." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288500.

Full text
Abstract:
In this master thesis, multiple time series of shipping-industry and financial data are analysed in order to create a model to forecast freight rate indices. The data of main interest, which are predicted, are the two freight rate indices BDI and BDTI from the Baltic Exchange. The project investigates the possibility for aggregated Vector Autoregression (VAR) models to outperform simple univariate models, in this case an Autoregressive Integrated Moving Average (ARIMA) with seasonal components. The other part of this thesis is to model market shocks in the freight rate indices, given impulses in the other underlying VAR-model time series, using the impulse response function. The main results are that the VAR-model forecast outperforms the ARIMA model in forecasting the tanker freight rate index (BDTI), while the bulk freight rate index (BDI) is better predicted by the simple ARIMA when calculating the forecast mean square error.
I denna avhandling analyseras multipla tidsserier över rederinärings- och finansiell data i syfte att skapa en prognosticerande modell för att prognosticera fraktratsindex. Dataserierna som i huvudsak prognosticeras är fraktratsindexen BDI och BDTI från Baltic exchange. I projektet undersöks om en aggregerad Vektor Autoregressiv(VAR) modell överträffar en univariat modell, i detta fall en Autoregressive Integrated Moving Average(ARIMA) med säsongsvariabel. I andra delen av denna avhandling modelleras chocker i fraktratsindexen givet impulser i de andra underliggande tidsserierna i de aggregerade VAR-modellerna. Huvudresultaten är att VAR-modellens prognos överträffar ARIMA-modellen för tankerraterna (BDTI), medan bulkraterna(BDI) bättre prognosticeras av ARIMA-modellen, i avseende på prognosernas beräknade mean square error.
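A minimal sketch of the VAR forecasting step compared above, using statsmodels; the synthetic index and driver series, the fixed lag order and the 30-step horizon are assumptions, and the seasonal ARIMA benchmark is omitted for brevity.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Synthetic stand-ins for a freight rate index and two financial/shipping drivers
    rng = np.random.default_rng(6)
    n = 300
    data = pd.DataFrame({
        "bdti": 800 + rng.normal(0, 20, n).cumsum(),
        "oil": 60 + rng.normal(0, 1, n).cumsum(),
        "fleet_util": 0.8 + 0.01 * rng.normal(0, 1, n).cumsum(),
    })
    train, test = data.iloc[:-30], data.iloc[-30:]

    # Fit a VAR(4) on the levels (stationarity checks and differencing omitted here)
    res = VAR(train).fit(4)
    fcst = res.forecast(train.values[-res.k_ar:], steps=30)

    mse = np.mean((test["bdti"].values - fcst[:, 0]) ** 2)
    print(f"30-step forecast MSE for the BDTI column: {mse:.1f}")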
APA, Harvard, Vancouver, ISO, and other styles
19

Дудка, Богдан Романович. "Ймовірнісно-статистичні моделі нелінійних нестаціонарних процесів в економіці та фінансах." Master's thesis, Київ, 2018. https://ela.kpi.ua/handle/123456789/23903.

Full text
Abstract:
Магістерська дисертація: 89 с., 21 рис., 22 табл., 19 джерел. В роботі розглядаються питання дослідження нелінійних нестаціонарних процесів в економіці та фінансах, які представлені статистичними даними. Детально розглянута задача визначення нелінійності та нестаціонарності досліджуваного процесу. Також розглянута задача заповнення пропусків у статистичних даних. Представлена та застосована методика побудови нелінійних нестаціонарних процесів. Об’єкт дослідження: статистичні дані стосовно розвитку вибраних фінансово-економічних процесів. Предмет дослідження: методика побудови моделей нестаціонарних процесів, методи дослідження пропусків даних, регресійні моделі, статистичні характеристики адекватності моделей і оцінок прогнозів. Методи дослідження: статистичний аналіз даних, методи заповнення пропусків даних, регресійний аналіз, мережі Байєса, фільтр Калмана. Мета дослідження: реалізація методики побудови моделей нестаціонарних процесів. Розроблена та застосована методика побудови моделей нестаціонарних процесів. Проведений аналіз впливу пропусків даних та застосування методів згладжування на статистичні характеристики моделей даних/.
The theme: Probabilistic and Statistical Models of Nonstationary Processes in Economy and Finances. Master thesis: 89 p., 21 fig., 22 tabl., 1 appendix, 19 ref. In this work the problem of building models of non-linear nonstationary processes is considered, and an appropriate methodology for building such models is introduced. The important task of imputing missing values in statistical data was also considered, and their influence on the statistical characteristics of the data models was analyzed. Object of the research: statistical data on the development of selected macroeconomic processes. Subject of the research: methodology for building models of nonlinear processes, methods for detecting missing values, and statistical characteristics of model adequacy and forecast evaluations. The methods of the research are as follows: modeling and forecasting theory, regression analysis, statistical analysis, methods of imputation of missing values. Target of the research: implementation of a methodology for building models of non-linear nonstationary processes and analysis of the influence of missing values on the model adequacy of dynamical processes. The developed methodology for building time series models was used to build models of nonlinear processes, and an analysis of missing values was conducted to determine their influence on the statistical characteristics of the models.
APA, Harvard, Vancouver, ISO, and other styles
20

Pereira, Marina Meireles. "PREVISÃO DE RETORNO DE PNEUS INSERVÍVEIS EM UMA CADEIA DE SUPRIMENTOS DE CICLO FECHADO." Pontifícia Universidade Católica de Goiás, 2016. http://localhost:8080/tede/handle/tede/2481.

Full text
Abstract:
This research aims to apply a prediction model to a tire closed-loop supply chain in order to estimate the returned volume of scrap tires, through the variables that influence the amount and the time at which these tires are returned for disposal. The methodological approach applied in this research is modeling, using the Transfer Function Model. It starts from the analysis that the tire closed-loop supply chain of Goiás and the Federal District is structured and that there is a direct relationship between tire sales and the amount returned. The model input variables adopted were the quantity of tires placed on the market through the replacement (after-market) channel and the size of the circulating fleet in these locations, representing the quantity of tires that entered the market through new car sales. The output variable was the quantity of scrap tires collected and sent for final disposal. The data for the research were collected from the databases of the organization adopted as the object of study, IBAMA, DENATRAN, ANIP and AliceWeb, covering a period of 54 months. The data were analyzed with the transfer function model, and the results showed that the lag time after the tires entered the market was around 12 months for all input variables, that the return probabilities of the after-market are greater than those of the fleet tires, and that the predicted return approximated the behavior of the real return with a percentage deviation of 3.4%. Therefore, this study made it possible to identify the variables that influence the return of scrap tires and to dimension the returned tire volume and the timing of this return, in order to facilitate the planning of the tire closed-loop supply chain.
Esta pesquisa visa aplicar um modelo de previsão a uma cadeia de suprimentos de ciclo fechado de pneus, para estimar o volume de pneus inservíveis retornados, por meio das variáveis que influenciam na quantidade e no tempo que estes pneus retornam para serem destinados. A abordagem metodológica aplicada nessa pesquisa se situa na Modelagem, aplicando o Modelo de Função de Transferência. Parte-se da análise de que a cadeia de suprimentos de ciclo fechado do Estado de Goiás e Distrito Federal está estruturada e que há uma relação direta entre as vendas de pneus com a quantidade retornada. Foram adotadas como variáveis de entrada do modelo a quantidade de pneus inseridos no mercado, pelo mercado de reposição e o tamanho da frota circulante destas localidades, representando a quantidade de pneus inseridos no mercado pelos carros novos vendidos. Para a variável de saída foi considerada a quantidade de pneus inservíveis coletados e encaminhados para destinação final. Os dados utilizados na pesquisa foram coletados em bancos de dados da organização adotada como objeto de estudo, IBAMA, DENATRAN, ANIP e AliceWeb, considerando de um período de 54 meses. Os dados foram analisados pelo modelo de função de transferência e os resultados obtidos mostraram que o tempo de defasagem da entrada de pneus no mercado foi em torno de 12 meses para todas as variáveis de entrada, que as probabilidades de retorno do mercado de reposição são maiores que as probabilidades de retorno dos pneus das frotas e que a previsão de retorno apresentou um comportamento aproximado do comportamento real do retorno com um desvio percentual de 3,4%. Portanto, este estudo possibilitou identificar as variáveis que influenciam no retorno de pneus inservíveis e a dimensionar a quantidade de volume de pneus retornados e o tempo desse retorno para viabilizar o planejamento da cadeia de suprimentos de ciclo fechado de pneus.
APA, Harvard, Vancouver, ISO, and other styles
21

Drevna, Michael J. "An application of Box-Jenkins transfer functions to natural gas demand forecasting." Ohio : Ohio University, 1985. http://www.ohiolink.edu/etd/view.cgi?ohiou1183999594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ramos, Anthony Kojo. "Forecasting Mortality Rates using the Weighted Hyndman-Ullah Method." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54711.

Full text
Abstract:
The performance of three methods of mortality modelling and forecasting is compared: the basic Lee–Carter and two functional demographic models, the basic Hyndman–Ullah and the weighted Hyndman–Ullah. Using age-specific data from the Human Mortality Database for two developed countries, France and the UK (England & Wales), these methods are compared through within-sample forecasting for the years 1999-2018. The weighted Hyndman–Ullah method is judged superior among the three methods through a comparison of mean forecast errors and qualitative inspection of the datasets of the selected countries. The weighted HU method is then used to conduct a 32-year-ahead forecast to the year 2050.
APA, Harvard, Vancouver, ISO, and other styles
23

Capistran, Carmona Carlos. "Essays on forecast evaluation under general loss functions /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3175283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Kozel, Tomáš. "Stochastické řízení zásobní funkce nádrže s pomocí metod umělé inteligence." Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-390282.

Full text
Abstract:
The main advantage of stochastic forecasting is the fan of possible values, which a deterministic method of forecasting cannot give us. The future development of a random process is described better by stochastic than by deterministic forecasting, and discharge in a measurement profile can be categorized as a random process. Stochastic management works with the dispersion of the controlled discharge value. The thesis describes the construction and evaluation of adaptive stochastic models based on fuzzy logic, neural networks and evolutionary algorithms, which use stochastic forecasts from the forecasting models described in the thesis. The learned fuzzy model and neural network are used as a replacement for the classic optimization algorithm (an evolutionary algorithm). The models were tested and validated on a synthetic large open water reservoir. The results were evaluated and compared with a model based on traditional algorithms, which was used with a 100% forecast (forecasted values equal to the real values). The management of the large open water reservoir with storage function produced by the stochastic adaptive management was logical. The main advantage of the fuzzy and neural network models is computing speed: the classical optimization model needs much more time for the same calculation, and therefore computing clusters were used for its stochastic calculations.
APA, Harvard, Vancouver, ISO, and other styles
25

Martinho, Carla Alexandra Lopes. "Modelos vectoriais ARMA : estudo e potencialidades." Master's thesis, Instituto Superior de Economia e Gestão, 1997. http://hdl.handle.net/10400.5/21745.

Full text
Abstract:
Mestrado em Matemática Aplicada à Economia e Gestão
Neste trabalho vai-se proceder ao estudo e à aplicação prática sobre sucessões cronológicas reais dos modelos vectoriais ARMA. Estes modelos generalizam os modelos univariados ARMA e os modelos multivariados de função transferência, tendo vantagem sobre estes últimos porque permitem a análise conjunta de sucessões cronológicas que apresentam efeito de feedback. E de esperar que a modelação conjunta de sucessões potencie a capacidade de as descrever, obtendo-se ganhos significativos em termos previsionais. Deste modo, procerder-se-á ao estudo, com base na análise de dois exemplos concretos, do comportamento dos modelos vectoriais ARMA, conffontando-os com os resultados obtidos pelos modelos univariados e pelos modelos de função transferência.
The aim of this work is to present the methodology of vectorial ARMA models applied to real time series. These models are generalisations of the univariate ARMA models and of the multivariate transfer function models. The advantage of vectorial ARMA modelling is that it allows the joint analysis of time series which exhibit feedback effects. It is our intention to show that this joint modelling increases the capacity to describe and forecast the series. The application was made with the use of two real examples, comparing the results from the vectorial ARMA, the univariate and the transfer function modelling.
APA, Harvard, Vancouver, ISO, and other styles
26

SILVA, David Augusto. "Otimização da função de fitness para a evolução de redes neurais com o uso de análise envoltória de dados aplicada à previsão de séries temporais." Universidade Federal Rural de Pernambuco, 2011. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/4875.

Full text
Abstract:
The techniques for time series analysis and forecasting have had a strong presence in the literature over the years. Computational resources combined with statistical techniques are improving the predictive results, and these results have become increasingly accurate. Computational methods based on Artificial Neural Networks (ANN) and Evolutionary Computing (EC) present a new approach to solving the time series analysis and forecasting problem. These computational methods belong to the branch of Artificial Intelligence (AI) and are biologically inspired: the ANN models are based on the neural structure of intelligent organisms, and EC uses Charles Darwin's concept of natural selection. Both methods acquire experience from prior knowledge and examples of the given problem. In particular, for the time series forecasting problem, the objective is to find the predictive model with the highest forecast performance, where the performance measures are statistical errors. However, there is no universal criterion to identify the best performance measure. Since the ANNs are the predictive models, the EC constantly evaluates the forecast performance of the ANNs, using a fitness function to guide the predictive model toward an optimal solution. Data Envelopment Analysis (DEA) was employed to determine the best combination of variables based on the relative efficiency of the best models. Therefore, this work studies the optimization of the fitness function with Data Envelopment Analysis applied to a hybrid intelligent system for the time series forecasting problem. The data analyzed are composed of financial, agribusiness and natural-phenomena series. The C language was employed for the implementation of the hybrid intelligent system and the R environment version 2.12 for the analysis of the DEA models. In general, the use of the DEA procedure to evaluate the fitness functions was satisfactory and serves as an additional resource in the field of time series forecasting. Researchers need to evaluate the results from different perspectives, whether in terms of the computational cost of implementing a particular function or in terms of which function was more efficient at identifying unwanted combinations, saving time and resources.
As técnicas de análise e previsão de séries temporais alcançaram uma posição de distinção na literatura ao longo dos anos. A utilização de recursos computacionais, combinada com técnicas estatísticas, apresenta resultados mais precisos quando comparados com os recursos separadamente. Em particular, técnicas que usam Redes Neurais Artificiais (RNA) e Computação Evolutiva (CE), apresenta uma posição de destaque na resolução de problemas de previsão na análise de séries temporais. Estas técnicas de Inteligência Artificial (AI) são inspiradas biologicamente, no qual o modelo de RNA é baseado na estrutura neural de organismos inteligentes, que adquirem conhecimento através da experiência. Para o problema de previsão em séries temporais, um fator importante para o maior desempenho na previsão é encontrar um método preditivo com a melhor acurácia possível, tanto quanto possível, no qual o desempenho do método pode ser analisado através de erros de previsão. Entretanto, não existe um critério universal para identificar qual a melhor medida de desempenho a ser utilizada para a caracterização da previsão. Uma vez que as RNAs são os modelos de previsão, a CE constantemente avaliará o desempenho de previsão das RNAs, usando uma função de fitness para guiar o modelo preditivo para uma solução ótima. Desejando verificar quais critérios seriam mais eficientes no momento de escolher o melhor modelo preditivo, a Análise Envoltória de Dados (DEA) é aplicada para fornecer a melhor combinação de variáveis visando a otimização do modelo. Portanto, nesta dissertação, foi estudado o processo de otimização de Funções de Fitness através do uso da Análise Envoltória de Dados utilizando-se de técnicas hibridas de Inteligência Artificial aplicadas a área de previsão de séries temporais. O banco de dados utilizado foi obtido de séries históricas econômico- financeiras, fenômenos naturais e agronegócios obtidos em diferentes órgãos específicos de cada área. Quanto à parte operacional, utilizou-se a linguagem de programação C para implementação do sistema híbrido inteligente e o ambiente R versão 2.12 para a análise dos modelos DEA. Em geral, a perspectiva do uso da DEA para avaliar as Funções de Fitness foi satisfatório e serve como recurso adicional na área de previsão de séries temporais. Cabe ao pesquisador, avaliar os resultados sob diferentes óticas, quer seja sob a questão do custo computacional de implementar uma determinada Função que foi mais eficiente ou sob o aspecto de avaliar quais combinações não são desejadas poupando tempo e recursos.
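To make the DEA step concrete, a minimal input-oriented CCR envelopment sketch using scipy is given below. The three 'decision making units' stand in for candidate fitness-function configurations, and the input/output figures are invented for illustration; this is not the thesis's data or its exact DEA formulation.

    import numpy as np
    from scipy.optimize import linprog

    # Each row is a candidate configuration (DMU): input = CPU time, output = accuracy
    inputs = np.array([[2.0], [3.5], [1.5]])      # hypothetical training cost
    outputs = np.array([[0.90], [0.92], [0.85]])  # hypothetical forecast accuracy
    n_dmu = inputs.shape[0]

    def ccr_efficiency(j):
        """Input-oriented CCR efficiency of DMU j:
        min theta s.t. sum_k lam_k*x_k <= theta*x_j, sum_k lam_k*y_k >= y_j, lam >= 0."""
        c = np.r_[1.0, np.zeros(n_dmu)]           # decision vector [theta, lam_1..lam_n]
        A_ub, b_ub = [], []
        for i in range(inputs.shape[1]):          # input constraints
            A_ub.append(np.r_[-inputs[j, i], inputs[:, i]])
            b_ub.append(0.0)
        for r in range(outputs.shape[1]):         # output constraints
            A_ub.append(np.r_[0.0, -outputs[:, r]])
            b_ub.append(-outputs[j, r])
        bounds = [(None, None)] + [(0, None)] * n_dmu
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=bounds, method="highs")
        return res.x[0]

    for j in range(n_dmu):
        print(f"DMU {j}: CCR efficiency = {ccr_efficiency(j):.3f}")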
APA, Harvard, Vancouver, ISO, and other styles
27

Yilmaz, Ozturk Isik Ekin. "The Application And Evaluation Of Functional Link Net Techniques In Forecasting Electricity Demand." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610230/index.pdf.

Full text
Abstract:
This thesis analyzes the application of the functional link net (FLN) method to forecasting electricity demand in Turkey. The current official forecasting model (MAED), which is employed by the Turkish Electricity Transmission Company (TEiAS), and other methods are discussed. An empirical investigation and evaluation of using functional link nets is provided.
APA, Harvard, Vancouver, ISO, and other styles
28

Kantanantha, Nantachai. "Crop decision planning under yield and price uncertainties." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/24676.

Full text
Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2007.
Committee Co-Chair: Griffin, Paul; Committee Co-Chair: Serban, Nicoleta; Committee Member: Liang, Steven; Committee Member: Sharp, Gunter; Committee Member: Tsui, Kwok-Leung
APA, Harvard, Vancouver, ISO, and other styles
29

Jonéus, Paulina. "The more the merrier? On the performance of factor-augmented models." Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-256760.

Full text
Abstract:
Vector autoregression (VAR) models are widely used in an attempt to identify and measure the effect of monetary policy shocks on an economy and to forecast economic time series. However, the sparse information sets used in the VAR approach have been subject to criticism, and in recent decades the use of factor models as a means of dimension reduction has received greater focus. The method of summarizing the information contained in a large set of macroeconomic time series by principal components, and using these as regressors in VAR models, has been pointed out as a potential solution to the problems of limited information and estimation of too many parameters. This paper combines the standard VAR methodology with dynamic factor analysis on Swedish data for two purposes: to assess the effects of monetary policy shocks and to examine the forecasting properties. Latent factors estimated by the principal components method are in this study found to contribute to a more coherent picture, in line with economic theory, when examining monetary policy shocks to the Swedish economy. The factor-augmented models can, on the other hand, not be shown to increase forecasting accuracy to a great extent compared to standard models.
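A minimal sketch of the factor-augmented VAR idea, assuming a hypothetical macro panel and using scikit-learn and statsmodels; the Swedish data, lag choices and identification scheme of the paper are not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.tsa.api import VAR

# Hypothetical data: a (T x N) panel of macro series and a few variables of interest
rng = np.random.default_rng(1)
panel = pd.DataFrame(rng.standard_normal((200, 60)))
policy = pd.DataFrame(rng.standard_normal((200, 3)), columns=["rate", "cpi", "gdp"])

# Summarize the large panel with a few principal-component factors ...
factors = PCA(n_components=3).fit_transform((panel - panel.mean()) / panel.std())
factors = pd.DataFrame(factors, columns=["F1", "F2", "F3"])

# ... and augment the VAR information set with them (FAVAR-style)
favar_data = pd.concat([policy, factors], axis=1)
res = VAR(favar_data).fit(4)
print(res.forecast(favar_data.values[-res.k_ar:], steps=8))   # 8-step-ahead forecast
```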
APA, Harvard, Vancouver, ISO, and other styles
30

Sen, Caner. "Tsunami Source Inversion Using Genetic Algorithm." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612939/index.pdf.

Full text
Abstract:
Tsunami forecasting methodology developed by the United States National Oceanic and Atmospheric Administration's Center for Tsunami Research is based on the concept of a pre-computed tsunami database which includes tsunami model results from Mw 7.5 earthquakes called tsunami source functions. Tsunami source functions are placed along the subduction zones of the oceans of the world in several rows. The linearity of tsunami propagation in the open ocean allows scaling and/or combination of the pre-computed tsunami source functions. An offshore scenario is obtained by inverting scaled and/or combined tsunami source functions against Deep-ocean Assessment and Reporting of Tsunami (DART) buoy measurements. A graphical user interface called Genetic Algorithm for INversion (GAIN) was developed in MATLAB using the general optimization toolbox to perform the inversion. The 15 November 2006 Kuril and 27 February 2010 Chile tsunamis are chosen as case studies. One or several DART buoy measurements are used to test different error minimization functions, with and without earthquake magnitude as a constraint. The inversion results are discussed by comparing the forecasting model results with the tide gage measurements.
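The core of the inversion is that linearity lets an offshore scenario be written as a weighted sum of unit source functions, with the weights found by evolutionary search against a DART record. The sketch below illustrates this with synthetic waveforms, and SciPy's differential evolution stands in for the thesis's MATLAB GA (GAIN); all data are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical pre-computed source functions: unit_waveforms[i] is the modelled
# water-level series at a DART buoy for the i-th Mw 7.5 unit source.
rng = np.random.default_rng(2)
t = np.linspace(0, 3, 200)                       # hours after the earthquake
unit_waveforms = np.array([np.sin(2 * np.pi * (t - lag)) * np.exp(-t)
                           for lag in (0.0, 0.2, 0.4)])
observed = 1.5 * unit_waveforms[0] + 0.7 * unit_waveforms[2] \
           + 0.02 * rng.standard_normal(t.size)

def misfit(alpha):
    """RMS error between the scaled/combined sources and the DART record."""
    synthetic = alpha @ unit_waveforms           # linearity allows simple scaling
    return np.sqrt(np.mean((synthetic - observed) ** 2))

# Evolutionary search for the scale factors of each unit source
result = differential_evolution(misfit, bounds=[(0, 5)] * 3, seed=0)
print("recovered scale factors:", np.round(result.x, 2))
```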
APA, Harvard, Vancouver, ISO, and other styles
31

Hunt, Julian David. "Integration of rationale management with multi-criteria decision analysis, probabilistic forecasting and semantics : application to the UK energy sector." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:2cc24d23-3e93-42e0-bb7a-6e39a65d7425.

Full text
Abstract:
This thesis presents a new integrated tool and decision support framework to approach complex problems resulting from the interaction of many multi-criteria issues. The framework is embedded in an integrated tool called OUTDO (Oxford University Tool for Decision Organisation). OUTDO integrates Multi-Criteria Decision Analysis (MCDA), decision rationale management with a modified Issue-Based Information Systems (IBIS) representation, and probabilistic forecasting to effectively capture the essential reasons why decisions are made and to dynamically re-use the rationale. In doing so, it allows exploration of how changes in external parameters affect complicated and uncertain decision making processes in the present and in the future. Once the decision maker constructs his or her own decision process, OUTDO checks if the decision process is consistent and coherent and looks for possible ways to improve it using three new semantic-based decision support approaches. For this reason, two ontologies (the Decision Ontology and the Energy Ontology) were integrated into OUTDO to provide it with these semantic capabilities. The Decision Ontology keeps a record of the decision rationale extracted from OUTDO and the Energy Ontology describes the energy generation domain, focusing on the water requirement in thermoelectric power plants. A case study, with the objective of recommending electricity generation and steam condensation technologies for ten different regions in the UK, is used to verify OUTDO’s features and reach conclusions about the overall work.
APA, Harvard, Vancouver, ISO, and other styles
32

Willersjö, Nyfelt Emil. "Comparison of the 1st and 2nd order Lee–Carter methods with the robust Hyndman–Ullah method for fitting and forecasting mortality rates." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48383.

Full text
Abstract:
The 1st and 2nd order Lee–Carter methods were compared with the Hyndman–Ullah method with regard to goodness of fit and the ability to forecast mortality rates. Swedish population data from the Human Mortality Database were used. The robust estimation property of the Hyndman–Ullah method was also tested by including the Spanish flu and a hypothetical scenario of the COVID-19 pandemic. After presenting the three methods and making several comparisons between them, it is concluded that the Hyndman–Ullah method is overall superior among the three methods on the chosen dataset. Its robust estimation of mortality shocks could also be confirmed.
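For reference, a minimal first-order Lee–Carter fit and period-index forecast might look like the sketch below (synthetic log mortality rates; the Hyndman–Ullah functional approach and the robust estimation are not shown).

```python
import numpy as np

def lee_carter(log_m):
    """First-order Lee-Carter fit of a (ages x years) matrix of log mortality rates:
    log m_{x,t} = a_x + b_x * k_t, fitted by SVD of the centered matrix."""
    a = log_m.mean(axis=1)                        # age pattern a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                   # normalise so sum(b_x) = 1
    k = s[0] * Vt[0] * U[:, 0].sum()              # period index k_t
    return a, b, k

def forecast_k(k, horizon):
    """Random walk with drift, the usual forecast for the period index."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, horizon + 1)

# Hypothetical log death rates for 5 age groups over 40 years
rng = np.random.default_rng(3)
log_m = (-6 + 0.08 * np.arange(5)[:, None]
         - 0.02 * np.arange(40) + 0.01 * rng.standard_normal((5, 40)))
a, b, k = lee_carter(log_m)
log_m_future = a[:, None] + b[:, None] * forecast_k(k, 10)   # 10-year forecast
print(log_m_future.shape)
```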
APA, Harvard, Vancouver, ISO, and other styles
33

Henttu-Aho, T. (Tiina). "The emerging practices of modern budgeting and the role of controller." Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526214399.

Full text
Abstract:
The presumed dominance of the traditional annual budgeting process as a cornerstone of management control has been called into question in recent years. Various new developments in budgeting have been seen replacing or complementing organisations' conventional budgeting. The dissertation provides a comprehensive picture of these new budgetary practices and their implications for management accounting work, through investigation of the fragmentation among the various purposes of budgeting and of the ways in which the role of the controller and new budgetary practices can complement each other. The dissertation is composed of four inter-related essays, which provide qualitative evidence of how firmly established practices such as budgeting change and what implications the change has for the conflicting purposes of budgeting. It also provides insight into the ways controllers are able to build a holistic view of the totality of budgetary control and compile new accounting information. The empirical findings presented in the dissertation give rise to the novel concept of fragmentation, which can be defined as an arrangement wherein a new mix of diverse controls is used to serve several purposes of budgeting and a single budgeting process is either replaced with or complemented by other control mechanisms. Fragmentation can serve as a common denominator for recent developments in budgeting but also aid in understanding the variation in new budgetary practices. Fragmentation of budgeting blurs the boundaries of a budgetary system but also enables flexibility to be designed into the control system itself. For the role of the controller, fragmented budgetary practices mean co-ordinating the linkages between various budget-related methods, wider communication and interaction with organisational actors, new business-oriented skills related to key purposes of budgeting, and a growing professional role in increasing the 'realism' of accounting information in the lateral budgetary planning process.
APA, Harvard, Vancouver, ISO, and other styles
34

Ozkaya, Evren. "Demand management in global supply chains." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26617.

Full text
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Keskinocak, Pinar; Committee Co-Chair: Vande Vate, John; Committee Member: Ferguson, Mark; Committee Member: Griffin, Paul; Committee Member: Swann, Julie. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
35

Bunger, R. C. (Robert Charles). "Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time." Thesis, University of North Texas, 1988. https://digital.library.unt.edu/ark:/67531/metadc330882/.

Full text
Abstract:
In this study, a two-part mixed probability density function was derived which describes the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values for the standard deviations of the two halves and for the mean are given. A general form of the function is also given, which uses linear regression models to estimate the standard deviations and the means. The density functions allow stock market participants trading index options and futures contracts on the S&P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
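A sketch of the general shape described here, using the standard two-piece (split) normal density; the thesis's exact parameterisation and the regression-based estimates of its parameters may differ, and the numbers below are only illustrative.

```python
import numpy as np
from scipy.integrate import quad

def two_piece_normal_pdf(x, mu, sigma_left, sigma_right):
    """Split-normal density: two half-normal branches with different spreads
    joined at mu (illustrative form of a 'two-part mixed' density)."""
    x = np.asarray(x, float)
    norm_const = np.sqrt(2.0 / np.pi) / (sigma_left + sigma_right)
    sigma = np.where(x < mu, sigma_left, sigma_right)
    return norm_const * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

# Probability that the relative index change over the interval lies in [-2%, 1%]
prob, _ = quad(two_piece_normal_pdf, -0.02, 0.01, args=(0.0005, 0.008, 0.011))
print(f"P(-2% < change < 1%) ≈ {prob:.3f}")
```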
APA, Harvard, Vancouver, ISO, and other styles
36

Cugliari, Jairo. "Prévision non paramétrique de processus à valeurs fonctionnelles : application à la consommation d’électricité." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112234/document.

Full text
Abstract:
This thesis addresses the problem of predicting a functional-valued stochastic process. We first explore the model proposed by Antoniadis et al. (2006) in the context of a practical application, the French electrical power demand, where the hypothesis of stationarity may fail. The departure from stationarity is twofold: an evolving mean level and the existence of groups that may be seen as classes of stationarity. We explore some corrections that enhance the prediction performance. The corrections aim to take into account the presence of these nonstationary features. In particular, to handle the existence of groups, we constrain the model to use only the data that belong to the same group as the last available observation. If one knows the grouping, a simple post-treatment suffices to obtain better prediction performance. If the grouping is unknown, we propose to infer it from the data using cluster analysis. The infinite dimension of the not necessarily stationary trajectories has to be taken into account by the clustering algorithm. We propose two strategies for this, both based on wavelet transforms. The first one uses a feature extraction approach through the Discrete Wavelet Transform, combined with a feature selection algorithm to select the significant features to be used in a classical clustering algorithm. The second approach clusters the functions directly by means of a dissimilarity measure of the Continuous Wavelet spectra. The third part of the thesis is dedicated to exploring an alternative prediction model that incorporates exogenous information. For this purpose we use the framework given by Autoregressive Hilbertian processes. We propose a new class of processes that we call Conditional Autoregressive Hilbertian (CARH) and develop the equivalent of projection and resolvent classes of estimators to predict such processes.
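A minimal sketch of the first clustering strategy (wavelet feature extraction followed by a classical clustering algorithm), using PyWavelets and scikit-learn on invented daily curves; the functional predictor itself and the CARH estimators are not shown.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

# Hypothetical daily load curves: one row per day (48 half-hourly values),
# with weekend days having a different amplitude than weekdays
rng = np.random.default_rng(4)
days = np.array([np.sin(np.linspace(0, 2 * np.pi, 48)) * (1 + 0.3 * (d % 7 >= 5))
                 + 0.05 * rng.standard_normal(48) for d in range(140)])

def wavelet_features(curve, wavelet="db4", level=2):
    """Energy at each DWT scale: a small feature vector describing one curve."""
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

features = np.array([wavelet_features(day) for day in days])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
# The forecaster would then be restricted to past days in the same cluster
# as the most recent observed day.
print(np.bincount(labels))
```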
APA, Harvard, Vancouver, ISO, and other styles
37

Ahmidi, Amir. "Participation de parcs de production éolienne au réglage de la tension et de la puissance réactive dans les réseaux électriques." Phd thesis, Ecole Centrale de Lille, 2010. http://tel.archives-ouvertes.fr/tel-00590371.

Full text
Abstract:
In this thesis we propose different tools for voltage control and reactive power management depending on how wind generation is connected to the electrical grid. Three cases are studied: direct connection to a distribution substation, wind generators distributed throughout a distribution network, and connection of a group of wind farms to the transmission network. A control algorithm based on reactive power regulation is proposed for the direct connection of a wind farm to a substation. A coordinated voltage control scheme in the presence of an on-load tap changer (D-RCT) is proposed for wind generators distributed in a distribution network. A more decentralised version of the coordinated control (D2-RCT), which could be implemented as an intelligent multi-agent system (SMA), is also proposed. A multi-level control system is proposed for the connection of a group of wind farms to the transmission network; it makes it possible to respond optimally to a reactive power request sent by the transmission system operator. The different control schemes proposed are based on multi-objective optimization algorithms. To validate in real time the behaviour of the developed control strategies and their communication modes, an experimental implementation on the RT-Lab real-time simulator was carried out. Finally, the simulation results show improved integration of distributed generation into electrical grids.
APA, Harvard, Vancouver, ISO, and other styles
38

DELLA, NOCE MATTEO. "Un modello VAR-GARCH multivariato per il mercato elettrico italiano." Doctoral thesis, Università Cattolica del Sacro Cuore, 2011. http://hdl.handle.net/10280/1108.

Full text
Abstract:
It is commonly known that spot electricity markets show mean reversion and high price volatility. This work employs a VAR-MGARCH model to capture these features in the Italian electricity market (IPEX) and to analyze the interrelations existing among the different zones into which the market is divided. Daily spot prices from 1 January 2006 to 31 December 2008 are employed. The estimated coefficients from the conditional mean equations indicate that the regional markets are quite integrated and that regional electricity prices can be usefully forecasted using lagged prices from either the same market or the other zonal markets. Volatility and cross-volatility spill-overs are significant for all markets, indicating the presence of strong ARCH and GARCH effects and market inefficiency. Strong persistence of volatility and cross-volatility is also evident in all local markets. The results also indicate that volatility innovations or shocks in all markets persist over time and that in every market this persistence is more marked for own innovations or shocks than for cross innovations or shocks. This persistence captures the propensity of price changes of similar magnitude to cluster in time.
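A rough sketch of the two-step idea (VAR conditional-mean equations, then GARCH-type conditional variances on the residuals), assuming hypothetical zonal price series; the thesis estimates the multivariate GARCH jointly rather than zone by zone, so this is only an illustration using statsmodels and the arch package.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from arch import arch_model

# Hypothetical log zonal prices for three macro-zones
rng = np.random.default_rng(5)
prices = pd.DataFrame(np.cumsum(rng.standard_normal((500, 3)) * 0.05, axis=0) + 4,
                      columns=["North", "Centre", "South"])

# Step 1: VAR conditional-mean equations (each zone regressed on lagged prices
# of all zones, which is how cross-market integration is read off)
mean_fit = VAR(prices).fit(2)
residuals = mean_fit.resid

# Step 2: conditional-variance equations; here each zone gets a univariate
# GARCH(1,1) on the VAR residuals to illustrate the volatility-persistence part
for zone in residuals.columns:
    garch = arch_model(100 * residuals[zone], vol="GARCH", p=1, q=1).fit(disp="off")
    print(zone, garch.params[["alpha[1]", "beta[1]"]].round(3).to_dict())
```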
APA, Harvard, Vancouver, ISO, and other styles
39

楊勝斌. "Belief function and fuzzy time series forecasting." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/71129730116507575374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Tsai, Yen-lung, and 蔡炎龍. "Dynamical Radial Basis Function Networks and Chaotic Forecasting." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/02000011855628912218.

Full text
Abstract:
Master's thesis, National Chengchi University, Graduate Institute of Applied Mathematics, ROC year 81.
The forecasting technique is important for many research areas and applications. In this paper, we construct a new neural network model, the dynamical radial basis function (DRBF) network, and use DRBF networks as function approximators to solve some forecasting problems. Different learning algorithms are used to test the capability of DRBF networks.
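A plain Gaussian RBF network used as a function approximator for one-step-ahead prediction of a chaotic series might look like the sketch below; the "dynamical" variant and the specific learning algorithms tested in the thesis are not reproduced.

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian RBF network: y_hat = w0 + sum_j w_j exp(-||x - c_j||^2 / (2 s^2))."""
    def __init__(self, centers, width):
        self.centers, self.width = centers, width
    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2 * self.width ** 2))])
    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self
    def predict(self, X):
        return self._phi(X) @ self.w

# Forecast a chaotic series from two lagged values (logistic map as test case)
rng = np.random.default_rng(6)
x = np.empty(500); x[0] = 0.3
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
X, y = np.column_stack([x[1:-1], x[:-2]]), x[2:]
centers = X[rng.choice(len(X), 20, replace=False)]
model = RBFNet(centers, width=0.2).fit(X[:400], y[:400])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2)))
```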
APA, Harvard, Vancouver, ISO, and other styles
41

Kung, Chih-Yun, and 龔志澐. "Forecasting Ability for Long Memory and Deterministic Volitility Function on TXO." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/84799819953525120221.

Full text
Abstract:
Master's thesis, Tamkang University, Department of Banking and Finance, ROC year 97.
This paper estimates the implied volatility of at-the-money (ATM) options, the deterministic volatility function (DVF) and realized volatility (RV) modelled with ARFIMA, using TAIFEX options on the Taiwan stock index from December 2001 to May 2008. We compare predictive ability with an encompassing regression model in which the predicted values serve as independent variables. The results confirm the presence of long-memory behavior in TX volatility, which is accurately fitted by ARFIMA. Comparing the different predictive variables, we find that the DVF model has the highest ability to forecast implied volatility for both call and put options. Moreover, after including all three predictive variables, the encompassing regression has the highest in-sample forecasting ability for implied volatility and the smallest out-of-sample forecasting error for call and put options. Finally, to examine the forecasting ability of the encompassing regression model, we test whether the implied volatility forecasts can be used to formulate profitable out-of-sample trading strategies in the TXO market. Using delta-neutral options, we construct straddle portfolios to estimate the benefits of the trading strategies. The results show that the encompassing regression with realized volatility and DVF volatility yields the best performance for the one-week straddle portfolio. Furthermore, with or without transaction costs, the encompassing regression with the smallest forecasting error earns a positive return in the TXO market.
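The encompassing regression at the heart of the comparison simply regresses realized volatility on the competing forecasts. A sketch with invented, aligned series (statsmodels OLS) is given below; the ARFIMA estimation, DVF fitting and straddle construction are not shown.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical aligned weekly series: realized volatility and three competing forecasts
rng = np.random.default_rng(7)
n = 300
true_vol = 0.2 + 0.05 * np.sin(np.linspace(0, 20, n)) + 0.01 * rng.standard_normal(n)
atm_iv = true_vol + 0.02 * rng.standard_normal(n)
dvf_iv = true_vol + 0.01 * rng.standard_normal(n)
arfima_rv = true_vol + 0.03 * rng.standard_normal(n)
realized = true_vol + 0.02 * rng.standard_normal(n)

# Encompassing regression: realized volatility on all candidate forecasts.
# A forecast whose coefficient stays significant adds information the others miss.
X = sm.add_constant(np.column_stack([atm_iv, dvf_iv, arfima_rv]))
fit = sm.OLS(realized, X).fit()
print(fit.summary(xname=["const", "ATM_IV", "DVF_IV", "ARFIMA_RV"]))
```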
APA, Harvard, Vancouver, ISO, and other styles
42

Choe, Yu-Ri, and 崔友莉. "Forecasting the Demand for Korea Tourism : Application of the Quartic Function." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/c7782p.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of National Development, ROC year 102.
This research focuses on the demand for South Korean tourism. The data used are total inbound visitor arrivals to South Korea from January 1975 to December 2013. Predictions are made from the vertex form of the quartic model. For the out-of-sample forecasts, the author uses MAPE and RMSE to compare the results with those of SARIMA(2,1,2)(1,1,1)12 and NAÏVE I.
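A sketch of the forecasting-and-evaluation step, assuming a vertex-form quartic trend y = a(t - h)^4 + k and invented monthly arrivals; the thesis's exact specification and the SARIMA/NAÏVE benchmarks are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def quartic_vertex(t, a, h, k):
    """Vertex-form quartic trend y = a*(t - h)**4 + k (illustrative parameterisation)."""
    return a * (t - h) ** 4 + k

# Hypothetical monthly visitor arrivals (in thousands)
rng = np.random.default_rng(8)
t = np.arange(120, dtype=float)
arrivals = 300 + 0.00002 * (t - 20) ** 4 + 10 * rng.standard_normal(120)
train, test = slice(0, 108), slice(108, 120)

params, _ = curve_fit(quartic_vertex, t[train], arrivals[train], p0=[1e-5, 25.0, 300.0])
forecast = quartic_vertex(t[test], *params)

mape = 100 * np.mean(np.abs((arrivals[test] - forecast) / arrivals[test]))
rmse = np.sqrt(np.mean((arrivals[test] - forecast) ** 2))
print(f"out-of-sample MAPE = {mape:.2f}%, RMSE = {rmse:.1f}")
```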
APA, Harvard, Vancouver, ISO, and other styles
43

Tsai, Wei-Lun, and 蔡維倫. "Fuzzy Time Series Models Based on Fitting Function for Forecasting Stock Index." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/35714909641059730760.

Full text
Abstract:
Master's thesis, National Yunlin University of Science and Technology, Department of Information Management, ROC year 100.
In recent years, many time series models have been widely applied to forecasting stock indices. However, these methods still have some problems: (1) conventional time series models consider only a single variable; (2) fuzzy time series models determine the interval length of linguistic values subjectively; (3) variable selection depends on personal experience and opinion. Hence, this paper proposes a novel fuzzy time series model based on a fitting function to forecast stock indices. The proposed model employs Pearson's correlation to select important technical indicators objectively. To evaluate its performance, the transaction records of the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and the HSI (Hang Seng Index) from 1998/01/03 to 2006/12/31 are used as the experimental dataset, with the root mean square error (RMSE) as the evaluation criterion. Chen's (2000) model, Yu's (2005) model, support vector regression (SVR) and partial least squares regression (PLSR) are used as comparison models. The results show that the proposed model outperforms the listed models in accuracy when forecasting the Taiwan and Hong Kong stock markets.
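A sketch of the objective indicator-selection and evaluation steps (Pearson correlation filter plus RMSE), with invented price and indicator series; the fuzzy time series model itself, and the fitting function it is based on, are not shown.

```python
import numpy as np
import pandas as pd

# Hypothetical daily data: closing index plus a few technical indicators
rng = np.random.default_rng(9)
n = 500
close = 8000 + np.cumsum(rng.standard_normal(n) * 30)
data = pd.DataFrame({
    "close": close,
    "MA_5": pd.Series(close).rolling(5).mean(),
    "RSI_14": 50 + 10 * rng.standard_normal(n),   # placeholder indicator
    "MOM_10": pd.Series(close).diff(10),
}).dropna()

# Objective selection: keep indicators whose Pearson correlation with the next
# day's close exceeds a threshold
target = data["close"].shift(-1).dropna()
aligned = data.iloc[:-1]
corr = aligned.drop(columns="close").corrwith(target).abs()
print("selected indicators:", corr[corr > 0.5].index.tolist())

def rmse(actual, forecast):
    """Evaluation criterion used to compare the competing forecasting models."""
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2)))

print("naive benchmark RMSE:", rmse(target, aligned["close"]))
```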
APA, Harvard, Vancouver, ISO, and other styles
44

Yang, Shih-Yu, and 楊偲妤. "Forecasting the Demand for Taiwan Tourism-Application of the Transfer Function Model." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/93900481056827939989.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of National Development, ROC year 101.
Taiwan's tourism industry has grown along with world tourism and makes a significant contribution to society. This research combines a piecewise linear model with time series analysis to build a transfer function model for forecasting the demand for Taiwan tourism based on monthly tourist arrivals. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to assess the precision of the forecasting models. Finally, to compare out-of-sample forecasting accuracy between models, the Naive method is added as a benchmark and forecasting performance is evaluated with the Diebold-Mariano test. The results show that the piecewise linear model and the transfer function model predict Taiwan tourism demand precisely and significantly outperform the Naive method over the out-of-sample forecasting period. Hopefully this research can contribute to the relevant research fields.
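The out-of-sample comparison relies on the Diebold-Mariano test of equal predictive accuracy. A compact sketch of that test (squared-error loss, asymptotic normal p-value) applied to invented error series is shown below; it is not necessarily the exact variant used in the thesis.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1, loss=lambda e: e ** 2):
    """Diebold-Mariano test of equal predictive accuracy for two h-step-ahead
    forecast error series e1, e2 (squared-error loss by default)."""
    d = loss(np.asarray(e1, float)) - loss(np.asarray(e2, float))
    T = d.size
    d_bar = d.mean()
    # truncated long-run variance of the loss differential (lags 0..h-1)
    gamma = [np.cov(d[k:], d[:T - k], bias=True)[0, 1] if k else d.var()
             for k in range(h)]
    lr_var = gamma[0] + 2 * sum(gamma[1:])
    dm = d_bar / np.sqrt(lr_var / T)
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))

# Example: transfer-function model errors vs Naive (no-change) forecast errors
rng = np.random.default_rng(10)
e_model = 0.8 * rng.standard_normal(60)
e_naive = 1.2 * rng.standard_normal(60)
print("DM statistic, p-value:", diebold_mariano(e_model, e_naive))
```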
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Zhen-Yao, and 陳振耀. "Application of Evolutionary Computation-Based Radial Basis Function Neural Network to IPC Sales Forecasting." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/d83vya.

Full text
Abstract:
Doctoral dissertation, National Taipei University of Technology, Graduate Institute of Industrial and Business Management, ROC year 98.
Forecasting is one of the crucial factors in practical applications since it ensures the effective allocation of capacity and a proper amount of inventory. Since autoregressive integrated moving average (ARIMA) models, which are more suitable for linear data, have their limits when predicting the complex data of real-world problems, several approaches have been developed to meet the challenge of nonlinear forecasting. For the purpose of forecasting nonlinear data, this study therefore develops three integrated evolutionary computation (EC)-based algorithms for training a radial basis function neural network (RBFnn). The EC-based algorithms include the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial immune system (AIS). To verify the three integrated EC-based algorithms, three benchmark continuous test functions were employed, and the experimental results are very promising. In addition, industrial personal computer (IPC) sales data provided by an internationally well-known IPC manufacturer in Taiwan are used to further assess the developed algorithms. The model evaluation results indicate that the developed algorithms indeed forecast more accurately. Furthermore, if the foreign exchange (FX) factor is considered, the forecasting results improve further.
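A bare-bones evolutionary loop over RBF centre locations, scored by forecast RMSE as the fitness, conveys the overall idea; the dissertation's GA, PSO and AIS operators, the benchmark functions and the real IPC sales data are not reproduced, and everything below is synthetic.

```python
import numpy as np

def rbf_forecast(X, y, centers, width):
    """Least-squares output weights for fixed Gaussian centres, returning
    the in-sample predictions each candidate is scored on."""
    Phi = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * width ** 2))
    Phi = np.hstack([np.ones((len(X), 1)), Phi])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ w

rng = np.random.default_rng(11)
# Hypothetical monthly sales with trend and noise, forecast from 3 lags
sales = 100 + 0.5 * np.arange(120) + 8 * rng.standard_normal(120)
X = np.column_stack([sales[2:-1], sales[1:-2], sales[:-3]])
y = sales[3:]

# A simple (mu + lambda) evolutionary loop over the centre locations; GA, PSO
# and AIS play this role in the dissertation with richer operators.
pop = [X[rng.choice(len(X), 8, replace=False)] for _ in range(20)]
for generation in range(30):
    scores = [np.sqrt(np.mean((rbf_forecast(X, y, c, 30.0) - y) ** 2)) for c in pop]
    parents = [pop[i] for i in np.argsort(scores)[:5]]
    pop = parents + [p + rng.normal(0, 2.0, p.shape) for p in parents for _ in range(3)]
print("best fitness (RMSE):", min(scores))
```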
APA, Harvard, Vancouver, ISO, and other styles
46

Liang, Pei-Hwa, and 梁培華. "Simultaneous transfer function model building, structural analysis and forecasting for earnings and stock price." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/80724947028030398961.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Accounting, ROC year 81.
The study employs a simultaneous transfer function (STF) model to estimate the relationship between price changes and earnings changes. The STF model provides a general framework for the integration of econometric and time series models and is appropriate for both structural analysis and forecasting. In addition to earnings and stock price, the explanatory variables included in the model are: (1) the leading indicator and current indicator, as surrogates for economy-wide information; (2) the weighted average stock price index, as a surrogate for the market factor; (3) the industry-classified stock price index, as a surrogate for the industry factor; and (4) sales, a non-earnings accounting number used as an earnings predictor. The empirical results are as follows: (1) price conveys information about earnings; (2) earnings convey information about price; (3) there is no significant contemporaneous relationship between earnings changes and price changes; and (4) the STF model produces better forecasts than the ARMA model.
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Chun-Hsun, and 黃俊勳. "Application of Membership Function and Back-Propagation Network on Urban Commute-Journey Forecasting Model." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/07291200106593300271.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Civil Engineering, ROC year 89.
The arrangement of daily journeys and activities depends on the capability constraints, coupling constraints and authority constraints faced by individuals. In order to make the most of the activities carried out on their journeys, travelers tend to chain trips along the way to or from work. In this study, we use trip chains as the analysis units and establish urban commute-journey forecasting models based on various methods. When the regression method is used, a number of assumptions need to be satisfied. However, activity-travel behaviors are often affected by external environments, personal and household socio-economic characteristics, and the interrelationships among household members; this complexity makes the assumptions hard to fulfil. Recently, some studies have applied back-propagation networks (BPNs) to simulate travel behaviors. Through the links of hidden layers and neurons, BPN models are capable of reflecting travel behaviors to a certain degree and generate better results than regression methods. In this study, back-propagation networks are combined with fuzzy membership functions to reflect the fuzziness in travel behaviors in order to improve the forecasting ability of the models. Logistic regressions, back-propagation networks (BPN) and back-propagation networks combined with membership functions (FBPN) are used separately to establish forecasting models for urban commute journeys. Using the travel data collected in the Taipei metropolitan area in 1992, the forecasting results generated by the combination of back-propagation networks and membership functions are better than those of the other models. Furthermore, the input and output fuzzification models (FBPN-all) perform well among the FBPNs. Using membership functions helps reflect the complexity and fuzziness existing in travel behaviors. In addition, certain personal characteristics, especially gender, make significant differences in the travel decision-making process. In the latter section of this study, separate models based on gender are also established for comparison. Following the model evaluation, the results indicate that the models for female travelers generate much better forecasts than their counterparts.
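A sketch of the FBPN idea (crisp inputs fuzzified through triangular membership functions before being fed to a back-propagation network), using invented commuter records and scikit-learn's MLP; the actual variables, membership functions and Taipei survey data of the thesis are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def triangular_memberships(x, low, mid, high):
    """Degrees of membership of x in three triangular fuzzy sets spanning [low, high]."""
    x = np.asarray(x, float)
    left = np.clip((mid - x) / (mid - low), 0, 1)
    right = np.clip((x - mid) / (high - mid), 0, 1)
    middle = 1 - left - right
    return np.column_stack([left, np.clip(middle, 0, 1), right])

# Hypothetical commuter records: age, income, household size -> chained trips
rng = np.random.default_rng(12)
age = rng.uniform(20, 65, 400)
income = rng.uniform(20, 120, 400)
hh = rng.integers(1, 6, 400)
chained_trips = income / 40 + hh / 2 + 0.02 * age + rng.normal(0, 0.3, 400)

# Fuzzify each crisp input before feeding the back-propagation network
X = np.hstack([triangular_memberships(age, 20, 40, 65),
               triangular_memberships(income, 20, 60, 120),
               triangular_memberships(hh, 1, 3, 5)])
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
net.fit(X, chained_trips)
print("R^2 on training data:", round(net.score(X, chained_trips), 3))
```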
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Fwu Sheng, and 李福生. "APPLICATION OF AUTOREGRESSIVE-INTEGRATED-MOVING AVERAGE TRANSFER FUNCTION MODEL TO CUSTOMER SHORT TERM LOAD FORECASTING." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/75640237149511371677.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Institute of Electrical Engineering, ROC year 83.
Short-term load forecasting plays an important role in electric power system operation and planning. An accurate load forecast not only reduces the generation cost in a power system but also provides a sound basis for effective operation. In this thesis, the Box-Jenkins transfer function model is applied to short-term load forecasting, taking the weather-load relationship into account. For four customer classes in the Taipower system (residential, commercial, institutional and industrial load), summer transfer function models have been derived to produce short-term load forecasts over one week. To demonstrate the effectiveness of the proposed method, this thesis compares the results of the transfer function model with those of a univariate ARIMA model, and the accuracy of the transfer function model's weekend and workday load forecasts is thoroughly investigated. To improve forecast accuracy, the temperature effect is included in the transfer function. Based on the short-term load forecasts for the different customer classes, it is concluded that the transfer function model achieves better forecast accuracy than the ARIMA model by considering the causality between power consumption and temperature.
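A rough modern analogue of the transfer function approach, with temperature entering a seasonal ARIMA of the load as an exogenous input (statsmodels SARIMAX), on invented hourly data; the Box-Jenkins identification steps and the Taipower customer-class series are not reproduced.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical hourly load for one customer class plus temperature
rng = np.random.default_rng(13)
hours = pd.date_range("2024-07-01", periods=24 * 28, freq="h")
temp = 28 + 5 * np.sin(2 * np.pi * (hours.hour - 15) / 24) + rng.normal(0, 1, len(hours))
load = 50 + 1.8 * temp + 10 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 2, len(hours))
df = pd.DataFrame({"load": load, "temp": temp}, index=hours)

# Temperature enters as an exogenous input to a seasonal ARIMA of the load,
# which is the closest off-the-shelf analogue of the transfer function idea.
fit = SARIMAX(df["load"][:-24], exog=df["temp"][:-24],
              order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
forecast = fit.forecast(steps=24, exog=df["temp"][-24:])     # next-day forecast
mape = 100 * np.mean(np.abs((df["load"][-24:] - forecast) / df["load"][-24:]))
print("next-day MAPE (%):", round(float(mape), 2))
```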
APA, Harvard, Vancouver, ISO, and other styles
49

Retto, Gui Duarte Diniz de Abreu Bragança. "Forecasting stock-return volatility in the time-frequency domain." Master's thesis, 2018. http://hdl.handle.net/10400.14/29407.

Full text
Abstract:
This research focuses on the generalized autoregressive conditional heteroskedasticity (GARCH) model. The main sample uses daily split-adjusted and dividend-adjusted log-return data for the S&P500 index from 1990 to 2008, with an out-of-sample window from 2001 until the end of the sample. The main goal is to analyze the performance of the model's forecasts in the time-frequency domain and then to compare them with results in a time-domain scenario. For the time-frequency analysis, this research uses wavelet techniques to decompose the original S&P500 time series into different frequency bands, each of them originally set in the time domain. Ultimately, the aim is to see whether the wavelet decomposition improves the performance of volatility forecasting/modelling, judged by the quasi-likelihood forecasting losses (QL) as well as the mean squared forecasting error ratios (MSFE). Although the wavelet decomposition helps capture hidden periodic components of the original time series, the frequency-domain results in terms of loss functions (QL and MSFE) do not outperform the original time-domain result for any given frequency. Nevertheless, most of the information about future volatility is captured in a few frequencies of the S&P500 time series, especially in the high-frequency part of the spectrum, representing very short investment horizons.
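The two loss measures used to judge the variance forecasts can be written compactly as below; the series here are invented and only illustrate how QL and MSFE ratios would be compared across a plain and a wavelet-based forecast.

```python
import numpy as np

def ql_loss(realized_var, forecast_var):
    """Quasi-likelihood loss per observation: QL = r/h - ln(r/h) - 1, which is
    minimised (at 0) when the variance forecast h equals the realized proxy r."""
    ratio = np.asarray(realized_var, float) / np.asarray(forecast_var, float)
    return ratio - np.log(ratio) - 1.0

def msfe(realized_var, forecast_var):
    """Mean squared forecast error of the variance forecasts."""
    return np.mean((np.asarray(realized_var) - np.asarray(forecast_var)) ** 2)

# Example: compare a plain GARCH forecast with a wavelet-based one on fake data
rng = np.random.default_rng(14)
realized = 0.0004 * (1 + 0.3 * rng.standard_normal(250) ** 2)
garch_fc = realized * np.exp(0.2 * rng.standard_normal(250))
wavelet_fc = realized * np.exp(0.15 * rng.standard_normal(250))
print("mean QL (GARCH, wavelet):",
      ql_loss(realized, garch_fc).mean(), ql_loss(realized, wavelet_fc).mean())
print("MSFE ratio (wavelet / GARCH):", msfe(realized, wavelet_fc) / msfe(realized, garch_fc))
```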
APA, Harvard, Vancouver, ISO, and other styles
50

Chung, Edwin. "A proposed intelligent bandwidth management system based on Turksen's Fuzzy Function approach using reinforcement learning forecasting." 2005. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=369969&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
