Dissertations on the topic "Flood forecasting Statistical methods"

To view other types of publications on this topic, follow the link: Flood forecasting Statistical methods.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for research on the topic "Flood forecasting Statistical methods".

Next to each work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, where these are available in the record metadata.

Browse dissertations across a wide range of disciplines and compile your bibliography correctly.

1

何添賢 and Tim Yin Timothy Ho. "Forecasting with smoothing techniques for inventory control." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B42574286.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Nanzad, Bolorchimeg. "EVALUATION OF STATISTICAL METHODS FOR MODELING HISTORICAL RESOURCE PRODUCTION AND FORECASTING." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2192.

Full text of the source
Abstract:
This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas a logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality-of-fit correspondence with historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping", wherein overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as in the conventional multicyclic Hubbert model, and every two neighboring cycles are connected by a transition. The transition indicates the shift from one cycle to the next and is described as a weighted co-addition of the two neighboring cycles, determined by three parameters: transition year, transition width, and a γ weighting parameter. The cycle-jumping method provides a superior model compared to the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically, better capturing the effects of the technological transitions and socioeconomic factors that shape historical resource production by explicitly considering the form of the transitions between production cycles.
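As a point of reference for the conventional curve-fitting baseline described above, here is a minimal sketch of fitting a Hubbert curve (the derivative of a logistic cumulative-production curve) to a synthetic production series; the data, parameter values and starting guesses are invented for illustration, not taken from the thesis.

```python
# Minimal sketch: fit a Hubbert curve to a synthetic production history.
# All numbers are illustrative; nothing here comes from the thesis data.
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, urr, k, t_peak):
    """Hubbert curve: derivative of a logistic cumulative-production curve."""
    e = np.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

years = np.arange(1900, 2020, dtype=float)
rng = np.random.default_rng(0)
production = hubbert(years, 5000.0, 0.08, 1995.0)    # "true" history
production += rng.normal(0.0, 2.0, size=years.size)  # observation noise

# Fit the three Hubbert parameters to the noisy series.
(urr_hat, k_hat, t_hat), _ = curve_fit(
    hubbert, years, production, p0=(4000.0, 0.05, 1990.0)
)
print(f"estimated URR ~ {urr_hat:.0f}, peak year ~ {t_hat:.0f}")
```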
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Leung, Caleb Chee Shan. "Time series modelling of birth data." Thesis, Canberra, ACT : The Australian National University, 1995. http://hdl.handle.net/1885/118134.

Full text of the source
Abstract:
Three basic methods, namely cohort component projection, statistical time series methods and structural modelling, are discussed for the purpose of forecasting births, with the main focus on univariate time series methods. A general autoregressive integrated moving average model for birth time series is developed from the mathematical demographic renewal equation for births. The four-stage Box-Jenkins modelling method of model identification, estimation, diagnosis and forecasting is investigated in detail. This method is employed to model and forecast Australian birth time series. Finally, time series forecasts are compared with cohort component projections of births for Australia.
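Since the abstract centres on the four-stage Box-Jenkins method, a minimal sketch of that cycle with statsmodels may help; the series is a synthetic stand-in, and the ARIMA order is assumed rather than identified from ACF/PACF diagnostics as it would be in practice.

```python
# Sketch of the Box-Jenkins stages (identify, estimate, diagnose, forecast).
# The series is synthetic and the (1, 1, 1) order is an assumption.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
births = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 200))  # stand-in birth series

model = ARIMA(births, order=(1, 1, 1))  # identification done beforehand
fitted = model.fit()                    # estimation
print(fitted.summary())                 # diagnosis: inspect residual statistics
print(fitted.forecast(steps=10))        # forecast ten periods ahead
```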
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Silva, Jesús, Naveda Alexa Senior, Guliany Jesús García, Núñez William Niebles, and Palma Hugo Hernández. "Forecasting Electric Load Demand through Advanced Statistical Techniques." Institute of Physics Publishing, 2020. http://hdl.handle.net/10757/652142.

Full text of the source
Abstract:
Traditional forecasting models have been widely used for decision-making in production, finance and energy. Such is the case of the ARIMA models, developed in the 1970s by George Box and Gwilym Jenkins [1], which incorporate characteristics of past values of the same series, according to their autocorrelation. This work compares advanced statistical methods for determining the demand for electricity in Colombia, including SARIMA, econometric and Bayesian methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Kwok, Sai-man Simon, and 郭世民. "Statistical inference of some financial time series models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B36885654.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Davydenko, Andrey. "Integration of judgmental and statistical approaches for demand forecasting : models and methods." Thesis, Lancaster University, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.655734.

Full text of the source
Abstract:
The need for the combined use of judgmental and statistical approaches in forecasting is caused by the fact that neither of these approaches by itself can ensure the desired quality of forecasts. The topic of integrating forecasting methods has long been addressed in the literature. However, due to the specific nature of demand data, existing solutions in this area often cannot be efficiently applied in demand forecasting. The aim of the research is to develop efficient models and methods which better correspond to realistic problem definitions in the context of demand forecasting. The first question that requires resolution is measuring the quality of demand forecasts. A critical analysis of existing error measures has shown that they are not always suitable for demand data due to their statistical properties. To provide a more robust and interpretable indication of forecasting performance, the use of an enhanced statistic is proposed. One area of the research relates to the correction of judgmental forecasts. Since judgmental forecasts are inherently affected by cognitive biases, special means are required for producing an adequate probabilistic representation of future demand. Alongside the analysis of independent judgmental forecasts, the research has examined the statistical properties of judgmental adjustments to statistically generated forecasts. Empirical analysis with real-world datasets shows that classical assumptions do not hold true and therefore standard procedures and tests cannot be correctly applied. Therefore more flexible methods have been designed to ensure more efficient and reliable analysis of judgmental forecasts. The results from the proposed techniques make it possible i) to reveal and eliminate systematic errors, and ii) to adequately evaluate the uncertainty associated with judgmental forecasts. Another area of research has focused on using prior judgmental information as an input into statistical modelling, thereby obtaining consistent forecasts using both expert knowledge and the latest observations. The proposed approach here is based on constructing a model with a combined dataset where available actual values and expert forecasts are described by means of corresponding regression equations. This allows the use of judgmental information in order to derive the prior characteristics of a data generation process. Model estimation is done using Bayesian inference and iterative algorithms, which make it possible to use sufficiently flexible model specifications. Analysis based on real data has shown that the approach and the proposed models can be easily and efficiently applied in practice. In summary, the contribution of the thesis is as follows. i) A number of previously unstudied effects are identified that can potentially lead to misinterpretation of measurement results obtained with the use of various well-known accuracy measures including MAPE, MdAPE, GMRAE, and MASE. ii) A new general error measure with improved statistical properties is proposed to overcome some imperfections of existing error measures. iii) New models and methods for efficient processing of independent point judgmental forecasts and judgmental adjustments to statistical forecasts are proposed based on Bayesian numerical analysis. iv) A new approach is proposed for the efficient incorporation of judgment into a statistical model of process dynamics.
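As a loose illustration of the general idea of treating judgmental information as a prior that observed data then update, here is a toy conjugate normal-normal calculation; it is a simplified sketch, not the thesis's actual models, and all numbers are invented.

```python
# Toy sketch: combine an expert's prior forecast with observed demand via a
# conjugate normal-normal update. Not the thesis's models; numbers invented.
import numpy as np

prior_mean, prior_var = 120.0, 15.0**2               # expert forecast, spread
observations = np.array([101.0, 98.0, 110.0, 93.0])  # hypothetical demand
obs_var = 10.0**2                                    # assumed noise variance

n = observations.size
posterior_precision = 1.0 / prior_var + n / obs_var
posterior_mean = (
    prior_mean / prior_var + observations.sum() / obs_var
) / posterior_precision
posterior_sd = posterior_precision**-0.5
print(f"posterior mean {posterior_mean:.1f}, sd {posterior_sd:.1f}")
```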
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Vasiljeva, Polina. "Combining Unsupervised and Supervised Statistical Learning Methods for Currency Exchange Rate Forecasting." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190984.

Full text of the source
Abstract:
In this thesis we revisit the challenging problem of forecasting currency exchange rates. We combine machine learning methods such as agglomerative hierarchical clustering and random forest to construct a two-step approach for predicting movements in currency exchange prices of the Swedish krona and the US dollar. We use a data set with over 200 predictors comprising different financial and macro-economic time series and their transformations. We perform forecasting for one week ahead with different parameterizations and find a hit rate of on average 53%, with some of the parameterizations yielding hit rates as high as 60%. However, there is no clear indication that there exists a combination of the methods and parameters that outperforms all of the tested cases. In addition, our results indicate that the two-step approach is sensitive to changes in the training set. This work has been conducted at the Third Swedish National Pension Fund (AP3) and KTH Royal Institute of Technology.
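One plausible reading of the two-step approach, sketched with scikit-learn on synthetic data; the thesis's actual features, clustering target and parameterizations may differ.

```python
# Sketch of a clustering-then-forest pipeline in the spirit described above.
# Synthetic data; the thesis's exact two-step design may differ.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 200))           # 500 weeks, 200 predictors
y = (rng.random(500) > 0.5).astype(int)   # up/down movement of SEK/USD

# Step 1: agglomerative clustering of the predictors (columns of X),
# keeping one averaged representative per cluster to reduce redundancy.
labels = AgglomerativeClustering(n_clusters=20).fit_predict(X.T)
reps = np.column_stack([X[:, labels == c].mean(axis=1) for c in range(20)])

# Step 2: random forest on the cluster representatives.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(reps[:-52], y[:-52])              # train on all but the last year
print(f"hit rate, held-out year: {clf.score(reps[-52:], y[-52:]):.2f}")
```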
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Du, Toit Cornel. "Non-parametric volatility measurements and volatility forecasting models." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50401.

Full text of the source
Abstract:
Assignment (MComm)--Stellenbosch University, 2005.
ENGLISH ABSTRACT: Volatility was originally seen to be constant and deterministic, but it was later realised that return series are non-stationary. Owing to this non-stationary nature of returns, there were no reliable ex-post volatility measurements. Subsequently, researchers focussed on ex-ante volatility models. It was only then realised that before good volatility models can be created, reliable ex-post volatility measurements need to be defined. In this study we examine non-parametric ex-post volatility measurements in order to obtain approximations of the variances of non-stationary return series. A detailed mathematical derivation and discussion of the already developed volatility measurements, in particular the realised volatility and DST measurements, is given. In theory, the higher the sampling frequency of returns, the more accurate the measurements are. The volatility measurements referred to above, however, all have shortcomings: realised volatility fails if the sampling frequency becomes too high, owing to microstructure effects, while the DST measurement cannot handle changing instantaneous volatility. In this study we introduce a new volatility measurement, termed microstructure realised volatility, that overcomes these shortcomings. This measurement, as with realised volatility, is based on quadratic variation theory, but the underlying return model is more realistic.
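The realised volatility measurement discussed above has a very compact definition: the sum of squared intraday returns, motivated by quadratic variation theory. A small sketch on simulated prices, with illustrative sampling-frequency choices:

```python
# Sketch: realised volatility as a sum of squared intraday returns.
# Prices are simulated; the sampling steps are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
log_price = np.cumsum(rng.normal(0.0, 0.001, 390))  # one day of 1-min prices

def realised_volatility(log_prices, step=1):
    """Sum of squared log-returns sampled every `step` observations."""
    returns = np.diff(log_prices[::step])
    return float(np.sum(returns**2))

print(realised_volatility(log_price, step=1))  # highest frequency
print(realised_volatility(log_price, step=5))  # coarser sampling; in real
# data this trades accuracy against microstructure effects
```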
Styles: APA, Harvard, Vancouver, ISO, etc.
9

莫正華 and Ching-wah Mok. "A comparison of two approaches to time series forecasting." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31977431.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Pharasi, Sid. "Development of statistical downscaling methods for the daily precipitation process at a local site." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99786.

Full text of the source
Abstract:
Over the past decade, statistical procedures have been employed to downscale the outputs from global climate models (GCM) to assess the potential impacts of climate change and variability on the hydrological regime. These procedures are based on the empirical relationships between large-scale atmospheric predictor variables and local surface parameters such as precipitation and temperature. This research is motivated by the recognized lack of a comprehensive yet physically and statistically significant downscaling methodology for daily precipitation at a local site. The primary objectives are to move beyond the 'black box' approaches currently employed within the downscaling community, and develop improved statistical downscaling models that could outperform both raw GCM output and the current standard: the SDSM method. In addition, the downscaling methods could provide a more robust physical interpretation of the relationships between large-scale predictor climate variables and the daily precipitation characteristics at a local site.
The first component of this thesis consists of developing linear regression based downscaling models to predict both the occurrence and intensity of daily precipitation at a local site using stepwise, weighted least squares, and robust regression methods. The performance of these models was assessed using daily precipitation and NCEP re-analysis climate data available at Dorval Airport in Quebec for the 1961-1990 period. It was found that the proposed models could describe more accurately the statistical and physical properties of the local daily precipitation process as compared to the CGCM1 model. Further, the stepwise model outperforms the SDSM model for seven months of the year and produces markedly fewer outliers than the latter, particularly for the winter and spring months. These results highlight the necessity of downscaling precipitation for a local site because of the unreliability of the large-scale raw CGCM1 output, and demonstrate the comparative performance of the proposed stepwise model as compared with the SDSM model in reproducing both the statistical and physical properties of the observed daily rainfall series at Dorval.
In the second part of the thesis, a new downscaling methodology based on principal component regression is developed to predict both the occurrence and amounts of the daily precipitation series at a local site. The principal component analysis created statistically and physically meaningful groupings of the NCEP predictor variables which explained 90% of the total variance. All models formulated outperformed the SDSM model in the description of the statistical properties of the precipitation series, as well as reproduced 4 out of 6 physical indices more accurately than the SDSM model, except for the summer season. Most importantly, this analysis yields a single, parsimonious model: a non-redundant model, not stratified by month or season, with a single set of parameters that can predict both precipitation occurrence and intensity for any season of the year.
The third component of the research uses covariance structural modeling to ascertain the best predictors within the principal components that were developed previously. Best-fit models with significant paths are generated for the winter and summer seasons via an iterative process. The direct and indirect effects of the variables left in the final models indicate that for either season, three main predictors exhibit direct effects on the daily precipitation amounts: the meridional velocity at the 850 hPa level, the vorticity at the 500 hPa level, and the specific humidity at the 500 hPa level. Each of these variables is heavily loaded onto the first three principal components respectively. Further, a key fact emerges: from season to season, the same seven significant large-scale NCEP predictors exhibit a similar model structure when the daily precipitation amounts at Dorval Airport were used as a dependent variable. This fact indicated that the covariance structural model was physically more consistent than the stepwise regression one, since different model structures with different sets of significant variables could be identified when a stepwise procedure is employed.
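A compact sketch of the occurrence/amount downscaling structure described above: PCA-reduced predictors feed a logistic model for wet/dry days and a linear model for wet-day amounts. All data are synthetic, and the component counts are placeholders.

```python
# Sketch of two-stage statistical downscaling: PCA of large-scale predictors,
# logistic regression for occurrence, linear regression for wet-day amounts.
# All data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)
predictors = rng.normal(size=(3000, 25))        # daily NCEP-like predictors
wet = (rng.random(3000) < 0.3).astype(int)      # observed wet/dry indicator
amounts = np.where(wet == 1, rng.gamma(2.0, 4.0, 3000), 0.0)

pcs = PCA(n_components=5).fit_transform(predictors)  # reduced predictor set

occurrence = LogisticRegression().fit(pcs, wet)
amount = LinearRegression().fit(pcs[wet == 1], amounts[wet == 1])

p_wet = occurrence.predict_proba(pcs)[:, 1]
expected_rain = p_wet * np.clip(amount.predict(pcs), 0.0, None)
print(expected_rain[:5])
```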
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Haddad, Khaled. "Design flood estimation for ungauged catchments in Victoria: ordinary & generalised least squares methods compared." View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/30369.

Full text of the source
Abstract:
Thesis (M.Eng. (Hons.)) -- University of Western Sydney, 2008.
A thesis submitted towards the degree of Master of Engineering (Honours) in the University of Western Sydney, College of Health and Science, School of Engineering. Includes bibliographical references.
Styles: APA, Harvard, Vancouver, ISO, etc.
12

梁桂鏈 and Kwai-lin Leung. "An experiment with turning point forecasts using Hong Kong time series data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B31975975.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Li, Siu-hang, and 李兆恆. "Modeling mortality assumptions in actuarial science." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30289622.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Harrington, Robert P. "Forecasting corporate performance." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54515.

Full text of the source
Abstract:
For the past twenty years, the usefulness of accounting information has been emphasized. In 1966 the American Accounting Association in its Statement of Basic Accounting Theory asserted that usefulness is the primary purpose of external financial reports. In 1978 the Statement of Financial Accounting Concepts, No. 1 affirmed the usefulness criterion. "Financial reporting should provide information that is useful to present and potential investors and creditors and other users..." Information is useful if it facilitates decision making. Moreover, all decisions are future-oriented; they are based on a prognosis of future events. The objective of this research, therefore, is to examine some factors that affect the decision maker's ability to use financial information to make good predictions and thereby good decisions. There are two major purposes of the study. The first is to gain insight into the amount of increase in prediction accuracy that is expected to be achieved when a model replaces the human decision-maker in the selection of cues. The second major purpose is to examine the information overload phenomenon to provide research evidence to determine the point at which additional information may contaminate prediction accuracy. The research methodology is based on the lens model developed by Egon Brunswik in 1952. Multiple linear regression equations are used to capture the participants' models, and correlation statistics are used to measure prediction accuracy.
Ph. D.
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Boyer, Christopher A. (Christopher Andrew). "Statistical methods for forecasting and estimating passenger willingness-to-pay in airline revenue management." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61191.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010.
Page 170 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 167-169).
The emergence of less restricted fare structures in the airline industry reduced the capability of airlines to segment demand through restrictions such as Saturday night minimum stay, advance purchase, non-refundability, and cancellation fees. As a result, new forecasting techniques such as Hybrid Forecasting and optimization methods such as Fare Adjustment were developed to account for passenger willingness-to-pay. This thesis explores statistical methods for estimating sell-up, or the likelihood of a passenger to purchase a higher fare class than they originally intended, based solely on historical booking data available in revenue management databases. Due to the inherent sparseness of sell-up data over the booking period, sell-up estimation is often difficult to perform on a per-market basis. On the other hand, estimating sell-up over an entire airline network creates estimates that are too broad and over-generalized. We apply the K-Means clustering algorithm to cluster markets with similar sell-up estimates in an attempt to address this problem, creating a middle ground between system-wide and per-market sell-up estimation. This thesis also formally introduces a new regression-based forecasting method known as Rational Choice. Rational Choice Forecasting creates passenger type categories based on potential willingness-to-pay levels and the lowest open fare class. Using this information, sell-up is accounted for within the passenger type categories, making Rational Choice Forecasting less complex than Hybrid Forecasting. This thesis uses the Passenger Origin-Destination Simulator to analyze the impact of these forecasting and sell-up methods in a controlled, competitive airline environment. The simulation results indicate that determining an appropriate level of market sell-up aggregation through clustering both increases revenue and generates sell-up estimates with a sufficient number of observations. In addition, the findings show that Hybrid Forecasting creates aggressive forecasts that result in more low fare class closures, leaving room for not only sell-up, but for recapture and spill-in passengers in higher fare classes. On the contrary, Rational Choice Forecasting, while simpler than Hybrid Forecasting with sell-up estimation, consistently generates lower revenues than Hybrid Forecasting (but still better than standard pick-up forecasting). To gain a better understanding of why different markets are grouped into different clusters, this thesis uses regression analysis to determine the relationship between a market's characteristics and its estimated sell-up rate. These results indicate that several market factors, in addition to the actual historical bookings, may predict to some degree passenger willingness-to-pay within a market. Consequently, this research illustrates the importance of passenger willingness-to-pay estimation and its relationship to forecasting in airline revenue management.
by Christopher A. Boyer.
S.M.
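The K-Means step described in the abstract above is straightforward to sketch: markets are grouped by a few features so that sell-up can then be estimated by pooling bookings within each cluster. The features, sizes and cluster count below are assumptions for illustration.

```python
# Sketch: cluster markets with K-Means to form sell-up estimation groups,
# a middle ground between per-market and system-wide estimation.
# Features and dimensions are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Rows: markets. Columns: e.g. mean fare, demand volume, raw sell-up rate.
market_features = rng.normal(size=(80, 3))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(market_features)
for c in range(5):
    size = int(np.sum(kmeans.labels_ == c))
    print(f"cluster {c}: {size} markets pooled for sell-up estimation")
```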
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Cheung, Chi-shing Calvin, and 張志成. "Using statistical downscaling to project the future climate of Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/208623.

Full text of the source
Abstract:
Climate in Hong Kong is very likely to be modified due to global climate change. In this study the output of General Circulation Models (GCMs) was statistically downscaled to produce future climate projections for the time periods 2046–2065 and 2081–2100 for Hong Kong. The future climate projections are based on two emission scenarios provided by the Intergovernmental Panel on Climate Change (IPCC). The emission scenarios, A1B (rapid economic growth with balanced energy technology) and B1 (global environmental sustainability), make assumptions on future human development, and the resulting emissions of greenhouse gases. This study established a method to evaluate GCMs for use in statistical downscaling and utilised six GCMs, selected from the 3rd phase of the Coupled Model Intercomparison Project (CMIP3). They were evaluated based upon their performance in simulating past climate in the southeast China region on three aspects: 1) monthly mean temperature; 2) sensitivity to greenhouse gases and 3) climate variability. Three GCMs were selected for statistical downscaling and climate projection in this study. Downscaling was undertaken by relating large scale climate variables, from NCEP/NCAR reanalysis, a gridded data set incorporating observations and climate models, to local scale observations. Temperature, specific humidity and wind speed were downscaled using multiple linear regression methods. Rain occurrence was determined using logistic regression and rainfall volume from a generalised linear model. The resultant statistical models were subsequently applied to future climate projections. Overall, all three GCMs, via statistical downscaling, show that daily average, minimum and maximum temperatures, along with specific humidity, will increase under future climate scenarios. Comparing the model ensemble mean projections with current climate (1981–2010), the annual average temperature in Hong Kong is projected to increase by 1.0 °C (B1) to 1.6 °C (A1B) in 2046–2065, and by 1.4 °C (B1) to 2.2 °C (A1B) in 2081–2100. Furthermore, the projections in this study show an increase of high temperature extremes (daily average temperature ≥ 29.6 °C) by three to four times in 2046–2065 and four to five times in 2081–2100. The projections of rainfall indicate that annual rainfall will increase in the future. Total annual rainfall is projected to increase by 4.9% (A1B) to 8% (B1) in 2046–2065, and by 8.7% (B1) to 21.5% (A1B) in 2081–2100. However, this change in rainfall is seasonally dependent; summer and autumn exhibit an increase in rainfall whilst spring and winter exhibit decreases. In order to test one possible impact of this change in climate, the downscaled climate variables were used to estimate how outdoor thermal comfort (using the Universal Thermal Comfort Index) might change under future climate scenarios in Hong Kong. Results showed that there will be a shift from 'No Thermal Stress' towards 'Moderate Heat Stress' and 'Strong Heat Stress' during the period 2046–2065, becoming more severe for the later period (2081–2100). The projections of future climate presented in this study will be important when assessing potential climate change impacts, along with adaptation and mitigation options, in Hong Kong.
Doctor of Philosophy (Geography)
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Gado, Djibo Abdouramane. "Exploration of Non-Linear and Non-Stationary Approaches to Statistical Seasonal Forecasting in the Sahel." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35130.

Full text of the source
Abstract:
Water resources management in the Sahel region of West Africa is extremely difficult because of high inter-annual rainfall variability as well as a general reduction of water availability in the region. Observed changes in streamflow directly disturb key socioeconomic activities such as agriculture, which constitutes one of the main survival pillars of the West African population. Seasonal rainfall forecasting is considered as one possible way to increase resilience to climate variability by providing information in advance about the amount of rainfall expected in each upcoming rainy season. Moreover, the availability of reliable information about streamflow magnitude a few months before a rainy season will immensely benefit water users who want to plan their activities. However, since the 1990s, several studies have attempted to evaluate the predictability of Sahelian weather characteristics and develop seasonal rainfall and streamflow forecast models to help stakeholders take better decisions. Unfortunately, two decades later, forecasting is still difficult, and forecasts have limited value for decision-making. It is believed that the low performance in seasonal forecasting is due to the limits of commonly used predictors and forecast approaches for this region. In this study, new seasonal forecasting approaches are developed and new predictors tested in an attempt to predict the seasonal rainfall over the Sirba watershed, located between Niger and Burkina Faso in West Africa. Using combined statistical methods, a pool of 84 predictors with physical links to the West African monsoon and its dynamics was selected, with their optimal lag times. They were first reduced through screening using linear correlation with satellite rainfall over West Africa. Correlation analysis and principal component analysis were used to keep the most predictive principal components. Linear regression was used to generate synthetic forecasts, and the model was assessed to rank the tested predictors. The three best predictors, air temperature (from Pacific Tropical North), sea level pressure (from Atlantic Tropical South) and relative humidity (from Mediterranean East), were retained and tested as inputs for seasonal rainfall forecasting models. This thesis departs from the stationarity and linearity assumptions used in most seasonal forecasting methods: 1. Two probabilistic non-stationary methods based on change point detection were developed and tested. Each method uses one of the three best predictors. Model M1 allows for changes in model parameters according to annual rainfall magnitude, while M2 allows for changes in model parameters with time. M1 and M2 were compared to the classical linear model with constant parameters (M3) and to the linear model with climatology (M4). The model allowing changes in the predictand-predictor relationship according to rainfall amplitude (M1) and using AirTemp as a predictor was the best model for seasonal rainfall forecasting in the study area. 2. Non-linear models including regression trees, feed-forward neural networks and non-linear principal component analysis were implemented and tested to forecast seasonal rainfall using the same predictors. Forecast performances were compared using coefficients of determination, Nash-Sutcliffe coefficients and hit rate scores.
Non-linear principal component analysis was the best non-linear model (R2: 0.46; Nash: 0.45; HIT: 60.7), while the feed-forward neural networks and regression tree models performed poorly. All the developed rainfall forecasting methods were subsequently used to forecast seasonal annual mean streamflow and maximum monthly streamflow by introducing the forecasted rainfall into a SWAT model of the Sirba watershed, and the results are summarized as follows: 1. Non-stationary models: Models M1 and M2 were compared to models M3 and M4, and the results revealed that model M3 using RHUM as a predictor at a lag time of 8 months was the best method for seasonal annual mean streamflow forecasting, whereas model M1 using air temperature as a predictor at a lag time of 4 months was the best model to predict maximum monthly streamflow in the Sirba watershed. Moreover, the calibrated SWAT model achieved a Nash value of 0.83. 2. Non-linear models: The seasonal rainfall obtained from the non-linear principal component analysis model was disaggregated into daily rainfall using the method of fragments, and then fed into the SWAT hydrological model to produce streamflow. This forecast was fairly acceptable, with a Nash value of 0.58. The evaluation of the level of risk associated with each seasonal forecast was carried out using a simple risk measure: the probability of overtopping of the flood protection dykes in Niamey, Niger. A HEC-RAS hydrodynamic model of the Niger River around Niamey was developed for the 1980-2014 period, and a copula analysis was used to model the dependence structure of streamflows and predict the distribution of streamflow in Niamey given the predicted streamflow on the Sirba watershed. Finally, the probabilities of overtopping of the flood protection dykes were estimated for each year in the 1980-2014 period. The findings of this study can be used as a guideline to improve the performance of seasonal forecasting in the Sahel. This research clearly confirmed the possibility of rainfall and streamflow forecasting in the Sirba watershed at a seasonal time scale using potential predictors other than sea surface temperature.
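The verification scores quoted above are easy to state explicitly; a short sketch, with placeholder observed and simulated values and one common tercile-based definition of the hit rate:

```python
# Sketch of the skill scores used above: Nash-Sutcliffe and a tercile-based
# hit rate. The obs/sim values are placeholders.
import numpy as np

def nash_sutcliffe(obs, sim):
    """1 minus the ratio of squared error to observed variance."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def hit_rate(obs, sim, bins):
    """Share of cases where forecast and observation fall in the same bin."""
    return float(np.mean(np.digitize(obs, bins) == np.digitize(sim, bins)))

obs = np.array([420.0, 510.0, 380.0, 600.0, 450.0])
sim = np.array([440.0, 480.0, 400.0, 570.0, 470.0])
terciles = np.quantile(obs, [1 / 3, 2 / 3])
print(nash_sutcliffe(obs, sim), hit_rate(obs, sim, terciles))
```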
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Pérez, José Fidel. "Integrating Global and Local Forecasting Resources and Methods for Flood Warning Systems in Central America and Caribbean Region." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7048.

Full text of the source
Abstract:
Hurricanes and tropical storms occur very frequently in the Central American and Caribbean Region (CA&CR). These extreme weather events produce heavy rain and, consequently, extensive flooding. Damages and losses have been estimated to amount to 13.4 billion dollars in the last ten years. Flood Warning Systems (FEWS) are a key preventive strategy to reduce risk. Technological progress is making the resources available to FEWS more viable. In spite of the international support for FEWS and the fast development of ICT, very few countries in the CA&CR have succeeded in developing fully operational warning systems that function in a sustainable manner over a long period of time. There is a disconnect between the community-based systems and the centralized systems, as well as between the National Meteorological Services (NMS) and the National Hydrological Services (NHS), which often work in isolation. The general purpose of this work is to untangle the dysfunction in the way early warning systems are currently done and provide guidelines to integrate flood warning systems at all scales for use in operational forecasting, particularly for countries in the CA&CR. Flood warning can be seen as a set of sub-systems, of which forecasting is only one. A conceptual framework has been proposed to classify flood warning systems using the spatial and temporal scale at which they operate, subdividing them into Global, Regional, National and Local FEWS. In practice, these systems are not operated in an integrated manner. Emerging technology is available to allow the integration of global- and local-scale forecasting resources in the CA&CR. The Tethys Platform has a series of online tools applicable to flood forecasting. A workflow is given for the use of four apps in Tethys for flood forecasting: (i) Stream flow Prediction Tool; (ii) Reservoir operation tool; (iii) Hydro Viewer Hispaniola; and (iv) Flood Map Visualization tool.
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Xie, Yingfu. "Maximum likelihood estimation and forecasting for GARCH, Markov switching, and locally stationary wavelet processes /." Umeå : Dept. of Forest Economics, Swedish University of Agricultural Sciences, 2007. http://epsilon.slu.se/2007107.pdf.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Wong, Ho-ting, and 黃浩霆. "Biometeorological modelling and forecasting of ambulance demand for Hong Kong: a spatio-temporal approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B4775297X.

Full text of the source
Abstract:
The demand for emergency ambulance services in Hong Kong is on the rise. Issues such as climate change, an ageing population, constrained space, and limited resource capacity mean that the present way of meeting service demand by injecting more resources will reach its limit in the near future and is unlikely to be sustainable. There is an urgent need to develop a more realistic forecast model to account for the anticipated demand for emergency ambulance services to enable better strategic planning of resources and more effective logistic arrangement. In this connection, the research objectives of this thesis include the following: 1. To examine relationships between weather and ambulance demand, with specific reference to temperature effects on demographic and admission characteristics of patients. 2. To establish a quantitative model for short-term (1-7 days ahead) forecast of ambulance demand in Hong Kong. 3. To estimate the longer-term demand for ambulance services by sub-areas in Hong Kong, taking into account projected weather and population changes in 2019 and 2036. The research concurs with the findings of other researchers that temperature was the most important weather factor affecting the daily ambulance demand in 2006-2009, accounting for 49% of the demand variance. An even higher demand variance of 74% could be explained among people aged 65 and above. The incorporation of 1-7 day forecast data of the average temperature improved the forecast accuracy of daily ambulance demand on average by 33% in terms of R2 and 11% in terms of root mean square error (RMSE). Moreover, the forecast accuracy could be further improved by as much as 4% for both R2 and RMSE through spatial sub-models. For longer-term demand projection, significant underestimation was observed if changes in the population demographics were not considered. The underestimation of annual ambulance demand for 2019 and 2036 was 16% and 38% respectively. The research has practical and methodological implications. First, the quantitative model for short-term forecast can inform demand in the next few days to enable logistic deployment of ambulance services beforehand, which, in turn, ensures that potential victims can be served in a swift and efficient manner. Second, the longer-term projection of the demand for ambulance services enables better preparation and planning for the expected rise in demand in time and space. Unbudgeted or unnecessary purchases of ambulances can be prevented without compromising preparedness and service quality. Third, the methodology is adaptable and the model can be reconstituted when more accurate projections on weather and population changes become available.
Doctor of Philosophy (Geography)
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Burger, S. (Stephan). "Managing the forecasting function within the fast moving consumer goods industry." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53494.

Full text of the source
Abstract:
Thesis (MBA)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: Forecasting the future has always been one of man's strongest desires. The aim to determine the future has resulted in scientifically based forecasting models of human health, behaviour, economics, weather, etc. The main purpose of forecasting is to reduce the range of uncertainty within which management decisions must be made. Forecasts are only effective if they are utilized by those who have decision-making authority. Forecasts need to be understood and appreciated by decision makers so that they find their way into the management of the firm. Companies still predominantly rely on judgemental forecasting methods, most often on an informal basis. There is a large literature base that points to the numerous biases inherent in judgemental forecasting. Most companies know that their forecasts are incorrect but don't know what to do about it and choose to ignore the issue, hoping that the problem will solve itself. The collaborative forecasting process attempts to use history as a baseline, supplemented with current knowledge about specific trends, events and other items. This approach integrates the knowledge and information that exists internally and externally into a single, more accurate forecast that supports the entire supply chain. Demand forecasting is not just a matter of projecting history into the future. It is important that one person should lead and manage the process, and accountability needs to be established. An audit of the writer's own organization indicated that no formal forecasting process was present. The company's forecasting process was very political, since values were entered just to add up to the required targets. The real gap was never fully understood. Little knowledge existed regarding statistical analysis and forecasting within the marketing department, which is accountable for the forecast. The forecasting method was therefore a top-down approach and never really checked with a bottom-up approach. It was decided to learn more about the new demand planning process prescribed by the head office, and to start implementing the approach. The approach is a form of collaborative approach which aims to involve all stakeholders when generating the forecast, therefore applying a bottom-up approach. Statistical forecasting was applied to see how accurate the output was versus that of the old way of forecasting. The statistical forecasting approach performed better with product groups where little change from previous years existed, while the old way performed better where new activities were planned or known by the marketing team. This indicates that statistical forecasting is very important for creating the starting point or baseline forecast, but requires qualitative input from all stakeholders. Statistical forecasting is therefore not the solution to improved forecasting, but rather part of the solution to create robust forecasts.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Sairam, Nivedita. "Bayesian Approaches for Modelling Flood Damage Processes." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/23083.

Full text of the source
Abstract:
Flood damage processes are influenced by the three components of flood risk: hazard, exposure and vulnerability. In comparison to hazard and exposure, the vulnerability component, though equally important, is often generalized in many flood risk assessments by a simple depth-damage curve. Hence, this thesis developed a robust statistical method to quantify the role of private precaution in reducing the flood vulnerability of households. In Germany, the role of private precaution was found to be very significant in reducing flood damage (11 to 15 thousand euros per household). Also, flood loss models with structure, parameterization and choice of explanatory variables based on expert knowledge and data-driven methods were successful in capturing changes in vulnerability, which makes them suitable for future risk assessments. Due to significant uncertainty in the underlying data and model assumptions, flood loss models always carry uncertainty around their predictions. This thesis develops Bayesian approaches for flood loss modelling, using assumptions regarding damage processes as priors and available empirical data as evidence for updating. Thus, these models provide flood loss predictions as a distribution that accounts for variability in damage processes and uncertainty in model assumptions. The models presented in this thesis are an improvement over the state-of-the-art flood loss models in terms of prediction capability and model applicability. In particular, the choice of the response (Beta) distribution improved the reliability of loss predictions compared to the popular Gaussian or non-parametric distributions; the hierarchical Bayesian approach resulted in an improved parameterization of the common stage-damage functions that replaces empirical data requirements with region- and event-specific expert knowledge, thereby enhancing its predictive capabilities during spatiotemporal transfer.
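A loose sketch of a Beta-likelihood flood loss model in the spirit described above, assuming the PyMC library is available: relative damage in (0, 1) is regressed on water depth through a logit link, with the Beta distribution parameterized by a mean and a precision. Data, priors and structure are illustrative only, not the thesis's models.

```python
# Sketch: Bayesian flood loss model with a Beta likelihood (mean-precision
# parameterization) and a logit link on water depth. Illustrative only.
import numpy as np
import pymc as pm

rng = np.random.default_rng(6)
depth = rng.uniform(0.1, 2.5, 100)                         # water depth (m)
rloss = np.clip(rng.beta(2.0, 5.0, 100), 1e-3, 1 - 1e-3)   # relative loss

with pm.Model():
    a = pm.Normal("a", 0.0, 1.0)                  # intercept on logit scale
    b = pm.Normal("b", 0.0, 1.0)                  # depth effect
    phi = pm.HalfNormal("phi", 10.0)              # precision of the Beta
    mu = pm.math.invlogit(a + b * depth)          # mean relative loss
    pm.Beta("obs", alpha=mu * phi, beta=(1.0 - mu) * phi, observed=rloss)
    idata = pm.sample(1000, tune=1000, chains=2)  # posterior of a, b, phi
```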
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Boulin, Juan Manuel. "Call center demand forecasting : improving sales calls prediction accuracy through the combination of statistical methods and judgmental forecast." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59159.

Full text of the source
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-81).
Call centers are important for developing and maintaining healthy relationships with customers. At Dell, call centers are also at the core of the company's renowned direct model. For sales call centers in particular, the impact of proper operations is reflected not only in long-term relationships with customers, but directly on sales and revenue. Adequate staffing and proper scheduling are key factors for providing an acceptable service level to customers. In order to staff call centers appropriately to satisfy demand while minimizing operating expenses, an accurate forecast of this demand (sales calls) is required. During fiscal year 2009, inaccuracies in consumer sales call volume forecasts translated into approximately $1.1M in unnecessary overtime expenses and $34.5M in lost revenue for Dell. This work evaluates different forecasting techniques and proposes a comprehensive model to predict sales call volume based on the combination of ARIMA models and judgmental forecasting. The proposed methodology improves the accuracy of weekly forecasted call volume from 23% to 46% and of daily volume from 27% to 41%. Further improvements are easily achievable through the adjustment and projection processes introduced herein that rely on contextual information and the expertise of the forecasting team.
by Juan Manuel Boulin.
S.M.
M.B.A.
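The combination step described above can be sketched very simply: a weighted average of the statistical baseline and the judgmental view, scored against actuals. The weight and all numbers below are invented; the thesis's actual adjustment and projection processes are richer.

```python
# Sketch: combine a statistical baseline with a judgmental forecast and
# compare accuracy. Weights and numbers are invented for illustration.
import numpy as np

arima_fc = np.array([1200.0, 1150.0, 1300.0, 1250.0])    # weekly call volume
judgmental = np.array([1400.0, 1200.0, 1350.0, 1200.0])  # forecasters' view
actual = np.array([1320.0, 1180.0, 1330.0, 1230.0])

w = 0.6  # weight on the statistical model, tuned on past accuracy
combined = w * arima_fc + (1.0 - w) * judgmental

def mape(a, f):
    """Mean absolute percentage error."""
    return float(np.mean(np.abs((a - f) / a))) * 100.0

for name, fc in (("ARIMA", arima_fc), ("judgment", judgmental),
                 ("combined", combined)):
    print(f"{name}: MAPE {mape(actual, fc):.1f}%")
```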
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Field, John Jacob. "Surficial processes, channel change, and geological methods of flood-hazard assessment on fluvially dominated alluvial fans in Arizona." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186649.

Full text of the source
Abstract:
A combination of geological and hydraulic techniques represents the most sensible approach to flood hazard analysis on alluvial fans. Hydraulic models efficiently yield predictions of flood depths and velocities, but the assumptions on which the models are based do not lead to accurate portrayals of natural fan processes. Geomorphological mapping, facies mapping, and hydraulic reconstructions of past floods provide data on the location, types, and magnitude of flood hazards, respectively. Geological reconstructions of past floods should be compared with the results of hydraulic modeling before potentially unsound floodplain management decisions are implemented. The controversial Federal Emergency Management Agency procedure for delineating flood-hazard zones underestimated the extent, velocity, and depth of flow during recent floods on two alluvial fans by over 100, 25, and 70 percent, respectively. Flow on the alluvial fans occurs in one or more discontinuous ephemeral stream systems characterized by alternating sheetflood zones and channelized reaches. The importance of sheetflooding is greater on fans closer to the mountain front and with unstable channel banks. Channel diversions on five alluvial fans repeatedly occurred along low channel banks and bends where the greatest amount of overland flow is generated. Channel migration occurs through stream capture, whereby overland flow from the main channel accelerates and directs erosion of adjacent secondary channels. The recurrence interval of major channel shifts is greater than 100 years, but minor changes occurred on all five fans during this century. Small aggrading flows are important because they decrease bank heights and alter the location of greatest overland flow during subsequent floods. The results of this study demonstrate that (1) geological reconstructions of past floods can check the results of hydraulic models, (2) the character of flooding on alluvial fans can vary significantly in the same tectonic and climatic setting due to differences in drainage-basin characteristics, and (3) flood-hazard assessments on alluvial fans must be updated after each flood, because the location and timing of channel diversions can be affected by small floods.
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Mota, Stephen Kopano. "A matched study to determine a conditional logistic model for prediction of business failure in South Africa." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52084.

Повний текст джерела
Анотація:
Thesis (MBA)--Stellenbosch University, 2001
ENGLISH ABSTRACT: The subject of business failure prediction, from an academic point of view, dates back to the turn of the century with the development of a single ratio, the current ratio, as an evaluation of creditworthiness. Subsequent studies have become more complex, using different statistical techniques and more than one variable to predict failure. The challenge in these studies has been to establish a reliable model to predict failure. The aim of this report was to find out which financial factors best predicted failure in the South African environment using a matched study, by refining some elements of the study conducted by Court (1993). The data used were similar to those of Court (1993) and were independently obtained from the Bureau of Financial Analysis of the University of Pretoria. The variables used in the study were computed from this raw data and then entered into the Stata™ statistical software package to run a conditional logistic regression model. As a result of a small sample size and a substantial number of missing variables in the sample, the study did not reveal an accurate indication of the important variables. It was also found that, given the instability and general complexity of conditional logistic regression, the study need not have been a matched study. The recommendation is that future research be done with a larger sample size using the same methodology. It is also recommended that the data include non-financial variables.
AFRIKAANS ABSTRACT: The prediction of business failure as an academic subject dates from the beginning of the previous century, with the development of a single ratio, the current ratio, as a measure of creditworthiness. The application of statistical techniques and the incorporation of multiple variables lent later studies a high degree of complexity. The resulting challenge was to develop a reliable model to predict business failure accurately. The purpose of this report is to indicate which financial factors are most suited to predicting business failure in the South African environment. The report presents the findings of a matched study based on a refinement of certain elements taken from the Court study of 1993. The data used closely resemble those underlying the Court study and were obtained independently from the Bureau of Financial Analysis (University of Pretoria). The variables used in the study were computed from these raw data, entered into the Stata™ statistical software package and processed with a conditional logistic regression model. Because of a small sample and a considerable number of missing variables within it, the study could not identify an important variable with accuracy. It was also found that the instability and general complexity of the conditional logistic regression model rendered the use of a matched study unnecessary. The recommendation is that further research apply the same methodology to a larger sample. It is also recommended that non-financial variables be included in the data.
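For readers unfamiliar with the matched design, a minimal sketch of a conditional logistic regression on matched failed/non-failed pairs might look as follows (the ratios and data are simulated for illustration; the thesis itself used Stata):

    import numpy as np
    import pandas as pd
    from statsmodels.discrete.conditional_models import ConditionalLogit

    # Simulated matched pairs: each failed firm matched to a healthy firm
    rng = np.random.default_rng(1)
    n_pairs = 60
    df = pd.DataFrame({"pair": np.repeat(np.arange(n_pairs), 2),
                       "failed": np.tile([1, 0], n_pairs)})
    # Illustrative financial ratios; failure lowers liquidity, raises leverage
    df["current_ratio"] = rng.normal(1.5, 0.4, 2 * n_pairs) - 0.3 * df["failed"]
    df["debt_ratio"] = rng.normal(0.45, 0.12, 2 * n_pairs) + 0.1 * df["failed"]

    # Conditioning on the pair removes pair-level nuisance parameters
    model = ConditionalLogit(df["failed"], df[["current_ratio", "debt_ratio"]],
                             groups=df["pair"])
    print(model.fit().summary())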
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Falk, Anton, and Daniel Holmgren. "Sales Forecasting by Assembly of Multiple Machine Learning Methods : A stacking approach to supervised machine learning." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184317.

Повний текст джерела
Анотація:
Today, digitalization is a key factor for businesses to enhance growth and to gain advantages and insight in their operations. Digitalization processes now play key roles both in planning operations and in understanding customers, and companies are spending more and more resources in these fields to gain critical insights and enhance growth. The fast-food industry is no exception: restaurants need to be highly flexible and agile in their work. With this comes an immense demand for knowledge and insights to help restaurants plan their daily operations, and a great need for organizations to continuously adopt new technological solutions into their existing processes. Well-implemented machine learning solutions in combination with feature engineering are likely to bring value into the existing processes. Sales forecasting, the main field of study in this thesis, has a vital role in planning a fast-food restaurant's operations, both for budgeting and for staffing. The term fast food speaks for itself: with it comes a commitment to provide high-quality food and rapid service to customers. Understaffing risks compromising either the quality of the food or the service, while overstaffing leads to low overall productivity. Generating highly reliable sales forecasts is thus vital to maximize profits and minimize operational risk. SARIMA, XGBoost and Random Forest were evaluated on training data consisting of sales numbers, business hours and categorical variables describing date and month. These models worked as base learners whose sales predictions on a specific dataset were used as training data for a Support Vector Regression (SVR) model. This stacking approach shows sufficient results, with a significant gain in prediction accuracy for all investigated restaurants on a 6-week aggregated timeline compared to the existing solution.
Digitalisation today plays a key role in creating growth and insight for companies; these insights provide advantages both in planning and in understanding their customers. This is an area in which companies invest more and more resources to build a greater understanding of their business and thereby increase growth. The fast-food industry is no exception, as restaurants need a high degree of flexibility in their ways of working to meet customer demand. This creates a large demand for knowledge and insights to help them plan their daily work, and there is a great need for companies to continuously implement new technical solutions in existing processes. With well-implemented machine learning solutions, combined with engineering more informative variables from existing data, firms can add value to already existing processes. Sales forecasting, the main field of this study, plays an important role in operational planning within the fast-food industry, both for budgeting and for staffing. The name fast food speaks for itself; it carries a promise to the customer to provide high-quality food and fast service. Understaffing risks breaking one of these promises, either through substandard food quality or failure to deliver fast service, while overstaffing instead leads to inefficient use of resources. Generating highly reliable forecasts is therefore crucial for maximizing profit and minimizing operational risk. SARIMA, XGBoost and Random Forest were evaluated on a training set consisting of sales figures, hour of the day and categorical variables describing day and month. These models act as base models whose predictions on a specific test set are used as training data for a support vector regression (SVR) model. Stacking machine learning models for this type of problem showed satisfactory results, with a significant improvement in prediction accuracy over a 6-week aggregated period compared with the existing model.
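A condensed sketch of the stacking architecture described above, using scikit-learn (SARIMA forecasts cannot be dropped directly into a scikit-learn ensemble, so they would enter as an extra feature column; all data below are simulated):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.svm import SVR
    from xgboost import XGBRegressor  # assumes the xgboost package is installed

    # Simulated design matrix: hour of day, weekday/month dummies, etc.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(600, 6))
    y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=600)

    # Base learners feed out-of-fold predictions to an SVR meta-learner
    stack = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("xgb", XGBRegressor(n_estimators=200))],
        final_estimator=SVR(kernel="rbf"),
        cv=5)
    stack.fit(X, y)
    print(stack.predict(X[:3]))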
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Winn, David. "An analysis of neural networks and time series techniques for demand forecasting." Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1004362.

Повний текст джерела
Анотація:
This research examines the plausibility of developing demand forecasting techniques which are consistently and accurately able to predict demand. Time Series Techniques and Artificial Neural Networks are both investigated. Deodorant sales in South Africa are specifically studied in this thesis. Marketing techniques which are used to influence consumer buyer behaviour are considered, and these factors are integrated into the forecasting models wherever possible. The results of this research suggest that Artificial Neural Networks can be developed which consistently outperform industry forecasting targets as well as Time Series forecasts, suggesting that producers could reduce costs by adopting this more effective method.
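As a toy version of the comparison, the sketch below pits a small neural network against a naive seasonal forecast on simulated monthly sales (network size and data are illustrative only, not the thesis's deodorant dataset):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_percentage_error

    # Simulated monthly sales with trend and annual seasonality
    rng = np.random.default_rng(3)
    t = np.arange(132)
    sales = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 132)

    # Inputs: last month's sales plus a seasonal index for the target month
    X = np.column_stack([sales[:-1], np.sin(2 * np.pi * t[1:] / 12)])
    y = sales[1:]
    X_tr, X_te, y_tr, y_te = X[:108], X[108:], y[:108], y[108:]

    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    naive = sales[109 - 12:132 - 12]  # same month, previous year
    print("ANN MAPE:  ", mean_absolute_percentage_error(y_te, ann.predict(X_te)))
    print("naive MAPE:", mean_absolute_percentage_error(y_te, naive))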
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Gogonel, Adriana Geanina. "Statistical Post-Processing Methods And Their Implementation On The Ensemble Prediction Systems For Forecasting Temperature In The Use Of The French Electric Consumption." Phd thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00798576.

Повний текст джерела
Анотація:
The objective of this thesis is to study new statistical methods to correct temperature predictions that may be implemented on the ensemble prediction system (EPS) of Météo-France, so as to improve its use for electric system management at EDF France. The EPS of Météo-France we are working on contains 51 members (forecasts per time step) and gives temperature predictions for 14 days. The thesis contains three parts: in the first we present the EPS, implement two statistical methods improving the accuracy or the spread of the EPS, and introduce criteria for comparing results. In the second part we introduce extreme value theory and the mixture models we use to combine the model built in the first part with models for fitting the distribution tails. In the third part we introduce quantile regression as another way of studying the tails of the distribution.
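The third part's idea — modelling the tails directly by regressing observed temperature on the ensemble signal at extreme quantiles — can be sketched as follows (simulated data; the real work uses the 51-member Météo-France EPS):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated pairs of ensemble-mean forecast and observed temperature,
    # with heavy-tailed forecast errors to make the tails interesting
    rng = np.random.default_rng(4)
    df = pd.DataFrame({"ens_mean": rng.normal(10, 5, 1000)})
    df["obs"] = df["ens_mean"] + rng.standard_t(4, 1000)

    # Separate regressions for the 5th and 95th conditional percentiles
    for q in (0.05, 0.95):
        fit = smf.quantreg("obs ~ ens_mean", df).fit(q=q)
        print(q, fit.params.to_dict())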
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Knoebel, Bruce R. "An investigation of a bivariate distribution approach to modeling diameter distributions at two points in time." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54310.

Повний текст джерела
Анотація:
A diameter distribution prediction procedure for single-species stands was developed based on the bivariate SB distribution model. The approach not only accounted for and described the relationships between initial and future diameters and their distributions, but also assumed future diameter given initial diameter to be a random variable. While this method was the most theoretically correct, comparable procedures based on the definition of growth equations, which assumed future diameter given initial diameter to be a constant, sometimes provided somewhat better results. Both approaches performed as well as, and in some cases better than, the established methods of diameter distribution prediction such as parameter recovery, percentile prediction, and parameter prediction. The approaches based on the growth equations are intuitively and biologically appealing in that the future distribution is determined from an initial distribution and a specified initial-future diameter relationship. In most cases a linear growth equation was appropriate. While this result simplified some procedures, it also implied that the initial and future diameter distributions differed only in location and scale, not in shape. This is a somewhat unrealistic assumption; however, due to the relatively short growth periods and the alterations in stand structure and growth caused by the repeated thinnings, the data did not provide evidence against the linear growth equation assumption. The growth equation procedures not only required the initial and future diameter distributions to be of a particular form, but they also restricted the initial-future diameter relationship to be of a particular form. The individual tree model, which required no distributional assumptions or restrictions on the growth equation, proved to be the better approach in terms of predicting future stand tables, as it performed better than all of the distribution-based approaches. For the bivariate distribution and for the direct fit, parameter recovery, parameter prediction and percentile prediction techniques, implied diameter relationships were defined. Evaluations revealed that these equations were both accurate and precise, indicating that accurate specification of the initial distribution and of the initial-future diameter relationship is central to predicting the future diameter distribution.
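To make the two modelling choices concrete, the sketch below draws initial diameters from a Johnson SB marginal and then applies a linear growth equation in which future diameter given initial diameter is random rather than constant (all parameter values are invented for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # SB marginal for initial diameters: bounded support [loc, loc + scale]
    initial = stats.johnsonsb(a=0.4, b=1.3, loc=5.0, scale=30.0)
    d0 = initial.rvs(size=2000, random_state=rng)

    # Linear growth equation with a random error term, so D1 | D0 is a
    # random variable (cf. the bivariate approach) rather than a constant
    d1 = 1.5 + 1.05 * d0 + rng.normal(0.0, 0.8, size=d0.size)
    print(f"initial mean {d0.mean():.1f} cm, future mean {d1.mean():.1f} cm")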
Ph. D.
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Krishnamurthy, Raju Chemical Sciences & Engineering Faculty of Engineering UNSW. "Prediction of consumer liking from trained sensory panel information: evaluation of artificial neural networks (ANN)." Awarded by:University of New South Wales. Chemical Sciences & Engineering, 2007. http://handle.unsw.edu.au/1959.4/40746.

Повний текст джерела
Анотація:
This study set out to establish artificial neural networks (ANN) as an alternative to regression methods (multiple linear, principal components and partial least squares regression) for predicting consumer liking from trained sensory panel data. The study has two parts, namely: 1) a flavour study, evaluating ANNs for predicting consumer flavour preferences from trained sensory panel data, and 2) a fragrance study, evaluating different ANN architectures for predicting consumer fragrance liking from trained sensory panel data. In this study, a multi-layer feedforward neural network architecture with input, hidden and output layer(s) was designed. The back-propagation algorithm was used to train the neural networks. The network learning parameters, such as learning rate and momentum rate, were optimised by grid experiments for a fixed number of learning cycles. In the flavour study, ANNs were trained using the trained sensory panel raw data as well as transformed data. The networks trained with sensory panel raw data achieved 98% correct learning, whereas testing was within the range of 28-35%. Suitable transformation methods were applied to reduce the variation in the trained sensory panel raw data. The networks trained with transformed sensory panel data achieved 80-90% correct learning and 80-95% correct testing. In the fragrance study, ANNs were trained using the trained sensory panel raw data as well as principal component data. The networks trained with sensory panel raw data achieved 100% correct learning, and testing was in the range of 70-94%. Principal component analysis was applied to reduce redundancy in the trained sensory panel data. The networks trained with principal component data achieved about 100% correct learning and 90% correct testing. It was shown that, due to its excellent noise tolerance and its ability to predict more than one type of consumer liking with a single model, the ANN approach promises to be an effective modelling tool.
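The grid optimisation of learning rate and momentum described above can be reproduced in outline with scikit-learn (panel scores and grid values here are placeholders, not the study's):

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPRegressor

    # Placeholder data: 10 trained-panel attribute scores -> consumer liking
    rng = np.random.default_rng(6)
    X = rng.normal(size=(200, 10))
    y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 200)

    # Back-propagation (SGD) network; grid over learning rate and momentum
    grid = GridSearchCV(
        MLPRegressor(hidden_layer_sizes=(6,), solver="sgd", max_iter=2000,
                     random_state=0),
        param_grid={"learning_rate_init": [0.01, 0.1, 0.3],
                    "momentum": [0.5, 0.7, 0.9]},
        cv=3)
    grid.fit(X, y)
    print(grid.best_params_)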
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Matějková, Petra. "Analýza ekonomických dat s využitím statistických metod." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2017. http://www.nusl.cz/ntk/nusl-318359.

Повний текст джерела
Анотація:
This master's thesis evaluates the financial situation of a company using selected indicators of financial analysis, together with time series analysis and regression analysis. The thesis is separated into two parts. The theoretical part focuses on economic indicators, financial analysis and its interpretation, and time series. The practical part analyses the economic indicators, which are then subjected to statistical analysis: statistical methods are used to analyse the trend of individual indicators and, on the basis of values from previous periods, to predict future developments.
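A minimal example of the regression side of such an analysis — fitting a trend line to an indicator's time series and extrapolating one period ahead (the values are invented):

    import numpy as np

    # Hypothetical yearly values of a financial indicator (e.g. ROA, %)
    roa = np.array([4.1, 4.3, 4.0, 4.6, 4.8, 5.1, 4.9, 5.4])
    t = np.arange(len(roa))

    # Least-squares regression line and a one-step-ahead prediction
    slope, intercept = np.polyfit(t, roa, 1)
    print(f"trend {slope:+.3f} per year; next-year forecast "
          f"{slope * len(roa) + intercept:.2f}")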
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Marmara, Vincent Anthony. "Prediction of Infectious Disease outbreaks based on limited information." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/24624.

Повний текст джерела
Анотація:
The last two decades have seen several large-scale epidemics of international impact, including human, animal and plant epidemics. Policy makers face health challenges that require epidemic predictions based on limited information. There is therefore a pressing need to construct models that allow us to frame all available information to predict an emerging outbreak and to control it in a timely manner. The aim of this thesis is to develop an early-warning modelling approach that can predict emerging disease outbreaks. Based on Bayesian techniques ideally suited to combine information from different sources into a single modelling and estimation framework, I developed a suite of approaches to epidemiological data that can deal with data from different sources and of varying quality. The SEIR model, particle filter algorithm and a number of influenza-related datasets were utilised to examine various models and methodologies to predict influenza outbreaks. The data included a combination of consultations and diagnosed influenza-like illness (ILI) cases for five influenza seasons. I showed that for the pandemic season, different proxies lead to similar behaviour of the effective reproduction number. For influenza datasets, there exists a strong relationship between consultations and diagnosed datasets, especially when considering time-dependent models. Individual parameters for different influenza seasons provided similar values, thereby offering an opportunity to utilise such information in future outbreaks. Moreover, my findings showed that when the temperature drops below 14°C, this triggers the first substantial rise in the number of ILI cases, highlighting that temperature data is an important signal to trigger the start of the influenza epidemic. Further probing was carried out among Maltese citizens and estimates on the under-reporting rate of the seasonal influenza were established. Based on these findings, a new epidemiological model and framework were developed, providing accurate real-time forecasts with a clear early warning signal to the influenza outbreak. This research utilised a combination of novel data sources to predict influenza outbreaks. Such information is beneficial for health authorities to plan health strategies and control epidemics.
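The deterministic skeleton of the SEIR model used in the thesis can be written down compactly; in the thesis its parameters are estimated from consultation and ILI data with a particle filter, whereas the parameter values below are purely illustrative:

    import numpy as np
    from scipy.integrate import solve_ivp

    def seir(t, y, beta, sigma, gamma):
        """Classic SEIR rates: infection, incubation, recovery."""
        S, E, I, R = y
        N = S + E + I + R
        new_inf = beta * S * I / N
        return [-new_inf, new_inf - sigma * E, sigma * E - gamma * I, gamma * I]

    # Illustrative influenza-like parameters: R0 ~ 1.35, 2-day latency,
    # 3-day infectious period, one seed case in a population of 10,000
    sol = solve_ivp(seir, (0, 180), [9999, 0, 1, 0],
                    args=(0.45, 1 / 2, 1 / 3), max_step=1.0)
    print(f"epidemic peak: {sol.y[2].max():.0f} infectious individuals")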
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Olofsson, Erika. "Supporting management of the risk of wind damage in south Swedish forestry /." Alnarp : Southern Swedish Forest Research Centre, Swedish University of Agricultural Sciences, 2006. http://epsilon.slu.se/200646.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Дудка, Богдан Романович. "Ймовірнісно-статистичні моделі нелінійних нестаціонарних процесів в економіці та фінансах". Master's thesis, Київ, 2018. https://ela.kpi.ua/handle/123456789/23903.

Повний текст джерела
Анотація:
Master's thesis: 89 p., 21 fig., 22 tabl., 19 ref. The work addresses the study of nonlinear nonstationary processes in economics and finance represented by statistical data. The problem of detecting nonlinearity and nonstationarity of the process under study is considered in detail, as is the task of filling gaps in statistical data. A methodology for modeling nonlinear nonstationary processes is presented and applied. Object of the research: statistical data on the development of selected financial and economic processes. Subject of the research: a methodology for building models of nonstationary processes, methods for handling missing data, regression models, and statistical characteristics of model adequacy and forecast quality. Research methods: statistical data analysis, methods for imputing missing data, regression analysis, Bayesian networks, and the Kalman filter. Aim of the research: implementation of a methodology for building models of nonstationary processes. The developed methodology was applied, and the influence of missing data and of smoothing methods on the statistical characteristics of data models was analysed.
The theme: Probabilistic and Statistical Models of Nonstationary Processes in Economy and Finances. Master thesis: 89 p., 21 fig., 22 tabl., 1 appendix, 19 ref. In this work the problem of building models of non-linear nonstationary processes is considered, and an appropriate methodology for building such models is introduced. The important task of imputing missing values in statistical data was also addressed, and the influence of missing values on the statistical characteristics of the data model was analyzed. Object of the research: statistical data on the development of chosen macroeconomic processes. Subject of the research: a methodology for building models of nonlinear processes, methods for detecting missing values, and statistical characteristics of model adequacy and forecast quality. The methods of the research are as follows: modeling and forecasting theory, regression analysis, statistical analysis, and methods for imputing missing values. Target of the research: implementation of a methodology for building models of non-linear nonstationary processes, and analysis of the influence of missing values on the adequacy of models of dynamical processes. The developed time series model building methodology was used to build models of nonlinear processes, and an analysis of missing values was conducted to define their influence on the statistical characteristics of the models.
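A small sketch of the gap-filling step whose influence the thesis analyses — comparing two simple imputation strategies on a nonstationary series (the data are simulated):

    import numpy as np
    import pandas as pd

    # Simulated nonstationary series (random walk with drift) with gaps
    rng = np.random.default_rng(7)
    series = pd.Series(np.cumsum(rng.normal(0.1, 1.0, 200)))
    series.iloc[rng.choice(200, size=20, replace=False)] = np.nan

    # Strategy 1: linear interpolation across each gap
    linear_fill = series.interpolate(method="linear")
    # Strategy 2: fill gaps with a centred moving-average (smoothing) value
    smooth_fill = series.fillna(series.rolling(5, min_periods=1, center=True).mean())

    # Model statistics (mean, variance, autocorrelation) can then be
    # compared across the two filled series
    print(linear_fill.var(), smooth_fill.var())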
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Bowling, Ernest H. "A stand level multi-species growth model for Appalachian hardwoods." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/104295.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Bunger, R. C. (Robert Charles). "Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time." Thesis, University of North Texas, 1988. https://digital.library.unt.edu/ark:/67531/metadc330882/.

Повний текст джерела
Анотація:
In this study a two-part mixed probability density function was derived which described the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values for the standard deviations for the two halves and the mean are given. Also, a general form of the function is given which uses linear regression models to estimate the standard deviations and the means. The density functions allow stock market participants trading index options and futures contracts on the S & P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
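The two-part mixed density described above — two half-normal pieces sharing a mode but with different spreads — can be written directly; the parameter values below are placeholders, not the study's estimates:

    import numpy as np
    from scipy.integrate import quad

    def two_piece_normal_pdf(x, mu, s_left, s_right):
        """Left half of N(mu, s_left) joined to right half of N(mu, s_right),
        with a common height at mu so the pieces integrate to 1 overall."""
        c = 2.0 / (np.sqrt(2.0 * np.pi) * (s_left + s_right))
        s = np.where(x < mu, s_left, s_right)
        return c * np.exp(-((x - mu) ** 2) / (2.0 * s ** 2))

    # Probability of a relative index change above +2% over the interval
    p_up = quad(two_piece_normal_pdf, 0.02, np.inf,
                args=(0.0005, 0.012, 0.010))[0]
    print(f"P(change > 2%) = {p_up:.3f}")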
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Deng, Yingjun. "Degradation modeling based on a time-dependent Ornstein-Uhlenbeck process and prognosis of system failures." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0004/document.

Повний текст джерела
Анотація:
This thesis is devoted to describing, predicting and preventing system failures. It consists of four parts, concerning stochastic degradation modeling, prognosis of system failures, failure level estimation and maintenance optimization. The time-dependent Ornstein-Uhlenbeck (OU) process is introduced for degradation modeling. On the basis of this process, the first time a predefined failure level is crossed is taken as the failure time of the system under consideration. Different methods are then proposed to carry out the failure prognosis. Next, the failure level associated with the degradation process is estimated from the lifetime distribution by solving an inverse first passage problem. This approach links failure records with degradation monitoring to improve the quality of a prognosis posed as a first passage problem. The prognosis of system failures makes it possible to optimize maintenance. The case of a continuously monitored system is considered. Characterising the first passage time rationalises preventive maintenance decision making. Decision support is provided by searching for a virtual failure level whose computation is optimised according to proposed criteria.
This thesis is dedicated to describing, predicting and preventing system failures. It consists of four issues: i) stochastic degradation modeling, ii) prognosis of system failures, iii) failure level estimation and iv) maintenance optimization. The time-dependent Ornstein-Uhlenbeck (OU) process is introduced for degradation modeling; it is attractive for its statistical properties of controllable mean, variance and correlation. Based on such a process, the first passage time to a pre-set failure level is considered as the system failure time. Different methods are then proposed for the prognosis of system failures, which can be classified into three categories: analytical approximations, numerical algorithms and Monte Carlo simulation methods. Moreover, the failure level is estimated from the lifetime distribution by solving inverse first passage problems. This bridges the potential gap between failure records and degradation records, reinforcing the prognosis process via first passage problems. Building on the prognosis of system failures, maintenance optimization for a continuously monitored system is performed. By introducing first passage problems, the arrangement of preventive maintenance is simplified. The maintenance decision rule is based on a virtual failure level, which is the solution of an optimization problem for the proposed objective functions.
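Of the three method classes named above, the Monte Carlo route is the easiest to sketch: simulate Euler-Maruyama paths of a time-dependent OU process and record when each first crosses the failure level (all parameters below are illustrative):

    import numpy as np

    rng = np.random.default_rng(8)
    n_paths, n_steps, dt = 5000, 2000, 0.01
    theta, sigma, level = 0.5, 0.3, 2.0

    def mean_fn(t): return 0.2 * t   # time-dependent attractor m(t)

    x = np.zeros(n_paths)
    hit = np.full(n_paths, np.nan)   # first passage time per path
    for k in range(n_steps):
        t = k * dt
        # Euler-Maruyama step of dX = theta*(m(t) - X)dt + sigma dW
        x += (theta * (mean_fn(t) - x) * dt
              + sigma * np.sqrt(dt) * rng.normal(size=n_paths))
        newly = np.isnan(hit) & (x >= level)
        hit[newly] = t + dt

    print(f"hit fraction {np.mean(~np.isnan(hit)):.2f}, "
          f"mean first-passage time {np.nanmean(hit):.2f}")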
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Souza, Izabel Oliva Marcilio de. "Previsão do volume diário de atendimentos no serviço de pronto socorro de um hospital geral: comparação de diferentes métodos." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/5/5137/tde-03102013-121222/.

Повний текст джерела
Анотація:
OBJECTIVES: The study explored different time series methods to develop a model for forecasting the daily volume of patients at the Emergency Department of the Instituto Central do Hospital das Clínicas, University of São Paulo Medical School. METHODS: Six different models for forecasting the daily number of emergency department patients were explored, based on calendar-related variables and mean daily temperature. The models were built from daily counts of patients seen in the emergency department between January 1, 2008 and December 31, 2010. The first 33 months of the database were used to develop and adjust the models, and the last three months were used to compare the results in terms of forecasting accuracy, measured by the mean absolute percentage error. The models were developed with three different methods: generalized linear models, generalized estimating equations, and seasonal autoregressive integrated moving average (SARIMA) models. For each method, models including terms to control for the effect of mean daily temperature and models without such control were tested. RESULTS: On average, 389 patients were seen daily in the emergency department, ranging between 166 and 613. A marked weekly seasonality was observed in the distribution of patient volume over time, with the largest numbers on Mondays and a decreasing linear trend over the week. No significant variation in patient volume by month of the year was observed. The generalized linear models and generalized estimating equations achieved better forecasting accuracy than the SARIMA models. For the first forecasting horizon (October), for example, the mean absolute percentage errors of the generalized linear models and the generalized estimating equation models were 11.5% and 10.8% (models with and without a term controlling for the temperature effect, respectively), whereas those of the SARIMA models were 12.8% and 11.7% (with and without temperature control, respectively). For all models, including terms to control for mean daily temperature did not improve forecasting accuracy. Short-term forecasting (7 days) was generally more accurate than longer-term forecasting (30 days). CONCLUSIONS: This study indicates that time series methods can be applied in the routine of the emergency department to forecast the likely daily volume of patients. Short-term forecasts show good accuracy and can be incorporated into the service's routine to support planning and to help match material and human resources to demand. Forecasting models based solely on calendar variables were able to predict the variation in daily patient volume, and the methods applied here can be automated to generate information sufficiently in advance for planning decisions in the emergency department.
OBJECTIVES: This study aims to develop different models to forecast the daily number of patients seeking emergency department (ED) care in a general hospital, according to calendar variables and ambient temperature readings, and to compare the models in terms of forecasting accuracy. METHODS: We developed and tested six different models of ED patient visits using total daily counts of patient visits to the Instituto Central do Hospital das Clínicas Emergency Department from January 1, 2008 to December 31, 2010. We used the first 33 months of the dataset to develop the ED patient visit forecasting models (the training set), leaving the last 3 months to measure each model's forecasting accuracy by the mean absolute percentage error. Forecasting models were developed using 3 different time series analysis methods: generalized linear models, generalized estimating equations and seasonal autoregressive integrated moving average (SARIMA). For each method, we explored models with and without the effect of mean daily temperature as a predictive variable. RESULTS: The daily mean number of ED visits was 389, ranging from 166 to 613. Data showed a weekly seasonal distribution, with the highest patient volumes on Mondays and the lowest on weekends. There was little variation in daily visits by month. Generalized linear models and generalized estimating equation models showed better forecasting accuracy than SARIMA models. For instance, the mean absolute percentage errors from the generalized linear models and generalized estimating equation models in the first month of forecasting (October 2010) were 11.5% and 10.8% (models with and without control for the temperature effect, respectively), while the mean absolute percentage errors from the SARIMA models were 12.8% and 11.7% (models with and without control for the temperature effect, respectively). For all models, controlling for the effect of temperature resulted in worse or similar forecasting ability compared with models using calendar variables alone, and forecasting accuracy was better for the short-term horizon (7 days in advance) than for the longer term (30 days in advance). CONCLUSIONS: Our study indicates that time series models can be developed to provide forecasts of daily ED patient visits, and forecasting ability was dependent on the type of model employed and the length of the time horizon being predicted. In our setting, generalized linear models and generalized estimating equation models showed better accuracy, and including information about ambient temperature in the models did not improve forecasting accuracy. Forecasting models based on calendar variables alone did in general detect patterns of daily variability in ED volume, and thus could be used for developing an automated system for better planning of personnel resources.
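A compressed sketch of the best-performing setup — a calendar-only generalized linear model scored by mean absolute percentage error — is given below with simulated visit counts (the weekly pattern roughly mimics the one described, but all numbers are invented):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated daily ED visits: Monday peak, weekend trough, mean near 389
    rng = np.random.default_rng(9)
    days = pd.date_range("2008-01-01", periods=1096, freq="D")
    rate = 389 + 60 * (days.dayofweek == 0) - 50 * (days.dayofweek >= 5)
    df = pd.DataFrame({"visits": rng.poisson(rate),
                       "dow": days.dayofweek.astype(str)})

    # Train on the first 33 months, test on the last 3
    train, test = df.iloc[:1004], df.iloc[1004:]
    fit = smf.glm("visits ~ dow", data=train, family=sm.families.Poisson()).fit()

    mape = np.mean(np.abs(test["visits"] - fit.predict(test)) / test["visits"])
    print(f"MAPE: {100 * mape:.1f}%")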
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Haddad, Khaled, University of Western Sydney, College of Health and Science, and School of Engineering. "Design flood estimation for ungauged catchments in Victoria : ordinary and generalised least squares methods compared." 2008. http://handle.uws.edu.au:8081/1959.7/30369.

Повний текст джерела
Анотація:
Design flood estimation in small to medium sized ungauged catchments is frequently required in hydrologic analysis and design and is of notable economic significance. For this task Australian Rainfall and Runoff (ARR) 1987, the National Guideline for Design Flow Estimation, recommends the Probabilistic Rational Method (PRM) for general use in South-East Australia. However, there have been recent developments with significant potential to provide more meaningful and accurate design flood estimation in small to medium sized ungauged catchments. These include the L-moments based index flood method and a range of quantile regression techniques. This thesis focuses on the quantile regression techniques and compares two methods: ordinary least squares (OLS) and generalised least squares (GLS) based regression. It also makes comparisons with the currently recommended Probabilistic Rational Method. The OLS model is used by hydrologists to estimate the parameters of regional hydrological models. However, recent studies have indicated that the parameter estimates are usually unstable and that the OLS procedure often violates the assumption of homoskedasticity. The GLS based regression procedure accounts for the varying sampling error, correlation between concurrent flows, correlations between the residuals and the fitted quantiles, and model error in the regional model; one would therefore expect more accurate flood quantile estimation by this method. This thesis uses data from 133 catchments in the state of Victoria to develop prediction equations involving readily obtainable catchment characteristics. The GLS regression procedure is explored further by carrying out a 4-stage generalised least squares analysis in which the prediction equations are developed by relating hydrological statistics such as mean flows, standard deviations, skewness and flow quantiles to catchment characteristics. This study also validates the two techniques by carrying out a split-sample validation on a set of independent test catchments. The PRM is also tested by deriving an updated PRM technique with the new data set and carrying out a split-sample validation on the test catchments. The results show that GLS based regression provides more accurate design flood estimates than the OLS regression procedure and the PRM. Based on the average variance of prediction, the standard error of estimate, traditional and new statistics, rankings and median relative error values, the GLS method provided more accurate flood frequency estimates, especially for the smaller catchments in the range of 1-300 km2. The predictive ability of the GLS model is also evident in the regression coefficient values when compared with the OLS method. However, the performance of the PRM, particularly for the larger catchments, appears to be satisfactory as well.
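The core contrast between the two estimators can be illustrated in a few lines: OLS assumes homoskedastic, independent errors, whereas GLS takes an explicit error covariance matrix. The sketch below uses an assumed diagonal covariance; in regional GLS analysis this matrix would encode sampling error and inter-site correlation, and all data here are simulated:

    import numpy as np
    import statsmodels.api as sm

    # Simulated log flood quantiles vs. log catchment area (133 catchments)
    rng = np.random.default_rng(10)
    area = rng.uniform(1, 300, 133)                     # km^2
    log_q = 1.2 + 0.7 * np.log(area) + rng.normal(0, 0.3, 133)
    X = sm.add_constant(np.log(area))

    ols = sm.OLS(log_q, X).fit()

    # Assumed diagonal covariance: smaller catchments / shorter records
    # are treated as noisier, so they receive less weight
    sigma = np.diag(0.05 + 0.5 / np.sqrt(area))
    gls = sm.GLS(log_q, X, sigma=sigma).fit()
    print("OLS:", ols.params, " GLS:", gls.params)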
Master of Engineering (Honours)
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Chauhan, Rajesh Kumar. "Statistical modelling and forecasting of mortality in India." Phd thesis, 2002. http://hdl.handle.net/1885/148553.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Leka, K. D., S.-H. Park, K. Kusano, J. Andries, G. Barnes, S. Bingham, D. S. Bloomfield, et al. "A comparison of flare forecasting methods. III. Systematic behaviors of operational solar flare forecasting systems." 2019. http://hdl.handle.net/10454/17194.

Повний текст джерела
Анотація:
Yes
A workshop was recently held at Nagoya University (31 October – 02 November 2017), sponsored by the Center for International Collaborative Research, at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today’s operational solar flare forecasting facilities. Building upon Paper I of this series (Barnes et al. 2016), in Paper II (Leka et al. 2019) we described the participating methods for this latest comparison effort, the evaluation methodology, and presented quantitative comparisons. In this paper we focus on the behavior and performance of the methods when evaluated in the context of broad implementation differences. Acknowledging the short testing interval available and the small number of methods available, we do find that forecast performance: 1) appears to improve by including persistence or prior flare activity, region evolution, and a human “forecaster in the loop”; 2) is hurt by restricting data to disk-center observations; 3) may benefit from long-term statistics, but mostly when then combined with modern data sources and statistical approaches. These trends are arguably weak and must be viewed with numerous caveats, as discussed both here and in Paper II. Following this present work, we present in Paper IV a novel analysis method to evaluate temporal patterns of forecasting errors of both types (i.e., misses and false alarms; Park et al. 2019). Hence, most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive methodologies.
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Leka, K. D., S.-H. Park, K. Kusano, J. Andries, G. Barnes, S. Bingham, D. S. Bloomfield, et al. "A Comparison of Flare Forecasting Methods. III. Systematic Behaviors of Operational Solar Flare Forecasting Systems." 2019. http://hdl.handle.net/10454/17366.

Повний текст джерела
Анотація:
Yes
A workshop was recently held at Nagoya University (31 October – 02 November 2017), sponsored by the Center for International Collaborative Research, at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today’s operational solar flare forecasting facilities. Building upon Paper I of this series (Barnes et al. 2016), in Paper II (Leka et al. 2019) we described the participating methods for this latest comparison effort, the evaluation methodology, and presented quantitative comparisons. In this paper we focus on the behavior and performance of the methods when evaluated in the context of broad implementation differences. Acknowledging the short testing interval available and the small number of methods available, we do find that forecast performance: 1) appears to improve by including persistence or prior flare activity, region evolution, and a human “forecaster in the loop”; 2) is hurt by restricting data to disk-center observations; 3) may benefit from long-term statistics, but mostly when then combined with modern data sources and statistical approaches. These trends are arguably weak and must be viewed with numerous caveats, as discussed both here and in Paper II. Following this present work, we present in Paper IV a novel analysis method to evaluate temporal patterns of forecasting errors of both types (i.e., misses and false alarms; Park et al. 2019). Hence, most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive methodologies.
We wish to acknowledge funding from the Institute for Space-Earth Environmental Research, Nagoya University for supporting the workshop and its participants. We would also like to acknowledge the “big picture” perspective brought by Dr. M. Leila Mays during her participation in the workshop. K.D.L. and G.B. acknowledge that the DAFFS and DAFFS-G tools were developed under NOAA SBIR contracts WC-133R-13-CN-0079 (Phase-I) and WC-133R-14-CN-0103 (Phase-II) with additional support from Lockheed-Martin Space Systems contract #4103056734 for Solar-B FPP Phase E support. A.E.McC. was supported by an Irish Research Council Government of Ireland Postgraduate Scholarship. D.S.B. and M.K.G. were supported by the European Union Horizon 2020 research and innovation programme under grant agreement No. 640216 (FLARECAST project; http://flarecast.eu). M.K.G. also acknowledges research performed under the A-EFFort project and subsequent service implementation, supported under ESA Contract number 4000111994/14/D/MPR. S.A.M. is supported by the Irish Research Council Postdoctoral Fellowship Programme and the US Air Force Office of Scientific Research award FA9550-17-1-039. The operational Space Weather services of ROB/SIDC are partially funded through the STCE, a collaborative framework funded by the Belgian Science Policy Office.
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Leka, K. D., S.-H. Park, K. Kusano, J. Andries, G. Barnes, S. Bingham, D. S. Bloomfield, et al. "A comparison of flare forecasting methods. II. Benchmarks, metrics and performance results for operational solar flare forecasting systems." 2019. http://hdl.handle.net/10454/17193.

Повний текст джерела
Анотація:
Yes
Solar flares are extremely energetic phenomena in our Solar System. Their impulsive, often drastic radiative increases, in particular at short wavelengths, bring immediate impacts that motivate solar physics and space weather research to understand solar flares to the point of being able to forecast them. As data and algorithms improve dramatically, questions must be asked concerning how well the forecasting performs; crucially, we must ask how to rigorously measure performance in order to critically gauge any improvements. Building upon earlier-developed methodology (Barnes et al. 2016, Paper I), international representatives of regional warning centers and research facilities assembled in 2017 at the Institute for Space-Earth Environmental Research, Nagoya University, Japan to – for the first time – directly compare the performance of operational solar flare forecasting methods. Multiple quantitative evaluation metrics are employed, with focus and discussion on evaluation methodologies given the restrictions of operational forecasting. Numerous methods performed consistently above the “no skill” level, although which method scored top marks is decisively a function of flare event definition and the metric used; there was no single winner. Following in this paper series we ask why the performances differ by examining implementation details (Leka et al. 2019, Paper III), and then we present a novel analysis method to evaluate temporal patterns of forecasting errors in (Park et al. 2019, Paper IV). With these works, this team presents a well-defined and robust methodology for evaluating solar flare forecasting methods in both research and operational frameworks, and today’s performance benchmarks against which improvements and new methods may be compared.
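For readers outside the field, the kind of contingency-table verification scores used in such comparisons can be sketched directly (the counts below are invented):

    def verification_metrics(hits, misses, false_alarms, correct_negatives):
        """Standard forecast-verification scores from a 2x2 contingency table."""
        h, m, f, c = hits, misses, false_alarms, correct_negatives
        n = h + m + f + c
        pod = h / (h + m)                       # probability of detection
        far = f / (h + f)                       # false alarm ratio
        tss = pod - f / (f + c)                 # true skill statistic
        expected = ((h + m) * (h + f) + (c + m) * (c + f)) / n
        hss = (h + c - expected) / (n - expected)   # Heidke skill score
        return {"POD": pod, "FAR": far, "TSS": tss, "HSS": hss}

    # Invented counts for a month of daily M-class flare forecasts
    print(verification_metrics(hits=9, misses=4, false_alarms=6,
                               correct_negatives=11))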
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Barnes, G., K. D. Leka, C. J. Schrijver, Tufan Colak, Rami S. R. Qahwaji, Omar Ashamari, Y. Yuan, et al. "A comparison of flare forecasting methods, I: results from the “All-clear” workshop." 2016. http://hdl.handle.net/10454/8663.

Повний текст джерела
Анотація:
Yes
Solar flares produce radiation which can have an almost immediate effect on the near-Earth environment, making it crucial to forecast flares in order to mitigate their negative effects. The number of published approaches to flare forecasting using photospheric magnetic field observations has proliferated, with varying claims about how well each works. Because of the different analysis techniques and data sets used, it is essentially impossible to compare the results from the literature. This problem is exacerbated by the low event rates of large solar flares. The challenges of forecasting rare events have long been recognized in the meteorology community, but have yet to be fully acknowledged by the space weather community. During the interagency workshop on “all clear” forecasts held in Boulder, CO in 2009, the performance of a number of existing algorithms was compared on common data sets, specifically line-of-sight magnetic field and continuum intensity images from MDI, with consistent definitions of what constitutes an event. We demonstrate the importance of making such systematic comparisons, and of using standard verification statistics to determine what constitutes a good prediction scheme. When a comparison was made in this fashion, no one method clearly outperformed all others, which may in part be due to the strong correlations among the parameters used by different methods to characterize an active region. For M-class flares and above, the set of methods tends towards a weakly positive skill score (as measured with several distinct metrics), with no participating method proving substantially better than climatological forecasts.
This work is the outcome of many collaborative and cooperative efforts. The 2009 “Forecasting the All-Clear” Workshop in Boulder, CO was sponsored by NASA/Johnson Space Flight Center’s Space Radiation Analysis Group, the National Center for Atmospheric Research, and the NOAA/Space Weather Prediction Center, with additional travel support for participating scientists from NASA LWS TRT NNH09CE72C to NWRA. The authors thank the participants of that workshop, in particular Drs. Neal Zapp, Dan Fry, Doug Biesecker, for the informative discussions during those three crazy days, and NCAR’s Susan Baltuch and NWRA’s Janet Biggs for organizational prowess. Workshop preparation and analysis support was provided for GB, KDL by NASA LWS TRT NNH09CE72C, and NASA Heliophysics GI NNH12CG10C. PAH and DSB received funding from the European Space Agency PRODEX Programme, while DSB and MKG also received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 640216 (FLARECAST project). MKG also acknowledges research performed under the A-EFFort project and subsequent service implementation, supported under ESA Contract number 4000111994/14/D/MPR. YY was supported by the National Science Foundation under grants ATM 09-36665, ATM 07-16950, ATM-0745744 and by NASA under grants NNX0-7AH78G, NNXO-8AQ90G. YY owes his deepest gratitude to his advisers Prof. Frank Y. Shih, Prof. Haimin Wang and Prof. Ju Jing for long discussions, for reading previous drafts of his work and providing many valuable comments that improved the presentation and contents of this work. JMA was supported by NSF Career Grant AGS-1255024 and by a NMSU Vice President for Research Interdisciplinary Research Grant.
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Rama, Kavir D. "An empirical evaluation of the Altman (1968) failure prediction model on South African JSE listed companies." Thesis, 2013. http://hdl.handle.net/10539/12535.

Повний текст джерела
Анотація:
Credit has become very important in the global economy (Cynamon and Fazzari, 2008). The Altman (1968) failure prediction model, or derivatives thereof, is often used in the identification and selection of financially distressed companies, as it is recognized as one of the most reliable models for predicting company failure (Eidleman, 1995). Failure of a firm can cause substantial losses to creditors and shareholders; it is therefore important to detect company failure as early as possible. This research report empirically tests the Altman (1968) failure prediction model on 227 South African JSE-listed companies, using data from the 2008 financial year to calculate the Z-score within the model and measuring success or failure of firms in the 2009 and 2010 years. The results indicate that the Altman (1968) model is a viable tool in predicting company failure for firms with positive Z-scores, where the Z-scores do not fall into the specified range of uncertainty. The results also suggest that the model is not reliable when the Z-scores are negative or when they are in the range of uncertainty (between 1.81 and 2.99). If one is able to predict firm failure in advance, it should be possible for management to take steps to avert such an occurrence (Deakin, 1972; Keasey and Watson, 1991; Platt and Platt, 2002).
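The Altman (1968) Z-score itself is a simple linear combination of five financial ratios, with the 1.81 and 2.99 cut-offs quoted above; a direct implementation follows (the sample inputs are invented):

    def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
        """Altman (1968) Z-score from the five original financial ratios."""
        return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
                + 0.6 * mve_tl + 1.0 * sales_ta)

    z = altman_z(wc_ta=0.15, re_ta=0.22, ebit_ta=0.10, mve_tl=1.40, sales_ta=1.10)
    zone = "safe" if z > 2.99 else "distressed" if z < 1.81 else "uncertain"
    print(f"Z = {z:.2f} -> {zone} zone")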
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Chetty, Kershani. "An assessment of scale issues related to the configuration of the ACRU model for design flood estimation." Thesis, 2010. http://hdl.handle.net/10413/707.

Повний текст джерела
Анотація:
There is a frequent need for estimates of design floods by hydrologists and engineers for the design of hydraulic structures. There are various techniques for estimating these design floods, which depend largely on the availability of data. The two main approaches to design flood estimation are categorised as methods based on the analysis of floods and methods based on rainfall-runoff relationships. Amongst the methods based on the analysis of floods, regional flood frequency analysis is seen as a reliable and robust method and is the recommended approach. Design event models are commonly used for design flood estimation in rainfall-runoff based analyses. However, these involve several simplifying assumptions that matter in design flood estimation. A continuous simulation approach to design flood estimation has many advantages and overcomes many of the limitations of the design event approach. A major concern with continuous simulation using a hydrological model is the scale at which modelling should take place. According to Martina (2004), the “level” of representation that will preserve the “physical chain” of the hydrological processes, both in terms of scale of representation and level of description of the physical parameters for the modelling process, is a critical question to be addressed. The objectives of this study were to review the literature on different approaches commonly used in South Africa and internationally for design flood estimation and, based on the literature, to assess the potential of a continuous simulation approach to design flood estimation. The objectives of both case studies undertaken in this research were to determine the optimum level of catchment discretisation and the optimum level of soil and land cover information required, and to assess the optimum use of daily rainfall stations for the configuration of the ACRU agrohydrological model when used as a continuous simulation model for design flood estimation. The last objective was to compare design flood estimates from flows simulated by the ACRU model with design flood estimates obtained from observed data. Results obtained for selected quaternary catchments in the Thukela Catchment and the Lions River catchment indicated that modelling at the level of hydrological response units (HRUs), using area-weighted soils information and more than one driver rainfall station where possible, produced the most realistic results when comparing observed and simulated streamflows. Design flood estimates from simulated flows compared reasonably well with design flood estimates obtained from observed data only for QC59 and QCU20B.
Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2010.
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Alves, Carlos Miguel Ferreira. "Demand forecasting in a multi-specialty hospital setting: a comparative study of machine learning and classical statistical methods." Master's thesis, 2018. https://repositorio-aberto.up.pt/handle/10216/114091.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Marques, Rúben Alexandre da Fonseca. "A Comparison on Statistical Methods and Long Short Term Memory Network Forecasting the Demand of Fresh Fish Products." Master's thesis, 2020. https://hdl.handle.net/10216/126819.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Marques, Rúben Alexandre da Fonseca. "A Comparison on Statistical Methods and Long Short Term Memory Network Forecasting the Demand of Fresh Fish Products." Dissertação, 2020. https://hdl.handle.net/10216/126819.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Alves, Carlos Miguel Ferreira. "Demand forecasting in a multi-specialty hospital setting: a comparative study of machine learning and classical statistical methods." Dissertação, 2018. https://repositorio-aberto.up.pt/handle/10216/114091.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.