Dissertations / Theses on the topic 'Forecasting accuracy'

Consult the top 50 dissertations / theses for your research on the topic 'Forecasting accuracy.'


1

Gunner, J. C. "A model of building price forecasting accuracy." Thesis, University of Salford, 1997. http://usir.salford.ac.uk/26702/.

Abstract:
The purpose of this research was to derive a statistical model comprising the significant factors influencing the accuracy of a designer's price forecast and as an aid to providing a theoretical framework for further study. To this end data, comprising 181 building contract details, was collected from the Singapore office of an international firm of quantity surveyors over the period 1980 to 1991. Bivariate analysis showed a number of independent variables having significant effect on bias which was in general agreement with previous work in this domain. The research also identified a number of independent variables having significant effect on the consistency, or precision, of designers' building price forecasts. With information gleaned from bivariate results attempts were made to build a multivariate model which would explain a significant portion of the errors occurring in building price forecasts. The results of the models built were inconclusive because they failed to satisfy the assumptions inherent in ordinary least squares regression. The main failure in the models was in satisfying the assumption of homoscedasticity, that is, the conditional variances of the residuals are equal around the mean. Five recognised methodologies were applied to the data in attempts to remove heteroscedasticity but none were successful. A different approach to model building was then adopted and a tenable model was constructed which satisfied all of the regression assumptions and internal validity checks. The statistically significant model also revealed that the variable of Price Intensity was the sole underlying influence when tested against all other independent variables in the data of this work and after partialling out the effect of all other independent variables. From this a Price Intensity theory of accuracy is developed and a further review of the previous work in this field suggests that this may be of universal application.
2

Lindström, Markus. "Forecasting day-ahead electricity prices in Sweden : Has the forecasting accuracy decreased?" Thesis, Umeå universitet, Nationalekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184649.

Abstract:
Sweden is currently transitioning towards having 100% electricity generation from renewable energy sources by 2040. To reach this goal, Sweden will ramp up generation from wind power while simultaneously phasing out nuclear power. Replacing nuclear power with an intermittent production source such as wind power has been proven to increase the variability of electricity prices. The purpose of this study has been to investigate whether the increasing electricity generation through wind power in Sweden has decreased the accuracy of price forecasts provided by ARIMA models. Using an automated algorithm in R, optimal ARIMA models were chosen to forecast on-peak and off-peak hours for both winter and summer periods in 2015. These forecasts were then compared to forecasts provided by ARIMA models calibrated on data from 2020. The results from the empirical analysis showed that the overall forecast errors are twice as large for the 2020 forecasts, indicating that increasing electricity generation from wind power has decreased the forecasting accuracy of price-only statistical models.
3

Zbib, Imad J. (Imad Jamil). "Sales Forecasting Accuracy Over Time: An Empirical Investigation." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc332526/.

Abstract:
This study investigated forecasting accuracy over time. Several quantitative and qualitative forecasting models were tested and a number of combinational methods were investigated. Six time series methods, one causal model, and one subjective technique were compared in this study. Six combinational forecasts were generated and compared to individual forecasts, and a combining technique was developed. Thirty data sets, obtained from a market leader in the cosmetics industry, were used to forecast sales. All series represent monthly sales from January 1985 to December 1989. Gross sales forecasts from January 1988 to June 1989 were generated by the company using econometric models. All data sets exhibited seasonality and trend.
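The idea behind the combinational forecasts described above can be illustrated with a minimal sketch. The sales figures and the equal-weight averaging rule below are hypothetical, not the combining technique developed in the thesis:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly sales with two individual forecasts
actual   = [100, 120, 140, 110]
f_series = [90, 125, 150, 100]    # e.g. a time series method
f_judge  = [110, 110, 135, 115]   # e.g. a subjective estimate

# Equal-weight combination: simply average the individual forecasts
f_comb = [(a + b) / 2 for a, b in zip(f_series, f_judge)]

for name, f in [("series", f_series), ("judgmental", f_judge), ("combined", f_comb)]:
    print(name, round(mape(actual, f), 2))
```

In this toy example the combination's MAPE (about 1.5%) beats both individual forecasts because their errors partly cancel, which is the usual argument for combining.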
4

SESKAUSKIS, ZYGIMANTAS, and ROKAS NARKEVICIUS. "Sales forecasting management." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-10685.

Abstract:
The purpose of this research is to investigate the case company's current business process from a sales forecasting perspective and to suggest potential improvements for dealing with unstable market demand and increasing the overall precision of forecasting. The problem the company faces is an unstable market demand and insufficient precision in the sales forecasting process. The research questions are therefore: How can the current forecasting process be improved? What methods can be implemented to increase the precision of forecasting? The study can be described as action research using an abductive approach, supported by a combination of quantitative and qualitative analysis practices. To achieve a high degree of reliability, the study was based on verified scientific literature and on data collected from the case company in collaboration with the company's COO. The research exposed the current forecasting process of the case company. Different forecasting methods were chosen according to the existing circumstances and analysed to determine which could be implemented to increase forecasting precision and improve forecasting as a whole. Simple exponential smoothing showed the most promising accuracy results, measured by applying the MAD, MSE and MAPE measurement techniques. Trend line analysis was also applied as a supplementary method. Because the case company introduces new products to the market, only a limited amount of historical data was available, so the simple exponential smoothing technique did not produce results as accurate as desired. However, the suggested methods can be applied for testing and learning purposes, supported by the qualitative methods currently in use.
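The best-performing method in this entry, simple exponential smoothing scored with MAD, MSE and MAPE, can be sketched in a few lines. The demand series and the smoothing constant below are invented for illustration, not the case company's data:

```python
def ses_forecasts(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing:
    f[t] = alpha * y[t-1] + (1 - alpha) * f[t-1], seeded with the first value."""
    f = [series[0]]
    for t in range(1, len(series)):
        f.append(alpha * series[t - 1] + (1 - alpha) * f[t - 1])
    return f

def mad(y, f):
    """Mean absolute deviation."""
    return sum(abs(a - b) for a, b in zip(y, f)) / len(y)

def mse(y, f):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y, f)) / len(y)

def mape(y, f):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, f)) / len(y)

demand = [52, 48, 55, 60, 57, 63, 59, 66]   # hypothetical sales history
fc = ses_forecasts(demand, alpha=0.3)
print(round(mad(demand, fc), 2), round(mse(demand, fc), 2), round(mape(demand, fc), 2))
```

The same three error measures can then be recomputed for each candidate alpha (or each candidate method) to pick the most accurate one, which is essentially the comparison the thesis performs.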
5

Novela, George. "Testing maquiladora forecast accuracy." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

6

Karimi, Arizo. "VARs and ECMs in forecasting – a comparative study of the accuracy in forecasting Swedish exports." Thesis, Uppsala University, Department of Economics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9223.

Abstract:

In this paper, the forecast performance of an unrestricted vector autoregressive (VAR) model was compared against the forecast accuracy of a vector error correction model (VECM) when computing out-of-sample forecasts for Swedish exports. The co-integrating relation used to estimate the error correction specification was based on an economic theory of international trade suggesting that a long-run equilibrium relation should exist among the variables included in an export demand equation. The results obtained provide evidence of a long-run equilibrium relationship between the Swedish export volume and its main determinants. The models were estimated for manufactured goods using quarterly data for the period 1975-1999. Once estimated, the models were used to compute out-of-sample forecasts four, eight and twelve quarters ahead for the Swedish export volume, using both multi-step and one-step-ahead forecast techniques. The main results suggest that the differences in forecasting ability between the two models are small; however, according to the relevant evaluation criteria, the unrestricted VAR model in general yields somewhat better forecasts than the VECM when forecasting Swedish exports over the chosen forecast horizons.

7

Eroglu, Cuneyt. "An investigation of accuracy, learning and biases in judgmental adjustments of statistical forecasts." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1150398313.

8

Bilodeau, Bernard. "Accuracy of a truncated barotropic spectral model : numerical versus analytical solutions." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66037.

9

Yongtao, Yu. "Exchange rate forecasting model comparison: A case study in North Europe." Thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154948.

Abstract:
In the past, many studies comparing exchange rate forecasting models have been carried out, and most reach a similar result: the random walk model has the best forecasting performance. In this thesis, I want to find a model that beats the random walk model in forecasting the exchange rate. In my study, the vector autoregressive model (VAR), the restricted vector autoregressive model (RVAR), the vector error correction model (VEC) and the Bayesian vector autoregressive model are employed in the analysis. These multivariate time series models are compared with the random walk model by evaluating the forecasting accuracy of the exchange rate for three North European countries, both short-term and long-term. For the short term, it can be concluded that the random walk model has the best forecasting accuracy. For the long term, however, the random walk model is beaten, and an equal accuracy test confirms that this difference really exists.
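The abstract does not name its equal accuracy test; a Diebold-Mariano-style statistic on squared-error loss differentials is a common choice for comparing a model against the random walk, and can be sketched with hypothetical forecast errors (the numbers below are invented):

```python
import math

def dm_statistic(errors_a, errors_b):
    """Diebold-Mariano-style statistic on the squared-error loss differential.
    Values far from zero (roughly |stat| > 2) suggest the two forecasts are
    not equally accurate; a negative value favours forecast a."""
    d = [ea ** 2 - eb ** 2 for ea, eb in zip(errors_a, errors_b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical long-horizon errors: random walk vs. a multivariate model
rw_errors  = [0.40, 0.50, 0.45, 0.60, 0.55, 0.50]
vec_errors = [0.10, 0.20, 0.15, 0.30, 0.25, 0.20]
print(round(dm_statistic(vec_errors, rw_errors), 2))
```

A strongly negative statistic here would indicate the multivariate model's errors are significantly smaller, mirroring the long-term result reported in the abstract. (The full Diebold-Mariano test also corrects the variance for autocorrelation at multi-step horizons, which this sketch omits.)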
10

Orrebrant, Richard, and Adam Hill. "Increasing sales forecast accuracy with technique adoption in the forecasting process." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Industriell organisation och produktion, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-24038.

Abstract:
Purpose - The purpose of this thesis is to investigate how to increase sales forecast accuracy.   Methodology - To fulfil the purpose, a case study was conducted. To collect data from the case study, the authors performed interviews and gathered documents. The empirical data were then analysed and compared with the theoretical framework.   Result - The result shows that inaccuracies in forecasts do not necessarily stem from the forecasting technique but can result from an unorganized forecasting process and an inefficient information flow. The result further shows that to improve a forecast's accuracy it is important to review the information flow not only within the company but in the supply chain as a whole. The result also shows that time series can generate more accurate sales forecasts than qualitative techniques alone. It is, however, necessary to use a qualitative technique when creating time series. Time series take only time and sales history into account when forecasting; expertise regarding consumer behaviour, promotion activity, and so on is therefore needed. It is also crucial to use qualitative techniques when selecting a time series technique to achieve higher sales forecast accuracy. Personal expertise and experience are needed to identify whether there is enough sales history, how much the sales fluctuate, and whether there will be any seasonality in the forecast. If companies gain knowledge about the benefits of each technique, the combination can improve the forecasting process and increase the accuracy of the sales forecast.   Conclusions - This thesis, with support from a case study, shows how time series and qualitative techniques can be combined to achieve higher accuracy. Companies that want to achieve higher accuracy need to know how the different techniques work and what needs to be taken into account when creating a sales forecast.
It is also important to have knowledge about the benefits of a well-designed forecasting process, and to do that, improving the information flow both within the company and the supply chain is a necessity.      Research limitations – Because there are several different techniques to apply when creating a sales forecast, the authors could have involved more techniques in the investigation. The thesis work could also have used multiple case study objects to increase the external validity of the thesis.
11

PARKASH, MOHINDER. "THE IMPACT OF A FIRM'S CONTRACTS AND SIZE ON THE ACCURACY, DISPERSION AND REVISIONS OF FINANCIAL ANALYSTS' FORECASTS: A THEORETICAL AND EMPIRICAL INVESTIGATION." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184093.

Abstract:
The evidence presented in this study suggests that the dispersion, accuracy and transitory component in revisions of financial analysts' forecasts (FAF) are determined by production/investment/financing decisions, accounting choices, and firm-specific characteristics including the type of control, debt-to-equity ratio and size of the firm. Firms with manager control (owner control), a high (low) debt-to-equity ratio and large (small) size are hypothesized to have higher (lower) dispersion, forecast error and transitory components in revisions of FAF. These hypotheses are motivated by the contracting cost and political visibility theories. The information availability theory is included as a contrast to the political visibility hypothesis. The information availability hypothesis predicts that large (small) firms have lower (higher) dispersion, forecast error and transitory component in revisions of FAF. The regression results are sensitive to deflated and undeflated measures of the dispersion and accuracy of FAF and size of the firm. The appropriateness of the two measures of firm size, the book value of total assets and the market value of common stock plus long-term debt, as well as the deflated and undeflated measures of dispersion and accuracy of FAF, is investigated. It is concluded that deflated measures of the dispersion and forecast errors and the market value as a measure of firm size are misspecified in the present context. The current year forecast revisions are assumed to consist of transitory and permanent components. The second year forecast revisions are used to represent the long-term forecast revisions and serve as a control for the permanent component of forecast revisions. The regression results are consistent with the contracting and political visibility hypotheses. The firm-specific characteristics are hypothesized to influence forecast errors and dispersion directly and indirectly through business risk and accounting policy choices.
The links between firm characteristics and business risk, accounting policy choices, dispersion and forecast errors are established and path analysis is used to test these relationships. These relationships are observed to be consistent with predictions and significant.
12

Lambert, Tara Denise Barton. "Accuracy of Atlantic and Eastern North Pacific tropical cyclone intensity guidance." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Sep%5FLambert.pdf.

Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography)--Naval Postgraduate School, September 2005.
Thesis Advisor(s): Russell L. Elsberry. Includes bibliographical references (p.115-117). Also available online.
13

Leibrecht, Markus. "Zur Präzision der Steuerprognose in Österreich." Austrian Statistical Society, 2004. http://epub.wu.ac.at/5635/1/445%2D1337%2D1%2DSM.pdf.

Abstract:
The paper analyses the precision of the revenue forecasts for important federal taxes in Austria for the years 1976 to 2002, narrowing a research and information gap in the literature. A forecast is considered precise if it is both unbiased and accurate on average. Measured against gross total tax revenue, the forecast of federal tax revenue in Austria is precise. Nevertheless, improvements are possible, because the forecasts of several important individual taxes are imprecise. The organisation of the forecasting process, the forecasting methods used, input-tax fraud, the outsourcing of activities from the federal budget, and new forms of municipal financing are isolated as possible causes of the forecast errors. Precision should be improved by combining several independent forecasts into one overall forecast, by documenting the forecasting process more thoroughly, by using univariate time-series methods to forecast revenue from the assessed income tax and the corporate income tax, and by reducing (value-added tax) or increasing (mineral-oil tax) the revenue elasticities used.
14

Silva, Rodolfo Benedito da. "Previsão de demanda no setor de suplementação animal usando combinação e ajuste de previsões." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/98117.

Abstract:
Demand forecasting plays a fundamentally important role within organisations, because it provides an advance statement of the volume that will be demanded in the future, allowing managers to make more consistent decisions and to allocate resources efficiently to meet that demand. However, efficient decision-making and resource allocation require increasingly accurate forecasts. In this context, the combination of forecasts has been widely used to improve accuracy and, consequently, the precision of the forecasts. This study adapts a forecasting model to estimate the demand for animal-supplementation products through the combination of forecasts, considering the variables that may affect demand and the opinion of experts. The work is structured as two papers. The first prioritises and selects, through the Analytic Hierarchy Process (AHP), variables that may affect demand so that they can be evaluated in the regression modelling of the second paper. The second paper adapts the composite forecasting model proposed by Werner (2004) in search of a more accurate final forecast. The results obtained reinforce that combined forecasts outperform individual forecasts on the accuracy measures MAPE, MAE and MSE.
15

King, Julie. "Colour forecasting : an investigation into how its development and use impacts on accuracy." Thesis, University of the Arts London, 2011. http://ualresearchonline.arts.ac.uk/5657/.

Abstract:
Colour forecasting is a sector of trend forecasting which is arguably the most important link in the product development process, yet little is known about it, the methodology behind its development or its accuracy. It is part of a global trend forecasting industry valued recently at $36bn, providing information which is developed commercially eighteen months to two years ahead of the season. Used throughout the garment supply chain, by the yarn and fibre manufacturers, the fabric mills, garment designers and retailers, it plays a pivotal role in the fashion and textile industry, but appears in many different forms. Colour forecasts were first commercially produced in 1917, but became more widely used during the 1970s, and in recent years digital versions of colour forecasts have become increasingly popular. The investigation aimed to establish the historical background of the industry, mindful of the considerable changes to fashion manufacturing and retailing in recent decades. For the purposes of the investigation, a period spanning 25 years was selected, from 1985 to 2010. In reviewing the available literature, and the methodologies currently used in developing forecasting information, it became clear that there was a view that the process is very intuitive, and thus a lack of in depth academic literature. This necessitated a considerable quantity of primary research in order to fill the gaps in the knowledge regarding the development, use and accuracy of colour forecasting. A mixed method approach to primary research was required to answer the aim of the thesis, namely to investigate how colour forecasts are compiled, and examine their use, influence and accuracy within the fashion and textiles industry, suggesting methods for developing more accurate forecasts in the future. Interviews were conducted with industry practitioners comprising forecasters, designers and retailers to better understand how colour was developed and used within industry. 
Two longitudinal studies were carried out with the two largest UK clothing retailers to map their development and use of colour palettes, and understand better how colour contributes to the critical path and supply chain. Two colour development meetings were observed, one with a commercial colour forecaster, the other with an industry association, and two colour archives were studied to establish whether or not any identifiable and predictable colour cycles existed. Data from the interviews and longitudinal studies were analysed using a grounded approach, and revealed some new insights into the influences upon the development of colour forecasts both commercially and from the retailer's perspective. The sell through rates of merchandise, EPOS analysis and range of practices between those interviewed and the two retailers studied provided an interesting insight into working practices and how colour forecasting information is changed when used by the retailers. It was found that a group of core colours existed, which were used season after season, and consistently demonstrated a high sell through rate, such as black, white, grey and navy. In order to establish whether or not colour cycles were consistently predictable in their repetition, two colour forecasting archives were assessed. If predictable colour cycles existed, they would be a useful tool in developing more accurate forecasts. Unfortunately this was not the case, as no clear colour cycles were found. However, the archive, together with evidence from the retailers demonstrated the 'lifecycle' of fashion colours was longer than expected, as they took time to phase in and out. It was concluded that in general the less fashion led brands used their own signature colours and were able to develop colour palettes far later in the product development timeline. 
This approach could be adopted more widely by retailers and designers as it was discovered that although accuracy rates for colour forecasts are generally accepted to be around 80%, the commercial forecasters provide colour update cards closer to the season where at least 40% of the colours are changed. Very early information, two years ahead of the season is no longer necessary in the contemporary fashion and textiles industry.
16

Ng, Yuen-yuen, and 吳淵源. "Construction price forecasting: an empirical study on improving estimating accuracy for building works." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31251390.

17

Ng, Yuen-yuen. "Construction price forecasting : an empirical study on improving estimating accuracy for building works /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B25947837.

18

Dyussekeneva, Karima. "New product sales forecasting : the relative accuracy of statistical, judgemental and combination forecasts." Thesis, University of Bath, 2011. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.550612.

Abstract:
This research investigates three approaches to new product sales forecasting: statistical, judgmental and the integration of these two approaches. The aim of the research is to find a simple, easy-to-use, low cost and accurate tool which can be used by managers to forecast the sales of new products. A review of the literature suggested that the Bass diffusion model was an appropriate statistical method for new product sales forecasting. For the judgmental approach, after considering different methods and constraints, such as bias, complexity, lack of accuracy, high cost and time involvement, the Delphi method was identified from the literature as a method with the potential to mitigate bias and produce accurate predictions at a low cost in a relatively short time. However, the literature also revealed that neither of the methods, statistical or judgmental, can be guaranteed to give the best forecasts independently, and a combination of them is often the best approach to obtaining the most accurate predictions. The study aims to compare these three approaches by applying them to actual sales data. To forecast the sales of new products, the Bass diffusion model was fitted to the sales history of similar (analogous) products that had been launched in the past, and the resulting model was used to produce forecasts for the new products at the time of their launch. These forecasts were compared with forecasts produced through the Delphi method and also through a combination of statistical and judgmental methods. All results were also compared to benchmark levels of accuracy, based on previous research and forecasts based on various combinations of the analogous products' historic sales data. Although no statistically significant difference was found in the accuracy of forecasts produced by the three approaches, the results were more accurate than those obtained using parameters suggested by previous researchers.
The limitations of the research are discussed at the end of the thesis, together with suggestions for future research.
19

Gramz, James. "Using Evolutionary Programming to increase the accuracy of an ensemble model for energy forecasting." Thesis, Marquette University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1554240.

Abstract:

Natural gas companies are always trying to increase the accuracy of their forecasts. We introduce evolutionary programming as an approach to forecast natural gas demand more accurately. The created Evolutionary Programming Engine and Evolutionary Programming Ensemble Model use the current GasDay models, along with weather and historical flow to create an overall forecast for the amount of natural gas a company will need to supply to their customers on a given day. The existing ensemble model uses the GasDay component models and then tunes their individual forecasts and combines them to create an overall forecast.

The inputs into the Evolutionary Programming Engine and Evolutionary Programming Ensemble Model were determined based on currently used inputs and domain knowledge about what variables are important for natural gas forecasting. The ensemble model design is based on if-statements that allow different equations to be used on different days to create a more accurate forecast, given the expected weather conditions.

This approach is compared to what GasDay currently uses based on a series of error metrics and comparisons on different types of weather days and during different months. Three different operating areas are evaluated, and the results show that the created Evolutionary Programming Ensemble Model is capable of creating improved forecasts compared to the existing ensemble model, as measured by Root Mean Square Error (RMSE) and Standard Error (Std Error). However, the if-statements in the ensemble models were not able to produce individually reasonable forecasts, which could potentially cause errant forecasts if a different set of if-statements is true on a given day.
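The if-statement ensemble described in this abstract can be sketched generically. The temperature regimes and coefficients below are invented for illustration and are not GasDay's actual component equations:

```python
import math

def demand_forecast(temp_f):
    """Piecewise ensemble: a different linear equation per weather regime.
    All thresholds and coefficients here are hypothetical."""
    if temp_f < 20:        # cold snap: strong heating load
        return 900 - 12 * temp_f
    elif temp_f < 55:      # shoulder weather
        return 700 - 8 * temp_f
    else:                  # warm: mostly non-heating baseload
        return 260

def rmse(actual, forecast):
    """Root mean square error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

temps  = [10, 30, 50, 70]
actual = [790, 455, 310, 255]          # hypothetical observed gas demand
preds  = [demand_forecast(t) for t in temps]
print(preds, round(rmse(actual, preds), 2))
```

Note that the branches are discontinuous at the thresholds (the forecast jumps from 660 to 540 as the temperature crosses 20 °F), which mirrors the abstract's caveat: individual branches may not be reasonable on their own even when the overall error improves.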

20

Ragnerstam, Elsa. "How to calculate forecast accuracy for stocked items with a lumpy demand : A case study at Alfa Laval." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-30961.

Abstract:
Inventory management is an important part of a well-functioning logistics operation. Nearly all the literature on optimal inventory management uses criteria of cost minimization and profit maximization. To have a well-functioning forecasting system it is important to have a balance in the inventory, but different factors can result in uncertainties and make this balance difficult to maintain. One important factor is customer demand. Over half of all stocked items are held in stock to buffer against irregular orders and uncertain demand. Customer demand can be categorized into four categories: smooth, erratic, intermittent and lumpy. Items with a lumpy demand, i.e. items that are both intermittent and erratic, are the hardest to manage and to forecast, because the quantity demanded varies a lot and these items may also have periods of zero demand. It is therefore a challenge for companies to forecast these items: random values appear at random intervals, leaving many periods with zero demand. Due to the lumpy demand, an ongoing problem for most organizations is the inaccuracy of forecasts. It is almost impossible to produce exact forecasts; no matter how good the forecasts are or how complex the forecasting techniques are, the instability of the markets ensures that forecasts will always be wrong and that errors will always exist. We therefore need to accept this but still work with the issue to keep the errors as small as possible. The purpose of measuring forecast errors is to identify single random errors as well as systematic errors that show whether the forecast is systematically too high or too low. Calculating forecast errors and measuring forecast accuracy also helps to dimension the safety stock and to check that the forecast errors stay within acceptable error margins.
The research questions answered in this master thesis are: How should one calculate forecast accuracy for stocked items with a lumpy demand? How do companies measure forecast accuracy for stocked items with a lumpy demand, and what are the differences between the methods? What kind of information does one need to apply these methods? To collect data and answer the research questions, a literature study has been made to compare how different researchers and authors treat this specific topic. Two case studies have also been made: first, a benchmarking process to compare how different companies work with this issue, and second, a hypothesis test of the hypothesis derived from the literature review and the benchmarking process. The analysis of the hypothesis test finally led to the conclusion that a combination of the measurements WAPE, Weighted Absolute Forecast Error, and CFE, Cumulative Forecast Error, is a suitable way to calculate forecast accuracy for items with a lumpy demand. The keywords used to search for scientific papers were: lumpy demand, forecast accuracy, forecasting, forecast error.
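The WAPE/CFE combination the thesis arrives at can be sketched in a few lines. This is a hypothetical illustration with invented data and function names, not code from the thesis:

```python
def wape(actual, forecast):
    # WAPE: total absolute error divided by total actual demand.
    # Summing before dividing avoids division by zero in the many
    # zero-demand periods that characterise lumpy items.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)

def cfe(actual, forecast):
    # CFE: signed errors accumulated over time; a value far from zero
    # signals systematic bias (forecast consistently too high or too low).
    return sum(a - f for a, f in zip(actual, forecast))

# Lumpy demand: long runs of zeros punctuated by spikes.
actual   = [0, 0, 12, 0, 0, 7, 0, 25, 0, 0]
forecast = [2, 2, 5, 2, 2, 4, 2, 10, 2, 2]
print(round(wape(actual, forecast), 3))  # error relative to total demand
print(cfe(actual, forecast))             # positive: forecasts run too low
```

Using the two together captures both magnitude (WAPE) and bias (CFE), which is the point of the combination.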
APA, Harvard, Vancouver, ISO, and other styles
21

Blackerby, Jason S. "Accuracy of Western North Pacific tropical cyclone intensity guidance /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FBlackberry.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Usman, Adeem Syed 1975. "Demand forecasting accuracy in airline revenue management : analysis of practical issues with forecast error reduction." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/82802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Hines, Karen Anne. "Predicting Future Emotions from Different Points of View: The Influence of Imagery Perspective on Affective Forecasting Accuracy." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1282066755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Gong. "Improvement of Wind Forecasting Accuracy and its Impacts on Bidding Strategy Optimization for Wind Generation Companies." Diss., North Dakota State University, 2012. https://hdl.handle.net/10365/26815.

Full text
Abstract:
One major issue with wind generation is its intermittency and uncertainty, due to the highly volatile nature of the wind resource; this affects both the economics and the operation of wind farms and distribution networks. There is thus an urgent need to develop modeling methods for accurate and reliable forecasts of wind power generation. Meanwhile, along with the ongoing electricity market deregulation and liberalization, wind energy is expected to be auctioned directly in the wholesale market. This raises another issue of particular importance for wind generation companies: how to maximize profits by optimizing bids in the gradually deregulated electricity market, based on the improved wind forecasts. The main objective of this dissertation research is therefore to investigate and develop reliable modeling methods for tackling these two issues. To reach this objective, three main research tasks are identified and accomplished. Task 1 concerns testing forecasting models for wind speed and power. After a thorough investigation of currently available forecasting methods, several representative models, including autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models, are developed for short-term wind forecasting. The forecasting performances are evaluated and compared in terms of mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). The results reveal that no single model outperforms the others universally, which indicates the need to generate a single robust and reliable forecast by applying a post-processing method. Accordingly, a reliable and adaptive model for short-term forecasting of wind power is developed via adaptive Bayesian model averaging (BMA) algorithms in Task 2. Experiments are performed for both long-term wind assessment and short-term wind forecasting.
The results show that the proposed BMA-based model can always provide adaptive, reliable, and comparatively accurate forecast results in terms of MAE, RMSE, and MAPE. It also provides a unified approach to the challenging model-selection issue in wind forecasting applications. Task 3 concerns developing a modeling method for optimizing the wind power bidding process in the deregulated electricity wholesale market. Optimal bids on wind power must take into account the uncertainty in wind forecasts and wind power generation. This research investigates combining improved wind forecasts with agent-based models to optimize the bid and maximize the net earnings. The WSCC 9-bus 3-machine power system network and the IEEE 30-bus 9-GenCo power system network are adopted, and both single-sided and double-sided auctions are considered. The results demonstrate that improving wind forecasting accuracy increases the net earnings of wind generation companies, and that implementing agent learning algorithms improves the earnings further. The results also verify that agent-based simulation is a viable modeling tool for providing realistic insights into the complex interactions among different market participants and various market factors.
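The three comparison criteria named above are standard; a minimal sketch of the generic formulas (my own illustration with invented data, not code from the dissertation):

```python
import math

def mae(actual, forecast):
    # Mean Absolute Error: average error magnitude, in the data's units.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # Root Mean Square Error: like MAE but penalises large errors more.
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    # Mean Absolute Percentage Error: scale-free, but undefined when an
    # actual value is zero (a caveat during calm, zero-wind periods).
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

# Toy wind-speed series (m/s): observed vs. one-step-ahead forecasts.
obs = [8.0, 7.5, 9.2, 6.1]
fc  = [7.4, 8.0, 8.8, 6.6]
print(mae(obs, fc), rmse(obs, fc), mape(obs, fc))
```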
APA, Harvard, Vancouver, ISO, and other styles
25

Costantini, Mauro, Cuaresma Jesus Crespo, and Jaroslava Hlouskova. "Forecasting errors, directional accuracy and profitability of currency trading: The case of EUR/USD exchange rate." Wiley, 2016. http://dx.doi.org/10.1002/for.2398.

Full text
Abstract:
We provide a comprehensive study of out-of-sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. We use profit maximization measures based on directional accuracy and trading strategies in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, data snooping bias free tests are used. The results indicate that forecast combinations, in particular those based on principal components of forecasts, help to improve over benchmark trading strategies, although the excess return per unit of deviation is limited.
APA, Harvard, Vancouver, ISO, and other styles
26

Andersen, Frans, and David Fagersand. "Forecasting commodities : - A study of methods, interests and preception." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-230411.

Full text
Abstract:
This study aims to investigate reasons for variation in accuracy between different forecast methods by studying the choice of methods, learning processes, biases and opinions within the firms using them, enabling us to provide recommendations on how to improve accuracy within each forecast method. Eleven Swedish and international companies that regularly forecast commodity price levels have been interviewed. Since there is a cultural aspect to the development of forecast methods, the authors chose to conduct a qualitative study, using a semi-structured interview technique that makes it possible to illustrate company-specific determinants. The results show that the choice of methods, learning processes, biases and opinions all have potentially substantial implications for the accuracy achieved. The individual effect of each phenomenon on accuracy varies between method groups.
APA, Harvard, Vancouver, ISO, and other styles
27

Asar, Ozgur. "On Multivariate Longitudinal Binary Data Models And Their Applications In Forecasting." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614510/index.pdf.

Full text
Abstract:
Longitudinal data arise when subjects are followed over time. This type of data is typically dependent, because it includes repeated observations; this dependence is termed within-subject dependence. Often the scientific interest is in multiple longitudinal measurements, which introduce two additional types of association: between-response and cross-response temporal dependence. Only statistical methods that take these association structures into account can yield reliable and valid statistical inferences. Although methods for univariate longitudinal data have been studied extensively, multivariate longitudinal data still need more work. In this thesis, although we mainly focus on multivariate longitudinal binary data models, we also consider other types of response families where necessary. We extend work on multivariate marginal models, namely multivariate marginal models with response-specific parameters (MMM1), and propose multivariate marginal models with shared regression parameters (MMM2). Both of these models are based on generalized estimating equations (GEE) and are valid for several response families such as Binomial, Gaussian, Poisson, and Gamma. Two R packages, mmm and mmm2, are proposed to fit them, respectively. We further develop a marginalized multilevel model, namely the probit normal marginalized transition random effects model (PNMTREM), for multivariate longitudinal binary responses. In this model, the implicit function theorem is used, for the first time, to explicitly link the levels of marginalized multilevel models with transition structures. An R package, pnmtrem, is proposed to fit the model. PNMTREM is applied to data collected through the Iowa Youth and Families Project (IYFP). Five different models, including univariate and multivariate ones, are considered for forecasting multivariate longitudinal binary data. A comparative simulation study, which includes a model-independent data simulation process, is considered for this purpose.
Forecasts of the independent variables are taken into account as well. To assess the forecasts, several accuracy measures, such as the expected proportion of correct predictions (ePCP), the area under the receiver operating characteristic curve (AUROC), and the mean absolute scaled error (MASE), are considered. Mother's Stress and Children's Morbidity (MSCM) data are used to illustrate this comparison in real life. The results show that marginalized models yield better forecasting results than marginal models, and the simulation results are in agreement with these findings.
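Of the accuracy measures listed, MASE is perhaps the least familiar. A sketch of the usual definition, where forecast errors are scaled by the in-sample error of a naive benchmark (my own illustration, unrelated to the thesis's R packages):

```python
def mase(train, actual, forecast):
    # MASE: out-of-sample forecast errors scaled by the in-sample
    # mean absolute error of the one-step naive (random walk) forecast.
    # Values below 1 mean the forecast beats the naive benchmark.
    naive_mae = sum(abs(train[t] - train[t - 1])
                    for t in range(1, len(train))) / (len(train) - 1)
    forecast_mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return forecast_mae / naive_mae

train = [10, 12, 11, 13, 12]           # history used to fit the model
actual, forecast = [14, 13], [13, 13]  # hold-out values vs. predictions
print(mase(train, actual, forecast))
```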
APA, Harvard, Vancouver, ISO, and other styles
28

Ribeiro, Ramos Francisco Fernando, and fr1960@clix pt. "Essays in time series econometrics and forecasting with applications in marketing." RMIT University. Economics, Finance and Marketing, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20071220.144516.

Full text
Abstract:
This dissertation is composed of two parts, an integrative essay and a set of published papers. The essay and the collection of papers are placed in the context of the development and application of time series econometric models from the 1970s through 2005, with particular focus on the Marketing discipline. The main aim of the integrative essay is to model the effects of marketing actions on performance variables, such as sales and market share, in competitive markets. This research required the estimation of two kinds of time series econometric models: multivariate and multiple time series models. I use Autoregressive Integrated Moving Average (ARIMA) intervention models and the Pierce and Haugh statistical test to model the impact of a single marketing instrument, mainly price promotions, to measure own and cross short-term sales effects, and to study asymmetric marketing competition. I develop and apply Vector AutoRegressive (VAR) and Bayesian Vector AutoRegressive (BVAR) models to estimate dynamic relationships in the market and to forecast market share. BVAR models are especially advantageous because they contain all relevant dynamic and interactive effects: they accommodate not only classical competitive reaction effects, but also own and cross market-share brand feedback effects and internal decision rules, and they provide substantively useful insights into the dynamics of demand. The integrative essay is structured in four main parts. The introduction sets out the basic ideas behind the published papers, with particular focus on the motivation of the essay, the types of competitive reaction effects analysed, an overview of time series econometric models in marketing, a short discussion of the basic methodology used in the research, and a brief description of the inter-relationships across the published papers and the structure of the essay.
The discussion is centred on how to model the effects of marketing actions at the selective demand (brand) level and at the primary demand (product) level. At the brand level I discuss the research contribution of my work on (i) modelling short-term promotional effects of price and non-price actions on sales and market share for consumer packaged goods, without competition, (ii) measuring own and cross short-term sales effects of advertising and price, in particular cross-lead and lag effects, asymmetric sales behaviour and competition without retaliatory actions, in an automobile market, (iii) modelling short- and long-term marketing-mix effectiveness on market shares in a car market, (iv) identifying the best method to forecast market share, and (v) studying causal linkages at different time horizons between sales and marketing activity for a particular brand. At the product or commodity level, I propose a way to model the flows of tourists that come from different origins (countries) to the same destination country as market segments defining the primary demand of a commodity - the product
APA, Harvard, Vancouver, ISO, and other styles
29

Frank, James Allen. "An assessment of the forecasting accuracy of the structured accession planning system for officers (STRAP-O) model." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA273000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lobban, Stacey, and Hana Klimsova. "Demand Forecasting : A study at Alfa Laval in Lund." Thesis, Växjö University, School of Management and Economics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2127.

Full text
Abstract:

Accurate forecasting is a real problem at many companies, including Alfa Laval in Lund. Alfa Laval experiences problems forecasting future raw material demand. Management is aware that the forecasting methods used today can be improved or replaced by others. A change could lead to better forecasting accuracy and lower errors, which mean less inventory, shorter cycle times and better customer service at lower costs.

The purpose of this study is to analyze Alfa Laval’s current forecasting models for demand of raw material used for pressed plates, and then determine if other models are better suited for taking into consideration trends and seasonal variation.

APA, Harvard, Vancouver, ISO, and other styles
31

Knost, Benjamin R. "Evaluating the Accuracy of Pavement Deterioration Forecasts: Application to United States Air Force Airfields." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1480665140928498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zhai, Yuzheng. "Improving scalability and accuracy of text mining in grid environment." Connect to thesis, 2009. http://repository.unimelb.edu.au/10187/5927.

Full text
Abstract:
The advance in technologies such as massive storage devices and high-speed internet has led to an enormous increase in the volume of available documents in electronic form. These documents represent information in a complex and rich manner that cannot be analysed using conventional statistical data mining methods. Consequently, text mining has developed as a growing new technology for discovering knowledge from textual data and managing textual information. Processing and analysing textual information can yield valuable and important insights, yet these tasks also require an enormous amount of computational resources due to the sheer size of the data available. Therefore, it is important to enhance the existing methodologies to achieve better scalability, efficiency and accuracy.
The emerging Grid technology shows promising results in solving the problem of scalability by splitting the work of text clustering algorithms into a number of jobs, each executed separately and simultaneously on different computing resources. This allows a substantial decrease in processing time while maintaining a similar level of quality.
To improve the quality of the text clustering results, a new document encoding method is introduced that takes into consideration the semantic similarities of words. In this way, documents that are similar in content are more likely to be grouped together.
One of the ultimate goals of text mining is to help us gain insight into a problem and to assist in the decision-making process together with other sources of information. Hence we tested the effectiveness of incorporating text mining methods in the context of stock market prediction. This is achieved by integrating the outcomes obtained from text mining with those from data mining, which results in a more accurate forecast than using either method alone.
APA, Harvard, Vancouver, ISO, and other styles
33

Boulin, Juan Manuel. "Call center demand forecasting : improving sales calls prediction accuracy through the combination of statistical methods and judgmental forecast." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59159.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-81).
Call centers are important for developing and maintaining healthy relationships with customers. At Dell, call centers are also at the core of the company's renowned direct model. For sales call centers in particular, the impact of proper operations is reflected not only in long-term relationships with customers, but directly on sales and revenue. Adequate staffing and proper scheduling are key factors for providing an acceptable service level to customers. In order to staff call centers appropriately to satisfy demand while minimizing operating expenses, an accurate forecast of this demand (sales calls) is required. During fiscal year 2009, inaccuracies in consumer sales call volume forecasts translated into approximately $1.1M in unnecessary overtime expenses and $34.5M in lost revenue for Dell. This work evaluates different forecasting techniques and proposes a comprehensive model to predict sales call volume based on the combination of ARIMA models and judgmental forecasting. The proposed methodology improves the accuracy of weekly forecasted call volume from 23% to 46% and of daily volume from 27% to 41%. Further improvements are easily achievable through the adjustment and projection processes introduced herein that rely on contextual information and the expertise of the forecasting team.
by Juan Manuel Boulin.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhao, Richard Folger. "Can model-based forecasts predict stock market volatility using range-based and implied volatility as proxies?" Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/13917.

Full text
Abstract:
Mestrado em Finanças (Master's in Finance)
This thesis attempts to evaluate the performance of parametric time series models and the RiskMetrics methodology in predicting volatility. Range-based price estimators and model-free implied volatility are used as proxies for actual ex-post volatility, with data collected from ten prominent global volatility indices. To better understand how volatility behaves, different models from the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) class were selected, with Normal, Student-t and Generalized Error Distribution (GED) innovations. A fixed rolling-window methodology was used to estimate the models and predict the movements of volatility, and their forecasting performances were subsequently evaluated using loss functions and regression analysis. The findings are not clear-cut: there does not seem to be a single best-performing GARCH model. Depending on the indices chosen, for the range-based estimator the APARCH(1,1) model with Normal distribution overall outperforms the other models, with the notable exceptions of HSI and KOSPI, where RiskMetrics seems to take the lead. When it comes to implied volatility prediction, GARCH(1,1) with Student-t innovations performs relatively well, with the exception of the UKX and SMI indices, where GARCH(1,1) with Normal innovations and with GED, respectively, seem to do well. Moreover, we also find evidence that all volatility forecasts are somewhat biased, but they carry information about future volatility.
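For orientation, the conditional-variance recursion at the heart of the GARCH(1,1) models compared here can be sketched as follows. This is a generic illustration with made-up parameter values, not the thesis's estimation code:

```python
def garch11_path(returns, omega, alpha, beta):
    # GARCH(1,1): sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1],
    # initialised at the unconditional variance omega / (1 - alpha - beta).
    sigma2 = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

def garch11_forecast(returns, sigma2, omega, alpha, beta):
    # One-step-ahead variance forecast: the quantity that is evaluated
    # against the range-based and implied-volatility proxies.
    return omega + alpha * returns[-1] ** 2 + beta * sigma2[-1]

returns = [1.0, -2.0, 0.5]
params = dict(omega=0.1, alpha=0.1, beta=0.8)
path = garch11_path(returns, **params)
next_var = garch11_forecast(returns, path, **params)
print(path, next_var)
```

In practice the parameters are estimated by maximum likelihood under the chosen innovation distribution; the recursion itself is the same in each case.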
APA, Harvard, Vancouver, ISO, and other styles
35

Pannell, J. C. "An investigation into improving the accuracy of real-time flood forecasting techniques for the Onkaparinga River catchment, South Australia /." Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09ENS/09ensp194.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Jadevicius, Arvydas. "An evaluation of the use of combination techniques in improving forecasting accuracy for commercial property cycles in the UK." Thesis, Edinburgh Napier University, 2014. http://researchrepository.napier.ac.uk/Output/7558.

Full text
Abstract:
In light of the financial and property crisis of 2007-2013 it is difficult to ignore the existence of cycles in the general business sector, as well as in building and property. Moreover, this issue has grown to have significant importance in the UK, as the UK property market has been characterized by boom and bust cycles with a negative impact on the overall UK economy. An understanding of property cycles can therefore be a determinant of success for anyone working in the property industry. This thesis reviews chronological research on the subject, which stretches over a century, characterises the major publications and commentary on the subject, and discusses their major implications. Subsequently, the thesis investigates property forecasting accuracy and how to improve it. As the research suggests, commercial property market modelling and forecasting has been the subject of a number of studies, which have led to the development of various forecasting models, ranging from simple Single Exponential Smoothing specifications to more complex econometric techniques with stationary data. However, as the findings suggest, despite these advancements in commercial property cycle modelling and forecasting, there remains a degree of inaccuracy between model outputs and actual property market performance. The research therefore presents the principle of Combination Forecasting as a technique for achieving greater predictive accuracy. It subsequently assesses whether combination forecasts from different forecasting techniques are better than single-model outputs, examining which of them - combination or single forecast - fits the UK commercial property market better, and which of these options forecasts best. As the results of the study suggest, Combination Forecasting, and Regression (OLS) based Combination Forecasting in particular, is useful for improving the forecasting accuracy of commercial property cycles in the UK.
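The regression-based combination the thesis favours can be illustrated schematically: regress realised values on the competing forecasts over a training window and use the fitted coefficients as combination weights. A generic numpy sketch with invented data, not the author's code:

```python
import numpy as np

# Two hypothetical single-model forecasts of property returns,
# and the values realised over the same periods (constructed here
# so that the true combination weights are known to be 0.3 and 0.7).
f1 = np.array([10.0, 12.0, 11.0, 13.0, 14.0])
f2 = np.array([11.0, 11.0, 12.0, 14.0, 13.0])
realised = 0.3 * f1 + 0.7 * f2

# OLS of realised values on an intercept and the two forecasts.
X = np.column_stack([np.ones_like(f1), f1, f2])
weights, *_ = np.linalg.lstsq(X, realised, rcond=None)

# The fitted weights then combine the single-model forecasts.
combined = X @ weights
print(weights.round(3))  # intercept near 0, weights near 0.3 and 0.7
```

Out of sample, the same weights are applied to fresh single-model forecasts; the intercept absorbs any common bias.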
APA, Harvard, Vancouver, ISO, and other styles
37

Cordeiro, Clara Maria Henrique. "Métodos de reamostragem em modelos de previsão." Doctoral thesis, ISA/UTL, 2011. http://hdl.handle.net/10400.5/3866.

Full text
Abstract:
Doutoramento em Matemática e Estatística (PhD in Mathematics and Statistics) - Instituto Superior de Agronomia
The study of a time series has forecasting as one of its primary objectives. Exponential smoothing methods (EXPOS) stand out for their versatility and the wide choice of models they include; their widespread dissemination makes them the most widely used methods for modelling and forecasting time series. An area that has given great support to statistical inference is computational statistics, specifically the bootstrap methodology; in time series this methodology is most frequently applied through residual resampling. An automatic procedure that combines exponential smoothing methods and the bootstrap methodology was developed in the R environment. This procedure (Boot.EXPOS) selects the most appropriate model from a wide range of models and performs an autoregressive (AR) adjustment of the EXPOS residuals. Once the stationarity of the residuals has been guaranteed, the AR residuals are resampled and the original series is reconstructed using the estimated components of the initial model. Point forecasts and prediction intervals are also provided. NABoot.EXPOS is an extension of this procedure that allows for the detection, estimation and imputation of missing values. An exhaustive study of several types of real time series from forecasting competitions is presented in order to compare our procedures.
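The residual-resampling idea can be sketched for the simplest EXPOS member, simple exponential smoothing. This is a schematic Python illustration of bootstrap prediction intervals; Boot.EXPOS itself works in R and additionally fits an AR model to the residuals before resampling, a step this sketch omits:

```python
import random

def ses_fit(series, alpha=0.3):
    # Simple exponential smoothing: returns one-step fitted values
    # and the final level, which serves as the point forecast.
    level = series[0]
    fitted = []
    for y in series:
        fitted.append(level)
        level = alpha * y + (1 - alpha) * level
    return fitted, level

def bootstrap_interval(series, alpha=0.3, reps=1000, cover=0.9, seed=42):
    fitted, point = ses_fit(series, alpha)
    residuals = [y - f for y, f in zip(series, fitted)]
    # Resample residuals onto the point forecast to build an
    # empirical distribution of next-period values.
    rng = random.Random(seed)
    draws = sorted(point + rng.choice(residuals) for _ in range(reps))
    lo = draws[int((1 - cover) / 2 * reps)]
    hi = draws[int((1 + cover) / 2 * reps) - 1]
    return point, lo, hi

series = [5, 7, 6, 8, 7, 9, 8, 10]
point, lo, hi = bootstrap_interval(series)
print(point, lo, hi)
```

The empirical quantiles of the resampled draws play the role of the prediction interval, without assuming a distribution for the errors.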
APA, Harvard, Vancouver, ISO, and other styles
38

Fenlason, Joel W. "Accuracy of tropical cyclone induced winds using TYDET at Kadena AB." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Mar%5FFenlason.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Grek, Åsa. "Forecasting accuracy for ARCH models and GARCH (1,1) family : Which model does best capture the volatility of the Swedish stock market?" Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-37495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Badenhorst, Dirk Jakobus Pretorius. "Improving the accuracy of prediction using singular spectrum analysis by incorporating internet activity." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80056.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Researchers and investors have been attempting to predict stock market activity for years. The possible financial gain that accurate predictions would offer lit a flame of greed and drive that would inspire all kinds of researchers. However, after many of these researchers have failed, they started to hypothesize that a goal such as this is not only improbable, but impossible. Previous predictions were based on historical data of the stock market activity itself and would often incorporate different types of auxiliary data. This auxiliary data ranged as far as imagination allowed in an attempt to find some correlation and some insight into the future, that could in turn lead to the figurative pot of gold. More often than not, the auxiliary data would not prove helpful. However, with the birth of the internet, endless amounts of new sources of auxiliary data presented itself. In this thesis I propose that the near in finite amount of data available on the internet could provide us with information that would improve stock market predictions. With this goal in mind, the different sources of information available on the internet are considered. Previous studies on similar topics presented possible ways in which we can measure internet activity, which might relate to stock market activity. These studies also gave some insights on the advantages and disadvantages of using some of these sources. These considerations are investigated in this thesis. Since a lot of this work is therefore based on the prediction of a time series, it was necessary to choose a prediction algorithm. Previously used linear methods seemed too simple for prediction of stock market activity and a new non-linear method, called Singular Spectrum Analysis, is therefore considered. A detailed study of this algorithm is done to ensure that it is an appropriate prediction methodology to use. 
Furthermore, since we will be including auxiliary information, multivariate extensions of this algorithm are considered as well. Some of the inaccuracies and inadequacies of the current multivariate extensions are studied, and an alternative multivariate technique that addresses them is proposed and tested. With the appropriate methodology and sources of auxiliary information chosen, a concluding chapter examines whether predictions that include auxiliary information (obtained from the internet) improve on baseline predictions that are based simply on historical stock market data.
APA, Harvard, Vancouver, ISO, and other styles
41

Urbanec, Matěj. "Kvantitativní analýza predikce poptávky u vybrané společnosti." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193095.

Full text
Abstract:
This thesis deals with demand forecasting in a company, focusing especially on quantitative prediction methods. The theoretical part introduces demand forecasting and its place and importance in a company. It then presents various qualitative and quantitative demand-forecasting methods and methods for measuring prediction accuracy. The practical part applies several methods to real data from the company: moving averages, exponential smoothing, the Holt and Holt-Winters methods, and simple linear regression. The accuracy of the methods is compared, and the most accurate method is then used to predict demand for the year 2015.
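The abstract above names the standard toolkit (moving averages, exponential smoothing, Holt/Holt-Winters) and an accuracy criterion for choosing among them. As a hedged sketch of that workflow, the snippet below runs simple exponential smoothing over an invented demand series and scores it with MAPE; the data, the smoothing constant `alpha=0.3`, and the choice of MAPE are illustrative assumptions, not the thesis's actual inputs.

```python
# Sketch: simple exponential smoothing scored with MAPE (illustrative data).

def exp_smooth_forecast(series, alpha=0.3):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level = series[0]
    forecasts = [level]           # forecast for t=1 is the initial level
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)   # forecast for the next period
    # in-sample one-step forecasts (aligned with series[1:]), plus next-period forecast
    return forecasts[:-1], forecasts[-1]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

demand = [120, 132, 128, 141, 150, 147, 158, 163]   # hypothetical monthly demand
in_sample, next_f = exp_smooth_forecast(demand, alpha=0.3)
print(round(mape(demand[1:], in_sample), 2), round(next_f, 1))
```

In the thesis's setup, the same scoring loop would be repeated for each candidate method and the method with the lowest error used for the 2015 prediction.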
APA, Harvard, Vancouver, ISO, and other styles
42

Kumar, Akhil. "Budget-Related Prediction Models in the Business Environment with Special Reference to Spot Price Predictions." Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc331533/.

Full text
Abstract:
The purpose of this research is to study and improve decision accuracy in the real world. Spot price prediction of petroleum products, in a budgeting context, is the task chosen to study prediction accuracy. Prediction accuracy of executives in a multinational oil company is examined. The Brunswik Lens Model framework is used to evaluate prediction accuracy. Predictions of the individuals, the composite group (mathematical average of the individuals), the interacting group, and the environmental model were compared. Predictions of the individuals were obtained through a laboratory experiment in which experts were used as subjects. The subjects were required to make spot price predictions for two petroleum products. Eight predictor variables that were actually used by the subjects in real-world predictions were elicited through an interview process. Data for a 15 month period were used to construct 31 cases for each of the two products. Prediction accuracy was evaluated by comparing predictions with the actual spot prices. Predictions of the composite group were obtained by averaging the predictions of the individuals. Interacting group predictions were obtained ex post from the company's records. The study found the interacting group to be the least accurate. The implication of this finding is that even though an interacting group may be desirable for information synthesis, evaluation, or working toward group consensus, it is undesirable if prediction accuracy is critical. The accuracy of the environmental model was found to be the highest. This suggests that apart from random error, misweighting of cues by individuals and groups affects prediction accuracy. Another implication of this study is that the environmental model can also be used as an additional input in the prediction process to improve accuracy.
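The "composite group" in the abstract above is the mathematical average of the individual experts' predictions. A minimal sketch of why such a composite can beat the typical individual, using invented spot-price numbers and mean absolute error (the Brunswik Lens Model analysis itself involves considerably more than this):

```python
# Sketch: composite (averaged) forecasts vs. individual experts, invented data.

actual = [31.0, 29.5, 30.2, 28.8]          # actual spot prices (hypothetical)
experts = {
    "A": [30.0, 30.5, 29.0, 29.5],
    "B": [32.5, 28.0, 31.0, 27.5],
    "C": [31.5, 29.0, 30.5, 29.0],
}

def mae(pred, act):
    return sum(abs(p - a) for p, a in zip(pred, act)) / len(act)

# Composite group: simple mean of the individual predictions per period.
composite = [sum(vals) / len(vals) for vals in zip(*experts.values())]

individual_maes = {name: mae(p, actual) for name, p in experts.items()}
composite_mae = mae(composite, actual)
# Averaging cancels idiosyncratic (uncorrelated) errors, so the composite is
# often at least as accurate as the typical individual.
print(individual_maes, round(composite_mae, 3))
```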
APA, Harvard, Vancouver, ISO, and other styles
43

Guragai, Binod. "Firm Performance and Analyst Forecast Accuracy Following Discontinued Operations: Evidence from the Pre-SFAS 144 and SFAS 144 Eras." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984135/.

Full text
Abstract:
Because of the non-recurring and transitory nature of discontinued operations, accounting standards require that the results of discontinued operations be separately reported on the income statement. Prior accounting literature supports the view that discontinued operations are non-recurring or transitory in nature, and also suggests that income classified as transitory has minimal relevance in firm valuation. The finance and management literature, however, suggests that firms discontinue operations to strategically utilize their scarce resources. Assuming that discontinued operations are a result of managerial motives to strategically concentrate resources into remaining continued operations, this dissertation examines the informativeness of discontinued operations. In doing so, this dissertation empirically tests the financial performance, investment efficiency, valuation, and analyst forecast accuracy effects of discontinued operations. In 2001, the Financial Accounting Standards Board's (FASB) Statement of Financial Accounting Standards (SFAS) 144 (hereafter SFAS 144) replaced the Accounting Principles Board's Opinion 30 (hereafter APB 30) and broadened the scope of divestiture transactions to be presented in discontinued operations. Some stakeholders of financial statements argued that discontinued operations were less decision-useful in the SFAS 144 era because too many transactions that do not represent a strategic shift in operations were separately stated as discontinued operations on the income statement. Given the possibility that the discontinued operations reported in the SFAS 144 era may not reflect a major strategic reallocation of resources, this dissertation examines whether the relationships between discontinued operations, firm performance, investment efficiency, and analyst forecast accuracy differ between the pre-SFAS 144 and SFAS 144 eras.
Using a sample of firms that discontinued operations between 1990 and 2012, this dissertation finds limited evidence that firms experience improvement in financial performance following discontinued operations, and that such improvement is observed only in the pre-SFAS 144 era. The results also suggest that any documented improvement in financial performance is conditional on the profitability of the operations discontinued, and provide no support for investment efficiency improvement following discontinued operations. Regarding the valuation implications of discontinued operations, this dissertation shows that investors differentially value profitable and loss discontinued operations. However, such valuation differences do not depend on the performance improvement implications. Finally, the results support that analyst forecast accuracy of earnings decreases following the reporting of discontinued operations, but such an effect is observed only in the pre-SFAS 144 era. This dissertation makes several contributions to the literature. First, it extends the literature on corporate divestment by using a large sample of discontinuation decisions and hand-collected data on the profitability of the operations discontinued. Second, it extends the literature on market studies by analyzing whether the market response to a discontinuation decision depends upon the profitability of the operation discontinued. Third, based upon a review of the literature, this appears to be the first study to examine the possibility that analyst forecast accuracy may change following a discontinuation decision. Finally, this study extends the literature that examines the effects of changes in accounting rules and regulations on the informativeness of financial statement items. These results should be of interest to investors, regulators, and analysts.
APA, Harvard, Vancouver, ISO, and other styles
44

Nizam, Anisulrahman. "Improving long range forecast errors for better capacity decision making." Honors in the Major Thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/893.

Full text
Abstract:
Long-range demand planning and capacity management play an important role for policy makers and airline managers alike. Each makes decisions about allocating appropriate levels of funds to align capacity with forecasted demand. Decisions made today can have long-lasting effects. Reducing errors in long-range demand forecasting will improve resource-allocation decision making. This research focuses on improving long-range demand planning and reducing forecast errors for passenger traffic in the U.S. domestic airline industry. It builds upon current forecasting models used for U.S. domestic airline passenger traffic, with the aim of improving on the forecast errors published by the Federal Aviation Administration (FAA). Using historical data, this study retroactively forecasts U.S. domestic passenger traffic, compares the forecasts to actual passenger traffic, and then compares forecast errors. Forecasting methods are tested extensively in order to identify new trends and causal factors that enhance forecast accuracy, thus increasing the likelihood of better capacity-management and funding decisions.
B.S.B.A.
Bachelors
Business Administration
Finance
APA, Harvard, Vancouver, ISO, and other styles
45

Duarte, Cláudia Filipa Pires. "Essays on mixed-frequency data : forecasting and unit root testing." Doctoral thesis, Instituto Superior de Economia e Gestão, 2016. http://hdl.handle.net/10400.5/11662.

Full text
Abstract:
Doutoramento em Economia (PhD in Economics)
Over the last decades, researchers have had access to more comprehensive datasets, which are released on a more frequent and timely basis. Nevertheless, some variables, namely some key macroeconomic indicators, are released with a significant time delay and at low frequencies. This situation raises the question of how to deal with series released at different, mixed time frequencies. Over the years and for different purposes, several techniques have been put forward. This essay focuses on a particular technique - the MI(xed) DA(ta) S(ampling) framework, proposed by Ghysels et al. (2004). In Chapter 1 I use MIDAS for forecasting euro area GDP growth using a small set of selected indicators in an environment with different sampling frequencies and asynchronous releases of information. I run a horse race between a wide set of MIDAS regressions and evaluate their performance, in terms of root mean squared forecast error, against AR and quarterly bridge models. The issue of how to include autoregressive terms in MIDAS regressions is disentangled. Different combinations of variables, through forecast pooling and multi-variable regressions, and different time frequencies are also considered. The results obtained suggest that, in general, using MIDAS regressions contributes to increasing forecast accuracy. In addition, I propose new unit root tests that exploit mixed-frequency information. Unit root tests typically suffer from low power in small samples. To overcome this shortcoming, tests exploiting information from stationary covariates have been proposed. I assess whether it is possible to improve the power performance of some of these tests by exploiting mixed-frequency data, through the MIDAS approach.
In Chapter 2 I put forward a new class of mixed-frequency covariate-augmented Dickey-Fuller (DF) tests, extending the covariate-augmented DF test (CADF test) proposed by Hansen (1995) and its modified version, similar to the GLS generalisation of the univariate ADF test in Elliott et al. (1996), proposed by Pesavento (2006). Alternatively to the CADF tests, Elliott and Jansson (2003) proposed a feasible point optimal unit root test in the presence of deterministic components (EJ test hereafter), which extended the univariate results in Elliott et al. (1996). In Chapter 3 I go one step further and include mixed-frequency data in the EJ testing framework. Given that implementing the EJ test requires estimating VAR models, in order to plug in mixed-frequency data in the test regression I propose an unconstrained, though parsimonious, stacked skip-sampled reduced-form VAR-MIDAS model, which is estimated using standard econometric techniques. The results from a Monte Carlo exercise indicate that mixed-frequency tests have better power performance than low-frequency tests. The gains are robust to the size of the sample, to the lag specification of the test regressions and to different combinations of time frequencies. Moreover, the EJ-family of tests tends to have a better power performance than the CADF-family of tests, either with low or mixed-frequency data. An empirical illustration using the US unemployment rate is presented.
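The MIDAS machinery referred to above rests on a parsimonious lag polynomial that weights high-frequency observations before they enter a low-frequency regression. A hedged sketch of one common choice, the exponential Almon polynomial, with illustrative parameter values (the thesis's exact specification may differ):

```python
import math

# Sketch: exponential Almon weights over K high-frequency lags, so that e.g.
# monthly data can enter a quarterly regression through a single weighted sum.

def exp_almon_weights(theta1, theta2, K):
    """Normalised exponential Almon weights over lags 1..K."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, K + 1)]
    s = sum(raw)
    return [r / s for r in raw]

def midas_aggregate(monthly, theta1, theta2, K):
    """Weighted sum of the K most recent monthly observations (lag 1 = newest)."""
    w = exp_almon_weights(theta1, theta2, K)
    recent = monthly[-1:-K - 1:-1]   # newest first
    return sum(wi * xi for wi, xi in zip(w, recent))

monthly_indicator = [0.2, 0.5, 0.1, 0.4, 0.3, 0.6]   # hypothetical monthly series
x_q = midas_aggregate(monthly_indicator, theta1=0.1, theta2=-0.05, K=3)
# x_q would then enter an OLS regression of quarterly GDP growth on x_q
# (and possibly autoregressive terms), with theta1, theta2 estimated jointly.
print(round(x_q, 4))
```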
APA, Harvard, Vancouver, ISO, and other styles
46

Thornes, Tobias. "Investigating the potential for improving the accuracy of weather and climate forecasts by varying numerical precision in computer models." Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:038874a3-710a-476d-a9f7-e94ef1036648.

Full text
Abstract:
Accurate forecasts of weather and climate will become increasingly important as the world adapts to anthropogenic climatic change. Forecast accuracy is limited by the computer power available to forecast centres, which determines the maximum resolution, ensemble size and complexity of atmospheric models. Furthermore, faster supercomputers are increasingly energy-hungry and unaffordable to run. In this thesis, a new means of making computer simulations more efficient is presented that could lead to more accurate forecasts without increasing computational costs. This 'scale-selective reduced precision' technique builds on previous work showing that weather models can be run with almost all real numbers represented in 32-bit precision or lower without any impact on forecast accuracy, challenging the paradigm that 64 bits of numerical precision are necessary for sufficiently accurate computations. The observational and model errors inherent in weather and climate simulations, combined with the sensitive dependence of the atmosphere and atmospheric models on initial conditions, render such high precision unnecessary, especially at small scales. The 'scale-selective' technique introduced here therefore represents smaller, less influential scales of motion with less precision. Experiments are described in which reduced precision is emulated on conventional hardware and applied to three models of increasing complexity. In a three-scale extension of the Lorenz '96 toy model, it is demonstrated that high-resolution scale-dependent precision forecasts are more accurate than low-resolution high-precision forecasts of a similar computational cost.
A spectral model based on the Surface Quasi-Geostrophic Equations is used to determine a power law describing how low precision can be safely reduced as a function of spatial scale; and experiments using four historical test-cases in an open-source version of the real-world Integrated Forecasting System demonstrate that a similar power law holds for the spectral part of this model. It is concluded that the scale-selective approach could be beneficially employed to optimally balance forecast cost and accuracy if utilised on real reduced precision hardware.
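The "reduced precision emulated on conventional hardware" described above can be approximated by zeroing trailing significand bits of ordinary doubles. The sketch below does only that (truncation of the 52-bit significand; real emulators also handle rounding modes and exponent range):

```python
import struct

# Sketch: keep only `bits` of a double's 52 explicit significand bits by
# zeroing the trailing mantissa bits (truncation, no rounding).

def reduce_precision(x, bits):
    """Emulate a float with `bits` explicit significand bits."""
    (i,) = struct.unpack(">Q", struct.pack(">d", x))       # raw 64-bit pattern
    mask = ~((1 << (52 - bits)) - 1) & 0xFFFFFFFFFFFFFFFF  # zero low bits
    (y,) = struct.unpack(">d", struct.pack(">Q", i & mask))
    return y

x = 1.0 / 3.0
errors = {b: abs(reduce_precision(x, b) - x) for b in (10, 23, 52)}
# Fewer significand bits -> larger representation error; 52 bits reproduces
# the double exactly, 23 bits is roughly single precision, 10 bits half.
print(errors)
```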
APA, Harvard, Vancouver, ISO, and other styles
47

Burgada, Muñoz Santiago. "Improvement on the sales forecast accuracy for a fast growing company by the best combination of historical data usage and clients segmentation." reponame:Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/13322.

Full text
Abstract:
Industrial companies in developing countries are experiencing rapid growth, which requires having the right organizational processes in place to cope with market demand. Sales forecasting, as a tool aligned with the general strategy of the company, needs to be as accurate as possible in order to achieve sales targets: it makes the right information available for purchasing, planning, and production control, so that the demand generated can be met on time and in full. This dissertation uses a single case study from the Brazilian subsidiary of Maxam, an international explosives company, which is experiencing high sales growth and therefore faces the challenge of adapting its structure and processes to the rapid growth expected. Diverse sales-forecast techniques are analyzed to compare the current monthly sales forecast, based on the sales representatives' market knowledge, with forecasts based on the analysis of historical sales data. The findings show how combining qualitative and quantitative forecasts, in a combined forecast that joins the sales force's knowledge of client demand with time-series analysis, improves the accuracy of the company's sales forecast.
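The combined qualitative/quantitative forecast described above can be sketched as a weighted average whose weight comes from each source's past accuracy. All numbers below are invented, and inverse-MAE weighting is one simple choice among many, not necessarily the dissertation's:

```python
# Sketch: blend a qualitative (sales-force) and a quantitative (time-series)
# forecast with weights proportional to inverse past MAE. Invented data.

actual      = [100, 110, 105, 120]   # past monthly sales
sales_force = [ 95, 118, 100, 130]   # qualitative forecasts for those months
time_series = [102, 106, 108, 114]   # quantitative forecasts (e.g. smoothing)

def mae(pred, act):
    return sum(abs(p - a) for p, a in zip(pred, act)) / len(act)

# The historically more accurate source gets more weight.
e_q, e_t = mae(sales_force, actual), mae(time_series, actual)
w_q = (1 / e_q) / (1 / e_q + 1 / e_t)

combined = [w_q * q + (1 - w_q) * t for q, t in zip(sales_force, time_series)]
print(round(w_q, 3), round(mae(combined, actual), 3))
```

When the two sources' errors are not perfectly correlated (here the sales force over-shoots where the time series under-shoots), the blend can beat both inputs.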
APA, Harvard, Vancouver, ISO, and other styles
48

Brzoska, Jan [Verfasser], Doris [Gutachter] Fischer, and Björn [Gutachter] Alpermann. "Market forecasting in China: An Artificial Neural Network approach to optimize the accuracy of sales forecasts in the Chinese automotive market / Jan Brzoska ; Gutachter: Doris Fischer, Björn Alpermann." Würzburg : Universität Würzburg, 2020. http://d-nb.info/1209881292/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Mintchik, Natalia Maksimovna. "The Effect of SFAS No. 141 and SFAS No. 142 on the Accuracy of Financial Analysts' Earnings Forecasts after Mergers." Thesis, University of North Texas, 2005. https://digital.library.unt.edu/ark:/67531/metadc4731/.

Full text
Abstract:
This study examines the impact of Statements of Financial Accounting Standards No. 141 and No. 142 (hereafter SFAS 141, 142) on the characteristics of financial analysts' earnings forecasts after mergers. Specifically, I predict lower forecast errors for firms that experienced mergers after the enactment of SFAS 141, 142 than for firms that went through business combinations before those accounting changes. Study results present strong evidence that earnings forecast errors for companies involved in merger and acquisition activity decreased after the adoption of SFAS 141, 142. Test results also suggest that lower earnings forecast errors are attributable to factors specific to merging companies, such as SFAS 141, 142, but not common to merging and non-merging companies. In addition, evidence implies that information in the corporate annual reports of merging companies plays a critical role in this decrease in earnings forecast errors. In summary, I find that SFAS 141, 142 were effective in achieving greater transparency of financial reporting after mergers. In my complementary analysis, I also document the structure of corporate analysts' coverage in "leaders/followers" terms and conduct tests for differences in this structure: (1) across the pre-SFAS 141, 142 and post-SFAS 141, 142 environments, and (2) between merging and non-merging firms. Although I do not identify any significant differences in coverage structure across environments, my findings suggest that lead analysts are not as accurate as followers when predicting earnings for firms actively involved in mergers. I also detect a significant interaction between the SFAS-environment code and leader/follower classification, which indicates greater improvement of lead analyst forecast accuracy in the post-SFAS 141, 142 environment relative to their followers.
This interesting discovery demands future investigation and confirms the importance of financial reporting transparency for the accounting treatment of business combinations.
APA, Harvard, Vancouver, ISO, and other styles
50

Turß, Michaela. "Emotional understanding." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16836.

Full text
Abstract:
In the ability model of emotional intelligence by Mayer and Salovey (1997), emotional understanding is a prerequisite for emotion regulation. Knowing which emotions occur in which situations should be beneficial and adaptive. One of the subtests for emotional understanding asks for likely emotional reactions in hypothetical situations. In contrast, Gilbert and Wilson (2003) argue that characteristic biases in affective forecasting are adaptive. The current thesis aims to measure accuracy of emotional predictions in a natural setting and examines its adaptive value. In the anxiety study, public officials were asked to predict future emotions in an important test (N=143). The second study focused on freshman student work-groups (N=180 in 43 groups). Group members predicted interpersonal feelings for each other (affection, satisfaction with the collaboration, fun, and anger). In both studies, accuracy of emotional predictions is defined as low bias (i.e. Euclidean distance) and high correspondence (i.e. profile correlation). The round robin design in the work-group study also allows to decompose accuracy following Cronbach (1955). In both studies, a low bias was adaptive in terms of strong criteria, also incrementally over and above intelligence and personality alone. Accuracy was partly related to general knowledge but not to intelligence. Associations to emotional intelligence were inconsistent. Accuracy as correspondence is theoretically interesting but much less reliable. There is some evidence for its adaptive value on a group level but no indication of incremental validity. Future research should focus on specific situations and specific emotions. Also, processes underlying affective forecasts should be evaluated in detail.
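The two accuracy definitions in the abstract above (low bias as Euclidean distance, high correspondence as profile correlation) are straightforward to compute. A sketch with invented emotion ratings:

```python
import math

# Sketch: bias as Euclidean distance between predicted and experienced
# emotion profiles, correspondence as their Pearson (profile) correlation.

predicted   = [4.0, 2.0, 5.0, 1.0]   # e.g. affection, satisfaction, fun, anger
experienced = [3.5, 2.5, 4.0, 1.5]

def euclidean_bias(p, e):
    return math.sqrt(sum((pi - ei) ** 2 for pi, ei in zip(p, e)))

def profile_correlation(p, e):
    n = len(p)
    mp, me = sum(p) / n, sum(e) / n
    cov = sum((pi - mp) * (ei - me) for pi, ei in zip(p, e))
    sp = math.sqrt(sum((pi - mp) ** 2 for pi in p))
    se = math.sqrt(sum((ei - me) ** 2 for ei in e))
    return cov / (sp * se)

# An accurate forecaster has a small distance AND a high correlation: the
# profile can correlate highly while the absolute levels are still off.
print(round(euclidean_bias(predicted, experienced), 3),
      round(profile_correlation(predicted, experienced), 3))
```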
APA, Harvard, Vancouver, ISO, and other styles