
Theses on the topic "Risk decomposition"


Consult the 46 best theses for your research on the topic "Risk decomposition".


You can also download the full text of each publication in PDF format and read its abstract online when this information is included in the metadata.

Browse theses on a variety of disciplines and organize your bibliography correctly.

1

Surucu, Oktay. "Decomposition Techniques In Energy Risk Management." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606552/index.pdf.

Abstract:
The ongoing process of deregulation in energy markets is changing the market from a monopoly into a complex, competitive one, in which large utilities and independent power producers are no longer suppliers with guaranteed returns but enterprises that have to compete. This competition has forced utilities to improve their efficiency. In effect, they must still manage the challenges of physical delivery while operating in a complex market characterized by significant volatility, volumetric uncertainty and credit risk. In such an environment, risk management gains more importance than ever. In order to manage risk, it must first be measured, and then this quantified risk must be utilized optimally. Using stochastic programming to construct a model for an energy company in liberalized markets is useful since it provides a generic framework to model the uncertainties and enables decisions that will perform well. However, the resulting stochastic programming problem is a large-scale one, and decomposition techniques are needed to solve it.
2

Gérard, Henri. "Stochastic optimization problems: decomposition and coordination under risk." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1111/document.

Abstract:
We consider stochastic optimization and game theory problems with risk measures. In the first part, we focus on time consistency. We begin by proving an equivalence between time-consistent mappings and the existence of a nested formula. Motivated by well-known examples in risk measures, we investigate three classes of mappings: translation-invariant, Fenchel-Moreau transform and supremum mappings. Then, we extend the concept of time consistency to player consistency, by replacing sequential time by any unordered set and mappings by any relations. Finally, we show how player consistency relates to sequential and parallel forms of decomposition in optimization. In the second part, we study how risk measures impact the multiplicity of equilibria in dynamic game problems in complete and incomplete markets. We design an example where the introduction of risk measures leads to the existence of three equilibria instead of one in the risk-neutral case. We analyze the ability of two different algorithms to recover the different equilibria, and we discuss links between player consistency and equilibrium problems in games. In the third part, we study distributionally robust optimization in machine learning. Using convex risk measures, we provide a unified framework and propose an adapted algorithm covering three ambiguity sets discussed in the literature.
3

Amaxopoulos, Fotios. "Hedge funds: risk decomposition, replication and the disposition effect." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6935.

Abstract:
The purpose of this thesis is to contribute to the literature on hedge fund performance and risk analysis. The thesis is divided into three major chapters that apply a novel factor model (Chapter 2) and return replication approaches (Chapter 3), as well as using hedge fund holdings information to examine the disposition effect (Chapter 4). Chapter 2 focuses on the implementation of an efficient signal processing technique called Independent Component Analysis in order to identify the driving mechanisms of hedge fund returns. We propose a new algorithm to interpret economically the independent components derived from the data. We use a wide dataset of linear and non-linear financial factors and apply the classification given by the independent component factor models to form optimal portfolios of hedge funds. The results show that our approach outperforms the classic factor models for hedge funds in terms of explanatory power and statistical significance, both in and out of sample. Additionally, the ICA model seems to outperform the other models in asset allocation and portfolio construction problems. In Chapter 3 we use an effective classification algorithm called Support Vector Machines in order to classify and replicate hedge funds. We use hedge fund returns and exposures to the Fung and Hsieh factor model in order to classify the funds, as the self-declared strategies differ significantly in the majority of cases from the real ones the funds follow. Then we replicate the hedge fund returns with the use of Support Vector Regressions, conducting external replication using financial and economic factors that affect hedge fund returns. Finally, in Chapter 4 we examine whether hedge funds exhibit a disposition effect in equity markets that leads to under-reaction to news and return predictability. The tendency to hold losing investments too long and sell winning investments too soon has been documented for mutual funds and retail investors, but little is known about whether holdings of sophisticated institutional investors such as hedge funds exhibit such irrational behaviour. We examine the previously unexplored differences in the disposition effect and performance between hedge and mutual funds. Our results show that hedge funds' equity portfolio holdings are consistent with the disposition effect and lead to stronger predictability than that induced by mutual funds' disposition effect during the same sample period. A subsample analysis reveals that this is due to a relatively more pronounced moderation in the disposition-induced predictability in mutual fund holdings, which may, for example, be related to managers learning from their past suboptimal behaviour documented by earlier studies.
4

Voßmann, Frank. "Decision weights in choice under risk and uncertainty: measurement and decomposition." [S.l.: s.n.], 2004. http://www.gbv.de/dms/zbw/490610218.pdf.

5

Kagaruki-Kakoti, Generosa. "An Economic Analysis of School and Labor Market Outcomes For At-Risk Youth." Digital Archive @ GSU, 2005. http://digitalarchive.gsu.edu/econ_diss/6.

Abstract:
Federal education policy has targeted children who are disadvantaged in order to improve their academic performance. The most recent federal education policy is the No Child Left Behind law signed by President Bush in 2001. Indicators often used to identify an at-risk youth include economic, personal, family, and neighborhood characteristics. A probit model is used in this study to estimate the probability that a student graduates from high school as a function of 8th grade variables. Students are classified as at-risk of dropping out of high school or non at-risk based on having one or more risk factors. The main measures of academic outcomes are high school completion and post-secondary academic achievements; the main measures of labor market outcomes are short-term and long-term earnings. The results show that a student who, in the eighth grade, comes from a low income family, has a sibling who dropped out, has parents with low education, is home alone after school for three hours or more, or comes from a step family is at-risk of dropping out of high school. At-risk students are less likely than non at-risk students to graduate from high school. They appear to be more sensitive to existing conditions that may impair or assist their academic progress while they are in high school. At-risk students are also less likely to select a bachelor's degree, although among comparable students a greater percentage of at-risk students select a bachelor's or post-graduate degree than non at-risk students. At-risk individuals face long-term disadvantage in the labor market, receiving lower wage offers than the non at-risk group. Comparing only those without post-secondary education shows that the average earnings offered to at-risk individuals were lower than those offered to non at-risk individuals. At-risk college graduates also receive lower earnings than non at-risk college graduates. The wage differential is largely due to the disadvantage at-risk individuals face in the labor market.
6

Colletta, Renato Dalla. "Cash flow and discount rate risk decomposition and ICAPM for the US and Brazilian stock markets." Repositório Institucional do FGV, 2013. http://hdl.handle.net/10438/10566.

Abstract:
This work applies the intertemporal asset pricing model developed by Campbell (1993) and Campbell and Vuolteenaho (2004) to the Brazilian 2x3 Fama-French stock portfolios from January 2003 to April 2012 and to the US 5x5 Fama-French portfolios in different time periods. The variables suggested by Campbell and Vuolteenaho (2004) to forecast US market excess returns from 1929 to 2001 were also good excess return predictors for the Brazilian market in the recent period, except for the term structure yield spread. However, we found that an increase in the small stock value spread predicts a higher market excess return, which is not consistent with the intertemporal model's explanation for the value premium. Moreover, using the residuals of the forecasting VAR to define the test portfolios' cash flow and discount rate shock risk sensitivity, we found that the resulting intertemporal model explains little of the variance in the cross-section of returns. For the US market, we conclude that the proposed variables' ability to forecast market excess returns is not constant in time. Campbell and Vuolteenaho's (2004) success in explaining the value premium for the US market in the 1963 to 2001 sub-sample is a result of the VAR specification in the full sample, since we show that none of the variables are statistically significant return predictors in this sub-sample.
7

Soberanis, Policarpio Antonio. "Risk optimization with p-order conic constraints." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/437.

Abstract:
My dissertation considers solving linear programming problems with p-order conic constraints, which are related to a class of stochastic optimization models with risk objectives or constraints that involve higher moments of loss distributions. The general proposed approach is based on the construction of polyhedral approximations for p-order cones, thereby approximating the non-linear convex p-order conic programming problems using linear programming models. It is shown that the resulting LP problems possess a special structure that makes them amenable to efficient decomposition techniques. The developed algorithms are tested on a portfolio optimization problem with higher moment coherent risk measures that reduces to a p-order conic programming problem. The conducted case studies on real financial data demonstrate that the proposed computational techniques compare favorably against a number of benchmark methods, including second-order conic programming methods.
8

Rroji, Edit. "Risk attribution and semi-heavy tailed distributions." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/49833.

Abstract:
In this thesis we discuss the problem of risk attribution in a multifactor context using nonparametric approaches, and we also introduce a new distribution for modeling returns. The risk measures considered are homogeneous, since we exploit the Euler rule. Particular attention is given to the problem of attributing risk to user-defined factors, which is of practical relevance but on which the existing literature is limited compared to other research areas. We point out the problems encountered during the analysis and present some methodologies that can be useful in practice. Each chapter combines both theoretical and practical issues.
9

Iucci, Alessandro. "Explainable Reinforcement Learning for Risk Mitigation in Human-Robot Collaboration Scenarios." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296162.

Abstract:
Reinforcement Learning (RL) algorithms are highly popular in the robotics field for solving complex problems, learning from dynamic environments and generating optimal outcomes. However, one of the main limitations of RL is the lack of model transparency, including the inability to explain why a given output was generated. Explainability becomes even more crucial when RL outputs influence human decisions, such as in Human-Robot Collaboration (HRC) scenarios, where safety requirements must be met. This work focuses on the application of two explainability techniques, "Reward Decomposition" and "Autonomous Policy Explanation", to an RL algorithm that is the core of a risk mitigation module for robots operating in a collaborative automated warehouse scenario. "Reward Decomposition" gives insight into the factors that impacted the robot's choice by decomposing the reward function into sub-functions. It also allows creating Minimal Sufficient Explanations (MSX), sets of relevant reasons for each decision taken during the robot's operation. The second technique, "Autonomous Policy Explanation", provides a global overview of the robot's behavior by answering queries asked by human users, and provides insights into the decision guidelines embedded in the robot's policy. Since the synthesized policy descriptions and query answers are in natural language, this tool facilitates algorithm diagnosis even by non-expert users. The results show an improvement in the RL algorithm, which now chooses more evenly distributed actions, and a full policy for the robot's decisions is produced that is for the most part aligned with expectations. The work provides an analysis of the results of applying both techniques, which both led to increased transparency of the robot's decision process. These explainability methods not only built trust in the robot's choices, which proved to be among the optimal ones in most cases, but also made it possible to find weaknesses in the robot's policy, making them helpful tools for debugging purposes.
10

Berg, Simon, and Victor Elfström. "IRRBB in a Low Interest Rate Environment." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273589.

Abstract:
Financial institutions are exposed to several different types of risk. One of the risks that can have a significant impact is the interest rate risk in the banking book (IRRBB). In 2018, the European Banking Authority (EBA) released a regulation on IRRBB to ensure that institutions make adequate risk calculations. This thesis proposes an IRRBB model that follows EBA's regulations. Among other things, this framework contains a deterministic stress test of the risk-free yield curve; in addition, two different types of stochastic stress tests of the yield curve were made. The results show that the deterministic stress tests give the highest risk, but that their outcomes are considered less likely to occur compared to the outcomes generated by the stochastic models. It is also demonstrated that EBA's proposed stress model could be better adapted to the low interest rate environment that we currently experience. Furthermore, a discussion is held on the need for a more standardized framework to clarify, both for the institutions themselves and for the supervisory authorities, the risks that institutions are exposed to.
11

Andronoudis, Dimos. "Essays on risk, stock return volatility and R&D intensity." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/21278.

Abstract:
This thesis consists of three empirical essays studying the capital market implications of the accounting for R&D costs. The first empirical study (Chapter 2) re-visits the debate over the positive R&D-returns relation. The second empirical study (Chapter 3) examines the risk relevance of current R&D accounting. The third empirical study (Chapter 4) explores the joint impact of R&D intensity and competition on the relative relevance of the idiosyncratic part of earnings. Prior research argues that the positive relation between current R&D activity and future returns is evidence of mispricing, a compensation for risk inherent in R&D, or a transformation of the value/growth anomaly. The first empirical study contributes to this debate by taking into account the link between R&D activity, equity duration and systematic risk. This link motivates us to employ Campbell and Vuolteenaho's (2004) intertemporal asset pricing model (ICAPM), which accommodates stochastic discount rates and investors' intertemporal preferences. The results support a risk-based explanation: R&D-intensive firms are exposed to higher discount rate risk. Hedge portfolio strategies show that the mispricing explanation is not economically significant. The second empirical study contributes to prior research on the value relevance of financial reporting information on R&D by proposing an alternative approach which relies on a return variance decomposition model. We find that R&D intensity has a significant influence on market participants' revisions of expectations regarding future discount rates (or discount rate news) and future cash flows (or cash flow news), thereby driving return variance. We extend this investigation to assess the risk relevance of this information by means of its influence on the sensitivity of cash flow and discount rate news to the market news. Our findings suggest that R&D intensity is associated with significant variation in the sensitivity of cash flow news to the market news, which implies that financial reporting information on R&D is risk relevant. Interestingly, we do not establish a similar pattern with respect to the sensitivity of discount rate news to the market news, which may dismiss the impact of sentiment in stock returns of R&D-intensive firms. The third empirical study examines the effect of financial reporting information on R&D on the value relevance of common and idiosyncratic earnings. More specifically, we investigate the value relevance of common and idiosyncratic earnings through an extension of the Vuolteenaho (2002) model which decomposes return variance into its discount rate, idiosyncratic and common cash flow news. We demonstrate that the relative importance of idiosyncratic over common cash flow news in explaining return variance increases with firm-level R&D intensity. Extending this analysis, we find that this relation varies with the level of R&D investment concentration in the industry. These results indicate that the market perceives that more pronounced R&D activity leads to outcomes that enable the firm to differentiate itself from its rivals. However, our results also suggest that the market perceives that this relation depends upon the underlying economics of the industry where the firm operates.
12

Wang, Yi. "Default risk in equity returns: an industrial and cross-industrial study." Cleveland, Ohio: Cleveland State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=csu1251906476.

Thesis (Ph.D.), Cleveland State University, 2009. Title from PDF t.p. (viewed on Sept. 8, 2009). Includes bibliographical references (p. 149-154). Available online via the OhioLINK ETD Center and also available in print.
13

Souza, Enio Bonafé Mendonça de. "Mensuração e evidenciação contábil do risco financeiro de derivativos." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/12/12136/tde-05032015-182918/.

Abstract:
The great advantage of accounting as a science of financial control is the set of techniques that guarantee the integrity of the information presented, primarily through the debit/credit identity. This thesis presents a new way to recognize and record derivatives that preserves accounting principles while providing a clearer and more precise estimate of the financial risks involved in balance sheet items. An Accounting Decomposition is applied to derivative transactions by splitting each one into an asset and a liability, the difference between the two being the fair value result of the transaction. Subsequently, a Risk Decomposition opens up assets and liabilities into their primitive risk factors, highlighting the risk exposure by each type of factor. Finally, a global reaggregation, by risk factor, of all the decompositions performed generates the SFR-Statement of Financial Risks, which synthetically shows the exposure to all risks involved in the transactions carried on the balance sheet. The thesis shows how the SFR demonstrates more clearly the effectiveness of hedges carried on the balance sheet, both for internal management and for external users. It also makes evident the amount of exposure to each market risk factor. The great advantage of this procedure is that the risk exposures of derivatives are obtained in a way that is automatically and directly reconciled with the accounting records.
14

Motyka, Matt. "Risk measurement of mortgage-backed security portfolios via principal components and regression analyses." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0429103-231210.

Thesis (M.S.), Worcester Polytechnic Institute. Keywords: portfolio risk decomposition; principal components regression; principal components analysis; mortgage-backed securities. Includes bibliographical references (p. 88-89).
15

Berg, Edvin, and Karl Wilhelm Lange. "Enhancing ESG-Risk Modelling - A study of the dependence structure of sustainable investing." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266378.

Abstract:
The interest in sustainable investing has increased significantly during recent years. Asset managers and institutional investors are urged by their stakeholders to invest more sustainably, which reduces their investment universe. This thesis finds that sustainable investments have a different linear dependence structure compared to the regional markets in Europe and North America, but not in Asia-Pacific. However, the largest drawdowns of a sustainability-compliant portfolio have historically been smaller than those of a random market portfolio, especially in Europe and North America.
16

Celidoni, Martina. "Essays on vulnerability to poverty and inequality." Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3422143.

Abstract:
According to the recent report of The Commission on the Measurement of Economic Performance and Social Progress (CMEPSP), whose members include Joseph Stiglitz, Amartya Sen and Jean Paul Fitoussi, statistical indicators are important for designing and assessing policies aiming at advancing the progress of society (Stiglitz, Sen, and Fitoussi, 2009). The main objective of the present work is to shed light on some aspects of the information provided by vulnerability-to-poverty and inequality indexes. The first chapter empirically compares the several measures of individual vulnerability to poverty proposed in the literature, in order to understand which is the best signal of poverty that can be used for policy purposes. To this aim the Receiver Operating Characteristic (ROC) curve and the Pearson and Spearman correlation coefficients are used as precision criteria. Using data from the British Household Panel Survey (BHPS), the German Socio-Economic Panel (SOEP) and the Survey on Household Income and Wealth (SHIW), the results show that two groups of indexes can be identified, high- and low-performers, and that, among the former, the index proposed by Dutta, Foster, and Mishra (2011) is the most precise. The second chapter applies a non-parametric decomposition of the Foster-Greer-Thorbecke poverty index to the measurement of individual vulnerability to poverty. I highlight that poverty risk can be expressed as a function of three components: expected incidence, expected intensity and expected downward variability. This decomposition is useful for risk management purposes since it describes the characteristics of the poverty risk faced by individuals. An empirical illustration is provided using the British Household Panel Survey and the Survey on Household Income and Wealth. The third chapter focuses on inequality. According to Atkinson (1971), inequality attributable to age should be of little concern for policymakers because it is irrelevant for the distribution of lifetime income or wealth. Accordingly, I provide age-adjusted measures of wealth inequality to understand the role of demographic changes in Italy in determining the trends in disparities. Using the Survey on Household Income and Wealth from 1991 to 2008, the results confirm previous findings: age-adjustments are not very important in terms of dynamics.
17

Fausch, Jürg. "Essays on Financial Markets and the Macroeconomy." Doctoral thesis, Stockholms universitet, Nationalekonomiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-140151.

Abstract:
Asset pricing implications of a DSGE model with recursive preferences and nominal rigidities. I study jointly macroeconomic dynamics and asset prices implied by a production economy featuring nominal price rigidities and Epstein-Zin (1989) preferences. Using a reasonable calibration, the macroeconomic DSGE model is consistent with a number of stylized facts observed in financial markets, like the equity premium, a negative real term spread, a positive nominal term spread and the predictability of stock returns, without compromising the model's ability to fit key macroeconomic variables. The interest rate smoothing in the monetary policy rule helps generate a low risk-free rate volatility, which has been difficult to achieve for standard real business cycle models where monetary policy is neutral. In an application, I show that the model provides a framework for analyzing monetary policy interventions and the associated effects on asset prices and the real economy.

Macroeconomic news and the stock market: Evidence from the eurozone. This paper is an empirical study of excess return behavior in the stock market in the euro area around days when important macroeconomic news about inflation, unemployment or interest rates is scheduled for announcement. I identify state dependence such that equity risk premia on announcement days are significantly higher when interest rates are in the vicinity of the zero lower bound. Moreover, I provide evidence that for the whole sample period, the average excess returns in the eurozone are only higher on days when FOMC announcements are scheduled for release. However, this result vanishes in a low interest rate regime. Finally, I document that the European stock market does not command a premium for scheduled announcements by the European Central Bank (ECB).

The impact of ECB monetary policy surprises on the German stock market. We examine the impact of ECB monetary policy surprises on German excess stock returns and the possible reasons for such a response. First, we conduct an event study to assess the impact of conventional and unconventional monetary policy on stock returns. Second, within the VAR framework of Campbell and Ammer (1993), we decompose excess stock returns into news regarding expected excess returns, future dividends and future real interest rates. We measure conventional monetary policy shocks using futures markets data. Our main findings are that the overall variation in German excess stock returns mainly reflects revisions in expectations about dividends, and that the stock market response to monetary policy shocks depends on the prevailing interest rate regime. In periods of negative real interest rates, a surprise monetary tightening leads to a decrease in excess stock returns. The channels behind this response are news about higher expected excess returns and lower future dividends.
18

Wang, Nancy. "Spectral Portfolio Optimisation with LSTM Stock Price Prediction." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273611.

Abstract:
Nobel Prize-winning modern portfolio theory (MPT) is considered one of the most important and influential economic theories within finance and investment management. MPT assumes investors to be risk-averse and uses the variance of asset returns as a proxy for risk to maximise the performance of a portfolio. Successful portfolio management thus relies on accurate risk estimates and asset return predictions. Risk estimates are commonly obtained through traditional asset pricing factor models, which allow the systematic risk to vary over the time domain but not in frequency space; this approach can impose limitations in, for instance, risk estimation. To tackle this shortcoming, interest in applications of spectral analysis to financial time series has increased lately, among them the novel spectral portfolio theory and the spectral factor model, which demonstrate enhanced portfolio performance through spectral risk estimation [1][11]. Moreover, stock price prediction has always been a challenging task due to its non-linearity and non-stationarity, while machine learning has been successfully implemented in a wide range of applications where accomplishing the needed tasks traditionally is infeasible. Recent research has demonstrated significant results in single stock price prediction by artificial LSTM neural networks [6][34]. This study aims to evaluate the combined effect of these two advancements in a portfolio optimisation problem and to optimise a spectral portfolio with stock prices predicted by LSTM neural networks. To do so, we began with mathematical derivation and theoretical presentation and then evaluated the portfolio performance generated by the spectral risk estimates and the LSTM stock price predictions, as well as the combination of the two. The results demonstrate that the LSTM predictions alone performed better than the combination, which in turn performed better than the spectral risk alone.
19

Leclere, Vincent. "Contributions to decomposition methods in stochastic optimization." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1092/document.

Abstract:
Stochastic optimal control addresses sequential decision-making under uncertainty. As applications lead to large-scale optimization problems, we count on decomposition methods to tackle their mathematical analysis and their numerical resolution. We distinguish two forms of decomposition. In chained decomposition, like Dynamic Programming, the original problem is solved by means of successive smaller subproblems, solved one after the other. In parallel decomposition, like Progressive Hedging, the original problem is solved by means of parallel smaller subproblems, coordinated and updated by a master algorithm. In the first part of this manuscript, Dynamic Programming: Risk and Convexity, we focus on chained decomposition; we address the well-known time decomposition that constitutes Dynamic Programming with two questions. In Chapter 2, we extend the traditional additive-in-time and risk-neutral setting to more general ones, for which we establish time-consistency. In Chapter 3, we prove a convergence result for the Stochastic Dual Dynamic Programming Algorithm in the case where (convex) cost functions are no longer polyhedral. Then, we turn to parallel decomposition, especially decomposition methods obtained by dualizing constraints (spatial or non-anticipative). In the second part of this manuscript, Duality in Stochastic Optimization, we first point out that such constraints lead to delicate duality issues (Chapter 4). We establish a duality result in the pairing $\bigl(\mathrm{L}^\infty, \mathrm{L}^1\bigr)$ in Chapter 5. Finally, in Chapter 6, we prove the convergence of the Uzawa Algorithm in $\mathrm{L}^\infty(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^n)$, which requires the existence of an optimal multiplier. The third part of this manuscript, Stochastic Spatial Decomposition Methods, is devoted to the so-called Dual Approximate Dynamic Programming Algorithm (DADP). In Chapter 7, we prove that a sequence of relaxed optimization problems, where almost sure constraints are replaced by weaker conditional expectation ones, epiconverges to the original one when the corresponding $\sigma$-fields converge. In Chapter 8, we give theoretical foundations and interpretations of the Dual Approximate Dynamic Programming Algorithm.
20

Alais, Jean-Christophe. "Risque et optimisation pour le management d'énergies : application à l'hydraulique." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1071/document.

Abstract:
Hydropower is the main renewable energy produced in France. It brings both an energy reserve and a flexibility of great interest in a context of penetration of intermittent sources in the production of electricity. Its management raises difficulties stemming from the number of dams, from uncertainties in water inflows and prices, and from the multiple uses of water. This PhD thesis, carried out in partnership with Electricité de France, addresses two hydropower management issues modeled as stochastic dynamic optimization problems. The manuscript is divided in two parts. In the first part, we consider the management of a hydroelectric dam subject to a so-called tourist constraint. This constraint ensures that a given minimum dam stock level is maintained in the summer months with a prescribed probability level. We propose different original modelings and provide corresponding numerical algorithms. We present numerical results that highlight the problem under various angles useful for dam managers. In the second part, we focus on the management of a cascade of dams. We present the approximate decomposition-coordination algorithm called Dual Approximate Dynamic Programming (DADP). We show how to decompose an original (large-scale) problem into smaller subproblems by dualizing the spatial coupling constraints. On a three-dam instance, we are able to compare the results of DADP with the exact solution (obtained by dynamic programming); we obtain approximate gains that are only a few percent from the optimum, with interesting running times. The conclusions we arrived at offer encouraging perspectives for the stochastic optimization of large-scale systems.
21

Moataz, Fatima Zahra. "Vers des réseaux optiques efficaces et tolérants aux pannes : complexité et algorithmes." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4077/document.

Abstract:
We study in this thesis optimization problems with applications in optical networks. The problems we consider are related to fault-tolerance and efficient resource allocation, and the results we obtain mainly concern the computational complexity of these problems. The first part of this thesis is devoted to finding paths and disjoint paths. Finding a path is crucial in all types of networks in order to set up connections, and finding disjoint paths is a common approach used to provide some degree of protection against failures in networks. We study these problems under different settings. We first focus on finding paths and node- or link-disjoint paths in networks with asymmetric nodes, which are nodes with restrictions on their internal connectivity. Afterwards, we consider networks with star Shared Risk Link Groups (SRLGs), which are groups of links that might fail simultaneously due to a localized event. In these networks, we investigate the problem of finding SRLG-disjoint paths. The second part of this thesis focuses on the problem of Routing and Spectrum Assignment (RSA) in Elastic Optical Networks (EONs). EONs are proposed as the new generation of optical networks, aiming at an efficient and flexible use of optical resources. RSA is the key problem in EONs; it deals with allocating resources to requests under multiple constraints. We first study the static version of RSA in tree networks. Afterwards, we examine a dynamic version of RSA in which a non-disruptive spectrum defragmentation technique is used. Finally, we present in the appendix another problem studied during this thesis.
Styles APA, Harvard, Vancouver, ISO, etc.
22

Dibaji, Seyed Ahmad Reza. « Nonlinear Derating of High Intensity Therapeutic Ultrasound Beams using Decomposition of Gaussian Mode ». University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1458900246.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
23

Joshi, Neekita. « ASSESSING THE SIGNIFICANCE OF CLIMATE VARIABILITY ON GROUNDWATER RISE AND SEA LEVEL CHANGES ». OpenSIUC, 2021. https://opensiuc.lib.siu.edu/dissertations/1908.

Texte intégral
Résumé :
Climate variability is important to understand because its effects on groundwater are more complex than those on surface water. The climate association between Groundwater Storage (GWS) and sea level changes has been missing from Intergovernmental Panel on Climate Change assessments, calling for a study of their linkage and responses. This dissertation focuses on ongoing issues that previous literature has not addressed. First, it evaluates the effects of short-term persistence and abrupt shifts in sea level records along the US coast using robust statistical techniques. Second, it evaluates the variability in groundwater due to variability in hydroclimatic variables such as sea surface temperature (SST), precipitation, sea level, and terrestrial water storage; a lagged correlation analysis is also performed to obtain their teleconnection patterns. Lastly, it examines the relationship between groundwater rise and one of the most common modes of short-term climate variability, ENSO. To accomplish these goals, the dissertation is subdivided into three research tasks. The first task addresses a central question: is sea level change affected by the presence of autocorrelation and abrupt shifts? This question reflects the importance of trend and shift detection in sea level analysis. The primary factor driving global sea level rise is often related to climate change. The study investigates changes in sea level along the US coast using records from 59 tide gauges to evaluate trend, shift, and persistence with non-parametric statistical tests. The Mann-Kendall and Pettitt's tests were used to estimate gradual trends and abrupt shifts, respectively. The study also assessed the presence of autocorrelation in sea level records and examined its effect on both trend and shift. Short-term persistence was found at 57 stations, and the trend significance of most stations was unchanged at the 95% confidence level. A total of 25 stations showed an increasing shift between 1990 and 2000 in the annual sea level records. These results may contribute to understanding sea level variability across the contiguous US. The second task dealt with variability in the Hydrologic Unit Code 03 region, one of the major US watersheds in the southeast, where much of the variability is caused by Sea Surface Temperature (SST) variability in the Pacific and Atlantic Oceans. SST regions were identified to assess their relationship with GWS, sea level, precipitation, and terrestrial water storage. Temporal and spatial variability were obtained using the singular value decomposition method. A gridded GWS anomaly from the Gravity Recovery and Climate Experiment (GRACE) was used to understand the relationship with sea level and SST. Negative pockets of SST were negatively linked with GWS. The identification of teleconnections with groundwater may substantiate temporal patterns of groundwater variability. The results confirmed that the SST regions exhibited El Niño Southern Oscillation (ENSO) patterns, resulting in GWS changes. Moreover, a positive correlation between GWS and sea level was observed on the east coast, in contrast to the southwestern United States. The findings highlight the importance of climate-driven changes in groundwater in attributing changes in sea level.
SST could therefore be a good predictor, possibly usable for prior assessment of variability and for groundwater forecasting. The primary goal of the third task is to better understand the effects of ENSO climate patterns on GWS in the South Atlantic-Gulf region. Groundwater issues are complex; many studies have focused on groundwater depletion while few have emphasized groundwater rise. This research develops an outline for assessing how climate patterns can affect groundwater fluctuation, which might lead to groundwater rise. The study assessed the effect of ENSO phases on the spatiotemporal variability of groundwater using Spearman rank correlation, and a significant positive correlation between ENSO and GWS was observed. An increasing trend was detected in GWS using the non-parametric Mann-Kendall test, with most trending grids located in Florida. A positive trend magnitude was also detected using Theil-Sen's slope method, with the highest magnitudes in the mid-Florida region. The highest GWS anomalies were observed at the peak of El Niño events and the lowest during La Niña events. Furthermore, most stations were above normal groundwater conditions. This study helps close the research gap between groundwater rise and ENSO.
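The Mann-Kendall test that recurs throughout these tasks is simple to implement. Below is a minimal sketch (our illustration on hypothetical data, not the dissertation's code): the normal-approximation form, without the tie correction a production implementation would add.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (normal approximation, no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs of all pairwise differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)  # continuity correction
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p

# Example: annual sea level anomalies with an upward trend (hypothetical data)
rng = np.random.default_rng(0)
series = 0.3 * np.arange(30) + rng.standard_normal(30)
print(mann_kendall(series))
```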
Styles APA, Harvard, Vancouver, ISO, etc.
24

Pilon, Rémi. « Dynamique du système racinaire de l'écosystème prairial et contribution au bilan de carbone du sol sous changement climatique ». Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00673439.

Texte intégral
Résumé :
In Europe, grasslands cover 25% of the land area, i.e. nearly 40% of the utilized agricultural area. Many ecosystem services depend on this ecosystem, such as forage production, a reservoir of plant and animal diversity, and a capacity to store carbon in soils. In a context of climate change (increase in mean air temperature and in atmospheric CO2 concentration) and agricultural abandonment (extensification of upland grasslands), current research focuses on maintaining ecosystem services such as the soil carbon storage capacity that limits the increase in atmospheric CO2 concentration, forage production, and the conservation of species richness. This thesis aims to observe in situ the effects of the main drivers of climate change (air temperature, precipitation, atmospheric CO2 concentration) on the functioning of the root system and on the determinants of carbon storage in an extensively managed upland permanent grassland. The study considers a climate change scenario projected for 2080 for central France. This scenario (ACCACIA A2) projects an increase in air temperature of 3.5°C (T), an increase in atmospheric CO2 concentration of 200 μmol mol-1 (CO2), and a 20% reduction in summer precipitation (D). The demography (growth, mortality, lifespan and mortality risk) of root cohorts was monitored for 3 to 4 years using a minirhizotron. Potential root growth in an ingrowth core was monitored for one year after 4 years of climate change treatment, together with measurements of root litter decomposition and soil respiration. After 3 years of experimentation, a positive effect of warming (T) and of the full climate change treatment (TDCO2) was observed on root production, along with a reduction in root lifespan under warming. A stimulation of root elongation (length/volume ratio) under a warmed climate (T, TD, TDCO2) probably favoured water and nutrient uptake. However, after 5 years of treatment, warming (T) decreased root production and accelerated the decomposition of a standard litter. Elevated CO2 compensated for the negative effect of warming on root production. Climate change (TDCO2) accelerated both the inputs and the outputs (faster decomposition and respiration) of soil carbon. Owing to the medium-term negative effect of warming on above- and below-ground production and on nutrient demand, organic matter accumulated in the soil, whereas elevated CO2 decreased this amount. Under climate change, root production appears to be partly maintained, as do soil organic matter stocks. Below-ground processes (growth, mortality, decomposition) and soil respiration increased. In the future, the CO2 and overall greenhouse gas balance could turn negative and amplify climate change.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Puzyrova, Polina. « The algorithm for constructing a decomposition matrix of innovative risks : the degree of their influence on innovation potential for integrated business structures in dynamic conditions of modern development ». Thesis, Perfect Publishing, Vancouver, Canada, 2021. https://er.knutd.edu.ua/handle/123456789/19280.

Texte intégral
Résumé :
The essential composition of the innovation risks that affect integrated business structures in the micro- and macro-environment is determined; the conditions for the emergence of innovation risk in the activities of integrated business structures are studied; an algorithm for constructing a consolidated classification matrix of innovation risks for integrated business structures is defined; and a decomposition matrix of innovation risks and the degree of their impact on innovation potential is built, with a subsequent ranking of integrated business structures and business units along the products, technologies, and costs dimensions.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Nosjean, Nicolas. « Management et intégration des risques et incertitudes pour le calcul de volumes de roches et de fluides au sein d’un réservoir, zoom sur quelques techniques clés d’exploration Integrated Post-stack Acoustic Inversion Case Study to Enhance Geological Model Description of Upper Ordovicien Statics : from imaging to interpretation pitfalls and an efficient way to overcome them Improving Upper Ordovician reservoir characterization - an Algerian case study Tracking Fracture Corridors in Tight Gas Reservoirs : An Algerian Case Study Integrated sedimentological case study of glacial Ordovician reservoirs in the Illizi Basin, Algeria A Case Study of a New Time-Depth Conversion Workflow Designed for Optimizing Recovery Proper Systemic Knowledge of Reservoir Volume Uncertainties in Depth Conversion Integration of Fault Location Uncertainty in Time to Depth Conversion Emergence of edge scenarios in uncertainty studies for reservoir trap analysis Enhancing geological model with the use of Spectral Decomposition - A case study of a prolific stratigraphic play in North Viking Graben, Norway Fracture corridor identification through 3D multifocusing to improve well deliverability, an Algerian tight reservoir case study Geological Probability Of Success Assessment for Amplitude-Driven Prospects, A Nile Delta Case Study ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS085.

Texte intégral
Résumé :
En tant que géoscientifique dans le domaine de l’Exploration pétrolière et gazière depuis une vingtaine d’années, mes fonctions professionnelles m’ont permis d’effectuer différents travaux de recherche sur la thématique de la gestion des risques et des incertitudes. Ces travaux de recherche se situent sur l’ensemble de la chaîne d’analyse Exploration, traitant de problématiques liées à l’acquisition et au traitement sismique, jusqu’au placement optimal de forages d’exploration. Un volet plus poussé de mes travaux s’est orienté sur la gestion des incertitudes géophysiques en Exploration pétrolière, là où l’incertitude est la plus importante et paradoxalement la moins travaillée.On peut regrouper mes travaux de recherche en trois grands domaines qui suivent les grandes étapes du processus Exploration : le traitement sismique, leur interprétation, et enfin l'analyse et l'extraction des différentes incertitudes qui vont nous permettre de calculer les volumes d’hydrocarbures en place et récupérables, ainsi que l’analyse de ses risques associés. L’ensemble des travaux de recherche ont été appliqués avec succès sur des cas d’études opérationnelles. Après avoir introduit quelques notions générales et détaillé les grandes étapes du processus Exploration et leur lien direct avec ces problématiques, je présenterai quatre grands projets de recherche sur un cas d’étude algérien
In the last 20 years, I have been conducting various research projects focused on the management of risks and uncertainties in the petroleum exploration domain. The research projects detailed in this thesis deal with problems located throughout the whole Exploration and Production chain, from seismic acquisition and processing to the optimal placement of exploration and development wells. The focus is on geophysical risks and uncertainties, where these problems are the most pronounced and, paradoxically, the least worked on in the industry. My research projects can be subdivided into three main axes, which follow the hydrocarbon exploration process: seismic processing; seismic interpretation through integration with various well information; and finally the analysis and extraction of key uncertainties, which form the basis for the optimal calculation of in-place and recoverable volumes, together with the associated risk analysis on a given target structure. The research projects detailed in this thesis have been applied successfully on operational North Africa and North Sea projects. After introducing risk and uncertainty notions, we detail the exploration process and its key links with these issues. I then present four major research projects, with their theoretical aspects and an applied case study on an Algerian asset.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Dakkoune, Amine. « Méthodes pour l'analyse et la prévention des risques d'emballement thermique Zero-order versus intrinsic kinetics for the determination of the time to maximum rate under adiabatic conditions (TMR_ad) : application to the decomposition of hydrogen peroxide Risk analysis of French chemical industry Fault detection in the green chemical process : application to an exothermic reaction Analysis of thermal runaway events in French chemical industry Early detection and diagnosis of thermal runaway reactions using model-based approaches in batch reactors ». Thesis, Normandie, 2019. http://www.theses.fr/2019NORMIR30.

Texte intégral
Résumé :
L’histoire des événements accidentels dans les industries chimiques montre que leurs conséquences sont souvent graves sur les plans humain, environnemental et économique. Cette thèse vise à proposer une approche de détection et de diagnostic des défauts dans les procédés chimiques afin de prévenir ces événements accidentels. La démarche commence par une étude préalable qui sert à identifier les causes majeures responsables des événements industriels chimiques en se basant sur le retour d’expérience (REX). En France, selon la base de données ARIA, 25% des évènements sont dus à l’emballement thermique à cause d’erreurs d’origine humaine. Il est donc opportun de développer une méthode de détection et de diagnostic précoce des défauts dus à l’emballement thermique. Pour cela nous développons une approche qui utilise des seuils dynamiques pour la détection et la collecte de mesures pour le diagnostic. La localisation des défauts est basée sur une classification des caractéristiques statistiques de la température en fonction de plusieurs modes défectueux. Un ensemble de classificateurs linéaires et de diagrammes de décision binaires indexés par rapport au temps sont utilisés. Enfin, la synthèse de l'acide peroxyformique dans un réacteur discontinu et semi-continu est considérée pour valider la méthode proposée par des simulations numériques et ensuite expérimentales. Les performances de détection de défauts se sont révélées satisfaisantes et les classificateurs ont démontré un taux de séparabilité des défauts élevés
The history of accidental events in the chemical industry shows that their human, environmental, and economic consequences are often serious. This thesis proposes an approach for fault detection and diagnosis in chemical processes in order to prevent such accidental events. A preliminary study identifies the major causes of chemical industrial events based on experience feedback. In France, according to the ARIA database, 25% of events are due to thermal runaway caused by human error. It is therefore appropriate to develop a method for the early detection and diagnosis of faults leading to thermal runaway. For that purpose, we develop an approach that uses dynamic thresholds for detection and collected measurements for diagnosis. Fault isolation is based on a classification of the statistical characteristics of the temperature according to several defective modes. A set of linear classifiers and binary decision diagrams indexed with respect to time is used for that purpose. Finally, the synthesis of peroxyformic acid in batch and semi-batch reactors is considered to validate the proposed method by numerical simulations and then experiments. Fault detection performance proved satisfactory, and the classifiers achieved a high fault isolability rate.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Riahi, Hassen. « Analyse de structures à dimension stochastique élevée : application aux toitures bois sous sollicitation sismique ». Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00881187.

Texte intégral
Résumé :
The problem of high stochastic dimension is recurrent in probabilistic analyses of structures. It corresponds to the exponential increase in the number of evaluations of the mechanical model as the number of uncertain parameters grows. To overcome this difficulty, this thesis proposes a two-step approach. The first step determines the effective stochastic dimension, based on a ranking of the uncertain parameters obtained with screening methods. Once the parameters that dominate the variability of the model response have been identified, they are modelled as random variables and the remaining parameters are fixed at their respective mean values in the stochastic computation itself. That computation is the second step of the proposed approach, in which the dimension decomposition method is used to characterize the randomness of the model response, through the estimation of statistical moments and the construction of the probability density. This approach saves up to 90% of the computing time required by classical stochastic computation methods. It is then used to assess the integrity of the timber-framed roof of a single-family house located on a site with high seismic hazard. In this context, the analysis of the structural behaviour is based on a finite element model in which the timber joints are modelled by an anisotropic law with hysteresis, and the seismic action is represented by eight natural accelerograms provided by BRGM. These accelerograms represent different soil types according to the Eurocode 8 classification. Roof failure is defined as damage, recorded in the joints located on the bracing and anti-buckling elements, reaching a critical level set using test results. Deterministic analyses of the finite element model showed that the roof withstands the seismic hazard of the town of Le Moule in Guadeloupe. The probabilistic analyses showed that, of the 134 random variables representing the uncertainty in the nonlinear behaviour of the joints, only 15 actually contribute to the variability of the mechanical response, which made it possible to reduce the stochastic dimension in the computation of statistical moments. Based on the estimates of the mean and standard deviation, it was shown that the variability of damage in the joints located on the bracing elements is greater than that in the joints located on the anti-buckling elements. Moreover, it is more significant for the signals most damaging to the structure.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Ozcan, Berkay. « The effects of marital transitions and spousal characteristic on economic outcomes ». Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7251.

Texte intégral
Résumé :
My dissertation aims to improve our understanding of why and how couple dynamics and marital transitions affect four critical economic outcomes: household savings, labour supply, transition to self-employment, and income distribution. In all of the papers, the behaviour of the couple is at the center. The first chapter analyzes the likelihood of starting a business and examines the influence of marriage, its duration, and the characteristics of the spouse on the probability of a transition to entrepreneurship. In the second chapter, I take advantage of the Irish divorce law introduced in 1996 as a quasi-natural experiment for the rise in the risk of divorce and explain its effects on household savings behaviour. The third chapter turns its attention to the labour supply behaviour of men and women experiencing a rise in the risk of marital instability. The last paper is likewise concerned with entries into and exits from marriage, but it considers these phenomena together with the rise in female employment; this chapter thus sheds light on the mechanisms through which changes in family types and the labour supply decisions of women lead to higher or lower inequality. Overall, my dissertation covers both substantive and methodological issues in several fields, from inequality research to family demographics and entrepreneurship.
Esta tesis tiene el objetivo de ampliar y perfeccionar nuestra comprensión de por qué y cómo la dinámica de pareja afecta cuatro críticos resultados económicos que están directamente realacionados con la desigualdad y la estratificación. Estos resultados son, respectivamente; ser autónomo, la oferta de trabajo, el ahorro de los hogares y la distribución del ingreso. A lo largo de la tesis, con la dinámica de pareja, concibo dos conceptos: en primer lugar implica formar parte de una pareja (es decir, tener una esposa/o con ciertas características) versus ser soltero/a y transiciones entre estos dos estados. Y la segunda se refiere a los cambios en el comportamiento de los esposos debido a un cambio de contexto, como un aumento en el riesgo de disolución de la pareja. Por consiguiente, analiza las implicaciones de estos dos conceptos en cada una de estas variables económicas. La tesis se utiliza una serie de grandes conjuntos de datos longitudinales de diferentes países (p.e. PSID, GSOEP, PHCE, Living in Ireland Survey) y estratégias econométricas. Estas características incluyen el análisis de supervivencia, las estimaciones de diff-en-diff, simulaciones y descomposiciones.
Styles APA, Harvard, Vancouver, ISO, etc.
30

Карчева, Ганна Тимофіївна, et Hanna Karcheva. « Забезпечення ефективного функціонування та розвитку банківської системи України ». Thesis, Університет банківської справи Національного банку України, 2013. http://ir.kneu.edu.ua/handle/2010/3090.

Texte intégral
Résumé :
Дисертація на здобуття наукового ступеня доктора економічних наук за спеціальністю 08.00.08 - гроші, фінанси і кредит - Університет банківської справи Національного банку України (м. Київ). - К., 2013. У дисертації розроблено та обгрунтовано теоретико-методологічні засади дослідження забезпечення ефективного функціонування та розвитку банківської системи України, що грунтуються на кібернетичному та синергетичному підходах, дотриманні динамічної рівноваги та фінансової стійкості з метою позитивного впливу на розвиток економіки та надання високого рівня банківських послуг населенню та суб’єктам господарювання. Досліджено сучасні підходи до визначення сутності банківської системи та основні її характеристики, функції, структура, визначені теоретичні основи ефективного функціонування та розвитку банківської системи та сучасні концепції її розвитку, забезпечення ефективності, які базуються на розгляді ефективності як інтегральної багатокомпонентної системної характеристики, що залежить від багатьох чинників. Обгрунтована синергетична концепція розвитку банківських систем в умовах біфуркаційного їх розвитку та визначені умови забезпечення синергетичної ефективності та важливість резонансних впливів регулятора для її забезпечечення. Проаналізована багатоваріантність та альтернативність розвитку банківських систем, обгрунтована доцільність подальшого розвитку банківської системи України за інтенсивною ендогенно орієнтованою моделлю, яка дасть змогу підвищити її фінансову стійкість та ефективність. Розроблено методичний підхід до комплексної оцінки ефективності банківської системи України, включаючи декомпозиційний аналіз рентабельності діяльності, оцінку ефективності управління активами і пасивами та процентним ризиком, рекапіталізації банків в України за участю держави з використанням економіко-статистичних методів. Розроблені теоретико-методологічні засади ефективного управління ризиком ліквідності банківської системи з урахуванням внутрішніх та зовнішніх (загальносистемних) чинників, нових вимог Базельського комітету та вдосконалення кількісної оцінки ризику. Проведено аналіз розвитку банківської системи України як багатомірного процесу її удосконалення та модернізації, виділені основні етапи її розвитку відповідно до моделі лінійних стадій розвитку та виділених X. Мінскі трьох режимів фінансування (забезпечене, спекулятивне, Понці-фінансування), визначені основні тенденції функціонування банківської системи у посткризовий період та підвищення її ефективності. Обгрунтовані шляхи забезпечення стабільного розвитку банківської системи, включаючи розроблення та реалізацію стратегії її розвитку, яка б передбачала перехід від екстенсивної до інтенсивної моделі розвитку, удосконалення регулювання діяльності банків з урахуванням міжнародних стандартів.
Doctoral thesis on economics, specialty 08.00.08 - Money, Finance and Credit - University of Banking of the National Bank of Ukraine. - Kyiv, 2013. The thesis develops and substantiates the theoretical and methodological foundations of the effective functioning and development of the banking system of Ukraine, based on cybernetic and synergetic approaches and on maintaining dynamic equilibrium and financial stability, with the aim of supporting economic development and providing high-quality banking services to households and companies. It examines modern approaches to defining the nature of a banking system and its main characteristics, functions, and structure, and defines the theoretical basis of the effective functioning and development of a banking system together with modern concepts of its development and effectiveness, which treat effectiveness as an integral, multicomponent system characteristic dependent on many factors. It substantiates a synergetic concept of the development of banking systems under bifurcation conditions, and defines both the conditions for synergetic effectiveness and the importance of the regulator's resonant influence in ensuring it. It analyses the multivariate and alternative development paths of banking systems and argues for the further development of the banking system of Ukraine along an intensive, endogenously oriented model, which would increase its financial stability and effectiveness. It develops a methodical approach to the comprehensive evaluation of the effectiveness of the banking system of Ukraine, including a decomposition analysis of profitability, an evaluation of the effectiveness of asset-liability and interest rate risk management, and an evaluation of the state recapitalization of banks in Ukraine, using economic and statistical methods. It develops theoretical and methodological principles for the effective management of the liquidity risk of the banking system, taking into account internal and external (system-wide) factors, the new requirements of the Basel Committee, and improved quantitative risk assessment. It analyses the development of the banking system of Ukraine as a multidimensional process of improvement and modernization, identifies the main phases of its development according to the model of linear stages of development and the three financing regimes distinguished by H. Minsky (hedge, speculative, and Ponzi finance), and identifies the main trends in the functioning of the banking system in the post-crisis period and ways of increasing its efficiency. Finally, it substantiates ways of ensuring the stable development of the banking system, including the design and implementation of a development strategy providing for a transition from an extensive to an intensive model of development and for improved regulation of banks in line with international standards.
Диссертация на соискание ученой степени доктора экономических наук по специальности 08.00.08 - деньги, финансы и кредит - Университет банковского дела Национального банка Украины (г. Киев). - К., 2013. Диссертация посвящена разработке теоретико-методологических подходов и практических приемов обеспечения эффективного функционирования и развития банковской системы Украины с целью повышения устойчивости и усиления влияния на социально-экономическое развитие страны, повышения доступности банковских услуг для населения и предприятий. В работе исследованы современные подходы к определению сущности банковской системы и ее эффективности, рассмотрены функции и структура, определены теоретические основы обеспечения эффективного функционирования и развития банковской системы, базирующиеся на соблюдении динамического равновесия и устойчивости, рассмотрении эффективности как интегральной многокомпонентной системной характеристики, зависящей от многих факторов и условий. Для поддержания устойчивости банковской системы важным является наличие определенного «потенциала устойчивости» в виде капитала, ликвидности, доходности, резервов на покрытие рисков. В работе предложено оценивать потенциал устойчивости банковской системы с помощью интегрального показателя, рассчитанного на основе оценки рисков и достаточности капитала для их покрытия. Выделены пять уровней устойчивости банковской системы в зависимости от значений параметров устойчивости, их волатильности и наличия потенциала устойчивости. Доказано, что неустойчивая банковская система, не поддерживающая динамическое равновесие, не может быть эффективной. Разработана синергетическая концепция развития банковских систем в условиях бифуркационного их развития и определены критерии вхождения банковской системы в фазу бифуркации. Предложен метод определения уровня хаотичности в банковской системе Украины и вхождения в фазу бифуркации на основе оценки волатильности текущих пассивов и возможности построения аналитического тренда. Разработан теоретико-методологический поход к исследованию обеспечения эффективного функционирования и развития банковской системы, обоснованы критерии и принципы обеспечения ее эффективности, условия достижения синергетических эффектов, выделены структурные компоненты эффективности и разработана система сбалансированных показателей для оценки всех выделенных видов эффективности, что позволит разработать эффективные механизмы поддержания финансовой устойчивости и обеспечения эффективности банковской системы. В диссертации предложен методический подход к проведению комплексного анализа эффективности банковской системы Украины, включая декомпозиционный анализ рентабельности деятельности, оценку эффективности управления активами и пассивами и процентным риском, рекапитализации банков в Украине с участием государства. Проанализирована многовариантность и альтернативность развития банковских систем и обоснована целесообразность дальнейшего развития банковской системы Украины в соответствии с эндогенно ориентированной моделью, которая будет способствовать повышению ее финансовой устойчивости и эффективности. Проведен анализ развития банковской системы Украины как многомерного процесса ее совершенствования и модернизации, выделены основные этапы ее развития в соответствии с моделью линейных стадий развития и выделенных X. Мински трех режимов финансирования (обеспеченного, спекулятивного, Понци-финансирования), определены основные тенденции развития банковской системы в посткризисный период и пути повышения ее эффективности. 
Разработаны теоретико-метологические основы повышения эффективности управления риском ликвидности банковской системы с учетом внутренних и внешних (общесистемных) факторов, новых требований Базельского комитета и совершенствования количественной оценки риска. Получили дальнейшее развитие методы определения и управления неснижаемым остатком текущих пассивов, предложен методологический подход к определению интегрального показателя оценки риска ликвидности банковской системы - динамического индикатора ликвидности, расчет которого основывается на принципах комплексности, системности, чувствительности к рискам и последствиям принятых решений. Предложен комплекс мероприятий по обеспечению эффективного функционирования и развития банковской системы Украины на основе совершенствования управления активами и пассивами и процентным риском с использованием экономико-статистических моделей, обоснована роль и определены концептуальные подходы к разработке государственной стратегии развития банковской системы Украины до 2020 г., которая должна предусматривать переход от экстенсивной к интенсивной модели развития банковской системы, совершенствование регулирования деятельности банков с учетом международных стандартов, внедрение инноваций, усиление влияния на развитие экономики и повышение доступности банковских услуг для населения и предприятий.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Lee, Tai-Chin, et 李岱瑾. « Decomposition of Risk Assessments for Financial Structured Notes ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/95630339033826002865.

Texte intégral
Résumé :
Master's thesis
Yuan Ze University
Department of Finance
97
In order to accelerate the development of the financial market, the authorities deregulated some rules and launched many new products in recent years. Equity-linked notes and principal-guaranteed notes denominated in NT dollars were deregulated in 2003. In addition, interest rates are relatively low and structured notes are heavily promoted by financial planners. For these reasons, structured notes have become very popular in the financial market. The goal of this paper is to clarify the risk of the structured notes sold in the market and to help investors determine the true risk of each product. We simulate eleven products, price each product via its own pricing strategy, and calculate Value at Risk (VaR) by historical estimation, variance-covariance estimation, and Monte Carlo simulation. We also compute Marginal VaR and Incremental VaR to assess risk. We find that if the underlying asset is a stock, the coefficient of variation of risk is relatively high, which makes risk management difficult. As expected, products with more underlying assets show a more significant diversification effect. However, the risk of main indices in developed countries is not lower than that in emerging countries; this counterintuitive result may be related to the pricing strategy. Risk is also relatively high when the pricing strategy contains only a single option. Interest-rate underlyings seem much more stable than others, yet we find their coefficient of variation of risk to be relatively high as well. Most investors believe that oil-linked products carry higher risk than others; however, we find that the variance of risk is low when the pricing strategy is appropriate. Overall, correctly analyzing a product's pricing strategy and underlying assets helps investors make good decisions.
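As a rough illustration of the three VaR estimators named above, a sketch on hypothetical daily return data (our example, not the thesis's products) might read:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
returns = rng.normal(0.0003, 0.012, 1000)   # hypothetical daily portfolio returns
alpha = 0.99                                # VaR confidence level

# Historical estimation: empirical quantile of the return distribution
var_hist = -np.quantile(returns, 1 - alpha)

# Variance-covariance estimation: assumes normally distributed returns
var_vcov = -(returns.mean() + norm.ppf(1 - alpha) * returns.std(ddof=1))

# Monte Carlo simulation: resimulate from a fitted distribution
sims = rng.normal(returns.mean(), returns.std(ddof=1), 100_000)
var_mc = -np.quantile(sims, 1 - alpha)

print(f"99% VaR -- historical: {var_hist:.4f}, var-cov: {var_vcov:.4f}, MC: {var_mc:.4f}")
```

Marginal and Incremental VaR then measure how this figure responds to small or discrete changes in a position.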
Styles APA, Harvard, Vancouver, ISO, etc.
32

He, Ying active 2013. « Decomposition of multiple attribute preference models ». 2013. http://hdl.handle.net/2152/22980.

Texte intégral
Résumé :
This dissertation consists of three research papers on preference models of decision making, all of which adopt an axiomatic approach in which preference conditions are studied so that the models can be verified by checking their conditions at the behavioral level. The first paper, "Utility Functions Representing Preference over Interdependent Attributes", studies the problem of how to assess a two-attribute utility function when the attributes are interdependent. We consider a situation where the risk aversion on one attribute can be influenced by the level of the other attribute in a two-attribute decision-making problem. In this case, the multilinear utility model (and its special cases, the additive and multiplicative forms) cannot be applied to assess a subject's preference because utility independence does not hold. We propose a family of preference conditions called nth-degree discrete distribution independence that can accommodate a variety of dependencies between the two attributes. The special case of second-degree discrete distribution independence is equivalent to the utility independence condition. Third-degree discrete distribution independence leads to a decomposition formula that contains many other decomposition formulas in the existing literature as special cases. As the decompositions proposed in this research are more general than many existing ones, the study provides a model of preference that has the potential to be used for assessing utility functions more accurately and with relatively little additional effort. The second paper, "On the Axiomatization of the Satiation and Habit Formation Utility Models", studies the axiomatic foundations of the discounted utility model that incorporates both satiation and habit formation in temporal decisions. We propose a preference condition called shifted difference independence to axiomatize a general habit formation and satiation model (GHS). This model allows for a general habit formation and satiation function that contains many functional forms in the literature as special cases. Since the GHS model can be reduced to either a general satiation model (GSa) or a general habit formation model (GHa), our theory also provides approaches to axiomatizing both the GSa and the GHa models. Furthermore, by adding extra preference conditions to our axiomatization framework, we obtain a GHS model with a linear habit formation function and a recursively defined linear satiation function. In the third paper, "Hope, Dread, Disappointment, and Elation from Anticipation in Decision Making", we propose a model that incorporates both anticipation and disappointment into decision making, where we define hope as anticipating a gain and dread as anticipating a loss. In this model, the anticipation for a lottery is a subjectively chosen outcome that influences the decision maker's reference point. The decision maker experiences elation or disappointment when she compares the received outcome with the anticipated outcome. This model captures the trade-off between a utility gain from higher anticipation and a utility loss from higher disappointment. We show that our model contains some existing decision models, including disappointment models, as special cases. We also use our model to explore how a person's attitude toward the future, either optimistic or pessimistic, could mediate the wealth effect on her risk attitude.
Finally, we show that our model can be applied to explain the coexistence of a demand for gambling and insurance and provides unique insights into portfolio choice and advertising decision problems.
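For reference, the two-attribute multilinear utility model that the first paper generalizes can be written, in standard multiattribute-utility notation (ours, not the dissertation's), as

\[ u(x, y) = k_x\,u_x(x) + k_y\,u_y(y) + k_{xy}\,u_x(x)\,u_y(y), \]

with the additive form recovered when \(k_{xy} = 0\). Mutual utility independence is what licenses this decomposition, which is why weaker conditions such as nth-degree discrete distribution independence are needed when it fails.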
Styles APA, Harvard, Vancouver, ISO, etc.
33

« Decomposition of the market risk : listed location and operation location ». 2005. http://library.cuhk.edu.hk/record=b5892590.

Texte intégral
Résumé :
Mok Ka Ming.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaf 31).
Abstracts in English and Chinese.
Chapter I --- Introduction --- p.1
Chapter II --- Data Description --- p.4
Chapter III --- Market risks for stocks --- p.6
Chapter 1. --- Listing Location --- p.7
Chapter 2. --- Operation Location --- p.9
Chapter 3. --- Measurements --- p.10
Chapter IV --- The Model --- p.13
Chapter V --- Empirical Results --- p.16
Chapter 1. --- Summary statistics --- p.16
Chapter 2. --- Diagnostics Test --- p.17
Chapter 3. --- The co-efficient --- p.18
Chapter 4. --- Comparing the result with US dollar-denominated returns --- p.21
Chapter VI --- Sub-period analysis --- p.26
Chapter VII --- Market analysis --- p.29
Chapter VIII --- Industrial analysis --- p.31
Chapter IX --- Conclusion --- p.35
Chapter X --- References --- p.37
Chapter XI --- Appendix --- p.39
Styles APA, Harvard, Vancouver, ISO, etc.
34

Cotton, Tanisha Green. « Computational Study of Mean-Risk Stochastic Programs ». Thesis, 2013. http://hdl.handle.net/1969.1/149619.

Texte intégral
Résumé :
Mean-risk stochastic programs model uncertainty by including risk measures in the objective function. This allows risk averseness to be modeled for many problems in science and engineering. This dissertation addresses gaps in the literature on stochastic programs with mean-risk objectives, including the need for a computational study of the few available algorithms for this class of problems. The study implements and empirically investigates decomposition algorithms for stochastic linear programs with absolute semideviation (ASD) and quantile deviation (QDEV) as mean-risk measures. Specifically, the goals of the study were to analyze, for specific instances, how the algorithms perform across different levels of risk, to investigate the effect of using ASD and QDEV as risk measures, and to understand when it is appropriate to use the risk-averse approach over the risk-neutral one. We derive two new subgradient-based algorithms for the ASD and QDEV models, respectively. These algorithms are based on decomposing the stochastic program stage-wise and using a single (aggregated) cut in the master program to approximate the mean and deviation terms of the mean-risk objective function. We also consider a variant of each algorithm from the literature in which the mean-risk objective function is approximated by separate optimality cuts, one for the mean and one for the deviation term. These algorithms are implemented and applied to standard stochastic programming test instances to study their comparative performance. The aggregated-cut and separate-cut algorithms have comparable computational performance for ASD, while the separate-cut algorithm outperforms its aggregated counterpart for QDEV. The computational study also reveals several insights into mean-risk stochastic linear programs. For example, the results show that for most standard test instances the risk-neutral approach is still appropriate; we show that this is because the test instances have random variables with uniform marginal distributions. In contrast, when these distributions are changed to be non-uniform, the risk-averse approach is preferred. The results also show that the QDEV mean-risk measure offers broader flexibility than ASD in modeling risk.
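For concreteness, one common form of the mean-ASD objective on a finite scenario set (the Ogryczak-Ruszczynski upper-semideviation form for costs; a sketch in our own notation, not the dissertation's code) can be computed as follows:

```python
import numpy as np

def mean_asd(costs, probs, lam):
    """Mean-risk objective E[X] + lam * ASD(X) for scenario costs X, where
    ASD(X) = E[max(X - E[X], 0)] is the absolute (upper) semideviation.
    Keeping lam in [0, 1] preserves consistency with second-order
    stochastic dominance in the Ogryczak-Ruszczynski sense."""
    mean = float(np.dot(probs, costs))
    asd = float(np.dot(probs, np.maximum(costs - mean, 0.0)))
    return mean + lam * asd

costs = np.array([90.0, 100.0, 140.0])   # hypothetical second-stage costs
probs = np.array([0.3, 0.5, 0.2])
print(mean_asd(costs, probs, lam=0.5))   # 105 + 0.5 * 7 = 108.5
```

In the decomposition algorithms discussed above, the optimality cuts approximate exactly this kind of objective: either a single aggregated cut for the whole expression, or separate cuts for the mean and deviation terms.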
Styles APA, Harvard, Vancouver, ISO, etc.
35

Jiang, Yun. « Decomposition, Ignition and Flame Spread on Furnishing Materials ». 2006. http://eprints.vu.edu.au/481/1/481contents.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Lee, Chung-Yi, et 李仲益. « Private Placements of Equity and Systematic Risk – Application of the Beta Decomposition Model ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/ck73xf.

Texte intégral
Résumé :
Master's thesis
National Chung Hsing University
Department of Finance
102
The existing literature finds that firms perform poorly after private placements, which has been explained by investor overoptimism. Following Campbell and Vuolteenaho (2004), this study uses the two-beta model, which splits market beta into a cash-flow beta and a discount-rate beta, to investigate both issues. Cash-flow beta represents the risk of future investment opportunities, and discount-rate beta represents a company's sensitivity to the market discount rate. The results show that firms with a low cash-flow beta have poor long-run performance, implying that firms with low sensitivity to cash flows are likely to perform poorly following private placements. Further, the negative relation between discount-rate beta and long-run performance indicates that investors are prone to be overoptimistic about high discount-rate-beta firms.
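In the Campbell and Vuolteenaho (2004) framework used here, the market beta of stock \(i\) splits into the two components (standard definitions from that literature, sketched in our notation):

\[
\beta_{i,CF} = \frac{\operatorname{Cov}(r_i,\, N_{CF})}{\operatorname{Var}(N_{CF} - N_{DR})}, \qquad
\beta_{i,DR} = \frac{\operatorname{Cov}(r_i,\, -N_{DR})}{\operatorname{Var}(N_{CF} - N_{DR})}, \qquad
\beta_{i,M} = \beta_{i,CF} + \beta_{i,DR},
\]

where \(N_{CF}\) and \(N_{DR}\) are the market's cash-flow news and discount-rate news, typically extracted from a vector autoregression.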
Styles APA, Harvard, Vancouver, ISO, etc.
37

Lai, Ying-chu, et 賴映竹. « Applying Spectral Decomposition in Value-at-Risk : Empirical Evidence of Taiwan Stock Market ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/96514113920472666338.

Texte intégral
Résumé :
Master's thesis
National Central University
Graduate Institute of Finance
95
As more and more financial products have been invented, investors have access to a greater variety of payoffs and hedging possibilities. As a result, controlling financial risk has become increasingly important. Monte Carlo simulation is the most widely used method to compute value-at-risk. The most important step in computing the value-at-risk of an asset portfolio is to factor the correlation matrix so that it can be applied in the Monte Carlo simulation. Generally, the Cholesky decomposition is used to decompose the correlation matrix. However, the Cholesky decomposition has limitations: it cannot be applied to a correlation matrix that is not positive definite. Under this circumstance, we may adopt the spectral decomposition instead. This paper shows the effectiveness of the spectral decomposition when facing a non-positive-definite correlation matrix. Having fewer limitations, the spectral decomposition can be more widely used than the Cholesky decomposition.
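A small sketch of the failure mode and the remedy described here (hypothetical matrix; clipping negative eigenvalues at zero is one common way to use the spectral factors):

```python
import numpy as np

# A hypothetical "correlation" matrix that is not positive semi-definite,
# e.g. assembled from pairwise estimates computed on different samples
C = np.array([[ 1.0,  0.9, -0.9],
              [ 0.9,  1.0,  0.9],
              [-0.9,  0.9,  1.0]])

try:
    np.linalg.cholesky(C)                 # fails: C is not positive definite
except np.linalg.LinAlgError as err:
    print("Cholesky failed:", err)

# Spectral decomposition: C = V diag(w) V'; clip negative eigenvalues at zero
w, V = np.linalg.eigh(C)
B = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))   # B @ B.T approximates C

z = np.random.default_rng(1).standard_normal((3, 100_000))
x = B @ z                                 # correlated draws for Monte Carlo VaR
print(np.corrcoef(x))                     # close to the clipped version of C
```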
Styles APA, Harvard, Vancouver, ISO, etc.
38

Lin, Geng-Xian, et 林耕賢. « Causal effect decomposition and risk attribution - unification of counterfactual and sufficient cause framework ». Thesis, 2019. http://ndltd.ncl.edu.tw/handle/uvf4rx.

Texte intégral
Résumé :
Master's thesis
National Chiao Tung University
Institute of Statistics
107
This study unifies the counterfactual framework and the sufficient-cause framework through effect decomposition and attribution fractions. It builds on the SCC model, which is logically finer, and the counterfactual model, which can quantify causality. The effect decomposition splits the effect of exposure A on the outcome under the mSCC model into six pathways: DEs, Ago, MEs, Syns, Synmed, and Agomed. We define the effects and mechanisms of the six pathways and estimate the effects under the needed assumptions. The attribution fraction decomposes the total effect on outcome Y into nine pathways: ARn, PAE, ARm, PME, SYNref, SYNmed, AGONref, AGONmed, and AEagon. We calculate the proportion of every pathway, define the corresponding effects and mechanisms, and estimate the effects under the needed assumptions. This study also links to other causal inference methodologies: the effect decomposition links to VanderWeele's four-way decomposition, and the attribution fraction links to the six-way decomposition. We use the bootstrap method to validate the proposed methods, and the simulation results support their validity. Finally, we use an HCC, HBV, and HCV dataset to demonstrate the methodology.
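For orientation, the VanderWeele four-way decomposition mentioned above splits a total effect as (standard notation, not the thesis's)

\[ TE = CDE + INT_{ref} + INT_{med} + PIE, \]

that is, a controlled direct effect, a reference interaction, a mediated interaction, and a pure indirect effect; the six and nine pathways proposed in the thesis appear to refine these terms with agonism components.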
Styles APA, Harvard, Vancouver, ISO, etc.
39

Jiang, Yun. « Decomposition, Ignition and Flame Spread on Furnishing Materials ». Thesis, 2006. https://vuir.vu.edu.au/481/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
40

Atemnkeng, Tabi Rosy Christy. « Estimation of Longevity Risk and Mortality Modelling ». Master's thesis, 2022. http://hdl.handle.net/10362/135573.

Texte intégral
Résumé :
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management
Previous mortality models failed to account for improvements in human mortality rates, so human life expectancy was generally underestimated. Declining mortality and increasing life expectancy (longevity) profoundly alter the population age distribution. This demographic transition has received considerable attention from pension and annuity providers, and concerns have been expressed about the implications of increased life expectancy for government spending on old-age support. The goal of this paper is to lay out a framework for measuring, understanding, and analyzing longevity risk, with a focus on defined-benefit pension plans. Lee and Carter proposed a widely used mortality forecasting model in 1992. The study examines how well the Lee-Carter model performs for the female and male populations of the selected country (France) from 1816 to 2018. The Singular Value Decomposition (SVD) method is used to estimate the parameters of the LC model. The resulting mortality tables are then used to assess future improvements in mortality and life expectancy, under stated mortality assumptions, to see whether pension funds and annuity providers are exposed to longevity risk. Mortality assumptions are predicted death rates based on a mortality table; the two types considered are mortality at birth and mortality at old age. Longevity risk must be effectively managed by pension and annuity providers: since mortality rates tend to decrease over time, providers must factor future improvements in mortality and life expectancy into their calculations. The findings show that failing to account for future improvements in mortality results in an expected shortfall of provisions. Protection mechanisms and policy recommendations for managing longevity risk can help mitigate the financial impact of an unexpected increase in longevity.
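A compact sketch of the SVD estimation step described above, on hypothetical rates and with the usual identification constraints (our illustration, not the thesis's code):

```python
import numpy as np

# Hypothetical log central death rates: rows = ages, columns = years
rng = np.random.default_rng(0)
ages, years = 20, 60
log_m = (-8.0 + 0.09 * np.arange(ages)[:, None]
         - 0.012 * np.arange(years)[None, :]
         + 0.02 * rng.standard_normal((ages, years)))

# Lee-Carter model: log m(x,t) = a_x + b_x * k_t + eps_{x,t}
a = log_m.mean(axis=1)                         # a_x: average age pattern
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b, k = U[:, 0], s[0] * Vt[0]                   # rank-1 factors from the SVD

# Usual identification constraints: sum(b_x) = 1 and sum(k_t) = 0
b, k = b / b.sum(), k * b.sum()
a, k = a + b * k.mean(), k - k.mean()

# Forecast k_t as a random walk with drift, the standard Lee-Carter choice
drift = (k[-1] - k[0]) / (len(k) - 1)
k_future = k[-1] + drift * np.arange(1, 21)
log_m_future = a[:, None] + np.outer(b, k_future)
```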
Styles APA, Harvard, Vancouver, ISO, etc.
41

Wang, Sheng Yuan, et 王聖元. « The Pricing, Credit Risk Decomposition and Hedging Analysis of CPDO Under The Jump Diffusion Model ». Thesis, 2011. http://ndltd.ncl.edu.tw/handle/01659379509792560049.

Texte intégral
Résumé :
Master's thesis
National Chengchi University
Graduate Institute of Money and Banking
99
The increasing trading volumes and innovative structures of credit derivatives have attracted great academic attention to the quantification and analysis of their complex risk characteristics. The pricing and hedging of complex credit structures after the financial crisis are especially vital issues, and they present great challenges to both the academic community and industry practitioners. Constant Proportion Debt Obligations (CPDOs) are one of the credit innovations that claimed to provide risk-averse investors with fixed-income cash flows and minimal risk-bearing, yet the cash-out events of such products during the crisis revealed risk characteristics that investors had not seen. This research focuses on the pricing, risk quantification, and dynamic hedging of CPDOs under a Lévy jump-diffusion setting. By decomposing the product's risk structure, we derive explicit closed-form solutions in the form of time-dependent double digital knock-out barrier options. This enables us to explore, in terms of the associated hedging Greeks, the embedded risk characteristics of CPDOs and to propose delta-neutral strategies that are feasible for hedging such products. Numerical simulations are subsequently performed to provide benchmark measures for the proposed hedging strategies.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Jootar, Jay, et Steven D. Eppinger. « A System Architecture-based Model for Planning Iterative Development Processes : General Model Formulation and Analysis of Special Cases ». 2002. http://hdl.handle.net/1721.1/4042.

Texte intégral
Résumé :
The development process for a complex system is typically iterative in nature. Among the critical decisions in managing such a process is how to partition the system development into iterations. This paper proposes a mathematical model that captures the dynamics of such an iterative process. The analysis of two special cases of the model provides insight into how this decision should be made.
Singapore-MIT Alliance (SMA)
Styles APA, Harvard, Vancouver, ISO, etc.
43

Li, Yuan-Hung, et 李沅泓. « Novel Weather Generator Using Empirical Mode Decomposition and K-NN Moving Window Method and Application to Climate Change Risk Assessment of Hsinchu Water Supply System ». Thesis, 2015. http://ndltd.ncl.edu.tw/handle/46885504872403627354.

Texte intégral
Résumé :
Master's thesis
National Taiwan University
Graduate Institute of Bioenvironmental Systems Engineering
103
Climate change has drawn much attention. However, not only human-induced climate change but also natural low-frequency climate variability should be considered. Weather generators are often used to produce weather data for climate change risk assessment; a novel weather generator therefore needs to be developed that can both produce weather data based on climate scenarios and reproduce low-frequency climate characteristics. The main purposes of this study are to develop such a weather generator, preserving the low-frequency climate characteristics of the rainfall data, and to apply the generated data to evaluate the climate change risk of the Hsinchu water supply system. The first part of this study develops the novel weather generator, which works in two steps. The first step applies Empirical Mode Decomposition (EMD) to decompose the time series of historical monthly rainfall into Intrinsic Mode Functions (IMFs) and a trend, each IMF representing a generally simple component of the rainfall series. During generation, the envelope curves and the monthly phases derived from the IMFs are used to form new IMFs, which are then assembled with the trend to generate a monthly rainfall series. The second step downscales the generated monthly rainfall into a daily series: once the monthly rainfall amounts have been produced, daily amounts are allocated from the monthly totals by a k-NN moving window method. The newly simulated daily rainfall series thus retains long-term, low-frequency climate characteristics. Furthermore, weather parameters of nearby stations can also be produced by the k-NN method. The statistical characteristics of the generated monthly and daily weather data, such as the mean, standard deviation, and autoregressive coefficient, must be comparable to the historical ones. Once this similarity is confirmed, GCM scenarios can be adopted by imposing their statistical properties on the historical data and repeating all the steps to produce future weather data. The second part of this study evaluates the climate change risk of a water supply system. The weather data generated by the novel weather generator are used as inputs to the GWLF model to simulate stream flows in the Hsinchu area. The simulated stream flows are then used in a water supply system dynamics model to analyze the risk of water deficits. This model considers current hydraulic facilities, different climate scenarios, hydrological conditions, and social and economic development. The study introduces several evaluation indices and climate scenarios to assess future risks of the Hsinchu water supply system. Finally, the results are compared with those obtained using Richardson-type generated weather data to identify the contribution of the proposed weather generator.
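A rough sketch of the decomposition-and-reassembly idea (assuming the third-party PyEMD package, published on PyPI as "EMD-signal", and hypothetical data; the thesis's actual generator works with envelope curves and month-by-month phases):

```python
import numpy as np
from PyEMD import EMD   # third-party EMD-signal package, assumed installed

rng = np.random.default_rng(0)
t = np.arange(240)                                  # 20 years of monthly data
rain = (80.0 + 30.0 * np.sin(2 * np.pi * t / 12)    # annual cycle
        + 10.0 * np.sin(2 * np.pi * t / 70)         # low-frequency oscillation
        + 15.0 * rng.standard_normal(t.size))       # noise (hypothetical data)

emd = EMD()
emd.emd(rain)
imfs, trend = emd.get_imfs_and_residue()            # oscillatory IMFs + trend

# Illustrative generation step: perturb the phase of each oscillatory
# component, then reassemble with the trend to obtain a synthetic series
# that keeps the low-frequency structure of the original record.
synthetic = trend.copy()
for imf in imfs:
    synthetic += np.roll(imf, rng.integers(0, t.size))
```

The k-NN moving window step then disaggregates each synthetic monthly total into daily amounts, presumably by resampling historical daily patterns from nearby calendar windows.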
Styles APA, Harvard, Vancouver, ISO, etc.
44

Ha, Hongjun. « Essays on Computational Problems in Insurance ». 2016. http://scholarworks.gsu.edu/rmi_diss/40.

Texte intégral
Résumé :
This dissertation consists of two chapters. The first chapter establishes an algorithm for calculating capital requirements. The calculation of capital requirements for financial institutions usually entails a reevaluation of the company's assets and liabilities at some future point in time for a (large) number of stochastic forecasts of economic and firm-specific variables. The complexity of this nested valuation problem leads many companies to struggle with the implementation. The current chapter proposes and analyzes a novel approach to this computational problem based on least-squares regression and Monte Carlo simulations. Our approach is motivated by a well-known method for pricing non-European derivatives. We study convergence of the algorithm and analyze the resulting estimate for practically important risk measures. Moreover, we address the problem of how to choose the regressors, and show that an optimal choice is given by the left singular functions of the corresponding valuation operator. Our numerical examples demonstrate that the algorithm can produce accurate results at relatively low computational costs, particularly when relying on the optimal basis functions. The second chapter discusses another application of regression-based methods, in the context of pricing variable annuities. Advanced life insurance products with exercise-dependent financial guarantees present challenging problems in view of pricing and risk management. In particular, due to the complexity of the guarantees and since practical valuation frameworks include a variety of stochastic risk factors, conventional methods that are based on the discretization of the underlying (Markov) state space may not be feasible. As a practical alternative, this chapter explores the applicability of Least-Squares Monte Carlo (LSM) methods familiar from American option pricing in this context. Unlike previous literature we consider optionality beyond surrendering the contract, where we focus on popular withdrawal benefits - so-called GMWBs - within Variable Annuities. We introduce different LSM variants, particularly the regression-now and regression-later approaches, and explore their viability and potential pitfalls. We commence our numerical analysis in a basic Black-Scholes framework, where we compare the LSM results to those from a discretization approach. We then extend the model to include various relevant risk factors and compare the results to those from the basic framework.
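The regression step at the heart of LSM can be sketched as follows: estimate a conditional expectation at an intermediate date by regressing simulated discounted payoffs on basis functions of the simulated state (a toy regress-now example under Black-Scholes with a hypothetical polynomial basis, not the chapter's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, S0, K, r, sigma, T, t = 100_000, 100.0, 100.0, 0.02, 0.2, 1.0, 0.5

# Simulate the state at time t and the payoff at maturity T (Black-Scholes)
z1 = rng.standard_normal(n)
St = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z1)
z2 = rng.standard_normal(n)
ST = St * np.exp((r - 0.5 * sigma**2) * (T - t) + sigma * np.sqrt(T - t) * z2)
payoff = np.exp(-r * (T - t)) * np.maximum(ST - K, 0.0)  # discounted call payoff

# "Regress-now": project the payoff on a polynomial basis in the time-t state
X = np.vander(St / S0, 4)                    # basis 1, x, x^2, x^3 (reversed)
coef, *_ = np.linalg.lstsq(X, payoff, rcond=None)
cond_value = X @ coef                        # estimated time-t continuation values
```

A regress-later variant would instead regress the payoff on basis functions of the time-T state and then take closed-form conditional expectations of those basis functions.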
Styles APA, Harvard, Vancouver, ISO, etc.
45

Ondo, Guy-Roger Abessolo. « Mathematical methods for portfolio management ». Diss., 2002. http://hdl.handle.net/10500/784.

Texte intégral
Résumé :
Portfolio Management is the process of allocating an investor's wealth to investment opportunities over a given planning period. Not only should Portfolio Management be treated within a multi-period framework, but one should also take into consideration the stochastic nature of related parameters. After a short review of key concepts from Finance Theory, e.g. utility function, risk attitude, Value-at-Risk estimation methods, and mean-variance efficiency, this work describes a framework for the formulation of the Portfolio Management problem in a Stochastic Programming setting. Classical solution techniques for the resolution of the resulting Stochastic Programs (e.g. L-shaped Decomposition, approximation of the probability function) are presented. These are discussed within both the two-stage and the multi-stage case with a special emphasis on the former. A description of how Importance Sampling and EVPI are used to improve the efficiency of classical methods is presented. Postoptimality Analysis, a sensitivity analysis method, is also described.
Statistics
M. Sc. (Operations Research)
Styles APA, Harvard, Vancouver, ISO, etc.
46

Anjos, Helena Maria Chaves. « What drives insurance companies´ stock returns ? The impact of new rules and regulation ». Master's thesis, 2015. http://hdl.handle.net/10362/30148.

Texte intégral
Résumé :
The contribution of this dissertation is to examine financial market efficiency through insurance companies' stock performance, focusing on the main drivers of expected returns and the impact of new rules and regulation. The methodology is based on the return decomposition model. The insurance sector is represented by four major players, and the analysis covers a period of financial turbulence and economic recovery. A distinction is made between large and small players: "return news" emerges as the main driver of stock returns for the former, and "cash-flow news" for the latter. This evidence is supported by a vector autoregressive model and an impulse response analysis. These findings represent a major challenge for the sector in terms of risk management, strategy setting, and the supervision process, given their impact on the market and equity risk modules under the Solvency II regime, with its risk-based approaches and capital adequacy framework. This is especially true given the importance of large players, in case of distress, for systemic risk and the real economy, and the low expressiveness of small players' listed securities in peripheral countries where new capital sources are becoming available with the upcoming European capital markets union.
Styles APA, Harvard, Vancouver, ISO, etc.