Dissertations / Theses on the topic 'Conditional systemic risk measure'




Consult the top 16 dissertations / theses for your research on the topic 'Conditional systemic risk measure.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

DOLDI, ALESSANDRO. "EQUILIBRIUM, SYSTEMIC RISK MEASURES AND OPTIMAL TRANSPORT: A CONVEX DUALITY APPROACH." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/812668.

Full text
Abstract:
This Thesis focuses on two main topics. Firstly, we introduce and analyze the novel concept of Systemic Optimal Risk Transfer Equilibrium (SORTE), and we progressively generalize it (i) to a multivariate setup and (ii) to a dynamic (conditional) setting. Additionally, we investigate its relation to the recently introduced concept of Systemic Risk Measures (SRM). We present Conditional Systemic Risk Measures and study their properties, dual representation and possible interpretations of the associated allocations as equilibria in the sense of SORTE. On a parallel line of work, we develop a duality for the Entropy Martingale Optimal Transport problem and provide applications to problems of nonlinear pricing-hedging. The mathematical techniques we exploit are mainly borrowed from functional and convex analysis, as well as probability theory. More specifically, apart from a wide range of classical results from functional analysis, we extensively rely on Fenchel-Moreau-Rockafellar type conjugacy results, Minimax Theorems, the theory of Orlicz spaces, and compactness results in the spirit of the Komlós Theorem. At the same time, mathematical results concerning utility maximization theory (existence of optima for primal and dual problems, just to mention an example) and optimal transport theory are widely exploited. The notion of SORTE is inspired by Bühlmann's classical Equilibrium Risk Exchange (H. Bühlmann, "The general economic premium principle", Astin Bulletin, 1984). In both the Bühlmann and the SORTE definition, each agent behaves rationally by maximizing his/her expected utility given a budget constraint. The two approaches differ in the budget constraints. In Bühlmann's definition the vector that assigns the budget constraint is given a priori. In the SORTE approach, on the contrary, the budget constraint is endogenously determined by solving a systemic utility maximization problem. SORTE gives priority to the systemic aspects of the problem, in order to first optimize the overall systemic performance, rather than to individual rationality. Single agents' preferences are, however, taken into account by the presence of individual optimization problems. The two aspects are simultaneously considered via an optimization problem for a value function given by the summation of single agents' utilities. After providing a financial and theoretical justification for this new idea, sufficiently general assumptions that guarantee existence, uniqueness, and Pareto optimality of such a SORTE are presented. Having laid the theoretical foundation for the newly introduced SORTE, this Thesis proceeds to extend such a notion to the case in which the value function to be optimized has two components: one is the sum of the single agents' utility functions, as in the aforementioned case of SORTE; the other consists of a truly systemic component. This marks the progress from SORTE to Multivariate Systemic Optimal Risk Transfer Equilibrium (mSORTE). Technically, the extension of SORTE to the new setup requires developing a theory for multivariate utility functions and selecting at the same time a suitable framework for the duality theory. Conceptually, this more general setting allows us to introduce and study a Nash Equilibrium property of the optimizers. Existence, uniqueness, Pareto optimality and the Nash Equilibrium property of the newly defined mSORTE are proved in this Thesis.
Additionally, it is shown that mSORTE is in fact a proper generalization of SORTE, covering it from both the conceptual and the mathematical point of view. Proceeding further in the analysis, the relations between the concepts of mSORTE and SRM are investigated in this work. The notion of SRM we start from was introduced in the papers "A unified approach to systemic risk measures via acceptance sets" (Math. Finance, 2019) and "On fairness of systemic risk measures" (Finance Stoch., 2020) by F. Biagini, J.-P. Fouque, M. Frittelli, and T. Meyer-Brandis. The SRM of Biagini et al. are generalized in this Thesis to a dynamic (namely conditional) setting, also adding a systemic, multivariate term to the threshold functions that Biagini et al. consider in their papers. The dynamic version of mSORTE is introduced, and it is proved that the optimal allocations of dynamic SRM, together with the corresponding fair pricing measures, yield a dynamic mSORTE. This in particular remains true if conditioning is taken with respect to the trivial sigma algebra, which is tantamount to working in the non-dynamic setting covered in Biagini et al. for SRM, and in the previous parts of our work for mSORTE. The case of exponential utility functions is thoroughly examined, and the explicit formulas we obtain for this specific choice of threshold functions allow us to establish a time consistency property for allocations, dynamic SRM and dynamic mSORTE. The last part of this Thesis is devoted to a conceptually separate topic. Nonetheless, a clear mathematical link between the previous work and the one we are about to describe is established by the use of common techniques. A duality between a novel Entropy Martingale Optimal Transport (EMOT) problem (D) and an associated optimization problem (P) is developed. In (D) the approach taken in Liero et al. (M. Liero, A. Mielke, and G. Savaré, "Optimal entropy-transport problems and a new Hellinger-Kantorovich distance between positive measures", Inventiones mathematicae, 2018) serves as a basis for adding the constraint, typical of Martingale Optimal Transport (MOT) theory, that the infimum of the cost functional is taken over martingale probability measures, instead of finite positive measures, as in Liero et al. The Problem (D) differs from the corresponding problem in Liero et al. not only by the martingale constraint, but also because we admit less restrictive penalization terms D, which may not have a divergence formulation. In Problem (P) the objective functional, associated via Fenchel conjugacy to the terms D, is no longer linear, as it is in Optimal Transport or in MOT. This leads to a novel optimization problem which also has a clear financial interpretation as a nonlinear subhedging value. Our results in this Thesis establish a novel nonlinear robust pricing-hedging duality in financial mathematics, which covers a wide range of known robust results in its generality. The research for this Thesis resulted in the production of the following works: F. Biagini, A. Doldi, J.-P. Fouque, M. Frittelli, and T. Meyer-Brandis, "Systemic optimal risk transfer equilibrium", Mathematics and Financial Economics, 2021; A. Doldi and M. Frittelli, "Multivariate Systemic Optimal Risk Transfer Equilibrium", Preprint: arXiv:1912.12226, 2019; A. Doldi and M. Frittelli, "Conditional Systemic Risk Measures", Preprint: arXiv:2010.11515, 2020; A. Doldi and M. Frittelli, "Entropy Martingale Optimal Transport and Nonlinear Pricing-Hedging Duality", Preprint: arXiv:2005.12572, 2020.
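To make the exponential-utility case above slightly more tangible, here is a minimal numerical sketch of a static systemic risk measure of the Biagini et al. type with deterministic cash allocations, rho(X) = inf{ sum_i y_i : sum_i E[u_i(X_i + y_i)] >= B }, solved by generic optimization on simulated data. All utilities, parameters and the acceptance level B are illustrative assumptions; the dynamic/conditional constructions and equilibria of the thesis are not reproduced.

```python
# A minimal numerical sketch (not the thesis's construction): a static systemic risk
# measure rho(X) = inf { sum_i y_i : sum_i E[u_i(X_i + y_i)] >= B } with deterministic
# cash allocations y and exponential utilities u_i(x) = -exp(-a_i x)/a_i, solved on
# simulated scenarios with scipy. All parameter values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_banks, n_scen = 3, 50_000
alphas = np.array([1.0, 1.5, 2.0])                   # risk aversions a_i (assumed)
X = rng.normal(loc=[0.0, -0.5, 0.2], scale=[1.0, 0.8, 0.6], size=(n_scen, n_banks))

def expected_utility(y):
    # sum_i E[ -exp(-a_i (X_i + y_i)) / a_i ], estimated by Monte Carlo
    return np.mean(-np.exp(-alphas * (X + y)) / alphas, axis=0).sum()

B = expected_utility(np.zeros(n_banks)) + 0.5        # acceptance threshold (illustrative)

res = minimize(
    lambda y: y.sum(),                               # total capital injected
    x0=np.ones(n_banks),
    constraints=[{"type": "ineq", "fun": lambda y: expected_utility(y) - B}],
)
print("allocation y*:", res.x, "  rho(X) ~", res.x.sum())
```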
APA, Harvard, Vancouver, ISO, and other styles
2

Hoffmann, Hannes [Verfasser], and Thilo [Akademischer Betreuer] Meyer-Brandis. "Multivariate conditional risk measures : with a view towards systemic risk in financial networks / Hannes Hoffmann ; Betreuer: Thilo Meyer-Brandis." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2017. http://d-nb.info/1137835222/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bjarnadottir, Frida. "Implementation of CoVaR, A Measure for Systemic Risk." Thesis, KTH, Matematik (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102684.

Full text
Abstract:
In recent years we have witnessed how distress can spread quickly through the financial system and threaten financial stability. Hence there has been increased focus on developing systemic risk indicators that can be used by central banks and others as a monitoring tool. For Sveriges Riksbank it is of great value to be able to quantify the risks that can threaten the Swedish financial system, and CoVaR is a systemic risk measure implemented here with that purpose. CoVaR, which stands for conditional Value at Risk, measures a financial institution's contribution to systemic risk and its contribution to the risk of other financial institutions. The conclusion is that CoVaR, together with other systemic risk indicators, can help provide a better understanding of the risks threatening the stability of the Swedish financial system.
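As a rough illustration of the measure discussed above, the following sketch estimates CoVaR and ΔCoVaR by quantile regression in the spirit of Adrian and Brunnermeier. The 5% level, the variable names and the simulated return series are assumptions, not the thesis's Swedish bank data or exact specification.

```python
# A compact sketch of the quantile-regression estimator of CoVaR / DeltaCoVaR;
# `inst` and `system` are hypothetical return series (a bank's equity returns and a
# financial-system index), q = 0.05 by assumption.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def delta_covar(inst, system, q=0.05):
    X = sm.add_constant(inst)                 # regressors: [1, institution return]
    res = QuantReg(system, X).fit(q=q)        # q-quantile of system return given inst
    a, b = res.params
    var_q = np.quantile(inst, q)              # institution's VaR (return quantile)
    var_med = np.quantile(inst, 0.5)
    covar = a + b * var_q                     # CoVaR: system VaR given inst at its VaR
    return covar, b * (var_q - var_med)       # DeltaCoVaR

rng = np.random.default_rng(1)
inst = rng.standard_t(5, 2000) * 0.02
system = 0.5 * inst + rng.standard_t(5, 2000) * 0.01
print(delta_covar(inst, system))
```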
APA, Harvard, Vancouver, ISO, and other styles
4

ARDUCA, MARIA. "Measures of Risk: valuation and capital adequacy in illiquid markets, and systemic risk." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/307643.

Full text
Abstract:
In this thesis, we study pricing and risk measures in markets with frictions, and systemic risk measures. All along the thesis, we focus on uniperiodal market models. In the first chapter, we consider a model with convex transaction costs at initial time, convex portfolio constraints and a convex acceptance set that reflects the preferences of an agent who acts as a buyer in the market. We define the set of market consistent prices for every conceivable payoff, where consistent is meant with respect to the market and the preferences of the buyer. We show that the supremum of this set coincides with the well-known superreplication price, thus giving this functional an interpretation that goes beyond the classical hedging explanation. We develop an extension of the Fundamental Theorem of Asset Pricing in a context where arbitrages are replaced by acceptable deals (i.e. the positive cone is replaced by the acceptance set) and prices are not linear. This allows us to characterize, under suitable assumptions, the set of market consistent prices of any payoff. In the second chapter, we consider an abstract economy with transaction costs both at initial time and at maturity, and portfolio constraints. We do not assume convexity a priori, though some results hold only under convexity assumptions. An external regulator fixes the acceptance set, that is the set of the agent's possible capital positions that he deems acceptable from a risk perspective. We define capital adequacy rules that generalize the coherent risk measures of Artzner, Delbaen, Eber and Heath (1999) in that they represent the minimum amount that the agent has to invest in the market in order to reach the acceptability requirements. The chapter aims to study the properties of these generalized risk measures. In particular, we establish conditions on the portfolios ensuring that they are lower semicontinuous, and we compare these conditions with no-acceptable-deal type assumptions. In the convex and quasi-convex cases, we also provide a dual representation of the functionals of interest. In the third chapter we establish dual representations of systemic risk measures. We model interactions among a finite number of institutions through an aggregation function, and we assume that a regulator fixes a set of acceptable aggregated positions. Systemic risk is estimated as the minimum amount of capital that has to be injected into the system (before or after aggregation) in order to make the aggregated position acceptable. Hence, we deal with systemic risk measures of both "first allocate, then aggregate" and "first aggregate, then allocate" type. In both cases, we provide a detailed analysis of the corresponding systemic acceptance sets and their support functions. Our general results cover some specific cases already studied in the literature. The same approach delivers a simple and self-contained proof of the dual representation of utility-based risk measures for univariate positions.
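For readers who want a concrete anchor for the superreplication price mentioned above, here is a sketch of the classical frictionless one-period special case computed as a linear program. The thesis's setting with transaction costs, portfolio constraints and acceptance sets is far more general; the payoffs, prices and claim below are invented.

```python
# A minimal sketch of the superreplication price in a frictionless one-period model
# with finitely many states, computed as a linear program:
#   min  p . theta   s.t.  (payoff_matrix @ theta)(state) >= claim(state) for all states.
# Only the classical baseline object, not the thesis's setting with frictions. Made-up data.
import numpy as np
from scipy.optimize import linprog

payoffs = np.array([[1.0, 1.0, 1.0],      # bond pays 1 in each of 3 states
                    [0.8, 1.0, 1.3]]).T   # stock payoff per state -> shape (states, assets)
prices = np.array([0.95, 1.00])           # today's asset prices
claim = np.maximum(payoffs[:, 1] - 1.0, 0.0)   # a call on the stock with strike 1

res = linprog(c=prices,
              A_ub=-payoffs, b_ub=-claim,       # payoffs @ theta >= claim
              bounds=[(None, None)] * len(prices))
print("superreplication price:", res.fun, "hedge:", res.x)
```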
APA, Harvard, Vancouver, ISO, and other styles
5

Kouaissah, Noureddine. "Financial Applications of the Conditional Expectation." Doctoral thesis, Università degli studi di Bergamo, 2017. http://hdl.handle.net/10446/77164.

Full text
Abstract:
This dissertation examines different financial applications of some conditional expectation estimators. In the first application, we provide some theoretical motivations behind the use of the moving average rule as one of the most popular trading tools among practitioners. In particular, we examine the conditional probability of the price increments and we study how this probability changes over time. In the second application, we present different approaches to evaluate the presence of arbitrage opportunities in the options market. In particular, we investigate empirically the well-known put-call parity no-arbitrage relation and the state price density. We first measure the violation of the put-call parity as the difference in implied volatilities between call and put options. Furthermore, we propose alternative approaches to estimate the state price density under the classical hypothesis of the Black and Scholes model. In the third application, we investigate the implications for portfolio theory of using conditional expectation estimators. First, we focus on the approximation of the conditional expectation within large-scale portfolio selection problems. In this context, we propose a new consistent multivariate kernel estimator to approximate the conditional expectation. We show how the new estimator can be used for the return approximation of large-scale portfolio problems. Moreover, the proposed estimator optimizes the bandwidth selection of kernel type estimators, solving the classical selection problem. Second, we propose new performance measures based on the conditional expectation that take into account the heavy tails of the return distributions. Third, we deal with the portfolio selection problem from the point of view of different non-satiable investors, namely risk-averse and risk-seeking investors. In particular, using a well-known ordering classification, we first identify different definitions of returns based on the investors' preferences. The new definitions of returns are based on the conditional expected value of the random wealth assessed at different times. Finally, for each problem, we propose an empirical application of several admissible portfolio optimization problems using the US stock market.
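As a baseline for the kernel methods discussed above, here is a standard multivariate Nadaraya-Watson estimator of the conditional expectation E[Y | X = x]. It is not the dissertation's new consistent estimator nor its bandwidth-optimization rule; the bandwidth and the simulated data are assumptions.

```python
# A standard multivariate Nadaraya-Watson (Gaussian kernel) estimator of E[Y | X = x].
# Only the classical baseline that kernel approaches build on; data simulated for illustration.
import numpy as np

def nw_conditional_expectation(x0, X, Y, h=0.3):
    # X: (n, d) conditioning variables, Y: (n,) response, x0: (d,) evaluation point.
    d2 = np.sum((X - x0) ** 2, axis=1)        # squared distances to x0
    w = np.exp(-0.5 * d2 / h**2)              # Gaussian kernel weights
    return np.dot(w, Y) / w.sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 2))
Y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=5000)
print(nw_conditional_expectation(np.array([1.0, 0.0]), X, Y))   # ~1.0 expected
```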
APA, Harvard, Vancouver, ISO, and other styles
6

Karniychuk, Maryna. "Comparing Approximations for Risk Measures Related to Sums of Correlated Lognormal Random Variables." Master's thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700024.

Full text
Abstract:
In this thesis the performances of different approximations are compared for a standard actuarial and financial problem: the estimation of quantiles and conditional tail expectations of the final value of a series of discrete cash flows. To calculate risk measures such as quantiles and Conditional Tail Expectations, one needs the distribution function of the final wealth. The final value of a series of discrete payments in the considered model is the sum of dependent lognormal random variables. Unfortunately, its distribution function cannot be determined analytically, so usually one has to use time-consuming Monte Carlo simulations. Computational time still remains a serious drawback of Monte Carlo simulations, thus several analytical techniques for approximating the distribution function of final wealth are proposed in the frame of this thesis. These are the widely used moment-matching approximations and innovative comonotonic approximations. Moment-matching methods approximate the unknown distribution function by a given one in such a way that some characteristics (in the present case the first two moments) coincide. The ideas of two well-known approximations are described briefly. Analytical formulas for valuing quantiles and Conditional Tail Expectations are derived for both approximations. Recently, a large group of scientists from the Catholic University of Leuven in Belgium has derived comonotonic upper and comonotonic lower bounds for sums of dependent lognormal random variables. These bounds are bounds in terms of "convex order". In order to provide the theoretical background for comonotonic approximations, several fundamental ordering concepts such as stochastic dominance, stop-loss order and convex order, and some important relations between them, are introduced. The last two concepts are closely related. Both stochastic orders express which of two random variables is the "less dangerous/more attractive" one. The central idea of the comonotonic upper bound approximation is to replace the original sum, representing final wealth, by a new sum whose components have the same marginal distributions as the components in the original sum, but a "more dangerous/less attractive" dependence structure. The upper bound, or, mathematically speaking, the convex-largest sum, is obtained when the components of the sum are the components of a comonotonic random vector. Therefore, fundamental concepts of comonotonicity theory which are important for the derivation of convex bounds are introduced. The most widespread examples of comonotonicity which emerge in a financial context are described. In addition to the upper bound, a lower bound can be derived as well. This provides a measure of the reliability of the upper bound. The lower bound approach is based on the technique of conditioning. It is obtained by applying Jensen's inequality for conditional expectations to the original sum of dependent random variables. Two slightly different versions of the conditioning random variable are considered in this thesis. They give rise to two different approaches, referred to as the comonotonic lower bound and the comonotonic "maximal variance" lower bound approaches. Special attention is given to the class of distortion risk measures. It is shown that the quantile risk measure as well as the Conditional Tail Expectation (under some additional conditions) belong to this class. It is proved that both risk measures under consideration are additive for a sum of comonotonic random variables, i.e. the quantile and the Conditional Tail Expectation of the comonotonic upper and lower bounds can easily be obtained by summing the corresponding risk measures of the marginals involved. A special subclass of distortion risk measures, referred to as the class of concave distortion risk measures, is also considered. It is shown that the quantile risk measure is not a concave distortion risk measure, while the Conditional Tail Expectation (under some additional conditions) is a concave distortion risk measure. A theoretical justification is given for the fact that the "concave" Conditional Tail Expectation preserves the convex order relation between random variables. It is shown that this property does not necessarily hold for the quantile risk measure, as it is not a concave risk measure. Finally, the accuracy and efficiency of the two moment-matching, comonotonic upper bound, comonotonic lower bound and "maximal variance" lower bound approximations are examined for a wide range of parameters by comparing with the results obtained by Monte Carlo simulation. The numerical results justify that, generally, in the current situation the lower bound approach outperforms the other methods. Moreover, the preservation of the convex order relation between the convex bounds for the final wealth by the Conditional Tail Expectation is demonstrated by numerical results. It is justified numerically that this property does not necessarily hold true for the quantile.
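A small sketch of the comonotonic upper bound idea: quantiles are additive for comonotonic sums, so the p-quantile of the upper bound for a sum of lognormals is simply the sum of the marginal lognormal quantiles. The snippet compares it with a Monte Carlo quantile of the dependent sum; the means, volatilities and correlation matrix are illustrative assumptions.

```python
# Comonotonic upper bound for a sum of correlated lognormals: by additivity of quantiles
# for comonotonic sums, the p-quantile of the bound is sum_i exp(mu_i + sigma_i * Phi^{-1}(p)).
# Compared with a Monte Carlo quantile of the actual dependent sum. Illustrative parameters.
import numpy as np
from scipy.stats import norm

mu = np.array([0.0, 0.1, 0.2])
sigma = np.array([0.2, 0.3, 0.25])
corr = np.array([[1.0, 0.6, 0.4],
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])
p = 0.995

q_upper = np.sum(np.exp(mu + sigma * norm.ppf(p)))     # comonotonic upper bound quantile

rng = np.random.default_rng(3)
Z = rng.multivariate_normal(np.zeros(3), corr, size=200_000)
S = np.exp(mu + sigma * Z).sum(axis=1)
print("comonotonic upper bound quantile:", q_upper)
print("Monte Carlo quantile of the sum :", np.quantile(S, p))
```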
APA, Harvard, Vancouver, ISO, and other styles
7

Drapeau, Samuel. "Risk preferences and their robust representation." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2010. http://dx.doi.org/10.18452/16135.

Full text
Abstract:
The goal of this thesis is the conceptual study of risk and its quantification via robust representations. In the first part, we concentrate on context-invariant features related to this notion: diversification and monotonicity. We introduce and study the general properties of three key concepts, risk order, risk measure and risk acceptance family, and their one-to-one relations. Our main result is a uniquely characterized dual robust representation of lower semicontinuous risk orders on topological vector spaces. We also provide automatic continuity and robust representation results on specific convex sets. This approach allows multiple interpretations of risk depending on the setting: model risk in the case of random variables, distributional risk in the case of lotteries, discounting risk in the case of consumption streams... Various explicit computations in these different settings are then treated (economic index of riskiness, certainty equivalent, VaR on lotteries, variational preferences...). In the second part, we consider preferences which might require additional information in order to be expressed. We provide a mathematical framework for this idea in terms of preorders, called conditional preference orders, which are locally compatible with the available information. This allows us to construct conditional numerical representations of conditional preferences. We obtain a conditional version of the von Neumann and Morgenstern representation for measurable stochastic kernels and then extend it to a conditional version of the variational preferences. We finally clarify the interplay between model risk and distributional risk on the axiomatic level.
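One concrete instance of the objects above is the entropic risk measure (the negative of an exponential certainty equivalent), whose dual robust representation uses a relative-entropy penalty. The sketch below only evaluates it by Monte Carlo on a simulated P&L and checks it against the Gaussian closed form; all parameters are assumed and nothing thesis-specific is implemented.

```python
# The entropic risk measure rho_g(X) = (1/g) * log E[exp(-g X)], a classical convex risk
# measure whose robust representation is rho_g(X) = sup_Q { E_Q[-X] - (1/g) H(Q|P) }
# with a relative-entropy penalty. Monte Carlo evaluation on simulated P&L; illustrative only.
import numpy as np

def entropic_risk(x, gamma=2.0):
    # numerically stable log-mean-exp of -gamma * x
    a = -gamma * x
    m = a.max()
    return (m + np.log(np.mean(np.exp(a - m)))) / gamma

rng = np.random.default_rng(4)
pnl = rng.normal(loc=0.05, scale=0.2, size=100_000)
# For Gaussian P&L ~ N(mu, sigma^2) the closed form is -mu + gamma * sigma^2 / 2.
print(entropic_risk(pnl), -0.05 + 2.0 * 0.2**2 / 2)
```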
APA, Harvard, Vancouver, ISO, and other styles
8

Luo, Fei-Shan, and 羅妃珊. "Risk Measure, Conditional VaR and the Performance of Portfolio Optimization." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/k8u57r.

Full text
Abstract:
Master's thesis
National Formosa University
Graduate Institute of Business Management
Academic year 96 (2007/08)
Since the return volatility of financial assets plays an important role in the performance of portfolios, investors can improve the performance of their portfolios by controlling the volatility of their assets. Therefore, this study examines the influence of risk estimation on the performance of portfolios. The data used in this study consist of daily returns of the 150 listed companies in the TSEC Taiwan 50 Index and the TSEC Taiwan Mid-Cap 100 Index and span from June 2003 to April 2008. Under the framework of the fixed window approach, three risk measures, namely the equally weighted moving average model, the exponentially weighted moving average model, and the bootstrap simulation model, are employed to predict the Value-at-Risk and the Conditional Value-at-Risk of the portfolios. After solving the minimization problems for the Conditional Value-at-Risk of the portfolios, the optimal portfolios can be held and their performances compared. The results of this study are as follows: (1) All of the optimal portfolios built by minimizing the Conditional Value-at-Risk, calculated with the different risk measures, outperform the Taiwan Stock Exchange Capitalization Weighted Stock Index. (2) The estimates of the Value-at-Risk and Conditional Value-at-Risk produced by the different risk measures have a crucial influence on the performance of the optimal asset allocation. At the 95% confidence level, the bootstrap simulation model has the best performance among the risk measures; at the 99% confidence level, the equally weighted moving average model is the best one.
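The core optimization step referred to above can be written as the Rockafellar-Uryasev linear program for minimum-CVaR portfolios. The sketch below solves it on simulated scenario returns; it does not reproduce the study's EWMA/bootstrap scenario generation or its 95%/99% backtesting, so all data and settings are assumptions.

```python
# Rockafellar-Uryasev LP for minimizing portfolio CVaR on return scenarios
# (long-only, fully invested). Scenario returns are simulated for illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
N, n = 1000, 5                                    # scenarios, assets
R = rng.normal(0.0005, 0.01, size=(N, n))         # scenario returns
beta = 0.95

# Variables x = [w (n), zeta (1), u (N)];  CVaR ~ zeta + 1/((1-beta)N) * sum(u)
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1 - beta) * N))])
# Constraints u_j >= -R_j.w - zeta   <=>   -R_j.w - zeta - u_j <= 0
A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)   # sum(w) = 1
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, cvar = res.x[:n], res.fun
print("optimal weights:", np.round(w, 3), " minimized CVaR of loss:", cvar)
```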
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Te-Chun, and 楊德淳. "To Measure the Systemic Risk of Financial Institution in Taiwan." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/33884682254869989221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

陳嘉祺. "The Valuation and Risk Measure of CDO-Squared under Conditional Independence." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/06886322499539142453.

Full text
Abstract:
Master's thesis
National Chengchi University
Graduate Institute of Money and Banking
Academic year 95 (2006/07)
In this paper we address the pricing issues of a CDO of CDOs. Under the conditional independence assumption, we use the factor copula approach to characterize the correlation of default events. We provide an efficient recursive algorithm that constructs the loss distribution. Our algorithm accounts for the number of defaults, the location of defaults among the inner CDOs, and, in addition, the degree of overlapping between inner CDOs. Our algorithm is a natural extension of the probability bucketing method of Hull and White (2004). We analyze the sensitivity of the tranche spreads of a CDO-squared to different parameters, and, in order to characterize the risk-reward profiles of CDO-squared tranches, we introduce appropriate risk measures that quantify the degree of overlapping among the inner CDOs. Hull and White (2004) present a recursive scheme known as the probability bucketing approach to construct the conditional loss distribution of a CDO. However, this approach is insufficient to capture the complexities of CDO². In the modeling of a CDO, we are concerned with the probabilities of different numbers of defaults over a time horizon t, e.g., the probability of 3 defaults happening within a year. With these probabilities, we can then calculate the expected loss within the time horizon, which enables us to compute the spreads of the CDO. However, in the modeling of CDO², an appropriate valuation should also be able to overcome two more difficulties: (1) the overlapping structure of the underlying CDOs, and (2) the location where defaults happen, in order to obtain the fair spreads of the CDO².
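The conditional-independence machinery described above rests on a simple recursion. The sketch below builds the distribution of the number of defaults in a single CDO under a one-factor Gaussian copula and integrates out the factor by Gauss-Hermite quadrature; the extension to CDO² (overlap, location of defaults, bucketing on losses) is not reproduced, and the default probability, correlation and pool size are illustrative.

```python
# Conditional-independence recursion for the number-of-defaults distribution in a
# single homogeneous CDO under a one-factor Gaussian copula; the thesis extends this
# building block to CDO-squared structures. Parameters are illustrative.
import numpy as np
from scipy.stats import norm

def conditional_pd(p, rho, z):
    # P(default | factor Z = z) in the one-factor Gaussian copula
    return norm.cdf((norm.ppf(p) - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))

def defaults_distribution(cond_pds):
    # recursive convolution: add obligors one at a time
    dist = np.array([1.0])
    for p in cond_pds:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1.0 - p)
        new[1:] += dist * p
        dist = new
    return dist

p, rho, n_names = 0.02, 0.3, 100
nodes, weights = np.polynomial.hermite.hermgauss(40)       # weight exp(-x^2)
dist = np.zeros(n_names + 1)
for x, w in zip(nodes, weights):
    z = np.sqrt(2.0) * x                                    # change of variables to N(0,1)
    dist += (w / np.sqrt(np.pi)) * defaults_distribution([conditional_pd(p, rho, z)] * n_names)
print("P(k defaults), k = 0..5:", np.round(dist[:6], 4), " sum =", dist.sum())
```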
APA, Harvard, Vancouver, ISO, and other styles
11

Cong, Jianfa. "Risk Measure Approaches to Partial Hedging and Reinsurance." Thesis, 2013. http://hdl.handle.net/10012/8163.

Full text
Abstract:
Hedging has been one of the most important topics in finance. How to effectively hedge the exposed risk draws significant interest from both academicians and practitioners. In a complete financial market, every contingent claim can be hedged perfectly. In an incomplete market, the investor can eliminate his risk exposure by superhedging. However, both perfect hedging and superhedging usually call for a high cost. In some situations, the investor does not have enough capital or is not willing to spend that much to achieve a zero risk position. This brings us to the topic of partial hedging. In this thesis, we establish the risk measure based partial hedging model and study the optimal partial hedging strategies under various criteria. First, we consider two of the most common risk measures, known as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). We derive the analytical forms of optimal partial hedging strategies under the criterion of minimizing the VaR of the investor's total risk exposure. The knock-out call hedging strategy and the bull call spread hedging strategy are shown to be optimal among two admissible sets of hedging strategies. Since the VaR risk measure has some undesired properties, we consider the CVaR risk measure and show that the bull call spread hedging strategy is optimal under the criterion of minimizing the CVaR of the investor's total risk exposure. A comparison between our proposed partial hedging strategies and some other partial hedging strategies, including the well-known quantile hedging strategy, is provided and the advantages of our proposed partial hedging strategies are highlighted. We then apply similar approaches in the context of reinsurance. The VaR-based optimal reinsurance strategies are derived under various constraints. We then study the optimal partial hedging strategies under general risk measures. We provide the necessary and sufficient optimality conditions and use these conditions to study some specific hedging strategies. The robustness of our proposed CVaR-based optimal partial hedging strategy is also discussed in this part. Last but not least, we propose a new method, the simulation-based approach, to formulate the optimal partial hedging models. By using the simulation-based approach, we can numerically obtain the optimal partial hedging strategy under various constraints and criteria. The numerical results in the examples in this part coincide with the theoretical results.
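To illustrate how candidate partial hedges are compared under the criteria above, the sketch below evaluates the empirical VaR and CVaR of a short-call liability with and without a bull-call-spread overlay on simulated terminal prices. The strikes, premium and price dynamics are invented, and no optimality claim from the thesis is reproduced.

```python
# Empirical VaR/CVaR of an investor's total loss with and without a bull-call-spread
# partial hedge, on simulated terminal prices. Illustrative comparison only.
import numpy as np

def var_cvar(loss, alpha=0.95):
    v = np.quantile(loss, alpha)
    return v, loss[loss >= v].mean()

rng = np.random.default_rng(6)
S0, K, K1, K2, premium = 100.0, 100.0, 100.0, 110.0, 3.5
ST = S0 * np.exp(rng.normal(-0.02, 0.25, size=200_000))    # terminal prices (assumed)

liability = np.maximum(ST - K, 0.0)                        # e.g. a short call position
spread = np.maximum(ST - K1, 0.0) - np.maximum(ST - K2, 0.0)

unhedged = liability
hedged = liability - spread + premium                      # pay premium, receive spread payoff
for name, loss in [("unhedged", unhedged), ("bull call spread", hedged)]:
    v, c = var_cvar(loss)
    print(f"{name:>17}: VaR95 = {v:6.2f}  CVaR95 = {c:6.2f}")
```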
APA, Harvard, Vancouver, ISO, and other styles
12

Ferreira, Bárbara Mendes. "The effects of systemic risk in Portugal: A CoVaR approach." Master's thesis, 2020. http://hdl.handle.net/10071/21231.

Full text
Abstract:
The Great Recession, in the context of financial globalization, raised interest in the measurement of systemic risk. The main goal of this dissertation is the study of systemic risk dynamics in the Portuguese financial system between 02/06/2003 and 30/06/2020. Specifically, we analyze the impact of Portuguese banks' distress on the domestic financial system as well as the repercussions of a crisis in the Portuguese financial system on domestic banks. For that purpose, we use the ΔCoVaR systemic risk measure. Furthermore, the bootstrap KS test is applied to determine the statistical accuracy of the ΔCoVaR forecasts and to rank banks according to their systemic importance and systemic vulnerability. Throughout this dissertation, alternative methodologies to obtain banks' returns and to estimate VaR are applied in order to analyze the sensitivity of the VaR and ΔCoVaR forecasts. The empirical results reveal that no Portuguese bank is considered systemically important or vulnerable in the analyzed period. However, among the studied banks, all present their highest contribution to the financial system's systemic risk and their highest vulnerability to the system's shocks in the context of the Great Recession. Furthermore, BES and BNF are more vulnerable to the Portuguese financial system's impact in the last phase of their life cycles. Additionally, from 02/06/2003 to 13/10/2010, BCP is the bank with the largest contribution to the financial system's systemic risk and the most vulnerable to the system's shocks. Finally, the VaR and ΔCoVaR estimates reveal sensitivity to the banks' return computation methodology as well as to the VaR model used.
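A compact sketch, in the spirit of the KS-based ranking mentioned above, comparing two banks' ΔCoVaR forecast series with a one-sided two-sample Kolmogorov-Smirnov test. The dissertation uses a bootstrap variant of this idea; here the plain scipy test is applied to simulated series, so the data, the 5% tail convention and the sign convention (more negative ΔCoVaR meaning a larger contribution) are assumptions.

```python
# One-sided two-sample KS comparison of two banks' simulated DeltaCoVaR forecast series,
# as a rough stand-in for the bootstrap KS ranking procedure; illustrative only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
dcovar_bank_a = rng.normal(-0.030, 0.010, size=1500)   # simulated DeltaCoVaR forecasts
dcovar_bank_b = rng.normal(-0.022, 0.010, size=1500)

# With alternative="greater", the alternative hypothesis is that bank A's values are
# stochastically smaller (more negative) than bank B's; a small p-value then suggests
# a larger systemic risk contribution for bank A under this sign convention.
stat, pvalue = ks_2samp(dcovar_bank_a, dcovar_bank_b, alternative="greater")
print(f"KS stat = {stat:.3f}, p-value = {pvalue:.4f}")
```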
APA, Harvard, Vancouver, ISO, and other styles
13

Hledik, Juraj, and Riccardo Rastelli. "A dynamic network model to measure exposure diversification in the Austrian interbank market." 2018. http://epub.wu.ac.at/6579/1/network.pdf.

Full text
Abstract:
We propose a statistical model for weighted temporal networks capable of measuring the level of heterogeneity in a financial system. Our model focuses on the level of diversification of financial institutions; that is, whether they are more inclined to distribute their assets equally among partners, or whether they rather concentrate their commitment towards a limited number of institutions. Crucially, a Markov property is introduced to capture time dependencies and to make our measures comparable across time. We apply the model to an original dataset of Austrian interbank exposures. The temporal span encompasses the onset and development of the financial crisis in 2008 as well as the beginning of the European sovereign debt crisis in 2011. Our analysis highlights an overall increasing trend for network homogeneity, whereby core banks have a tendency to distribute their market exposures more equally across their partners.
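As a purely descriptive companion to the model above, the sketch below computes, for each lender in a made-up weighted exposure matrix, the normalized entropy of its exposure shares as a simple diversification score (1 = perfectly even, 0 = all exposure to a single counterparty). It is not the paper's dynamic Markov model, only the kind of quantity that model targets.

```python
# Normalized-entropy diversification score per lender from a weighted adjacency matrix.
# The exposure matrix below is invented for illustration.
import numpy as np

W = np.array([[ 0.0, 50.0, 30.0, 20.0],    # W[i, j] = exposure of bank i towards bank j
              [10.0,  0.0, 10.0, 80.0],
              [25.0, 25.0,  0.0, 25.0],
              [ 5.0,  0.0, 95.0,  0.0]])

def diversification(weights):
    w = weights[weights > 0]
    shares = w / w.sum()
    entropy = -(shares * np.log(shares)).sum()
    return entropy / np.log(len(weights) - 1)   # normalize by max entropy over counterparties

for i, row in enumerate(W):
    print(f"bank {i}: diversification = {diversification(row):.2f}")
```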
APA, Harvard, Vancouver, ISO, and other styles
14

Karniychuk, Maryna. "Comparing Approximations for Risk Measures Related to Sums of Correlated Lognormal Random Variables." Master's thesis, 2006. https://monarch.qucosa.de/id/qucosa%3A17598.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Muzikářová, Ivana. "Měření systémového rizika v časově-frekvenční doméně." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-347214.

Full text
Abstract:
This thesis provides an analysis of systemic risk in the US banking sector. We use conditional value at risk (∆CoVaR), marginal expected shortfall (MES) and cross-quantilogram (CQ) to statistically measure tail-dependence in return series of individual institutions and the system as a whole. Wavelet multiresolution analysis is used to study systemic risk in the time-frequency domain. Decomposition of returns on different scales allows us to isolate cycles of 2-8 days, 8-32 days and 32-64 days and analyze co-movement patterns which would otherwise stay hidden. Empirical results demonstrate that filtering out short-term noise from the return series improves the forecast power of ∆CoVaR. Eventually, we investigate the connection between statistical measures of systemic risk and fundamental characteristics of institutions (size, leverage, market to book ratio) and conclude that size is the most robust determinant of systemic risk.
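A brief sketch of two ingredients mentioned above: marginal expected shortfall and wavelet detail extraction. The PyWavelets filter ('db4'), the level-to-horizon mapping (for daily data, detail levels 1-2 roughly correspond to 2-8 day cycles, 3-4 to 8-32 days) and the simulated returns are assumptions and may differ from the thesis's exact multiresolution setup.

```python
# MES on raw returns and on a wavelet-filtered scale component; illustrative settings.
import numpy as np
import pywt

def mes(firm_ret, sys_ret, alpha=0.05):
    # average firm return on days when the system return is in its worst alpha-tail
    return firm_ret[sys_ret <= np.quantile(sys_ret, alpha)].mean()

def wavelet_detail(x, level, wavelet="db4"):
    # reconstruct only the detail coefficients at `level`, zeroing everything else
    coeffs = pywt.wavedec(x, wavelet, mode="periodization")
    kept = [np.zeros_like(c) for c in coeffs]
    kept[len(coeffs) - level] = coeffs[len(coeffs) - level]   # details ordered coarse -> fine
    return pywt.waverec(kept, wavelet, mode="periodization")[: len(x)]

rng = np.random.default_rng(8)
sys_ret = rng.standard_t(4, 2048) * 0.01
firm_ret = 0.8 * sys_ret + rng.standard_t(4, 2048) * 0.01
print("MES (raw returns)      :", mes(firm_ret, sys_ret))
print("MES (2-4 day component):", mes(wavelet_detail(firm_ret, 1), wavelet_detail(sys_ret, 1)))
```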
APA, Harvard, Vancouver, ISO, and other styles
16

Weng, Chengguo. "Optimal Reinsurance Designs: from an Insurer’s Perspective." Thesis, 2009. http://hdl.handle.net/10012/4766.

Full text
Abstract:
The research on optimal reinsurance design dates back to the 1960s. For nearly half a century, the quest for optimal reinsurance designs has remained a fascinating subject, drawing significant interest from both academicians and practitioners. Its fascination lies in its potential as an effective risk management tool for insurers. There are many ways of formulating the optimal design of reinsurance, depending on the chosen objective and constraints. In this thesis, we address the problem of optimal reinsurance designs from an insurer's perspective. For an insurer, an appropriate use of reinsurance helps to reduce the adverse risk exposure and improve the overall viability of the underlying business. On the other hand, reinsurance incurs an additional cost to the insurer in the form of the reinsurance premium. This implies a classical risk and reward tradeoff faced by the insurer. The primary objective of the thesis is to develop theoretically sound and yet practical solutions in the quest for optimal reinsurance designs. In order to achieve such an objective, this thesis is divided into two parts. In the first part, a number of reinsurance models are developed and their optimal reinsurance treaties are derived explicitly. This part focuses on risk measure minimization reinsurance models and discusses the optimal reinsurance treaties by exploiting two of the most common risk measures, known as the Value-at-Risk (VaR) and the Conditional Tail Expectation (CTE). Some additional important economic factors, such as the reinsurance premium budget and the insurer's profitability, are also considered. The second part proposes an innovative method of formulating the reinsurance models, which we refer to as the empirical approach since it exploits explicitly the insurer's empirical loss data. The empirical approach has the advantage that it is practical and intuitively appealing. This approach is motivated by the difficulty that the reinsurance models are often infinite dimensional optimization problems and hence explicit solutions are achievable only in some special cases. The empirical approach effectively reformulates the optimal reinsurance problem into a finite dimensional optimization problem. Furthermore, we demonstrate that second-order conic programming can be used to obtain the optimal solutions for a wide range of reinsurance models formulated by the empirical approach.
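A toy version of the empirical approach described above: ceded amounts are chosen directly on observed losses to minimize the CVaR of the retained liability plus an expected-value premium under a budget, written as a linear program in cvxpy. This simplification ignores shape constraints on the ceded function and is not the thesis's second-order conic formulation; the losses, loading and budget are invented.

```python
# Toy "empirical approach" to reinsurance: per-scenario ceded amounts minimizing the CVaR
# of the retained cost under an expected-value premium budget. Illustrative data only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(9)
x = rng.lognormal(mean=1.0, sigma=1.0, size=2000)     # empirical loss data (assumed)
alpha, loading, budget = 0.95, 0.2, 3.0

c = cp.Variable(x.shape[0])                           # ceded amount in each scenario
zeta = cp.Variable()
premium = (1 + loading) * cp.sum(c) / x.shape[0]      # expected-value premium principle
retained = x - c + premium                            # insurer's total cost per scenario
cvar = zeta + cp.sum(cp.pos(retained - zeta)) / ((1 - alpha) * x.shape[0])

prob = cp.Problem(cp.Minimize(cvar),
                  [c >= 0, c <= x, premium <= budget])
prob.solve()
print("minimized CVaR:", prob.value, " premium spent:", premium.value)
```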
APA, Harvard, Vancouver, ISO, and other styles
