To see the other types of publications on this topic, follow the link: Insurance – Mathematics.

Dissertations / Theses on the topic 'Insurance – Mathematics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Insurance – Mathematics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Yamazato, Makoto. "Non-life Insurance Mathematics." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/96535.

Full text
Abstract:
In this work we describe the basic facts of non-life insurance and then explain risk processes. In particular, we explain in detail the asymptotic behavior of the probability that an insurance product may end up in ruin during its lifetime. As expected, this asymptotic behavior depends heavily on the tail distribution of the claims.
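For orientation, the kind of risk process behind these asymptotics can be sketched as follows; the notation is generic and the heavy-tailed estimate is the standard subexponential result, not necessarily the exact statement proved in the thesis.

```latex
% Classical surplus process with initial capital u, premium rate c and
% compound Poisson claims X_1, X_2, ... arriving with intensity \lambda:
U(t) = u + ct - \sum_{i=1}^{N(t)} X_i,
\qquad
\psi(u) = \Pr\Bigl(\inf_{t \ge 0} U(t) < 0\Bigr).
% For subexponential (heavy-tailed) claim sizes with mean \mu and \lambda\mu < c:
\psi(u) \sim \frac{\lambda}{c - \lambda\mu} \int_u^{\infty} \overline{F}_X(x)\,dx,
\qquad u \to \infty.
```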
APA, Harvard, Vancouver, ISO, and other styles
2

Arvidsson, Hanna, and Sofie Francke. "Dependence in non-life insurance." Thesis, Uppsala University, Department of Mathematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gong, Qi, and 龔綺. "Gerber-Shiu function in threshold insurance risk models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40987966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wan, Lai-mei. "Ruin analysis of correlated aggregate claims models." Thesis, Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B30705708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ekheden, Erland. "Catastrophe, Ruin and Death - Some Perspectives on Insurance Mathematics." Doctoral thesis, Stockholms universitet, Matematiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-103165.

Full text
Abstract:
This thesis gives some perspectives on insurance mathematics related to life insurance and/or reinsurance. Catastrophes and large accidents resulting in many lost lives are, unfortunately, known to happen over and over again. A new model for the occurrence of catastrophes is presented; it models the number of catastrophes, how many lives are lost, how many of the lost lives are insured by a specific insurer, and the cost of the resulting claims. This makes it possible to calculate the price of reinsurance contracts linked to catastrophic events. Ruin is the result if claims exceed the initial capital and the premiums collected by an insurance company. We analyze the Cramér-Lundberg approximation for the ruin probability and give an explicit rate of convergence in the case where claims are bounded by some upper limit. Death is known to be the only thing that is certain in life. Individual life spans are, however, random, and models for and statistics of mortality are important for, amongst others, life insurance companies, whose payments ultimately depend on people being alive or dead. We analyse the stochasticity of mortality and perform a variance decomposition where the variation in mortality data is either explained by the covariates age and time, unexplained systematic variation, or random noise due to a finite population. We suggest a mixed regression model for mortality and fit it to data from the US and Sweden, including prediction intervals for future mortality rates.
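For context, the Cramér-Lundberg approximation analysed in the thesis has the generic form below; the constant C and the adjustment coefficient R are model-dependent, and the notation is illustrative rather than the thesis's own.

```latex
% Cramér-Lundberg approximation for the ruin probability with initial capital u:
\psi(u) \sim C e^{-Ru}, \qquad u \to \infty,
% where the adjustment coefficient R > 0 solves the Lundberg equation
\lambda\bigl(M_X(R) - 1\bigr) = cR,
% with M_X the moment generating function of the claim sizes, \lambda the claim
% intensity and c the premium rate; bounded claims guarantee that M_X is finite.
```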

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: In press. Paper 4: Submitted.

APA, Harvard, Vancouver, ISO, and other styles
6

Lundvik, Andreas. "Portfolio insurance methods for index-funds." Thesis, Uppsala University, Department of Mathematics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-121382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hagsjö, Renberg Oscar, and Oscar Hermansson. "Large claims in non-life insurance." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215492.

Full text
Abstract:
It is of utmost importance for an insurance company to apply a fair pricing policy. If the price is too high, valuable customers are lost to other insurance companies, while if it is too low, the business runs at a loss. To achieve a good pricing policy, information regarding claim size history for a given type of customer is required. A problem arises when large extremal events occur and affect the claim size data. These extremal events take the shape of individually large claims that by themselves can alter the distribution of what certain groups of individuals are expected to cost. A remedy for this is to apply what is called a large claim limit. Any claim exceeding this limit is regarded as being outside the scope of what is captured by the original claim size distribution. These exceeding claims are treated separately and have their cost distributed across all policyholders, rather than just the group they belong to. So, where exactly do you draw this limit? Do you treat the entire claim size this way (exclusion) or just the part that exceeds the threshold (truncation)? These questions are treated and answered in this master's thesis for Trygg-Hansa. For each product code, a limit was obtained, together with the method for treating exceeding data that was best to use.
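To make the two treatments concrete, here is a small illustrative sketch in Python with hypothetical claim sizes and a hypothetical large-claim limit; it is not code from the thesis, only a restatement of the definitions of truncation and exclusion.

```python
import numpy as np

claims = np.array([12_000, 48_000, 1_500_000, 30_000, 2_300_000])  # hypothetical claim sizes
limit = 1_000_000                                                  # hypothetical large-claim limit

# Truncation: keep every claim but cap it at the limit; the part above the
# limit is pooled and spread over all policyholders.
truncated = np.minimum(claims, limit)
excess_pooled = np.maximum(claims - limit, 0).sum()

# Exclusion: claims above the limit are removed entirely from the group's
# distribution and their full cost is spread over all policyholders.
mask = claims <= limit
excluded_kept = claims[mask]
excluded_pooled = claims[~mask].sum()

print(truncated, excess_pooled)        # [  12000   48000 1000000   30000 1000000] 1800000
print(excluded_kept, excluded_pooled)  # [12000 48000 30000] 3800000
```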
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Danwei, and 黃丹薇. "Robustness of generalized estimating equations in credibility models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38842312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Yin, and 林印. "Some results on BSDEs with applications in finance and insurance." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50899831.

Full text
Abstract:
Considerable work has been done on Backward Stochastic Differential Equations (BSDEs) in continuous time with deterministic terminal horizons or stopping times. Various new models have been introduced in recent years in order to generalize BSDEs to solve new practical financial problems. One strand is focused on discrete-time models. Backward Stochastic Difference Equations (also called BSDEs when there is no ambiguity) on discrete-time finite-state spaces were introduced by Cohen and Elliott (2010a). The associated theory required only weak assumptions. In the first topic of this thesis, properties of non-linear expectations defined using the discrete-time finite-state BSDEs were studied. A converse comparison theorem was established. Properties of risk measures defined by non-linear expectations, especially the representation theorems, were given. Then the theory of BSDEs was applied to the optimal design of dynamic risk measures. Another strand concerns a general random terminal time, which is not necessarily a stopping time. The motivation of this model is the financial problem of hedging defaultable contingent claims, where BSDEs with stopping times are not applicable. In the second topic of this thesis, discrete-time finite-state BSDEs under progressively enlarged filtration were considered. A martingale representation theorem, an existence and uniqueness theorem and a comparison theorem were established. The application to nonlinear expectations was also explored. Using the theory of BSDEs, the explicit solution for the optimal design of dynamic default risk measures was obtained. In recent work on continuous-time BSDEs under progressively enlarged filtration, the reference filtration is generated by Brownian motions. In order to deal with cases with jumps, in the third topic of this thesis, a general reference filtration with the predictable representation property and an initial time with the immersion property were considered. The martingale representation theorem for square-integrable martingales under progressively enlarged filtration was established. Then the existence and uniqueness theorem for BSDEs under the enlarged filtration was proved using Lipschitz continuity of the driver. Conditions for a comparison theorem were also presented. Finally, applications to nonlinear expectations and hedging of defaultable contingent claims in a Brownian-Poisson setting were explored.
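For readers unfamiliar with the discrete-time setting, a backward stochastic difference equation of the type introduced by Cohen and Elliott can be written schematically as below; the notation is generic and the precise formulation used in the thesis may differ.

```latex
% Schematic discrete-time, finite-state BSDE driven by a martingale M:
Y_t = \xi + \sum_{t \le u < T} F(\omega, u, Y_u, Z_u) - \sum_{t \le u < T} Z_u \,\Delta M_{u+1},
% solved backwards in time for the adapted pair (Y, Z), given the terminal value \xi at time T.
```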
published_or_final_version
Statistics and Actuarial Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen, Mai. "Machine Learning Algorithms for Regression Modeling in Private Insurance." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234857.

Full text
Abstract:
This thesis is focused on the Occupational Pension, an important part of a retiree's total pension. It is paid out by private insurance companies and determined by an annuity divisor. Regression modeling of the annuity divisor is done by using the monthly paid pension as the response and a set of 24 explanatory variables, e.g. the expected remaining lifetime and the advance interest rate. Two machine learning algorithms, artificial neural networks (ANN) and support vector machines for regression (SVR), are considered in detail. Specifically, different transfer functions for ANN are studied, as well as the possibility to improve the SVR model by incorporating a non-linear Gaussian kernel. To compare our results with the prior experience of the Swedish Pensions Agency in modeling and predicting the annuity divisor, we also consider the ordinary multiple linear regression (MLR) model. Although ANN, SVR and MLR are of different nature, they demonstrate similar performance accuracy. It turns out that, for our data, MLR and SVR with a linear kernel achieve the highest prediction accuracy. When performing feature selection, all methods except SVR with a Gaussian kernel retain the features corresponding to the advance interest rate and the expected remaining lifetime, which according to Swedish law (5 kap. 12 § lagen (1998:674) om inkomstgrundad ålderspension) are the main factors that determine the annuity divisor. The results of this study confirm the importance of the two main factors for accurate modeling of the annuity divisor in private insurance. We also conclude that, in addition to the methods used in previous research, methods such as MLR, ANN and SVR may be used to accurately model the annuity divisor.
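As a rough illustration of the model comparison described above, the sketch below uses scikit-learn with a placeholder feature matrix standing in for the 24 explanatory variables; it is not the authors' code and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))                                        # placeholder for the 24 covariates
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 + rng.normal(scale=0.1, size=500)   # placeholder annuity-divisor response

models = {
    "MLR": LinearRegression(),
    "SVR (linear)": SVR(kernel="linear"),
    "SVR (Gaussian)": SVR(kernel="rbf"),
}
for name, model in models.items():
    # Negative MSE is scikit-learn's convention; closer to zero means better prediction.
    score = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    print(f"{name}: {score:.4f}")
```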
APA, Harvard, Vancouver, ISO, and other styles
11

Sung, Ka-chun Joseph, and 宋家俊. "Optimal reinsurance: a contemporary perspective." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47753031.

Full text
Abstract:
In recent years, general risk measures have played an important role in risk management in both the finance and insurance industries. As a consequence, there is an increasing amount of research on optimal reinsurance problems using risk measures as yardsticks beyond the classical expected utility framework. In this thesis, the stop-loss reinsurance is first shown to be an optimal contract under law-invariant convex risk measures via a new, simple geometric argument. A similar approach is then used to tackle the same optimal reinsurance problem under Value at Risk and Conditional Tail Expectation; it is interesting to note that, instead of stop-loss reinsurances, insurance layers serve as the optimal solution in these cases. These two results hint that law-invariant convex risk measures may be better suited and more robust to large expected claims than Value at Risk and Conditional Tail Expectation, even though the latter are more commonly used. In addition, the problem of optimal reinsurance design for a basket of n insurable risks is studied. Without assuming any particular dependence structure, a minimax optimal reinsurance decision formulation for the problem is proposed. To solve it, the least favorable dependence structure is first identified, and then stop-loss reinsurances are shown to minimize a general law-invariant convex risk measure of the total retained risk. Sufficient conditions for ordering the optimal deductibles are also obtained. Next, a principal-agent model is adopted to describe a monopolistic reinsurance market with adverse selection. Under asymmetry of information, the reinsurer (the principal) aims to maximize the average profit by selling a tailor-made reinsurance to every insurer (agent) from a (huge) family with hidden characteristics. In regard to the Basel Capital Accord, each insurer uses Value at Risk as the risk assessment and retains the right to choose its own risk tolerance. By utilizing the special features of insurance layers, their optimality as the first-best strategy over all feasible reinsurances is proved. The same optimal reinsurance screening problem is also studied under other subclasses of reinsurances: (i) deductible contracts; (ii) quota-share reinsurances; and (iii) reinsurance contracts with convex indemnity, with the aid of indirect utility functions. In particular, the optimal indirect utility function is shown to be of the stop-loss form under both classes (i) and (ii), while its non-stop-loss nature under class (iii) is revealed. Lastly, a class of nonzero-sum stochastic differential reinsurance games between two insurance companies is studied. Each insurance company is assumed to maximize the difference between its own terminal surplus and that of its opponent by properly arranging its reinsurance schedule. The surplus process of each insurance company is modeled by a mixed regime-switching Cramér-Lundberg diffusion approximation: a diffusion risk process with coefficients modulated by both a continuous-time finite-state Markov chain and another diffusion process, with correlations among the surplus processes allowed. In contrast to the traditional HJB approach, the BSDE method is used and an explicit Nash equilibrium is derived.
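The two contract families contrasted in the abstract can be written compactly as follows, with X the aggregate loss and d < u illustrative retention levels (notation not taken from the thesis).

```latex
% Stop-loss reinsurance with retention d: the reinsurer pays the excess over d.
I_{\mathrm{stop\text{-}loss}}(X) = (X - d)_+ ,
% Insurance layer between d and u: the reinsurer pays the part of X falling in (d, u].
I_{\mathrm{layer}}(X) = (X - d)_+ - (X - u)_+ = \min\bigl\{(X - d)_+,\, u - d\bigr\}.
```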
published_or_final_version
Mathematics
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
12

Osman, Abdelghafour Mohamed. "Structured products: Pricing, hedging and applications for life insurance companies." Thesis, Uppsala University, Department of Mathematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-119969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Pansera, Jérôme. "Local risk minimization, consistent interest-rate modeling, and applications to life insurance." Diss., University of Iowa, 2008. https://ir.uiowa.edu/etd/15.

Full text
Abstract:
This thesis studies local risk minimization, consistent interest-rate modeling, and their applications to life insurance. Part I considers local risk minimization, which is one possible approach to price and hedge claims in incomplete markets. In this first part, our two main results are Propositions 3.6 and 4.3: they provide an easy way to compute locally risk-minimizing hedging strategies for common life-insurance products in discrete time and in continuous time, respectively. Part II considers consistent interest-rate modeling; that is, interest-rate models in which a change in the yield curve can be explained by a change in the state variable, without changing the parameters of the model. In this second part, we present a single-factor interest-rate model (jointly specified under the physical and the risk-neutral probability measures), which allows for observation errors. Our main result is an algorithm to estimate the hidden values of the state variable, as well as the five parameters of our model. We also outline how our results can be extended to the multi-factor case. Part III combines the results of Parts I and II in a numerical example. In this example, we compute a locally risk-minimizing hedging strategy for a life annuity under stochastic interest rates. We assume that the insurance company is trying to hedge this product by trading zero-coupon bonds of various maturities. Since a perfect hedge is impossible in this case, we obtain (by simulation) the distribution of the cost resulting from the "mis-hedge". This distribution is with respect to the physical probability measure, while most of the existing literature considers it under a risk-neutral measure.
APA, Harvard, Vancouver, ISO, and other styles
14

Ndounkeu, Ludovic Tangpi. "Optimal cross hedging of Insurance derivatives using quadratic BSDEs." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/17950.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2011.
We consider the utility portfolio optimization problem of an investor whose activities are influenced by an exogenous financial risk (like bad weather or energy shortage) in an incomplete financial market. We work with a fairly general non-Markovian model, allowing stochastic correlations between the underlying assets. This important problem in finance and insurance is tackled by means of backward stochastic differential equations (BSDEs), which have been shown to be powerful tools in stochastic control. To lay stress on the importance and the omnipresence of BSDEs in stochastic control, we present three methods to transform the control problem into a BSDE: namely, the martingale optimality principle introduced by Davis, the martingale representation, and a method based on Itô-Ventzell's formula. These approaches enable us to work with portfolio constraints described by closed, not necessarily convex sets and to get around the classical duality theory of convex analysis. The solution of the optimization problem can then be simply read from the solution of the BSDE. An interesting feature of each of the different approaches is that the generator of the BSDE characterizing the control problem has quadratic growth and depends on the form of the set of constraints. We review some recent advances in the theory of quadratic BSDEs and their applications. There is no general existence result for multidimensional quadratic BSDEs. In the one-dimensional case, existence and uniqueness strongly depend on the form of the terminal condition. Other topics of investigation are measure solutions of BSDEs, notably measure solutions of BSDEs with jumps, and numerical approximations. We extend the equivalence result of Ankirchner et al. (2009) between existence of classical solutions and existence of measure solutions to the case of BSDEs driven by a Poisson process with a bounded terminal condition. We obtain a numerical scheme to approximate measure solutions. In fact, the existing self-contained construction of measure solutions gives rise to a numerical scheme for some classes of Lipschitz BSDEs. Two numerical schemes for quadratic BSDEs introduced in Imkeller et al. (2010) and based, respectively, on the Cole-Hopf transformation and the truncation procedure are implemented and the results are compared. Keywords: BSDE, quadratic growth, measure solutions, martingale theory, numerical scheme, indifference pricing and hedging, non-tradable underlying, defaultable claim, utility maximization.
APA, Harvard, Vancouver, ISO, and other styles
15

Postigo, Smura Michel Alexander. "Cluster analysis on sparse customer data on purchase of insurance products." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249558.

Full text
Abstract:
This thesis work aims at performing a cluster analysis on customer data of insurance products. Three different clustering algorithms are investigated: K-means (center-based clustering), Two-Level clustering (SOM followed by hierarchical clustering) and HDBSCAN (density-based clustering). The input to the algorithms is a high-dimensional and sparse data set containing information about the customers' previous purchases: how many of a product they have bought and how much they have paid. The data set is partitioned into four subsets using domain knowledge and preprocessed by either normalizing or scaling before the three clustering algorithms are run on it. A parameter search is performed for each algorithm, and the best clustering, measured by the highest average silhouette index, is compared with the other results. The results indicate that all three algorithms perform approximately equally well, with single exceptions. However, the algorithm showing the best general results is K-means on scaled data sets. The different preprocessings and partitions of the data affect the results in different ways, which shows the importance of preprocessing the input data in several ways when performing a cluster analysis.
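A minimal sketch of the kind of comparison described, using K-means and the average silhouette index as the yardstick; the data below are synthetic stand-ins for the sparse purchase counts, not the Trygg-Hansa data set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.poisson(0.3, size=(1000, 40)).astype(float)   # sparse purchase-count-like matrix
X_scaled = StandardScaler().fit_transform(X)          # one of the preprocessings compared in the thesis

best = None
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_scaled)
    score = silhouette_score(X_scaled, labels)        # average silhouette index
    if best is None or score > best[1]:
        best = (k, score)
print("best k:", best[0], "silhouette:", round(best[1], 3))
```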
APA, Harvard, Vancouver, ISO, and other styles
16

Passalidou, Eudokia. "Optimal premium pricing strategies for nonlife products in competitive insurance markets." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2033901/.

Full text
Abstract:
Non-life insurance pricing depends on different costs, including claim and business acquisition costs, management expenses and other parameters such as a margin for fluctuations in claims experience, expected profits, etc. Nevertheless, in a competitive insurance market environment, a company's premium should respond to changes in the level of premiums being offered by competitors. In this thesis, two major issues are investigated. First, it is explored how a company's optimal strategy can be determined in a competitive market, and second, a connection between this strategy and the market's competition is established. More specifically, two functional equations for the volume of business are proposed. In the first, the volume of business is related to the past year's experience, the average premium of the market, the company's premium and a stochastic disturbance. An optimal premium strategy which maximizes the total expected linear discounted utility of the company's wealth over a finite time horizon is then defined analytically and endogenously. In the second, the volume-of-business function is enriched with the company's reputation, for the first time to the author's knowledge. Moreover, the premium elasticity and the reputation elasticity of the volume of business are taken into consideration. An optimal premium strategy which maximizes the total expected linear discounted utility of the company's wealth over a finite time horizon is calculated, and for some special cases analytical solutions are presented. Furthermore, an upper bound, or minimum premium excess strategy, is found for a company with positive reputation and positive premium elasticity of the volume of business. Thirdly, the calculation of a fair premium in a competitive market is discussed. A nonlinear premium-reserve (P-R) model is presented, and the premium is derived by minimizing a quadratic performance criterion concerning the present value of the reserve. The reserve is described by a stochastic equation which includes an additive random nonlinear function of the state, the premium and a not necessarily Gaussian noise that is independently distributed in time, provided only that the mean value and the covariance of the random function are, respectively, zero and a quadratic function of the state, the premium and other parameters. In this quadratic representation of the covariance function, new parameters are introduced that further enrich the previous linear models, such as the income elasticity of insurance demand, the number of insured and inflation, in addition to the company's reputation. Interestingly, for the very first time, the derived optimal premium in a competitive market environment also depends on the company's reserve, among other parameters. In each chapter, numerical applications show the applicability of the proposed models, and their results are further explained and analyzed. Finally, suggestions for further research and a summary of the conclusions complete the thesis.
APA, Harvard, Vancouver, ISO, and other styles
17

Ni, Ying. "Modeling Insurance Claim Sizes using the Mixture of Gamma & Reciprocal Gamma Distributions." Thesis, Mälardalen University, Department of Mathematics and Physics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Guleroglu, Cigdem. "Portfolio Insurance Strategies." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614809/index.pdf.

Full text
Abstract:
The selection of investment strategies and the management of investment funds via portfolio insurance methods play an important role in asset-liability management. Insurance strategies are designed to limit the downside risk of a portfolio while allowing some participation in the potential gains of rising markets. In this thesis, we provide an extensive overview and investigation of the two most prominent portfolio insurance strategies: Constant Proportion Portfolio Insurance (CPPI) and Option-Based Portfolio Insurance (OBPI). The aim of the thesis is to examine, analyze and compare these portfolio insurance strategies in terms of their performance at maturity, some of their statistical and dynamical properties, and their optimality with respect to the expected utility maximization criterion. The thesis presents a continuous-time financial market model containing no arbitrage opportunities, the CPPI and OBPI strategies with their definitions and properties, and an analysis of these strategies that compares their performance at maturity, their statistical properties, and their dynamical behaviour and sensitivities to the key parameters during the investment period as well as at the terminal date, with both formulations and simulations. We then investigate and compare optimal portfolio strategies which maximize the expected utility criterion. As a contribution to the optimality results existing in the literature, an extended study is provided, proving the existence and uniqueness of the appropriate number of shares invested in the unconstrained allocation on a wider interval.
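For reference, the core allocation rules of the two strategies compared in the thesis can be sketched as follows, with V_t the portfolio value, F_t the guaranteed floor, m the constant multiplier, S_T the risky asset and K the insured level; the notation is illustrative.

```latex
% CPPI: the risky exposure is a constant multiple m of the cushion V_t - F_t;
% the remainder is invested in the riskless asset.
E_t^{\mathrm{CPPI}} = m \,(V_t - F_t)_+ .
% OBPI: hold the risky asset plus a put struck at the insured level K
% (equivalently a bond plus a call), so that the terminal value satisfies
V_T^{\mathrm{OBPI}} = S_T + (K - S_T)_+ = \max(S_T, K).
```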
APA, Harvard, Vancouver, ISO, and other styles
19

Badran, Rabih. "Insurance portfolio's with dependent risks." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209547.

Full text
Abstract:
This thesis deals with insurance portfolios with dependent risks in risk theory.

The first chapter treats models with equicorrelated risks. We propose a mathematical structure that leads to a particular probability generating function (pgf) proposed by Tallis. This pgf involves equicorrelated variables. We then study the effect of this type of dependence on quantities of interest in the actuarial literature, such as the distribution function of the sum of claim amounts, stop-loss premiums and finite-horizon ruin probabilities. We use the proposed structure to correct errors in the literature caused by several authors acting as if the sum of equicorrelated random variables necessarily had the pgf proposed by Tallis.

In the second chapter, we propose a model that combines shock models and common mixture models by introducing a variable that controls the level of the shock. Within this new model, we consider two applications in which we generalize the Bernoulli model with shock and the Poisson model with shock. In both applications we study the effect of dependence on the distribution function of the claim amounts, stop-loss premiums and ruin probabilities over finite and infinite horizons. For the second application, we propose a copula-based construction that allows the level of dependence to be controlled through the level of the shock.

In the third chapter, we propose a generalization of the classical Poisson model in which claim amounts and inter-claim times are assumed to be dependent. We compute the Laplace transform of the survival probabilities. In the particular case where the claim amounts have an exponential distribution, we obtain explicit formulas for the survival probabilities.

In the fourth chapter, we generalize the classical Poisson model by introducing dependence between the inter-claim times. We use the link between fluid queues and the risk process to model the dependence. We compute the survival probabilities using a numerical algorithm and treat the case where the claim amounts and the inter-claim times have phase-type distributions.


Doctorate in Sciences
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
20

Yan, Yujie yy. "A General Approach to Buhlmann Credibility Theory." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1011812/.

Full text
Abstract:
Credibility theory is widely used in insurance. It is included in the examinations of the Society of Actuaries and in the construction and evaluation of actuarial models. In particular, the Buhlmann credibility model has played a fundamental role in both actuarial theory and practice. It provides a mathematically rigorous procedure for deciding how much credibility should be given to the actual experience rating of an individual risk relative to the manual rating common to a particular class of risks. However, for any selected risk, the Buhlmann model assumes that the outcome random variables in both the experience periods and the future periods are independent and identically distributed. In addition, the Buhlmann method uses sample-mean-based estimators to insure the selected risk, which may be poor estimators of future costs if only a few observations of past events (costs) are available. We present an extension of the Buhlmann model and propose a general method based on a linear combination of both robust and efficient estimators in a dependence framework. The performance of the proposed procedure is demonstrated by Monte Carlo simulations.
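For context, the classical Buhlmann credibility premium that the thesis extends has the following form; the notation is the standard textbook one rather than the thesis's own.

```latex
% Buhlmann credibility premium for a risk with n past observations X_1, ..., X_n:
% \mu is the collective (manual) mean, a the variance of the hypothetical means,
% and \sigma^2 the expected process variance.
P = Z \,\bar{X} + (1 - Z)\,\mu,
\qquad
Z = \frac{n}{\,n + \sigma^2 / a\,}.
```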
APA, Harvard, Vancouver, ISO, and other styles
21

Xiong, Sheng. "Stochastic Differential Equations: Some Risk and Insurance Applications." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/133166.

Full text
Abstract:
Mathematics
Ph.D.
In this dissertation, we have studied diffusion models and their applications in risk theory and insurance. Let X_t be a d-dimensional diffusion process satisfying a system of stochastic differential equations defined on an open set G ⊆ R^d, and let U_t be a utility function of X_t with U_0 = u_0. Let T be the first time that U_t reaches a level u*. We study the Laplace transform of the distribution of T, as well as the probability of ruin, ψ(u_0) = Pr{T < ∞}.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
22

Yang, Lin. "Linear robust H-infinity stochastic control theory on the insurance premium-reserve processes." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2037227/.

Full text
Abstract:
This thesis deals with the stability analysis of linear discrete-time premium-reserve (P-R) systems in a stochastic framework. Such systems combine the premium pricing process with the medium- and long-term stability of the accumulated reserve (surplus) policy, and they play a key role in the modern actuarial literature. Although the mathematical and practical analysis of P-R systems is well studied and motivated, their stability properties have not been studied thoroughly and have been restricted to a deterministic framework. In engineering, over the last three decades, many useful techniques have been developed in linear robust control theory. This thesis is the first attempt to use some of these tools from linear robust control theory to analyze the stability of these classical insurance systems. Analytically, P-R systems are first formulated with structural properties such as time-varying delays, random disturbances and parameter uncertainties. Then, as an extension of the previous literature, results on stabilization and robust H-infinity control of P-R systems are derived in a stochastic framework. The impact of risky investment on the P-R system stability condition is also shown, and the potential effects of changes in the insurer's investment strategy are discussed. Next, we develop regime-switching P-R systems to describe abrupt structural changes in the economic fundamentals as well as periodic switches in the parameters. The results for the regime-switching P-R system are illustrated by means of two different approaches: Markovian and arbitrary regime-switching systems. Finally, we show how robust guaranteed cost control can be implemented to solve an optimal insurance problem. In each chapter, Linear Matrix Inequality (LMI) sufficient conditions are derived to solve the proposed sub-problems, and numerical examples are given to illustrate the applicability of the theoretical findings.
APA, Harvard, Vancouver, ISO, and other styles
23

Gip, Orreborn Jakob. "Asset-Liability Management with in Life Insurance." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215339.

Full text
Abstract:
In recent years, new regulations and stronger competition have further increased the importance of stochastic asset-liability management (ALM) models for life insurance firms. However, the often complex nature of life insurance contracts makes modeling a challenging task, and insurance firms often struggle with models quickly becoming too complicated and inefficient. There is therefore an interest in investigating whether certain traits of financial ratios could, in fact, be exposed through a more efficient model. In this thesis, a discrete-time stochastic model framework for the simulation of simplified balance sheets of life insurance products is proposed. The model is based on a two-factor stochastic capital market model, supports the most important product characteristics, and incorporates a reserve-dependent bonus declaration. Furthermore, a first approach to endogenously modeling customer transitions is proposed, where realized policy returns are used for assigning transition probabilities. The model's sensitivity to different input parameters, and its ability to capture the most important behaviour patterns, are demonstrated by the use of scenario and sensitivity analyses. Based on the findings from these analyses, suggestions for improvements and further research are also presented.
APA, Harvard, Vancouver, ISO, and other styles
24

Erturk, Huseyin. "Limit theorems for random exponential sums and their applications to insurance and the random energy model." Thesis, The University of North Carolina at Charlotte, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10111893.

Full text
Abstract:

In this dissertation, we are mainly concerned with sums of random exponentials. Here, the random variables are independent and identically distributed. Another distinctive assumption is that the number of variables in the sum is a function of the constant in the exponent. Our first goal is to find the limiting distributions of the random exponential sums for new classes of random variables. For some classes, such results are known: the normal distribution, the Weibull distribution, etc.

Secondly, we apply these limit theorems to some insurance models and to the random energy model in statistical physics. Specifically, for the first case, we give an estimate of the ruin probability in terms of the empirical data. For the random energy model, we present an analysis of the free energy for a new class of distributions. In some particular cases, we prove the existence of several critical points for the free energy. In some other cases, we prove the absence of phase transitions.

Our results give a new approach to computing the ruin probabilities of insurance portfolios empirically when there is a sequence of insurance portfolios with a custom growth rate of the claim amounts. The second application introduces a simple method to derive the free energy in the case where the random variables in the statistical sum can be represented as a function of standard exponential random variables. The technical tools of this study include the classical limit theory for sums of independent and identically distributed random variables and different asymptotic methods such as the Euler-Maclaurin formula and the Laplace method.
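The object of study can be written schematically as follows; the exact growth rate of the number of terms is specified in the dissertation, so the notation here is only indicative.

```latex
% Sum of random exponentials: X_1, X_2, ... are i.i.d. and the number of
% terms N(t) grows as a prescribed function of the parameter t in the exponent.
S_{N(t)}(t) = \sum_{i=1}^{N(t)} e^{\,t X_i}.
```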

APA, Harvard, Vancouver, ISO, and other styles
25

Govorun, Maria. "Pension and health insurance, phase-type modeling." Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209447.

Full text
Abstract:
Phase-type models have long been used in several scientific fields to describe systems that can be characterized by different states. These models are well known in queueing theory, economics and insurance.

The thesis focuses on different applications of phase-type models in insurance and shows their advantages. The model of Lin and Liu (2007) is of particular interest because it describes the ageing process of the human organism. The lifetime of an individual follows a phase-type distribution, and the states of the model represent health states. The fact that the model provides a connection between health states and the age of the individual makes it very useful in insurance.

The main results of the thesis are new models and methods in pension insurance and health insurance that use the phase-type assumption to describe the lifetime of an individual.

In pension insurance, the goal is to estimate the profitability of a pension fund. To this end, we build a profit-test model that requires the modelling of several characteristics. We describe the evolution of the fund's members by adapting the ageing model to multiple causes of exit. Estimating future profits requires determining the contribution values for each health state, as well as the seniority and initial health state of each member. This allows us to obtain the distribution of future profits and to develop methods for estimating longevity risk and market-change risk. Moreover, we assume that a decrease in mortality rates for retirees influences future profits more than a decrease for active members. Therefore, to evaluate the impact of changes in health on profitability, we model the profits coming from retirees separately.

In health insurance, we use the phase-type model to compute the distribution of the present value of future health costs. We develop recursive algorithms that allow the distribution to be evaluated over a short period, using continuous-time fluid models, and over the whole lifetime of the individual, by constructing discrete-time models. The three discrete-time models correspond to different assumptions made about the costs: in the first model, health costs are assumed to be independent and identically distributed and not to depend on the individual's ageing; in the other two models, the costs are assumed to depend on the individual's health state.


Doctorate in Sciences
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
26

Nilsson, Mattias, and Erik Sandberg. "Application and Evaluation of Artificial Neural Networks in Solvency Capital Requirement Estimations for Insurance Products." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-224789.

Full text
Abstract:
The least squares Monte Carlo (LSMC) approach is commonly used in the estimation of the solvency capital requirement (SCR), as a more computationally efficient alternative to a full nested Monte Carlo simulation. This study compares the performance of artificial neural networks (ANNs) to that of the LSMC approach in the estimation of the SCR of various financial portfolios. More specifically, feedforward ANNs with multiple hidden layers are implemented and the results show that an ANN is superior in terms of accuracy compared to the LSMC approach. The ANN and LSMC approaches reduce the computation time to approximately 2-5% compared to a full nested Monte Carlo simulation. Significant time is however spent on validating and tuning the ANNs in order to optimise their performance. Despite continuous improvements in tools and techniques available for optimization and visualisation of ANNs, they are to a certain degree still regarded as “black boxes”. This study employs various tools and techniques to visualise and validate the implemented ANN models as extensively as possible. Examples include software libraries such as TensorFlow and Seaborn as well as methods to prevent overfitting such as weight regularisation and dropout. These tools and techniques do indeed contribute to shedding some light on the black box. Certain aspects of ANNs are however still difficult to interpret, which might make them less manageable in practice.
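As an illustration of the kind of feedforward network with dropout and weight regularisation described above, here is a minimal Keras sketch; the architecture, dimensions and synthetic data are hypothetical and not those used in the study.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
X = rng.normal(size=(10_000, 8))                                # placeholder outer risk-factor scenarios
y = (X ** 2).sum(axis=1) + rng.normal(scale=0.1, size=10_000)   # placeholder portfolio values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dropout(0.1),      # dropout to reduce overfitting
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(1),          # regression output: portfolio value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=256, validation_split=0.2, verbose=0)
# The fitted network stands in for the inner Monte Carlo valuation; the SCR is then
# read off as a quantile of the resulting loss distribution over the outer scenarios.
```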
APA, Harvard, Vancouver, ISO, and other styles
27

Dunbäck, Daniel, and Lars Mattsson. "Predicting Risk Exposure in the Insurance Sector : Application of Statistical Tools to Enhance Price Optimization at Trygg-Hansa." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184754.

Full text
Abstract:
Knowledge about future customer flow can be very important when trying to optimize a business, especially for an insurance company like Trygg-Hansa, since the customer flow is connected to the company's risk exposure. In this thesis it is shown how the customer volume for certain time periods can be estimated using stratification of data and univariate time series models. From this, a simulated customer flow can be created using stratified sampling from the historical population. Two different stratification approaches were tested: an expert-driven approach using visualization to partition the population into smaller subsets, and a data-driven approach using a regression tree. It was found that both approaches were able to capture seasonal effects and trends and delivered better results than the method currently used by the company. However, since neither of the methods outperformed the other, it is not possible to determine which of them is the best one and should be implemented. It is therefore recommended that both methods be investigated further. It was also found that the variation in the population, when considering the effect on the company's risk exposure, mattered less than the customer volume.
APA, Harvard, Vancouver, ISO, and other styles
28

Borgman, Robin, and Axel Hellström. "Micro-Level Loss Reserving in Economic Disability Insurance." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229064.

Full text
Abstract:
In this thesis we provide a construction of a micro-level reserving model for an economic disability insurance portfolio. The model is based on the mathematical framework developed by Norberg (1993). The data considered are provided by Trygg-Hansa. The micro model tracks the development of each individual claim throughout its lifetime. The model setup is straightforward and in line with the insurance contract for economic disability, with levels of disability categorized as 50%, 75% and 100%. Model parameters are estimated from the reported claim development data up to the valuation time Τ. Using the estimated model parameters, the development of RBNS and IBNR claims is simulated. The results of the simulations are presented on several levels and compared with Mack Chain-Ladder estimates. The distributions of end states and times to settlement from the simulations follow patterns that are representative of the reported data. The estimated ultimate of the micro model is considerably lower than the Mack Chain-Ladder estimate. The difference can partly be explained by a lower claim occurrence intensity for recent accident years, which is a consequence of the decreasing number of reported claims in the data. Furthermore, the standard error of the micro model is lower than the standard error produced by Mack Chain-Ladder. However, no conclusion regarding the accuracy of the two reserving models can be drawn. Finally, it is concluded that the opportunities offered by micro modelling are promising, though accompanied by some concerns regarding data and parameter estimation.
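To recall the benchmark the micro model is compared against, here is a small sketch of volume-weighted chain-ladder development factors on a toy cumulative run-off triangle; all numbers are made up and the Mack standard-error formulas are omitted.

```python
import numpy as np

# Toy cumulative run-off triangle (rows: accident years, cols: development years);
# NaN marks the unobserved future part of the triangle.
tri = np.array([
    [100.0, 160.0, 190.0, 200.0],
    [110.0, 170.0, 205.0, np.nan],
    [120.0, 185.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

n = tri.shape[1]
factors = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])
    # Volume-weighted chain-ladder development factor from column j to j+1.
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

# Fill in the lower triangle to obtain the estimated ultimate per accident year.
full = tri.copy()
for j in range(n - 1):
    missing = np.isnan(full[:, j + 1])
    full[missing, j + 1] = full[missing, j] * factors[j]
ultimates = full[:, -1]
print(np.round(factors, 3), np.round(ultimates, 1))
```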
APA, Harvard, Vancouver, ISO, and other styles
29

Barnholdt, Jacob, and Josefin Grafford. "Predicting Large Claims within Non-Life Insurance." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-228983.

Full text
Abstract:
This bachelor thesis in mathematical statistics aims to study the possibility of predicting particularly large claims from non-life insurance policies with commercial policyholders. This is done through regression analysis, where we seek to develop and evaluate a generalized linear model, GLM. The project is carried out in collaboration with the insurance company If P&C Insurance, and most of the research is conducted at their headquarters in Stockholm. The explanatory variables of interest are characteristics associated with the policyholders. Due to the scarcity of large claims in the data set, the prediction is done in two steps. First, logistic regression is used to model the probability of a large claim occurring. Second, the magnitude of the large claims is modelled using a generalized linear model with a gamma distribution. Two full models with all characteristics included are constructed and then reduced with computer-intensive algorithms. This results in two reduced models, one with two characteristics excluded and one with one characteristic excluded.
Det här kandidatexamensarbetet inom matematisk statistik avser att studera möjligheten att predicera särskilt stora skador från sakförsäkringspolicys med företag som försäkringstagare. Detta görs med regressionsanalys, där vi ämnar att utveckla och bedöma en generaliserad linjär modell, GLM. Projektet utförs i samarbete med försäkringsbolaget If Skadeförsäkring och merparten av undersökningen sker på deras huvudkontor i Stockholm. Förklaringsvariablerna som är av intresse att undersöka är egenskaper associerade med försäkringstagarna. På grund av sällsynthet av storskador i datamängden görs prediktionen i två steg. Först används logistisk regression för att modellera sannolikheten för en storskada att inträffa. Sedan modelleras storskadornas omfattning genom en generaliserad linjär modell med en gammafördelning. Två grundmodeller med alla förklaringsvariabler konstrueras för att sedan reduceras med datorintensiva algoritmer. Det resulterar i två reducerade modeller, med två respektive en kundegenskap utesluten.
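A hedged sketch of the two-step approach described in the abstract is given below: a logistic GLM for the probability of a large claim and a gamma GLM with log link for its size, combined into an expected large-claim cost per policy. The file name, column names and formula terms are assumptions made for illustration only.

```python
# Hedged sketch of a two-step large-claim model with statsmodels.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("claims.csv")  # hypothetical data set

# Step 1: probability that a policy generates a large claim (logistic GLM).
freq_model = smf.glm("is_large ~ industry + turnover_band + region",
                     data=df,
                     family=sm.families.Binomial()).fit()

# Step 2: severity of large claims (gamma GLM with log link), fitted only on
# observations that actually had a large claim (assumes every category level
# also occurs among the large claims).
large = df[df["is_large"] == 1]
sev_model = smf.glm("claim_cost ~ industry + turnover_band + region",
                    data=large,
                    family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Expected large-claim cost per policy = P(large claim) * E[cost | large claim].
df["expected_large_cost"] = freq_model.predict(df) * sev_model.predict(df)
print(df["expected_large_cost"].describe())
```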
APA, Harvard, Vancouver, ISO, and other styles
30

Eichler, Andreas, Gunther Leobacher, and Michaela Szölgyenyi. "Utility indifference pricing of insurance catastrophe derivatives." Springer Berlin Heidelberg, 2017. http://dx.doi.org/10.1007/s13385-017-0154-2.

Full text
Abstract:
We propose a model for an insurance loss index and the claims process of a single insurance company holding a fraction of the total number of contracts that captures both ordinary losses and losses due to catastrophes. In this model we price a catastrophe derivative by the method of utility indifference pricing. The associated stochastic optimization problem is treated by techniques for piecewise deterministic Markov processes. A numerical study illustrates our results.
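For orientation, the utility indifference price of a claim H can be stated in its standard textbook form as below; the paper's own setting (an insurance loss index and piecewise deterministic Markov process techniques) is of course more specific, and the notation here is generic.

```latex
% Standard (seller's) utility indifference price \pi(H): the premium that makes
% the insurer indifferent, in expected-utility terms, between selling the
% catastrophe derivative H and not selling it. X_T^{x,\theta} denotes terminal
% wealth from initial capital x under admissible strategy \theta.
\[
  \sup_{\theta}\, \mathbb{E}\!\left[ U\!\left(X_T^{\,x+\pi(H),\,\theta} - H\right) \right]
  \;=\;
  \sup_{\theta}\, \mathbb{E}\!\left[ U\!\left(X_T^{\,x,\,\theta}\right) \right].
\]
```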
APA, Harvard, Vancouver, ISO, and other styles
31

Maciulevičiūtė, Alvyda. "Bonus-Malus sistemos su a priori koeficientais modeliavimas ir optimizavimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040602_223628-10128.

Full text
Abstract:
In this work we construct two Bonus-Malus systems with the same transition rules but with different a priori criteria (one depending on personal characteristics and one on automobile characteristics), review the components of the model, and analyze the stationarity of the mean premium, the coefficient of variation, elasticity and optimization.
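To make the notion of a stationary mean premium concrete, the sketch below computes the stationary class distribution and the asymptotic mean premium of a small illustrative Bonus-Malus system. The class structure, transition rules and claim frequency are assumptions, not the systems studied in the thesis.

```python
# Illustrative sketch: stationary distribution and asymptotic mean premium
# of a 4-class Bonus-Malus system under Poisson claim counts.
import numpy as np

premium_relativity = np.array([0.6, 0.8, 1.0, 1.3])  # 4 BM classes (assumed)
lam = 0.1                                            # expected claim frequency (assumed)

p0 = np.exp(-lam)  # probability of a claim-free year (Poisson)
# Rows = current class, columns = next class: one class down after a
# claim-free year, two classes up (capped at the worst class) otherwise.
P = np.array([
    [p0,  0.0, 1 - p0, 0.0   ],
    [p0,  0.0, 0.0,    1 - p0],
    [0.0, p0,  0.0,    1 - p0],
    [0.0, 0.0, p0,     1 - p0],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
stat = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
stat = stat / stat.sum()

print("stationary distribution:", np.round(stat, 3))
print("asymptotic mean premium:", round(float(stat @ premium_relativity), 3))
```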
APA, Harvard, Vancouver, ISO, and other styles
32

Rinkevičiūtė, Laima. "Ne gyvybės draudimo analizė Lietuvoje." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20060606_150230-24295.

Full text
Abstract:
The insurance market in Lithuania is still evolving, but this process is quite rapid. The purpose of this work is an analysis of non-life insurance in Lithuania, which we carry out by interpreting statistical information on insurance and by analyzing the paying capacity of non-life insurance companies. Insurance companies calculate future premiums using data from past periods. It would be better to adjust the premium according to the predicted future number of contracts and losses, so the numbers of contracts and losses recorded by Lithuanian insurance companies each quarter are studied as time series. Several time series models were created for three principal kinds of insurance (Motor Third Party Liability Insurance, Land vehicles other than railway rolling stock Insurance, Property Insurance) and the one that fits reality best was selected. We analyze the variation in the number of non-life insurance companies, paid losses, signed premiums and the number of contracts. The analysis of the insurance market's indicators shows a strong tendency towards a more stable market. The analysis of the insurance companies' paying capacity shows that two private companies, "Baltic Polis" and "Industrijos garantas", were close to bankruptcy in 2004. Forecasting the number of Motor Third Party Liability Insurance and Land vehicles other than railway rolling stock Insurance contracts shows that the autoregressive model is the best for... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
33

Rosén, Henrik. "Automation of Medical Underwriting by Appliance of Machine Learning." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-171843.

Full text
Abstract:
One of the most important fields regarding growth and development for most organizations today is digitalization, or digital transformation. The offering of technological solutions to enhance existing, or create new, processes or products is emerging. That is, it is of great importance that organizations continuously assess the potential of applying new technical solutions to their existing processes. For example, a well-implemented AI solution for automation of an existing process is likely to contribute considerable business value. Medical underwriting for individual insurances, which is the process considered in this project, is all about risk assessment based on the individual's medical record. Such a task appears well suited for automation by a machine learning based application and would thereby contribute substantial business value. However, to make a proper replacement of a manual decision making process, no important information may be excluded, which becomes rather challenging since a considerable fraction of the information in the medical records consists of unstructured text data. In addition, the underwriting process is extremely sensitive to mistakes such as unnecessarily approving insurances where an elevated risk of future claims can be assessed. Three algorithms, Logistic Regression, XGBoost and a Deep Learning model, were evaluated on training data consisting of the medical records' structured data from categorical and numerical answers, the text data as TF-IDF observation vectors, and a combination of both subsets of features. XGBoost was the classifier performing best according to the key metric, a pAUC over an FPR from 0 to 0.03. There is no question about the substantial importance of not disregarding any type of information from the medical records when developing machine learning classifiers to predict the medical underwriting outcomes. Under a very risk-conservative and performance-pessimistic approach, the best performing classifier managed, considering only the group of youngest children (50% of the sample), to recall close to 50% of all standard risk applications at a false positive rate of 2% when both structured and text data were considered. Even though the structured data accounts for most of the explanatory ability, it becomes clear that the inclusion of the text data as TF-IDF observation vectors makes the difference needed to potentially generate a positive net present value for an implementation of the model.
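The sketch below shows, in hedged form, how structured answers and TF-IDF features from free text can be combined in a single XGBoost pipeline and evaluated with a partial AUC over a low false-positive region. The file name, column names and model settings are illustrative assumptions, not those of the thesis.

```python
# Hedged sketch: structured answers + TF-IDF text features -> XGBoost classifier.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_csv("underwriting.csv")  # hypothetical data
X, y = df.drop(columns="standard_risk"), df["standard_risk"]

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000), "free_text"),        # note field
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["sex", "age_band"]),
], remainder="passthrough")  # numerical answers pass through unchanged

clf = Pipeline([
    ("features", features),
    ("xgb", XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Partial AUC over the low false-positive region, mirroring the pAUC metric above.
print("pAUC (FPR <= 0.03):", roc_auc_score(y_te, proba, max_fpr=0.03))
```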
APA, Harvard, Vancouver, ISO, and other styles
34

Gyllenberg, Felix, and Leonard Rudolf Åström. "INTEREST RATE RISK : A comparative study aimed at finding the most crucial shift in interest rate curves for a life insurance company." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160248.

Full text
Abstract:
Risk management is applied in many financial institutions under regulatory supervision. Life insurance companies face many challenges in ensuring policyholders of future payouts. The inverted balance sheet of life insurance companies implies that the policyholder pays premiums in advance to the insurance company and later receives payouts at the age of retirement. This places a great responsibility on the life insurance company to be able to meet future liabilities. Due to this, one of the largest risks facing a life insurance company is interest rate risk. Future liabilities depend on the interest rates, and the difference in duration between assets and liabilities creates an imperfect negative correlation between the movements in assets and liabilities when the interest rate changes. The bond market holds different types of bonds, such as government bonds, housing bonds and corporate bonds, with different maturities within each subgroup. The relationship between these subgroups and the maturities within them is interesting to investigate from a forecasting point of view. This relationship is usually referred to as the term structure of interest rates, and changes in the term structure are referred to as shifts. This thesis aims to find which of the three shifts, level, slope and curvature, is most important to capture in interest rate models. This is investigated using three different simulation techniques, and the results show that the first shift, representing a level shift of the whole term structure, has the largest effect on Skandia's balance sheet.
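Level, slope and curvature shifts are commonly identified with the leading principal components of historical yield-curve changes; the hedged sketch below illustrates that decomposition. The input file and tenor grid are assumptions, and the thesis's own simulation techniques are not reproduced here.

```python
# Hedged sketch: level/slope/curvature as the first three principal components
# of historical yield-curve changes.
import numpy as np

# rows = observation dates, columns = tenors (e.g. 1y, 2y, ..., 30y)
yields = np.loadtxt("yield_curves.csv", delimiter=",")
changes = np.diff(yields, axis=0)       # period-to-period curve shifts
changes -= changes.mean(axis=0)         # center before PCA

cov = np.cov(changes, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]       # sort by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("variance explained by level/slope/curvature:", np.round(explained[:3], 3))
# The first eigenvector is typically close to constant across tenors (level),
# the second roughly monotone (slope) and the third hump-shaped (curvature).
```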
APA, Harvard, Vancouver, ISO, and other styles
35

Rasoul, Ryan. "Comparison of Forecasting Models Used by The Swedish Social Insurance Agency." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49107.

Full text
Abstract:
In this degree project we compare two forecasting models with the forecasting model that was used in March 2014 by the Swedish Social Insurance Agency ("Försäkringskassan" in Swedish, or "FK"). The models are used for forecasting the number of cases. The two models compared with the model used by FK are the Seasonal Exponential Smoothing (SES) model and the Auto-Regressive Integrated Moving Average (ARIMA) model. The models are used to predict case volumes for two types of benefits: General Child Allowance, "Barnbidrag" (BB_ABB), and Pregnancy Benefit, "Graviditetspenning" (GP_ANS). The results compare the forecast errors at the short time horizon (22 months) and at the long time horizon (70 months) for the different types of models. The forecast error is the difference between the actual and the forecast value of the number of cases received every month. The ARIMA model used in this degree project for GP_ANS had forecast errors on the short and long horizons that are lower than those of the forecasting model used by FK in March 2014. However, the absolute forecast error is lower in the actually used model than in the ARIMA and SES models for pregnancy benefit cases. The results also show that for BB_ABB the forecast errors were large in all models, but lowest in the actually used model (also in terms of absolute forecast error). This shows that random error due to laws, rules and community changes is almost impossible to predict. Therefore, it is not feasible to predict the time series with the tested models in the long term. However, that mainly depends on what FK considers acceptable forecast errors and how those forecasts will be used. It is important to mention that the implementation of ARIMA differs across software. The best model in the software used in this degree project, SAS (Statistical Analysis System), is not necessarily the best in other software.
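A hedged sketch of such a comparison is given below: SES and ARIMA models are fitted to a monthly case-count series and their out-of-sample errors are compared at a 22-month horizon. The file name, model orders and error metric are illustrative assumptions rather than the agency's actual setup.

```python
# Hedged sketch: compare seasonal exponential smoothing with a seasonal ARIMA
# on a monthly case-count series using a 22-month holdout.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

cases = pd.read_csv("monthly_cases.csv", index_col=0, parse_dates=True)["cases"]
h = 22
train, test = cases[:-h], cases[-h:]

# Seasonal exponential smoothing (additive trend and seasonality, period 12).
ses_fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                               seasonal_periods=12).fit()
ses_fc = ses_fit.forecast(h)

# Seasonal ARIMA with an illustrative order.
arima_fit = ARIMA(train, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit()
arima_fc = arima_fit.forecast(h)

mae = lambda fc: np.mean(np.abs(fc.values - test.values))
print(f"SES   MAE over {h} months: {mae(ses_fc):.1f}")
print(f"ARIMA MAE over {h} months: {mae(arima_fc):.1f}")
```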
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Jiajun. "Asymptotic analysis of dependent risks and extremes in insurance and finance." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2042999/.

Full text
Abstract:
In this thesis we are interested in the asymptotic analysis of extremes and risks. Heavy-tailed distribution functions are used to model extreme risks; they are widely applied in insurance and are gradually penetrating finance as well. We also use various tools, such as copulas to model dependence structures and extreme value theory to model rare events. We focus on modelling and analysing extreme risks and demonstrate how the derived results can be used in practice. We start from a discrete-time risk model. More concretely, consider a discrete-time annuity-immediate risk model in which the insurer is allowed to invest its wealth in a risk-free or a risky portfolio under a certain regulation. The insurer is then said to be exposed to a stochastic economic environment that contains two kinds of risk, insurance risk and financial risk. The former is the traditional liability risk caused by insurance losses, while the latter is the asset risk resulting from investment. Within each period, the insurance risk is denoted by a real-valued random variable X, and the financial risk by a positive random variable Y fulfilling some constraints. We are interested in the ruin probability and the tail behaviour of the maximum of the stochastic present values of aggregate net losses with Sarmanov or Farlie-Gumbel-Morgenstern (FGM) dependent insurance and financial risks. We derive asymptotic formulas for the finite-time ruin probability with light-tailed or moderately heavy-tailed insurance risk for both risk-free and risky investment. As an extension, we improve the result for extreme risks arising from a rare event, combining simulation with asymptotics to compute the ruin probability more efficiently. Next, we consider a similar risk model, but the special case in which insurance and financial risks follow the least risky FGM dependence structure with heavy-tailed distributions. We follow the study of Chen (2011) of the finite-time ruin probability in a discrete-time risk model in which insurance and financial risks form a sequence of independent and identically distributed random pairs following a common bivariate FGM distribution function with parameter -1 ≤ θ ≤ 1 governing the strength of dependence. For the subexponential case, when -1 < θ ≤ 1, a general asymptotic formula for the finite-time ruin probability was derived. However, the derivation there is not valid for θ = -1. In this thesis, we complete the study by extending Chen's work to θ = -1, where the insurance risk and financial risk are negatively dependent. We refer to this situation as the least risky FGM dependent insurance and financial risks. The new formulas for θ = -1 look very different from, but are intrinsically consistent with, the existing ones for -1 < θ ≤ 1, and they offer a quantitative understanding of how significantly the asymptotic ruin probability decreases when θ switches from its normal range to its negative extremum. Finally, we study a continuous-time risk model. Specifically, we consider a renewal risk model with a constant premium and a constant force of interest, where the claim sizes and inter-arrival times follow certain dependence structures via some restriction on their copula function. The infinite-time absolute ruin probabilities are studied instead of the traditional infinite-time ruin probability, with light-tailed or moderately heavy-tailed claim sizes. Under the assumption that the distribution of the claim size belongs to the intersection of the convolution-equivalent class and the rapidly varying tailed class, or to the larger intersection of the O-subexponential class, the generalized exponential class and the rapidly varying tailed class, the infinite-time absolute ruin probabilities are derived.
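As a rough numerical companion to the asymptotic formulas discussed above, the sketch below estimates a finite-time ruin probability by Monte Carlo for FGM-dependent insurance and financial risks, including the extreme case θ = -1. The marginal distributions, parameters and initial capital are illustrative assumptions, not those analysed in the thesis.

```python
# Hedged sketch: finite-time ruin probability under FGM dependence between the
# insurance risk X (heavy-tailed net loss) and the financial risk Y (discount factor).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def fgm_uniforms(n, theta):
    """Sample n pairs (u, v) from the FGM copula via conditional inversion."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = theta * (1.0 - 2.0 * u)
    A = 1.0 + a
    v = 2.0 * w / (A + np.sqrt(A * A - 4.0 * a * w))
    return u, np.clip(v, 1e-12, 1.0 - 1e-12)

def finite_time_ruin_prob(x0=50.0, horizon=10, theta=-1.0, n_sim=200_000):
    S = np.zeros(n_sim)         # stochastic present value of aggregate net losses
    D = np.ones(n_sim)          # accumulated stochastic discount factor
    ruined = np.zeros(n_sim, dtype=bool)
    for _ in range(horizon):
        u, v = fgm_uniforms(n_sim, theta)
        X = 5.0 * (1.0 - u) ** (-0.5) - 5.0    # Pareto-type (Lomax) net loss
        Y = np.exp(-0.04 + 0.2 * norm.ppf(v))  # lognormal discount factor
        D *= Y
        S += X * D
        ruined |= S > x0
    return ruined.mean()

for theta in (-1.0, 0.0, 1.0):
    print(f"theta = {theta:+.0f}: ruin probability ~ {finite_time_ruin_prob(theta=theta):.4f}")
```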
APA, Harvard, Vancouver, ISO, and other styles
37

de, Paz Monfort Abel. "Heterogeneous discounting. Time consistency in investment and insurance models." Doctoral thesis, Universitat de Barcelona, 2012. http://hdl.handle.net/10803/127346.

Full text
Abstract:
In Chapter 2 we extend the heterogeneous discounting model introduced in Marín-Solano and Patxot (2012) to a stochastic environment. Our main contribution in this chapter is to derive the DPE providing time-consistent solutions for both the discrete and the continuous time case. For the continuous time problem we derive the DPE following the two different procedures described above: the formal limiting procedure and the variational approach. However, an important limitation of these approaches is that the DPE obtained is a functional equation with a nonlocal term. As a consequence, it becomes very complicated to find solutions, not only analytically, but also numerically. For this reason, we also derive a set of two coupled partial differential equations which allows us to compute (analytically or numerically) the solutions for different economic problems. In particular, we are interested in analyzing how time-inconsistent preferences with heterogeneous discounting modify the classical consumption and portfolio rules (Merton (1971)). The introduction of a stochastic terminal time is also discussed. In Chapter 3, the results of Chapter 2 are extended in several ways. First, we consider that the decision maker is subject to mortality risk. Within this context, we derive the optimal consumption, investment and life insurance rules for an agent whose concern about both the bequest left to her descendants and her wealth at retirement increases with time. To this end we depart from the model in Pliska and Ye (2007), generalizing the individual time preferences by incorporating heterogeneous discount functions. In addition, following Kraft (2003), we derive the wealth process in terms of the portfolio elasticity with respect to the traded assets. This approach allows us to introduce options into the investment opportunity set as well as to enlarge it by any number of contingent claims while maintaining the analytical tractability of the model. Finally, we analyze how the standard solutions are modified depending on the attitude of the agent towards her changing preferences, showing the differences with some numerical illustrations. In Chapter 4 we extend the heterogeneous discounting framework to the study of differential games with heterogeneous agents, i.e., agents who exhibit different instantaneous utility functions and different (but constant) discount rates of time preference. In fact, although the non-standard models have usually focused on individual agents, the framework has proved to be useful in the study of cooperative solutions for some standard discounting differential games. Our main contribution in this chapter is to provide a set of DPEs in discrete and continuous time in order to obtain time-consistent cooperative solutions for N-person differential games with heterogeneous agents. The results are applied to the study of a cake-eating problem describing the management of a common-property exhaustible natural resource. The extension to a simple common renewable natural resource in an infinite horizon is also discussed. Finally, in Chapter 5, we present a summary of the main results of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
38

Saladūnienė, Ramunė. "TP savininkų ir valdytojų civilinės atsakomybės draudimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2005. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2005~D_20050608_185959-15354.

Full text
Abstract:
Motor Third Party Liability Insurance appeared a few years after serial production of cars started. This kind of insurance was made obligatory in many European countries after it was noticed that not all drivers who did harm to others or damaged their property were able to meet civil liability claims and compensate the damage. The Lithuanian Motor Third Party Liability Insurance Law was accepted on June 14, 2001 and came into force on January 1, 2002; the requirement to insure a vehicle came into force three months later, on April 1. Lithuania was the last country in Europe to bring this obligatory kind of insurance into practice. In this work the insurance markets of Lithuania, Latvia and Estonia are compared. These markets were chosen because of their similar economic and political situation: former members of the USSR, similar living standards, and no pre-existing insurance market. After regaining independence, insurance companies were established little by little. The aim of this work is to show how insurance markets that started under the same conditions later developed differently, and to compare their present situation. Both annual and average data are used for the comparison. In the first part of this work the statistical data of the Lithuanian, Latvian and Estonian insurance markets are compared: the number of insurance companies, the average premium per inhabitant, etc. In the second part the contracts signed by Lithuanian insurance companies each quarter are studied as time series. Several time... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
39

Šutienė, Kristina. "Nemokumo trukmės vidurkio ir dispersijos įvertinimas draudime." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040604_122827-11051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Webb, Jared Anthony. "A Topics Analysis Model for Health Insurance Claims." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3805.

Full text
Abstract:
Mathematical probability has a rich theory and powerful applications. Of particular note is the Markov chain Monte Carlo (MCMC) method for sampling from high-dimensional distributions that may not admit a naive analysis. We develop the theory of the MCMC method from first principles and prove its relevance. We also define a Bayesian hierarchical model for generating data. By understanding how data are generated we may infer hidden structure about these models. We use a specific MCMC method called a Gibbs sampler to discover topic distributions in a hierarchical Bayesian model called Topics Over Time. We propose an innovative use of this model to discover disease and treatment topics in a corpus of health insurance claims data. By representing individuals as mixtures of topics, we are able to consider their future costs on an individual level rather than as part of a large collective.
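For orientation, the sketch below implements a collapsed Gibbs sampler for plain LDA, a simpler relative of the Topics Over Time model (the time component is omitted). The toy corpus, hyperparameters and variable names are illustrative assumptions, not the thesis's data or exact model.

```python
# Hedged sketch: collapsed Gibbs sampler for a plain LDA topic model.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_lda(docs, n_topics, vocab_size, alpha=0.1, beta=0.01, n_iter=200):
    # Topic assignment per word, plus doc-topic and topic-word count matrices.
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    ndk = np.zeros((len(docs), n_topics))
    nkw = np.zeros((n_topics, vocab_size))
    nk = np.zeros(n_topics)
    for d, (doc, zd) in enumerate(zip(docs, z)):
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    for _ in range(n_iter):
        for d, (doc, zd) in enumerate(zip(docs, z)):
            for i, w in enumerate(doc):
                k = zd[i]                       # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Full conditional of the topic for word w in document d.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                zd[i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    return theta  # per-individual (per-document) topic mixture

# Toy example: 3 "claims histories" over a vocabulary of 6 procedure codes.
docs = [[0, 0, 1, 2], [3, 4, 4, 5], [0, 1, 3, 5]]
print(np.round(gibbs_lda(docs, n_topics=2, vocab_size=6), 2))
```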
APA, Harvard, Vancouver, ISO, and other styles
41

Parker, Bobby I. Mr. "Assessment of the Sustained Financial Impact of Risk Engineering Service on Insurance Claims Costs." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/100.

Full text
Abstract:
This research paper creates a comprehensive statistical model relating the financial impact of risk engineering activity to insurance claims costs. Specifically, the model shows important statistical relationships among six variables: type of risk engineering activity, risk engineering dollar cost, duration of risk engineering service, type of customer by industry classification, dollar premium amounts, and dollar claims costs. We accomplish this by using a large data sample of approximately 15,000 customer-years of insurance coverage and risk engineering activity. The data sample is from an international casualty/property insurance company and covers four years of operations, 2006-2009. The chosen statistical model is the linear mixed model, as implemented in SAS 9.2 software. This method provides essential capabilities, including the flexibility to work with data having missing values and the ability to reveal time-dependent statistical associations.
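A hedged sketch of a comparable linear mixed model is shown below, here in Python/statsmodels rather than the SAS 9.2 procedure used in the paper; the file name and column names are assumptions made for illustration.

```python
# Hedged sketch: linear mixed model for claims costs with a random intercept
# per customer to account for repeated customer-year observations.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("risk_engineering.csv")  # hypothetical customer-year data

model = smf.mixedlm(
    "claims_cost ~ service_type + service_cost + service_years"
    " + industry + premium",
    data=df,
    groups=df["customer_id"],
)
result = model.fit()
print(result.summary())
```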
APA, Harvard, Vancouver, ISO, and other styles
42

Webb, Matthew Aaron. "Modeling Individual Health Care Utilization." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/8832.

Full text
Abstract:
Health care represents an increasing proportion of global consumption. We discuss ways to model health care utilization on an individual basis. We present a probabilistic, generative model of utilization. Leveraging previously observed utilization levels, we learn a latent structure that can be used to accurately understand risk and make predictions. We evaluate the effectiveness of the model using data from a large population.
APA, Harvard, Vancouver, ISO, and other styles
43

Hardin, Patrik, and Sam Tabari. "Modelling Non-life Insurance Policyholder Price Sensitivity : A Statistical Analysis Performed with Logistic Regression." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209773.

Full text
Abstract:
This bachelor thesis within mathematical statistics studies the possibility of modelling the renewal probability for commercial non-life insurance policyholders. The project was carried out in collaboration with the non-life insurance company If P&C Insurance Ltd. at their headquarters in Stockholm, Sweden. The paper includes an introduction to underlying concepts within insurance and mathematics and a detailed review of the analytical process, followed by a discussion and conclusions. The first stages of the project were the initial collection and processing of explanatory insurance data and the development of a logistic regression model for policy renewal. An initial model was built and modern methods of mathematics and statistics were applied in order to obtain a final model consisting of 9 significant characteristics. The regression model had a predictive power of 61%. This suggests that it is, to a certain degree, possible to predict the renewal probability of non-life insurance policyholders based on their characteristics. The results from the final model were ultimately translated into a measure of price sensitivity which can be implemented in both pricing models and CRM systems. We believe that price sensitivity analysis, if done correctly, is a natural step in improving the current pricing models in the insurance industry, and this project provides a foundation for further research in this area.
Detta kandidatexamensarbete inom matematisk statistik undersöker möjligheten att modellera förnyelsegraden för kommersiella skadeförsärkringskunder. Arbetet utfördes i samarbete med If Skadeförsäkring vid huvudkontoret i Stockholm, Sverige. Uppsatsen innehåller en introduktion till underliggande koncept inom försäkring och matematik samt en utförlig översikt över projektets analytiska process, följt av en diskussion och slutsatser. De huvudsakliga delarna av projektet var insamling och bearbetning av förklarande försäkringsdata samt utvecklandet och tolkningen av en logistisk regressionsmodell för förnyelsegrad. En första modell byggdes och moderna metoder inom matematik och statistik utfördes för att erhålla en slutgiltig regressionsmodell uppbyggd av 9  signifikanta kundkaraktäristika. Regressionsmodellen hade en förklaringsgrad av 61% vilket pekar på att det till en viss grad är möjligt att förklara förnyelsegraden hos försäkringskunder utifrån dessa karaktäristika. Resultaten från den slutgiltiga modellen översattes slutligen till ett priskänslighetsmått vilket möjliggjorde implementering i prissättningsmodeller samt CRM-system. Vi anser att priskänslighetsanalys, om korrekt genomfört, är ett naturligt steg i utvecklingen av dagens prissättningsmodeller inom försäkringsbranschen och detta projekt lägger en grund för fortsatta studier inom detta område.
APA, Harvard, Vancouver, ISO, and other styles
44

Moreau, Ludovic. "A Contribution in Stochastic Control Applied to Finance and Insurance." PhD thesis, Université Paris Dauphine - Paris IX, 2012. http://tel.archives-ouvertes.fr/tel-00737624.

Full text
Abstract:
The aim of this thesis is to contribute to the problem of pricing derivative products in incomplete markets. We first consider the stochastic target problems introduced by Soner and Touzi (2002) to treat the super-replication problem, recently extended to more general approaches by Bouchard, Elie and Touzi (2009). We generalize the work of Bouchard et al. to a more general framework in which the diffusions are subject to jumps. In this case we have to consider controls taking the form of unbounded functions, which has a non-trivial impact on the derivation of the corresponding PDEs. Our second contribution is to establish a version of stochastic targets that is robust to model uncertainty. In an abstract framework, we establish a weak version of the geometric dynamic programming principle of Soner and Touzi (2002), and we derive, in the case of controlled SDEs, the corresponding partial differential equation in the viscosity sense. We then consider an example of partial hedging under Knightian uncertainty. Finally, we focus on the problem of pricing hybrid derivatives (derivative products combining market finance and insurance). In particular, we seek to establish a sufficient condition under which a valuation rule (popular in the industry) that combines the actuarial mutualization approach with an arbitrage approach is valid.
APA, Harvard, Vancouver, ISO, and other styles
45

Evkaya, Ozan Omer. "Modelling Weather Index Based Drought Insurance For Provinces In The Central Anatolia Region." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614572/index.pdf.

Full text
Abstract:
Drought, an important consequence of climate change, is one of the most serious natural hazards globally. It is agreed all over the world that it has adverse impacts on agricultural production, which plays a major role in the economy of a country. Studies have shown that drought directly affects crop yields, and it seems that this negative impact will continue drastically in the near future. Moreover, much research has revealed that Turkey will be affected by climate change in many respects; in particular, agricultural production will encounter dry seasons following rapid changes in the amount of precipitation. Insurance is a well-established method used by people and organizations to share the risk of natural disasters. Furthermore, a new way of insuring against weather shocks is to design index-based insurance, which has gained special attention in many developing countries. In this study, our aim is to model a weather-index-based drought insurance product, under different models, to help smallholder farmers in the Central Anatolia Region. At first, time series techniques were applied to forecast wheat yield relying on past data. Then AMS (AgroMetShell) software outputs and NDVI (Normalized Difference Vegetation Index) values were used, and SPI values for distinct time steps were chosen, to develop a basic threshold-based drought insurance for each province. Linear regression equations were used to calculate the trigger points for the weather index; afterwards, based on these trigger levels, pure premium and indemnity calculations were made for each province separately. In addition, Panel Data Analysis was used to construct an alternative linear model for drought insurance. It can be helpful for understanding the direct and actual effects of the selected weather index measures on wheat yield and also for reducing the basis risk of the constructed contracts. A simple ratio was generated to compare the basis risk of the different index-based insurance contracts.
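As a hedged illustration of the threshold-based pricing step, the sketch below converts a short series of SPI values into payout fractions between an assumed trigger and exit level and averages them into a pure premium rate. The trigger, exit level and SPI values are invented for illustration and are not the thesis's estimates.

```python
# Hedged sketch: pure premium of a weather-index drought contract as the
# historical mean payout fraction between a trigger and an exit index level.
import numpy as np

spi = np.array([-1.8, -0.4, 0.6, -2.3, 0.1, -1.1, 1.4, -0.7, -2.9, 0.3])
trigger = -1.0      # index level below which payouts start (assumed)
exit_level = -3.0   # index level at which the full sum insured is paid (assumed)

# Indemnity as a fraction of the sum insured, linear between trigger and exit.
payout_frac = np.clip((trigger - spi) / (trigger - exit_level), 0.0, 1.0)

pure_premium_rate = payout_frac.mean()  # expected payout fraction per season
print("historical payout fractions:", np.round(payout_frac, 2))
print(f"pure premium rate: {pure_premium_rate:.3f} of the sum insured")
```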
APA, Harvard, Vancouver, ISO, and other styles
46

Nilsson, Mattias. "Tail Estimation for Large Insurance Claims, an Extreme Value Approach." Thesis, Linnaeus University, School of Computer Science, Physics and Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-7826.

Full text
Abstract:

In this thesis, extreme value theory is used to estimate the probability that large insurance claims exceed a certain threshold. The expected claim size, given that the claim has exceeded a certain limit, is also estimated. Two different models are used for this purpose. The first model is based on maximum domain of attraction conditions. A Pareto distribution is used in the other model. Different graphical tools are used to check the validity of both models. Länsförsäkring Kronoberg has provided us with insurance data to perform the study. The conclusions drawn are that both models seem to be valid and that the results from both models are essentially equal.


I detta arbete används extremvärdesteori för att uppskatta sannolikheten att stora försäkringsskador överträffar en viss nivå. Även den förväntade storleken på skadan, givet att skadan överstiger ett visst belopp, uppskattas. Två olika modeller används. Den första modellen bygger på antagandet att underliggande slumpvariabler tillhör maximat av en extremvärdesfördelning. I den andra modellen används en Pareto-fördelning. Olika grafiska verktyg används för att besluta om modellernas giltighet. För att kunna genomföra studien har Länsförsäkring Kronoberg ställt upp med försäkringsdata. Slutsatser som dras är att båda modellerna verkar vara giltiga och att resultaten är likvärdiga.
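A hedged sketch of the peaks-over-threshold version of this analysis is given below: a generalized Pareto distribution is fitted to claim exceedances and used for both an exceedance probability and an expected claim size given exceedance. The data are synthetic and the threshold is an assumption; nothing here uses the Länsförsäkring Kronoberg data.

```python
# Hedged sketch: peaks-over-threshold tail estimation with a generalized Pareto fit.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
claims = rng.pareto(2.5, size=5000) * 50_000           # synthetic claim sizes
threshold = 100_000

excesses = claims[claims > threshold] - threshold
shape, loc, scale = genpareto.fit(excesses, floc=0.0)  # fit GPD to the excesses

level = 500_000
p_exceed_threshold = (claims > threshold).mean()
p_exceed_level = p_exceed_threshold * genpareto.sf(level - threshold, shape,
                                                   loc=0.0, scale=scale)
# Mean excess over `level` exists only if the fitted shape parameter is below 1.
mean_excess = (scale + shape * (level - threshold)) / (1.0 - shape)

print(f"P(claim > {level:,}) ~ {p_exceed_level:.5f}")
print(f"E[claim - {level:,} | claim > {level:,}] ~ {mean_excess:,.0f}")
```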

APA, Harvard, Vancouver, ISO, and other styles
47

Guimarães, Sérgio Rangel. "Fundamentação técnica e atuarial dos seguros de vida : um estudo comparativo entre o seguro de vida individual e o seguro de vida em grupo no Brasil." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/3227.

Full text
Abstract:
A indústria de seguros é uma atividade econômica relativamente jovem, possuindo raízes na revolução industrial. O desenvolvimento dessa indústria ocorreu de forma bastante intensa durante o século passado, quando a atividade passou a ser inserida na área de gestão de riscos. As Companhias de Seguros que trabalham nesse ambiente de negócio fundamentam todo o processo de precificação dos seus produtos em rígidas bases técnicas e atuariais. O presente trabalho dedica-se ao estudo dessas questões, abordando especificamente os seguros de vida, com ênfase à cobertura de morte. A pesquisa tem por objetivo comparar duas modalidades distintas de seguros que são ofertadas ao mercado: o seguro de vida individual e o seguro de vida em grupo. Embora ofereçam aos consumidores coberturas bastante similares, ambas as modalidades devem obedecer a requisitos e princípios técnicos diferenciados por parte das instituições que fazem a sua gestão.
The insurance industry is a relatively young economic activity; its roots are found in the industrial revolution. The development of this industry was very intense during the last century, when the activity came to be placed in the area of risk management. The insurance companies that work in this business environment base the whole pricing process of their products on rigid technical and actuarial bases. The present work studies these questions, focusing on life insurance with emphasis on death coverage. The research explores and compares two distinct modalities of insurance that are offered to the market: individual life insurance and group life insurance. Even though they offer similar coverage to consumers, the two modalities must comply with different requirements and technical principles on the part of the institutions responsible for their management.
APA, Harvard, Vancouver, ISO, and other styles
48

Adams, Joseph Allen. "A Matched Payout Model for Investment, Consumption, and Insurance with a Risky Annuity Income." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7474.

Full text
Abstract:
We introduce a new insurance instrument allowing retirees to hedge against the risk of mortality and the risk of default. At retirement, the retiree is allowed to purchase an annuity that provides a defaultable income stream over his lifetime. The time of mortality and the time of default are both uncertain, but are accompanied by known hazard rates. The retiree makes consumption and investment choices throughout his lifetime, subject to certain restrictions: the retiree can never enter a bankruptcy state (negative total wealth), and the investment choices are made in a risk-free financial instrument (such as a treasury bill or bond) and a risky instrument (such as commodities or stock). The retiree also makes insurance premium payments which hedge against mortality and default risks simultaneously. This new form of insurance is one which can be implemented by financial institutions as a means for retirees to protect their illiquid assets. In doing so, we calculate the optimal annuity rate a retiree should purchase to maximize his utility of consumption and bequest. Throughout the paper, we develop stochastic control models for a retiree's optimal investment and consumption policies over an uncertain planning horizon in several models which may or may not allow for insurance purchases. We find exact solutions to several models, and apply dynamic programming and the logarithmic transformation to other models to find numerical solutions when constraints are needed. We also analyze the effects of loading on insurance, that is, how more expensive insurance affects the retiree's control policies and value functions. In particular, we consider the model in which the retiree can purchase life insurance and credit default insurance (in the form of a credit default swap, or CDS) separately to hedge against life events. CDSs do not exist for annuities, but we extend this model by incorporating life insurance and the CDS into a single entity, which can be a viable and realistic option to hedge against risk. This model is beneficial in providing a solution to the annuity problem by showing that minimal annuity purchase is optimal.
APA, Harvard, Vancouver, ISO, and other styles
49

Widing, Björn, and Jimmy Jansson. "Valuation Practices of IFRS 17." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-224211.

Full text
Abstract:
This research assesses the IFRS 17 Insurance Contracts standard from a mathematical and actuarial point of view. Specifically, a valuation model that complies with the standard is developed in order to investigate the implications of the standard for the financial statements of insurance companies. This includes a deep insight into the standard, the construction of a valuation model of a fictive traditional life insurance product, and an investigation of the outcomes of the model. The findings show, firstly, that an investment strategy favorable for valuing insurance contracts according to the standard may conflict with the Asset & Liability Management of the firm; secondly, that a low risk adjustment increases the contractual service margin (CSM) and hence the possibility of smoothing profits over time; and thirdly, that the policy for releasing the CSM should take both risk-neutral and real assumptions into account.
I denna rapport ansätts redovisningsstandarden IFRS 17 Insurance Contracts utifrån ett matematiskt och aktuariellt perspektiv. En värderingsmodell som överensstämmer med standarden konstrueras för att undersöka standardens implikationer på ett försäkringsbolags resultaträkning. Detta inkluderar en fördjupning i standarden, konstruktion och modellering av en fiktiv traditionell livförsäkringsprodukt samt undersökning av resultaten från modellen. Resultaten visar att det finns en möjlig konflikt mellan investeringsstrategier som är gynnsamma med avseende på värdering enligt standarden och ett försäkringsbolags tillgångs- och skuldförvaltning. Vidare leder en låg riskjustering till en högre avtalsmässig servicemarginal (CSM) vilket ökar möjligheten att utjämna vinster över tid. Slutligen bör policyn för hur CSM frisläpps beakta både risk-neutrala och verkliga antaganden.
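To make the CSM mechanics referred to above concrete, the sketch below computes a contractual service margin at initial recognition and releases it over the coverage period in proportion to coverage units. It is a deliberately simplified illustration with assumed figures (no discounting or interest accretion), not a full IFRS 17 implementation and not the thesis's model.

```python
# Simplified, hedged sketch of the CSM at initial recognition and its release.
pv_premiums = 1_000.0        # present value of future inflows (assumed)
pv_claims_and_costs = 870.0  # present value of future outflows (assumed)
risk_adjustment = 40.0       # compensation for non-financial risk (assumed)

fulfilment_cash_flows = pv_claims_and_costs + risk_adjustment - pv_premiums
csm = max(0.0, -fulfilment_cash_flows)  # a net inflow creates a CSM; a net
                                        # outflow would be an immediate loss
print(f"CSM at initial recognition: {csm:.0f}")

# Release of the CSM over a 3-year coverage period, weighted by coverage units.
coverage_units = [100.0, 80.0, 60.0]    # assumed pattern of service provided
remaining = csm
for year, units in enumerate(coverage_units, start=1):
    release = remaining * units / sum(coverage_units[year - 1:])
    remaining -= release
    print(f"year {year}: CSM release {release:.1f}, remaining CSM {remaining:.1f}")
```

A lower risk adjustment in this stylised calculation directly increases the initial CSM, which is consistent with the smoothing effect noted in the abstract.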
APA, Harvard, Vancouver, ISO, and other styles
50

Guterstam, Rasmus, and Vidar Trojenborg. "Exploring a personal property pricing method in insurance context using multiple regression analysis." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254300.

Full text
Abstract:
In general, insurance companies and especially their clients face long and complicated claims processes where payments rarely, and almost reluctantly, are made the same day. Part of this slow-moving procedure is the fact that in some cases the insurer has to value the personal property itself, which can be a tedious process. In conjunction with the insurance company Hedvig, this project addresses this issue by examining a pricing model for a specific type of personal property, smartphones, one of the most commonly occurring claim types in the insurance context. Using multiple linear regression with data provided by PriceRunner, 10 key characteristics out of 91 were found to have significant explanatory power in predicting the market price of a smartphone. The model successfully simulates this market price with an explained variance of 90%. Furthermore, this thesis illustrates an intuitive example of pricing models for other sorts of personal property, identifying data availability and product complexity as the key limiting components.
I dagsläget står försäkringsbolag och deras kunder allt för ofta inför långa och komplicerade försäkringsärenden, där utbetalningar i regel aldrig sker samma dag. En del i denna långsamma och utdragna utbetalningsprocess är det faktum att försäkringsbolaget på egen hand måste uppskatta egendomens värde, vilket kan vara en mycket komplicerad process. I samarbete med försäkringsbolaget Hedvig undersöker denna rapport en värderingsmodell för ett av de vanligaste försäkringsärendena gällande personlig egendom, nämligen smartphones. Genom att använda multipel linjär regression med data försedd av PriceRunner har 10 av 91 nyckelfaktorer identifierats ha signifikant förklaringsgrad vid modellering av marknadsvärdet av en smartphone. Den framtagna modellen simulerar framgångsrikt marknadsvärdet med en 90-procentig förklaringsgrad av variansen. Vidare illustrerar denna rapport intuitiva riktlinjer för värderingsmodellering till andra typer av personlig egendom, samtidigt som den identifierar begränsande nyckelaspekter som exempelvis tillgången på data och egendomens inneboende komplexitet.
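A hedged sketch of the regression step is shown below: an ordinary least squares model of smartphone price on a handful of product characteristics, whose adjusted R² plays the role of the explained variance reported above. The file name and the particular characteristics are assumptions for illustration.

```python
# Hedged sketch: OLS model of smartphone market price on product characteristics.
import pandas as pd
import statsmodels.formula.api as smf

phones = pd.read_csv("pricerunner_smartphones.csv")  # hypothetical extract

model = smf.ols(
    "price ~ storage_gb + ram_gb + screen_size + camera_mp"
    " + battery_mah + age_months + C(brand)",
    data=phones,
).fit()

print(model.rsquared_adj)  # explained variance, cf. the ~90% reported above
print(model.params.sort_values(ascending=False).head(10))
# Backward elimination on p-values (or an information criterion) could then be
# used to reduce the 91 candidate characteristics to a handful of significant ones.
```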
APA, Harvard, Vancouver, ISO, and other styles
