To see other types of publications on this topic, follow this link: Calibration sets.

Theses on the topic "Calibration sets"

Create a proper reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the 18 best theses for your research on the topic "Calibration sets".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Prateepasen, Asa. « Tool wear monitoring in turning using fused data sets of calibrated acoustic emission and vibration ». Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5415.

Full text
Abstract:
The main aim of this research is to develop an on-line tool wear condition monitoring intelligent system for single-point turning operations, in order to provide accurate and reliable information on the different states of tool wear. Calibrated acoustic emission and vibration techniques were implemented to monitor the progress of wear on carbide tool tips. Previous research has shown that acoustic emission (AE) is sensitive to tool wear; however, AE as a monitoring technique is still not widely adopted by industry, because it is as yet impossible to achieve repeatable measurements of AE. The variability is due to inconsistent coupling of the sensor with structures and to the fact that tool structures may differ in geometry and material properties. Calibration is therefore required so that the extent of the variability becomes quantifiable, and hence can be accounted for or removed altogether. Proper calibration needs a well-defined and repeatable AE source. In this research, various artificial sources were reviewed in order to assess their suitability as an AE calibration source for the single-point machining process. Two artificial sources were selected for detailed study: an air jet and a pulsed laser; the former produces continuous-type AE and the latter burst-type AE. Since the air-jet source has a power spectrum closely resembling the AE produced by single-point machining, is readily available in a machine shop, and is relatively safe compared with the laser, it is the more appealing choice. The calibration procedure involves setting up an air jet at a fixed stand-off distance from the top rake of the tool tip, applying a set of increasing pressures in sequence and measuring the corresponding AE. It was found that the root-mean-square value of the AE obtained is linearly proportional to the pressure applied. Thus, irrespective of the layout of the sensor and AE source in a tool structure, AE can be expressed in the common currency of 'pressure' using the calibration curve produced for that particular layout. Tool wear stages can then be defined in terms of the 'pressure' levels. In order to improve the robustness of the monitoring system, vibration information is used in addition to AE. In this case, the acceleration at the tool tip in the tangential and feed directions is measured, and the coherence function between these two signals is computed. The coherence is a function of the vibration frequency and has a value ranging from 0 to 1, corresponding to no correlation and full correlation respectively between the two acceleration signals. The coherence function method is an attempt to provide a solution that is relatively insensitive to the dynamics and the process variables other than tool wear. Three features were identified as sensitive to tool wear: AErms, and the coherence function of the acceleration at the natural frequency of the tool holder (2.5-5.5 kHz) and at the high-frequency end (18-25 kHz), respectively. A belief network, based on Bayes' rule, was created to fuse the AE and vibration data for tool wear classification. The conditional probabilities required for the belief network to operate were established from examples, presented to the network as a file of cases containing the three features mentioned earlier together with the cutting conditions and the tool wear states. Half of the data in this file was used for training, while the other half was used for testing the network. The network gave an overall classification error rate of 1.6 % with the WD acoustic emission sensor and 4.9 % with the R30 acoustic emission sensor.
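As a rough illustration of the two signal-processing steps described above (the linear AErms-vs-pressure calibration and the acceleration coherence feature), here is a minimal Python sketch on synthetic stand-in data; all signals, sampling rates and pressure values are hypothetical, not taken from the thesis.

```python
import numpy as np
from scipy.signal import coherence

# 1) AE calibration: the abstract reports AErms linear in air-jet pressure.
pressures = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # bar, hypothetical set points
ae_rms = np.array([0.11, 0.19, 0.31, 0.42, 0.50])    # made-up AErms readings
slope, intercept = np.polyfit(pressures, ae_rms, 1)  # calibration curve for this layout

def ae_to_pressure(ae_value):
    """Express a measured AErms in the 'common currency' of pressure."""
    return (ae_value - intercept) / slope

# 2) Coherence between tangential and feed accelerations at the tool tip.
rng = np.random.default_rng(0)
fs = 50_000.0                                   # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
resonance = np.sin(2 * np.pi * 4000 * t)        # shared tool-holder mode (toy signal)
acc_tangential = resonance + 0.5 * rng.standard_normal(t.size)
acc_feed = resonance + 0.5 * rng.standard_normal(t.size)

f, cxy = coherence(acc_tangential, acc_feed, fs=fs, nperseg=2048)
band = (f >= 2500) & (f <= 5500)                # natural-frequency band from the abstract
print(ae_to_pressure(0.35), cxy[band].mean())   # two of the wear-sensitive features
```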
APA, Harvard, Vancouver, ISO, and other styles
2

Söhl, Jakob. « Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16732.

Full text
Abstract:
Central limit theorems and confidence sets are studied in two different but related nonparametric inverse problems, namely the calibration of an exponential Lévy model and the deconvolution model. In the first set-up, an asset is modelled by the exponential of a Lévy process, option prices are observed and the characteristic triplet of the Lévy process is estimated. We show that the estimators are almost surely well defined. To this end, we prove an upper bound for hitting probabilities of Gaussian random fields and apply it to a Gaussian process related to the estimation method for Lévy models. We prove joint asymptotic normality for the estimators of the volatility, the drift and the intensity, and for the pointwise estimators of the jump density. Based on these results, we construct confidence intervals and confidence sets for the estimators. We show that the confidence intervals perform well in simulations and apply them to option data on the German DAX index. In the deconvolution model, we observe independent, identically distributed random variables with additive errors and estimate linear functionals of the density of the random variables. We consider deconvolution models with ordinary smooth errors, for which the ill-posedness of the problem is given by the polynomial rate at which the characteristic function of the errors decays. We prove a uniform central limit theorem for the estimators of translation classes of linear functionals, which includes the estimation of the distribution function as a special case. Our results hold in situations where a square-root-n rate can be obtained; more precisely, they hold when the Sobolev smoothness of the functionals is larger than the ill-posedness of the problem.
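The confidence-interval construction the abstract builds on follows the usual CLT recipe. A generic, hedged sketch is given below; the estimator and variance are illustrative placeholders, not the thesis' Lévy-triplet estimators.

```python
# An asymptotically normal estimator theta_hat with estimated asymptotic
# variance sigma2_hat yields theta_hat +/- z_{1-alpha/2} * sqrt(sigma2_hat / n).
import numpy as np
from scipy.stats import norm

def asymptotic_ci(theta_hat, sigma2_hat, n, alpha=0.05):
    """Two-sided (1 - alpha) confidence interval derived from a CLT."""
    z = norm.ppf(1 - alpha / 2)
    half_width = z * np.sqrt(sigma2_hat / n)
    return theta_hat - half_width, theta_hat + half_width

# Toy usage: the sample mean of simulated data as the estimator.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)
print(asymptotic_ci(x.mean(), x.var(ddof=1), x.size))
```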
APA, Harvard, Vancouver, ISO, and other styles
3

FURLANETTO, GIULIA. « Quantitative reconstructions of climatic series in mountain environment based on paleoecological and ecological data ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241319.

Full text
Abstract:
Montane vegetation is known to be particularly sensitive to climate change. The strong elevational climatic gradient that characterises mountain areas results in a steep ecological gradient, with several ecotones occurring within a small area. Pollen sequences investigated at or slightly above the modern timberline ecotone are ideal archives for analysing the relationships between climate and ecosystems. Quantitative reconstruction of past climate conditions from fossil pollen records requires an understanding of modern pollen representation along climatic and ecological gradients. The aims of this PhD research are: to develop modern pollen-vegetation-climate elevational transects; to build and validate models of pollen-climate relationships along elevational gradients; to find new high-resolution natural archives providing proxy data for reconstructing palaeoenvironmental and palaeoclimatic changes during the Holocene; to apply these models to pollen-stratigraphical data; and to compare the results with other proxy-based reconstructions. The importance of local elevational transects of modern pollen samples with site-specific temperatures as a tool for palaeoclimate reconstructions in the Alps was demonstrated. Two elevational transects (La Thuile Valley and Upper Brembana Valley) were developed to derive consistent local pollen-climate correlations, to identify sensitive pollen taxa useful for palaeoclimate reconstructions, and to estimate the effects of local parameters (elevational lapse rate, climate, uphill pollen transport and human impact); they were also used as test sets to evaluate pollen-climate models based on calibration sets extracted from the European Modern Pollen Database. Modern pollen-vegetation-climate relationships along an elevational gradient in the Upper Brembana Valley were investigated: modern pollen assemblages (pollen traps and moss samples), vegetation, elevation and climate were recorded at 16 sampling sites along an elevational gradient stretching from 1240 m asl to 2390 m asl. The results of the CCA are in good agreement with previous studies, which identified elevation as the main gradient in the variation of modern pollen and vegetation assemblages in mountain areas. The stratigraphic study of palaeoecological and sedimentary proxies in the Armentarga peat bog allowed the vegetation and climate history of the last 10 ka to be reconstructed in a high-elevation, oceanic district of the Italian Alps. Quantitative reconstructions of Tjuly and Pann were obtained and validated by applying numerical transfer functions built on an extensive pollen-climate calibration dataset. The palaeobotanical record of the Armentarga peat bog shows this elevational vegetation arrangement to be primarily driven by a Middle to Late Holocene precipitation increase, substantially independent of the millennial sequence of thermal anomalies already known from other high-elevation Alpine proxies (e.g. glaciers, timberline, chironomids, speleothems). Changes in annual precipitation occurred in three main steps during the Holocene, starting with a moderately humid early Holocene marked by the early occurrence of Alnus viridis dwarf forests and followed by a first step of precipitation increase starting at 6.2 ka cal BP. A prominent further step occurred at the Middle to Late Holocene transition, dated between 4.7 and 3.9 ka at the Armentarga site, which led to present values typical of oceanic mountain climates (Pann 1700-1850 mm), was probably accompanied by increased snowfall and runoff, and had a major impact on timberline depression and grassland expansion.
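Pollen-climate transfer functions come in several flavours; as a hedged illustration of the general idea (not of the specific functions used in the thesis), here is a toy weighted-averaging (WA) transfer function on made-up data. Real work would use a calibration set such as the European Modern Pollen Database.

```python
import numpy as np

def wa_fit(abundances, climate):
    """Taxon optima = abundance-weighted means of the climate variable.
    abundances: (n_sites, n_taxa) pollen percentages; climate: (n_sites,)."""
    weights = abundances.sum(axis=0)
    return abundances.T @ climate / weights

def wa_reconstruct(optima, fossil_abundances):
    """Reconstructed climate = abundance-weighted mean of the taxon optima."""
    return fossil_abundances @ optima / fossil_abundances.sum(axis=1)

# Toy calibration set: 5 modern sites, 3 pollen taxa, July temperature (deg C).
modern = np.array([[60, 30, 10], [50, 30, 20], [30, 40, 30],
                   [20, 40, 40], [10, 30, 60]], dtype=float)
t_july = np.array([14.0, 12.5, 11.0, 9.5, 8.0])
optima = wa_fit(modern, t_july)

fossil = np.array([[25, 35, 40], [55, 30, 15]], dtype=float)  # two fossil samples
print(wa_reconstruct(optima, fossil))  # reconstructed Tjuly for each sample
```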
APA, Harvard, Vancouver, ISO, and other styles
4

Paudel, Danda Pani. « Local and global methods for registering 2D image sets and 3D point clouds ». Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS077/document.

Full text
Abstract:
In this thesis, we study the problem of registering 2D image sets and 3D point clouds under three different acquisition set-ups. The first set-up assumes that the image sets are captured using 2D cameras that are fully calibrated and coupled, or rigidly attached, to a 3D sensor. In this context, the point cloud from the 3D sensor is registered directly to the asynchronously acquired 2D images. In the second set-up, the 2D cameras are internally calibrated but uncoupled from the 3D sensor, allowing them to move independently with respect to each other. The registration for this set-up is performed using a Structure-from-Motion reconstruction emanating from images and planar patches representing the point cloud. The proposed registration method is globally optimal and robust to outliers. It is based on the theory of Sum-of-Squares polynomials and a Branch-and-Bound algorithm. The third set-up consists of uncoupled and uncalibrated 2D cameras. The image sets from these cameras are registered to the point cloud in a globally optimal manner using a Branch-and-Prune algorithm. Our method is based on a Linear Matrix Inequality framework that establishes direct relationships between 2D image measurements and 3D scene voxels.
APA, Harvard, Vancouver, ISO, and other styles
5

Söhl, Jakob [Verfasser], Markus [Akademischer Betreuer] Reiß, Vladimir [Akademischer Betreuer] Spokoiny et Richard [Akademischer Betreuer] Nickl. « Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution / Jakob Söhl. Gutachter : Markus Reiß ; Vladimir Spokoiny ; Richard Nickl ». Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://d-nb.info/103457258X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

DEAK, SZABOLCS. « Essays on fiscal policy : calibration, estimation and policy analysis ». Doctoral thesis, Università Bocconi, 2011. https://hdl.handle.net/11565/4054119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fery, Natacha [Verfasser]. « Nearshore wind-wave modelling in semi-enclosed seas : cross calibration and application / Natacha Fery ». Kiel : Universitätsbibliothek Kiel, 2017. http://d-nb.info/1139253069/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhou, Alexandre. « Etude théorique et numérique de problèmes non linéaires au sens de McKean en finance ». Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1128/document.

Full text
Abstract:
This thesis is dedicated to the theoretical and numerical study of two problems that are nonlinear in the sense of McKean in finance. In the first part, we study the calibration of a local and stochastic volatility model to the prices of European vanilla options observed in the market. This problem can be rewritten as a stochastic differential equation (SDE) that is nonlinear in the sense of McKean, due to the presence in the diffusion coefficient of a conditional expectation of the stochastic volatility factor given the solution to the SDE. We obtain existence in the particular case where the stochastic volatility factor is a jump process with a finite number of states. Moreover, we obtain weak convergence at order 1 for the Euler scheme discretising the SDE in time, for general stochastic volatility factors. In the industry, Guyon and Henry-Labordère proposed in [JGPHL] an efficient calibration procedure which consists in approximating the conditional expectation with a kernel estimator such as the Nadaraya-Watson one. We also introduce a numerical half-step scheme and study the associated particle system, which we compare with the algorithm presented in [JGPHL]. In the second part of the thesis, we tackle the pricing of derivatives with initial margin requirements, a problem that appeared with the new regulations introduced since the 2008 financial crisis. This problem can be modelled by an anticipative backward stochastic differential equation (BSDE) whose driver depends on the law of the solution. We show that the equation is well posed and propose an approximation of its solution by standard linear BSDEs when the liquidation period in case of default is small. Finally, we show that the computation of the solutions to these standard BSDEs can be improved using the multilevel Monte Carlo technique introduced by Giles in [G].
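The kernel regularisation step mentioned above can be sketched compactly. The following is a hedged illustration of a Nadaraya-Watson estimate of the conditional expectation over simulated particles, on toy data; it is not the thesis' half-step scheme.

```python
# E[V | S = s] estimated from particle pairs (S_i, V_i) with a Gaussian kernel,
# the regularisation used in the particle method of Guyon and Henry-Labordere.
import numpy as np

def nadaraya_watson(s_grid, particles_s, particles_v, bandwidth):
    """Kernel-weighted average of V over particles, evaluated on s_grid."""
    u = (s_grid[:, None] - particles_s[None, :]) / bandwidth
    k = np.exp(-0.5 * u**2)                      # Gaussian kernel, unnormalised
    return (k * particles_v[None, :]).sum(axis=1) / k.sum(axis=1)

# Toy particles: spot S and a squared stochastic-volatility factor V.
rng = np.random.default_rng(1)
s = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)
v = 0.04 * (1.0 + 0.5 * np.log(s)) + 0.01 * rng.standard_normal(s.size)

grid = np.linspace(0.7, 1.4, 8)
print(nadaraya_watson(grid, s, v, bandwidth=0.05))
```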
APA, Harvard, Vancouver, ISO, and other styles
9

GURNY, Martin. « Default probabilities in credit risk management : estimation, model calibration, and backtesting ». Doctoral thesis, Università degli studi di Bergamo, 2015. http://hdl.handle.net/10446/61848.

Full text
Abstract:
This doctoral thesis is devoted to the estimation and examination of default probabilities (PDs) within credit risk management and comprises three studies. In the first study, we introduce a structural credit risk model based on stable non-Gaussian processes in order to overcome distributional drawbacks of the classical Merton model. Following Moody's KMV estimation methodology, we conduct an empirical comparison between the results obtained from the Merton model and the stable Paretian one. Our results suggest that PDs are generally underestimated by the Merton model and that the stable Lévy model is substantially more sensitive to periods of financial crisis. The second study examines the performance of static and multi-period credit-scoring models for determining the PDs of financial institutions. Using an extensive sample of U.S. commercial banks provided by the FFIEC, we focus on evaluating the performance of the considered scoring techniques. We find that our models provide high predictive accuracy in distinguishing between default and non-default financial institutions. Despite the difficulty of predicting defaults in the financial sector, the proposed models also perform very well in comparison to results on scoring techniques for the corporate sector. Finally, in the third study, we examine the relationship between distress risk and the returns of U.S. renewable energy companies. Using the Expected Default Frequency (EDF) measure obtained from Moody's KMV, we demonstrate a positive cross-sectional relationship between returns and distress risk, providing evidence for a distress risk premium in this sector. The positively priced distress premium is also confirmed by investigating returns corrected for the common Fama-French and Carhart risk factors. We further show that the raw and risk-adjusted returns of value-weighted portfolios that take a long position in the 20% most distressed stocks and a short position in the 20% safest stocks generally outperform the S&P 500 index throughout our sample period.
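For reference, the classical Merton model the abstract starts from rests on a simple distance-to-default computation. A minimal sketch with illustrative inputs follows; the stable-Paretian extension studied in the thesis replaces the Gaussian law used here.

```python
# Real-world default probability P(V_T < D) under geometric Brownian assets.
import numpy as np
from scipy.stats import norm

def merton_pd(asset_value, debt, mu, sigma, horizon=1.0):
    """Merton PD: distance to default mapped through the Gaussian cdf."""
    dd = (np.log(asset_value / debt) + (mu - 0.5 * sigma**2) * horizon) \
         / (sigma * np.sqrt(horizon))
    return norm.cdf(-dd)

# Illustrative firm: assets 120, debt barrier 100, 6% drift, 25% asset vol.
print(merton_pd(asset_value=120.0, debt=100.0, mu=0.06, sigma=0.25))
```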
APA, Harvard, Vancouver, ISO, and other styles
10

Fiorin, Lucio. « Essays on Quantization in Financial Mathematics ». Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427145.

Full text
Abstract:
This thesis is devoted to the study of some applications of quantization to financial mathematics, especially to option pricing and the calibration of financial data. Quantization is a technique that originates in numerical probability and consists in approximating random variables and stochastic processes taking infinitely many values with a discrete version of them, in order to simplify the quadrature algorithms for the computation of expected values. The purpose of this thesis is to show the great flexibility that quantization can have in the area of numerical probability and option pricing. The literature often offers ad hoc methods for a particular type of model or derivative, but no general framework seems to exist. Finite difference methods are heavily affected by the curse of dimensionality, while Monte Carlo methods need intense computational effort in order to reach good precision and are not designed for calibration purposes. Quantization can provide an alternative methodology for a broad class of models and derivatives. The aim of the thesis is twofold. First, the extension of the quantization literature to a broad class of models, namely local and stochastic volatility models and affine, pure-jump and polynomial processes, is an interesting theoretical exercise in itself: every time we deal with a different model we have to take the properties of the process into consideration, and the quantization algorithm must be adapted accordingly. Second, it is important to consider the computational results of the new types of quantization introduced. Indeed, the algorithms we have developed turn out to be fast and numerically stable; these aspects are very relevant, as we can overcome some of the issues present in the literature for other types of approach. The first line of research deals with a technique called Recursive Marginal Quantization. Introduced in Pagès and Sagna (2015), this methodology exploits the conditional distribution of the Euler scheme of a one-dimensional stochastic differential equation in order to construct a step-by-step approximation of the process. In this thesis we generalise this technique to systems of stochastic differential equations, in particular to stochastic volatility models. The Recursive Marginal Quantization of multidimensional stochastic processes allows us to price European and path-dependent options, in particular American options, and to perform calibration on financial data, providing an alternative to, and sometimes outperforming, the usual Monte Carlo techniques. The second line of research takes a different perspective on quantization. Instead of using discretisation schemes to compute the distribution of a stochastic process, we exploit the properties of the characteristic function and of the moment generating function for a broad class of processes. We consider the price process at maturity as a random variable and focus on the quantization of that random variable rather than of the whole stochastic process. This yields a faster and more precise technology for the pricing of options, and it allows the quantization of a large set of models for which Recursive Marginal Quantization cannot be applied or is not numerically competitive.
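The basic quantization idea described above can be shown in a few lines. Below is a hedged sketch: plain Lloyd's algorithm applied to the terminal price of a toy Black-Scholes model, not the thesis' Recursive Marginal Quantization, and with illustrative parameters throughout.

```python
# Optimal quantization replaces S_T by a discrete variable on a small grid,
# after which a European price is a plain finite sum over grid points.
import numpy as np

def lloyd_quantize(samples, n_points, n_iter=50):
    """1-D Lloyd's algorithm: returns grid points and probability weights."""
    grid = np.quantile(samples, np.linspace(0.02, 0.98, n_points))  # warm start
    for _ in range(n_iter):
        cells = np.abs(samples[:, None] - grid[None, :]).argmin(axis=1)
        for j in range(n_points):
            members = samples[cells == j]
            if members.size:
                grid[j] = members.mean()          # centroid condition
    cells = np.abs(samples[:, None] - grid[None, :]).argmin(axis=1)
    weights = np.bincount(cells, minlength=n_points) / samples.size
    return grid, weights

# Toy model: Black-Scholes terminal price, then a call priced as a weighted sum.
rng = np.random.default_rng(2)
s0, r, sigma, T, strike = 100.0, 0.02, 0.2, 1.0, 100.0
s_t = s0 * np.exp((r - 0.5 * sigma**2) * T
                  + sigma * np.sqrt(T) * rng.standard_normal(100_000))
grid, weights = lloyd_quantize(s_t, n_points=25)
price = np.exp(-r * T) * np.sum(weights * np.maximum(grid - strike, 0.0))
print(price)   # approximates the Black-Scholes value (about 8.9)
```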
APA, Harvard, Vancouver, ISO, and other styles
11

Gnoatto, Alessandro. « Wishart processes : theory and applications in mathematical finance ». Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3422528.

Full text
Abstract:
This thesis is devoted to the study of Wishart processes from a theoretical and a practical point of view. Part 1 presents a new result concerning the explicit Laplace transform of the Wishart process. Parts 2 and 3 feature applications of the Wishart process to mathematical finance, in particular to the evaluation of fixed-income and foreign-exchange derivatives.
APA, Harvard, Vancouver, ISO, and other styles
12

GARDINI, MATTEO. « Financial models in continuous time with self-decomposability : application to the pricing of energy derivatives ». Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1070581.

Full text
Abstract:
Based on the concept of self-decomposability, we extend some recent multidimensional Lévy models by using multivariate subordination. Our aim is to construct multi-asset market models in which the systemic risk, instead of affecting all markets at the same time, presents some stochastic delay. In particular, we derive new multidimensional versions of the well-known Variance Gamma and inverse Gaussian processes. To this end, we extend some known approaches while keeping their mathematical tractability, study the properties of the new processes, derive closed-form expressions for their characteristic functions and, finally, detail how new and efficient Monte Carlo schemes can be implemented. As a second contribution of the work, we construct a new Lévy process, termed the Variance Gamma++ process, to model the dynamics of assets in illiquid markets. This process has the mathematical tractability of the Variance Gamma process and is obtained by relying upon the self-decomposability of the gamma law. We give a full characterisation of the Variance Gamma++ process in terms of its characteristic triplet, characteristic function and transition probability density. These results are instrumental in applying Fourier-based option pricing and maximum likelihood techniques for parameter estimation. Furthermore, we provide efficient path simulation algorithms, both forward and backward in time. We also obtain an efficient, integral-free explicit pricing formula for European options. Finally, we illustrate the applicability of our models in the context of gas, power and emission markets, focusing on their calibration, on the pricing of spread options written on different underlying commodities and on the evaluation of exotic American derivatives, giving an economic interpretation to the obtained results.
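The forward simulation of the plain Variance Gamma process, the building block the abstract extends, is standard: a Brownian motion time-changed by a gamma subordinator. A minimal sketch with illustrative parameters follows; it is not the thesis' multivariate or VG++ scheme.

```python
# VG increments: dG ~ Gamma(dt/nu, nu) (unit-mean gamma clock with variance
# rate nu), then dX = theta * dG + sigma * sqrt(dG) * N(0, 1).
import numpy as np

def simulate_vg(theta, sigma, nu, T=1.0, n_steps=252, n_paths=5, seed=0):
    """Forward simulation of Variance Gamma paths by gamma subordination."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dg = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))
    dx = theta * dg + sigma * np.sqrt(dg) * rng.standard_normal((n_paths, n_steps))
    return np.cumsum(dx, axis=1)

paths = simulate_vg(theta=-0.1, sigma=0.2, nu=0.3)
print(paths[:, -1])    # terminal values X_T for each path
```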
APA, Harvard, Vancouver, ISO, and other styles
13

CALDANA, RUGGERO. « Spread and basket option pricing : an application to interconnected power markets ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/39422.

Full text
Abstract:
An interconnector is an asset that gives the owner the right, but not the obligation, to transmit electricity between two locations each hour of the day over a prefixed time period. The financial value of the interconnector is given by a series of options that are written on the price differential between two electricity markets, that is, a strip of European options on an hourly spread. Since the hourly forward price is not directly observable on the market, Chapter 1 proposes a practical procedure to build an hourly forward price curve, fitting both base load and peak load forward quotations. One needs a stochastic model, a valuation formula, and a calibration method to evaluate interconnection capacity contracts. To capture the main features of the electricity price series, we model the energy price log-returns for each hour with a non-Gaussian mean-reverting stochastic process. Unfortunately no explicit solution to the spread option valuation problem is available. Chapter 2 develops a method for pricing the generic spread option in the non-Gaussian framework by extending the Bjerksund and Stensland (2011) approximation to a Fourier transform framework. We also obtain an upper bound on the estimation error. The method is applicable to models in which the joint characteristic function of the underlying assets is known analytically. Since an option on the difference of two prices is a particular case of a basket option, Chapter 3 extends our results to basket option pricing, obtaining a lower and an upper bound on the estimated price. We propose a general lower approximation to the basket option price and provide an upper bound on the estimation error. The method is applicable to models in which the joint characteristic function of the underlying assets and the geometric average is known. We test the performance of these new pricing algorithms, considering different stochastic dynamic models. Finally, in Chapter 4, we use the proposed spread option pricing method to price interconnectors. We show how to set up a calibration procedure: A market-coherent calibration is obtained, reproducing the hourly forward price curve. Finally, we present several examples of interconnector capacity contract valuation between European countries.
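For intuition on the spread options discussed above: in the zero-strike special case the spread option reduces to an exchange option, for which Margrabe's classical formula is exact under bivariate lognormal dynamics. This is a simpler setting than the non-Gaussian mean-reverting model of the thesis and the Bjerksund-Stensland approximation it extends; the numbers below are illustrative.

```python
# Price of max(S1_T - S2_T, 0) for two correlated lognormal assets
# (no dividends): Margrabe's closed form.
import numpy as np
from scipy.stats import norm

def margrabe(s1, s2, sigma1, sigma2, rho, T):
    """Exchange-option price; sigma is the vol of the log-ratio S1/S2."""
    sigma = np.sqrt(sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)
    d1 = (np.log(s1 / s2) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return s1 * norm.cdf(d1) - s2 * norm.cdf(d2)

# Toy numbers for two hourly power forwards (EUR/MWh).
print(margrabe(s1=52.0, s2=50.0, sigma1=0.35, sigma2=0.30, rho=0.6, T=0.5))
```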
APA, Harvard, Vancouver, ISO, and other styles
14

De, Marco Stefano. « On Probability Distributions of Diffusions and Financial Models with non-globally smooth coefficients ». Doctoral thesis, Scuola Normale Superiore, 2011. http://hdl.handle.net/11384/85676.

Full text
Abstract:
In this Ph.D. dissertation we deal with the regularity and estimation of probability laws for diffusions with non-globally smooth coefficients, with particular focus on financial models. The analysis of probability laws for the solutions of Stochastic Differential Equations (SDEs) driven by Brownian motion is among the main applications of the Malliavin calculus on the Wiener space: typical issues involve the existence and smoothness of a density, and the study of the asymptotic behaviour of the distribution's tails. The classical results in this area are stated under global regularity conditions on the coefficients of the SDE: an assumption which fails to hold for several financial models, whose coefficients involve square-root or other non-Lipschitz-continuous functions. In the first part of this thesis (chapters 2, 3 and 4) we therefore study the existence, smoothness and space asymptotics of densities when only local conditions on the coefficients of the SDE are assumed. Our analysis is based on Malliavin calculus tools and on tube estimates for Itô processes, namely estimates on the probability that an Itô process remains around a deterministic curve up to a given time. We apply our results to general classes of option pricing models, including generalisations of CIR and CEV processes and Local Stochastic Volatility models. In the latter case, the estimates we derive on the law of the underlying price have an impact on moment explosion and, consequently, on the large-strike asymptotic behaviour of the implied volatility. Implied volatility modelling, in its turn, is the object of the second part of this thesis (chapters 5 and 6). We deal with some questions related to efficient and economical parametric modelling of the volatility surface. We focus on J. Gatheral's SVI model, first tackling the problem of its calibration to the market smile: we propose an effective quasi-explicit calibration procedure and demonstrate its performance on financial data. We then analyse the capability of SVI to generate efficient time-dependent approximations of symmetric smiles, providing the corresponding numerical applications in the framework of the Heston stochastic volatility model.
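Gatheral's raw SVI parameterisation, the object of the calibration discussed above, fits in one line. A minimal sketch follows (the parameter values are illustrative); the quasi-explicit procedure mentioned in the abstract reduces the five-parameter fit to an outer search over (m, sigma) with the remaining parameters solved by linear least squares.

```python
import numpy as np

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI: w(k) = a + b * (rho * (k - m) + sqrt((k - m)^2 + sigma^2)),
    where k is log-moneyness and w = implied_vol^2 * T is total variance."""
    return a + b * (rho * (k - m) + np.sqrt((k - m)**2 + sigma**2))

k = np.linspace(-0.4, 0.4, 9)                       # log-moneyness grid
w = svi_total_variance(k, a=0.02, b=0.4, rho=-0.3, m=0.0, sigma=0.2)
implied_vol = np.sqrt(w / 1.0)                      # maturity T = 1 year
print(implied_vol)                                  # a skewed smile
```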
APA, Harvard, Vancouver, ISO, and other styles
15

RUSSO, Vincenzo. « Pricing and managing life insurance risks ». Doctoral thesis, Università degli studi di Bergamo, 2012. http://hdl.handle.net/10446/26710.

Full text
Abstract:
The aim of this thesis is to investigate the quantitative models used for pricing and managing life insurance risks. This was done by analysing the existing literature on methods and models used in the insurance field, in order to develop (1) new stochastic models for longevity and mortality risks and (2) new pricing functions for life insurance policies and the options embedded in such contracts. The motivations for this research are essentially to be found in: (1) the new risk-based solvency framework for the insurance industry, the so-called Solvency II project, which becomes effective in 2013/2014; (2) the new IAS/IFRS fair-value-based accounting for insurance contracts, the so-called IFRS 4 (Phase 2) project (pending approval); (3) the more rigorous quantitative analysis required by the industry in the pricing and risk management of life insurance risks. The first part of the thesis (first and second chapters) reviews the quantitative models used for interest rate and longevity/mortality modelling. The second part (remaining chapters) describes new methods and quantitative models that we believe could be useful in the context of pricing and insurance risk management.
APA, Harvard, Vancouver, ISO, and other styles
16

Pai, Kai-chung (白凱中). « The calibration and sensitivity analysis of a storm surge model for the seas around Taiwan ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/9b9dpc.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Marine Environment and Engineering
Academic year 97 (ROC calendar, 2008)
The topographical variation of the seas around Taiwan is great, which makes the tides complicated. Taiwan is located at the junction of the tropical and subtropical zones and, geographically, lies within the region of northwestern Pacific typhoon paths. These seasonal and geographical conditions cause Taiwan to be frequently threatened by typhoons during summer and autumn. In addition to natural disasters, the coastal area has been overdeveloped over the last few decades, destroying the balance between nature and man; storms and floods constantly threaten the lowland areas along the coast. An accurate and efficient storm surge model can be used to predict tides and storm surges. Such a model can be calibrated and verified against field observations: data measured by instruments at the tidal stations capture both the daily tidal variations and the storm surge influence during typhoons. The model can provide predictions to management institutions and to the general public as a pre-warning system, so that disaster-prevention measures can be taken. This study implements the numerical model developed by Yu (1993) and Yu et al. (1994) to calculate the hydrodynamics of the seas around Taiwan. The main purpose of this study is to carry out a calibration and sensitivity analysis of the model parameters. Tidal gauge data from coastal stations around Taiwan, collected from June to October 2005, are used for the analysis and for the comparison between the modelled data and the observations. Two steps were taken for the model calibration and sensitivity analysis: first the model was calibrated for accurate prediction of the astronomical tide, and then for the compound tide with meteorological influences. For the calibration of the astronomical tides, a sensitivity analysis was carried out by adjusting the horizontal diffusion coefficient and the bottom friction coefficients used in the model. The sensitivity to the time-step size and to model grids fitted to the coastline was also checked. Depth-dependent Chézy numbers are used in the model to describe bottom friction; the model gives better results when the Chézy value varies within 65 to 85. Modifying the grids to fit the coastline improved the model results significantly: by improving the representation of the dynamics induced by land features, the model calculation fits the real tidal phenomenon better. The analysis showed that the model is less sensitive to the horizontal diffusion coefficient. Data from 22 tidal stations around Taiwan were used for the comparisons. The maximum RMSE (root-mean-square error) is about 10 cm at Wai-Pu, whereas the minimum RMSE is about 1 cm for the stations along the eastern coast. The calibration of the compound tide is divided into three cases. The first case is the calibration of the forecast wind field, done by comparing the forecast wind field from the Central Weather Bureau with satellite data obtained from QuikSCAT Level 3; the satellite wind speed was applied to adjust the forecast wind speed. The adjusted forecast wind field improved the model predictions at the tidal stations south of Taichung and slightly improved those on the eastern coast. The second case is the tuning of the drag coefficient on the sea surface used by the hydrodynamic model: several empirical formulas describing the sea-surface drag were tested, and the model results showed little influence from the various drag formulations. The third case singles out the influences of the meteorological inputs, i.e. the wind field and the atmospheric pressure; the tidal level proved more sensitive to the variation of the atmospheric pressure throughout the tests carried out during typhoon periods. The model simulation for 2006 using the best selected parameters showed that the model exhibits good stability and accuracy under both stormy and calm weather conditions.
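The calibration loop described above (sweep the Chézy bottom-friction coefficient, rerun the model, keep the value with the lowest RMSE against the gauge record) can be sketched as follows. `run_tide_model` and `toy_model` are hypothetical stand-ins for the actual hydrodynamic model, and the data are synthetic.

```python
import numpy as np

def rmse(modelled, observed):
    """Root-mean-square error between model output and gauge record."""
    return np.sqrt(np.mean((np.asarray(modelled) - np.asarray(observed)) ** 2))

def calibrate_chezy(run_tide_model, observed, candidates=range(65, 86, 5)):
    """Return the Chezy value minimising RMSE, plus all candidate scores."""
    scores = {c: rmse(run_tide_model(chezy=c), observed) for c in candidates}
    return min(scores, key=scores.get), scores

# Toy stand-in: a 'model' whose fit degrades away from Chezy = 75.
observed = np.sin(np.linspace(0, 12 * np.pi, 500))
def toy_model(chezy):
    return observed * (1.0 - 0.002 * abs(chezy - 75))

best, scores = calibrate_chezy(toy_model, observed)
print(best, {c: round(s, 4) for c, s in scores.items()})
```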
APA, Harvard, Vancouver, ISO, and other styles
17

CHIANELLA, DIEGO. « Estimation of the variance for different estimators of the change over time for overlapping samples ». Doctoral thesis, 2019. http://hdl.handle.net/11573/1315826.

Full text
Abstract:
This work was inspired by the growing need for a measure of the accuracy of the estimates produced within short-term statistics in Official Statistics. In particular, the aim of the work is to illustrate the methodology for computing the variance of the estimators currently used in the service turnover survey carried out by the Italian National Institute of Statistics for the estimation of the quarterly turnover growth rate. While the calculation of the variance of estimates produced for a given instant in time is now good practice (also through the development of software packages), the same does not hold for the change of a quantity over time. An estimator of variance must take into account both the estimator and the sampling design (Wolter, 1985). The greatest difficulty is that, for many surveys, the samples used to produce estimates at two different times are not independent of each other, due to the rotation of the sample. In particular, for business surveys, in order to take into account the births and deaths of units in the population and changes in stratification variables (such as size category and type of economic activity), the sample is updated and a part of the units is replaced with others. Moreover, many indicators are non-linear functions of linear estimators (e.g. a simple ratio, or a difference of ratios); therefore, a first-order Taylor approximation can be used to calculate their variance. Alternatively, balanced repeated replication (BRR) can be used. My methodological contribution is not only to suggest how to assess the variance of possible estimators of the turnover change over time, but also to compare such estimators with respect to their variance in order to identify the best one. The performance of these estimators is assessed by a simulation study, which also explores under which conditions it is better to use all the observations or only the overlapping observations. The change estimators and the corresponding variance estimators are defined at stratum and estimation-domain level, and they take into account the use of a stratified sampling design and the updating of the sample due to the replacement of some units and to a dynamic stratification of the population.
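The first-order Taylor approximation mentioned above, applied to a ratio of linear estimators, gives the familiar linearised variance. A minimal sketch with illustrative totals follows; the covariance term is exactly where the overlapping part of the two samples enters.

```python
# Var(X/Y) ~= (1/Y^2) * (Var(X) + R^2 * Var(Y) - 2 * R * Cov(X, Y)),
# with R = X/Y, from a first-order Taylor expansion of the ratio.
import numpy as np

def ratio_variance(x_hat, y_hat, var_x, var_y, cov_xy):
    """Linearised variance of the ratio estimator R = X/Y."""
    r = x_hat / y_hat
    return (var_x + r**2 * var_y - 2 * r * cov_xy) / y_hat**2

# Toy quarterly turnover totals (current vs previous quarter); a positive
# covariance from the overlapping sample reduces the variance of the change.
print(ratio_variance(x_hat=105.0, y_hat=100.0, var_x=9.0, var_y=8.0, cov_xy=6.0))
print(ratio_variance(x_hat=105.0, y_hat=100.0, var_x=9.0, var_y=8.0, cov_xy=0.0))
```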
APA, Harvard, Vancouver, ISO, and other styles
18

GIACOMELLI, JACOPO. « Claim probability in Credit and Suretyship insurance ». Doctoral thesis, 2022. http://hdl.handle.net/11573/1637888.

Full text
Abstract:
This work analyses the problems faced by a Credit and Suretyship (C&S) insurance company when inferring claim probabilities and proposes a set of dedicated tools to address these problems. Typically, only a small amount of information is available to calibrate C&S claim probabilities. C&S insurers observe a low claim frequency on average and, in credit insurance, are subject to a remarkable information asymmetry with respect to their insured sellers. Further, they cannot rely on the information facilities typically available to banks, which share information on their risk sources through centralised databases. These elements imply the need to perform a precise calibration of claim probabilities (both marginal and joint) based on data sets that are often scarcely populated. In the first part of the thesis, the main contracts offered by this insurance line of business are outlined, covering both credit insurance products and suretyship bonds. The contractual analysis aims to highlight to what extent the claims generated by the two sub-lines (credit insurance and suretyship) can legitimately be modelled within the same framework. After identifying the main features shared by all C&S contracts, a selection of classical credit risk models is considered to decide which best fits the modelling requirements for representing a C&S claim. The CreditRisk+ model turns out to cope with all the identified features of C&S claims; it is therefore chosen to describe the dependence structure among future claim events. Concerning the estimation of marginal claim probabilities, the analysis is based on the fact that all C&S claims are absorbing events and that, in credit insurance, the presence of censoring events must be taken into account. A selection of classical methods for estimating the real-world probability of absorbing events, with or without censoring, is compared against the specific features of the claims to be modelled. The classical estimator of the Bernoulli distribution's parameter proves adequate to quantify the marginal claim probability for a suretyship risk. On the other hand, even the Kaplan-Meier and Cutler-Ederer estimators, despite being designed to handle the effects of censoring events, are not adequate to model the credit insurance claim probability properly. Given the broad lack of results developed explicitly for C&S claim probabilities, many actuarial problems worth attention are yet to be investigated. In the second part of this work, we address three specific research problems encountered in the first part, without any claim to completeness. We propose a technique to calibrate the parameters of the CreditRisk+ dependence structure, with particular reference to cases where only a small data set is available, as is often the case in C&S insurance; the calibration is achieved by generalising the model to a multi-period framework. With specific regard to suretyship insurance, we investigate the phenomenon of bid bond claims generated by insurers themselves when they refuse to issue a performance bond to the winner of a public tender. This case turns out to originate mainly from a poor choice of the starting price in the tender process, and we propose a risk appetite framework specifically designed to prevent this type of claim. Finally, we propose a new parametric frequency estimator to address the problem of estimating claim probability in credit insurance, given the information asymmetry between the insurer and the insured and the related censoring events. The concluding remarks provide insight into some research lines that this work leaves open and that are worth considering.
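The Kaplan-Meier estimator discussed above (examined, and found insufficient on its own, for credit insurance) is easy to state. Below is a minimal product-limit sketch on toy claim data; times and censoring flags are made up for illustration.

```python
# Product-limit survival estimate; event = 1 means a claim occurred,
# event = 0 means the exposure was censored (e.g. the policy expired).
import numpy as np

def kaplan_meier(times, events):
    """Return (t, S(t)) at each distinct claim time."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    out, survival = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)
        claims = np.sum((times == t) & (events == 1))
        survival *= 1.0 - claims / at_risk
        out.append((t, survival))
    return out

toy_times = [3, 5, 5, 8, 12, 12, 14, 20]    # months of exposure
toy_events = [1, 1, 0, 1, 0, 1, 0, 0]       # 0 = censored
for t, s in kaplan_meier(toy_times, toy_events):
    print(f"t={t:>4.0f}  S(t)={s:.3f}  claim prob={1 - s:.3f}")
```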
APA, Harvard, Vancouver, ISO, and other styles
