Theses on the topic « Calibration sets »
Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles
Consult the top 18 theses for your research on the topic « Calibration sets ».
Next to each source in the list of references there is an « Add to bibliography » button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.
Browse theses on a wide variety of disciplines and organize your bibliography correctly.
Prateepasen, Asa. « Tool wear monitoring in turning using fused data sets of calibrated acoustic emission and vibration ». Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5415.
Söhl, Jakob. « Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16732.
Central limit theorems and confidence sets are studied in two different but related nonparametric inverse problems, namely in the calibration of an exponential Lévy model and in the deconvolution model. In the first set-up, an asset is modeled by an exponential of a Lévy process, option prices are observed and the characteristic triplet of the Lévy process is estimated. We show that the estimators are almost surely well-defined. To this end, we prove an upper bound for hitting probabilities of Gaussian random fields and apply this to a Gaussian process related to the estimation method for Lévy models. We prove joint asymptotic normality for estimators of the volatility, the drift, the intensity and for pointwise estimators of the jump density. Based on these results, we construct confidence intervals and sets for the estimators. We show that the confidence intervals perform well in simulations and apply them to option data of the German DAX index. In the deconvolution model, we observe independent, identically distributed random variables with additive errors and we estimate linear functionals of the density of the random variables. We consider deconvolution models with ordinary smooth errors; the ill-posedness of the problem is then given by the polynomial decay rate with which the characteristic function of the errors decays. We prove a uniform central limit theorem for the estimators of translation classes of linear functionals, which includes the estimation of the distribution function as a special case. Our results hold in situations for which a square-root-n rate can be obtained, more precisely if the Sobolev smoothness of the functionals is larger than the ill-posedness of the problem.
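For orientation, the classical spectral estimator on which results of this kind are built can be sketched as follows. The notation (kernel K, bandwidth h, error characteristic function φ_ε) is generic textbook notation and is not taken from the thesis itself.

```latex
% Generic kernel deconvolution set-up (illustrative only, not the thesis's exact estimator):
% observations Y_j = X_j + eps_j; empirical characteristic function of the Y_j;
% Fourier-inversion estimator of the density f of the X_j.
\[
  \hat{\varphi}_n(u) = \frac{1}{n}\sum_{j=1}^{n} e^{iuY_j},
  \qquad
  \hat{f}_n(x) = \frac{1}{2\pi}\int e^{-iux}\,
      \mathcal{F}K(hu)\,\frac{\hat{\varphi}_n(u)}{\varphi_{\varepsilon}(u)}\,du .
\]
% A linear functional \(\vartheta(f) = \int \zeta(x)\,f(x)\,dx\) is estimated by plugging in
% \(\hat{f}_n\). With ordinary smooth errors, \(|\varphi_{\varepsilon}(u)| \sim |u|^{-\beta}\),
% and a \(\sqrt{n}\) rate is attainable when \(\zeta\) is smoother than the ill-posedness index \(\beta\).
```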
FURLANETTO, GIULIA. « Quantitative reconstructions of climatic series in mountain environment based on paleoecological and ecological data ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241319.
Montane vegetation is traditionally known to be particularly sensitive to climate changes. The strong elevational climatic gradient that characterises mountain areas results in a steep ecological gradient, with several ecotones occurring within a small area. Pollen sequences investigated in or shortly above the modern timberline ecotone are ideal archives for analysing the relationships between climate and ecosystems. Quantitative reconstruction of past climate conditions from fossil pollen records requires an understanding of modern pollen representation along climatic and ecological gradients. The aims of this PhD research are: the development of modern pollen-vegetation-climate elevational transects; model processing and validation for evaluating pollen-climate relationships along elevational gradients; the search for new high-resolution natural archives providing proxy data for reconstructing palaeoenvironmental and palaeoclimatic changes during the Holocene; the application of these models to pollen-stratigraphical data; and the comparison of the results with other proxy-based reconstructions. The importance of local elevational transects of modern pollen samples with site-specific temperature as a tool for palaeoclimate reconstructions in the Alps was demonstrated. Two elevational transects (La Thuile Valley and Upper Brembana Valley) were developed to derive consistent local pollen-climate correlations, to identify sensitive pollen taxa useful for palaeoclimate reconstructions, and to estimate the effects of local parameters (elevational lapse rate, climate, uphill pollen transport and human impact); they were also used as test sets to evaluate pollen-climate models based on calibration sets extracted from the European Modern Pollen Database. Modern pollen assemblage-vegetation-climate relationships along an elevational gradient in the Upper Brembana Valley were investigated: modern pollen assemblages (pollen traps and moss samples), vegetation, elevation and climate were recorded at 16 sampling sites placed along an elevational gradient stretching from 1240 m asl to 2390 m asl. The results of the CCA analysis showed good agreement with previous studies, which identified elevation as the main gradient in the variation of modern pollen and vegetation assemblages in mountain areas. The stratigraphic study of palaeoecological and sedimentary proxies in the Armentarga peat bog allowed the vegetation and climate history of the last 10 ka to be reconstructed in a high-elevation, oceanic district of the Italian Alps. Quantitative reconstructions of Tjuly (mean July temperature) and Pann (annual precipitation) were obtained and validated by applying numerical transfer functions built on an extensive pollen-climate calibration dataset. The palaeobotanical record of the Armentarga peat bog shows that this elevational vegetation arrangement was primarily driven by a Middle to Late Holocene precipitation increase, substantially independent of the millennial sequence of thermal anomalies already known from other high-elevation Alpine proxies (i.e. glaciers, timberline, chironomids, speleothems). Changes in annual precipitation occurred in three main steps during the Holocene, starting with a moderately humid early Holocene marked by the early occurrence of Alnus viridis dwarf forests, followed by a first step of precipitation increase starting at 6.2 ka cal BP. A prominent further step occurred at the Middle to Late Holocene transition, dated between 4.7 and 3.9 ka at the Armentarga site, which led to present-day values typical of oceanic mountain climates (Pann 1700-1850 mm), was probably accompanied by increased snowfall and runoff, and had a major impact on timberline depression and grassland expansion.
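To illustrate how a modern calibration set is turned into a pollen-climate transfer function, the sketch below implements plain weighted averaging (WA) with classical deshrinking. It is a minimal stand-in under stated assumptions, not the models actually fitted in the thesis (which draw on the European Modern Pollen Database); the function names and all data in the usage example are hypothetical and synthetic.

```python
import numpy as np

def wa_transfer_function(modern_pollen, modern_climate):
    """Fit a simple weighted-averaging (WA) transfer function.

    modern_pollen : (n_sites, n_taxa) pollen percentages (the calibration set)
    modern_climate: (n_sites,) climate variable observed at the sites (e.g. Tjuly)
    Returns taxon optima and classical-deshrinking coefficients (a, b).
    """
    y = np.asarray(modern_pollen, dtype=float)
    x = np.asarray(modern_climate, dtype=float)
    # Taxon optima: abundance-weighted means of the climate variable
    optima = (y * x[:, None]).sum(axis=0) / y.sum(axis=0)
    # Initial (shrunken) estimates for the calibration sites
    x0 = (y * optima).sum(axis=1) / y.sum(axis=1)
    # Classical deshrinking: regress the initial estimates on the observed values
    b, a = np.polyfit(x, x0, 1)
    return optima, a, b

def wa_reconstruct(fossil_pollen, optima, a, b):
    """Apply the fitted WA model to fossil pollen spectra."""
    y = np.asarray(fossil_pollen, dtype=float)
    x0 = (y * optima).sum(axis=1) / y.sum(axis=1)
    return (x0 - a) / b  # deshrunken reconstruction

# Toy usage: synthetic calibration set in which taxa respond to the gradient
rng = np.random.default_rng(0)
climate = np.linspace(5.0, 15.0, 40)                      # e.g. Tjuly along elevation
true_optima = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
resp = np.exp(-0.5 * ((climate[:, None] - true_optima) / 2.0) ** 2)
modern = 100.0 * (resp + 0.05 * rng.random(resp.shape))
modern /= modern.sum(axis=1, keepdims=True) / 100.0       # percentages per sample

optima, a, b = wa_transfer_function(modern, climate)
fossil = modern[[5, 20, 35]]                              # pretend these are fossil spectra
print(np.round(wa_reconstruct(fossil, optima, a, b), 2))  # ~ climate[5], climate[20], climate[35]
```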
Paudel, Danda Pani. « Local and global methods for registering 2D image sets and 3D point clouds ». Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS077/document.
In this thesis, we study the problem of registering 2D image sets and 3D point clouds under three different acquisition set-ups. The first set-up assumes that the image sets are captured using 2D cameras that are fully calibrated and coupled, or rigidly attached, to a 3D sensor. In this context, the point cloud from the 3D sensor is registered directly to the asynchronously acquired 2D images. In the second set-up, the 2D cameras are internally calibrated but uncoupled from the 3D sensor, allowing them to move independently with respect to each other. The registration for this set-up is performed using a Structure-from-Motion reconstruction emanating from the images and planar patches representing the point cloud. The proposed registration method is globally optimal and robust to outliers. It is based on the theory of Sum-of-Squares polynomials and a Branch-and-Bound algorithm. The third set-up consists of uncoupled and uncalibrated 2D cameras. The image sets from these cameras are registered to the point cloud in a globally optimal manner using a Branch-and-Prune algorithm. Our method is based on a Linear Matrix Inequality framework that establishes direct relationships between 2D image measurements and 3D scene voxels.
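The thesis develops globally optimal registration (Sum-of-Squares polynomials, Branch-and-Bound, LMIs). As a much simpler point of reference, the sketch below shows only the basic ingredient shared by 2D-3D registration with a calibrated camera: minimizing the reprojection error of 3D points over a six-parameter pose. It is a local least-squares refinement on synthetic data, assuming a generic pinhole model, and is not the method of the thesis.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pose, K, pts3d):
    """Project 3D points through a calibrated pinhole camera with the given pose."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    cam = (R @ pts3d.T).T + t            # scene points expressed in the camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]        # perspective division

def residuals(pose, K, pts3d, pts2d):
    """Stacked reprojection residuals used by the least-squares solver."""
    return (project(pose, K, pts3d) - pts2d).ravel()

# Synthetic correspondences: a known pose is recovered by local refinement
rng = np.random.default_rng(1)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = rng.uniform(-1.0, 1.0, size=(30, 3)) + np.array([0.0, 0.0, 5.0])
true_pose = np.array([0.10, -0.20, 0.05, 0.30, -0.10, 0.20])
pts2d = project(true_pose, K, pts3d) + rng.normal(scale=0.5, size=(30, 2))

fit = least_squares(residuals, x0=np.zeros(6), args=(K, pts3d, pts2d))
print("estimated pose:", np.round(fit.x, 3))
```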
Söhl, Jakob [Verfasser], Markus [Akademischer Betreuer] Reiß, Vladimir [Akademischer Betreuer] Spokoiny et Richard [Akademischer Betreuer] Nickl. « Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution / Jakob Söhl. Gutachter : Markus Reiß ; Vladimir Spokoiny ; Richard Nickl ». Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://d-nb.info/103457258X/34.
DEAK, SZABOLCS. « Essays on fiscal policy : calibration, estimation and policy analisys ». Doctoral thesis, Università Bocconi, 2011. https://hdl.handle.net/11565/4054119.
Fery, Natacha [Verfasser]. « Nearshore wind-wave modelling in semi-enclosed seas : cross calibration and application / Natacha Fery ». Kiel : Universitätsbibliothek Kiel, 2017. http://d-nb.info/1139253069/34.
Zhou, Alexandre. « Etude théorique et numérique de problèmes non linéaires au sens de McKean en finance ». Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1128/document.
This thesis is dedicated to the theoretical and numerical study of two problems which are nonlinear in the sense of McKean in finance. In the first part, we study the calibration of a local and stochastic volatility model taking into account the prices of European vanilla options observed in the market. This problem can be rewritten as a stochastic differential equation (SDE) nonlinear in the sense of McKean, due to the presence in the diffusion coefficient of a conditional expectation of the stochastic volatility factor computed w.r.t. the solution to the SDE. We obtain existence in the particular case where the stochastic volatility factor is a jump process with a finite number of states. Moreover, we obtain weak convergence at order 1 for the Euler scheme discretizing in time the SDE nonlinear in the sense of McKean for general stochastic volatility factors. In the industry, Guyon and Henry-Labordère proposed in [JGPHL] an efficient calibration procedure which consists in approximating the conditional expectation using a kernel estimator such as the Nadaraya-Watson one. We also introduce a numerical half-step scheme and study the associated particle system, which we compare with the algorithm presented in [JGPHL]. In the second part of the thesis, we tackle the pricing of derivatives with initial margin requirements, a recent problem that appeared along with new regulation since the 2008 financial crisis. This problem can be modelled by an anticipative backward stochastic differential equation (BSDE) with dependence on the law of the solution in the driver. We show that the equation is well posed and propose an approximation of its solution by standard linear BSDEs when the liquidation duration in case of default is small. Finally, we show that the computation of the solutions to the standard BSDEs can be improved thanks to the multilevel Monte Carlo technique introduced by Giles in [G].
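The conditional expectation appearing in the diffusion coefficient is exactly the quantity that the [JGPHL] particle method approximates with a kernel estimator. The sketch below shows only that Nadaraya-Watson step on a toy particle cloud with a two-state volatility factor; the bandwidth, kernel and dynamics are illustrative assumptions, not the schemes analysed in the thesis.

```python
import numpy as np

def nadaraya_watson(x_query, x_particles, values, bandwidth):
    """Kernel estimate of E[values | X = x_query] from a cloud of particles."""
    u = (x_query[:, None] - x_particles[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)                     # Gaussian kernel weights
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

# Toy particle system: spot X and a two-state stochastic variance factor V
rng = np.random.default_rng(42)
n = 20_000
V = rng.choice([0.04, 0.16], size=n)              # jump-type volatility factor
X = 100.0 * np.exp(np.sqrt(V) * np.sqrt(0.25) * rng.normal(size=n))

grid = np.linspace(80.0, 120.0, 9)
cond_var = nadaraya_watson(grid, X, V, bandwidth=2.0)
# A particle LSV calibration would use such estimates to set the leverage as
# sigma_Dupire(t, x) / sqrt(E[V | X_t = x]) at each time step.
print(np.round(cond_var, 4))
```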
GURNY, Martin. « Default probabilities in credit risk management : estimation, model calibration, and backtesting ». Doctoral thesis, Università degli studi di Bergamo, 2015. http://hdl.handle.net/10446/61848.
Fiorin, Lucio. « Essays on Quantization in Financial Mathematics ». Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427145.
This thesis studies the applications of quantization to Mathematical Finance, in particular to option pricing and to calibration on financial data. Quantization is a technique that originates in numerical probability and consists in approximating random variables and stochastic processes, continuous in the state space, with a discrete version, in order to simplify the quadrature algorithms used to compute expectations. The goal of this thesis is to show the great flexibility that quantization can offer within numerical probability and option pricing. The literature often provides ad hoc methods for each type of model and derivative, but there seems to be no single unified methodology: finite-difference methods suffer heavily from the curse of dimensionality, while Monte Carlo methods require a large computational effort and are not designed for calibration exercises. Quantization can sometimes solve specific problems of these technologies and provides an alternative methodology for a large class of models and derivatives. The purpose of the thesis is twofold. First, extending the quantization literature to a wide range of processes, namely local and stochastic volatility, affine, pure-jump and polynomial processes, is in itself an interesting theoretical exercise: for each type of process we must take its specific properties into account and adapt the quantization algorithm accordingly. Second, it is important to assess the computational performance of the new quantization schemes introduced, since it is essential to develop algorithms that are fast and numerically stable in order to overcome the problems reported in the literature for other approaches. The first line of research deals with a technique called Recursive Marginal Quantization. Introduced in Pagès and Sagna (2015), this methodology exploits the conditional distribution of the Euler scheme of a one-dimensional stochastic differential equation to build a step-by-step approximation of the process. In this thesis we generalize this technique to systems of stochastic differential equations, in particular to stochastic volatility models. The Recursive Marginal Quantization of multidimensional stochastic processes allows the pricing of European and path-dependent options, in particular American options, and calibration on financial data, thus providing an alternative to, and often outperforming, the usual Monte Carlo techniques. The second line of research approaches quantization from a different perspective. Instead of using discretization schemes to compute the distribution of a stochastic process, it exploits the properties of the characteristic function and of the moment generating function of a wide class of processes. We consider the price process at maturity as a random variable and focus on the quantization of that random variable, instead of considering the whole stochastic process. This approach leads to a faster and more accurate technology for option pricing, and allows the quantization of a wide set of models that could not be handled by Recursive Marginal Quantization.
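To make the idea of pricing by quantization concrete, here is a minimal Lloyd-style quantizer of a simulated terminal price, used to price a European call by a finite weighted sum. This generic sketch only stands in for, and is much cruder than, the Recursive Marginal Quantization and Fourier-based quantizers developed in the thesis; the parameters and grid size are illustrative.

```python
import numpy as np

def lloyd_quantizer(samples, n_points, n_iter=50):
    """Fit an n_points quantizer of a 1-D sample by Lloyd's fixed-point iteration."""
    grid = np.quantile(samples, np.linspace(0.05, 0.95, n_points))   # initial grid
    for _ in range(n_iter):
        cells = np.argmin(np.abs(samples[:, None] - grid[None, :]), axis=1)
        for k in range(n_points):
            members = samples[cells == k]
            if members.size:
                grid[k] = members.mean()          # centroid (conditional mean) update
    cells = np.argmin(np.abs(samples[:, None] - grid[None, :]), axis=1)
    weights = np.bincount(cells, minlength=n_points) / samples.size
    return grid, weights

# Toy example: quantize a lognormal terminal price and price a call by quadrature
rng = np.random.default_rng(7)
s0, r, sigma, T, strike = 100.0, 0.01, 0.2, 1.0, 105.0
z = rng.normal(size=100_000)
s_T = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)

grid, weights = lloyd_quantizer(s_T, n_points=50)
call = np.exp(-r * T) * np.sum(weights * np.maximum(grid - strike, 0.0))
print(round(call, 4))   # should be close to the Black-Scholes price (about 6.3) here
```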
Gnoatto, Alessandro. « Wishart processes : theory and applications in mathematical finance ». Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3422528.
The thesis studies the Wishart process from both a theoretical and an applied point of view. The first part is devoted to the presentation of a new formula for the computation of the Laplace transform associated with the process. Parts 2 and 3 introduce applications of the Wishart process to interest rates and to foreign exchange rates.
GARDINI, MATTEO. « Financial models in continuous time with self-decomposability : application to the pricing of energy derivatives ». Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1070581.
CALDANA, RUGGERO. « Spread and basket option pricing : an application to interconnected power markets ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/39422.
De, Marco Stefano. « On Probability Distributions of Diffusions and Financial Models with non-globally smooth coefficients ». Doctoral thesis, Scuola Normale Superiore, 2011. http://hdl.handle.net/11384/85676.
RUSSO, Vincenzo. « Pricing and managing life insurance risks ». Doctoral thesis, Università degli studi di Bergamo, 2012. http://hdl.handle.net/10446/26710.
Pai, Kai-chung, et 白凱中. « The calibration and sensitivity analysis of a storm surge model for the seas around Taiwan ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/9b9dpc.
National Sun Yat-sen University
Department of Marine Environment and Engineering
Academic year 97 (ROC calendar)
The topography of the seas around Taiwan varies greatly, which makes the tides complicated. Taiwan lies at the junction of the tropical and subtropical zones and, geographically, within the region of north-western Pacific typhoon paths. These seasonal and geographical conditions leave Taiwan frequently threatened by typhoons during summer and autumn. In addition to natural disasters, the coastal area has been over-developed during the last few decades, destroying the balance between nature and man; storms and floods constantly threaten the lowland areas along the coast. An accurate and efficient storm surge model can be used to predict tides and storm surges. The model can be calibrated and verified with field observations: data measured by instruments at the tidal stations record the daily tidal variations and the storm surge influence during typhoons. The model can then provide predictions to management institutions and to the general public as a pre-warning system, so that disaster-prevention measures can be taken. This study implements the numerical model developed by Yu (1993) and Yu et al. (1994) to calculate the hydrodynamics in the seas around Taiwan. The main purpose of this study is the calibration and sensitivity analysis of the model parameters. Tidal gauge data from coastal stations around Taiwan, collected from June to October 2005, are used for the analysis and for the comparison between the modelled data and the observations. Two steps have been taken for the model calibration and sensitivity analysis. The first step is to calibrate the model for accurate prediction of the astronomical tide, and then of the compound tide with meteorological influences. For the calibration of the astronomical tides, a sensitivity analysis was carried out by adjusting the horizontal diffusion coefficient and the bottom friction coefficients used in the model; the sensitivity to the time-step size and to model grids fitted to the coastline was also checked. Depth-dependent Chézy numbers are used in the model to describe bottom friction, and the model gives better results when the Chézy value varies between 65 and 85. Modifying the grids fitted to the coastline improved the model results significantly: by better representing the dynamics introduced by land features, the model calculation fits the real tidal phenomenon more closely. The analysis showed that the model is less sensitive to the horizontal diffusion coefficient. Data from 22 tidal stations around Taiwan were used for the comparisons; the maximum RMSE (root-mean-square error) is about 10 cm at Wai-Pu, whereas the minimum RMSE is about 1 cm for the stations along the eastern coast. The calibration of the compound tide is divided into three cases. The first case calibrates the forecast wind field by comparing the forecast wind field from the Central Weather Bureau with satellite data obtained from QuikSCAT Level 3; the satellite wind speed is used to adjust the forecast wind speed. The adjusted forecast wind field improved the model predictions at the tidal stations south of Taichung and slightly improved those on the eastern coast. The second case tunes the sea-surface drag coefficient used by the hydrodynamic model; several empirical formulas describing the sea-surface drag were tested, and the model results showed little influence from the various drag formulations. The third case singles out the influence of the meteorological inputs, i.e. the wind field and the atmospheric pressure; throughout the tests carried out during typhoon periods, the tidal level is more sensitive to the variation of the atmospheric pressure. The model simulation for 2006, using the best selected parameters, showed that the model has good stability and accuracy under both stormy and calm weather conditions.
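The outer loop of such a calibration (sweep a bottom-friction parameter and score each run against tide-gauge records by RMSE) can be sketched as below. This is a minimal sketch under stated assumptions: run_model and the synthetic gauge signal are placeholders for the Yu (1993, 1994) hydrodynamic model and the 2005 station data, which are not reproduced here.

```python
import numpy as np

def rmse(model, observed):
    """Root-mean-square error between modelled and observed water levels."""
    model, observed = np.asarray(model), np.asarray(observed)
    return float(np.sqrt(np.mean((model - observed) ** 2)))

def calibrate_chezy(run_model, observed_levels, chezy_values=range(55, 96, 5)):
    """Grid-search a depth-averaged Chezy coefficient against tide-gauge data.

    run_model(chezy) is a placeholder for a full hydrodynamic simulation and must
    return modelled water levels at the same times/stations as observed_levels.
    """
    scores = {c: rmse(run_model(c), observed_levels) for c in chezy_values}
    best = min(scores, key=scores.get)
    return best, scores

# Synthetic stand-in for the hydrodynamic model and the gauge record
t = np.arange(0, 30 * 24)                              # hourly values over one month
observed = 0.8 * np.sin(2 * np.pi * t / 12.42)         # dominant M2-like tidal signal
fake_model = lambda c: (0.8 - 0.002 * abs(c - 75)) * np.sin(2 * np.pi * t / 12.42)

best_c, all_scores = calibrate_chezy(fake_model, observed)
print(best_c, round(all_scores[best_c], 4))
```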
CHIANELLA, DIEGO. « Estimation of the variance for different estimators of the change over time for overlapping samples ». Doctoral thesis, 2019. http://hdl.handle.net/11573/1315826.
GIACOMELLI, JACOPO. « Claim probability in Credit and Suretyship insurance ». Doctoral thesis, 2022. http://hdl.handle.net/11573/1637888.