Dissertations / Theses on the topic 'Stochastic block models'

Consult the top 50 dissertations / theses for your research on the topic 'Stochastic block models.'

1

Paltrinieri, Federico. "Modeling temporal networks with dynamic stochastic block models." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18805/.

Full text
Abstract:
Given the recent interest in dynamic temporal networks and the wide range of application fields, this thesis has two main purposes: first, to analyse some theoretical models of temporal networks, especially the dynamic stochastic block model, in order to describe the dynamics of real systems and make predictions. The second purpose of the thesis is to create two new theoretical models, based on the theory of autoregressive processes, from which to infer new parameters of temporal networks, such as the state evolution matrix and a better estimate of the noise variance of the temporal evolution process. Finally, all the models are tested on an interbank data set: they reveal the presence of an expected event that splits the temporal network into two distinct periods with different configurations and parameters.
APA, Harvard, Vancouver, ISO, and other styles
2

Corneli, Marco. "Dynamic stochastic block models, clustering and segmentation in dynamic graphs." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E012/document.

Full text
Abstract:
This thesis focuses on the statistical analysis of dynamic graphs, defined in either discrete or continuous time. We introduce a new extension of the stochastic block model (SBM) for dynamic graphs. The proposed approach, called dSBM, adopts non-homogeneous Poisson processes to model the interaction times between pairs of nodes in dynamic graphs, in either discrete or continuous time. The intensity functions of the processes depend only on the node clusters, in a block modelling perspective. Moreover, all the intensity functions share some regularity properties on hidden time intervals that need to be estimated. A recent estimation algorithm for SBM, based on the greedy maximization of an exact criterion (the exact ICL), is adopted for inference and model selection in dSBM. Moreover, an exact algorithm for change point detection in time series, the "pruned exact linear time" (PELT) method, is extended to deal with dynamic graph data modelled via dSBM. The approach we propose can be used for change point analysis in graph data. Finally, a further extension of dSBM is developed to analyse dynamic networks with textual edges (like social networks, for instance). In this context, the graph edges are associated with documents exchanged between the corresponding vertices. The textual content of the documents can provide additional information about the topological structure of the dynamic graph. The new model we propose is called the "dynamic stochastic topic block model" (dSTBM). Graphs are mathematical structures well suited to modelling interactions between objects or actors of interest. Several real networks, such as communication networks, financial transaction networks, mobile telephone networks and social networks (Facebook, LinkedIn, etc.), can be modelled via graphs. When observing a network, the time variable comes into play in two different ways: we can study the dates at which the interactions occur and/or the interaction time spans.
This thesis focuses only on the first time dimension, and each interaction is assumed to be instantaneous for simplicity. Hence, the network evolution is given by the interaction dates only. In this framework, graphs can be used in two different ways to model networks: discrete time […] continuous time […]. Both perspectives are adopted in this thesis, alternately. We consider new unsupervised methods to cluster the vertices of a graph into groups with homogeneous connection profiles. In this manuscript, the node groups are assumed to be time invariant to avoid possible identifiability issues. Moreover, the approaches we propose aim to detect structural changes in the way the node clusters interact with each other. The building block of this thesis is the stochastic block model (SBM), a probabilistic approach initially used in the social sciences. The standard SBM assumes that the nodes of a graph belong to hidden (disjoint) clusters and that the probability of observing an edge between two nodes depends only on their clusters. Since no further assumption is made on the connection probabilities, SBM is a very flexible model able to detect different network topologies (hubs, stars, communities, etc.).
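The generative mechanism of the standard SBM summarised above (hidden clusters, edge probabilities depending only on the cluster pair) can be sketched in a few lines of Python; the cluster proportions and connection matrix below are illustrative values, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(n, pi, P):
    """Sample an undirected SBM graph: node i gets a hidden cluster z[i]
    drawn from proportions pi, and an edge {i, j} appears independently
    with probability P[z[i], z[j]], depending only on the two clusters."""
    z = rng.choice(len(pi), size=n, p=pi)   # hidden cluster labels
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = rng.random() < P[z[i], z[j]]
    return z, A

# Two communities: dense within blocks, sparse between (illustrative values).
pi = [0.5, 0.5]
P = np.array([[0.8, 0.05],
              [0.05, 0.8]])
z, A = sample_sbm(60, pi, P)
```

Because no constraint is placed on `P`, the same mechanism generates communities, hubs or star-like topologies by changing the matrix entries.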
3

Vallès, Català Toni. "Network inference based on stochastic block models: model extensions, inference approaches and applications." Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/399539.

Full text
Abstract:
The study of real-world networks has pushed forward our understanding of complex systems in a wide range of fields such as molecular and cell biology, anatomy, neuroscience, ecology, economics and sociology. However, the available knowledge about most systems is still limited; hence the predictive power of network science should be enhanced to narrow the gap between knowledge and information. To address this we work with the family of Stochastic Block Models (SBMs), generative models that have recently gained much interest due to their adaptability to any kind of network structure. The goal of this thesis is to develop novel SBM-based inference approaches that improve our understanding of complex networks. First, we investigate to what extent sampling over models significantly improves predictive power compared with considering an optimal set of parameters alone. Once we know which model best describes a given network, we apply that method to a particular real-world network: a network based on the interactions/sutures between bones in newborn skulls. Notably, we discovered that sutures fused due to a pathological disease in human newborns were less likely, from a morphological point of view, than sutures fused under normal development. Recent research on multilayer networks has concluded that the behaviour of single-layered networks differs from that of multilayer ones; notwithstanding, real-world networks present themselves to us as single-layered networks. The last part of the thesis is devoted to designing a novel approach in which two separate SBMs simultaneously describe a given single-layered network. Importantly, we find that it predicts missing/spurious links better than the single-SBM approach.
4

Arastuie, Makan. "Generative Models of Link Formation and Community Detection in Continuous-Time Dynamic Networks." University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1596718772873086.

Full text
5

Junuthula, Ruthwik Reddy. "Modeling, Evaluation and Analysis of Dynamic Networks for Social Network Analysis." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544819215833249.

Full text
6

Gulikers, Lennart. "Sur deux problèmes d’apprentissage automatique : la détection de communautés et l’appariement adaptatif." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE062/document.

Full text
Abstract:
In this thesis, we study two problems of machine learning: (I) community detection and (II) adaptive matching. I) It is well known that many networks exhibit a community structure. Finding those communities helps us understand and exploit general networks. In this thesis we focus on community detection using so-called spectral methods, based on the eigenvectors of carefully chosen matrices. We analyse their performance on artificially generated benchmark graphs. Instead of the classical Stochastic Block Model (which does not allow for much degree heterogeneity), we consider a Degree-Corrected Stochastic Block Model (DC-SBM) with weighted vertices, which is able to generate a wide class of degree sequences. We consider this model in both a dense and a sparse regime. In the dense regime, we show that an algorithm based on a suitably normalized adjacency matrix correctly classifies all but a vanishing fraction of the nodes. In the sparse regime, we show that the availability of only a small amount of information entails the existence of an information-theoretic threshold below which no algorithm performs better than random guessing. On the positive side, we show that an algorithm based on the non-backtracking matrix works all the way down to the detectability threshold in the sparse regime, demonstrating the robustness of the algorithm. This follows from a precise characterization of the non-backtracking spectrum of sparse DC-SBMs. We further perform tests on well-known real networks. II) Online two-sided matching markets such as Q&A forums and online labour platforms critically rely on the ability to propose adequate matches based on imperfect knowledge of the two parties to be matched. We develop a model of a task / server matching system for (efficient) platform operation in the presence of such uncertainty. For this model, we give a necessary and sufficient condition for an incoming stream of tasks to be manageable by the system.
We further identify a so-called back-pressure policy under which the throughput that the system can handle is optimized. We show that this policy achieves strictly larger throughput than a natural greedy policy. Finally, we validate our model and confirm our theoretical findings with experiments based on user-contributed content from an online platform.
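The non-backtracking matrix behind the sparse-regime algorithm described above has a simple, standard construction, sketched here on a small graph (a generic illustration, not code from the thesis):

```python
import numpy as np

def non_backtracking_matrix(edges):
    """Build the non-backtracking matrix B of an undirected graph.
    Rows/columns are indexed by directed edges; B[(u,v),(v,w)] = 1 iff w != u,
    i.e. a walk may continue from u->v into v->w only if it does not
    immediately backtrack to u. Community structure down to the detectability
    threshold shows up in the leading eigenvalues of B."""
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: k for k, e in enumerate(directed)}
    m = len(directed)
    B = np.zeros((m, m), dtype=int)
    for (u, v) in directed:
        for (x, w) in directed:
            if x == v and w != u:
                B[index[(u, v)], index[(x, w)]] = 1
    return B, directed

# Triangle graph: every directed edge has exactly one non-backtracking successor.
B, directed = non_backtracking_matrix([(0, 1), (1, 2), (2, 0)])
```

Unlike the adjacency matrix, B stays informative in the sparse regime because high-degree vertices cannot dominate its spectrum through back-and-forth walks.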
7

Kasinos, Stavros. "Seismic response analysis of linear and nonlinear secondary structures." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33728.

Full text
Abstract:
Understanding the complex dynamics that underpin the response of structures during earthquakes is of paramount importance in ensuring community resilience. The operational continuity of structures is influenced by the performance of nonstructural components, also known as secondary structures. Inherent vulnerability characteristics, nonlinearities and uncertainties in their properties or in the excitation pose challenges that make determining their response far from straightforward. This dissertation is set in the context of mathematical modelling and response quantification of seismically driven secondary systems. The case of bilinear hysteretic, rigid-plastic and free-standing rocking oscillators is first considered, as a representative class of secondary systems of distinct behaviour excited at a single point in the primary structure. The equations governing their full dynamic interaction with linear primary oscillators are derived with the purpose of assessing the appropriateness of simplified analysis methods in which the secondary-primary feedback action is not accounted for. Analyses carried out in the presence of pulse-type excitation have shown that the cascade approximation can be considered satisfactory for bilinear systems provided the secondary-primary mass ratio is adequately low and the system does not approach resonance. For the case of sliding and rocking systems, much lighter secondary systems need to be considered if the cascade analysis is to be adopted, with the validity of the approximation dictated by the selection of the input parameters. On the premise that decoupling is permitted, new analytical solutions are derived for the pulse-driven nonlinear oscillators considered, conveniently expressing the seismic response as a function of the input parameters, and the relative effects are quantified.
An efficient numerical scheme for a general-type of excitation is also presented and is used in conjunction with an existing nonstationary stochastic far-field ground motion model to determine the seismic response spectra for the secondary oscillators at given site and earthquake characteristics. Prompted by the presence of uncertainty in the primary structure, and in line with the classical modal analysis, a novel approach for directly characterising uncertainty in the modal shapes, frequencies and damping ratios of the primary structure is proposed. A procedure is then presented for the identification of the model parameters and demonstrated with an application to linear steel frames with uncertain semi-rigid connections. It is shown that the proposed approach reduces the number of the uncertain input parameters and the size of the dynamic problem, and is thus particularly appealing for the stochastic assessment of existing structural systems, where partial modal information is available e.g. through operational modal analysis testing. Through a numerical example, the relative effect of stochasticity in a bi-directional seismic input is found to have a more prominent role on the nonlinear response of secondary oscillators when compared to the uncertainty in the primary structure. Further extending the analyses to the case of multi-attached linear secondary systems driven by deterministic seismic excitation, a convenient variant of the component-mode synthesis method is presented, whereby the primary-secondary dynamic interaction is accounted for through the modes of vibration of the two components. The problem of selecting the vibrational modes to be retained in analysis is then addressed for the case of secondary structures, which may possess numerous low frequency modes with negligible mass, and a modal correction method is adopted in view of the application for seismic analysis. 
The influence of various approaches to build the viscous damping matrix of the primary-secondary assembly is also investigated, and a novel technique based on modal damping superposition is proposed. Numerical applications are demonstrated through a piping secondary system multi-connected on a primary frame exhibiting various irregularities in plan and elevation, as well as a multi-connected flexible secondary system. Overall, this PhD thesis delivers new insights into the determination and understanding of the response of seismically driven secondary structures. The research is deemed to be of academic and professional engineering interest spanning several areas including seismic engineering, extreme events, structural health monitoring, risk mitigation and reliability analysis.
8

Ludkin, Matthew Robert. "The autoregressive stochastic block model with changes in structure." Thesis, Lancaster University, 2017. http://eprints.lancs.ac.uk/125642/.

Full text
Abstract:
Network science has been a growing subject for the last three decades, with statistical analysis of networks seeing an explosion since the advent of online social networks. An important model within network analysis is the stochastic block model, which aims to partition the set of nodes of a network into groups that behave in a similar way. This thesis proposes Bayesian inference methods for problems related to the stochastic block model for network data. The presented research is formed of three parts. Firstly, two Markov chain Monte Carlo samplers are proposed to sample from the posterior distribution of the number of blocks, block memberships and edge-state parameters in the stochastic block model. These allow for non-binary and non-conjugate edge models, something not considered in the literature. Secondly, a dynamic extension to the stochastic block model is presented which includes autoregressive terms. This novel approach to dynamic network models allows the present state of an edge to influence future states, and is therefore named the autoregressive stochastic block model. Furthermore, an algorithm to perform inference on changes in block membership is given. This problem has gained some attention in the literature, but not with autoregressive features in the edge-state distribution as presented in this thesis. Thirdly, an online procedure to detect changes in block membership in the autoregressive stochastic block model is presented. This allows networks to be monitored through time, drastically reducing the data storage requirements. On top of this, the network parameters can be estimated together with the block memberships. Finally, conclusions are drawn from the above contributions in the context of the network analysis literature, and future directions for research are identified.
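The autoregressive idea, that the current state of an edge influences its next state through block-dependent persistence probabilities, can be illustrated with a minimal simulation. The parameterisation below (separate "stay present" and "stay absent" probabilities per block pair) is hypothetical; the thesis's exact edge-state distribution may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def step_edges(A, z, stay_on, stay_off):
    """One step of a block-dependent Markov chain on binary edge states:
    stay_on[a, b] is the probability a present edge persists, and
    stay_off[a, b] the probability an absent edge stays absent, so the
    current state of each edge influences its next state."""
    n = len(z)
    B = A.copy()
    for i in range(n):
        for j in range(i + 1, n):
            a, b = z[i], z[j]
            u = rng.random()
            if A[i, j]:   # edge present: keep it with probability stay_on
                B[i, j] = B[j, i] = int(u < stay_on[a, b])
            else:         # edge absent: stay absent with probability stay_off
                B[i, j] = B[j, i] = int(u >= stay_off[a, b])
    return B

z = np.array([0] * 5 + [1] * 5)               # fixed block memberships
stay_on = np.array([[0.9, 0.5], [0.5, 0.9]])  # within-block edges persist longer
stay_off = np.array([[0.7, 0.95], [0.95, 0.7]])
A = (rng.random((10, 10)) < 0.3)
A = np.triu(A, 1)
A = (A | A.T).astype(int)
snapshots = [A]
for _ in range(20):                           # simulate a short dynamic network
    snapshots.append(step_edges(snapshots[-1], z, stay_on, stay_off))
```

Setting both persistence matrices to constants independent of the previous state recovers an i.i.d.-snapshot dynamic SBM, which is the case the autoregressive terms generalise.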
9

Šigut, Jiří. "Modely oceňování opcí se stochastickou volatilitou." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-150113.

Full text
Abstract:
This work describes stochastic volatility models and the application of such models to option pricing. Models for the underlying asset, and then pricing models for options with stochastic volatility, are derived. The Black-Scholes and Heston-Nandi models are compared in the empirical part of this work.
10

Bartoň, Ľuboš. "Oceňovanie opcií so stochastickou volatilitou." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-77823.

Full text
Abstract:
This diploma thesis deals with the problem of option pricing with stochastic volatility. First, the Black-Scholes model is derived and its biases are discussed. We briefly explain the concept of volatility. Further, we introduce three pricing models with stochastic volatility: the Hull-White model, the Heston model and the Stein-Stein model. Finally, these models are reviewed.
11

Tabouy, Timothée. "Impact de l’échantillonnage sur l’inférence de structures dans les réseaux : application aux réseaux d’échanges de graines et à l’écologie." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS289/document.

Full text
Abstract:
In this thesis we are interested in studying the stochastic block model (SBM) in the presence of missing data. We propose a classification of missing data into two categories, Missing At Random and Not Missing At Random, for latent variable models, following the framework described by D. Rubin. In addition, we describe several network sampling strategies and their distributions. Inference for SBMs with missing data is carried out through an adaptation of the EM algorithm: the EM with variational approximation. The identifiability of several of the SBM models with missing data is demonstrated, as well as the consistency and asymptotic normality of the maximum likelihood estimators and the variational approximation estimators in the case where each dyad (pair of nodes) is sampled independently and with equal probability. We also consider SBMs with covariates, their inference in the presence of missing data, and how to proceed when covariates are not available to conduct the inference. Finally, all our methods are implemented in an R package available on CRAN, accompanied by complete documentation on its use.
12

Saleh, Ali, and Ahmad Al-Kadri. "Option pricing under Black-Scholes model using stochastic Runge-Kutta method." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53783.

Full text
Abstract:
The purpose of this paper is to solve the European option pricing problem under the Black–Scholes model. Our approach is to use the so-called stochastic Runge–Kutta (SRK) numerical scheme to find the corresponding expectation of the functional of the stochastic differential equation under the Black–Scholes model. Several numerical solutions were computed to study how quickly the result converges to the theoretical value. Then, we study the order of convergence of the SRK method with the help of MATLAB.
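The structure of the approach, estimating the discounted expected payoff by simulating the Black–Scholes SDE and comparing against the closed form, can be sketched as follows; note this sketch uses the plain Euler–Maruyama scheme rather than the SRK scheme the thesis studies.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call (the theoretical value)."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_steps=100, n_paths=200_000, seed=0):
    """Monte Carlo estimate of the same price: simulate the risk-neutral SDE
    dS = r S dt + sigma S dW with Euler-Maruyama, then discount the mean payoff."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, sqrt(dt), n_paths)
        S += r * S * dt + sigma * S * dW
    payoff = np.maximum(S - K, 0.0)
    return exp(-r * T) * payoff.mean()

exact = bs_call(100, 100, 0.05, 0.2, 1.0)   # ≈ 10.45
approx = mc_call(100, 100, 0.05, 0.2, 1.0)
```

Higher-order schemes such as SRK reduce the discretisation bias per step, which is what the order-of-convergence study in the thesis measures.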
13

Todeschi, Tiziano. "Calibration of local-stochastic volatility models with neural networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23052/.

Full text
Abstract:
During the last twenty years several models have been proposed to improve the classic Black-Scholes framework for equity derivatives pricing. Recently a new model has been proposed: the Local-Stochastic Volatility (LSV) model. This model considers volatility as the product of a deterministic term and a stochastic term. So far, model choice has been driven not only by the capacity to capture empirically observed market features well, but also by the computational tractability of the calibration process. This is now undergoing a big change, since machine learning technologies offer new perspectives on model calibration. In this thesis we consider the calibration problem to be the search for a model which generates given market prices, where additionally technology from generative adversarial networks can be used. This means parametrizing the model pool in a way which is accessible to machine learning techniques and interpreting the inverse problem as the training task of a generative network, whose quality is assessed by an adversary. The calibration algorithm proposed for LSV models uses as generative models so-called neural stochastic differential equations (SDEs), which simply means parameterizing the drift and volatility of an Itô SDE by neural networks.
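A neural SDE in this sense is just an Itô SDE whose drift and volatility are small neural networks; the untrained forward simulation can be sketched in plain NumPy. This is illustrative only: the adversarial calibration is omitted, and the tiny architecture and feature scaling are assumptions, not the thesis's design.

```python
import numpy as np

rng = np.random.default_rng(3)

def tiny_mlp(params, x):
    """A one-hidden-layer network mapping (t, S) features to a scalar per path."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def init_params(n_in=2, n_hidden=8):
    """Random (untrained) weights; calibration would tune these to market prices."""
    return (rng.normal(0, 0.3, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.3, (n_hidden, 1)), np.zeros(1))

def simulate_neural_sde(params_mu, params_sigma, S0, T, n_steps, n_paths):
    """Euler simulation of dS = mu_theta(t,S) S dt + sigma_theta(t,S) S dW,
    with drift and volatility given by neural networks -- the 'neural SDE'
    generative model the abstract refers to (forward pass only)."""
    dt = T / n_steps
    S = np.full((n_paths, 1), float(S0))
    for k in range(n_steps):
        t = np.full((n_paths, 1), k * dt)
        x = np.hstack([t, S / S0])                 # simple feature scaling
        mu = tiny_mlp(params_mu, x)
        sig = np.abs(tiny_mlp(params_sigma, x))    # volatility must be nonnegative
        dW = rng.normal(0.0, np.sqrt(dt), (n_paths, 1))
        S = S + mu * S * dt + sig * S * dW
    return S.ravel()

ST = simulate_neural_sde(init_params(), init_params(), S0=100, T=1.0,
                         n_steps=50, n_paths=1000)
```

Training would wrap this forward pass in a loss comparing model-implied option prices to market quotes, with an adversary assessing the fit.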
14

Paulin, Carl, and Maja Lindström. "Option pricing models: A comparison between models with constant and stochastic volatilities as well as discontinuity jumps." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172226.

Full text
Abstract:
The purpose of this thesis is to compare option pricing models. We have investigated the constant volatility models Black-Scholes-Merton (BSM) and Merton's Jump Diffusion (MJD), as well as the stochastic volatility models Heston and Bates. The data used were option prices from Microsoft, Advanced Micro Devices Inc, Walt Disney Company, and the S&P 500 index. The data were then divided into training and testing sets, where the training data were used to calibrate the parameters of each model, and the testing data were used to test the model prices against prices observed on the market. Calibration of the parameters for each model was carried out using the nonlinear least-squares method. Using the calibrated parameters, the price was calculated with the method of Carr and Madan. Generally, it was found that the stochastic volatility models, Heston and Bates, replicated the market option prices better than both of the constant volatility models, MJD and BSM, for most data sets. The mean average relative percentage error for Heston and Bates was found to be 2.26% and 2.17%, respectively. MJD and BSM had a mean average relative percentage error of 6.90% and 5.45%, respectively. We therefore suggest that a stochastic volatility model is to be preferred over a constant volatility model for pricing options.
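The calibration step described above, fitting model parameters so that model prices match observed option prices in a least-squares sense, can be sketched for the simplest of the four models (BSM, where the only free parameter is the volatility). This is a toy illustration with synthetic "market" prices and a plain grid search standing in for the nonlinear least-squares routine; the strikes and parameter values are made up.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes-Merton European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def calibrate_sigma(market, S, T, r):
    """Least-squares fit of a single BSM volatility to a slice of strikes."""
    grid = [0.05 + 0.005 * i for i in range(200)]
    def sse(sig):
        return sum((bs_call(S, K, T, r, sig) - p) ** 2 for K, p in market)
    return min(grid, key=sse)

# synthetic 'market' quotes generated at sigma = 0.25
S, T, r = 100.0, 0.5, 0.01
market = [(K, bs_call(S, K, T, r, 0.25)) for K in (90.0, 100.0, 110.0)]
sigma_hat = calibrate_sigma(market, S, T, r)
```

For Heston or Bates the same objective would be minimized over several parameters at once, with a Fourier method such as Carr-Madan supplying the model price inside the loop.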
APA, Harvard, Vancouver, ISO, and other styles
15

Gokgoz, Ismail Hakki. "Stochastic Credit Default Swap Pricing." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614921/index.pdf.

Full text
Abstract:
Credit risk measurement and management has great importance in the credit market. Credit derivative products are the major hedging instruments in this market and credit default swap contracts (CDSs) are the most common type of these instruments. As observed in the credit crunch (credit crisis) that started in the United States and expanded all over the world, especially in the crisis of Iceland, CDS premiums (prices) are a better indicator of credit risk than credit ratings. Therefore, CDSs are important indicators of the credit risk of an obligor and these products should be well understood by market participants. In this thesis, two families of advanced credit risk models are studied first: the structural (firm value) models, the Merton model and the Black-Cox constant barrier model, and the intensity-based (reduced-form) models, the Jarrow-Turnbull and Cox models. For each credit risk model studied, survival probabilities are calculated. After explaining the basic structure of a single-name CDS contract, with the help of the general CDS pricing formula that results from equating the in and out cash flows of these contracts, CDS prices are obtained for each structural model (the Merton model and the Black-Cox constant barrier model) and for the general type of intensity-based models. Before the conclusion, default intensities are obtained from the distribution functions of default under the two basic structural models, Merton and Black-Cox constant barrier. Finally, we conclude our work with some inferences and proposals.
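For the structural models mentioned, the survival probability has a closed form. A minimal sketch of the Merton case (default can occur only at maturity, and the firm value follows a geometric Brownian motion; the numbers below are illustrative):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_survival_prob(V0, D, r, sigma, T):
    """P(V_T >= D): the firm survives if its value at T covers the debt D."""
    d2 = (math.log(V0 / D) + (r - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d2)

# illustrative firm: value 120, debt 100, 20% asset volatility, 1-year horizon
p_survive = merton_survival_prob(120.0, 100.0, 0.03, 0.2, 1.0)
```

In the Black-Cox extension the firm instead defaults the first time its value crosses a barrier, which replaces the normal CDF above with a first-passage-time distribution.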
APA, Harvard, Vancouver, ISO, and other styles
16

Manzini, Muzi Charles. "Stochastic Volatility Models for Contingent Claim Pricing and Hedging." Thesis, University of the Western Cape, 2008. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8197_1270517076.

Full text
Abstract:

The present mini-thesis seeks to explore and investigate the mathematical theory and concepts that underpin the valuation of derivative securities, particularly European plain vanilla options. The main argument that we emphasise is that novel models of option pricing, as is suggested by Hull and White (1987) [1] and others, must account for the discrepancy observed on the implied volatility “smile” curve. To achieve this we also propose that market volatility be modeled as random or stochastic, as opposed to certain standard option pricing models such as Black-Scholes, in which volatility is assumed to be constant.

APA, Harvard, Vancouver, ISO, and other styles
17

Rafiou, AS. "Foreign Exchange Option Valuation under Stochastic Volatility." University of the Western Cape, 2009. http://hdl.handle.net/11394/7777.

Full text
Abstract:
Magister Scientiae - MSc
The case of pricing options under constant volatility has been common practice for decades. Yet market data prove that volatility is a stochastic phenomenon; this is evident in longer duration instruments in which the volatility of the underlying asset is dynamic and unpredictable. The methods of valuing options under stochastic volatility that have been extensively published focus mainly on stock markets and on options written on a single reference asset. This work probes the effect of valuing a European call option written on a basket of currencies, under constant volatility and under stochastic volatility models. We apply a family of stochastic models to investigate the relative performance of option prices. For the valuation of the option under constant volatility, we derive a closed-form analytic solution which relaxes some of the assumptions in the Black-Scholes model. The problem of two-dimensional random diffusion of exchange rates and volatilities is treated with a present value scheme and with mean-reverting and non-mean-reverting stochastic volatility models. A multi-factor Gaussian distribution function is applied to lognormal asset dynamics sampled from a normal distribution, generated by the Box-Muller method and made interdependent by Cholesky factor matrix decomposition. Furthermore, a Monte Carlo simulation method is adopted to approximate a general form of the numerical solution. The historical data considered date from 31 December 1997 to 30 June 2008. The basket contains ZAR as the base currency; USD, GBP, EUR and JPY are the foreign currencies.
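The sampling step described above, standard normals from the Box-Muller method made interdependent through a Cholesky factor, can be sketched in a two-dimensional toy version (the correlation value in the test below is an arbitrary illustration):

```python
import math
import random

def box_muller(n, rng):
    """n i.i.d. standard normals via the Box-Muller transform."""
    out = []
    while len(out) < n:
        u1, u2 = rng.random(), rng.random()
        radius = math.sqrt(-2.0 * math.log(1.0 - u1))   # 1 - u1 avoids log(0)
        out.append(radius * math.cos(2.0 * math.pi * u2))
        out.append(radius * math.sin(2.0 * math.pi * u2))
    return out[:n]

def correlated_pair(rho, rng):
    """Two standard normals with correlation rho, via the Cholesky factor
    [[1, 0], [rho, sqrt(1 - rho^2)]] of the 2x2 correlation matrix."""
    z1, z2 = box_muller(2, rng)
    return z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2
```

For a basket of several currencies the same idea applies with the Cholesky factor of the full correlation matrix multiplying a vector of independent normals.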
APA, Harvard, Vancouver, ISO, and other styles
18

Rich, Don R. "Incorporating default risk into the Black-Scholes model using stochastic barrier option pricing theory." Diss., This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-06062008-171359/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Tran, Nguyen, and Anton Weigardh. "The SABR Model : Calibrated for Swaption's Volatility Smile." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-24627.

Full text
Abstract:
Problem: The standard Black-Scholes framework cannot incorporate the volatility smiles usually observed in the markets. Instead, one must consider alternative stochastic volatility models such as the SABR. Little research about the suitability of the SABR model for Swedish market (swaption) data has been found. Purpose: The purpose of this paper is to account for and to calibrate the SABR model for swaptions trading on the Swedish market. We intend to alter the calibration techniques and parameter values to examine which method is the most consistent with the market. Method: In MATLAB, we investigate the model using two different minimization techniques to estimate the model’s parameters. For both techniques, we also implement refinements of the original SABR model. Results and Conclusion: The quality of the fit relies heavily on the underlying data. For the data used, we find superior fit for many different swaption smiles. In addition, little discrepancy in the quality of the fit between methods employed is found. We conclude that estimating the α parameter from at-the-money volatility produces slightly smaller errors than using minimization techniques to estimate all parameters. Using refinement techniques marginally increase the quality of the fit.
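The smile being calibrated to is typically produced by the asymptotic formula of Hagan et al. for the SABR implied volatility. A sketch of that formula follows (the lognormal-volatility version, without the thesis's refinements; the forward, strike and parameter values in the example are illustrative):

```python
import math

def sabr_vol(F, K, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) lognormal implied-volatility approximation."""
    one_beta = 1.0 - beta
    if abs(F - K) < 1e-12:                       # at-the-money expansion
        term = 1.0 + ((one_beta**2 / 24.0) * alpha**2 / F**(2.0 * one_beta)
                      + rho * beta * nu * alpha / (4.0 * F**one_beta)
                      + (2.0 - 3.0 * rho**2) / 24.0 * nu**2) * T
        return alpha / F**one_beta * term
    log_fk = math.log(F / K)
    fk_pow = (F * K) ** (one_beta / 2.0)
    z = (nu / alpha) * fk_pow * log_fk
    x_z = math.log((math.sqrt(1.0 - 2.0 * rho * z + z * z) + z - rho) / (1.0 - rho))
    denom = fk_pow * (1.0 + one_beta**2 / 24.0 * log_fk**2
                      + one_beta**4 / 1920.0 * log_fk**4)
    term = 1.0 + ((one_beta**2 / 24.0) * alpha**2 / (F * K)**one_beta
                  + rho * beta * nu * alpha / (4.0 * fk_pow)
                  + (2.0 - 3.0 * rho**2) / 24.0 * nu**2) * T
    return alpha / denom * (z / x_z) * term
```

Estimating α from the at-the-money quote, as the authors do in one variant, amounts to inverting the ATM branch of this formula for α given the observed ATM volatility.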
APA, Harvard, Vancouver, ISO, and other styles
20

Saadat, Sajedeh, and Timo Kudljakov. "Deterministic Quadrature Formulae for the Black–Scholes Model." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54612.

Full text
Abstract:
There exist many numerical methods for the numerical solution of systems of stochastic differential equations. We choose the method of deterministic quadrature formulae proposed by Müller-Gronbach and Yaroslavtseva in 2016. The idea is to apply a simplified version of cubature on Wiener space. We explain the method and check how well it works in the simplest case, the classical Black-Scholes model.
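In the Black-Scholes case a deterministic quadrature rule can be checked directly against the closed-form price, since the discounted payoff is a one-dimensional Gaussian integral. The sketch below uses generic Gauss-Hermite nodes as a stand-in, not the specific Müller-Gronbach-Yaroslavtseva construction:

```python
import math

import numpy as np

def bs_call(S0, K, T, r, sigma):
    """Closed-form Black-Scholes call, used as the reference value."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def bs_call_quadrature(S0, K, T, r, sigma, n=64):
    """Deterministic quadrature: the call price as a Gauss-Hermite sum."""
    y, w = np.polynomial.hermite.hermgauss(n)    # nodes/weights for e^{-y^2}
    x = math.sqrt(2.0) * y                       # map nodes to N(0,1) draws
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * x)
    payoff = np.maximum(ST - K, 0.0)
    return math.exp(-r * T) * float(w @ payoff) / math.sqrt(math.pi)
```

Unlike Monte Carlo, the node positions and weights here are fixed in advance, which is the sense in which the quadrature is deterministic.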
APA, Harvard, Vancouver, ISO, and other styles
21

Janečka, Adam. "Stochastické rovnice a numerické řešení modelu oceňování opcí." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-195450.

Full text
Abstract:
In the present work, we study stochastic differential equations, their numerical solution, and the solution of option pricing models which follow from stochastic differential equations using the Itô calculus. We present several numerical methods for solving stochastic differential equations. These methods are then implemented in MATLAB and we investigate their properties, especially their convergence characteristics. Furthermore, we formulate two models for the pricing of European call options. We solve these models using a variant of the spectral collocation method, again in MATLAB.
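The simplest of the numerical methods discussed, the Euler-Maruyama scheme, can be sketched for geometric Brownian motion; its weak accuracy can be checked against the known mean E[S_T] = S_0 e^{rT}. (Plain Python here rather than the thesis's MATLAB; parameter values are illustrative.)

```python
import math
import random

def euler_gbm(s0, r, sigma, T, n_steps, rng):
    """One Euler-Maruyama path of dS = r S dt + sigma S dW; returns S_T."""
    dt = T / n_steps
    s = s0
    for _ in range(n_steps):
        s += r * s * dt + sigma * s * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return s

rng = random.Random(123)
terminal = [euler_gbm(100.0, 0.05, 0.2, 1.0, 50, rng) for _ in range(10000)]
mean_ST = sum(terminal) / len(terminal)
```

Schemes such as Milstein add a correction term involving the derivative of the diffusion coefficient, improving the strong (pathwise) convergence order.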
APA, Harvard, Vancouver, ISO, and other styles
22

Albertyn, Martin. "Generic simulation modelling of stochastic continuous systems." Thesis, Pretoria : [s.n.], 2004. http://upetd.up.ac.za/thesis/available/etd-05242005-112442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Bažant, Petr. "Ohodnocování finančních derivátů." Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-3927.

Full text
Abstract:
Financial derivatives constitute one of the most dynamic fields in mathematical finance. The main task is the valuation, or pricing, of these instruments. This thesis deals with standard models and their limits, explores advanced methods of continuous martingale measures and, on their basis, proposes numerical methods applicable to derivatives valuation. Some procedures leading to the elimination of certain simplifying assumptions are presented as well.
APA, Harvard, Vancouver, ISO, and other styles
24

Kluge, Tino. "Illustration of stochastic processes and the finite difference method in finance." Universitätsbibliothek Chemnitz, 2003. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200300079.

Full text
Abstract:
The presentation shows sample paths of stochastic processes in the form of animations. Those stochastic processes are usually used to model financial quantities like exchange rates, interest rates and stock prices. In the second part the solution of the Black-Scholes PDE using the finite difference method is illustrated.
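The second part's computation can be sketched with an explicit finite-difference scheme for the Black-Scholes PDE (a European call; the grid sizes below are chosen to satisfy the explicit scheme's stability restriction and are otherwise arbitrary):

```python
import math

def bs_call_fd(S0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2,
               s_max=300.0, ns=150, nt=4000):
    """Explicit finite differences for the Black-Scholes PDE, European call."""
    ds, dt = s_max / ns, T / nt
    S = [i * ds for i in range(ns + 1)]
    V = [max(s - K, 0.0) for s in S]                  # payoff at maturity
    for k in range(1, nt + 1):                        # march backwards in time
        tau = k * dt                                  # time left to maturity
        V_new = V[:]
        for i in range(1, ns):
            delta = (V[i + 1] - V[i - 1]) / (2.0 * ds)
            gamma = (V[i + 1] - 2.0 * V[i] + V[i - 1]) / ds**2
            V_new[i] = V[i] + dt * (0.5 * sigma**2 * S[i]**2 * gamma
                                    + r * S[i] * delta - r * V[i])
        V_new[0] = 0.0
        V_new[ns] = s_max - K * math.exp(-r * tau)    # deep in-the-money bound
        V = V_new
    return V[int(round(S0 / ds))]

price = bs_call_fd()
```

The explicit scheme is the simplest of the finite-difference family; implicit or Crank-Nicolson variants remove the restrictive stability condition on the time step.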
APA, Harvard, Vancouver, ISO, and other styles
25

Kheirollah, Amir. "Monte Carlo Simulation of Heston Model in MATLAB GUI." Thesis, Mälardalen University, Mälardalen University, Department of Mathematics and Physics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-4253.

Full text
Abstract:

In the Black-Scholes model, the volatility is considered deterministic, which causes some inefficiencies and trends in pricing options. It has been proposed by many authors that the volatility should be modelled by a stochastic process, and the Heston model is one solution to this problem. To simulate the Heston model we should be able to handle the correlation between the asset price and the stochastic volatility, and this paper considers a solution to this issue. A review of the Heston model is presented, and after the modelling some investigations are done on the applet. The application of this model to some types of options has also been programmed in a MATLAB Graphical User Interface (GUI).

APA, Harvard, Vancouver, ISO, and other styles
26

Raj, Mahendra. "Discrete stochastic arbitrage models for pricing options on short and long term yields: Tests of the Ho and Lee and the Black, Derman and Toy models." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/186107.

Full text
Abstract:
The Chicago Board of Options Exchange introduced the short-term and the long-term options on interest rates in June, 1989. This paper develops versions of the Ho and Lee and the Black, Derman and Toy models, two of the most popular arbitrage-based discrete stochastic term structure models, and conducts joint tests of the models and the option markets. Both parametric and non-parametric tests are employed. The data consist of the prices of these options and the term structure of interest rates from June, 1989 to June, 1992. The prices of the short and long-term interest options have been obtained from the Wall Street Journal for the period. The term structure of interest rates has been obtained as the prices of treasury bills and the seven, ten and thirty year treasury strips. It is found that the version of the Black, Derman and Toy model used in the study is superior to the Ho and Lee model. The Black, Derman and Toy model explains the option prices to some extent, as seen from the significant R-square values, but it does tend to underprice both the short and long-term options slightly. However, since the tests are joint tests of the model and the short and long-term interest rate option markets, it may be that though the model is pricing the options correctly, the markets are not efficient. This may be especially true in view of the fact that the trading in these options has been, at times, quite thin. The Ho and Lee model fails to significantly explain both the short and long-term option prices. The call prices are found to be reasonably close to the actual prices whereas the put prices are grossly overpriced. As indicated by the theoretical concepts, the unstable and sensitive nature of δ and the tendency for the interest rates at the extreme nodes to assume unrealistic values cause this problem.
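The discrete arbitrage-based models tested here are binomial lattices for the short rate. As a sketch, a recombining Ho-Lee-style tree pricing a zero-coupon bond by backward induction, with a constant drift parameter standing in for the drift calibrated to the initial term structure that the actual models require:

```python
import math

def ho_lee_zcb(r0, theta, sigma, T, n_steps):
    """Zero-coupon bond price by backward induction on a recombining binomial
    tree with short rate r(i, j) = r0 + theta*i*dt + sigma*sqrt(dt)*(2j - i).
    Note: rates on the tree can go negative, a known Ho-Lee feature."""
    dt = T / n_steps
    V = [1.0] * (n_steps + 1)   # the bond pays 1 at maturity
    for i in range(n_steps - 1, -1, -1):
        V = [math.exp(-(r0 + theta * i * dt
                        + sigma * math.sqrt(dt) * (2 * j - i)) * dt)
             * 0.5 * (V[j] + V[j + 1])
             for j in range(i + 1)]
    return V[0]
```

With zero volatility the tree collapses to deterministic discounting, and with positive volatility the price rises by Jensen's inequality; both properties make convenient checks. Options on rates are then valued on the same lattice by replacing the terminal payoff.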
APA, Harvard, Vancouver, ISO, and other styles
27

Zhao, Min. "Risk Measures Extracted from Option Market Data Using Massively Parallel Computing." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/373.

Full text
Abstract:
The famous Black-Scholes formula provided the first mathematically sound mechanism to price financial options. It is based on the assumption that daily random stock returns are identically normally distributed and hence stock prices follow a stochastic process with a constant volatility. Observed prices, at which options trade on the markets, don't fully support this hypothesis. Options corresponding to different strike prices trade as if they were driven by different volatilities. To capture this so-called volatility smile, we need a more sophisticated option-pricing model assuming that the volatility itself is a random process. The price we have to pay for this stochastic volatility model is that such models are computationally extremely intensive to simulate and hence difficult to fit to observed market prices. This difficulty has severely limited the use of stochastic volatility models in practice. In this project we propose to overcome the obstacle of computational complexity by executing the simulations in a massively parallel fashion on the graphics processing unit (GPU) of the computer, utilizing its hundreds of parallel processors. We succeed in generating the trillions of random numbers needed to fit a monthly options contract in 3 hours on a desktop computer with a Tesla GPU. This enables us to accurately price any derivative security based on the same underlying stock. In addition, our method also allows extracting quantitative measures of the riskiness of the underlying stock that are implied by the views of the forward-looking traders on the option markets.
APA, Harvard, Vancouver, ISO, and other styles
28

Alkadri, Mohamed Yaser. "Freeway Control Via Ramp Metering: Development of a Basic Building Block for an On-Ramp, Discrete, Stochastic, Mesoscopic, Simulation Model within a Contextual Systems Approach." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/1308.

Full text
Abstract:
One of the most effective measures of congestion control on freeways has been ramp metering, where vehicle entry to the freeway is regulated by traffic signals (meters). Meters are run with calibrated influx rates to prevent highway saturation. However, recent observations of some metering sites in San Diego, CA indicate that metering, during peak hour demand, is helping freeway flow while sometimes creating considerable traffic back-ups on local streets, transferring congestion problems from the freeway to intersections. Metering problems stem largely from the difficulty of designing an integrated, dynamic metering scheme that responds not only to changing freeway conditions but also to fluctuating demand throughout the ramp network; a scheme whose objective is to maintain adequate freeway throughput as well as minimize disproportionate ramp delays and queue overspills onto surface streets. Simulation modeling is a versatile, convenient, relatively inexpensive and safe systems analysis tool for evaluating alternative strategies to achieve the above objective. The objective of this research was to establish a basic building block for a discrete system simulation model, ONRAMP, based on a stochastic, mesoscopic, queueing approach. ONRAMP is for modeling entrance ramp geometry, vehicular generation, platooning and arrivals, queueing activities, meters and metering rates. The architecture of ONRAMP's molecular unit is designed in a fashion so that it can be, with some model calibration, duplicated for a number of ramps and, if necessary, integrated into some other larger freeway network models. The SLAM II simulation language is used for computer implementation. ONRAMP has been developed and partly validated using data from eight ramps at Interstate-B in San Diego. From a systems perspective, simulation will be short-sighted and problem analysis is incomplete unless the other non-technical metering problems are explored and considered. These problems include the impacts of signalizing entrance ramps on the vitality of adjacent intersections, land use and development, "fair" geographic distribution of meters and metering rates throughout the freeway corridor, public acceptance and enforcement, and the role and influence of organizations in charge of decision making in this regard. Therefore, an outline of a contextual systems approach for problem analysis is suggested. Benefits and problems of freeway control via ramp metering, both operational short-term and strategic long-term, are discussed in two dimensions: global (freeway) and local (intersection). The results of a pilot study which includes interviews with field experts and law enforcement officials and a small motorist survey are presented.
APA, Harvard, Vancouver, ISO, and other styles
29

Teixeira, Fernando Ormonde. "On the numerical methods for the Heston model." reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/19486.

Full text
Abstract:
In this thesis we revisit numerical methods for the simulation of the Heston model's European call. Specifically, we study the Euler and the Kahl-Jackel schemes and two versions of the exact algorithm. To perform this task, firstly we present a literature review which covers stochastic calculus, the Black-Scholes (BS) model and its limitations, the stochastic volatility methods and why they resolve the issues of the BS model, and the peculiarities of the numerical methods. We provide recommendations where we acknowledge that the reader might need more specifics and might need to dive deeper into a given topic. We introduce the aforementioned methods, providing all our implementations in the R language within a package.
APA, Harvard, Vancouver, ISO, and other styles
30

Zreik, Rawya. "Analyse statistique des réseaux et applications aux sciences humaines." Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01E061/document.

Full text
Abstract:
Over the last two decades, network structure analysis has experienced rapid growth with its construction and its intervention in many fields, such as: communication networks, financial transaction networks, gene regulatory networks, disease transmission networks, mobile telephone networks. Social networks are now commonly used to represent the interactions between groups of people; for instance, ourselves, our professional colleagues, our friends and family, are often part of online networks, such as Facebook, Twitter, email. In a network, many factors can exert influence or make analyses easier to understand. Among these, we find two important ones: the time factor, and the network context. The former involves the evolution of connections between nodes over time. The network context can then be characterized by different types of information such as text messages (email, tweets, Facebook posts, etc.) exchanged between nodes, categorical information on the nodes (age, gender, hobbies, status, etc.), interaction frequencies (e.g., number of emails sent or comments posted), and so on. Taking into consideration these factors can lead to the capture of increasingly complex and hidden information from the data. The aim of this thesis is to define new models for graphs which take into consideration the two factors mentioned above, in order to develop the analysis of network structure and allow extraction of the hidden information from the data. These models aim at clustering the vertices of a network depending on their connection profiles and network structures, which are either static or dynamically evolving. The starting point of this work is the stochastic block model, or SBM. This is a mixture model for graphs which was originally developed in social sciences. It assumes that the vertices of a network are spread over different classes, so that the probability of an edge between two vertices only depends on the classes they belong to.
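The generative assumption of the SBM described above, that each vertex gets a class and the probability of an edge depends only on the two classes, can be sketched as a sampler (the block sizes and connection matrix below are an illustrative, strongly assortative toy case):

```python
import random

def sample_sbm(sizes, P, seed=0):
    """Undirected SBM: sizes[b] vertices in block b, edge probability P[b][b']."""
    rng = random.Random(seed)
    labels = [b for b, n in enumerate(sizes) for _ in range(n)]
    n = len(labels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < P[labels[i]][labels[j]]:
                edges.add((i, j))
    return labels, edges

# two blocks of 3 vertices: dense within blocks, no edges across
labels, edges = sample_sbm([3, 3], [[1.0, 0.0], [0.0, 1.0]])
```

The dynamic extensions developed in the thesis let the class memberships, and hence the connection probabilities, evolve over time.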
APA, Harvard, Vancouver, ISO, and other styles
31

Nguyen, Cu Ngoc. "Stochastic differential equations with long-memory input." Thesis, Queensland University of Technology, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
32

Menes, Matheus Dorival Leonardo Bombonato. "Versão discreta do modelo de elasticidade constante da variância." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-16042013-151325/.

Full text
Abstract:
In this work we propose a market model using a discretization scheme of the random Brownian motion proposed by Leão & Ohashi (2010). With this model, for any given payoff function, we develop a hedging strategy and a methodology for option pricing.
APA, Harvard, Vancouver, ISO, and other styles
33

Canafoglia, Fabio. "An Introduction to Credit Risk and Asset Pricing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12321/.

Full text
Abstract:
In this thesis, the author tries to give the basics of risk management and asset pricing. Both are fundamental to understanding how financial models work; this topic is judged important in the perspective of subsequent studies in financial mathematics, since having the starting point clear makes things easier. From the title it is clear that modern and more complex models will only be touched upon. We decided to divide the dissertation into two parts because, in our opinion, this makes it evident that two different ways of approaching credit risk exist: on one side we try to quantify the risk deriving from giving credit; on the other we establish a strategy that allows us to invest money with the aim of paying the other part of the agreement. Everything becomes clearer chapter by chapter. Financial institutions like banks are exposed to both of these types of risk. Chapters 1 and 5 are the core of this thesis: they represent the zero point from which the modern models originated.
APA, Harvard, Vancouver, ISO, and other styles
34

Krebs, Daniel. "Pricing a basket option when volatility is capped using affine jump-diffusion models." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123395.

Full text
Abstract:
This thesis considers the price and characteristics of an exotic option called the Volatility-Cap-Target-Level (VCTL) option. The payoff function is a simple European option style but the underlying value is a dynamic portfolio which is comprised of two components: a risky asset and a non-risky asset. The non-risky asset is a bond and the risky asset can be a fund or an index related to any asset category such as equities, commodities, real estate, etc. The main purpose of using a dynamic portfolio is to keep the realized volatility of the portfolio under control and preferably below a certain maximum level, denoted as the Volatility-Cap-Target-Level (VCTL). This is attained by a variable allocation between the risky asset and the non-risky asset during the maturity of the VCTL-option. The allocation is reviewed and if necessary adjusted every 15th day. Adjustment depends entirely upon the realized historical volatility of the risky asset. Moreover, it is assumed that the risky asset is governed by a certain group of stochastic differential equations called affine jump-diffusion models. All models will be calibrated using out-of-the-money European call options based on the Deutsche-Aktien-Index (DAX). The numerical implementation of the portfolio diffusions and the use of Monte Carlo methods will result in different VCTL-option prices. Thus, to price a nonstandard product and to comply with good risk management, it is advocated that the financial institution use several research models such as the SVSJ and the Sepp model in addition to the Black-Scholes model. Keywords: Exotic option, basket option, risk management, greeks, affine jump-diffusions, the Black-Scholes model, the Heston model, the Bates model with lognormal jumps, the Bates model with log-asymmetric double exponential jumps, the Stochastic-Volatility-Simultaneous-Jumps (SVSJ) model, the Sepp model.
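The allocation rule described above can be sketched as follows, assuming (as a simplification of the thesis's mechanism) that at each review date the risky-asset weight is scaled down whenever the annualized realized volatility of the last 15 daily returns exceeds the cap:

```python
import math

def vctl_weight(daily_returns, vctl, window=15, trading_days=252):
    """Risky-asset weight for the next review period: invest fully when calm,
    scale exposure down in proportion to the excess realized volatility."""
    recent = daily_returns[-window:]
    mean = sum(recent) / len(recent)
    var = sum((x - mean) ** 2 for x in recent) / (len(recent) - 1)
    realized = math.sqrt(var * trading_days)   # annualized realized volatility
    if realized < 1e-12:
        return 1.0
    return min(1.0, vctl / realized)

w_calm = vctl_weight([0.001] * 15, vctl=0.10)          # quiet market
w_wild = vctl_weight([0.03, -0.03] * 8, vctl=0.10)     # turbulent market
```

The remainder of the portfolio, 1 minus this weight, is held in the bond, which is what keeps the portfolio's realized volatility near or below the cap.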
APA, Harvard, Vancouver, ISO, and other styles
35

Cerqueira, Andressa. "Statistical inference on random graphs and networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-04042018-094802/.

Full text
Abstract:
In this thesis we study two probabilistic models defined on graphs: the Stochastic Block model and the Exponential Random Graph. Therefore, this thesis is divided in two parts. In the first part, we introduce the Krichevsky-Trofimov estimator for the number of communities in the Stochastic Block Model and prove its eventual almost sure convergence to the underlying number of communities, without assuming a known upper bound on that quantity. In the second part of this thesis we address the perfect simulation problem for the Exponential random graph model. We propose an algorithm based on the Coupling From The Past algorithm using a Glauber dynamics. This algorithm is efficient in the case of monotone models. We prove that this is the case for a subset of the parametric space. We also propose an algorithm based on the Backward and Forward algorithm that can be applied for monotone and non monotone models. We prove the existence of an upper bound for the expected running time of both algorithms.
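The Glauber dynamics underlying the Coupling From The Past construction resamples one dyad at a time from its conditional distribution, given by the change in the sufficient statistics. A sketch for an edge-triangle ERGM (the model and parameter values are illustrative, and this is plain Gibbs sampling, not the perfect-simulation wrapper):

```python
import math
import random

def glauber_step(adj, theta_edge, theta_tri, rng):
    """Resample one dyad (i, j) from its conditional distribution."""
    n = len(adj)
    i, j = rng.sample(range(n), 2)
    # change in the triangle count if the edge (i, j) is present
    common = sum(1 for k in range(n)
                 if k != i and k != j and adj[i][k] and adj[j][k])
    logit = theta_edge + theta_tri * common
    p = 1.0 / (1.0 + math.exp(-logit))
    adj[i][j] = adj[j][i] = 1 if rng.random() < p else 0

def run_glauber(n=20, theta_edge=-1.0, theta_tri=0.0, n_updates=40000, seed=3):
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    for _ in range(n_updates):
        glauber_step(adj, theta_edge, theta_tri, rng)
    return adj

# with theta_tri = 0 the stationary law is Erdos-Renyi with p = sigmoid(-1)
adj = run_glauber()
density = sum(sum(row) for row in adj) / (20 * 19)
```

Perfect simulation runs coupled copies of this chain from extreme starting graphs backwards in time until they coalesce, which is where the monotonicity established in the thesis matters.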
APA, Harvard, Vancouver, ISO, and other styles
36

Rahouli, Sami El. "Modélisation financière avec des processus de Volterra et applications aux options, aux taux d'intérêt et aux risques de crédit." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0042/document.

Full text
Abstract:
This work investigates financial models for option pricing, interest rates and credit risk with stochastic processes that have memory and discontinuities. These models are formulated in terms of the fractional Brownian motion, the fractional or filtered Lévy process (also doubly stochastic) and their approximations by semimartingales. Their stochastic calculus is treated in the sense of Malliavin, and Itô formulas are derived. We characterize the risk-neutral probability measures in terms of these processes for option pricing models of Black-Scholes type with jumps. We also study models of interest rates, in particular the models of Vasicek, Cox-Ingersoll-Ross and Heath-Jarrow-Morton. Finally, we study credit risk models.
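The first building block mentioned in this abstract, fractional Brownian motion, can be simulated directly from its covariance function. A hedged sketch (the standard Cholesky construction, not the thesis's method; sample size and Hurst index are illustrative):

```python
import math, random

def fbm_cov(times, hurst):
    """Fractional Brownian motion covariance:
    C(s, t) = 0.5 * (s^2H + t^2H - |t - s|^2H)."""
    h2 = 2 * hurst
    return [[0.5 * (s ** h2 + t ** h2 - abs(t - s) ** h2) for t in times]
            for s in times]

def cholesky(a):
    """Plain Cholesky factorization of a positive-definite matrix."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def sample_fbm(n, hurst, rng):
    """Sample one fBm path on an equispaced grid of (0, 1]."""
    times = [(k + 1) / n for k in range(n)]
    l = cholesky(fbm_cov(times, hurst))
    z = [rng.gauss(0, 1) for _ in range(n)]
    return [sum(l[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

rng = random.Random(1)
path = sample_fbm(50, hurst=0.7, rng=rng)
```

For `hurst = 0.5` the covariance reduces to `min(s, t)`, i.e., ordinary Brownian motion, which is the "unrealistic assumption" such models generalize.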
37

Martineau, Killian. "Quelques aspects de cosmologie et de physique des trous noirs en gravitation quantique à boucles Detailed investigation of the duration of inflation in loop quantum cosmology for a Bianchi I universe with different inflaton potentials and initial conditions Some clarifications on the duration of inflation in loop quantum cosmology A first step towards the inflationary trans-Planckian problem treatment in loop quantum cosmology Scalar spectra of primordial perturbations in loop quantum cosmology Phenomenology of quantum reduced loop gravity in the isotropic cosmological sector Primordial Power Spectra from an Emergent Universe: Basic Results and Clarifications Fast radio bursts and the stochastic lifetime of black holes in quantum gravity Quantum fields in the background spacetime of a polymeric loop black hole Quasinormal modes of black holes in a toy-model for cumulative quantum gravity Seeing through the cosmological bounce: Footprints of the contracting phase and luminosity distance in bouncing models Dark matter as Planck relics without too exotic hypotheses A Status Report on the Phenomenology of Black Holes in Loop Quantum Gravity: Evaporation, Tunneling to White Holes, Dark Matter and Gravitational Waves." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAY044.

Full text
Abstract:
After decades of being confined to mathematical physics, quantum gravity now enters the field of experimental science. Following this trend, we consider throughout this thesis three application frameworks of Loop Quantum Gravity (LQG): the Universe as a system, black holes, and astroparticles. The last is only outlined, while the first two are presented in more detail. Since the cosmological sector is one of the most promising areas for testing and constraining quantum gravity theories, it was not long before different models attempting to apply the ideas of LQG to the primordial Universe were developed. The work we present deals with the phenomenology associated with these models, both in the homogeneous sector (where we focus particularly on the duration of the inflation phase) and in the inhomogeneous sector (where we study the fate of the primordial power spectra). These combined studies allow us to specify to what extent (loop) quantum gravity effects can be observed in the anisotropies of the cosmic microwave background. On the other hand, black holes, besides being among the strangest and most fascinating objects in the Universe, are also prominent probes for testing theories of gravitation. We develop the phenomenology associated with different treatments of black holes in the loop quantum gravity framework, which intervenes on multiple levels: from Hawking evaporation to gravitational waves, including dark matter. This is undoubtedly a rich and vast area. Finally, the existence of a minimal length scale, predicted by the majority of quantum gravity theories, suggests a generalization of the Heisenberg uncertainty principle. On the basis of this observation, we also present in this manuscript a methodology to derive a new dispersion relation for light from the most widely used generalized uncertainty principle.
38

Zhang, Jian. "Advance Surgery Scheduling with Consideration of Downstream Capacity Constraints and Multiple Sources of Uncertainty." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCA023.

Full text
Abstract:
This thesis deals with the advance scheduling of elective surgeries in an operating theatre that is composed of operating rooms and downstream recovery units. The arrivals of new patients in each week, the duration of each surgery, and the length-of-stay of each patient in the downstream recovery unit are subject to uncertainty. In each week, the surgery planner should determine the surgical blocks to open and assign some of the surgeries in the waiting list to the open surgical blocks. The objective is to minimize the patient-related costs incurred by performing and postponing surgeries as well as the hospital-related costs caused by the utilization of surgical resources. Considering that the pure mathematical programming models commonly used in the literature do not optimize the long-term performance of the surgery schedules, we propose a novel two-phase optimization model that combines a Markov decision process (MDP) and stochastic programming to overcome this drawback. The MDP model in the first phase determines the surgeries to be performed in each week and minimizes the expected total costs over an infinite horizon; the stochastic programming model in the second phase then optimizes the assignments of the selected surgeries to surgical blocks. In order to cope with the huge complexity of realistically sized problems, we develop a reinforcement-learning-based approximate dynamic programming algorithm and several column-generation-based heuristic algorithms as solution approaches. We conduct numerical experiments to evaluate the model and algorithms proposed in this thesis. The experimental results indicate that the proposed algorithms are considerably more efficient than the traditional ones, and that the resulting schedules of the two-phase optimization model significantly outperform those of a conventional stochastic programming model in terms of the patients' waiting times and the total costs in the long run.
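The first-phase MDP above is solved over an infinite horizon; the textbook tool for such problems is value iteration. A generic sketch on an invented two-state toy problem (a stand-in illustration, not the thesis's model or data; rewards are negative costs):

```python
def value_iteration(states, actions, transition, reward, gamma=0.95, tol=1e-9):
    """Standard value iteration for a finite discounted MDP.
    transition[s][a] is a list of (next_state, probability);
    reward[s][a] is the immediate reward of action a in state s."""
    v = {s: 0.0 for s in states}
    while True:
        v_new = {}
        for s in states:
            v_new[s] = max(
                reward[s][a] + gamma * sum(p * v[s2] for s2, p in transition[s][a])
                for a in actions
            )
        if max(abs(v_new[s] - v[s]) for s in states) < tol:
            return v_new
        v = v_new

# Toy waiting-list dynamics: "treat" is costly now but keeps the list
# short; "postpone" is cheap now but leads to a long (expensive) list.
states = ["short_list", "long_list"]
actions = ["treat", "postpone"]
transition = {
    "short_list": {"treat": [("short_list", 1.0)], "postpone": [("long_list", 1.0)]},
    "long_list":  {"treat": [("short_list", 1.0)], "postpone": [("long_list", 1.0)]},
}
reward = {
    "short_list": {"treat": -2.0, "postpone": -1.0},
    "long_list":  {"treat": -4.0, "postpone": -5.0},
}
v = value_iteration(states, actions, transition, reward)
```

Even in this two-state toy, the myopically cheaper action ("postpone") is suboptimal in the long run, which is exactly the drawback of short-horizon models that the two-phase approach targets.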
39

Loshchilov, Ilya. "Surrogate-Assisted Evolutionary Algorithms." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00823882.

Full text
Abstract:
Evolutionary Algorithms (EAs) have received much attention for their ability to solve complex optimization problems using variation operators adapted to specific problems. A search driven by a population of solutions offers good robustness to moderate noise and to multi-modality of the optimized function, in contrast to classical optimization methods such as quasi-Newton methods. The main limitation of EAs, the large number of objective-function evaluations, however penalizes their use for optimizing functions that are expensive to evaluate. This thesis focuses on one evolutionary algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), known as a powerful algorithm for black-box continuous optimization. We review the state of the art of algorithms derived from CMA-ES for solving single- and multi-objective optimization problems in the black-box scenario. A first contribution, aimed at optimizing expensive functions, concerns scalar approximation of the objective function. The learned meta-model respects the ordering of solutions (induced by their objective-function values) and is thus invariant under monotone transformations of the objective function. The resulting algorithm, saACM-ES, tightly couples the optimization performed by CMA-ES with the statistical learning of adaptive meta-models; in particular, the meta-models rely on the covariance matrix adapted by CMA-ES. saACM-ES thereby preserves the two key invariance properties of CMA-ES: invariance (i) under monotone transformations of the objective function and (ii) under orthogonal transformations of the search space. The approach is extended to multi-objective optimization by proposing two types of (scalar) meta-models. The first relies on characterizing the current Pareto front (using a mixed variant of One-Class Support Vector Machine (SVM) for dominated points and Regression SVM for non-dominated points). The second relies on learning the ordering (Pareto rank) of solutions. Both approaches are integrated into CMA-ES for multi-objective optimization (MO-CMA-ES), and we discuss some aspects of exploiting meta-models in the multi-objective setting. A second contribution concerns the design of new algorithms for single-objective, multi-objective and multi-modal optimization, developed to understand, explore and extend the frontiers of the field of evolutionary algorithms, and of CMA-ES in particular. Specifically, the coordinate-system adaptation proposed by CMA-ES is coupled with an adaptive coordinate-descent method. An adaptive restart strategy for CMA-ES is proposed for multi-modal optimization. Finally, selection strategies adapted to multi-objective optimization, remedying difficulties encountered by MO-CMA-ES, are proposed.
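The invariance of CMA-ES under monotone transformations of the objective, which the rank-based meta-models of saACM-ES are designed to preserve, comes from the fact that comparison-based algorithms use only the ranking of candidate solutions. A small self-contained check (assumed example, not code from the thesis):

```python
import math, random

def ranking(points, f):
    """Order of candidate solutions under objective f; comparison-based
    algorithms such as CMA-ES use only this ordering, never the values."""
    return sorted(range(len(points)), key=lambda i: f(points[i]))

def sphere(x):
    return sum(v * v for v in x)

def warped(x):
    # Strictly increasing transformation of the sphere function.
    return math.exp(3.0 * sphere(x)) - 7.0

rng = random.Random(2)
pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
```

Since `warped` is a strictly increasing function of `sphere`, both objectives induce the same ranking of the population, so a rank-preserving surrogate of either is equally usable.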
40

Santhosh, D. "Stochastic Simulation Of Daily Rainfall Data Using Matched Block Bootstrap." Thesis, 2008. http://hdl.handle.net/2005/681.

Full text
Abstract:
Characterizing the uncertainty in rainfall using stochastic models has been a challenging area of research in the field of operational hydrology for about half a century. Simulated sequences drawn from such models find use in a variety of hydrological applications. Traditionally, parametric models are used for simulating rainfall. But the parametric models are not parsimonious and have uncertainties associated with identification of model form, normalizing transformation, and parameter estimation. None of the models in vogue have gained universal acceptability among practising engineers. This may either be due to lack of confidence in the existing models, or the inability to adopt models proposed in literature because of their complexity, or both. In the present study, a new nonparametric Matched Block Bootstrap (MABB) model is proposed for stochastic simulation of rainfall at the daily time scale. It is based on conditional matching of blocks formed from the historical rainfall data using a set of predictors (conditioning variables) proposed for matching the blocks. The efficiency of the developed model is demonstrated through application to rainfall data from India, Australia, and the USA. The performance of MABB is compared with two non-parametric rainfall simulation models, k-NN and ROG-RAG, for a site in Melbourne, Australia. The results showed that the MABB model is a feasible alternative to the ROG-RAG and k-NN models for simulating daily rainfall sequences for hydrologic applications. Further, it is found that the MABB and ROG-RAG models outperform the k-NN model. The proposed MABB model preserved the summary statistics of rainfall and the fraction of wet days at daily, monthly, seasonal and annual scales. It could also provide reasonable performance in simulating spell statistics. MABB is parsimonious and requires less computational effort than the ROG-RAG model. It reproduces the probability density function (marginal distribution) fairly well due to its data-driven nature.
Results obtained for sites in India and U.S.A. show that the model is robust and promising.
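The conditional block-matching idea can be sketched in a few lines. This is a deliberately simplified stand-in (matching on the block's first value only, with invented parameters), not the MABB predictor set used in the thesis:

```python
import math, random

def matched_block_bootstrap(series, block_len, out_len, k, rng):
    """Toy conditional block bootstrap: each next block is drawn among
    the k historical blocks whose starting value best matches the
    current simulated value, then appended to the output."""
    blocks = [series[i:i + block_len]
              for i in range(len(series) - block_len + 1)]
    out = list(rng.choice(blocks))
    while len(out) < out_len:
        last = out[-1]
        candidates = sorted(blocks, key=lambda b: abs(b[0] - last))[:k]
        out.extend(rng.choice(candidates))
    return out[:out_len]

rng = random.Random(3)
# Synthetic nonnegative "rainfall-like" series for illustration only.
series = [10.0 * abs(math.sin(0.3 * t)) for t in range(200)]
sim = matched_block_bootstrap(series, block_len=7, out_len=100, k=5, rng=rng)
```

Because the simulated sequence is stitched together from historical blocks, every simulated value is an observed value, which is why such data-driven schemes reproduce the marginal distribution well.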
41

(10725294), Nithish Kumar Kumar. "Stochastic Block Model Dynamics." Thesis, 2021.

Find full text
Abstract:
The past few years have seen an increasing focus on fairness and the long-term impact of algorithmic decision making in the context of Machine learning, Artificial Intelligence and other disciplines. In this thesis, we model hiring processes in enterprises and organizations using dynamic mechanism design. Using a stochastic block model to simulate the workings of a hiring process, we study fairness and long-term evolution in the system.
We first present multiple results on a deterministic variant of our model including convergence and an accurate approximate solution describing the state of the deterministic variant after any time period has elapsed. Using the differential equation method, it can be shown that this deterministic variant is in turn an accurate approximation of the evolution of our stochastic block model with high probability.
Finally, we derive upper and lower bounds on the expected state at each time step, and further show that in the long-term limit these upper and lower bounds themselves converge to the state evolution of the deterministic system. These results characterize the long-term behavior of our model, allowing us to reason about how fairness in organizations could be achieved. We conclude that without sufficient, systematic incentives, under-represented groups will fade out of organizations over time.
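The thesis's deterministic variant is not reproduced here; as a purely hypothetical illustration of the kind of long-term conclusion stated above, consider a linear mean-field recursion for a group's share of an organization (all names and parameters below are invented for illustration):

```python
def group_share_evolution(x0, turnover, hire_share, steps):
    """Hypothetical mean-field recursion for the fraction x_t of a group:
    a fraction `turnover` of positions is refilled each step, and the
    group makes up `hire_share` of new hires.  The fixed point is
    hire_share, approached geometrically at rate (1 - turnover)."""
    xs = [x0]
    for _ in range(steps):
        xs.append((1 - turnover) * xs[-1] + turnover * hire_share)
    return xs

# A group starting at 40% representation, but only 10% of new hires.
xs = group_share_evolution(x0=0.4, turnover=0.1, hire_share=0.1, steps=300)
```

If hiring under-represents the group relative to its current share, the share decays monotonically toward the hiring share, mirroring the "wane without incentives" conclusion.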
42

Chang, Lun_tsung, and 張倫宗. "Application of The Black-Litterman model to the Stochastic Portfolio Theory." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/96063781231211330071.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Graduate Institute of Financial Management
Academic year 95 (ROC calendar, 2006-07)
The goal of this paper is to construct an optimal portfolio as a weighted combination of the stochastic portfolio and the investor's views, using the Black-Litterman model. The construction starts with neutral portfolio weights that are consistent with the equilibrium return of the stochastic portfolio. For each asset on which the investor has no view, these weights are handed to the optimizer unchanged. For assets on which the investor has views, modified expected returns are calculated as a combination of the stochastic portfolio weights and the investor's views. The distinctive feature of the model is that it uses the Black-Litterman framework to combine the investor's subjective views on expected asset returns with the stochastic portfolio's equilibrium vector of expected returns, forming a new mixed estimate of expected returns. It differs from the Markowitz model only with respect to the expected returns. The portfolio has a simple, intuitive property: it provides the flexibility to incorporate the equilibrium return of the stochastic portfolio, turning the quantitative model into an active portfolio-selection model. We test the model on the Taiwan stock market, with weight-adjustment frequencies of one week, two weeks, and one month, over the empirical period 1997 to 2006. Except for TSEC with monthly weight adjustment, the performance of the stochastic portfolio is better than that of the capitalization-weighted market portfolio. Returns are highest at the two-week adjustment frequency, and returns on the OTC market exceed those on TSEC. Furthermore, when views are formed according to a momentum strategy, the empirical results show that the reverse (contrarian) strategy performs better on the TSEC market.
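The Black-Litterman mixing step described above has a closed form. A minimal two-asset sketch with one relative view (illustrative numbers, not the thesis's calibration):

```python
def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def black_litterman_2assets(pi, sigma, tau, p, q, omega):
    """Posterior expected returns for two assets under one view p.mu = q
    with view variance omega, via the standard Black-Litterman mixing:
    mu = inv(inv(tau*S) + p p'/omega) (inv(tau*S) pi + p q/omega)."""
    a = inv2([[tau * sigma[i][j] for j in range(2)] for i in range(2)])
    m = [[a[i][j] + p[i] * p[j] / omega for j in range(2)] for i in range(2)]
    rhs = [matvec(a, pi)[i] + p[i] * q / omega for i in range(2)]
    return matvec(inv2(m), rhs)

# Illustrative equilibrium returns, covariance, and a view that
# asset 0 outperforms asset 1 by 4%.
pi = [0.05, 0.03]
sigma = [[0.04, 0.01], [0.01, 0.02]]
mu = black_litterman_2assets(pi, sigma, tau=0.05, p=[1.0, -1.0], q=0.04, omega=0.01)
```

The posterior view value lies between the equilibrium-implied value and the stated view, and a very uncertain view (large omega) leaves the equilibrium returns essentially unchanged.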
43

Majmin, Lisa. "Local and Stochastic Volatility Models: An Investigation into the Pricing of Exotic Equity Options." Thesis, 2006. http://hdl.handle.net/10539/1495.

Full text
Abstract:
Faculty of Science; School of Computational and Applied Maths; MSC Thesis
The assumption of constant volatility as an input parameter into the Black-Scholes option pricing formula is deemed primitive and highly erroneous when one considers the terminal distribution of the log-returns of the underlying process. To account for the 'fat tails' of the distribution, we consider both local and stochastic volatility option pricing models. Each class of models, the former being a special case of the latter, gives rise to a parametrization of the skew, which may or may not reflect the correct dynamics of the skew. We investigate a select few from each class and derive the results presented in the corresponding papers. We select one from each class, namely the implied trinomial tree (Derman, Kani & Chriss 1996) and the SABR model (Hagan, Kumar, Lesniewski & Woodward 2002), and calibrate to the implied skew for SAFEX futures. We also obtain prices for both vanilla and exotic equity index options and compare the two approaches.
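Calibrating either model class to an implied skew requires inverting the Black-Scholes formula for volatility. A self-contained sketch (standard bisection on the call price, not code from the thesis; inputs are illustrative):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def implied_vol(price, s, k, t, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert bs_call in sigma by bisection; the call price is
    strictly increasing in sigma, so bisection converges."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, t, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

p = bs_call(100.0, 110.0, 0.5, 0.05, 0.25)
iv = implied_vol(p, 100.0, 110.0, 0.5, 0.05)
```

Repeating this inversion across strikes produces the implied skew that local and stochastic volatility models are then fitted to.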
44

Waczulík, Oliver. "Stochastické modely ve finanční matematice." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-346757.

Full text
Abstract:
Title: Stochastic Models in Financial Mathematics Author: Bc. Oliver Waczulík Department: Department of Probability and Mathematical Statistics Supervisor: doc. RNDr. Jan Hurt, CSc., Department of Probability and Mathematical Statistics Abstract: This thesis looks into the problems of ordinary stochastic models used in financial mathematics, which are often influenced by unrealistic assumptions of Brownian motion. The thesis deals with and suggests more sophisticated alternatives to Brownian motion models. By applying the fractional Brownian motion we derive a modification of the Black-Scholes pricing formula for a mixed fractional Brownian motion. We use Lévy processes to introduce a subordinated stable process of Ornstein-Uhlenbeck type for modeling interest rates. We present the calibration procedures for these models along with a simulation study for estimation of the Hurst parameter. To illustrate the practical use of the models introduced in the paper we have used real financial data and custom procedures programmed in the system Wolfram Mathematica. We have achieved almost a 90% decline in the value of the Kolmogorov-Smirnov statistics by the application of the subordinated stable process of Ornstein-Uhlenbeck type for the historical values of the monthly PRIBOR (Prague Interbank Offered Rate) rates in...
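The Ornstein-Uhlenbeck-type processes mentioned above reduce, in the Gaussian base case, to the classical Ornstein-Uhlenbeck process, which admits an exact discretization. A sketch with illustrative parameters (the thesis itself works with subordinated stable versions, not this Gaussian special case):

```python
import math, random

def simulate_ou(x0, theta, mu, sigma, dt, n, rng):
    """Exact discretization of the Ornstein-Uhlenbeck process
    dX = theta*(mu - X) dt + sigma dW on a grid with step dt."""
    a = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1 - a * a) / (2 * theta))
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] + mu * (1 - a) + sd * rng.gauss(0, 1))
    return xs

rng = random.Random(4)
# Illustrative short-rate parameters: monthly steps over 20 years.
rates = simulate_ou(x0=0.10, theta=2.0, mu=0.03, sigma=0.02, dt=1 / 12, n=240, rng=rng)
```

Mean reversion pulls the path toward `mu`; with the noise switched off the scheme converges to `mu` geometrically, which is a convenient sanity check.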
45

Lin, Christy. "Unsupervised random walk node embeddings for network block structure representation." Thesis, 2021. https://hdl.handle.net/2144/43083.

Full text
Abstract:
There has been an explosion of network data in the physical, chemical, biological, computational, and social sciences in the last few decades. Node embeddings, i.e., Euclidean-space representations of nodes in a network, make it possible to apply tools and algorithms from multivariate statistics and machine learning, originally developed for Euclidean-space data, to network data. Random walk node embeddings are a class of recently developed node embedding techniques where the vector representations are learned by optimizing objective functions involving skip-bigram statistics computed from random walks on the network. They have been applied to many supervised learning problems such as link prediction and node classification and have demonstrated state-of-the-art performance. Yet, their properties remain poorly understood. This dissertation studies random walk based node embeddings in an unsupervised setting within the context of capturing hidden block structure in the network, i.e., learning node representations that reflect their patterns of adjacencies to other nodes. This doctoral research (i) Develops VEC, a random walk based unsupervised node embedding algorithm, and a series of relaxations, and experimentally validates their performance for the community detection problem under the Stochastic Block Model (SBM). (ii) Characterizes the ergodic limits of the embedding objectives to create non-randomized versions. (iii) Analyzes the embeddings for expected SBM networks and establishes certain concentration properties of the limiting ergodic objective in the large network asymptotic regime. Comprehensive experimental results on real world and SBM random networks are presented to illustrate and compare the distributional and block-structure properties of node embeddings generated by VEC and related algorithms.
As a step towards theoretical understanding, it is proved that for the variants of VEC with ergodic limits and convex relaxations, the embedding Grammian of the expected network of a two-community SBM has rank at most 2. Further experiments reveal that these extensions yield embeddings whose distribution is Gaussian-like, centered at the node embeddings of the expected network within each community, and concentrate in the linear degree-scaling regime as the number of nodes increases.
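The skip-bigram statistics that random walk embedding objectives are built from can be computed in a few lines. A simplified sketch (toy graph and invented parameters; not the VEC implementation):

```python
import random

def random_walks(adj, walk_len, walks_per_node, rng):
    """Uniform random walks over an adjacency list {node: [neighbors]}."""
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_len - 1):
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks

def skipgram_counts(walks, window):
    """Skip-bigram co-occurrence counts within a window, the raw
    statistics that random-walk embedding objectives are built from."""
    counts = {}
    for walk in walks:
        for i, u in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if j != i:
                    pair = (u, walk[j])
                    counts[pair] = counts.get(pair, 0) + 1
    return counts

# Tiny graph: two triangles (two "communities") joined by the edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
rng = random.Random(5)
walks = random_walks(adj, walk_len=10, walks_per_node=20, rng=rng)
counts = skipgram_counts(walks, window=2)
```

Nodes in the same block co-occur far more often than nodes in different blocks (here, nodes 0 and 5 are three hops apart and can never co-occur within a window of 2), which is the signal the embeddings pick up.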
46

Hilebrand, William. "The valuation of callable defaultable bonds." Master's thesis, 2011. http://hdl.handle.net/10071/4337.

Full text
Abstract:
The present work studies the valuation of callable defaultable bonds when firm value and the interest rate are both stochastic. When valuing long-term contingent claims whose underlying asset is a bond, it is important to assume endogenous bankruptcy risk since, in the long run, firm value and the interest rate might not move together. The firm value is described by a one-factor geometric Brownian motion and the interest rate follows a one-factor square-root process. This study presents sensitivity analyses of yield spreads and option values with respect to variations in host bond prices and interest rate volatilities. Furthermore, three different assumptions for modelling the interest rate behaviour (the CIR model, the Vasicek model, and a constant interest rate) are compared, in order to find significant differences and to understand which one fits the theory best.
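The CIR square-root short-rate model compared above can be simulated with a full-truncation Euler scheme. A hedged sketch with illustrative parameters (not the thesis's implementation):

```python
import math, random

def simulate_cir(r0, kappa, theta, sigma, dt, n, rng):
    """Full-truncation Euler scheme for the CIR square-root process
    dr = kappa*(theta - r) dt + sigma*sqrt(r) dW; the truncation
    max(r, 0) keeps the square root well defined."""
    r, path = r0, [r0]
    for _ in range(n):
        pos = max(r, 0.0)
        r = r + kappa * (theta - pos) * dt + sigma * math.sqrt(pos * dt) * rng.gauss(0, 1)
        path.append(r)
    return [max(x, 0.0) for x in path]

rng = random.Random(8)
# Illustrative parameters: daily steps over about four years.
path = simulate_cir(r0=0.06, kappa=1.5, theta=0.04, sigma=0.1, dt=1 / 252, n=1000, rng=rng)
```

Unlike the Vasicek model, the CIR diffusion term vanishes at zero, which (together with the truncation) keeps simulated rates nonnegative.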
47

Cheng, Ya-ching, and 鄭雅菁. "The Comparison of the Volatility Forecasting Ability between Black-Scholes and Stochastic Volatility Model - Empirical Study of TXO." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/72650687883254650421.

Full text
Abstract:
Master's thesis
I-Shou University
Master's Program, Department of Finance
Academic year 93 (ROC calendar, 2004-05)
After the introduction of warrants in Taiwan in 1997, futures, options, and other derivatives were introduced successively, and the Taiwanese financial market has become more diverse and rich. As a result, the pricing, trading strategies, and hedging of derivatives are important issues. Volatility is a crucial factor affecting all three, and it therefore plays a major part in modern financial theory. This research studied Taiwan stock index options and analyzed the advantages and disadvantages of several volatility models: the Black-Scholes option valuation model and the Hull & White stochastic volatility model were each combined with historical volatility, implied volatility, and a GARCH model. The empirical study showed that, under the measurement error indices MAE, RMSE, and MAPE, the historical volatility model produced the greatest pricing error. The non-parametric Wilcoxon signed-rank test was used to assess the significance of the differences between theoretical and market prices for the different volatility models. The Hull & White stochastic volatility model was not found to be better than the Black-Scholes model.
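The three error indices used in this comparison are standard. A minimal implementation with made-up model and market prices:

```python
import math

def error_metrics(model_prices, market_prices):
    """MAE, RMSE and MAPE of model prices against observed market prices."""
    n = len(model_prices)
    errs = [m - obs for m, obs in zip(model_prices, market_prices)]
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mape = sum(abs(e) / abs(obs) for e, obs in zip(errs, market_prices)) / n
    return mae, rmse, mape

mae, rmse, mape = error_metrics([10.5, 9.8, 20.4], [10.0, 10.0, 20.0])
```

RMSE penalizes large individual errors more heavily than MAE (RMSE is always at least as large), while MAPE expresses errors relative to the observed price level.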
48

Kan-Dobrowsky, Natalia. "Generalized Multinomial CRR Option Pricing Model and its Black-Scholes type limit." Doctoral thesis, 2005. http://hdl.handle.net/11858/00-1735-0000-0006-B401-6.

Full text
Abstract:
We construct a generalized discrete model of the underlying stock price process that serves as a better approximation of the stock price process than the classical random walk. The generalized multinomial option pricing model with respect to this new stock price process model is obtained. The corresponding asymptotic procedure yields a generalized Black-Scholes formula, obtained as a limiting case of the generalized discrete option pricing model.
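The classical binomial special case of the multinomial model, the Cox-Ross-Rubinstein tree, already exhibits the Black-Scholes limit described above. A self-contained sketch (illustrative inputs, not the thesis's generalized model):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes European call price (the limit object)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def crr_call(s, k, t, r, sigma, n):
    """Cox-Ross-Rubinstein binomial price of a European call via
    backward induction; converges to bs_call as n grows."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = math.exp(-r * dt)
    values = [max(s * u ** j * d ** (n - j) - k, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

p_crr = crr_call(100.0, 100.0, 1.0, 0.05, 0.2, n=500)
p_bs = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
```

With 500 steps the binomial price is already within a few cents of the Black-Scholes limit for this at-the-money call.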
49

Xuan, Junyu. "Bayesian nonparametric learning for complicated text mining." Thesis, 2016. http://hdl.handle.net/10453/62405.

Full text
Abstract:
University of Technology Sydney. Faculty of Engineering and Information Technology.
Text mining has gained the ever-increasing attention of researchers in recent years because text is one of the most natural and easy ways to express human knowledge and opinions, and is therefore believed to have a variety of application scenarios and a potentially high commercial value. It is commonly accepted that Bayesian models with finite-dimensional probability distributions as building blocks, also known as parametric topic models, are effective tools for text mining. However, one problem in existing parametric topic models is that the hidden topic number needs to be fixed in advance. Determining an appropriate number is very difficult, and sometimes unrealistic, for many real-world applications and may lead to over-fitting or under-fitting issues. Bayesian nonparametric learning is a key approach for learning the number of mixtures in a mixture model (also called the model selection problem), and has emerged as an elegant way to handle a flexible number of topics. The core idea of Bayesian nonparametric models is to use stochastic processes as building blocks, instead of traditional fixed-dimensional probability distributions. Even though Bayesian nonparametric learning has gained considerable research attention and undergone rapid development, its ability to conduct complicated text mining tasks, such as: document-word co-clustering, document network learning, multi-label document learning, and so on, is still weak. Therefore, there is still a gap between the Bayesian nonparametric learning theory and complicated real-world text mining tasks. To fill this gap, this research aims to develop a set of Bayesian nonparametric models to accomplish four selected complex text mining tasks. First, three Bayesian nonparametric sparse nonnegative matrix factorization models, based on two innovative dependent Indian buffet processes, are proposed for document-word co-clustering tasks. 
Second, a Dirichlet mixture probability measure strategy is proposed to link the topics from different layers, and is used to build a Bayesian nonparametric deep topic model for topic hierarchy learning. Third, the thesis develops a Bayesian nonparametric relational topic model for document network learning tasks via a subsampling Markov random field. Lastly, the thesis develops Bayesian nonparametric cooperative hierarchical structure models for multi-label document learning tasks based on two stochastic process operations: inheritance and cooperation. The findings of this research not only contribute to the development of Bayesian nonparametric learning theory, but also provide a set of effective tools for complicated text mining applications.
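The core idea the abstract describes, using a stochastic process rather than a fixed-dimensional distribution so that the number of topics need not be fixed in advance, is commonly realized via a Dirichlet process. A minimal sketch using the truncated stick-breaking construction (the concentration parameter and truncation level below are illustrative, not from the thesis):

```python
import numpy as np

def stick_breaking(alpha, truncation, rng):
    """Truncated stick-breaking construction of Dirichlet-process weights.

    Draws beta_k ~ Beta(1, alpha) and sets
    pi_k = beta_k * prod_{j<k} (1 - beta_j).
    The number of topics with non-negligible weight adapts to the
    concentration alpha instead of being fixed in advance.
    """
    betas = rng.beta(1.0, alpha, size=truncation)
    # Mass remaining on the "stick" before each break.
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(0)
w = stick_breaking(alpha=2.0, truncation=100, rng=rng)
# Small alpha concentrates mass on a few topics; large alpha spreads it out.
effective_topics = np.sum(w > 1e-3)
```

In a full nonparametric topic model these weights would govern topic assignments, with inference (e.g. Gibbs sampling or variational methods) determining how many components the data actually support.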
APA, Harvard, Vancouver, ISO, and other styles
50

Qin, Juan. "A high-resolution hierarchical model for space-time rainfall." Thesis, 2011. http://hdl.handle.net/1959.13/808076.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
The hydrologic response of urban catchments is sensitive to small-scale space-time rainfall variations. A stochastic space-time rainfall model used for design purposes must reproduce important statistics at these small scales. However, current rainfall models make simplifying assumptions about the temporal characteristics of rainfields and thus cannot be expected to reproduce important statistics over various space and time scales. In this study, an extensive investigation of radar rainfall data for the Sydney region motivated the development of a new phenomenological hierarchical stochastic model to robustly simulate rainfall fields consistent with 10-minute 1-km2 pixel radar images. The hierarchical framework consists of three levels. The development of the first two levels, which simulate the evolution of rainfall fields for a single storm, is the focus of this thesis. The third level, which is designed for simulation of storm sequences, is left for future research. The first level simulates a latent Gaussian random field conditioned on the previous time step, which is transformed to a rain field using a power transformation. A Toeplitz block circulant technique is used to achieve fast and accurate simulations of large Gaussian random fields (on a 256 × 256 lattice), and is shown to be far more efficient than the traditional Cholesky decomposition method. In the second level, first-order autoregressive (AR(1)) models are used to describe the within-storm variations of the level-one parameters that control the evolution of the rain fields. Calibration is performed using a generalized method-of-moments approach. The parametric bootstrap validation technique was used to evaluate the performance of the first two levels of the model by comparing the characteristics of interest for four observed storm events (typical of frontal and convective storms experienced in Sydney, Australia) and synthetic storms.
It is found that this two-level rainfall model produces realistic sequences of rain images which capture the physical hierarchical structure of clusters, patchiness of rain fields and the persistence exhibited during storm development. A variety of important statistics were adequately reproduced at both 10-min and 1-hr time scales over space scales ranging from 1 km up to 32 km. Finally, application of this model to short-term rainfall forecasting is presented.
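The level-one mechanism described above, a latent Gaussian field evolved conditionally on the previous time step and mapped to rain depths by a power transformation, can be sketched as follows. The AR(1) coefficient, noise scale, and power exponent here are illustrative placeholders, not the thesis's calibrated parameters, and spatial correlation of the innovations is omitted for brevity:

```python
import numpy as np

def evolve_latent_field(z_prev, rho, sigma, rng):
    """One conditional update of the latent Gaussian field:
    z_t = rho * z_{t-1} + sigma * eps, with eps ~ N(0, I)."""
    return rho * z_prev + sigma * rng.standard_normal(z_prev.shape)

def to_rainfall(z, power=2.0):
    """Power transformation to non-negative rain depths;
    negative latent values map to zero rain (dry pixels)."""
    return np.where(z > 0.0, z ** power, 0.0)

rng = np.random.default_rng(1)
rho = 0.9                                  # illustrative persistence
z = rng.standard_normal((256, 256))        # latent field on the 256 x 256 lattice
for _ in range(6):                         # six 10-minute steps = one hour
    z = evolve_latent_field(z, rho=rho, sigma=np.sqrt(1.0 - rho ** 2), rng=rng)
rain = to_rainfall(z)
```

In the thesis itself the innovations are spatially correlated Gaussian fields, simulated efficiently with a Toeplitz block circulant (FFT-based) embedding rather than the i.i.d. noise used in this sketch, and the parameters governing each step themselves follow AR(1) dynamics at level two.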
APA, Harvard, Vancouver, ISO, and other styles