
Dissertations / Theses on the topic 'Probabilities – Mathematical models'

Consult the top 50 dissertations / theses for your research on the topic 'Probabilities – Mathematical models.'

1

Gong, Qi, and 龔綺. "Gerber-Shiu function in threshold insurance risk models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40987966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wan, Lai-mei. "Ruin analysis of correlated aggregate claims models." Thesis, Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B30705708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Sheng, and 黄盛. "Some properties of [¯gamma*n] and error control with group network codes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46606117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wei, Zhenghong. "Empirical likelihood based evaluation for value at risk models." HKBU Institutional Repository, 2007. http://repository.hkbu.edu.hk/etd_ra/896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kwan, Kwok-man, and 關國文. "Ruin theory under a threshold insurance risk model." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38320034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dunster, Joanne L. "Mathematical models of soft tissue injury repair : towards understanding musculoskeletal disorders." Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/27797/.

Full text
Abstract:
The process of soft tissue injury repair at the cellular level can be decomposed into three phases: acute inflammation including coagulation, proliferation and remodelling. While the later phases are well understood, the early phase is less so. We produce a series of new mathematical models for the early phases of coagulation and inflammation. The models produced are relevant not only to soft tissue injury repair but also to the many disease states in which coagulation and inflammation play a role. The coagulation cascade and the subsequent formation of the enzyme thrombin are central to the creation of blood clots. By focusing on a subset of reactions that occur within the coagulation cascade, we develop a model that exhibits a rich asymptotic structure. Using singular perturbation theory we produce a sequence of simpler time-dependent models which enable us to elucidate the physical mechanisms that underlie the cascade and the formation of thrombin. There is considerable interest in identifying new therapeutic targets within the coagulation cascade, as current drugs for treating pathological coagulation (thrombosis) target multiple factors and cause the unwelcome side effect of excessive bleeding. Factor XI is thought to be a potential therapeutic target, as it is implicated in pathological coagulation but not in haemostasis (the stopping of bleeding), but its mechanism of activation is controversial. By extending our previous model of the coagulation cascade to include the whole cascade (albeit in a simplistic way) we use numerical methods to simulate experimental data of the coagulation cascade under normal as well as specific-factor-deficient conditions. We then provide simulations supporting the hypothesis that thrombin activates factor XI. Interest in inflammation is now increasing due to it being implicated in such diverse conditions as Alzheimer's disease, cancer and heart disease. Inflammation can either resolve or settle into a self-perpetuating condition which, in the context of soft tissue repair, is termed chronic inflammation. Inflammation has traditionally been thought gradually to subside, but new biological interest centres on the anti-inflammatory processes (relating to macrophages) that are thought to promote resolution and the pro-inflammatory role that neutrophils can play by causing damage to healthy tissue. We develop a new ordinary differential equation model of the inflammatory process that accounts for populations of neutrophils and macrophages. We use numerical techniques and bifurcation theory to characterise and elucidate the physiological mechanisms that are dominant during the inflammatory phase and the roles they play in the healing process. There is therapeutic interest in modifying the rate of neutrophil apoptosis, but we find that increased apoptosis depends on macrophage removal in order to be anti-inflammatory. We develop a simplified version of the model of inflammation, reducing a system of nine ordinary differential equations to six while retaining the physical processes of neutrophil apoptosis and macrophage-driven anti-inflammatory mechanisms. The simplified model reproduces the key outcomes that we relate to resolution or chronic inflammation. We then present preliminary work on the inclusion of the spatial effects of chemotaxis and diffusion.
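As a rough editorial illustration of the kind of neutrophil-macrophage ODE model described in this abstract (not the thesis's nine-equation system), a minimal two-population sketch with invented rate constants might look like this in Python:

    # Illustrative sketch only: a minimal two-population ODE for neutrophils n(t)
    # and macrophages m(t); the hypothetical parameters (recruitment, apoptosis,
    # clearance, decay) are chosen purely for demonstration.
    import numpy as np
    from scipy.integrate import solve_ivp

    def inflammation(t, y, recruit=1.0, apoptosis=0.5, clearance=0.8, decay=0.3):
        n, m = y
        dn = recruit - apoptosis * n          # neutrophil influx minus apoptosis
        dm = apoptosis * n - decay * m        # apoptotic neutrophils recruit macrophages
        dn -= clearance * m * n               # macrophages clear neutrophils
        return [dn, dm]

    sol = solve_ivp(inflammation, (0.0, 50.0), [1.0, 0.1], dense_output=True)
    print(sol.y[:, -1])  # late-time neutrophil and macrophage levels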
APA, Harvard, Vancouver, ISO, and other styles
7

Venter, Rudolf Gerrit. "Pricing options under stochastic volatility." Diss., Pretoria : [s.n.], 2003. http://upetd.up.ac.za/thesis/available/etd09052005-120952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sirkin, Jeffrey M. "Quantifying the probabilities of selection of surface warfare officers to executive officer." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FSirkin.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Robert A. Koyak. "September 2006." Includes bibliographical references (p. 51). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
9

Przybyla, Craig Paul. "Microstructure-sensitive extreme value probabilities of fatigue in advanced engineering alloys." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34780.

Full text
Abstract:
A novel microstructure-sensitive extreme value probabilistic framework is introduced to evaluate material performance/variability for damage evolution processes (e.g., fatigue, fracture, creep). This framework employs newly developed extreme value marked correlation functions (EVMCF) to identify the coupled microstructure attributes (e.g., phase/grain size, grain orientation, grain misorientation) that have the greatest statistical relevance to the extreme value response variables (e.g., stress, elastic/plastic strain) that describe the damage evolution processes of interest. This is an improvement on previous approaches that account for distributed extreme value response variables that describe the damage evolution process of interest based only on the extreme value distributions of a single microstructure attribute; previous approaches have given no consideration of how coupled microstructure attributes affect the distributions of extreme value response. This framework also utilizes computational modeling techniques to identify correlations between microstructure attributes that significantly raise or lower the magnitudes of the damage response variables of interest through the simulation of multiple statistical volume elements (SVE). Each SVE for a given response is constructed to be a statistical sample of the entire microstructure ensemble (i.e., bulk material); therefore, the response of interest in each SVE is not expected to be the same. This is in contrast to computational simulation of a single representative volume element (RVE), which often is untenably large for response variables dependent on the extreme value microstructure attributes. This framework has been demonstrated in the context of characterizing microstructure-sensitive high cycle fatigue (HCF) variability due to the processes of fatigue crack formation (nucleation and microstructurally small crack growth) in polycrystalline metallic alloys. Specifically, the framework is exercised to estimate the local driving forces for fatigue crack formation, to validate these with limited existing experiments, and to explore how the extreme value probabilities of certain fatigue indicator parameters (FIPs) affect overall variability in fatigue life in the HCF regime. Various FIPs have been introduced and used previously as a means to quantify the potential for fatigue crack formation based on experimentally observed mechanisms. Distributions of the extreme value FIPs are calculated for multiple SVEs simulated via the FEM with crystal plasticity constitutive relations. By using crystal plasticity relations, the FIPs can be computed based on the cyclic plastic strain on the scale of the individual grains. These simulated SVEs are instantiated such that they are statistically similar to real microstructures in terms of the crystallographic microstructure attributes that are hypothesized to have the most influence on the extreme value HCF response. The polycrystalline alloys considered here include the Ni-base superalloy IN100 and the Ti alloy Ti-6Al-4V. In applying this framework to study the microstructure dependent variability of HCF in these alloys, the extreme value distributions of the FIPs and associated extreme value marked correlations of crystallographic microstructure attributes are characterized. This information can then be used to rank order multiple variants of the microstructure for a specific material system for relative HCF performance or to design new microstructures hypothesized to exhibit improved performance. 
This framework enables limiting the (presently) large number of experiments required to characterize scatter in HCF and lends quantitative support to designing improved, fatigue-resistant materials and accelerating insertion of modified and new materials into service.
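As an editorial illustration of the extreme value side of this framework (not the thesis's crystal-plasticity workflow), one could fit an extreme value distribution to synthetic per-SVE maxima and read off an exceedance probability; all numbers below are placeholders:

    # Sketch under assumptions: synthetic stand-ins for per-SVE extreme fatigue
    # indicator parameters (FIPs); the thesis obtains these from crystal-plasticity
    # finite element simulations, which are not reproduced here.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    extreme_fips = rng.gumbel(loc=2.0e-3, scale=4.0e-4, size=200)  # one maximum per SVE

    loc, scale = stats.gumbel_r.fit(extreme_fips)                  # fit an extreme value law
    threshold = 3.0e-3
    p_exceed = stats.gumbel_r.sf(threshold, loc, scale)            # P(extreme FIP > threshold)
    print(f"estimated exceedance probability: {p_exceed:.3e}")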
APA, Harvard, Vancouver, ISO, and other styles
10

Reischman, Diann. "Order restricted inferences on parameters in generalized linear models with emphasis on logistic regression /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Jairu, Desiderio N. "Distributions of some random volumes and their connection to multivariate analysis." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

He, Xin, and 何鑫. "Probabilistic quality-of-service constrained robust transceiver designin multiple antenna systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199527.

Full text
Abstract:
In downlink multi-user multiple-input multiple-output (MU-MIMO) systems, different users, even multiple data streams serving one user, might require different quality-of-services (QoS). The transceiver should allocate resources to different users aiming at satisfying their QoS requirements. In order to design the optimal transceiver, channel state information is necessary. In practice, channel state information has to be estimated, and estimation error is unavoidable. Therefore, robust transceiver design, which takes the channel estimation uncertainty into consideration, is important. For previous robust transceiver designs, bounded estimation errors or Gaussian estimation errors were assumed. However, if there exists unknown distributed interference, the distribution of the channel estimation error cannot be modeled accurately a priori. Therefore, in this thesis, we investigate the robust transceiver design problem in downlink MU-MIMO systems under probabilistic QoS constraints with arbitrarily distributed channel estimation error. To tackle the probabilistic QoS constraints under arbitrarily distributed channel estimation error, the transceiver design problem is expressed in terms of worst-case probabilistic constraints. Two methods are then proposed to solve the worst-case problem. Firstly, the Chebyshev inequality based method is proposed. After the worst-case probabilistic constraint is approximated by the Chebyshev inequality, an iteration between two convex subproblems is proposed to solve the approximated problem. The convergence of the iterative method is proved, and the implementation issues and the computational complexity are discussed. Secondly, in order to solve the worst-case probabilistic constraint more accurately, a novel duality method is proposed. After a series of reformulations based on duality and the S-Lemma, the worst-case statistically constrained problem is transformed into a deterministic finite constrained problem, with strong duality guaranteed. The resulting problem is then solved by a convergence-guaranteed iteration between two subproblems. Although one of the subproblems is still nonconvex, it can be solved by a tight semidefinite relaxation (SDR). Simulation results show that, compared to the non-robust method, the QoS requirement is satisfied by both proposed algorithms. Furthermore, among the two proposed methods, the duality method shows superior performance in transmit power, while the Chebyshev method demonstrates a lower computational complexity.
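As context for the Chebyshev-inequality step mentioned above, a generic form of the bound (not the thesis's exact derivation) is the following, where X stands for a generic per-stream QoS metric with mean \mu and variance \sigma^2, and \gamma > \mu is the QoS target:

    \Pr\bigl(|X-\mu|\ge t\bigr)\le \frac{\sigma^2}{t^2}
    \quad\Longrightarrow\quad
    \Pr\bigl(X\ge \gamma\bigr)\le \frac{\sigma^2}{(\gamma-\mu)^2}\quad\text{for } \gamma>\mu .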
Electrical and Electronic Engineering
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
13

Duncan, Kristin A. "Case and covariate influence implications for model assessment /." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1095357183.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xi, 123 p.; also includes graphics (some col.). Includes bibliographical references (p. 120-123).
APA, Harvard, Vancouver, ISO, and other styles
14

Boissard, Emmanuel. "Problèmes d'interaction discret-continu et distances de Wasserstein." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1389/.

Full text
Abstract:
We study several problems of approximation using tools from optimal transportation theory. The Wasserstein metrics are used to provide error bounds for the particle approximation of the solutions of certain partial differential equations. They also come into play as natural measures of distortion in quantization and clustering problems. A problem related to these questions is to estimate the speed of convergence in the empirical law of large numbers for these distortions. The first part of this thesis establishes non-asymptotic bounds, notably in infinite-dimensional Banach spaces, as well as in cases where the observations are not independent. The second part is dedicated to the study of two models arising in the modelling of animal population movement. We introduce a new individual-based model of ant trail formation, which we study through numerical simulations and a kinetic-equation representation. We also study a variant of the Cucker-Smale model of bird flock motion: we show the well-posedness of the associated Vlasov-type transport equation and establish results on its long-time behaviour. Finally, in a third part, we study some statistical applications of the notion of barycenter in the space of probability measures equipped with the Wasserstein distance, recently introduced by M. Agueh and G. Carlier.
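For readers unfamiliar with the empirical quantities involved, a minimal illustration (not from the thesis) of how the Wasserstein distance between a sample and a larger reference sample shrinks with sample size:

    # Illustrative sketch: empirical convergence of the 1-Wasserstein distance between
    # a Gaussian sample and a large reference sample standing in for the law; the
    # sample sizes below are arbitrary.
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(1)
    reference = rng.normal(size=100_000)
    for n in (100, 1_000, 10_000):
        sample = rng.normal(size=n)
        print(n, wasserstein_distance(sample, reference))   # distance shrinks as n grows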
APA, Harvard, Vancouver, ISO, and other styles
15

Badran, Rabih. "Insurance portfolio's with dependent risks." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209547.

Full text
Abstract:
This thesis deals with insurance portfolios with dependent risks in risk theory.

The first chapter deals with models with equicorrelated risks. We propose a mathematical structure that leads to a particular probability generating function (pgf) proposed by Tallis. This pgf involves equicorrelated variables. We then study the effect of this type of dependence on quantities of interest in the actuarial literature, such as the distribution function of the sum of claim amounts, stop-loss premiums and finite-horizon ruin probabilities. We use the proposed structure to correct errors in the literature due to the fact that several authors proceeded as if the sum of equicorrelated random variables necessarily had the pgf proposed by Tallis.

In the second chapter, we propose a model that combines shock models and common mixture models by introducing a variable that controls the level of the shock. Within this new model, we consider two applications in which we generalise the Bernoulli model with shock and the Poisson model with shock. In both applications, we study the effect of dependence on the distribution function of claim amounts, stop-loss premiums and ruin probabilities over finite and infinite horizons. For the second application, we propose a copula-based construction that allows the level of dependence to be controlled through the level of the shock.

In the third chapter, we propose a generalisation of the classical Poisson model in which claim amounts and inter-claim times are assumed to be dependent. We compute the Laplace transform of the survival probabilities. In the particular case where claim amounts have an exponential distribution, we obtain explicit formulas for the survival probabilities.

In the fourth chapter, we generalise the classical Poisson model by introducing dependence between inter-claim times. We use the link between fluid queues and the risk process to model the dependence. We compute the survival probabilities using a numerical algorithm and treat the case where claim amounts and inter-claim times have phase-type distributions.


Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
16

Gathy, Maude. "On some damage processes in risk and epidemic theories." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210063.

Full text
Abstract:
This thesis deals with damage processes in risk theory and in biomathematics.

In risk theory, the damage process studied is that of the claims borne by an insurance company.

The first chapter examines the Markov-Polya distribution as a possible law for modelling the number of claims and establishes links with the Katz/Panjer family of distributions. We construct the Markov-Polya law from a model of claim occurrence and show that it satisfies an elegant recurrence. This recurrence notably yields an efficient algorithm for the corresponding compound distribution. We derive the Katz/Panjer family as a limiting family of the Markov-Polya law.

The second chapter deals with the so-called "Lagrangian Katz" family, which extends the Katz/Panjer family. We motivate its use as a claim-number distribution through a first-passage problem. We characterise all the distributions belonging to this family and derive an efficient algorithm for the corresponding compound distribution. We also examine its index of dispersion and its asymptotic behaviour.

In the third chapter, we study the finite-horizon ruin probability in a discrete model with positive interest rates. We derive an algorithm as well as several bounds for this probability. One particular bound allows us to construct two risk measures. We also examine the possibility of using proportional reinsurance with retention levels that are equal or different over successive periods.

In the context of epidemic processes, the damage studied consists of the spread of a disease of SIE type (susceptible - infected - removed). The way in which an infected individual contaminates the susceptibles is described by particular survival distributions. From these we derive the distribution of the total number of people infected by the end of the epidemic. We examine in detail the so-called Markov-Polya and hypergeometric epidemics. We then approximate this distribution by a branching process. We also study a similar damage process in reliability theory, where the deterioration consists of the propagation of cascading failures in a system of interconnected components.
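As context for the compound-distribution algorithms mentioned here, the classical Panjer recursion for a compound Poisson law (a textbook member of the Katz/Panjer family, not the Markov-Polya recurrence derived in the thesis) can be sketched as:

    # Standard Panjer recursion for a compound Poisson distribution; claim sizes take
    # values 0, 1, 2, ... and the recursion gives the probability mass function of the
    # aggregate claim amount S = X_1 + ... + X_N with N ~ Poisson(lam).
    import math

    def compound_poisson_pmf(lam, severity_pmf, s_max):
        f = [severity_pmf(j) for j in range(s_max + 1)]
        g = [math.exp(lam * (f[0] - 1.0))]            # g(0) = exp(lam * (f(0) - 1))
        for s in range(1, s_max + 1):
            g.append(lam / s * sum(j * f[j] * g[s - j] for j in range(1, s + 1)))
        return g

    # Example: Poisson(2) claim counts, claim sizes uniform on {1, 2, 3, 4}
    pmf = compound_poisson_pmf(2.0, lambda j: 0.25 if 1 <= j <= 4 else 0.0, s_max=20)
    print(sum(pmf))  # close to 1 once s_max is large enough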


Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
17

Akrouche, Joanna. "Optimization of the availability of multi-states systems under uncertainty." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2545.

Full text
Abstract:
Dependability became a necessity in the industrial world during the twentieth century. It is an activity domain that proposes means to increase the attributes of a system in a reasonable time and at a reasonable cost. In systems engineering, dependability is defined as the property that enables system users to place a justified confidence in the service it delivers to them; it is a measure of a system's availability, reliability and maintainability, and of maintenance support performance, and, in some cases, of other characteristics such as durability, safety and security. The key concept our work is based on is availability. The availability A(t) is the ability of a system to be operational at a specific moment. A system with high availability is very expensive, so the designer must compromise between availability and economic cost. Users can reject systems that are unsafe, unreliable or insecure. Therefore, any user (or industry) will ask this question before getting any product: "What is the optimal product on the market?" To answer this question, we must combine the following two points: the best availability of the system (the user wants a product that lasts as long as possible) and the best cost of the system (the user wants a product that does not cost a fortune). Availability calculation is based primarily on knowledge of the failure and repair rates of system components. Availability analysis helps to calculate the ability of a system to provide a required level of performance depending on the level of degradation. Several methods have been used to calculate the availability of a system, amongst which we find the Universal Generating Function (UGF), the inclusion-exclusion technique, Markov models, etc. These methods employ different probabilistic techniques to evaluate this criterion, but the proposed approaches remain effective only for very specific cases, for example binary systems. A binary system is a system where only two cases are possible: perfect functioning and total failure. The transition to multi-state systems (MSS) drastically restricts the application of most of these methods. In real life, systems correspond to MSSs. In such scenarios, systems and their components can operate at different performance levels between the working and failure states. However, evaluating the availability of MSSs is more difficult than in the binary case, because we have to take into account the different combinations of component failure modes. Throughout this thesis, we search for a method that helps us to compute and to optimize the availability of MSSs while taking the cost factor into account.
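As a minimal illustration of the multi-state availability computations discussed above (with made-up transition rates, and without the uncertainty treatment that is the subject of the thesis):

    # Minimal sketch with invented rates: steady-state distribution of a three-state
    # component (nominal / degraded / failed) modelled as a continuous-time Markov
    # chain, with availability defined as the probability of being in a state whose
    # performance meets the demand.
    import numpy as np

    Q = np.array([[-0.02,  0.015, 0.005],    # nominal  -> degraded / failed
                  [ 0.10, -0.14,  0.040],    # degraded -> repaired / failed
                  [ 0.05,  0.00, -0.05 ]])   # failed   -> repaired

    # Solve pi Q = 0 subject to sum(pi) = 1
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    performance = np.array([1.0, 0.6, 0.0])   # per-state output level
    demand = 0.5
    availability = pi[performance >= demand].sum()
    print(pi, availability)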
APA, Harvard, Vancouver, ISO, and other styles
18

Mu, Xiaoyu. "Ruin probabilities with dependent forces of interest." [Johnson City, Tenn. : East Tennessee State University], 2003. https://dc.etsu.edu/etd/796.

Full text
Abstract:
Thesis (M.S.)--East Tennessee State University, 2003.
Title from electronic submission form. ETSU ETD database URN: etd-0713103-233105. Includes bibliographical references. Also available via Internet at the UMI web site.
APA, Harvard, Vancouver, ISO, and other styles
19

Noel, Jonathan A. "Extremal combinatorics, graph limits and computational complexity." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:8743ff27-b5e9-403a-a52a-3d6299792c7b.

Full text
Abstract:
This thesis is primarily focused on problems in extremal combinatorics, although we will also consider some questions of an analytic and algorithmic nature. The d-dimensional hypercube is the graph with vertex set {0,1}^d where two vertices are adjacent if they differ in exactly one coordinate. In Chapter 2 we obtain an upper bound on the 'saturation number' of Qm in Qd. Specifically, we show that for m ≥ 2 fixed and d large there exists a subgraph G of Qd of bounded average degree such that G does not contain a copy of Qm but, for every G' such that G ⊊ G' ⊆ Qd, the graph G' contains a copy of Qm. This result answers a question of Johnson and Pinto and is best possible up to a factor of O(m). In Chapter 3, we show that there exists ε > 0 such that for all k and for n sufficiently large there is a collection of at most 2^((1-ε)k) subsets of [n] which does not contain a chain of length k+1 under inclusion and is maximal subject to this property. This disproves a conjecture of Gerbner, Keszegh, Lemons, Palmer, Pálvölgyi and Patkós. We also prove that there exists a constant c ∈ (0,1) such that the smallest such collection is of cardinality 2^((1+o(1))ck) for all k. In Chapter 4, we obtain an exact expression for the 'weak saturation number' of Qm in Qd. That is, we determine the minimum number of edges in a spanning subgraph G of Qd such that the edges of E(Qd)\E(G) can be added to G, one edge at a time, such that each new edge completes a copy of Qm. This answers another question of Johnson and Pinto. We also obtain a more general result for the weak saturation of 'axis aligned' copies of a multidimensional grid in a larger grid. In the r-neighbour bootstrap process, one begins with a set A0 of 'infected' vertices in a graph G and, at each step, a 'healthy' vertex becomes infected if it has at least r infected neighbours. If every vertex of G is eventually infected, then we say that A0 percolates. In Chapter 5, we apply ideas from weak saturation to prove that, for fixed r ≥ 2, every percolating set in Qd has cardinality at least (1+o(1))(d choose r-1)/r. This confirms a conjecture of Balogh and Bollobás and is asymptotically best possible. In addition, we determine the minimum cardinality exactly in the case r=3 (the minimum cardinality in the case r=2 was already known). In Chapter 6, we provide a framework for proving lower bounds on the number of comparable pairs in a subset S of a partially ordered set (poset) of prescribed size. We apply this framework to obtain an explicit bound of this type for the poset 𝒱(q,n) consisting of all subspaces of 𝔽_q^n ordered by inclusion, which is best possible when S is not too large. In Chapter 7, we apply the result from Chapter 6 along with the recently developed 'container method' to obtain an upper bound on the number of antichains in 𝒱(q,n) and a bound on the size of the largest antichain in a p-random subset of 𝒱(q,n) which holds with high probability for p in a certain range. In Chapter 8, we construct a 'finitely forcible graphon' W for which there exists a sequence (ε_i)_{i≥1} tending to zero such that, for all i ≥ 1, every weak ε_i-regular partition of W has at least exp(ε_i^(-2) / 2^(5 log* ε_i^(-2))) parts. This result shows that the structure of a finitely forcible graphon can be much more complex than was anticipated in a paper of Lovász and Szegedy. For positive integers p, q with p/q ≥ 2, a circular (p,q)-colouring of a graph G is a mapping V(G) → ℤp such that any two adjacent vertices are mapped to elements of ℤp at distance at least q from one another.
The reconfiguration problem for circular colourings asks, given two (p,q)-colourings f and g of G, is it possible to transform f into g by recolouring one vertex at a time so that every intermediate mapping is a (p,q)-colouring? In Chapter 9, we show that this question can be answered in polynomial time for 2 ≤ p/q < 4 and is PSPACE-complete for p/q ≥ 4.
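As an editorial illustration of the r-neighbour bootstrap process defined in this abstract (a toy simulation, not part of the thesis), one can check empirically whether a random initial set percolates in Qd:

    # Toy simulation of r-neighbour bootstrap percolation on the d-dimensional
    # hypercube Q_d; the parameters d, r and the initial density p are arbitrary.
    import random

    def percolates(d, r, p, seed=0):
        rng = random.Random(seed)
        n = 1 << d
        infected = {v for v in range(n) if rng.random() < p}
        changed = True
        while changed:
            changed = False
            for v in range(n):
                if v in infected:
                    continue
                # neighbours of v differ in exactly one coordinate (one bit flip)
                if sum((v ^ (1 << i)) in infected for i in range(d)) >= r:
                    infected.add(v)
                    changed = True
        return len(infected) == n

    print(percolates(d=7, r=2, p=0.1))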
APA, Harvard, Vancouver, ISO, and other styles
20

Lundström, Edvin. "On the Proxy Modelling of Risk-Neutral Default Probabilities." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273624.

Full text
Abstract:
Since the default of Lehman Brothers in 2008, it has become increasingly important to measure, manage and price the default risk in financial derivatives. Default risk in financial derivatives is referred to as counterparty credit risk (CCR). The price of CCR is captured in Credit Valuation Adjustment (CVA). This adjustment should in principle always enter the valuation of a derivative traded over-the-counter (OTC). To calculate CVA, one needs to know the probability of default of the counterparty. Since CVA is a price, what one needs is the risk-neutral probability of default. The typical way of obtaining risk-neutral default probabilities is to build credit curves calibrated using Credit Default Swaps (CDS). However, for a majority of a bank's counterparties there are no CDSs liquidly traded. This constitutes a major challenge. How does one model the risk-neutral default probability in the absence of observable CDS spreads? A number of methods for constructing proxy credit curves have been proposed previously. A particularly popular choice is the so-called Nomura (or cross-section) model. In studying this model, we find some weaknesses, which in some instances lead to degenerate proxy credit curves. In this thesis we propose an altered model, where the modelling quantity is changed from the CDS spread to the hazard rate. This ensures that the obtained proxy curves are valid by construction. We find that in practice, the Nomura model in many cases gives degenerate proxy credit curves. We find no such issues for the altered model. In some cases, we see that the differences between the models are minor. The conclusion is that the altered model is a better choice since it is theoretically sound and robust.
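As a small illustration of why modelling the hazard rate guarantees valid curves (with invented tenors and intensities, not the thesis's calibrated values):

    # Sketch with illustrative numbers: a piecewise-constant hazard rate always yields
    # a valid, non-increasing survival curve Q(t) = exp(-integral of lambda), which is
    # the point of modelling the hazard rate rather than the CDS spread directly.
    import numpy as np

    tenors = np.array([1.0, 3.0, 5.0, 10.0])          # years
    hazard = np.array([0.010, 0.015, 0.020, 0.025])   # assumed piecewise-constant intensities

    widths = np.diff(np.concatenate(([0.0], tenors)))
    cum_hazard = np.cumsum(hazard * widths)
    survival = np.exp(-cum_hazard)
    default_prob = 1.0 - survival
    for t, q, p in zip(tenors, survival, default_prob):
        print(f"t={t:>4}: survival={q:.4f}, default probability={p:.4f}")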
APA, Harvard, Vancouver, ISO, and other styles
21

Uyanga, Enkhzul, and Lida Wang. "Algorithm that creates product combinations based on customer data analysis : An approach with Generalized Linear Models and Conditional Probabilities." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210176.

Full text
Abstract:
This bachelor’s thesis is a combined study of applied mathematical statistics and industrial engineering and management implemented to develop an algorithm which creates product combinations based on customer data analysis for eleven AB. Mathematically, generalized linear modelling, combinatorics and conditional probabilities were applied to create sales prediction models, generate potential combinations and calculate the conditional probabilities of the combinations getting purchased. SWOT analysis was used to identify which factors can enhance the sales from an industrial engineering and management perspective. Based on the regression analysis, the study showed that the considered variables, which were sales prices, brands, ratings, purchase countries, purchase months and how new the products are, affected the sales amounts of the products. The algorithm takes a barcode of a product as an input and checks whether if the corresponding product type satisfies the requirements of predicted sales amount and conditional probability. The algorithm then returns a list of possible product combinations that fulfil the recommendations.
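As a toy illustration of the conditional-probability step described above (the GLM part is omitted, and the purchase matrix is invented):

    # Toy sketch: given a binary purchase matrix (rows = transactions, columns =
    # products), estimate P(product B bought | product A bought) for every ordered pair.
    import numpy as np

    purchases = np.array([[1, 1, 0, 1],
                          [1, 0, 0, 1],
                          [0, 1, 1, 0],
                          [1, 1, 1, 1],
                          [0, 0, 1, 1]], dtype=float)

    counts_a = purchases.sum(axis=0)                       # times each product was bought
    joint = purchases.T @ purchases                        # co-purchase counts
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = np.where(counts_a[:, None] > 0, joint / counts_a[:, None], 0.0)
    print(cond[0])  # P(product j bought | product 0 bought) for each j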
APA, Harvard, Vancouver, ISO, and other styles
22

Eynon, James R. "Comparison of Logistic Force of Mortality Models for Predicting Life Table Probabilities of Death: A Simulation-Based Approach." Youngstown State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1329508121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Raeside, Robert. "Modelling and forecasting human populations using sigmoid models." Thesis, Edinburgh Napier University, 1987. http://researchrepository.napier.ac.uk/Output/1053286.

Full text
Abstract:
Early this century "S-shaped" curves, sigmoids, gained popularity among demographers. However, by 1940, the approach had "fallen out of favour", being criticised for giving poor results and having no theoretical validity. It was also considered that models of total population were of little practical interest, the main forecasting procedure currently adopted being the bottom-up "cohort-component" method. In the light of poor forecasting performance from component methods, a re-assessment is given in this thesis of the use of simple trend models. A suitable means of fitting these models to census data is developed, using a non-linear least squares algorithm based on minimisation of a proportionately weighted residual sum of squares. It is demonstrated that useful models can be obtained from which, by using a top-down methodology, component populations and vital components can be derived. When these models are recast in a recursive parameterisation, it is shown that forecasts can be obtained which, it is argued, are superior to existing official projections. Regarding theoretical validity, it is argued that sigmoid models relate closely to Malthusian theory and give a mathematical statement of the demographic transition. In order to judge the suitability of extrapolating from sigmoid models, a framework using Catastrophe Theory is developed. It is found that such a framework allows one qualitatively to model population changes resulting from subtle changes in influencing variables. The use of Catastrophe Theory has advantages over conventional demographic models as it allows a more holistic approach to population modelling.
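As an editorial sketch of the weighted non-linear least squares fit described above (synthetic data and invented parameter values, not the thesis's census series):

    # Minimal sketch: fitting a logistic (sigmoid) growth curve by weighted non-linear
    # least squares, in the spirit of the proportionately weighted residual sum of
    # squares described in the abstract. All values are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1.0 + np.exp(-r * (t - t0)))

    years = np.arange(1901, 2001, 10, dtype=float)
    noise = 1 + 0.01 * np.random.default_rng(2).normal(size=years.size)
    pop = logistic(years, 60.0, 0.05, 1950.0) * noise

    # sigma proportional to the observations gives proportional weighting of residuals
    params, cov = curve_fit(logistic, years, pop, p0=[50.0, 0.1, 1940.0], sigma=pop)
    print(params)   # fitted carrying capacity K, growth rate r and midpoint t0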
APA, Harvard, Vancouver, ISO, and other styles
24

Yang, GuoLu. "Modèle de transport complet en rivière avec granulométrie étendue." Grenoble 1, 1989. http://www.theses.fr/1989GRE10011.

Full text
Abstract:
The variations of the water surface profiles and of the bed of alluvial rivers, in the case of complete sediment transport (bedload + suspension) with an extended grain-size distribution, are studied with a one-dimensional mathematical model. In this model, bedload and suspension are treated as two transport phenomena, with a source-sink term representing the exchange between them. The source-sink term is formulated by a stochastic exchange model considering three states: suspension, bedload and immobility; the state probabilities are obtained through a Markov chain process. The conceptual model of a "mixed layer" is introduced to reproduce armouring and sorting phenomena. The system of equations to be solved is analysed by the method of characteristics. A decoupled numerical solution of the system is presented. A new algorithm, ensuring the coupled computation of transport by convection-diffusion-reaction, is developed. Tests of the mathematical model are systematically carried out in order to examine its sensitivity and demonstrate its accuracy.
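As a small illustration of the three-state Markov chain underlying the stochastic exchange model (the transition probabilities below are invented):

    # Illustrative sketch with invented transition probabilities: stationary distribution
    # of the three sediment states (suspension, bedload, immobile) of a discrete Markov
    # chain, the kind of state-probability computation the exchange model relies on.
    import numpy as np

    P = np.array([[0.80, 0.15, 0.05],    # suspension -> suspension / bedload / immobile
                  [0.20, 0.60, 0.20],    # bedload
                  [0.05, 0.25, 0.70]])   # immobile

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()
    print(pi)   # long-run fraction of particles in each state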
APA, Harvard, Vancouver, ISO, and other styles
25

Alstermark, Olivia, and Evangelina Stolt. "Purchase Probability Prediction : Predicting likelihood of a new customer returning for a second purchase using machine learning methods." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184831.

Full text
Abstract:
When a company evaluates a customer as a potential prospect, one of the key questions to answer is whether the customer will generate profit in the long run. A possible step towards answering this question is to predict the likelihood of the customer returning to the company again after the initial purchase. The aim of this master thesis is to investigate the possibility of using machine learning techniques to predict the likelihood of a new customer returning for a second purchase within a certain time frame. To investigate to what degree machine learning techniques can be used to predict the probability of return, a number of different model setups of Logistic Lasso, Support Vector Machine and Extreme Gradient Boosting are tested. Model development is performed to ensure well-calibrated probability predictions and to possibly overcome the difficulty that follows from an imbalanced ratio of returning and non-returning customers. Throughout the thesis work, a number of actions are taken in order to account for data protection. One such action is to add noise to the response feature, ensuring that the true fraction of returning and non-returning customers cannot be derived. To further guarantee data protection, axis values of evaluation plots are removed and evaluation metrics are scaled. Nevertheless, it is perfectly possible to select the superior model out of all investigated models. The results obtained show that the best performing model is a Platt calibrated Extreme Gradient Boosting model, which has much higher performance than the other models with regard to the considered evaluation metrics, while also providing predicted probabilities of high quality. Further, the results indicate that the setups investigated to account for imbalanced data do not improve model performance. The main conclusion is that it is possible to obtain probability predictions of high quality for new customers returning to a company for a second purchase within a certain time frame, using machine learning techniques. This provides a powerful tool for a company when evaluating potential prospects.
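As an editorial sketch of a Platt-calibrated Extreme Gradient Boosting setup of the kind described above (using scikit-learn and xgboost as assumed libraries; the data and hyperparameters are placeholders, not the thesis's protected data or tuned values):

    # Sketch under assumptions: a Platt-calibrated gradient boosting classifier on
    # synthetic, imbalanced data. "sigmoid" in CalibratedClassifierCV is Platt scaling.
    from sklearn.datasets import make_classification
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    base = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
    model = CalibratedClassifierCV(base, method="sigmoid", cv=3)
    model.fit(X_train, y_train)
    print(model.predict_proba(X_test)[:5, 1])   # calibrated return probabilities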
APA, Harvard, Vancouver, ISO, and other styles
26

De, Scheemaekere Xavier. "Essays in mathematical finance and in the epistemology of finance." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209938.

Full text
Abstract:
The goal of this thesis in finance is to combine the use of advanced mathematical methods with a return to foundational economic issues. In that perspective, I study generalized rational expectations and asset pricing in Chapter 2, and a converse comparison principle for backward stochastic differential equations with jumps in Chapter 3. Since the use of stochastic methods in finance is an interesting and complex issue in itself - if only to clarify the difference between the use of mathematical models in finance and in physics or biology - I also present a philosophical reflection on the interpretation of mathematical models in finance (Chapter 4). In Chapter 5, I conclude the thesis with an essay on the history and interpretation of mathematical probability - to be read while keeping in mind the fundamental role of mathematical probability in financial models.
Doctorat en Sciences économiques et de gestion
APA, Harvard, Vancouver, ISO, and other styles
27

Mathema, Najma. "Predicting Plans and Actions in Two-Player Repeated Games." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8683.

Full text
Abstract:
Artificial intelligence (AI) agents will need to interact with both other AI agents and humans. One way to enable effective interaction is to create models of associates to help to predict the modeled agents' actions, plans, and intentions. If AI agents are able to predict what other agents in their environment will be doing in the future and can understand the intentions of these other agents, the AI agents can use these predictions in their planning, decision-making and assessing their own potential. Prior work [13, 14] introduced the S# algorithm, which is designed as a robust algorithm for many two-player repeated games (RGs) to enable cooperation among players. Because S# generates actions, has (internal) experts that seek to accomplish an internal intent, and associates plans with each expert, it is a useful algorithm for exploring intent, plan, and action in RGs. This thesis presents a graphical Bayesian model for predicting actions, plans, and intents of an S# agent. The same model is also used to predict human action. The actions, plans and intentions associated with each S# expert are (a) identified from the literature and (b) grouped by expert type. The Bayesian model then uses its transition probabilities to predict the action and expert type from observing human or S# play. Two techniques were explored for translating probability distributions into specific predictions: a Maximum A Posteriori (MAP) approach and an Aggregation approach. The Bayesian model was evaluated for three RGs (Prisoner's Dilemma, Chicken and Alternator) as follows. Prediction accuracy of the model was compared to predictions from machine learning models (J48, Multilayer Perceptron and Random Forest) as well as from the fixed strategies presented in [20]. Prediction accuracy was obtained by comparing the model's predictions against the actual player's actions. Accuracy for plan and intent prediction was measured by comparing predictions to the actual plans and intents followed by the S# agent. Since the plans and the intents of human players were not recorded in the dataset, this thesis does not measure the accuracy of the Bayesian model against actual human plans and intents. Results show that the Bayesian model effectively models the actions, plans, and intents of the S# algorithm across the various games. Additionally, the Bayesian model outperforms other methods for predicting human actions. When the games do not allow players to communicate using so-called "cheap talk", the MAP-based predictions are significantly better than Aggregation-based predictions. There is no significant difference in the performance of MAP-based and Aggregation-based predictions for modeling human behavior when cheap talk is allowed, except in the game of Chicken.
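As a toy illustration of the MAP and Aggregation prediction rules compared in this thesis (the posterior and action distributions below are invented):

    # Toy example: a posterior over three hypothetical expert types and each type's
    # action distribution. MAP picks the single most probable type and then its most
    # likely action; Aggregation averages action distributions over all types first.
    import numpy as np

    posterior = np.array([0.5, 0.3, 0.2])            # P(expert type | observed play)
    action_given_type = np.array([[0.7, 0.2, 0.1],   # cooperate / defect / alternate
                                  [0.1, 0.8, 0.1],
                                  [0.3, 0.3, 0.4]])

    map_type = posterior.argmax()
    map_action = action_given_type[map_type].argmax()          # MAP-based prediction

    aggregated = posterior @ action_given_type                 # marginal action distribution
    agg_action = aggregated.argmax()                           # Aggregation-based prediction
    print(map_action, agg_action, aggregated)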
APA, Harvard, Vancouver, ISO, and other styles
28

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Full text
Abstract:
Convolutional artificial neural networks can be applied to image-based object classification to inform automated actions, such as the handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial variety dropout regularization for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different from those of the dataset, results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. I suggest, for future development, updated architectures, automated hyperparameter search and leveraging the bountiful unlabeled data potentially available from production lines.
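As a minimal sketch of the distillation idea mentioned above (invented logits and temperature, numpy only):

    # Soften the averaged ensemble (teacher) predictions with a temperature and score
    # the student against them with cross-entropy; a training loop would minimise this.
    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    teacher_logits = np.array([[4.0, 1.0, 0.5], [0.2, 3.5, 1.0]])   # ensemble average, 2 samples
    student_logits = np.array([[2.5, 1.5, 0.8], [0.5, 2.0, 1.5]])

    T = 4.0
    soft_targets = softmax(teacher_logits, T)
    student_probs = softmax(student_logits, T)
    distill_loss = -(soft_targets * np.log(student_probs)).sum(axis=-1).mean() * T**2
    print(distill_loss)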
APA, Harvard, Vancouver, ISO, and other styles
29

Utria, Valdes Jaime Antonio 1988. "Transição de fase para um modelo de percolação dirigida na árvore homogênea." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307034.

Full text
Abstract:
Advisor: Élcio Lebensztayn
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: The abstract is available in the full text of the digital thesis.
Master's
Statistics
Master in Statistics
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Chia-Jeng. "Hydro-climatic forecasting using sea surface temperatures." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/48974.

Full text
Abstract:
A key determinant of atmospheric circulation patterns and regional climatic conditions is sea surface temperature (SST). This has been the motivation for the development of various teleconnection methods aiming to forecast hydro-climatic variables. Among such methods are linear projections based on teleconnection gross indices (such as the ENSO, IOD, and NAO) or leading empirical orthogonal functions (EOFs). However, these methods deteriorate drastically if the predefined indices or EOFs cannot account for climatic variability in the region of interest. This study introduces a new hydro-climatic forecasting method that identifies SST predictors in the form of dipole structures. An SST dipole that mimics major teleconnection patterns is defined as a function of average SST anomalies over two oceanic areas of appropriate sizes and geographic locations. The screening process of SST-dipole predictors is based on an optimization algorithm that sifts through all possible dipole configurations (with progressively refined data resolutions) and identifies dipoles with the strongest teleconnection to the external hydro-climatic series. The strength of the teleconnection is measured by the Gerrity Skill Score. The significant dipoles are cross-validated and used to generate ensemble hydro-climatic forecasts. The dipole teleconnection method is applied to the forecasting of seasonal precipitation over the southeastern US and East Africa, and the forecasting of streamflow-related variables in the Yangtze and Congo Rivers. These studies show that the new method is indeed able to identify dipoles related to well-known patterns (e.g., ENSO and IOD) as well as to quantify more prominent predictor-predictand relationships at different lead times. Furthermore, the dipole method compares favorably with existing statistical forecasting schemes. An operational forecasting framework to support better water resources management through coupling with detailed hydrologic and water resources models is also demonstrated.
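As an editorial sketch of the dipole predictor itself (random stand-in data; the optimization over dipole configurations and the Gerrity Skill Score are not reproduced):

    # A dipole index is the difference between average SST anomalies over two boxes,
    # correlated here against a synthetic seasonal precipitation series.
    import numpy as np

    rng = np.random.default_rng(3)
    years, nlat, nlon = 40, 30, 60
    sst_anom = rng.normal(size=(years, nlat, nlon))       # stand-in SST anomaly fields
    precip = rng.normal(size=years)                       # stand-in predictand series

    def dipole_index(field, box_a, box_b):
        (a0, a1, b0, b1), (c0, c1, d0, d1) = box_a, box_b
        return field[:, a0:a1, b0:b1].mean(axis=(1, 2)) - field[:, c0:c1, d0:d1].mean(axis=(1, 2))

    index = dipole_index(sst_anom, (0, 10, 0, 20), (15, 25, 30, 50))
    print(np.corrcoef(index, precip)[0, 1])   # strength of the candidate teleconnection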
APA, Harvard, Vancouver, ISO, and other styles
31

MOUSAVI, NADOSHANI SEYED SAEID. "Composition des lois élémentaires en hydrologie régionale : application à l'étude des régimes de crue." Grenoble 1, 1997. http://www.theses.fr/1997GRE10165.

Full text
Abstract:
A hydrological event is often defined by several random variables exhibiting a certain degree of dependence. The probabilistic study of such an event then requires the composition of probability laws. We considered two application cases: 1) the extrapolation of the distribution of the peak discharge and of the threshold discharge; 2) the estimation of the flood-discharge quantile downstream of a confluence. We tested several bivariate functions: the Farlie-Gumbel-Morgenstern, Farlie and Hashino models, using peaks-over-threshold samples. For the latter model, we use the correlation coefficient obtained with the totality of the information (concomitant and non-concomitant events). The performance of the composition models was tested on real data, then on data simulated using the SHYPRE daily rainfall generation model, the GR4J rainfall-runoff model and a model of spatial dependence of rainfall. Finally, we studied the inflow hydrographs to be injected into a hydraulic model, so as to remain homogeneous in frequency along the entire course of the river.
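For reference, the Farlie-Gumbel-Morgenstern family tested above has the standard bivariate form below (dependence parameter \theta with |\theta| \le 1; the notation is generic, not specific to the thesis):

    C_\theta(u,v) = u\,v\,\bigl[1+\theta\,(1-u)(1-v)\bigr],
    \qquad
    F_{X,Y}(x,y) = C_\theta\bigl(F_X(x),\,F_Y(y)\bigr).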
APA, Harvard, Vancouver, ISO, and other styles
32

Dangauthier, Pierre-Charles. "Fondations, méthode et applications de l'apprentissage bayésien." Phd thesis, Grenoble INPG, 2007. http://tel.archives-ouvertes.fr/tel-00267643.

Full text
Abstract:
The field of machine learning aims at creating synthetic agents that improve their performance with experience. To improve, these agents extract statistical regularities from uncertain data and update their model of the world. Bayesian probabilities are a rational tool for addressing the learning problem. However, since this problem is often hard, solutions offering a trade-off between accuracy and speed must be implemented. This work presents the Bayesian learning method, its philosophical foundations and several innovative applications. We first address questions of parameter learning. In this setting we study two data-analysis problems with hidden variables. We first propose a Bayesian method for ranking chess players that appreciably improves on the Elo system. The resulting ranking makes it possible to answer interesting questions, such as who was the best chess player of all time. We also study a collaborative filtering system whose goal is to predict users' movie tastes from their past preferences. The second part of our work concerns model learning. First we consider the selection of relevant variables in the context of a robotic application. From a cognitive point of view, this selection allows the robot to transfer its knowledge from one sensorimotor domain to another. Finally, we propose a method for automatically discovering a new hidden variable in order to better model a robot's environment.
APA, Harvard, Vancouver, ISO, and other styles
33

Maire, F. "Détection et classification de cibles multispectrales dans l'infrarouge." Phd thesis, Telecom ParisTech, 2014. http://tel.archives-ouvertes.fr/tel-01018701.

Full text
Abstract:
Protection systems for sensitive sites must be able to detect potential threats sufficiently in advance to allow a defense strategy to be deployed. With this in mind, aircraft detection and recognition methods based on multispectral infrared images must be suited to low-resolution images and be robust to the spectral and spatial variability of the targets. In this thesis we develop statistical detection and recognition methods for aircraft that satisfy these constraints. First, we specify an anomaly-detection method for multispectral images that combines a spectral likelihood computation with a study of the level sets of the Mahalanobis transform of the image. This method requires no prior information about the aircraft and allows us to identify the images containing targets. These images are then regarded as realizations of a statistical model of observations fluctuating spectrally and spatially around unknown characteristic shapes. The parameters of this model are estimated with a new unsupervised sequential learning methodology for models with missing data that we developed. This model ultimately allows us to propose a target-recognition method based on the maximum a posteriori estimator. The encouraging results, in detection as well as in classification, justify the interest of developing devices for acquiring multispectral images. These methods also allowed us to identify the groupings of spectral bands that are optimal for detecting and recognizing low-resolution aircraft in the infrared.
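The Mahalanobis-transform step described above can be illustrated with a generic anomaly detector of the RX type; the sketch below is an assumption-laden stand-in for the thesis's method, flagging pixels whose spectra are far from the background statistics in Mahalanobis distance on a synthetic multispectral image.

```python
# Sketch of Mahalanobis-distance anomaly detection on a toy multispectral
# image (close in spirit to the classical RX detector, not the exact method
# of the thesis): pixels far from the background statistics are flagged.
import numpy as np

rng = np.random.default_rng(1)
H, W, B = 64, 64, 5                               # toy image, B spectral bands
image = rng.normal(0.0, 1.0, size=(H, W, B))
image[30:33, 40:43] += 4.0                        # implant a small bright "target"

pixels = image.reshape(-1, B)
mu = pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)
cov_inv = np.linalg.inv(cov)

diff = pixels - mu
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distance
anomaly_map = d2.reshape(H, W)

# Threshold the upper tail of the distance map (a level-set style decision).
threshold = np.quantile(anomaly_map, 0.999)
print("flagged pixels:", np.argwhere(anomaly_map > threshold)[:5])
```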
APA, Harvard, Vancouver, ISO, and other styles
34

Ben, Daoued Amine. "Modélisation de la conjonction pluie-niveau marin et prise en compte des incertitudes et de l’impact du changement climatique : application au site du Havre." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2528.

Full text
Abstract:
The modeling of combinations of flood hazard phenomena is a current issue for the scientific community, which is primarily interested in urban and nuclear sites. Indeed, it is very likely that a deterministic approach exploring several scenarios has certain limits, because such deterministic scenarios often ensure excessive conservatism. Probabilistic approaches provide additional precision by relying on statistics and probabilities to complement deterministic approaches. These probabilistic approaches aim to identify and combine many possible hazard scenarios in order to cover many possible sources of risk. The Probabilistic Flood Hazard Assessment (PFHA) proposed in this thesis characterizes one or more quantities of interest (water level, volume, duration of immersion, etc.) at different points of interest of a site, based on the distributions of the different flood hazard phenomena as well as the characteristics of the site. The main steps of the PFHA are: i) screening of the possible phenomena (rainfall, sea level, waves, etc.); ii) identification and probabilization of the parameters representative of the selected flood phenomena; iii) propagation of these phenomena from their sources to the points of interest on the site; iv) construction of hazard curves by aggregating the contributions of the flood phenomena. Uncertainties are an important topic of the thesis insofar as they are taken into account in all the steps of the probabilistic approach. The work of this thesis is based on the study of the conjunction of rainfall and sea level and provides a new method for taking into account the temporal phase shift between the phenomena (coincidence). An aggregation model has been developed to combine the contributions of the different flood phenomena. The question of uncertainties has been studied, and a method based on the theory of belief functions has been used because it offers various advantages (faithful modeling in cases of total ignorance and lack of information, the possibility of combining information of different origins and natures, etc.). The proposed methodology is applied to the site of Le Havre, in France.
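Step (iv) above, the aggregation of hazard curves, can be illustrated under the simplifying assumption that the contributing phenomena are independent; the thesis itself treats dependence, coincidence and epistemic uncertainty far more carefully. The curves, levels and parameters below are invented.

```python
# Sketch of aggregating per-phenomenon hazard curves into a single hazard
# curve, assuming independence between phenomena (a strong simplification
# relative to the thesis).
import numpy as np

levels = np.linspace(0.0, 5.0, 51)            # water levels at the point of interest (m)
# Annual exceedance probabilities contributed by each phenomenon (illustrative shapes).
p_rain = np.exp(-levels / 0.8)
p_sea  = np.exp(-levels / 1.2)

# P(level exceeded by at least one phenomenon) = 1 - prod(1 - P_i) under independence.
p_aggregated = 1.0 - (1.0 - p_rain) * (1.0 - p_sea)
for z in (1.0, 2.0, 3.0):
    i = np.searchsorted(levels, z)
    print(f"z = {z} m  ->  annual exceedance ~ {p_aggregated[i]:.3e}")
```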
APA, Harvard, Vancouver, ISO, and other styles
35

Lardin, Pauline. "Estimation de synchrones de consommation électrique par sondage et prise en compte d'information auxiliaire." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00842199.

Full text
Abstract:
In this thesis, we are interested in estimating the mean electricity consumption curve. Since the variables under study are functional, and since storage capacity is limited and transmission costs are high, we focus on survey-sampling estimation methods, an interesting alternative to signal-compression techniques. We extend to the functional framework estimation methods that take the available auxiliary information into account in order to improve the precision of the Horvitz-Thompson estimator of the mean electricity consumption curve. The first method uses the auxiliary information at the estimation stage: the mean curve is estimated with an estimator based on a functional regression model. The second uses it at the sampling-design stage: we use a high-entropy unequal-probability design together with the functional Horvitz-Thompson estimator. An estimate of the covariance function is obtained by extending Hájek's covariance approximation to the functional framework. We rigorously justify their use through an asymptotic study. For each of these methods, we give, under weak assumptions on the inclusion probabilities and on the regularity of the trajectories, the convergence properties of the estimator of the mean curve as well as of its covariance function. We also establish a functional central limit theorem. In order to assess the quality of our estimators, we compare two methods for constructing confidence bands on a dataset of real load curves. The first relies on the simulation of Gaussian processes; an asymptotic justification of this method is given for each of the proposed estimators. The second uses bootstrap techniques that have been adapted to take the functional nature of the data into account.
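The functional Horvitz-Thompson estimator at the core of the abstract has a short generic form: each sampled load curve is weighted by the inverse of its inclusion probability. The sketch below uses simulated curves and Poisson sampling for simplicity; it is not the thesis's estimator with auxiliary information.

```python
# Sketch of the (functional) Horvitz-Thompson estimator of a mean load curve:
# the mean curve of a population of N meters is estimated from a sample s by
# weighting each sampled curve with 1/pi_k. Curves and inclusion probabilities
# below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(2)
N, T = 1000, 48                                    # population size, points per curve
t = np.linspace(0, 1, T)
curves = 5 + np.sin(2 * np.pi * t) + rng.normal(0, 0.5, size=(N, T))

# Unequal inclusion probabilities, e.g. proportional to a size measure, scaled to expected size n.
n = 100
size = rng.uniform(0.5, 2.0, N)
pi = n * size / size.sum()

sampled = rng.random(N) < pi                       # Poisson sampling for simplicity
ht_mean_curve = (curves[sampled] / pi[sampled, None]).sum(axis=0) / N

print("true mean at t=0:", curves[:, 0].mean().round(3),
      " HT estimate:", ht_mean_curve[0].round(3))
```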
APA, Harvard, Vancouver, ISO, and other styles
36

Parr, Bouberima Wafia. "Modèles de mélange de von Mises-Fisher." Phd thesis, Université René Descartes - Paris V, 2013. http://tel.archives-ouvertes.fr/tel-00987196.

Full text
Abstract:
Nowadays, directional data are present in most fields, in many forms, with different aspects and at large sizes/dimensions, hence the need for efficient methods to study the problems raised in this area. To address the clustering problem, the probabilistic approach has become a classical one, based on a simple idea: since the g classes differ from one another, each is assumed to follow a known probability law whose parameters generally differ from one class to another; this is called a mixture model of probability laws. Under this assumption, the initial data are regarded as a sample of a d-dimensional random variable whose density is a mixture of g probability distributions, each specific to one class. In this thesis we are interested in the clustering of directional data, using the most suitable clustering methods under two approaches, geometric and probabilistic: in the first, by exploring and comparing kmeans-type algorithms; in the second, by tackling directly the estimation of the parameters from which a partition is deduced through the maximization of the log-likelihood, represented by the EM algorithm. For the latter approach, we take up the mixture model of von Mises-Fisher distributions and propose variants of the EMvMF algorithm, namely CEMvMF, SEMvMF and SAEMvMF. In the same context, we address the problem of finding the number of components and choosing the mixture model, using several information criteria: Bic, Aic, Aic3, Aic4, Aicc, Aicu, Caic, Clc, Icl-Bic, Ll, Icl, Awe. We conclude our study with a comparison of the vMF model with a simpler exponential model; originally this model assumes that the data set is distributed on a hypersphere of a predefined radius ρ greater than or equal to one. We propose an improvement of the exponential model based on an estimation step for the radius ρ within the NEM algorithm. This allowed us to obtain better results in most of our applications, by proposing new variants of the NEM algorithm, namely NEMρ, NCEMρ and NSEMρ. The algorithms proposed in this work were tested on a variety of textual data, genetic data and data simulated from the von Mises-Fisher (vMF) model. These applications gave us a better understanding of the different approaches studied throughout this thesis.
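Of the two approaches mentioned above, the geometric kmeans-type one is the easiest to sketch: spherical k-means assigns unit vectors by cosine similarity and renormalizes cluster mean directions. The EMvMF variants would add von Mises-Fisher densities and concentration estimation on top of this skeleton; the data below are synthetic and the code is only a minimal illustration.

```python
# Minimal spherical k-means sketch, i.e. the "geometric" kmeans-type approach
# the abstract contrasts with EM for von Mises-Fisher mixtures. Data points are
# unit vectors; each cluster is represented by a unit mean direction.
import numpy as np

def spherical_kmeans(X, g, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    centers = X[rng.choice(len(X), g, replace=False)]
    for _ in range(n_iter):
        labels = np.argmax(X @ centers.T, axis=1)            # assign by cosine similarity
        for k in range(g):
            members = X[labels == k]
            if len(members):
                m = members.sum(axis=0)
                centers[k] = m / np.linalg.norm(m)            # renormalized mean direction
    return labels, centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.2, size=(100, 3)) for m in ([1, 0, 0], [0, 1, 0])])
labels, centers = spherical_kmeans(X, g=2)
print(np.round(centers, 2))
```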
APA, Harvard, Vancouver, ISO, and other styles
37

Huet, Alexis. "Méthodes particulaires et vraisemblances pour l'inférence de modèles d'évolution avec dépendance au contexte." Phd thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-01058827.

Full text
Abstract:
This thesis is devoted to the inference of context-dependent stochastic models of DNA evolution, focusing specifically on the RN95+YpR class of stochastic models. This class of models relies on reinforcing the occurrence rates of certain substitutions depending on the local context, which introduces dependence phenomena in the evolution of the different sites of the DNA sequence. Because of this dependence, the direct computation of the likelihood of the observed sequences involves matrices of large dimension and is generally intractable. By means of encodings specific to the RN95+YpR class, we exhibit new spatial dependence structures for these models, associated with the evolution of DNA sequences over their whole evolutionary history. This makes it possible, in particular, to use particle-based numerical methods, developed in the framework of hidden Markov models, in order to obtain consistent approximations of the desired likelihood. Another type of likelihood approximation, based on composite likelihoods, is also introduced. These likelihood-approximation methods are implemented in C++ code. They are applied to simulated data in order to study some of their properties empirically, and to genomic data, notably for the purpose of comparing evolution models.
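The particle-based likelihood approximation mentioned above can be illustrated on a toy hidden Markov model. The bootstrap filter below estimates the log-likelihood of a linear-Gaussian state-space model; it stands in for the far more structured RN95+YpR encodings of the thesis, and every model constant is an invented example value.

```python
# Sketch of a bootstrap particle filter estimating the log-likelihood of a
# toy hidden Markov model (linear-Gaussian so the example stays short).
import numpy as np

rng = np.random.default_rng(4)

# Toy state-space model: x_t = 0.9 x_{t-1} + N(0,1),  y_t = x_t + N(0, 0.5^2)
T = 100
x = np.zeros(T)
x[0] = rng.normal()
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = x + rng.normal(0, 0.5, T)

def particle_loglik(y, n_particles=500):
    particles = rng.normal(0, 1, n_particles)           # draw x_0 from its prior
    loglik = 0.0
    for t in range(len(y)):
        if t > 0:
            particles = 0.9 * particles + rng.normal(0, 1, n_particles)   # propagate
        logw = -0.5 * ((y[t] - particles) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))
        loglik += np.log(np.mean(np.exp(logw)))          # estimate of p(y_t | y_{1:t-1})
        w = np.exp(logw - logw.max())
        w /= w.sum()
        particles = rng.choice(particles, n_particles, p=w)               # resample
    return loglik

print("particle estimate of the log-likelihood:", round(particle_loglik(y), 2))
```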
APA, Harvard, Vancouver, ISO, and other styles
38

Bouselmi, Aych. "Options américaines et processus de Lévy." Phd thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-00944239.

Full text
Abstract:
Thanks to the studies carried out over the last three decades, financial markets have expanded considerably and have seen the emergence of a wide variety of derivative products. Among the most widespread are American options. An American option is, by definition, an option that may be exercised before the agreed maturity T. The most basic ones are the American put and call (respectively, options to sell with payoff (K - x)+ or to buy with payoff (x - K)+). The first and most substantial part of this thesis is devoted to the study of American options in exponential Lévy models. We start in a multidimensional framework and characterize the price of an American option, whose payoff belongs to a class of functions that are not necessarily bounded, by means of a variational inequality in the sense of distributions. We then study the general properties of the exercise region as well as of the free boundary. We refine these results further by studying, in particular, the exercise region of an American call on a basket of assets, where we characterize the limit exercise region (at maturity). In a second step, we place ourselves in a one-dimensional framework and study the behavior of the critical price (the function delimiting the exercise region) of an American put near maturity. In particular, we consider the case where the price does not converge to the strike K, first in a jump-diffusion model and then in a model where the Lévy process is pure-jump with behavior close to that of an α-stable process. The second part deals with the numerical approximation of the Credit Valuation Adjustment (CVA). We present a method based on Malliavin calculus, inspired by those used for American options. A study of the complexity of this method is also presented and compared with purely Monte Carlo methods and with regression-based methods.
APA, Harvard, Vancouver, ISO, and other styles
39

Broy, Perrine. "Evaluation de la sûreté de systèmes dynamiques hybrides complexes : application aux systèmes hydrauliques." Phd thesis, Université de Technologie de Troyes, 2014. http://tel.archives-ouvertes.fr/tel-01006308.

Full text
Abstract:
This work deals with estimating the reliability of gated spillways. The reliability behavior of these hydraulic systems depends both on discrete random events and on the evolution of a continuous deterministic variable: they are hybrid dynamical systems. For these systems, the feared event occurs when the reservoir level reaches a safety threshold. The dynamic reliability approach proposed in this thesis aims to take temporal information into account, from modeling through to the synthesis of reliability indicators for decision support, and develops two contributions: 1) the construction of a knowledge base dedicated to the description of spillways in terms of dynamic reliability, where each class of components is described by a hybrid stochastic automaton whose states are the different phases of its operation; 2) the monitoring of the Monte Carlo simulation, and the processing and analysis of the "histories" (the sequence of all activated states and their activation dates) obtained by simulation. This makes it possible to build classical reliability indicators (probability of occurrence of the feared event, identification of the dominant equivalent cut sets, etc.). Dynamic reliability indicators, based on the classification of histories according to the failure dates of the components involved and on the estimation of dynamic importance, are also proposed.
APA, Harvard, Vancouver, ISO, and other styles
40

Martin, Victorin. "Modélisation probabiliste et inférence par l'algorithme Belief Propagation." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://tel.archives-ouvertes.fr/tel-00867693.

Full text
Abstract:
We are interested in the construction and estimation, from incomplete observations, of models of real-valued random variables on a graph. These models must be suited to a non-standard regression problem in which the identity of the observed variables (and hence of the variables to be predicted) varies from one instance to another. The nature of the problem and of the available data leads us to model the network as a Markov random field, a choice justified by Jaynes' maximum entropy principle. The prediction tool chosen in this work is the Belief Propagation algorithm, in its classical or Gaussian version, whose simplicity and efficiency allow its use on large networks. After providing a new result on the local stability of the algorithm's fixed points, we study an approach based on a latent Ising model in which the dependencies between real variables are encoded through a network of binary variables. To this end, we propose a definition of these variables based on the cumulative distribution functions of the associated real variables. For the prediction step, it is necessary to modify the Belief Propagation algorithm in order to impose Bayesian-type constraints on the marginal distributions of the binary variables. The parameters of the model can easily be estimated from pairwise observations. This approach is in fact a way of solving the regression problem by working on quantiles. We also propose a greedy algorithm for estimating the structure and parameters of a Gaussian Markov random field, based on the Iterative Proportional Scaling algorithm. At each iteration this algorithm produces a new model whose likelihood, or an approximation of it in the case of incomplete observations, is higher than that of the previous model. Since this algorithm works by local perturbation, it is possible to impose spectral constraints ensuring better compatibility of the resulting models with the Gaussian version of Belief Propagation. The performance of the different approaches is illustrated by numerical experiments on synthetic data.
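A minimal illustration of the sum-product Belief Propagation algorithm discussed above, on a three-node binary chain where the beliefs coincide with the exact marginals. The potentials are arbitrary numbers chosen for the example and are unrelated to the latent Ising or Gaussian models of the thesis.

```python
# Sketch of discrete sum-product Belief Propagation on a 3-node chain MRF with
# binary variables; on a tree like this the normalized beliefs are exact marginals.
import numpy as np

# Unary potentials phi_i(x_i) and a shared pairwise potential psi(x_i, x_j).
phi = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
psi = np.array([[1.0, 0.5],
                [0.5, 1.0]])          # favours equal neighbouring states

def message(from_phi, incoming, pairwise):
    """m(x_to) = sum_{x_from} phi(x_from) * incoming(x_from) * psi(x_from, x_to)."""
    m = (from_phi * incoming) @ pairwise
    return m / m.sum()

# Chain 0 - 1 - 2: pass messages inwards then outwards.
ones = np.ones(2)
m_0_to_1 = message(phi[0], ones, psi)
m_2_to_1 = message(phi[2], ones, psi)
m_1_to_0 = message(phi[1], m_2_to_1, psi)
m_1_to_2 = message(phi[1], m_0_to_1, psi)

beliefs = [phi[0] * m_1_to_0, phi[1] * m_0_to_1 * m_2_to_1, phi[2] * m_1_to_2]
beliefs = [b / b.sum() for b in beliefs]
for i, b in enumerate(beliefs):
    print(f"P(x_{i}) ~", np.round(b, 3))
```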
APA, Harvard, Vancouver, ISO, and other styles
41

Rychnovsky, Mark. "Some Exactly Solvable Models And Their Asymptotics." Thesis, 2021. https://doi.org/10.7916/d8-3pga-pm90.

Full text
Abstract:
In this thesis, we present three projects studying exactly solvable models in the KPZ universality class and one project studying a generalization of the SIR model from epidemiology. The first chapter gives an overview of the results and how they fit into the study of KPZ universality when applicable. Each of the following 4 chapters corresponds to a published or submitted article. In the first project, we study an oriented first passage percolation model for the evolution of a river delta. We show that at any fixed positive time, the width of a river delta of length L approaches a constant times L^(2/3) with Tracy-Widom GUE fluctuations of order L^(4/9). This result can be rephrased in terms of a particle system generalizing pushTASEP. We introduce an exactly solvable particle system on the integer half line and show that after running the system for only finite time the particle positions have Tracy-Widom fluctuations. In the second project, we study n-point sticky Brownian motions: a family of n diffusions that evolve as independent Brownian motions when they are apart, and interact locally so that the set of coincidence times has positive Lebesgue measure with positive probability. These diffusions can also be seen as n random motions in a random environment whose distribution is given by so-called stochastic flows of kernels. For a specific type of sticky interaction, we prove exact formulas characterizing the stochastic flow and show that in the large deviations regime, the random fluctuations of these stochastic flows are Tracy-Widom GUE distributed. An equivalent formulation of this result states that the extremal particle among n sticky Brownian motions has Tracy-Widom distributed fluctuations in the large n and large time limit. These results are proved by viewing sticky Brownian motions as a diffusive limit of the exactly solvable beta random walk in random environment. In the third project, we study a class of probability distributions on the six-vertex model, which originates from the higher spin vertex model. For these random six-vertex models we show that the behavior near their base is asymptotically described by the GUE-corners process. In the fourth project, we study a model for the spread of an epidemic. This model generalizes the classical SIR model to account for inhomogeneity in the infectiousness and susceptibility of individuals in the population. A first statement of this model is given in terms of infinitely many coupled differential equations. We show that solving these equations can be reduced to solving a one dimensional first order ODE, which is easy to solve numerically. We use the explicit form of this ODE to characterize the total number of people who are ever infected before the epidemic dies out. This model is not related to the KPZ universality class.
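The fourth project generalizes the classical SIR model; for orientation, the sketch below integrates the classical, homogeneous baseline and reads off the fraction of the population ever infected. The inhomogeneous model of the thesis, and its reduction to a one-dimensional ODE, is not reproduced here; the parameter values are illustrative.

```python
# Classical SIR baseline (not the inhomogeneous generalization of the thesis):
# three coupled ODEs for susceptible, infected and recovered fractions.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1          # illustrative infection and recovery rates (per day)

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0], dense_output=True)
S_end = sol.y[0, -1]
print("fraction ever infected ~", round(1 - S_end, 3))   # the quantity studied in the abstract
```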
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Zhi. "Arbitrage Theory Under Portfolio Constraints." Thesis, 2020. https://doi.org/10.7916/d8-ca07-1312.

Full text
Abstract:
In this dissertation, we adopt the viability approach to mathematical finance developed in the book of Karatzas and Kardaras (2020), and extend it to settings where portfolio choice is constrained. We introduce in Chapter 2 the notions of supermartingale numeraire, supermartingale deflator, and viability. After that, we characterize all supermartingale deflators under conic constraints on portfolio choice. Most importantly, we prove a fundamental theorem for equity market structure and arbitrage theory under such conic constraints, to the effect that the existence of the supermartingale numeraire is equivalent to market viability. Further, and always under the assumption of viability, we establish some additional optimality properties of the supermartingale numeraire. At the end of Chapter 2, we pose and solve a problem of robust maximization of asymptotic growth, under some realistic assumptions. In Chapter 3, we state and prove the Optional Decomposition Theorem under conic constraints. Using this version of the Optional Decomposition Theorem, we deal with the problem of superhedging contingent claims. In Chapter 4, we consider yet another portfolio optimization problem. Under simultaneous conic constraints on portfolio choice, and drawdown constraints on the generated wealth, we try to maximize the long-term growth rate from investment. Application of the Azema-Yor transform allows us to show that the optimal portfolio for this optimization problem is a simple path transformation of a supermartingale numeraire portfolio. Some asymptotic properties of this portfolio are also discussed in Chapter 4.
APA, Harvard, Vancouver, ISO, and other styles
43

Sypkens, Roelf. "Risk properties and parameter estimation on mean reversion and Garch models." Diss., 2010. http://hdl.handle.net/10500/4049.

Full text
Abstract:
Most of the notations and terminological conventions used in this thesis are statistical. The aim in risk management is to describe the risk factors present in time series. In order to group these risk factors, one needs to distinguish between different stochastic processes and put them into different classes. The risk factors discussed in this thesis are fat tails and mean reversion. The presence of these risk factors first needs to be identified in the historical dataset, which I will refer to as the original dataset. The Ljung-Box-Pierce test will be used in this thesis to determine whether the distribution of the original dataset exhibits mean reversion or not.
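The Ljung-Box-Pierce statistic referred to above has a simple closed form, Q = n(n + 2) * sum_{k=1}^{h} r_k^2 / (n - k), compared against a chi-square law with h degrees of freedom. The sketch below applies it to white noise and to a strongly autocorrelated AR(1) series; the series and the lag choice are illustrative, not taken from the dissertation.

```python
# Sketch of the Ljung-Box(-Pierce) test: are the first h sample
# autocorrelations of a series jointly zero?
import numpy as np
from scipy.stats import chi2

def ljung_box(x, h=10):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    r = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, h + 1)])
    Q = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, h + 1)))
    return Q, chi2.sf(Q, df=h)          # test statistic and p-value

rng = np.random.default_rng(5)
white_noise = rng.standard_normal(500)
ar1 = np.zeros(500)
for t in range(1, 500):                  # strongly autocorrelated (mean-reverting) series
    ar1[t] = 0.8 * ar1[t - 1] + rng.standard_normal()

print("white noise:", ljung_box(white_noise))
print("AR(1):      ", ljung_box(ar1))
```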
Mathematical Sciences
M.Sc. (Applied Mathematics)
APA, Harvard, Vancouver, ISO, and other styles
44

"New results in probabilistic modeling." 2000. http://library.cuhk.edu.hk/record=b6073308.

Full text
Abstract:
Chan Ho-leung.
"December 2000."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (p. 154-[160]).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO, and other styles
45

Qian, Meifen. "Probability of informed trading around scheduled and unscheduled corporate announcements." Phd thesis, 2011. http://hdl.handle.net/1885/149798.

Full text
Abstract:
This thesis examines how public announcement events with different characteristics affect the probability of informed trading (PI). Using the Bollen, Smith and Whaley (2004) model of inferring PI directly from trades, we investigate the differences in PI between the pre-announcement period and the post-announcement period from 2002 to 2008 in the American stock market along two dimensions: whether announcements are scheduled, and other characteristics related to the content, such as payment methods, offer premium (takeover announcements) and earnings surprises (earnings announcements). We first focus on unscheduled takeover announcements. Our results show that PI (and the bid/ask spread) is significantly higher in the pre-announcement period compared to the post-announcement period. Further, we link the changes in PI to takeover announcement characteristics. We show that PI is significantly higher in the pre-event period for successful offers and cash offers, as well as offers with relatively high premiums. In contrast, the changes in PI after the announcement are not significant for unsuccessful offers, stock or mixed offers, or offers with relatively low premiums. We then investigate informed trading around scheduled earnings announcements. Results indicate that PI (and the bid/ask spread) before earnings announcements is not significantly higher than after the announcement. However, when breaking down the sample according to the size of the earnings surprise, we find significant incremental PI in the pre-earnings period when reported earnings contain big surprises with respect to the forecasts. Finally, we contrast PI around public announcements, conditioning on whether the announcement is scheduled or unscheduled. We find that the level of PI for scheduled earnings announcements is significantly higher than for unscheduled takeover announcements in the post-announcement period but not in the pre-announcement period. Our results are consistent with the argument that an anticipated announcement stimulates relatively more private information gathering, and hence the degree of information asymmetry around scheduled announcements might be higher. We also find that trading volume dries up in the pre-announcement period for scheduled earnings announcements. -- provided by Candidate.
APA, Harvard, Vancouver, ISO, and other styles
46

Mazumder, Tanvir, University of Western Sydney, of Science Technology and Environment College, and School of Engineering. "Application of the joint probability approach to ungauged catchments for design flood estimation." 2005. http://handle.uws.edu.au:8081/1959.7/22731.

Full text
Abstract:
Design flood estimation is often required in hydrologic practice. For catchments with sufficient streamflow data, design floods can be obtained using flood frequency analysis. For catchments with no or little streamflow data (ungauged catchments), design flood estimation is a difficult task. The currently recommended method in Australia for design flood estimation in ungauged catchments is known as the Probabilistic Rational Method. There are alternatives to this method, such as the Quantile Regression Technique or the Index Flood Method. All these methods give the flood peak estimate, but the full streamflow hydrograph is required for many applications. The currently recommended rainfall-based flood estimation method in Australia that can estimate the full streamflow hydrograph is known as the Design Event Approach. This considers the probabilistic nature of rainfall depth but ignores the probabilistic behaviour of other flood-producing variables such as rainfall temporal pattern and initial loss, and is thus likely to produce probability bias in the final flood estimates. The Joint Probability Approach is a superior method of design flood estimation which considers the probabilistic nature of the input variables (such as rainfall temporal pattern and initial loss) in the rainfall-runoff modelling. Rahman et al. (2002) developed a simple Monte Carlo simulation technique based on the principles of joint probability, which is applicable to gauged catchments. This thesis extends the Monte Carlo simulation technique to ungauged catchments. The Joint Probability Approach / Monte Carlo simulation technique requires identification of the distributions of the input variables to the rainfall-runoff model, e.g. rainfall duration, rainfall intensity, rainfall temporal pattern, and initial loss. For gauged catchments, these probability distributions are identified from observed rainfall and/or streamflow data. For application of the Joint Probability Approach to ungauged catchments, the distributions of the input variables need to be regionalised. This thesis, in particular, investigates the regionalisation of the distributions of rainfall duration and intensity. In this thesis, it is hypothesised that the distribution of storm duration can be described by an exponential distribution. The developed new technique of design flood estimation can provide the full hydrograph rather than only the peak value, as with the Probabilistic Rational Method and the Quantile Regression Technique. The developed technique can be further improved by the addition of new and improved regional estimation equations for the initial loss, continuing loss and storage delay parameter (k) as and when these become available.
(M. Eng.) (Hons)
APA, Harvard, Vancouver, ISO, and other styles
47

Jiang, Bin Computer Science &amp Engineering Faculty of Engineering UNSW. "Probabilistic skylines on uncertain data." 2007. http://handle.unsw.edu.au/1959.4/40712.

Full text
Abstract:
Skyline analysis is important for multi-criteria decision making applications. The data in some of these applications are inherently uncertain due to various factors. Although a considerable amount of research has been dedicated separately to efficient skyline computation, as well as modeling uncertain data and answering some types of queries on uncertain data, how to conduct skyline analysis on uncertain data remains an open problem at large. In this thesis, we tackle the problem of skyline analysis on uncertain data. We propose a novel probabilistic skyline model where an uncertain object may take a probability to be in the skyline, and a p-skyline contains all the objects whose skyline probabilities are at least p. Computing probabilistic skylines on large uncertain data sets is challenging. An uncertain object is conceptually described by a probability density function (PDF) in the continuous case, or in the discrete case a set of instances (points) such that each instance has a probability to appear. We develop two efficient algorithms, the bottom-up and top-down algorithms, of computing p-skyline of a set of uncertain objects in the discrete case. We also discuss that our techniques can be applied to the continuous case as well. The bottom-up algorithm computes the skyline probabilities of some selected instances of uncertain objects, and uses those instances to prune other instances and uncertain objects effectively. The top-down algorithm recursively partitions the instances of uncertain objects into subsets, and prunes subsets and objects aggressively. Our experimental results on both the real NBA player data set and the benchmark synthetic data sets show that probabilistic skylines are interesting and useful, and our two algorithms are efficient on large data sets, and complementary to each other in performance.
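The skyline probability defined above admits a direct, if inefficient, computation in the discrete case: an instance's probability is discounted by the chance that some instance of another object dominates it, and an object's skyline probability sums over its instances. The brute-force sketch below (two toy objects, "smaller is better" on both dimensions) illustrates the definition only, not the bottom-up or top-down pruning algorithms of the thesis.

```python
# Brute-force sketch of skyline probabilities for uncertain objects in the
# discrete case. Each object is a set of weighted instances; smaller values
# are better in every dimension.
import numpy as np

def dominates(a, b):
    """a dominates b if a <= b in every dimension and a < b in at least one."""
    return np.all(a <= b) and np.any(a < b)

def skyline_probabilities(objects):
    """objects: list of (instances, probs); the probs of each object sum to <= 1."""
    result = []
    for i, (inst_i, p_i) in enumerate(objects):
        obj_prob = 0.0
        for u, pu in zip(inst_i, p_i):
            prob = pu
            for j, (inst_j, p_j) in enumerate(objects):
                if j == i:
                    continue
                dominated_mass = sum(pv for v, pv in zip(inst_j, p_j) if dominates(v, u))
                prob *= 1.0 - dominated_mass
            obj_prob += prob
        result.append(obj_prob)
    return result

A = (np.array([[1.0, 4.0], [3.0, 3.0]]), [0.5, 0.5])
B = (np.array([[2.0, 2.0], [5.0, 5.0]]), [0.6, 0.4])
print(skyline_probabilities([A, B]))    # a p-skyline keeps the objects with probability >= p
```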
APA, Harvard, Vancouver, ISO, and other styles
48

Fazelnia, Ghazal. "Optimization for Probabilistic Machine Learning." Thesis, 2019. https://doi.org/10.7916/d8-jm7k-2k98.

Full text
Abstract:
We have access to a greater variety of datasets than at any time in history. Every day, more data is collected from various natural sources and digital platforms. Great advances in machine learning research over the past few decades have relied strongly on the availability of these datasets. However, analyzing them poses significant challenges that are mainly due to two factors. First, the datasets have complex structures with hidden interdependencies. Second, most of the valuable datasets are high dimensional and large scale. The main goal of a machine learning framework is to design a model that is a valid representative of the observations and to develop a learning algorithm to make inferences about unobserved or latent data based on the observations. Discovering hidden patterns and inferring latent characteristics in such datasets is one of the greatest challenges in machine learning research. In this dissertation, I investigate some of the challenges in modeling and algorithm design, and present my research results on how to overcome these obstacles. Analyzing data generally involves two main stages. The first stage is designing a model that is flexible enough to capture complex variation and latent structures in data and robust enough to generalize well to unseen data. Designing an expressive and interpretable model is one of the crucial objectives in this stage. The second stage involves training the learning algorithm on the observed data and measuring the accuracy of the model and learning algorithm. This stage usually involves an optimization problem whose objective is to tune the model to the training data and learn the model parameters. Finding a global optimum, or a sufficiently good local optimum, is one of the main challenges in this step. Probabilistic models are among the best-known models for capturing the data-generating process and quantifying uncertainties in data using random variables and probability distributions. They are powerful models that have been shown to be adaptive and robust and can scale well to large datasets. However, most probabilistic models have a complex structure, and training them commonly becomes challenging due to the presence of intractable integrals in the calculations. To remedy this, they require approximate inference strategies that often result in non-convex optimization problems. The optimization part ensures that the model is the best representative of the data or the data-generating process. The non-convexity of an optimization problem takes away any general guarantee of finding a globally optimal solution. It will be shown later in this dissertation that inference for a significant number of probabilistic models requires solving a non-convex optimization problem. One of the best-known methods for approximate inference in probabilistic modeling is variational inference. In the Bayesian setting, the target is to learn the true posterior distribution of the model parameters given the observations and the prior distributions. The main challenge involves marginalizing out all the other variables in the model except for the variable of interest. This high-dimensional integral is generally computationally hard, and for many models there is no known polynomial-time algorithm for calculating it exactly. Variational inference finds an approximate posterior distribution for Bayesian models where finding the true posterior distribution is analytically or numerically impossible.
It assumes a family of distributions for the estimate and finds the closest member of that family to the true posterior distribution using a distance measure. For many models, though, this technique requires solving a non-convex optimization problem that has no general guarantee of reaching a globally optimal solution. This dissertation presents a convex relaxation technique for dealing with the hardness of the optimization involved in the inference. The proposed convex relaxation technique is based on semidefinite optimization, which is generally applicable to polynomial optimization problems. I present the theoretical foundations and in-depth details of this relaxation in this work. Linear dynamical systems represent the functionality of many real-world physical systems. They can describe the dynamics of a linear time-varying observation which is controlled by a controller unit with quadratic cost function objectives. Designing distributed and decentralized controllers is the goal of many of these systems, which, computationally, results in a non-convex optimization problem. In this dissertation, I further investigate the issues arising in this area and develop a convex relaxation framework to deal with the optimization challenges. Setting the correct number of model parameters is an important aspect of a good probabilistic model. If there are only a few parameters, the model may fail to capture all the essential relations and components in the observations, while too many parameters may cause significant complications in learning or overfitting to the observations. Non-parametric models are suitable techniques for dealing with this issue. They allow the model to learn the appropriate number of parameters to describe the data and make predictions. In this dissertation, I present my work on designing Bayesian non-parametric models as powerful tools for learning representations of data. Moreover, I describe the algorithm that we derived to efficiently train the model on the observations and learn the number of model parameters. Later in this dissertation, I present my work on designing probabilistic models in combination with deep learning methods for representing sequential data. Sequential datasets comprise a significant portion of the resources in machine learning research. Designing models to capture dependencies in sequential datasets is of great interest and has a wide variety of applications in engineering, medicine and statistics. Recent advances in deep learning research have shown exceptional promise in this area. However, deep learning models lack interpretability in their general form. To remedy this, I present my work on combining probabilistic models with neural network models, which results in better performance and expressiveness of the results.
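The variational-inference idea summarized above can be made concrete on a deliberately easy model: a Gaussian mean with a Gaussian prior, where the exact posterior is available for checking. The sketch below maximizes the ELBO over a Gaussian variational family numerically; the model, data and parameter values are invented for illustration and are unrelated to the dissertation's relaxations.

```python
# Minimal variational-inference sketch: approximate the posterior of a Gaussian
# mean theta (prior N(0, tau^2), data N(theta, sigma^2)) by q = N(m, s^2),
# found by maximizing the ELBO numerically. The exact posterior is known for
# this conjugate toy model, so the answer can be checked.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
tau, sigma = 2.0, 1.0
x = rng.normal(1.5, sigma, size=20)          # observations

def neg_elbo(params):
    m, log_s = params
    s2 = np.exp(2 * log_s)
    # E_q[log p(x | theta)], using E_q[(x_i - theta)^2] = (x_i - m)^2 + s^2
    ell = -0.5 * np.sum(((x - m) ** 2 + s2) / sigma ** 2 + np.log(2 * np.pi * sigma ** 2))
    # E_q[log p(theta)] for the prior N(0, tau^2)
    prior = -0.5 * ((m ** 2 + s2) / tau ** 2 + np.log(2 * np.pi * tau ** 2))
    # Entropy of q = N(m, s^2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s2)
    return -(ell + prior + entropy)

opt = minimize(neg_elbo, x0=np.array([0.0, 0.0]))
m_hat, s_hat = opt.x[0], np.exp(opt.x[1])

# Exact posterior for comparison (conjugate Gaussian-Gaussian model).
post_var = 1.0 / (1.0 / tau ** 2 + len(x) / sigma ** 2)
post_mean = post_var * x.sum() / sigma ** 2
print("VI:   ", round(m_hat, 3), round(s_hat, 3))
print("exact:", round(post_mean, 3), round(np.sqrt(post_var), 3))
```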
APA, Harvard, Vancouver, ISO, and other styles
49

Kruger, Jan Walters. "Generalizing the number of states in Bayesian belief propagation, as applied to portfolio management." Thesis, 1996. https://hdl.handle.net/10539/26225.

Full text
Abstract:
A research report submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in partial fulfillment of the requirements for the degree of Master of' Science.
This research report describes the use of Pearl's algorithm in Bayesian belief networks to induce a belief network from a database. With a solid grounding in probability theory, the Pearl algorithm allows belief updating by propagating the likelihoods of leaf nodes (variables) and the prior probabilities. The Pearl algorithm was originally developed for binary variables, and a generalization to more states is investigated. The data used to test this new method, in a portfolio management context, are the return and various attributes of companies listed on the Johannesburg Stock Exchange (JSE). The results of this model are then compared to a linear regression model. The Bayesian method is found to perform better than the linear regression approach.
Andrew Chakane 2018
APA, Harvard, Vancouver, ISO, and other styles
50

"Asymptotic expansions of empirical likelihood in time series." 2009. http://library.cuhk.edu.hk/record=b5894189.

Full text
Abstract:
Liu, Li.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 41-44).
Abstract also in Chinese.
Chapter 1 Introduction
Chapter 1.1 Empirical Likelihood
Chapter 1.2 Empirical Likelihood for Dependent Data
Chapter 1.2.1 Spectral Method
Chapter 1.2.2 Blockwise Method
Chapter 1.3 Edgeworth Expansions and Bartlett Correction
Chapter 1.3.1 Coverage Errors
Chapter 1.3.2 Edgeworth Expansions
Chapter 1.3.3 Bartlett Correction
Chapter 2 Bartlett Correction for EL
Chapter 2.1 Empirical Likelihood in Time Series
Chapter 2.2 Stochastic Expansions of EL in Time Series
Chapter 2.3 Edgeworth Expansions of EL in Time Series
Chapter 2.3.1 Validity of the Formal Edgeworth Expansions
Chapter 2.3.2 Cumulant Calculations
Chapter 2.4 Main Results
Chapter 3 Simulations
Chapter 3.1 Confidence Region
Chapter 3.2 Coverage Error of Confidence Regions
Chapter 4 Conclusion and Future Work
Bibliography
APA, Harvard, Vancouver, ISO, and other styles