Follow this link to see other types of publications on the topic: Modèle Rate Theory.

Theses / dissertations on the topic "Modèle Rate Theory"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

See the 50 best works (theses / dissertations) for studies on the topic "Modèle Rate Theory".

Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if the abstract is included in the metadata.

Browse theses / dissertations from a wide range of scientific disciplines and compile an accurate bibliography.

1

Georgesco, Arthur. "Effet couplé de l'endommagement balistique et électronique dans UO₂ : rôle de la température d'irradiation". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP102.

Abstract:
In the reactor, UO₂ fuel is subjected to simultaneous irradiation by several particles and radiation types, including fission products, with all these phenomena occurring at high temperature (around 400 - 500 °C at the pellet periphery and 1000 - 1200 °C in the pellet center). On the atomic scale, this leads to ballistic damage (atomic displacements), mainly due to low-energy fission products, and electronic damage (ionizations and electronic excitations) due to high-energy particles. Ballistic damage results in the creation of unfaulted interstitial-type dislocation loops, a few nanometers to tens of nanometers in size, which evolve into tangled dislocation lines, as well as sub-nanometric to nanometric vacancy-type objects. Electronic damage, beyond a certain level of deposited energy (above 20 keV/nm), induces track formation. While the effects of ballistic and electronic energy losses in UO₂ are well documented, the coupling effects between these two processes, and especially the associated mechanisms, have only been studied at room temperature. However, the diffusion of point defects varies with temperature, and some defects or defect clusters may already be mobile at room temperature in UO₂. This difference in mobility may have a significant impact on their evolution mechanism, particularly in the case of the coupled effect between the two contributions. These initial results therefore need to be supplemented by examining the influence of irradiation temperature on this coupling. To achieve this, two approaches are considered. First, it is necessary to eliminate the effect of irradiation temperature by working at very low temperature, to better identify the defect evolution mechanisms at play during coupling. Second, once these mechanisms have been defined, it is worthwhile to work at higher temperatures, to get closer to reactor conditions. Single- and dual-beam ion irradiations of UO₂ samples were therefore carried out at different temperatures at the JANNuS Orsay and Saclay facilities. Transmission electron microscopy and Raman spectroscopy were used (in situ and ex situ) to study the evolution of extended defects and of the disorder related to point defects, respectively. A Rate Theory model was used in conjunction with the experimental results to identify the mechanisms involved in irradiation with or without the effect of temperature, and with or without the effect of electronic energy losses. The results show that the nucleation and growth mechanisms of dislocation loops are strongly impacted by the diffusion of point defects and/or defect clusters, unlike vacancy-type objects. This diffusion is activated either by temperature during irradiation, or by the electronic excitations/ionizations (inducing thermal spike effects) of high-energy ions during coupling. Temperature therefore has a major impact on the coupling between electronic and nuclear energy losses. Moreover, the effect of this coupling differs according to the irradiation mode (single or dual beam), resulting in very different microstructure evolutions. The various irradiations carried out, together with the use of the Rate Theory model, have made it possible to define the mechanisms at work in UO₂ under the coupled effect of irradiation temperature and of ballistic and electronic energy losses. This approach provides a better understanding of the behavior of nuclear fuel in reactors.
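
As context for the Rate Theory modelling named in this abstract: such models evolve defect populations through coupled balance equations for generation, recombination and absorption at sinks. A minimal generic sketch with invented parameter values, lacking the loop and cavity size classes a real UO₂ model would carry, might read:

```python
# Minimal generic rate-theory sketch: vacancy (Cv) and interstitial (Ci)
# concentrations under irradiation, with Frenkel-pair generation G,
# mutual recombination and absorption at fixed sinks. All parameter
# values are illustrative assumptions, not taken from the thesis.
from scipy.integrate import solve_ivp

G = 1e-6       # defect generation rate, assumed
R = 1e2        # vacancy-interstitial recombination coefficient, assumed
kDv = 1e-4     # sink strength x vacancy diffusivity, assumed
kDi = 1e-1     # sink strength x interstitial diffusivity, assumed

def rates(t, y):
    Cv, Ci = y
    recomb = R * Cv * Ci
    return [G - recomb - kDv * Cv,   # dCv/dt
            G - recomb - kDi * Ci]   # dCi/dt

sol = solve_ivp(rates, (0.0, 1e4), [0.0, 0.0], method="LSODA")
print(sol.y[:, -1])   # near-steady-state defect concentrations
```

A production model of this kind adds one equation per loop or cavity size class and temperature-dependent diffusivities; the two-equation form only illustrates the generation-recombination-sink balance.
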
2

Elhouar, Mikael. "Essays on interest rate theory". Doctoral thesis, Handelshögskolan i Stockholm, Finansiell Ekonomi (FI), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-451.

3

Götsch, Irina. "Libor market model theory and implementation". Saarbrücken VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2868878&prov=M&dok_var=1&dok_ext=htm.

4

Smith, P. N. "Structural models of the exchange rate : Theory and evidence". Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378873.

5

Zhang, Jiangxingyun. "International Portfolio Theory-based Interest Rate Models and EMU Crisis". Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1G011/document.

Abstract:
This thesis examines the specific role of volatility and co-volatility risks, alongside default risk, in the formation of long-term interest rates in the euro area. In particular, a two-country theoretical portfolio choice model is proposed to evaluate the volatility risk premia and their contribution to the contagion and flight-to-quality processes during different episodes of the sovereign debt crisis. This model also provides an opportunity to analyze the role of the ECB's asset purchases (QE) in the equilibrium of bond markets. Our empirical tests suggest that the ECB's QE programs from March 2015 onwards only accelerated a "defragmentation" of the euro area bond markets that had begun earlier in the crisis, with the introduction of the OMT.
6

Dogan, Aydan. "Two sector models of the real exchange rate". Thesis, University of Kent, 2016. https://kar.kent.ac.uk/54747/.

Abstract:
This thesis consists of three self-contained chapters. In the first chapter, we re-assess the problem of general equilibrium models in matching the behaviour of the real exchange rate. We do so by developing a two-country general equilibrium model with non-traded goods, home bias, incomplete markets and partial degrees of pass-through, as well as nominal rigidities in both the goods and labour markets. Our key finding is that presenting an encompassing model structure improves the performance of the model in addressing the persistence of the real exchange rate and its correlation with relative consumption, but this improvement comes at the expense of failing to replicate some other characteristics of the data; while the model does a good job of explaining the failure of international risk sharing and generates substantial real exchange rate persistence, it fails to match several other observed business cycle features of the data, such as the volatility of the real exchange rate and of consumption. In the second chapter of the thesis, we study the importance of the extensive margin of trade for UK export dynamics. During the great recession, UK exports fell by around 8% with respect to their trend, more than a standard general equilibrium model would predict. In this chapter, we ask whether an estimated two-country DSGE model with an extensive margin of trade can explain this drop and the main business cycle features of the UK economy. The extensive margin improves the overall performance of the model, but cannot improve substantially on replicating the behaviour of exports. Much of the trade collapse during the great recession can be explained by a shock to export entry costs associated with tighter financial conditions. Understanding trade balance dynamics plays a central role in studies of emerging market business cycles. In the last chapter, we investigate the driving sources of emerging market trade balance fluctuations by developing a two-country, two-sector international real business cycle model with investment and consumption goods sectors. We estimate the model on Mexican and US data and find that a slowly diffusing permanent investment-specific technology shock that originates in the US accounts for most of the trade balance variability in Mexico. This shock is also the key driver of business cycle fluctuations in Mexico.
7

Pang, Kin. "Calibration of interest rate term structure and derivative pricing models". Thesis, University of Warwick, 1997. http://wrap.warwick.ac.uk/36270/.

Abstract:
We argue that interest rate derivative pricing models are misspecified, so that when they are fitted to historical data they do not produce prices consistent with the market. Interest rate models have to be calibrated to prices to ensure consistency. There are few published works on calibration to derivatives prices and we make this the focus of our thesis. We show how short rate models can be calibrated to derivatives prices accurately with a second time-dependent parameter. We analyse the misspecification of the fitted models and its implications for other models. We examine the Duffie and Kan Affine Yield Model, a class of short rate models that appears to allow easier calibration. We show that, in fact, a direct calibration of Duffie and Kan Affine Yield Models is exceedingly difficult. We show that the non-negative subclass is equivalent to generalised Cox, Ingersoll and Ross models, which facilitates an indirect calibration of non-negative Duffie and Kan Affine Yield Models. We examine calibration of Heath, Jarrow and Morton models. We show, using some experiments, that Heath, Jarrow and Morton models cannot be calibrated quickly enough to be of practical use unless we restrict to special subclasses. We introduce the Martingale Variance Technique for improving the accuracy of Monte Carlo simulations. We examine calibration of Gaussian Heath, Jarrow and Morton models. We provide a new non-parametric calibration using the Gaussian Random Field Model of Kennedy as an intermediate step. We derive new approximate swaption pricing formulae for the calibration. We examine how to price resettable caps and floors with the market-Libor model. We derive a new relationship between resettable caplet and floorlet prices. We provide accurate approximations for the prices. We provide practical approximations to price resettable caplets and floorlets directly from quotes on standard caps and floors. We examine how to calibrate the market-Libor model.
8

Tsai, Angela C. F. "Valuation of Eurodollar futures contracts under alternative term structure models : theory and evidence". Thesis, University of Strathclyde, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366802.

9

Chen, Wei 1976. "Perceptual postfiltering for low bit rate speech coders". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112563.

Abstract:
Adaptive postfiltering has become a common part of speech coding standards based on the Linear Prediction Analysis-by-Synthesis algorithm to decrease audible coding noise. However, a conventional adaptive postfilter is based on empirical assumptions about masking phenomena, which sometimes make it hard to balance noise reduction against speech distortion.
This thesis introduces a novel perceptual postfiltering system for low bit rate speech coders. The proposed postfilter works at the decoder, as is the case for the conventional adaptive postfilter. Specific human auditory properties are considered in the postfilter design to improve speech quality. A Gaussian Mixture Model based Minimum Mean Squared Error estimation of the perceptual postfilter is performed using the information received at the decoder. Perceptual postfiltering is then applied to the reconstructed speech to improve its quality. Test results show that the proposed system gives better perceptual speech quality than conventional adaptive postfiltering.
10

Van, Wijck Tjaart. "Interest rate model theory with reference to the South African market". Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/3396.

Abstract:
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2006.
An overview of modern and historical interest rate model theory is given with the specific aim of derivative pricing. A variety of stochastic interest rate models are discussed within a South African market context. The various models are compared with respect to characteristics such as mean reversion, positivity of interest rates, the volatility structures they can represent, the yield curve shapes they can represent, and whether analytical bond and derivative prices can be found. The distribution of the interest rates implied by some of these models is also found under various measures. The calibration of these models also receives attention with respect to instruments available in the South African market. Problems associated with the calibration of the modern models are also discussed.
11

Ruan, Shiling. "Poisson race models theory and application in conjoint choice analysis /". Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1173204902.

12

Gyldberg, Ellinor, and Henrik Bark. "Type 1 error rate and significance levels when using GARCH-type models". Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375770.

Abstract:
The purpose of this thesis is to test whether the probability of falsely rejecting a true null hypothesis of a model intercept being equal to zero is consistent with the chosen significance level when the variance of the error term is modelled using GARCH (1,1), TGARCH (1,1) or IGARCH (1,1) models. We test this by estimating "Jensen's alpha" to evaluate alpha trading, using a Monte Carlo simulation based on historical data from the Standard & Poor's 500 Index and stocks in the Dow Jones Industrial Average Index. We evaluate over simulated daily data ranging over periods of 3 months, 6 months, and 1 year. Our results indicate that the GARCH and IGARCH models consistently reject a true null hypothesis less often than the selected 1%, 5%, or 10% level, whereas the TGARCH model consistently rejects a true null more often than the chosen significance level. Thus, there is a risk of incorrect inferences when using these GARCH-type models.
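
A size experiment of the kind described can be sketched as follows; the GARCH(1,1) parameter values and the use of the Python `arch` package are assumptions for illustration, not the authors' setup:

```python
# Sketch of a type I error (size) experiment for the mean intercept under
# GARCH(1,1) errors: simulate a zero-mean series (true intercept = 0), fit
# a constant-mean GARCH(1,1), count rejections of mu = 0 at the 5% level.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.05, 0.90   # assumed GARCH(1,1) parameters
n_obs, n_rep, level = 250, 500, 0.05    # ~1 trading year, 500 replications

def simulate_garch(n):
    """Simulate a zero-mean GARCH(1,1) series."""
    sigma2 = omega / (1 - alpha - beta)   # start at unconditional variance
    eps = np.empty(n)
    for t in range(n):
        e = rng.standard_normal() * np.sqrt(sigma2)
        eps[t] = e
        sigma2 = omega + alpha * e**2 + beta * sigma2
    return eps

rejections = 0
for _ in range(n_rep):
    y = simulate_garch(n_obs)
    res = arch_model(y, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
    rejections += res.pvalues["mu"] < level

print(f"empirical size: {rejections / n_rep:.3f} (nominal {level})")
```

An empirical size well below the nominal level would reproduce the under-rejection the thesis reports for GARCH and IGARCH.
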
13

Rhee, Joonhee. "Three models of the term structure of interest rates". Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/36336/.

Abstract:
In this dissertation, we consider the stochastic volatility of short rates, the jump property of short rates, and market expectation of changes in interest rates as the crucial factors in explaining the term structure of interest rates. In each chapter, we model the term structure of interest rates in accordance with these factors.
14

Leblon, Grégoire. "Quadratic term structure models of interest rates : theory, implementation and applications". Rennes 1, 2012. http://www.theses.fr/2012REN1G038.

Abstract:
Modeling the term structure of interest rates addresses a dual problem in finance. The first task is to replicate, at any point in time, the yield curve extracted from observed bond prices. The second is to capture its dynamics. To address these issues, many models have been developed. The purpose of this thesis is to explore one of them: the Quadratic model. Quadratic Term Structure Models first assume a quadratic relationship connecting the instantaneous interest rate and latent variables describing the state of the economy. Second, the latent variables are assumed to follow Ornstein-Uhlenbeck processes. Quadratic Term Structure Models were introduced to address structural problems recurrently encountered by other families of models. This thesis deepens the theoretical framework of Quadratic Term Structure Models in discrete time. We exploit these results to assess their ability to reproduce the term structure of interest rates. Their use in bond portfolio management is also investigated, theoretically and empirically. Finally, we study the price of European options written on bonds within this framework, providing exact and approximate analytical solutions.
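
For context, the generic specification that quadratic term structure models share can be written as follows (notation is ours, quoted from the standard literature rather than from the thesis):

```latex
% Generic quadratic term-structure specification, for context only:
\[
  r_t = \alpha + \beta^{\top} X_t + X_t^{\top} \Psi X_t ,
  \qquad
  dX_t = \kappa \, (\mu - X_t)\, dt + \Sigma \, dW_t ,
\]
% where X_t collects the latent factors following an Ornstein-Uhlenbeck
% process; choosing \Psi positive semidefinite (with a suitable \alpha)
% keeps the short rate bounded below, one of the structural problems the
% quadratic family was designed to address.
```
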
15

Lo, Tak-shing. "Two-body operators and correlation crystal field models /". [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13437549.

16

Adodo, Sophia. "THE FASHION RUNWAY THROUGH A CRITICAL RACE THEORY LENS". Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1461576556.

17

Woodard, Roger. "Bayesian hierarchical models for hunting success rates /". free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9951135.

18

盧德成 and Tak-shing Lo. "Two-body operators and correlation crystal field models". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210922.

19

Wolden, Bache Ida. "Econometrics of exchange rate pass-through /". Oslo : Unipub, 2007. http://www.gbv.de/dms/zbw/527973297.pdf.

20

Wong, Po-shing. "Some mixture models for the joint distribution of stock's return and trading volume /". [Hong Kong] : University of Hong Kong, 1991. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13009485.

21

Jackson, Zara. "Basal Metabolic Rate (BMR) estimation using Probabilistic Graphical Models". Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-384629.

Abstract:
Obesity is a growing problem globally. Currently, 2.3 billion adults are overweight, and this number is rising. The most common method for weight loss is calorie counting, in which a person should be in a calorie deficit to lose weight. Basal Metabolic Rate accounts for the majority of the calories a person burns in a day, and it is therefore a major contributor to accurate calorie counting. This paper uses a Dynamic Bayesian Network to estimate Basal Metabolic Rate (BMR) for a sample of 219 individuals from all Body Mass Index (BMI) categories. The data were collected through the Lifesum app. A comparison of the estimated BMR values was made with the commonly used Harris-Benedict equation, finding that food journaling is a sufficient method to estimate BMR. Next-day weight prediction was also computed based on the estimated BMR. The results showed that the Harris-Benedict equation produced more accurate predictions than the proposed metabolic model; more work is therefore necessary to find a model that accurately estimates BMR.
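
For reference, the Harris-Benedict baseline against which the model was compared is a closed-form equation. A sketch using the commonly cited revised coefficients (an assumption here, since the thesis entry does not specify which revision was used):

```python
# Harris-Benedict BMR estimate; coefficients are the commonly cited
# revised (Roza-Shizgal) form, quoted for illustration only.
def harris_benedict_bmr(weight_kg: float, height_cm: float,
                        age_years: float, sex: str) -> float:
    """Return estimated Basal Metabolic Rate in kcal/day."""
    if sex == "male":
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_years
    if sex == "female":
        return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_years
    raise ValueError("sex must be 'male' or 'female'")

print(round(harris_benedict_bmr(70, 175, 30, "male")))  # ~1696 kcal/day
```
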
22

Vlaseros, Vasileios. "Essays on strategic voting and political influence". Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9932.

Abstract:
Chapter 1: I attempt a detailed literature review on the passage from the probabilistic versions of the Condorcet Jury Theorem to models augmented by the concept of strategic agents, including both theoretical and relevant empirical work. In the first part, I explore the most influential relevant game-theoretic models and their main predictions. In the second part, I review what voting experiments have to say about these predictions, with a brief mention of the experiments' key methodological aspects. In the final part, I provide an attempt to map the recent strategic voting literature in terms of structure and scope. I close with a philosophical question on the exogeneity of a "correct" choice of a voting outcome, which is inherent in the current strategic voting literature. Chapter 2: I develop a two-stage game with individually costly political action and costless voting on a binary agenda where, in equilibrium, agents rationally cast honest votes in the voting stage. I show that a positive but sufficiently low individual cost of political action can lead to a loss in aggregate welfare for any electorate size. When the individual cost of political action is lower than the signalling gain, agents will engage in informative political action. In the voting stage, since everyone's signal is revealed, agents will unanimously vote for the same policy. Therefore, the result of the ballot will be exactly the same as the one without prior communication, but with the additional aggregate cost of political action. However, when agents have heterogeneous prior beliefs, society is large and the state of the world is sufficiently uncertain, a moderate individual cost of political action can induce informative collective action of only a subset of the members of society, which increases ex ante aggregate welfare relative to no political action. The size of the subset of agents engaging in collective action depends on the dispersion of prior opinions. Chapter 3: This chapter shows theoretically that hearing expert opinions can be a double-edged sword for decision making committees. We study a majoritarian voting game of common interest where committee members receive not only private information, but also expert information that is more accurate than private information and observed by all members. We identify three types of equilibria of interest, namely i) the symmetric mixed strategy equilibrium where each member randomizes between following the private and public signals should they disagree; ii) the asymmetric pure strategy equilibrium where a certain number of members always follow the public signal while the others always follow the private signal; and iii) a class of equilibria where a supermajority and hence the committee decision always follow the expert signal. We find that in the first two equilibria, the expert signal is collectively taken into account in such a way that it enhances the efficiency (accuracy) of the committee decision, and a fortiori the CJT holds. However, in the third type of equilibria, private information is not reflected in the committee decision and the efficiency of the committee decision is identical to that of public information, which may well be lower than the efficiency the committee could achieve without expert information. In other words, the introduction of expert information might reduce efficiency in equilibrium. Chapter 4: In this chapter we present experimental results on the theory of the previous chapter.
In the laboratory, too many subjects voted according to expert information compared to the predictions from the efficient equilibria. The majority decisions followed the expert signal most of the time, which is consistent with the class of obedient equilibria mentioned in the previous chapter. Another interesting finding is the marked heterogeneity in voting behaviour. We argue that the voters' behaviour in our data can best be described as that in an obedient equilibrium where a supermajority (and hence the decision) always follows the expert signal, so that no voter is pivotal. A large efficiency loss arose from the presence of expert information when the committee size was large. We suggest that it may be desirable for expert information to be revealed only to a subset of committee members. Finally, in the Appendix we describe a new alternative method for producing the signal matrix of the game. Chapter 5: There is a significant gap between the theoretical predictions and the empirical evidence about the efficiency of policies in reducing crime rates. This chapter argues that one important reason for this is that the current literature on the economics of crime overlooks an important hysteresis effect in criminal behaviour. One important consequence of hysteresis is that positive exogenous variations in the determining variables affect an outcome variable with a different magnitude than negative variations do. We present a simple model that characterises hysteresis at both the micro and macro levels. When the probability of punishment decreases, some law-abiding agents will find it more beneficial to enter a criminal career. If the probability of punishment returns to its original level, a subset of these agents will continue with their career in crime. We show that, when crime choice exhibits weak hysteresis at the individual level, the crime rate in a society consisting of a continuum of agents following any non-uniform distribution will exhibit strong hysteresis. Only when punishment is extremely severe does the effect of hysteresis cease to exist. The theoretical predictions corroborate the argument that policy makers should be more inclined to set pre-emptive policies rather than mitigating measures.
23

Ho, Man Wai. "Bayesian inference for models with monotone densities and hazard rates /". View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ISMT%202002%20HO.

Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 110-114). Also available in electronic version. Access restricted to campus users.
24

Chung, Wanyu. "Three essays in international economics : invoicing currency, exchange rate pass-through and gravity models with trade in intermediate goods". Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/66297/.

Abstract:
A large proportion of international trade is in intermediate goods. The implications of this empirical regularity, however, have not been exhaustively explored in several respects. The main objective of the thesis is to fill this gap by introducing trade in intermediate goods into several strands of the literature on international economics. This thesis is a collection of three essays studying the implications of trade in intermediate goods for the degree of exchange rate pass-through (Chapter 2), firms' invoicing currency choice (Chapter 3) and the performance of gravity models (Chapter 4). In Chapter 2 I present a theoretical framework and show that back-and-forth trade between two countries is associated with low degrees of aggregated exchange rate pass-through. In Chapter 3 I focus instead on firm heterogeneity in the dependence on imported inputs. I show theoretically that exporters more dependent on foreign currency-denominated inputs are more likely to price in the foreign currency. I then test the theoretical prediction using an innovative and unique dataset that covers all UK trade transactions with non-EU partners from HM Revenue and Customs (HMRC). Overall, the results strongly support the theoretical prediction. Chapter 4 is a theoretical piece of work showing how the underlying trade structure alters the predictions of gravity models. I relate gravity equations to labour shares of income. Given that these parameters are industry-specific, the results suggest that it is crucial to take them into account when the main research interest lies in sectoral differences in bilateral trade.
25

Lepage, Thomas. "The impact of variable evolutionary rates on phylogenetic inference : a Bayesian approach". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103264.

Abstract:
In this dissertation, we explore the effect of variable evolutionary rates on phylogenetic inference. The first half of the thesis introduces the biological fundamentals and the statistical framework used throughout. The basic concepts of phylogenetics and an overview of Bayesian inference are presented in Chapter 1. In Chapter 2, we survey the models already used for rate variation. We argue that the CIR process---a diffusion process widely used in finance---is the best suited for applications in phylogenetics, for both mathematical and computational reasons. Chapter 3 shows how evolutionary rate models are incorporated into DNA substitution models. We derive general formulae for the transition probabilities of substitutions when the rate is a continuous-time Markov chain, a diffusion process or a jump process (a diffusion process with discrete jumps).
The second half of the thesis is dedicated to applications of variable evolutionary rate models in two different contexts. In Chapter 4, we use the CIR process to model heterotachy, an evolutionary hypothesis according to which positions of an alignment may evolve at rates that vary with time differently from site to site. A comparison of the CIR process with the covarion---a widely used heterotachous model---on two different data sets allows us to conclude that the CIR process provides a significantly better fit. Our approach, based on a Bayesian mixture model, enables us to determine the level of heterotachy at each site. Finally, the impact of variable evolutionary rates on divergence time estimation is explored in Chapter 5.
Several models, including the CIR process, are compared on three data sets. We find that autocorrelated models (including the CIR) provide the best fits.
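
For context, the CIR process referred to above is the square-root diffusion dr_t = κ(θ − r_t) dt + σ√(r_t) dW_t, here applied to an evolutionary rate rather than an interest rate. A minimal Euler-Maruyama simulation sketch, with illustrative parameters rather than anything fitted in the thesis:

```python
# Euler-Maruyama simulation of a CIR (square-root) diffusion with full
# truncation; parameter values are illustrative assumptions.
import numpy as np

def simulate_cir(r0, kappa, theta, sigma, T, n_steps, rng):
    dt = T / n_steps
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        r_pos = max(r[t], 0.0)  # full truncation keeps the sqrt well-defined
        dW = rng.standard_normal() * np.sqrt(dt)
        r[t + 1] = r[t] + kappa * (theta - r_pos) * dt + sigma * np.sqrt(r_pos) * dW
    return r

rng = np.random.default_rng(1)
path = simulate_cir(r0=1.0, kappa=0.5, theta=1.0, sigma=0.3, T=10.0,
                    n_steps=1000, rng=rng)
print(path.mean())  # hovers near theta for these assumed parameters
```

Mean reversion to θ and state-dependent volatility σ√r are the two features that make this process attractive for strictly positive, autocorrelated rates.
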
26

Cohen, Margaret A. "Estimating the growth rate of harmful algal blooms using a model averaged method". View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-1/rp/cohenm/margaretcohen.pdf.

27

Kinene, Alan. "FORECASTING OF THE INFLATION RATES IN UGANDA: A COMPARISON OF ARIMA, SARIMA AND VECM MODELS". Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-49388.

28

Brestovansky, Dennis Francis. "The Influence of competition on chemical process plant profitability and the selection of capacity and production rate : microeconomics and game theory models /". [S.l.] : [s.n.], 1986. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=8020.

29

Conte, Riccardo. "A dynamical approach to the calculation of thermal reaction rate constants". Doctoral thesis, Scuola Normale Superiore, 2008. http://hdl.handle.net/11384/85794.

30

Corker, Lloyd A. "A test for Non-Gaussian distributions on the Johannesburg stock exchange and its implications on forecasting models based on historical growth rates". University of Western Cape, 2002. http://hdl.handle.net/11394/7447.

Abstract:
Masters of Commerce
If share price fluctuations follow a simple random walk, then forecasting models based on historical growth rates have little ability to forecast acceptable share price movements over a certain period. The simple random walk description of share price dynamics is obtained when a large number of investors have equal probability to buy or sell based on their own opinion. This simple random walk description of the stock market is in essence the Efficient Market Hypothesis, EMH. The EMH is the central concept around which financial modelling is based, and it underlies the Black-Scholes model and other important theoretical underpinnings of capital market theory like mean-variance portfolio selection, arbitrage pricing theory (APT), the security market line and the capital asset pricing model (CAPM). These theories, which postulate that risk can be reduced to zero, set the foundation for option pricing and are a key component in financial software packages used for pricing and forecasting in the financial industry. The model used by Black and Scholes and the other models mentioned above are Gaussian, i.e. they exhibit a random nature. This Gaussian property and the existence of expected returns and continuous time paths (also Gaussian properties) allow the use of stochastic calculus to solve complex Black-Scholes models. However, if the markets are not Gaussian, then the idea that risk can be reduced to zero can lead to a misleading and potentially disastrous sense of security on the financial markets. This study tests the null hypothesis - that share prices on the JSE follow a random walk - by means of graphical techniques such as symmetry plots and quantile-quantile plots to analyse the test distributions. In both graphical techniques evidence for the rejection of normality was found. Evidence leading to the rejection of the hypothesis was also found through nonparametric or distribution-free methods, at a 1% level of significance, for the Anderson-Darling and runs tests.
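
As an illustration of the kind of graphical and distribution-free checks described, here is a short sketch using SciPy on simulated fat-tailed returns (stand-in data, not the JSE series analysed in the study):

```python
# Normality checks in the spirit of the study: Anderson-Darling test and a
# Q-Q comparison against a fitted normal, run on invented fat-tailed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
log_returns = stats.t.rvs(df=3, size=1000, random_state=rng) * 0.01  # heavy tails

# Anderson-Darling test against the normal distribution
ad = stats.anderson(log_returns, dist="norm")
print("A-D statistic:", round(ad.statistic, 3))
print("1% critical value:", ad.critical_values[-1])  # reject normality if exceeded

# Quantile-quantile comparison against a fitted normal
(osm, osr), (slope, intercept, r) = stats.probplot(log_returns, dist="norm")
print("Q-Q correlation:", round(r, 4))  # values well below 1 signal non-Gaussian tails
```
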
31

Hasan, Ebrahim A. Rahman. "Strategic Genco offers in electric energy markets cleared by merit order". Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115916.

Abstract:
In an electricity market cleared by merit-order economic dispatch we identify necessary and sufficient conditions under which the market outcomes supported by pure strategy Nash equilibria (NE) exist when generating companies (Gencos) game through continuously variable incremental cost (IC) block offers. A Genco may own any number of units, each unit having multiple blocks with each block being offered at a constant IC.
Next, a mixed-integer linear programming (MILP) scheme devoid of approximations or iterations is developed to identify all possible NE. The MILP scheme is systematic and general but computationally demanding for large systems. Thus, an alternative significantly faster lambda-iterative approach that does not require the use of MILP was also developed.
Once all NE are found, one critical question is to identify the one whose corresponding gaming strategy may be considered by all Gencos as being the most rational. To answer this, this thesis proposes the use of a measure based on the potential profit gain and loss by each Genco for each NE. The most rational offer strategy for each Genco in terms of gaming or not gaming that best meets their risk/benefit expectations is the one corresponding to the NE with the largest gain to loss ratio.
The computation of all NE is tested on several systems of up to ninety generating units, each with four incremental cost blocks. These NE are then used to examine how market power is influenced by market parameters, specifically, the number of competing Gencos, their size and true ICs, as well as the level of demand and price cap.
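
For readers unfamiliar with merit-order clearing, the dispatch rule underlying the market model can be sketched as follows (offer data are invented for illustration; this is not the thesis's MILP equilibrium search):

```python
# Merit-order economic dispatch: offer blocks are sorted by incremental
# cost (IC) and accepted until demand is met; the marginal block sets the
# clearing price. Offer data below are invented for illustration.
from typing import List, Tuple

def merit_order_dispatch(blocks: List[Tuple[str, float, float]],
                         demand: float) -> Tuple[float, dict]:
    """blocks: (genco, size_MW, ic_per_MWh); returns (price, dispatch)."""
    dispatch: dict = {}
    remaining = demand
    price = 0.0
    for genco, size, ic in sorted(blocks, key=lambda b: b[2]):
        if remaining <= 0:
            break
        taken = min(size, remaining)
        dispatch[genco] = dispatch.get(genco, 0.0) + taken
        remaining -= taken
        price = ic   # marginal (last accepted) block sets the price
    if remaining > 0:
        raise ValueError("demand exceeds total offered capacity")
    return price, dispatch

offers = [("A", 100, 18.0), ("A", 50, 25.0), ("B", 120, 20.0), ("B", 60, 30.0)]
print(merit_order_dispatch(offers, demand=240))  # clears at 25.0 here
```

Gaming through the continuously variable IC offers studied in the thesis amounts to each Genco shading the `ic` values of its blocks to move the marginal price.
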
32

Lin, Shu-Chuan. "Robust estimation for spatial models and the skill test for disease diagnosis". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26681.

Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Lu, Jye-Chyi; Committee Co-Chair: Kvam, Paul; Committee Member: Mei, Yajun; Committee Member: Serban, Nicoleta; Committee Member: Vidakovic, Brani. Part of the SMARTech Electronic Thesis and Dissertation Collection.
33

Bouwman, Kees Evert. "Essays on financial econometrics : modeling the term structure of interest rates /". Enschede : PPI, 2008. http://www.gbv.de/dms/zbw/561223343.pdf.

34

Wong, Po-shing, and 黃寶誠. "Some mixture models for the joint distribution of stock's return and trading volume". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31210065.

35

Ngalo-Morrison, Lulama. "Factors influencing the academic attainment of undergraduate sponsored students at the University of the Western Cape: a strength-based approach". Thesis, University of the Western Cape, 2017. http://hdl.handle.net/11394/5553.

Abstract:
Philosophiae Doctor - PhD (Education)
Deficit models dominate current research on academic retention and success in South African higher education and internationally. Most studies focus on students who are at risk of exiting higher education prematurely or those who fail academically because of their socio-economic conditions. Dropout and failure in existing research are often correlated with class and lack of access to financial resources. The prevailing philosophy based on needs assessment, deficit intervention and problem-solving does not sufficiently facilitate the academic success of diverse learners. Yet surveys in most countries show that addressing weaknesses does not help people improve their performance more than highlighting their strengths does (Hodges & Clifton, 2004). In contrast, this study adopts a strength-based approach, drawing largely on 'ecological' perspectives which recognize the importance of people's surroundings and the multifaceted variables constantly at play, impacting the lives of students throughout the world. A strength-based model is posited as a pragmatic approach to pedagogy in the 21st century. This perspective recognizes the resilience of individuals and focuses on potential, strengths, interests, abilities, determination and capabilities rather than limits. This study accepts that there are persistent challenges to widening participation in South African universities, and leakages in the education pipeline continue with little improvement in graduation rates. However, there are numerous undocumented examples of academically successful students from working-class backgrounds whose academic attainment is not accounted for. Empirical data is required to establish the relationship between academic success and the resilience of undergraduate sponsored students from working-class backgrounds. The case study examines factors that influence the academic attainment of undergraduate sponsored students and the institutional practices that enhance their performance at the University of the Western Cape. Factors motivating sponsored students from poor communities to succeed were explored. Furthermore, institutional influences that are relevant to, and inform, students' academic attainment are investigated. The study utilized a variety of data, including relevant institutional documents, interviews with sponsored students and secondary data sourced from the Institutional Quality Assurance and Planning department. Findings of the study show that affordability, through funding for equitable access to higher education, is a motivating factor in academic attainment for students from disadvantaged backgrounds. Also, participants in this study attributed their success to nurtured resilience across the institution and the supportive relationships established through structured intervention programmes in and out of class. It is important to note, contrary to findings in other studies, that a low socio-economic background was more of a motivational factor and a resource for social mobility. This study adds to the limited understanding of the academic attainment of students from poor backgrounds who succeed against all odds. It provides direction to universities for adopting different approaches and offers insights for the University of the Western Cape into the experiences of its graduates. Based on the findings, the study highlights recommendations and opportunities for future investigation.
Ngalo-Morrison, L. (2017). Factors influencing the academic attainment of undergraduate sponsored students at the University of the Western Cape: A strength-based approach. PhD thesis. University of the Western Cape
36

Renlund, Henrik. "Recursive Methods in Urn Models and First-Passage Percolation". Doctoral thesis, Uppsala universitet, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-145430.

Abstract:
This PhD thesis consists of a summary and four papers dealing with stochastic approximation algorithms and first-passage percolation. Paper I deals with the a.s. limiting properties of bounded stochastic approximation algorithms in relation to the equilibrium points of the drift function. Applications are given to some generalized Pólya urn processes. Paper II continues the work of Paper I and investigates under what circumstances one gets asymptotic normality from a properly scaled algorithm. The algorithms are shown to converge in some other circumstances, although the limiting distribution is not identified. Paper III deals with the asymptotic speed of first-passage percolation on a graph called the ladder, when the times associated with the edges are independent and exponentially distributed with the same intensity. Paper IV generalizes the work of Paper III by allowing more edges in the graph as well as not having all intensities equal.
37

Neshatpour, Siavash. "Récentes implications au-delà du modèle standard des désintégrations de mésons beaux". Thesis, Clermont-Ferrand 2, 2013. http://www.theses.fr/2013CLF22354.

Abstract:
Rapid experimental progress is being made in the study of rare decays of mesons containing a b-quark and involving an s-quark and a pair of leptons. The present work measures the indirect implications of this progress for supersymmetric extensions of the Standard Model. Even within constrained models, the indirect limits obtained in this way can in some cases be stronger than those coming from direct searches for supersymmetric particles. The accuracy gained from the form factors and higher-order corrections newly implemented in the public code "SuperIso" thus demonstrates their importance.
38

Erhardt, Erik Barry. "Bayesian Simultaneous Intervals for Small Areas: An Application to Mapping Mortality Rates in U.S. Health Service Areas". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0105104-195633/.

Abstract:
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: Poisson-Gamma Regression; MCMC; Bayesian; Small Area Estimation; Simultaneous Inference; Statistics. Includes bibliographical references (p. 61-67).
39

Novakovic, Ana M. "Longitudinal Models for Quantifying Disease and Therapeutic Response in Multiple Sclerosis". Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316562.

Abstract:
Treatment of patients with multiple sclerosis (MS) and development of new therapies have been challenging due to the complexity and slow progression of the disease, and the limited sensitivity of available clinical outcomes. Modeling and simulation has become an increasingly important component of drug development and of post-marketing optimization of the use of medication. This thesis focuses on the development of pharmacometric models for characterization and quantification of the relationships between drug exposure, biomarkers and clinical endpoints in relapse-remitting MS (RRMS) following cladribine treatment. A population pharmacokinetic model of cladribine and its main metabolite, 2-chloroadenine, was developed using plasma and urine data. The renal clearance of cladribine was close to half of total elimination, and was found to be a linear function of creatinine clearance (CRCL). Exposure-response models could quantify a clear effect of cladribine tablets on absolute lymphocyte count (ALC), burden of disease (BoD), expanded disability status scale (EDSS) and relapse rate (RR) endpoints. Moreover, they gave insight into the disease progression of RRMS. This thesis further demonstrates how an integrated modeling framework allows an understanding of the interplay between ALC and clinical efficacy endpoints. ALC was found to be a promising predictor of RR, and ALC and BoD were identified as predictors of the EDSS time-course. This enables an understanding of the behavior of the key outcomes necessary for the successful development of long-awaited MS therapies, as well as of how these outcomes correlate with each other. The item response theory (IRT) methodology, an alternative approach for analysing composite scores, made it possible to quantify the information content of the individual EDSS components, which could help improve this scale. In addition, IRT proved capable of increasing the power to detect potential drug effects in clinical trials, which may enhance drug development efficiency. The developed nonlinear mixed-effects models offer a platform for the quantitative understanding of biomarker/clinical endpoint relationships, disease progression and therapeutic response in RRMS by integrating a significant amount of knowledge and data.
40

Uski, Ville. "Rare events and other deviations from universality in disordered conductors". Doctoral thesis, [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=968601898.

41

Henter, Gustav Eje. "Probabilistic Sequence Models with Speech and Language Applications". Doctoral thesis, KTH, Kommunikationsteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134693.

Abstract:
Series data, sequences of measured values, are ubiquitous. Whenever observations are made along a path in space or time, a data sequence results. To comprehend nature and shape it to our will, or to make informed decisions based on what we know, we need methods to make sense of such data. Of particular interest are probabilistic descriptions, which enable us to represent uncertainty and random variation inherent to the world around us. This thesis presents and expands upon some tools for creating probabilistic models of sequences, with an eye towards applications involving speech and language. Modelling speech and language is not only of use for creating listening, reading, talking, and writing machines---for instance allowing human-friendly interfaces to future computational intelligences and smart devices of today---but probabilistic models may also ultimately tell us something about ourselves and the world we occupy. The central theme of the thesis is the creation of new or improved models more appropriate for our intended applications, by weakening limiting and questionable assumptions made by standard modelling techniques. One contribution of this thesis examines causal-state splitting reconstruction (CSSR), an algorithm for learning discrete-valued sequence models whose states are minimal sufficient statistics for prediction. Unlike many traditional techniques, CSSR does not require the number of process states to be specified a priori, but builds a pattern vocabulary from data alone, making it applicable for language acquisition and the identification of stochastic grammars. A paper in the thesis shows that CSSR handles noise and errors expected in natural data poorly, but that the learner can be extended in a simple manner to yield more robust and stable results also in the presence of corruptions. Even when the complexities of language are put aside, challenges remain. The seemingly simple task of accurately describing human speech signals, so that natural synthetic speech can be generated, has proved difficult, as humans are highly attuned to what speech should sound like. Two papers in the thesis therefore study nonparametric techniques suitable for improved acoustic modelling of speech for synthesis applications. Each of the two papers targets a known-incorrect assumption of established methods, based on the hypothesis that nonparametric techniques can better represent and recreate essential characteristics of natural speech. In the first paper of the pair, Gaussian process dynamical models (GPDMs), nonlinear, continuous state-space dynamical models based on Gaussian processes, are shown to better replicate voiced speech, without traditional dynamical features or assumptions that cepstral parameters follow linear autoregressive processes. Additional dimensions of the state-space are able to represent other salient signal aspects such as prosodic variation. The second paper, meanwhile, introduces KDE-HMMs, asymptotically-consistent Markov models for continuous-valued data based on kernel density estimation, that additionally have been extended with a fixed-cardinality discrete hidden state. This construction is shown to provide improved probabilistic descriptions of nonlinear time series, compared to reference models from different paradigms. The hidden state can be used to control process output, making KDE-HMMs compelling as a probabilistic alternative to hybrid speech-synthesis approaches. 
A final paper of the thesis discusses how models can be improved even when one is restricted to a fundamentally imperfect model class. Minimum entropy rate simplification (MERS), an information-theoretic scheme for postprocessing models for generative applications involving both speech and text, is introduced. MERS reduces the entropy rate of a model while remaining as close as possible to the starting model. This is shown to produce simplified models that concentrate on the most common and characteristic behaviours, and provides a continuum of simplifications between the original model and zero-entropy, completely predictable output. As the tails of fitted distributions may be inflated by noise or empirical variability that a model has failed to capture, MERS's ability to concentrate on high-probability output is also demonstrated to be useful for denoising models trained on disturbed data.



ACORNS: Acquisition of Communication and Recognition Skills
LISTA – The Listening Talker
42

Gathy, Maude. "On some damage processes in risk and epidemic theories". Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210063.

Full text of the source
Abstract:
This thesis deals with damage processes in risk theory and in biomathematics.

In risk theory, the damage process studied is that of the claims borne by an insurance company.

The first chapter examines the Markov-Polya distribution as a possible law for modelling the number of claims and establishes certain links with the Katz/Panjer family of distributions. We construct the Markov-Polya distribution from a claim-occurrence model and show that it satisfies an elegant recurrence. This recurrence notably yields an efficient algorithm for the corresponding compound distribution. We derive the Katz/Panjer family as a limiting family of the Markov-Polya distribution.
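The Katz/Panjer connection mentioned here underpins a classic computation: claim-number laws satisfying p_k = (a + b/k) p_{k-1} admit the Panjer recursion for the compound (aggregate-loss) distribution. A minimal sketch of that standard recursion follows; the thesis's own Markov-Polya recurrence is analogous but not reproduced here, and all parameter values below are illustrative:

```python
import numpy as np

def panjer_compound(a, b, p0, severity, smax):
    """Compound-distribution recursion for a Panjer-class claim count.

    The claim-number pmf is assumed to satisfy p_k = (a + b/k) p_{k-1},
    with severity pmf `severity` on {1, 2, ...} (severity[0] must be 0).
    Returns P(S = s) for s = 0..smax.
    """
    g = np.zeros(smax + 1)
    g[0] = p0                      # no claims -> aggregate loss 0
    for s in range(1, smax + 1):
        j = np.arange(1, s + 1)
        g[s] = np.sum((a + b * j / s) * severity[j] * g[s - j])
    return g

# Example: Poisson(lambda=2) claim count (a=0, b=lambda), uniform severity on {1,...,5}.
lam = 2.0
sev = np.zeros(21); sev[1:6] = 0.2
agg = panjer_compound(a=0.0, b=lam, p0=np.exp(-lam), severity=sev, smax=20)
print(agg[:6], agg.sum())
```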

The second chapter deals with the so-called "Lagrangian Katz" family, which extends the Katz/Panjer family. We motivate its use as a claim-number distribution through a first-passage problem. We characterize all the distributions belonging to this family and derive an efficient algorithm for the compound distribution. We also examine its index of dispersion and its asymptotic behaviour.

In the third chapter, we study the finite-horizon ruin probability in a discrete-time model with positive interest rates. We derive an algorithm as well as various bounds for this probability. One particular bound allows us to construct two risk measures. We also examine the possibility of using proportional reinsurance with retention levels that are equal or different over successive periods.
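The thesis derives an exact algorithm and bounds; purely to illustrate the quantity involved, here is a Monte Carlo estimate of a finite-horizon ruin probability in a discrete-time model with interest. The surplus convention, claim distribution, and parameters below are illustrative assumptions, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_prob_mc(u0, premium, rate, horizon, draw_claims, n_paths=100_000):
    """Monte Carlo estimate of the finite-horizon ruin probability.

    Surplus recursion (one common discrete-time convention):
        U_t = (U_{t-1} + premium) * (1 + rate) - Y_t
    Ruin occurs the first time U_t < 0 within `horizon` periods.
    """
    u = np.full(n_paths, u0, dtype=float)
    ruined = np.zeros(n_paths, dtype=bool)
    for _ in range(horizon):
        claims = draw_claims(n_paths)
        u = np.where(ruined, u, (u + premium) * (1 + rate) - claims)
        ruined |= u < 0
    return ruined.mean()

# Illustrative example: exponential claims with mean 1, premium loading 10%.
psi = ruin_prob_mc(u0=5.0, premium=1.1, rate=0.02, horizon=20,
                   draw_claims=lambda n: rng.exponential(1.0, n))
print(f"Estimated finite-horizon ruin probability: {psi:.4f}")
```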

In the setting of epidemic processes, the damage studied is the spread of a disease of SIE type (susceptible - infected - eliminated). The way an infective contaminates the susceptibles is described by particular survival distributions. We derive the distribution of the total number of people infected by the end of the epidemic. We examine in detail the so-called Markov-Polya and hypergeometric epidemics. We then approximate this distribution by a branching process. We also study a similar damage process in reliability theory, where the deterioration consists of the propagation of cascading failures in a system of interconnected components.


Doctorate in Sciences

43

Sheth, Swapnil Suhas. "Self-Consistency of the Lauritzen-Hoffman and Strobl Models of Polymer Crystallization Evaluated for Poly(ε-caprolactone) Fractions and Effect of Composition on the Phenomenon of Concurrent Crystallization in Polyethylene Blends". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23904.

Full text of the source
Abstract:
Narrow molecular weight fractions of Poly(ε-caprolactone) (PCL) were successfully obtained using the successive precipitation fractionation technique with toluene/n-heptane as a solvent/nonsolvent pair. Calorimetric studies of the melting behavior of fractions crystallized either isothermally or under constant cooling rate conditions suggested that isothermal crystallization should be used for a proper evaluation of the molecular weight dependence of the observed melting temperature and degree of crystallinity in PCL.

The molecular weight and temperature dependence of the spherulitic growth rate of the fractions was studied in the context of the Lauritzen-Hoffman (LH) two-phase model and the Strobl three-phase model of polymer crystallization. The zero-growth-rate temperatures, determined from spherulitic growth rates using four different methods, are consistent with each other and increase with chain length. The concomitant increase in the apparent secondary nucleation constant was attributed to two factors. First, for longer chains there is an increase in the probability that crystalline stems belong to loose chain folds and, hence, an increase in fold surface free energy. It is speculated that the increase in loose folding and the resulting decrease in crystallinity with increasing chain length are associated with the ester group registration requirement in PCL crystals. The second contribution to the apparent nucleation constant arises from chain friction associated with segmental transport across the melt/crystal interface. These factors were responsible for the much stronger chain length dependence of spherulitic growth rates at fixed undercooling observed here with PCL than previously reported for PE and PEO. In the case of PCL, the scaling exponent associated with the chain length dependence of spherulitic growth rates exceeds the upper theoretical bound of 2 predicted by the Brochard-DeGennes chain pullout model. The observation that the zero-growth and equilibrium melting temperatures are identical within the uncertainty of their determination casts serious doubt on the validity of the Strobl three-phase model.

A novel method is proposed to determine the Porod constant necessary to extrapolate small-angle X-ray scattering intensity data to large scattering vectors. The one-dimensional correlation function determined using this Porod constant yielded values of the lamellar crystal thickness similar to those estimated using the Hosemann-Bagchi Paracrystalline Lattice model. The temperature dependence of the lamellar crystal thickness was consistent with both the LH and the Strobl models of polymer crystallization. However, in contrast to the predictions of Strobl's model, the value of the mesomorph-to-crystal equilibrium transition temperature was very close to the zero-growth temperature. Moreover, the lateral block sizes (obtained using wide-angle X-ray diffraction) and the lamellar thicknesses were not found to be controlled by the mesomorph-to-crystal equilibrium transition temperature. Hence, we concluded that the crystallization of PCL is not mediated by a mesophase.

Metallocene-catalyzed linear low-density (m-LLDPE with 3.4 mol% 1-octene) and conventional low-density (LDPE) polyethylene blends of different compositions were investigated for their melt-state miscibility and concurrent crystallization tendency. Differential scanning calorimetric studies and morphological studies using atomic force microscopy confirm that these blends are miscible in the melt state for all compositions. LDPE chains are found to crystallize concurrently with m-LLDPE chains during cooling in the m-LLDPE crystallization temperature range. While the extent of concurrent crystallization was found to be greatest in the blends with the highest m-LLDPE content studied, strong evidence was uncovered for the existence of a saturation effect in the concurrent crystallization behavior. This observation leads us to suggest that co-crystallization, rather than mere concurrent crystallization, of LDPE with m-LLDPE can indeed take place. Matching of the respective sequence length distributions in LDPE and m-LLDPE is suggested to control the extent of co-crystallization.
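For readers unfamiliar with the Lauritzen-Hoffman analysis used above, the growth-rate expression at its core is easy to evaluate numerically. A hedged sketch with the standard textbook form follows; all parameter values are illustrative placeholders, not the fitted values of this work:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def lh_growth_rate(T, G0, U_star, T_inf, Kg, Tm0):
    """Lauritzen-Hoffman spherulitic growth rate G(T):

        G = G0 * exp(-U*/(R (T - T_inf))) * exp(-Kg / (T dT f)),

    with undercooling dT = Tm0 - T and correction f = 2T / (T + Tm0).
    """
    dT = Tm0 - T
    f = 2.0 * T / (T + Tm0)
    return G0 * np.exp(-U_star / (R * (T - T_inf))) * np.exp(-Kg / (T * dT * f))

# Illustrative parameters loosely typical of PCL-like crystallization:
T = np.linspace(290.0, 320.0, 7)            # K
G = lh_growth_rate(T, G0=1e6, U_star=6280.0, T_inf=200.0,
                   Kg=1.5e5, Tm0=342.0)     # Kg in K^2
for Ti, Gi in zip(T, G):
    print(f"T = {Ti:5.1f} K   G ~ {Gi:.3e}")
```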
Ph. D.
44

Tautenhahn, Martin. "Lokalisierung für korrelierte Anderson Modelle". Master's thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200701584.

Full text of the source
Abstract:
This diploma thesis is devoted to a correlated Anderson model. The model describes short-range single-site potentials, where negative correlations are allowed. For this correlated model, exponential decay of the Green's function is proven in the case of sufficiently large disorder, using the fractional moment method. Subsequently, Anderson localization is deduced for the uncorrelated special case.
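For context, the fractional moment method establishes bounds of the following schematic form (a standard Aizenman-Molchanov-type statement, not the thesis's precise theorem or constants):

```latex
% Schematic fractional-moment bound: for some s \in (0,1) and
% sufficiently large disorder \lambda, uniformly in the volume \Lambda
% and in \eta > 0,
\[
  \mathbb{E}\bigl[\, \bigl| G_{\Lambda}(x,y; E + i\eta) \bigr|^{s} \,\bigr]
  \;\le\; C(s,\lambda)\, e^{-\mu |x - y|},
  \qquad \mu > 0 .
\]
% Exponential decay of these fractional moments of the Green's function
% implies Anderson localization in the corresponding energy region.
```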
45

Allalen, Mohammed. "Magnetic properties and proton spin-lattice relaxation in molecular clusters". Doctoral thesis, [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=979984777.

Full text of the source
46

Horký, Miroslav. "Modely hromadné obsluhy". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232033.

Full text of the source
Abstract:
This master's thesis deals with queueing models that exploit the properties of Markov chains. A queueing system is a system in which objects arrive at random moments and require service. Specifically, the thesis treats models in which both the intervals between arrivals and the service times are exponentially distributed. The theoretical part covers stochastic processes, queueing theory, the classification of models, and the description of models with the Markov property. The practical part describes the implementation and operation of a program that simulates the chosen M/M/m model. Finally, results computed analytically are compared with those obtained by simulating the M/M/m model.
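For reference, the analytic steady-state quantities of the M/M/m queue against which such a simulation is typically checked can be computed directly. A minimal sketch using the standard Erlang-C formulas (the example parameters are illustrative):

```python
from math import factorial

def mmm_metrics(lam, mu, m):
    """Steady-state metrics of the M/M/m queue (Poisson arrivals, m servers).

    Requires stability: rho = lam / (m * mu) < 1.
    Returns (P_wait, Lq, Wq): Erlang-C waiting probability, mean queue
    length, and mean waiting time in the queue.
    """
    a = lam / mu                # offered load
    rho = a / m
    assert rho < 1, "unstable system"
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(m))
                + a**m / (factorial(m) * (1 - rho)))
    p_wait = a**m / (factorial(m) * (1 - rho)) * p0   # Erlang C
    lq = p_wait * rho / (1 - rho)                     # mean queue length
    wq = lq / lam                                     # Little's law
    return p_wait, lq, wq

print(mmm_metrics(lam=8.0, mu=1.0, m=10))  # e.g. 8 arrivals/h, 10 servers
```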
47

Ederer, Stefan, Maximilian Mayerhofer e Miriam Rehm. "Rich and Ever Richer: Differential Returns Across Socio-Economic Groups". WU Vienna University of Economics and Business, 2019. http://epub.wu.ac.at/7170/1/WP_29.pdf.

Full text of the source
Abstract:
This paper estimates rates of return across the gross wealth distribution in eight European countries. Like differential saving rates, differential rates of return matter for Post Keynesian theory, because they impact the income and wealth distribution and add an explosive element to growth models. We show that differential rates of return matter empirically by merging data on household balance sheets with long-run returns for individual asset categories. We find that (1) the composition of wealth differentiates three socioeconomic groups: 30% are asset-poor, 65% are middle-class home owners, and the top 5% are business-owning capitalists; (2) rates of return rise across all groups; and (3) rates of return broadly follow a log-shaped function across the distribution, with inequality in the lower half of the distribution higher than in the upper half. If the socioeconomic groups are collapsed into workers (the bottom 95%) and capitalists (the top 5%), the rates of return are 5.6% for the former and 7.2% for the latter.
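The estimation strategy, merging household balance sheets with long-run returns per asset class, amounts to computing portfolio-weighted rates of return. A minimal sketch follows; the asset classes, return figures, and balance sheets are invented for illustration and are not the paper's data:

```python
# Household rate of return as the portfolio-weighted average of
# long-run returns per asset class (figures are illustrative only).
long_run_returns = {"deposits": 0.01, "bonds": 0.03,
                    "housing": 0.05, "equity": 0.07, "business": 0.09}

def household_return(balance_sheet):
    """balance_sheet: {asset_class: amount}; returns the weighted mean return."""
    total = sum(balance_sheet.values())
    return sum(amount * long_run_returns[asset]
               for asset, amount in balance_sheet.items()) / total

asset_poor = {"deposits": 8_000, "equity": 500}
home_owner = {"deposits": 20_000, "housing": 250_000, "equity": 10_000}
capitalist = {"deposits": 50_000, "housing": 600_000,
              "equity": 400_000, "business": 900_000}
for name, bs in [("asset-poor", asset_poor), ("middle", home_owner),
                 ("top 5%", capitalist)]:
    print(f"{name:10s} r = {household_return(bs):.3%}")
```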
Series: Ecological Economic Papers
48

Van, Heerden Petrus Marthinus Stephanus. "The relationship between the forward- and the realized spot exchange rate in South Africa / Petrus Marthinus Stephanus van Heerden". Thesis, North-West University, 2010. http://hdl.handle.net/10394/4511.

Full text of the source
Abstract:
The inability to hedge effectively against unfavourable exchange rate movements, using the current forward exchange rate as the only guideline, is a key inhibiting factor of international trade. Market participants use the current forward exchange rate quoted in the market to make decisions regarding future exchange rate changes. However, the current forward exchange rate is not solely determined by the interaction of demand and supply; it is also a mechanistic estimate, based on the current spot exchange rate and the carry cost of the transaction. Results of various studies, including this study, demonstrate that the current forward exchange rate differs substantially from the realized future spot exchange rate. This phenomenon is known as the exchange rate puzzle. This study contributes to the dynamics of modelling exchange rate theories by developing an exchange rate model that is able to explain the realized future spot exchange rate and the exchange rate puzzle. The exchange rate model is based only on current (time t) economic fundamentals and includes an alternative approach to incorporating the impact of the interaction of two international financial markets into the model. This study derives a unique exchange rate model, which shows that the exchange rate puzzle is a pseudo-problem. The pseudo-problem rests on the generally accepted fallacy that current non-stationary, level time series data cannot be used to model exchange rate theories, because of the incorrect assumption that all the available econometric methods yield statistically insignificant results due to spurious regressions. Empirical evidence conclusively shows that non-stationary, level time series data on current economic fundamentals can explain the realized future spot exchange rate in a statistically significant way and, therefore, that the exchange rate puzzle can be solved. This model will give market participants in the foreign exchange market a better indication of expected future exchange rates, which will considerably reduce the dependence on the mechanistically derived forward points. The newly derived exchange rate model will also influence the demand and supply of forward exchange, resulting in forward points that are a more accurate prediction of the realized future exchange rate.
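The "mechanistic" forward rate the abstract refers to is conventionally computed from covered interest parity. A minimal sketch of that carry-cost calculation (simple rates; the numbers are illustrative assumptions, not the thesis's data):

```python
def forward_rate_cip(spot, i_domestic, i_foreign, tau=1.0):
    """Mechanistic ('carry cost') forward rate via covered interest parity.

    F = S * (1 + i_d * tau) / (1 + i_f * tau) for simple rates over a
    horizon of tau years; the forward points are F - S.
    """
    forward = spot * (1 + i_domestic * tau) / (1 + i_foreign * tau)
    return forward, forward - spot

# Illustrative ZAR/USD-style numbers (assumptions only):
F, points = forward_rate_cip(spot=15.00, i_domestic=0.07, i_foreign=0.02, tau=0.5)
print(f"forward = {F:.4f}, forward points = {points:.4f}")
```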
Thesis (Ph.D. (Risk management))--North-West University, Potchefstroom Campus, 2011.
49

Dumas, James M. "The race for Muslim hearts and minds : a social movement analysis of the U.S. war on terror and popular support in the Muslim world". Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/993.

Full text of the source
Abstract:
According to conventional wisdom, winning hearts and minds is one of the most important goals for defeating terrorism. However, despite repeated claims about U.S. efforts to build popular support as part of the war on terror during the first seven years after 9/11, a steady stream of polls and surveys delivered troubling news. Using an approach informed by counterinsurgency and social movement theory, I explain why the United States performed poorly in the race for Muslim hearts and minds, with a specific focus on problems inherent in the social construction of terrorism, the use of an enemy-centric model while overestimating agency, and the counterproductive effect of policy choices on framing processes. Popular support plays wide-ranging roles in counterterrorism, including: influencing recruitment, fundraising, operational support, and the flow of intelligence; providing credibility and legitimacy; and sanctifying or marginalizing violence. Recognizing this, the U.S. emphasized public diplomacy, foreign aid, positive military-civilian interactions, democracy promotion, and other efforts targeting populations in the Muslim world. To explain the problems these efforts had, this thesis argues that how Americans think and talk about terrorism, reflected especially in the rhetoric and strategic narrative of the Bush administration, evolved after 9/11 to reinforce normative and enemy-centric biases that undermined both the understanding of the underlying conflicts and the resulting efforts. U.S. policy advocates further misjudged American agency, especially in overemphasizing U.S. centrality, failing to recognize the importance of real grievances, and overestimating the American ability to implement its own policies or control the policies of local governments. Finally, the failure to acknowledge the role of U.S. policies had a counterproductive impact on the contested framing processes that shape mobilization. The resulting rhetoric and actions reinforced existing anti-American views, contributed to the perception that the war on terror is really a war on Islam, and undermined natural counter-narratives.
50

Mantovani, Marco. "Essays in forward looking behavior in strategic interactions". Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209492.

Full text of the source
Abstract:
The general topic of this thesis is forward-looking behavior in strategic situations. Mixing theoretical and experimental analysis, we document how strategic thinking is affected by the specific features of a dynamic interaction. The overarching result is that information regarding decisions close to the current one receives qualitatively different consideration from information regarding distant ones. That is, actual decisions are based on reasoning over a limited number of steps close to the actual decision node. We capture this feature of behavior both in a strategic (limited backward induction) and in a non-strategic (limited farsightedness) setup, and we identify relevant consequences for the outcome of the interaction, which powerfully explain many observed experimental regularities.

In the first essay, we present a general out-of-equilibrium framework for strategic thinking in sequential games. It assumes that agents take decisions on restricted game trees, according to their (limited) foresight level, following backward induction; we therefore speak of limited backward induction (LBI). We test for LBI using a variant of the race game. Our design allows us to identify restricted game trees and backward reasoning, thus properly disentangling LBI behavior. The results provide strong support for LBI. Most players solve intermediate tasks, i.e. restricted games, without reasoning over the terminal histories. Only a small fraction of subjects play close to equilibrium, and (slow) convergence toward it appears, though only in the base game. An intermediate task keeps the subjects off the equilibrium path longer than in the base game. The results cannot be rationalized by the most popular models of strategic reasoning, let alone equilibrium analysis.
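As a hedged sketch of what "deciding on a restricted game tree" can mean operationally in a race game, the following implements one illustrative reading of limited backward induction (not the essay's exact specification; game parameters are arbitrary). Players alternately add 1..k to a running total, and whoever reaches N exactly wins; the mover solves only a tree truncated a fixed number of plies ahead:

```python
import random
from functools import lru_cache

def race_game_move(total, N, k, depth, rng=random.Random(1)):
    """Limited-backward-induction move in the race game.

    Positions beyond the lookahead horizon are treated as unresolved
    (value 0), so the mover is indifferent among moves it cannot
    distinguish, which is the hallmark of limited foresight."""
    @lru_cache(maxsize=None)
    def value(t, d):            # value to the player about to move at total t
        if d == 0:
            return 0            # horizon reached without a verdict
        best = -1
        for a in range(1, k + 1):
            if t + a == N:
                return 1        # an immediate win is available
            if t + a < N:
                best = max(best, -value(t + a, d - 1))
        return best
    scored = [(1 if total + a == N else -value(total + a, depth - 1), a)
              for a in range(1, k + 1) if total + a <= N]
    top = max(s for s, _ in scored)
    return rng.choice([a for s, a in scored if s == top])

# With deep lookahead the mover plays the solution (leave a multiple of
# k+1 remaining); with shallow lookahead early moves are effectively random.
print(race_game_move(total=0, N=15, k=3, depth=15),
      race_game_move(total=0, N=15, k=3, depth=2))
```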

In the second essay, a subtle implication of the model is investigated using a Centipede game: the sensitivity of the players' foresight to the accessibility and completeness of the information they have. By manipulating the way in which information is provided to subjects, we show that reduced availability of information is sufficient to shift the distribution of take-nodes further from the equilibrium prediction. On the other hand, similar results are obtained in a treatment where reduced availability of information is combined with an attempt to elicit preferences for reciprocity, through the presentation of the centipede as a repeated trust game. Our results can be interpreted as cognitive limitations being more effective than preferences in determining (shifts in) behavior in our experimental centipede. Furthermore, our results are at odds with the recent ones of Cox [2012], suggesting caution in generalizing their results. Reducing the availability of information may hamper backward induction or induce myopic behavior, depending on the strategic environment.

The third essay consists of an experimental investigation of farsighted versus myopic behavior in network formation. Pairwise stability (Jackson and Wolinsky [1996]) is the standard stability concept in network formation. It assumes myopic behavior of the agents, in the sense that they do not forecast how others might react to their actions. Assuming instead that agents are perfectly farsighted, related stability concepts have been proposed. We design a simple network formation experiment to test these extreme theories, but find evidence against both of them: the subjects are consistent with an intermediate rule of behavior, which we interpret as a form of limited farsightedness. On aggregate, the selection among multiple pairwise stable networks (and the performance of farsighted stability) crucially depends on the level of farsightedness needed to sustain them, and not on efficiency or cooperative considerations. The analysis of individual behavior corroborates this interpretation and suggests, in general, a low level of farsightedness (around two steps) on the part of the agents.
Doctorate in Economics and Management Sciences
