To see other types of publications on this topic, follow the link: Rate Theory model.

Theses on the topic "Rate Theory model"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Consult the top 50 theses for your research on the topic "Rate Theory model".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Elhouar, Mikael. "Essays on interest rate theory." Doctoral thesis, Handelshögskolan i Stockholm, Finansiell Ekonomi (FI), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-451.

Full text
2

Götsch, Irina. "Libor market model theory and implementation." Saarbrücken: VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2868878&prov=M&dok_var=1&dok_ext=htm.

Full text
3

Riga, Candia. "The Libor Market Model: from theory to calibration." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/2288/.

Full text
Abstract:
This thesis focuses on the LIBOR Market Model, a financial model for interest rates. The appendices provide the necessary mathematical theory. In the main chapters, we first define the principal interest rates and financial instruments relevant to interest rate modelling; we then set up the LIBOR market model, demonstrate its existence, derive the dynamics of forward LIBOR rates, and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models forward swap rates instead of LIBOR rates. This model, too, is justified by a theoretical demonstration, and the resulting swaption pricing formula coincides with Black's. However, the two models are not compatible from a theoretical point of view. We therefore derive various approximate analytical formulae for pricing swaptions in the LIBOR market model and explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to both the cap and swaption markets, together with examples of application to the historical correlation matrix and the cascade calibration of forward volatilities to the market's matrix of implied swaption volatilities.
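The abstract above centers on pricing caps with Black's formula. As a rough illustration of that formula (a minimal sketch, not code from the thesis; all numerical inputs are invented), a Black-76 caplet pricer in Python:

```python
from math import log, sqrt
from statistics import NormalDist

def black_caplet(F, K, sigma, T, tau, P0T):
    """Black-76 caplet price per unit notional.

    F: forward LIBOR for the accrual period, K: strike, sigma: Black
    volatility, T: fixing time in years, tau: accrual fraction,
    P0T: discount factor to the payment date. Inputs are hypothetical.
    """
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return P0T * tau * (F * N(d1) - K * N(d2))

# Invented market data: 3% forward, at-the-money strike, 20% volatility.
print(black_caplet(F=0.03, K=0.03, sigma=0.20, T=1.0, tau=0.5, P0T=0.97))
```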
4

Yeldener, Suat. "Sinusoidal model based low bit rate speech coding for communication systems." Thesis, University of Surrey, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359842.

Full text
5

Van Wijck, Tjaart. "Interest rate model theory with reference to the South African market." Thesis, Stellenbosch: University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/3396.

Full text
Abstract:
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2006.
An overview of modern and historical interest rate model theory is given with the specific aim of derivative pricing. A variety of stochastic interest rate models are discussed within a South African market context. The various models are compared with respect to characteristics such as mean reversion, positivity of interest rates, the volatility structures they can represent, the yield curve shapes they can represent, and whether analytical bond and derivative prices can be found. The distribution of the interest rates implied by some of these models is also found under various measures. The calibration of these models also receives attention with respect to instruments available in the South African market. Problems associated with the calibration of the modern models are also discussed.
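Mean reversion, one of the model characteristics compared in this thesis, is easy to see in simulation. A minimal sketch of the Vasicek model, one of the classical short-rate models such a survey covers (parameter values invented, not taken from the thesis):

```python
import numpy as np

def simulate_vasicek(r0, a, b, sigma, T, n, seed=0):
    """Euler scheme for the Vasicek model dr = a*(b - r)*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        r[i + 1] = r[i] + a * (b - r[i]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return r

# Starting above the long-run mean b, the path drifts back toward it.
path = simulate_vasicek(r0=0.10, a=0.5, b=0.06, sigma=0.01, T=10, n=2500)
print(path[0], path[-1])
```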
6

Stefanovic, Milos. "Vocoder model based variable rate narrowband and wideband speech coding below 9 kbps." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843965/.

Full text
Abstract:
The past two decades have witnessed rapid growth and development within the telecommunications industry. This has been primarily fuelled by the proliferation of digital mobile communication applications and services, which have become commonplace and easily within the financial reach of businesses and the general public. Current research trends, involving integration and packetisation of voice, video and data channels into true multimedia communications, promise a similar technological revolution in the next decade. One of the key design issues of the new high quality multimedia services is a requirement for very high data rates. Whilst the available bandwidth in wire-based terrestrial networks is a relatively cheap and expandable resource, it becomes inherently limited in satellite or cellular radio systems. In order to accommodate ever-growing numbers of subscribers whilst maintaining high quality and low operational costs, it is necessary to maximise spectral efficiency and reduce power consumption. This has given rise to the rapid development of signal compression techniques, which in the speech transmission domain are known as speech coding algorithms. The research carried out for this thesis has mainly focused on the design and development of low bit rate narrowband and wideband speech coding systems which utilise a variable rate approach in order to improve their perceptual quality and reduce their transmission rates. The algorithms subsequently developed are based on existing vocoding schemes, whose rigid fixed-rate structure is a major limitation to achieving higher quality and lower rates. The variable rate schemes utilise the time-varying characteristics of the speech signal, which is classified according to the developed segmentation algorithms. Two main schemes were developed: a variable bit rate scheme with an average as low as 1.35 kbps and a variable frame rate scheme with an average of 2.1 kbps, both achieving or even surpassing the subjective quality of the existing vocoding standard at 4.15 kbps. Wideband speech exhibits characteristics which are not embodied within narrowband speech and which contribute to its superior perceived quality. A very high quality wideband vocoder operating at rates (fixed and variable) below 9 kbps is presented in this thesis, whereby particular attention is paid to preserving the information in higher frequencies in order to maximise the attainable quality.
7

Pringle, Sammie, and Marc A. VanOrden. "Applying modern portfolio theory and the capital asset pricing model to DoD's information technology investments." Monterey, Calif.: Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/March/09Mar%5FPringle.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 2009.
Thesis Advisor(s): Housel, Thomas J. "March 2009." Description based on title screen as viewed on April 23, 2009. Author(s) subject terms: CAPM, Capital Asset Pricing Model, KVA, Knowledge Value Added, Real Options, ROI, Return on Investment, MPT, Modern Portfolio Theory. Includes bibliographical references (p. 37-39). Also available in print.
8

Mönnich, Christina. "Tariff rate quotas and their administration: theory, practice and an econometric model for the EU." Frankfurt am Main [u.a.]: Lang, 2004. http://www.gbv.de/dms/zbw/390979201.pdf.

Full text
9

Cohen, Margaret A. "Estimating the growth rate of harmful algal blooms using a model averaged method." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-1/rp/cohenm/margaretcohen.pdf.

Full text
10

Oinuma, Ryoji. "Fundamental study of evaporation model in micron pore." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1239.

Full text
Abstract:
As the demand for high-performance small electronic devices has increased, heat removal from these devices for space use is approaching critical limits. A heat pipe is a promising device for enhancing heat removal in space thermal management systems because it exploits phase-change phenomena. Even though a heat pipe has great potential to remove thermal energy from a high-heat-flux source, the heat removal performance of heat pipes cannot be predicted well, since a first-principles description of evaporation has not been established. The purpose of this study is to establish a method for applying an evaporation model based on statistical rate theory to engineering applications, including the vapor-liquid-structure intermolecular effect. The evaporation model is applied to heat pipe performance analysis through a pressure balance and an energy balance in the loop heat pipe.
11

Ahmed, Najeer. "Addressing the Post-Keynesian Critique: Exchange Rate Determination with an Extended Mundell-Fleming Model." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1335.

Full text
Abstract:
The assertion that financial flows are the primary drivers of exchange rates gains plausibility as financial markets become increasingly large and sophisticated. However, the Post-Keynesian critique leaves little room for the real economy to impact exchange rates. This paper aims to extend the Mundell-Fleming model to address the Post-Keynesian critique of mainstream models by incorporating wealth effects, expectations, and Taylor-rule interest-rate targeting. Discussion of significant financial events affecting the USDJPY exchange rate finds that wealth effects are significant considerations, and that the real economy cannot be discounted completely. Empirical results find that the real interest rate is a significant factor in exchange rate determination, tying into the discussion over the relationship between savings and consumption.
12

Grammenidis, Ackis, and Anna Fattor. "Zero impact or zero reliability? An empirical test of the Capital Asset Pricing Model during periods of zero risk-free rate." Thesis, Umeå University, Umeå School of Business, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-25631.

Full text
Abstract:

1.3. Research Questions.

With this in mind, the research questions of this work are:

1. Is the Capital Asset Pricing Model still applicable despite the heavy impact of the financial crisis on the financial systems?

2. What happens to this model when the risk-free rate approaches zero?

3. Is there a relationship between the riskiness of an asset and the risk-free interest rate when the latter approaches the zero level?
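For context, the CAPM relation behind the three questions above is E[R_i] - r_f = β_i(E[R_m] - r_f); as r_f approaches zero it reduces to E[R_i] = β_i E[R_m]. A toy estimate of beta and Jensen's alpha in Python, with made-up return series rather than the thesis data:

```python
import numpy as np

# Made-up monthly returns for an asset and the market (illustration only).
asset = np.array([0.012, -0.004, 0.020, 0.007, -0.011, 0.015])
market = np.array([0.010, -0.006, 0.018, 0.005, -0.009, 0.013])
rf = 0.0  # the zero risk-free-rate case the thesis studies

beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
alpha = asset.mean() - rf - beta * (market.mean() - rf)  # Jensen's alpha
print(f"beta = {beta:.3f}, alpha = {alpha:.5f}")
```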

13

Yin, Pei. "Volatility estimation and price prediction using a hidden Markov model with empirical study." Diss., Columbia, Mo.: University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4795.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on December 18, 2007). Vita. Includes bibliographical references.
14

Madrid, Angel Calderon. "The role of private financial wealth in a portfolio model: a study of the effects of fiscal deficits on the exchange rate and GDP." Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306390.

Full text
15

Katskov, D. A., and N. Darangwa. "Application of Langmuir theory of evaporation to the simulation of sample vapor composition and release rate in graphite tube atomizers. Part 1. The model and calculation algorithm." Journal of Analytical Atomic Spectrometry, 2010. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001252.

Full text
Abstract:
A method is suggested for simulation of the transient sample vapor composition and release rate during vaporization of analytes in electrothermal (ET) atomizers for AAS. The approach is based on the Langmuir theory of evaporation of metals in the presence of a gas at atmospheric pressure, which advocates formation of mass equilibrium in the boundary layer next to the evaporation surface. It is suggested in this work that in ET atomizers the release of atoms and molecules from the boundary layer next to the dry residue of the analyte is accompanied by spreading of the layer around the sample droplets or crystals. Thus, eventually, the vapor source forms an effective area associated with a monolayer of the analyte. In particular, for the case of a metal oxide analyte as discussed in the work, the boundary layer contains the species present in thermodynamic equilibrium with the oxide, which are metal atoms and dimers, oxide molecules and oxygen. Because of an excess of Ar, the probability of mass and energy exchange between the evolved gaseous species is low, which substantiates independent mass transport of each type of species from the boundary layer and through the absorption volume. Diffusion, capture by the Ar flow and gas thermal expansion are considered to control vapor transport and release rate. Each specific flow is affected by secondary processes occurring in collisions of the evolved molecules and atoms with the walls of the graphite tube. Diffusion of oxygen-containing species out of the boundary layer is facilitated by annihilation of oxygen and reduction of oxide on the graphite surface, while interaction of metal vapor with graphite slows down transport of atomic vapor out of the atomizer. These assumptions are used as the basis for the presentation of the problem as a system of first-order differential equations describing mass and temperature balance in the atomizer. Numerical solution of the system of equations provides a simulation of the temporal composition of the sample constituents in the condensed and gas phase in the atomizer according to the chemical properties of the analyte and the experimental conditions. The suggested approach avoids the description of atomization processes via kinetic parameters such as activation energy, frequency factor, surface coverage or reaction order.
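The paper casts its model as a system of first-order differential equations for mass balance. As a generic, much-simplified sketch of that kind of formulation (two species, invented rate constants, not the authors' actual equations):

```python
import numpy as np
from scipy.integrate import solve_ivp

def mass_balance(t, y, k_evap, k_diff, k_wall):
    """Toy mass balance: n_s = condensed analyte, n_g = vapor in the tube.
    Release from the boundary layer feeds the gas phase, which is lost by
    diffusion and wall capture. Rate constants are invented."""
    n_s, n_g = y
    release = k_evap * n_s
    loss = (k_diff + k_wall) * n_g
    return [-release, release - loss]

sol = solve_ivp(mass_balance, (0.0, 5.0), [1.0, 0.0],
                args=(2.0, 1.0, 0.5), dense_output=True)
t = np.linspace(0.0, 5.0, 6)
print(sol.sol(t)[1])  # transient vapor population rises, then decays
```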
16

Stockman, Peter Frans Willem. "An intertemporal model of the real exchange rate and the current account of the developing countries of east Asia: theory and calibration." Thesis, Cardiff University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248175.

Full text
17

NAKANO, MASAKI, AKIRA ASAOKA, and TOSHIHIRO NODA. "SOIL-WATER COUPLED FINITE DEFORMATION ANALYSIS BASED ON A RATE-TYPE EQUATION OF MOTION INCORPORATING THE SYS CAM-CLAY MODEL." 地盤工学会, 2008. http://hdl.handle.net/2237/20062.

Full text
18

Lin, Shu-Chuan. "Robust estimation for spatial models and the skill test for disease diagnosis." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26681.

Full text
Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Lu, Jye-Chyi; Committee Co-Chair: Kvam, Paul; Committee Member: Mei, Yajun; Committee Member: Serban, Nicoleta; Committee Member: Vidakovic, Brani. Part of the SMARTech Electronic Thesis and Dissertation Collection.
19

Van Heerden, Petrus Marthinus Stephanus. "The relationship between the forward- and the realized spot exchange rate in South Africa." Thesis, North-West University, 2010. http://hdl.handle.net/10394/4511.

Full text
Abstract:
The inability to effectively hedge against unfavourable exchange rate movements, using the current forward exchange rate as the only guideline, is a key inhibiting factor of international trade. Market participants use the current forward exchange rate quoted in the market to make decisions regarding future exchange rate changes. However, the current forward exchange rate is not solely determined by the interaction of demand and supply, but is also a mechanistic estimation, which is based on the current spot exchange rate and the carry cost of the transaction. Results of various studies, including this study, demonstrated that the current forward exchange rate differs substantially from the realized future spot exchange rate. This phenomenon is known as the exchange rate puzzle. This study contributes to the dynamics of modelling exchange rate theories by developing an exchange rate model that has the ability to explain the realized future spot exchange rate and the exchange rate puzzle. The exchange rate model is based only on current (time t) economic fundamentals and includes an alternative approach of incorporating the impact of the interaction of two international financial markets into the model. This study derived a unique exchange rate model, which proves that the exchange rate puzzle is a pseudo problem. The pseudo problem is based on the generally accepted fallacy that current non-stationary, level time series data cannot be used to model exchange rate theories, because of the incorrect assumption that all the available econometric methods yield statistically insignificant results due to spurious regressions. Empirical evidence conclusively shows that using non-stationary, level time series data of current economic fundamentals can statistically significantly explain the realized future spot exchange rate and, therefore, that the exchange rate puzzle can be solved. This model will give market participants in the foreign exchange market a better indication of expected future exchange rates, which will considerably reduce the dependence on the mechanistically derived forward points. The newly derived exchange rate model will also have an influence on the demand and supply of forward exchange, resulting in forward points that are a more accurate prediction of the realized future exchange rate.
Thesis (Ph.D. (Risk management))--North-West University, Potchefstroom Campus, 2011.
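The "mechanistic estimation" of the forward rate from the current spot rate and carry cost that the abstract refers to is covered interest parity, F = S(1 + i_dom·τ)/(1 + i_for·τ). A minimal sketch with invented quotes (not actual South African market data):

```python
def forward_rate(spot, i_domestic, i_foreign, tau):
    """Covered-interest-parity forward rate; forward points = F - spot."""
    return spot * (1 + i_domestic * tau) / (1 + i_foreign * tau)

spot = 15.00  # e.g. ZAR per USD, hypothetical quote
F = forward_rate(spot, i_domestic=0.07, i_foreign=0.02, tau=0.25)
print(F, F - spot)  # 3-month forward rate and forward points
```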
20

Georgesco, Arthur. "Effet couplé de l'endommagement balistique et électronique dans UO₂ : rôle de la température d'irradiation." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP102.

Full text
Abstract:
In the reactor, UO₂ fuel is subjected to simultaneous irradiation by several kinds of particles and radiation, including fission products, with all these phenomena occurring at high temperature (around 400-500 °C at the pellet periphery and 1000-1200 °C in the pellet center). On the atomic scale, this causes ballistic damage (atomic displacements), mainly due to low-energy fission products, and electronic damage (ionizations and electronic excitations) due to high-energy particles. Ballistic damage notably creates unfaulted interstitial-type dislocation loops, a few nanometers to tens of nanometers in size, which evolve into tangled dislocation lines, as well as sub-nanometric to nanometric vacancy-type objects. Electronic damage, beyond a certain level of deposited energy (above 20 keV/nm), induces track formation. While the effects of ballistic and electronic energy losses in UO₂ are thus well documented, the coupling effects between these two processes, and especially the associated mechanisms, have only been studied at room temperature. However, the diffusion of point defects varies with temperature, and some defects or defect clusters may already be mobile at room temperature in UO₂. This difference in mobility may strongly affect their evolution mechanisms, particularly in the case of the coupled effect of the two contributions. These initial results therefore need to be supplemented by examining the influence of irradiation temperature on this coupling. To this end, two approaches are considered. First, the effect of irradiation temperature is eliminated by working at very low temperature, to better identify the defect-evolution mechanisms at play during coupling. Second, once these mechanisms are defined, it is worthwhile to work at higher temperature, closer to reactor conditions. Single- and dual-beam ion irradiations of UO₂ samples were therefore carried out at different temperatures at the JANNuS Orsay and Saclay facilities. Transmission electron microscopy and Raman spectroscopy were used (in situ and ex situ) to study the evolution of extended defects and of the disorder related to point defects, respectively. A Rate Theory model was used alongside the experimental results to identify the mechanisms involved in irradiation with or without the effect of temperature, and with or without the effect of electronic energy losses. The results show that the nucleation and growth mechanisms of dislocation loops are strongly impacted by the diffusion of point defects and/or defect clusters, unlike vacancy-type objects. This diffusion is activated either by temperature during irradiation or by the electronic excitations/ionizations (inducing thermal-spike effects) of high-energy ions during coupling. Temperature therefore has a major impact on the coupling between electronic and nuclear energy losses. Moreover, the effect of this coupling differs according to the irradiation mode (single or dual beam), resulting in very different microstructure evolutions. The various irradiations carried out, together with the Rate Theory model, made it possible to define the mechanisms at work in UO₂ under the coupled effect of irradiation temperature and ballistic and electronic energy losses. This approach provides a better understanding of the behavior of nuclear fuel in reactors.
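As a schematic of what a rate-theory model of this kind computes, here is a toy point-defect balance with a creation term, mutual recombination, and losses to sinks. All coefficients are invented; the thesis's UO₂-specific model is far more detailed:

```python
from scipy.integrate import solve_ivp

def defect_kinetics(t, y, G, k_rec, k_sink_i, k_sink_v):
    """Toy rate-theory balance for interstitial (ci) and vacancy (cv)
    concentrations: creation at rate G, mutual recombination, and
    absorption at sinks such as dislocation loops."""
    ci, cv = y
    dci = G - k_rec * ci * cv - k_sink_i * ci
    dcv = G - k_rec * ci * cv - k_sink_v * cv
    return [dci, dcv]

sol = solve_ivp(defect_kinetics, (0.0, 100.0), [0.0, 0.0],
                args=(1e-4, 10.0, 0.05, 0.01))
print(sol.y[:, -1])  # quasi-steady defect populations
```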
21

Aquino, Phillip A. "PREDICTION OF PREMIXED INTERNAL COMBUSTION ENGINE MASS FRACTION BURNED PROFILES USING A PHYSICAL FORM OF THE WIEBE FUNCTION AND THE THEORY OF TURBULENT FLAME BRUSH THICKNESS DYNAMICS." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1606987013001077.

Full text
22

Huang, Zhenyu. "Crystallization and Melting Behavior of Linear Polyethylene and Ethylene/Styrene Copolymers and Chain Length Dependence of Spherulitic Growth Rate for Poly(Ethylene Oxide) Fractions." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/29211.

Full text
Abstract:
The crystallization and melting behavior of linear polyethylene and of a series of random ethylene/styrene copolymers was investigated using a combination of classical and temperature-modulated differential scanning calorimetry. In the case of linear polyethylene and low styrene content copolymers, the temporal evolutions of the melting temperature, degree of crystallinity, and excess heat capacity were studied during crystallization. The following correlations were established: 1) the evolution of the melting temperature with time parallels that of the degree of crystallinity, 2) the excess heat capacity increases linearly with the degree of crystallinity during primary crystallization, reaches a maximum during the mixed stage and decays during secondary crystallization, 3) the rates of shift of the melting temperature and decay of the excess heat capacity lead to apparent activation energies that are very similar to those reported for the crystal ac relaxation by other techniques. Strong correlations in the time domain between the secondary crystallization and the evolution of the excess heat capacity suggest that the reversible crystallization/melting phenomenon is associated with molecular events in the melt-crystal fold interfacial region. In the case of higher styrene content copolymers, the multiple melting behavior at high temperature is investigated through studies of the overall crystallization kinetics, heating rate effects and partial melting. Low melting crystals can be classified into two categories according to their melting behavior, superheating and reorganization characteristics. Low styrene content copolymers still exhibit some chain-folded lamellar structure. The shift of the low melting temperature with time in this case is tentatively explained in terms of reorganization effects. Decreasing the crystallization temperature or increasing the styrene content leads to low melting crystals more akin to fringed micelles. These crystals exhibit a lower tendency to reorganize during heating. The shift of their melting temperature with time is attributed to a decrease in the conformational entropy of the amorphous fraction as a result of constraints imposed by primary and secondary crystals. To further understand the mechanism of formation of low melting crystals, quasi-isothermal crystallization experiments were carried out using temperature modulation. The evolution of the excess heat capacity was correlated with that of the melting behavior. On the basis of these results, it is speculated that the generation of excess heat capacity at high temperature results from reversible segmental exchange on the fold surface. On the other hand, the temporal evolution of the excess heat capacity at low temperature for high styrene content copolymers is attributed to reversible segment attachment and detachment on the lateral surface of primary crystals. The existence of different mechanisms for the generation of excess heat capacity in different temperature ranges is consistent with the observation of two temperature regimes for the degree of reversibility inferred from quasi-isothermal melting experiments. In a second project, the chain length and temperature dependences of spherulitic growth rates were studied for a series of narrow fractions of poly(ethylene oxide) with molecular weight ranging from 11 to 917 kg/mol. The crystal growth rate data spanning crystallization temperatures in regimes I and II was analyzed using the formalism of the Lauritzen-Hoffman (LH) theory.
Our results are found to be in conflict with predictions from LH theory. The Kg ratio increases with molecular weight instead of remaining constant. The chain length dependence of the exponential prefactor, G0, does not follow the power law predicted by Hoffman and Miller (HM). On this basis, the simple reptation argument proposed in the HM treatment and the nucleation regime concept advanced by the LH model are questioned. We propose that the observed I/II regime transition in growth rate data may be related to a transition in the friction coefficient, as postulated by the Brochard-de Gennes slippage model. This mechanism is also consistent with recent calculations published by Toda in which both the rates of surface nucleation and substrate completion processes exhibit a strong temperature dependence.
Ph. D.
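The Kg analysis above rests on the standard Lauritzen-Hoffman growth-rate expression G = G0·exp[-U*/(R(Tc - T∞))]·exp[-Kg/(Tc·ΔT·f)] with ΔT = Tm⁰ - Tc and f = 2Tc/(Tm⁰ + Tc). A hedged numerical sketch, with all parameter values illustrative rather than fitted to the fractions studied here:

```python
import numpy as np

def lh_growth_rate(Tc, G0, U_star, T_inf, Kg, Tm0, R=8.314):
    """Lauritzen-Hoffman spherulitic growth rate (temperatures in K)."""
    dT = Tm0 - Tc
    f = 2.0 * Tc / (Tm0 + Tc)
    return G0 * np.exp(-U_star / (R * (Tc - T_inf))) * np.exp(-Kg / (Tc * dT * f))

# Illustrative numbers only, loosely PEO-like in scale.
for Tc in (320.0, 325.0, 330.0):
    print(Tc, lh_growth_rate(Tc, G0=1e10, U_star=6270.0, T_inf=200.0,
                             Kg=1.5e5, Tm0=342.0))
```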
23

Nåtman, Jonatan. "The performance of inverse probability of treatment weighting and propensity score matching for estimating marginal hazard ratios." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385502.

Full text
Abstract:
Propensity score methods are increasingly being used to reduce the effect of measured confounders in observational research. In medicine, censored time-to-event data is common. Using Monte Carlo simulations, this thesis evaluates the performance of nearest neighbour matching (NNM) and inverse probability of treatment weighting (IPTW) in combination with Cox proportional hazards models for estimating marginal hazard ratios. Focus is on the performance for different sample sizes and censoring rates, aspects that have not been fully investigated in this context before. The results show that, in the absence of censoring, both methods can reduce bias substantially. IPTW consistently had better performance in terms of bias and MSE compared to NNM. For the smallest examined sample size of 60 subjects, the use of IPTW led to estimates with bias below 15%. Since the data were generated using a conditional parametrisation, the estimation of univariate models violates the proportional hazards assumption. As a result, censoring the data led to an increase in bias.
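A minimal sketch of the IPTW step evaluated in this thesis: fit a logistic propensity model, then weight each subject by the inverse probability of the treatment actually received. The data below are synthetic; the weighted Cox fit itself is only indicated in a comment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                      # synthetic confounders
p_true = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
treat = rng.binomial(1, p_true)                    # confounded treatment

ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
weights = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))
# These weights would then enter a weighted Cox proportional hazards
# model to estimate the marginal hazard ratio, as in the thesis.
print(weights[:5])
```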
24

Sheth, Swapnil Suhas. "Self-Consistency of the Lauritzen-Hoffman and Strobl Models of Polymer Crystallization Evaluated for Poly(ε-caprolactone) Fractions and Effect of Composition on the Phenomenon of Concurrent Crystallization in Polyethylene Blends." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23904.

Full text
Abstract:
Narrow molecular weight fractions of poly(ε-caprolactone) were successfully obtained using the successive precipitation fractionation technique with toluene/n-heptane as a solvent/nonsolvent pair. Calorimetric studies of the melting behavior of fractions that were crystallized either isothermally or under constant cooling rate conditions suggested that isothermal crystallization of the samples should be used for a proper evaluation of the molecular weight dependence of the observed melting temperature and degree of crystallinity in PCL. The molecular weight and temperature dependence of the spherulitic growth rate of the fractions was studied in the context of the Lauritzen-Hoffman two-phase model and the Strobl three-phase model of polymer crystallization. The zero-growth-rate temperatures, determined from spherulitic growth rates using four different methods, are consistent with each other and increase with chain length. The concomitant increase in the apparent secondary nucleation constant was attributed to two factors. First, for longer chains there is an increase in the probability that crystalline stems belong to loose chain-folds, hence an increase in fold surface free energy. It is speculated that the increase in loose folding and the resulting decrease in crystallinity with increasing chain length are associated with the ester group registration requirement in PCL crystals. The second contribution to the apparent nucleation constant arises from chain friction associated with segmental transport across the melt/crystal interface. These factors were responsible for the much stronger chain length dependence of spherulitic growth rates at fixed undercooling observed here with PCL than previously reported for PE and PEO. In the case of PCL, the scaling exponent associated with the chain length dependence of spherulitic growth rates exceeds the upper theoretical bound of 2 predicted from the Brochard-DeGennes chain pullout model. The observation that zero-growth and equilibrium melting temperature values are identical with each other within the uncertainty of their determinations casts serious doubt on the validity of the Strobl three-phase model. A novel method is proposed to determine the Porod constant necessary to extrapolate the small-angle X-ray scattering intensity data to large scattering vectors. The one-dimensional correlation function determined using this Porod constant yielded values of the lamellar crystal thickness similar to those estimated using the Hosemann-Bagchi Paracrystalline Lattice model. The temperature dependence of the lamellar crystal thickness was consistent with both the LH and the Strobl models of polymer crystallization. However, in contrast to the predictions of Strobl's model, the value of the mesomorph-to-crystal equilibrium transition temperature was very close to the zero-growth temperature. Moreover, the lateral block sizes (obtained using wide-angle X-ray diffraction) and the lamellar thicknesses were not found to be controlled by the mesomorph-to-crystal equilibrium transition temperature. Hence, we concluded that the crystallization of PCL is not mediated by a mesophase. Metallocene-catalyzed linear low-density (m-LLDPE with 3.4 mol% 1-octene) and conventional low-density (LDPE) polyethylene blends of different compositions were investigated for their melt-state miscibility and concurrent crystallization tendency.
Differential scanning calorimetry and morphological studies using atomic force microscopy confirm that these blends are miscible in the melt state for all compositions. LDPE chains are found to crystallize concurrently with m-LLDPE chains during cooling in the m-LLDPE crystallization temperature range. While the extent of concurrent crystallization was found to be optimal in blends with the highest m-LLDPE content studied, strong evidence was uncovered for the existence of a saturation effect in the concurrent crystallization behavior. This observation leads us to suggest that co-crystallization, rather than mere concurrent crystallization, of LDPE with m-LLDPE can indeed take place. Matching of the respective sequence length distributions in LDPE and m-LLDPE is suggested to control the extent of co-crystallization.
Ph. D.
25

Allalen, Mohammed. "Magnetic properties and proton spin-lattice relaxation in molecular clusters." Doctoral thesis, [S.l.]: [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=979984777.

Full text
26

Smith, P. N. "Structural models of the exchange rate: Theory and evidence." Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378873.

Full text
27

Zhang, Jiangxingyun. "International Portfolio Theory-based Interest Rate Models and EMU Crisis." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1G011/document.

Full text
Abstract:
This thesis examines the specific role of volatility and co-volatility risks, alongside default risk, in the formation of long-term interest rates in the euro area. In particular, a two-country theoretical portfolio-choice model is proposed to evaluate the volatility risk premia and their contribution to contagion and flight-to-quality processes during different episodes of the sovereign debt crisis. The model also provides an opportunity to analyze the role of the ECB's asset purchases (QE) in the equilibrium of bond markets. Our empirical tests suggest that the ECB's QE programs from March 2015 onward merely accelerated a "defragmentation" of euro-area bond markets that had begun earlier in the crisis, with the introduction of the OMT.
28

Dogan, Aydan. "Two sector models of the real exchange rate." Thesis, University of Kent, 2016. https://kar.kent.ac.uk/54747/.

Full text
Abstract:
This thesis consists of three self-contained chapters. In the first chapter, we re-assess the problem of general equilibrium models in matching the behaviour of the real exchange rate. We do so by developing a two-country general equilibrium model with non-traded goods, home bias, incomplete markets and partial degrees of pass-through, as well as nominal rigidities both in the goods and labour markets. Our key finding is that presenting an encompassing model structure improves the performance of the model in addressing the persistence of the real exchange rate and its correlation with relative consumption, but this improvement is at the expense of failing to replicate some other characteristics of the data; where the model does a good job at explaining the failure of international risk sharing and generates substantial real exchange rate persistence, it fails to match several other observed business cycle features of the data, such as the volatility of the real exchange rate and consumption. In the second chapter of the thesis, we study the importance of the extensive margin of trade for UK export dynamics. During the Great Recession, UK exports fell by around 8% with respect to their trend, more than a standard general equilibrium model would predict. In this chapter, we ask whether an estimated two-country DSGE model with an extensive margin of trade can explain this drop and the main business cycle features of the UK economy. The extensive margin improves the overall performance of the model, but cannot improve substantially on replicating the behaviour of exports. Much of the trade collapse during the Great Recession can be explained by a shock to export entry costs associated with tighter financial conditions. Understanding trade balance dynamics has a central role in studies of emerging market business cycles. In the last chapter, we investigate the driving sources of emerging market trade balance fluctuations by developing a two-country, two-sector international real business cycle model with investment and consumption goods sectors. We estimate the model on Mexican and US data and find that a slowly diffusing permanent investment-specific technology shock that originates in the US accounts for most of the trade balance variability in Mexico. This shock is also the key driver of business cycle fluctuations in Mexico.
29

Qin, Tongran. "Buoyancy-thermocapillary convection of volatile fluids in confined and sealed geometries." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54939.

Full text
Abstract:
Convection in a layer of fluid with a free surface due to a combination of thermocapillary stresses and buoyancy is a classic problem of fluid mechanics. It has attracted increasing attention recently due to its relevance for two-phase cooling. Many modern thermal management technologies exploit the large latent heats associated with phase change at the interface of volatile liquids, allowing compact devices to handle very high heat fluxes. To enhance phase change, such cooling devices usually employ a sealed cavity from which almost all noncondensable gases, such as air, have been evacuated. Heating one end of the cavity and cooling the other establishes a horizontal temperature gradient that drives the flow of the coolant. Although such flows have been studied extensively at atmospheric conditions, our fundamental understanding of the heat and mass transport for volatile fluids at reduced pressures remains limited. A comprehensive and quantitative numerical model of two-phase buoyancy-thermocapillary convection of confined volatile fluids subject to a horizontal temperature gradient has been developed, implemented, and validated against experiments as a part of this thesis research. Unlike previous simplified models used in the field, this new model incorporates a complete description of the momentum, mass, and heat transport in both the liquid and the gas phase, as well as phase change across the entire liquid-gas interface. Numerical simulations were used to improve our fundamental understanding of the importance of various physical effects (buoyancy, thermocapillary stresses, wetting properties of the liquid, etc.) on confined two-phase flows. In particular, the effect of noncondensables (air) was investigated by varying their average concentration from that corresponding to ambient conditions to zero, in which case the gas phase becomes a pure vapor. It was found that the composition of the gas phase has a crucial impact on heat and mass transport as well as on the flow stability. A simplified theoretical description of the flow and its stability was developed and used to explain many features of the numerical solutions and experimental observations that were not well understood previously. In particular, an analytical solution for the base return flow in the liquid layer was extended to the gas phase, justifying the previous ad-hoc assumption of the linear interfacial temperature profile. Linear stability analysis of this two-layer solution was also performed. It was found that as the concentration of noncondensables decreases, the instability responsible for the emergence of a convective pattern is delayed, which is mainly due to the enhancement of phase change. Finally, a simplified transport model was developed for heat pipes with wicks or microchannels that gives a closed-form analytical prediction for the heat transfer coefficient and the optimal size of the pores of the wick (or the width of the microchannels).
30

Pang, Kin. "Calibration of interest rate term structure and derivative pricing models." Thesis, University of Warwick, 1997. http://wrap.warwick.ac.uk/36270/.

Full text
Abstract:
We argue interest rate derivative pricing models are misspecified, so that when they are fitted to historical data they do not produce prices consistent with the market. Interest rate models have to be calibrated to prices to ensure consistency. There are few published works on calibration to derivatives prices, and we make this the focus of our thesis. We show how short rate models can be calibrated to derivatives prices accurately with a second time-dependent parameter. We analyse the misspecification of the fitted models and their implications for other models. We examine the Duffie and Kan Affine Yield Model, a class of short rate models that appears to allow easier calibration. We show that, in fact, a direct calibration of Duffie and Kan Affine Yield Models is exceedingly difficult. We show the non-negative subclass is equivalent to generalised Cox, Ingersoll and Ross models that facilitate an indirect calibration of non-negative Duffie and Kan Affine Yield Models. We examine calibration of Heath, Jarrow and Morton models. We show, using some experiments, that Heath, Jarrow and Morton models cannot be calibrated quickly enough to be of practical use unless we restrict to special subclasses. We introduce the Martingale Variance Technique for improving the accuracy of Monte Carlo simulations. We examine calibration of Gaussian Heath, Jarrow and Morton models. We provide a new non-parametric calibration using the Gaussian Random Field Model of Kennedy as an intermediate step. We derive new approximate swaption pricing formulae for the calibration. We examine how to price resettable caps and floors with the market-Libor model. We derive a new relationship between resettable caplet and floorlet prices. We provide accurate approximations for the prices. We provide practical approximations to price resettable caplets and floorlets directly from quotes on standard caps and floors. We examine how to calibrate the market-Libor model.
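As a generic illustration of the calibration problem this thesis addresses — choosing model parameters so that model prices match market quotes — a least-squares sketch for Vasicek zero-coupon bond prices. The "market" discount factors below are synthetic, and this is not the thesis's own method:

```python
import numpy as np
from scipy.optimize import least_squares

def vasicek_zcb(r0, a, b, sigma, T):
    """Vasicek zero-coupon bond price P(0, T) in closed form."""
    B = (1.0 - np.exp(-a * T)) / a
    A = np.exp((b - sigma**2 / (2 * a**2)) * (B - T) - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r0)

maturities = np.array([1.0, 2.0, 5.0, 10.0])
market = np.array([0.962, 0.925, 0.820, 0.670])   # invented quotes

def residuals(theta, r0=0.04):
    a, b, sigma = theta
    return vasicek_zcb(r0, a, b, sigma, maturities) - market

fit = least_squares(residuals, x0=[0.5, 0.05, 0.01],
                    bounds=([1e-4, 0.0, 1e-4], [5.0, 0.2, 0.5]))
print(fit.x)  # calibrated (a, b, sigma)
```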
31

Kristensen, Scott Dennis. "A new monetary model of foreign exchange rates." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/288762.

Full text
Abstract:
An attempt is made to create a model of exchange rates that explains the short-term, daily levels of the foreign exchange spot market. The model is a monetary type that focuses on the eurocurrency markets and the current account. It has a liquidity preference form and employs daily data. The futures rate, the euro interest rate, the eurocurrency money stocks and a current account variable are the individual variables of the model. The futures rate and the euro interest rates are from the assumed Fisher "Covered Interest Rate Parity" paradigm. The eurocurrency money stock variable's justification is based on the real-world structure of the spot market, where the foreign exchange desks of the major world commercial banks are the dominant players. The current account variable, which is motivated by a desire to improve on the short-run performance of the Purchasing Power Parity variable of other monetary models, is justified by trade theory. The liquidity preference form of the model is in keeping with current monetary models. The econometric results show that the model is better than the random walk model. However, the results for the individual variables are mixed. The futures rate accounts for the vast majority of the model's success. Although the eurocurrency variable is as statistically significant as the interest rate differentials from the widely accepted Fisher Covered Interest Rate Parity paradigm, neither was as significant as the futures rate. The current account variable results are not statistically significant. Thus, the current account variable may be discarded, while the eurocurrency interest rates and euromoney variables warrant further study. As a result of the dominance of the futures rate variable, models that try to capture rational expectations, such as the News or Chaos Models, are appealing. This rational expectations characteristic of the market, combined with the dominance of speculation over economic fundamentals, also points toward game theory as a good candidate for further study.
32

Tsai, Angela C. F. "Valuation of Eurodollar futures contracts under alternative term structure models: theory and evidence." Thesis, University of Strathclyde, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366802.

Full text
33

Chen, Wei. "Perceptual postfiltering for low bit rate speech coders." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112563.

Full text
Abstract:
Adaptive postfiltering has become a common part of speech coding standards based on the Linear Prediction Analysis-by-Synthesis algorithm to decrease audible coding noise. However, a conventional adaptive postfilter is based on empirical assumptions of masking phenomena, which sometimes makes it hard to balance between noise reduction and speech distortion.
This thesis introduces a novel perceptual postfiltering system for low bit rate speech coders. The proposed postfilter works at the decoder, as is the case for the conventional adaptive postfilter. Specific human auditory properties are considered in the postfilter design to improve speech quality. A Gaussian Mixture Model based Minimum Mean Squared Error estimation of the perceptual postfilter is performed with the received information at the decoder. Perceptual postfiltering is then applied to the reconstructed speech to improve speech quality. Test results show that the proposed system gives better perceptual speech quality over conventional adaptive postfiltering.
34

Ruan, Shiling. "Poisson race models: theory and application in conjoint choice analysis." Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1173204902.

Full text
35

Menegat, Luciana Arenhart. "Análise da interdependência entre os investimentos privado e público federal no Brasil." Universidade do Vale do Rio dos Sinos, 2017. http://www.repositorio.jesuita.org.br/handle/UNISINOS/6880.

Full text
Abstract:
In the literature there is divergence about the type of relationship between public and private investment, with some studies indicating crowding in and others crowding out; this identifies a gap in the literature and the need for further investigation to determine the relationship. This work therefore defends the thesis that public investment has a positive impact on private investment, with the general objective of identifying whether there is interdependence between federal public and private investment in Brazil. To this end, quarterly data from 2002 to 2015 are analyzed using VAR and VEC models, which allow short- and long-run relationships between the variables to be established. Among the models tested, the best comprised private investment, public investment, the exchange rate, and capacity utilization; it indicates that each 1% increase in public investment raises private investment by 0.44% on average. Appreciation of the exchange rate has a significant negative effect on private investment in the short run, but in the long run this relationship becomes insignificant. The other results kept the same signs for the elasticities in the short and long run, similar to those observed in the literature. They thus confirm the research hypotheses, based on the Keynesian investment function, concerning expectations, income, and the interest rate as explanations of public and private investment behavior; these hypotheses support the central thesis of this research and allow the general objective to be met. In summary, a positive relationship between public and private investment was found, although with low elasticity: public investment is significant in explaining the level of private investment, but the variable with the greatest impact was capacity utilization (a proxy for expectations and the level of activity). In addition, private and public investment moved jointly, and it was not possible to establish Granger causality between them in this research. These situations have theoretical grounding, which indicates that the relationship is not yet settled; future research should attempt to determine the direction of causality between private and public investment.
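A minimal sketch of the VAR step of such an analysis with statsmodels, using synthetic quarterly series standing in for the study's variables (the thesis itself also estimates a VECM, which models the long-run cointegrating relations in levels):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
# Synthetic stand-ins: private investment, public investment,
# exchange rate, and capacity utilization, 56 quarters (2002-2015).
levels = pd.DataFrame(rng.normal(size=(56, 4)).cumsum(axis=0),
                      columns=["priv_inv", "pub_inv", "fx", "capacity"])
growth = levels.diff().dropna()   # difference to obtain stationary series

res = VAR(growth).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
print(res.summary())
```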
Styles APA, Harvard, Vancouver, ISO, etc.
36

Gyldberg, Ellinor, et Henrik Bark. « Type 1 error rate and significance levels when using GARCH-type models ». Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375770.

Texte intégral
Résumé :
The purpose of this thesis is to test whether the probability of falsely rejecting a true null hypothesis of a model intercept being equal to zero is consistent with the chosen significance level when the variance of the error term is modelled using GARCH(1,1), TGARCH(1,1) or IGARCH(1,1) models. We test this by estimating "Jensen's alpha" to evaluate alpha trading, using a Monte Carlo simulation based on historical data from the Standard & Poor's 500 Index and stocks in the Dow Jones Industrial Average Index. We evaluate simulated daily data over periods of 3 months, 6 months, and 1 year. Our results indicate that the GARCH and IGARCH consistently reject a true null hypothesis less often than the selected 1%, 5%, or 10%, whereas the TGARCH consistently rejects a true null more often than the chosen significance level. Thus, there is a risk of incorrect inferences when using these GARCH-type models.
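The Monte Carlo procedure the abstract describes can be sketched as follows: simulate returns whose true intercept is zero, fit a constant-mean GARCH(1,1) with the arch package, and record how often the null on the intercept is falsely rejected. The GARCH parameters, sample size, and replication count below are illustrative, not those of the study.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
n_reps, n_obs, alpha_level = 500, 250, 0.05   # ~1 year of daily data
rejections = 0

for _ in range(n_reps):
    # Simulate GARCH(1,1) errors with zero mean, i.e. true alpha = 0.
    omega, a, b = 0.05, 0.05, 0.90
    sigma2 = omega / (1 - a - b)                # unconditional variance
    e = np.empty(n_obs)
    for t in range(n_obs):
        e[t] = rng.standard_normal() * np.sqrt(sigma2)
        sigma2 = omega + a * e[t] ** 2 + b * sigma2

    res = arch_model(e, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
    if res.pvalues["mu"] < alpha_level:         # intercept falsely significant
        rejections += 1

print(f"empirical type 1 error rate: {rejections / n_reps:.3f} "
      f"(nominal {alpha_level})")
```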
Styles APA, Harvard, Vancouver, ISO, etc.
37

Adodo, Sophia. « THE FASHION RUNWAY THROUGH A CRITICAL RACE THEORY LENS ». Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1461576556.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Rhee, Joonhee. « Three models of the term structure of interest rates ». Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/36336/.

Texte intégral
Résumé :
In this dissertation, we consider the stochastic volatility of short rates, the jump property of short rates, and market expectation of changes in interest rates as the crucial factors in explaining the term structure of interest rates. In each chapter, we model the term structure of interest rates in accordance with these factors.
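The thesis's specific models are not reproduced in the abstract, but the two ingredients it highlights can be illustrated with a generic Euler-discretised short rate featuring Heston-style stochastic volatility and Poisson jumps; all dynamics and parameter values below are illustrative assumptions, not the models developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 5.0, 5 * 252                  # 5 years of daily steps
dt = T / n

kappa, theta = 0.5, 0.05             # mean reversion of the short rate
kv, vbar, xi = 2.0, 4e-4, 0.01       # Heston-style variance process
lam, mu_j, s_j = 0.5, 0.0, 0.01      # jump intensity (per year), jump sizes

r, v = 0.03, 4e-4
rates = np.empty(n)
for t in range(n):
    dw_r, dw_v = rng.standard_normal(2) * np.sqrt(dt)
    # Bernoulli approximation of a Poisson jump arrival over dt.
    jump = rng.normal(mu_j, s_j) if rng.random() < lam * dt else 0.0
    r += kappa * (theta - r) * dt + np.sqrt(max(v, 0.0)) * dw_r + jump
    v += kv * (vbar - v) * dt + xi * np.sqrt(max(v, 0.0)) * dw_v
    rates[t] = r

print(f"mean simulated short rate: {rates.mean():.4f}")
```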
Styles APA, Harvard, Vancouver, ISO, etc.
39

Leblon, Grégoire. « Quadratic term structure models of interest rates : theory, implementation and applications ». Rennes 1, 2012. http://www.theses.fr/2012REN1G038.

Texte intégral
Résumé :
Modeling the term structure of interest rates involves a dual problem in finance. The first task is to reproduce, at each date, the yield curve extracted from observed bond prices; the second is to capture its dynamics. Many models have been developed to address these issues. The purpose of this thesis is to explore one family of them: quadratic term structure models. These models assume, first, that a quadratic relationship links the instantaneous interest rate to latent variables describing the state of the economy and, second, that the latent variables follow Ornstein-Uhlenbeck processes. The family was introduced to address structural problems recurrently encountered by other classes of models. This thesis deepens the theoretical framework of quadratic term structure models in discrete time. Building on these results, we assess their ability to reproduce the term structure of interest rates, and their use in bond portfolio management is investigated both theoretically and empirically. Finally, we study the pricing of European options written on bonds within this framework, providing exact and approximate analytical solutions.
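A minimal discrete-time sketch of the quadratic setup, assuming an illustrative two-factor state: the short rate is a quadratic form in a latent vector following a discretised Ornstein-Uhlenbeck process, and a zero-coupon bond price is recovered by Monte Carlo. All parameter values are placeholders, not calibrated values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
Phi = np.array([[0.95, 0.0], [0.0, 0.90]])   # OU autoregressive matrix
Sigma = 0.01 * np.eye(2)                     # shock loading matrix
Psi = np.array([[1.0, 0.1], [0.1, 0.5]])     # quadratic loading (PSD)
c = 0.01                                     # level shift keeping r positive

def short_rate(x):
    # Quadratic relation r = c + x' Psi x; nonnegative whenever Psi is PSD.
    return c + x @ Psi @ x

n_paths, horizon = 20000, 12   # price a 12-period zero-coupon bond
prices = np.empty(n_paths)
for i in range(n_paths):
    x = np.zeros(2)
    acc = 0.0
    for _ in range(horizon):
        acc += short_rate(x)
        x = Phi @ x + Sigma @ rng.standard_normal(2)
    prices[i] = np.exp(-acc)   # discount along the simulated rate path

print(f"12-period zero-coupon price: {prices.mean():.4f}")
```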
Styles APA, Harvard, Vancouver, ISO, etc.
40

Lo, Tak-shing. « Two-body operators and correlation crystal field models / ». [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13437549.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
41

Wolden, Bache Ida. « Econometrics of exchange rate pass-through / ». Oslo : Unipub, 2007. http://www.gbv.de/dms/zbw/527973297.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Vlaseros, Vasileios. « Essays on strategic voting and political influence ». Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9932.

Texte intégral
Résumé :
Chapter 1: I attempt a detailed literature review of the passage from the probabilistic versions of the Condorcet Jury Theorem to models augmented by the concept of strategic agents, covering both theoretical and relevant empirical work. In the first part, I explore the most influential game-theoretic models and their main predictions. In the second part, I review what voting experiments have to say about these predictions, with a brief mention of the experiments' key methodological aspects. In the final part, I attempt to map the recent strategic-voting literature in terms of structure and scope. I close with a philosophical question on the exogeneity of a "correct" choice of a voting outcome, which is inherent in the current strategic-voting literature.

Chapter 2: I develop a two-stage game with individually costly political action and costless voting on a binary agenda where, in equilibrium, agents rationally cast honest votes in the voting stage. I show that a positive but sufficiently low individual cost of political action can lead to a loss in aggregate welfare for any electorate size. When the individual cost of political action is lower than the signalling gain, agents will engage in informative political action. In the voting stage, since everyone's signal is revealed, agents will unanimously vote for the same policy; the result of the ballot is therefore exactly the same as without prior communication, but with the additional aggregate cost of political action. However, when agents have heterogeneous prior beliefs, society is large, and the state of the world is sufficiently uncertain, a moderate individual cost of political action can induce informative collective action by only a subset of the members of society, which increases ex ante aggregate welfare relative to no political action. The size of this subset depends on the dispersion of prior opinions.

Chapter 3: This chapter shows theoretically that hearing expert opinions can be a double-edged sword for decision-making committees. We study a majoritarian voting game of common interest where committee members receive not only private information, but also expert information that is more accurate than private information and observed by all members. We identify three types of equilibria of interest: i) the symmetric mixed-strategy equilibrium, where each member randomizes between following the private and public signals should they disagree; ii) the asymmetric pure-strategy equilibrium, where a certain number of members always follow the public signal while the others always follow the private signal; and iii) a class of equilibria where a supermajority, and hence the committee decision, always follows the expert signal. We find that in the first two equilibria the expert signal is collectively taken into account in a way that enhances the efficiency (accuracy) of the committee decision, and a fortiori the CJT holds. In the third type of equilibria, however, private information is not reflected in the committee decision, and the efficiency of the committee decision is identical to that of the public information alone, which may well be lower than the efficiency the committee could achieve without expert information. In other words, the introduction of expert information might reduce efficiency in equilibrium.

Chapter 4: In this chapter we present experimental results on the theory of the previous chapter. In the laboratory, too many subjects voted according to the expert information compared with the predictions of the efficient equilibria. The majority decisions followed the expert signal most of the time, which is consistent with the class of obedient equilibria mentioned in the previous chapter. Another interesting finding is the marked heterogeneity in voting behaviour. We argue that the voters' behaviour in our data is best described as that of an obedient equilibrium in which a supermajority (and hence the decision) always follows the expert signal, so that no voter is pivotal. A large efficiency loss manifests itself in the presence of expert information when the committee is large. We suggest that it may be desirable for expert information to be revealed only to a subset of committee members. Finally, in the Appendix we describe a new alternative method for producing the signal matrix of the game.

Chapter 5: There is a significant gap between the theoretical predictions and the empirical evidence on the efficiency of policies in reducing crime rates. This chapter argues that one important reason is that the current economics-of-crime literature overlooks an important hysteresis effect in criminal behaviour. One important consequence of hysteresis is that positive and negative exogenous variations in the determining variables affect the outcome variable with different magnitudes. We present a simple model that characterises hysteresis at both the micro and macro levels. When the probability of punishment decreases, some law-abiding agents will find it more beneficial to enter a criminal career; if the probability of punishment returns to its original level, a subset of these agents will continue their career in crime. We show that, when the crime choice exhibits weak hysteresis at the individual level, the crime rate in a society consisting of a continuum of agents following any non-uniform distribution will exhibit strong hysteresis. Only when punishment is extremely severe does the effect of hysteresis cease to exist. The theoretical predictions corroborate the argument that policy makers should be more inclined to set pre-emptive policies rather than mitigating measures.
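The efficiency comparison in Chapters 3 and 4 can be illustrated numerically: in an obedient equilibrium the decision accuracy equals the expert signal's accuracy q, while sincere majority voting over private signals of accuracy p can exceed q in large committees (the Condorcet logic). The accuracies and committee sizes below are illustrative assumptions, not the experiment's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def sincere_accuracy(n, p, n_trials=20000):
    # Each member votes their private signal (correct with probability p);
    # simple majority decides. Odd n avoids ties.
    correct_votes = rng.binomial(n, p, n_trials)
    return np.mean(correct_votes > n / 2)

p, q = 0.6, 0.75   # private-signal vs expert-signal accuracy
for n in (3, 11, 51):
    # In the obedient equilibrium the decision mirrors the expert: accuracy q.
    print(f"n={n:2d}  sincere={sincere_accuracy(n, p):.3f}  obedient={q:.3f}")
```

For small committees obedience beats sincere voting, but at n = 51 sincere aggregation is clearly more accurate than q, illustrating the efficiency loss from everyone following the expert.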
Styles APA, Harvard, Vancouver, ISO, etc.
43

Beisler, Matthias Werner. « Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects ». Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.

Texte intégral
Résumé :
The design of hydropower projects requires a comprehensive planning process in order to maximise exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplates a wide range of influencing design factors and ensures appropriate consideration of all related aspects. Since the majority of the technical and economic parameters required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analysis is the lack of objectivity in the selection of input parameters; moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the full range of uncertainties resulting from the information deficit of the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised to determine the input parameters relevant for the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
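A toy sketch of the core idea, assuming a deliberately simplified revenue function: design inputs enter as focal intervals with basic probability masses (a finite random set), and interval arithmetic propagates them to lower and upper bounds on expected annual revenue. The intervals, masses, efficiency, and capacity factor are all illustrative, not values from the thesis.

```python
import itertools

# Focal intervals (lo, hi) with basic probability masses for each input.
discharge = [((18.0, 22.0), 0.6), ((15.0, 25.0), 0.4)]    # design flow, m^3/s
head      = [((95.0, 105.0), 0.7), ((90.0, 110.0), 0.3)]  # hydraulic head, m
price     = [((40.0, 60.0), 1.0)]                         # energy price, EUR/MWh

MW_PER_QH = 1000 * 9.81 * 0.9 / 1e6   # rho * g * efficiency, MW per (m^3/s * m)
HOURS = 8760 * 0.5                    # assumed annual capacity factor

def revenue(q, h, p):
    # Annual revenue in EUR; monotone increasing in all three inputs.
    return MW_PER_QH * q * h * HOURS * p

# Cartesian combination of focal elements -> focal intervals on revenue.
focal = []
for (q_iv, mq), (h_iv, mh), (p_iv, mp) in itertools.product(discharge, head, price):
    lo = revenue(q_iv[0], h_iv[0], p_iv[0])   # by monotonicity, interval
    up = revenue(q_iv[1], h_iv[1], p_iv[1])   # endpoints map to endpoints
    focal.append(((lo, up), mq * mh * mp))

# Lower/upper expected revenue (belief/plausibility-style bounds).
e_lo = sum(m * iv[0] for iv, m in focal)
e_up = sum(m * iv[1] for iv, m in focal)
print(f"expected annual revenue in [{e_lo/1e6:.2f}, {e_up/1e6:.2f}] MEUR")
```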
Styles APA, Harvard, Vancouver, ISO, etc.
44

Woodard, Roger. « Bayesian hierarchical models for hunting success rates / ». free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9951135.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

盧德成 et Tak-shing Lo. « Two-body operators and correlation crystal field models ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210922.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

Wong, Po-shing. « Some mixture models for the joint distribution of stock's return and trading volume / ». [Hong Kong] : University of Hong Kong, 1991. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13009485.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Jackson, Zara. « Basal Metabolic Rate (BMR) estimation using Probabilistic Graphical Models ». Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-384629.

Texte intégral
Résumé :
Obesity is a growing problem globally. Currently 2.3 billion adults are overweight, and this number is rising. The most common method for weight loss is calorie counting, in which, to lose weight, a person should be in a calorie deficit. Basal Metabolic Rate accounts for the majority of the calories a person burns in a day, and it is therefore a major contributor to accurate calorie counting. This paper uses a Dynamic Bayesian Network to estimate Basal Metabolic Rate (BMR) for a sample of 219 individuals from all Body Mass Index (BMI) categories. The data were collected through the Lifesum app. The estimated BMR values were compared with the commonly used Harris-Benedict equation, finding that food journaling is a sufficient method to estimate BMR. Next-day weight prediction was also computed based on the estimated BMR. The results showed that the Harris-Benedict equation produced more accurate predictions than the proposed metabolic model; therefore, more work is necessary to find a model that accurately estimates BMR.
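To make the comparison concrete, the sketch below contrasts the revised Harris-Benedict equation with a crude journal-based estimate built on the roughly 7700 kcal/kg energy-balance rule, as a simple stand-in for the thesis's Dynamic Bayesian Network; all personal data are simulated placeholders.

```python
import numpy as np

def harris_benedict(weight_kg, height_cm, age_yr, male=True):
    # Revised Harris-Benedict equation (kcal/day).
    if male:
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_yr
    return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_yr

def journal_bmr(intake_kcal, weights_kg, kcal_per_kg=7700.0):
    # Energy balance: daily weight change ~ (intake - expenditure) / 7700,
    # so a constant-expenditure estimate is the mean of
    # intake - dweight * 7700. This proxies total expenditure, not BMR alone.
    dw = np.diff(weights_kg)             # dw[t] is the change after day t
    return float(np.mean(intake_kcal - dw * kcal_per_kg))

# Simulated 14-day food journal; true daily expenditure set to 2050 kcal.
intake = np.array([2100, 1900, 2200, 2000, 1800, 2500, 2300,
                   2000, 1900, 2100, 2200, 1800, 2000, 2100], dtype=float)
weights = np.concatenate(([82.0], 82.0 + np.cumsum((intake - 2050.0) / 7700.0)))

print("Harris-Benedict:", round(harris_benedict(82, 180, 35)), "kcal/day")
print("Journal-based estimate:", round(journal_bmr(intake, weights)), "kcal/day")
```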
Styles APA, Harvard, Vancouver, ISO, etc.
48

Ho, Man Wai. « Bayesian inference for models with monotone densities and hazard rates / ». View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ISMT%202002%20HO.

Texte intégral
Résumé :
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 110-114). Also available in electronic version. Access restricted to campus users.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Chung, Wanyu. « Three essays in international economics : invoicing currency, exchange rate pass-through and gravity models with trade in intermediate goods ». Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/66297/.

Texte intégral
Résumé :
A large proportion of international trade is in intermediate goods. The implications of this empirical regularity, however, have not been fully explored in several respects. The main objective of the thesis is to fill this gap by introducing trade in intermediate goods into several strands of the international economics literature. This thesis is a collection of three essays studying the implications of trade in intermediate goods for the degree of exchange rate pass-through (Chapter 2), firms' invoicing currency choice (Chapter 3) and the performance of gravity models (Chapter 4). In Chapter 2 I present a theoretical framework and show that back-and-forth trade between two countries is associated with low degrees of aggregate exchange rate pass-through. In Chapter 3 I focus instead on firm heterogeneity in the dependence on imported inputs. I show theoretically that exporters more dependent on foreign currency-denominated inputs are more likely to price in the foreign currency. I then test the theoretical prediction using an innovative and unique dataset from HM Revenue and Customs (HMRC) that covers all UK trade transactions with non-EU partners. Overall, the results strongly support the theoretical prediction. Chapter 4 is a theoretical piece of work showing how the underlying trade structure alters the predictions of gravity models. I relate gravity equations to labour shares of income. Given that these parameters are industry-specific, the results suggest that it is crucial to take them into account when the main research interest lies in sectoral differences in bilateral trade.
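For reference, the workhorse gravity equation discussed in Chapter 4 is typically estimated in log-linear form; the sketch below shows a standard specification with statsmodels. The dataset and column names are hypothetical, and the thesis's point is precisely that intermediate-goods trade alters how such equations should be specified and interpreted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bilateral trade flows with exporter/importer GDPs and distance.
df = pd.read_csv("bilateral_trade.csv")

# Standard log-linear gravity equation:
# log X_ij = b0 + b1 log GDP_i + b2 log GDP_j + b3 log dist_ij + e_ij
model = smf.ols(
    "np.log(trade) ~ np.log(gdp_exporter) + np.log(gdp_importer)"
    " + np.log(distance)",
    data=df,
).fit(cov_type="HC1")   # heteroskedasticity-robust standard errors
print(model.summary())
```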
Styles APA, Harvard, Vancouver, ISO, etc.
50

Webster, Eric Alexander Garner. « Single-Photon Avalanche Diode theory, simulation, and high performance CMOS integration ». Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/17987.

Texte intégral
Résumé :
This thesis explores Single-Photon Avalanche Diodes (SPADs), which are solid-state devices for photon timing and counting, and concentrates on SPADs integrated in nano-scale CMOS. The thesis focuses on: the search for new theory regarding Geiger-mode operation; proving the utility of calibrated Technology Computer- Aided Design (TCAD) tools for accurately simulating SPADs for the first time; the investigation of how manufacture influences device operation; and the integration of high performance SPADs into CMOS which rival discrete devices. The accepted theories of SPAD operation are revisited and it is discovered that previously neglected minority carriers have many significant roles such as determining: after-pulsing, Dark Count Rate (DCR), bipolar “SPAD latch-up,” nonequilibrium DCR, and “quenching”. The “quenching” process is revisited and it is concluded that it is the “probability time” of ≈100-200ps, and not the previously thought latching current that is important. SPADs are also found to have transient negative differential resistance. The new theories of SPADs are also supported by steady-state 1D, 2D and 3D TCAD simulations as well as novel transient simulations and videos. It is demonstrated as possible to simulate DCR, Photon Detection Efficiency (PDE), guard ring performance, breakdown voltage, breakdown voltage variation, “quenching,” and transient operation of SPADs with great accuracy. The manufacture of SPADs is studied focusing on the operation and optimisation of guard rings and it is found that ion implantation induced asymmetry from the tilt and rotation/twist is critical. Where symmetric, guard rings fail first along the <100> directions due to enhanced mobility. Process integration rules are outlined for obtaining high performance SPADs in CMOS while maintaining compatibility with transistors. The minimisation of tunnelling with lightly-doped junctions and the reduction of ion implantation induced defects by additional annealing are found essential for achieving low DCR. The thesis demonstrates that it is possible to realise high performance SPADs in CMOS through the innovation of a “Deep SPAD” which achieves record PDE of ≈72% at 560nm with >40% PDE from 410-760nm, combined with 18Hz DCR, <60ps FWHM timing resolution, and <4% after-pulsing which is demonstrated to have potential for significant further improvement. The findings suggest that CMOS SPAD-based micro-systems could outperform existing photon timing and counting solutions in the future.
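The figures of merit quoted above (PDE, DCR, dead-time effects) can be combined in a toy photon-counting Monte Carlo. The incident flux and dead time below are illustrative assumptions, and the quoted device numbers (72% PDE, 18 Hz DCR) are used only as example parameters, not as a model of the thesis's devices.

```python
import numpy as np

rng = np.random.default_rng(4)
PDE, DCR = 0.72, 18.0           # detection efficiency; dark counts per second
photon_rate = 1e5               # incident photons per second (illustrative)
dead_time = 50e-9               # seconds the SPAD is blind after a count
T = 1.0                         # observation window (s)

# Poisson arrival times of detected photons and dark events.
arrivals = np.sort(np.concatenate([
    rng.uniform(0, T, rng.poisson(photon_rate * PDE * T)),  # detected photons
    rng.uniform(0, T, rng.poisson(DCR * T)),                # dark counts
]))

# Apply dead time: drop events falling within dead_time of the last count.
counts, last = 0, -np.inf
for t in arrivals:
    if t - last >= dead_time:
        counts += 1
        last = t

print(f"registered counts in {T}s: {counts} "
      f"(ideal: {photon_rate * PDE + DCR:.0f})")
```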
Styles APA, Harvard, Vancouver, ISO, etc.
