Dissertations / Theses on the topic 'Utility theory – Mathematical models'

Consult the top 50 dissertations / theses for your research on the topic 'Utility theory – Mathematical models.'

1

Lipscomb, Clifford Allen. "Resolving the aggregation problem that plagues the hedonic pricing method." Diss., Georgia Institute of Technology, 2003. http://etd.gatech.edu/theses/available/etd-04082004-180317/unrestricted/lipscomb%5fclifford%5fa%5f200312%5fphd.pdf.

Full text
2

Cook, Victoria Tracy. "The effects of temporal uncertainty resolution on the overall utility and suspense of risky monetary and survival gambles." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75966.

Full text
Abstract:
We extend Kreps and Porteus' (1978, 1979a,b) temporal utility theory to include measures of suspense for gambles that vary in the timing of uncertainty resolution. Our $f^t$-modification (of their theory) defines overall utility and suspense in terms of two functions: a standard utility function and an iterative function whose properties determine attitude towards temporal uncertainty resolution. Suspense, which increases with the time delay to uncertainty resolution, is defined as the "variance" of the standard utilities of the outcome streams taken about our measure of overall utility (rather than about the standard mean utility). We explore the properties of our measures and their implications for the overall utility and suspense of various key examples. Two preliminary experiments are reported which give some support for our overall utility and suspense measures, and which suggest that risk and suspense are different concepts. Iteration theory is also discussed in some detail.
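
In symbols, the suspense measure described in this abstract has the schematic form (my rendering of the verbal definition, not notation taken from the thesis): for a gamble g with outcome streams x_i carried with probabilities p_i,

\[
S(g) = \sum_i p_i \,\bigl[u(x_i) - U(g)\bigr]^2 ,
\]

where u is the standard utility function and U(g) the overall utility; the "variance" is taken about U(g) rather than about the mean utility \(\sum_i p_i\,u(x_i)\).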
3

Thompson, Stephanie C. "Rational design theory: a decision-based foundation for studying design methods." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39490.

Full text
Abstract:
While design theories provide a foundation for representing and reasoning about design methods, existing design theories do not explicitly include uncertainty considerations or recognize tradeoffs between the design artifact and the design process. These limitations prevent the existing theories from adequately describing and explaining observed or proposed design methods. In this thesis, Rational Design Theory is introduced as a normative theoretical framework for evaluating prescriptive design methods. This new theory is based on a two-level perspective of design decisions in which the interactions between the artifact and the design process decisions are considered. Rational Design Theory consists of normative decision theory applied to design process decisions, and is complemented by a decision-theory-inspired conceptual model of design. The application of decision analysis to design process decisions provides a structured framework for the qualitative and quantitative evaluation of design methods. The qualitative evaluation capabilities are demonstrated in a review of the systematic design method of Pahl and Beitz. The quantitative evaluation capabilities are demonstrated in two example problems. In these two quantitative examples, Value of Information analysis is investigated as a strategy for deciding when to perform an analysis to gather additional information in support of a choice between two design concepts. Both quantitative examples demonstrate that Value of Information achieves very good results when compared to a more comprehensive decision analysis that allows for a sequence of analyses to be performed.
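
To make the Value of Information strategy concrete, here is a toy preposterior calculation of the kind the two quantitative examples rely on (discrete states and made-up utilities, purely illustrative):

```python
# Sketch of the Value of Information idea: is a (costly) analysis worth
# performing before choosing between two design concepts? The quantity
# computed below is the expected value of perfect information (EVPI), an
# upper bound on the value of any real analysis. Numbers are illustrative.

# Prior: two equally likely "states" of concept B's performance (utils).
states = {"B_good": 0.5, "B_bad": 0.5}
utility = {"A": 0.6, ("B", "B_good"): 0.9, ("B", "B_bad"): 0.2}

# Expected utility of choosing now, without further analysis.
eu_B = sum(p * utility[("B", s)] for s, p in states.items())
eu_prior = max(utility["A"], eu_B)

# Perfect-information benchmark: learn the state first, then choose.
eu_perfect = sum(p * max(utility["A"], utility[("B", s)])
                 for s, p in states.items())

voi = eu_perfect - eu_prior
print(eu_prior, eu_perfect, voi)  # perform the analysis only if its cost < voi
```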
4

Heller, Collin M. "A computational model of engineering decision making." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50272.

Full text
Abstract:
The research objective of this thesis is to formulate and demonstrate a computational framework for modeling the design decisions of engineers. This framework is intended to be descriptive in nature as opposed to prescriptive or normative; the output of the model represents a plausible result of a designer's decision making process. The framework decomposes the decision into three elements: the problem statement, the designer's beliefs about the alternatives, and the designer's preferences. Multi-attribute utility theory is used to capture designer preferences for multiple objectives under uncertainty. Machine-learning techniques are used to store the designer's knowledge and to make Bayesian inferences regarding the attributes of alternatives. These models are integrated into the framework of a Markov decision process to simulate multiple sequential decisions. The overall framework enables the designer's decision problem to be transformed into an optimization problem statement; the simulated designer selects the alternative with the maximum expected utility. Although utility theory is typically viewed as a normative decision framework, the perspective in this research is that the approach can be used in a descriptive context for modeling rational and non-time critical decisions by engineering designers. This approach is intended to enable the formalisms of utility theory to be used to design human subjects experiments involving engineers in design organizations based on pairwise lotteries and other methods for preference elicitation. The results of these experiments would substantiate the selection of parameters in the model to enable it to be used to diagnose potential problems in engineering design projects. The purpose of the decision-making framework is to enable the development of a design process simulation of an organization involved in the development of a large-scale complex engineered system such as an aircraft or spacecraft. The decision model will allow researchers to determine the broader effects of individual engineering decisions on the aggregate dynamics of the design process and the resulting performance of the designed artifact itself. To illustrate the model's applicability in this context, the framework is demonstrated on three example problems: a one-dimensional decision problem, a multidimensional turbojet design problem, and a variable fidelity analysis problem. Individual utility functions are developed for designers in a requirements-driven design problem and then combined into a multi-attribute utility function. Gaussian process models are used to represent the designer's beliefs about the alternatives, and a custom covariance function is formulated to more accurately represent a designer's uncertainty in beliefs about the design attributes.
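
A skeleton of the framework's decision kernel, as I read the abstract (structure only; the weights, beliefs and utility shapes below are illustrative, and the thesis uses Gaussian process models rather than the independent Gaussians assumed here):

```python
# Simulated designer: Gaussian beliefs about each alternative's attributes,
# an additive multi-attribute utility over outcomes, and selection of the
# alternative with maximum expected utility (estimated by Monte Carlo).
import random

def u_attr1(x): return min(max(x / 10.0, 0.0), 1.0)        # single-attribute utilities
def u_attr2(x): return min(max(1.0 - x / 5.0, 0.0), 1.0)

def mau(x1, x2, w1=0.7, w2=0.3):                           # additive MAU form
    return w1 * u_attr1(x1) + w2 * u_attr2(x2)

beliefs = {   # (mean, std) of the designer's beliefs, per alternative/attribute
    "design_A": ((6.0, 1.0), (2.0, 0.5)),
    "design_B": ((8.0, 3.0), (3.0, 1.0)),
}

def expected_utility(alt, n=10_000):
    (m1, s1), (m2, s2) = beliefs[alt]
    return sum(mau(random.gauss(m1, s1), random.gauss(m2, s2))
               for _ in range(n)) / n

best = max(beliefs, key=expected_utility)
print({a: round(expected_utility(a), 3) for a in beliefs}, "->", best)
```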
5

Monson, Christopher Kenneth. "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1292.pdf.

Full text
6

Waters, John Michael. "The Utility of Mathematical Symbols." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/52706.

Full text
Abstract:
Explanations of why mathematics is useful to empirical research focus on mathematics' role as a representation or model. On platonist accounts, the representational relation is one of structural correspondence between features of the real world and the abstract mathematical structures that represent them. Where real numbers are concerned, however, there is good reason to think the world's correspondence with systems of real number symbols, rather than the real numbers themselves, can be utilized for our representational purposes. One way this can be accomplished is through a paraphrase interpretation of real number symbols where the symbols are taken to refer directly to the things in the world real numbers are supposed to represent. A platonist account of structural correspondence between structures of real numbers and the world can be found in the foundations of measurement where a scale of real numbers is applied to quantities of physical properties like length, mass and velocity. This subject will be employed as a demonstration of how abstract real numbers, traditionally construed as modeling features of the world, are superfluous if their symbols are taken to refer directly to those features.
7

Bouzit, Abdel Madjid. "Modélisation du comportement des agriculteurs face au risque : investigations de la théorie de l'utilité dépendant des rangs." Cachan, Ecole normale supérieure, 1996. http://www.theses.fr/1996DENS0024.

Full text
Abstract:
This thesis is devoted to modelling farmers' cropping-plan decisions under risk. The mathematical risk-programming models used in agricultural production are often derived from expected utility theory (EU theory). Yet EU theory is subject to numerous empirical paradoxes, the most famous of which is the Allais paradox. Among the alternative theories, one of the most promising is rank-dependent utility theory (RDU theory). The latter postulates two functions to represent decision-makers' preferences under risk: a von Neumann-Morgenstern utility function and a probability transformation function. Within the RDU framework, we propose a procedure for eliciting preferences under risk based on decision-analysis techniques (Keeney & Raiffa, 1967) and the 'double lottery equivalent' method (Wakker & Deneffe, 1994). The procedure is applied to specify the RDU preference functionals of sixteen farmers in the Béziers region. The main estimation results are: 1) the probability transformation functions are not linear in probabilities; 2) there is agreement between the socio-economic characteristics and the behavioural characteristics of the farmers interviewed. We then generalise the formulation of mathematical risk-programming (MRP) models within the RDU framework (the MRP-RDU model). Implementing the model on three representative farms shows that farmers' actual cropping decisions are better represented by the MRP-RDU model than by the standard models derived from EU theory.
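
For reference, the rank-dependent utility functional described here takes the standard form (standard notation, not taken from the thesis): for outcomes ranked x_1 ≤ … ≤ x_n with probabilities p_i,

\[
V = \sum_{i=1}^{n}\Bigl[\,w\Bigl(\sum_{j=i}^{n} p_j\Bigr) - w\Bigl(\sum_{j=i+1}^{n} p_j\Bigr)\Bigr]\, u(x_i),
\]

where u is the von Neumann-Morgenstern utility function and w the probability transformation function; taking w linear in probabilities recovers expected utility.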
8

Almeida, Serra Costa Vitoria Pedro Miguel. "Topics on forward investment theory." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:158e9239-1385-4314-b337-3eed27c76dfc.

Full text
Abstract:
In this thesis, we study three topics in optimal portfolio selection that are relevant to the theory of forward investment performance processes. In Chapter 1, we develop a connection between the classical mean-variance optimisation and time-monotone forward performance processes for infinitesimal trading times. Namely, we consider consecutive mean-variance problems and we show that, for an appropriate choice of the corresponding mean-variance trade-off coefficients, the wealth process that is generated converges (as the trading interval goes to zero) to the optimal wealth process generated by a time-monotone forward performance process. The choice of the trade-off coefficients is made in accordance to the evolution of the risk tolerance process of the forward performance process. This result allows us to provide a fresh view on the issue of time-consistency of mean-variance analysis, for we propose a method to update mean-variance risk preferences forward in time. As a by-product, our convergence theorem generalises a result by Gyöngy (1998) on the convergence of the Euler scheme for SDEs. We also provide novel results on the Lipschitz regularity of the local risk tolerance function of forward investment performance processes. The material in this chapter is joint work with Marek Musiela and Thaleia Zariphopoulou. Chapter 2 combines forward investment theory and partial information. Specifically, we construct forward investment performance processes in models where the drift is a random variable distributed according to a known distribution. The forward performance processes we consider are of the type U(t,x) = u(t,x,R_t), where R denotes the process of cumulative excess returns, and u(t,x,z): [0,∞) × ℝ × ℝ^N ⟶ ℝ is such that u(t,·,z) is a utility function satisfying Inada's conditions. We derive the Hamilton-Jacobi-Bellman (HJB) equation for u. The HJB equation is linearised into the ill-posed heat equation; then, using the multidimensional version of Widder's theorem, we fully characterise the solutions to this equation in terms of a collection of positive measures; the result is an integral representation of the convex conjugate function of u(t,·,z). We construct several examples, and we show how these can be combined, in the dual domain, to generate mixtures of forward investment performance processes. We also show that the volatility of these processes is intrinsic, in that it is not generated by changes of numéraire/measure. In Chapter 3, we provide an extension of the Black-Litterman model to the continuous time setting. Our extension is different from, and complements that of, Frey, Gabih, and Wunderlich (2012) and Davis and Lleo (2013). Specifically, we develop a novel robust estimator of instantaneous expected returns which is continuously shrunk towards the predictions of an asset pricing theory, such as the CAPM. We derive this estimator fairly explicitly and study some of its properties. As in the Black-Litterman model, such an estimator can be used to make optimal asset allocation problems in continuous time more robust with respect to estimation errors. We provide explicit solutions to the problem of maximising expected power utility of terminal wealth, when our estimator is used to estimate the drift. As an example, we illustrate our results explicitly in the case of a multifactor model, where Arbitrage Pricing Theory predicts that alphas should be approximately zero.
9

Caccavano, Adam. "Optics and Spectroscopy in Massive Electrodynamic Theory." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1485.

Full text
Abstract:
The kinematics and dynamics for plane wave optics are derived for a massive electrodynamic field by utilizing Proca's theory. Atomic spectroscopy is also examined, with the focus on the 21 cm radiation due to the hyperfine structure of hydrogen. The modifications to Snell's Law, the Fresnel formulas, and the 21 cm radiation are shown to reduce to the familiar expressions in the limit of zero photon mass.
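
For orientation, the standard Proca Lagrangian density and the dispersion relation it implies for a photon of mass m_γ are (textbook statements, not quoted from the thesis):

\[
\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + \tfrac{1}{2}\,\frac{m_\gamma^2 c^2}{\hbar^2}\,A_\mu A^\mu ,
\qquad
\omega^2 = c^2 k^2 + \frac{m_\gamma^2 c^4}{\hbar^2}.
\]

The massive dispersion relation is what modifies Snell's law and the Fresnel formulas; the familiar expressions are recovered as m_γ → 0.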
10

Shaikh, Zain U. "Some mathematical structures arising in string theory." Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=158375.

Full text
Abstract:
This thesis is concerned with mathematical interpretations of some recent developments in string theory. All theories are considered before quantisation. The first half of the thesis investigates a large class of Lagrangians, L, that arise in the physics literature. Noether's famous theorem says that under certain conditions there is a bijective correspondence between the symmetries of L and the "conserved currents" or integrals of motion. The space of integrals of motion forms a sheaf and has a bilinear bracket operation. We show that there is a canonical sheaf $\Omega^{1,0}(J^1(\pi))$ that contains a representation of the higher Dorfman bracket. This is the first step to define a Courant algebroid structure on this sheaf. We discuss the existence of this structure, proving that, for a refined definition, we have the necessary components. The pure spinor formalism of string theory involves the addition of the algebra of pure spinors to the data of the superstring. This algebra is a Koszul algebra and, for physicists, Koszul duality is string/gauge duality. Motivated by this, we investigate the intimate relationship between a commutative Koszul algebra A and its Koszul dual, the universal enveloping algebra $U(\mathfrak{g}) = A^!$ of a graded Lie superalgebra $\mathfrak{g}$. Classically, this means we obtain the algebra of syzygies $A^S$ from the cohomology of a Lie subalgebra of $\mathfrak{g}$. We prove $H^\bullet(\mathfrak{g}_{\geq 2}; \mathbb{C}) \cong A^S$ again and extend it to the notion of k-syzygies, which we define as $H^\bullet(\mathfrak{g}_{\geq k}; \mathbb{C})$. In particular, we show that $H^\bullet_{\mathrm{Ber}}(A) \cong H^\bullet(\mathfrak{g}_{\geq 3}; \mathbb{C})$, where $H^\bullet_{\mathrm{Ber}}(A)$ is the Berkovits cohomology of A.
11

Ong, Alen Sen Kay. "Asset location decision models in life insurance." Thesis, City University London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336430.

Full text
12

Stuk, Stephen Paul. "Multivariable systems theory for Lanchester type models." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/24171.

Full text
13

Liu, Binbin, and 刘彬彬. "Some topics in risk theory and optimal capital allocation problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199291.

Full text
Abstract:
In recent years, the Markov Regime-Switching model and the class of Archimedean copulas have been widely applied to a variety of finance-related fields. The Markov Regime-Switching model can reflect the reality that the underlying economy is changing over time. Archimedean copulas are one of the most popular classes of copulas because they have closed form expressions and have great flexibility in modeling different kinds of dependencies. In the thesis, we first consider a discrete-time risk process based on the compound binomial model with regime-switching. Some general recursive formulas of the expected penalty function have been obtained. The orderings of ruin probabilities are investigated. In particular, we show that if there exists a stochastic dominance relationship between random claims at different regimes, then we can order ruin probabilities under different initial regimes. Regarding capital allocation problems, which are important areas in finance and risk management, this thesis studies the problems of optimal allocation of policy limits and deductibles when the dependence structure among risks is modeled by an Archimedean copula. By employing the concept of arrangement increasing and stochastic dominance, useful qualitative results of the optimal allocations are obtained. Then we turn our attention to a new family of risk measures satisfying a set of proposed axioms, which includes the class of distortion risk measures with concave distortion functions. By minimizing the new risk measures, we consider the optimal allocation of policy limits and deductibles problems based on the assumption that for each risk there exists an indicator random variable which determines whether the risk occurs or not. Several sufficient conditions to order the optimal allocations are obtained using tools in stochastic dominance theory.
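
For a flavor of the recursive formulas mentioned above, here is a minimal sketch for the classical single-regime compound binomial model (the thesis's regime-switching model layers a Markov chain over these parameters); all names and numbers are illustrative:

```python
# Finite-horizon ruin probabilities in the compound binomial model.
# Assumptions (mine, for illustration): premium of 1 per period, a claim
# occurs each period with probability p, integer claim sizes with pmf f,
# and ruin means the surplus becomes strictly negative.

def ruin_probability(u0, p, f, horizon):
    """P(ruin within `horizon` periods) starting from integer surplus u0."""
    max_u = u0 + horizon + 1                  # surplus grows by at most 1/period
    psi = [0.0] * (max_u + 1)                 # psi_0(u) = 0 for all u
    for _ in range(horizon):
        nxt = [0.0] * (max_u + 1)
        for u in range(max_u + 1):
            v = min(u + 1, max_u)             # surplus after premium income
            val = (1.0 - p) * psi[v]          # no claim this period
            # claim of size x: ruin if x > v, else recurse on surplus v - x
            for x, fx in f.items():
                val += p * fx * (1.0 if x > v else psi[v - x])
            nxt[u] = val
        psi = nxt
    return psi[u0]

print(ruin_probability(u0=5, p=0.3, f={1: 0.6, 2: 0.3, 4: 0.1}, horizon=200))
```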
14

Alvarez, Benjamin. "Scattering Theory for Mathematical Models of the Weak Interaction." Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0227.

Full text
Abstract:
In this work, we consider, first, mathematical models of the weak decay of the vector bosons W into leptons. The free quantum field Hamiltonian is perturbed by an interaction term from the standard model of particle physics. After the introduction of high energy and spatial cut-offs, the total quantum Hamiltonian defines a self-adjoint operator on a tensor product of Fock spaces. We study the scattering theory for such models. First, the masses of the neutrinos are supposed to be positive: for all values of the coupling constant, we prove asymptotic completeness of the wave operators. In a second model, neutrinos are treated as massless particles and we consider a simpler interaction Hamiltonian: for small enough values of the coupling constant, we prove again asymptotic completeness, using singular Mourre theory, suitable propagation estimates and the conservation of the difference of some number operators. We moreover study Hamiltonian models representing an arbitrary number of spin 1/2 fermion quantum fields interacting through arbitrary processes of creation or annihilation of particles. The fields may be massive or massless. The interaction form factors are supposed to satisfy some regularity conditions in both position and momentum space. Without any restriction on the strength of the interaction, we prove that the Hamiltonian identifies with a self-adjoint operator on a tensor product of anti-symmetric Fock spaces and we establish the existence of a ground state. Our results rely on novel interpolated Nτ estimates. They apply to models arising from the Fermi theory of weak interactions, with ultraviolet and spatial cut-offs. Finally, the removal of the spatial cut-off to define translation-invariant toy models is discussed briefly in the last chapter.
15

Tsandzana, Afonso Fernando. "Homogenization of some new mathematical models in lubrication theory." Doctoral thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-59629.

Full text
Abstract:
We consider mathematical modeling of thin film flow between two rough surfaces which are in relative motion. For example, such flows take place in different kinds of bearings and gears when a lubricant is used to reduce friction and wear between the surfaces. The mathematical foundation of lubrication theory is given by the Navier–Stokes equation, which describes the motion of viscous fluids. In thin domains several approximations are possible which lead to the so-called Reynolds equation. This equation is crucial to describe the pressure in the lubricant film. When the pressure is found it is possible to predict various important physical quantities such as friction (stresses on the bounding surfaces), load carrying capacity and velocity field. In hydrodynamic lubrication the effect of surface roughness is not negligible, because in practical situations the amplitude of the surface roughness is of the same order as the film thickness. Moreover, a perfectly smooth surface does not exist in reality due to imperfections in the manufacturing process. Therefore, any realistic lubrication model should account for the effects of surface roughness. This implies that the mathematical modeling leads to partial differential equations with coefficients that oscillate rapidly in space and time. A direct numerical computation is therefore very difficult, since an extremely dense mesh is needed to resolve the oscillations due to the surface roughness. A natural approach is to do some type of averaging. In this PhD thesis we use and develop modern homogenization theory to handle the questions above. In particular, we use, develop and apply the method based on multiple scale expansions and two-scale convergence. The thesis is based on five papers (A–E), with an appendix to paper A, and an extensive introduction, which puts these publications in a larger context. In Paper A the connection between the Stokes equation and the Reynolds equation is investigated. More precisely, the asymptotic behavior as both the film thickness and the wavelength of the roughness tend to zero is analyzed and described. Three different limit equations are derived. Time-dependent equations of Reynolds type are obtained in all three cases (Stokes roughness, Reynolds roughness and high-frequency roughness regimes). In Paper C we extend the work done in Paper A by comparing the roughness regimes through numerical computations in the stationary case. In Paper B we present a mathematical model that takes into account cavitation, surface roughness and compressibility of the fluid, and we compute the homogenized coefficients in the case of unidirectional roughness. In Paper D we derive a mathematical model of thin film flow between two close rough surfaces, which takes into account cavitation, surface roughness and pressure-dependent density; moreover, we use two-scale convergence to homogenize the model. Finally, in Paper E we prove the existence of solutions to a frequently used mathematical model of thin film flow, which takes cavitation into account.
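
For orientation, a standard incompressible form of the Reynolds equation mentioned above is (textbook form; the thesis works with rough, time-dependent and compressible variants):

\[
\nabla\cdot\Bigl(\frac{h^3}{12\mu}\,\nabla p\Bigr) = \frac{U}{2}\,\frac{\partial h}{\partial x} + \frac{\partial h}{\partial t},
\]

where h is the film thickness, p the pressure, μ the viscosity and U the relative sliding speed of the surfaces.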
16

Visarraga, Darrin Bernardo. "Heat transport models with distributed microstructure." Diss., University of Texas at Austin, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3036605.

Full text
17

De Mello, Lurion. "An investigation of the equity premium using habit utility and equity returns: Australian evidence." Thesis, Edith Cowan University, Perth, Western Australia, 2004. https://ro.ecu.edu.au/theses/808.

Full text
Abstract:
The gap between the return on stocks and the return on the risk-free assets represented by bonds is named the 'Equity Premium' or 'Equity Risk Premium'. In the history of asset pricing models, one of the most serious problems is that the average equity premium is too large to be explained by standard general equilibrium asset pricing models. Researchers have tried to use variables such as dividend yields to explain the gap between stocks and bonds, with mixed results. After retrieving around a one percent equity premium with the most standard consumption-based asset pricing models, or Lucas-styled asset pricing models, Mehra and Prescott (1985) first recognised this problem and announced it as a 'Puzzle'. In their analysis they used Lucas's (1978) standard asset pricing model, where a representative investor has additive and separable utility functions in a perfect market. Compared to other forms of utility functions, these conventional preferences derive utility in a given period from consumption in that period alone, independently of consumption in previous periods, and maintain a constant risk aversion parameter, γ, over the reasonable consumption boundaries. In this study two approaches are adopted. The first involves the commonly applied dividend yield approach to forecasting the equity premium. The results obtained from using the current and lagged dividend yield to try to capture the size and movement in the market risk premium are shown in chapter three. The results are not particularly promising. The remainder of the dissertation is devoted to a more sophisticated model: the consumption capital asset pricing model with habit derived by Campbell and Cochrane (1995) is tested using Australian data. The utility specification separates the temporal choice from state-contingent choice and in doing so resolves part of the equity premium puzzle. The model is able to generate an equity premium using consumption data that is collinear with the actual premium, but with a significantly different volatility. The conclusion is that the state and time separable model is only partly able to resolve Mehra and Prescott's (1985) equity premium puzzle.
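
The habit-utility specification tested in the second approach is, in the Campbell-Cochrane formulation (a standard statement of their model, my notation):

\[
\max\; E_0 \sum_{t=0}^{\infty} \beta^t\, \frac{(C_t - X_t)^{1-\gamma} - 1}{1-\gamma},
\qquad
S_t \equiv \frac{C_t - X_t}{C_t},
\]

where X_t is the external habit level and the surplus consumption ratio S_t moves slowly with aggregate consumption, generating counter-cyclical risk aversion and hence a large equity premium.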
18

Bougenaya, Yamina. "Fermion models on the lattice and in field theory." Thesis, Durham University, 1985. http://etheses.dur.ac.uk/7080/.

Full text
Abstract:
The first part deals with the lattice approach to field theories. The fermion doubling problems are described. This doubling can be removed if a dual lattice is introduced, as first pointed out by Stacey. His method is developed, and in the process a formalism for the construction of a covariant difference lattice operator, and thus of a gauge invariant action, is exhibited. It is shown how this formalism relates to the work of Wilson. Problems of gauge invariance can be traced back to the absence of the Leibniz rule on the lattice. To circumvent this failure the usual notion of the product is replaced by a convolution. The solutions display a complementarity: the more localised the product, the more extended is the approximation to the derivative, and vice versa. It is found that the form of the difference operator in the continuous limit dictates the formulation of the full two-dimensional supersymmetric algebra. The construction of the fields necessary to form the Wess-Zumino model follows from the requirement of anticommutativity of the supersymmetric charges. In the second part, the Skyrme model is reviewed and Bogomolnyi conditions are defined and discussed. It appears that while the Skyrme model has many satisfactory features, it fails to describe the interactions between nucleons correctly. These problems are brought out and the available solutions reviewed.
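
The doubling problem referred to above can be stated in one line (a standard result, not specific to this thesis): the naive symmetric lattice derivative

\[
(D\psi)(x) = \frac{\psi(x+a)-\psi(x-a)}{2a}
\quad\Longrightarrow\quad
\tilde{D}(p) = \frac{i}{a}\,\sin(pa),
\]

has a Fourier transform that vanishes at both p = 0 and p = π/a, so each lattice dimension doubles the number of fermion species; Stacey's dual-lattice construction is one way around this.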
19

Moore, Matthew Richard. "New mathematical models for splash dynamics." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:c94ff7f2-296a-4f13-b04b-e9696eda9047.

Full text
Abstract:
In this thesis, we derive, extend and generalise various aspects of impact theory and splash dynamics. Our methods throughout will involve isolating small parameters in our models, which we can utilise using the language of matched asymptotics. In Chapter 1 we briefly motivate the field of impact theory and outline the structure of the thesis. In Chapter 2, we give a detailed review of classical small-deadrise water entry, Wagner theory, in both two and three dimensions, highlighting the key results that we will use in our extensions of the theory. We study oblique water entry in Chapter 3, in which we use a novel transformation to relate an oblique impact with its normal-impact counterpart. This allows us to derive a wide range of solutions to both two- and three-dimensional oblique impacts, as well as discuss the limitations and breakdown of Wagner theory. We return to vertical water-entry in Chapter 4, but introduce the air layer trapped between the impacting body and the liquid it is entering. We extend the classical theory to include this air layer and in the limit in which the density ratio between the air and liquid is sufficiently small, we derive the first-order correction to the Wagner solution due to the presence of the surrounding air. The model is presented in both two dimensions and axisymmetric geometries. In Chapter 5 we move away from Wagner theory and systematically derive a series of splash jet models in order to find possible mechanisms for phenomena seen in droplet impact and droplet spreading experiments. Our canonical model is a thin jet of liquid shot over a substrate with a thin air layer trapped between the jet and the substrate. We consider a variety of parameter regimes and investigate the stability of the jet in each regime. We then use this model as part of a growing-jet problem, in which we attempt to include effects due to the jet tip. In the final chapter we summarise the main results of the thesis and outline directions for future work.
20

Zhu, Jinxia, and 朱金霞. "Ruin theory under Markovian regime-switching risk models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40203980.

Full text
21

Mao, Wen. "Essays on bargaining theory and voting behavior." Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/38561.

Full text
22

Jiao, Yue. "Mathematical models for control of probabilistic Boolean networks." Thesis, The University of Hong Kong, 2008. http://sunzi.lib.hku.hk/hkuto/record/B41508634.

Full text
23

Angoshtari, Bahman. "Stochastic modeling and methods for portfolio management in cointegrated markets." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:1ae9236c-4bf0-4d9b-a694-f08e1b8713c0.

Full text
Abstract:
In this thesis we study the utility maximization problem for assets whose prices are cointegrated, which arises from the investment practice of convergence trading and its special forms, pairs trading and spread trading. The major theme in the first two chapters of the thesis is to investigate the assumption of market-neutrality of the optimal convergence trading strategies, which is a ubiquitous assumption taken by practitioners and academics alike. This assumption lacks a theoretical justification and, to the best of our knowledge, the only relevant study is Liu and Timmermann (2013), which implies that the optimal convergence strategies are, in general, not market-neutral. We start by considering a minimalistic pairs-trading scenario with two cointegrated stocks and solve the Merton investment problem with power and logarithmic utilities. We pay special attention to when/if the stochastic control problem is well-posed, which is overlooked in the study done by Liu and Timmermann (2013). In particular, we show that the problem is ill-posed if and only if the agent's risk-aversion is less than a constant which is an explicit function of the market parameters. This condition, in turn, yields the necessary and sufficient condition for well-posedness of the Merton problem for all possible values of the agent's risk-aversion. The resulting well-posedness condition is surprisingly strict and, in particular, is equivalent to assuming the optimal investment strategy in the stocks to be market-neutral. Furthermore, it is shown that the well-posedness condition is equivalent to applying Novikov's condition to the market-price of risk, which is a ubiquitous sufficient condition for imposing absence of arbitrage. To the best of our knowledge, these are the only theoretical results supporting the assumption of market-neutrality of convergence trading strategies. We then generalise the results to the more realistic setting of multiple cointegrated assets, assuming risk factors that affect the asset returns, and general utility functions for the investor's preferences. In the process of generalising the bivariate results, we also obtain some well-posedness conditions for matrix Riccati differential equations which are, to the best of our knowledge, new. In the last chapter, we set up and justify a Merton problem that is related to spread-trading with two futures assets and assuming proportional transaction costs. The model possesses three characteristics whose combination makes it different from the existing literature on proportional transaction costs: 1) a finite time horizon, 2) multiple risky assets, and 3) a stochastic opportunity set. We introduce the HJB equation and provide rigorous arguments showing that the corresponding value function is the viscosity solution of the HJB equation. We end the chapter by devising a numerical scheme, based on the penalty method of Forsyth and Vetzal (2002), to approximate the viscosity solution of the HJB equation.
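
A minimal version of the pairs-trading setting studied in the first chapters (my schematic, simpler than the thesis's model): the spread Z between two cointegrated log-prices follows an Ornstein-Uhlenbeck process,

\[
dZ_t = -\kappa Z_t\,dt + \sigma\,dW_t, \qquad \kappa > 0,
\]

and the agent maximizes E[X_T^{1-γ}/(1-γ)] over strategies trading the spread. The well-posedness result then says that this expectation stays finite for all horizons only when the risk aversion γ exceeds an explicit threshold depending on the market parameters; below that threshold the value function blows up.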
24

Choe, Byung-Tae. "Essays on concave and homothetic utility functions." Uppsala; Stockholm: Almqvist & Wiksell International, 1991. http://catalog.hathitrust.org/api/volumes/oclc/27108685.html.

Full text
25

Wong, Tsun-yu Jeff, and 黃峻儒. "On some Parisian problems in ruin theory." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206448.

Full text
Abstract:
Traditionally, in the context of ruin theory, most judgements are made in an immediate sense. An example would be the determination of ruin, in which a business is declared broke right away when it attains a negative surplus. Another example would be the decision on dividend payment, in which a business pays dividends whenever the surplus level overshoots a certain threshold. Such a scheme of decision making is generally criticized as unrealistic from a practical point of view. The Parisian concept is therefore invoked to handle this issue. This idea is deemed more realistic since it allows a certain delay in the execution of decisions. In this thesis, the Parisian concept is utilized in two different aspects. The first one is to incorporate this concept in defining ruin, leading to the introduction of the Parisian ruin time. Under such a setting, a business is considered ruined only when the surplus level stays negative continuously for a prescribed length of time. The case of a fixed delay is considered. Both the renewal risk model and the dual renewal risk model are studied. Under a mild distributional assumption that either the inter-arrival time or the claim size is exponentially distributed (while keeping the other arbitrary), the Laplace transform of the Parisian ruin time is derived. A numerical example is performed to confirm the reasonableness of the results. The methodology in obtaining the Laplace transform of the Parisian ruin time is also demonstrated to be useful in deriving the joint distribution of the number of negative surplus periods causing or not causing Parisian ruin. The second contribution is to incorporate this concept in the decision for dividend payment. Specifically, a business only pays lump-sum dividends when the surplus level stays above a certain threshold continuously for a prescribed length of time. The cases of a fixed delay and an Erlang(n) delay are considered. The dual compound Poisson risk model is studied. The Laplace transform of the ordinary ruin time is derived. Numerical examples are performed to illustrate the results.
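
In symbols, Parisian ruin with a fixed delay d can be written as follows (a standard definition consistent with the abstract, my notation): for a surplus process U,

\[
\tau_d = \inf\{\, t > 0 : t - g_t \ge d \,\},
\qquad
g_t = \sup\{\, 0 \le s \le t : U_s \ge 0 \,\},
\]

so ruin is declared at the first time the surplus has remained strictly negative for d consecutive time units, rather than at the first instant it becomes negative.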
26

Coyle, Andrew James. "Some problems in queueing theory." Thesis, University of Adelaide, 1989. http://web4.library.adelaide.edu.au/theses/09PH/09phc8812.pdf.

Full text
27

McCloud, Nadine. "Model misspecification: theory and applications." Diss., online access via UMI, 2008.

Find full text
28

Sprumont, Yves. "Three essays in collective choice theory." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/40872.

Full text
29

Liu, Luyin, and 劉綠茵. "Analysis of some risk processes in ruin theory." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/195992.

Full text
Abstract:
In the literature of ruin theory, there have been extensive studies trying to generalize the classical insurance risk model. In this thesis, we look into two particular risk processes considering multi-dimensional risk and dependent structures respectively. The first one is a bivariate risk process with a dividend barrier, which concerns a two-dimensional risk model under a barrier strategy. Copula is used to represent the dependence between two business lines when a common shock strikes. By defining the time of ruin to be the first time that either of the two lines has its surplus level below zero, we derive a discrete approximation procedure to calculate the expected discounted dividends until ruin under such a model. A thorough discussion of application in proportional reinsurance with numerical examples is provided as well as an examination of the joint optimal dividend barrier for the bivariate process. The second risk process is a semi-Markovian dual risk process. Assuming that the dependence among innovations and waiting times is driven by a Markov chain, we analyze a quantity resembling the Gerber-Shiu expected discounted penalty function that incorporates random variables defined before and after the time of ruin, such as the minimum surplus level before ruin and the time of the first gain after ruin. General properties of the function are studied, and some exact results are derived upon distributional assumptions on either the inter-arrival times or the gain amounts. Applications in a perpetual insurance and the last inter-arrival time before ruin are given along with some numerical examples.
30

Jiao, Yue, and 焦月. "Mathematical models for control of probabilistic Boolean networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41508634.

Full text
31

Staschus, Konstantin. "Renewable energy in electric utility capacity planning: a decomposition approach with application to a Mexican utility." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53898.

Full text
Abstract:
Many electric utilities have been tapping such energy sources as wind energy or conservation for years. However, the literature shows few attempts to incorporate such non-dispatchable energy sources as decision variables into the long-range planning methodology. In this dissertation, efficient algorithms for electric utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase which quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of non-dispatchable energy sources. A probabilistic second phase needs comparatively few computer-time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The Lagrangian Dual formulation results in a subproblem which can be separated into single-year plant-mix problems that are easily solved using a breakeven analysis. The probabilistic second phase uses a Generalized Benders Decomposition approach. A depth-first Branch and Bound algorithm is superimposed on the two-phase algorithm if conventional equipment types are only available in discrete sizes. In this context, computer time savings accrued through the application of the two-phase method are crucial. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80 percent in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
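
The breakeven analysis used to solve the single-year plant-mix subproblems is essentially the classical screening-curve calculation; a toy sketch (all names and numbers illustrative, not from the dissertation):

```python
# Screening-curve / breakeven plant-mix calculation: for each utilization
# level (hours per year a unit of capacity runs), pick the technology with
# the lowest total annual cost fixed + variable * hours.

techs = {
    "peaker":   {"fixed": 60.0,  "variable": 0.12},   # $/kW-yr, $/kWh
    "mid":      {"fixed": 180.0, "variable": 0.05},
    "baseload": {"fixed": 320.0, "variable": 0.02},
}

def cheapest(hours):
    return min(techs, key=lambda t: techs[t]["fixed"] + techs[t]["variable"] * hours)

def breakeven(t1, t2):
    # Utilization hours where the two cost lines cross: f1 + v1*h = f2 + v2*h.
    a, b = techs[t1], techs[t2]
    return (b["fixed"] - a["fixed"]) / (a["variable"] - b["variable"])

print(breakeven("peaker", "mid"))      # hours above which "mid" wins
for h in (500, 3000, 7000):
    print(h, cheapest(h))              # peaker -> mid -> baseload as h grows
```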
32

Mendoza, Maria Nimfa F. "Essays in production theory : efficiency measurement and comparative statics." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/30734.

Full text
Abstract:
Nonparametric linear programming tests for consistency with the hypotheses of technical efficiency and allocative efficiency for the general case of multiple output-multiple input technologies are developed in Part I. The tests are formulated relative to three kinds of technologies: convex, constant returns to scale and quasiconcave technologies. Violation indices as summary indicators of the distance of an inefficient observation from an efficient allocation are proposed. The consistent development of the violation indices across the technical efficiency and allocative efficiency tests allows us to obtain comparative measures of the degrees of technical inefficiency and pure allocative inefficiency. Constrained optimization tests applicable to cases where the producer is restricted to optimizing with respect to a subset of goods are also proposed. The latter tests yield the revealed preference-type inequalities commonly used as tests for consistency of observed data with profit maximizing or cost minimizing behavior as limiting cases. Computer programs for implementing the different tests and sample results are listed in the appendix. In Part II, an empirical comparison of nonparametric and parametric measures of technical progress for constant returns to scale technologies is performed using the Canadian input-output data for the period 1961-1980. The original data base was aggregated into four sectors and ten goods and the comparison was done for each sector. If we assume optimizing behavior on the part of the producers, we can reinterpret the violation indices yielded by the efficiency tests in Part I as indicators of the shift in the production frontier. More precisely, the violation indices can be considered nonparametric chained indices of technical progress. The parametric measures of technical progress were obtained through econometric profit function estimation using the generalized McFadden flexible functional form with a quadratic spline model for technical progress proposed by Diewert and Wales (1989). Under the assumption of constant returns, the index of technical change is defined in terms of the unit scale profit function which gives the per unit return to the normalizing good. The empirical results show that the parametric estimates of technical change display a much smoother behavior, which can be attributed to the incorporation of stochastic disturbance terms in the estimation procedure, and, more interestingly, track the long term trend in the nonparametric estimates. Part III builds on the theory of minimum wages in international trade and is a theoretical essay in the tradition of analyzing the effects of factor market imperfections on resource allocation. The comparative static responses of the endogenous variables (output levels, employment levels of fixed-price factors with elastic supply, and flexible prices of domestic resources) to marginal changes in the economy's exogenous variables (output prices, fixed factor prices and endowments of flexibly-priced domestic resources) are examined. The effect of a change in a fixed factor price on other flexible factor prices can be decomposed, Slutsky-like, into substitution and scale effects. A symmetry condition between fixed factor prices and flexible factor prices is obtained which clarifies the concepts of "substitutability" and "complementarity" between these two kinds of factors. As an illustration, the model is applied to the case of a devaluation in a two-sector small open economy with rigid wages and capital as specific factors. The empirical implementation of the general model for the Canadian economy is left to more able econometricians, but a starting point can be the sectoral analysis performed in Part II.
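
As a flavor of this family of linear programming computations, here is a standard input-oriented, constant-returns DEA efficiency score, which is in the same spirit as (though not identical to) the thesis's nonparametric tests; data and names are illustrative:

```python
# Input-oriented, constant-returns DEA efficiency via linear programming:
# min theta s.t. sum_j lam_j x_j <= theta * x_k (inputs),
#                sum_j lam_j y_j >= y_k        (outputs), lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [6.0, 5.0]])   # inputs, one row per firm
Y = np.array([[1.0], [1.0], [2.0]])                   # outputs, one row per firm

def efficiency(k):
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.c_[-X[k].reshape(m, 1), X.T]            # input constraints
    A_out = np.c_[np.zeros((s, 1)), -Y.T]             # output constraints
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                    # theta* in (0, 1]

for k in range(3):
    print(k, round(efficiency(k), 3))
```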
33

Agi, Egemen. "Mathematical Modeling Of Gate Control Theory." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611468/index.pdf.

Full text
Abstract:
The purpose of this thesis work is to model the gate control theory, which explains the modulation of pain signals, with a motivation of finding new possible targets for pain treatment and finding novel control algorithms that can be used in engineering practice. The difference of the current study from previous modeling attempts is that the morphologies of the neurons that constitute the gate control system are also included in the model, by which the structure-function relationship can be observed. A model of an excitable neuron is constructed and the responses of the model to different perturbations are investigated. The simulation results of the excitable cell model are in good agreement with experimental findings obtained using crayfish. The model encodes stimulation intensity as firing frequency, and it can sum sub-threshold inputs and fire action potentials as real neurons do. Moreover, the model is able to predict depolarization block. The absolute refractory period of the single cell model is found to be 3.7 ms. The developed model produces no action potentials when the sodium channels are blocked by tetrodotoxin. Also, the frequency and amplitude of the generated action potentials increase when the reversal potential of Na is increased. In addition, the propagation of signals along myelinated and unmyelinated fibers is simulated, and input current intensity-frequency relationships for both types of fibers are constructed. The myelinated fiber starts to conduct when the current input is about 400 pA, whereas this minimum threshold value for the unmyelinated fiber is around 1100 pA. The propagation velocity in the 1 cm long unmyelinated fiber is found to be 0.43 m/s, whereas the velocity along a myelinated fiber of the same length is found to be 64.35 m/s. The developed synapse model exhibits the summation and tetanization properties of real synapses while simulating the time dependency of neurotransmitter concentration in the synaptic cleft. Morphometric analysis of the neurons that constitute the gate control system is performed in order to determine electrophysiological properties according to the dimensions of the neurons. All of the individual parts of the gate control system are connected and the whole system is simulated. For different connection configurations, the results of the simulations predict the observed phenomena for the suppression of pain. If the myelinated fiber is dissected, the projection neuron generates action potentials that would be conveyed to the brain and elicit pain. However, if the unmyelinated fiber is dissected, the projection neuron remains silent. In this study all of the simulations are performed using Simulink.
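
As a flavor of how stimulus intensity becomes firing frequency in such models, here is a minimal leaky integrate-and-fire sketch (far simpler than the thesis's conductance-based Simulink model; all constants illustrative):

```python
# Minimal leaky integrate-and-fire neuron: a constant input current is
# encoded as spike frequency, with no spiking below threshold. Units: mV, ms.
def spike_count(i_input, t_end=1000.0, dt=0.01):
    v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -65.0
    tau_m, r_m = 10.0, 10.0          # membrane time constant (ms), resistance
    spikes, t = 0, 0.0
    while t < t_end:
        v += dt * (-(v - v_rest) + r_m * i_input) / tau_m
        if v >= v_thresh:            # fire and reset
            spikes += 1
            v = v_reset
        t += dt
    return spikes

for i in (1.0, 2.0, 3.0, 4.0):       # stronger input -> higher firing rate
    print(i, spike_count(i), "spikes/s")
```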
34

Zhang, You-Kuan. "A quasilinear theory of time-dependent nonlocal dispersion in geologic media." Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185039.

Full text
Abstract:
A theory is presented which accounts for a particular aspect of nonlinearity caused by the deviation of plume "particles" from their mean trajectory in three-dimensional, statistically homogeneous but anisotropic porous media under an exponential covariance of log hydraulic conductivities. Quasilinear expressions for the time-dependent nonlocal dispersivity and spatial covariance tensors of ensemble mean concentration are derived, as a function of time, variance σᵧ² of log hydraulic conductivity, degree of anisotropy, and flow direction. One important difference between existing linear theories and the new quasilinear theory is that in the former transverse nonlocal dispersivities tend asymptotically to zero whereas in the latter they tend to nonzero Fickian asymptotes. Another important difference is that while all existing theories are nominally limited to situations where σᵧ² is less than 1, the quasilinear theory is expected to be less prone to error when this restriction is violated because it deals with the above nonlinearity without formally limiting σᵧ². The theory predicts a significant drop in dimensionless longitudinal dispersivity when σᵧ² is large as compared to the case where σᵧ² is small. As a consequence of this drop the real asymptotic longitudinal dispersivity, which varies in proportion to σᵧ² when σᵧ² is small, is predicted to vary as σᵧ when σᵧ² is large. The dimensionless transverse dispersivity also drops significantly at early dimensionless time when σᵧ² is large. At late time this dispersivity attains a maximum near σᵧ² = 1, varies asymptotically at a rate proportional to σᵧ² when σᵧ² is small, and appears inversely proportional to σᵧ when σᵧ² is large. The actual asymptotic transverse dispersivity varies in proportion to σᵧ⁴ when σᵧ² is small and appears proportional to σᵧ when σᵧ² is large. One of the most interesting findings is that when the mean seepage velocity vector μ is at an angle to the principal axes of statistical anisotropy, the orientation of longitudinal spread is generally offset from μ toward the direction of largest log hydraulic conductivity correlation scale. When local dispersion is active, a plume starts elongating parallel to μ. With time the long axis of the plume rotates toward the direction of largest correlation scale, then rotates back toward μ, and finally stabilizes asymptotically at a relatively small angle of deflection. Application of the theory to depth-averaged concentration data from the recent tracer experiment at Borden, Ontario, yields a consistent and improved fit without any need for parameter adjustment.
35

Li, Caiwei. "Dynamic scheduling of multiclass queueing networks." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/24339.

Full text
36

Hu, Fan. "Computation of exciton transfer in the one- and two-dimensional close-packed quantum dot arrays." Thesis, Ball State University, 2005. http://liblink.bsu.edu/uhtbin/catkey/1319543.

Full text
Abstract:
Förster theory of energy transfer applies to dilute systems, and yet it remains unknown whether it can be applied to dense media. We have studied exciton transfer in one-dimensional (1-D) close-packed pure and mixed quantum dot (QD) arrays under different models, and in the two-dimensional (2-D) perfect lattice. Our approach is based on the master equation created by treating the exciton relaxation as a stochastic process. A random parameter has been used to describe dot-to-dot distance variations. The master equation has been investigated analytically for 1-D and 2-D perfect lattices and numerically for 1-D disordered systems. The suitability of the Förster decay law for the excitation decay of a close-packed solid has been discussed. The necessity of considering the effect of interdot interactions beyond nearest neighbors has been checked.
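
A skeletal version of the stochastic-hopping picture (my construction, not the thesis's code): a nearest-neighbor master equation for exciton populations on a 1-D chain, integrated with explicit Euler steps:

```python
# Nearest-neighbour master equation dP_i/dt = W*(P_{i-1} + P_{i+1} - 2*P_i)
# - P_i / tau on a 1-D chain of quantum dots, integrated with Euler steps.
# W = hopping (transfer) rate, tau = exciton lifetime. Illustrative values.
N, W, tau = 50, 1.0, 20.0
dt, steps = 0.01, 2000
P = [0.0] * N
P[N // 2] = 1.0                        # exciton created on the middle dot

for _ in range(steps):
    nxt = []
    for i in range(N):
        left = P[i - 1] if i > 0 else 0.0
        right = P[i + 1] if i < N - 1 else 0.0
        nxt.append(P[i] + dt * (W * (left + right - 2.0 * P[i]) - P[i] / tau))
    P = nxt

print(sum(P))                          # surviving population after steps*dt
```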
Department of Physics and Astronomy
APA, Harvard, Vancouver, ISO, and other styles
37

Huang, Yun, and 黄赟. "Game-theoretic coordination and configuration of multi-level supply chains." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44904411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kim, Ji S. "Electron transport through the double quantum dots in Aharonov-Bohm rings." Virtual Press, 2005. http://liblink.bsu.edu/uhtbin/catkey/1319544.

Full text
Abstract:
We numerically investigate the total transmission probability through quantum dots (QDs) embedded in an Aharonov-Bohm (AB) ring. The QDs are formed by delta-function-like double potential barriers, and a magnetic flux threads the center of the ring. In particular, we study coupled double QDs in series and uncoupled double QDs in parallel in an AB ring. In each model, we show the total transmission probability as a function of QD size and electron incident energy, and present the transmission amplitude on the complex-energy plane. Of interest is the change and progression of Fano resonances and corresponding zero-pole pairs on the complex-energy plane with magnetic flux in the center of the ring. To accomplish this, we analytically solve the scattering matrix at each junction and the transfer matrix through the arms of the ring using the Schrödinger equation for the delta-function barriers. The total transmission probability is then obtained as a function of electron energy and magnetic flux by cascading these matrices. Finally, the solutions of the analytical equations and the graphical output of the transmission characteristics of the system are obtained numerically using Mathematica programs run on desktop computers.
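To make the matrix-cascading step concrete, here is a minimal sketch of one ingredient of such a calculation: the transmission probability of a single "dot" formed by two delta-function barriers, computed by multiplying 2x2 transfer matrices (units with hbar = 2m = 1). The ring geometry, junction scattering matrices, and AB flux of the thesis are not reproduced; barrier strength and spacing are made-up values.

```python
# A minimal sketch (illustrative, not the thesis code): transmission through a
# double delta-barrier "quantum dot" via 2x2 transfer matrices, hbar = 2m = 1.
import numpy as np

def delta_barrier(k, strength):
    # Transfer matrix of a delta barrier V(x) = strength * delta(x).
    z = strength / (2j * k)
    return np.array([[1 + z, z], [-z, 1 - z]])

def free(k, L):
    # Free propagation over a distance L between the barriers.
    return np.array([[np.exp(1j * k * L), 0], [0, np.exp(-1j * k * L)]])

def transmission(E, strength=5.0, L=1.0):
    k = np.sqrt(E)
    M = delta_barrier(k, strength) @ free(k, L) @ delta_barrier(k, strength)
    return 1.0 / abs(M[1, 1]) ** 2   # det(M) = 1, so |t|^2 = 1 / |M_22|^2

for E in (0.5, 2.0, 8.0, 9.87):      # resonances sit near k * L = n * pi
    print(E, round(transmission(E), 4))
```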
Department of Physics and Astronomy
APA, Harvard, Vancouver, ISO, and other styles
39

Yeo, Keng Leong (Actuarial Studies, Australian School of Business, UNSW). "Claim dependence in credibility models." Awarded by: University of New South Wales, School of Actuarial Studies, 2006. http://handle.unsw.edu.au/1959.4/25971.

Full text
Abstract:
Existing credibility models have mostly allowed for only one source of claim dependence: that across time for an individual insured risk or a group of homogeneous insured risks. Numerous circumstances demonstrate that this may be inadequate. In this dissertation, we develop a two-level common effects model, based loosely on the Bayesian model, which allows for two possible sources of dependence: across time for the same individual risk, and between risks. For the case of Normal common effects, we derive explicit formulas for the credibility premium. This takes the intuitive form of a weighted average of the individual risk's claims experience, the group's claims experience, and the prior mean. We also consider the use of copulas, a tool widely used in other areas of work involving dependence, in constructing credibility premiums. Specifically, we utilise copulas to model the dependence across time for an individual risk or group of homogeneous risks. We develop the construction with several well-known families of copulas and derive explicit formulas for their respective conditional expectations. Whilst some recent work has been done on constructing credibility models with copulas, explicit formulas for the conditional expectations have rarely been made available. Finally, we calibrate these copula credibility models using a real data set relating to the claims experience of workers' compensation insurance by occupation over a 7-year period for a particular state in the United States. Our results show that, for each occupation, claims dependence across time is indeed present. Amongst the copulas considered in our empirical analysis, the Cook-Johnson copula is the best fit for the data set used. The calibrated copula models are then used to predict the next period's claims, and we find that the Cook-Johnson copula model gives superior predictions. Furthermore, this calibration exercise uncovers the importance of examining the nature of the data and comparing it with the characteristics of the copulas being calibrated.
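For intuition about the "weighted average" form mentioned above, the sketch below computes a premium from three components with Bühlmann-style credibility weights. The variance parameters, data, and weighting scheme are illustrative assumptions, not the dissertation's common effects model.

```python
# A minimal sketch, not the thesis model: a premium of the intuitive form
# described above, i.e. a weighted average of the individual risk's claims
# experience, the group's experience, and the prior mean. Numbers are made up.
import numpy as np

prior_mean = 100.0
sigma2_within, tau2_between = 400.0, 25.0    # process / structural variances

individual = np.array([112.0, 95.0, 130.0, 101.0, 98.0])    # one risk
group = np.array([104.0, 99.0, 110.0, 97.0, 103.0, 101.0])  # its peer group

def credibility_factor(n):
    # Buhlmann-style factor Z = n / (n + k) with k = sigma^2 / tau^2.
    return n / (n + sigma2_within / tau2_between)

z_ind = credibility_factor(len(individual))
z_grp = (1.0 - z_ind) * credibility_factor(len(group))
premium = (z_ind * individual.mean()
           + z_grp * group.mean()
           + (1.0 - z_ind - z_grp) * prior_mean)
print(round(premium, 2))   # the three weights sum to one by construction
```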
APA, Harvard, Vancouver, ISO, and other styles
40

Kwan, Kwok-man, and 關國文. "Ruin theory under a threshold insurance risk model." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38320034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Xige. "Mathematical models of pattern formation in cell biology." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1542236214346341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chau, Ki-wai, and 周麒偉. "Fourier-cosine method for insurance risk theory." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/208586.

Full text
Abstract:
In this thesis, a systematic study is carried out for effectively approximating Gerber-Shiu functions under Lévy subordinator models, a topic hardly touched in the recent literature; our approach is via the popular Fourier-cosine method. In theory, classical Gerber-Shiu functions can be expressed as an infinite sum of convolutions, but the inherent complexity makes efficient computation almost impossible. In contrast, Fourier transforms of convolutions can be evaluated in a far simpler manner. Therefore, an efficient numerical method based on the Fourier transform is pursued in this thesis for evaluating Gerber-Shiu functions. The Fourier-cosine method is a numerical method based on the Fourier transform that has been very popular in option pricing since its introduction and has since evolved into a number of extensions; here we adopt its spirit for insurance risk theory. The proposed approximant of Gerber-Shiu functions under a Lévy subordinator model has O(n) computational complexity, in comparison with the O(n log n) of the usual numerical Fourier inversion. Also, for Gerber-Shiu functions within the proposed refined Sobolev space, an explicit error bound is given; an error bound of this type is seemingly absent in the literature. Furthermore, the error bound for our estimation can be further enhanced under extra assumptions that are not immediate from Fang and Oosterlee's works. We also suggest a robust method for estimating ruin probabilities (one special class of Gerber-Shiu functions) based on the moments of both the claim size and claim arrival distributions. A rearrangement inequality is also adopted to amplify the use of our Fourier-cosine method for ruin probabilities, resulting in an effective global estimation. Finally, the effectiveness of our results is further illustrated in a number of numerical studies; the enhanced error bound appears optimal in our demonstrations, with empirical evidence exhibiting the fastest possible error convergence rate, in agreement with our theoretical conclusion.
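As a flavour of the Fourier-cosine machinery the thesis builds on, the sketch below uses the standard COS expansion of Fang and Oosterlee to recover a density on a truncated interval from its characteristic function, with O(n) work in the number of cosine terms. The standard normal stands in for the Lévy-subordinator quantities of the thesis; the interval and term count are illustrative.

```python
# A minimal sketch of the COS idea (Fang and Oosterlee), not the thesis code:
# recover a density on [a, b] from its characteristic function phi.
import numpy as np

a, b, n = -8.0, 8.0, 64
phi = lambda u: np.exp(-0.5 * u**2)        # characteristic function of N(0,1)

k = np.arange(n)
u = k * np.pi / (b - a)
# COS coefficients F_k = 2/(b-a) * Re[ phi(u_k) * exp(-i * u_k * a) ]
F = 2.0 / (b - a) * (phi(u) * np.exp(-1j * u * a)).real
F[0] *= 0.5                                # the first term carries half weight

x = np.linspace(-3.0, 3.0, 7)
approx = F @ np.cos(np.outer(u, x - a))    # f(x) ~ sum_k F_k cos(u_k (x - a))
exact = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
print(np.max(np.abs(approx - exact)))      # truncation error is tiny here
```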
Mathematics
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
43

Henry, Eric James. "Contaminant induced flow effects in variably-saturated porous media." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/191256.

Full text
Abstract:
Dissolved organic contaminants that decrease the surface tension of water (surfactants) can have an effect on unsaturated flow through porous media due to the dependence of capillary pressure on surface tension. One and two-dimensional (1D, 2D) laboratory experiments and numerical simulations were conducted to study surfactant-induced unsaturated flow. The 1D experiments investigated differences in surfactant-induced flow as a function of contaminant mobility. The flow in a system contaminated with a high solubility, mobile surfactant, butanol, was much different than in a system contaminated with a sparingly soluble, relatively immobile surfactant, myristyl alcohol (MA). Because surface tension depression caused by MA was confined to the original source zone, the MA system was modeled using a standard unsaturated flow model (HYDRUS-1D) by assigning separate sets of hydraulic functions to the initially clean and source zones. To simulate the butanol system, HYDRUS-1D was modified to incorporate surfactant concentration-dependent changes to the moisture content-pressure head and unsaturated hydraulic conductivity functions. Following the 1D study, a two-dimensional flow cell (2.4 x 1.5 x 0.1 m) was used to investigate the infiltration of a surfactant contaminant plume from a point source on the soil surface, through the vadose zone, and toward a shallow aquifer. Above the top of the capillary fringe the advance of the surfactant solution caused a drainage front that radiated from the point source. Upon reaching the capillary fringe, the drainage front caused a localized depression of the capillary fringe and eventually a new capillary fringe height was established. Horizontal transport of surfactant in the depressed capillary fringe caused the propagation of a wedge-shaped drainage front in the downgradient direction. The numerical model HYDRUS-2D was modified to account for surfactant concentration-dependent effects on the unsaturated hydraulic functions and was successfully used to simulate the surfactant infiltration experiment. The extensive propagation of the drying front and the effect of vadose zone drainage on contaminant breakthrough time demonstrate the potential importance of considering surface tension effects on unsaturated flow and transport in systems containing surface-active organic contaminants or in systems where surfactants are used for remediation of the vadose zone or unconfined aquifers.
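One simple way to encode the surface-tension dependence described above (an assumption on our part, not necessarily the form used in the modified HYDRUS codes) is Leverett-type scaling of the retention curve, in which the pressure head axis is scaled by the surface-tension ratio. A sketch with made-up van Genuchten parameters:

```python
# A minimal sketch (an assumption, not the dissertation's implementation):
# surfactant effect on a van Genuchten retention curve via Leverett scaling,
# i.e. capillary pressure taken proportional to surface tension.
import numpy as np

def vg_theta(h, theta_r=0.05, theta_s=0.40, alpha=2.0, n=2.5):
    # Moisture content for pressure head h [m] (h < 0 means suction).
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * Se

def theta_with_surfactant(h, sigma_ratio):
    # Same curve evaluated at h / sigma_ratio: lowering surface tension
    # lowers capillary pressure, so a fixed suction drains more water.
    return vg_theta(h / sigma_ratio)

h = -1.0                              # one metre of suction
for ratio in (1.0, 0.75, 0.5):        # clean water vs. two surfactant levels
    print(ratio, round(theta_with_surfactant(h, ratio), 3))
```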
APA, Harvard, Vancouver, ISO, and other styles
44

Mackenzie, Neil C. "The independent quadratic optimisation algorithm for the active control of noise and vibration /." Title page, contents and abstract only, 1996. http://web4.library.adelaide.edu.au/theses/09PH/09phm15742.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Coulombe, Daniel. "Voluntary income increasing accounting changes : theory and further empirical investigation." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26983.

Full text
Abstract:
This thesis presents a three-step analysis of voluntary income increasing accounting changes. We first propose a theory as to why managers would elect to modify their reporting strategy. This theory builds on research on the economic factors motivating accounting choices, since it is assumed that accounting choices are a function of political costs, managers' compensation plans, and debt constraints. Specifically, we claim that adversity motivates the manager to effect an income increasing accounting change. Secondly, the thesis proposes a theoretical analysis of the potential market responses to a change announcement. The stock price effect of a change announcement is examined as a function of investors' rational anticipations of the manager's reporting actions and of the level of information about adversity that investors may have prior to a change announcement. An empirical analysis is presented in the third step of this thesis. Our empirical findings are that: (1) change announcements, on average, have no significant impact on the market; (2) relative to the Compustat population as a whole, firms that voluntarily adopt income increasing accounting changes exhibit symptoms of financial distress, suggesting that such change announcements are associated with financial adversity; (3) firms which voluntarily adopt income increasing accounting changes tend to exhibit symptoms of financial distress one or more years prior to the change year, suggesting that change announcements tend not to be a timely source of information conveying distress to the market; and (4) there is a significant negative association between investors' proxies for prior information about adversity and the market impact of the change, especially for the subset of firms with above-average leverage, suggesting that the information content of the accounting change signal is inversely related to investors' prior information about adversity. The empirical results thus support the view that investors, at the time a change occurs, have information about the prevailing state of the world and rational anticipations with respect to the manager's reporting behavior. In this respect, the accounting change is, on average, an inconsequential signal that adds little to what investors already knew before the change announcement.
Business, Sauder School of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
46

Hinkelmann, Franziska Babette. "Algebraic theory for discrete models in systems biology." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/28509.

Full text
Abstract:
This dissertation develops algebraic theory for discrete models in systems biology. Many discrete model types can be translated into the framework of polynomial dynamical systems (PDS), that is, time- and state-discrete dynamical systems over a finite field where the transition function for each variable is given as a polynomial. This allows the use of a range of theoretical and computational tools from computer algebra, which results in a powerful computational engine for model construction, parameter estimation, and analysis methods. Formal definitions and theorems for PDS, and the concept of PDS as models of biological systems, are introduced in Section 1.3, together with brief descriptions of several methods for reverse-engineering, the process of inferring a model solely from experimental data. Constructing a model for given time-course data is a challenging problem. If the underlying dependencies of the model components are known in addition to experimental data, inferring a "good" model amounts to parameter estimation. Chapter 2 describes a parameter estimation algorithm that infers a special class of polynomials, so-called nested canalyzing functions. Models consisting of nested canalyzing functions have been shown to exhibit desirable biological properties, namely robustness and stability. The algorithm is based on the parametrization of nested canalyzing functions. To demonstrate the feasibility of the method, it is applied to the cell-cycle network of budding yeast. Several discrete model types, such as Boolean networks, logical models, and bounded Petri nets, can be translated into the framework of PDS; Chapter 3 describes how agent-based models can be translated into polynomial dynamical systems as well. Chapters 4, 5, and 6 are concerned with the analysis of complex models. Chapter 4 proposes a new method to identify steady states and limit cycles. The method relies on the fact that attractors correspond to the solutions of a system of polynomials over a finite field, a long-studied problem in algebraic geometry that can be solved efficiently by computing Gröbner bases. Chapter 5 introduces a bit-wise implementation of a Gröbner basis algorithm for Boolean polynomials, which has been incorporated into the core engine of Macaulay2. Chapter 6 discusses bistability for Boolean models formulated as polynomial dynamical systems.
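To make the Gröbner-basis idea of Chapter 4 concrete, here is a small sketch: a three-node Boolean network written as a PDS over F₂, whose steady states are the solutions of f_i(x) + x_i = 0 together with the field equations. The network itself is a made-up example, not one from the dissertation.

```python
# A minimal sketch, not the dissertation's code: steady states of a toy
# Boolean network as a polynomial system over F_2, via a Groebner basis.
from itertools import product
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')
f = [x2, x1 * x3, x1 + 1]            # transition polynomials over F_2

# Steady states satisfy f_i(x) = x_i; over F_2 subtraction equals addition.
system = [fi + xi for fi, xi in zip(f, (x1, x2, x3))]
system += [v**2 + v for v in (x1, x2, x3)]   # field equations force {0, 1}

G = groebner(system, x1, x2, x3, modulus=2, order='lex')
print(G)                             # triangular basis; fixed points read off

# Cross-check by brute force over all 2^3 states:
for state in product((0, 1), repeat=3):
    subs = dict(zip((x1, x2, x3), state))
    if all(int(fi.subs(subs)) % 2 == s for fi, s in zip(f, state)):
        print('steady state:', state)   # -> (0, 0, 1) for this toy network
```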
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
47

Choy, Siu Kai. "Statistical histogram characterization and modeling : theory and applications." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

陳幸福 and Xingfu Chen. "A ductile damage model based on endochronic theory and its application to ductile failure analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31233004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Jingxin. "Application of partial consistency for the semi-parametric models." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/441.

Full text
Abstract:
The semi-parametric model enjoys a relatively flexible structure while keeping some of the simplicity of parametric statistical analysis; hence there are abundant discussions of semi-parametric models in the literature. The concept of partial consistency was first brought up in Neyman and Scott (1948): in cases where infinitely many parameters are involved, consistent estimators are always attainable for the "structural" parameters, which are finite in number and govern infinitely many samples. Since a nonparametric model can be regarded as a parametric model with infinitely many parameters, a semi-parametric model can easily be transformed into an infinite-parameter model with some "structural" parameters. Based on this idea, we develop several new methods for estimation and model checking in semi-parametric models. Partial consistency is applied through a "local average" method: we treat the nonparametric part as piecewise constant, so that infinitely many parameters are created; the "structural" parameters are the parametric part, the model residual variance, and so on. Owing to the partial consistency phenomenon, classical statistical tools can then be applied to obtain consistent estimators for those "structural" parameters, and we can take advantage of the remaining parameters to estimate the nonparametric part. In this thesis, we take the varying coefficient model as the leading example: the estimation of the functional coefficients is discussed and related model checking methods are presented. The proposed new methods, for both estimation and testing, remarkably lessen the computational complexity, while the estimators and tests attain satisfactory asymptotic properties. The simulations we conducted also support the asymptotic results, giving relatively efficient and accurate performance. Moreover, the local average method is easy to understand and can be flexibly applied to other types of models, so further developments could build on this method. In Chapter 2, we introduce the local average method to estimate the functional coefficients in the varying coefficient model. As a typical semi-parametric model, the varying coefficient model is widely applied in many areas; it can be seen as a more flexible version of the classical linear model that works well when the regression coefficients do not stay constant. In addition, we extend the local average method to the semi-varying coefficient model, which consists of a linear part and a varying coefficient part. The estimation procedures are developed and their statistical properties investigated, and plenty of simulations and a real data application are conducted to study the performance of the proposed method. Chapter 3 applies the local average method to variance estimation, a fundamental problem in statistical modeling that plays an important role in inference for model selection and estimation; we discuss the problem in several nonparametric and semi-parametric models. The proposed method has the advantages of avoiding estimation of the nonparametric function and reducing the computational cost, and it can easily be extended to more complex settings. Asymptotic normality is established for the proposed local average estimators, and numerical simulations and a real data analysis illustrate their finite sample performance. Naturally, we move to the model checking problem in Chapter 4, again taking varying coefficient models as the example. One important and frequently asked question is whether an estimated coefficient is significant or really "varying". In the literature, the relevant hypothesis tests usually require fitting the whole model, including the nuisance coefficients; consequently, the estimation procedure can be very compute-intensive and time-consuming. We therefore propose several tests that avoid unnecessary function estimation. The proposed tests are very easy to implement, their asymptotic distributions under the null hypothesis are derived, and simulations are conducted to study their properties.
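A minimal sketch of the local average idea for a varying coefficient model, under the assumption of a single coefficient function and simulated data (the bin count and model below are illustrative, not the thesis's settings):

```python
# A minimal sketch, not the thesis code: estimate a(u) in y = a(u) * x + noise
# by treating it as piecewise constant on bins of u (the "local average").
import numpy as np

rng = np.random.default_rng(0)
n, n_bins = 2000, 20
u = rng.uniform(0.0, 1.0, n)               # index variable of the coefficient
x = rng.normal(size=n)                     # regressor
y = np.sin(2 * np.pi * u) * x + 0.3 * rng.normal(size=n)

edges = np.linspace(0.0, 1.0, n_bins + 1)
a_hat = np.full(n_bins, np.nan)
for k in range(n_bins):                    # constant-coefficient OLS per bin
    m = (u >= edges[k]) & (u < edges[k + 1])
    if m.sum() > 1:
        a_hat[k] = np.sum(x[m] * y[m]) / np.sum(x[m] ** 2)

centers = 0.5 * (edges[:-1] + edges[1:])
# Compare the estimate with the true coefficient at the bin centers:
print(np.c_[centers[:5], a_hat[:5], np.sin(2 * np.pi * centers[:5])])
```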
APA, Harvard, Vancouver, ISO, and other styles
50

Larsson, Ashley Ian. "Mathematical aspects of wave theory for inhomogeneous materials /." Title page, table of contents and summary only, 1991. http://web4.library.adelaide.edu.au/theses/09PH/09phl334.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles