Dissertations / Theses on the topic 'Prediction theory Mathematical models'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Prediction theory Mathematical models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Campbell, Alyce. "An empirical study of a financial signalling model." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26969.

Full text
Abstract:
Brennan and Kraus (1982, 1986) developed a costless signalling model which can explain why managers issue hybrid securities—convertibles (CB's) or bond-warrant packages (BW's). The model predicts that when the true standard deviation (σ) of the distribution of future firm value is unknown to the market, the firm's managers will issue a hybrid with specific characteristics such that the security's full information value is at a minimum at the firm's true σ. In this fully revealing equilibrium, market price is equal to this minimum value. In this study, first the mathematical properties of the hypothesized bond-valuation model were examined to see if specific functions could have a minimum not at σ = 0 or σ = ∞ as required for signalling. The Black-Scholes-Merton model was the valuation model chosen because of ease of use, supporting empirical evidence, and compatibility with the Brennan-Kraus model. Three different variations, developed from Ingersoll (1977a); Geske (1977, 1979) and Geske and Johnson (1984); and Brennan and Schwartz (1977, 1978), were examined. For all hybrids except senior CB's, pricing functions with a minimum can be found for plausible input parameters. However, functions with an interior maximum are also plausible. A function with a maximum cannot be used for signalling. Second, bond pricing functions for 105 hybrids were studied. The two main hypotheses were: (1) most hybrids have functions with an interior minimum; (2) market price equals minimum theoretical value. The results do not support the signalling model, although the evidence is ambiguous. For the σ range 0.05-0.70, for CB's (BW's) 15 (8) Brennan-Schwartz functions were everywhere positively sloping, 11 (2) had an interior minimum, 22 (0) were everywhere negatively sloping, and 35 (12) had an interior maximum. Market prices did lie closer to minima than maxima from the Brennan-Schwartz solutions, but the results suggest that the solution as implemented overpriced the CB's. BW's were unambiguously overpriced. With consistent overpricing, market prices would naturally lie closer to minima. Average variation in theoretical values was, however, only about 5 percent for CB's and about 10 percent for BW's. This, coupled with the shape data, suggests that firms were choosing securities with theoretical values relatively insensitive to σ rather than choosing securities to signal σ unambiguously.
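As a rough illustration of the shape analysis described above, the sketch below scans a toy Merton-style value for a bond-warrant package across σ and classifies any interior extremum. It is not the Brennan-Schwartz implementation used in the study; the strikes, warrant fraction and all other parameters are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def bs_call(V, K, r, sigma, T):
    # Black-Scholes value of a European call on firm value V with strike K.
    d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def bw_value(sigma, V=100.0, F=80.0, X=120.0, w=0.5, r=0.05, T=5.0):
    # Bond-warrant package: straight debt (firm value minus equity-as-call,
    # Merton) plus w warrants approximated as calls at a higher strike X.
    return (V - bs_call(V, F, r, sigma, T)) + w * bs_call(V, X, r, sigma, T)

sigmas = np.linspace(0.05, 0.70, 131)
values = np.array([bw_value(s) for s in sigmas])
lo, hi = values.argmin(), values.argmax()
print("interior minimum:", 0 < lo < len(sigmas) - 1, "at sigma =", round(sigmas[lo], 3))
print("interior maximum:", 0 < hi < len(sigmas) - 1, "at sigma =", round(sigmas[hi], 3))
```

Only a function whose extremum is interior (not at the edge of the σ grid) is a candidate for the signalling mechanism; with these toy parameters the scan typically reports an interior maximum, which matches the abstract's point that such shapes are also plausible.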
Sauder School of Business, Graduate
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Boye, and 扬博野. "Online auction price prediction: a Bayesian updating framework based on the feedback history." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43085830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cross, Richard J. (Richard John). "Inference and Updating of Probabilistic Structural Life Prediction Models." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19828.

Full text
Abstract:
Aerospace design requirements mandate acceptable levels of structural failure risk. Probabilistic fatigue models enable estimation of the likelihood of fatigue failure. A key step in the development of these models is the accurate inference of the probability distributions for dominant parameters. Since data sets for these inferences are of limited size, the fatigue model parameter distributions are themselves uncertain. A hierarchical Bayesian approach is adopted to account for the uncertainties in both the parameters and their distribution. Variables specifying the distribution of the fatigue model parameters are cast as hyperparameters whose uncertainty is modeled with a hyperprior distribution. Bayes' rule is used to determine the posterior hyperparameter distribution, given available data, thus specifying the probabilistic model. The Bayesian formulation provides an additional advantage by allowing the posterior distribution to be updated as new data becomes available through inspections. By updating the probabilistic model, uncertainty in the hyperparameters can be reduced, and the appropriate level of conservatism can be achieved. In this work, techniques for Bayesian inference and updating of probabilistic fatigue models for metallic components are developed. Both safe-life and damage-tolerant methods are considered. Uncertainty in damage rates, crack growth behavior, damage, and initial flaws are quantified. Efficient computational techniques are developed to perform the inference and updating analyses. The developed capabilities are demonstrated through a series of case studies.
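A minimal sketch of this kind of hierarchical Bayesian updating, assuming a normal population for a single fatigue parameter with an uncertain mean as the hyperparameter; the numbers and distributions are illustrative, not taken from the dissertation.

```python
import numpy as np
from scipy.stats import norm

# Grid over the hyperparameter mu (mean of the fatigue-parameter distribution).
mu_grid = np.linspace(9.0, 12.0, 301)
hyperprior = np.ones_like(mu_grid) / mu_grid.size   # flat hyperprior

tau = 0.30   # assumed spread of the fatigue parameter across components
s   = 0.20   # assumed scatter of log-life measurements

def update(posterior, log_life):
    # Marginal likelihood of one observation given mu: N(mu, tau^2 + s^2).
    like = norm.pdf(log_life, loc=mu_grid, scale=np.hypot(tau, s))
    post = posterior * like
    return post / post.sum()

posterior = hyperprior.copy()
for y in [10.4, 10.9, 10.6]:          # initial test data (log cycles to failure)
    posterior = update(posterior, y)
print("posterior mean of mu after tests:", (mu_grid * posterior).sum())

# A new inspection result arrives later: update the same posterior.
posterior = update(posterior, 10.2)
print("posterior mean of mu after inspection:", (mu_grid * posterior).sum())
```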
APA, Harvard, Vancouver, ISO, and other styles
4

Van Koten, Chikako. "Bayesian statistical models for predicting software effort using small datasets." University of Otago. Department of Information Science, 2007. http://adt.otago.ac.nz./public/adt-NZDU20071009.120134.

Full text
Abstract:
The need of today's society for new technology has resulted in the development of a growing number of software systems. Developing a software system is a complex endeavour that requires a large amount of time. This amount of time is referred to as software development effort. Software development effort is the sum of hours spent by all individuals involved; therefore, it is not equal to the duration of the development. Accurate prediction of the effort at an early stage of development is an important factor in the successful completion of a software system, since it enables the developing organization to allocate and manage their resources effectively. However, for many software systems, accurately predicting the effort is a challenge. Hence, a model that assists in the prediction is of active interest to software practitioners and researchers alike. Software development effort varies depending on many variables that are specific to the system, its developmental environment and the organization in which it is being developed. An accurate model for predicting software development effort can often be built specifically for the target system and its developmental environment. A local dataset of systems similar to the target system, developed in a similar environment, is then used to calibrate the model. However, such a dataset often consists of fewer than 10 software systems, causing a serious problem in the prediction, since the predictive accuracy of existing models deteriorates as the size of the dataset decreases. This research addressed this problem with a new approach using Bayesian statistics. This particular approach was chosen since the predictive accuracy of a Bayesian statistical model is not as dependent on a large dataset as that of other models. As the size of the dataset decreases to fewer than 10 software systems, the accuracy deterioration of the model is expected to be less than that of existing models. The Bayesian statistical model can also provide additional information useful for predicting software development effort, because it is also capable of selecting important variables from multiple candidates. In addition, it is parametric and produces an uncertainty estimate. This research developed new Bayesian statistical models for predicting software development effort. Their predictive accuracy was then evaluated in four case studies using different datasets, and compared with other models applicable to the same small datasets. The results have confirmed that the best new models are not only accurate but also consistently more accurate than their regression counterpart when calibrated with fewer than 10 systems. They can thus replace the regression model when using small datasets. Furthermore, one case study has shown that the best new models are more accurate than a simple model that predicts the effort by calculating the average value of the calibration data. Two case studies have also indicated that the best new models can be more accurate for some software systems than a case-based reasoning model. Since the case studies provided sufficient empirical evidence that the new models are generally more accurate than the existing models compared, in the case of small datasets, this research has produced a methodology for predicting software development effort using the new models.
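For illustration, a conjugate Bayesian linear regression on a hypothetical 8-system dataset shows how such a model yields both an effort prediction and an uncertainty estimate from a small calibration set; the data, prior and noise level are invented, not taken from the thesis.

```python
import numpy as np

# Calibration data: 8 systems (intercept column + size in KLOC -> effort
# in person-hours). Purely illustrative numbers.
X = np.array([[1, 2.1], [1, 3.0], [1, 4.2], [1, 5.5],
              [1, 6.1], [1, 7.8], [1, 9.0], [1, 9.6]])
y = np.array([420., 510., 700., 890., 950., 1210., 1380., 1500.])

sigma2 = 80.0**2                                 # assumed noise variance
prior_cov = np.diag([500.0**2, 200.0**2])        # weak prior on intercept, slope
prior_mean = np.zeros(2)

# Conjugate posterior for the weights: N(m, S).
S = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / sigma2)
m = S @ (np.linalg.inv(prior_cov) @ prior_mean + X.T @ y / sigma2)

# Predictive distribution for a new 5 KLOC system: mean plus uncertainty.
x_new = np.array([1, 5.0])
pred_mean = x_new @ m
pred_sd = np.sqrt(x_new @ S @ x_new + sigma2)
print(f"predicted effort: {pred_mean:.0f} +/- {pred_sd:.0f} person-hours")
```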
APA, Harvard, Vancouver, ISO, and other styles
5

Abbas, Kaja Moinudeen. "Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5302/.

Full text
Abstract:
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts the disease progression, including prevalence and incidence, for a geographic region and demographic composition. Public health resources, prioritized by the order of risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic at the earliest. A Bayesian network representing the outbreak of influenza and pneumonia in a geographic region is ported to a newer region with a different demographic composition. Upon analysis, the corresponding prevalence of influenza and pneumonia among the different demographic subgroups is inferred for the newer region. Bayesian reasoning coupled with a disease timeline is used to reverse engineer an influenza outbreak for a given geographic and demographic setting. The temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. In comparison to spread vaccination, prioritizing the limited vaccination resources to the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989-2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data coupled with census population data to estimate the proportion of HIV incidence among the different demographic subgroups. Demographic-based risk analysis lends to observation of a varied spectrum of HIV risk among the different demographic subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases. The methodology is presented in the context of simulated disease outbreak data for influenza. Probabilistic reasoning analysis enhances the understanding of disease progression in order to identify the critical points for surveillance, control and prevention.
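A toy version of porting a Bayesian network between regions, assuming a single age-group node and invented probabilities: the marginal prevalence follows the new demographic mix, and Bayes' rule ranks subgroup risk.

```python
import numpy as np

# P(age group) for two regions and P(influenza | age group).
# All numbers illustrative, not from the dissertation's data.
age_groups = ["child", "adult", "senior"]
p_age_region_A = np.array([0.25, 0.60, 0.15])
p_age_region_B = np.array([0.15, 0.55, 0.30])   # "newer" region, older mix
p_flu_given_age = np.array([0.12, 0.05, 0.15])

# Marginal prevalence: sum_a P(flu | a) P(a). Porting the network to region B
# amounts to swapping in that region's demographic distribution.
for name, p_age in [("A", p_age_region_A), ("B", p_age_region_B)]:
    prevalence = float(p_flu_given_age @ p_age)
    # Posterior over age group for an observed case: Bayes' rule.
    posterior = p_flu_given_age * p_age / prevalence
    risk_order = [age_groups[i] for i in np.argsort(-posterior)]
    print(f"region {name}: prevalence {prevalence:.3f}, risk ranking {risk_order}")
```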
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Ni. "Statistical Learning in Logistics and Manufacturing Systems." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11457.

Full text
Abstract:
This thesis focuses on developing statistical methodology in reliability and quality engineering to assist decision-making at the enterprise, process, and product levels. In Chapter II, we propose a multi-level statistical modeling strategy to characterize data from spatial logistics systems. The model can support business decisions at different levels. The information available from higher hierarchies is incorporated into the multi-level model as constraint functions for lower hierarchies. The key contributions include proposing top-down multi-level spatial models which improve the estimation accuracy at lower levels, and applying spatial smoothing techniques to solve facility location problems in logistics. In Chapter III, we propose methods for modeling system service reliability in a supply chain, which may be disrupted by uncertain contingent events. This chapter applies an approximation technique for developing first-cut reliability analysis models. The approximation relies on multi-level spatial models to characterize patterns of store locations and demands. The key contributions in this chapter are to bring statistical spatial modeling techniques to approximate store location and demand data, and to build system reliability models entertaining various scenarios of DC location designs and DC capacity constraints. Chapter IV investigates the power law process, which has proved to be a useful tool in characterizing the failure process of repairable systems. This chapter presents a procedure for detecting and estimating a mixture of conforming and nonconforming systems. The key contributions in this chapter are to investigate the property of parameter estimation in mixture repair processes, and to propose an effective way to screen out nonconforming products. The key contributions in Chapter V are to propose a new method to analyze heavily censored accelerated life testing data, and to study its asymptotic properties. This approach flexibly and rigorously incorporates distribution assumptions and regression structures into estimating equations in a nonparametric estimation framework. Derivations of the asymptotic properties of the proposed method provide an opportunity to compare its estimation quality to commonly used parametric MLE methods in the situation of mis-specified regression models.
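For the power law process of Chapter IV, the standard time-truncated maximum likelihood estimates are easy to sketch; the failure times below are invented, and the chapter's mixture-detection procedure is not reproduced.

```python
import numpy as np

# Failure times (hours) of one repairable system, observation truncated at T.
t = np.array([110., 370., 620., 810., 1060., 1300., 1480.])
T = 1500.0
n = t.size

# MLEs for the power law (Crow-AMSAA) intensity
# lam(t) = (beta/theta) * (t/theta)**(beta - 1).
beta = n / np.log(T / t).sum()
theta = T / n**(1.0 / beta)
trend = "deteriorating" if beta > 1 else "improving"
print(f"beta = {beta:.3f} ({trend}), theta = {theta:.1f} h")

# Expected number of failures by time s under the fitted process: (s/theta)**beta.
s = 2000.0
print(f"expected failures by {s:.0f} h: {(s / theta)**beta:.2f}")
```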
APA, Harvard, Vancouver, ISO, and other styles
7

Clark, Peter G. "Multi-scale modelling describing thermal behaviour of polymeric materials. Scalable lattice-Boltzmann models based upon the theory of Grmela towards refined thermal performance prediction of polymeric materials at micro and nano scales." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5768.

Full text
Abstract:
Micrometer injection moulding is a type of moulding in which moulds have geometrical design features on a micrometer scale that must be transferred to the geometry of the produced part. The difficulties encountered due to the very high shear and rapid heat transfer of these systems have motivated this investigation into the fundamental mathematics behind polymer heat transfer and associated processes. The aim is to derive models for polymer dynamics, especially heat dynamics, that are considerably less approximate than the ones used at present, and to translate this into simulation and optimisation algorithms and strategies, thereby allowing for greater control of the various polymer processing methods at micrometer scales.
APA, Harvard, Vancouver, ISO, and other styles
8

Collins, Michael. "Trust Discounting in the Multi-Arm Trust Game." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1607086117161125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Clark, Peter Graham. "Multi-scale modelling describing thermal behaviour of polymeric materials : scalable lattice-Boltzmann models based upon the theory of Grmela towards refined thermal performance prediction of polymeric materials at micro and nano scales." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5768.

Full text
Abstract:
Micrometer injection moulding is a type of moulding in which moulds have geometrical design features on a micrometer scale that must be transferred to the geometry of the produced part. The difficulties encountered due to the very high shear and rapid heat transfer of these systems have motivated this investigation into the fundamental mathematics behind polymer heat transfer and associated processes. The aim is to derive models for polymer dynamics, especially heat dynamics, that are considerably less approximate than the ones used at present, and to translate this into simulation and optimisation algorithms and strategies, thereby allowing for greater control of the various polymer processing methods at micrometer scales.
APA, Harvard, Vancouver, ISO, and other styles
10

Asiri, Aisha. "Applications of Game Theory, Tableau, Analytics, and R to Fashion Design." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2018. http://digitalcommons.auctr.edu/cauetds/146.

Full text
Abstract:
This thesis presents various models to help the fashion industry predict the profits of some products. To determine the expected performance of each product in 2016, we used tools of game theory to identify the expected value. We went further and performed a simple linear regression, using scatter plots to help us predict the performance of Prada's products. We also used the Tableau platform to visualize an overview of the products' performances. All of these tools were used to aid in finding better predictions of Prada's product performances.
APA, Harvard, Vancouver, ISO, and other styles
11

Chowdhury, Sohini Roy. "Mathematical models for prediction and optimal mitigation of epidemics." Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/3874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Caccavano, Adam. "Optics and Spectroscopy in Massive Electrodynamic Theory." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1485.

Full text
Abstract:
The kinematics and dynamics for plane wave optics are derived for a massive electrodynamic field by utilizing Proca's theory. Atomic spectroscopy is also examined, with the focus on the 21 cm radiation due to the hyperfine structure of hydrogen. The modifications to Snell's Law, the Fresnel formulas, and the 21 cm radiation are shown to reduce to the familiar expressions in the limit of zero photon mass.
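The modification the abstract refers to can be summarized by the massive (Proca) vacuum dispersion relation; a sketch of the kinematics, with m_γ denoting the photon mass:

```latex
% Vacuum dispersion relation and phase velocity for a Proca (massive) photon:
\[
  \omega^{2} = c^{2}k^{2} + \frac{m_{\gamma}^{2}c^{4}}{\hbar^{2}},
  \qquad
  v_{p} = \frac{\omega}{k}
        = c\left(1 - \frac{m_{\gamma}^{2}c^{4}}{\hbar^{2}\omega^{2}}\right)^{-1/2}.
\]
% The wave number, and hence Snell's law and the Fresnel formulas, become
% frequency dependent; every expression reduces to the Maxwell result as
% m_gamma -> 0, the limit checked in the abstract.
```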
APA, Harvard, Vancouver, ISO, and other styles
13

Shaikh, Zain U. "Some mathematical structures arising in string theory." Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=158375.

Full text
Abstract:
This thesis is concerned with mathematical interpretations of some recent develop- ments in string theory. All theories are considered before quantisation. The rst half of the thesis investigates a large class of Lagrangians, L, that arise in the physics literature. Noether's famous theorem says that under certain conditions there is a bijective correspondence between the symmetries of L and the \conserved currents" or integrals of motion. The space of integrals of motion form a sheaf and has a bilinear bracket operation. We show that there is a canonical sheaf d1;0 J1( ) that contains a representation of the higher Dorfman bracket. This is the rst step to de ne a Courant algebroid structure on this sheaf. We discuss the existence of this structure proving that, for a re ned de nition, we have the necessary components. The pure spinor formalism of string theory involves the addition of the algebra of pure spinors to the data of the superstring. This algebra is a Koszul algebra and, for physicists, Koszul duality is string/gauge duality. Motivated by this, we investigate the intimate relationship between a commutative Koszul algebra A and its graded Lie superalgebra Koszul dual to A, U(g) = A!. Classically, this means we obtain the algebra of syzygies AS from the cohomology of a Lie subalgebra of g. We prove H (g 2;C) ' AS again and extend it to the notion of k-syzygies, which we de ne as H (g k;C). In particular, we show that H B er(A) ' H (g 3;C), where H Ber(A) is the Berkovits cohomology of A.
APA, Harvard, Vancouver, ISO, and other styles
14

Gogonel, Adriana Geanina. "Statistical Post-Processing Methods And Their Implementation On The Ensemble Prediction Systems For Forecasting Temperature In The Use Of The French Electric Consumption." PhD thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00798576.

Full text
Abstract:
The objective of this thesis is to study new statistical methods for correcting temperature predictions that may be implemented on the ensemble prediction system (EPS) of Météo France, so as to improve its use for electric system management at EDF France. The EPS of Météo France we are working on contains 51 members (forecasts by time-step) and gives temperature predictions for 14 days. The thesis contains three parts: in the first one we present the EPS, implement two statistical methods improving the accuracy or the spread of the EPS, and introduce criteria for comparing results. In the second part we introduce extreme value theory and the mixture models we use to combine the model built in the first part with models fitting the distribution tails. In the third part we introduce quantile regression as another way of studying the tails of the distribution.
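As an illustration of the third part, quantile regression fits a conditional quantile by minimizing the pinball loss; the sketch below fits the 5% and 95% quantiles of synthetic forecast errors (all data invented, not Météo France output).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative data: ensemble-mean forecast vs observed temperature.
x = rng.uniform(-5, 25, 300)                 # ensemble mean (deg C)
y = x + rng.standard_t(df=4, size=300)       # observations, heavy-ish tails

def pinball(params, tau):
    # Pinball (check) loss for the linear quantile model y ~ a + b*x.
    a, b = params
    r = y - (a + b * x)
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

# Fit the 5% and 95% conditional quantiles (the distribution tails).
for tau in (0.05, 0.95):
    res = minimize(pinball, x0=[0.0, 1.0], args=(tau,), method="Nelder-Mead")
    a, b = res.x
    print(f"tau={tau:.2f}: quantile line y = {a:+.2f} + {b:.2f} x")
```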
APA, Harvard, Vancouver, ISO, and other styles
15

Stuk, Stephen Paul. "Multivariable systems theory for Lanchester type models." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/24171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Binbin, and 刘彬彬. "Some topics in risk theory and optimal capital allocation problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199291.

Full text
Abstract:
In recent years, the Markov Regime-Switching model and the class of Archimedean copulas have been widely applied to a variety of finance-related fields. The Markov Regime-Switching model can reflect the reality that the underlying economy is changing over time. Archimedean copulas are one of the most popular classes of copulas because they have closed form expressions and have great flexibility in modeling different kinds of dependencies. In the thesis, we first consider a discrete-time risk process based on the compound binomial model with regime-switching. Some general recursive formulas of the expected penalty function have been obtained. The orderings of ruin probabilities are investigated. In particular, we show that if there exists a stochastic dominance relationship between random claims at different regimes, then we can order ruin probabilities under different initial regimes. Regarding capital allocation problems, which are important areas in finance and risk management, this thesis studies the problems of optimal allocation of policy limits and deductibles when the dependence structure among risks is modeled by an Archimedean copula. By employing the concept of arrangement increasing and stochastic dominance, useful qualitative results of the optimal allocations are obtained. Then we turn our attention to a new family of risk measures satisfying a set of proposed axioms, which includes the class of distortion risk measures with concave distortion functions. By minimizing the new risk measures, we consider the optimal allocation of policy limits and deductibles problems based on the assumption that for each risk there exists an indicator random variable which determines whether the risk occurs or not. Several sufficient conditions to order the optimal allocations are obtained using tools in stochastic dominance theory.
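A minimal numerical companion to the compound binomial part: finite-horizon ruin probabilities by backward recursion, in a single regime with an invented claim distribution (the thesis treats the regime-switching generalization and expected penalty functions analytically).

```python
import numpy as np

# Compound binomial model: premium 1 per period; with probability p a claim of
# size k in {1,...,4} (pmf f) occurs. Finite-horizon ruin probability by DP.
p = 0.3
f = np.array([0.0, 0.4, 0.3, 0.2, 0.1])   # f[k] = P(claim size = k)
T = 200                                    # time horizon (periods)
U = 60                                     # surplus capped at U (absorbing approx.)

psi = np.zeros(U + 1)                      # psi_0(u) = 0: no ruin in 0 steps
for _ in range(T):
    new = np.zeros_like(psi)
    for u in range(U + 1):
        v = (1 - p) * psi[min(u + 1, U)]   # no claim this period
        for k, fk in enumerate(f):
            if fk == 0.0:
                continue
            w = u + 1 - k                  # surplus after premium and claim
            v += p * fk * (1.0 if w < 0 else psi[min(w, U)])
        new[u] = v
    psi = new

for u in (0, 2, 5, 10):
    print(f"ruin prob within {T} periods, initial surplus {u}: {psi[u]:.4f}")
```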
Statistics and Actuarial Science, Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
17

Alvarez, Benjamin. "Scattering Theory for Mathematical Models of the Weak Interaction." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0227.

Full text
Abstract:
In this work, we consider, first, mathematical models of the weak decay of the vector bosons W into leptons. The free quantum field Hamiltonian is perturbed by an interaction term from the standard model of particle physics. After the introduction of high-energy and spatial cut-offs, the total quantum Hamiltonian defines a self-adjoint operator on a tensor product of Fock spaces. We study the scattering theory for such models. First, the masses of the neutrinos are supposed to be positive: for all values of the coupling constant, we prove asymptotic completeness of the wave operators. In a second model, neutrinos are treated as massless particles and we consider a simpler interaction Hamiltonian: for small enough values of the coupling constant, we prove again asymptotic completeness, using singular Mourre theory, suitable propagation estimates and the conservation of the difference of some number operators. We moreover study Hamiltonian models representing an arbitrary number of spin-1/2 fermion quantum fields interacting through arbitrary processes of creation or annihilation of particles. The fields may be massive or massless. The interaction form factors are supposed to satisfy some regularity conditions in both position and momentum space. Without any restriction on the strength of the interaction, we prove that the Hamiltonian identifies with a self-adjoint operator on a tensor product of anti-symmetric Fock spaces and we establish the existence of a ground state. Our results rely on novel interpolated Nτ estimates. They apply to models arising from the Fermi theory of weak interactions, with ultraviolet and spatial cut-offs. Finally, the removal of the spatial cut-off to define translation-invariant toy models is discussed briefly in the last chapter.
APA, Harvard, Vancouver, ISO, and other styles
18

Tsandzana, Afonso Fernando. "Homogenization of some new mathematical models in lubrication theory." Doctoral thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-59629.

Full text
Abstract:
We consider mathematical modeling of thin film flow between two rough surfaces which are in relative motion. For example, such flows take place in different kinds of bearings and gears when a lubricant is used to reduce friction and wear between the surfaces. The mathematical foundation of lubrication theory is given by the Navier–Stokes equation, which describes the motion of viscous fluids. In thin domains several approximations are possible which lead to the so-called Reynolds equation. This equation is crucial for describing the pressure in the lubricant film. When the pressure is found it is possible to predict various important physical quantities such as friction (stresses on the bounding surfaces), load carrying capacity and velocity field. In hydrodynamic lubrication the effect of surface roughness is not negligible, because in practical situations the amplitude of the surface roughness is of the same order as the film thickness. Moreover, a perfectly smooth surface does not exist in reality due to imperfections in the manufacturing process. Therefore, any realistic lubrication model should account for the effects of surface roughness. This implies that the mathematical modeling leads to partial differential equations with coefficients that oscillate rapidly in space and time. A direct numerical computation is therefore very difficult, since an extremely dense mesh is needed to resolve the oscillations due to the surface roughness. A natural approach is to do some type of averaging. In this PhD thesis we use and develop modern homogenization theory to handle the questions above. In particular, we use, develop and apply the method based on multiple scale expansions and two-scale convergence. The thesis is based on five papers (A-E), with an appendix to Paper A, and an extensive introduction, which puts these publications in a larger context. In Paper A the connection between the Stokes equation and the Reynolds equation is investigated. More precisely, the asymptotic behavior as both the film thickness and the wavelength of the roughness tend to zero is analyzed and described. Three different limit equations are derived. Time-dependent equations of Reynolds type are obtained in all three cases (Stokes roughness, Reynolds roughness and high frequency roughness regime). In Paper C we extend the work done in Paper A by comparing the roughness regimes through numerical computations for the stationary case. In Paper B we present a mathematical model that takes into account cavitation, surface roughness and compressibility of the fluid, and we compute the homogenized coefficients in the case of unidirectional roughness. In Paper D we derive a mathematical model of thin film flow between two close rough surfaces which takes into account cavitation, surface roughness and pressure-dependent density; moreover, we use two-scale convergence to homogenize the model. Finally, in Paper E we prove the existence of solutions to a frequently used mathematical model of thin film flow which takes cavitation into account.
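For orientation, the one-dimensional stationary form of the Reynolds equation the abstract refers to, and the role homogenization plays for rough surfaces, can be sketched as:

```latex
% One-dimensional, stationary Reynolds equation for the film pressure p(x),
% film thickness h(x), viscosity mu, lower surface sliding at speed U:
\[
  \frac{d}{dx}\!\left(\frac{h^{3}}{12\,\mu}\,\frac{dp}{dx}\right)
  = \frac{U}{2}\,\frac{dh}{dx}.
\]
% With roughness, h = h_0(x) + h_1(x/\varepsilon) oscillates on the fine
% scale \varepsilon; homogenization replaces the rapidly oscillating
% coefficients by effective averaged ones in the limit \varepsilon \to 0,
% which is the type of analysis carried out in Papers A-E.
```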
APA, Harvard, Vancouver, ISO, and other styles
19

Visarraga, Darrin Bernardo. "Heat transport models with distributed microstructure." Access restricted to users with UT Austin EID, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3036605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kim, Changkyun. "Development and evaluation of traffic prediction systems." Diss., Virginia Polytechnic Institute and State University, 1994. http://scholar.lib.vt.edu/theses/available/etd-06062008-164007/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Silva, Djany Souza. "Evaluation of mathematical models to predict the dynamic viscosity of fruit juices." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14440.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The consumption of fruit juices has grown due to the convenience and practicality of finished products. According to the Associação Brasileira das Indústrias de Refrigerantes, in 2012 the annual production of fruit juices in Brazil was 987 million litres. However, to achieve greater efficiency and performance, it is necessary to know the rheological behavior of the raw materials. Among rheological properties, viscosity is widely used in industrial and academic applications, such as: a parameter for the calculation of heat and mass transfer coefficients; equipment design; cost assessment; process design; quality control of the product; and enabling an understanding of the chemical structure of raw materials. During industrial processing of fruit juices, the raw materials are subjected to variations in temperature and solids concentration that alter their viscosity. Therefore, knowledge of the combined effect of temperature and solids concentration on viscosity is essential for juice processing. In this work, literature data for eleven clarified fruit juices (mango, cherry, apple, peach, blackcurrant, pomegranate, pear, lemon, tangerine, lime and grape) at concentrations from 15.0 to 74.0 °Brix and temperatures from 278.15 to 393.15 K were modeled using empirical and semi-empirical correlations from the literature. Global and specific parameters for all studied models were obtained as functions of temperature and total soluble solids (TSS) concentration. Four equations were evaluated to calculate the activation energy in each model (linear, exponential, and polynomial of 2nd and 3rd order), using the activation energy as the specific parameter, and three modeling strategies were tested: fitting over all TSS concentrations and temperatures; over two ranges of TSS concentration; and over two ranges of temperature. The optimization strategy over TSS concentration ranges proved the most suitable. Two exponential mathematical relations based on the Arrhenius correlation were successful in predicting the dynamic viscosity of clarified fruit juices at concentrations from 17.0 to 50.1 °Brix for all temperatures studied, while Vogel's equation gave good results for concentrations from 51.0 to 66.0 °Brix. The models were validated using experimental data for clarified orange juice at low (30.7 to 50.5 °Brix) and high (54.1 to 63.5 °Brix) TSS concentrations, with excellent prediction of the dynamic viscosity.
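A small sketch of fitting an Arrhenius-type viscosity correlation of the kind evaluated in the thesis; the data points and starting values are synthetic, not the literature datasets used by the author.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (synthetic) data: dynamic viscosity (mPa.s) of a clarified
# juice at several temperatures (K), single soluble-solids concentration.
T = np.array([278.15, 293.15, 313.15, 333.15, 353.15])
mu = np.array([9.8, 5.1, 2.6, 1.5, 1.0])

R = 8.314  # J/(mol K)

def arrhenius(T, mu0, Ea):
    # mu = mu0 * exp(Ea / (R T)); Ea is the flow activation energy.
    return mu0 * np.exp(Ea / (R * T))

(mu0, Ea), _ = curve_fit(arrhenius, T, mu, p0=[1e-3, 2e4])
print(f"mu0 = {mu0:.2e} mPa.s, Ea = {Ea / 1000:.1f} kJ/mol")
print(f"predicted viscosity at 300 K: {arrhenius(300.0, mu0, Ea):.2f} mPa.s")
```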
APA, Harvard, Vancouver, ISO, and other styles
22

Williams, Robert C. "The Development of Mathematical Models for Preliminary Prediction of Highway Construction Duration." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/29483.

Full text
Abstract:
Knowledge of construction duration is pertinent to a number of project planning functions prior to detailed design development. Funding, financing, and resource allocation decisions take place early in project design development and are significantly influenced by the construction duration. Currently, there is not an understanding of the project factors having a statistically significant relationship with highway construction duration. Other industry sectors have successfully used statistical regression analysis to identify and model the project parameters related to construction duration. While the need is seen for such work in highway construction, there are very few studies which attempt to identify duration-influential parameters and their relationship with the highway construction duration. This research identifies the project factors, known early in design development, which influence highway construction duration. The factors identified are specific to their respective project types and are those factors which demonstrate a statistically-significant relationship with construction duration. This work also quantifies the relationship between the duration-influential factors and highway construction duration. The quantity, magnitude, and sign of the factor coefficients yield evidence regarding the importance of the project factor to highway construction duration. Finally, the research incorporates the duration-influential project factors and their relationship with highway construction duration into mathematical models which assist in the prediction of construction duration. Full and condensed models are presented for Full-Depth Section and Highway Improvement project types. This research uses statistical regression analysis to identify, quantify, and model these early-known, duration-influential project factors. The results of this research contribute to the body of knowledge of the sponsoring organization (Virginia Department of Transportation), the highway construction industry, and the general construction industry at large.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Bougenaya, Yamina. "Fermion models on the lattice and in field theory." Thesis, Durham University, 1985. http://etheses.dur.ac.uk/7080/.

Full text
Abstract:
The first part deals with the lattice approach to field theories. The fermion doubling problems are described. This doubling can be removed if a dual lattice is introduced, as first pointed out by Stacey. His method is developed and in the process a formalism for the construction of a covariant difference lattice operator, and thus of a gauge invariant action, is exhibited. It is shown how this formalism relates to the work of Wilson. Problems of gauge invariance can be traced back to the absence of the Leibnitz rule on the lattice. To circumvent this failure, the usual notion of the product is replaced by a convolution. The solutions display a complementarity: the more localised the product, the more extended is the approximation to the derivative, and vice versa. It is found that the form of the difference operator in the continuum limit dictates the formulation of the full two-dimensional supersymmetric algebra. The construction of the fields necessary to form the Wess-Zumino model follows from the requirement of anticommutativity of the supersymmetric charges. In the second part, the Skyrme model is reviewed and Bogomolnyi conditions are defined and discussed. It appears that while the Skyrme model has many satisfactory features, it fails to describe the interactions between nucleons correctly. These problems are brought out and the available solutions reviewed.
APA, Harvard, Vancouver, ISO, and other styles
24

Moore, Matthew Richard. "New mathematical models for splash dynamics." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:c94ff7f2-296a-4f13-b04b-e9696eda9047.

Full text
Abstract:
In this thesis, we derive, extend and generalise various aspects of impact theory and splash dynamics. Our methods throughout will involve isolating small parameters in our models, which we can utilise using the language of matched asymptotics. In Chapter 1 we briefly motivate the field of impact theory and outline the structure of the thesis. In Chapter 2, we give a detailed review of classical small-deadrise water entry, Wagner theory, in both two and three dimensions, highlighting the key results that we will use in our extensions of the theory. We study oblique water entry in Chapter 3, in which we use a novel transformation to relate an oblique impact with its normal-impact counterpart. This allows us to derive a wide range of solutions to both two- and three-dimensional oblique impacts, as well as discuss the limitations and breakdown of Wagner theory. We return to vertical water-entry in Chapter 4, but introduce the air layer trapped between the impacting body and the liquid it is entering. We extend the classical theory to include this air layer and in the limit in which the density ratio between the air and liquid is sufficiently small, we derive the first-order correction to the Wagner solution due to the presence of the surrounding air. The model is presented in both two dimensions and axisymmetric geometries. In Chapter 5 we move away from Wagner theory and systematically derive a series of splash jet models in order to find possible mechanisms for phenomena seen in droplet impact and droplet spreading experiments. Our canonical model is a thin jet of liquid shot over a substrate with a thin air layer trapped between the jet and the substrate. We consider a variety of parameter regimes and investigate the stability of the jet in each regime. We then use this model as part of a growing-jet problem, in which we attempt to include effects due to the jet tip. In the final chapter we summarise the main results of the thesis and outline directions for future work.
APA, Harvard, Vancouver, ISO, and other styles
25

Dresch, Andrea Alves Guimarães. "Método para reconhecimento de vogais e extração de parâmetros acústicos para analises forenses." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1799.

Full text
Abstract:
Forensic speaker comparison exams have complex characteristics, demanding a long time for manual analysis. A method for automatic recognition of vowels, providing feature extraction for acoustic analysis, is proposed, aiming to contribute a support tool for these exams. The proposal is based on formant measurement by LPC (Linear Predictive Coding), with frames selected by fundamental frequency detection, zero crossing rate, bandwidth and continuity, and with the clustering done by the k-means method. Experiments using samples from three different databases have shown promising results, in which the regions corresponding to five of the Brazilian Portuguese vowels were successfully located, providing visualization of a speaker's vocal tract behavior, as well as the detection of segments corresponding to target vowels.
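A compact sketch of the LPC step (autocorrelation method, root-solving, and a crude bandwidth gate); the frame here is synthetic, and clustering the resulting formant vectors with k-means, as in the proposal, is left out for brevity.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(frame, fs, order=12):
    """Estimate formants of one voiced frame: LPC via the autocorrelation
    method, then the angles of the polynomial roots give resonances."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])      # normal equations
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]                  # one root per pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bw = -fs / np.pi * np.log(np.abs(roots))           # bandwidth from radius
    keep = (freqs > 90) & (bw < 400)                   # crude formant gate
    return np.sort(freqs[keep])

# Synthetic vowel-like frame: two damped resonances near 700 and 1200 Hz.
fs = 16000
t = np.arange(480) / fs
frame = np.exp(-60 * t) * (np.sin(2 * np.pi * 700 * t)
                           + 0.6 * np.sin(2 * np.pi * 1200 * t))
print(np.round(lpc_formants(frame, fs)))
```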
APA, Harvard, Vancouver, ISO, and other styles
26

Feng, Ming-Fa. "Fault diagnosis and prediction in reciprocating air compressors by quantifying operating parameters." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39786.

Full text
Abstract:
This research introduces a new method of diagnosing the internal condition of a reciprocating air compressor. Using only measured load torques and shaft dynamics, pressures, temperatures, flow rates, leakages, and heat transfer conditions are quantified to within 5%. The load torque acting on the rotor of the machine is shown to be a function of the dynamics (instantaneous position, velocity, and acceleration) of the driving shaft, the kinematic construction, and the internal condition of the machine. If the load torque, the kinematic construction of the machine, and the dynamics of the rotor are known, then the condition of the machine can be assessed. A theoretical model is developed to describe the physical behavior of the slider-crank mechanism and the shaft system. Solution techniques, which are based on the machine construction, crankshaft dynamics, and load torque measurements, are presented to determine the machine parameters. A personal computer based system used to measure the quantities necessary to solve for the machine parameters and the quantities used to compare with calculations is also documented. The solution algorithm for multi-stage compressors is verified by decoupling the load torque contributed by each cylinder. Pressure data for a four-stage two-cylinder high pressure air compressor (HPAC) is used. Also, the mathematical model is proven feasible by using measured angular velocity of the crankshaft and direct measurements of the load torque of a single stage, single cylinder air compressor to solve for the machine parameters. With this unintrusive and nondestructive method of quantifying the operating parameters, the cylinder pressures, operating temperatures, heat transfer conditions, leakage, and power consumption of a reciprocating air compressor can be evaluated.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
27

Friedbaum, Jesse Robert. "Model Predictive Linear Control with Successive Linearization." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7063.

Full text
Abstract:
Robots have been a revolutionizing force in manufacturing in the 20th and 21st centuries, but have proven too dangerous around humans to be used in many other fields, including medicine. We describe a new control algorithm for robots, developed by the Brigham Young University Robotics and Dynamics Laboratory, that has shown potential to make robots less dangerous to humans and suitable for work in more applications. We analyze the computational complexity of this algorithm and find that it could be a feasible control even for the most complicated robots. We also show conditions on a system which guarantee local stability under this control algorithm.
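The abstract does not spell out the algorithm, so the sketch below is a generic successive-linearization controller, not the laboratory's method: at each step the nonlinear model is linearized about the current state and a finite-horizon LQR problem is solved, applying only the first input (receding horizon). The pendulum model and all weights are invented.

```python
import numpy as np

dt, g, L = 0.02, 9.81, 1.0

def f(x, u):
    # Nonlinear discrete dynamics of a pendulum, x = [theta, omega].
    return np.array([x[0] + dt * x[1],
                     x[1] + dt * (-(g / L) * np.sin(x[0]) + u)])

def linearize(x):
    # Jacobians of f at (x, u = 0): the "successive linearization" step.
    A = np.array([[1.0, dt], [-dt * (g / L) * np.cos(x[0]), 1.0]])
    B = np.array([[0.0], [dt]])
    return A, B

Q, R, N = np.diag([10.0, 1.0]), np.array([[0.1]]), 30

def lqr_first_gain(A, B):
    # Backward Riccati recursion over the horizon; return first-stage gain.
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

x = np.array([2.5, 0.0])            # start far from the target state
for _ in range(150):
    A, B = linearize(x)
    K = lqr_first_gain(A, B)
    u = float((-K @ x)[0])          # receding horizon: apply first input only
    x = f(x, u)
print("final state:", np.round(x, 4))
```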
APA, Harvard, Vancouver, ISO, and other styles
28

Zhu, Jinxia, and 朱金霞. "Ruin theory under Markovian regime-switching risk models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40203980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Xu, Peng (School of Mathematics, UNSW). "A computational model for the assessment and prediction of salinisation in irrigated areas." Awarded by: University of New South Wales, School of Mathematics, 2003. http://handle.unsw.edu.au/1959.4/23342.

Full text
Abstract:
This thesis presents the results of a computational study on salt transport and accumulation in the crop root zone. The main objective of this study is to examine the impacts of past land use on the environment and to examine the effect of irrigation water on the rising of the groundwater level and the subsequent salinity problem in a rice growing area under given climatic conditions. A special focus has been such impacts in the Wakool irrigation area, NSW, Australia. To this end, a computational model for the assessment and prediction of salinisation in agricultural areas has been developed. This modelling system consists of a land surface scheme (ALSIS) for simulating unsaturated soil moisture and moisture flux, a groundwater flow model (MODFLOW) for estimating the spatial and temporal variations of the groundwater table, a surface flow model (DAFLOW) for calculating water flow in river networks, a module for calculating solute transport in the unsaturated zone and a 3-D model (MOC3D) for simulating solute transport in groundwater, as well as a module for calculating the spatial and temporal distributions of overland flow depth during wet seasons. The modelling system uses a finite difference linked technique to form a quasi-three-dimensional model. The land surface scheme is coupled with the groundwater flow model to account for the interactions between the saturated and unsaturated zones. On the land surface, the modelling system incorporates a surface runoff model and detailed treatments of the surface energy balance, which is important in estimating the evapotranspiration, a crucial quantity in calculating the moisture and moisture fluxes in the root zone. Vertical heterogeneity of soil hydraulic properties in the soil profile has been considered. The modelling system has the flexibility of using either the Clapp and Hornberger (1978), Broadbridge and White (1988), van Genuchten (1980) or Brooks and Corey (1966) soil water retention model. Deep in the soil, the impact of groundwater table fluctuation on soil moisture and salinity in the unsaturated soil is also included. The calibration and validation of the system have been partially performed with observed groundwater levels in the Wakool irrigation area. The applications of the model to the Wakool region are made in two steps. Firstly, a one-dimensional simulation of a selected site in the Wakool irrigation area is carried out to study the possible impact of ponded irrigation on salinisation and the general features of salt movement. Secondly, a more realistic three-dimensional simulation for the entire Wakool region is performed to study the spatial and temporal variations of root zone soil salinity under the influence of past land use from 1975 to 1994. To allow the assessment and prediction of the effects of ponded rice irrigation water (which contains salt) on soil salinity in the area, several hypothetical scenarios using different qualities of water for rice irrigation are tested. To facilitate comparative analysis of different scenarios, a base case is defined, for which irrigation water is assumed to be free of salt. The simulated results show that irrigation increases overall recharge to groundwater in the Wakool irrigation area. The use of ponded irrigation for rice growing has a substantial effect on salt accumulation in the root zone and the rising of the groundwater level, indicating that irrigation at the rice bays is a major budget item for controlling the soil salinity problem in the local area.
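One of the retention models the system can use, van Genuchten (1980), is easy to sketch; the parameter values below are generic textbook values for a sandy loam, not calibrated Wakool soils.

```python
import numpy as np

def van_genuchten(psi, theta_r=0.065, theta_s=0.41, alpha=7.5, n=1.89):
    """Water content theta(psi) from the van Genuchten (1980) retention model.
    psi: suction head (m, positive); parameters are illustrative only."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(psi))**n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

for psi in (0.01, 0.1, 1.0, 10.0):
    print(f"psi = {psi:6.2f} m -> theta = {van_genuchten(psi):.3f}")
```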
APA, Harvard, Vancouver, ISO, and other styles
30

Mao, Wen. "Essays on bargaining theory and voting behavior." Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/38561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Jiao, Yue. "Mathematical models for control of probabilistic Boolean networks." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B41508634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lipscomb, Clifford Allen. "Resolving the aggregation problem that plagues the hedonic pricing method." Diss., Georgia Institute of Technology, 2003. Available online: http://etd.gatech.edu/theses/available/etd-04082004-180317/unrestricted/lipscomb%5fclifford%5fa%5f200312%5fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wong, Tsun-yu Jeff, and 黃峻儒. "On some Parisian problems in ruin theory." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206448.

Full text
Abstract:
Traditionally, in the context of ruin theory, most judgements are made in an immediate sense. An example is the determination of ruin, in which a business is declared ruined right away when its surplus becomes negative. Another example is the decision on dividend payment, in which a business pays dividends whenever the surplus level overshoots a certain threshold. Such a scheme of decision making is generally criticized as unrealistic from a practical point of view. The Parisian concept is therefore invoked to handle this issue. This idea is deemed more realistic since it allows a certain delay in the execution of decisions. In this thesis, the Parisian concept is utilized in two different aspects. The first is to incorporate this concept in defining ruin, leading to the introduction of the Parisian ruin time. Under such a setting, a business is considered ruined only when the surplus level stays negative continuously for a prescribed length of time. The case of a fixed delay is considered. Both the renewal risk model and the dual renewal risk model are studied. Under a mild distributional assumption that either the inter-arrival time or the claim size is exponentially distributed (while keeping the other arbitrary), the Laplace transform of the Parisian ruin time is derived. A numerical example is performed to confirm the reasonableness of the results. The methodology for obtaining the Laplace transform of the Parisian ruin time is also demonstrated to be useful in deriving the joint distribution of the number of negative surplus periods causing or not causing Parisian ruin. The second contribution is to incorporate this concept in the decision for dividend payment. Specifically, a business only pays lump-sum dividends when the surplus level stays above a certain threshold continuously for a prescribed length of time. The cases of a fixed delay and an Erlang(n) delay are considered. The dual compound Poisson risk model is studied. The Laplace transform of the ordinary ruin time is derived. Numerical examples are performed to illustrate the results.
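The thesis derives Laplace transforms analytically; as a purely numerical companion, here is a Monte Carlo sketch of Parisian ruin with a fixed delay d under a compound Poisson surplus (all rates and the time discretization are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)

def parisian_ruin(u, c=1.2, lam=1.0, d=3.0, horizon=100.0, dt=0.05):
    """Ruin is declared only if the surplus stays strictly negative for d
    consecutive time units. Poisson arrivals approximated by Bernoulli steps."""
    t, surplus, below_since = 0.0, float(u), None
    while t < horizon:
        t += dt
        surplus += c * dt                        # premium income
        if rng.random() < lam * dt:              # claim arrival
            surplus -= rng.exponential(1.0)      # exp(1) claim size
        if surplus < 0:
            below_since = t if below_since is None else below_since
            if t - below_since >= d:
                return True                      # Parisian ruin occurs
        else:
            below_since = None                   # excursion ended in time
    return False

for u in (0.0, 1.0, 3.0):
    p = np.mean([parisian_ruin(u) for _ in range(1000)])
    print(f"initial surplus {u}: estimated Parisian ruin prob ~ {p:.3f}")
```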
Statistics and Actuarial Science, Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
34

Miller, J. Glenn (James). "Predictive inference." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/24294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Jones, Julie Elizabeth. "A series of mathematical models of the life-cycle of the nematode Ostertagia ostertagia." Thesis, University of Exeter, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Tillman, Måns. "On-Line Market Microstructure Prediction Using Hidden Markov Models." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208312.

Full text
Abstract:
Over the last decades, financial markets have undergone dramatic changes. With the advent of arbitrage pricing theory, along with new technology, markets have become more efficient. In particular, the new high-frequency markets, with algorithmic trading operating on the micro-second level, make it possible to translate "information" into price almost instantaneously. Such phenomena are studied in the field of market microstructure theory, which aims to explain and predict them. In this thesis, we model the dynamics of high-frequency markets using non-linear hidden Markov models (HMMs). Such models feature an intuitive separation between observations and dynamics, and are therefore highly convenient tools in financial settings, where they allow a precise application of domain knowledge. HMMs can be formulated based on only a few parameters, yet their inherently dynamic nature can be used to capture well-known intra-day seasonality effects that many other models fail to explain. Due to recent breakthroughs in Monte Carlo methods, HMMs can now be efficiently estimated in real time. In this thesis, we develop a holistic framework for performing both real-time inference and learning of HMMs by combining several particle-based methods. Within this framework, we also provide methods for making accurate predictions from the model, as well as methods for assessing the model itself. A sequential Monte Carlo bootstrap filter is adopted to make on-line inference and predictions. Coupled with a backward smoothing filter, this provides a forward filtering/backward smoothing scheme. This is then used in the sequential Monte Carlo expectation-maximization algorithm for finding the optimal hyper-parameters of the model. To design an HMM specifically for capturing information translation, we adapt the observable volume imbalance to a dynamic setting. Volume imbalance has previously been used in market microstructure theory to study, for example, price impact. Through careful selection of key model assumptions, we define a slightly modified observable as a process that we call scaled volume imbalance. This process retains the key features of volume imbalance (that is, its relationship to price impact and information) and allows an efficient evaluation of the framework, while providing a promising platform for future studies. This is demonstrated through a test on actual financial trading data, where we obtain high-performance predictions. Our results demonstrate that the proposed framework can successfully be applied to the field of market microstructure.
Over the last decades, great advances have been made in financial theory for capital markets. The formulation of arbitrage theory made it possible to price financial instruments consistently. But in an era when high-frequency trading is now standard, the translation of information into price has come to happen at an ever faster pace. Market microstructure theory has emerged to study these phenomena: price impact and information translation. In this thesis we study microstructure with the help of a dynamic model. Historically, microstructure theory has focused on static models, but with non-linear hidden Markov models (HMMs) we extend this to the dynamic domain. HMMs come with a natural separation between observation and dynamics, and are designed in such a way that we can draw on domain-specific knowledge. By formulating suitable key assumptions based on traditional microstructure theory, we specify a model, with only a few parameters, that can describe the well-known seasonal behaviours that static models cannot capture. Thanks to recent breakthroughs in Monte Carlo methods, powerful tools are now available for performing optimal filtering with HMMs in real time. We apply a so-called bootstrap filter to sequentially filter the state of the model and predict future states. Together with the backward smoothing technique, we estimate the posterior joint distribution for each trading day. This is then used for statistical learning of our hyper-parameters via a sequential Monte Carlo expectation-maximization algorithm. To formulate a model that describes the translation of information, we start from volume imbalance, which is often used to study price impact. We define the related observable quantity scaled volume imbalance, which aims to retain the connection to price impact while admitting a dynamic process model that fits into the HMM framework. We also show how HMMs in general can be evaluated within this framework, and carry out this analysis for our model in particular. The model is tested against financial trading data for both futures contracts and stocks, and shows good predictive ability in both cases.
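The forward filtering step described in this abstract can be illustrated with a minimal sequential Monte Carlo bootstrap filter. The sketch below uses a generic linear-Gaussian state-space model as a stand-in, since the thesis's scaled-volume-imbalance observation model is more elaborate; all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in state-space model: x_t = a*x_{t-1} + sigma*eps_t (latent),
    # y_t = x_t + tau*nu_t (observed).  All parameter values are illustrative.
    a, sigma, tau, N, T = 0.95, 0.3, 0.5, 1000, 200

    x = np.zeros(T); y = np.zeros(T)
    for t in range(1, T):                      # simulate a synthetic data set
        x[t] = a * x[t-1] + sigma * rng.normal()
        y[t] = x[t] + tau * rng.normal()

    particles = rng.normal(size=N)             # initial particle cloud
    filt_mean = np.zeros(T)
    for t in range(T):
        particles = a * particles + sigma * rng.normal(size=N)  # propagate
        logw = -0.5 * ((y[t] - particles) / tau) ** 2           # likelihood weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        filt_mean[t] = w @ particles                            # filtered estimate
        particles = particles[rng.choice(N, size=N, p=w)]       # resample

    print("RMSE of filtered state:", np.sqrt(np.mean((filt_mean - x) ** 2)))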
APA, Harvard, Vancouver, ISO, and other styles
37

Hang, Huajiang, Engineering & Information Technology, Australian Defence Force Academy, UNSW. "Prediction of the effects of distributed structural modification on the dynamic response of structures." Awarded by: University of New South Wales - Australian Defence Force Academy, Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/44275.

Full text
Abstract:
The aim of this study is to investigate means of efficiently assessing the effects of distributed structural modification on the dynamic properties of a complex structure. A helicopter structure is normally designed to avoid resonance at the main rotor rotational frequency. However, military helicopters often have to be modified (for example, to carry a different weapon system or an additional fuel tank) to fulfil operational requirements. Any modification to a helicopter structure has the potential to change its resonance frequencies and mode shapes. The dynamic properties of the modified structure can be determined by experimental testing or numerical simulation, both of which are complex, expensive and time-consuming. Assuming that the original dynamic characteristics are already established and that the modification is a relatively simple attachment such as a beam or plate, the modified dynamic properties may be determined numerically without solving the equations of motion of the fully modified structure. The frequency response functions (FRFs) of the modified structure can be computed by coupling the original FRFs with a delta dynamic stiffness matrix for the modification introduced. The validity of this approach is investigated by applying it to several cases: (1) a 1D structure with structural modification but no change in the number of degrees of freedom (DOFs), exemplified by a simply supported beam with double thickness in the middle section; (2) a 1D structure with additional DOFs, exemplified by a cantilever beam to which a smaller beam is attached; (3) a 2D structure with a reduction in DOFs, exemplified by a four-edge-clamped plate with a cut-out in the centre; and (4) a 3D structure with additional DOFs, exemplified by a box frame with a plate attached to it, combining different structure types. The original FRFs were obtained numerically and experimentally except in the first case. The delta dynamic stiffness matrix was determined numerically by modelling the part of the modified structure comprising the modifying structure and the part of the original structure at the same location. The FRFs of the modified structure were then computed. Good agreement is obtained when comparing the results to the FRFs of the modified structure determined experimentally as well as by numerical modelling of the complete modified structure.
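The coupling step described in this abstract can be sketched numerically. Assuming the standard receptance relation in which the modified FRF matrix is (H^-1 + dZ)^-1, where H holds the original FRFs and dZ is the delta dynamic stiffness of the modification, a minimal two-DOF check (values illustrative, not from the thesis) is:

    import numpy as np

    # Two-DOF spring-mass original structure; the modification adds mass and
    # stiffness at DOF 2.  All values are illustrative, not from the thesis.
    M = np.diag([1.0, 1.0]); K = np.array([[200.0, -100.0], [-100.0, 100.0]])
    dM = np.diag([0.0, 0.2]); dK = np.array([[0.0, 0.0], [0.0, 50.0]])

    def frf(M, K, w):
        """Receptance matrix H(w) = (K - w^2 M)^(-1); undamped for brevity."""
        return np.linalg.inv(K - w**2 * M)

    w = 7.0                                   # sample frequency, rad/s
    H = frf(M, K, w)                          # original FRFs (known already)
    dZ = dK - w**2 * dM                       # delta dynamic stiffness matrix
    H_mod = np.linalg.inv(np.linalg.inv(H) + dZ)   # coupled, modified FRFs

    # check against direct solution of the fully modified structure
    print(np.allclose(H_mod, frf(M + dM, K + dK, w)))   # True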
APA, Harvard, Vancouver, ISO, and other styles
38

Chaganti, Vasanta Gayatri. "Wireless body area networks : accuracy of channel modelling and prediction." PhD thesis, Canberra, ACT : The Australian National University, 2014. http://hdl.handle.net/1885/150112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Coyle, Andrew James. "Some problems in queueing theory." Title page, contents and summary only, 1989. http://web4.library.adelaide.edu.au/theses/09PH/09phc8812.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

McCloud, Nadine. "Model misspecification: theory and applications." Diss., online access via UMI, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
41

Sprumont, Yves. "Three essays in collective choice theory." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/40872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Luyin, and 劉綠茵. "Analysis of some risk processes in ruin theory." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/195992.

Full text
Abstract:
In the literature of ruin theory, there have been extensive studies trying to generalize the classical insurance risk model. In this thesis, we look into two particular risk processes, considering multi-dimensional risks and dependence structures respectively. The first is a bivariate risk process with a dividend barrier, which concerns a two-dimensional risk model under a barrier strategy. A copula is used to represent the dependence between two business lines when a common shock strikes. Defining the time of ruin as the first time that either of the two lines has its surplus level below zero, we derive a discrete approximation procedure to calculate the expected discounted dividends until ruin under such a model. A thorough discussion of applications in proportional reinsurance is provided with numerical examples, as well as an examination of the joint optimal dividend barrier for the bivariate process. The second risk process is a semi-Markovian dual risk process. Assuming that the dependence among innovations and waiting times is driven by a Markov chain, we analyze a quantity resembling the Gerber-Shiu expected discounted penalty function that incorporates random variables defined before and after the time of ruin, such as the minimum surplus level before ruin and the time of the first gain after ruin. General properties of the function are studied, and some exact results are derived under distributional assumptions on either the inter-arrival times or the gain amounts. Applications to a perpetual insurance and to the last inter-arrival time before ruin are given, along with some numerical examples.
published_or_final_version
Statistics and Actuarial Science
Master
Master of Philosophy
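To make the barrier mechanics of the first model concrete, here is a minimal discrete-time Monte Carlo sketch of a bivariate common-shock surplus process with a dividend barrier on each line. The Gaussian copula, the exponential claim sizes and all parameter values are illustrative assumptions; the thesis derives a discrete approximation procedure rather than relying on simulation.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    # Discrete-time approximation of a bivariate surplus process with common
    # shocks; claim sizes linked by a Gaussian copula.  Premiums, barriers,
    # copula correlation and all other values are illustrative.
    dt, delta, T = 0.01, 0.03, 50.0
    u = np.array([5.0, 5.0]); c = np.array([1.1, 1.3]); b = np.array([8.0, 9.0])
    lam, rho, claim_mean = 1.0, 0.6, np.array([1.0, 1.0])
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

    def one_path():
        x, t, dividends = u.copy(), 0.0, 0.0
        while t < T:
            x = x + c * dt                       # premium income
            if rng.random() < lam * dt:          # a common shock strikes both lines
                z = L @ rng.normal(size=2)       # correlated standard normals
                x = x - claim_mean * (-np.log(1.0 - norm.cdf(z)))  # exp. claims
            paid = np.maximum(x - b, 0.0)        # barrier strategy: skim the excess
            dividends += np.exp(-delta * t) * paid.sum()
            x = np.minimum(x, b)
            if (x < 0).any():                    # ruin: either line drops below zero
                return dividends
            t += dt
        return dividends

    print("E[discounted dividends until ruin] ~",
          np.mean([one_path() for _ in range(500)]))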
APA, Harvard, Vancouver, ISO, and other styles
43

Rockney, Alissa Ann. "A Predictive Model Which Uses Descriptors of RNA Secondary Structures Derived from Graph Theory." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1300.

Full text
Abstract:
The secondary structures of ribonucleic acid (RNA) have been successfully modeled with graph-theoretic structures. Often, simple graphs are used to represent secondary RNA structures; however, in this research, a multigraph representation of RNA is used, in which vertices represent stems and edges represent the internal motifs. Any type of RNA secondary structure may be represented by a graph in this manner. We define novel graphical invariants to quantify the multigraphs and obtain characteristic descriptors of the secondary structures. These descriptors are used to train an artificial neural network (ANN) to recognize the characteristics of secondary RNA structure. Using the ANN, we classify the multigraphs as either RNA-like or not RNA-like. This classification method produced results similar to other classification methods. Given the expanding library of secondary RNA motifs, this method may provide a tool to help identify new structures and to guide the rational design of RNA molecules.
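The descriptor-extraction step can be sketched as follows with networkx. The invariants computed below (order, size, degree statistics, spectral radius) are generic stand-ins; the thesis defines its own novel graphical invariants, which are not reproduced here.

    import networkx as nx
    import numpy as np

    # A small multigraph standing in for an RNA secondary structure: vertices
    # are stems, parallel edges are internal motifs.  The invariants below are
    # generic stand-ins, not the thesis's novel descriptors.
    G = nx.MultiGraph()
    G.add_edges_from([(1, 2), (1, 2), (2, 3), (3, 1), (3, 4)])

    def descriptors(G):
        A = nx.to_numpy_array(G)          # parallel edges add up in the matrix
        degrees = [d for _, d in G.degree()]
        return np.array([
            G.number_of_nodes(),
            G.number_of_edges(),
            max(degrees),
            np.mean(degrees),
            np.linalg.eigvalsh(A)[-1],    # spectral radius
        ])

    print(descriptors(G))                 # one training row for the ANN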
APA, Harvard, Vancouver, ISO, and other styles
44

Jiao, Yue, and 焦月. "Mathematical models for control of probabilistic Boolean networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41508634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Mendoza, Maria Nimfa F. "Essays in production theory : efficiency measurement and comparative statics." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/30734.

Full text
Abstract:
Nonparametric linear programming tests for consistency with the hypotheses of technical efficiency and allocative efficiency for the general case of multiple output-multiple input technologies are developed in Part I. The tests are formulated relative to three kinds of technologies — convex, constant returns to scale and quasiconcave technologies. Violation indices as summary indicators of the distance of an inefficient observation from an efficient allocation are proposed. The consistent development of the violation indices across the technical efficiency and allocative efficiency tests allows us to obtain comparative measures of the degrees of technical inefficiency and pure allocative inefficiency. Constrained optimization tests applicable to cases where the producer is restricted to optimizing with respect to a subset of goods are also proposed. The latter tests yield the revealed preference-type inequalities commonly used as tests for consistency of observed data with profit maximizing or cost minimizing behavior as limiting cases. Computer programs for implementing the different tests and sample results are listed in the appendix. In Part II, an empirical comparison of nonparametric and parametric measures of technical progress for constant returns to scale technologies is performed using the Canadian input-output data for the period 1961-1980. The original database was aggregated into four sectors and ten goods and the comparison was done for each sector. If we assume optimizing behavior on the part of the producers, we can reinterpret the violation indices yielded by the efficiency tests in Part I as indicators of the shift in the production frontier. More precisely, the violation indices can be considered nonparametric chained indices of technical progress. The parametric measures of technical progress were obtained through econometric profit function estimation using the generalized McFadden flexible functional form with a quadratic spline model for technical progress proposed by Diewert and Wales (1989). Under the assumption of constant returns, the index of technical change is defined in terms of the unit scale profit function which gives the per unit return to the normalizing good. The empirical results show that the parametric estimates of technical change display a much smoother behavior, which can be attributed to the incorporation of stochastic disturbance terms in the estimation procedure, and, more interestingly, track the long-term trend in the nonparametric estimates. Part III builds on the theory of minimum wages in international trade and is a theoretical essay in the tradition of analyzing the effects of factor market imperfections on resource allocation. The comparative static responses of the endogenous variables — output levels, employment levels of fixed-price factors with elastic supply and flexible prices of domestic resources — to marginal changes in the economy's exogenous variables — output prices, fixed factor prices and endowments of flexibly-priced domestic resources — are examined. The effect of a change in a fixed factor price on other flexible factor prices can be decomposed, Slutsky-like, into substitution and scale effects. A symmetry condition between fixed factor prices and flexible factor prices is obtained which clarifies the concepts of "substitutability" and "complementarity" between these two kinds of factors.
As an illustration, the model is applied to the case of a devaluation in a two-sector small open economy with rigid wages and capital as specific factors. The empirical implementation of the general model for the Canadian economy is left to more able econometricians but a starting point can be the sectoral analysis performed in Part II.
Arts, Faculty of
Vancouver School of Economics
Graduate
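The flavour of the nonparametric tests in Part I can be conveyed by a small input-oriented efficiency test relative to a convex constant-returns technology; the data and this particular DEA-type formulation are illustrative, not taken from the dissertation.

    import numpy as np
    from scipy.optimize import linprog

    # Input-oriented technical-efficiency test relative to a convex
    # constant-returns technology (a DEA-type linear program in the spirit
    # of the tests in Part I).  The data are illustrative.
    X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 4.0]])  # inputs
    Y = np.array([[1.0], [1.0], [1.0], [1.5]])                      # outputs
    n, m = X.shape

    def efficiency(i):
        # min theta  s.t.  X'lam <= theta * x_i,  Y'lam >= y_i,  lam >= 0
        c = np.concatenate([[1.0], np.zeros(n)])
        A_ub = np.vstack([
            np.hstack([-X[i:i+1].T, X.T]),                  # X'lam - theta*x_i <= 0
            np.hstack([np.zeros((Y.shape[1], 1)), -Y.T]),   # -Y'lam <= -y_i
        ])
        b_ub = np.concatenate([np.zeros(m), -Y[i]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        return res.fun   # 1.0 on the frontier; < 1 flags technical inefficiency

    for i in range(n):
        print(f"observation {i}: efficiency {efficiency(i):.3f}")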
APA, Harvard, Vancouver, ISO, and other styles
46

Agi, Egemen. "Mathematical Modeling Of Gate Control Theory." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611468/index.pdf.

Full text
Abstract:
The purpose of this thesis is to model the gate control theory, which explains the modulation of pain signals, with the motivation of finding new possible targets for pain treatment and novel control algorithms for engineering practice. The difference between the current study and previous modeling attempts is that the morphologies of the neurons constituting the gate control system are also included in the model, so that the structure-function relationship can be observed. A model of an excitable neuron is constructed and its responses to different perturbations are investigated. The simulation results of the excitable cell model are in good agreement with experimental findings obtained using crayfish. The model encodes stimulation intensity as firing frequency and, like real neurons, it can sum sub-threshold inputs and fire action potentials. Moreover, the model is able to predict depolarization block. The absolute refractory period of the single-cell model is found to be 3.7 ms. The developed model produces no action potentials when the sodium channels are blocked by tetrodotoxin, and the frequency and amplitude of the generated action potentials increase when the reversal potential of Na is increased. In addition, the propagation of signals along myelinated and unmyelinated fibers is simulated, and input current intensity-frequency relationships are constructed for both fiber types. The myelinated fiber starts to conduct when the current input is about 400 pA, whereas the minimum threshold for the unmyelinated fiber is around 1100 pA. The propagation velocity in a 1 cm long unmyelinated fiber is found to be 0.43 m/s, whereas the velocity along a myelinated fiber of the same length is 64.35 m/s. The developed synapse model exhibits the summation and tetanization properties of real synapses while simulating the time dependence of the neurotransmitter concentration in the synaptic cleft. A morphometric analysis of the neurons constituting the gate control system is performed in order to derive electrophysiological properties from the neurons' dimensions. All the individual parts of the gate control system are then connected and the whole system is simulated. For different connection configurations, the simulations predict the observed phenomena of pain suppression: if the myelinated fiber is dissected, the projection neuron generates action potentials that would be conveyed to the brain and elicit pain, whereas if the unmyelinated fiber is dissected, the projection neuron remains silent. All simulations in this study are performed using Simulink.
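The intensity-to-frequency encoding described in this abstract can be reproduced with a far simpler stand-in than the thesis's conductance-based Simulink model. The leaky integrate-and-fire sketch below borrows only the 3.7 ms refractory period from the abstract; every other parameter is illustrative.

    import numpy as np

    # Leaky integrate-and-fire stand-in for the excitable-cell model; only the
    # 3.7 ms refractory period is taken from the abstract, every other
    # parameter is illustrative.
    dt, T = 0.1e-3, 1.0                    # 0.1 ms step, 1 s simulated
    tau, R = 10e-3, 1e8                    # membrane time constant, resistance
    v_rest, v_th, v_reset = -70e-3, -55e-3, -70e-3
    t_ref = 3.7e-3                         # absolute refractory period

    def firing_rate(i_in):
        v, last_spike, n_spikes = v_rest, -np.inf, 0
        for k in range(int(T / dt)):
            t = k * dt
            if t - last_spike < t_ref:     # clamp during refractoriness
                v = v_reset
                continue
            v += dt / tau * (v_rest - v + R * i_in)   # membrane equation
            if v >= v_th:                  # threshold crossing: spike and reset
                n_spikes += 1; last_spike = t; v = v_reset
        return n_spikes / T

    for i_in in [100e-12, 400e-12, 1100e-12]:          # input currents in A
        print(f"I = {i_in * 1e12:.0f} pA -> {firing_rate(i_in):.1f} Hz")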
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, You-Kuan. "A quasilinear theory of time-dependent nonlocal dispersion in geologic media." Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185039.

Full text
Abstract:
A theory is presented which accounts for a particular aspect of nonlinearity caused by the deviation of plume "particles" from their mean trajectory in three-dimensional, statistically homogeneous but anisotropic porous media under an exponential covariance of log hydraulic conductivities. Quasilinear expressions for the time-dependent nonlocal dispersivity and spatial covariance tensors of ensemble mean concentration are derived, as a function of time, variance σᵧ² of log hydraulic conductivity, degree of anisotropy, and flow direction. One important difference between existing linear theories and the new quasilinear theory is that in the former transverse nonlocal dispersivities tend asymptotically to zero whereas in the latter they tend to nonzero Fickian asymptotes. Another important difference is that while all existing theories are nominally limited to situations where σᵧ² is less than 1, the quasilinear theory is expected to be less prone to error when this restriction is violated because it deals with the above nonlinearity without formally limiting σᵧ². The theory predicts a significant drop in dimensionless longitudinal dispersivity when σᵧ² is large as compared to the case where σᵧ² is small. As a consequence of this drop the real asymptotic longitudinal dispersivity, which varies in proportion to σᵧ² when σᵧ² is small, is predicted to vary as σᵧ when σᵧ² is large. The dimensionless transverse dispersivity also drops significantly at early dimensionless time when σᵧ² is large. At late time this dispersivity attains a maximum near σᵧ² = 1, varies asymptotically at a rate proportional to σᵧ² when σᵧ² is small, and appears inversely proportional to σᵧ when σᵧ² is large. The actual asymptotic transverse dispersivity varies in proportion to σᵧ⁴ when σᵧ² is small and appears proportional to σᵧ when σᵧ² is large. One of the most interesting findings is that when the mean seepage velocity vector μ is at an angle to the principal axes of statistical anisotropy, the orientation of longitudinal spread is generally offset from μ toward the direction of largest log hydraulic conductivity correlation scale. When local dispersion is active, a plume starts elongating parallel to μ. With time the long axis of the plume rotates toward the direction of largest correlation scale, then rotates back toward μ, and finally stabilizes asymptotically at a relatively small angle of deflection. Application of the theory to depth-averaged concentration data from the recent tracer experiment at Borden, Ontario, yields a consistent and improved fit without any need for parameter adjustment.
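For reference, the small- and large-variance scalings of the asymptotic longitudinal and transverse dispersivities reported in this abstract can be summarized compactly (a restatement in LaTeX notation, not a derivation):

    A_L^{\infty} \propto
    \begin{cases}
      \sigma_Y^{2}, & \sigma_Y^{2} \ll 1,\\
      \sigma_Y,     & \sigma_Y^{2} \gg 1,
    \end{cases}
    \qquad
    A_T^{\infty} \propto
    \begin{cases}
      \sigma_Y^{4}, & \sigma_Y^{2} \ll 1,\\
      \sigma_Y,     & \sigma_Y^{2} \gg 1.
    \end{cases}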
APA, Harvard, Vancouver, ISO, and other styles
48

Koessler, Denise Renee. "A Predictive Model for Secondary RNA Structure Using Graph Theory and a Neural Network." Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etd/1684.

Full text
Abstract:
In this work we use a graph-theoretic representation of secondary RNA structure found in the database RAG: RNA-As-Graphs. We model the bonding of two RNA secondary structures to form a larger structure with a graph operation called merge. The resulting data from each tree merge operation is summarized and represented by a vector. We use these vectors as input values for a neural network and train the network to recognize a tree as RNA-like or not based on the merge data vector. The network correctly assigned a high probability of RNA-likeness to trees identified as RNA-like in the RAG database, and a low probability of RNA-likeness to those classified as not RNA-like in the RAG database. We then used the neural network to predict the RNA-likeness of all the trees of order 9. The use of a graph operation to theoretically describe the bonding of secondary RNA is novel.
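A minimal version of the merge operation can be written with networkx. The gluing rule (identifying one chosen vertex of each tree) and the summary statistics below are illustrative simplifications of the construction used in the thesis:

    import networkx as nx
    import numpy as np

    T1 = nx.path_graph(4)      # two small trees standing in for RAG tree graphs
    T2 = nx.star_graph(3)

    def merge(T1, T2, v1, v2):
        """Glue T2 onto T1 by identifying vertex v2 of T2 with vertex v1 of T1."""
        offset = max(T1.nodes) + 1
        mapping = {u: (v1 if u == v2 else u + offset) for u in T2.nodes}
        return nx.compose(T1, nx.relabel_nodes(T2, mapping))

    def summary(G):
        """A stand-in for the merge-data vector fed to the neural network."""
        lap = np.sort(nx.laplacian_spectrum(G))
        return np.array([G.number_of_nodes(), G.number_of_edges(),
                         nx.diameter(G), max(d for _, d in G.degree()),
                         lap[1]])          # algebraic connectivity

    merged = merge(T1, T2, v1=3, v2=0)
    print(summary(merged))                 # one input row for the classifier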
APA, Harvard, Vancouver, ISO, and other styles
49

Mutakela, Patrick Silishebo. "Biomass prediction models for Colophospermum Mopane (Mopane) in Botswana." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2167.

Full text
Abstract:
Thesis (MFor (Forest and Wood Science))--University of Stellenbosch, 2009.
The aim of this study was to develop biomass prediction models for the determination of total aboveground biomass of mopane at three study sites in Botswana and thereafter, based on the pooled data from the three study sites, to recommend one cross-site biomass prediction model that could be used for the indirect estimation of the total aboveground biomass of mopane in Botswana. All data were collected by destructive sampling at the three study sites. Stratified random sampling was based on the stem diameter at breast height (1.3 m from the ground). A total of 30 sample trees at each study site were measured, felled and weighed, distributed equally among six DBH classes (five sample trees per class). Using the data from these sample trees, site-specific biomass prediction models for the indirect estimation of total aboveground biomass of mopane were developed as functions of the following independent variables: stem diameter at 0.15 m from the ground, stem diameter at 1.3 m from the ground, stem diameter at 3 m from the ground, crown diameter, and total tree height. The data from the sites were then pooled to develop cross-site biomass prediction models as functions of the same independent variables. The biomass prediction model that provided the best fit at Serule was a linear equation based on the stem diameter at 1.3 m, while at Sexaxa the best fit was obtained with the stem diameter at 0.15 m and at Tamacha with the stem diameter at 1.3 m. On the basis of the pooled data, cross-site biomass prediction models were developed; the one providing the best fit was based on the stem diameter at 1.3 m, and this relationship was adopted as the prediction model for the indirect biomass estimation of Colophospermum mopane (mopane) in Botswana.
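The form of the recommended cross-site model, with total aboveground biomass predicted from the stem diameter at 1.3 m, can be illustrated by a simple log-log fit; the data points below are invented for illustration and are not the study's measurements:

    import numpy as np

    # Log-log allometric fit of total aboveground biomass on stem diameter at
    # breast height (DBH, 1.3 m).  The data points are invented for
    # illustration and are not the study's measurements.
    dbh = np.array([6.0, 9.0, 12.0, 16.0, 21.0, 27.0])          # cm
    biomass = np.array([8.0, 24.0, 55.0, 120.0, 260.0, 520.0])  # kg

    b, a = np.polyfit(np.log(dbh), np.log(biomass), 1)  # ln B = a + b ln DBH
    print(f"ln(B) = {a:.2f} + {b:.2f} ln(DBH)")

    def predict(d_cm):
        return np.exp(a) * d_cm ** b       # back-transform (no bias correction)

    print("predicted biomass at DBH = 18 cm:", round(predict(18.0), 1), "kg")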
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Caiwei. "Dynamic scheduling of multiclass queueing networks." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/24339.

Full text
APA, Harvard, Vancouver, ISO, and other styles