Follow this link to see other types of publications on the topic: Computation Theory and Mathematics.

Theses / dissertations on the topic "Computation Theory and Mathematics"

Create a precise reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the 50 best works (theses / dissertations) for research on the topic "Computation Theory and Mathematics".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Bryant, Ross. "A Computation of Partial Isomorphism Rank on Ordinal Structures". Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5387/.

Abstract:
We compute the partial isomorphism rank, in the sense of Scott and Karp, of a pair of ordinal structures using an Ehrenfeucht-Fraïssé game. A complete formula is proved by induction for any two arbitrary ordinals written in Cantor normal form.
2

Zhang, Yue. "Sparsity in Image Processing and Machine Learning: Modeling, Computation and Theory". Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1523017795312546.

3

Semegni, Jean Yves. "On the computation of freely generated modular lattices". Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/1207.

4

Khafizov, Farid T. "Descriptions and Computation of Ultrapowers in L(R)". Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277867/.

Abstract:
The results from this dissertation are an exact computation of ultrapowers by measures on cardinals $\aleph_n$, $n\in\omega$, in $L(\mathbb{R})$, and a proof that ordinals in $L(\mathbb{R})$ below $\delta^1_5$ represented by descriptions and the identity function with respect to sequences of measures are cardinals. An introduction to the subject with the basic definitions and well known facts is presented in chapter I. In chapter II, we define a class of measures on the $\aleph_n$, $n\in\omega$, in $L(\mathbb{R})$ and derive a formula for an exact computation of the ultrapowers of cardinals by these measures. In chapter III, we give the definitions of descriptions and the lowering operator. Then we prove that ordinals represented by descriptions and the identity function are cardinals. This result, combined with the fact that every cardinal $<\delta^1_5$ in $L(\mathbb{R})$ is represented by a description (J1), gives a characterization of cardinals in $L(\mathbb{R})$ below $\delta^1_5$. Concrete examples of formal computations are shown in chapter IV.
5

Theeranaew, Wanchat. "STUDY ON INFORMATION THEORY: CONNECTION TO CONTROL THEORY, APPROACH AND ANALYSIS FOR COMPUTATION". Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1416847576.

6

Marsden, Daniel. "Logical aspects of quantum computation". Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:e99331a3-9d93-4381-8075-ad843fb9b77c.

Abstract:
A fundamental component of theoretical computer science is the application of logic. Logic provides the formalisms by which we can model and reason about computational questions, and novel computational features provide new directions for the development of logic. From this perspective, the unusual features of quantum computation present both challenges and opportunities for computer science. Our existing logical techniques must be extended and adapted to appropriately model quantum phenomena, stimulating many new theoretical developments. At the same time, tools developed with quantum applications in mind often prove effective in other areas of logic and computer science. In this thesis we explore logical aspects of this fruitful source of ideas, with category theory as our unifying framework. Inspired by the success of diagrammatic techniques in quantum foundations, we begin by demonstrating the effectiveness of string diagrams for practical calculations in category theory. We proceed by example, developing graphical formulations of the definitions and proofs of many topics in elementary category theory, such as adjunctions, monads, distributive laws, representable functors and limits and colimits. We contend that these tools are particularly suitable for calculations in the field of coalgebra, and continue to demonstrate the use of string diagrams in the remainder of the thesis. Our coalgebraic studies commence in chapter 3, in which we present an elementary formulation of a representation result for the unitary transformations, following work developed in a fibrational setting in [Abramsky, 2010]. That paper raises the question of what a suitable "fibred coalgebraic logic" would be. This question is the starting point for our work in chapter 5, in which we introduce a parameterized, duality based framework for coalgebraic logic. We show sufficient conditions under which dual adjunctions and equivalences can be lifted to fibrations of (co)algebras.
We also prove that the semantics of these logics satisfy certain "institution conditions" providing harmony between syntactic and semantic transformations. We conclude by studying the impact of parameterization on another logical aspect of coalgebras, in which certain fibrations of predicates can be seen as generalized invariants. Our focus is on the lifting of coalgebra structure along a fibration from the base category to an associated total category of predicates. We show that given a suitable parameterized generalization of the usual liftings of signature functors, this induces a "fibration of fibrations" capturing the relationship between the two different axes of variation.
7

Kirk, Neil Patrick. "Computational aspects of singularity theory". Thesis, University of Liverpool, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359187.

Abstract:
In this thesis we develop computational methods suitable for performing the symbolic calculations common to local singularity theory. For classification theory we employ the unipotent determinacy techniques of Bruce, du Plessis, Wall and the complete transversal theorems of Bruce, du Plessis. The latter results are, as yet, unpublished and we spend some time reviewing them, extending them to filtrations of the module m_n.E(n, p) other than the standard filtration by degree. Weighted filtrations and filtrations induced by the action of a nilpotent Lie algebra are considered. A computer package called Transversal is developed. This is written in the mathematical language Maple and performs calculations such as those mentioned above and those central to unfolding theory. We discuss the package in detail and give examples of calculations performed in this thesis. Several classifications are obtained. The first is an extensive classification of map-germs (R^2, 0) → (R^4, 0) under A-equivalence. We also consider the classification of function-germs (C^p, 0) → (C, 0) under R(D)-equivalence: the restriction of R-equivalence to source coordinate changes which preserve a discriminant variety, D. We consider the cases where D is the discriminant of the A2 and A3 singularities, extending the results of Arnol'd. Several other simple singularities are discussed briefly; in particular, we consider the cases where D is the discriminant of the A4, D4, D5, D6, and E_k singularities. The geometry of the singularities (R^2, 0) → (R^4, 0) is investigated by calculating the adjacencies and several geometrical invariants. For the given source and target dimensions, the invariants associated to the double point schemes and the L-codimension of the germs are particularly significant. Finally we give an application of computer graphics to singularity theory. A program is written (in C) which calculates and draws the family of profiles of a surface rotating about a fixed axis in R^3, the resulting envelope of profiles, and several other geometrical features. The program was used in recent research by Rycroft. We review some of the results and conclude with computer-produced images which demonstrate certain transitions of the singularities on the envelope.
8

Fatouros, Stavros. "Approximate algebraic computations in control theory". Thesis, City University London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274524.

9

Heyman, Joseph Lee. "On the Computation of Strategically Equivalent Games". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1561984858706805.

10

Fukasawa, Ricardo. "Single-row mixed-integer programs : theory and computations /". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24660.

Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: William J. Cook; Committee Member: Ellis Johnson; Committee Member: George Nemhauser; Committee Member: Robin Thomas; Committee Member: Zonghao Gu
11

Engdahl, Erik. "Computation of resonance energies and spectral densities in the complex energy plane : application of complex scaling techniques for atoms, molecules and surfaces /". Uppsala : Uppsala Universitet, 1988. http://bibpurl.oclc.org/web/32938.

12

Whaley, Dewey Lonzo. "The Interquartile Range: Theory and Estimation". Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etd/1030.

Abstract:
The interquartile range (IQR) is used to describe the spread of a distribution. In an introductory statistics course, the IQR might be introduced as simply the “range within which the middle half of the data points lie.” In other words, it is the distance between the two quartiles, IQR = Q3 - Q1. We will compute the population IQR, the expected value, and the variance of the sample IQR for various continuous distributions. In addition, a bootstrap confidence interval for the population IQR will be evaluated.
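The two quantities the abstract pairs together, the sample IQR and a bootstrap confidence interval for the population IQR, can be computed in a few lines. This is a NumPy sketch, not code from the thesis; the normal sample, sample size, and resample count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)          # stand-in sample; N(0,1) has IQR ~ 1.349

# Sample IQR: distance between the third and first quartiles.
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Percentile bootstrap confidence interval for the population IQR.
boot = []
for _ in range(2000):
    resample = rng.choice(data, size=data.size, replace=True)
    b1, b3 = np.percentile(resample, [25, 75])
    boot.append(b3 - b1)
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"IQR = {iqr:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```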
13

Phoa, Wesley. "Domain theory in realizability toposes". Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387061.

14

Marletta, Marco. "Theory and implementation of algorithms for Sturm-Liouville computations". Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293105.

15

Apedaile, Thomas J. "Computational Topics in Lie Theory and Representation Theory". DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/2156.

Abstract:
The computer algebra system Maple contains a basic set of commands for working with Lie algebras. The purpose of this thesis was to extend the functionality of these Maple packages in a number of important areas. First, programs for defining multiplication in several types of Cayley algebras, Jordan algebras and Clifford algebras were created to allow users to perform a variety of calculations. Second, commands were created for calculating some basic properties of finite-dimensional representations of complex semisimple Lie algebras. These commands allow one to identify a given representation as a direct sum of irreducible subrepresentations, each one identified by an invariant highest weight. Third, creating an algorithm to calculate the Lie bracket for Vinberg's symmetric construction of Freudenthal's Magic Square allowed for a uniform construction of all five exceptional Lie algebras. Maple examples and tutorials are provided to illustrate the implementation and use of the algebras now available in Maple as well as the tools for working with Lie algebra representations.
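The second task, identifying a representation as a direct sum of irreducibles by highest weight, can be illustrated in the simplest case of sl(2): an irreducible of highest weight n contributes the weights n, n-2, ..., -n, so a weight multiset can be decomposed greedily. This is a toy Python analogue of the idea, not the Maple package the thesis describes:

```python
from collections import Counter

def decompose_sl2(weights):
    """Greedily split a weight multiset into sl(2) irreducibles.

    An irreducible of highest weight n contributes weights n, n-2, ..., -n,
    so repeatedly peeling off the largest remaining weight identifies the
    summands by their highest weights.
    """
    c = Counter(weights)
    comps = []
    while any(c.values()):
        n = max(w for w, m in c.items() if m > 0)
        comps.append(n)
        for w in range(-n, n + 1, 2):
            c[w] -= 1
            if c[w] < 0:
                raise ValueError("not a valid sl(2) weight multiset")
    return comps

# Weights of V(1) (x) V(1) are {2, 0, 0, -2}: it decomposes as V(2) (+) V(0).
print(decompose_sl2([2, 0, 0, -2]))  # [2, 0]
```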
16

Buckle, John Francis. "Computational aspects of lattice theory". Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/106446/.

Abstract:
The use of computers to produce a user-friendly safe environment is an important area of research in computer science. This dissertation investigates how computers can be used to create an interactive environment for lattice theory. The dissertation is divided into three parts. Chapters two and three discuss mathematical aspects of lattice theory, chapter four describes methods of representing and displaying distributive lattices and chapters five, six and seven describe a definitive based environment for lattice theory. Chapter two investigates lattice congruences and pre-orders and demonstrates that any lattice congruence or pre-order can be determined by sets of join-irreducibles. By this correspondence it is shown that lattice operations in a quotient lattice can be calculated by set operations on the join-irreducibles that determine the congruence. This alternative characterisation is used in chapter three to obtain closed forms for all replacements of the form "h can replace g when computing an element f", and hence extends the results of Beynon and Dunne into general lattices. Chapter four investigates methods of representing and displaying distributive lattices. Techniques for generating Hasse diagrams of distributive lattices are discussed and two methods for performing calculations on free distributive lattices and their respective advantages are given. Chapters five and six compare procedural and functional based notations with computer environments based on definitive notations for creating an interactive environment for studying set theory. Chapter seven introduces a definitive based language called Pecan for creating an interactive environment for lattice theory. The results of chapters two and three are applied so that quotients, congruences and homomorphic images of lattices can be calculated efficiently.
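The role of join-irreducibles described in chapter two can be made concrete on a small distributive lattice. Below is a brute-force Python check, purely illustrative and not from the dissertation, that finds the join-irreducible elements of the divisors of 12 ordered by divisibility (join is lcm, meet is gcd):

```python
from math import gcd

divs = [1, 2, 3, 4, 6, 12]               # divisors of 12, ordered by divisibility
lcm = lambda a, b: a * b // gcd(a, b)    # join in this lattice; meet is gcd

def join_irreducibles(elems, join, bottom):
    """x is join-irreducible if x != bottom and x = a v b forces x in {a, b}."""
    irr = []
    for x in elems:
        if x == bottom:
            continue
        if all(x in (a, b)
               for a in elems for b in elems
               if join(a, b) == x):
            irr.append(x)
    return irr

# The join-irreducibles here are the prime powers dividing 12.
print(join_irreducibles(divs, lcm, 1))  # [2, 3, 4]
```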
17

Zhu, Huaiyu. "Neural networks and adaptive computers : theory and methods of stochastic adaptive computation". Thesis, University of Liverpool, 1993. http://eprints.aston.ac.uk/365/.

Abstract:
This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects - it can be continuous, stochastic and adaptive - and retains TM computation as a subclass called "data processing". The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of a "trainable information processor" (TIP) - a parameterised stochastic mapping with a rule to change the parameters - is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back-propagation NNs, the Boltzmann machine and the Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which is susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of "global search" in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than the back-propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network.
Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision process to allow adaptive learning, resulting in a computational system called a "basic adaptive computer", which has the advantage over earlier reinforcement learning systems, such as Sutton's "Dyna", in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
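The kind of learning rule described, random weight perturbations accepted by a Metropolis criterion under a cooling temperature, can be sketched on a toy problem. This is a generic simulated-annealing demonstration on a small 2-2-1 tanh network fitting XOR; the architecture, proposal width, and cooling schedule are assumptions of this sketch, not the thesis's normalisation scheme:

```python
import math, random

random.seed(1)

# XOR training set
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

def forward(w, x):
    # Tiny 2-2-1 network with tanh units; w is a flat list of 9 weights.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in zip(X, Y))

w = [random.uniform(-1, 1) for _ in range(9)]
best_w, best_loss = w[:], loss(w)
init_loss = best_loss
T = 1.0
for step in range(5000):
    cand = [wi + random.gauss(0, 0.3) for wi in w]
    dE = loss(cand) - loss(w)
    # Metropolis acceptance: always take improvements, sometimes take
    # uphill moves, with probability decaying as the temperature cools.
    if dE < 0 or random.random() < math.exp(-dE / T):
        w = cand
    if loss(w) < best_loss:
        best_w, best_loss = w[:], loss(w)
    T *= 0.999  # geometric cooling schedule

print(f"loss: {init_loss:.4f} -> {best_loss:.4f}")
```

The occasional acceptance of uphill moves is what lets the search escape the local optima that pure gradient following gets stuck in.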
18

Mitrouli, Marilena Th. "Numerical issues and computational problems in algebraic control theory". Thesis, City University London, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280573.

19

Hill, Michael Anthony Ph D. Massachusetts Institute of Technology. "Computational methods for higher real K-theory with applications to tmf". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34545.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2006.
Includes bibliographical references (p. 67-69).
We begin by presenting a new Hopf algebra which can be used to compute the tmf homology of a space or spectrum at the prime 3. Generalizing work of Mahowald and Davis, we use this Hopf algebra to compute the tmf homology of the classifying space of the symmetric group on three elements. We also discuss the E3 Tate spectrum of tmf at the prime 3. We then build on work of Hopkins and his collaborators, first computing the Adams-Novikov zero line of the homotopy of the spectrum eo4 at 5 and then generalizing the Hopf algebra for tmf to a family of Hopf algebras, one for each spectrum eo_{p-1} at p. Using these, and using a K(p-1)-local version, we further generalize the Davis-Mahowald result, computing the eo_{p-1} homology of the cofiber of the transfer map [...]. We conclude with the initial computations needed to understand the homotopy groups of the Hopkins-Miller real K-theory spectra for heights larger than p-1 at p. The basic computations are supplemented with conjectures as to the collapse of the spectral sequences used herein to compute the homotopy.
by Michael Anthony Hill.
Ph.D.
20

Artemov, Anton G. "Inverse factorization in electronic structure theory : Analysis and parallelization". Licentiate thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-381333.

Abstract:
This licentiate thesis is a part of an effort to run large electronic structure calculations in modern computational environments with distributed memory. The ultimate goal is to model materials consisting of millions of atoms at the level of quantum mechanics. In particular, the thesis focuses on different aspects of a computational problem of inverse factorization of Hermitian positive definite matrices. The considered aspects are numerical properties of the algorithms and parallelization. Not only is an efficient and scalable computation of inverse factors necessary in order to be able to run large scale electronic computations based on the Hartree–Fock or Kohn–Sham approaches with the self-consistent field procedure, but it can be applied more generally for preconditioner construction. Parallelization of algorithms with unknown load and data distributions requires a paradigm shift in programming. In this thesis we also discuss a few parallel programming models with focus on task-based models, and, more specifically, the Chunks and Tasks model.
eSSENCE
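For orientation, the basic object of the thesis, an inverse factor Z of a Hermitian positive definite matrix S with Z^T S Z = I, can be obtained densely by inverting a Cholesky factor. This NumPy sketch only illustrates the definition; the thesis concerns iterative, scalable, parallel algorithms for this problem, which a dense inversion is not:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
S = B @ B.T + 6 * np.eye(6)   # Hermitian (here real symmetric) positive definite

# If S = L L^T is the Cholesky factorization, then Z = L^{-T} is an
# inverse factor: Z^T S Z = L^{-1} S L^{-T} = I.
L = np.linalg.cholesky(S)
Z = np.linalg.inv(L).T

residual = np.linalg.norm(Z.T @ S @ Z - np.eye(6))
print(f"||Z^T S Z - I|| = {residual:.2e}")
```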
21

Young, Po-yuk. "Profile of good computational estimators: related mathematical variables and common strategies used". [Hong Kong] : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B14420478.

22

Shi, Bin. "A Mathematical Framework on Machine Learning: Theory and Application". FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3876.

Abstract:
The dissertation addresses the research topics of machine learning outlined below. We develop the theory of traditional first-order algorithms from convex optimization and provide new insights into nonconvex objective functions from machine learning. Based on the theoretical analysis, we design and develop new algorithms to overcome the difficulty of nonconvex objectives and to accelerate the speed of obtaining the desired result. In this thesis, we answer two questions: (1) How to design a step size for gradient descent with random initialization? (2) Can we accelerate the current convex optimization algorithms and extend them to nonconvex objectives? For application, we apply the optimization algorithms to sparse subspace clustering. A new algorithm, CoCoSSC, is proposed to improve the current sample complexity in the presence of noise and missing entries. Gradient-based optimization methods have been increasingly modeled and interpreted by ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods, Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In this work, we derive high-resolution ODEs as more accurate surrogates for the two methods, in addition to Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms through translating properties of the amenable ODEs into those of their discrete counterparts. As a first application of this framework, we identify the effect of a term referred to as gradient correction in NAG-SC but not in the heavy-ball method, shedding deep insight into why the former achieves acceleration while the latter does not.
Moreover, in this high-resolution ODE framework, NAG-C is shown to boost the squared gradient norm minimization at the inverse cubic rate, which is the sharpest known rate concerning NAG-C itself. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the accelerated convergence rates as NAG-C for minimizing convex functions.
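In discrete form, the two methods contrasted in the abstract differ only in where the gradient is evaluated; that difference is the "gradient correction" the high-resolution ODEs isolate. A minimal numerical sketch (the toy quadratic and the step/momentum choices are illustrative assumptions, not the abstract's analysis):

```python
import numpy as np

# Toy strongly convex quadratic f(x) = 0.5 * x^T diag(1, 100) x.
d = np.array([1.0, 100.0])
grad = lambda x: d * x

s = 1.0 / d.max()                                   # step size 1/L
kappa = d.max() / d.min()                           # condition number
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum coefficient

def run(method, iters=300):
    x_prev = x = np.array([1.0, 1.0])
    for _ in range(iters):
        if method == "heavy_ball":
            # Polyak: gradient evaluated at the current iterate x_k.
            x_next = x - s * grad(x) + beta * (x - x_prev)
        else:
            # NAG-SC: gradient evaluated at the extrapolated point y_k;
            # grad(y) - grad(x) is the "gradient correction" term.
            y = x + beta * (x - x_prev)
            x_next = y - s * grad(y)
        x_prev, x = x, x_next
    return np.linalg.norm(x)

print("heavy ball:", run("heavy_ball"))
print("NAG-SC:   ", run("nag"))
```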
23

Wikström, Gunilla. "Computation of Parameters in some Mathematical Models". Doctoral thesis, Umeå University, Computing Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-565.

Abstract:
In computational science it is common to describe dynamic systems by mathematical models in forms of differential or integral equations. These models may contain parameters that have to be computed for the model to be complete. For the special type of ordinary differential equations studied in this thesis, the resulting parameter estimation problem is a separable nonlinear least squares problem with equality constraints. This problem can be solved by iteration, but due to complicated computations of derivatives and the existence of several local minima, so called short-cut methods may be an alternative. These methods are based on simplified versions of the original problem. An algorithm, called the modified Kaufman algorithm, is proposed and it takes the separability into account. Moreover, different kinds of discretizations and formulations of the optimization problem are discussed as well as the effect of ill-conditioning.

The computation of parameters often includes, as one part, the solution of linear systems of equations Ax = b. The corresponding pseudoinverse solution depends on the properties of the matrix A and the vector b. The singular value decomposition of A can then be used to construct error propagation matrices, and by use of these it is possible to investigate how changes in the input data affect the solution x. Theoretical error bounds based on condition numbers indicate the worst case, but the use of experimental error analysis makes it possible to also have information about the effect of a more limited set of perturbations and in that sense be more realistic. It is shown how the effect of perturbations can be analyzed by a semi-experimental analysis. The analysis combines the theory of the error propagation matrices with an experimental error analysis based on randomly generated perturbations that take the structure of A into account.
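The SVD-based pseudoinverse solution and the contrast between the worst-case condition-number bound and randomly sampled perturbations can be sketched as follows (a NumPy illustration with an arbitrary random system, not the thesis's semi-experimental procedure, which also respects the structure of A):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))      # overdetermined system, full column rank
b = rng.standard_normal(8)

# Pseudoinverse solution x = V diag(1/s) U^T b via the SVD of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])

# Experimental error analysis: propagate many random perturbations
# of b through the pseudoinverse and record the amplification factors.
errs = []
for _ in range(500):
    db = 1e-6 * rng.standard_normal(8)
    x_pert = Vt.T @ ((U.T @ (b + db)) / s)
    errs.append(np.linalg.norm(x_pert - x) / np.linalg.norm(db))

# The worst case is bounded by 1/s_min; the observed spread is usually milder.
print(f"bound 1/s_min = {1.0 / s[-1]:.2f}, worst observed = {max(errs):.2f}")
```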

24

Bastounis, Alexander James. "On fundamental computational barriers in the mathematics of information". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/279086.

Abstract:
This thesis is about computational theory in the setting of the mathematics of information. The first goal is to demonstrate that many commonly considered problems in optimisation theory cannot be solved with an algorithm if the input data is only known up to an arbitrarily small error (modelling the fact that most real numbers are not expressible to infinite precision with a floating point based computational device). This includes computing the minimisers to basis pursuit, linear programming, lasso and image deblurring as well as finding an optimal neural network given training data. These results are somewhat paradoxical given the success that existing algorithms exhibit when tackling these problems with real world datasets and a substantial portion of this thesis is dedicated to explaining the apparent disparity, particularly in the context of compressed sensing. To do so requires the introduction of a variety of new concepts, including that of a breakdown epsilon, which may have broader applicability to computational problems outside of the ones central to this thesis. We conclude with a discussion on future research directions opened up by this work.
25

Leclerc, Philip. "Prospect Theory Preferences in Noncooperative Game Theory". VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3522.

Abstract:
The present work seeks to incorporate a popular descriptive, empirically grounded model of human preference under risk, prospect theory, into the equilibrium theory of noncooperative games. Three primary, candidate definitions are systematically identified on the basis of classical characterizations of Nash Equilibrium; in addition, three equilibrium subtypes are defined for each primary definition, in order to enable modeling of players' reference points as exogenous and fixed, slowly and myopically adaptive, or highly flexible and non-myopically adaptive. Each primary equilibrium concept was analyzed both theoretically and empirically; for the theoretical analyses, prospect theory, game theory, and computational complexity theory were all summoned to analysis. In chapter 1, the reader is provided with background on each of these theoretical underpinnings of the current work, the scope of the project is described, and its conclusions briefly summarized. In chapters 2 and 3, each of the three equilibrium concepts is analyzed theoretically, with emphasis placed on issues of classical interest (e.g. existence, dominance, rationalizability) and computational complexity (i.e., assessing how difficult each concept is to apply in algorithmic practice, with particular focus on comparison to classical Nash Equilibrium). This theoretical analysis leads us to discard the first of our three equilibrium concepts as unacceptable. In chapter 4, our remaining two equilibrium concepts are compared empirically, using average-level data originally aggregated from a number of studies by Camerer (2003, Ch. 3) and by Selten and Chmura; the results suggest that PT preferences may improve on the descriptive validity of NE, and pose some interesting questions about the nature of the PT weighting function. Chapter 5 concludes, systematically summarizes theoretical and empirical differences and similarities between the three equilibrium concepts, and offers some thoughts on future work.
26

Tung, Jen-Fu. "An Algorithm to Generate Two-Dimensional Drawings of Conway Algebraic Knots". TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/163.

Abstract:
The problem of finding an efficient algorithm to create a two-dimensional embedding of a knot diagram is not an easy one. Typically, knots with a large number of crossings will not nicely generate two-dimensional drawings. This thesis presents an efficient algorithm to generate a knot and to create a nice two-dimensional embedding of the knot. For the purpose of this thesis a drawing is “nice” if the number of tangles in the diagram consisting of half-twists is minimal. More specifically, the algorithm generates prime, alternating Conway algebraic knots in O(n) time where n is the number of crossings in the knot, and it derives a precise representation of the knot’s nice drawing in O(n) time (The rendering of the drawing is not O(n).). Central to the algorithm is a special type of rooted binary tree which represents a distinct prime, alternating Conway algebraic knot. Each leaf in the tree represents a crossing in the knot. The algorithm first generates the tree and then modifies such a tree repeatedly to reduce the number of its leaves while ensuring that the knot type associated with the tree is not modified. The result of the algorithm is a tree (for the knot) with a minimum number of leaves. This minimum tree is the basis of deriving a 4-regular plane map which represents the knot embedding and to finally draw the knot’s diagram.
27

Alp, Murat. "GAP, crossed modules, Cat^1-groups : applications of computational group theory". Thesis, Bangor University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361168.

28

Zarabi, Patrick, e August Denes. "Solving the Facility Location Problem using Graph Theory and Shortest Path Algorithms". Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229979.

Abstract:
This thesis in systems engineering and optimization theory aims to solve a facility location problem within the context of a confined space with path and proximity constraints. The thesis was commissioned by LKAB Kiruna, to help in their decision of where to construct a new facility on their industrial premises. The facility location problem was divided into a main problem of finding the best position of the facility, and a sub-problem of how to model distances and feasible areas within this particular context. The distance and feasibility modeling was solved by utilizing graph theory to construct a graph representation of a geographic area and then obtain the necessary distances using Dijkstra’s shortest path algorithm. The main problem was then solved using a mixed integer linear programming formulation which utilizes the distances obtained through the Dijkstra algorithm. The model is also extended to not only decide the placement of one facility but to accommodate the placement of two facilities. The extended model was solved in three ways: a heuristic algorithm, a mixed integer non linear formulation and a mixed integer linear formulation. The results concluded that the implementation of the single facility model was able to obtain optimal solutions consistently. Regarding the extension, the mixed integer linear formulation was deemed to be the best model as it was computationally fast and consistently produced optimal solutions. Finally, several model improvements are identified to increase the applicability to different cases. These improvements could also allow the model to provide more strategic and managerial insights to the facility location decision process. Some future research into metaheuristics and machine learning is also suggested to further improve the usability of the models.
This degree project in systems engineering and optimization theory aims to solve a warehouse location problem. The warehouse is to be placed within a small area subject to route restrictions and proximity to other buildings. The thesis was commissioned by LKAB Kiruna to support their decision on where a new warehouse could be built within their industrial area. The location problem was split into two parts: the main problem of finding the best site for the warehouse, and the subproblem of how to model distances and permitted placements in this specific context with route and proximity constraints. Distance and site modelling was done by creating a graph representation of the industrial area; Dijkstra’s shortest-path algorithm was then used to obtain all distances between possible building areas and the production facilities that need access to the warehouse. The main problem could then be solved using these distances and a mixed integer linear programming model. The model was then extended to allow the placement of two separate warehouses. The extended model was solved with three different implementations: a heuristic algorithm, a mixed integer nonlinear model, and a mixed integer linear model. The results showed that the implementation of the original location problem consistently computed optimal solutions. The extended model was best solved by the mixed integer linear implementation, which consistently produced the best (lowest) objective value and solved the problem with low computation time. Finally, several potential model improvements were identified that would make the model more general; these would also let the model itself evaluate how many warehouses should be built given a set budget, so the model could support more strategic decisions.
Further research into metaheuristics and machine learning could also be done to further improve the distance modelling.
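The distance-modelling step described in this abstract can be illustrated with a minimal Python sketch (not taken from the thesis; the graph, the candidate sites, and the uniform demand weighting are illustrative assumptions): Dijkstra's algorithm supplies shortest-path distances from each candidate site, and the site minimising the total distance to the demand points is selected.

```python
import heapq

def dijkstra(adj, src):
    # adj: {node: [(neighbor, edge_length), ...]}
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_site(adj, candidate_sites, demand_points):
    # pick the candidate site minimising total shortest-path distance
    # to all demand points (uniform weights -- an assumption here)
    best, best_cost = None, float("inf")
    for s in candidate_sites:
        dist = dijkstra(adj, s)
        cost = sum(dist.get(p, float("inf")) for p in demand_points)
        if cost < best_cost:
            best, best_cost = s, cost
    return best, best_cost
```

In the thesis the distances produced this way feed a mixed integer linear program; the sketch above replaces that step with a simple enumeration over candidate sites.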
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Hu, Fan. "Computation of exciton transfer in the one- and two-dimensional close-packed quantum dot arrays". Virtual Press, 2005. http://liblink.bsu.edu/uhtbin/catkey/1319543.

Texto completo da fonte
Resumo:
Förster theory of energy transfer applies to dilute systems, and it remains unknown whether it can be applied to dense media. We have studied exciton transfer in one-dimensional (1-D) close-packed pure and mixed quantum dot (QD) arrays under different models, and in a two-dimensional (2-D) perfect lattice. Our approach is based on the master equation obtained by treating exciton relaxation as a stochastic process. A random parameter has been used to describe dot-to-dot distance variations. The master equation has been investigated analytically for 1-D and 2-D perfect lattices and numerically for 1-D disordered systems. The suitability of the Förster decay law for the excitation decay of a close-packed solid has been discussed. The necessity of considering the effect of further-nearest interdot interactions has been checked.
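The master-equation approach described in this abstract can be sketched schematically (this is not the thesis's model: the rate constant `k0`, the Förster radius `r0`, the explicit Euler integrator, and the absence of disorder are all simplifying assumptions) with Förster-type rates falling off as the sixth power of the dot-to-dot distance:

```python
import numpy as np

def transfer_matrix(positions, k0=1.0, r0=1.0):
    # pairwise Förster-type rates k_ij = k0 * (r0 / r_ij)**6
    n = len(positions)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                r = abs(positions[i] - positions[j])
                K[i, j] = k0 * (r0 / r) ** 6
    return K

def evolve(p0, K, dt, steps):
    # master equation dp_i/dt = sum_j (K_ji p_j - K_ij p_i),
    # integrated with an explicit Euler scheme
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        inflow = K.T @ p
        outflow = K.sum(axis=1) * p
        p = p + dt * (inflow - outflow)
    return p
```

Because the toy rates are symmetric, total occupation probability is conserved and the long-time solution is uniform over the chain; the thesis studies how disorder in the interdot distances perturbs this picture.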
Department of Physics and Astronomy
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Dang, Hiep Tuan [Verfasser], e Wolfram [Akademischer Betreuer] Decker. "Intersection theory with applications to the computation of Gromov-Witten invariants / Dang Tuan Hiep. Betreuer: Wolfram Decker". Kaiserslautern : Technische Universität Kaiserslautern, 2014. http://d-nb.info/1048558428/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Dmytryshyn, Andrii. "Skew-symmetric matrix pencils : stratification theory and tools". Licentiate thesis, Umeå universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-87501.

Texto completo da fonte
Resumo:
Investigating the properties, explaining, and predicting the behaviour of a physical system described by a system (matrix) pencil often require the understanding of how canonical structure information of the system pencil may change, e.g., how eigenvalues coalesce or split apart, due to perturbations in the matrix pencil elements. Often these system pencils have different block-partitioning and / or symmetries. We study changes of the congruence canonical form of a complex skew-symmetric matrix pencil under small perturbations. The problem of computing the congruence canonical form is known to be ill-posed: both the canonical form and the reduction transformation depend discontinuously on the entries of a pencil. Thus it is important to know the canonical forms of all such pencils that are close to the investigated pencil. One way to investigate this problem is to construct the stratification of orbits and bundles of the pencils. To be precise, for any problem dimension we construct the closure hierarchy graph for congruence orbits or bundles. Each node (vertex) of the graph represents an orbit (or a bundle) and each edge represents the cover/closure relation. Such a relation means that there is a path from one node to another node if and only if a skew-symmetric matrix pencil corresponding to the first node can be transformed by an arbitrarily small perturbation to a skew-symmetric matrix pencil corresponding to the second node. From the graph it is straightforward to identify more degenerate and more generic nearby canonical structures. A necessary (but not sufficient) condition for one orbit being in the closure of another is that the first orbit has larger codimension than the second one. Therefore we compute the codimensions of the congruence orbits (or bundles). It is done via the solutions of an associated homogeneous system of matrix equations. 
The complete stratification is done by proving the relation between equivalence and congruence for the skew-symmetric matrix pencils. This relation allows us to use the known result about the stratifications of general matrix pencils (under strict equivalence) in order to stratify skew-symmetric matrix pencils under congruence. Matlab functions to work with skew-symmetric matrix pencils and a number of other types of symmetries for matrices and matrix pencils are developed and included in the Matrix Canonical Structure (MCS) Toolbox.
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Sivan, D. D. "Design and structural modifications of vibratory systems to achieve prescribed modal spectra /". Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phs6238.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Johnson, Tomas. "Computer-aided Computation of Abelian integrals and Robust Normal Forms". Doctoral thesis, Uppsala universitet, Matematiska institutionen, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-107519.

Texto completo da fonte
Resumo:
This PhD thesis consists of a summary and seven papers, where various applications of auto-validated computations are studied. In the first paper we describe a rigorous method to determine unknown parameters in a system of ordinary differential equations from measured data with known bounds on the noise of the measurements. Papers II, III, IV, and V are concerned with Abelian integrals. In Paper II, we construct an auto-validated algorithm to compute Abelian integrals. In Paper III we investigate, via an example, how one can use this algorithm to determine the possible configurations of limit cycles that can bifurcate from a given Hamiltonian vector field. In Paper IV we construct an example of a perturbation of degree five of a Hamiltonian vector field of degree five, with 27 limit cycles, and in Paper V we construct an example of a perturbation of degree seven of a Hamiltonian vector field of degree seven, with 53 limit cycles. These are new lower bounds for the maximum number of limit cycles that can bifurcate from a Hamiltonian vector field for those degrees. In Papers VI, and VII, we study a certain kind of normal form for real hyperbolic saddles, which is numerically robust. In Paper VI we describe an algorithm how to automatically compute these normal forms in the planar case. In Paper VII we use the properties of the normal form to compute local invariant manifolds in a neighbourhood of the saddle.
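The auto-validated computations this abstract refers to rest on interval arithmetic: every quantity is enclosed in an interval that is guaranteed to contain the true value. A toy version (illustrative only; it ignores directed rounding, which a genuinely rigorous implementation must control) can enclose a one-dimensional integral by summing interval evaluations over subintervals:

```python
class Interval:
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __mul__(self, o):
        # standard interval product: min/max over endpoint products
        o = o if isinstance(o, Interval) else Interval(o)
        ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
        return Interval(min(ps), max(ps))

def enclose_integral(f, a, b, n=1000):
    # on each subinterval, evaluate f over the whole Interval and
    # accumulate width * f(range); the result brackets the integral
    h = (b - a) / n
    total = Interval(0.0)
    for i in range(n):
        x = Interval(a + i * h, a + (i + 1) * h)
        total = total + f(x) * Interval(h)
    return total.lo, total.hi
```

For a monotone integrand this reduces to lower and upper Riemann sums; the papers summarised above use far sharper enclosures (and rigorous ODE solvers) to bound Abelian integrals.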
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Mbangeni, Litha. "Development of methods for parallel computation of the solution of the problem for optimal control". Thesis, Cape Peninsula University of Technology, 2010. http://hdl.handle.net/20.500.11838/1110.

Texto completo da fonte
Resumo:
Thesis (MTech(Electrical Engineering))--Cape Peninsula University of Technology, 2010
Optimal control of fermentation processes is necessary for better behaviour of the process in order to achieve maximum production of product and biomass. The problem for optimal control is a very complex, nonlinear, dynamic problem requiring a long time for calculation. Application of decomposition-coordinating methods to this type of problem simplifies the solution if it is implemented in a parallel way on a cluster of computers. Parallel computing can reduce tremendously the time of calculation through the processes of distribution and parallelization of the computation algorithm. These processes can be achieved in different ways using the characteristics of the problem for optimal control. Problems for optimal control of fed-batch, batch and continuous fermentation processes for production of biomass and product are formulated. The problems are based on a criterion of maximum production of biomass at the end of the fermentation for the fed-batch process, maximum production of metabolite at the end of the fermentation for the batch process, and minimum time for achieving steady-state fermentor behaviour for the continuous process, and on unstructured mass-balance biological models incorporating, in the kinetic coefficients, the physiochemical variables considered as control inputs. An augmented Lagrange functional is applied and its decomposition in the time domain is used with a new coordinating vector. Parallel computing in a Matlab cluster is used to solve the above optimal control problems. The calculations and task allocation to the cluster workers are based on a shared-memory architecture. Real-time control implementation of the calculation algorithms using a cluster of computers allows quick and simpler solutions to the optimal control problems.
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Levkovitz, Ron. "An investigation of interior point methods for large scale linear programs : theory and computational algorithms". Thesis, Brunel University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316541.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Qin, Yu. "Computations and Algorithms in Physical and Biological Problems". Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11478.

Texto completo da fonte
Resumo:
This dissertation presents the applications of state-of-the-art computation techniques and data analysis algorithms in three physical and biological problems: assembling DNA pieces, optimizing self-assembly yield, and identifying correlations from large multivariate datasets. In the first topic, in-depth analysis of using Sequencing by Hybridization (SBH) to reconstruct target DNA sequences shows that a modified reconstruction algorithm can overcome the theoretical boundary without the need for different types of biochemical assays and is robust to error. In the second topic, consistent with theoretical predictions, simulations using Graphics Processing Units (GPUs) demonstrate how controlling the short-ranged interactions between particles and controlling the concentrations optimize the self-assembly yield of a desired structure; nonequilibrium behavior when optimizing concentrations is also unveiled by leveraging the computation capacity of GPUs. In the last topic, a methodology to incorporate existing categorization information into the search process to efficiently reconstruct the optimal true correlation matrix for multivariate datasets is introduced. Simulations on both synthetic and real financial datasets show that the algorithm is able to detect signals below the Random Matrix Theory (RMT) threshold. These three problems are representative of using massive computation techniques and data analysis algorithms to tackle optimization problems, outperforming theoretical boundaries when prior information is incorporated into the computation.
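For context on the first topic, the classical SBH formulation (this is the textbook pipeline, not the modified reconstruction algorithm the dissertation develops) reads the k-mer spectrum as a de Bruijn graph, whose nodes are (k-1)-mers, and spells a candidate sequence along an Eulerian path found with Hierholzer's algorithm:

```python
from collections import defaultdict

def reconstruct_from_spectrum(kmers):
    # de Bruijn graph: edge (k-1)-mer prefix -> (k-1)-mer suffix;
    # an Eulerian path spells one sequence consistent with the spectrum
    graph = defaultdict(list)
    indeg = defaultdict(int)
    outdeg = defaultdict(int)
    for km in kmers:
        u, v = km[:-1], km[1:]
        graph[u].append(v)
        outdeg[u] += 1
        indeg[v] += 1
    # start at a node with one more outgoing than incoming edge,
    # falling back to an arbitrary node for a circular sequence
    start = next((u for u in list(graph) if outdeg[u] - indeg[u] == 1),
                 next(iter(graph)))
    # Hierholzer's algorithm
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if graph[u]:
            stack.append(graph[u].pop())
        else:
            path.append(stack.pop())
    path.reverse()
    return path[0] + "".join(p[-1] for p in path[1:])
```

The ambiguity of repeated k-mers (several Eulerian paths) is exactly the theoretical boundary the dissertation's modified algorithm addresses.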
Engineering and Applied Sciences
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Aubad, Ali. "On commuting involution graphs of certain finite groups". Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/on-commuting-involution-graphs-of-certain-finite-groups(009c80f5-b0d6-4164-aefc-f783f74c80f1).html.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Hansen, Brian Francis. "Explicit Computations Supporting a Generalization of Serre's Conjecture". Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd842.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Witt, Walter G. "Quantifying the Structure of Misfolded Proteins Using Graph Theory". Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3244.

Texto completo da fonte
Resumo:
The structure of a protein molecule is highly correlated to its function. Some diseases such as cystic fibrosis are the result of a change in the structure of a protein so that this change interferes or inhibits its function. Often these changes in structure are caused by a misfolding of the protein molecule. To assist computational biologists, there is a database of proteins together with their misfolded versions, called decoys, that can be used to test the accuracy of protein structure prediction algorithms. In our work we use a nested graph model to quantify a selected set of proteins that have two single misfold decoys. The graph theoretic model used is a three tiered nested graph. Measures based on the vertex weights are calculated and we compare the quantification of the proteins with their decoys. Our method is able to separate the misfolded proteins from the correctly folded proteins.
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Wenz, Andreas [Verfasser], Peter [Gutachter] Müller e Michael [Gutachter] Dettweiler. "Computation of Belyi maps with prescribed ramification and applications in Galois theory / Andreas Wenz ; Gutachter: Peter Müller, Michael Dettweiler". Würzburg : Universität Würzburg, 2021. http://d-nb.info/1236859898/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Sheppeard, Marni Dee. "Gluon Phenomenology and a Linear Topos". Thesis, University of Canterbury. Physics and Astronomy, 2007. http://hdl.handle.net/10092/1436.

Texto completo da fonte
Resumo:
In thinking about quantum causality one would like to approach rigorous QFT from outside the perspective of QFT, which one expects to recover only in a specific physical domain of quantum gravity. This thesis considers issues in causality using category theory, and their application to field theoretic observables. It appears that an abstract categorical Machian principle of duality for a ribbon graph calculus has the potential to incorporate the recent calculation of particle rest masses by Brannen, as well as the Bilson-Thompson characterisation of the particles of the Standard Model. This thesis shows how Veneziano n-point functions may be recovered in such a framework, using cohomological techniques inspired by twistor theory and recent MHV techniques. This distinct approach fits into a rich framework of higher operads, leaving room for a generalisation to other physical amplitudes. The utility of operads raises the question of a categorical description for the underlying physical logic. We need to consider quantum analogues of a topos. Grothendieck's concept of a topos is a genuine extension of the notion of a space that incorporates a logic internal to itself. Conventional quantum logic has yet to be put into a form of equal utility, although its logic has been formulated in category theoretic terms. Axioms for a quantum topos are given in this thesis, in terms of braided monoidal categories. The associated logic is analysed and, in particular, elements of linear vector space logic are shown to be recovered. The usefulness of doing so for ordinary quantum computation was made apparent recently by Coecke et al. Vector spaces underlie every notion of algebra, and a new perspective on them is therefore useful. The concept of state vector is also readdressed in the language of tricategories.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Andujo, Nicholas R. "Progenitors Involving Simple Groups". CSUSB ScholarWorks, 1986. https://scholarworks.lib.csusb.edu/etd/758.

Texto completo da fonte
Resumo:
I will be going over writing representations of both permutation and monomial progenitors, including 2^{*4} : D_4 and 2^{*7} : L_2(7) as permutation progenitors, and the monomial progenitors 7^{*2} :_m (S_3 × 2), 11^{*2} :_m ((5:2)^{*}5), 11^{*3} :_m (25:3), and 11^{*4} :_m ((4:5)^{*}5), as well as the images of these different progenitors at both lower and higher fields and orders. We will also do the double coset enumeration of S_5 over D_6, S_6 over 5:4, and A_5 × A_5 over (5:2)^{*}5, and go on to do double coset enumeration over maximal subgroups for larger constructions. We will also do the construction of the sporadic group M_22 over the maximal subgroup A_7, and of J_1 with the monomial representation 7^{*2} :_m (S_3 × 2) over the maximal subgroup PSL(2,11). We will also look at different extension problems of composition factors of different groups, and determine the isomorphism types of each extension.
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Torstensson, Johan. "Computation of Mileage Limits for Traveling Salesmen by Means of Optimization Techniques". Thesis, Linköping University, Department of Mathematics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12473.

Texto completo da fonte
Resumo:

Many companies have traveling salesmen that market and sell their products. This results in much traveling by car due to the daily customer visits. This causes costs for the company, in the form of travel expense compensation, and environmental effects, in the form of carbon dioxide pollution. As many companies are certified according to environmental management systems, such as ISO 14001, environmental work becomes more and more important as environmental consciousness increases every day for companies, authorities and the public. The main task of this thesis is to compute reasonable limits on the mileage of the salesmen; these limits are based on specific conditions for each salesman's district. The objective is to implement a heuristic algorithm that optimizes the customer tours for an arbitrarily chosen month, which will represent a "standard" month. The output of the algorithm, the computed distances, will constitute a mileage limit for the salesman. The algorithm consists of a constructive heuristic that builds an initial solution, which is modified if infeasible. This solution is then improved by a local search algorithm preceding a genetic algorithm, whose task is to improve the tours separately. This method for computing mileage limits for traveling salesmen generates good solutions in the form of realistic tours. The mileage limits could be improved if the input data were more accurate and adjusted to each district, but the suggested method does what it is supposed to do.
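The constructive-heuristic-plus-local-search pipeline described in this abstract can be sketched as follows (a simplified stand-in, not the thesis's implementation: nearest-neighbour construction followed by 2-opt; the genetic algorithm stage is omitted):

```python
import math

def tour_length(tour, pts):
    # total length of the closed tour through the given points
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(pts, start=0):
    # constructive heuristic: always visit the closest unvisited customer
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    # local search: reverse segments while doing so shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour
```

The length of the resulting monthly tours is what would, in the thesis's setting, be turned into a mileage limit.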

Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Mildenhall, Paula. "Enhancing the teaching and learning of computational estimation in year 6". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2011. https://ro.ecu.edu.au/theses/387.

Texto completo da fonte
Resumo:
There have been repeated calls for computational estimation to have a more prominent position in mathematics teaching and learning, but there is still little evidence that quality time is being spent on this topic. Estimating numerical quantities is a useful skill for people to be able to use in their everyday lives in order to meet their personal needs. It is also accepted that number sense is an important component of mathematics learning (McIntosh, Reys, Reys, Bana, & Farrell, 1997; Paterson, 2004) and that computational estimation is an important part of number sense (Edwards, 1984; Markovits & Sowder, 1988; Schoen, 1994). This research hoped to contribute towards establishing computational estimation as a more accepted and worthwhile part of the mathematics curriculum. The study focused on a professional learning intervention, which used an action research approach and was designed to develop teachers' pedagogical content knowledge of computational estimation. The study utilised a multiple case study model, set within a social constructivist and sociocultural paradigm, to investigate the teachers' involvement in this intervention. Case studies were completed focusing on three of the teachers and their classes.
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Kley, Tobias [Verfasser], Holger [Gutachter] Dette, Herold [Gutachter] Dehling e Marc [Gutachter] Hallin. "Quantile-based spectral analysis : asymptotic theory and computation / Tobias Kley ; Gutachter: Holger Dette, Herold Dehling, Marc Hallin ; Fakultät für Mathematik". Bochum : Ruhr-Universität Bochum, 2014. http://d-nb.info/1228624046/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Burgos, Sylvestre Jean-Baptiste Louis. "The computation of Greeks with multilevel Monte Carlo". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:6453a93b-9daf-4bfe-8c77-9cd6802f77dd.

Texto completo da fonte
Resumo:
In mathematical finance, the sensitivities of option prices to various market parameters, also known as the “Greeks”, reflect the exposure to different sources of risk. Computing these is essential to predict the impact of market moves on portfolios and to hedge them adequately. This is commonly done using Monte Carlo simulations. However, obtaining accurate estimates of the Greeks can be computationally costly. Multilevel Monte Carlo offers complexity improvements over standard Monte Carlo techniques. However, the idea has never been used for the computation of Greeks. In this work we answer the following questions: can multilevel Monte Carlo be useful in this setting? If so, how can we construct efficient estimators? Finally, what computational savings can we expect from these new estimators? We develop multilevel Monte Carlo estimators for the Greeks of a range of options: European options with Lipschitz payoffs (e.g. call options), European options with discontinuous payoffs (e.g. digital options), Asian options, barrier options and lookback options. Special care is taken to construct efficient estimators for non-smooth and exotic payoffs. We obtain numerical results that demonstrate the computational benefits of our algorithms. We discuss the issues of convergence of pathwise sensitivities estimators. We show rigorously that the differentiation of common discretisation schemes for Itô processes does result in satisfactory estimators of the exact solutions’ sensitivities. We also prove that pathwise sensitivities estimators can be used under some regularity conditions to compute the Greeks of options whose underlying asset’s price is modelled as an Itô process. We present several important results on the moments of the solutions of stochastic differential equations and their discretisations, as well as the principles of the so-called “extreme path analysis”. 
We use these to develop a rigorous analysis of the complexity of the multilevel Monte Carlo Greeks estimators constructed earlier. The resulting complexity bounds appear to be sharp and prove that our multilevel algorithms are more efficient than those derived from standard Monte Carlo.
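A minimal multilevel pathwise-delta estimator for a European call under geometric Brownian motion gives the flavour of combining the two ideas in this abstract (a schematic illustration only, not the thesis's estimators; the Euler scheme, the coupling by a shared Brownian path, and all model parameters are assumptions):

```python
import numpy as np

def level_estimator(l, N, rng, S0, K, r, sig, T):
    # estimates E[P_l - P_{l-1}] with Euler paths coupled by a shared
    # Brownian path; level l uses 2**l time steps
    nf = 2 ** l
    h = T / nf
    dW = rng.normal(0.0, np.sqrt(h), size=(N, nf))
    growth_f = 1.0 + r * h + sig * dW
    Sf = S0 * growth_f.prod(axis=1)
    # pathwise delta of a call: d payoff / dS0 = e^{-rT} 1{S_T > K} S_T / S0
    delta_f = np.exp(-r * T) * (Sf > K) * Sf / S0
    if l == 0:
        return delta_f.mean()
    # coarse path: same Brownian increments, summed in pairs
    dWc = dW.reshape(N, nf // 2, 2).sum(axis=2)
    growth_c = 1.0 + r * (2 * h) + sig * dWc
    Sc = S0 * growth_c.prod(axis=1)
    delta_c = np.exp(-r * T) * (Sc > K) * Sc / S0
    return (delta_f - delta_c).mean()

def mlmc_delta(L, N, S0=100.0, K=100.0, r=0.05, sig=0.2, T=1.0):
    # telescoping sum over levels 0..L
    rng = np.random.default_rng(42)
    return sum(level_estimator(l, N, rng, S0, K, r, sig, T)
               for l in range(L + 1))
```

Because the coarse and fine paths share a Brownian path, the level corrections have small variance, which is the source of the complexity gain; handling the non-smooth payoffs listed in the abstract requires the more careful constructions developed in the thesis.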
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Simoneau, Andre. "An Overview of Computational Mathematical Physics: A Deep Dive on Gauge Theories". Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/cmc_theses/2182.

Texto completo da fonte
Resumo:
Over the course of a college mathematics degree, students are inevitably exposed to elementary physics. The derivations of the equations of motion are the classic examples of applications of derivatives and integrals. These equations of motion are easy to understand; however, they can be expressed in other ways that students aren't often exposed to. Using the Lagrangian and the Hamiltonian, we can capture the same governing dynamics of Newtonian mechanics with equations that emphasize physical quantities other than position, velocity, and acceleration like Newton's equations do. Building off of these alternate interpretations of mechanics and understanding gauge transformations, we begin to understand some of the mathematical physics relating to gauge theories. In general, gauge theories are field theories that can have gauge transformations applied to them in such a way that the meaningful physical quantities remain invariant. This paper covers the buildup to gauge theories, some of their applications, and some computational approaches to understanding them.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Khoury, Imad. "Mathematical and computational tools for the manipulation of musical cyclic rhythms". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101858.

Texto completo da fonte
Resumo:
This thesis presents and analyzes tools and experiments that aim at achieving multiple yet related goals in the exploration and manipulation of musical cyclic rhythms. The work presented in this thesis may be viewed as a preliminary study for the ultimate future goal of developing a general computational theory of rhythm. Given a family of rhythms, how does one reconstruct its ancestral rhythms? How should one change a rhythm's cycle length while preserving its musicologically salient properties, and hence be able to confirm or disprove popular or historical beliefs regarding its origins and evolution? How should one compare musical rhythms? How should one automatically generate rhythmic patterns? All these questions are addressed and, to a certain extent, solved in our study, and serve as a basis for the development of novel general tools, implemented in Matlab, for the manipulation of rhythms.
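One widely used generator for cyclic rhythmic patterns is the Euclidean rhythm (Toussaint), which spreads k onsets as evenly as possible over n pulses; it is mentioned here only as illustrative context for the kind of manipulation this abstract describes, since the thesis's own Matlab tools are not reproduced. The modular-arithmetic construction below yields Euclidean rhythms up to rotation:

```python
def euclidean_rhythm(k, n):
    # distribute k onsets as evenly as possible over n pulses
    # (equivalent to Bjorklund's algorithm up to rotation)
    return [(i * k) % n < k for i in range(n)]

def to_string(pattern):
    # render onsets as 'x' and rests as '.'
    return "".join("x" if hit else "." for hit in pattern)
```

For example, three onsets over eight pulses produce the familiar tresillo pattern `x..x..x.`, and rotating the list models the cyclic nature of the rhythms studied in the thesis.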
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Andersson, Per-Åke. "Computation of Thermal Development in Injection Mould Filling, based on the Distance Model". Licentiate thesis, Linköping University, Linköping University, Optimization, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5733.

Texto completo da fonte
Resumo:

The heat transfer in the filling phase of injection moulding is studied, based on Gunnar Aronsson’s distance model for flow expansion ([Aronsson], 1996).

The choice of a thermoplastic materials model is motivated by general physical properties, admitting temperature and pressure dependence. Two-phase, per-phase-incompressible, power-law fluids are considered. The shear rate expression takes into account pseudo-radial flow from a point inlet.

Instead of using a finite element (FEM) solver for the momentum equations a general analytical viscosity expression is used, adjusted to current axial temperature profiles and yielding expressions for axial velocity profile, pressure distribution, frozen layer expansion and special front convection.

The nonlinear energy partial differential equation is transformed into its conservative form, expressed by the internal energy, and is solved differently in the regions of streaming and stagnant flow, respectively. A finite difference (FD) scheme is chosen using control volume discretization to keep truncation errors small in the presence of non-uniform axial node spacing. Time and pseudo-radial marching is used. A local system of nonlinear FD equations is solved. In an outer iterative procedure the position of the boundary between the “solid” and “liquid” fluid cavity parts is determined. The uniqueness of the solution is claimed. In an inner iterative procedure the axial node temperatures are found. For all physically realistic material properties the convergence is proved. In particular the assumptions needed for the Newton-Mysovskii theorem are secured. The metal mould PDE is locally solved by a series expansion. For particular material properties the same technique can be applied to the “solid” fluid.

In the circular plate application, comparisons with the commercial FEM-FD program Moldflow (Mfl) are made, on two Mfl-database materials, for which model parameters are estimated/adjusted. The resulting time evolutions of pressures and temperatures are analysed, as well as the radial and axial profiles of temperature and frozen layer. The greatest differences occur at the flow front, where Mfl neglects axial heat convection. The effects of using more and more complex material models are also investigated. Our method performance is reported.

In the polygonal star-shaped plate application a geometric cavity model is developed. Comparison runs with the commercial FEM-FD program Cadmould (Cmd) are performed, on two Cmd-database materials, in an equilateral triangular mould cavity, and materials model parameters are estimated/adjusted. The resulting average temperatures at the end of filling are compared, on rays of different angular deviation from the closest corner ray and on different concentric circles, using angular and axial (cavity-halves) symmetry. The greatest differences occur in narrow flow sectors, fatal for our 2D model for a material with non-realistic viscosity model. We present some colour plots, e.g. for the residence time.

The classical square-root increase by time of the frozen layer is used for extrapolation. It may also be part of the front model in the initial collision with the cold metal mould. An extension of the model is found which describes the radial profile of the frozen layer in the circular plate application accurately also close to the inlet.

The well-posedness of the corresponding linearized problem is studied, as well as the stability of the linearized FD-scheme.


Report code: LiU-TEK-LIC-2002:66.
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Volkova, Tanya N. Presmeg Norma C. "Characterizing preservice teachers' thinking in computational estimation with regard to whole numbers, fractions, decimals, and percents". Normal, Ill. : Illinois State University, 2006. http://proquest.umi.com/pqdweb?index=0&did=1276391451&SrchMode=1&sid=6&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1181316122&clientId=43838.

Texto completo da fonte
Resumo:
Thesis (Ph. D.)--Illinois State University, 2006.
Title from title page screen, viewed on June 8, 2007. Dissertation Committee: Norma C. Presmeg (chair), Cynthia W. Langrall, Beverly S. Rich, Janet Warfield. Includes bibliographical references (leaves 177-187) and abstract. Also available in print.
Estilos ABNT, Harvard, Vancouver, APA, etc.