Dissertations / Theses on the topic '080200 Computation Theory and Mathematics'

To see the other types of publications on this topic, follow the link: 080200 Computation Theory and Mathematics.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic '080200 Computation Theory and Mathematics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bryant, Ross. "A Computation of Partial Isomorphism Rank on Ordinal Structures." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5387/.

Full text
Abstract:
We compute the partial isomorphism rank, in the sense of Scott and Karp, of a pair of ordinal structures using an Ehrenfeucht-Fraïssé game. A complete formula is proven by induction for any two arbitrary ordinals written in Cantor normal form.
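For orientation (a standard background fact, not a formula taken from the thesis), the Cantor normal form referred to here writes every ordinal α > 0 uniquely as a finite decreasing sum of ω-powers:

```latex
\alpha \;=\; \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k,
\qquad \beta_1 > \beta_2 > \cdots > \beta_k, \quad c_i \in \mathbb{N}\setminus\{0\},
```

and the induction in the thesis runs over this representation of the two ordinals being compared.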
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Yue. "Sparsity in Image Processing and Machine Learning: Modeling, Computation and Theory." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1523017795312546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Semegni, Jean Yves. "On the computation of freely generated modular lattices." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/1207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khafizov, Farid T. "Descriptions and Computation of Ultrapowers in L(R)." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277867/.

Full text
Abstract:
The results from this dissertation are an exact computation of ultrapowers by measures on cardinals $\aleph_n$, $n\in\omega$, in $L(\mathbb{R})$, and a proof that ordinals in $L(\mathbb{R})$ below $\delta^1_5$ represented by descriptions and the identity function with respect to sequences of measures are cardinals. An introduction to the subject with the basic definitions and well known facts is presented in chapter I. In chapter II, we define a class of measures on the $\aleph_n$, $n\in\omega$, in $L(\mathbb{R})$ and derive a formula for an exact computation of the ultrapowers of cardinals by these measures. In chapter III, we give the definitions of descriptions and the lowering operator. Then we prove that ordinals represented by descriptions and the identity function are cardinals. This result, combined with the fact that every cardinal $<\delta^1_5$ in $L(\mathbb{R})$ is represented by a description (J1), gives a characterization of cardinals in $L(\mathbb{R})$ below $\delta^1_5$. Concrete examples of formal computations are shown in chapter IV.
APA, Harvard, Vancouver, ISO, and other styles
5

Theeranaew, Wanchat. "STUDY ON INFORMATION THEORY: CONNECTION TO CONTROL THEORY, APPROACH AND ANALYSIS FOR COMPUTATION." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1416847576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Marsden, Daniel. "Logical aspects of quantum computation." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:e99331a3-9d93-4381-8075-ad843fb9b77c.

Full text
Abstract:
A fundamental component of theoretical computer science is the application of logic. Logic provides the formalisms by which we can model and reason about computational questions, and novel computational features provide new directions for the development of logic. From this perspective, the unusual features of quantum computation present both challenges and opportunities for computer science. Our existing logical techniques must be extended and adapted to appropriately model quantum phenomena, stimulating many new theoretical developments. At the same time, tools developed with quantum applications in mind often prove effective in other areas of logic and computer science. In this thesis we explore logical aspects of this fruitful source of ideas, with category theory as our unifying framework. Inspired by the success of diagrammatic techniques in quantum foundations, we begin by demonstrating the effectiveness of string diagrams for practical calculations in category theory. We proceed by example, developing graphical formulations of the definitions and proofs of many topics in elementary category theory, such as adjunctions, monads, distributive laws, representable functors and limits and colimits. We contend that these tools are particularly suitable for calculations in the field of coalgebra, and continue to demonstrate the use of string diagrams in the remainder of the thesis. Our coalgebraic studies commence in chapter 3, in which we present an elementary formulation of a representation result for the unitary transformations, following work developed in a fibrational setting in [Abramsky, 2010]. That paper raises the question of what a suitable "fibred coalgebraic logic" would be. This question is the starting point for our work in chapter 5, in which we introduce a parameterized, duality-based framework for coalgebraic logic. We show sufficient conditions under which dual adjunctions and equivalences can be lifted to fibrations of (co)algebras. We also prove that the semantics of these logics satisfy certain "institution conditions" providing harmony between syntactic and semantic transformations. We conclude by studying the impact of parameterization on another logical aspect of coalgebras, in which certain fibrations of predicates can be seen as generalized invariants. Our focus is on the lifting of coalgebra structure along a fibration from the base category to an associated total category of predicates. We show that given a suitable parameterized generalization of the usual liftings of signature functors, this induces a "fibration of fibrations" capturing the relationship between the two different axes of variation.
APA, Harvard, Vancouver, ISO, and other styles
7

Heyman, Joseph Lee. "On the Computation of Strategically Equivalent Games." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1561984858706805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Engdahl, Erik. "Computation of resonance energies and spectral densities in the complex energy plane : application of complex scaling techniques for atoms, molecules and surfaces /." Uppsala : Uppsala Universitet, 1988. http://bibpurl.oclc.org/web/32938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Whaley, Dewey Lonzo. "The Interquartile Range: Theory and Estimation." Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etd/1030.

Full text
Abstract:
The interquartile range (IQR) is used to describe the spread of a distribution. In an introductory statistics course, the IQR might be introduced as simply the “range within which the middle half of the data points lie.” In other words, it is the distance between the two quartiles, IQR = Q3 - Q1. We will compute the population IQR, the expected value, and the variance of the sample IQR for various continuous distributions. In addition, a bootstrap confidence interval for the population IQR will be evaluated.
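As a concrete illustration of the quantities discussed in this abstract, the sketch below (an editorial example, not code from the thesis; the exponential sample is an arbitrary choice) computes a sample IQR and a simple percentile-bootstrap confidence interval for the population IQR:

```python
import numpy as np

def sample_iqr(x):
    """Distance between the third and first sample quartiles."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

def bootstrap_iqr_ci(x, n_boot=2000, alpha=0.05, seed=None):
    """Percentile bootstrap confidence interval for the population IQR."""
    rng = np.random.default_rng(seed)
    boot = [sample_iqr(rng.choice(x, size=len(x), replace=True))
            for _ in range(n_boot)]
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy usage on a skewed (exponential) sample.
x = np.random.default_rng(0).exponential(scale=2.0, size=200)
print(sample_iqr(x), bootstrap_iqr_ci(x, seed=1))
```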
APA, Harvard, Vancouver, ISO, and other styles
10

Tung, Jen-Fu. "An Algorithm to Generate Two-Dimensional Drawings of Conway Algebraic Knots." TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/163.

Full text
Abstract:
The problem of finding an efficient algorithm to create a two-dimensional embedding of a knot diagram is not an easy one. Typically, knots with a large number of crossings will not nicely generate two-dimensional drawings. This thesis presents an efficient algorithm to generate a knot and to create a nice two-dimensional embedding of the knot. For the purpose of this thesis a drawing is “nice” if the number of tangles in the diagram consisting of half-twists is minimal. More specifically, the algorithm generates prime, alternating Conway algebraic knots in O(n) time where n is the number of crossings in the knot, and it derives a precise representation of the knot’s nice drawing in O(n) time (The rendering of the drawing is not O(n).). Central to the algorithm is a special type of rooted binary tree which represents a distinct prime, alternating Conway algebraic knot. Each leaf in the tree represents a crossing in the knot. The algorithm first generates the tree and then modifies such a tree repeatedly to reduce the number of its leaves while ensuring that the knot type associated with the tree is not modified. The result of the algorithm is a tree (for the knot) with a minimum number of leaves. This minimum tree is the basis of deriving a 4-regular plane map which represents the knot embedding and to finally draw the knot’s diagram.
APA, Harvard, Vancouver, ISO, and other styles
11

Johnson, Tomas. "Computer-aided Computation of Abelian integrals and Robust Normal Forms." Doctoral thesis, Uppsala universitet, Matematiska institutionen, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-107519.

Full text
Abstract:
This PhD thesis consists of a summary and seven papers, where various applications of auto-validated computations are studied. In the first paper we describe a rigorous method to determine unknown parameters in a system of ordinary differential equations from measured data with known bounds on the noise of the measurements. Papers II, III, IV, and V are concerned with Abelian integrals. In Paper II, we construct an auto-validated algorithm to compute Abelian integrals. In Paper III we investigate, via an example, how one can use this algorithm to determine the possible configurations of limit cycles that can bifurcate from a given Hamiltonian vector field. In Paper IV we construct an example of a perturbation of degree five of a Hamiltonian vector field of degree five, with 27 limit cycles, and in Paper V we construct an example of a perturbation of degree seven of a Hamiltonian vector field of degree seven, with 53 limit cycles. These are new lower bounds for the maximum number of limit cycles that can bifurcate from a Hamiltonian vector field for those degrees. In Papers VI, and VII, we study a certain kind of normal form for real hyperbolic saddles, which is numerically robust. In Paper VI we describe an algorithm how to automatically compute these normal forms in the planar case. In Paper VII we use the properties of the normal form to compute local invariant manifolds in a neighbourhood of the saddle.
APA, Harvard, Vancouver, ISO, and other styles
12

Sivan, D. D. "Design and structural modifications of vibratory systems to achieve prescribed modal spectra /." Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phs6238.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Mbangeni, Litha. "Development of methods for parallel computation of the solution of the problem for optimal control." Thesis, Cape Peninsula University of Technology, 2010. http://hdl.handle.net/20.500.11838/1110.

Full text
Abstract:
Thesis (MTech(Electrical Engineering))--Cape Peninsula University of Technology, 2010
Optimal control of fermentation processes is necessary for better behaviour of the process in order to achieve maximum production of product and biomass. The problem for optimal control is a very complex, nonlinear, dynamic problem requiring a long time for calculation. Applying decomposition-coordination methods to this type of problem simplifies the solution if it is implemented in a parallel way on a cluster of computers. Parallel computing can reduce the calculation time tremendously through distribution and parallelization of the computation algorithm. These processes can be achieved in different ways using the characteristics of the problem for optimal control. Problems for optimal control of fed-batch, batch and continuous fermentation processes for production of biomass and product are formulated. The problems are based on a criterion of maximum production of biomass at the end of the fermentation for the fed-batch process, maximum production of metabolite at the end of the fermentation for the batch process, and minimum time for achieving steady-state fermentor behaviour for the continuous process, and on unstructured mass-balance biological models incorporating, in the kinetic coefficients, the physiochemical variables considered as control inputs. An augmented Lagrange functional is applied and its decomposition in the time domain is used with a new coordinating vector. Parallel computing in a Matlab cluster is used to solve the above optimal control problems. The calculations and task allocation to the cluster workers are based on a shared-memory architecture. Real-time implementation of the calculation algorithms using a cluster of computers allows quick and simpler solutions to the optimal control problems.
APA, Harvard, Vancouver, ISO, and other styles
14

Torstensson, Johan. "Computation of Mileage Limits for Traveling Salesmen by Means of Optimization Techniques." Thesis, Linköping University, Department of Mathematics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12473.

Full text
Abstract:

Many companies have traveling salesmen that market and sell their products. This results in much traveling by car due to the daily customer visits. This causes costs for the company, in the form of travel expenses compensation, and environmental effects, in the form of carbon dioxide pollution. As many companies are certified according to environmental management systems, such as ISO 14001, the environmental work becomes more and more important as the environmental consciousness increases every day for companies, authorities and the public. The main task of this thesis is to compute reasonable limits on the mileage of the salesmen; these limits are based on specific conditions for each salesman's district. The objective is to implement a heuristic algorithm that optimizes the customer tours for an arbitrarily chosen month, which will represent a "standard" month. The output of the algorithm, the computed distances, will constitute a mileage limit for the salesman. The algorithm consists of a constructive heuristic that builds an initial solution, which is modified if infeasible. This solution is then improved by a local search algorithm preceding a genetic algorithm, whose task is to improve the tours separately. This method for computing mileage limits for traveling salesmen generates good solutions in the form of realistic tours. The mileage limits could be improved if the input data were more accurate and adjusted to each district, but the suggested method does what it is supposed to do.
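The construction and improvement stages mentioned above can be illustrated generically. The sketch below (an editorial example, not the thesis's implementation, which also adds a genetic-algorithm stage and real travel data) builds a tour with a nearest-neighbour constructive heuristic and improves it with 2-opt local search on a random distance matrix:

```python
import numpy as np

def nearest_neighbour_tour(dist, start=0):
    """Constructive heuristic: repeatedly visit the closest unvisited customer."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last, j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, dist):
    """Local search: reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b, c, d = tour[i - 1], tour[i], tour[j], tour[j + 1]
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

# Toy usage on 20 random customer locations.
rng = np.random.default_rng(1)
pts = rng.random((20, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = two_opt(nearest_neighbour_tour(dist), dist)
print(sum(dist[tour[k], tour[k + 1]] for k in range(len(tour) - 1)))
```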

APA, Harvard, Vancouver, ISO, and other styles
15

Sheppeard, Marni Dee. "Gluon Phenomenology and a Linear Topos." Thesis, University of Canterbury. Physics and Astronomy, 2007. http://hdl.handle.net/10092/1436.

Full text
Abstract:
In thinking about quantum causality one would like to approach rigorous QFT from outside the perspective of QFT, which one expects to recover only in a specific physical domain of quantum gravity. This thesis considers issues in causality using Category Theory, and their application to field theoretic observables. It appears that an abstract categorical Machian principle of duality for a ribbon graph calculus has the potential to incorporate the recent calculation of particle rest masses by Brannen, as well as the Bilson-Thompson characterisation of the particles of the Standard Model. This thesis shows how Veneziano n point functions may be recovered in such a framework, using cohomological techniques inspired by twistor theory and recent MHV techniques. This distinct approach fits into a rich framework of higher operads, leaving room for a generalisation to other physical amplitudes. The utility of operads raises the question of a categorical description for the underlying physical logic. We need to consider quantum analogues of a topos. Grothendieck's concept of a topos is a genuine extension of the notion of a space that incorporates a logic internal to itself. Conventional quantum logic has yet to be put into a form of equal utility, although its logic has been formulated in category theoretic terms. Axioms for a quantum topos are given in this thesis, in terms of braided monoidal categories. The associated logic is analysed and, in particular, elements of linear vector space logic are shown to be recovered. The usefulness of doing so for ordinary quantum computation was made apparent recently by Coecke et al. Vector spaces underly every notion of algebra, and a new perspective on it is therefore useful. The concept of state vector is also readdressed in the language of tricategories.
APA, Harvard, Vancouver, ISO, and other styles
16

Andujo, Nicholas R. "Progenitors Involving Simple Groups." CSUSB ScholarWorks, 1986. https://scholarworks.lib.csusb.edu/etd/758.

Full text
Abstract:
In this thesis we write representations of both permutation and monomial progenitors, including 2^(*4) : D_4 and 2^(*7) : L_2(7) as permutation progenitors, and the monomial progenitors 7^(*2) :_m (S_3 x 2), 11^(*2) :_m (5:2)^(*)5, 11^(*3) :_m (25:3), and 11^(*4) :_m (4:5)^(*)5, as well as the images of these progenitors over both lower and higher fields and orders. We also perform the double coset enumeration of S_5 over D_6, S_6 over 5:4, and A_5 x A_5 over (5:2)^(*)5, and go on to carry out double coset enumeration over maximal subgroups for larger constructions. We construct the sporadic group M_22 over its maximal subgroup A_7, and J_1 with the monomial representation 7^(*2) :_m (S_3 x 2) over the maximal subgroup PSL(2,11). We also examine extension problems for composition factors of different groups and determine the isomorphism type of each extension.
APA, Harvard, Vancouver, ISO, and other styles
17

Burgos, Sylvestre Jean-Baptiste Louis. "The computation of Greeks with multilevel Monte Carlo." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:6453a93b-9daf-4bfe-8c77-9cd6802f77dd.

Full text
Abstract:
In mathematical finance, the sensitivities of option prices to various market parameters, also known as the "Greeks", reflect the exposure to different sources of risk. Computing these is essential to predict the impact of market moves on portfolios and to hedge them adequately. This is commonly done using Monte Carlo simulations. However, obtaining accurate estimates of the Greeks can be computationally costly. Multilevel Monte Carlo offers complexity improvements over standard Monte Carlo techniques. However, the idea has never been used for the computation of Greeks. In this work we answer the following questions: can multilevel Monte Carlo be useful in this setting? If so, how can we construct efficient estimators? Finally, what computational savings can we expect from these new estimators? We develop multilevel Monte Carlo estimators for the Greeks of a range of options: European options with Lipschitz payoffs (e.g. call options), European options with discontinuous payoffs (e.g. digital options), Asian options, barrier options and lookback options. Special care is taken to construct efficient estimators for non-smooth and exotic payoffs. We obtain numerical results that demonstrate the computational benefits of our algorithms. We discuss the issues of convergence of pathwise sensitivities estimators. We show rigorously that the differentiation of common discretisation schemes for Ito processes does result in satisfactory estimators of the exact solutions' sensitivities. We also prove that pathwise sensitivities estimators can be used under some regularity conditions to compute the Greeks of options whose underlying asset's price is modelled as an Ito process. We present several important results on the moments of the solutions of stochastic differential equations and their discretisations as well as the principles of the so-called "extreme path analysis". We use these to develop a rigorous analysis of the complexity of the multilevel Monte Carlo Greeks estimators constructed earlier. The resulting complexity bounds appear to be sharp and prove that our multilevel algorithms are more efficient than those derived from standard Monte Carlo.
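For context, the multilevel Monte Carlo approach referred to here rests on the standard telescoping identity (background material, not a formula quoted from the thesis):

```latex
\mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\left[P_\ell - P_{\ell-1}\right],
```

where P_ℓ denotes the payoff (or pathwise sensitivity) computed with time step h_ℓ = h_0·2^(-ℓ); each correction term is estimated from independent samples, so most of the sampling effort goes into cheap coarse levels while the fine levels only correct small-variance differences.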
APA, Harvard, Vancouver, ISO, and other styles
18

Qin, Yu. "Computations and Algorithms in Physical and Biological Problems." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11478.

Full text
Abstract:
This dissertation presents the applications of state-of-the-art computation techniques and data analysis algorithms in three physical and biological problems: assembling DNA pieces, optimizing self-assembly yield, and identifying correlations from large multivariate datasets. In the first topic, in-depth analysis of using Sequencing by Hybridization (SBH) to reconstruct target DNA sequences shows that a modified reconstruction algorithm can overcome the theoretical boundary without the need for different types of biochemical assays and is robust to error. In the second topic, consistent with theoretical predictions, simulations using Graphics Processing Unit (GPU) demonstrate how controlling the short-ranged interactions between particles and controlling the concentrations optimize the self-assembly yield of a desired structure, and nonequilibrium behavior when optimizing concentrations is also unveiled by leveraging the computation capacity of GPUs. In the last topic, a methodology to incorporate existing categorization information into the search process to efficiently reconstruct the optimal true correlation matrix for multivariate datasets is introduced. Simulations on both synthetic and real financial datasets show that the algorithm is able to detect signals below the Random Matrix Theory (RMT) threshold. These three problems are representatives of using massive computation techniques and data analysis algorithms to tackle optimization problems, and outperform theoretical boundary when incorporating prior information into the computation.
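For orientation, the Random Matrix Theory threshold mentioned here is usually taken to be the upper edge of the Marchenko-Pastur spectrum for the correlation matrix of N standardized variables observed over T samples (a standard result, not a formula from the dissertation):

```latex
\lambda_{\pm} \;=\; \Bigl(1 \pm \sqrt{N/T}\Bigr)^{2},
```

eigenvalues above λ₊ are then interpreted as genuine correlation structure rather than estimation noise.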
Engineering and Applied Sciences
APA, Harvard, Vancouver, ISO, and other styles
19

Hansen, Brian Francis. "Explicit Computations Supporting a Generalization of Serre's Conjecture." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd842.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Devore, Lucas Clay. "Random Walks with Elastic and Reflective Lower Boundaries." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Barbier, Morgan. "Décodage en liste et application à la sécurité de l'information." Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00677421.

Full text
Abstract:
This thesis studies certain aspects of error-correcting codes and their applications to information security. More specifically, it addresses the problems of complete decoding and list decoding. A new notion of codes is introduced by tying a family of codes to a decoding algorithm, thereby highlighting the codes for which complete decoding is achievable in polynomial time. We then present a reformulation of the Koetter-Vardy list-decoding algorithm for alternant codes and analyse its complexity. This method yields a reduction of the key size of the McEliece cryptosystem of up to 21% for the dyadic variant. We also study code-based steganography. We propose several bounds characterizing stegosystems that use linear codes, so as to guarantee the solvability of the embedding problem with locked positions. One of these bounds shows that the lower the MDS rank of the code used, the more efficient the resulting stegosystem. We also show that systematic non-linear codes are good candidates. Finally, we reformulate the bounded embedding problem with locked positions so that embedding is always possible, and we prove that binary Hamming codes satisfy all the constraints exhibited.
APA, Harvard, Vancouver, ISO, and other styles
22

Bispo, Danilo Gustavo. "Dos fundamentos da matemática ao surgimento da teoria da computação por Alan Turing." Pontifícia Universidade Católica de São Paulo, 2013. https://tede2.pucsp.br/handle/handle/13286.

Full text
Abstract:
In this text, in order to contextualize the influences involved in the emergence of Alan Turing's theory of computability, I first present a history of some of the problems that occupied mathematicians at the beginning of the twentieth century. Chapter 1 gives an overview of the emergence of the formalist ideology conceived by the mathematician David Hilbert in the early twentieth century. The aim of formalism was to provide foundations for elementary mathematics through the axiomatic method, eliminating contradictions and paradoxes from its theories. Although Hilbert did not fully succeed in his programme, it is shown how his ideas influenced the development of Turing's theory of computation. The theory Turing proposes concerns a decision procedure: a method that analyses any arbitrary formula of logic and determines whether or not it is provable. Turing proves that no general decision procedure can exist. The primary source used is the paper On computable numbers, with an application to the Entscheidungsproblem. Chapter 2 presents the main sections of Turing's paper and explores some of its concepts. The work concludes with a critique of this classic text in the history of mathematics, based on the historiographical proposals presented in the first chapter.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhao, Yue. "Modelling avian influenza in bird-human systems : this thesis is presented in the partial fulfillment of the requirement for the degree of Masters of Information Science in Mathematics at Massey University, Albany, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1145.

Full text
Abstract:
In 1997, the first human case of avian influenza infection was reported in Hong Kong. Since then, avian influenza has become more and more hazardous for both animal and human health. Scientists believed that it would not take long until the virus mutates to become contagious from human to human. In this thesis, we construct avian influenza with possible mutation situations in bird-human systems. Also, possible control measures for humans are introduced in the systems. We compare the analytical and numerical results and try to find the most efficient control measures to prevent the disease.
APA, Harvard, Vancouver, ISO, and other styles
24

McDermott, Matthew. "Fast Algorithms for Analyzing Partially Ranked Data." Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/hmc_theses/58.

Full text
Abstract:
Imagine your local creamery administers a survey asking their patrons to choose their five favorite ice cream flavors. Any data collected by this survey would be an example of partially ranked data, as the set of all possible flavors is only ranked into subsets of the chosen flavors and the non-chosen flavors. If the creamery asks you to help analyze this data, what approaches could you take? One approach is to use the natural symmetries of the underlying data space to decompose any data set into smaller parts that can be more easily understood. In this work, I describe how to use permutation representations of the symmetric group to create and study efficient algorithms that yield such decompositions.
APA, Harvard, Vancouver, ISO, and other styles
25

Whitinger, Robert. "An Algorithm for the Machine Calculation of Minimal Paths." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3119.

Full text
Abstract:
Problems involving the minimization of functionals date back to antiquity. The mathematics of the calculus of variations has provided a framework for the analytical solution of a limited class of such problems. This paper describes a numerical approximation technique for obtaining machine solutions to minimal path problems. It is shown that this technique is applicable not only to the common case of finding geodesics on parameterized surfaces in R3, but also to the general case of finding minimal functionals on hypersurfaces in Rn associated with an arbitrary metric.
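The minimal path problems referred to here minimise an arc-length functional; in the standard setting (background notation, not taken from the thesis) one seeks a curve γ minimising

```latex
J[\gamma] \;=\; \int_{a}^{b} \sqrt{\,g_{ij}\bigl(\gamma(t)\bigr)\,\dot{\gamma}^{i}(t)\,\dot{\gamma}^{j}(t)\,}\;dt,
```

whose critical points satisfy the geodesic (Euler-Lagrange) equations of the metric g_{ij}; a machine calculation replaces the integral by a sum over a discretised path and minimises over the interior nodes.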
APA, Harvard, Vancouver, ISO, and other styles
26

Lenormand, Maxime. "Initialize and Calibrate a Dynamic Stochastic Microsimulation Model: Application to the SimVillages Model." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00764929.

Full text
Abstract:
The aim of this thesis is to develop statistical tools for initializing and calibrating dynamic stochastic microsimulation models, starting from the example of the SimVillages model (developed within the European project PRIMA). This model couples demographic and economic dynamics applied to a population of rural municipalities. Each individual in the population, represented explicitly within a household in a municipality, possibly works in another municipality and has his or her own life trajectory. The model thus includes dynamics of life choices, education, career, union, birth, divorce, migration and death. We have developed, implemented and tested the following models and methods: * a model that generates a synthetic population from aggregate data, in which each individual is a member of a household, lives in a municipality and has an employment status; this synthetic population is the initial state of the model. * a model that simulates an origin-destination table of home-to-work commuting from aggregate data. * a model that estimates the number of jobs in local services in a given municipality as a function of its number of inhabitants and of its neighbourhood in terms of services. * a method for calibrating the unknown parameters of the SimVillages model so as to satisfy a set of error criteria defined on heterogeneous data sources. This method is based on a new sequential sampling algorithm of the Approximate Bayesian Computation type.
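The calibration step relies on Approximate Bayesian Computation (ABC). A minimal rejection-sampling variant is sketched below as an editorial illustration (the thesis uses a sequential ABC sampler; the toy model, prior and tolerance here are arbitrary choices):

```python
import numpy as np

def abc_rejection(simulate, observed, prior_sampler, distance, n_draws=10000, tol=0.1):
    """Keep parameter draws whose simulated output lies within tol of the data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if distance(simulate(theta), observed) <= tol:
            accepted.append(theta)
    return np.array(accepted)

# Toy usage: recover the mean of a normal model from an observed sample mean.
rng = np.random.default_rng(0)
observed = 1.3
draws = abc_rejection(
    simulate=lambda th: rng.normal(th, 1.0, size=50).mean(),
    observed=observed,
    prior_sampler=lambda: rng.uniform(-5, 5),
    distance=lambda sim, obs: abs(sim - obs),
)
print(len(draws), draws.mean() if len(draws) else None)
```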
APA, Harvard, Vancouver, ISO, and other styles
27

Lemaire, François. "Contribution à l'algorithmique en algèbre différentielle." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2002. http://tel.archives-ouvertes.fr/tel-00001363.

Full text
Abstract:
This thesis is devoted to the study of systems of nonlinear partial differential equations. The approach chosen is that of differential algebra. Given a system of differential equations, we seek to obtain information about its solutions. To do so, we compute a family of particular sets (called regular differential chains) whose union of solutions coincides with the solutions of the initial system.

The new results belong mainly to computer algebra. Chapter 2 clarifies the link between regular chains and regular differential chains. Two new algorithms (chapters 4 and 5) improve on the existing algorithms for computing these regular differential chains. These two algorithms incorporate purely algebraic techniques that allow the growth of the data to be better controlled and unnecessary computations to be eliminated. Problems that had so far remained unsolved could thus be handled. An algorithm for computing the normal form of a differential polynomial modulo a regular differential chain is presented in chapter 2.

The last results belong to analysis. The solutions we consider are formal power series. Chapter 3 gives sufficient conditions for a formal solution to be analytic. The same chapter presents a counterexample to a conjecture on the analyticity of formal solutions.
APA, Harvard, Vancouver, ISO, and other styles
28

Weil, Jacques-Arthur. "Méthodes effectives en théorie de Galois différentielle et applications à l'intégrabilité de systèmes dynamiques." Habilitation à diriger des recherches, Université de Limoges, 2013. http://tel.archives-ouvertes.fr/tel-00933064.

Full text
Abstract:
My research is mainly devoted to developing computer algebra methods for the constructive study of linear differential equations, particularly around differential Galois theory. This ranges from the development of the underlying theory to algorithms, including their implementation in Maple. These works share an experimental approach to mathematics, in which the emphasis is placed on examining the most relevant examples possible. The detailed study of cases arising from rational mechanics or theoretical physics in turn feeds the development of suitable mathematical theories. My work is organised around three interdependent themes: effective differential Galois theory, its applications to the integrability of Hamiltonian systems, and applications in theoretical physics.
APA, Harvard, Vancouver, ISO, and other styles
29

Janon, Alexandre. "Analyse de sensibilité et réduction de dimension. Application à l'océanographie." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00757101.

Full text
Abstract:
Mathematical models aim to describe the behaviour of a system. This description is often imperfect, in particular because of uncertainties on the parameters that define the model. In the context of geophysical fluid modelling, these parameters may be, for example, the geometry of the domain, the initial state, the wind forcing, or the friction or viscosity coefficients. The purpose of sensitivity analysis is to measure the impact of the uncertainty attached to each input parameter on the model solution and, more particularly, to identify the "sensitive" parameters (or groups of parameters). Among the various sensitivity analysis methods, we focus on the one based on the computation of Sobol sensitivity indices. The numerical computation of these Sobol indices requires numerical solutions of the model for a large number of instances of the input parameters. However, in many contexts, including that of geophysical models, each model run may require a significant computation time, which makes it impractical, or at least inconvenient, to perform the number of runs needed to estimate the Sobol indices with the desired accuracy. This leads to replacing the original model by a metamodel (also called a response surface or surrogate model): a model approximating the original numerical model that requires a much smaller computation time per run. This thesis focuses on the use of a metamodel in the computation of Sobol indices, and more particularly on quantifying the impact of replacing the model by a metamodel in terms of the estimation error of the Sobol indices. We are also interested in a method for constructing an efficient and rigorous metamodel that can be used in the geophysical context.
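For reference, the first-order Sobol index of an input X_i, which the thesis estimates through a metamodel, is the standard variance-decomposition quantity (background definition, not a formula from the thesis):

```latex
S_i \;=\; \frac{\operatorname{Var}\bigl(\mathbb{E}[\,Y \mid X_i\,]\bigr)}{\operatorname{Var}(Y)} \;\in\; [0,1],
```

with values close to 1 indicating that uncertainty in X_i alone explains most of the variance of the model output Y.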
APA, Harvard, Vancouver, ISO, and other styles
30

Hanna, George T. "Cubature reduction using the theory of inequalities." Thesis, 2002. https://vuir.vu.edu.au/18166/.

Full text
Abstract:
This dissertation is a detailed analysis of two-dimensional integration providing a priori error bounds in a variety of measures of integrand derivatives. Cubature formulae involving both function evaluations and one-dimensional integration are furnished and numerical experiments to investigate the efficacy of the error formulae are performed. Product (and singular) double integration is investigated. Two-dimensional rectangular integral inequalities are constructed via embedding two one-dimensional Peano kernels. In one dimension, linear kernels with a parametric discontinuity furnish "three point" rules where sampling occurs at the boundary and an interior point. The error is bounded in terms of the Lebesgue norms of the first derivative of the integrand. In two dimensions for a rectangular region, we find that the rule generalises to three "three point" rules in each dimension. That is, nine sample points and six one-dimensional integrals. The error bound is expressed in terms of norms of the first mixed partial derivative of the integrand. These results are further generalised to provide error bounds in terms of an arbitrary-order mixed partial derivative of the integrand. That is, error bounds in measures of ∂^(n+m)f/(∂t^n ∂s^m) for some integers n, m > 0, where the integrand is f. In this case, we find that the rule involves both sample points and one-dimensional integrals involving all the partial derivatives of the integrand up to the stated order. Finally, we explore product integrands, where the weight ω(•,•) is positive and integrable. In this case, the rule and the error bound involve moments of the weight. Particular attention is paid to identifying a priori two-dimensional grids for which the error bound is minimized. Various weights and weight null spaces are explored and cubature formulae providing "optimal" grids are given.
APA, Harvard, Vancouver, ISO, and other styles
31

Im, Paul Poh Teng. "An enhanced progressive fuzzy clustering approach to pattern recognition." Thesis, 1997. https://vuir.vu.edu.au/15324/.

Full text
Abstract:
This thesis applies an enhanced progressive clustering approach, involving fuzzy clustering algorithms and fuzzy neural networks, to solve some practical problems of pattern recognition. A new fuzzy clustering framework, referred to as Cluster Prototype Centring by Membership (CPCM), has been developed. A Possibilistic Fuzzy c-Means algorithm (PFCM), which is also new, has been formulated to investigate properties of fuzzy clustering. PFCM extends the usability of the Fuzzy c-Means (FCM) algorithm by generalisation of the membership function.
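For orientation, the classical Fuzzy c-Means iteration that PFCM generalises alternates membership and centroid updates. The sketch below shows standard FCM only (an editorial illustration; it is not the thesis's PFCM or CPCM algorithm, and the test data are arbitrary):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard FCM: alternate fuzzy membership and weighted-centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 for each point
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=-1) + 1e-12
        p = 2.0 / (m - 1)
        U_new = d ** (-p) / (d ** (-p)).sum(axis=0)   # standard FCM membership
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Toy usage: two well-separated Gaussian blobs in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (40, 2)) for mu in (0.0, 3.0)])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
```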
APA, Harvard, Vancouver, ISO, and other styles
32

White, Roderick J. "Aspects of parallel topologies applied to digital transforms of discrete signals." Thesis, 1994. https://vuir.vu.edu.au/17930/.

Full text
Abstract:
Discrete transformations are widely used in the fields of signal and image processing. Applications in the areas of data compression, template matching, signal filtering and pattern recognition all utilise various discrete transforms. The calculation of transformations is a computationally intensive task which in most practical applications requires considerable computing resources. This characteristic has restricted the use of many transformations to applications with smaller datasets or where real-time performance is not essential. This restriction can be removed by the application of parallel processing techniques to the calculation of discrete transformations. The aim of this thesis is to determine efficient parallel algorithms and processor topologies for the implementation of the discrete Walsh, cosine, Haar and D4 Daubechies transforms, and to compare the operation of the parallel algorithms running on T800 Transputers with the equivalent serial von Neumann type algorithm. This thesis also examines the transformations of a number of test functions in order to determine their ability to represent various common global and locally defined functions. It was found that the parallel algorithms developed during the course of this thesis for the discrete Walsh, cosine, Haar and D4 Daubechies transforms could all be efficiently implemented on a hypercube processor topology. Development of a number of parallel algorithms also led to the discovery of a new parallel algorithm for the calculation of any transformation which can be expressed as a Kronecker or tensor product/sum. A hypercube based algorithm was devised which converts the Kronecker product to a Hadamard product on a hypercube structure. This provides a simple algorithm for parallel implementations. Examination of the four sets of transform coefficients for the test functions revealed that all the transforms examined were not suitable for representing functions with large numbers of discontinuities such as the chirp function. Also, transforms with local basis functions such as the Haar and D4 Daubechies transforms provided better representations of localised functions than transforms consisting of global basis function sets such as the discrete Walsh and cosine transformations.
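The Walsh transform discussed above has a butterfly structure that maps naturally onto a hypercube of processors: in each pass, elements whose indices differ in one bit exchange data. The sketch below is a sequential fast Walsh-Hadamard transform given as an editorial illustration, not the thesis's Transputer implementation:

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2^k vector.

    Each pass (stride h) combines elements whose indices differ in one bit,
    which is exactly the communication pattern of one hypercube dimension
    and is why the algorithm parallelises naturally on that topology.
    """
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```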
APA, Harvard, Vancouver, ISO, and other styles
33

So, Wing Wah Simon. "Content-based image indexing and retrieval for visual information systems." Thesis, 2000. https://vuir.vu.edu.au/15318/.

Full text
Abstract:
The dominance of visual data in recent times has made a fundamental change to our everyday life. Less than five to ten years ago, Internet and World Wide Web were not the daily vocabulary for the general public. But now, even a young child can use the Internet to search for information. This, however, does not mean that we have a mature technology to perform visual information search. On the contrary, visual information retrieval is still in its infancy. The problem lies on the semantic richness and complexity of visual information in comparison to alphanumeric information. In this thesis, we present new paradigms for content-based image indexing and retrieval for Visual Information Systems. The concept of Image Hashing and the developments of Composite Bitplane Signatures with Inverted Image Indexing and Compression are the main contributions to this dissertation. These paradigms are analogous to the signature-based indexing and inversion-based postings for text information retrieval. We formulate the problem of image retrieval as a two dimensional hashing as oppose to a one-dimensional hash vector used in conventional hashing techniques. Wavelets are used to generate the bitplane signatures. The natural consequence to our bitplane signature scheme is the superimposed bitplane signatures for efficient retrieval. Composite bitplanes can then be used as the low-level feature information together with high-level semantic indexing to form a unified and integrated framework in our inverted model for content-based image retrieval.
APA, Harvard, Vancouver, ISO, and other styles
34

Hart, Keith Allen. "Mean reversion in asset prices and asset allocation in investment management." Thesis, 1996. https://vuir.vu.edu.au/18168/.

Full text
Abstract:
This thesis examines the predictability of asset prices for an Australian investor. Evidence supporting the mean reversion alternative to the random walk hypothesis is presented, with a discussion of potential models, both linear and nonlinear. The normality and homoscedasticity assumptions are investigated and their use in asset models is validated. A study of fund performance is carried out and value is found to be added by timing asset allocation but not by stock selection, though there is no correlation between past and present rankings of managers. The difficulty of proving mean reversion or reversion to trend, other than for large deviations or extremes, and the actual performance by managers, implies a strategy of allocation at these extremes. That is, managers should adhere to their policy portfolios and let markets run short term; making appropriate large strategic moves when markets have moved to extremes.
APA, Harvard, Vancouver, ISO, and other styles
35

Misiorek, Violetta Iwona. "Controlling processes with reference to costs, item price and process evolution." Thesis, 1998. https://vuir.vu.edu.au/18194/.

Full text
Abstract:
This thesis presents some recent work of the author in developing the analysis of a number of process control models that take into account statistical, economic and other practical issues. Special attention is paid to the problem of optimum selection of the initial process mean setting, with particular reference to filling/canning processes. As there are many different situations that involve different cost parameters, this leads to the consideration of various models, each with their own particular solution. The effects of changes in the process variance on the optimal solution, as well as on the expected profit, are discussed. Implications for 'Weights and Measures' requirements of following this optimality path are provided, with particular reference to the loss in expected profit per item. Chapter 1 provides a brief introduction and is followed by a literature review in Chapter 2. Chapter 3 deals with the issue of selecting the optimum process mean by presenting a simple model and emphasising the dependencies between the process parameters. Chapter 4 further investigates the problem presented in Chapter 3 and presents several models for which the selection of the most profitable process setting is considered, concentrating on a canning problem. Various industrial filling processes are described and some of the issues considered include: waste, overfill, top-up, and the penalty costs for items that initially fail to meet specifications. Chapter 5 discusses Weights and Measures requirements in connection with a canning process. Both Australian requirements and OIML International recommendations are discussed. The Australian requirements are also compared with the requirements of the European Economic Community as well as the United States. Chapter 6 illustrates the potential use of the models developed in Chapter 4 by giving an industrial example and again discussing the implications for Weights and Measures requirements. In Chapter 7 the problem of optimal selection of the initial process mean is examined for a process with a linear shift. Special focus is on the economic benefits obtained from reducing the process standard deviation and the rate of change of the mean. Conclusions and some suggestions for future work are provided in Chapter 8. Parts of Chapters 4, 5 and 6 form the contents of a paper, 'Mean Selection for Various Types of Filling Process with Implications to 'Weights and Measures' Requirements', undergoing revision for publication in the Journal of Quality Technology.
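A stylised version of the canning problem discussed above can be written down directly; the sketch below is an editorial illustration with arbitrary prices, costs and limits (not a model or figures from the thesis), in which cans filled below a legal lower limit are reworked at a fixed cost and the mean setting is chosen to maximise expected profit per can:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def expected_profit(mu, sigma=1.0, lower=100.0, price=10.0,
                    cost_per_unit=0.05, rework=2.0):
    """Stylised canning model with all figures chosen for illustration only.

    Revenue is `price` per conforming can, filling material costs
    `cost_per_unit` per unit of the mean setting, and each can below the
    legal limit `lower` incurs a fixed `rework` penalty.
    """
    p_ok = 1.0 - norm.cdf(lower, loc=mu, scale=sigma)
    return p_ok * price - cost_per_unit * mu - (1.0 - p_ok) * rework

res = minimize_scalar(lambda m: -expected_profit(m),
                      bounds=(100.0, 110.0), method="bounded")
print(res.x)   # profit-maximising mean setting, a little above the legal limit
```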
APA, Harvard, Vancouver, ISO, and other styles
36

Taniar, David Randy. "Query optimization for parallel object-oriented database systems." Thesis, 1997. https://vuir.vu.edu.au/15272/.

Full text
Abstract:
This thesis studies parallel query optimization for object-oriented queries. Its main objective is to investigate how performance improvement of object-oriented query processing can be achieved through processor parallelism.
APA, Harvard, Vancouver, ISO, and other styles
37

Ives, Robert V. "Reduction of the parameter estimation time for an adaptive control system." Thesis, 1994. https://vuir.vu.edu.au/18182/.

Full text
Abstract:
The following work is concerned with the use of the Method of Least Squares in the parameter estimation of a discrete-time model of a system. In particular, the emphasis is upon both the initial convergence and accuracy of the estimates. The investigation is therefore pertinent to both the "cold-starting" of least squares estimators, and to systems in which "jump" changes in parameters occur, requiring resetting of the estimator. The work was approached from an engineering viewpoint, with the requirement that the theory be applied to a real system. The real system selected was a positional servosystem, using a DC motor. A number of least squares algorithms were compared for their suitability to such an application. The algorithms examined were: 1) A standard, non-recursive solution of the least squares equations by Lower-Upper Factorisation of the information matrix. 2) A standard, recursive solution, i.e. Recursive Least Squares, RLS. 3) Two reduced order solutions using a priori knowledge of the type number of the servosystem (LU Factorisation and RLS). 4) An Extended Least Squares Solution, using a recursive algorithm. 5) Several non-recursive solutions using instrumental variables. The methods were initially examined using a software simulation of the servosystem. This simulation was based on a linear, second-order model. It was concluded that the preferred methods were the reduced-order solutions using a priori knowledge. The following hypothesis was examined: By raising the rate at which the signals are sampled, more information is provided to the estimator in any given period of time. Increasing the sampling rate should therefore result in a superior, real-time parameter estimator.
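For orientation, the recursive least squares (RLS) estimator compared in this work follows the standard covariance-update form. The sketch below is a generic RLS update applied to a toy two-parameter model (an editorial illustration, not the thesis's reduced-order or extended variants):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update.

    theta : current parameter estimate
    P     : current covariance matrix
    phi   : regressor vector for the new sample
    y     : new measured output
    lam   : forgetting factor (1.0 = ordinary RLS)
    """
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Toy usage: identify y = 2*u1 - 0.5*u2 from noisy samples, "cold started".
rng = np.random.default_rng(0)
theta, P = np.zeros(2), np.eye(2) * 1e3
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 0.5 * phi[1] + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta)   # approaches [2.0, -0.5]
```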
APA, Harvard, Vancouver, ISO, and other styles
38

Chandran, Jaideep. "An image based colorimetric technique for portable blood gas analysis." Thesis, 2012. https://vuir.vu.edu.au/19422/.

Full text
Abstract:
Blood gas analysis is an important part of a doctor's diagnosis; it primarily consists of the partial pressure of oxygen, the partial pressure of carbon dioxide and the pH. It provides medical practitioners with insight into the patient's respiratory health, the metabolic activity in the body and the health of the renal system. The emphasis of the thesis is on the development of the colorimetric technique to measure pH and pCO2 and on the design and implementation of the colorimetric algorithm in hardware. The colorimetric algorithm is implemented using floating-point arithmetic architectures for high accuracy, with an emphasis on low power consumption and area.
APA, Harvard, Vancouver, ISO, and other styles
39

Grossman, Igor. "Applications of multi-threading paradigms to stimulate turbulent flows." Thesis, 2017. https://vuir.vu.edu.au/40454/.

Full text
Abstract:
Flow structures in turbulent flows span many orders of magnitude of length and time scales. They range from the length scale at which very small eddies lose their coherence as their translational kinetic energy is dissipated into heat, up to eddies the size of which is related to that of the macroscopic system. The behaviour of the range of flow structures can be captured by assuming that the fluid is a continuum, and they can be described by solving the Navier-Stokes equations. However, analytical solutions of the Navier-Stokes equations exist only for simple cases. A complete description of turbulent flow in which the flow variables velocity and pressure are resolved as a function of space and time can be obtained only numerically. The instantaneous range of scales in turbulent flows increases rapidly with the Reynolds number. As a result, most engineering problems have a wide range of scales that can be computed using direct numerical simulation (DNS). As the complexity of the calculated flows increases, an improvement in turbulence models is often needed. One way to overcome this problem is to search for models that better capture the features of turbulence. Furthermore, the models should be parameterised in a way that allows flows to be simulated under a wide range of conditions. DNS is a useful tool in this endeavour, and it can be used to complement the long-established methodologies of experimental research. A large number of computational grids must be used to simulate the high Reynolds number flows that occur in the complicated geometries often encountered in practical applications. This approach requires a considerable amount of computational power. For example, reducing the grid spacing in half increases the computational cost by a factor of about sixteen. Challenges presented by limitations imposed by computer hardware significantly limit the number of practical numerical solutions required to satisfy engineering needs. In this work, we propose an alternative approach. Rather than running an application that solves the Navier-Stokes equations on one computer, we have developed a platform that allows a group of computers to communicate with one another, working together to obtain a solution of a specific flow problem. This approach helps to overcome the problem of hardware limitations. However, to grasp these challenges, we must devise new strategies for the computational paradigms associated with parallel computing. In the case of solving the Navier-Stokes equations, we have to deal with significant computational and memory requirements. To overcome these requirements, software should be able to be run on many high-performance computers simultaneously, and network communication may become a new limiting issue that is specific only to parallel environments. Translating to parallel environments triggers several scenarios that do not exist when developing software that executes sequential operations. For example, "race conditions" may appear that result in many threads attempting to use different values of a shared variable, or simultaneously attempting to overwrite it. The order of executions may be random as the operating system can swap between the threads at any time. Attempts to synchronise the threads may result in "deadlock" when all resources become simultaneously locked. Debugging and problem-solving in parallel environments is quite often difficult due to the potentially random nature of the orders in which threads run.
All of these features require the development of new paradigms, and we must transform our way of envisioning the development of software for parallel execution. The solution to this problem is the motivation for the work presented in this thesis. A significant contribution of this work is to strategically use the ideas of thread injection to speed up the execution of sequential code. Bottlenecks are identified, and thread injection is used to parallelise the code that may be distributed to many different systems. This approach is implemented by creating a class that takes control over the sequential instructions that create the bottlenecks. The challenge to engineers and scientists is to determine how a given task can be split into components that can be run in parallel. The method is illustrated by applying it to Channelflow (Gibson, 2014), which is open-source Direct Numerical Simulation software used to simulate flows between two parallel plates. Another challenge that arises when approaching representations of real geometries is the scale and magnitude of the data samples. For example, the Johns Hopkins Turbulence Database (JHTDB) contains the results of a direct numerical simulation (DNS) of isotropic turbulent flow of an incompressible fluid in 3D space, and this alone requires 100 TB of data. Much more data is needed to perform a simulation, and this is just a straightforward model. A natural answer to this challenge is to exploit the opportunities offered by contemporary applications of 'database technology' in computational fluid dynamics (CFD) and turbulence research. Direct numerical solution of the Navier-Stokes equations resolves all of the flow structures that influence turbulent flows. Still, in the case of Large Eddy Simulation, the Navier-Stokes equations are spatially filtered so that they are expressed in terms of the velocities of larger-scale structures. The rate of viscous dissipation is quantified by modelling the shear stress, and this process can lead to inaccuracies. A means of rapid testing and evaluation of models is therefore required, and this involves working with large data sets. The contribution of this work is the development of a computational platform that allows LES models to be dynamically loaded and rapidly evaluated against DNS data. An idea permeating the methodology is that a core is defined that contains the 'know-how' associated with accessing and manipulating data, and which operates independently of a plug-in. The thesis presents an example that demonstrates how users can examine the accuracy of LES models and obtain results almost instantaneously. Such methods allow engineers or scientists to propose their own LES models and implement them as a plug-in with only a few lines of code. We have demonstrated how it can be done by converting the Smagorinsky model to a plug-in to be used on our platform.
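As background for the Smagorinsky plug-in mentioned above, the standard Smagorinsky closure computes the subgrid eddy viscosity from the resolved strain rate (standard formulation, stated here for orientation rather than quoted from the thesis):

```latex
\nu_t \;=\; (C_s \Delta)^2\,\lvert \bar{S} \rvert, \qquad
\lvert \bar{S} \rvert \;=\; \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} \;=\; \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right),
```

where Δ is the filter width and C_s the Smagorinsky constant; a plug-in on the platform described above essentially supplies this map from the resolved velocity field to ν_t.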
APA, Harvard, Vancouver, ISO, and other styles
40

Rowland, Eric Samuel. "Experimental methods applied to the computation of integer sequences." 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000051398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Morris, Brian Cleon. "Variational study of interstellar magnetic gas clouds: Theory, modeling, and computation." 1991. https://scholarworks.umass.edu/dissertations/AAI9132888.

Full text
Abstract:
Herein are described some model problems, means of solution, and some properties of solutions for the equilibrium of self-gravitating isothermal gases in interstellar configurations with a magnetic field. The approach is from the viewpoint of the calculus of variations, with flux-freezing being modelled as well as flux loss through partial ionization. In this work such a treatment is presented for the first time, incorporating aspects of the physical problem as previously studied by Woltjer, Mouschovias, and others, and presenting a new application of recently developed variational methods, extending their previous applications from fluid dynamics and terrestrial plasma problems to the present situation. In this approach the problem is formulated and solved as a non-linear, free-boundary problem in variational form with linear and non-linear constraints. The full extent of the matter is considered, from model construction, through construction of solutions to a dimensionless PDE, to interpretation of the results and their physical and mathematical meaning. Computational methods for calculating the solutions are applied. The construction and justification of this solution method forms the basis for a constructive proof of the existence of solutions. A foundation is prepared for complete analytical investigation of the model or prototype problem, as well as for computational investigation of important realistic physical situations.
APA, Harvard, Vancouver, ISO, and other styles
42

Huang, Guangyan. "Semantics orientated spatial temporal data mining for water resource decision support." Thesis, 2011. https://vuir.vu.edu.au/18971/.

Full text
Abstract:
Water resource management is becoming more complex and relies heavily on computer software to process queries for common and rare patterns when analyzing critical water events. For example, it is vital for decision makers to know whether certain types of water quality problems are isolated (i.e. rare) or ubiquitous (i.e. common) and whether the conditions are changing spatially or temporally, so that a proper management plan can be made. This thesis aims to automatically detect spatiotemporal common and rare patterns by directly addressing the uncertainty and heterogeneity in water quality data, in order to enhance the accuracy and efficiency of the common and rare pattern mining models underpinning many water resource management strategies and planning decisions. We therefore propose two novel semantics-oriented mining methods: the Correcting Imprecise Readings and Compressing Excrescent Points (CIRCE) method and the Exceptional Object Analysis for Finding Rare Environmental Events (EOAFREE) method. The CIRCE method resolves uncertainty problems in retrieving common patterns based on spatiotemporal semantic points, such as inflexions. The EOAFREE method tackles the heterogeneity problem by summarizing raw water data into a water quality index, that is, water semantics, in order to discover rare patterns. We demonstrate the efficiency and effectiveness of the two methods using simulated and real world datasets, and then implement them in a Semantics-Oriented Mining Application for Detecting Water Quality Events (SOMAwater) prototype system, which is used to query spatiotemporal common and rare patterns in a real world water quality dataset of 93 sites in 10 river basins in Victoria, Australia from 1975 to 2010.
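As a rough illustration of the idea of retaining only spatiotemporal semantic points such as inflexions, the toy function below keeps the endpoints and turning points of a series; it is written for this listing and is not the CIRCE algorithm itself:

```python
def keep_turning_points(series):
    """Keep endpoints plus points where the series changes direction."""
    n = len(series)
    if n < 3:
        return list(enumerate(series))
    kept = [(0, series[0])]
    for i in range(1, n - 1):
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            kept.append((i, series[i]))        # slope changes sign: a turning point
    kept.append((n - 1, series[n - 1]))
    return kept

print(keep_turning_points([1.0, 2.0, 3.0, 2.5, 2.5, 4.0, 1.0]))
# -> [(0, 1.0), (2, 3.0), (5, 4.0), (6, 1.0)]
```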
APA, Harvard, Vancouver, ISO, and other styles
43

Teng, Luyao. "Research on Joint Sparse Representation Learning Approaches." Thesis, 2019. https://vuir.vu.edu.au/40024/.

Full text
Abstract:
Dimensionality reduction techniques such as feature extraction and feature selection are critical tools in artificial intelligence, machine learning and pattern recognition tasks. Previous studies of dimensionality reduction share three common problems: 1) The conventional techniques are disturbed by noisy data. In the context of determining useful features, noise may have adverse effects on the result. Given that noise is inevitable, it is essential for dimensionality reduction techniques to be robust to it. 2) The conventional techniques separate the graph learning step from the determination of informative features. These techniques construct a data structure graph first, and then keep the graph unchanged while performing feature extraction or feature selection. Hence, the result of feature extraction or feature selection depends strongly on the graph constructed. 3) The conventional techniques determine the intrinsic structure of the data with a less systematic, partial analysis. They preserve either the global structure or the local manifold structure of the data. As a result, it becomes difficult for one technique to achieve good performance across different datasets. We propose three learning models that overcome the aforementioned problems for various tasks under different learning environments. Specifically, our research outcomes are as follows: 1) We propose a novel learning model that combines Sparse Representation (SR) and Locality Preserving Projection (LPP), named Joint Sparse Representation and Locality Preserving Projection for Feature Extraction (JSRLPP), to extract informative features in an unsupervised learning environment. JSRLPP performs feature extraction and data structure learning simultaneously, and is able to capture both the global and local structure of the data. The sparse matrix in the model operates directly on different types of noise. We conduct comprehensive experiments and confirm that the proposed learning model performs impressively compared with state-of-the-art approaches. 2) We propose a novel learning model that combines SR and Data Residual Relationships (DRR), named Unsupervised Feature Selection with Adaptive Residual Preserving (UFSARP), to select informative features in an unsupervised learning environment. This model not only reduces the disturbance caused by different types of noise, but also effectively enforces similar samples to have similar reconstruction residuals. Moreover, the model carries out graph construction and feature determination simultaneously. Experimental results show that the proposed framework improves the effectiveness of feature selection. 3) We propose a novel learning model that combines SR and Low-rank Representation (LRR), named Sparse Representation based Classifier with Low-rank Constraint (SRCLC), to extract informative features in a supervised learning environment. In this model, the Low-rank Constraint (LRC) regularises both the within-class structure and the between-class structure, while the sparse matrix handles noise and irrelevant features. With extensive experiments, we confirm that SRCLC achieves impressive improvements over other approaches. To sum up, with the purpose of obtaining an appropriate feature subset, we propose three novel learning models in supervised and unsupervised settings to complete the tasks of feature extraction and feature selection respectively.
Comprehensive experimental results on public databases demonstrate that our models outperform state-of-the-art approaches.
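To indicate how a "joint" sparse-representation and locality-preserving objective can be written, the display below gives one generic form (our own schematic, under the assumption of a self-representation matrix W and a projection P; the exact JSRLPP, UFSARP and SRCLC objectives are defined in the thesis and may differ):

```latex
\min_{P,\,W}\;\; \|X - XW\|_F^2 \;+\; \lambda \,\|W\|_1
\;+\; \gamma \,\operatorname{tr}\!\bigl(P^{\top} X L_W X^{\top} P\bigr)
\quad \text{s.t. } P^{\top} P = I,
```

where X is the data matrix, the l1 term enforces sparsity (and absorbs noise), and L_W is the graph Laplacian built from the representation W itself, so that graph learning and feature determination are coupled rather than carried out in separate stages.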
APA, Harvard, Vancouver, ISO, and other styles
44

Mustafa, Abdul K. "Signal conditioning for high efficiency : wireless transmission." Thesis, 2010. https://vuir.vu.edu.au/24556/.

Full text
Abstract:
Fourth generation (4G) mobile communication systems will need wider bandwidth channels and improved spectrum efficiency to achieve the LTE-Advanced target transmission rates of 100 Mbps (mobile) and 1 Gbps (fixed). The next generation of wireless basestations will also need to be powered from renewable sources, particularly in developing countries. A new generation of components, circuits, algorithms and transmission structures will therefore be required to meet the wider bandwidth and lower energy requirements. This thesis addresses the transmitter chain, which dominates the basestation power budget. In particular, we consider pre-conditioning algorithms for a new generation of high efficiency radio frequency power amplifiers (RFPA).
APA, Harvard, Vancouver, ISO, and other styles
45

Pan, Jie. "Variational inequalities in the modelling and computation of spatial economic equilibria: Structural reformulations and the method of multipliers." 1992. https://scholarworks.umass.edu/dissertations/AAI9233126.

Full text
Abstract:
Variational inequalities have been used to study problems involving partial differential equations with unilateral constraints, such as free-boundary problems. They have also gained much recent interest in the field of operations research, particularly in the study of competitive equilibrium problems. The main focus of this work is to develop efficient algorithms for the computation of large-scale economic equilibria under weaker conditions than those considered previously. The prototype that we use in the analysis is the spatial market equilibrium system with direct price functions. We take advantage of the special structure of the variational inequalities and reformulate the problems, via a dual approach of Mosco and a linear algebra argument, as multivalued equations involving two maximal monotone operators. We then apply a relaxed proximal point method with variable parameters to the new formulation. In finite dimensions, we prove that the splitting sequences so generated converge to the equilibrium and the Lagrange multipliers, respectively. We also develop variational inequality formulations for migration networks and for spatial market systems with goal constraints. Based on the given economic equilibrium conditions, we establish the corresponding variational inequality formulations. In the second case, we provide a direct equivalence proof that is motivated by the governing economic conditions. Essentially, we establish that the economic conditions are the dual forms of the corresponding variational inequalities. By applying the theory of variational inequalities, we then study the qualitative properties of these spatial equilibrium systems. In particular, we show the existence and uniqueness of the equilibrium in each case, assuming some monotonicity conditions that can be interpreted economically. We then apply the above numerical scheme to the variational inequality formulations of the spatial equilibrium systems. As a result, we obtain a class of methods of multipliers for the computation of the studied economic equilibria. An important feature of the methods so derived is that they require only monotonicity, instead of strong monotonicity, of the supply price and demand price functions; they still require strong monotonicity of the transaction cost functions. Finally, since they are splitting algorithms, they are suitable for decomposing large-scale problems. With the sequence of penalty parameters set properly, each split part can then be computed sequentially or in parallel.
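For reference, the finite-dimensional variational inequality that underlies such spatial equilibrium models has the standard form shown below (a generic statement; the specific feasible set K and mapping F, which collects the supply price, demand price, and transaction cost functions, are defined in the dissertation itself):

```latex
\text{find } x^{*} \in K \subseteq \mathbb{R}^{n} \text{ such that}\quad
\bigl\langle F(x^{*}),\, x - x^{*} \bigr\rangle \;\ge\; 0
\qquad \text{for all } x \in K .
```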
APA, Harvard, Vancouver, ISO, and other styles
46

Loo, Clinton. "Settling Time Reducibility Orderings." Thesis, 2010. http://hdl.handle.net/10012/5101.

Full text
Abstract:
It is known that orderings can be formed with settling time domination and strong settling time domination as relations on c.e. sets. However, it has been shown that no such ordering can be formed when considering computation time domination as a relation on $n$-c.e. sets where $n \geq 3$. This will be extended to the case of $2$-c.e. sets, showing that no ordering can be derived from computation time domination on $n$-c.e. sets when $n\geq 2$. Additionally, we will observe properties of the orderings given by settling time domination and strong settling time domination on c.e. sets, respectively denoted as $\mathcal{E}_{st}$ and $\mathcal{E}_{sst}$. More specifically, it is already known that any countable partial ordering can be embedded into $\mathcal{E}_{st}$ and any linear ordering with no infinite ascending chains can be embedded into $\mathcal{E}_{sst}$. Continuing along this line, we will show that any finite partial ordering can be embedded into $\mathcal{E}_{sst}$.
APA, Harvard, Vancouver, ISO, and other styles
47

(6636218), Luke N. Veldt. "Optimization Frameworks for Graph Clustering." Thesis, 2019.

Find full text
Abstract:
In graph theory and network analysis, communities or clusters are sets of nodes in a graph that share many internal connections with each other, but are only sparsely connected to nodes outside the set. Graph clustering, the computational task of detecting these communities, has been studied extensively due to its widespread applications and its theoretical richness as a mathematical problem. This thesis presents novel optimization tools for addressing two major challenges associated with graph clustering.
The first major challenge is that there already exists a plethora of algorithms and objective functions for graph clustering. The relationship between different methods is often unclear, and it can be very difficult to determine in practice which approach is the best to use for a specific application. To address this challenge, we introduce a generalized discrete optimization framework for graph clustering called LambdaCC, which relies on a single tunable parameter. The value of this parameter controls the balance between the internal density and external sparsity of clusters that are formed by optimizing an underlying objective function. LambdaCC unifies the landscape of graph clustering techniques, as a large number of previously developed approaches can be recovered as special cases for a fixed value of the LambdaCC input parameter.
The second major challenge of graph clustering is the computational intractability of finding the best way to cluster a graph with respect to a given NP-hard objective function. To address this intractability, we present new optimization tools and results which apply to LambdaCC as well as a broader class of graph clustering problems. We develop polynomial-time approximation algorithms for LambdaCC and other, more general clustering objectives. In particular, we show how to obtain a polynomial-time 2-approximation for cluster deletion, which improves upon the previous best approximation factor of 3. We also present a new optimization framework for solving convex relaxations of NP-hard graph clustering problems, which are frequently used in the design of approximation algorithms. Finally, we develop a new framework for efficiently setting tunable parameters for graph clustering objective functions, so that practitioners can work with graph clustering techniques that are especially well suited to their application.
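As a concrete reading of the tunable objective described above, the sketch below evaluates what we understand to be the standard-weighted LambdaCC disagreement score of a candidate clustering: each cut edge costs 1 - lambda and each non-edge placed inside a cluster costs lambda. It is an illustration based on the abstract, not code from the thesis:

```python
from itertools import combinations

def lambda_cc_objective(nodes, edges, clustering, lam):
    """Disagreement score of a clustering under the (standard-weighted) LambdaCC objective.

    nodes: iterable of node ids
    edges: set of frozensets {u, v} giving the undirected edges
    clustering: dict mapping each node to a cluster label
    lam: the tunable parameter, 0 < lam < 1
    """
    cost = 0.0
    for u, v in combinations(list(nodes), 2):
        same_cluster = clustering[u] == clustering[v]
        if frozenset((u, v)) in edges:
            if not same_cluster:
                cost += 1.0 - lam        # an edge cut between clusters
        elif same_cluster:
            cost += lam                  # a missing edge inside a cluster
    return cost

# tiny example: a triangle {a, b, c} plus a pendant node d attached to c
nodes = ["a", "b", "c", "d"]
edges = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]}
print(lambda_cc_objective(nodes, edges, {"a": 0, "b": 0, "c": 0, "d": 1}, lam=0.4))
```

Small values of lam make missing internal edges cheap and cut edges expensive, favouring large clusters; large values do the opposite, favouring small, dense clusters.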
APA, Harvard, Vancouver, ISO, and other styles
48

Sivan, Dmitri D. "Design and structural modifications of vibratory systems to achieve prescribed modal spectra / Dmitri D. Sivan." Thesis, 1997. http://hdl.handle.net/2440/18916.

Full text
Abstract:
Bibliography: leaves 184-192.
xii, 198 leaves : ill. ; 30 cm.
This thesis reports on problems associated with the design and structural modification of vibratory systems. Several common problems encountered in practical engineering applications are described, and novel strategies for solving these problems are proposed. Mathematical formulations of these problems are generated, and solution methods are developed.
Thesis (Ph.D.)--University of Adelaide, Dept. of Mechanical Engineering, 1997
APA, Harvard, Vancouver, ISO, and other styles
49

(5929862), Xuejiao Kang. "Fault Tolerance in Linear Algebraic Methods using Erasure Coded Computations." Thesis, 2019.

Find full text
Abstract:

As parallel and distributed systems scale to hundreds of thousands of cores and beyond, fault tolerance becomes increasingly important -- particularly on systems with limited I/O capacity and bandwidth. Error correcting codes (ECCs) are used in communication systems, where errors arise when bits in a message are silently corrupted. Error correcting codes can detect and correct erroneous bits. Erasure codes, an instance of error correcting codes that deal with data erasures, are widely used in storage systems. An erasure code adds redundancy to the data to tolerate erasures.
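For context, the snippet below is a minimal, generic illustration of the storage-style erasure coding just described: k data blocks plus a single parity block (their XOR), from which any one erased block can be rebuilt. It is not the coding scheme developed in this thesis:

```python
import numpy as np

def encode(blocks):
    """Append one parity block (bitwise XOR of the k data blocks)."""
    parity = np.bitwise_xor.reduce(np.stack(blocks), axis=0)
    return list(blocks) + [parity]

def recover(coded_blocks, lost_index):
    """Rebuild a single erased block from the surviving blocks."""
    survivors = [b for i, b in enumerate(coded_blocks) if i != lost_index]
    return np.bitwise_xor.reduce(np.stack(survivors), axis=0)

data = [np.random.randint(0, 256, size=8, dtype=np.uint8) for _ in range(4)]
coded = encode(data)
assert np.array_equal(recover(coded, 2), data[2])   # a lost data block is restored
```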


In this thesis, erasure coded computations are proposed as a novel approach to dealing with processor faults in parallel and distributed systems. We first give a brief review of traditional fault tolerance methods, error correcting codes, and erasure coded storage. The benefits and challenges of erasure coded computations with respect to coding scheme, fault models and system support are also presented.


In the first part of my thesis, I demonstrate the novel concept of erasure coded computations for linear system solvers. Erasure coding augments a given problem instance with redundant data. This augmented problem is executed in a fault oblivious manner in a faulty parallel environment. In the event of faults, we show how we can compute the true solution from potentially fault-prone solutions using a computationally inexpensive procedure. The results on diverse linear systems show that our technique has several important advantages: (i) as the hardware platform scales in size and in number of faults, our scheme yields increasing improvement in resource utilization, compared to traditional schemes; (ii) the proposed scheme is easy to code as the core algorithm remains the same; (iii) the general scheme is flexible to accommodate a range of computation and communication trade-offs.
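The general idea of protecting a linear solve with redundant, coded equations can be sketched as follows. This toy NumPy example (our own illustration; the coding matrix E, the fault model, and the least-squares recovery are assumptions made here, not the specific scheme of the thesis) augments Ax = b with k coded rows so that the solution survives the loss of up to k original rows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2                                         # n unknowns, tolerate up to k lost rows
A = rng.standard_normal((n, n)) + n * np.eye(n)     # a well-conditioned test matrix
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

E = rng.standard_normal((k, n))                     # coding matrix
A_aug = np.vstack([A, E @ A])                       # redundant, coded equations
b_aug = np.concatenate([b, E @ b])

lost = [1, 5]                                       # rows "owned" by failed processors
keep = [i for i in range(n + k) if i not in lost]
x_rec = np.linalg.lstsq(A_aug[keep], b_aug[keep], rcond=None)[0]

assert np.allclose(x_rec, x_true)                   # the true solution is recovered
```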


We propose a new coding scheme for augmenting the input matrix that satisfies the recovery equations of erasure coding with high probability in the event of random failures. This coding scheme also minimizes fill (non-zero elements introduced by the coding block), while being amenable to efficient partitioning across processing nodes. Our experimental results show that the scheme adds minimal overhead for fault tolerance, yields excellent parallel efficiency and scalability, and is robust to different fault arrival models and fault rates.


Building on these results, we show how we can minimize, to optimality, the overhead associated with our problem augmentation techniques for linear system solvers. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point during execution, we only solve a system of the same size as the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, as well as adding only the minimal amount of computation needed to tolerate the observed faults. We present, in detail, the augmentation process, the parallel formulation, and the performance of our method. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal overhead in terms of FLOP counts with respect to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance.


Based on the promising results for linear system solvers, we apply the concept of erasure coded computation to eigenvalue problems, which arise in many applications including machine learning and scientific simulations. Erasure coded computation is used to design a fault tolerant eigenvalue solver. The original eigenvalue problem is reformulated into a generalized eigenvalue problem defined on appropriate augmented matrices. We present the augmentation scheme, the necessary conditions for augmentation blocks, and the proofs of equivalence of the original eigenvalue problem and the reformulated generalized eigenvalue problem. Finally, we show how the eigenvalues can be derived from the augmented system in the event of faults.


We present detailed experiments, which demonstrate the excellent convergence properties of our fault tolerant TraceMin eigensolver in the average case. In the worst case, where the row-column pairs that have the most impact on the eigenvalues are erased, we present a novel scheme that computes the augmentation blocks as the computation proceeds, using estimates of the leverage scores of row-column pairs as they are computed by the iterative process. We demonstrate low overhead, excellent scalability in terms of the number of faults, and robustness to different fault arrival models and fault rates for our method.


In summary, this thesis presents a novel approach to fault tolerance based on erasure coded computations, demonstrates it in the context of important linear algebra kernels, and validates its performance on a diverse set of problems on scalable parallel computing platforms. As parallel systems scale to hundreds of thousands of processing cores and beyond, these techniques present the most scalable fault tolerant mechanisms currently available.


APA, Harvard, Vancouver, ISO, and other styles
50

(11186139), Benjamin D. Harsha. "Modeling Rational Adversaries: Predicting Behavior and Developing Deterrents." Thesis, 2021.

Find full text
Abstract:
In the field of cybersecurity, it is often not possible to construct systems that are resistant to all attacks. For example, even a well-designed password authentication system will be vulnerable to password cracking attacks because users tend to select low-entropy passwords. In the field of cryptography, we often model attackers as powerful and malicious and say that a system is broken if any such attacker can violate the desired security properties. While this approach is useful in some settings, such a high bar is unachievable in many security applications, e.g., password authentication. However, even when the system is imperfectly secure, it may be possible to deter a rational attacker who seeks to maximize their utility. In particular, if a rational adversary finds that the cost of running an attack is higher than their expected rewards, they will not run that particular attack. In this dissertation we argue in support of the following statement: modeling adversaries as rational actors can be used to better model the security of imperfect systems and to develop stronger defenses. We present several results in support of this thesis. First, we develop models for the behavior of rational adversaries in the context of password cracking and quantum key-recovery attacks. These models allow us to quantify the damage caused by password breaches, quantify the damage caused by (widespread) password length leakage, and identify imperfectly secure settings where a rational adversary is unlikely to run any attacks, i.e., quantum key-recovery attacks. Second, we develop several tools to deter rational attackers by ensuring the utility-optimizing attack is either less severe or nonexistent. Specifically, we develop tools that increase the cost of offline password cracking attacks by strengthening password hashing algorithms, strategically signaling user password strength, and using dedicated Application-Specific Integrated Circuits (ASICs) to store passwords.
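To illustrate the style of utility-based reasoning described above, the sketch below (our own toy model, not code or parameters from the dissertation) has an offline password-cracking adversary guess passwords in decreasing order of probability and stop as soon as the marginal expected reward of the next guess no longer covers its marginal cost:

```python
def rational_guessing_budget(guess_probs, value_per_crack, cost_per_guess):
    """Number of guesses a utility-maximizing offline attacker makes per account.

    guess_probs: password probabilities sorted in decreasing order (the guessing order);
    value_per_crack and cost_per_guess are expressed in the same monetary units.
    """
    budget = 0
    for p in guess_probs:
        if p * value_per_crack >= cost_per_guess:   # marginal reward still covers marginal cost
            budget += 1
        else:
            break
    return budget

# toy distribution: a few popular passwords followed by a long uniform tail
probs = [0.05, 0.02, 0.01] + [1e-6] * 1_000_000
print(rational_guessing_budget(probs, value_per_crack=30.0, cost_per_guess=1e-4))   # -> 3
```

Raising cost_per_guess, for instance with a stronger and more expensive password hash, shrinks the attacker's optimal guessing budget, which is the deterrence effect the tools in the second part of the abstract aim for.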
APA, Harvard, Vancouver, ISO, and other styles
