Dissertations / Theses on the topic 'Theory and Models'


Consult the top 50 dissertations / theses for your research on the topic 'Theory and Models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Calhoun, Grayson Ford. "Limit theory for overfit models." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3359804.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed July 23, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 104-109).
APA, Harvard, Vancouver, ISO, and other styles
2

McCloud, Nadine. "Model misspecification theory and applications /." Diss., Online access via UMI:, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Elgueta, Montó Raimon. "Algebraic model theory for languages without equality." Doctoral thesis, Universitat de Barcelona, 1994. http://hdl.handle.net/10803/21799.

Full text
Abstract:
In our opinion, it is fair to distinguish two separate branches in the origins of model theory. The first one, the model theory of first-order logic, can be traced back to the pioneering work of L. Löwenheim, T. Skolem, K. Gödel, A. Tarski and A. I. Mal'cev, published before the mid 1930s. This branch was pushed forward during the 1940s and 1950s by several authors, including A. Tarski, L. Henkin, A. Robinson and J. Los. Their contribution, however, was strongly influenced by modern algebra, a discipline whose development was truly rapid at the time. Largely due to this influence, it was common usage among these authors to regard the equality symbol as belonging to the language. Even when, a few years later, the algebraic methods started to be supplanted to a large extent by the set-theoretical techniques that mark present-day theory, the treatment of equality as a constant of the language persisted. The second branch is the model theory of equational logic. It was born with the seminal work of G. Birkhoff, which established the first basic tools and results of what later developed into the part of universal algebra known as the theory of varieties and quasivarieties. The algebraic character of this other branch of model theory was clearer and stronger, for it simply emerged as the last step in the continuous process of abstraction in algebra. Between these two branches of model theory, both of which were growing rapidly at the time, there appeared the work done by Mal'cev in the old Soviet Union between the early 1950s and the late 1960s, which had some influence on the future development of the discipline. During that period, he developed a first-order model theory that retained much of the algebraic spirit of the early period and diverged openly from the model theory developed in the West.
In particular, he put forward the model theory of universal Horn logic with equality along the lines of Birkhoff's theory of varieties, and showed that such a logic forms a right setting for a large part of universal algebra, including the theory of presentations and free structures. The most noteworthy peculiarities of Mal'cev's program were the following: first, he kept on dealing with first-order languages with equality; second, he adopted notions of homomorphism and congruence that had little to do with the relational part of the language. This well-rooted tradition of developing model theory in the presence of an equality symbol to express the identity relation, which goes back to its very origin, was finally broken when logicians from the Polish School started a program similar to that of Mal'cev for another type of UHL, viz. general sentential logic. Indeed, in spite of the fact that the algebraic character of sentential logic was evident early in its development (chiefly because classical sentential calculus could be completely reduced to the quasi-equational theory of Boolean algebras), the natural models of arbitrary sentential calculi quickly took the form of logical matrices, that is, algebras endowed with a unary relation on their universe. Matrix semantics thus became the first attempt at a systematic development of a model theory for first-order languages without equality. Beginning with the publication of a paper by Los in 1949, matrix semantics was successfully developed over the next three decades by a number of different authors in Poland, including J. Los himself, R. Suszko, R. Wojcicki and J. Zygmunt. The present evolution of these issues points towards an effort to encompass the theory of varieties and quasivarieties and the model theory of sentential logic, by means of the development of a program similar to Mal'cev's for UHL without equality.
We recognize that this evolution has been fast and notable in the last decade, thanks mainly to the work done by J. Czelakowski, W. Blok and D. Pigozzi, among others. For example, the first author has been developing a model theory of sentential logic that inherits much of the algebraic character of the theories of Mal'cev and Birkhoff. On the other hand, Blok and Pigozzi, in a paper published in 1992, succeeded in developing a model theory, based on the Leibniz operator introduced by them, that comprises for the first time both equational logic and sentential logic, and so strengthens Czelakowski's program. What enables such a simultaneous treatment in their approach is the observation that equational logic can be viewed as an example of a 2-dimensional sentential calculus and thus admits a matrix semantics, a matrix this time being an algebra together with a congruence on the algebra. A characteristic of decisive importance in Blok and Pigozzi's approach is their apparent conviction that only reduced models really possess the algebraic character of the models of quasi-equational theories. We give up such a conviction, as well as the restriction to particular types of languages. The main purpose of this paper is to outline some basic aspects of the model theory for first-order languages that definitively do not include the equality symbol, taking account of both the full and the reduced semantics. The theory is intended to follow the Mal'cev tradition as much as possible, by virtue of its pronounced algebraic character and because it mainly covers topics fairly well studied in universal algebra (that is the reason for attaching the term "algebraic" to our model theory). Most of the work, which extends to general languages and fairly clarifies some recent trends in algebraic logic, constitutes the foundations of a model theory of UHL without equality.
A significant number of the results in the paper run side by side with well-known results of either classical model theory or universal algebra; so we make an effort to highlight the concepts and techniques previously applied only in those contexts although, in some sense, they find a more general setting in ours. An outgrowth of the current interest in the model theory of UHL without equality is the emergence of several applications, mainly in algebraic logic and computer science. Therefore we also discuss the way the developed theory relates to algebraic logic. Actually, we maintain that our approach provides an appropriate context in which to investigate the availability of nice algebraic semantics, not only for the traditional deductive systems that arise in sentential logic, but also for some other types of deductive systems that are attracting increasing attention at present. The reason is that all of them admit an interpretation as universal Horn theories without equality. As we said before, the absence of a symbol in the language to denote the identity relation is central to this work. Traditionally, equality in classical model theory has had a representation in the formal language and has been understood in an absolute sense, i.e., for any interpretation of the language, the interest of model-theorists has been put on the relation according to which two members of the universe are the same, and on no other logical relation. We break with this tradition by introducing a weak form of equality predicate and not presupposing its formal representation by a symbol of the language. The main problem then consists, broadly speaking, in the investigation of the relationship between the features of this weaker equality in a given class of structures and the fulfilment of certain properties by this class.
This is not at all a recent treatment of equality; for instance, it underlies the old notion of Lindenbaum-Tarski algebra in the model theory of sentential logic, and more recent contributions to the study of algebraic semantics for logics. Our contribution amounts to no more than providing a broader framework for the investigation of this question in a domain of first-order logic, the universal Horn fragment. Several points stand out, for they govern our whole approach. First, the extended use we make of two unlike notions of homomorphism, whose difference lies in the importance each one attaches to relations; this is a distinction that no longer exists in universal algebra but does exist in classical model theory. Secondly, the availability of two distinct adequate semantics easily connected through an algebraic operation, which consists in factorizing the structures in such a way that the Leibniz equality and the usual identity relation coincide. We believe this double semantics is mainly responsible for the interest of the model theory for languages without equality as a research topic; in spite of their equivalence from a semantical point of view, the two semantics furnish several stimulating problems regarding their comparability from an algebraic perspective. Thirdly, the two extensions that the notion of congruence on an algebra admits when dealing with general structures over languages without equality, namely, as a special sort of binary relation associated to a structure, here called a congruence, and as the relational part of a structure, which is embodied in the concept of filter extension. Finally, and not for this reason less important, the nice algebraic description that our equality predicate has as the greatest of the congruences on a structure. This fact allows us to replace the fundamental (logical) concept of Leibniz equality by an entirely algebraic notion, and to put the main emphasis on algebraic methods.
Actually, it seems to us that other forms of equality without such a property hardly give rise to model theories that work out so beautifully. The work is organized in 10 chapters. The first three contain basic material that is essential to overcome the small inadequacies of some approaches to the topic formerly provided by other authors. Chapter 1 reviews some terminology and notation that will appear repeatedly thereafter, and presents some elementary notions and results of classical model theory that remain valid for languages without equality. Chapter 2 states and characterizes algebraically the fundamental concept of equality in the sense of Leibniz with which we deal throughout the paper. Finally, in Chapter 3 we discuss the semantical consequences of factorizing a structure by a congruence and show that first-order logic without equality has two complete semantics related by a reduction operator (Theorem 3.2.1). Right here we pose the central problem to which most of the subsequent work is devoted, i.e., the investigation of the algebraic properties that the full and reduced model classes of an elementary theory exhibit. Chapter 4 contains the first difficult results in the work. By a rather straightforward generalization of proofs known from classical model theory, we obtain Birkhoff-type characterizations of full classes axiomatized by certain sorts of first-order sentences without equality, and apply these results to derive analogous characterizations for the corresponding reduced classes. Chapter 5 is a central one; it examines the primary consequences of dealing with the relational part of a structure as the natural extension of congruences when passing from algebraic to general first-order languages without equality. A key observation here is that certain sets of structures form an algebraic complete lattice; it is proved that these classes are just the quasivarieties of structures.
The Leibniz operator is defined right here as a primary criterion to distinguish properties of the Leibniz equality in a class of models. Using this operator, a fundamental hierarchy of classes is introduced. Chapter 7 examines how the characterizations of reduced quasivarieties (relative varieties) obtained in Chapter 4 can be improved when we deal with the special types of classes introduced earlier. Chapters 6, 8 and 9 provide an explicit generalization of well-known results from universal algebra. Concretely, in Chapter 6 we present the main tools of subdirect representation theory for general first-order structures without equality. Chapter 8 deals with the existence of free structures, both in full and in reduced classes. This chapter also includes the investigation of a correspondence between (quasi)varieties and some lattice structures associated with the Herbrand structures, a correspondence that offers the possibility of turning the logical methods used in the theory of varieties and quasivarieties into purely algebraic ones (Theorems 8.3.3 and 8.3.6). In Chapter 9 we set out the problem of finding Mal'cev-type conditions for properties concerning posets of relative congruences or relative filter extensions of members of quasivarieties. Finally, Chapter 10 briefly discusses the relation between algebraic logic and the approach to model theory outlined in the previous chapters, providing some vindication for it. Of course, we cannot say whether this work will ultimately have a bearing on the resolution of any of the problems of algebraic logic, but we hope it can at least provide fresh insights into this exciting branch of logic.
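For orientation, the Leibniz equality referred to above admits the following standard formulation from algebraic logic (general background, not notation quoted from the thesis): two elements of a structure are Leibniz-equal when no atomic formula of the language distinguishes them, and this relation is the greatest congruence compatible with the relational part of the structure.

```latex
% Leibniz equality on a structure \mathbf{A} over a language without equality:
a \;\Omega^{\mathbf{A}}\; b
  \quad\Longleftrightarrow\quad
  \forall \varphi(x,\vec{z}) \text{ atomic},\;
  \forall \vec{c} \in A:\;
  \mathbf{A} \models \varphi(a,\vec{c}) \leftrightarrow \varphi(b,\vec{c}).
% \Omega^{\mathbf{A}} is the greatest congruence of \mathbf{A} compatible
% with the relations of \mathbf{A}.
```

Reducing a structure by this congruence is what connects the full semantics to the reduced semantics discussed in the abstract.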
APA, Harvard, Vancouver, ISO, and other styles
4

Toribio, Sherwin G. "Bayesian Model Checking Strategies for Dichotomous Item Response Theory Models." Bowling Green State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1150425606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Arnold, Wolfram Till. "Theory of electron localization in disordered systems /." view abstract or download file of text, 2000. http://wwwlib.umi.com/cr/uoregon/fullcit?p9986736.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2000.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 199-204). Also available for download via the World Wide Web; free to UO users.
APA, Harvard, Vancouver, ISO, and other styles
6

von Glehn, Tamara. "Polynomials and models of type theory." Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/254394.

Full text
Abstract:
This thesis studies the structure of categories of polynomials, the diagrams that represent polynomial functors. Specifically, we construct new models of intensional dependent type theory based on these categories. Firstly, we formalize the conceptual viewpoint that polynomials are built out of sums and products. Polynomial functors make sense in a category when there exist pseudomonads freely adding indexed sums and products to fibrations over the category, and a category of polynomials is obtained by adding sums to the opposite of the codomain fibration. A fibration with sums and products is essentially the structure defining a categorical model of dependent type theory. For such a model the base category of the fibration should also be identified with the fibre over the terminal object. Since adding sums does not preserve this property, we are led to consider a general method for building new models of type theory from old ones, by first performing a fibrewise construction and then extending the base. Applying this method to the polynomial construction, we show that given a fibration with sufficient structure modelling type theory, there is a new model in a category of polynomials. The key result is establishing that although the base category is not locally cartesian closed, this model has dependent product types. Finally, we investigate the properties of identity types in this model, and consider the link with functional interpretations in logic.
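For reference, the polynomial functors mentioned above have the following standard shape (general background, not notation taken from the thesis): in a category with enough structure, a map f : B → A induces the functor

```latex
P_f(X) \;=\; \sum_{a \in A} X^{B_a},
\qquad B_a \;=\; f^{-1}(a),
```

built as a composite of re-indexing along B → 1, dependent product along f, and dependent sum along A → 1; the thesis's "sums and products" viewpoint formalizes exactly this decomposition.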
APA, Harvard, Vancouver, ISO, and other styles
7

Paraskevopoulos, Ioannis. "Econometric models applied to production theory." Thesis, Queen Mary, University of London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bolton, Colin. "Models of nucleation : theory and application." Thesis, University of Nottingham, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Corner, Ann-Marie. "Circumplex models : theory, methodology and practice." Thesis, University of Exeter, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Boulier, Simon Pierre. "Extending type theory with syntactic models." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0110/document.

Full text
Abstract:
This thesis is about the metatheory of intuitionistic type theory. The systems considered are variants of Martin-Löf type theory or of the Calculus of Constructions, and we are interested in the coherence of those systems and in the independence of axioms with respect to them. The common theme of this thesis is the construction of syntactic models, which are models that reuse type theory to interpret type theory. In the first part, we introduce type theory via a minimal system and several possible extensions. In the second part, we introduce syntactic models given by program translation and give several examples. In the third part, we present Template-Coq, a plugin for metaprogramming in Coq, and demonstrate how to use it to implement some syntactic models directly. Last, we consider type theories with two equalities, one strict and one univalent, and propose a re-reading of the work of Coquand et al. and of Orton and Pitts on the cubical model by introducing the notion of degenerate fibrancy.
APA, Harvard, Vancouver, ISO, and other styles
11

Moss, Sean. "The dialectica models of type theory." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/280672.

Full text
Abstract:
This thesis studies some constructions for building new models of Martin-Löf type theory out of old. We refer to the main techniques as gluing and idempotent splitting. For each we give general conditions under which type constructors exist in the resulting model. These techniques are used to construct some examples of Dialectica models of type theory. The name is chosen by analogy with de Paiva's Dialectica categories, which semantically embody Gödel's Dialectica functional interpretation and its variants. This continues a programme initiated by von Glehn with the construction of the polynomial model of type theory. We complete the analogy between this model and Gödel's original Dialectica by using our techniques to construct a two-level version of this model, equipping the original objects with an extra layer of predicates. In order to do this we have to carefully build up the theory of finite sum types in a display map category. We construct two other notable models. The first is a model analogous to the Diller-Nahm variant, which requires a detailed study of biproducts in categories of algebras. To make clear the generalization from the categories studied by de Paiva, we illustrate the construction of the Diller-Nahm category in terms of gluing an indexed system of types together with a system of predicates. Following this we develop the general techniques needed for the type-theoretic case. The second notable model is analogous to the Dialectica category associated to the error monad as studied by Biering. This model has only weak dependent products. In order to get a model with full dependent products we use the idempotent splitting construction, which generalizes the Karoubi envelope of a category. Making sense of the Karoubi envelope in the type-theoretic case requires us to face up to issues of coherence in our models. 
We choose the route of making sure all of the constructions we use preserve strict coherence, rather than applying a general coherence theorem to produce a strict model afterwards. Our chosen method preserves more detailed information in the final model.
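For background, Gödel's Dialectica interpretation mentioned above assigns to each formula a prenex form with witnesses and challenges; the characteristic clause, and the one the categorical models reorganize, is implication (standard material, not quoted from the thesis). Writing A as ∃u∀x α(u,x) and B as ∃v∀y β(v,y):

```latex
(A \to B)_D \;=\;
\exists\, V, W\;\; \forall\, u, y\;
\bigl(\, \alpha\bigl(u,\, W(u,y)\bigr) \;\to\; \beta\bigl(V(u),\, y\bigr) \,\bigr)
```

The functionals V and W are exactly the morphism data of de Paiva's Dialectica categories, which is why gluing a layer of predicates onto the polynomial model recovers the original interpretation.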
APA, Harvard, Vancouver, ISO, and other styles
12

Dotti, V. "Multidimensional voting models : theory and applications." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1516004/.

Full text
Abstract:
In this thesis I study how electoral competition shapes the public policies implemented by democratic countries. In particular, I analyse the relationship between observable characteristics of the population of voters, such as the distribution of income and age, and relevant public policy outcomes of the political process. I focus on two theoretical issues that have proved difficult to tackle with existing voting models, namely multidimensionality of the policy space and non-convexity of voter preferences. I propose a new theoretical framework to deal with these issues. I employ this new framework to address three popular questions in the Political Economy literature for which a multidimensional policy space is deemed to be a crucial element to capture the underlying economic trade-offs. Specifically, I analyse (i) the relationship between income inequality and size of the government, (ii) the causal link between population ageing and the 'tightness' of immigration policies, and (iii) the role played by the income distribution in shaping public investment in education. I compare the predictions derived under the new theoretical tool with those that prevail in the existing literature. I show that the interaction among multiple endogenous policy dimensions helps to explain why several studies in the literature - in which the analysis is restricted to a unique endogenous policy choice - deliver empirically controversial or inconsistent predictions. For all three questions, the approach proposed in this thesis is shown to be helpful in reconciling the theoretical predictions with empirical evidence, and in identifying the economic channels that underpin the patterns observed in the data.
APA, Harvard, Vancouver, ISO, and other styles
13

Landgren, Filip. "Minimal Models in Conformal Field Theory." Thesis, Uppsala universitet, Teoretisk fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-416402.

Full text
Abstract:
This article reviews minimal models in conformal field theory (CFT). A two-dimensional CFT has an infinite-dimensional symmetry algebra, allowing the theory to be solved exactly with operator product expansions (OPEs). Minimal models are examples of such theories that are constructed from unitarity and have a finite number of primary fields. Different minimal models correspond to different two-dimensional statistical models, as the conformal weights of the primary fields correspond to the critical exponents. We derive the differential equations for the correlation functions of the primary fields, which result in constraints on the conformal weights called the fusion rules. The fusion rules arise by requiring the representation to be unitary, and they govern which fields will be present in the OPE of two primary fields. The two-dimensional critical Ising model is considered as an example, where the fusion rules are used to obtain the fields present and the OPEs are used to compute the correlators. This allows us to obtain the full dynamics of the system.
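As a concrete illustration of the fusion rules discussed above (a standard example from the CFT literature, not quoted from the thesis): the critical Ising model is the minimal model M(4,3), whose primary fields are the identity 1 (weight h = 0), the energy operator ε (h = 1/2), and the spin operator σ (h = 1/16), with fusion rules

```latex
\sigma \times \sigma \;=\; 1 + \epsilon, \qquad
\sigma \times \epsilon \;=\; \sigma, \qquad
\epsilon \times \epsilon \;=\; 1.
```

The first rule, for instance, says that the OPE of two spin operators contains only the identity and energy families, which fixes the operator content of the σσ correlators.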
APA, Harvard, Vancouver, ISO, and other styles
14

Han, Lin. "Graph generative models from information theory." Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/3726/.

Full text
Abstract:
Generative models are commonly used in statistical pattern recognition to describe the probability distributions of patterns in a vector space. In recent years, sustained by the wide range of mathematical tools available in vector space, many algorithms for constructing generative models have been developed. Compared with the advanced development of the generative model for vectors, the development of a generative model for graphs has had less progress. In this thesis, we aim to solve the problem of constructing the generative model for graphs using information theory. Given a set of sample graphs, the generative model for the graphs we aim to construct should be able to not only capture the structural variation of the sample graphs, but to also allow new graphs which share similar properties with the original graphs to be generated. In this thesis, we pose the problem of constructing a generative model for graphs as that of constructing a supergraph structure for the graphs. In Chapter 3, we describe a method of constructing a supergraph-based generative model given a set of sample graphs. By adopting the a posteriori probability developed in a graph matching problem, we obtain a probabilistic framework which measures the likelihood of the sample graphs, given the structure of the supergraph and the correspondence information between the nodes of the sample graphs and those of the supergraph. The supergraph we aim to obtain is one which maximizes the likelihood of the sample graphs. The supergraph is represented here by its adjacency matrix, and we develop a variant of the EM algorithm to locate the adjacency matrix that maximizes the likelihood of the sample graphs. Experimental evaluations demonstrate that the constructed supergraph performs well on classifying graphs. In Chapter 4, we aim to develop graph characterizations that can be used to measure the complexity of graphs. 
The first graph characterization developed is the von Neumann entropy of a graph associated with its normalized Laplacian matrix. This characterization is defined by the eigenvalues of the normalized Laplacian matrix, and therefore belongs to the family of graph-invariant characterizations. By applying some transformations, we also develop a simplified form of the von Neumann entropy, which can be expressed in terms of the node degree statistics of the graph. Experimental results reveal the effectiveness of the two graph characterizations. Our third contribution is presented in Chapter 5, where we use the graph characterization developed in Chapter 4 to measure supergraph complexity, and we develop a novel framework for learning a supergraph using the minimum description length criterion. We combine the Jensen-Shannon kernel with our supergraph construction, and this provides us with a way of measuring graph similarity. Moreover, we also develop a method of sampling new graphs from the supergraph. The supergraph we present in this chapter is a generative model which can fulfil the tasks of graph classification, graph clustering, and generating new graphs. We experiment with both the COIL and "Toy" datasets to illustrate the utility of our generative model. Finally, in Chapter 6, we propose a method of selecting prototype graphs of the most appropriate size from candidate prototypes. The method works by partitioning the sample graphs into two parts and approximating their hypothesis spaces using partition functions. From the partition functions, the mutual information between the two sets is defined. The prototype which gives the highest mutual information is selected.
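The von Neumann entropy characterization described above can be sketched numerically. The following is a minimal illustration, not the author's code; it assumes the common scaled-spectrum convention in which the normalized Laplacian divided by |V| plays the role of a density matrix (its trace equals |V| for graphs without isolated nodes), so the entropy is the Shannon entropy of its eigenvalues.

```python
import numpy as np

def von_neumann_entropy(A):
    """Von Neumann entropy of a graph from its normalized Laplacian spectrum.

    A: symmetric 0/1 adjacency matrix with no isolated nodes.
    """
    n = A.shape[0]
    d = A.sum(axis=1)
    inv_sqrt_d = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - inv_sqrt_d @ A @ inv_sqrt_d  # normalized Laplacian
    lam = np.linalg.eigvalsh(L) / n              # trace(L) = n, so this sums to 1
    lam = lam[lam > 1e-12]                       # convention: 0 * log 0 = 0
    return float(-(lam * np.log2(lam)).sum())

# Triangle graph: normalized Laplacian spectrum {0, 3/2, 3/2},
# scaled spectrum {0, 1/2, 1/2}, so the entropy is exactly 1 bit.
triangle = np.array([[0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]], dtype=float)
print(von_neumann_entropy(triangle))  # ≈ 1.0
```

The degree-based simplification mentioned in the abstract approximates this same quantity without an eigendecomposition, which is what makes the characterization cheap enough to use inside the supergraph learning loop.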
APA, Harvard, Vancouver, ISO, and other styles
15

Bianchi, Lorenzo. "Perturbation theory for string sigma models." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17439.

Full text
Abstract:
In this thesis we investigate quantum aspects of the Green-Schwarz superstring in various AdS backgrounds relevant for the AdS/CFT correspondence, providing several examples of perturbative computations in the corresponding integrable sigma-models. We start by reviewing in details the construction of the type IIB superstring action in AdS5 x S5 background defined as a supercoset sigma model, pointing out the limits of this procedure for backgrounds interesting in lower-dimensional examples of the gauge/gravity duality. We then consider the expansion about the BMN vacuum and the S-matrix for the scattering of worldsheet excitations. To evaluate its elements efficiently we develop a unitarity-based method for general massive two-dimensional field theories. We also analyze the AdS light-cone gauge fixed string in AdS4 x CP3 expanded around a “null cusp” vacuum. The free energy of this model is related to the cusp anomalous dimension of the gauge theory and, indirectly, to a non-trivial effective coupling entering all integrability-based calculations in AdS4/CFT3. We calculate corrections to the superstring partition function of the model, thus deriving the cusp anomalous dimension of ABJM theory at strong coupling up to two-loop order and giving support to a recent conjecture. Finally, we calculate at one-loop the dispersion relation of excitations about the GKP vacuum. Our successful application of unitarity-cut techniques on several examples supports the conjecture that S-matrices of two-dimensional integrable field theories are cut-constructible. Furthermore, our results provide valuable data in support of the quantum consistency of the string actions and furnish non-trivial stringent tests for the quantum integrability of the analyzed models.
APA, Harvard, Vancouver, ISO, and other styles
16

Bernardo, Heliudson de Oliveira. "Cosmological models from string theory setups /." São Paulo, 2019. http://hdl.handle.net/11449/183612.

Full text
Abstract:
Advisor: Horatiu Nastase
In this thesis we discuss three cosmological models that draw, directly or indirectly, on ideas from string theory. Following a general review of string cosmology, a summary of cosmology and string theory is presented, with emphasis on the fundamental theoretical concepts. We then describe how a chameleon coupling can potentially affect the predictions of single-field cosmic inflation, with a careful treatment of the adiabatic and entropy modes of cosmological perturbations. In addition, a new approach to T-duality in cosmological solutions of bosonic supergravity is discussed in the context of double field theory. Finally, we propose a new prescription for the holographic map in cosmology that can be used to connect fundamental models of holographic cosmology with other phenomenological approaches.
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
17

Dotti, Valerio. "Multidimensional voting models: theory and applications." Doctoral thesis, UCL Discovery, 2016. http://hdl.handle.net/10278/3742614.

Full text
Abstract:
In this thesis I study how electoral competition shapes the public policies implemented by democratic countries. In particular, I analyse the relationship between observable characteristics of the population of voters, such as the distribution of income and age, and relevant public policy outcomes of the political process. I focus on two theoretical issues that have proved difficult to tackle with existing voting models, namely multidimensionality of the policy space and non-convexity of voter preferences. I propose a new theoretical framework to deal with these issues. I employ this new framework to address three popular questions in the Political Economy literature for which a multidimensional policy space is deemed to be a crucial element to capture the underlying economic trade-offs. Specifically, I analyse (i) the relationship between income inequality and the size of government, (ii) the causal link between population ageing and the ’tightness’ of immigration policies, and (iii) the role played by the income distribution in shaping public investment in education. I compare the predictions derived under the new theoretical tool with those that prevail in the existing literature. I show that the interaction among multiple endogenous policy dimensions helps to explain why several studies in the literature - in which the analysis is restricted to a unique endogenous policy choice - deliver empirically controversial or inconsistent predictions. For all three questions, the approach proposed in this thesis is shown to be helpful in reconciling the theoretical predictions with empirical evidence, and in identifying the economic channels that underpin the patterns observed in the data.
APA, Harvard, Vancouver, ISO, and other styles
18

Godazgar, Mohammad Hadi. "Dualities in string theory." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Blair, Christopher David Andrew. "Duality and extended geometry in string theory and M-theory." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Caccavano, Adam. "Optics and Spectroscopy in Massive Electrodynamic Theory." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1485.

Full text
Abstract:
The kinematics and dynamics for plane wave optics are derived for a massive electrodynamic field by utilizing Proca's theory. Atomic spectroscopy is also examined, with the focus on the 21 cm radiation due to the hyperfine structure of hydrogen. The modifications to Snell's Law, the Fresnel formulas, and the 21 cm radiation are shown to reduce to the familiar expressions in the limit of zero photon mass.
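The reduction to the familiar massless expressions can already be seen at the level of the free-space Proca dispersion relation (a standard result, reproduced here for orientation):

```latex
% Proca dispersion relation for a photon of mass m_\gamma,
% with \mu \equiv m_\gamma c / \hbar:
\omega^2 = c^2 \left( k^2 + \mu^2 \right),
\qquad
v_{\mathrm{ph}} = \frac{\omega}{k}
  = \frac{c}{\sqrt{1 - \mu^2 c^2 / \omega^2}} .
% As m_\gamma \to 0 (i.e. \mu \to 0), v_ph -> c, and the massless
% kinematics underlying Snell's law and the Fresnel formulas
% are recovered.
```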
APA, Harvard, Vancouver, ISO, and other styles
21

Mercer, Karl John. "Identification of signal distortion models." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Stuk, Stephen Paul. "Multivariable systems theory for Lanchester type models." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/24171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Holan, Scott Harold. "Time series exponential models: theory and methods." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/431.

Full text
Abstract:
The exponential model of Bloomfield (1973) is becoming increasingly important due to its recent applications to long memory time series. However, this model has received little consideration in the context of short memory time series. Furthermore, there has been very little attempt to use the EXP model to analyze observed time series data. This dissertation research is largely focused on developing new methods to improve the utility and robustness of the EXP model. Specifically, a new nonparametric method of parameter estimation is developed using wavelets. The advantage of this method is that, for many spectra, the resulting parameter estimates are less susceptible to biases associated with methods of parameter estimation based directly on the raw periodogram. Additionally, several methods are developed for the validation of spectral models. These methods test the hypothesis that the estimated model provides a whitening transformation of the spectrum; this is equivalent to the time domain notion of producing a model whose residuals behave like the residuals of white noise. The results of simulation and real data analysis are presented to illustrate these methods.
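For orientation, Bloomfield's EXP model parameterizes the log-spectrum as a finite cosine series, log f(ω) = Σ_{j=0}^{p} θ_j cos(jω). A minimal baseline estimator, plain log-periodogram regression rather than the wavelet-based method the dissertation develops (the function name and defaults are ours), might look like:

```python
import numpy as np

def fit_exp_model(x, p):
    """Estimate EXP-model coefficients theta_0..theta_p by regressing
    the log-periodogram on cosine terms (illustrative baseline only)."""
    n = len(x)
    m = (n - 1) // 2
    # Periodogram at the Fourier frequencies omega_k = 2*pi*k/n, k = 1..m
    dft = np.fft.fft(x - x.mean())
    I = np.abs(dft[1 : m + 1]) ** 2 / (2 * np.pi * n)
    omega = 2 * np.pi * np.arange(1, m + 1) / n
    # Design matrix: cos(j*omega) for j = 0..p (j = 0 is the intercept);
    # the Euler-Mascheroni bias of log I is absorbed into theta_0.
    X = np.column_stack([np.cos(j * omega) for j in range(p + 1)])
    theta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return theta

# For white noise the spectrum is flat, so theta_1..theta_p should be near 0.
rng = np.random.default_rng(0)
theta = fit_exp_model(rng.standard_normal(1024), p=3)
```

Estimating from the raw log-periodogram in this way is exactly what introduces the biases the dissertation's wavelet-based method is designed to reduce.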
APA, Harvard, Vancouver, ISO, and other styles
24

Cheung, Elliot. "Birational models of geometric invariant theory quotients." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61278.

Full text
Abstract:
In this thesis, we study the problem of finding birational models of projective G-varieties with tame stabilizers. This is done with linearizations, so that each birational model may be considered as a (modular) compactification of an orbit space (of properly stable points). The main portion of the thesis is a re-working of a result in Kirwan's paper "Partial Desingularisations of Quotients of Nonsingular Varieties and their Betti Numbers", written in a purely algebro-geometric language. As such, the proofs are novel and require the Luna Slice Theorem as their primary tool. Chapter 1 is devoted to preliminary material on Geometric Invariant Theory and the Luna Slice Theorem. In Chapter 2, we present and prove a version of "Kirwan's procedure". This chapter concludes with an outline of some differences between the current thesis and Kirwan's original paper. In Chapter 3, we combine the results from Chapter 2 and a result from a paper by Reichstein and Youssin to provide another type of birational model with tame stabilizers (again, with respect to an original linearization).
Science, Faculty of
Mathematics, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
25

Weston, Robert Andrew. "Lattice field theory and statistical-mechanical models." Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Schwarz, Maike [Verfasser]. "Stochastic Models in Inventory Theory / Maike Schwarz." Aachen : Shaker, 2004. http://d-nb.info/1172610304/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Lázár, Emese. "Multi-state volatility models : theory and applications." Thesis, University of Reading, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bozek, Krzysztof. "Particle phenomenology from M theory inspired models." Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/particle-phenomenology-from-m-theory-inspired-models(f03bc0dc-3bda-4c6d-b3dc-dc4360f3f67b).html.

Full text
Abstract:
This thesis focuses on low-energy particle phenomenology arising from G2 compactifications of M theory. We construct a supersymmetric SO(10) model that can be naturally realised in this framework. An appropriate discrete symmetry combined with a symmetry-breaking Wilson line suppresses the μ-term and dangerous triplet–matter interactions at the compactification scale. Stabilised moduli reintroduce the forbidden terms, providing the μ-term with the phenomenologically expected value of O(TeV). In our model triplets are light, and the regenerated triplet interactions induce proton decay, but safely within experimental constraints. In order to restore gauge unification we introduce extra, light, vector-like matter multiplets that, together with the (unstable) lightest supersymmetric particle (LSP), can provide interesting experimental signatures. We also present a mechanism that generates high-scale vacuum expectation values (VEVs) for the scalar components of the right-handed neutrinos N of the vector-like pair, which further break the gauge symmetry to the Standard Model SU(3)C × SU(2)L × U(1)Y and can also induce the correct neutrino masses. The other significant part of the thesis is focused on the collider phenomenology of string/M theory inspired models. In particular, we study the prospects for electroweakino discovery at a proposed 100 TeV collider with a three-lepton plus missing transverse energy signature. We design simple but effective signal regions for this case and, using a simplified detector-level analysis, we evaluate the discovery reach and exclusion limits. Assuming 3000 fb−1 of integrated luminosity, W-inos could be discovered (excluded) up to 1.1 (1.8) TeV if the spectrum is not compressed.
APA, Harvard, Vancouver, ISO, and other styles
29

Sloan, Robert Hal. "Computational learning theory : new models and algorithms." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/38339.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989.
Includes bibliographical references (leaves 116-120).
by Robert Hal Sloan.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
30

Stratton, Robert James. "Automated theory selection using agent based models." Thesis, King's College London (University of London), 2015. http://kclpure.kcl.ac.uk/portal/en/theses/automated-theory-selection-using-agent-based-models(99e5b6ac-0134-4097-956a-05394e84d575).html.

Full text
Abstract:
Models are used as a tool for theory induction and decision making in many contexts, including complex and dynamic commercial environments. New technological and social developments — such as the increasing availability of real-time transactional data and the rising use of online social networks — create a trend towards modelling process automation, and a demand for models that can help decision making in the context of social interaction in the target process. There is often no obvious specification for the form that a particular model should take, and some kind of selection procedure is necessary that can evaluate the properties of a model and its associated theoretical implications. Automated theory selection has already proven successful for identifying model specifications in equation based modelling (EBM), but there has been little progress in developing automatic approaches to agent based model (ABM) selection. I analyse some of the automation methods currently used in EBM and consider what innovations would be required to create an automated ABM specification system. I then compare the effectiveness of simple automatically specified ABM and EBM approaches in selecting optimal strategies in a series of encounters between artificial corporations, mediated through a simulated market environment. I find that as the level of interaction increases, agent based models are more successful than equation based methods in identifying optimal decisions. I then propose a fuller framework for automated ABM model specification, based around an agent-centric theory representation which incorporates emergent features, a model-to-theory mapping protocol, a set of theory evaluation methods, a search procedure, and a simple recommendation system. I evaluate the approach using empirical data collected at two different levels of aggregation. 
Using macro level data, I derive a theory that represents the dynamics of an online social networking site, in which the data generating process involves interaction between users, and derive management recommendations. Then, using micro level data, I develop a model using individual-level transaction data and making use of existing statistical techniques — hidden Markov and multinomial discrete choice models. I find that the results at both micro and macro level offer insights in terms of understanding the interrelationship between exogenous factors, agent behaviours, and emergent features. From a quantitative perspective, the automated ABM approach shows small but consistent improvements in fit to the target empirical data compared with EBM approaches.
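The contrast drawn above between equation-based and agent-based modelling can be made concrete with a deliberately tiny agent-based adoption model (entirely illustrative; the names and parameters are ours, not the thesis's):

```python
import random

def simulate_adoption(n_agents=200, p_spont=0.01, p_peer=0.05,
                      steps=50, seed=1):
    """Toy ABM: each step a non-adopter adopts spontaneously with
    probability p_spont, or via a randomly sampled peer contact with
    probability p_peer if that peer has already adopted."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    curve = []  # cumulative number of adopters per step
    for _ in range(steps):
        for i in range(n_agents):
            if adopted[i]:
                continue
            if rng.random() < p_spont:
                adopted[i] = True
            elif rng.random() < p_peer and adopted[rng.randrange(n_agents)]:
                adopted[i] = True
        curve.append(sum(adopted))
    return curve

curve = simulate_adoption()
```

An equation-based counterpart would replace the per-agent loop with a single aggregate differential equation for the adoption fraction; the ABM keeps the individual-level interaction explicit, which is exactly what matters as the level of interaction in the target process increases.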
APA, Harvard, Vancouver, ISO, and other styles
31

Mazzoli, Mattia. "Human mobility: data analysis, theory and models." Doctoral thesis, Universitat de les Illes Balears, 2021. http://hdl.handle.net/10803/673530.

Full text
Abstract:
[eng] Like Columbus mistook America for India, we stepped into the era of misinformation mistaking it for the era of big data. Since the digital revolution in the early ’90s we have been producing such a huge amount of data that we no longer even know where to find our compass. However, not all this data is available and accessible. In this thesis, by navigating the seas of available, open and even purchased data aboard our knowledge of physics and complex systems, we try to draw some new routes and shortcuts to study human mobility in different contexts, scales and applications. We first introduce a simple method to treat Twitter data on the Venezuelan exodus to show how this data can consistently reproduce and uncover many different aspects of migration neglected until now. This method charts a safe route towards solving many more open questions not yet explored due to the limitations of classic and other sources of migration data in many parts of the world. The same type of data provides us with reliable footprints of human mobility, which lead us to an innovative shortcut in the way a specific type of urban mobility has been treated so far. By hoisting the sails of theoretical physics, we can add a field-theoretic description of commuting in worldwide cities, which simplifies the complexity of the description of urban mobility. By means of this new framework it is possible to tackle the well-known polycentricity of cities, drawing urban basins of attraction and reproducing them through a field-theoretic version of the gravity model. Following the same shortcut we discover something that had only been theorized so far in pedestrian dynamics: the navigation potential of evacuation. This potential has been used by many social-interaction models, which have been used to study new and better policies to avoid cloggings and stampedes during evacuation drills, hence creating safer protocols for our buildings and public spaces.
In the middle of our navigation, we suddenly bump into a new epidemic and perform a route change. By purchasing mobile-phone and smartphone location datasets to find our compass and cope with noisy epidemic records, we are able to uncover the so-called multi-seeding effect, which until now had been studied mostly theoretically. This allows us to backtrace and remap the epidemic spreading in Western Europe to specific epidemic hubs. By means of metapopulation models, we confirm our hypotheses on multi-seeding using different contact-network topologies. Our results make it possible to design efficient policies such as selective lockdowns and to better prepare the healthcare systems of areas that are more exposed to mobility from epidemic and mobility hubs. While the epidemic spreads in Europe, we spot the first cases on the American coasts. The same phenomenon we already saw can be observed at smaller scales in the United States, this time within cities at the neighborhood level. Here we need a high-resolution Google dataset to see that a city's mobility hierarchy leads the disease to spread faster than in sprawled urban areas. However, hierarchy also helps containment policies to suppress the disease, whereas the same restrictions are less effective in non-hierarchical metropolitan areas. Some cities are more sensitive to disease spreading, and they must be accurately monitored to keep the rest of the cities and the country from becoming involved. Finally, in order to suppress the disease it is very important to keep the virus from boarding long-range trips and infecting new places. By means of smartphone location records, we mimic the spreading of viruses at even finer scales inside the busiest airport in Europe: Heathrow, London. By modeling the implementation of a spatial immunization system we are able to strongly reduce the outbreaks within the airport and the number of infections exported abroad.
The same technique can be applied even in ordinary public buildings to create safer spaces for everyday life in the post-Covid era. In this thesis our philosophy is to always rely on empirical observations to design hypotheses, models and finally solutions. Thanks to the scientific method, we manage to solve complex problems in the field of human mobility with simple approaches and relatively big data. Most of the results presented in this thesis belong to published and submitted works [1–6].
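The field-theoretic treatment of commuting described above builds on the classic gravity model, in which the flow between locations i and j scales as T_ij ∝ m_i m_j / d_ij^γ. A baseline sketch of that classic form (our own illustration, not the thesis's field-theoretic version):

```python
import math

def gravity_flows(masses, coords, gamma=2.0):
    """Classic gravity model of mobility: flow T[i][j] proportional to
    m_i * m_j / d_ij**gamma, with zero flow from a location to itself."""
    n = len(masses)
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = math.dist(coords[i], coords[j])  # Euclidean distance
            T[i][j] = masses[i] * masses[j] / d ** gamma
    return T

# Three toy locations with populations 100, 50, 10.
T = gravity_flows([100, 50, 10], [(0, 0), (3, 4), (6, 8)])
```

In the field-theoretic version, such pairwise flows are replaced by a vector field over the city, whose basins of attraction delineate the polycentric structure.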
APA, Harvard, Vancouver, ISO, and other styles
32

Luu, Duy Hao. "Gradient Theory: Constitutive Models and Fatigue Criteria." Palaiseau, Ecole polytechnique, 2013. http://pastel.archives-ouvertes.fr/docs/00/86/60/81/PDF/These_Duy-Hao_LUU.pdf.

Full text
Abstract:
In the present thesis, two new classes of phenomenological models in the framework of the continuum thermodynamics and gradient theory are proposed. The first one is standard gradient constitutive model used to deal with the mechanical problems at micro-scale, and the other concerns gradient fatigue criteria for the problems at small scale. Using these, some common effects which are not captured yet in the classical mechanics but become significant at sufficiently small scales, are taken into account. For each class, the size and gradient effects which are the two effects most commonly discussed and very confused between each other in the literature, are clearly distinct and demonstrated to be integrated into the later via gradient terms. The thesis contains two principal contents presented in the part A and part B, respectively corresponding to the two new model classes. The following are their summary: Part A- Standard Gradient Constitutive Models: Application in Micro-Mechanics. A formulation of Standard Gradient Plasticity Models, based on an abundant researches on strain gradient plasticity (SPG) theory in the literature such as the ones of Q. S. Nguyen (2000, 2005, 2011 and 2012), is proposed and numerically implemented. The models are based on a global approach in the framework of continuum thermodynamics and generalized standard materials where the standard gradients of the internal parameters in the set of state variables are introduced. The governing equations for a solid are derived from an extended version of the virtual work equation (Frémond, 1985 or Gurtin, 1996). These equations can also be derived from the formalism of energy and dissipation potentials and appear as a generalized Biot equation for the solid. 
The gradient formulation established in such way is considered a higher-order extension of the local plasticity theory, with the introduction of the material characteristic length scale and the insulation boundary condition proposed by Polizzotto. The presence of strain gradient leads to a Laplacian equation and to non-standard boundary value problem with partial differential equations of higher order. A computational method, at the global level, based on diffusion like-problem spirit is used. Illustrations are given and applied to some typical problems in micro-mechanics to reproduce the well-known mechanical phenomenon, the effect "smaller is stronger". A good agreement between numerical results and reference counterparts is found; mesh-independence of numerical results is observed. Part B- Gradient Fatigue Criteria at Small Scale. A reformulation of gradient fatigue criteria is proposed in the context of multiaxial high-cycle fatigue (HCF) of metallic materials, initiated by Papadopoulos 1996. The notable dependence of fatigue limit on some common factors concerning the material specimen size is analysed and modeled. These factors, which are not taken into account before in classical fatigue criteria but become significant at sufficiently small scales, are included in the new formulation. Among such factors, three ones intimately related to each other, the pure size (smaller is stronger), stress gradient (higher gradient is higher resistance) and loading (i. E. Loading mode) effects, are here investigated. An effort has been made to roughly integrate all these effects into only one through gradient terms. According to that, a new class of fatigue criteria with stress gradient terms introduced not only in the normal stress but also in the shear stress parts, are formulated. 
Such a formulation makes it possible to capture the pure size effect (where important) and the stress gradient effect (where present), as well as to cover a wide range of loading effects (traction, bending and shearing, for instance). Thanks to this property, the new criteria are naturally generalized to multiaxial loadings, yielding a new version of stress-gradient-dependent multiaxial fatigue criteria. Application to some classical fatigue criteria such as Crossland and Dang Van is provided as illustration. As shown, the classical fatigue criteria, as well as that of Papadopoulos (1996), can be considered special cases of the corresponding new criteria. An overview of the whole thesis is given in this summary, and an overview of each model class is found in Chapter 1, where a general introduction to the thesis is given. The corresponding details are presented in Chapters 2-4 (for Part A) and Chapters 5-6 (for Part B). The last chapter, Chapter 7, is dedicated to general conclusions and perspectives.
APA, Harvard, Vancouver, ISO, and other styles
33

McCarthy, Ian M. "Theory and applications of consumer search models." [Bloomington, Ind.] : Indiana University, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3319837.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Economics, 2008.
Title from PDF t.p. (viewed on May 8, 2009). Source: Dissertation Abstracts International, Volume: 69-08, Section: A, page: 3242. Advisers: Roy Gardner; Michael Rauh.
APA, Harvard, Vancouver, ISO, and other styles
34

Costa, Nelson, and Simon Haykin. "Wideband MIMO channel models: Theory and practice." *McMaster only, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
35

Capriotti, Paolo. "Models of type theory with strict equality." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/39382/.

Full text
Abstract:
This thesis introduces the idea of two-level type theory, an extension of Martin-Löf type theory that adds a notion of strict equality as an internal primitive. A type theory with a strict equality alongside the more conventional form of equality, the latter being of fundamental importance for the recent innovation of homotopy type theory (HoTT), was first proposed by Voevodsky, and is usually referred to as HTS. Here, we generalise and expand this idea, by developing a semantic framework that gives a systematic account of type formers for two-level systems, and proving a conservativity result relating back to a conventional type theory like HoTT. Finally, we show how a two-level theory can be used to provide partial solutions to open problems in HoTT. In particular, we use it to construct semi-simplicial types, and lay out the foundations of an internal theory of (∞, 1)-categories.
APA, Harvard, Vancouver, ISO, and other styles
36

Kadam, Sangram Vilasrao. "Models of Matching Markets." Thesis, Harvard University, 2016. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33493461.

Full text
Abstract:
The structure, length, and characteristics of matching markets affect the outcomes for their participants. This dissertation attempts to fill the lacuna in our understanding of matching markets along three dimensions through three essays. The first essay highlights the role of constraints at the interviewing stage of matching markets, where participants have to make choices even before they discover their own preferences entirely. Two results stand out from this setting. When preferences are ex-ante aligned, relaxing the interviewing constraints for one side of the market improves the welfare of everyone on the other side. Moreover, such interventions can lead to a decrease in the number of matched agents. The second essay elucidates the importance of rematching opportunities when relationships last over multiple periods. It identifies sufficient conditions for the existence of a stable matching which accommodate the forms of preferences we expect to see in multi-period environments. Preferences with inter-temporal complementarities, a desire for variety and a status-quo bias are included in this setting. The third essay furthers our understanding by connecting two of the sufficient conditions in a specialized matching-with-contracts setting. It establishes a novel linkage by giving a constructive way of arriving at one preference condition starting from another, thus proving that the latter implies the former.
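The multi-period and interview-constrained settings studied in these essays build on the classical one-period benchmark in which a stable matching always exists and can be computed by deferred acceptance. As a hedged illustration only (not code from the dissertation; the two-sided preference lists below are invented), a minimal Gale-Shapley sketch in Python:

```python
def deferred_acceptance(prop_prefs, recv_prefs):
    """Proposer-optimal stable matching via Gale-Shapley deferred acceptance.
    prop_prefs / recv_prefs: dict mapping each agent to an ordered list of
    acceptable partners on the other side (most preferred first)."""
    # Precompute each receiver's ranking of proposers for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in recv_prefs.items()}
    free = list(prop_prefs)              # proposers still seeking a match
    next_choice = {p: 0 for p in prop_prefs}
    match = {}                           # receiver -> tentatively held proposer
    while free:
        p = free.pop()
        if next_choice[p] >= len(prop_prefs[p]):
            continue                     # p exhausted their list: stays unmatched
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if p not in rank[r]:
            free.append(p)               # r finds p unacceptable
        elif r not in match:
            match[r] = p                 # r tentatively accepts p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])        # r trades up; old partner is freed
            match[r] = p
        else:
            free.append(p)               # r rejects p
    return match

# Hypothetical two-worker, two-firm market.
prefs_w = {"w1": ["f1", "f2"], "w2": ["f1", "f2"]}
prefs_f = {"f1": ["w2", "w1"], "f2": ["w1", "w2"]}
print(deferred_acceptance(prefs_w, prefs_f))  # {'f1': 'w2', 'f2': 'w1'}
```

The essays' multi-period and contract environments relax exactly the assumptions that make this simple one-shot algorithm sufficient.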
Economics
APA, Harvard, Vancouver, ISO, and other styles
37

Correia, Fagner Cintra [UNESP]. "The standard model effective field theory: integrating UV models via functional methods." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151703.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
It will be presented the principles behind the use of the Standard Model Effective Field Theory as a consistent method to parametrize New Physics. The concepts of Matching and Power Counting are covered and a Covariant Derivative Expansion introduced to the construction of the operators set coming from the particular integrated UV model. The technique is applied in examples including the SM with a new Scalar Triplet and for different sectors of the 3-3-1 model in the presence of Heavy Leptons. Finally, the Wilson coefficient for a dimension-6 operator generated from the integration of a heavy J-quark is then compared with the measurements of the oblique Y parameter.
CNPq: 142492/2013-2
CAPES: 88881.132498/2016-01
APA, Harvard, Vancouver, ISO, and other styles
38

Correia, Fagner Cintra. "The standard model effective field theory : integrating UV models via functional methods /." São Paulo, 2017. http://hdl.handle.net/11449/151703.

Full text
Abstract:
Advisor: Vicente Pleitez
Abstract: The Standard Model Effective Field Theory is presented as a consistent method of parametrizing New Physics. The concepts of Matching and Power Counting are treated, as well as the Covariant Derivative Expansion, introduced as an alternative for constructing the set of effective operators resulting from a particular UV model. The functional integration technique is applied to cases including the SM with a Scalar Triplet and different sectors of the 3-3-1 model in the presence of heavy leptons. Finally, the dimension-6 Wilson coefficient generated from the integration of a heavy J-quark is bounded by recent values of the oblique parameter Y.
Doctor
APA, Harvard, Vancouver, ISO, and other styles
39

Thompson, Bernard Robert. "Theory of cluster-cluster aggregation." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Elhouar, Mikael. "Essays on interest rate theory." Doctoral thesis, Handelshögskolan i Stockholm, Finansiell Ekonomi (FI), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-451.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Combs, Adam. "Bayesian Model Checking Methods for Dichotomous Item Response Theory and Testlet Models." Bowling Green State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1394808820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

De, Aguinaga José Guillermo. "Uncertainty Assessment of Hydrogeological Models Based on Information Theory." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-71814.

Full text
Abstract:
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty since the information of the observations is distributed through all of the parameters. The present study proposes a new option to reduce this uncertainty. A way to achieve this goal is to select a model which provides good performance with as few calibrated parameters as possible (a parsimonious model) and to calibrate it using many sources of information. Akaike's Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical-probabilistic criterion based on information theory which allows us to select a parsimonious model. AIC formulates the problem of parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings in the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. In the present study, the impact of the following factors is analyzed: the number of observations, the types of observations, and the order of calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of calibrated parameters was properly considered. This means that parameters which provide larger improvements in model fit should be considered first.
The approach to obtaining a parsimonious model by applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of this AIC approach.
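As a rough illustration of the AIC-based selection described in the abstract (the candidate models, parameter counts and log-likelihood values below are invented for the example, not taken from the dissertation), AIC = 2k - 2 ln(L) penalizes extra parameters and picks the parsimonious model:

```python
def aic(n_params: int, log_likelihood: float) -> float:
    """Akaike's Information Criterion: AIC = 2k - 2 ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical conceptual models: (name, number of calibrated
# parameters, maximized log-likelihood on the observations).
candidates = [
    ("homogeneous", 1, -52.0),
    ("two-zone", 3, -45.5),
    ("fully distributed", 12, -44.0),
]

scores = {name: aic(k, ll) for name, k, ll in candidates}
best = min(scores, key=scores.get)
for name, score in scores.items():
    print(f"{name:18s} AIC = {score:.1f}")
print("parsimonious choice:", best)  # → two-zone
```

Note that the 12-parameter model fits slightly better but loses on AIC: the small gain in likelihood does not justify the extra parameters.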
APA, Harvard, Vancouver, ISO, and other styles
43

Pashourtidou, Nicoletta. "Cointegration in misspecified models." Thesis, University of Southampton, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Shaikh, Zain U. "Some mathematical structures arising in string theory." Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=158375.

Full text
Abstract:
This thesis is concerned with mathematical interpretations of some recent developments in string theory. All theories are considered before quantisation. The first half of the thesis investigates a large class of Lagrangians, L, that arise in the physics literature. Noether's famous theorem says that under certain conditions there is a bijective correspondence between the symmetries of L and the "conserved currents" or integrals of motion. The space of integrals of motion forms a sheaf and has a bilinear bracket operation. We show that there is a canonical sheaf that contains a representation of the higher Dorfman bracket. This is the first step to define a Courant algebroid structure on this sheaf. We discuss the existence of this structure, proving that, for a refined definition, we have the necessary components. The pure spinor formalism of string theory involves the addition of the algebra of pure spinors to the data of the superstring. This algebra is a Koszul algebra and, for physicists, Koszul duality is string/gauge duality. Motivated by this, we investigate the intimate relationship between a commutative Koszul algebra A and the graded Lie superalgebra g whose enveloping algebra is the Koszul dual of A, U(g) = A^!. Classically, this means we obtain the algebra of syzygies A^S from the cohomology of a Lie subalgebra of g. We prove H^•(g_{≥2}; C) ≅ A^S again and extend it to the notion of k-syzygies, which we define as H^•(g_{≥k}; C). In particular, we show that H^•_{Ber}(A) ≅ H^•(g_{≥3}; C), where H^•_{Ber}(A) is the Berkovits cohomology of A.
APA, Harvard, Vancouver, ISO, and other styles
45

Forchini, Giovanni. "Exact distribution theory for some econometric problems." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Macorra, Axel de la. "Supersymmetry breaking in 4D string theory." Thesis, University of Oxford, 1993. http://ora.ox.ac.uk/objects/uuid:0bc6b606-1a02-4d28-b68f-bd5c3ac11d04.

Full text
Abstract:
In this thesis we address the problem of supersymmetry breaking in four dimensional string theory. We derive an effective Lagrangian describing the low energy degrees of freedom including the Goldstone mode associated with the spontaneously broken R-symmetry when a gaugino condensate forms. We show the equivalence between our approach and those previously used for studying gaugino condensate in 4D string theory but we also show the need to include quantum effects due to the strong coupling constant in the hidden sector. We determine the vacuum structure of the complete scalar potential and show that supersymmetry is broken and a large mass hierarchy may develop with a single gaugino condensate. Realistic phenomenological values for the gauge coupling constant, unification scale and soft supersymmetric breaking terms can be obtained. Consistency with the minimal supersymmetric extension of the standard model requires the hidden gauge group to be SU(6) or SO(9).
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Binbin, and 刘彬彬. "Some topics in risk theory and optimal capital allocation problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199291.

Full text
Abstract:
In recent years, the Markov Regime-Switching model and the class of Archimedean copulas have been widely applied to a variety of finance-related fields. The Markov Regime-Switching model can reflect the reality that the underlying economy is changing over time. Archimedean copulas are one of the most popular classes of copulas because they have closed form expressions and have great flexibility in modeling different kinds of dependencies. In the thesis, we first consider a discrete-time risk process based on the compound binomial model with regime-switching. Some general recursive formulas of the expected penalty function have been obtained. The orderings of ruin probabilities are investigated. In particular, we show that if there exists a stochastic dominance relationship between random claims at different regimes, then we can order ruin probabilities under different initial regimes. Regarding capital allocation problems, which are important areas in finance and risk management, this thesis studies the problems of optimal allocation of policy limits and deductibles when the dependence structure among risks is modeled by an Archimedean copula. By employing the concept of arrangement increasing and stochastic dominance, useful qualitative results of the optimal allocations are obtained. Then we turn our attention to a new family of risk measures satisfying a set of proposed axioms, which includes the class of distortion risk measures with concave distortion functions. By minimizing the new risk measures, we consider the optimal allocation of policy limits and deductibles problems based on the assumption that for each risk there exists an indicator random variable which determines whether the risk occurs or not. Several sufficient conditions to order the optimal allocations are obtained using tools in stochastic dominance theory.
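As a loose illustration of the recursive computations underlying the first part of the thesis (a plain compound binomial model without regime switching; the conventions and numbers below are chosen for the example, not taken from the thesis), a finite-horizon ruin probability can be computed by dynamic programming:

```python
def finite_horizon_ruin(u0: int, p: float, claim_pmf: dict[int, float],
                        horizon: int) -> float:
    """Ruin probability within `horizon` periods in the compound binomial
    model: each period a premium of 1 arrives and, with probability p, a
    claim X ~ claim_pmf (integer-valued) is paid.  Ruin = surplus < 0."""
    size = u0 + horizon + 2          # surplus grows by at most 1 per period
    psi = [0.0] * size               # psi[u] = ruin prob. with t periods left
    for _ in range(horizon):
        new = [0.0] * size
        for u in range(size):
            prob = (1 - p) * psi[min(u + 1, size - 1)]   # no claim this period
            for x, fx in claim_pmf.items():              # claim of size x
                s = u + 1 - x
                prob += p * fx * (1.0 if s < 0 else psi[min(s, size - 1)])
            new[u] = prob
        psi = new
    return psi[u0]

# Example: claims of size 2 occur with probability 1/2 each period.
print(finite_horizon_ruin(0, 0.5, {2: 1.0}, 10))
```

The thesis extends this kind of recursion to a regime-switching environment, where p and the claim distribution depend on an underlying Markov chain.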
Statistics and Actuarial Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
48

Coyle, Andrew James. "Some problems in queueing theory." Title page, contents and summary only, 1989. http://web4.library.adelaide.edu.au/theses/09PH/09phc8812.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Di Natale, Anna. "Stochastic models and graph theory for Zipf's law." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17065/.

Full text
Abstract:
In this work we study Zipf's law from both an applied and a theoretical point of view. This empirical law states that the rank-frequency (RF) distribution of the words of a text follows a power law with exponent -1. On the theoretical side, we treat two classes of models capable of reproducing power laws in their probability distributions. In particular, we consider generalizations of Polya urns and SSR (Sample Space Reducing) processes. For the latter we give a formalization in terms of Markov chains. Finally, we propose a population-dynamics model capable of unifying and reproducing the results of the three SSR processes found in the literature. We then move to a quantitative analysis of the RF behaviour of the words of a corpus of texts. In this case one observes that the RF does not follow a pure power law but has a twofold behaviour, which can be represented by a power law with a changing exponent. We investigated whether the analysis of the RF behaviour can be linked to the topological properties of a graph. In particular, starting from a corpus of texts we built an adjacency network in which each word is connected by a link to the following word. A topological analysis of the structure of the graph yielded results that seem to confirm the hypothesis that its structure is linked to the change of slope of the RF. This result may lead to developments in the study of language and the human mind. Moreover, since the structure of the graph appears to contain components that group words by meaning, a deeper study could lead to developments in automatic text comprehension (text mining).
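As an illustrative sketch of the SSR mechanism treated in the thesis (simulation parameters chosen for the example): a cascade starts at state N and repeatedly jumps to a uniformly chosen lower state until it reaches 1, and the visit frequencies of the states approach Zipf's law p(i) ∝ 1/i:

```python
import random
from collections import Counter

def ssr_run(n: int, rng: random.Random) -> list[int]:
    """One cascade of a Sample Space Reducing process: start at state n,
    repeatedly jump to a uniformly chosen strictly lower state, stop at 1."""
    visits, state = [], n
    while state > 1:
        state = rng.randint(1, state - 1)
        visits.append(state)
    return visits

rng = random.Random(42)
counts = Counter()
for _ in range(50_000):
    counts.update(ssr_run(100, rng))

# Visit frequencies should approximate p(i) ∝ 1/i (Zipf, exponent -1):
# compare p(1)/p(10) with the predicted ratio of 10.
ratio = counts[1] / counts[10]
print(f"p(1)/p(10) ≈ {ratio:.1f}  (theory: 10)")
```

Each cascade visits state 1 exactly once, so `counts[1]` equals the number of runs; lower states are visited with probability inversely proportional to their label, which is the Zipf behaviour the thesis formalizes via Markov chains.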
APA, Harvard, Vancouver, ISO, and other styles
50

Chepkwony, Isaac. "Analysis and control theory of some cochlear models." [Ames, Iowa : Iowa State University], 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles