Dissertations / Theses on the topic 'Computational complexity and computability'

Consult the top 50 dissertations / theses for your research on the topic 'Computational complexity and computability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Latsch, Wolfram Wilhelm. "Beyond complexity and evolution : on the limits of computability in economics." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pouly, Amaury. "Continuous models of computation: from computability to complexity." Palaiseau, Ecole polytechnique, 2015. https://theses.hal.science/tel-01223284/document.

Full text
Abstract:
In 1941, Claude Shannon introduced a continuous-time analog model of computation, namely the General Purpose Analog Computer (GPAC). The GPAC is a physically feasible model in the sense that it can be implemented in practice through the use of analog electronics or mechanical devices. It can be proved that the functions computed by a GPAC are precisely the solutions of a special class of differential equations where the right-hand side is a polynomial. Analog computers have since been replaced by their digital counterparts. Nevertheless, one can wonder how the GPAC compares to Turing machines. A few years ago, it was shown that Turing-based paradigms and the GPAC have the same computational power. However, this result did not shed any light on what happens at the computational complexity level. In other words, analog computers do not make a difference about what can be computed; but maybe they could compute faster than a digital computer. A fundamental difficulty of continuous-time models is to define a proper notion of complexity. Indeed, a troubling problem is that many models exhibit the so-called Zeno's phenomenon, also known as space-time contraction. In this thesis, we make several fundamental contributions to these questions. We show that the GPAC has the same computational power as the Turing machine, at the complexity level. We also provide, as a side effect, a purely analog, machine-independent characterization of P and of Computable Analysis.
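An editorial aside, not part of the thesis record: to make the class of systems mentioned above concrete, a GPAC-generable function satisfies an initial-value problem y' = p(y) with polynomial right-hand side. A minimal Python sketch (the particular system, step size and horizon are arbitrary choices) integrating one such system, whose solution components are sin t and cos t:

```python
# Hedged illustration only: a polynomial ODE system y1' = y2, y2' = -y1 with
# y1(0) = 0, y2(0) = 1; its solution (sin t, cos t) is GPAC-generable.

def euler_polynomial_ode(t_end=3.14159, h=1e-4):
    y1, y2 = 0.0, 1.0                    # initial values
    t = 0.0
    while t < t_end:
        dy1, dy2 = y2, -y1               # right-hand sides are polynomials in (y1, y2)
        y1, y2 = y1 + h * dy1, y2 + h * dy2
        t += h
    return y1, y2                        # approximately (sin(t_end), cos(t_end))

if __name__ == "__main__":
    print(euler_polynomial_ode())        # roughly (0.0, -1.0) at t = pi
```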
APA, Harvard, Vancouver, ISO, and other styles
3

Pégny, Maël. "Sur les limites empiriques du calcul : calculabilité, complexité et physique." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010673/document.

Full text
Abstract:
Recent years have seen a surge of interest in non-standard computational models, inspired by physical, biological or chemical phenomena. The exact properties of some of these models have been a topic of somewhat heated discussion: what do they compute? And how fast do they compute? The stakes of these questions were heightened by the claim that these models would violate the accepted limits of computation, by violating the Church-Turing Thesis or the Extended Church-Turing Thesis. To answer these questions, the physical realizability of some of those models - or lack thereof - has often been put at the center of the argument. It thus seems that empirical considerations have been introduced into the very foundations of computability and computational complexity theory, both subjects that would previously have been considered purely a priori parts of logic and computer science. Consequently, this dissertation is dedicated to the following question: do computability and computational complexity theory rest on empirical foundations? If so, what are these foundations? We first examine the precise meaning of those limits of computation, and articulate a philosophical conception of computation able to make sense of this variety of models. We then answer the first question in the affirmative, through a careful examination of current debates around non-standard models. We show the various difficulties surrounding the second question, and study how they stem from the complex translation of computational concepts into physical limitations.
APA, Harvard, Vancouver, ISO, and other styles
4

Kübel, David J. F. [Verfasser]. "On some Geometric Search Problems : Algorithms, Complexity, Computability / David J. F. Kübel." Bonn : Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1224270606/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Farr, Graham E. "Topics in computational complexity." Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:ad3ed1a4-fea4-4b46-8e7a-a0c6a3451325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Vikas, Narayan. "Computational complexity of graph compaction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq24360.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yamakami, Tomoyuki. "Average case computational complexity theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq28091.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tseng, Hung-Li. "Computational Complexity of Hopfield Networks." Thesis, University of North Texas, 1998. https://digital.library.unt.edu/ark:/67531/metadc278272/.

Full text
Abstract:
There are three main results in this dissertation: PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3, bipartite and degree 3, 8-neighbor mesh, dual of the knight's graph, hypercube, butterfly, cube-connected cycles, and shuffle-exchange), exponential convergence behavior of discrete Hopfield networks, and simulation of Turing machines by discrete Hopfield networks.
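An illustrative aside, not taken from the dissertation: the convergence problem referred to above is a local-search problem because asynchronous updates of a discrete Hopfield network never increase its energy. A small Python sketch with an invented weight matrix:

```python
# Hedged illustration only: asynchronous updates of a tiny discrete Hopfield
# network; each accepted flip does not increase the energy
# E(s) = -1/2 * sum_{i,j} W[i][j] * s_i * s_j, so the dynamics converge.

W = [[0, 1, -2],
     [1, 0, 1],
     [-2, 1, 0]]              # symmetric weights, zero diagonal (made up)
s = [1, -1, 1]                # initial +/-1 state

def energy(state):
    n = len(state)
    return -0.5 * sum(W[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

changed = True
while changed:
    changed = False
    for i in range(len(s)):
        field = sum(W[i][j] * s[j] for j in range(len(s)))
        new = 1 if field >= 0 else -1
        if new != s[i]:
            s[i], changed = new, True
print(s, energy(s))           # a stable, locally minimal-energy state
```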
APA, Harvard, Vancouver, ISO, and other styles
9

Rubiano, Thomas. "Implicit Computational Complexity and Compilers." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD076/document.

Full text
Abstract:
Complexity theory helps us predict and control the resources, usually time and space, consumed by programs. Static analysis based on specific syntactic criteria allows us to categorize some programs. A common approach is to observe the behavior of a program's data. For instance, the detection of non-size-increasing programs is based on a simple principle: counting memory allocations and deallocations, particularly in loops. This way, we can detect programs which compute within a constant amount of space. This method can easily be expressed as a property of control flow graphs. Because analyses of data behavior are syntactic, they can be done at compile time. Because they are purely static, those analyses are not always computable, or not easily so, and approximations are needed. The "Size-Change Principle" of C. S. Lee, N. D. Jones and A. M. Ben-Amram introduced a method to predict termination by observing the evolution of resources, and a great deal of research has grown out of this theory. Until now, these implicit complexity techniques were essentially applied to more or less toy languages. This thesis applies implicit computational complexity methods to "real life" programs by manipulating intermediate representation languages in compilers. This gives an accurate idea of the actual expressivity of these analyses and shows that the implicit computational complexity and compiler communities can fruitfully feed each other. As we show in this thesis, the methods developed are quite general and open the way to several new applications.
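A hedged sketch of the allocation-counting idea described above (not the thesis tool, which operates on real compiler intermediate representations); the block names, net allocation counts and loop structure below are invented for the example:

```python
# Hedged illustration only: flag a toy control-flow graph as "non size
# increasing" when no loop has a positive net memory allocation.

net_alloc = {            # net memory effect of each basic block (made up)
    "entry": 1,          # one allocation before the loop
    "loop_head": 0,
    "loop_body": 0,      # allocates one cell and frees one cell: net 0
    "exit": -1,
}
loops = [["loop_head", "loop_body"]]   # blocks belonging to each natural loop

def non_size_increasing(net_alloc, loops):
    """The program is flagged as computing in constant space if no loop
    has a positive net allocation."""
    return all(sum(net_alloc[b] for b in loop) <= 0 for loop in loops)

print(non_size_increasing(net_alloc, loops))   # True for this toy CFG
```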
APA, Harvard, Vancouver, ISO, and other styles
10

Okabe, Yasuo. "Parallel Computational Complexity and Data-Transfer Complexity of Supercomputing." Kyoto University, 1994. http://hdl.handle.net/2433/74658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Quttineh, Nils-Hassan. "Computational Complexity of Finite Field Multiplication." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1968.

Full text
Abstract:

The subject of this thesis is to find a basis which minimizes the number of bit operations involved in a finite field multiplication. The number of bases of a finite field increases quickly with the extension degree, and it is therefore important to find efficient search algorithms. Only fields of characteristic two are considered.

A complexity measure is introduced, in order to compare bases. Different methods and algorithms are tried out, limiting the search in order to explore larger fields. The concept of equivalent bases is introduced.

A comparison is also made between the Polynomial, Normal and Triangular Bases, referred to as known bases, as they are commonly used in implementations. Tables of the best known bases found for all fields up to GF(2^24) are presented.

A list of the best found bases for all fields up to GF(2^25) is also given.
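An editorial aside, not drawn from the thesis: the kind of bit-operation count used here as a complexity measure can be made concrete by multiplying in GF(2^4) in a polynomial basis and counting AND/XOR operations. A rough Python sketch (the irreducible polynomial x^4 + x + 1 and the operation counts are only indicative):

```python
# Hedged illustration only: multiplication in GF(2^4) in the polynomial basis
# modulo x^4 + x + 1, with a rough count of AND/XOR bit operations -- the kind
# of quantity used as a complexity measure for a basis.

IRRED = 0b10011            # x^4 + x + 1, a standard irreducible choice

def gf16_mul(a, b):
    ands = xors = 0
    prod = 0
    for i in range(4):                 # schoolbook carry-less multiplication
        if (a >> i) & 1:
            prod ^= b << i
            xors += 4                  # 4 coefficient XORs per partial product
        ands += 4                      # 4 coefficient ANDs per row a_i * b_j
    for i in range(7, 3, -1):          # reduce the degree-<8 result mod IRRED
        if (prod >> i) & 1:
            prod ^= IRRED << (i - 4)
            xors += 2                  # x^4 + x + 1 has 2 lower-order terms
    return prod, ands, xors

print(gf16_mul(0b0110, 0b0011))        # product (x^3 + x) plus rough op counts
```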

APA, Harvard, Vancouver, ISO, and other styles
12

Edwards, K. "Topics in computational complexity and enumeration." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Alfonsin, Jorge L. Ramirez. "Topics in combinatorics and computational complexity." Thesis, University of Oxford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Melkebeek, Dieter van. "Randomness and completeness in computational complexity." New York : Springer, 2000. http://www.springerlink.com/openurl.asp?genre=issue&issn=0302-9743&volume=1950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Pontoizeau, Thomas. "Community detection : computational complexity and approximation." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED007/document.

Full text
Abstract:
This thesis deals with community detection in the context of social networks. A social network can be modeled by a graph in which vertices represent members and edges represent relationships between members. In particular, I study four different definitions of a community. First, a community structure can be defined as a partition of the vertices such that each vertex has a greater proportion of neighbors in its own part than in any other part. This definition can be adapted in order to study a single community. Then, a community can be viewed as a subgraph in which every two vertices are at distance 2. Finally, in the context of online meetup services, I investigate a definition of potential communities in which the members do not know each other but are related through their common neighbors. For these proposed definitions, I study the computational complexity and approximation of problems that relate either to the existence of such communities or to finding them in graphs.
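As a hedged illustration of the first community definition above (not code from the thesis): a Python check that every vertex of a toy graph has strictly more neighbours in its own part than in any other part of a given partition.

```python
# Hedged illustration only: the graph and partition are made-up toy data.

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
partition = [{0, 1, 2}, {3, 4, 5}]

def is_community_structure(graph, partition):
    part_of = {v: i for i, part in enumerate(partition) for v in part}
    for v, neigh in graph.items():
        counts = [0] * len(partition)
        for u in neigh:
            counts[part_of[u]] += 1
        own = counts[part_of[v]]
        # the vertex must have strictly more neighbours in its own part
        if any(own <= c for i, c in enumerate(counts) if i != part_of[v]):
            return False
    return True

print(is_community_structure(graph, partition))   # True for this toy graph
```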
APA, Harvard, Vancouver, ISO, and other styles
16

Parisen Toldin, Paolo <1984>. "Implicit computational complexity and probabilistic classes." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5573/1/Parisen_Toldin_Paolo_tesi.pdf.

Full text
Abstract:
The thesis applies ICC techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (which stands for Probabilistic Polynomial Time), giving a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in Probabilistic Polynomial Time. The thesis is divided in two parts. The first part focuses on solving the problem by creating a prototype functional language (a probabilistic variation of lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part, instead, reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in Probabilistic Polynomial Time or not. This thesis can be seen as one of the first steps toward Implicit Computational Complexity over probabilistic classes. There are still hard open problems to investigate and try to solve. There are a lot of theoretical aspects strongly connected with these topics, and I expect that in the future there will be wide attention to ICC and probabilistic classes.
APA, Harvard, Vancouver, ISO, and other styles
17

Parisen Toldin, Paolo <1984>. "Implicit computational complexity and probabilistic classes." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5573/.

Full text
Abstract:
The thesis applies ICC techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (which stands for Probabilistic Polynomial Time), giving a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in Probabilistic Polynomial Time. The thesis is divided in two parts. The first part focuses on solving the problem by creating a prototype functional language (a probabilistic variation of lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part, instead, reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in Probabilistic Polynomial Time or not. This thesis can be seen as one of the first steps toward Implicit Computational Complexity over probabilistic classes. There are still hard open problems to investigate and try to solve. There are a lot of theoretical aspects strongly connected with these topics, and I expect that in the future there will be wide attention to ICC and probabilistic classes.
APA, Harvard, Vancouver, ISO, and other styles
18

Wareham, Harold Todd. "Systematic parameterized complexity analysis in computational phonology." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ37368.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Bull, David R. "Signal processing techniques with reduced computational complexity." Thesis, Cardiff University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Noel, Jonathan A. "Extremal combinatorics, graph limits and computational complexity." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:8743ff27-b5e9-403a-a52a-3d6299792c7b.

Full text
Abstract:
This thesis is primarily focused on problems in extremal combinatorics, although we will also consider some questions of analytic and algorithmic nature. The d-dimensional hypercube is the graph with vertex set {0,1}^d where two vertices are adjacent if they differ in exactly one coordinate. In Chapter 2 we obtain an upper bound on the 'saturation number' of Qm in Qd. Specifically, we show that for m ≥ 2 fixed and d large there exists a subgraph G of Qd of bounded average degree such that G does not contain a copy of Qm but, for every G' such that G ⊊ G' ⊆ Qd, the graph G' contains a copy of Qm. This result answers a question of Johnson and Pinto and is best possible up to a factor of O(m). In Chapter 3, we show that there exists ε > 0 such that for all k and for n sufficiently large there is a collection of at most 2^((1-ε)k) subsets of [n] which does not contain a chain of length k+1 under inclusion and is maximal subject to this property. This disproves a conjecture of Gerbner, Keszegh, Lemons, Palmer, Pálvölgyi and Patkós. We also prove that there exists a constant c ∈ (0,1) such that the smallest such collection is of cardinality 2^((1+o(1))ck) for all k. In Chapter 4, we obtain an exact expression for the 'weak saturation number' of Qm in Qd. That is, we determine the minimum number of edges in a spanning subgraph G of Qd such that the edges of E(Qd)\E(G) can be added to G, one edge at a time, such that each new edge completes a copy of Qm. This answers another question of Johnson and Pinto. We also obtain a more general result for the weak saturation of 'axis-aligned' copies of a multidimensional grid in a larger grid. In the r-neighbour bootstrap process, one begins with a set A0 of 'infected' vertices in a graph G and, at each step, a 'healthy' vertex becomes infected if it has at least r infected neighbours. If every vertex of G is eventually infected, then we say that A0 percolates. In Chapter 5, we apply ideas from weak saturation to prove that, for fixed r ≥ 2, every percolating set in Qd has cardinality at least (1+o(1))(d choose r-1)/r. This confirms a conjecture of Balogh and Bollobás and is asymptotically best possible. In addition, we determine the minimum cardinality exactly in the case r=3 (the minimum cardinality in the case r=2 was already known). In Chapter 6, we provide a framework for proving lower bounds on the number of comparable pairs in a subset S of a partially ordered set (poset) of prescribed size. We apply this framework to obtain an explicit bound of this type for the poset 𝒱(q,n) consisting of all subspaces of 𝔽_q^n ordered by inclusion, which is best possible when S is not too large. In Chapter 7, we apply the result from Chapter 6, along with the recently developed 'container method', to obtain an upper bound on the number of antichains in 𝒱(q,n) and a bound on the size of the largest antichain in a p-random subset of 𝒱(q,n) which holds with high probability for p in a certain range. In Chapter 8, we construct a 'finitely forcible graphon' W for which there exists a sequence (ε_i)_{i≥1} tending to zero such that, for all i ≥ 1, every weak ε_i-regular partition of W has at least exp(ε_i^(-2) / 2^(5 log* ε_i^(-2))) parts. This result shows that the structure of a finitely forcible graphon can be much more complex than was anticipated in a paper of Lovász and Szegedy. For positive integers p,q with p/q ≥ 2, a circular (p,q)-colouring of a graph G is a mapping V(G) → ℤp such that any two adjacent vertices are mapped to elements of ℤp at distance at least q from one another. The reconfiguration problem for circular colourings asks, given two (p,q)-colourings f and g of G, is it possible to transform f into g by recolouring one vertex at a time so that every intermediate mapping is a (p,q)-colouring? In Chapter 9, we show that this question can be answered in polynomial time for 2 ≤ p/q < 4 and is PSPACE-complete for p/q ≥ 4.
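An illustrative aside, not from the thesis: the r-neighbour bootstrap process described in the abstract, simulated naively on a toy graph.

```python
# Hedged illustration only: a healthy vertex becomes infected once it has at
# least r infected neighbours; the initial set percolates if every vertex is
# eventually infected.  The 4-cycle and initial set are toy choices.

def percolates(graph, initially_infected, r):
    infected = set(initially_infected)
    changed = True
    while changed:
        changed = False
        for v, neigh in graph.items():
            if v not in infected and len(neigh & infected) >= r:
                infected.add(v)
                changed = True
    return infected == set(graph)

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(percolates(cycle4, {0, 2}, r=2))   # True: 1 and 3 each see 2 infected
```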
APA, Harvard, Vancouver, ISO, and other styles
21

Dare, Christopher Edward. "Turing Decidability and Computational Complexity of Morse Homology." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90397.

Full text
Abstract:
This thesis presents a general background on discrete Morse theory, as developed by Robin Forman, as well as an introduction to computability and computational complexity. Since general point-set data equipped with a smooth structure can admit a triangulation, discrete Morse theory finds numerous applications in data analysis which can range from traffic control to geographical interpretation. Currently, there are various methods which convert point-set data to simplicial complexes or piecewise-smooth manifolds; however, this is not the focus of the thesis. Instead, this thesis will show that the Morse homology of such data is computable in the classical sense of Turing decidability, bound the complexity of finding the Morse homology of a given simplicial complex, and provide a measure for when this is more efficient than simplicial homology.
Master of Science
With the growing prevalence of data in the technological world, there is an emerging need to identify geometric properties (such as holes and boundaries) of data sets. However, it is often fruitless to employ an algorithm that is known to be too computationally expensive (or, even worse, not computable in the traditional sense). Discrete Morse theory was originally formulated to provide a simplified way of calculating these geometric properties on discrete sets. Therefore, this thesis outlines the general background of discrete Morse theory and formulates the computational cost of specific geometric algorithms from the discrete Morse perspective.
APA, Harvard, Vancouver, ISO, and other styles
22

Hagen, Matthias. "Algorithmic and computational complexity issues of MONET." Göttingen Cuvillier, 2008. http://d-nb.info/99226300X/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zuppiroli, Sara <1979>. "Probabilistic Recursion Theory and Implicit Computational Complexity." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6723/1/thesis.pdf.

Full text
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTM), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among the initial functions, one that returns identity or successor with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations existing between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of polynomial-time computable probabilistic functions.
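An editorial illustration, not from the thesis: the extra initial function mentioned above, returning its argument or its successor each with probability 1/2, together with a simple function built from it by iteration.

```python
# Hedged illustration only.
import random

def rand_base(n):
    """Identity or successor, each with probability 1/2."""
    return n if random.random() < 0.5 else n + 1

def binomial_like(n):
    """Iterating the random base n times starting from 0 yields a
    Binomial(n, 1/2) value -- a simple probabilistic computable function."""
    x = 0
    for _ in range(n):
        x = rand_base(x)
    return x

print([binomial_like(10) for _ in range(5)])   # samples concentrated around 5
```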
APA, Harvard, Vancouver, ISO, and other styles
24

Zuppiroli, Sara <1979>. "Probabilistic Recursion Theory and Implicit Computational Complexity." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6723/.

Full text
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTM), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among the initial functions, one that returns identity or successor with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations existing between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of polynomial-time computable probabilistic functions.
APA, Harvard, Vancouver, ISO, and other styles
25

Bovo, Samuele <1989>. "Development of computational methods for biological complexity." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8366/1/SB_thesis_2018.pdf.

Full text
Abstract:
The cell is a complex system. In this system, the different layers of biological information establish complex links converging in the space of functions; processes and pathways talk to each other, defining cell types and organs. In the space of biological functions, this leads to a higher order of "emergence", greater than the sum of the single parts, which makes a biological entity a complex system. The introduction of omic techniques has made it possible to investigate the complexity of each biological layer. With the different technologies we can obtain a near-complete readout of the different biomolecules. However, it is only through data integration that biological complexity can emerge and be understood. Given the complexity of the problem, we are far from having fully understood it or from having developed exhaustive computational methods. This makes urgent the exploration of biological complexity through the implementation of more powerful tools relying on new data and hypotheses. To this aim, Bioinformatics and Computational Biology play determinant roles. The present thesis describes computational methods aimed at deciphering biological complexity starting from genomic, interactomic, metabolomic and functional data. The first part describes NET-GE, a network-based gene enrichment tool aimed at extracting the biological functions and processes of a set of genes/proteins related to a phenotype. NET-GE exploits the information stored in biological networks to better define the biological events occurring at the gene/protein level. The first part also describes eDGAR, a database collecting and organizing gene-disease associations. The second part deals with metabolomics. I describe a new way to perform metabolite enrichment analysis: the metabolome is explored by exploiting the features of an interactome. The third part describes the methods and results obtained in the CAGI experiment, a community experiment aimed at assessing computational methods used to predict the impact of genomic variation on a phenotype.
APA, Harvard, Vancouver, ISO, and other styles
26

Lemieux, François 1961. "Finite groupoids and their applications to computational complexity." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=40171.

Full text
Abstract:
In our Master's thesis, the notions of recognition by semigroups and by programs over semigroups were extended to groupoids. As a consequence of this transformation, we obtained context-free languages instead of regular languages with recognition by groupoids, and we obtained SAC¹ instead of NC¹ with recognition by programs over groupoids. In this thesis, we continue the investigation of the computational power of finite groupoids.
We consider different restrictions on the original model. We examine the effect of restricting the kind of groupoids used and the way parentheses are placed, and we distinguish between the case where parentheses are explicitly given and the case where they are guessed nondeterministically.
We introduce the notions of linear recognition by groupoids and by programs over groupoids. This leads to new characterizations of the linear context-free languages and of NL. We also prove that the algebraic structure of finite groupoids induces a strict hierarchy on the classes of languages they linearly recognize.
We investigate the classes obtained when the groupoids are restricted to be quasigroups (i.e. the multiplication table forms a Latin square). We prove that languages recognized by quasigroups are regular and that programs over quasigroups characterize NC¹.
We also consider the problem of evaluating a well-parenthesized expression over a finite loop (a quasigroup with an identity). This problem is in NC¹ for any finite loop, and we give algebraic conditions for its completeness. In particular, we prove that it is sufficient that the loop be nonsolvable, extending a well-known theorem of Barrington.
Finally, we consider programs where the groupoids are allowed to grow with the input length. We study the relationship between these programs and more classical models of computation such as Turing machines, pushdown automata, and owner-read owner-write PRAMs. As a consequence, we find a restriction on Boolean circuits that has some interesting properties. In particular, circuits that characterize NP and NL are shown to correspond, in the presence of our restriction, to P and L respectively.
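As a hedged aside (not from the thesis), the expression-evaluation problem mentioned in the abstract can be stated very concretely: given the multiplication table of a finite loop and a fully parenthesised expression, compute its value. A toy Python sketch using Z_3, written additively, as a (very tame) example of a loop:

```python
# Hedged illustration only: the table is Z_3 under addition.
table = [[0, 1, 2],
         [1, 2, 0],
         [2, 0, 1]]        # row i, column j holds i * j

def evaluate(expr):
    """expr is either a leaf (an element index) or a pair (left, right)."""
    if isinstance(expr, int):
        return expr
    left, right = expr
    return table[evaluate(left)][evaluate(right)]

# ((1 * 2) * (2 * 2)) in Z_3: (1+2) + (2+2) = 0 + 1 = 1
print(evaluate(((1, 2), (2, 2))))   # 1
```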
APA, Harvard, Vancouver, ISO, and other styles
27

Kamath, Pritish. "Some hardness escalation results in computational complexity theory." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/128290.

Full text
Abstract:
In this thesis, we prove new hardness escalation results in computational complexity theory: a phenomenon where hardness results against seemingly weak models of computation for any problem can be lifted, in a black-box manner, to much stronger models of computation by considering a simple gadget-composed version of the original problem. For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols. This allows us to prove new lower bounds for:
• Monotone Circuit Size: we get an exponential lower bound for an explicit monotone function computable by linear-sized monotone span programs and also in (non-monotone) NC².
• Real Monotone Circuit Size: our proof technique extends to real communication protocols, which yields similar lower bounds against real monotone circuits.
• Cutting Planes Length: we get an exponential lower bound for an explicit CNF contradiction that is refutable with logarithmic Nullstellensatz degree.
Finally, we describe an intimate connection between computational models and communication complexity analogs of the sub-classes of TFNP, the class of all total search problems in NP. We show that the communication analog of PPA_p captures span programs over F_p for any prime p. This complements previously known results that communication FP captures formulas (Karchmer-Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995).
APA, Harvard, Vancouver, ISO, and other styles
28

McQuillan, Colin. "The computational complexity of approximation of partition functions." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12893/.

Full text
Abstract:
This thesis studies the computational complexity of approximately evaluating partition functions. For various classes of partition functions, we investigate whether there is an FPRAS: a fully polynomial randomised approximation scheme. In many of these settings we also study 'expressibility', a simple notion of defining a constraint by combining other constraints, and we show that the results cannot be extended by expressibility reductions alone. The main contributions are:
• We show that there is no FPRAS for evaluating the partition function of the hard-core gas model on planar graphs at fugacity 312, unless RP = NP.
• We generalise an argument of Jerrum and Sinclair to give FPRASes for a large class of degree-two Boolean #CSPs.
• We initiate the classification of degree-two Boolean #CSPs where the constraint language consists of a single arity-3 relation.
• We show that the complexity of approximately counting downsets in directed acyclic graphs is not affected by restricting to graphs of maximum degree three.
• We classify the complexity of degree-two #CSPs with Boolean relations and weights on variables.
• We classify the complexity of the problem #CSP(F) for arbitrary finite domains when enough non-negative-valued arity-1 functions are in the constraint language.
• We show that not all log-supermodular functions can be expressed by binary log-supermodular functions in the context of #CSPs.
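An illustrative aside, not from the thesis: the partition function of the hard-core gas model that the first contribution concerns is Z(G, λ) = Σ_I λ^|I|, the sum over independent sets I of G. A brute-force Python sketch on a 4-cycle (an FPRAS would approximate this quantity efficiently on large graphs):

```python
# Hedged illustration only: exhaustive computation of the hard-core partition
# function on a tiny graph.
from itertools import combinations

def hard_core_Z(vertices, edges, lam):
    Z = 0.0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):  # independent?
                Z += lam ** k
    return Z

print(hard_core_Z([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], lam=1.0))
# 7.0: the empty set, four singletons and the two diagonal pairs
```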
APA, Harvard, Vancouver, ISO, and other styles
29

Aghighi, Meysam. "Computational Complexity of some Optimization Problems in Planning." Doctoral thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136280.

Full text
Abstract:
Automated planning is known to be computationally hard in the general case. Propositional planning is PSPACE-complete and first-order planning is undecidable. One method for analyzing the computational complexity of planning is to study restricted subsets of planning instances, with the aim of differentiating instances with varying complexity. We use this methodology for studying the computational complexity of planning. Finding new tractable (i.e. polynomial-time solvable) problems has been a particularly important goal for researchers in the area. The reason behind this is not only to differentiate between easy and hard planning instances, but also to use polynomial-time solvable instances in order to construct better heuristic functions and improve planners. We identify a new class of tractable cost-optimal planning instances by restricting the causal graph. We study the computational complexity of oversubscription planning (such as the net-benefit problem) under various restrictions and reveal strong connections with classical planning. Inspired by this, we present a method for compiling oversubscription planning problems into the ordinary plan existence problem. We further study the parameterized complexity of cost-optimal and net-benefit planning under the same restrictions and show that the choice of numeric domain for the action costs has a great impact on the parameterized complexity. We finally consider the parameterized complexity of certain problems related to partial-order planning. In some applications, plans that are less restricted than total-order plans are needed; therefore, a partial-order plan is used instead. When dealing with partial-order plans, one important question is how to obtain optimal partial-order plans, i.e. those with the highest degree of freedom according to some notion of flexibility. We study several optimization problems for partial-order plans, such as finding a minimum deordering or reordering, and finding the minimum parallel execution length.
APA, Harvard, Vancouver, ISO, and other styles
30

Goussevskaia, Olga. "Computational complexity and scheduling algorithms for wireless networks." Konstanz Hartung-Gorre, 2009. http://d-nb.info/997891122/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Sabharwal, Ashish. "Algorithmic applications of propositional proof complexity /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kawachi, Akinori. "Studies on quantum query complexity and quantum computational cryptography." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/145315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

De, Benedetti Erika. "Linear logic, type assignment systems and implicit computational complexity." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL0981/document.

Full text
Abstract:
In this thesis we explore the linear logic approach to implicit computational complexity, through the design of type assignment systems based on light linear logics, or heavily inspired by them, with the purpose of giving a characterization of one or more complexity classes through variants of the lambda-calculus which are typable in such systems. In particular, we consider both a monovalent and a polyvalent perspective with respect to ICC. In the first, the aim is to characterize a hierarchy of complexity classes through an elementary lambda-calculus typed in Elementary Linear Logic (ELL), where the complexity depends only on the interface of a term, namely its type. The second approach gives an account of both the functions computable in polynomial time and strong normalization, through terms of the pure lambda-calculus which are typed in a system inspired by Soft Linear Logic (SLL); in particular, with respect to the usual logical take, here we give up the "!" modality in favor of employing stratified types, seen as a refinement of non-associative intersection types, in order to improve typability and, as a consequence, expressivity. Finally, we explore the use of intersection types, deprived of some of their usual properties, in a more quantitative direction than the usual qualitative one, namely in order to compute a bound on the computation of pure lambda-terms, obtaining in addition a characterization of strong normalization.
APA, Harvard, Vancouver, ISO, and other styles
34

Fomenko, Fedor. "Dual-context multicategories as models for implicit computational complexity." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/29104.

Full text
Abstract:
We start by defining two basic dual-context calculi II(σ) and IL(σ) of typed terms, built using two kinds of free variables and dual-typed function symbols. In the IL(σ) calculus we use safe variables in a 'linear' fashion, forbidding any weakening and contraction, while, in the II(σ) calculus, contraction and weakening are allowed for both safe and normal variables. We then consider models for II(σ) and IL(σ) with basic equational judgements. Rather than following the traditional approach of encoding dual-contexts through additional structure on a category, we consider dual-context multicategories in which dual-contexts are built into the definition. The main advantage of this approach is that it covers a wider class of models, including some particularly natural models from the field of implicit computational complexity. Next we enrich our basic calculi with different type constructions such as products and sums and provide their multicategorical interpretation. We consider a dual-context list type constructor together with a type-theoretic analogue of the safe recursion of Bellantoni and Cook's system B, which characterises polynomial-time computability on natural numbers. We define an interpretation for such dual-context lists using safe list objects. We show that the polytime computable functions are exactly the functions definable in every dual-context multicategory with a safe binary list object. Finally, motivated by the standard approach to the categorical interpretation of primitive recursion using the notion of initial algebra, we develop a notion of safe initial algebra, which generalises the safe recursion scheme and provides us with insights about the choice of initial functions in system B.
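An editorial aside, not from the thesis: the flavour of Bellantoni and Cook's safe recursion referred to above, with "normal" arguments written before the semicolon in the comments and "safe" arguments after it; the recursion variable is normal, and the previously computed value may only be used in a safe position.

```python
# Hedged illustration only (the real system B recurses on binary notation).

def succ(y):                 # succ(; y)
    return y + 1

def add(n, y):               # add(n; y): recursion on the normal argument n
    if n == 0:
        return y             # base case g(; y) = y
    return succ(add(n - 1, y))     # the recursive value is used safely

def mul(n, m):               # mul(n, m; ): both arguments normal
    if n == 0:
        return 0
    return add(m, mul(n - 1, m))   # previous value only fed to a safe slot

print(add(3, 4), mul(3, 4))  # 7 12
```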
APA, Harvard, Vancouver, ISO, and other styles
35

Hallett, Michael Trevor. "An integrated complexity analysis of problems from computational biology." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21933.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Broxvall, Mathias. "A Study in the Computational Complexity of Temporal Reasoning." Doctoral thesis, Linköping : Univ, 2002. http://www.ep.liu.se/diss/science_technology/07/79/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tesson, Pascal. "Computational complexity questions related to finite monoids and semigroups." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84441.

Full text
Abstract:
In this thesis, we address a number of issues pertaining to the computational power of monoids and semigroups as machines and to the computational complexity of problems whose difficulty is parametrized by an underlying semigroup or monoid and find that these two axes of research are deeply intertwined.
We first consider the "program over monoid" model of D. Barrington and D. Therien [BT88] and set out to answer two fundamental questions: which monoids are rich enough to recognize arbitrary languages via programs of arbitrary length, and which monoids are so weak that any program over them has an equivalent of polynomial length? We find evidence that the two notions are dual and in particular prove that every monoid in DS has exactly one of these two properties. We also prove that for certain "weak" varieties of monoids, programs can only recognize those languages with a "neutral letter" that can be recognized via morphisms over that variety.
We then build an algebraic approach to communication complexity, a field which has been of great importance in the study of small complexity classes. We prove that every monoid has communication complexity O(1), Θ(log n) or Θ(n) in this model. We obtain similar classifications for the communication complexity of finite monoids in the probabilistic, simultaneous, probabilistic simultaneous and MOD p-counting variants of this two-party model and thus characterize the communication complexity (in a worst-case partition sense) of every regular language in these five models. Furthermore, we study the same questions in the Chandra-Furst-Lipton multiparty extension of the classical communication model and describe the variety of monoids which have bounded 3-party communication complexity and bounded k-party communication complexity for some k. We also show how these bounds can be used to establish computational limitations of programs over certain classes of monoids.
Finally, we consider the computational complexity of testing if an equation or a system of equations over some fixed finite monoid (or semigroup) has a solution.
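As a hedged illustration of the "program over a monoid" model discussed above (not code from the thesis): each instruction queries one input position and emits a monoid element, and the word is accepted when the product of the emitted elements lies in an accepting set. A Python sketch recognising parity with the monoid Z_2:

```python
# Hedged illustration only.

def run_program(word, instructions, identity, op, accepting):
    value = identity
    for pos, emit in instructions:     # emit maps the queried bit to a monoid element
        value = op(value, emit(word[pos]))
    return value in accepting

def parity_program(n):
    # one instruction per position: emit 1 in Z_2 if the bit is 1, else 0
    return [(i, lambda bit: bit % 2) for i in range(n)]

word = [1, 0, 1, 1]
print(run_program(word, parity_program(len(word)),
                  identity=0, op=lambda a, b: (a + b) % 2, accepting={1}))  # True
```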
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Justin N. "Computational complexity, bounded rationality and the theory of games." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chandoo, Maurice [Verfasser]. "Computational complexity aspects of implicit graph representations / Maurice Chandoo." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2018. http://d-nb.info/117241436X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Levy, Matthew Asher. "A SURVEY OF LIMITED NONDETERMINISM IN COMPUTATIONAL COMPLEXITY THEORY." UKnowledge, 2003. http://uknowledge.uky.edu/gradschool_theses/221.

Full text
Abstract:
Nondeterminism is typically used as an inherent part of the computational models used in computational complexity. However, much work has been done looking at nondeterminism as a separate resource added to deterministic machines. This survey examines several different approaches to limiting the amount of nondeterminism, including Kintala and Fischer's β hierarchy, and Cai and Chen's guess-and-check model.
APA, Harvard, Vancouver, ISO, and other styles
41

Arasteh, Davoud. "Computational Intelligence and Complexity Measures for Chaotic Information Processing." ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/834.

Full text
Abstract:
This dissertation investigates the application of computational intelligence methods in the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm in computing algorithmic complexity measures, Lyapunov exponents, information dimension and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems. These metrics make it possible to distinguish order from disorder in these systems. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing computational algorithms are designed and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase space samples and the buffer overflow problem in algorithmic complexity measure estimation in slow dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system like a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier. This quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, as data strings grow, their normalized complexity approaches zero, while a faster decreasing rate is observed for periodic responses. Algorithmic complexity analysis is performed on a class of certain rate convolutional encoders. The degree of diffusion in random-like patterns is measured. Simulation evidence indicates that algorithmic complexity associated with a particular class of 1/n-rate code increases with the increase of the encoder constraint length. This occurs in parallel with the increase of error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity with a larger free distance value.
APA, Harvard, Vancouver, ISO, and other styles
42

Holder, Ethan Graham. "Musiplectics: Computational Assessment of the Complexity of Music Scores." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/52376.

Full text
Abstract:
In the Western classical tradition, musicians play music from notated sheet music, called a score. When playing music from a score, a musician translates its visual symbols into sequences of instrument-specific physical motions. Hence, a music score's overall complexity represents a sum of the cognitive and mechanical acuity required for its performance. For a given instrument, different notes, intervals, articulations, dynamics, key signatures, and tempo represent dissimilar levels of difficulty, which vary depending on the performer's proficiency. Individual musicians embrace this tenet, but may disagree about the degrees of difficulty. This thesis introduces musiplectics (musiplectics = music + plectics, Greek for the study of complexity), a systematic and objective approach to computational assessment of the complexity of a music score for any instrument. Musiplectics defines computing paradigms for automatically and accurately calculating the complexity of playing a music score on a given instrument. The core concept codifies a two-phase process. First, music experts rank the relative difficulty of individual musical components (e.g., notes, intervals, dynamics, etc.) for different playing proficiencies and instruments. Second, a computing engine automatically applies this ranking to music scores and calculates their respective complexity. As a proof of concept of musiplectics, we present an automated, Web-based application called Musical Complexity Scoring (MCS) for music educators and performers. Musiplectics can engender the creation of practical computing tools for objective and expeditious assessment of a music score's suitability for the abilities of intended performers. This thesis is based on research submitted for publication at ONWARD'15.
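An editorial aside, not the MCS tool described in the thesis: the two-phase idea above can be sketched as expert-assigned difficulty weights summed over the components of a score; the weights and the tiny "score" below are invented for the example.

```python
# Hedged illustration only: hypothetical expert-provided weights and a toy score.

difficulty = {
    "quarter_note": 1.0,
    "eighth_note": 1.5,
    "large_interval": 2.0,
    "fortissimo": 1.2,
    "fast_tempo": 1.8,
}

score_events = ["quarter_note", "eighth_note", "eighth_note",
                "large_interval", "fortissimo"]

def score_complexity(events, weights):
    return sum(weights[e] for e in events)

print(score_complexity(score_events, difficulty))   # 7.2 for this toy score
```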
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
43

Shah, Kushal Yogeshkumar. "Computational Complexity of Signal Processing Functions in Software Radio." Cleveland State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=csu1292854939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kotzing, Timo. "Abstraction and complexity in computational learning in the limit." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 156 p, 2009. http://proquest.umi.com/pqdweb?did=1886744841&sid=7&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Vee, Erik. "Time-space tradeoffs for nonuniform computation /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/6915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Dandekar, Pranav. "Algebraic-geometric methods for complexity lower bounds." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0008843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Pavan, A. "Average-case complexity theory and polynomial-time reductions." Buffalo, N.Y. : Dept. of Computer Science, State University of New York at Buffalo, 2001. http://www.cse.buffalo.edu/tech%2Dreports/2001%2D10.ps.Z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Yoonhyoung Gordon Peter C. "Linguistic complexity and working memory structure: effect of the computational demands of reasoning on syntactic complexity." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Neyer, Gabriele. "Algorithms, complexity, and software engineering in computational geometry : case studies /." [S.l.] : [s.n.], 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Hundt, Christian [Verfasser]. "On the computational complexity of projective image matching / Christian Hundt." Lübeck : Zentrale Hochschulbibliothek Lübeck, 2012. http://d-nb.info/1020437006/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles