Academic literature on the topic 'Computational complexity and computability'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computational complexity and computability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Computational complexity and computability"

1

Dean, Walter. "Computational Complexity Theory and the Philosophy of Mathematics." Philosophia Mathematica 27, no. 3 (October 1, 2019): 381–439. http://dx.doi.org/10.1093/philmat/nkz021.

Full text
Abstract:
Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ problem and why it has proven hard to resolve, and the role of non-classical modes of computation and proof.
APA, Harvard, Vancouver, ISO, and other styles
2

Japaridze, Giorgi. "Arithmetics based on computability logic." Logical Investigations 25, no. 2 (December 23, 2019): 61–74. http://dx.doi.org/10.21146/2074-1472-2019-25-2-61-74.

Full text
Abstract:
This paper is a brief survey of number theories based on computability logic (CoL), a game-semantically conceived logic of computational tasks and resources. Such theories, termed clarithmetics, are conservative extensions of first-order Peano arithmetic. The first section of the paper lays out the conceptual basis of CoL and describes the relevant fragment of its formal language, with so-called parallel connectives, choice connectives and quantifiers, and blind quantifiers. Both syntactically and semantically, this is a conservative generalization of the language of classical logic. Clarithmetics, based on the corresponding fragment of CoL in the same sense as Peano arithmetic is based on classical logic, are discussed in the second section. The axioms and inference rules of the system of clarithmetic named ${\bf CLA11}$ are presented, and the main results on this system are stated: constructive soundness, extensional completeness, and intensional completeness. In the final section two potential applications of clarithmetics are addressed: clarithmetics as declarative programming languages in an extreme sense, and as tools for separating computational complexity classes. When clarithmetics or similar CoL-based theories are viewed as programming languages, programming reduces to proof search, as programs can be mechanically extracted from proofs; such programs also serve as their own formal verifications, thus fully neutralizing the notorious (and generally undecidable) program verification problem. The second application reduces the problem of separating various computational complexity classes to separating the corresponding versions of clarithmetic, the potential benefit of which stems from the belief that separating theories should generally be easier than separating complexity classes directly.
APA, Harvard, Vancouver, ISO, and other styles
3

Reif, J. H., J. D. Tygar, and A. Yoshida. "Computability and complexity of ray tracing." Discrete & Computational Geometry 11, no. 3 (March 1994): 265–88. http://dx.doi.org/10.1007/bf02574009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yampolsky, Michael. "Towards understanding the theoretical challenges of numerical modeling of dynamical systems." New Zealand Journal of Mathematics 52 (September 19, 2021): 453–67. http://dx.doi.org/10.53733/129.

Full text
Abstract:
Asking and answering the right questions about computability and computational complexity of dynamical systems is more important than many may realize. In this note, I discuss the principal challenges, their implications, and some instructive open problems.
APA, Harvard, Vancouver, ISO, and other styles
5

Beggs, Edwin, José Félix Costa, and John V. Tucker. "Three Forms of Physical Measurement and Their Computability." Review of Symbolic Logic 7, no. 4 (September 9, 2014): 618–46. http://dx.doi.org/10.1017/s1755020314000240.

Full text
Abstract:
We have begun a theory of measurement in which an experimenter and his or her experimental procedure are modeled by algorithms that interact with physical equipment through a simple abstract interface. The theory is based upon using models of physical equipment as oracles to Turing machines. This allows us to investigate the computability and computational complexity of measurement processes. We examine eight different experiments that make measurements and, by introducing the idea of an observable indicator, we identify three distinct forms of measurement process and three types of measurement algorithm. We give axiomatic specifications of three forms of interfaces that enable the three types of experiment to be used as oracles to Turing machines, and lemmas that help certify an experiment satisfies the axiomatic specifications. For experiments that satisfy our axiomatic specifications, we give lower bounds on the computational power of Turing machines in polynomial time using nonuniform complexity classes. These lower bounds break the barrier defined by the Church-Turing Thesis.
APA, Harvard, Vancouver, ISO, and other styles
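To make the oracle idea concrete, here is a minimal sketch; it is an invented toy, not one of the paper's eight experiments. The "equipment" holds an unknown quantity that the algorithm can only probe through a narrow threshold-comparison interface, and a Turing-machine-style procedure measures it by binary search:

```python
from fractions import Fraction

class ExperimentOracle:
    """Toy physical equipment: hides a quantity in (0, 1) behind a query interface."""

    def __init__(self, quantity: Fraction):
        self._q = quantity  # the hidden physical quantity

    def below(self, threshold: Fraction) -> bool:
        """One 'measurement': is the hidden quantity below the threshold?"""
        return self._q < threshold

def measure(oracle: ExperimentOracle, k: int) -> Fraction:
    """Approximate the hidden quantity to precision 2**-k using k oracle queries."""
    lo, hi = Fraction(0), Fraction(1)
    for _ in range(k):
        mid = (lo + hi) / 2
        if oracle.below(mid):
            hi = mid
        else:
            lo = mid
    return lo  # invariant: lo <= quantity < hi, and hi - lo == 2**-k

oracle = ExperimentOracle(Fraction(1, 3))
approx = measure(oracle, 20)
```

The sketch illustrates the paper's central framing: the algorithm never sees the quantity directly, only the answers to its queries, so the precision of the measurement is bounded by the number of interactions with the equipment.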
6

Blum, Manuel, and Santosh Vempala. "The complexity of human computation via a concrete model with an application to passwords." Proceedings of the National Academy of Sciences 117, no. 17 (April 14, 2020): 9208–15. http://dx.doi.org/10.1073/pnas.1801839117.

Full text
Abstract:
What can humans compute in their heads? We are thinking of a variety of cryptographic protocols, games like sudoku, crossword puzzles, speed chess, and so on. For example, can a person compute a function in his or her head so that an eavesdropper with a powerful computer—who sees the responses to random inputs—still cannot infer responses to new inputs? To address such questions, we propose a rigorous model of human computation and associated measures of complexity. We apply the model and measures first and foremost to the problem of 1) humanly computable password generation and then, consider related problems of 2) humanly computable “one-way functions” and 3) humanly computable “pseudorandom generators.” The theory of human computability developed here plays by different rules than standard computability; the polynomial vs. exponential time divide of modern computability theory is irrelevant to human computation. In human computability, the step counts for both humans and computers must be more concrete. As an application and running example, password generation schemas are humanly computable algorithms based on private keys. Humanly computable and/or humanly usable mean, roughly speaking, that any human needing—and capable of using—passwords can if sufficiently motivated generate and memorize a secret key in less than 1 h (including all rehearsals) and can subsequently use schema plus key to transform website names (challenges) into passwords (responses) in less than 1 min. Moreover, the schemas have precisely defined measures of security against all adversaries, human and/or machine.
APA, Harvard, Vancouver, ISO, and other styles
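The flavor of a humanly computable challenge-response schema can be sketched as follows. This is a hypothetical toy for illustration only (the letter-to-digit key and the mod-10 running sum are invented here), not the Blum-Vempala construction; it only shows the shape of the problem, where a memorized secret key turns website names into digit-string passwords with one small mental step per letter:

```python
import random

def make_key(seed: int) -> dict:
    """The memorized secret: a digit 0-9 assigned to each letter."""
    rng = random.Random(seed)
    return {ch: rng.randrange(10) for ch in "abcdefghijklmnopqrstuvwxyz"}

def respond(key: dict, challenge: str) -> str:
    """Transform a challenge (e.g. a website name) into a password:
    keep a running sum mod 10 of the key digits of its letters."""
    total, digits = 0, []
    for ch in challenge.lower():
        if ch in key:
            total = (total + key[ch]) % 10  # one mental addition per letter
            digits.append(str(total))
    return "".join(digits)

key = make_key(42)       # memorized once, e.g. during the "less than 1 h" setup
pw = respond(key, "example")
```

In the paper's terms, the interesting question is exactly how secure such schemas can be made while keeping the per-letter step count small enough for a human; this toy schema makes no security claim.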
7

Valenti, Manlio. "A journey through computability, topology and analysis." Bulletin of Symbolic Logic 28, no. 2 (June 2022): 266–67. http://dx.doi.org/10.1017/bsl.2022.13.

Full text
Abstract:
This thesis is devoted to the exploration of the complexity of some mathematical problems using the framework of computable analysis and (effective) descriptive set theory. We will especially focus on Weihrauch reducibility as a means to compare the uniform computational strength of problems. After a short introduction of the relevant background notions, we investigate the uniform computational content of problems arising from theorems that lie at the higher levels of the reverse mathematics hierarchy.

We first analyze the strength of the open and clopen Ramsey theorems. Since there is not a canonical way to phrase these theorems as multi-valued functions, we identify eight different multi-valued functions (five corresponding to the open Ramsey theorem and three corresponding to the clopen Ramsey theorem) and study their degree from the point of view of Weihrauch, strong Weihrauch, and arithmetic Weihrauch reducibility.

We then discuss some new operators on multi-valued functions and study their algebraic properties and the relations with other previously studied operators on problems. In particular, we study the first-order part and the deterministic part of a problem f, capturing the Weihrauch degree of the strongest multi-valued problem that is reducible to f and that, respectively, has codomain $\mathbb{N}$ or is single-valued. These notions proved to be extremely useful when exploring the Weihrauch degree of the problem $\mathsf{DS}$ of computing descending sequences in ill-founded linear orders. They allow us to show that $\mathsf{DS}$, and the Weihrauch-equivalent problem $\mathsf{BS}$ of finding bad sequences through non-well quasi-orders, while being very "hard" to solve, are rather weak in terms of uniform computational strength.

We then generalize $\mathsf{DS}$ and $\mathsf{BS}$ by considering $\boldsymbol{\Gamma}$-presented orders, where $\boldsymbol{\Gamma}$ is a Borel pointclass or $\boldsymbol{\Delta}^1_1$, $\boldsymbol{\Sigma}^1_1$, $\boldsymbol{\Pi}^1_1$. We study the obtained $\mathsf{DS}$-hierarchy and $\mathsf{BS}$-hierarchy of problems in comparison with the (effective) Baire hierarchy and show that they do not collapse at any finite level.

Finally, we work in the context of geometric measure theory and focus on the characterization, from the point of view of descriptive set theory, of some conditions involving the notions of Hausdorff/Fourier dimension and Salem sets. We first work in the hyperspace $\mathbf{K}([0,1])$ of compact subsets of $[0,1]$ and show that the closed Salem sets form a $\boldsymbol{\Pi}^0_3$-complete family. This is done by characterizing the complexity of the family of sets having sufficiently large Hausdorff or Fourier dimension. We also show that the complexity does not change if we increase the dimension of the ambient space and work in $\mathbf{K}([0,1]^d)$. We also generalize the results by relaxing the compactness of the ambient space and show that the closed Salem sets are still $\boldsymbol{\Pi}^0_3$-complete when we endow $\mathbf{F}(\mathbb{R}^d)$ with the Fell topology. A similar result holds for the Vietoris topology.

We conclude by analyzing the same notions from the point of view of effective descriptive set theory and Type-2 Theory of Effectivity, and show that the complexities remain the same in the lightface case. In particular, we show that the family of all the closed Salem sets is $\Pi^0_3$-complete. We furthermore characterize the Weihrauch degree of the functions computing the Hausdorff and Fourier dimension of closed sets.

Abstract prepared by Manlio Valenti. E-mail: manliovalenti@gmail.com
APA, Harvard, Vancouver, ISO, and other styles
8

Miguel-Tomé, Sergio, Ángel L. Sánchez-Lázaro, and Luis Alonso-Romero. "Fundamental Physics and Computation: The Computer-Theoretic Framework." Universe 8, no. 1 (January 11, 2022): 40. http://dx.doi.org/10.3390/universe8010040.

Full text
Abstract:
The central goal of this manuscript is to survey the relationships between fundamental physics and computer science. We begin by providing a short historical review of how different concepts of computer science have entered the field of fundamental physics, highlighting the claim that the universe is a computer. Following the review, we explain why computational concepts have been embraced to interpret and describe physical phenomena. We then discuss seven arguments against the claim that the universe is a computational system and show that those arguments are wrong because of a misunderstanding of the extension of the concept of computation. Afterwards, we address a proposal to solve Hempel’s dilemma using the computability theory but conclude that it is incorrect. After that, we discuss the relationship between the proposals that the universe is a computational system and that our minds are a simulation. Analysing these issues leads us to proposing a new physical principle, called the principle of computability, which claims that the universe is a computational system (not restricted to digital computers) and that computational power and the computational complexity hierarchy are two fundamental physical constants. On the basis of this new principle, a scientific paradigm emerges to develop fundamental theories of physics: the computer-theoretic framework (CTF). The CTF brings to light different ideas already implicit in the work of several researchers and provides a new view on the universe based on computer theoretic concepts that expands the current view. We address different issues regarding the development of fundamental theories of physics in the new paradigm. Additionally, we discuss how the CTF brings new perspectives to different issues, such as the unreasonable effectiveness of mathematics and the foundations of cognitive science.
APA, Harvard, Vancouver, ISO, and other styles
9

Barakat, Mohamed, and Markus Lange-Hegermann. "An Axiomatic Setup for Algorithmic Homological Algebra and an Alternative Approach to Localization." Journal of Algebra and Its Applications 10, no. 02 (April 2011): 269–93. http://dx.doi.org/10.1142/s0219498811004562.

Full text
Abstract:
In this paper we develop an axiomatic setup for algorithmic homological algebra of Abelian categories. This is done by exhibiting all existential quantifiers entering the definition of an Abelian category, which for the sake of computability need to be turned into constructive ones. We do this explicitly for the often-studied example Abelian category of finitely presented modules over a so-called computable ring R, i.e. a ring with an explicit algorithm to solve one-sided (in)homogeneous linear systems over R. For a finitely generated maximal ideal 𝔪 in a commutative ring R, we show how solving (in)homogeneous linear systems over R𝔪 can be reduced to solving associated systems over R. Hence, the computability of R implies that of R𝔪. As a corollary, we obtain the computability of the category of finitely presented R𝔪-modules as an Abelian category, without the need of a Mora-like algorithm. The reduction also yields, as a byproduct, a complexity estimation for the ideal membership problem over local polynomial rings. Finally, in the case of localized polynomial rings, we demonstrate the computational advantage of our homologically motivated alternative approach in comparison to an existing implementation of Mora's algorithm.
APA, Harvard, Vancouver, ISO, and other styles
10

Igusa, Gregory. "Nonexistence of minimal pairs for generic computability." Journal of Symbolic Logic 78, no. 2 (June 2013): 511–22. http://dx.doi.org/10.2178/jsl.7802090.

Full text
Abstract:
A generic computation of a subset A of ℕ consists of a computation that correctly computes most of the bits of A, and never incorrectly computes any bits of A, but which does not necessarily give an answer for every input. The motivation for this concept comes from group theory and complexity theory, but the purely recursion theoretic analysis proves to be interesting, and often counterintuitive. The primary result of this paper is that there are no minimal pairs for generic computability, answering a question of Jockusch and Schupp.
APA, Harvard, Vancouver, ISO, and other styles
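The definition of a generic computation can be illustrated with a finite toy (my own illustration, not an example from the paper): a partial algorithm that answers correctly on a density-1 set of inputs, never answers incorrectly, and gives no answer on a sparse set.

```python
def is_power_of_two(n: int) -> bool:
    return n > 0 and n & (n - 1) == 0

def generic_decide(n: int):
    """Decide membership of n in the toy set A = multiples of 3,
    except on powers of two, where no answer is given (modeling divergence)."""
    if is_power_of_two(n):
        return None        # silent on a density-0 set of inputs
    return n % 3 == 0      # correct everywhere else, never wrong

def answered_density(limit: int) -> float:
    """Fraction of inputs in 1..limit on which the algorithm answers."""
    answered = sum(1 for n in range(1, limit + 1) if generic_decide(n) is not None)
    return answered / limit

# Among 1..2**20 only the ~20 powers of two are skipped, so the density
# of answered inputs is very close to 1.
```

The point of the illustration is the asymmetry in the definition: the algorithm may stay silent, but whenever it speaks it must be right.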

Dissertations / Theses on the topic "Computational complexity and computability"

1

Latsch, Wolfram Wilhelm. "Beyond complexity and evolution : on the limits of computability in economics." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pouly, Amaury. "Continuous models of computation: from computability to complexity." Palaiseau, Ecole polytechnique, 2015. https://theses.hal.science/tel-01223284/document.

Full text
Abstract:
In 1941, Claude Shannon introduced a continuous-time analog model of computation, namely the General Purpose Analog Computer (GPAC). The GPAC is a physically feasible model in the sense that it can be implemented in practice through the use of analog electronics or mechanical devices. It can be proved that the functions computed by a GPAC are precisely the solutions of a special class of differential equations where the right-hand side is a polynomial. Analog computers have since been replaced by their digital counterparts. Nevertheless, one can wonder how the GPAC compares to Turing machines. A few years ago, it was shown that Turing-based paradigms and the GPAC have the same computational power. However, this result did not shed any light on what happens at the computational complexity level. In other words, analog computers do not make a difference about what can be computed; but maybe they could compute faster than a digital computer. A fundamental difficulty of continuous-time models is to define a proper notion of complexity. Indeed, a troubling problem is that many models exhibit the so-called Zeno's phenomenon, also known as space-time contraction. In this thesis, we give several fundamental contributions to these questions. We show that the GPAC has the same computational power as the Turing machine, at the complexity level. We also provide as a side effect a purely analog, machine-independent characterization of P and Computable Analysis.
APA, Harvard, Vancouver, ISO, and other styles
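The characterization of GPAC-computable functions as solutions of polynomial initial value problems can be made concrete with the standard textbook example (a sketch of the general idea, not material from the thesis): sine and cosine arise from the polynomial system y1' = y2, y2' = -y1 with y1(0) = 0, y2(0) = 1, which we integrate numerically with a fixed-step RK4 scheme.

```python
import math

def rk4(f, y0, t1, steps=10000):
    """Fixed-step fourth-order Runge-Kutta integration of y' = f(y) on [0, t1]."""
    h = t1 / steps
    y = list(y0)
    n = len(y)
    for _ in range(steps):
        k1 = f(y)
        k2 = f([y[i] + h / 2 * k1[i] for i in range(n)])
        k3 = f([y[i] + h / 2 * k2[i] for i in range(n)])
        k4 = f([y[i] + h * k3[i] for i in range(n)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(n)]
    return y

# Polynomial right-hand side: y1' = y2, y2' = -y1; solution (sin t, cos t).
poly_rhs = lambda y: [y[1], -y[0]]

y1, y2 = rk4(poly_rhs, [0.0, 1.0], 1.0)  # y1 ~ sin(1), y2 ~ cos(1)
```

Note the contrast the thesis draws: the GPAC computes such solutions exactly in continuous time, whereas a digital machine only approximates them step by step, which is why a fair notion of complexity for the continuous model is delicate.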
3

Pégny, Maël. "Sur les limites empiriques du calcul : calculabilité, complexité et physique." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010673/document.

Full text
Abstract:
Recent years have seen a surge in the interest for non-standard computational models, inspired by physical, biological or chemical phenomena. The exact properties of some of these models have been a topic of somewhat heated discussion: what do they compute? And how fast do they compute? The stakes of these questions were heightened by the claim that these models would violate the accepted limits of computation, by violating the Church-Turing Thesis or the Extended Church-Turing Thesis. To answer these questions, the physical realizability of some of those models - or lack thereof - has often been put at the center of the argument. It thus seems that empirical considerations have been introduced into the very foundations of computability and computational complexity theory, both subjects that would have been previously considered purely a priori parts of logic and computer science. Consequently, this dissertation is dedicated to the following question: do computability and computational complexity theory rest on empirical foundations? If yes, what are these foundations? We will first examine the precise meaning of those limits of computation, and articulate a philosophical conception of computation able to make sense of this variety of models. We then answer the first question by the affirmative, through a careful examination of current debates around non-standard models. We show the various difficulties surrounding the second question, and study how they stem from the complex translation of computational concepts into physical limitations
APA, Harvard, Vancouver, ISO, and other styles
4

Kübel, David J. F. "On some Geometric Search Problems: Algorithms, Complexity, Computability." Bonn: Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1224270606/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Farr, Graham E. "Topics in computational complexity." Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:ad3ed1a4-fea4-4b46-8e7a-a0c6a3451325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Vikas, Narayan. "Computational complexity of graph compaction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq24360.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yamakami, Tomoyuki. "Average case computational complexity theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq28091.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tseng, Hung-Li. "Computational Complexity of Hopfield Networks." Thesis, University of North Texas, 1998. https://digital.library.unt.edu/ark:/67531/metadc278272/.

Full text
Abstract:
There are three main results in this dissertation: PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; and shuffle-exchange), exponential convergence behavior of discrete Hopfield networks, and simulation of Turing machines by discrete Hopfield networks.
APA, Harvard, Vancouver, ISO, and other styles
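The convergence question studied in the dissertation rests on the classical energy argument for discrete Hopfield networks, which a short sketch can demonstrate (this is the textbook argument only, not the PLS-completeness or exponential-convergence results themselves): with symmetric weights and zero diagonal, the energy E = -1/2 * sum_ij w_ij * s_i * s_j never increases under asynchronous single-neuron updates.

```python
import random

def energy(w, s):
    """Standard Hopfield energy for state s in {-1, +1}^n and weights w."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def update(w, s, i):
    """Asynchronously set neuron i to the sign of its local field."""
    field = sum(w[i][j] * s[j] for j in range(len(s)))
    s[i] = 1 if field >= 0 else -1

rng = random.Random(0)
n = 8
# Random symmetric weight matrix with zero diagonal.
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = rng.uniform(-1, 1)

s = [rng.choice([-1, 1]) for _ in range(n)]
energies = [energy(w, s)]
for step in range(100):
    update(w, s, step % n)
    energies.append(energy(w, s))
# The recorded energies form a non-increasing sequence, so the dynamics
# settle into a local minimum; the hard question is how fast.
```

The complexity-theoretic content of the dissertation is precisely that finding such a stable state is PLS-complete under the listed restrictions, and that convergence can take exponentially many steps.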
9

Rubiano, Thomas. "Implicit Computational Complexity and Compilers." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD076/document.

Full text
Abstract:
Complexity theory helps us predict and control resources, usually time and space, consumed by programs. Static analysis on specific syntactic criteria allows us to categorize some programs. A common approach is to observe the behavior of a program's data. For instance, the detection of non-size-increasing programs is based on a simple principle: counting memory allocations and deallocations, particularly in loops. This way, we can detect programs which compute within a constant amount of space. This method can easily be expressed as a property of control flow graphs. Because analyses of data behavior are syntactic, they can be done at compile time. Because they are only static, these analyses are not always computable or easily computable, and approximations are needed. The "Size-Change Principle" of C. S. Lee, N. D. Jones and A. M. Ben-Amram presented a method to predict termination by observing resource evolution, and much research grew from this theory. Until now, these implicit complexity theories have essentially been applied to more or less toy languages. This thesis applies implicit computational complexity methods to "real life" programs by manipulating intermediate representation languages in compilers. This gives an accurate idea of the actual expressivity of these analyses and shows that the implicit computational complexity and compiler communities can fuel each other fruitfully. As we show in this thesis, the methods developed are quite general and open the way to several new applications.
APA, Harvard, Vancouver, ISO, and other styles
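In its simplest form, the allocation-counting idea described in the abstract can be sketched as follows. The opcode names and the flat loop-body representation are invented simplifications here; the thesis works on real compiler intermediate representations and control flow graphs, not on lists of strings.

```python
def net_allocations(body):
    """Net memory growth of one loop iteration, in abstract cells."""
    delta = 0
    for op in body:
        if op == "alloc":
            delta += 1
        elif op == "free":
            delta -= 1
    return delta

def non_size_increasing(body):
    """Accept a loop body whose allocations are balanced by deallocations,
    so iterating it cannot grow memory use."""
    return net_allocations(body) <= 0

constant_space_loop = ["load", "alloc", "compute", "free", "store"]
growing_loop = ["alloc", "alloc", "compute", "free"]
```

Even this crude count conveys why the analysis is purely syntactic and therefore cheap at compile time, and also why approximation is unavoidable: a body with a positive net count might still be fine if the extra allocation is freed after the loop, which a per-iteration count cannot see.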
10

Okabe, Yasuo. "Parallel Computational Complexity and Data-Transfer Complexity of Supercomputing." Kyoto University, 1994. http://hdl.handle.net/2433/74658.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Computational complexity and computability"

1

Börger, E. Computability, complexity, logic. Amsterdam: North-Holland, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nies, André. Computability and randomness. Oxford: Oxford University Press, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Computability and randomness. New York: Oxford University Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jones, Neil D. Computability and complexity: From a programming perspective. Cambridge, Mass: MIT Press, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rich, Elaine. Automata, computability and complexity: Theory and applications. Upper Saddle River, N.J: Pearson Prentice Hall, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sigal, Ron, and Elaine J. Weyuker, eds. Computability, complexity, and languages: Fundamentals of theoretical computer science. 2nd ed. Boston: Academic Press, Harcourt, Brace, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Petzold, Charles. The annotated Turing: A guided tour through Alan Turing's historic paper on computability. Indianapolis, IN: Wiley Pub., 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Blanck, Jens, Vasco Brattka, and Peter Hertling, eds. Computability and complexity in analysis: 4th international workshop, CCA 2000, Swansea, UK, September 17-19, 2000: selected papers. Berlin: Springer, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Calude, Cristian, M. J. Dinneen, and Silviu Sburlan, eds. Combinatorics, computability, and logic: Proceedings of the Third International Conference on Combinatorics, Computability, and Logic (DMTCS '01). London: Springer, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

A first course in logic: An introduction to model theory, proof theory, computability, and complexity. Oxford: Oxford University Press, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Computational complexity and computability"

1

Weihrauch, Klaus. "Computational Complexity." In Computability, 306–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/978-3-642-69965-8_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Norman, Alfred Lorn. "Computability, Complexity and Economics." In Computational Techniques for Econometrics and Economic Analysis, 89–108. Dordrecht: Springer Netherlands, 1994. http://dx.doi.org/10.1007/978-94-015-8372-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tsuiki, Hideki. "Computational Dimension of Topological Spaces." In Computability and Complexity in Analysis, 323–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45335-0_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Weihrauch, Klaus. "Constructivity, computability, and computational complexity in analysis." In Fundamentals of Computation Theory, 480–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51498-8_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kohlenbach, Ulrich. "On the Computational Content of the Krasnoselski and Ishikawa Fixed Point Theorems." In Computability and Complexity in Analysis, 119–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45335-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Selivanova, Svetlana. "Computational Complexity of Classical Solutions of Partial Differential Equations." In Revolutions and Revelations in Computability, 299–312. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08740-0_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wainer, Stanley S. "Computability — Logical and Recursive Complexity." In Logic, Algebra, and Computation, 237–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-642-76799-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bournez, Olivier, Daniel S. Graça, Amaury Pouly, and Ning Zhong. "Computability and Computational Complexity of the Evolution of Nonlinear Dynamical Systems." In Lecture Notes in Computer Science, 12–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39053-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bridges, Douglas S. "Abstract Complexity Theory." In Computability, 93–115. New York, NY: Springer New York, 1994. http://dx.doi.org/10.1007/978-1-4612-0863-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Allender, Eric. "The Complexity of Complexity." In Computability and Complexity, 79–94. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-50062-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computational complexity and computability"

1

Py, Mônica Xavier, Laira Vieira Toscani, Luís C. Lamb, and Tiarajú Asmuz Diverio. "Learning Parallel Computing Concepts via a Turing Machine Simulator." In Simpósio de Arquitetura de Computadores e Processamento de Alto Desempenho. Sociedade Brasileira de Computação, 2001. http://dx.doi.org/10.5753/sbac-pad.2001.22201.

Full text
Abstract:
It is well known that technological developments in computing science have led to new developments in computing theory, and vice versa. This work is a contribution towards the formalisation of parallel processing concepts. We exploit variations of the Turing machine model, referred to as natural extensions, which may be composed of several control units, tapes, and heads, so that a variety of parallel Turing machine models can be defined. Several notions and definitions of parallelism are identified, including computability, complexity, and performance of computations. A prototype of the parallel Turing machine model is presented. This prototype was used as a test bed for assessing the usefulness of these machine models as a parallel-processing teaching tool, as well as a tool for fostering a "culture" of high-performance computing among students. Finally, we analyse the notions of performance, speedup, efficiency, load balancing, synchronisation, and communication in parallel processing.
APA, Harvard, Vancouver, ISO, and other styles
2

del Vado Vírseda, Rafael. "Computability and Algorithmic Complexity Questions in Secondary Education." In CompEd '19: ACM Global Computing Education Conference 2019. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3300115.3309507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yan, Song Y. "Computability, Learnability and Breakability in Cryptanalysis." In 2008 International Conference on Computational Intelligence for Modelling Control & Automation. IEEE, 2008. http://dx.doi.org/10.1109/cimca.2008.206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chakaravarthy, Venkatesan T. "New results on the computability and complexity of points-to analysis." In the 30th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '03). New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/604131.604142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peng, Weiguang. "A Survey of the Distributional Complexity for AND-OR Trees." In The 9th International Conference on Computability Theory and Foundations of Mathematics. WORLD SCIENTIFIC, 2022. http://dx.doi.org/10.1142/9789811259296_0004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dobzinski, Shahar, and Jan Vondrak. "From query complexity to computational complexity." In the 44th Symposium on Theory of Computing (STOC '12). New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2213977.2214076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Decatur, Scott, Oded Goldreich, and Dana Ron. "Computational sample complexity." In the Tenth Annual Conference on Computational Learning Theory (COLT '97). New York, New York, USA: ACM Press, 1997. http://dx.doi.org/10.1145/267460.267489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Goldreich, Oded, Rafail Ostrovsky, and Erez Petrank. "Computational complexity and knowledge complexity (extended abstract)." In the Twenty-Sixth Annual ACM Symposium on Theory of Computing (STOC '94). New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/195058.195406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Linial, Nati, and Adi Shraibman. "Learning Complexity vs. Communication Complexity." In 2008 23rd Annual IEEE Conference on Computational Complexity. IEEE, 2008. http://dx.doi.org/10.1109/ccc.2008.28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Goldsmith, Simon F., Alex S. Aiken, and Daniel S. Wilkerson. "Measuring empirical computational complexity." In the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE '07). New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1287624.1287681.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Computational complexity and computability"

1

Coward, John R. Computational Complexity and Encryption. Fort Belvoir, VA: Defense Technical Information Center, February 1995. http://dx.doi.org/10.21236/ada291910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lidar, Daniel. Quantum Computational Complexity of Spin Glasses. Fort Belvoir, VA: Defense Technical Information Center, March 2011. http://dx.doi.org/10.21236/ada545157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Heggernes, P., S. C. Eisenstat, G. Kumfert, and A. Pothen. The Computational Complexity of the Minimum Degree Algorithm. Office of Scientific and Technical Information (OSTI), December 2001. http://dx.doi.org/10.2172/15002765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hart, W. E. On the computational complexity of sequence design problems. Office of Scientific and Technical Information (OSTI), December 1996. http://dx.doi.org/10.2172/425316.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Harris, D. Computational Complexity of Subspace Detectors and Matched Field Processing. Office of Scientific and Technical Information (OSTI), December 2010. http://dx.doi.org/10.2172/1017995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lumsdaine, Robin, James Stock, and David Wise. Three Models of Retirement: Computational Complexity Versus Predictive Validity. Cambridge, MA: National Bureau of Economic Research, December 1990. http://dx.doi.org/10.3386/w3558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Barlow, R. E., and S. Iyer. Computational Complexity of Coherent Systems and the Reliability Polynomial. Fort Belvoir, VA: Defense Technical Information Center, July 1985. http://dx.doi.org/10.21236/ada158689.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Reif, John H. Computational Complexity and Efficiency in Electro-Optical Computing Systems. Fort Belvoir, VA: Defense Technical Information Center, June 1990. http://dx.doi.org/10.21236/ada229195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baader, Franz, and Oliver Fernández Gil. Decidability and Complexity of Threshold Description Logics Induced by Concept Similarity Measures. Technische Universität Dresden, 2016. http://dx.doi.org/10.25368/2022.229.

Full text
Abstract:
In a recent research paper, we have proposed an extension of the lightweight Description Logic (DL) EL in which concepts can be defined in an approximate way. To this purpose, the notion of a graded membership function m, which instead of a Boolean membership value 0 or 1 yields a membership degree from the interval [0,1], was introduced. Threshold concepts can then, for example, require that an individual belongs to a concept C with degree at least 0.8. Reasoning in the threshold DL τEL(m) obtained this way of course depends on the employed graded membership function m. The paper defines a specific such function, called deg, and determines the exact complexity of reasoning in τEL(deg). In addition, it shows how concept similarity measures (CSMs) ∼ satisfying certain properties can be used to define graded membership functions m∼, but it does not investigate the complexity of reasoning in the induced threshold DLs τEL(m∼). In the present paper, we start filling this gap. In particular, we show that computability of ∼ implies decidability of τEL(m∼), and we introduce a class of CSMs for which reasoning in the induced threshold DLs has the same complexity as in τEL(deg).
APA, Harvard, Vancouver, ISO, and other styles
10

Murenzi, Romain, Lance Kaplan, Jean-Pierre Antoine, and Fernando Mujica. Computational Complexity of the Continuous Wavelet Transform in Two Dimensions. Fort Belvoir, VA: Defense Technical Information Center, January 1998. http://dx.doi.org/10.21236/ada358633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
