Journal articles on the topic 'Computational complexity and computability'

Consult the top 50 journal articles for your research on the topic 'Computational complexity and computability.'

1

Dean, Walter. "Computational Complexity Theory and the Philosophy of Mathematics." Philosophia Mathematica 27, no. 3 (October 1, 2019): 381–439. http://dx.doi.org/10.1093/philmat/nkz021.

Abstract:
Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ problem and why it has proven hard to resolve, and the role of non-classical modes of computation and proof.
2

Japaridze, Giorgi. "Arithmetics based on computability logic." Logical Investigations 25, no. 2 (December 23, 2019): 61–74. http://dx.doi.org/10.21146/2074-1472-2019-25-2-61-74.

Abstract:
This paper is a brief survey of number theories based on computability logic (CoL), a game-semantically conceived logic of computational tasks and resources. Such theories, termed clarithmetics, are conservative extensions of first-order Peano arithmetic. The first section of the paper lays out the conceptual basis of CoL and describes the relevant fragment of its formal language, with so-called parallel connectives, choice connectives and quantifiers, and blind quantifiers. Both syntactically and semantically, this is a conservative generalization of the language of classical logic. Clarithmetics, based on the corresponding fragment of CoL in the same sense as Peano arithmetic is based on classical logic, are discussed in the second section. The axioms and inference rules of the system of clarithmetic named ${\bf CLA11}$ are presented, and the main results on this system are stated: constructive soundness, extensional completeness, and intensional completeness. In the final section two potential applications of clarithmetics are addressed: clarithmetics as declarative programming languages in an extreme sense, and as tools for separating computational complexity classes. When clarithmetics or similar CoL-based theories are viewed as programming languages, programming reduces to proof-search, as programs can be mechanically extracted from proofs; such programs also serve as their own formal verifications, thus fully neutralizing the notorious (and generally undecidable) program verification problem. The second application reduces the problem of separating various computational complexity classes to separating the corresponding versions of clarithmetic, the potential benefits of which stem from the belief that separating theories should generally be easier than separating complexity classes directly.
3

Reif, J. H., J. D. Tygar, and A. Yoshida. "Computability and complexity of ray tracing." Discrete & Computational Geometry 11, no. 3 (March 1994): 265–88. http://dx.doi.org/10.1007/bf02574009.

4

Yampolsky, Michael. "Towards understanding the theoretical challenges of numerical modeling of dynamical systems." New Zealand Journal of Mathematics 52 (September 19, 2021): 453–67. http://dx.doi.org/10.53733/129.

Abstract:
Asking and answering the right questions about computability and computational complexity of dynamical systems is more important than many may realize. In this note, I discuss the principal challenges, their implications, and some instructive open problems.
5

Beggs, Edwin, José Félix Costa, and John V. Tucker. "Three Forms of Physical Measurement and Their Computability." Review of Symbolic Logic 7, no. 4 (September 9, 2014): 618–46. http://dx.doi.org/10.1017/s1755020314000240.

Abstract:
We have begun a theory of measurement in which an experimenter and his or her experimental procedure are modeled by algorithms that interact with physical equipment through a simple abstract interface. The theory is based upon using models of physical equipment as oracles to Turing machines. This allows us to investigate the computability and computational complexity of measurement processes. We examine eight different experiments that make measurements and, by introducing the idea of an observable indicator, we identify three distinct forms of measurement process and three types of measurement algorithm. We give axiomatic specifications of three forms of interfaces that enable the three types of experiment to be used as oracles to Turing machines, and lemmas that help certify an experiment satisfies the axiomatic specifications. For experiments that satisfy our axiomatic specifications, we give lower bounds on the computational power of Turing machines in polynomial time using nonuniform complexity classes. These lower bounds break the barrier defined by the Church-Turing Thesis.
6

Blum, Manuel, and Santosh Vempala. "The complexity of human computation via a concrete model with an application to passwords." Proceedings of the National Academy of Sciences 117, no. 17 (April 14, 2020): 9208–15. http://dx.doi.org/10.1073/pnas.1801839117.

Abstract:
What can humans compute in their heads? We are thinking of a variety of cryptographic protocols, games like sudoku, crossword puzzles, speed chess, and so on. For example, can a person compute a function in his or her head so that an eavesdropper with a powerful computer—who sees the responses to random inputs—still cannot infer responses to new inputs? To address such questions, we propose a rigorous model of human computation and associated measures of complexity. We apply the model and measures first and foremost to the problem of 1) humanly computable password generation and then consider related problems of 2) humanly computable “one-way functions” and 3) humanly computable “pseudorandom generators.” The theory of human computability developed here plays by different rules than standard computability; the polynomial vs. exponential time divide of modern computability theory is irrelevant to human computation. In human computability, the step counts for both humans and computers must be more concrete. As an application and running example, password generation schemas are humanly computable algorithms based on private keys. Humanly computable and/or humanly usable mean, roughly speaking, that any human needing—and capable of using—passwords can, if sufficiently motivated, generate and memorize a secret key in less than 1 h (including all rehearsals) and can subsequently use schema plus key to transform website names (challenges) into passwords (responses) in less than 1 min. Moreover, the schemas have precisely defined measures of security against all adversaries, human and/or machine.
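The schema shape described above is concrete enough to sketch. The following Python toy is not Blum and Vempala's construction; it only illustrates the challenge-response pattern the abstract describes: a memorized secret key (here a random letter-to-digit map) turns a website name into a password via mental arithmetic. The key design, rule, and names are illustrative assumptions.

```python
# Toy challenge-response password schema: NOT Blum and Vempala's construction,
# only an illustration of the shape described in the abstract. The secret key
# is a memorized random letter -> digit map; the response chains digits with a
# running sum mod 10, which is feasible mental arithmetic.
import random
import string

def make_key(seed):
    """Secret key: a random letter -> digit assignment the user memorizes."""
    rng = random.Random(seed)
    return {c: rng.randrange(10) for c in string.ascii_lowercase}

def password(key, challenge):
    """Map each letter of the site name through the key, chaining mod 10."""
    acc, out = 0, []
    for c in challenge.lower():
        if c in key:
            acc = (acc + key[c]) % 10
            out.append(str(acc))
    return "".join(out)

key = make_key(seed=42)
print(password(key, "example"))  # the same site name always gives the same password
```

A real schema must additionally come with quantified security against adversaries who observe many challenge-response pairs, which is what the paper's concrete step-count and security measures are for.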
7

Valenti, Manlio. "A journey through computability, topology and analysis." Bulletin of Symbolic Logic 28, no. 2 (June 2022): 266–67. http://dx.doi.org/10.1017/bsl.2022.13.

Abstract:
This thesis is devoted to the exploration of the complexity of some mathematical problems using the framework of computable analysis and (effective) descriptive set theory. We will especially focus on Weihrauch reducibility as a means to compare the uniform computational strength of problems. After a short introduction of the relevant background notions, we investigate the uniform computational content of problems arising from theorems that lie at the higher levels of the reverse mathematics hierarchy.

We first analyze the strength of the open and clopen Ramsey theorems. Since there is not a canonical way to phrase these theorems as multi-valued functions, we identify eight different multi-valued functions (five corresponding to the open Ramsey theorem and three corresponding to the clopen Ramsey theorem) and study their degree from the point of view of Weihrauch, strong Weihrauch, and arithmetic Weihrauch reducibility.

We then discuss some new operators on multi-valued functions and study their algebraic properties and the relations with other previously studied operators on problems. In particular, we study the first-order part and the deterministic part of a problem $f$, capturing the Weihrauch degree of the strongest multi-valued problem that is reducible to $f$ and that, respectively, has codomain $\mathbb{N}$ or is single-valued.

These notions proved to be extremely useful when exploring the Weihrauch degree of the problem $\mathsf{DS}$ of computing descending sequences in ill-founded linear orders. They allow us to show that $\mathsf{DS}$, and the Weihrauch-equivalent problem $\mathsf{BS}$ of finding bad sequences through non-well quasi-orders, while being very “hard” to solve, are rather weak in terms of uniform computational strength. We then generalize $\mathsf{DS}$ and $\mathsf{BS}$ by considering $\boldsymbol{\Gamma}$-presented orders, where $\boldsymbol{\Gamma}$ is a Borel pointclass or $\boldsymbol{\Delta}^1_1$, $\boldsymbol{\Sigma}^1_1$, $\boldsymbol{\Pi}^1_1$. We study the obtained $\mathsf{DS}$-hierarchy and $\mathsf{BS}$-hierarchy of problems in comparison with the (effective) Baire hierarchy and show that they do not collapse at any finite level.

Finally, we work in the context of geometric measure theory and we focus on the characterization, from the point of view of descriptive set theory, of some conditions involving the notions of Hausdorff/Fourier dimension and Salem sets. We first work in the hyperspace $\mathbf{K}([0,1])$ of compact subsets of $[0,1]$ and show that the closed Salem sets form a $\boldsymbol{\Pi}^0_3$-complete family. This is done by characterizing the complexity of the family of sets having sufficiently large Hausdorff or Fourier dimension. We also show that the complexity does not change if we increase the dimension of the ambient space and work in $\mathbf{K}([0,1]^d)$. We also generalize the results by relaxing the compactness of the ambient space and show that the closed Salem sets are still $\boldsymbol{\Pi}^0_3$-complete when we endow $\mathbf{F}(\mathbb{R}^d)$ with the Fell topology. A similar result holds also for the Vietoris topology.

We conclude by analyzing the same notions from the point of view of effective descriptive set theory and Type-2 Theory of Effectivity, and show that the complexities remain the same also in the lightface case. In particular, we show that the family of all the closed Salem sets is $\Pi^0_3$-complete. We furthermore characterize the Weihrauch degree of the functions computing Hausdorff and Fourier dimension of closed sets.

Abstract prepared by Manlio Valenti. E-mail: manliovalenti@gmail.com
8

Miguel-Tomé, Sergio, Ángel L. Sánchez-Lázaro, and Luis Alonso-Romero. "Fundamental Physics and Computation: The Computer-Theoretic Framework." Universe 8, no. 1 (January 11, 2022): 40. http://dx.doi.org/10.3390/universe8010040.

Abstract:
The central goal of this manuscript is to survey the relationships between fundamental physics and computer science. We begin by providing a short historical review of how different concepts of computer science have entered the field of fundamental physics, highlighting the claim that the universe is a computer. Following the review, we explain why computational concepts have been embraced to interpret and describe physical phenomena. We then discuss seven arguments against the claim that the universe is a computational system and show that those arguments are wrong because of a misunderstanding of the extension of the concept of computation. Afterwards, we address a proposal to solve Hempel’s dilemma using computability theory, but conclude that it is incorrect. After that, we discuss the relationship between the proposals that the universe is a computational system and that our minds are a simulation. Analysing these issues leads us to propose a new physical principle, called the principle of computability, which claims that the universe is a computational system (not restricted to digital computers) and that computational power and the computational complexity hierarchy are two fundamental physical constants. On the basis of this new principle, a scientific paradigm emerges to develop fundamental theories of physics: the computer-theoretic framework (CTF). The CTF brings to light different ideas already implicit in the work of several researchers and provides a new view on the universe based on computer-theoretic concepts that expands the current view. We address different issues regarding the development of fundamental theories of physics in the new paradigm. Additionally, we discuss how the CTF brings new perspectives to different issues, such as the unreasonable effectiveness of mathematics and the foundations of cognitive science.
9

Barakat, Mohamed, and Markus Lange-Hegermann. "An Axiomatic Setup for Algorithmic Homological Algebra and an Alternative Approach to Localization." Journal of Algebra and Its Applications 10, no. 02 (April 2011): 269–93. http://dx.doi.org/10.1142/s0219498811004562.

Abstract:
In this paper we develop an axiomatic setup for algorithmic homological algebra of Abelian categories. This is done by exhibiting all existential quantifiers entering the definition of an Abelian category, which for the sake of computability need to be turned into constructive ones. We do this explicitly for the often-studied example of the Abelian category of finitely presented modules over a so-called computable ring R, i.e. a ring with an explicit algorithm to solve one-sided (in)homogeneous linear systems over R. For a finitely generated maximal ideal 𝔪 in a commutative ring R, we show how solving (in)homogeneous linear systems over R𝔪 can be reduced to solving associated systems over R. Hence, the computability of R implies that of R𝔪. As a corollary, we obtain the computability of the category of finitely presented R𝔪-modules as an Abelian category, without the need of a Mora-like algorithm. The reduction also yields, as a byproduct, a complexity estimate for the ideal membership problem over local polynomial rings. Finally, in the case of localized polynomial rings, we demonstrate the computational advantage of our homologically motivated alternative approach in comparison to an existing implementation of Mora's algorithm.
10

Igusa, Gregory. "Nonexistence of minimal pairs for generic computability." Journal of Symbolic Logic 78, no. 2 (June 2013): 511–22. http://dx.doi.org/10.2178/jsl.7802090.

Abstract:
A generic computation of a subset A of ℕ consists of a computation that correctly computes most of the bits of A, and never incorrectly computes any bits of A, but which does not necessarily give an answer for every input. The motivation for this concept comes from group theory and complexity theory, but the purely recursion theoretic analysis proves to be interesting, and often counterintuitive. The primary result of this paper is that there are no minimal pairs for generic computability, answering a question of Jockusch and Schupp.
11

Arrighi, Pablo, and Gilles Dowek. "The Physical Church-Turing Thesis and the Principles of Quantum Theory." International Journal of Foundations of Computer Science 23, no. 05 (August 2012): 1131–45. http://dx.doi.org/10.1142/s0129054112500153.

Abstract:
As was emphasized by Deutsch, quantum computation shatters complexity theory, but is innocuous to computability theory. Yet Nielsen and others have shown how quantum theory as it stands could breach the physical Church-Turing thesis. We draw a clear line as to when this is the case, in a way that is inspired by Gandy. Gandy formulates postulates about physics, such as homogeneity of space and time, bounded density and velocity of information — and proves that the physical Church-Turing thesis is a consequence of these postulates. We provide a quantum version of the theorem. Thus this approach exhibits a formal non-trivial interplay between theoretical physics symmetries and computability assumptions.
12

Webb, Marcus, and Sheehan Olver. "Spectra of Jacobi Operators via Connection Coefficient Matrices." Communications in Mathematical Physics 382, no. 2 (February 22, 2021): 657–707. http://dx.doi.org/10.1007/s00220-021-03939-w.

Abstract:
We address the computational spectral theory of Jacobi operators that are compact perturbations of the free Jacobi operator via the asymptotic properties of a connection coefficient matrix. In particular, for Jacobi operators that are finite-rank perturbations we show that the computation of the spectrum can be reduced to a polynomial root finding problem, from a polynomial that is derived explicitly from the entries of a connection coefficient matrix. A formula for the spectral measure of the operator is also derived explicitly from these entries. The analysis is extended to trace-class perturbations. We address issues of computability in the framework of the Solvability Complexity Index, proving that the spectrum of compact perturbations of the free Jacobi operator is computable in finite time with guaranteed error control in the Hausdorff metric on sets.
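For intuition only, here is a naive numerical sketch (assuming NumPy) of the objects in this abstract. It is not the paper's connection coefficient method and comes with none of its guaranteed error control; plain truncation merely suggests the essential spectrum [-1, 1] of the free Jacobi operator plus any discrete eigenvalues created by a finite-rank perturbation.

```python
# Naive illustration only: approximate the spectrum of a Jacobi operator that
# is a finite-rank perturbation of the free Jacobi operator (off-diagonal 1/2,
# diagonal 0, essential spectrum [-1, 1]) by diagonalizing a finite truncation.
import numpy as np

def truncated_jacobi(n, a_pert=(), b_pert=()):
    a = np.full(n - 1, 0.5)            # free off-diagonal entries
    b = np.zeros(n)                    # free diagonal entries
    a[: len(a_pert)] = a_pert          # finitely many perturbed off-diagonal entries
    b[: len(b_pert)] = b_pert          # finitely many perturbed diagonal entries
    return np.diag(a, -1) + np.diag(b) + np.diag(a, 1)

J = truncated_jacobi(1000, b_pert=(1.5,))   # rank-one diagonal perturbation
eigs = np.linalg.eigvalsh(J)
print(eigs.max())  # about 5/3: a discrete eigenvalue outside [-1, 1]
```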
13

Song, Bosheng, Kenli Li, David Orellana-Martín, Mario J. Pérez-Jiménez, and Ignacio Pérez-Hurtado. "A Survey of Nature-Inspired Computing." ACM Computing Surveys 54, no. 1 (April 2021): 1–31. http://dx.doi.org/10.1145/3431234.

Abstract:
Nature-inspired computing is a type of human-designed computing motivated by nature, based on the use of paradigms, mechanisms, and principles underlying natural systems. In this article, a versatile and vigorous bio-inspired branch of natural computing, named membrane computing, is discussed. This computing paradigm is inspired by the internal membrane function and the structure of biological cells. We first introduce some basic concepts and formalisms of membrane computing, and then some basic types or variants of P systems (also named membrane systems) are presented. The state of the art in computability theory and a pioneering computational complexity theory are presented within P system frameworks, and numerous solutions to hard computational problems (especially NP-complete problems) via P systems with membrane division are reported. Finally, a number of applications and open problems of P systems are briefly described.
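As a taste of the formalism surveyed above, here is a deliberately minimal Python sketch of one membrane's multiset rewriting, the core mechanic of P systems. Real membrane systems apply rules nondeterministically and maximally in parallel across a whole membrane hierarchy; this greedy single-membrane toy only conveys the flavor.

```python
# Minimal single-membrane sketch of multiset rewriting in a P system.
from collections import Counter

def step(multiset, rules):
    """Apply each rule (lhs -> rhs) as many times as the multiset allows."""
    out = Counter(multiset)
    for lhs, rhs in rules:
        need = Counter(lhs)
        times = min(out[s] // n for s, n in need.items())
        for s, n in need.items():
            out[s] -= n * times
        for s in rhs:
            out[s] += times
    return +out  # drop zero counts

print(step(Counter("aaa"), [("a", "bb")]))  # Counter({'b': 6}): each a -> bb
```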
14

Alford, Ron, Vikas Shivashankar, Ugur Kuter, and Dana Nau. "On the Feasibility of Planning Graph Style Heuristics for HTN Planning." Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 10, 2014): 2–10. http://dx.doi.org/10.1609/icaps.v24i1.13630.

Abstract:
In classical planning, the polynomial-time computability of propositional delete-free planning (planning with only positive effects and preconditions) led to the highly successful Relaxed Graphplan heuristic. We present a hierarchy of new computational complexity results for different classes of propositional delete-free HTN planning, with two main results: We prove that finding a plan for the delete-relaxation of a propositional HTN problem is NP-complete: hence unless P=NP, there is no directly analogous GraphPlan heuristic for HTN planning. However, a further relaxation of HTN planning (delete-free HTN planning with task insertion) is polynomial-time computable. Thus, there may be a possibility of using this or other relaxations to develop search heuristics for HTN planning.
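The polynomial-time fact the abstract builds on is easy to see in code: in delete-free classical planning the set of reachable facts only grows, so a fixpoint computation decides goal reachability. The sketch below uses my own illustrative encoding, not the paper's; the paper's point is that the corresponding HTN relaxation is NP-complete.

```python
# Polynomial-time baseline behind the Relaxed GraphPlan heuristic: with only
# positive preconditions and effects, reachable facts grow monotonically,
# so a simple fixpoint decides goal reachability.
def delete_free_reachable(init, goal, actions):
    """actions: list of (preconditions, add_effects), both sets of facts."""
    facts = set(init)
    changed = True
    while changed:                      # at most one new fact per useful pass
        changed = False
        for pre, add in actions:
            if pre <= facts and not add <= facts:
                facts |= add
                changed = True
    return goal <= facts

acts = [({"at_home"}, {"has_keys"}), ({"has_keys"}, {"car_started"})]
print(delete_free_reachable({"at_home"}, {"car_started"}, acts))  # True
```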
15

Marlowe, Thomas J., and Laura Schoppmann. "Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula." Mathematical Problems in Engineering 4, no. 3 (1998): 247–66. http://dx.doi.org/10.1155/s1024123x98000817.

Abstract:
Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time and non-polynomial reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
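To see the kind of cost the invariant-based approach avoids, here is a brute-force baseline (my illustration, not Gilbert's recursion): with independently failing edges, all-terminal reliability sums over every edge subset that leaves the graph connected, which is exponential in the number of edges.

```python
# Brute-force all-terminal edge reliability: each edge works independently
# with probability p; reliability is the probability the surviving graph
# is connected. Enumerating all 2^|E| subsets is the exponential cost that
# smarter recursions aim to avoid.
from itertools import combinations

def connected(nodes, edges):
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b); stack.append(b)
            elif b == u and a not in seen:
                seen.add(a); stack.append(a)
    return seen == nodes

def reliability(nodes, edges, p):
    return sum(
        p ** len(s) * (1 - p) ** (len(edges) - len(s))
        for k in range(len(edges) + 1)
        for s in combinations(edges, k)
        if connected(nodes, s)
    )

print(reliability({0, 1, 2}, [(0, 1), (1, 2), (0, 2)], 0.9))  # 0.972 for a triangle
```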
16

Brattka, Vasco, and Vladik Kreinovich. "Computability and Complexity in Analysis (CCA). A View from Interval Computations - Cincinnati, Ohio, USA, August 28–30, 2003." Reliable Computing 10, no. 1 (2004): 75–80. http://dx.doi.org/10.1023/b:reom.0000004035.08919.ed.

17

Armoni, Michal, Judith Gal-Ezer, and Dina Tirosh. "Solving Problems Reductively." Journal of Educational Computing Research 32, no. 2 (March 2005): 113–29. http://dx.doi.org/10.2190/6pcm-447v-wf7b-qeuf.

Abstract:
Solving problems by reduction is an important issue in mathematics and science education in general (both in high school and in college or university) and particularly in computer science education. Developing reductive thinking patterns is an important goal in any scientific discipline, yet reduction is not an easy subject to cope with. Still, the use of reduction usually is insufficiently reflected in high school mathematics and science programs. Even in academic computer science programs the concept of reduction is mentioned explicitly only in advanced academic courses such as computability and complexity theory. However, reduction can be applied in other courses as well, even on the high school level. Specifically, in the field of computational models, reduction is an important method for solving design and proof problems. This study focuses on high school students studying the unit “computational models”—a unique unit, which is part of the new Israeli computer science high school curriculum. We examined whether high school students tend to solve problems dealing with computational models reductively, and if they do, what is the nature of their reductive solutions. To the best of our knowledge, the tendency to reductive thinking in theoretical computer science has not been studied before. Our findings show that even though many students use reduction, many others prefer non-reductive solutions, even when reduction can significantly decrease the technical complexity of the solution. We discuss these findings and suggest possible ways to improve reductive thinking.
18

Arenas, Marcelo, Pablo Barceló, and Mikaël Monet. "The Complexity of Counting Problems Over Incomplete Databases." ACM Transactions on Computational Logic 22, no. 4 (October 31, 2021): 1–52. http://dx.doi.org/10.1145/3461642.

Abstract:
We study the complexity of various fundamental counting problems that arise in the context of incomplete databases, i.e., relational databases that can contain unknown values in the form of labeled nulls. Specifically, we assume that the domains of these unknown values are finite and, for a Boolean query q, we consider the following two problems: given as input an incomplete database D, (a) return the number of completions of D that satisfy q; or (b) return the number of valuations of the nulls of D yielding a completion that satisfies q. We obtain dichotomies between #P-hardness and polynomial-time computability for these problems when q is a self-join-free conjunctive query, and study the impact on the complexity of the following two restrictions: (1) every null occurs at most once in D (what is called Codd tables); and (2) the domain of each null is the same. Roughly speaking, we show that counting completions is much harder than counting valuations: for instance, while the latter is always in #P, we prove that the former is not in #P under some widely believed theoretical complexity assumption. Moreover, we find that both (1) and (2) can reduce the complexity of our problems. We also study the approximability of these problems and show that, while counting valuations always has a fully polynomial-time randomized approximation scheme (FPRAS), in most cases counting completions does not. Finally, we consider more expressive query languages and situate our problems with respect to known complexity classes.
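Problem (b) from the abstract, counting valuations, can be stated in a few lines of naive Python: enumerate every assignment of the labeled nulls over their finite domain and count those whose completion satisfies the query. This is exponential in the number of nulls; the paper's dichotomies identify when one can do better. The encoding and query below are illustrative assumptions, not the paper's formalism.

```python
# Naive valuation counter for a table with labeled nulls and a Boolean query.
from itertools import product

def count_valuations(rows, nulls, domain, query):
    """rows: tuples that may contain null labels from `nulls`."""
    count = 0
    for values in product(domain, repeat=len(nulls)):
        env = dict(zip(nulls, values))
        completion = [tuple(env.get(x, x) for x in row) for row in rows]
        count += query(completion)
    return count

rows = [("a", "n1"), ("n2", "b")]                 # n1, n2 are labeled nulls
q = lambda db: any(x == y for x, y in db)         # query: some tuple (x, x)
print(count_valuations(rows, ["n1", "n2"], ["a", "b"], q))  # 3 of 4 valuations
```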
19

Wang, Pei. "Problem Solving with Insufficient Resources." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12, no. 05 (October 2004): 673–700. http://dx.doi.org/10.1142/s0218488504003144.

Abstract:
A new approach, "controlled concurrency," is introduced for inference control in an adaptive reasoning system working with insufficient knowledge and resources. With this method, a problem-solving process is constructed from atomic steps in run time, according to the system's past experience and the current context. The system carries out many such processes in parallel by distributing its resources among them, and dynamically adjusting the distribution according to feedback. A data structure, "bag," is designed to support this dynamic time-space allocation, and is a kind of probabilistic priority queue. This approach provides a flexible, efficient, and adaptive control mechanism for real-time systems working with uncertain knowledge. To analyze problem solving in such a system, the traditional computability theory and computational complexity theory become inappropriate, because the system no longer follows problem-specific algorithms in problem solving.
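The "bag" described above invites a small sketch. The Python class below is one plausible reading, not the paper's exact data structure: a probabilistic priority queue in which take() samples an item with probability proportional to its priority, and feedback adjusts priorities to redistribute time-space resources among competing processes.

```python
# One plausible reading of a "bag": a probabilistic priority queue.
import random

class Bag:
    def __init__(self):
        self.items = {}                 # item -> positive priority

    def put(self, item, priority):
        self.items[item] = priority

    def adjust(self, item, factor):
        """Feedback: strengthen or weaken an item's claim on resources."""
        if item in self.items:
            self.items[item] *= factor

    def take(self):
        """Sample an item with probability proportional to its priority."""
        items, weights = zip(*self.items.items())
        return random.choices(items, weights=weights, k=1)[0]

bag = Bag()
bag.put("process_A", 0.7)
bag.put("process_B", 0.3)
print(bag.take())  # "process_A" roughly 70% of the time
```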
20

Hayashi, Masahito, and Shun Watanabe. "Finite-Length Analyses for Source and Channel Coding on Markov Chains." Entropy 22, no. 4 (April 18, 2020): 460. http://dx.doi.org/10.3390/e22040460.

Abstract:
We derive finite-length bounds for two problems with Markov chains: source coding with side-information where the source and side-information are a joint Markov chain and channel coding for channels with Markovian conditional additive noise. For this purpose, we point out two important aspects of finite-length analysis that must be argued when finite-length bounds are proposed. The first is the asymptotic tightness, and the other is the efficient computability of the bound. Then, we derive finite-length upper and lower bounds for the coding length in both settings such that their computational complexity is low. We argue the first of the above-mentioned aspects by deriving the large deviation bounds, the moderate deviation bounds, and second-order bounds for these two topics and show that these finite-length bounds achieve the asymptotic optimality in these senses. Several kinds of information measures for transition matrices are introduced for the purpose of this discussion.
21

Dean, Walter. "Strict Finitism, Feasibility, and the Sorites." Review of Symbolic Logic 11, no. 2 (June 2018): 295–346. http://dx.doi.org/10.1017/s1755020318000163.

Abstract:
This article bears on four topics: observational predicates and phenomenal properties, vagueness, strict finitism as a philosophy of mathematics, and the analysis of feasible computability. It is argued that reactions to strict finitism point towards a semantics for vague predicates in the form of nonstandard models of weak arithmetical theories of the sort originally introduced to characterize the notion of feasibility as understood in computational complexity theory. The approach described eschews the use of nonclassical logic and related devices like degrees of truth or supervaluation. Like epistemic approaches to vagueness, it may thus be smoothly integrated with the use of classical model theory as widely employed in natural language semantics. But unlike epistemicism, the described approach fails to imply either the existence of sharp boundaries or the failure of tolerance for soritical predicates. Applications of measurement theory (in the sense of Krantz, Luce, Suppes, & Tversky (1971)) to vagueness in the nonstandard setting are also explored.
22

Trella, Anna L., Kelly W. Zhang, Inbal Nahum-Shani, Vivek Shetty, Finale Doshi-Velez, and Susan A. Murphy. "Designing Reinforcement Learning Algorithms for Digital Interventions: Pre-Implementation Guidelines." Algorithms 15, no. 8 (July 22, 2022): 255. http://dx.doi.org/10.3390/a15080255.

Abstract:
Online reinforcement learning (RL) algorithms are increasingly used to personalize digital interventions in the fields of mobile health and online education. Common challenges in designing and testing an RL algorithm in these settings include ensuring the RL algorithm can learn and run stably under real-time constraints, and accounting for the complexity of the environment, e.g., a lack of accurate mechanistic models for the user dynamics. To guide how one can tackle these challenges, we extend the PCS (predictability, computability, stability) framework, a data science framework that incorporates best practices from machine learning and statistics in supervised learning to the design of RL algorithms for the digital interventions setting. Furthermore, we provide guidelines on how to design simulation environments, a crucial tool for evaluating RL candidate algorithms using the PCS framework. We show how we used the PCS framework to design an RL algorithm for Oralytics, a mobile health study aiming to improve users’ tooth-brushing behaviors through the personalized delivery of intervention messages. Oralytics will go into the field in late 2022.
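A toy version of the workflow the guidelines address: run an online RL candidate inside a simulation environment before deployment. The candidate below is generic Bernoulli Thompson sampling over two intervention messages against a fixed simulated response model with made-up rates; it is not the Oralytics algorithm, and the PCS framework itself is a design methodology rather than code.

```python
# Evaluate a generic online RL candidate (Thompson sampling) in a toy
# simulation environment before any real deployment.
import random

def simulate(true_rates, horizon=1000, seed=0):
    rng = random.Random(seed)
    k = len(true_rates)
    alpha, beta = [1] * k, [1] * k          # Beta posterior parameters per arm
    reward = 0
    for _ in range(horizon):
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        a = max(range(k), key=samples.__getitem__)
        r = rng.random() < true_rates[a]    # simulated user response
        alpha[a] += r
        beta[a] += 1 - r
        reward += r
    return reward / horizon

print(simulate([0.25, 0.40]))  # approaches 0.40 as the learner settles
```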
23

Downey, Rodney G., Carl G. Jockusch, and Paul E. Schupp. "Asymptotic Density and Computably Enumerable Sets." Journal of Mathematical Logic 13, no. 02 (October 31, 2013): 1350005. http://dx.doi.org/10.1142/s0219061313500050.

Abstract:
We study connections between classical asymptotic density, computability and computable enumerability. In an earlier paper, the second two authors proved that there is a computably enumerable set A of density 1 with no computable subset of density 1. In the current paper, we extend this result in three different ways: (i) The degrees of such sets A are precisely the nonlow c.e. degrees. (ii) There is a c.e. set A of density 1 with no computable subset of nonzero density. (iii) There is a c.e. set A of density 1 such that every subset of A of density 1 is of high degree. We also study the extent to which c.e. sets A can be approximated by their computable subsets B in the sense that A\B has small density. There is a very close connection between the computational complexity of a set and the arithmetical complexity of its density and we characterize the lower densities, upper densities and densities of both computable and computably enumerable sets. We also study the notion of "computable at density r" where r is a real in the unit interval. Finally, we study connections between density and classical smallness notions such as immunity, hyperimmunity, and cohesiveness.
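The density notion underlying these results is simple to compute at finite stages: the density of A up to n is |A ∩ {0, ..., n-1}|/n, and the density of A is its limit when it exists. A small sketch, with an example set of my own choosing:

```python
# Partial asymptotic densities; the complement of the perfect squares
# illustrates a set of density 1.
import math

def partial_density(A, n):
    """A: predicate on the naturals; returns |A ∩ [0, n)| / n."""
    return sum(1 for k in range(n) if A(k)) / n

non_square = lambda k: math.isqrt(k) ** 2 != k
for n in (10, 1000, 100000):
    print(n, partial_density(non_square, n))  # 0.6, 0.968, 0.99683: tends to 1
```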
24

Minati, Gianfranco. "A Note on the Reality of Incomputable Real Numbers and Its Systemic Significance." Systems 9, no. 2 (June 12, 2021): 44. http://dx.doi.org/10.3390/systems9020044.

Abstract:
We discuss mathematical and physical arguments contrasting continuous and discrete, limitless discretization as arbitrary granularity. In this regard, we focus on Incomputable (lacking an algorithm that computes in finite time) Real Numbers (IRNs). We consider how, for measurements, the usual approach to dealing with IRNs is to approximate to avoid the need for more detailed, unrealistic surveys. In this regard, we contrast effective computation and emergent computation. Furthermore, we consider the alternative option of taking into account the properties of the decimal part of IRNs, such as the occurrence, distribution, combinations, quasi-periodicities, and other contextual properties, e.g., topological. For instance, in correspondence with chaotic behaviors, quasi-periodic solutions, quasi-systems, uniqueness, and singularities, non-computability represents and corresponds to theoretically incomplete properties of the processes of complexity, such as emergence and quantum-like properties. We elaborate upon cases of equivalences and symmetries, characterizing complexity and infiniteness as corresponding to the usage of multiple non-equivalent models that are constructively and theoretically incomplete due to the non-exhaustive nature of the multiplicity of complexity. Finally, we detail alternative computational approaches, such as hypercomputation, natural computing, quantum computing, and analog and hybrid computing. The reality of IRNs is considered to represent the theoretical incompleteness of complex phenomena taking place through collapse from equivalences and symmetries. A world of precise finite values, even if approximated, is assumed to have dynamics that are zippable in analytical formulae and to be computable and symbolically representable in the way it functions. A world of arbitrary precise infinite values with dynamics that are non-zippable in analytical formulae, non-computable, and, for instance, sub-symbolically representable, is assumed to be almost compatible with the coherence of emergence. The real world is assumed to be a continuous combination of the two—functioning and emergent—where the second dominates and is the norm, and the first is the locus of primarily epistemic extracts. Research on IRNs should focus on properties representing and corresponding to those that are detectable in real, even if extreme, phenomena, such as emergence and quantum phenomena.
25

Ziegler, Martin. "Real computation with least discrete advice: A complexity theory of nonuniform computability with applications to effective linear algebra." Annals of Pure and Applied Logic 163, no. 8 (August 2012): 1108–39. http://dx.doi.org/10.1016/j.apal.2011.12.030.

26

Harrington, Leo, and Robert I. Soare. "Codable sets and orbits of computably enumerable sets." Journal of Symbolic Logic 63, no. 1 (March 1998): 1–28. http://dx.doi.org/10.2307/2586583.

Abstract:
A set $X$ of nonnegative integers is computably enumerable (c.e.), also called recursively enumerable (r.e.), if there is a computable method to list its elements. Let $\mathcal{E}$ denote the structure of the computably enumerable sets under inclusion, $\mathcal{E} = (\{W_e\}_{e \in \omega}, \subseteq)$. We previously exhibited a first order $\mathcal{E}$-definable property $Q(X)$ such that $Q(X)$ guarantees that $X$ is not Turing complete (i.e., does not code complete information about c.e. sets). Here we show first that $Q(X)$ implies that $X$ has a certain “slowness” property whereby the elements must enter $X$ slowly (under a certain precise complexity measure of speed of computation) even though $X$ may have high information content. Second we prove that every $X$ with this slowness property is computable in some member of any nontrivial orbit, namely for any noncomputable $A \in \mathcal{E}$ there exists $B$ in the orbit of $A$ such that $X \le_T B$ under relative Turing computability ($\le_T$). We produce $B$ using the $\Delta^0_3$-automorphism method we introduced earlier.
27

HUET, GÉRARD. "Special issue on ‘Logical frameworks and metalanguages’." Journal of Functional Programming 13, no. 2 (March 2003): 257–60. http://dx.doi.org/10.1017/s0956796802004549.

Abstract:
There is both a great unity and a great diversity in presentations of logic. The diversity is staggering indeed – propositional logic, first-order logic, higher-order logic belong to one classification; linear logic, intuitionistic logic, classical logic, modal and temporal logics belong to another one. Logical deduction may be presented as a Hilbert style of combinators, as a natural deduction system, as sequent calculus, as proof nets of one variety or other, etc. Logic, originally a field of philosophy, turned into algebra with Boole, and more generally into meta-mathematics with Frege and Heyting. Professional logicians such as Gödel and later Tarski studied mathematical models, consistency and completeness, computability and complexity issues, set theory and foundations, etc. Logic became a very technical area of mathematical research in the last half century, with fine-grained analysis of expressiveness of subtheories of arithmetic or set theory, detailed analysis of well-foundedness through ordinal notations, logical complexity, etc. Meanwhile, computer modelling developed a need for concrete uses of logic, first for the design of computer circuits, then more widely for increasing the reliability of software through the use of formal specifications and proofs of correctness of computer programs. This gave rise to more exotic logics, such as dynamic logic, Hoare-style logic of axiomatic semantics, logics of partial values (such as Scott's denotational semantics and Plotkin's domain theory) or of partial terms (such as Feferman's free logic), etc. The first actual attempts at mechanisation of logical reasoning through the resolution principle (automated theorem proving) had been disappointing, but their shortcomings gave rise to a considerable body of research, developing detailed knowledge about equational reasoning through canonical simplification (rewriting theory) and proofs by induction (following Boyer and Moore's successful integration of primitive recursive arithmetic within the LISP programming language). The special case of Horn clauses gave rise to a new paradigm of non-deterministic programming, called Logic Programming, developing later into Constraint Programming, blurring further the scope of logic. In order to study knowledge acquisition, researchers in artificial intelligence and computational linguistics studied exotic versions of modal logics such as Montague intensional logic, epistemic logic, dynamic logic or hybrid logic. Some others tried to capture common sense, and modeled the revision of beliefs with so-called non-monotonic logics. For the careful craftsmen of mathematical logic, this was the final outrage, and Girard gave his anathema to such “montres à moutardes”.
28

Case, John, and Timo Kötzing. "Computability-theoretic learning complexity." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1971 (July 28, 2012): 3570–96. http://dx.doi.org/10.1098/rsta.2011.0320.

Abstract:
Initially discussed are some of Alan Turing's wonderfully profound and influential ideas about mind and mechanism—including regarding their connection to the main topic of the present study, which is within the field of computability-theoretic learning theory. Herein is investigated the part of this field concerned with the algorithmic, trial-and-error inference of eventually correct programs for functions from their data points. As to the main content of this study: in prior papers, beginning with the seminal work by Freivalds et al. in 1995, the notion of intrinsic complexity is used to analyse the learning complexity of sets of functions in a Gold-style learning setting. Herein are pointed out some weaknesses of this notion. Offered is an alternative based on epitomizing sets of functions—sets that are learnable under a given learning criterion, but not under other criteria that are not at least as powerful. To capture the idea of epitomizing sets, new reducibility notions are given based on robust learning (closure of learning under certain sets of computable operators). Various degrees of epitomizing sets are characterized as the sets complete with respect to corresponding reducibility notions! These characterizations also provide an easy method for showing sets to be epitomizers, and they are then employed to prove several sets to be epitomizing. Furthermore, a scheme is provided to generate easily very strong epitomizers for a multitude of learning criteria. These strong epitomizers are the so-called self-learning sets, previously applied by Case & Kötzing in 2010. These strong epitomizers can be easily generated and employed in a myriad of settings to witness with certainty the strict separation in learning power between the criteria so epitomized and other not as powerful criteria!
29

Norman, Alfred Lorn. "Computability, complexity and economics." Computational Economics 7, no. 1 (February 1994): 1–21. http://dx.doi.org/10.1007/bf01299326.

30

Downey, Rod. "Computability, Complexity and Randomness." Theory of Computing Systems 52, no. 1 (October 5, 2012): 1. http://dx.doi.org/10.1007/s00224-012-9430-3.

31

Rustem, Berc, and Kumaraswamy Velupillai. "Rationality, computability, and complexity." Journal of Economic Dynamics and Control 14, no. 2 (May 1990): 419–32. http://dx.doi.org/10.1016/0165-1889(90)90027-e.

32

Brattka, Vasco, Peter Hertling, Ker-I. Ko, and Hideki Tsuiki. "Computability and complexity in analysis." Journal of Complexity 22, no. 6 (December 2006): 728. http://dx.doi.org/10.1016/j.jco.2006.10.001.

33

Barmpalias, George. "Algorithmic Randomness and Measures of Complexity." Bulletin of Symbolic Logic 19, no. 3 (September 2013): 318–50. http://dx.doi.org/10.1017/s1079898600010672.

Abstract:
We survey recent advances on the interface between computability theory and algorithmic randomness, with special attention on measures of relative complexity. We focus on (weak) reducibilities that measure (a) the initial segment complexity of reals and (b) the power of reals to compress strings, when they are used as oracles. The results are put into context and several connections are made with various central issues in modern algorithmic randomness and computability.
34

Palmore, Julian. "The New Sciences: Chaos, Complexity, and Computability." Military Operations Research 5, no. 3 (June 1, 2000): 5–7. http://dx.doi.org/10.5711/morj.5.3.5.

35

Chen, Jinhe, Decheng Ding, and Liang Yu. "Conference on Computability, Complexity and Randomness." Bulletin of Symbolic Logic 14, no. 4 (December 2008): 548–49. http://dx.doi.org/10.2178/bsl/1231081465.

36

Hiratsuka, Kota, Yuzuru Sato, and Zin Arai. "Computability and Complexity of Julia Sets." IEICE Proceeding Series 2 (March 17, 2014): 2–5. http://dx.doi.org/10.15248/proc.2.2.

37

Cockett, Robin, Joaquín Díaz-Boïls, Jonathan Gallagher, and Pavel Hrubeš. "Timed Sets, Functional Complexity, and Computability." Electronic Notes in Theoretical Computer Science 286 (September 2012): 117–37. http://dx.doi.org/10.1016/j.entcs.2012.08.009.

38

Harizanov, Valentina S. "Computability-Theoretic Complexity of Countable Structures." Bulletin of Symbolic Logic 8, no. 4 (December 2002): 457–77. http://dx.doi.org/10.2178/bsl/1182353917.

Abstract:
Computable model theory, also called effective or recursive model theory, studies algorithmic properties of mathematical structures, their relations, and isomorphisms. These properties can be described syntactically or semantically. One of the major tasks of computable model theory is to obtain, whenever possible, computability-theoretic versions of various classical model-theoretic notions and results. For example, in the 1950's, Fröhlich and Shepherdson realized that the concept of a computable function can make van der Waerden's intuitive notion of an explicit field precise. This led to the notion of a computable structure. In 1960, Rabin proved that every computable field has a computable algebraic closure. However, not every classical result “effectivizes”. Unlike Vaught's theorem that no complete theory has exactly two nonisomorphic countable models, Millar's and Kudaibergenov's result establishes that there is a complete decidable theory that has exactly two nonisomorphic countable models with computable elementary diagrams. In the 1970's, Metakides and Nerode [58], [59] and Remmel [71], [72], [73] used more advanced methods of computability theory to investigate algorithmic properties of fields, vector spaces, and other mathematical structures.
39

Shoham, Yoav, and Moshe Tennenholtz. "On Rational Computability and Communication Complexity." Games and Economic Behavior 35, no. 1-2 (April 2001): 197–211. http://dx.doi.org/10.1006/game.2000.0833.

40

Lathrop, James I., Jack H. Lutz, Matthew J. Patitz, and Scott M. Summers. "Computability and Complexity in Self-assembly." Theory of Computing Systems 48, no. 3 (April 20, 2010): 617–47. http://dx.doi.org/10.1007/s00224-010-9252-0.

41

Analyti, Anastasia, Carlos Viegas Damásio, and Grigoris Antoniou. "Extended RDF: Computability and complexity issues." Annals of Mathematics and Artificial Intelligence 75, no. 3-4 (March 24, 2015): 267–334. http://dx.doi.org/10.1007/s10472-015-9451-0.

42

Čenek, E. W. "Computability and complexity theory and the complexity theory companion." ACM SIGACT News 33, no. 3 (September 2002): 17–19. http://dx.doi.org/10.1145/582475.582480.

43

Csima, Barbara F., and Robert I. Soare. "Computability Results Used in Differential Geometry." Journal of Symbolic Logic 71, no. 4 (December 2006): 1394–410. http://dx.doi.org/10.2178/jsl/1164060462.

Abstract:
Topologists Nabutovsky and Weinberger discovered how to embed computably enumerable (c.e.) sets into the geometry of Riemannian metrics modulo diffeomorphisms. They used the complexity of the settling times of the c.e. sets to exhibit a much greater complexity of the depth and density of local minima for the diameter function than previously imagined. Their results depended on the existence of certain sequences of c.e. sets, constructed at their request by Csima and Soare, whose settling times had the necessary dominating properties. Although these computability results had been announced earlier, their proofs have been deferred until this paper.

Computably enumerable sets have long been used to prove undecidability of mathematical problems such as the word problem for groups and Hilbert's Tenth Problem. However, this example by Nabutovsky and Weinberger is perhaps the first example of the use of c.e. sets to demonstrate specific mathematical or geometric complexity of a mathematical structure such as the depth and distribution of local minima.
44

Campani, Carlos A. P., and Paulo Blauth Menezes. "Teorias da Aleatoriedade." Revista de Informática Teórica e Aplicada 11, no. 2 (December 20, 2004): 75–98. http://dx.doi.org/10.22456/2175-2745.5983.

Abstract:
This work is a survey about the definition of “random sequence”. We emphasize the definition of Martin-Löf and the definition based on incompressibility (Kolmogorov complexity). Kolmogorov complexity is a profound and sophisticated theory of information and randomness based on Turing machines. These two definitions solve all the problems of the other approaches, satisfying our intuitive concept of randomness, and both are mathematically correct. Furthermore, we show Schnorr's approach, which includes a requirement of effectiveness (computability) in his definition. We show the relations between all definitions in a critical way. Keywords: randomness, Kolmogorov complexity, Turing machine, computability, probability.
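The incompressibility definition surveyed here has a crude computable stand-in: true Kolmogorov complexity is incomputable, but a real compressor gives an upper bound on description length, which is already enough to separate a periodic string from pseudorandom bytes. Illustrative sketch only:

```python
# Compression ratio as a computable proxy for incompressibility.
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    return len(zlib.compress(data, 9)) / len(data)

print(compressed_ratio(b"ab" * 5000))       # near 0: highly compressible
print(compressed_ratio(os.urandom(10000)))  # near 1 (or a bit above): incompressible
```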
45

Díaz, Josep, and Carme Torras. "Turing’s algorithmic lens: From computability to complexity theory." Arbor 189, no. 764 (December 30, 2013): a080. http://dx.doi.org/10.3989/arbor.2013.764n6003.

46

Hiratsuka, Kota, Yuzuru Sato, and Zin Arai. "Computability and complexity of Julia sets: a review." Nonlinear Theory and Its Applications, IEICE 5, no. 4 (2014): 410–23. http://dx.doi.org/10.1587/nolta.5.410.

47

Greenberg, Noam. "Editorial: Special Issue on Computability, Complexity and Randomness." Theory of Computing Systems 58, no. 3 (July 24, 2015): 381–82. http://dx.doi.org/10.1007/s00224-015-9648-y.

48

Miyabe, Kenshi. "L1-Computability, Layerwise Computability and Solovay Reducibility." Computability 2, no. 1 (2013): 15–29. http://dx.doi.org/10.3233/com-13015.

49

(Vela) Velupillai, K. "Expository notes on computability and complexity in (arithmetical) games." Journal of Economic Dynamics and Control 21, no. 6 (June 1997): 955–79. http://dx.doi.org/10.1016/s0165-1889(97)00012-2.

50

Ianovski, Egor, Russell Miller, Keng Meng Ng, and André Nies. "Complexity of Equivalence Relations and Preorders from Computability Theory." Journal of Symbolic Logic 79, no. 3 (August 18, 2014): 859–81. http://dx.doi.org/10.1017/jsl.2013.33.

Abstract:
We study the relative complexity of equivalence relations and preorders from computability theory and complexity theory. Given binary relations $R$ and $S$, a componentwise reducibility is defined by $R \le S \Leftrightarrow \exists f\, \forall x, y\, [x \mathrel{R} y \leftrightarrow f(x) \mathrel{S} f(y)]$. Here, $f$ is taken from a suitable class of effective functions. For us the relations will be on natural numbers, and $f$ must be computable. We show that there is a $\Pi^0_1$-complete equivalence relation, but no $\Pi^0_k$-complete one for $k \ge 2$. We show that $\Sigma^0_k$ preorders arising naturally in the above-mentioned areas are $\Sigma^0_k$-complete. This includes polynomial-time $m$-reducibility on exponential time sets, which is $\Sigma^0_2$; almost inclusion on r.e. sets, which is $\Sigma^0_3$; and Turing reducibility on r.e. sets, which is $\Sigma^0_4$.
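The reducibility in this abstract is simple enough to state formally. A sketch in Lean 4 with Mathlib; the definition and names here are mine, not the paper's:

```lean
import Mathlib.Computability.Partrec

-- Componentwise reducibility on relations over ℕ, as in the abstract:
-- R ≤ S iff some computable f satisfies x R y ↔ f x S f y.
def ComponentwiseRed (R S : ℕ → ℕ → Prop) : Prop :=
  ∃ f : ℕ → ℕ, Computable f ∧ ∀ x y, (R x y ↔ S (f x) (f y))

-- Sanity check: the identity function witnesses R ≤ R.
theorem componentwiseRed_refl (R : ℕ → ℕ → Prop) : ComponentwiseRed R R :=
  ⟨id, Computable.id, fun _ _ => Iff.rfl⟩
```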