Journal articles on the topic 'Arithmetic Complexity Classes'

Consult the top 25 journal articles for your research on the topic 'Arithmetic Complexity Classes.'


1

Gál, Anna, and Avi Wigderson. "Boolean complexity classes vs. their arithmetic analogs." Random Structures and Algorithms 9, no. 1-2 (August 1996): 99–111. http://dx.doi.org/10.1002/(sici)1098-2418(199608/09)9:1/2<99::aid-rsa7>3.0.co;2-6.

2

Ignjatović, Aleksandar. "Delineating classes of computational complexity via second order theories with weak set existence principles. I." Journal of Symbolic Logic 60, no. 1 (March 1995): 103–21. http://dx.doi.org/10.2307/2275511.

Abstract:
In this paper we characterize the well-known computational complexity classes of the polynomial time hierarchy as classes of provably recursive functions (with graphs of suitable bounded complexity) of some second order theories with weak comprehension axiom schemas but without any induction schemas (Theorem 6). We also find a natural relationship between our theories and the theories of bounded arithmetic (Lemmas 4 and 5). Our proofs use a technique which enables us to “speed up” induction without increasing the bounded complexity of the induction formulas. This technique is also used to obtain an interpretability result for the theories of bounded arithmetic (Theorem 4).
3

Jeřábek, Emil. "Approximate counting in bounded arithmetic." Journal of Symbolic Logic 72, no. 3 (September 2007): 959–93. http://dx.doi.org/10.2178/jsl/1191333850.

Abstract:
We develop approximate counting of sets definable by Boolean circuits in bounded arithmetic using the dual weak pigeonhole principle (dWPHP(PV)), as a generalization of results from [15]. We discuss applications to formalization of randomized complexity classes (such as BPP, APP, MA, AM) in PV1 + dWPHP(PV).
4

Japaridze, Giorgi. "Arithmetics based on computability logic." Logical Investigations 25, no. 2 (December 23, 2019): 61–74. http://dx.doi.org/10.21146/2074-1472-2019-25-2-61-74.

Abstract:
This paper is a brief survey of number theories based on computability logic (CoL), a game-semantically conceived logic of computational tasks and resources. Such theories, termed clarithmetics, are conservative extensions of first-order Peano arithmetic. The first section of the paper lays out the conceptual basis of CoL and describes the relevant fragment of its formal language, with so-called parallel connectives, choice connectives and quantifiers, and blind quantifiers. Both syntactically and semantically, this is a conservative generalization of the language of classical logic. Clarithmetics, based on the corresponding fragment of CoL in the same sense as Peano arithmetic is based on classical logic, are discussed in the second section. The axioms and inference rules of the system of clarithmetic named CLA11 are presented, and the main results on this system are stated: constructive soundness, extensional completeness, and intensional completeness. In the final section two potential applications of clarithmetics are addressed: clarithmetics as declarative programming languages in an extreme sense, and as tools for separating computational complexity classes. When clarithmetics or similar CoL-based theories are viewed as programming languages, programming reduces to proof search, as programs can be mechanically extracted from proofs; such programs also serve as their own formal verifications, thus fully neutralizing the notorious (and generally undecidable) program verification problem. The second application reduces the problem of separating various computational complexity classes to separating the corresponding versions of clarithmetic, the potential benefits of which stem from the belief that separating theories should generally be easier than separating complexity classes directly.
5

Kupke, Clemens, Dirk Pattinson, and Lutz Schröder. "Coalgebraic Reasoning with Global Assumptions in Arithmetic Modal Logics." ACM Transactions on Computational Logic 23, no. 2 (April 30, 2022): 1–34. http://dx.doi.org/10.1145/3501300.

Abstract:
We establish a generic upper bound ExpTime for reasoning with global assumptions (also known as TBoxes) in coalgebraic modal logics. Unlike earlier results of this kind, our bound does not require a tractable set of tableau rules for the instance logics, so that the result applies to wider classes of logics. Examples are Presburger modal logic, which extends graded modal logic with linear inequalities over numbers of successors, and probabilistic modal logic with polynomial inequalities over probabilities. We establish the theoretical upper bound using a type elimination algorithm. We also provide a global caching algorithm that potentially avoids building the entire exponential-sized space of candidate states, and thus offers a basis for practical reasoning. This algorithm still involves frequent fixpoint computations; we show how these can be handled efficiently in a concrete algorithm modelled on Liu and Smolka’s linear-time fixpoint algorithm. Finally, we show that the upper complexity bound is preserved under adding nominals to the logic, i.e., in coalgebraic hybrid logic.
6

Harnik, Victor. "Provably total functions of intuitionistic bounded arithmetic." Journal of Symbolic Logic 57, no. 2 (June 1992): 466–77. http://dx.doi.org/10.2307/2275282.

Abstract:
This note deals with a proof-theoretic characterisation of certain complexity classes of functions in fragments of intuitionistic bounded arithmetic. In this Introduction we survey the background and state our main result. We follow Buss [B1] and consider a language for arithmetic whose nonlogical symbols are 0, S (the successor operation Sx = x + 1), +, ·, ∣ ∣ (∣x∣ being the number of digits in the binary notation for x), ⌊x/2⌋ (x/2 rounded down to the nearest integer), # (x#y = 2^(∣x∣∣y∣)) and ≤. We define 1 = S0, 2 = S1, s0x = 2x and s1x = 2x + 1. In Buss's approach the functions s0 and s1 play a special role. Notice that s_ix is the number obtained from x by suffixing the digit i to its binary representation, and thus the natural numbers are generated from 0 by repeated applications of the operations s0 and s1. This means that they satisfy the induction scheme ϕ(0) ∧ ∀x(ϕ(x) → ϕ(s0x) ∧ ϕ(s1x)) → ∀x ϕ(x). Using the fact that ⌊x/2⌋ is x with its last binary digit deleted, this can be stated more compactly in the following form, called by Buss the polynomial induction or PIND schema: ϕ(0) ∧ ∀x(ϕ(⌊x/2⌋) → ϕ(x)) → ∀x ϕ(x). Buss defined a theory S2 consisting of a finite set BASIC of open axioms and the PIND-schema restricted to bounded formulas ϕ. The topic of bounded arithmetic is concerned with S2 and its fragments.
7

Zeschke, Yorick. "Growth Functions, Rates and Classes of String-Based Multiway Systems." Complex Systems 31, no. 1 (March 15, 2022): 123–64. http://dx.doi.org/10.25088/complexsystems.31.1.123.

Abstract:
Inspired by the recently emerging Wolfram Physics Project where “Multiway Systems,” graph representations of abstract rewriting systems equipped with a causal structure, have played an important role in discrete models of spacetime and quantum mechanics, this paper establishes several more fundamental properties about the growth (number of states over steps in a system’s evolution) of string rewriting systems in general. While proving the undecidability of exactly determining a system’s growth function, we show several asymptotic properties all growth functions of arbitrary string rewriting systems share. Through an explicit construction, it is proven that string rewriting systems, while never exceeding exponential functions in their growth, are capable of growing arbitrarily slowly, that is, slower than the asymptotic inverse of every Turing-computable function. Additionally, an elementary classification scheme partitioning the set of string rewriting systems into finitely many nontrivial subsets is provided. By introducing arithmetic-like operations under which Multiway Systems form a weakened semiring structure, it is furthermore demonstrated that their growth functions, while always being primitive recursive, can approximate many well-known elementary functions classically used in calculus, which underlines the complexity and computational diversity of Multiway Systems. In the context of the Wolfram Physics Project, some implications of these findings are discussed as well.
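As a concrete illustration of the growth functions discussed above, the following sketch (illustrative only, not the paper's code; the rules "A" → "AB", "B" → "A" and the start string are hypothetical) counts the number of distinct states at each step of a multiway evolution of a string rewriting system.

```python
# Growth function of a string rewriting (multiway) system: the number of
# distinct strings reachable at each generation of a breadth-first evolution.

def step(states, rules):
    """Apply every rule at every matching position of every state (one multiway step)."""
    out = set()
    for s in states:
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:
                out.add(s[:i] + rhs + s[i + len(lhs):])
                i = s.find(lhs, i + 1)
    return out

def growth(start, rules, steps):
    """Return [|S_0|, |S_1|, ...]: the number of states at each generation."""
    states, counts = {start}, []
    for _ in range(steps + 1):
        counts.append(len(states))
        states = step(states, rules) or states   # freeze if no rule applies
    return counts

if __name__ == "__main__":
    # Hypothetical rules: "A" -> "AB", "B" -> "A"
    print(growth("A", [("A", "AB"), ("B", "A")], 8))
```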
8

Leivant, Daniel, and Bob Constable. "Editorial." Journal of Functional Programming 11, no. 1 (January 2001): 1. http://dx.doi.org/10.1017/s0956796801009030.

Abstract:
This issue of the Journal of Functional Programming is dedicated to work presented at the Workshop on Implicit Computational Complexity in Programming Languages, affiliated with the 1998 meeting of the International Conference on Functional Programming in Baltimore. Several machine-independent approaches to computational complexity have been developed in recent years; they establish a correspondence linking computational complexity to conceptual and structural measures of complexity of declarative programs and of formulas, proofs and models of formal theories. Examples include descriptive complexity of finite models, restrictions on induction in arithmetic and related first order theories, complexity of set-existence principles in higher order logic, and specifications in linear logic. We refer to these approaches collectively as Implicit Computational Complexity. This line of research provides a framework for a streamlined incorporation of computational complexity into areas such as formal methods in software development, programming language theory, and database theory. A fruitful thread in implicit computational complexity is based on exploring the computational complexity consequences of introducing various syntactic control mechanisms in functional programming, including restrictions (akin to static typing) on scoping, data re-use (via linear modalities), and iteration (via ramification of data). These forms of control, separately and in combination, can certify bounds on the time and space resources used by programs. In fact, all results in this area establish that each restriction considered yields precisely a major computational complexity class. The complexity classes thus obtained range from very restricted ones, such as NC and Alternating logarithmic time, through the central classes Poly-Time and Poly-Space, to broad classes such as the Elementary and the Primitive Recursive functions. Considerable effort has been invested in recent years to relax as much as possible the structural restrictions considered, allowing for more flexible programming and proof styles, while still guaranteeing the same resource bounds. Notably, more flexible control forms have been developed for certifying that functional programs execute in Poly-Time. The 1998 workshop covered both the theoretical foundations of the field and steps toward using its results in various implemented systems, for example in controlling the computational complexity of programs extracted from constructive proofs. The five papers included in this issue nicely represent this dual concern of theory and practice. As they are going to print, we should note that the field of Implicit Computational Complexity continues to thrive: successful workshops dedicated to it were affiliated with both the LICS'99 and LICS'00 conferences. Special issues, of Information and Computation dedicated to the former, and of Theoretical Computer Science to the latter, are in preparation.
9

Guzhov, Vladimir I., Ilya O. Marchenko, Ekaterina E. Trubilina, and Dmitry S. Khaidukov. "Comparison of numbers and analysis of overflow in modular arithmetic." Analysis and data processing systems, no. 3 (September 30, 2021): 75–86. http://dx.doi.org/10.17212/2782-2001-2021-3-75-86.

Abstract:
The method of modular arithmetic consists in operating not with a number, but with its remainders after division by some integers. In the modular number system, or the system of residue classes, a multi-digit integer in the positional number system is represented as a sequence of several positional numbers. These numbers are the remainders (residues) of dividing the original number by several moduli, which are pairwise coprime integers. The advantage of the modular representation is that it is very simple to perform addition, subtraction and multiplication operations. In parallel execution of operations, the use of modular arithmetic can significantly reduce the computation time. However, there are drawbacks to the modular representation that limit its use. These include a slow conversion of numbers from modular to positional representation; the complexity of comparing numbers in modular representation; the difficulty in performing the division operation; and the difficulty of determining the presence of an overflow. The use of modular arithmetic is justified if there are fast algorithms for calculating a number from a set of remainders. This article describes a fast algorithm for converting numbers from modular representation to positional representation based on a geometric approach. The analysis is carried out for the case of a system with two moduli. It is also shown that as numbers increase in the positional system, their modular representations successively move along a spiral on the surface of a two-dimensional torus. Based on this approach, a fast algorithm for comparing numbers and an algorithm for detecting an overflow during addition and multiplication of numbers in modular representation were developed. The multidimensional case can be treated by analyzing a multidimensional torus and studying the behavior of the windings on its surface.
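A minimal sketch of the residue-number-system idea described in this abstract, assuming two illustrative moduli (13, 17): addition and multiplication act componentwise on residues, and conversion back to positional form uses the Chinese remainder theorem. The authors' geometric comparison and overflow-detection algorithms are not reproduced here.

```python
# Residue number system with two pairwise coprime moduli.
from math import prod

MODULI = (13, 17)            # illustrative moduli; representable range is 0..220

def to_rns(x):
    return tuple(x % m for m in MODULI)

def add(a, b):
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def to_int(r):
    """Chinese remainder reconstruction back to the positional representation."""
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # pow(..., -1, m) is the modular inverse
    return x % M

if __name__ == "__main__":
    a, b = to_rns(45), to_rns(37)
    assert to_int(add(a, b)) == 82
    assert to_int(mul(a, b)) == (45 * 37) % prod(MODULI)
    # Comparison is the hard part: componentwise residues carry no order, which
    # is exactly the problem the paper's geometric method addresses.
    print(add(a, b), mul(a, b), to_int(add(a, b)))
```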
10

Zadiraka, Valerii, and Inna Shvidchenko. "Using Rounding Errors in Modern Computer Technologies." Cybernetics and Computer Technologies, no. 3 (September 30, 2021): 43–52. http://dx.doi.org/10.34229/2707-451x.21.3.4.

Abstract:
Introduction. When solving problems of transcomputational complexity, the problem of evaluating the rounding error is relevant, since it can be dominant in evaluating the accuracy of the solution. The ways to reduce it are important, as are the reserves for optimizing the algorithms for solving the problem in terms of accuracy. In this case, the rounding-off rules and calculation modes need to be taken into account. The article shows how estimates of the rounding error can be used in modern computer technologies for solving problems of computational and applied mathematics, as well as information security. The purpose of the article is to draw the attention of specialists in computational and applied mathematics to the need to take into account the rounding error when analyzing the quality of the approximate solution of problems. This is important for mathematical modeling problems, problems using Big Data, digital signal and image processing, cybersecurity, and many others. The article demonstrates specific estimates of the rounding error for a number of problems: estimating the mathematical expectation, calculating the discrete Fourier transform, using multi-digit arithmetic, and using the estimates of the rounding error in algorithms for solving computer steganography problems. The results. The estimates of the rounding error of the algorithms for solving the above-mentioned classes of problems are given for different rounding-off rules and for different calculation modes. For the problem of computer steganography, the use of the estimates of the rounding error in computer technologies for hidden information transfer is shown. Conclusions. Taking into account the rounding error is an important factor in assessing the accuracy of the approximate solution of problems of above-average complexity. Keywords: rounding error, computer technology, discrete Fourier transform, multi-digit arithmetic, computer steganography.
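The following small sketch (not taken from the article) illustrates the kind of rounding-error accumulation the article estimates: a naive floating-point sum drifts from the exact value, while a compensated (Kahan) sum keeps the error near machine precision.

```python
# Rounding error in floating-point summation vs. an exact rational reference.
from fractions import Fraction

def kahan_sum(xs):
    """Kahan compensated summation: tracks the low-order bits lost at each add."""
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

xs = [0.1] * 10**6
naive = sum(xs)
exact = float(sum(Fraction(x) for x in xs))   # exact sum of the stored doubles
print("naive error:", abs(naive - exact))
print("kahan error:", abs(kahan_sum(xs) - exact))
```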
11

Cenzer, D., V. W. Marek, and J. B. Remmel. "On the complexity of index sets for finite predicate logic programs which allow function symbols." Journal of Logic and Computation 30, no. 1 (January 2020): 107–56. http://dx.doi.org/10.1093/logcom/exaa005.

Abstract:
We study the recognition problem in the metaprogramming of finite normal predicate logic programs. That is, let $\mathcal{L}$ be a computable first-order predicate language with infinitely many constant symbols and infinitely many $n$-ary predicate symbols and $n$-ary function symbols for all $n \geq 1$. Then we can effectively list all the finite normal predicate logic programs $Q_0,Q_1,\ldots$ over $\mathcal{L}$. Given some property $\mathcal{P}$ of finite normal predicate logic programs over $\mathcal{L}$, we define the index set $I_{\mathcal{P}}$ to be the set of indices $e$ such that $Q_e$ has property $\mathcal{P}$. We classify the complexity of the index set $I_{\mathcal{P}}$ within the arithmetic hierarchy for various natural properties of finite predicate logic programs. For example, we determine the complexity of the index sets relative to all finite predicate logic programs and relative to certain special classes of finite predicate logic programs of properties such as (i) having no stable models, (ii) having no recursive stable models, (iii) having at least one stable model, (iv) having at least one recursive stable model, (v) having exactly $c$ stable models for any given positive integer $c$, (vi) having exactly $c$ recursive stable models for any given positive integer $c$, (vii) having only finitely many stable models, (viii) having only finitely many recursive stable models, (ix) having infinitely many stable models and (x) having infinitely many recursive stable models.
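For readers unfamiliar with the stable-model semantics these index sets refer to, here is a hedged, toy-scale sketch for a finite propositional normal program (the paper itself deals with predicate programs with function symbols, where these questions become index-set problems in the arithmetical hierarchy); the example program is hypothetical.

```python
# Brute-force stable models of a finite propositional normal program.
from itertools import chain, combinations

# rule = (head, positive_body, negative_body)
PROGRAM = [("p", (), ("q",)),      # p :- not q
           ("q", (), ("p",)),      # q :- not p
           ("r", ("p",), ())]      # r :- p

def least_model(definite_rules):
    """Least model of a negation-free program by fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in m and all(a in m for a in pos):
                m.add(head)
                changed = True
    return m

def stable_models(program):
    atoms = sorted({r[0] for r in program} | {a for r in program for a in r[1] + r[2]})
    for cand in chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1)):
        m = set(cand)
        # Gelfond-Lifschitz reduct: drop rules whose negative body intersects m
        reduct = [(h, pos) for h, pos, neg in program if not (m & set(neg))]
        if least_model(reduct) == m:
            yield m

print(list(stable_models(PROGRAM)))   # the two stable models: {'q'} and {'p', 'r'}
```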
12

Tretiakov, Igor Aleksandrovich, and Vladimir Vasilevich Danilov. "RESEARCH OF RADIO FREQUENCY SPECTROGRAMS USING METHODS OF LINGUISTIC ANALYSIS." Vestnik of Astrakhan State Technical University. Series: Management, computer science and informatics 2020, no. 3 (July 31, 2020): 45–51. http://dx.doi.org/10.24143/2072-9502-2020-3-45-51.

Abstract:
The article presents a study of experimental spectrogram curves of FM-band radio waves obtained on a laboratory model. An algorithm was used to identify sections on which the complexity function takes locally minimal values. A standard was determined for each section of the curve, in which the arithmetic mean ordinates of all areas correspond to a certain class. For a more extended linguistic description of the experimental curves, it is proposed to compile the description taking into account the location of the curve sections on the abscissa axis. The resulting extended linguistic description of the curve reflects not only classes of simple events, but their phases as well. As a result of applying the linguistic analysis system to spectral radiograms, it can be inferred that the experimental curves can be represented in the form of short and reliable rules for the analysis of the radiogram spectrum. The use of standards makes it possible to accurately represent each chain of characters in each group with a minimum distance to the standard. The obtained extended descriptions quite accurately describe the behavior of the curves studied.
13

Sriraam, N. "A High-Performance Lossless Compression Scheme for EEG Signals Using Wavelet Transform and Neural Network Predictors." International Journal of Telemedicine and Applications 2012 (2012): 1–8. http://dx.doi.org/10.1155/2012/302581.

Abstract:
Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with less encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications.
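A minimal sketch of the integer (lifting-scheme) wavelet step that makes such a compression pipeline lossless: the Haar/S-transform below maps integer samples to integer coefficients and is exactly invertible. The neural-network predictor and the Lempel-Ziv/arithmetic coding stages of the paper are not shown, and the sample values are made up.

```python
# One level of the integer Haar (S) transform and its exact inverse.

def iwt_haar(x):
    """Integer Haar transform; len(x) must be even."""
    approx = [(x[2*i] + x[2*i+1]) >> 1 for i in range(len(x)//2)]   # floor of the mean
    detail = [x[2*i] - x[2*i+1] for i in range(len(x)//2)]
    return approx, detail

def iiwt_haar(approx, detail):
    """Exact inverse of iwt_haar (perfect reconstruction on integers)."""
    x = []
    for a, d in zip(approx, detail):
        even = a + ((d + 1) >> 1)   # floor((d + 1) / 2)
        odd = even - d
        x.extend([even, odd])
    return x

if __name__ == "__main__":
    samples = [23, 25, 30, 28, 27, 31, 29, 26]    # toy integer "EEG" samples
    a, d = iwt_haar(samples)
    assert iiwt_haar(a, d) == samples             # lossless round trip
    print(a, d)
```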
14

Bosson, Mélanie. "Learning Potential in Analogical Reasoning: Construction of a Rasch-scaled Instrument for Group Wise Assessment." Journal of Cognitive Education and Psychology 4, no. 1 (January 2004): 146–47. http://dx.doi.org/10.1891/194589504787382947.

Abstract:
The aim of this project is to construct a new learning potential test in analogical reasoning that can be administered both individually and group-wise. A first set of items was created within the framework of Berger's Master's thesis (Berger, 2003). In this study (Bosson, 2003), these items were revised and applied in a primary school with 271 children, aged between 6 and 13 years. The HART (Hessels Analogical Reasoning Test; Hessels, 2003) is composed of 66 analogical matrices presented in 2 (columns) by 3 (rows) or 3 by 3 format with 6 or 8 response alternatives. Items are distributed across 9 levels of complexity based on the number of attributes and relations in the matrices. The test is preceded by an introductory phase, during which four examples are discussed extensively with the children. In this study, the HART was presented in three phases. After the introductory phase, each child had to individually solve 40 to 60 items (pretest). Then a short training (20-30 minutes) about how to solve analogical matrices was given to whole classes, followed by a posttest in which the child had to solve another 20 items. For a first evaluation of training effects and validity of the HART, the 14 classes were randomly assigned to either the control or the experimental condition. Furthermore, the teachers were asked to give ratings of pupils' school performance in arithmetic, French, behavior, participation, and application in class. Analysis of pretest results permitted us to construct a generalized Rasch scale with all 66 items. The theoretical complexity levels of the items largely concurred with their empirical difficulties. These results will allow us, in future research, to construct short tests adapted to the developmental level of each child. With regard to validity, the low correlations between pretest, posttest and non-cognitive aspects (measured with teachers' ratings) demonstrated that the test provides a relatively independent evaluation of factors such as behavior and application in class. The effect of training was significant: participants from the experimental group showed higher performance at posttest than those of the control group. Furthermore, the gains realized by the experimental group were not related to age, as all age groups benefited from the training to the same extent. The residual gains from pre- to posttest were also significantly related to school success in the experimental group only. The results of this study show that the experimental version of the HART allows one to estimate the learning potential of children in mainstream primary education classes. The research will continue with children in special education to evaluate the differential effects of the training and the predictive capacity of the HART, as well as the merits of different testing procedures.
15

Vyas, Astha, Ritu Srivastava, and Parul Gupta. "Channeling Chirmi." Emerald Emerging Markets Case Studies 13, no. 1 (June 21, 2023): 1–15. http://dx.doi.org/10.1108/eemcs-12-2022-0536.

Abstract:
Learning outcomes The case is intended to assist students to: 1. understand the customer's purchase decision with reference to channel values; 2. evaluate and assess the channel strategy using conventional and digital channels; and 3. design the channel strategy for start-ups in emerging markets. Case overview/synopsis The subject area for this teaching case is marketing management. The case can be used at the undergraduate and graduate levels. The case is about the marketing channel strategy of a small start-up boutique called Chirmi in India, explained through the theory of consumption values. Primary data was taken directly from Chirmi, whereas secondary data for market analysis was taken from various reports, articles and other sources. Because the owner provided the records and documentation, the account is substantiated by first-hand information. The case uses quantitative methods to help students understand the channel arithmetic and consumption values of all the channels used by Chirmi. Complexity academic level This case may be used in core marketing courses at the undergraduate and graduate levels. The case addresses the channel structure, including wholesaling, retailing and e-commerce. Distribution channel management, the theory of consumption values and e-commerce marketing management are explained. Evaluation of channel strategy, design, implementation and management is emphasized. Supplementary materials Teaching notes are available for educators only. Subject code CSS: 8: Marketing.
16

NAZAROVA, I. A., and S. V. GOLUB. "TOPOLOGICAL ASPECTS OF MODELING PARALLEL COMPUTATIONS WHEN SOLVING DYNAMIC PROBLEMS BASED ON THE MESSAGE PASSING INTERFACE." Scientific papers of Donetsk National Technical University. Series: Informatics, Cybernetics and Computer Science 2 - №1, no. 35-36 (2023): 69–78. http://dx.doi.org/10.31474/1996-1588-2023-1-36-69-78.

Abstract:
"transmission when solving complex multidimensional dynamic problems. Models of interprocessor exchange are obtained for computing systems with distributed memory and topological architectures: ring, 2D-torus, hypercube, etc. The application of double arithmetic based on the GMP-library and its effect on the temporal characteristics of parallel applications are considered. The dependence of the parallelism characteristics of numerical algorithms when using the streaming SIMD-extension is analyzed. The modern stage of development of computer and information technologies has one of its directions - the use of the idea of parallelism in order to reduce the time of solving multidimensional problems that have a high temporal complexity or are even NP-complete. Despite the significant increase in the productivity of existing parallel systems, reducing the cost of components for their construction, development and implementation of parallel methods remains the most difficult problem due to the lack of study of the internal structure of algorithms and their properties. Thus, the introduction of parallel computing requires not so much the mastering of parallel data processing as the development of parallel algorithms based on the parallelization of sequential ones or the construction of fundamentally new, more efficient parallel methods. That is why the analysis of existing parallel algorithms, studying the possibilities of improving their characteristics is an important, promising and practically demanded task. The purpose of the work is to increase the efficiency of solving complex multidimensional dynamic problems implemented on parallel architectures of distributed memory using the message passing interface (MPI) due to the reduction of communication time. Tasks of the work: firstly, it is the development of analytical models of multiple data transfer operations for different topologies of connection of processors, substantiation of the models and research of the quality of parallel calculations of various classes of numerical methods for cluster systems. The possibility and effectiveness of using double arithmetic (GMP-library) and its influence on the temporal characteristics of parallel implementation are separately investigated. Additionally, the paper analyzes the dependence of the parallelism characteristics of the methods on the use of Streaming SIMD-Extension (SSE). The scientific novelty of the field lies in the development and development of analytical models for processes of interprocessor exchange given by cluster systems, which allow to increase the efficiency of parallel implementations of numerical methods in rich dynamic tasks for a short period of time on a communication warehouse. The practical significance of the field is in the development of split models for the analysis of the efficiency of collective operations of multiple data transfer within the MPI interface, which is the current standard for software implementation of robots with shared memory. Possibility of arranging floating arithmetic based on the GMP library was added, which is especially important for the development of folding problems of dynamic problems with singularities. "
17

Skuratovskii, Ruslan, and Volodymyr Osadchyy. "Criterions of Supersinguliarity and Groups of Montgomery and Edwards Curves in Cryptography." WSEAS TRANSACTIONS ON MATHEMATICS 19 (March 1, 2021): 709–22. http://dx.doi.org/10.37394/23206.2020.19.77.

Abstract:
We consider the algebraic affine and projective Edwards curves over the finite field Fpⁿ. It is well known that many modern cryptosystems can be naturally transferred to elliptic curves. Criteria for the supersingularity of Montgomery and Edwards curves are found. In this paper, we extend our previous research into Edwards algebraic curves over a finite field and construct a birational isomorphism between them and cubics in Weierstrass normal form. One class of twisted Edwards curves is also studied. We propose a novel effective method of point counting for both Edwards and elliptic curves. In addition to finding a specific set of coefficients with corresponding field characteristics for which these curves are supersingular, we also find a general formula by which one can determine whether or not a curve Ed[Fp] is supersingular over this field. The method proposed has complexity O(p·log₂²p). This is an improvement over both Schoof's basic algorithm and the variant which makes use of fast arithmetic (suitable only for Elkies or Atkin primes), with complexities O(log₂⁸pⁿ) and O(log₂⁴pⁿ) respectively. The embedding degree of the supersingular Edwards curve over Fpⁿ in a finite field is additionally investigated. Singular points of twisted Edwards curves are completely described. Owing to the birational isomorphism between twisted Edwards curves and elliptic curves in Weierstrass normal form, the result on the order of such a curve over a finite field is extended to cubics in Weierstrass normal form. The minimum degree of an isogeny (distance) between curves of these two classes, when such an isogeny exists, is also considered. We extend the existing isogenies of elliptic curves.
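A small, self-contained illustration (not the paper's point-counting method) of the objects involved: points on an Edwards curve x² + y² = 1 + d·x²y² over F_p with the Edwards addition law and a naive point count. The prime p = 13 and coefficient d = 2 (a non-square mod 13, so the addition law is complete) are illustrative choices, not values from the paper.

```python
# Edwards curve x^2 + y^2 = 1 + D*x^2*y^2 over F_P with the Edwards addition law.
P_MOD, D = 13, 2   # illustrative prime and non-square coefficient

def ed_add(P1, P2):
    """Edwards addition law; complete because D is a non-square mod P_MOD."""
    (x1, y1), (x2, y2) = P1, P2
    t = D * x1 * x2 * y1 * y2 % P_MOD
    x3 = (x1 * y2 + y1 * x2) * pow((1 + t) % P_MOD, -1, P_MOD) % P_MOD
    y3 = (y1 * y2 - x1 * x2) * pow((1 - t) % P_MOD, -1, P_MOD) % P_MOD
    return x3, y3

def curve_points():
    return [(x, y) for x in range(P_MOD) for y in range(P_MOD)
            if (x * x + y * y - 1 - D * x * x * y * y) % P_MOD == 0]

if __name__ == "__main__":
    pts = curve_points()
    print("number of points:", len(pts))      # naive O(p^2) count of the curve order
    Q = next(pt for pt in pts if pt != (0, 1))
    assert ed_add(Q, (0, 1)) == Q              # (0, 1) is the neutral element
    assert ed_add(Q, Q) in pts                 # doubling stays on the curve
```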
18

BARNES, JAMES S., JUN LE GOH, and RICHARD A. SHORE. "THEOREMS OF HYPERARITHMETIC ANALYSIS AND ALMOST THEOREMS OF HYPERARITHMETIC ANALYSIS." Bulletin of Symbolic Logic 28, no. 1 (March 2022): 133–49. http://dx.doi.org/10.1017/bsl.2021.70.

Abstract:
Theorems of hyperarithmetic analysis (THAs) occupy an unusual neighborhood in the realms of reverse mathematics and recursion-theoretic complexity. They lie above all the fixed (recursive) iterations of the Turing jump but below ATR$_0$ (and so $\Pi_1^1$-CA$_0$ or the hyperjump). There is a long history of proof-theoretic principles which are THAs. Until the papers reported on in this communication, there was only one mathematical example. Barnes, Goh, and Shore [1] analyze an array of ubiquity theorems in graph theory descended from Halin’s [9] work on rays in graphs. They seem to be typical applications of ACA$_0$ but are actually THAs. These results answer Question 30 of Montalbán’s Open Questions in Reverse Mathematics [19] and supply several other natural principles of different and unusual levels of complexity. This work led in [25] to a new neighborhood of the reverse mathematical zoo: almost theorems of hyperarithmetic analysis (ATHAs). When combined with ACA$_0$ they are THAs but on their own are very weak. Denizens both mathematical and logical are provided. Generalizations of several conservativity classes ($\Pi_1^1$, r-$\Pi_1^1$, and Tanaka) are defined and these ATHAs as well as many other principles are shown to be conservative over RCA$_0$ in all these senses and weak in other recursion-theoretic ways as well. These results answer a question raised by Hirschfeldt and reported in [19] by providing a long list of pairs of principles one of which is very weak over RCA$_0$ but over ACA$_0$ is equivalent to the other which may be strong (THA) or very strong going up a standard hierarchy and at the end being stronger than full second-order arithmetic.
19

ENDRULLIS, JÖRG, DIMITRI HENDRIKS, RENA BAKHSHI, and GRIGORE ROŞU. "On the complexity of stream equality." Journal of Functional Programming 24, no. 2-3 (January 20, 2014): 166–217. http://dx.doi.org/10.1017/s0956796813000324.

Abstract:
We study the complexity of deciding the equality of streams specified by systems of equations. There are several notions of stream models in the literature, each generating a different semantics of stream equality. We pinpoint the complexity of each of these notions in the arithmetical or analytical hierarchy. Their complexity ranges from low levels of the arithmetical hierarchy such as Π⁰₂ for the most relaxed stream models, to levels of the analytical hierarchy such as Π¹₁ and up to subsuming the entire analytical hierarchy for more restrictive but natural stream models. Since all these classes properly include both the semi-decidable and co-semi-decidable classes, it follows that regardless of the stream semantics employed, there is no complete proof system or algorithm for determining equality or inequality of streams. We also discuss several related problems, such as the existence and uniqueness of stream solutions for systems of equations, as well as the equality of such solutions.
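A hedged illustration of why stream equality is hard to decide: comparing finite prefixes can only ever refute equality, never confirm it. Below, two different specifications of the Thue-Morse stream (an example chosen here, not taken from the paper) agree on every prefix we test, yet no amount of prefix checking constitutes a proof that they are equal.

```python
# Two corecursive specifications of the Thue-Morse stream, compared on a prefix.
from itertools import islice

def tm_bitcount():
    """t(n) = parity of the number of 1 bits in the binary expansion of n."""
    n = 0
    while True:
        yield bin(n).count("1") % 2
        n += 1

def tm_morphism():
    """Fixed point of the morphism 0 -> 01, 1 -> 10, starting from 0."""
    word, i = [0], 0
    while True:
        while i >= len(word):
            word = [b for a in word for b in (a, 1 - a)]
        yield word[i]
        i += 1

prefix = 1000
print(list(islice(tm_bitcount(), 16)))
print(all(a == b for a, b in islice(zip(tm_bitcount(), tm_morphism()), prefix)))
```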
20

Chung, Heewon, Myungsun Kim, Ahmad Al Badawi, Khin Mi Mi Aung, and Bharadwaj Veeravalli. "Homomorphic Comparison for Point Numbers with User-Controllable Precision and Its Applications." Symmetry 12, no. 5 (May 8, 2020): 788. http://dx.doi.org/10.3390/sym12050788.

Abstract:
This work is mainly interested in ensuring users’ privacy in asymmetric computing, such as cloud computing. In particular, because lots of user data are expressed in non-integer data types, privacy-enhanced applications built on fully homomorphic encryption (FHE) must support real-valued comparisons due to the ubiquity of real numbers in real-world applications. However, as FHE schemes operate in specific domains, such as that of congruent integers, most FHE-based solutions focus only on homomorphic comparisons of integers. Attempts to overcome this barrier can be grouped into two classes. Given point numbers in the form of approximate real numbers, one class of solution uses a special-purpose encoding to represent the point numbers, whereas the other class constructs a dedicated FHE scheme to encrypt point numbers directly. The solutions in the former class may provide depth-efficient arithmetic (i.e., logarithmic depth in the size of the data), but not depth-efficient comparisons between FHE-encrypted point numbers. The second class may avoid this problem, but it requires the precision of point numbers to be determined before the FHE setup is run. Thus, the precision of the data cannot be controlled once the setup is complete. Furthermore, because the precision accuracy is closely related to the sizes of the encryption parameters, increasing the precision of point numbers results in increasing the sizes of the FHE parameters, which increases the sizes of the public keys and ciphertexts, incurring more expensive computation and storage. Unfortunately, this problem also occurs in many of the proposals that fall into the first class. In this work, we are interested in depth-efficient comparison over FHE-encrypted point numbers. In particular, we focus on enabling the precision of point numbers to be manipulated after the system parameters of the underlying FHE scheme are determined, and even after the point numbers are encrypted. To this end, we encode point numbers in continued fraction (CF) form. Therefore, our work lies in the first class of solutions, except that our CF-based approach allows depth-efficient homomorphic comparisons (more precisely, the complexity of the comparison is O(log κ + log n) for a number of partial quotients n and their bit length κ, which is normally small) while allowing users to determine the precision of the encrypted point numbers when running their applications. We develop several useful applications (e.g., sorting) that leverage our CF-based homomorphic comparisons.
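A plaintext sketch of the continued-fraction encoding the paper relies on, using the standard rule that the comparison direction flips at odd-indexed partial quotients; the paper's contribution is performing this comparison homomorphically over FHE ciphertexts, which is not attempted here.

```python
# Continued-fraction encoding of numbers and comparison on the quotients alone.
from fractions import Fraction
from math import inf

def cf(x, max_quotients=32):
    """Partial quotients of a (rational approximation of a) number via the Euclidean algorithm."""
    x = Fraction(x).limit_denominator(10**12)
    p, d, q = x.numerator, x.denominator, []
    while d and len(q) < max_quotients:
        a, r = divmod(p, d)
        q.append(a)
        p, d = d, r
    return q

def cf_compare(a, b):
    """Return -1, 0 or 1 for a<b, a==b, a>b, looking only at the CF quotients."""
    qa, qb = cf(a), cf(b)
    for k in range(max(len(qa), len(qb))):
        x = qa[k] if k < len(qa) else inf   # a terminated: treat as +infinity
        y = qb[k] if k < len(qb) else inf
        if x != y:
            # at even positions a larger quotient means a larger number;
            # at odd positions the direction is reversed
            return (-1 if x < y else 1) * (1 if k % 2 == 0 else -1)
    return 0

if __name__ == "__main__":
    print(cf(Fraction(10, 7)))                          # [1, 2, 3]
    print(cf_compare(Fraction(3, 2), Fraction(10, 7)))  # 1, since 3/2 > 10/7
```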
21

TANAKA, Hisao. "Determining the Levels of Some Special Complexity Classes of Sets in the Kleene Arithmetical Hierarchy." Tokyo Journal of Mathematics 17, no. 1 (June 1994): 125–33. http://dx.doi.org/10.3836/tjm/1270128190.

22

Finashin, S., and V. Kharlamov. "On the deformation chirality of real cubic fourfolds." Compositio Mathematica 145, no. 5 (September 2009): 1277–304. http://dx.doi.org/10.1112/s0010437x09004126.

Abstract:
According to our previous results, the conjugacy class of the involution induced by the complex conjugation in the homology of a real non-singular cubic fourfold determines the fourfold up to projective equivalence and deformation. Here, we show how to eliminate the projective equivalence and obtain a pure deformation classification, that is, how to respond to the chirality problem: which cubics are not deformation equivalent to their image under a mirror reflection. We provide an arithmetical criterion of chirality, in terms of the eigen-sublattices of the complex conjugation involution in homology, and show how this criterion can be effectively applied taking as examples M-cubics (that is, those for which the real locus has the richest topology) and (M−1)-cubics (the next case with respect to complexity of the real locus). It happens that there is one chiral class of M-cubics and three chiral classes of (M−1)-cubics, in contrast to two achiral classes of M-cubics and three achiral classes of (M−1)-cubics.
23

Nisan, Noam, and Avi Wigderson. "Lower Bounds on Arithmetic Circuits via Partial Derivatives (Preliminary Version)." BRICS Report Series 2, no. 43 (June 13, 1995). http://dx.doi.org/10.7146/brics.v2i43.19944.

Abstract:
In this paper we describe a new technique for obtaining lower bounds on restricted classes of non-monotone arithmetic circuits. The heart of this technique is a complexity measure for multivariate polynomials, based on the linear span of their partial derivatives. We use the technique to obtain new lower bounds for computing symmetric polynomials and iterated matrix products.
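A small illustration of the complexity measure named in this abstract, restricted to first-order partial derivatives to keep the example short (the paper uses the span of partial derivatives of all orders); the polynomials below are illustrative, not examples from the paper.

```python
# Dimension of the linear span of the first-order partial derivatives of a polynomial.
import sympy as sp

def dim_partial_derivative_span(poly, variables):
    derivs = [sp.Poly(sp.diff(poly, v), *variables) for v in variables]
    monomials = sorted({m for d in derivs for m in d.monoms()})
    # coefficient matrix: one row per derivative, one column per monomial
    rows = [[dict(d.terms()).get(m, 0) for m in monomials] for d in derivs]
    return sp.Matrix(rows).rank()

x, y, z = sp.symbols('x y z')
e2 = x*y + y*z + x*z                                            # elementary symmetric polynomial
print(dim_partial_derivative_span(e2, (x, y, z)))               # 3: derivatives are independent
print(dim_partial_derivative_span((x + y + z)**2, (x, y, z)))   # 1: all derivatives proportional
```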
24

Steiner, Matthias Johann. "A lower bound for differential uniformity by multiplicative complexity & bijective functions of multiplicative complexity 1 over finite fields." Cryptography and Communications, August 15, 2023. http://dx.doi.org/10.1007/s12095-023-00661-3.

Abstract:
The multiplicative complexity of an S-box over a finite field is the minimum number of multiplications needed to implement the S-box as an arithmetic circuit. In this paper we fully characterize bijective S-boxes with multiplicative complexity 1 up to affine equivalence over any finite field. We show that under affine equivalence in odd characteristic there are two classes of bijective functions and in even characteristic there are three classes of bijective functions with multiplicative complexity 1. Moreover, in (Jeon et al., Cryptogr. Commun., 14(4), 849-874 (2022)) A-boxes were introduced to lower bound the differential uniformity of an S-box over $\mathbb{F}_2^n$ via its multiplicative complexity. We generalize this concept to arbitrary finite fields. In particular, we show that the differential uniformity of an (n, m)-S-box over $\mathbb{F}_q$ is at least $q^{n - l}$, where $\lfloor \frac{n - 1}{2} \rfloor + l$ is the multiplicative complexity of the S-box.
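A toy illustration of the two quantities the paper relates, using the cube map x ↦ x³ on GF(2³) (a choice made here for brevity, not an example from the paper): x³ = x²·x costs one field multiplication since squaring is F₂-linear, so its multiplicative complexity is 1, and its differential uniformity is read off the difference distribution table.

```python
# Multiplicative complexity vs. differential uniformity for x -> x^3 on GF(2^3).
IRRED = 0b1011          # x^3 + x + 1, a defining irreducible polynomial of GF(2^3)

def gf_mul(a, b, mod=IRRED, deg=3):
    """Carry-less multiplication with reduction modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= mod
    return r

def sbox(x):
    return gf_mul(gf_mul(x, x), x)     # x^3 = x^2 * x: one "real" multiplication

def differential_uniformity(f, n=3):
    best = 0
    for da in range(1, 1 << n):
        for db in range(1 << n):
            count = sum(1 for x in range(1 << n) if f(x ^ da) ^ f(x) == db)
            best = max(best, count)
    return best

print([sbox(x) for x in range(8)])      # a bijection on GF(2^3)
print(differential_uniformity(sbox))    # 2: the cube map is APN on GF(2^3)
```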
25

Ollinger, Nicolas, and Guillaume Theyssier. "Freezing, Bounded-Change and Convergent Cellular Automata." Discrete Mathematics & Theoretical Computer Science 24, no. 1, Automata, Logic and Semantics (January 31, 2022). http://dx.doi.org/10.46298/dmtcs.5734.

Abstract:
This paper studies three classes of cellular automata from a computational point of view: freezing cellular automata where the state of a cell can only decrease according to some order on states, cellular automata where each cell only makes a bounded number of state changes in any orbit, and finally cellular automata where each orbit converges to some fixed point. Many examples studied in the literature fit into these definitions, in particular the works on crystal growth started by S. Ulam in the 60s. The central question addressed here is how the computational power and computational hardness of basic properties is affected by the constraints of convergence, bounded number of changes, or local decreasing of states in each cell. By studying various benchmark problems (short-term prediction, long term reachability, limits) and considering various complexity measures and scales (LOGSPACE vs. PTIME, communication complexity, Turing computability and arithmetical hierarchy) we give a rich and nuanced answer: the overall computational complexity of such cellular automata depends on the class considered (among the three above), the dimension, and the precise problem studied. In particular, we show that all settings can achieve universality in the sense of Blondel-Delvenne-Kůrka, although short term predictability varies from NLOGSPACE to P-complete. Besides, the computability of limit configurations starting from computable initial configurations separates bounded-change from convergent cellular automata in dimension 1, but also dimension 1 versus higher dimensions for freezing cellular automata. Another surprising dimension-sensitive result obtained is that nilpotency becomes decidable in dimension 1 for all three classes, while it stays undecidable even for freezing cellular automata in higher dimension.
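A tiny 1D example of a freezing cellular automaton in the sense above, assuming a made-up rule: states are ordered 1 > 0 and a cell falls to 0 as soon as some neighbour is 0, so states only decrease, each cell changes at most once, and every orbit reaches a fixed point.

```python
# A freezing 1D cellular automaton on a cyclic configuration of 0/1 states.
import random

def step(config):
    n = len(config)
    return [0 if (config[i] == 0 or config[(i - 1) % n] == 0 or config[(i + 1) % n] == 0)
            else 1 for i in range(n)]

random.seed(0)
config = [random.choice([0, 1]) for _ in range(20)]
changes = [0] * len(config)
for _ in range(30):
    nxt = step(config)
    changes = [c + (a != b) for c, a, b in zip(changes, config, nxt)]
    config = nxt

print(config)                     # a fixed point (all zeros unless no 0 was present)
print(max(changes) <= 1)          # freezing: each cell changed at most once -> True
```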