Journal articles on the topic 'Logical encodings'

Consult the top 50 journal articles for your research on the topic 'Logical encodings.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kovács, Tibor, Gábor Simon, and Gergely Mezei. "Benchmarking Graph Database Backends—What Works Well with Wikidata?" Acta Cybernetica 24, no. 1 (May 21, 2019): 43–60. http://dx.doi.org/10.14232/actacyb.24.1.2019.5.

Full text
Abstract:
Knowledge bases often utilize graphs as their logical model. RDF-based knowledge bases (KB) are prime examples, as RDF (Resource Description Framework) uses a graph as its logical model. Graph databases are an emerging breed of NoSQL-type databases, offering a graph as the logical model. Although there are specialized databases, the so-called triple stores, for storing RDF data, graph databases can also be promising candidates for storing knowledge. In this paper, we benchmark different graph database implementations loaded with Wikidata, a real-life, large-scale knowledge base. Graph databases come in all shapes and sizes and offer different APIs and graph models; hence, we used a measurement system that can abstract away the API differences. For the modeling aspect, we made measurements with different graph encodings previously suggested in the literature in order to observe the impact of the encoding aspect on the overall performance.
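The encoding question this abstract raises can be made concrete with a small sketch. Below is an illustrative, pure-Python comparison of two common ways to map RDF triples onto a graph model; all names are hypothetical and this is not the paper's benchmark code:

```python
# Two toy encodings of RDF triples into a property-graph model.
triples = [
    ("Q42", "occupation", "writer"),
    ("Q42", "citizenOf", "UK"),
]

def encode_direct(triples):
    """Each triple becomes one labelled edge: (subject) -[predicate]-> (object)."""
    nodes, edges = set(), []
    for s, p, o in triples:
        nodes.update([s, o])
        edges.append((s, p, o))
    return nodes, edges

def encode_reified(triples):
    """Each triple becomes a statement node linked to its subject, predicate and
    object, which makes it easy to attach qualifiers (as Wikidata does)."""
    nodes, edges = set(), []
    for i, (s, p, o) in enumerate(triples):
        stmt = f"stmt{i}"  # hypothetical statement-node naming
        nodes.update([s, o, stmt])
        edges += [(stmt, "subject", s), (stmt, "predicate", p), (stmt, "object", o)]
    return nodes, edges

direct_nodes, direct_edges = encode_direct(triples)
reified_nodes, reified_edges = encode_reified(triples)
```

The reified encoding trades extra nodes and edges for the ability to qualify individual statements, which is exactly the kind of encoding choice whose performance impact the paper measures.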
APA, Harvard, Vancouver, ISO, and other styles
2

Yoder, Theodore J., and Isaac H. Kim. "The surface code with a twist." Quantum 1 (April 25, 2017): 2. http://dx.doi.org/10.22331/q-2017-04-25-2.

Abstract:
The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme of doing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.
3

Steiner, Erich. "Ideational grammatical metaphor." Languages in Contrast 4, no. 1 (April 14, 2004): 137–64. http://dx.doi.org/10.1075/lic.4.1.07ste.

Abstract:
In this paper I want to explore the systemic-functional notion of ‘grammatical metaphor’ from a cross-linguistic perspective. After a brief introduction to the concept of ‘grammatical metaphor’, I shall discuss the distinction between ‘congruent’ and ‘metaphorical’ encodings of meaning, as well as the distinction between rankshift, transcategorization, and grammatical metaphor as semogenic resources (Section 1). In a second section, I shall then focus on ideational grammatical metaphors in English and German and revisit the notion of direct vs. indirect mapping of experiential and logical semantics onto lexicogrammar (Section 2). It will be argued that ‘directness of encoding’ within one language can be defined with the help of the concept of ‘transparency’ or ‘motivation’ of encoding between levels. Across and between languages, however, the notion of ‘directness’ either has to be seen from the perspective of one of the languages involved, or from the perspective of a generalized semantics and grammar. In Section 3, I shall then explore the question of the experiential vs. logical encoding of semantic categories across languages, and of how this relates to metaphoricity. I shall exemplify and discuss the fact that in cross-linguistic analyses, one cannot consider any one of a given set of experiential or logical encodings of some unit of meaning as ‘congruent’ or ‘direct’, as long as one does not have a cross-linguistic semantics to establish ‘motivation’ and ‘transparentness’ on. It will also be argued that some of the differences in texts across languages as to what counts as ‘congruent’ can be predicted from comparisons between the language-specific grammatical systems involved. Other differences, however, seem to rely heavily on registerial influences and cultural factors. 
In Section 4, then, I shall inquire into the question of whether and precisely in what sense we can speak of two different types of grammatical metaphor, dependent on whether they involve a relocation in rank or a mere re-arrangement of mappings of semantic and lexicogrammatical functions. These types of metaphor, it will be argued, have different implications for the metaphoricity of the clause as a whole, as well as for the ‘density’ of the packaging of meaning.
4

Pal, Amit Kumar, Philipp Schindler, Alexander Erhard, Ángel Rivas, Miguel-Angel Martin-Delgado, Rainer Blatt, Thomas Monz, and Markus Müller. "Relaxation times do not capture logical qubit dynamics." Quantum 6 (January 24, 2022): 632. http://dx.doi.org/10.22331/q-2022-01-24-632.

Abstract:
Quantum error correction procedures have the potential to enable faithful operation of large-scale quantum computers. They protect information from environmental decoherence by storing it in logical qubits, built from ensembles of entangled physical qubits according to suitably tailored quantum error correcting encodings. To date, no generally accepted framework to characterise the behaviour of logical qubits as quantum memories has been developed. In this work, we show that generalisations of well-established figures of merit of physical qubits, such as relaxation times, to logical qubits fail and do not capture dynamics of logical qubits. We experimentally illustrate that, in particular, spatial noise correlations can give rise to rich and counter-intuitive dynamical behavior of logical qubits. We show that a suitable set of observables, formed by code space population and logical operators within the code space, allows one to track and characterize the dynamical behaviour of logical qubits. Awareness of these effects and the efficient characterisation tools used in this work will help to guide and benchmark experimental implementations of logical qubits.
5

Scala, Enrico, Miquel Ramírez, Patrik Haslum, and Sylvie Thiebaux. "Numeric Planning with Disjunctive Global Constraints via SMT." Proceedings of the International Conference on Automated Planning and Scheduling 26 (March 30, 2016): 276–84. http://dx.doi.org/10.1609/icaps.v26i1.13766.

Abstract:
This paper describes a novel encoding for sequential numeric planning into the problem of determining the satisfiability of a logical theory T. We introduce a novel technique, orthogonal to existing work aiming at producing more succinct encodings, that enables the theory solver to roll up an unbounded yet finite number of instances of an action into a single plan step, greatly reducing the horizon at which T models valid plans. The technique is then extended to deal with problems featuring disjunctive global constraints, in which the state space becomes a non-convex n-dimensional polytope. In order to empirically evaluate the encoding, we build a planner, SPRINGROLL, around a state-of-the-art, off-the-shelf SMT solver. Experiments on a diverse set of domains are finally reported, and the results show the generality and efficiency of the approach.
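The roll-up idea can be illustrated with a toy sketch: instead of one action occurrence per plan step, each step carries a repeat count, so a much shorter horizon suffices. Brute-force enumeration stands in for the SMT solver here; nothing below is the paper's actual encoding.

```python
from itertools import product

INIT, GOAL, DELTA = 0, 20, 5   # x starts at 0; one action adds 5; goal: x >= 20

def solve(horizon, max_count):
    """Enumerate per-step repeat counts (a stand-in for the SMT solver's
    search) and return the first assignment whose final state reaches the goal."""
    for counts in product(range(max_count + 1), repeat=horizon):
        if INIT + DELTA * sum(counts) >= GOAL:
            return counts
    return None

unit_plan = solve(horizon=4, max_count=1)     # unit encoding needs 4 steps
short_fail = solve(horizon=3, max_count=1)    # ...and fails at horizon 3
rolled_plan = solve(horizon=1, max_count=10)  # rolled-up encoding: horizon 1 suffices
```

The rolled-up encoding reaches the goal at horizon 1 by repeating the action four times in a single step, which is the horizon reduction the abstract describes.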
6

CAVE, ANDREW, and BRIGITTE PIENTKA. "Mechanizing proofs with logical relations – Kripke-style." Mathematical Structures in Computer Science 28, no. 9 (August 2, 2018): 1606–38. http://dx.doi.org/10.1017/s0960129518000154.

Abstract:
Proofs with logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe two case studies using the proof environment Beluga: first, we explain the mechanization of the weak normalization proof for the simply typed lambda-calculus; second, we outline how to mechanize the completeness proof of algorithmic equality for simply typed lambda-terms, where we reason about logically equivalent terms. The development of these proofs in Beluga relies on three key ingredients: (1) we encode lambda-terms together with their typing rules, operational semantics, and algorithmic and declarative equality using higher-order abstract syntax (HOAS), thereby avoiding the need to manipulate and deal with binders, renaming and substitutions; (2) we take advantage of Beluga's support for representing derivations that depend on assumptions and first-class contexts to directly state inductive properties such as logical relations and inductive proofs; (3) we exploit Beluga's rich equational theory for simultaneous substitutions; as a consequence, users do not need to establish and subsequently use substitution properties, and proofs are not cluttered with references to them. We believe these examples demonstrate that Beluga provides the right level of abstractions and primitives to mechanize challenging proofs using HOAS encodings. It also may serve as a valuable benchmark for other proof environments.
7

Dennis, Louise A., Martin Mose Bentzen, Felix Lindner, and Michael Fisher. "Verifiable Machine Ethics in Changing Contexts." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11470–78. http://dx.doi.org/10.1609/aaai.v35i13.17366.

Abstract:
Many systems proposed for the implementation of ethical reasoning involve an encoding of user values as a set of rules or a model. We consider the question of how changes of context affect these encodings. We propose the use of a reasoning cycle, in which information about the ethical reasoner's context is imported in a logical form, and we propose that context-specific aspects of an ethical encoding be prefaced by a guard formula. This guard formula should evaluate to true when the reasoner is in the appropriate context and the relevant parts of the reasoner's rule set or model should be updated accordingly. This architecture allows techniques for the model-checking of agent-based autonomous systems to be used to verify that all contexts respect key stakeholder values. We implement this framework using the hybrid ethical reasoning agents system (HERA) and the model-checking agent programming languages (MCAPL) framework.
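The guard-formula architecture can be sketched in a few lines. The rule names and context fields below are invented for illustration; the actual HERA/MCAPL encoding differs:

```python
# Context-guarded rules: each rule carries a guard formula over the context,
# and only rules whose guards hold contribute to the active rule set.
rules = [
    {"guard": lambda ctx: ctx["location"] == "hospital", "rule": "silence_alarms_gently"},
    {"guard": lambda ctx: ctx["location"] == "road", "rule": "prioritise_pedestrians"},
    {"guard": lambda ctx: True, "rule": "never_deceive_user"},  # context-independent
]

def active_rules(ctx):
    """Import the context in logical form and keep exactly the rules whose
    guard formulas evaluate to true in that context."""
    return [r["rule"] for r in rules if r["guard"](ctx)]

hospital = active_rules({"location": "hospital"})
road = active_rules({"location": "road"})
```

A model checker can then verify, for every reachable context, that the resulting active rule set respects the stakeholder values of interest.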
8

RABE, FLORIAN. "A logical framework combining model and proof theory." Mathematical Structures in Computer Science 23, no. 5 (March 1, 2013): 945–1001. http://dx.doi.org/10.1017/s0960129512000424.

Abstract:
Mathematical logic and computer science have driven the design of a growing number of logics and related formalisms such as set theories and type theories. In response to this population explosion, logical frameworks have been developed as formal meta-languages in which to represent, structure, relate and reason about logics. Research on logical frameworks has diverged into separate communities, often with conflicting backgrounds and philosophies. In particular, two of the most important logical frameworks are the framework of institutions, from the area of model theory based on category theory, and the Edinburgh Logical Framework LF, from the area of proof theory based on dependent type theory. Even though their ultimate motivations overlap – for example in applications to software verification – they have fundamentally different perspectives on logic. In the current paper, we design a logical framework that integrates the frameworks of institutions and LF in a way that combines their complementary advantages while retaining the elegance of each of them. In particular, our framework takes a balanced approach between model theory and proof theory, and permits the representation of logics in a way that comprises all major ingredients of a logic: syntax, models, satisfaction, judgments and proofs. This provides a theoretical basis for the systematic study of logics in a comprehensive logical framework. Our framework has been applied to obtain a large library of structured and machine-verified encodings of logics and logic translations.
9

Locher, David F., Lorenzo Cardarelli, and Markus Müller. "Quantum Error Correction with Quantum Autoencoders." Quantum 7 (March 9, 2023): 942. http://dx.doi.org/10.22331/q-2023-03-09-942.

Abstract:
Active quantum error correction is a central ingredient to achieve robust quantum processors. In this paper we investigate the potential of quantum machine learning for quantum error correction in a quantum memory. Specifically, we demonstrate how quantum neural networks, in the form of quantum autoencoders, can be trained to learn optimal strategies for active detection and correction of errors, including spatially correlated computational errors as well as qubit losses. We highlight that the denoising capabilities of quantum autoencoders are not limited to the protection of specific states but extend to the entire logical codespace. We also show that quantum neural networks can be used to discover new logical encodings that are optimally adapted to the underlying noise. Moreover, we find that, even in the presence of moderate noise in the quantum autoencoders themselves, they may still be successfully used to perform beneficial quantum error correction and thereby extend the lifetime of a logical qubit.
10

Hardie, Andrew. "From legacy encodings to Unicode: the graphical and logical principles in the scripts of South Asia." Language Resources and Evaluation 41, no. 1 (April 4, 2007): 1–25. http://dx.doi.org/10.1007/s10579-006-9003-7.

11

Bruni, Roberto, Furio Honsell, Marina Lenisa, and Marino Miculan. "Comparing Higher-Order Encodings in Logical Frameworks and Tile Logic." Electronic Notes in Theoretical Computer Science 62 (June 2002): 136–56. http://dx.doi.org/10.1016/s1571-0661(04)00324-x.

12

Zhan, Yuan, Paul Hilaire, Edwin Barnes, Sophia E. Economou, and Shuo Sun. "Performance analysis of quantum repeaters enabled by deterministically generated photonic graph states." Quantum 7 (February 16, 2023): 924. http://dx.doi.org/10.22331/q-2023-02-16-924.

Abstract:
By encoding logical qubits into specific types of photonic graph states, one can realize quantum repeaters that enable fast entanglement distribution rates approaching classical communication. However, the generation of these photonic graph states requires a formidable resource overhead using traditional approaches based on linear optics. Overcoming this challenge, a number of new schemes have been proposed that employ quantum emitters to deterministically generate photonic graph states. Although these schemes have the potential to significantly reduce the resource cost, a systematic comparison of the repeater performance among different encodings and different generation schemes is lacking. Here, we quantitatively analyze the performance of quantum repeaters based on two different graph states, i.e. the tree graph states and the repeater graph states. For both states, we compare the performance between two generation schemes, one based on a single quantum emitter coupled to ancillary matter qubits, and one based on a single quantum emitter coupled to a delayed feedback. We identify the numerically optimal scheme at different system parameters. Our analysis provides a clear guideline on the selection of the generation scheme for graph-state-based quantum repeaters, and lays out the parameter requirements for future experimental realizations of different schemes.
13

Toninho, Bernardo, and Nobuko Yoshida. "On Polymorphic Sessions and Functions." ACM Transactions on Programming Languages and Systems 43, no. 2 (July 2021): 1–55. http://dx.doi.org/10.1145/3457884.

Abstract:
This work exploits the logical foundation of session types to determine what kind of type discipline for the π-calculus can exactly capture, and is captured by, λ-calculus behaviours. Leveraging the proof-theoretic content of the soundness and completeness of sequent calculus and natural deduction presentations of linear logic, we develop the first mutually inverse and fully abstract processes-as-functions and functions-as-processes encodings between a polymorphic session π-calculus and a linear formulation of System F. We are then able to derive results of the session calculus from the theory of the λ-calculus: (1) we obtain a characterisation of inductive and coinductive session types via their algebraic representations in System F; and (2) we extend our results to account for value and process passing, entailing strong normalisation.
14

Strickland, Brent, Carlo Geraci, Emmanuel Chemla, Philippe Schlenker, Meltem Kelepir, and Roland Pfau. "Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases." Proceedings of the National Academy of Sciences 112, no. 19 (April 27, 2015): 5968–73. http://dx.doi.org/10.1073/pnas.1423080112.

Abstract:
According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., “decide,” “sell,” “die”) encode a logical endpoint, whereas atelic verbs (e.g., “think,” “negotiate,” “run”) do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1–5, nonsigning English speakers accurately distinguished between telic (e.g., “decide”) and atelic (e.g., “think”) signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7–10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible “mapping biases” between telicity and visual form.
15

Demeter, David, and Doug Downey. "Just Add Functions: A Neural-Symbolic Language Model." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7634–42. http://dx.doi.org/10.1609/aaai.v34i05.6264.

Abstract:
Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these models (formed by the distributional hypothesis of language), while ideally suited to modeling most running text, results in key limitations for today's models. In particular, the models often struggle to learn certain spatial, temporal, or quantitative relationships, which are commonplace in text and are second nature for human readers. Yet, in many cases, these relationships can be encoded with simple mathematical or logical expressions. How can we augment today's neural models with such encodings? In this paper, we propose a general methodology to enhance the inductive bias of NNLMs by incorporating simple functions into a neural architecture to form a hierarchical neural-symbolic language model (NSLM). These functions explicitly encode symbolic deterministic relationships to form probability distributions over words. We explore the effectiveness of this approach on numbers and geographic locations, and show that NSLMs significantly reduce perplexity in small-corpus language modeling, and that the performance improvement persists for rare tokens even on much larger corpora. The approach is simple and general, and we discuss how it can be applied to other word classes beyond numbers and geography.
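The mixture idea can be sketched as follows; the gate value, vocabulary, and successor rule are invented toys, not the paper's trained NSLM:

```python
def successor_dist(prev_number, vocab):
    """A symbolic rule as a distribution: after seeing n, put all mass on n+1
    (e.g. for sequences of page or list numbers)."""
    target = str(prev_number + 1)
    return {w: (1.0 if w == target else 0.0) for w in vocab}

def mix(p_lexical, p_symbolic, gate):
    """Hierarchical mixture: `gate` is the probability of deferring to the
    symbolic expert instead of the ordinary lexical distribution."""
    words = set(p_lexical) | set(p_symbolic)
    return {w: (1 - gate) * p_lexical.get(w, 0.0) + gate * p_symbolic.get(w, 0.0)
            for w in words}

lexical = {"the": 0.5, "7": 0.25, "9": 0.25}       # toy neural LM distribution
symbolic = successor_dist(6, vocab=["7", "8", "9"])  # deterministic expert
mixed = mix(lexical, symbolic, gate=0.5)
```

The symbolic expert concentrates mass on the correct next number, so the mixed distribution assigns it far more probability than the lexical model alone would.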
16

Mena López, Arturo, and Lian-Ao Wu. "Protectability of IBMQ Qubits by Dynamical Decoupling Technique." Symmetry 15, no. 1 (December 26, 2022): 62. http://dx.doi.org/10.3390/sym15010062.

Abstract:
We study the current effectiveness of the dynamical decoupling technique on a publicly accessible IBM quantum computer (IBMQ). This technique, also known as bang-bang decoupling or dynamical symmetrization, consists of applying sequences of pulses for protecting a qubit from decoherence by symmetrizing the qubit–environment interactions. Works in the field have studied sequences with different symmetries and carried out tests on IBMQ devices typically considering single-qubit states. We show that the simplest universal sequences can be interesting for preserving two-qubit states on the IBMQ device. For this, we considered a collection of single-qubit and two-qubit states. The results indicate that a simple dynamical decoupling approach using available IBMQ pulses is not enough for protecting a general single-qubit state without further care. Nevertheless, the technique is beneficial for the Bell states. This encouraged us to study logical qubit encodings such as |0⟩L≡|01⟩, |1⟩L≡|10⟩, where a quantum state has the form |ψab⟩=a|0⟩L+b|1⟩L. Thus, we explored the effectiveness of dynamical decoupling with a large set of two-qubit |ψab⟩ states, where a and b are real amplitudes. With this, we also determined the |ψab⟩ states that benefit most from this dynamical decoupling approach and slowed down the decay of their survival probability.
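A minimal sketch of the encoding described in the abstract, using plain Python lists for two-qubit amplitudes (the helper names are ours, not the paper's):

```python
import math

def logical_state(a, b):
    """Amplitude vector of a|0>_L + b|1>_L over the basis |00>, |01>, |10>, |11>,
    with the abstract's encoding |0>_L = |01>, |1>_L = |10>."""
    assert math.isclose(a * a + b * b, 1.0), "real amplitudes must be normalised"
    return [0.0, a, b, 0.0]

def survival_probability(v, w):
    """Probability of finding state w when measuring state v (real amplitudes)."""
    return sum(x * y for x, y in zip(v, w)) ** 2

# The Bell-like state (|01> + |10>)/sqrt(2) is one of the encoded states studied.
bell_like = logical_state(1 / math.sqrt(2), 1 / math.sqrt(2))
```

Every |ψab⟩ state lives entirely in the {|01⟩, |10⟩} subspace, which is what makes it a candidate for protection by symmetrizing pulse sequences.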
17

Raveendran, Nithin, Narayanan Rengaswamy, Filip Rozpędek, Ankur Raina, Liang Jiang, and Bane Vasić. "Finite Rate QLDPC-GKP Coding Scheme that Surpasses the CSS Hamming Bound." Quantum 6 (July 20, 2022): 767. http://dx.doi.org/10.22331/q-2022-07-20-767.

Abstract:
Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered the individual code qubits being encoded with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code, and then imposed an outer discrete-variable code such as the surface code on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands a lot of resources with growing distance. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first show the noise thresholds for two lifted product QLDPC code families, and then show the improvements of noise thresholds when the iterative decoder – a hardware-friendly min-sum algorithm (MSA) – utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder in escaping harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work on channel capacity under GKP analog information, and on improving decoder design and analysis.
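Why analog (soft) information helps a decoder can already be seen in a classical toy, far simpler than the paper's GKP-QLDPC scheme:

```python
def hard_decode(llrs):
    """Threshold each log-likelihood ratio to a bit, then majority-vote."""
    bits = [1 if llr < 0 else 0 for llr in llrs]
    return 1 if sum(bits) > len(bits) / 2 else 0

def soft_decode(llrs):
    """Sum the analog LLRs before deciding (positive sum means bit 0)."""
    return 0 if sum(llrs) > 0 else 1

# Codeword 000 sent over a 3-bit repetition code; two bits arrive weakly
# corrupted and one arrives with high confidence.
llrs = [-0.4, -0.3, 5.0]   # sign gives the hard decision, magnitude the confidence
hard = hard_decode(llrs)   # the majority of hard decisions is wrong here
soft = soft_decode(llrs)   # the analog sum recovers the transmitted bit
```

The min-sum decoder in the paper exploits the same principle: the confidence carried by the inner GKP measurement outcomes steers the outer iterative decoding.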
18

Dubois, Didier, Henri Prade, and Agnès Rico. "The logical encoding of Sugeno integrals." Fuzzy Sets and Systems 241 (April 2014): 61–75. http://dx.doi.org/10.1016/j.fss.2013.12.014.

19

Porncharoenwase, Sorawee, Luke Nelson, Xi Wang, and Emina Torlak. "A formal foundation for symbolic evaluation with merging." Proceedings of the ACM on Programming Languages 6, POPL (January 16, 2022): 1–28. http://dx.doi.org/10.1145/3498709.

Abstract:
Reusable symbolic evaluators are a key building block of solver-aided verification and synthesis tools. A reusable evaluator reduces the semantics of all paths in a program to logical constraints, and a client tool uses these constraints to formulate a satisfiability query that is discharged with SAT or SMT solvers. The correctness of the evaluator is critical to the soundness of the tool and the domain properties it aims to guarantee. Yet so far, the trust in these evaluators has been based on an ad-hoc foundation of testing and manual reasoning. This paper presents the first formal framework for reasoning about the behavior of reusable symbolic evaluators. We develop a new symbolic semantics for these evaluators that incorporates state merging. Symbolic evaluators use state merging to avoid path explosion and generate compact encodings. To accommodate a wide range of implementations, our semantics is parameterized by a symbolic factory, which abstracts away the details of merging and creation of symbolic values. The semantics targets a rich language that extends Core Scheme with assumptions and assertions, and thus supports branching, loops, and (first-class) procedures. The semantics is designed to support reusability, by guaranteeing two key properties: legality of the generated symbolic states, and the reducibility of symbolic evaluation to concrete evaluation. Legality makes it simpler for client tools to formulate queries, and reducibility enables testing of client tools on concrete inputs. We use the Lean theorem prover to mechanize our symbolic semantics, prove that it is sound and complete with respect to the concrete semantics, and prove that it guarantees legality and reducibility. To demonstrate the generality of our semantics, we develop Leanette, a reference evaluator written in Lean, and Rosette 4, an optimized evaluator written in Racket. 
We prove Leanette correct with respect to the semantics, and validate Rosette 4 against Leanette via solver-aided differential testing. To demonstrate the practicality of our approach, we port 16 published verification and synthesis tools from Rosette 3 to Rosette 4. Rosette 3 is an existing reusable evaluator that implements the classic merging semantics, adopted from bounded model checking. Rosette 4 replaces the semantic core of Rosette 3 but keeps its optimized symbolic factory. Our results show that Rosette 4 matches the performance of Rosette 3 across a wide range of benchmarks, while providing a cleaner interface that simplifies the implementation of client tools.
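The merging idea can be sketched in miniature. The helper names below are invented and do not reflect Rosette's or Leanette's actual API:

```python
def ite(cond, then_val, else_val):
    """Symbolic if-then-else: folds to a concrete branch when cond is concrete,
    playing the role of the symbolic factory's merge operation."""
    if cond is True:
        return then_val
    if cond is False:
        return else_val
    return ("ite", cond, then_val, else_val)

def sym_abs(x):
    """Evaluate abs(x) with merging: both branches are computed and merged
    into one state, instead of forking two execution paths."""
    symbolic = isinstance(x, str)            # a string stands for a symbolic variable
    cond = ("<", x, 0) if symbolic else (x < 0)
    negated = ("neg", x) if symbolic else -x
    return ite(cond, negated, x)

concrete = sym_abs(-3)   # concrete input: evaluation reduces to concrete evaluation
merged = sym_abs("x")    # symbolic input: a single merged ite(...) state
```

The concrete case illustrates the reducibility property the paper proves: on concrete inputs, the merged symbolic semantics collapses to ordinary evaluation.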
20

Liao, Xiaojuan, and Miyuki Koshimura. "A comparative analysis and improvement of MaxSAT encodings for coalition structure generation under MC-nets." Journal of Logic and Computation 29, no. 6 (July 30, 2019): 913–31. http://dx.doi.org/10.1093/logcom/exz017.

Abstract:
Coalition structure generation (CSG) is one of the main research issues in the use of coalitional games in multiagent systems, and weighted partial MaxSAT (WPM) encodings, i.e. rule relation-based WPM (RWPM) and agent relation-based WPM (AWPM), are efficient for solving the CSG problem. Existing studies show that AWPM surpasses RWPM since it achieves a more compact encoding; it generates fewer variables and clauses than RWPM. However, in this paper, we focus on a special case in which the two encodings generate identical numbers of variables and clauses. Experiments show that RWPM surprisingly has a dominant advantage over AWPM, which aroused our interest. We explore the underlying reason and find that it is the redundancy in encoding transitive laws in RWPM that leads to this situation. Finally, we remove redundant clauses for transitive laws in RWPM and develop an improved RWPM with refined transitive laws to solve the CSG problem. Experiments demonstrate that the refined encoding is more compact and efficient than previous WPM encodings.
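The kind of redundancy described, duplicate transitive-law clauses, can be illustrated with a toy clause emitter over same-coalition variables (a hypothetical sketch, not the RWPM/AWPM encodings themselves):

```python
from itertools import combinations, permutations

def clause(i, j, k):
    """CNF clause for: x_ij and x_jk imply x_ik (variables are symmetric, x_ij == x_ji)."""
    key = lambda a, b: (min(a, b), max(a, b))
    return frozenset({("neg", key(i, j)), ("neg", key(j, k)), ("pos", key(i, k))})

def naive_clauses(n):
    """Emit one transitive-law clause per ordered triple of distinct agents,
    which duplicates every clause (a reversed triple yields the same clause)."""
    return [clause(i, j, k) for i, j, k in permutations(range(n), 3)]

def refined_clauses(n):
    """Emit only three clauses per unordered triple, which already cover all cases."""
    out = set()
    for i, j, k in combinations(range(n), 3):
        out |= {clause(i, j, k), clause(j, i, k), clause(i, k, j)}
    return out

generated = naive_clauses(5)   # 60 clauses emitted...
distinct = set(generated)      # ...but only 30 are distinct
refined = refined_clauses(5)   # the refined emission produces exactly those 30
```

Removing the duplicate emissions halves the clause count here without changing the encoded constraint, mirroring the paper's refinement of RWPM's transitive laws.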
21

McLauchlan, Campbell, and Benjamin Béri. "A new twist on the Majorana surface code: Bosonic and fermionic defects for fault-tolerant quantum computation." Quantum 8 (July 10, 2024): 1400. http://dx.doi.org/10.22331/q-2024-07-10-1400.

Abstract:
Majorana zero modes (MZMs) are promising candidates for topologically-protected quantum computing hardware, however their large-scale use will likely require quantum error correction. Majorana surface codes (MSCs) have been proposed to achieve this. However, many MSC properties remain unexplored. We present a unified framework for MSC "twist defects" – anyon-like objects encoding quantum information. We show that twist defects in MSCs can encode twice the amount of topologically protected information as in qubit-based codes or other MSC encoding schemes. This is due to twists encoding both logical qubits and "logical MZMs," with the latter enhancing the protection microscopic MZMs can offer. We explain how to perform universal computation with logical qubits and logical MZMs while potentially using far fewer resources than in other MSC schemes. All Clifford gates can be implemented on logical qubits by braiding twist defects. We introduce lattice-surgery-based techniques for computing with logical MZMs and logical qubits, achieving the effect of Clifford gates with zero time overhead. We also show that logical MZMs may result in improved spatial overheads for sufficiently low rates of quasi-particle poisoning. Finally, we introduce a novel MSC analogue of transversal gates that achieves encoded Clifford gates in small codes by braiding microscopic MZMs. MSC twist defects thus open new paths towards fault-tolerant quantum computation.
22

HARPER, ROBERT, and DANIEL R. LICATA. "Mechanizing metatheory in a logical framework." Journal of Functional Programming 17, no. 4-5 (July 2007): 613–73. http://dx.doi.org/10.1017/s0956796807006430.

Full text
Abstract:
The LF logical framework codifies a methodology for representing deductive systems, such as programming languages and logics, within a dependently typed λ-calculus. In this methodology, the syntactic and deductive apparatus of a system is encoded as the canonical forms of associated LF types; an encoding is correct (adequate) if and only if it defines a compositional bijection between the apparatus of the deductive system and the associated canonical forms. Given an adequate encoding, one may establish metatheoretic properties of a deductive system by reasoning about the associated LF representation. The Twelf implementation of the LF logical framework is a convenient and powerful tool for putting this methodology into practice. Twelf supports both the representation of a deductive system and the mechanical verification of proofs of metatheorems about it. The purpose of this article is to provide an up-to-date overview of the LF λ-calculus, the LF methodology for adequate representation, and the Twelf methodology for mechanizing metatheory. We begin by defining a variant of the original LF language, called Canonical LF, in which only canonical forms (long βη-normal forms) are permitted. This variant is parameterized by a subordination relation, which enables modular reasoning about LF representations. We then give an adequate representation of a simply typed λ-calculus in Canonical LF, both to illustrate adequacy and to serve as an object of analysis. Using this representation, we formalize and verify the proofs of some metatheoretic results, including preservation, determinacy, and strengthening. Each example illustrates a significant aspect of using LF and Twelf for formalized metatheory.
APA, Harvard, Vancouver, ISO, and other styles
23

Cayrol, Claudette, and Marie-Christine Lagasquie-Schiex. "Logical Encoding of Argumentation Frameworks with Higher-order Attacks and Evidential Supports." International Journal on Artificial Intelligence Tools 29, no. 03n04 (June 2020): 2060003. http://dx.doi.org/10.1142/s0218213020600039.

Full text
Abstract:
We propose a logical encoding of argumentation frameworks with higher-order interactions (i.e. attacks/supports whose targets are arguments or other attacks/supports) with an evidential meaning for supports. Our purpose is to separate the logical expression of the meaning of an attack or an evidential support (simple or higher-order) from the logical expression of acceptability semantics. We consider semantics which specify the conditions under which the arguments (resp. the attacks/supports) are considered as accepted, directly on the extended framework, without translating the original framework into a Dung’s argumentation framework. We characterize the output of a given framework in logical terms (namely as particular models of a logical theory). Our proposal applies to the particular case of Dung’s frameworks, making it possible to recover the standard extensions.
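For the particular case of Dung's frameworks mentioned above, the idea of characterizing a framework's output as the models of a logical theory can be sketched by brute-force model enumeration over a hypothetical three-argument framework (an illustrative toy, not the authors' encoding):

```python
from itertools import product

# Hypothetical three-argument framework: a attacks b, b attacks c
args = ['a', 'b', 'c']
attacks = {('a', 'b'), ('b', 'c')}

def is_stable(bits):
    """S is a stable extension iff it is conflict-free and
    attacks every argument outside S."""
    S = {x for x, v in zip(args, bits) if v}
    conflict_free = not any((x, y) in attacks for x in S for y in S)
    attacks_outside = all(any((x, y) in attacks for x in S)
                          for y in args if y not in S)
    return conflict_free and attacks_outside

# enumerate the models of the "stability theory" over three variables
stable = [{x for x, v in zip(args, bits) if v}
          for bits in product([False, True], repeat=len(args))
          if is_stable(bits)]
print(stable)  # the only stable extension is {'a', 'c'}
```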
APA, Harvard, Vancouver, ISO, and other styles
24

Jenkins, Christopher, and Aaron Stump. "Monotone recursive types and recursive data representations in Cedille." Mathematical Structures in Computer Science 31, no. 6 (June 2021): 682–745. http://dx.doi.org/10.1017/s0960129521000402.

Full text
Abstract:
Guided by Tarski’s fixpoint theorem in order theory, we show how to derive monotone recursive types with constant-time roll and unroll operations within Cedille, an impredicative, constructive, and logically consistent pure typed lambda calculus. This derivation takes place within the preorder on Cedille types induced by type inclusions, a notion which is expressible within the theory itself. As applications, we use monotone recursive types to generically derive two recursive representations of data in lambda calculus, the Parigot and Scott encodings. For both encodings, we prove induction and examine the computational and extensional properties of their destructor, iterator, and primitive recursor in Cedille. For our Scott encoding in particular, we translate into Cedille a construction due to Lepigre and Raffalli (2019) that equips Scott naturals with primitive recursion, then extend this construction to derive a generic induction principle. This allows us to give efficient and provably unique (up to function extensionality) solutions for the iteration and primitive recursion schemes for Scott-encoded data.
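The computational idea behind the Scott encoding (constant-time case analysis on the outermost constructor) can be sketched with untyped lambdas; this is Python rather than Cedille's type theory, so nothing about typing or induction is captured:

```python
# Scott-encoded naturals as Python closures. zero and succ(n) simply
# remember which constructor they are; a number is consumed by passing
# it a handler for each constructor (case analysis).
zero = lambda s, z: z
succ = lambda n: (lambda s, z: s(n))

def to_int(n):
    return n(lambda p: 1 + to_int(p), 0)

# The destructor inspects only the outermost constructor, so pred is
# constant-time, unlike with the Church encoding.
pred = lambda n: n(lambda p: p, zero)

three = succ(succ(succ(zero)))
print(to_int(three))        # 3
print(to_int(pred(three)))  # 2
```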
APA, Harvard, Vancouver, ISO, and other styles
25

Aulia, Fatimah Ihza, and Ika Kurniasari. "Student's Error Analysis In Solving Definite Integral Problem Based On Multiple Intelligences." MATHEdunesa 11, no. 1 (January 30, 2022): 320–27. http://dx.doi.org/10.26740/mathedunesa.v11n1.p320-327.

Full text
Abstract:
One of the factors behind students' errors in solving mathematical problems is the student's intelligence. Gardner identified eight types of multiple intelligences: linguistic, logical-mathematical, kinesthetic, musical, spatial, interpersonal, intrapersonal, and naturalistic. Students' errors can be analyzed using Newman's error analysis, which distinguishes five types of errors: reading errors, comprehension errors, transformation errors, process skill errors, and encoding errors. This research is descriptive qualitative research which aims to describe the types of students' errors in solving definite integral problems based on multiple intelligences. The subjects of this research are three grade XII senior high school students in Sidoarjo who have intelligences related to mathematics: logical-mathematical intelligence, linguistic intelligence, and spatial intelligence. Data were collected through a multiple intelligences test, definite integral problems, and interviews. The data analysis techniques are data reduction, data presentation, and data verification. The results of this research showed that: (1) students with logical-mathematical intelligence make transformation errors, process skill errors, and encoding errors; (2) students with linguistic intelligence make reading errors, comprehension errors, transformation errors, process skill errors, and encoding errors; (3) students with spatial intelligence make reading errors, transformation errors, and encoding errors.
APA, Harvard, Vancouver, ISO, and other styles
26

Xiao, Huifu, Dezhao Li, Zilong Liu, Xu Han, Wenping Chen, Ting Zhao, Yonghui Tian, and Jianhong Yang. "Experimental realization of a CMOS-compatible optical directed priority encoder using cascaded micro-ring resonators." Nanophotonics 7, no. 4 (March 28, 2018): 727–33. http://dx.doi.org/10.1515/nanoph-2018-0005.

Full text
Abstract:
In this paper, we propose and experimentally demonstrate an integrated optical device that can implement the logical function of priority encoding from a 4-bit electrical signal to a 2-bit optical signal. As a proof of concept, a thermo-optic modulation scheme is adopted to tune each micro-ring resonator (MRR). A monochromatic light at the working wavelength is coupled into the input port of the device through a lensed fiber, and the four input electrical logic signals, regarded as the signals to be encoded, are applied to the micro-heaters above the four MRRs to control the working states of the optical switches. The encoding results are directed to the output ports in the form of light. Finally, the logical function of priority encoding with an operation speed of 10 kbps is demonstrated successfully.
APA, Harvard, Vancouver, ISO, and other styles
27

Kozyriev, Andrii, and Ihor Shubin. "The method of linear-logical operators and logical equations in information extraction tasks." INNOVATIVE TECHNOLOGIES AND SCIENTIFIC SOLUTIONS FOR INDUSTRIES, no. 1 (27) (July 2, 2024): 81–95. http://dx.doi.org/10.30837/itssi.2024.27.081.

Full text
Abstract:
Relational and logical methods of knowledge representation play a key role in creating a mathematical basis for information systems. Predicate algebra and predicate operators are among the most effective tools for describing information in detail. These tools make it easy to formulate formalized information, create database queries, and simulate human activity. In the context of the growing need for reliable and efficient data selection, a need for deeper analysis arises. The subject of the study is the theory of quantum linear equations based on the algebra of linear predicate operations, the formal apparatus of linear logic operators, and methods for solving logical equations in information extraction tasks. The aim of the study is to develop a method for using linear logic operators and logical equations to extract information. This approach can significantly optimize the process of extracting the necessary information, even from huge databases. The main tasks are: analysis of existing approaches to information extraction; consideration of the theory of linear logic operators; study of methods for reducing logic to an algebraic form; and analysis of logical spaces, the algebra of finite predicate actions, and the theory of linear logic operators. The research methods involve a systematic analysis of the mathematical structure of the algebra of finite predicates and predicate functions to identify the key elements that affect the query formation process. A method of using linear logic operators and logical equations for information extraction is proposed. The results of the study showed that this method is a universal and adaptive tool for working with algebraic data structures. It can be applied in a wide range of information extraction tasks and has proved its value as one of the possible methods of information processing. Conclusion. The paper investigates formal methods of intelligent systems, in particular ways of representing knowledge in accordance with the peculiarities of the field of application, and the language that allows encoding this knowledge for storage in computer memory. The proposed method can be implemented in the development of language interfaces for automated information access systems, in search engine algorithms, for logical analysis of information in databases and expert systems, as well as in tasks related to object recognition and classification.
APA, Harvard, Vancouver, ISO, and other styles
28

Quan, Dongxiao, Chensong Liu, Xiaojie Lv, and Changxing Pei. "Implementation of Fault-Tolerant Encoding Circuit Based on Stabilizer Implementation and “Flag” Bits in Steane Code." Entropy 24, no. 8 (August 11, 2022): 1107. http://dx.doi.org/10.3390/e24081107.

Full text
Abstract:
Quantum error correction (QEC) is an effective way to overcome quantum noise and decoherence; meanwhile, the fault tolerance of the encoding circuit, syndrome measurement circuit, and logical gate realization circuit must be ensured so as to achieve reliable quantum computing. The Steane code, proposed in 1996, is one of the most famous codes; however, its classical encoding circuit based on stabilizer implementation is not fault-tolerant. In this paper, we propose a method to design a fault-tolerant encoding circuit for Calderbank-Shor-Steane (CSS) codes based on stabilizer implementation and "flag" bits. We use the Steane code as an example to depict in detail the fault-tolerant encoding circuit design process, including the logical operation implementation, the stabilizer implementation, and the "flag" qubit design. The simulation results show that, assuming only one quantum gate can fail with a certain probability p, the classical encoding circuit has logical errors proportional to p; our proposed circuit is fault-tolerant because, with the help of the "flag" bits, all types of errors in the encoding process can be accurately and uniquely determined and then fixed. If every gate can fail with a certain probability p, which is the actual situation, the proposed encoding circuit will also fail with some probability, but its error rate is greatly reduced from p to p^2 compared with the original circuit. This encoding circuit design process can be extended to other CSS codes to improve the correctness of the encoding circuit.
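The reduction from error rate p to p^2 follows from a simple counting argument: a circuit that tolerates any single fault fails only when two or more faults coincide. A toy model with independent gate failures (the gate count is hypothetical, for illustration only) shows the scaling:

```python
from math import comb

def failure_rate(m, p, tolerated_faults=0):
    """Probability that more than `tolerated_faults` of m independent
    gates fail, each failing with probability p."""
    ok = sum(comb(m, j) * p**j * (1 - p)**(m - j)
             for j in range(tolerated_faults + 1))
    return 1 - ok

p = 1e-3
m = 20  # hypothetical gate count, for illustration only
print(failure_rate(m, p))     # ~2.0e-2: any one fault breaks a bare circuit, O(p)
print(failure_rate(m, p, 1))  # ~1.9e-4: flags catch single faults, O(p^2)
```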
APA, Harvard, Vancouver, ISO, and other styles
29

Dong, Li, Jun-Xi Wang, Qing-Yang Li, Hong-Zhi Shen, Hai-Kuan Dong, Xiao-Ming Xiu, and Ya-Jun Gao. "Single logical qubit information encoding scheme with the minimal optical decoherence-free subsystem." Optics Letters 41, no. 5 (February 29, 2016): 1030. http://dx.doi.org/10.1364/ol.41.001030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, Na-Li, Sheng-Dong Zhao, Hao-Wen Dong, Yue-Sheng Wang, and Chuanzeng Zhang. "Reflection-type broadband coding metasurfaces for acoustic focusing and splitting." Applied Physics Letters 120, no. 14 (April 4, 2022): 142201. http://dx.doi.org/10.1063/5.0087339.

Full text
Abstract:
In this paper, we propose a class of reflection-type broadband acoustic coding metasurfaces (BACMs), which are composed of two square helical channels and a connected air cavity at the end of the channels. This helical-cavity coupled structure is selected as logical unit "1," a pure air hole is set as logical unit "0," and the reflective phase difference of the two units is approximately equal to π over a broad frequency range. More importantly, we reveal a somewhat unconventional mechanism of coupling resonance between the helical channel and the air cavity underlying the broadband characteristic, which can hardly be realized by traditional space-coiling or Helmholtz-resonator metasurfaces. We show that the 0/1 encoding can be reconfigured simply by inserting the spiral structure or not. By encoding the sequence of the logical units in the BACMs, a broadband acoustic focusing lens and an acoustic splitter within the frequency range of [2.4, 5.6] kHz are demonstrated numerically and experimentally. Our study may find applications in the field of acoustic wave devices.
APA, Harvard, Vancouver, ISO, and other styles
31

Chen, Hongxiang, Michael Vasmer, Nikolas P. Breuckmann, and Edward Grant. "Automated discovery of logical gates for quantum error correction (with Supplementary (153 pages))." Quantum Information and Computation 22, no. 11&12 (August 2022): 947–64. http://dx.doi.org/10.26421/qic22.11-12-3.

Full text
Abstract:
Quantum error correcting codes protect quantum computation from errors caused by decoherence and other noise. Here we study the problem of designing logical operations for quantum error correcting codes. We present an automated procedure that generates logical operations given known encoding and correcting procedures. Our technique is to use variational circuits for learning both the logical gates and the physical operations implementing them. This procedure can be implemented on near-term quantum computers via quantum process tomography. It enables automatic discovery of logical gates from analytically designed error correcting codes and can be extended to error correcting codes found by numerical optimization. We test the procedure by simulating small quantum codes of four to fifteen qubits, showing that our procedure finds most logical gates known in the current literature. Additionally, it generates logical gates not found in the current literature for the [[5,1,2]] code, the [[6,3,2]] code, the [[8,3,2]] code, and the [[10,1,2]] code.
APA, Harvard, Vancouver, ISO, and other styles
32

CABALAR, PEDRO, MARTÍN DIÉGUEZ, and CONCEPCIÓN VIDAL. "An infinitary encoding of temporal equilibrium logic." Theory and Practice of Logic Programming 15, no. 4-5 (July 2015): 666–80. http://dx.doi.org/10.1017/s1471068415000307.

Full text
Abstract:
This paper studies the relation between two recent extensions of propositional Equilibrium Logic, a well-known logical characterisation of Answer Set Programming. In particular, we show how Temporal Equilibrium Logic, which introduces modal operators as those typically handled in Linear-Time Temporal Logic (LTL), can be encoded into Infinitary Equilibrium Logic, a recent formalisation that allows the use of infinite conjunctions and disjunctions. We prove the correctness of this encoding and, as an application, we further use it to show that the semantics of the temporal logic programming formalism called TEMPLOG is subsumed by Temporal Equilibrium Logic.
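Schematically, such an encoding rests on the familiar unfolding of temporal modalities into infinitary connectives (an illustration of the general idea, not the paper's exact translation):

```latex
% Unfolding of LTL modalities into infinitary connectives
% ($\circ^{i}$ abbreviates $i$ nested "next" operators):
\[
  \Box \varphi \;\equiv\; \bigwedge_{i \geq 0} \circ^{i} \varphi ,
  \qquad
  \Diamond \varphi \;\equiv\; \bigvee_{i \geq 0} \circ^{i} \varphi .
\]
```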
APA, Harvard, Vancouver, ISO, and other styles
33

Baggio, Giacomo, Francesco Ticozzi, Peter D. Johnson, and Lorenza Viola. "Dissipative encoding of quantum information." Quantum Information and Computation 21, no. 9-10 (August 2021): 737–70. http://dx.doi.org/10.26421/qic21.9-10-2.

Full text
Abstract:
We formalize the problem of dissipative quantum encoding, and explore the advantages of using Markovian evolution to prepare a quantum code in the desired logical space, with emphasis on discrete-time dynamics and the possibility of exact finite-time convergence. In particular, we investigate robustness of the encoding dynamics and their ability to tolerate initialization errors, thanks to the existence of non-trivial basins of attraction. As a key application, we show that for stabilizer quantum codes on qubits, a finite-time dissipative encoder may always be constructed, by using at most a number of quantum maps determined by the number of stabilizer generators. We find that even in situations where the target code lacks gauge degrees of freedom in its subsystem form, dissipative encoders afford nontrivial robustness against initialization errors, thus overcoming a limitation of purely unitary encoding procedures. Our general results are illustrated in a number of relevant examples, including Kitaev's toric code.
APA, Harvard, Vancouver, ISO, and other styles
34

Konno, Shunya, Warit Asavanant, Fumiya Hanamura, Hironari Nagayoshi, Kosuke Fukui, Atsushi Sakaguchi, Ryuhoh Ide, et al. "Logical states for fault-tolerant quantum computation with propagating light." Science 383, no. 6680 (January 19, 2024): 289–93. http://dx.doi.org/10.1126/science.adk7560.

Full text
Abstract:
To harness the potential of a quantum computer, quantum information must be protected against error by encoding it into a logical state that is suitable for quantum error correction. The Gottesman-Kitaev-Preskill (GKP) qubit is a promising candidate because the required multiqubit operations are readily available at optical frequency. To date, however, GKP qubits have been demonstrated only at mechanical and microwave frequencies. We realized a GKP state in propagating light at telecommunication wavelength and verified it through homodyne measurements without loss corrections. The generation is based on interference of cat states, followed by homodyne measurements. Our final states exhibit nonclassicality and non-Gaussianity, including the trident shape of faint instances of GKP states. Improvements toward brighter, multipeaked GKP qubits will be the basis for quantum computation with light.
APA, Harvard, Vancouver, ISO, and other styles
35

Cheng, Jianpeng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. "Learning an Executable Neural Semantic Parser." Computational Linguistics 45, no. 1 (March 2019): 59–94. http://dx.doi.org/10.1162/coli_a_00342.

Full text
Abstract:
This article describes a neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response. The parser generates tree-structured logical forms with a transition-based approach, combining a generic tree-generation algorithm with a domain-general grammar defined by the logical language. The generation process is modeled by structured recurrent neural networks, which provide a rich encoding of the sentential context and generation history for making predictions. To tackle mismatches between natural language and logical form tokens, various attention mechanisms are explored. Finally, we consider different training settings for the neural semantic parser, including fully supervised training where annotated logical forms are given, weakly supervised training where denotations are provided, and distant supervision where only unlabeled sentences and a knowledge base are available. Experiments across a wide range of data sets demonstrate the effectiveness of our parser.
APA, Harvard, Vancouver, ISO, and other styles
36

Cheremisinova, L. D. "Formal verification of logical descriptions with functional uncertainty based on logarithmic encoding of conditions." Automation and Remote Control 73, no. 7 (July 2012): 1216–26. http://dx.doi.org/10.1134/s0005117912070119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Pei, Jiangping Zhu, and Zhisheng You. "3-D face registration solution with speckle encoding based spatial-temporal logical correlation algorithm." Optics Express 27, no. 15 (July 12, 2019): 21004. http://dx.doi.org/10.1364/oe.27.021004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Yang, Ping, C. R. Ramakrishnan, and Scott A. Smolka. "A logical encoding of the π-calculus: model checking mobile processes using tabled resolution." International Journal on Software Tools for Technology Transfer 6, no. 1 (April 6, 2004): 38–66. http://dx.doi.org/10.1007/s10009-003-0136-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Grüning, André, and Ioana Sporea. "Supervised Learning of Logical Operations in Layered Spiking Neural Networks with Spike Train Encoding." Neural Processing Letters 36, no. 2 (May 15, 2012): 117–34. http://dx.doi.org/10.1007/s11063-012-9225-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Guo, Zhen, and Munindar P. Singh. "Representing and Determining Argumentative Relevance in Online Discussions: A General Approach." Proceedings of the International AAAI Conference on Web and Social Media 17 (June 2, 2023): 292–302. http://dx.doi.org/10.1609/icwsm.v17i1.22146.

Full text
Abstract:
Understanding an online argumentative discussion is essential for understanding users' opinions on a topic and their underlying reasoning. A key challenge in determining completeness and persuasiveness of argumentative discussions is to assess how arguments under a topic are connected in a logical and coherent manner. Online argumentative discussions, in contrast to essays or face-to-face communication, challenge techniques for judging argument relevance because online discussions involve multiple participants and often exhibit incoherence in reasoning and inconsistencies in writing style. We define relevance as the logical and topical connections between small texts representing argument fragments in online discussions. We provide a corpus comprising pairs of sentences, labeled with argumentative relevance between the sentences in each pair. We propose a computational approach relying on content reduction and a Siamese neural network architecture for modeling argumentative connections and determining argumentative relevance between texts. Experimental results indicate that our approach is effective in measuring relevance between arguments, and outperforms strong and well-adopted baselines. Further analysis demonstrates the benefit of using our argumentative relevance encoding on a downstream task, predicting how impactful an online comment is on a certain topic, compared with an encoding that does not consider logical connections.
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Chenyang, Guannan Ma, Di Wei, Xinru Zhang, Peihan Wang, Cuidan Li, Jing Xing, et al. "Mainstream encoding–decoding methods of DNA data storage." CCF Transactions on High Performance Computing 4, no. 1 (March 2022): 23–33. http://dx.doi.org/10.1007/s42514-022-00094-z.

Full text
Abstract:
DNA storage is a new digital data storage technology based on specific encoding and decoding methods between the 0/1 binary codes of digital data and the A-T-C-G quaternary codes of DNA, and it is expected to develop into a major data storage form in the future due to its advantages (such as high data density, long storage time, low energy consumption, convenience of carrying, concealed transportation, and multiple encryption). In this review, we mainly summarize the recent research advances of four main encoding and decoding methods of DNA storage technology: the early-stage direct mapping method between 0/1 binary and A-T-C-G quaternary codes, fountain codes for higher logical storage density, inner and outer codes for random-access DNA data storage, and the CRISPR-mediated in vivo DNA storage method. The first three encoding/decoding methods belong to in vitro DNA storage, representing the mainstream research and application in DNA storage. Their advantages and disadvantages are also reviewed: the direct mapping method is easy and efficient, but has a high error rate and low logical density; fountain codes can achieve higher storage density but without random access; inner and outer codes add an error-correction design to realize random access at the expense of logical density. This review provides important references for and an improved understanding of DNA storage methods. The development of efficient and accurate DNA storage encoding and decoding methods will play a very important and even decisive role in the transition of DNA storage from the laboratory to practical application, which may fundamentally change the information industry in the future.
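The direct mapping method described above can be sketched in a few lines: each pair of bits maps to one nucleotide (a schematic codec; practical schemes add constraints on GC content and homopolymer runs):

```python
# Direct mapping between bits and nucleotides: two bits per base.
ENCODE = {'00': 'A', '01': 'T', '10': 'C', '11': 'G'}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bits_to_dna(bits):
    assert len(bits) % 2 == 0, "pad the bitstream to an even length"
    return ''.join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(seq):
    return ''.join(DECODE[base] for base in seq)

data = '0110001011'
strand = bits_to_dna(data)
print(strand)                        # TCACG
assert dna_to_bits(strand) == data   # lossless round trip
```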
APA, Harvard, Vancouver, ISO, and other styles
42

KARLSSON, Jens. "Linguistic Temporality, Logical Meaning and Narrative Perspectives: Adverbs /zai/ and /you/ in Modern Standard Chinese." Acta Linguistica Asiatica 1, no. 2 (October 20, 2011): 25–38. http://dx.doi.org/10.4312/ala.1.2.25-38.

Full text
Abstract:
This paper presents an inquiry into some aspects of the meaning and usage of two temporal adverbs, zai (再) and you (又), in Modern Standard Chinese. A decompositional analysis of the semantic encoding of the adverbs is conducted, aiming to better explain their recorded differences in usage. First, a sketch of some of the fundamental features of linguistic temporality is provided in order to model the structure of temporal semantic information encoded in the adverbs. Non-temporal (logical) meaning, such as assertion and inference, is also shown to be an important aspect of the semantic content of the adverbs. The adverbs zai and you are shown to encode the same semantic content except for a difference in viewpoint: the first is prospective, the second retrospective. Concrete linguistic examples reflecting the intrinsic semantic encoding of the adverbs are raised and discussed. It is then argued that, by combining the decompositional analysis with ideas concerning conceptual analogy, some issues raised by Lu and Ma (1999) regarding the usage of zai and you in past and future settings may be resolved.
APA, Harvard, Vancouver, ISO, and other styles
43

Acharya, Rajeev, Igor Aleiner, Richard Allen, Trond I. Andersen, Markus Ansmann, Frank Arute, Kunal Arya, et al. "Suppressing quantum errors by scaling a surface code logical qubit." Nature 614, no. 7949 (February 22, 2023): 676–81. http://dx.doi.org/10.1038/s41586-022-05434-1.

Full text
Abstract:
Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10^−6 logical error per cycle floor set by a single high-energy event (1.6 × 10^−7 excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
APA, Harvard, Vancouver, ISO, and other styles
44

Yonekura, Miki, and Shunji Nishimura. "Fractional Encoding of At-Most-K Constraints on SAT." Electronics 12, no. 15 (July 25, 2023): 3211. http://dx.doi.org/10.3390/electronics12153211.

Full text
Abstract:
The satisfiability problem (SAT) in propositional logic determines if there is an assignment of values that makes a given propositional formula true. Recently, fast SAT solvers have been developed, and SAT encoding research has gained attention. This enables various real-world problems to be transformed into SAT and solved, realizing a solution to the original problems. We propose a new encoding method, Fractional Encoding, which focuses on the At-Most-K constraints—a bottleneck of computational complexity—and reduces the scale of logical expressions by dividing target variables. Furthermore, we confirm that Fractional Encoding outperforms existing methods in terms of the number of generated clauses and required auxiliary variables. Hence, it enables the efficient solving of real-world problems like planning and hardware verification.
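For context, the simplest At-Most-K encoding (the binomial encoding, not the paper's Fractional Encoding) shows the clause blow-up such work aims to reduce: it forbids every subset of k+1 variables from being simultaneously true, at a cost of C(n, k+1) clauses:

```python
from itertools import combinations

def at_most_k(variables, k):
    """Binomial CNF encoding of "at most k of these variables are true":
    for every subset of k+1 variables, at least one must be false.
    Costs C(n, k+1) clauses and no auxiliary variables."""
    return [[-v for v in subset] for subset in combinations(variables, k + 1)]

clauses = at_most_k([1, 2, 3, 4], 2)
print(clauses)  # [[-1, -2, -3], [-1, -2, -4], [-1, -3, -4], [-2, -3, -4]]
```

Encodings such as the sequential counter trade this combinatorial growth for auxiliary variables, which is exactly the design space the abstract's Fractional Encoding targets.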
APA, Harvard, Vancouver, ISO, and other styles
45

Hoernle, Nick, Rafael Michael Karampatsis, Vaishak Belle, and Kobi Gal. "MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5700–5709. http://dx.doi.org/10.1609/aaai.v36i5.20512.

Full text
Abstract:
We propose a novel way to incorporate expert knowledge into the training of deep neural networks. Many approaches encode domain constraints directly into the network architecture, requiring non-trivial or domain-specific engineering. In contrast, our approach, called MultiplexNet, represents domain knowledge as a quantifier-free logical formula in disjunctive normal form (DNF) which is easy to encode and to elicit from human experts. It introduces a latent Categorical variable that learns to choose which constraint term optimizes the error function of the network and it compiles the constraints directly into the output of existing learning algorithms. We demonstrate the efficacy of this approach empirically on several classical deep learning tasks, such as density estimation and classification in both supervised and unsupervised settings where prior knowledge about the domains was expressed as logical constraints. Our results show that the MultiplexNet approach learned to approximate unknown distributions well, often requiring fewer data samples than the alternative approaches. In some cases, MultiplexNet finds better solutions than the baselines; or solutions that could not be achieved with the alternative approaches. Our contribution is in encoding domain knowledge in a way that facilitates inference. We specifically focus on quantifier-free logical formulae that are specified over the output domain of a network. We show that this approach is both efficient and general; and critically, our approach guarantees 100% constraint satisfaction in a network's output.
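The constraint format is easy to picture: a quantifier-free DNF over the output domain is a disjunction of terms, and an output is valid iff at least one term is satisfied. A hypothetical toy version with per-dimension interval bounds (the paper handles general quantifier-free formulae):

```python
# A DNF constraint over the output domain: a list of terms, each term a
# conjunction of per-dimension interval bounds (illustrative toy format).
dnf = [
    {'x': (0.0, 1.0), 'y': (0.0, 0.5)},  # term 1: 0<=x<=1 and 0<=y<=0.5
    {'x': (2.0, 3.0), 'y': (0.0, 1.0)},  # term 2: 2<=x<=3 and 0<=y<=1
]

def satisfies(point, dnf):
    """An output is valid iff it satisfies at least one DNF term."""
    return any(all(lo <= point[var] <= hi for var, (lo, hi) in term.items())
               for term in dnf)

print(satisfies({'x': 0.3, 'y': 0.2}, dnf))  # True  (term 1 holds)
print(satisfies({'x': 1.5, 'y': 0.2}, dnf))  # False (no term holds)
```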
46

Ntalaperas, Dimitrios, and Nikos Konofaos. "Encoding Two-Qubit Logical States and Quantum Operations Using the Energy States of a Physical System." Technologies 10, no. 1 (December 22, 2021): 1. http://dx.doi.org/10.3390/technologies10010001.

Full text
Abstract:
In this paper, we introduce a novel coding scheme that allows single quantum systems to encode multi-qubit registers, permitting more efficient use of resources and greater economy in the design of quantum systems. The scheme is based on encoding logical quantum states using the charge degree of freedom of the discrete energy spectrum formed by introducing impurities into a semiconductor material. We propose a mechanism for performing single-qubit operations and controlled two-qubit operations, achieving these operations with appropriate pulses generated by Rabi oscillations. The above architecture is simulated using IBM's single-qubit Armonk quantum computer: two logical quantum states are encoded into the energy states of Armonk's qubit, and custom pulses perform one- and two-qubit quantum operations.
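The bookkeeping at the heart of such a scheme, mapping a multi-qubit logical register onto the discrete levels of one physical system, can be sketched as follows (a hypothetical level assignment for illustration only; the paper's actual mapping and pulse design may differ):

```python
# hypothetical assignment of the four two-qubit logical basis
# states to the four lowest discrete energy levels
LOGICAL_TO_LEVEL = {"00": 0, "01": 1, "10": 2, "11": 3}
LEVEL_TO_LOGICAL = {v: k for k, v in LOGICAL_TO_LEVEL.items()}

def encode(bits):
    """Two logical qubits -> one energy level of the physical system."""
    return LOGICAL_TO_LEVEL[bits]

def decode(level):
    """One energy level -> two-qubit logical basis state."""
    return LEVEL_TO_LOGICAL[level]

def logical_x_on_qubit2(level):
    """A logical X gate on the second qubit becomes a transition
    between the two levels whose labels differ in the last bit."""
    b = decode(level)
    return encode(b[0] + ("1" if b[1] == "0" else "0"))
```

Under this view, single- and two-qubit gates are realized not as operations on separate physical qubits but as driven transitions between energy levels of the one system.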
47

Iqlima, Tri Wilfi, and Susanah Susanah. "PROFIL PENALARAN ANALOGI SISWA DALAM PEMECAHAN MASALAH MATEMATIKA DITINJAU DARI KEMAMPUAN MATEMATIKA." MATHEdunesa 9, no. 1 (January 23, 2020): 35–39. http://dx.doi.org/10.26740/mathedunesa.v9n1.p35-39.

Full text
Abstract:
Analogy reasoning is the process of thinking logically and analytically to draw conclusions based on the similarities between two things being compared. The purpose of this study is to describe students' analogy reasoning in solving mathematical problems in terms of high, medium, and low mathematical ability. This research is a descriptive study with a qualitative approach. Data were collected in class IX-H of SMP Negeri 5 Surabaya in the 2019/2020 school year from 33 students, and one subject was selected for each category of mathematical ability. The results of the Problem Solving Tests and interviews show that students with high, medium, and low mathematical ability mention the known information and what is asked, with logical reasons, for both the source and target problems, and explain the relations between the information; this indicates that each subject performs the encoding process. Each subject also mentions and explains the concepts used to solve the source problem, which means each subject performs the inferring process. The difference is that subjects with high mathematical ability mention the same concepts in the source and target problems, explain the concepts used to solve the target problem, and can then complete it; that is, they also perform the two remaining processes, mapping and applying. Subjects with medium mathematical ability mention the same concept in the source and target problems but cannot explain the concept used in the target problem; since they satisfy only one of the two indicators of the mapping process, their analogy reasoning covers encoding and inferring. Students with low mathematical ability stop at the encoding and inferring processes. Keywords: Analogy Reasoning, Mathematical Ability
48

Fields, Chris, James F. Glazebrook, and Antonino Marcianò. "Reference Frame Induced Symmetry Breaking on Holographic Screens." Symmetry 13, no. 3 (March 3, 2021): 408. http://dx.doi.org/10.3390/sym13030408.

Full text
Abstract:
Any interaction between finite quantum systems in a separable joint state can be viewed as encoding classical information on an induced holographic screen. Here we show that when such an interaction is represented as a measurement, the quantum reference frames (QRFs) deployed to identify systems and pick out their pointer states induce decoherence, breaking the symmetry of the holographic encoding in an observer-relative way. Observable entanglement, contextuality, and classical memory are, in this representation, logical and temporal relations between QRFs. Sharing entanglement as a resource requires a priori shared QRFs.
49

Cholak, Peter A., and Leo A. Harrington. "Definable Encodings in the Computably Enumerable Sets." Bulletin of Symbolic Logic 6, no. 2 (June 2000): 185–96. http://dx.doi.org/10.2307/421206.

Full text
Abstract:
The purpose of this communication is to announce some recent results on the computably enumerable sets. There are two disjoint sets of results; the first involves invariant classes and the second involves automorphisms of the computably enumerable sets. What these results have in common is that the core of the proofs of these theorems is a new form of definable coding for the computably enumerable sets. We will work in the structure of the computably enumerable sets. The language is just inclusion, ⊆. This structure is called ε. All sets will be computably enumerable non-computable sets and all degrees will be computably enumerable and non-computable, unless otherwise noted. Our notation and definitions are standard and follow Soare [1987]; however, we will warm up with some definitions and notational issues so the reader need not consult Soare [1987]. Some historical remarks follow in Section 2.1 and throughout Section 3. We will also consider the quotient structure ε modulo the ideal of finite sets, ε*. ε* is a definable quotient structure of ε since "X is finite" is definable in ε; "X is finite" iff all subsets of X are computable (it takes a little computability theory to show that if X is infinite then X has an infinite non-computable subset). We use A* to denote the equivalence class of A under the ideal of finite sets.
50

Waqar Bhat, Mohammad, and Dr Kiran V. "High-Speed 16-bit Carry Bypass Adder Design." International Journal of Research and Review 9, no. 11 (November 3, 2022): 74–78. http://dx.doi.org/10.52403/ijrr.20221112.

Full text
Abstract:
Adders are among the most significant blocks in an arithmetic logic unit. They are employed in a wide range of applications, from incrementing the value of a program variable to high-speed applications such as video encoding and digital-signal processing. There are various adders with varying propagation delays, so selecting an efficient adder is critical for system performance. This research compares the latency of several available 16-bit adders. The delay of logic gates in 180 nm technology has been quantitatively modelled using logical effort. This delay model is then used to calculate the delay of the various adders during simulation in Cadence Virtuoso, which was also used to perform functional verification of the adders. Each adder's critical path delay has also been determined. Based on this research, a new carry bypass adder has been developed to reduce propagation delay. The proposed carry bypass adder reduces combinational path latency by 7.2 percent and logical effort delay by 29.5 percent. Keywords: Algorithms for 16-bit adders, logical effort, propagation delay, carry skip adder
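The carry-bypass (carry-skip) principle behind such an adder can be modelled at the bit level (a behavioural sketch assuming 4-bit blocks, not the paper's gate-level design):

```python
def carry_skip_add(a, b, width=16, block=4):
    """Bit-level model of a carry-skip (carry-bypass) adder.
    Within each block the carry ripples normally; a block whose
    propagate signals (a_i XOR b_i) are all 1 lets the incoming
    carry bypass the block through a fast multiplexer."""
    carry = 0
    result = 0
    for base in range(0, width, block):
        bits = range(base, base + block)
        # block-level propagate: every bit position propagates
        block_propagate = all(((a >> i) & 1) ^ ((b >> i) & 1) for i in bits)
        cin = carry
        for i in bits:
            ai, bi = (a >> i) & 1, (b >> i) & 1
            result |= (ai ^ bi ^ carry) << i
            carry = (ai & bi) | (carry & (ai ^ bi))
        if block_propagate:
            # fast mux path in hardware; numerically the same value,
            # but it shortens the worst-case carry chain
            carry = cin
    return result & ((1 << width) - 1)
```

The model is functionally identical to plain modular addition, which is the point: the bypass changes only the length of the carry path, not the result, which is why the technique reduces propagation delay without affecting correctness.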

To the bibliography