Journal articles on the topic 'Language compilers'

Consult the top 50 journal articles for your research on the topic 'Language compilers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Paraskevopoulou, Zoe, John M. Li, and Andrew W. Appel. "Compositional optimizations for CertiCoq." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–30. http://dx.doi.org/10.1145/3473591.

Abstract:
Compositional compiler verification is a difficult problem that focuses on separate compilation of program components with possibly different verified compilers. Logical relations are widely used in proving correctness of program transformations in higher-order languages; however, they do not scale to compositional verification of multi-pass compilers due to their lack of transitivity. The only known technique to apply to compositional verification of multi-pass compilers for higher-order languages is parametric inter-language simulations (PILS), which is however significantly more complicated than traditional proof techniques for compiler correctness. In this paper, we present a novel verification framework for lightweight compositional compiler correctness. We demonstrate that by imposing the additional restriction that program components are compiled by pipelines that go through the same sequence of intermediate representations, logical relation proofs can be transitively composed in order to derive an end-to-end compositional specification for multi-pass compiler pipelines. Unlike traditional logical-relation frameworks, our framework supports divergence preservation—even when transformations reduce the number of program steps. We achieve this by parameterizing our logical relations with a pair of relational invariants. We apply this technique to verify a multi-pass, optimizing middle-end pipeline for CertiCoq, a compiler from Gallina (Coq’s specification language) to C. The pipeline optimizes and closure-converts an untyped functional intermediate language (ANF or CPS) to a subset of that language without nested functions, which can be easily code-generated to low-level languages. Notably, our pipeline performs more complex closure-allocation optimizations than the state of the art in verified compilation. Using our novel verification framework, we prove an end-to-end theorem for our pipeline that covers both termination and divergence and applies to whole-program and separate compilation, even when different modules are compiled with different optimizations. Our results are mechanized in the Coq proof assistant.
2

Hartel, Pieter H., Marc Feeley, Martin Alt, Lennart Augustsson, Peter Baumann, Marcel Beemster, Emmanuel Chailloux, et al. "Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark." Journal of Functional Programming 6, no. 4 (July 1996): 621–55. http://dx.doi.org/10.1017/s0956796800001891.

Abstract:
Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
3

Serrano, Manuel. "Of JavaScript AOT compilation performance." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–30. http://dx.doi.org/10.1145/3473575.

Abstract:
The fastest JavaScript production implementations use just-in-time (JIT) compilation and the vast majority of academic publications about implementations of dynamic languages published during the last two decades focus on JIT compilation. This does not imply that static compilers (AoT) cannot be competitive; as comparatively little effort has been spent creating fast AoT JavaScript compilers, a scientific comparison is lacking. This paper presents the design and implementation of an AoT JavaScript compiler, focusing on a performance analysis. The paper reports on two experiments, one based on standard JavaScript benchmark suites and one based on new benchmarks chosen for their diversity of styles, authors, sizes, provenance, and coverage of the language. The first experiment shows an advantage to JIT compilers, which is expected after the decades of effort that these compilers have paid to these very tests. The second shows more balanced results, as the AoT compiler generates programs that reach competitive speeds and that consume significantly less memory. The paper presents and evaluates techniques that we have either invented or adapted from other systems, to improve AoT JavaScript compilation.
4

Hardin, Thérèse, Luc Maranget, and Bruno Pagano. "Functional runtime systems within the lambda-sigma calculus." Journal of Functional Programming 8, no. 2 (March 1998): 131–76. http://dx.doi.org/10.1017/s0956796898002986.

Abstract:
We define a weak λ-calculus, λσw, as a subsystem of the full λ-calculus with explicit substitutions λσ⇑. We claim that λσw could be the archetypal output language of functional compilers, just as the λ-calculus is their universal input language. Furthermore, λσ⇑ could be the adequate theory to establish the correctness of functional compilers. Here we illustrate these claims by proving the correctness of four simplified compilers and runtime systems modelled as abstract machines. The four machines we prove are the Krivine machine, the SECD, the FAM and the CAM. Thus, we give the first formal proofs of Cardelli's FAM and of its compiler.
5

Pompougnac, Hugo, Ulysse Beaugnon, Albert Cohen, and Dumitru Potop Butucaru. "Weaving Synchronous Reactions into the Fabric of SSA-form Compilers." ACM Transactions on Architecture and Code Optimization 19, no. 2 (June 30, 2022): 1–25. http://dx.doi.org/10.1145/3506706.

Abstract:
We investigate the programming of reactive systems combining closed-loop control with performance-intensive components such as Machine Learning (ML). Reactive control systems are often safety-critical and associated with real-time execution requirements, a domain of predilection for synchronous programming languages. Extending the high levels of assurance found in reactive control systems to computationally intensive code remains an open issue. We tackle it by unifying concepts and algorithms from synchronous languages with abstractions commonly found in general-purpose and ML compilers. This unification across embedded and high-performance computing enables a high degree of reuse of compiler abstractions and code. We first recall commonalities between dataflow synchronous languages and the static single assignment (SSA) form of general-purpose/ML compilers. We highlight the key mechanisms of synchronous languages that SSA does not cover—denotational concepts such as synchronizing computations with an external time base, cyclic and reactive I/O, as well as the operational notions of relaxing control flow dominance and the modeling of absent values. We discover that initialization-related static analyses and code generation aspects can be fully decoupled from other aspects of synchronous semantics such as memory management and causality analysis, the latter being covered by existing dominance-based algorithms of SSA-form compilers. We show how the SSA form can be seamlessly extended to enable all SSA-based transformations and optimizations on reactive programs with synchronous concurrency. We derive a compilation flow suitable for both high-performance and reactive aspects of a control application, by embedding the Lustre dataflow synchronous language into the SSA-based MLIR/LLVM compiler infrastructure. This allows the modeling of signal processing and deep neural network inference in the (closed) loop of feedback-directed control systems. With only minor efforts leveraging the MLIR infrastructure, the generated code matches or outperforms state-of-the-art synchronous language compilers on computationally intensive ML applications.
6

Michael, Alexandra E., Anitha Gollamudi, Jay Bosamiya, Evan Johnson, Aidan Denlinger, Craig Disselkoen, Conrad Watt, et al. "MSWasm: Soundly Enforcing Memory-Safe Execution of Unsafe Code." Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 425–54. http://dx.doi.org/10.1145/3571208.

Abstract:
Most programs compiled to WebAssembly (Wasm) today are written in unsafe languages like C and C++. Unfortunately, memory-unsafe C code remains unsafe when compiled to Wasm—and attackers can exploit buffer overflows and use-after-frees in Wasm almost as easily as they can on native platforms. Memory-Safe WebAssembly (MSWasm) proposes to extend Wasm with language-level memory-safety abstractions to precisely address this problem. In this paper, we build on the original MSWasm position paper to realize this vision. We give a precise and formal semantics of MSWasm, and prove that well-typed MSWasm programs are, by construction, robustly memory safe. To this end, we develop a novel, language-independent memory-safety property based on colored memory locations and pointers. This property also lets us reason about the security guarantees of a formal C-to-MSWasm compiler—and prove that it always produces memory-safe programs (and preserves the semantics of safe programs). We use these formal results to then guide several implementations: Two compilers of MSWasm to native code, and a C-to-MSWasm compiler (that extends Clang). Our MSWasm compilers support different enforcement mechanisms, allowing developers to make security-performance trade-offs according to their needs. Our evaluation shows that on the PolyBenchC suite, the overhead of enforcing memory safety in software ranges from 22% (enforcing spatial safety alone) to 198% (enforcing full memory safety), and 51.7% when using hardware memory capabilities for spatial safety and pointer integrity. More importantly, MSWasm’s design makes it easy to swap between enforcement mechanisms; as fast (especially hardware-based) enforcement techniques become available, MSWasm will be able to take advantage of these advances almost for free.
7

Dold, Axel, Friedrich von Henke, and Wolfgang Goerigk. "A Completely Verified Realistic Bootstrap Compiler." International Journal of Foundations of Computer Science 14, no. 04 (August 2003): 659–80. http://dx.doi.org/10.1142/s0129054103001947.

Abstract:
This paper reports on a large verification effort in constructing an initial fully trusted bootstrap compiler executable for a realistic system programming language and real target processor. The construction and verification process comprises three tasks: the verification of the compiling specification (a relation between abstract source and target programs) with respect to the language semantics and a realistic correctness criterion. This proof has been completely mechanized using the PVS verification system and is one of the largest case-studies in formal verification we are aware of. Second, the implementation of the specification in the high-level source language following a transformational approach, and finally, the implementation and verification of a binary executable written in the compiler's target language. For the latter task, a realistic technique has been developed, which is based on rigorous a-posteriori syntactic code inspection and which guarantees, for the first time, trusted execution of generated machine programs. The context of this work is the joint German research effort Verifix aiming at developing methods for the construction of correct compilers for realistic source languages and real target processors.
8

Clément, Basile, and Albert Cohen. "End-to-end translation validation for the Halide language." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (December 8, 2022): 1–30. http://dx.doi.org/10.1145/3527328.

Abstract:
This paper considers the correctness of domain-specific compilers for tensor programming languages through the study of Halide, a popular representative. It describes a translation validation algorithm for affine Halide specifications, independently of the scheduling language. The algorithm relies on “prophetic” annotations added by the compiler to the generated array assignments. The annotations provide a refinement mapping from assignments in the generated code to the tensor definitions from the specification. Our implementation leverages an affine solver and a general SMT solver, and scales to complete Halide benchmarks.
9

Bernard, Thomas A. M., Clemens Grelck, and Chris R. Jesshope. "On the Compilation of a Language for General Concurrent Target Architectures." Parallel Processing Letters 20, no. 01 (March 2010): 51–69. http://dx.doi.org/10.1142/s0129626410000053.

Abstract:
The challenge of programming many-core architectures efficiently and effectively requires models and methods to co-design chip architectures and their software tool chain, using an approach that is both vertical and general. In this paper, we present compilation schemes for a general model of concurrency captured in a parallel language designed for system-level programming and as a target for higher level compilers. We also expose the challenges of integrating these transformation rules into a sequential-oriented compiler. Moreover, we discuss resource mapping inherent to those challenges. Our aim has been to reuse as much of the existing sequential compiler technology as possible in order to harness decades of prior research in compiling sequential languages.
10

Gagan, B. R., Shivaprakash T, Thirumalai Shaktivel C, Vaishak P, and Kushal Kumar B. N. "Design of a New Language Seeks Literature Survey." International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (March 31, 2022): 1623–27. http://dx.doi.org/10.22214/ijraset.2022.40949.

Abstract:
In a scientific study, computing is a must-have tool. In general, scientists have various difficulties, requirements, and views when it comes to computation, which need to be addressed by the programming language that they use, this cannot be satisfied by general-purpose languages. Also, researchers need to concentrate on the issue they are working on rather than the optimizations for the calculations, so instead of using a general-purpose language, if there exists a language whose compiler would take care of those optimizations, it would make their work easier and faster. This is a survey of the work undertaken to design the programming language and its compiler. The primary goal of this research is to examine the function of work, implementation strategy, steps taken for improving the performance, the procedure of benchmarking, and finally, the outcome of the papers studied. The survey's main conclusions are that: the most common language mentioned among the papers was Python which appears to be more popular among developers due to its simple syntax and library support for computing. On the other hand, Python lacks performance, to compensate for this performance issue, the community has developed tools like Cython, Numba, Pythran, etc, which can be used to speed up Python. Domain-specific languages such as Wolfram, Seq, and ELI highlighted various methods for overcoming problems. Some languages like Wolfram and ELI moved from interpreter to compiler to get the performance boost. Most of the compilers use LLVM as the backend for optimizations and code generation. Keywords: scientific computation, compiler, programming language
11

Biggar, Paul, Edsko de Vries, and David Gregg. "A practical solution for achieving language compatibility in scripting language compilers." Science of Computer Programming 77, no. 9 (August 2012): 971–89. http://dx.doi.org/10.1016/j.scico.2011.01.004.

12

Klepinin, A. V., and A. A. Melentyev. "Integration of semantic verifiers into Java language compilers." Automatic Control and Computer Sciences 45, no. 7 (December 2011): 408–12. http://dx.doi.org/10.3103/s014641161107008x.

13

Ciric, Miroslav, and Svetozar Rancic. "Parsing in different languages." Facta universitatis - series: Electronics and Energetics 18, no. 2 (2005): 299–307. http://dx.doi.org/10.2298/fuee0502299c.

Abstract:
A compiler is a translator that accepts as input formatted source file or files, and produces as output a file that may be run directly on a computer. Given the same ANSI C++ compliant input file, two different ANSI C++ compliant compilers running on the same operating system produce two different executable programs that should execute in exactly the same way. To some degree, this is achieved by the standardization of the C++ language, but it is also possible because computer programming languages like C++ can be compiled using reliable technologies with long traditions and understood characteristics. LALR(k), as a practical version of LR, is such reliable technology for parsing. The traditional LALR(1) tool YACC has proved its value during years of successful applications. Nowadays there are a few commercial and noncommercial alternatives that are very interesting and promising. This paper will examine some of them with the ability of parsing in different programming languages.
14

Klimis, Vasileios, Jack Clark, Alan Baker, David Neto, John Wickerson, and Alastair F. Donaldson. "Taking Back Control in an Intermediate Representation for GPU Computing." Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 1740–69. http://dx.doi.org/10.1145/3571253.

Abstract:
We describe our experiences successfully applying lightweight formal methods to substantially improve and reformulate an important part of the Standard Portable Intermediate Representation (SPIR-V), an industry-standard language for GPU computing. The formal model that we present has allowed us to (1) identify several ambiguities and needless complexities in the way that structured control flow was defined in the SPIR-V specification; (2) interact with the authors of the SPIR-V specification to rectify these problems; (3) validate the developer tools and conformance test suites that support the SPIR-V language by cross-checking them against our formal model, improving the tools, test suites, and our models in the process; and (4) develop a novel method for fuzzing SPIR-V compilers to detect miscompilation bugs that leverages our formal model. The latest release of the SPIR-V specification incorporates the revised set of control-flow definitions that have arisen from our work. Furthermore, our novel compiler-fuzzing technique has led to the discovery of twenty distinct, previously unknown bugs in SPIR-V compilers from Google, the Khronos Group, Intel, and Mozilla. Our work showcases the practical impact that formal modelling and analysis techniques can have on the design and implementation of industry-standard programming languages.
15

Baev, Roman Vyacheslavovich, Leonid Vladlenovich Skvortsov, Evgeny Alekseevich Kudryashov, Ruben Arturovich Buchatskiy, and Roman Aleksandrovich Zhuykov. "Prevention of vulnerabilities arising from optimization of code with Undefined Behavior." Proceedings of the Institute for System Programming of the RAS 33, no. 4 (2021): 195–210. http://dx.doi.org/10.15514/ispras-2021-33(4)-14.

Abstract:
Aggressive optimization in modern compilers may uncover vulnerabilities in program code that did not lead to bugs prior to optimization. The source of these vulnerabilities is in code with undefined behavior. Programmers use such constructs relying on some particular behavior these constructs showed before in their experience, but the compiler is not obliged to stick to that behavior and may change the behavior if it’s needed for optimization since the behavior is undefined by language standard. This article describes approaches to detection and elimination of vulnerabilities arising from optimization in the case when source code is available but its modification is undesirable or impossible. Concept of a safe compiler (i.e. compiler that ensures no vulnerability is added to the program during optimization) is presented and implementation of such a compiler on top of GCC compiler is described. Implementation of safe compiler’s functionality is divided into three security levels whose applicability is discussed in the article. Feasibility of using the safe compiler on real-world codebases is demonstrated and possible performance losses are estimated.
16

Siebert, Fridtjof. "Fuzion - Safety through Simplicity." ACM SIGAda Ada Letters 41, no. 1 (October 28, 2022): 83–86. http://dx.doi.org/10.1145/3570315.3570323.

Abstract:
Fuzion is a modern, general purpose programming language that unifies concepts found in structured, functional and object-oriented programming languages into the concept of a Fuzion feature. It combines a powerful syntax and safety features based on the design-by-contract principle with a simple intermediate representation that enables powerful optimizing compilers and static analysis tools to verify correctness aspects. Fuzion maps different concepts into the concept of a Fuzion feature and uses a simple intermediate language that is friendly for static analysis tools as well as for optimizing compilers. Fuzion was influenced by many other languages including Java, Python, Eiffel, Rust, Ada, Go, Lua, Kotlin, C#, F#, Nim, Julia, Clojure, C/C++, Scala, and many more. The goal of Fuzion is to define a language that has the expressive power present in these languages and allow high-performance implementation and powerful analysis tools. Furthermore, Fuzion addresses requirements for safety-critical applications by adding support for contracts that enable formal specification and detailed control over runtime checks.
17

Wells, J. B., Allyn Dimock, Robert Muller, and Franklyn Turbak. "A calculus with polymorphic and polyvariant flow types." Journal of Functional Programming 12, no. 3 (May 2002): 183–227. http://dx.doi.org/10.1017/s0956796801004245.

Abstract:
We present λCIL, a typed λ-calculus which serves as the foundation for a typed intermediate language for optimizing compilers for higher-order polymorphic programming languages. The key innovation of λCIL is a novel formulation of intersection and union types and flow labels on both terms and types. These flow types can encode polyvariant control and data flow information within a polymorphically typed program representation. Flow types can guide a compiler in generating customized data representations in a strongly typed setting. Since λCIL enjoys confluence, standardization, and subject reduction properties, it is a valuable tool for reasoning about programs and program transformations.
18

Aho, Alfred, and Jeffrey Ullman. "Abstractions, their algorithms, and their compilers." Communications of the ACM 65, no. 2 (February 2022): 76–91. http://dx.doi.org/10.1145/3490685.

Abstract:
Jeffrey D. Ullman and Alfred V. Aho are recipients of the 2020 ACM A.M. Turing award. They were recognized for creating fundamental algorithms and theory underlying programming language implementation and for synthesizing these results and those of others in their highly influential books, which educated generations of computer scientists.
19

Boujarwah, A. S., K. Saleh, and J. Al-Dallal. "Testing syntax and semantic coverage of Java language compilers." Information and Software Technology 41, no. 1 (January 1999): 15–28. http://dx.doi.org/10.1016/s0950-5849(98)00075-5.

20

Prastowo, Tadeus, Luigi Palopoli, and Luca Abeni. "Tice: A real‐time language compilable using C++ compilers." Software: Practice and Experience 50, no. 12 (October 5, 2020): 2252–92. http://dx.doi.org/10.1002/spe.2891.

21

Liu, Jiawei, Yuxiang Wei, Sen Yang, Yinlin Deng, and Lingming Zhang. "Coverage-guided tensor compiler fuzzing with joint IR-pass mutation." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (December 8, 2022): 1–26. http://dx.doi.org/10.1145/3527317.

Abstract:
In the past decade, Deep Learning (DL) systems have been widely deployed in various application domains to facilitate our daily life, e.g., natural language processing, healthcare, activity recognition, and autonomous driving. Meanwhile, it is extremely challenging to ensure the correctness of DL systems (e.g., due to their intrinsic nondeterminism), and bugs in DL systems can cause serious consequences and may even threaten human lives. In the literature, researchers have explored various techniques to test, analyze, and verify DL models, since their quality directly affects the corresponding system behaviors. Recently, researchers have also proposed novel techniques for testing the underlying operator-level DL libraries (such as TensorFlow and PyTorch), which provide general binary implementations for each high-level DL operator and are the foundation for running DL models on different hardware platforms. However, there is still limited work targeting the reliability of the emerging tensor compilers (also known as DL compilers), which aim to automatically compile high-level tensor computation graphs directly into high-performance binaries for better efficiency, portability, and scalability than traditional operator-level libraries. Therefore, in this paper, we target the important problem of tensor compiler testing, and have proposed Tzer, a practical fuzzing technique for the widely used TVM tensor compiler. Tzer focuses on mutating the low-level Intermediate Representation (IR) for TVM due to the limited mutation space for the high-level IR. More specifically, Tzer leverages both general-purpose and tensor-compiler-specific mutators guided by coverage feedback for diverse and evolutionary IR mutation; furthermore, since tensor compilers provide various passes (i.e., transformations) for IR optimization, Tzer also performs pass mutation in tandem with IR mutation for more effective fuzzing. Our experimental results show that Tzer substantially outperforms existing fuzzing techniques on tensor compiler testing, with 75% higher coverage and 50% more valuable tests than the 2nd-best technique. Also, different components of Tzer have been validated via ablation study. To date, Tzer has detected 49 previously unknown bugs for TVM, with 37 bugs confirmed and 25 bugs fixed (PR merged).
22

Bahr, Patrick, and Graham Hutton. "Monadic compiler calculation (functional pearl)." Proceedings of the ACM on Programming Languages 6, ICFP (August 29, 2022): 80–108. http://dx.doi.org/10.1145/3547624.

Abstract:
Bahr and Hutton recently developed a new approach to calculating correct compilers directly from specifications of their correctness. However, the methodology only considers converging behaviour of the source language, which means that the compiler could potentially produce arbitrary, erroneous code for source programs that diverge. In this article, we show how the methodology can naturally be extended to support the calculation of compilers that address both convergent and divergent behaviour simultaneously, without the need for separate reasoning for each aspect. Our approach is based on the use of the partiality monad to make divergence explicit, together with the use of strong bisimilarity to support equational-style calculations, but also generalises to other forms of effect by changing the underlying monad.
23

Benzaken, Véronique, Évelyne Contejean, Mohammed Houssem Hachmaoui, Chantal Keller, Louis Mandel, Avraham Shinnar, and Jérôme Siméon. "Translating canonical SQL to imperative code in Coq." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (December 8, 2022): 1–27. http://dx.doi.org/10.1145/3527327.

Abstract:
SQL is by far the most widely used and implemented query language. Yet, on some key features, such as correlated queries and NULL value semantics, many implementations diverge or contain bugs. We leverage recent advances in the formalization of SQL and query compilers to develop DBCert, the first mechanically verified compiler from SQL queries written in a canonical form to imperative code. Building DBCert required several new contributions which are described in this paper. First, we specify and mechanize a complete translation from SQL to the Nested Relational Algebra which can be used for query optimization. Second, we define Imp, a small imperative language sufficient to express SQL and which can target several execution languages including JavaScript. Finally, we develop a mechanized translation from the nested relational algebra to Imp, using the nested relational calculus as an intermediate step.
24

Golumbic, Martin Charles, and Vladimir Rainish. "Instruction Scheduling Across Control Flow." Scientific Programming 2, no. 3 (1993): 1–5. http://dx.doi.org/10.1155/1993/536143.

Abstract:
Instruction scheduling algorithms are used in compilers to reduce run-time delays for the compiled code by the reordering or transformation of program statements, usually at the intermediate language or assembly code level. Considerable research has been carried out on scheduling code within the scope of basic blocks, i.e., straight line sections of code, and very effective basic block schedulers are now included in most modern compilers and especially for pipeline processors. In previous work [Golumbic and Rainish, IBM J. Res. Dev., Vol. 34, pp. 93–97, 1990], we presented code replication techniques for scheduling beyond the scope of basic blocks that provide reasonable improvements of running time of the compiled code, but which still leaves room for further improvement. In this article we present a new method for scheduling beyond basic blocks called SHACOOF. This new technique takes advantage of a conventional, high quality basic block scheduler by first suppressing selected subsequences of instructions and then scheduling the modified sequence of instructions using the basic block scheduler. A candidate subsequence for suppression can be found by identifying a region of a program control flow graph, called an S-region, which has a unique entry and a unique exit and meets predetermined criteria. This enables scheduling of a sequence of instructions beyond basic block boundaries, with only minimal changes to an existing compiler, by identifying beneficial opportunities to cover delays that would otherwise have been beyond its scope.
25

Booth, Stephen. "Assembly Language Programming for the i860." International Journal of Modern Physics C 04, no. 06 (December 1993): 1279–84. http://dx.doi.org/10.1142/s0129183193000999.

Abstract:
The structure of the i860 is reviewed briefly. The code generation strategies used by compilers and assembly language programmers are compared. The problems associated with generating efficient code for the i860 are highlighted and a case study from QCD is used to demonstrate these points. Timing results for the case study are presented.
26

Peyton Jones, Simon, and Phil Wadler. "Editorial." Journal of Functional Programming 8, no. 4 (July 1998): 319–21. http://dx.doi.org/10.1017/s0956796898003049.

Abstract:
In the end, research on functional languages does little good unless they are used to write something other than compilers for functional languages. However, if one scans a typical functional programming conference or journal, one mainly sees papers on twists in language design, speed-ups in compiled code, clever new analyses, or refinements to semantic models. It is much less common to see a paper that considers a functional language as a tool to some other practical end. We would like to see this change. The Journal of Functional Programming carries, and will continue to carry, articles on all aspects of functional programming from lambda calculus theory to language design to implementation. But we have specially sought, and will continue to seek, papers on functional programming practice and experience. Research and papers on practice and experience sometimes receive less attention because they are perceived as possessing less academic content. So we want to remind potential authors that we have published a number of papers on this topic in the past, and to spell out the criteria we apply to such papers.
27

Lai, Yong, Dayou Liu, and Minghao Yin. "New Canonical Representations by Augmenting OBDDs with Conjunctive Decomposition." Journal of Artificial Intelligence Research 58 (March 8, 2017): 453–521. http://dx.doi.org/10.1613/jair.5271.

Abstract:
We identify two families of canonical knowledge compilation languages. Both families augment ROBDD with conjunctive decomposition bounded by an integer i ranging from 0 to ∞. In the former, the decomposition is finest and the decision respects a chain C of variables, while both the decomposition and decision of the latter respect a tree T of variables. In particular, these two families cover the three existing languages ROBDD, ROBDD with as many implied literals as possible, and AND/OR BDD. We demonstrate that each language in the first family is complete, while each one in the second family is incomplete with expressivity that does not decrease with incremental i. We also demonstrate that the succinctness does not decrease from the i-th language in the second family to the i-th language in the first family, and then to the (i+1)-th language in the first family. For the operating efficiency, on the one hand, we show that the two families of languages support a rich class of tractable logical operations, and particularly the tractability of each language in the second family is not less than that of ROBDD; and on the other hand, we introduce a new time efficiency criterion called rapidity which reflects the idea that exponential operations may be preferable if the language can be exponentially more succinct, and we demonstrate that the rapidity of each operation does not decrease from the i-th language in the second family to the i-th language in the first family, and then to the (i+1)-th language in the first family. Furthermore, we develop a compiler for the last language in the first family (i = ∞). Empirical results show that the compiler significantly advances the compiling efficiency of canonical representations. In fact, its compiling efficiency is comparable with that of the state-of-the-art compilers of non-canonical representations. We also provide a compiler for the i-th language in the first family by translating the last language in the first family into the i-th language (i < ∞). Empirical results show that we can sometimes use the i-th language instead of the last language without any obvious loss of space efficiency.
28

Jacobs, Koen, Dominique Devriese, and Amin Timany. "Purity of an ST monad: full abstraction by semantically typed back-translation." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (December 8, 2022): 1–27. http://dx.doi.org/10.1145/3527326.

Abstract:
In 1995, Launchbury and Peyton Jones extended Haskell with an ST monad that allows the programmer to use higher-order mutable state. They informally argued that these state computations were safely encapsulated, and as such, that the rich reasoning principles stemming from the purity of the language, were not threatened. In this paper, we give a formal account of the preservation of purity after adding an ST monad to a simply-typed call-by-value recursive lambda calculus. We state and prove full abstraction when embedding the pure language into its extension with ST; contextual equivalences from the pure language continue to hold in the presence of ST. Proving full abstraction of compilers is usually done by emulating or back-translating the target features (here: ST computations) into the source language, a well-known challenge in the secure compilation community. We employ a novel proof technique for proving our full abstraction result that allows us to use a semantically (but not syntactically) typed back-translation into an intermediate language. We believe that this technique provides additional insight into our proof and that it is of general interest to researchers studying programming languages and compilers using full abstraction. The results presented here are fully formalized in the Coq proof assistant using the Iris framework.
29

Fitzgerald, Steven M., and Rodney R. Oldehoeft. "Update-in-Place Analysis for True Multidimensional Arrays." Scientific Programming 5, no. 2 (1996): 147–60. http://dx.doi.org/10.1155/1996/493673.

Abstract:
Applicative languages have been proposed for defining algorithms for parallel architectures because they are implicitly parallel and lack side effects. However, straightforward implementations of applicative-language compilers may induce large amounts of copying to preserve program semantics. The unnecessary copying of data can increase both the execution time and the memory requirements of an application. To eliminate the unnecessary copying of data, the Sisal compiler uses both build-in-place and update-in-place analyses. These optimizations remove unnecessary array copy operations through compile-time analysis. Both build-in-place and update-in-place are based on hierarchical ragged arrays, i.e., the vector-of-vectors array model. Although this array model is convenient for certain applications, many optimizations are precluded, e.g., vectorization. To compensate for this deficiency, new languages, such as Sisal 2.0, have extended array models that allow for both high-level array operations to be performed and efficient implementations to be devised. In this article, we introduce a new method to perform update-in-place analysis that is applicable to arrays stored either in hierarchical or in contiguous storage. Consequently, the array model that is appropriate for an application can be selected without the loss of performance. Moreover, our analysis is more amenable for distributed memory and large software systems.
30

Hansen, Annette Skovsted. "Re-vitalizing an indigenous language: Dictionaries of Ainu languages in Japan, 1625–2013." Lexicographica 30, no. 1 (October 10, 2014): 547–78. http://dx.doi.org/10.1515/lexi-2014-0017.

Abstract:
The re-vitalization of indigenous languages depends on political and legal support and the implementation of language rights depends on knowledge of vocabulary and grammar structures of the individual languages. Throughout the nineteenth century world, compilers of dictionaries adapted indigenous languages to match standards defined in nation-building and, thereby, enabled latent possibilities for indigenous populations to re-vitalize their languages in connection with the United Nations Year for Indigenous Peoples in 1993, and the first United Nations Decade for Indigenous Peoples, 1995-2004. This article focuses on dictionaries of the languages of the Ainu populations in the borderlands between the nation-states Japan and Russia. The main argument is that the Ainu Cultural Promotion Act promulgated in 1997 had a significant impact on the production and purpose of Ainu dictionaries. The dictionaries prior to 1997 functioned, predominantly, as records, which contributed to the increased visibility of Ainu populations inside and outside Japan in the immediate national interests of Japan, whereas the dictionaries published after 1997 are intended to enable the active use of Ainu language today. An important sub-point is that the post-1997 Ainu dictionaries rely heavily on dictionaries, word lists, and grammar books compiled before 1997, which have therefore come to support efforts to re-vitalize Ainu languages in the twenty-first century.
31

Hennoste, T., Mare Koit, T. Roosmaa, and M. Saluveer. "Structure and Usage of the Tartu University Corpus of Written Estonian." International Journal of Corpus Linguistics 3, no. 2 (January 1, 1998): 279–304. http://dx.doi.org/10.1075/ijcl.3.2.06hen.

Abstract:
This paper provides an overview of the first computer corpus of the Estonian language compiled at the University of Tartu. It was based on the design principles of the LOB and Brown corpora. The main part of the corpus was assembled from 1991-1995 and contains about 1 million textual words. It was compiled by an interdepartmental computational linguistics research group of the university. This paper gives a survey of the text groups in the corpus and of the problems the compilers had to solve together with the proposed solutions and outlines the main differences from the model corpora and the underlying reasons for them. These are followed by a review of the available computer routines for processing the corpus.
32

Zelenchuk, Nikita, Ekaterina Pristavka, and Aleksandr Maliavko. "Compiler from El programming language: improvement and development." Digital Technology Security, no. 2 (June 25, 2021): 9–19. http://dx.doi.org/10.17212/2782-2230-2021-2-9-19.

Abstract:
The implementation of the new multi-paradigm (functionally-imperative) programming language El, developed at the Department of Computer Science of the Novosibirsk State Technical University, in the form of a compiler is associated with the need to find ways to solve a number of complex problems. The current version of the compiler implements only part of the functionality of the language and generates far from optimal executable codes. In this paper, we consider the problem of efficient compilation of an El program, taking into account the need to implement new high-level data structures (two-sided lists, vectors with special forms of access, and a number of others) and control structures of the language, which make it possible to uniformly define cyclic and branching computational processes, as well as the language's mechanism for explicitly controlling the mutability of variables. The tasks of improving and developing a compiler organized according to the classical multi-platform scheme are briefly considered, in which the front-end (lexical, syntactic, and semantic analyzers) converts the program to be translated into pseudocode of a single format, and the efficient LLVM compiler infrastructure is used as a back-end that turns pseudocode into executable code for different platforms. Execution of all possible operations on elements of high-level data structures (lists, tuples, vectors), as well as on arbitrary-precision numbers, has been moved to the runtime support library and, accordingly, can be deeply optimized. For this structure, ways of solving the problem of developing and improving the compiler by deep reforming and optimization of the chain of transformations of the translated program implemented by the front-end are formulated. At the initial stage, it is planned to implement a new compiler for two platforms: Linux and Windows.
33

Sammler, Michael, Simon Spies, Youngju Song, Emanuele D'Osualdo, Robbert Krebbers, Deepak Garg, and Derek Dreyer. "DimSum: A Decentralized Approach to Multi-language Semantics and Verification." Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 775–805. http://dx.doi.org/10.1145/3571220.

Abstract:
Prior work on multi-language program verification has achieved impressive results, including the compositional verification of complex compilers. But the existing approaches to this problem impose a variety of restrictions on the overall structure of multi-language programs (e.g. fixing the source language, fixing the set of involved languages, fixing the memory model, or fixing the semantics of interoperation). In this paper, we explore the problem of how to avoid such global restrictions. Concretely, we present DimSum: a new, decentralized approach to multi-language semantics and verification, which we have implemented in the Coq proof assistant. Decentralization means that we can define and reason about languages independently from each other (as independent modules communicating via events), but also combine and translate between them when necessary (via a library of combinators). We apply DimSum to a high-level imperative language Rec (with an abstract memory model and function calls), a low-level assembly language Asm (with a concrete memory model, arbitrary jumps, and syscalls), and a mathematical specification language Spec. We evaluate DimSum on two case studies: an Asm library extending Rec with support for pointer comparison, and a coroutine library for Rec written in Asm. In both cases, we show how DimSum allows the Asm libraries to be abstracted to Rec-level specifications, despite the behavior of the Asm libraries not being syntactically expressible in Rec itself. We also verify an optimizing multi-pass compiler from Rec to Asm, showing that it is compatible with these Asm libraries.
34

Gäher, Lennard, Michael Sammler, Simon Spies, Ralf Jung, Hoang-Hai Dang, Robbert Krebbers, Jeehoon Kang, and Derek Dreyer. "Simuliris: a separation logic framework for verifying concurrent program optimizations." Proceedings of the ACM on Programming Languages 6, POPL (January 16, 2022): 1–31. http://dx.doi.org/10.1145/3498689.

Abstract:
Today’s compilers employ a variety of non-trivial optimizations to achieve good performance. One key trick compilers use to justify transformations of concurrent programs is to assume that the source program has no data races: if it does, they cause the program to have undefined behavior (UB) and give the compiler free rein. However, verifying correctness of optimizations that exploit this assumption is a non-trivial problem. In particular, prior work either has not proven that such optimizations preserve program termination (particularly non-obvious when considering optimizations that move instructions out of loop bodies), or has treated all synchronization operations as external functions (losing the ability to reorder instructions around them). In this work we present Simuliris, the first simulation technique to establish termination preservation (under a fair scheduler) for a range of concurrent program transformations that exploit UB in the source language. Simuliris is based on the idea of using ownership to reason modularly about the assumptions the compiler makes about programs with well-defined behavior. This brings the benefits of concurrent separation logics to the space of verifying program transformations: we can combine powerful reasoning techniques such as framing and coinduction to perform thread-local proofs of non-trivial concurrent program optimizations. Simuliris is built on a (non-step-indexed) variant of the Coq-based Iris framework, and is thus not tied to a particular language. In addition to demonstrating the effectiveness of Simuliris on standard compiler optimizations involving data race UB, we also instantiate it with Jung et al.’s Stacked Borrows semantics for Rust and generalize their proofs of interesting type-based aliasing optimizations to account for concurrency.
35

Semenenko-Basin, Ilya V., and Stefano Caprio. "Russian Liturgical Memories in the Slavic Byzantine-Catholic Menologion (Recensio Vulgata) of the Mid-20th Century." Slovene 10, no. 1 (2021): 368–91. http://dx.doi.org/10.31168/2305-6754.2021.10.1.16.

Abstract:
The article is devoted to the menologion (calendar of saints) compiled in the 20th century for Russian Byzantine Catholics. The latter are a church community with its own Byzantine-Slavic worship and piety, which follow both the Catholic and the Eastern spiritual traditions. Like the entire liturgical literature of the Russian Eastern Catholics, the menologion was created in Rome under the auspices of the Congregation for Eastern Churches, as part of the activities of the Russian Catholic Apostolate, i.e., of the mission of the Catholic Church addressed to Russia and the Russian diaspora in the world. The corpus of service books for Russian, Bulgarian and Serbian Eastern Catholics was called Recensio Vulgata. The menologion under study is contained in the books of Recensio Vulgata and was compiled on the basis of the Orthodox menologia of pre-revolutionary Russia. The compilers of the Byzantine-Catholic menologion did not just select Russian liturgical memories in a certain way, they also included the names of several martyrs of the Eastern Catholic Churches and some additional commemorations of Western saints. According to the compilers of the menologion, the history of Catholic (orthodox) holiness in North-Eastern Russia ended at the turn of the 1440s, when the Principality of Moscow and the Novgorod Republic abandoned the Union of Florence. The menologion reflects the era after the Union of Florence in the events that show the invariable patronage of the Mother of God over the people and the Russian land. The Recensio Vulgata menologion (RVM) contains twelve Russia-specific holidays that honor icons of the Mother of God, nine of which celebrate the events of the period from the late 15th to the 17th centuries. The compilers of the menologion created a well-devised system in which the East Slavic saints, the ancient saints of the Byzantine menologion, the Latin teachers of the Church, the saints of the Byzantine Catholic churches of different eras all are subject to harmonious logic, and harmony serves to organize the whole.
36

Burtseva, Alla O. "The Soviet Journal “LOKAF” on Foreign Literature: How not to Become a Remarquable." Slovene 10, no. 1 (2021): 347–67. http://dx.doi.org/10.31168/2305-6754.2021.10.1.15.

37

Haveraaen, Magne, Karla Morris, Damian Rouson, Hari Radhakrishnan, and Clayton Carson. "High-Performance Design Patterns for Modern Fortran." Scientific Programming 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/942059.

Abstract:
This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran’s new coarray distributed data structure, the language’s class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions comprised of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI). We compared the codes’ complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5–2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel’s MPI library with the result apparently being 2–2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays comes in Fortran 2015, we expect this performance gap to narrow.
APA, Harvard, Vancouver, ISO, and other styles
38

Xu, Haoran, and Fredrik Kjolstad. "Copy-and-patch compilation: a fast compilation algorithm for high-level languages and bytecode." Proceedings of the ACM on Programming Languages 5, OOPSLA (October 20, 2021): 1–30. http://dx.doi.org/10.1145/3485513.

Full text
Abstract:
Fast compilation is important when compilation occurs at runtime, such as query compilers in modern database systems and WebAssembly virtual machines in modern browsers. We present copy-and-patch, an extremely fast compilation technique that also produces good quality code. It is capable of lowering both high-level languages and low-level bytecode programs to binary code, by stitching together code from a large library of binary implementation variants. We call these binary implementations stencils because they have holes where missing values must be inserted during code generation. We show how to construct a stencil library and describe the copy-and-patch algorithm that generates optimized binary code. We demonstrate two use cases of copy-and-patch: a compiler for a high-level C-like language intended for metaprogramming and a compiler for WebAssembly. Our high-level language compiler has negligible compilation cost: it produces code from an AST in less time than it takes to construct the AST. We have implemented an SQL database query compiler on top of this metaprogramming system and show that on TPC-H database benchmarks, copy-and-patch generates code two orders of magnitude faster than LLVM -O0 and three orders of magnitude faster than higher optimization levels. The generated code runs an order of magnitude faster than interpretation and 14% faster than LLVM -O0. Our WebAssembly compiler generates code 4.9X-6.5X faster than Liftoff, the WebAssembly baseline compiler in Google Chrome. The generated code also outperforms Liftoff's by 39%-63% on the Coremark and PolyBenchC WebAssembly benchmarks.
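The "stencils with holes" idea is easiest to see mechanically. The fragment below is not the paper's implementation and emits no real machine code; it only simulates the two steps the name describes — copy a pre-built byte template, then patch a missing value into a hole at a known offset — using placeholder bytes that stand in for a compiled stencil.

```c
/* Conceptual simulation of copy-and-patch: a "stencil" is a pre-built byte
 * template with a hole at a fixed offset; code generation copies the
 * template into an output buffer and writes the missing value into the
 * hole.  The bytes below are placeholders, NOT real machine code, and
 * nothing is executed. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    const uint8_t *bytes;   /* template contents                    */
    size_t         len;     /* template length in bytes             */
    size_t         hole;    /* offset of a 4-byte hole to patch     */
} stencil;

/* Placeholder template: 0xCC marks the bytes of the hole. */
static const uint8_t add_const_template[] = {
    0x01, 0x02, 0x03, 0xCC, 0xCC, 0xCC, 0xCC, 0x04
};
static const stencil add_const = { add_const_template,
                                   sizeof add_const_template, 3 };

/* "Code generation": copy the stencil, then patch a 32-bit immediate. */
static size_t emit(uint8_t *out, const stencil *s, uint32_t imm) {
    memcpy(out, s->bytes, s->len);            /* copy  */
    memcpy(out + s->hole, &imm, sizeof imm);  /* patch */
    return s->len;
}

int main(void) {
    uint8_t code[64];
    size_t n = emit(code, &add_const, 42);    /* stitch one stencil   */
    for (size_t i = 0; i < n; ++i)
        printf("%02x ", code[i]);
    printf("\n");
    return 0;
}
```

A real implementation keeps a large library of such templates, pre-compiled ahead of time, and stitches many of them per expression; this sketch only shows why generation can be as cheap as a few memcpy calls.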
APA, Harvard, Vancouver, ISO, and other styles
39

Köster, Marcel, Roland Leißa, Sebastian Hack, Richard Membarth, and Philipp Slusallek. "Code Refinement of Stencil Codes." Parallel Processing Letters 24, no. 03 (September 2014): 1441003. http://dx.doi.org/10.1142/s0129626414410035.

Full text
Abstract:
A straightforward implementation of an algorithm in a general-purpose programming language usually does not deliver peak performance: compilers often fail to automatically tune the code for hardware peculiarities such as the memory hierarchy or vector execution units. Manually tuning the code is both error-prone and time-consuming, and it also taints the code by exposing those peculiarities to the implementation. A popular method to avoid these problems is to implement the algorithm in a Domain-Specific Language (DSL); a DSL compiler can then automatically tune the code for the target platform. In this article we show how to embed a DSL for stencil codes in another language. In contrast to prior approaches, we use only a single language for this task, one that offers explicit control over code refinement, which is used to specialize stencils for particular scenarios. Our results show that our specialized programs achieve performance competitive with hand-tuned CUDA programs while maintaining a convenient coding experience.
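For readers unfamiliar with the term, a stencil code updates each grid point from a fixed neighbourhood of points. The hand-written C fragment below is our own illustration, not the article's DSL: it is the kind of 3-point kernel whose hardware-specific variants such a DSL would generate and specialize automatically.

```c
/* A hand-written 1-D, 3-point stencil (a simple smoothing step).  A stencil
 * DSL lets the programmer state only the neighbourhood weights and generates
 * tuned variants of loops like this one for each target platform. */
#include <stdio.h>

#define N 16

static void stencil_step(const double *in, double *out, int n) {
    for (int i = 1; i < n - 1; ++i)
        out[i] = 0.25 * in[i - 1] + 0.5 * in[i] + 0.25 * in[i + 1];
    out[0] = in[0];              /* keep boundary values fixed */
    out[n - 1] = in[n - 1];
}

int main(void) {
    double a[N], b[N];
    for (int i = 0; i < N; ++i)
        a[i] = (i == N / 2) ? 1.0 : 0.0;   /* a single spike to smooth */
    stencil_step(a, b, N);
    for (int i = 0; i < N; ++i)
        printf("%.3f ", b[i]);
    printf("\n");
    return 0;
}
```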
APA, Harvard, Vancouver, ISO, and other styles
40

ABRAMOV, SERGEI, and ROBERT GLÜCK. "FROM STANDARD TO NON-STANDARD SEMANTICS BY SEMANTICS MODIFIERS." International Journal of Foundations of Computer Science 12, no. 02 (April 2001): 171–211. http://dx.doi.org/10.1142/s0129054101000448.

Full text
Abstract:
An approach for systematically modifying the semantics of programming languages by semantics modifiers is described. Semantics modifiers are a class of programs that allow the development of general and reusable "semantics components". Language independence is achieved through the interpretive approach: an interpreter serves as a mediator between the new language and the language for which the non-standard semantics was implemented. Inverse computation, equivalence transformation and neighborhood analysis are shown to be semantics modifiers. Experiments with these modifiers show the computational feasibility of this approach. Seven modifier projections are given which allow the efficient implementation of non-standard interpreters and non-standard compilers by program specialization or other powerful program transformation methods.
APA, Harvard, Vancouver, ISO, and other styles
41

Jena, Swagat Kumar, Satyabrata Das, and Satya Prakash Sahoo. "Design and Development of a Parallel Lexical Analyzer for C Language." International Journal of Knowledge-Based Organizations 8, no. 1 (January 2018): 68–82. http://dx.doi.org/10.4018/ijkbo.2018010105.

Full text
Abstract:
The future of computing is rapidly moving towards massively multi-core architectures because of their power and cost advantages. Multi-core processors are now used almost everywhere, and the number of cores per chip keeps increasing. To exploit the full potential offered by multi-core architectures, system software such as compilers should be designed for parallel execution. In the past, significant work has been done to change the design of the traditional compiler to take advantage of multi-core platforms. This paper focuses on introducing parallelism into the lexical analysis phase of the compilation process. The main objective of our proposal is to perform lexical analysis, i.e., finding the tokens in an input stream, in parallel. We use the parallel constructs available in OpenMP to parallelize lexical analysis on multi-core machines. The experimental results show a significant improvement in execution time for the parallel lexical analysis phase compared to the sequential version.
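The core idea — split the input into chunks and let OpenMP threads scan the chunks concurrently — fits in a few lines. The fragment below is our own simplified sketch, not the paper's analyzer: it only counts whitespace-separated tokens and ignores the chunk-boundary and multi-character-token issues a real lexer must handle.

```c
/* Simplified illustration of parallel lexical analysis with OpenMP: the
 * input is split into chunks and each thread scans its own chunk.  A real
 * lexer must also repair tokens that straddle chunk boundaries; here we
 * only count whitespace-separated tokens. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <omp.h>

static long count_tokens(const char *buf, long lo, long hi) {
    long count = 0;
    int in_token = 0;
    for (long i = lo; i < hi; ++i) {
        if (!isspace((unsigned char)buf[i])) {
            if (!in_token) { ++count; in_token = 1; }
        } else {
            in_token = 0;
        }
    }
    return count;
}

int main(void) {
    const char *src = "int main ( void ) { return 40 + 2 ; }";
    long n = (long)strlen(src);
    int nchunks = 4;
    long total = 0;

    /* Each loop iteration scans one chunk; the reduction sums the counts. */
    #pragma omp parallel for reduction(+:total)
    for (int c = 0; c < nchunks; ++c) {
        long lo = n * c / nchunks;
        long hi = n * (c + 1) / nchunks;
        total += count_tokens(src, lo, hi);
    }
    printf("tokens (approximate): %ld\n", total);
    return 0;
}
```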
APA, Harvard, Vancouver, ISO, and other styles
42

Jain, Abhishek, Manohar Kumar, and Manohar Kumar. "System Programming Drawbacks." Journal of Advance Research in Computer Science & Engineering (ISSN: 2456-3552) 2, no. 4 (April 30, 2015): 16–22. http://dx.doi.org/10.53555/nncse.v2i4.453.

Full text
Abstract:
A system programming language usually refers to a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches compared to application software. System software is computer software designed to operate and control the computer hardware and to provide a platform for running application software. System software includes categories such as operating systems, utility software, device drivers, compilers, and linkers. In contrast with application languages, system programming languages typically offer more direct access to the physical hardware of the machine; an archetypal system programming language in this sense was BCPL. System programming languages often lack built-in input/output facilities because a system-software project usually develops its own input/output routines or builds on basic monitor I/O or screen-management services. The distinction between languages for system programming and applications programming became blurred with the widespread popularity of PL/I, C, and Pascal.
APA, Harvard, Vancouver, ISO, and other styles
43

Benkner, Siegfried. "VFC: The Vienna Fortran Compiler." Scientific Programming 7, no. 1 (1999): 67–81. http://dx.doi.org/10.1155/1999/304639.

Full text
Abstract:
High Performance Fortran (HPF) offers an attractive high‐level language interface for programming scalable parallel architectures providing the user with directives for the specification of data distribution and delegating to the compiler the task of generating an explicitly parallel program. Available HPF compilers can handle regular codes quite efficiently, but dramatic performance losses may be encountered for applications which are based on highly irregular, dynamically changing data structures and access patterns. In this paper we introduce the Vienna Fortran Compiler (VFC), a new source‐to‐source parallelization system for HPF+, an optimized version of HPF, which addresses the requirements of irregular applications. In addition to extended data distribution and work distribution mechanisms, HPF+ provides the user with language features for specifying certain information that decisively influence a program’s performance. This comprises data locality assertions, non‐local access specifications and the possibility of reusing runtime‐generated communication schedules of irregular loops. Performance measurements of kernels from advanced applications demonstrate that with a high‐level data parallel language such as HPF+ a performance close to hand‐written message‐passing programs can be achieved even for highly irregular codes.
APA, Harvard, Vancouver, ISO, and other styles
44

Stepanov, Daniil, and Vladimir Itsykson. "Backend Bug Finder — a platform for effective compiler fuzzing." Information and Control Systems, no. 6 (December 27, 2022): 31–40. http://dx.doi.org/10.31799/1684-8853-2022-6-31-40.

Full text
Abstract:
Introduction: The standard way to check the quality of a compiler is manual testing. However, it cannot cover the vast diversity of programs that can be written in a target programming language. Today, in addition to manually written tests, there are many automated compiler testing methods, among which fuzzing is one of the most powerful and useful. A compiler fuzzer is a tool that generates a random program in a target language and checks how the compiler handles it. Purpose: To develop a platform for compiler fuzzing and, based on it, a tool for Kotlin compiler testing. Results: We have developed Backend Bug Finder, a platform for compiler fuzzing. We chose a mutation-based approach for generating random programs. First, an existing program is given to the mutator as input and transformed in some way. Mutations can be either trivial, for example replacing arithmetic operators with others, or complex, changing the structure of the program. Next, the resulting program is fed to the compiler, and the compiler's behavior is checked. The developed test oracle can detect three types of errors: crashes, miscompilations, and performance degradations. If an error is detected, the test case is passed to the post-processing module, where reduction and deduplication algorithms are applied. To evaluate the platform, we developed a tool for fuzzing the Kotlin compiler on top of it, which showed the applicability of the proposed approach to finding errors in modern compilers. Practical relevance: Over a year and a half of work, our tool has found thousands of different Kotlin compiler bugs, more than 200 of which were reported to the developers, and more than 80 of which have been fixed.
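The mutate-compile-check loop described above fits on a page. The driver below is a toy illustration, not Backend Bug Finder itself: it applies one trivial mutation (swapping '+' and '-'), writes the mutant out, and treats an abnormal compiler exit on a still-valid program as a candidate bug. The compiler command and file names are assumptions for the example.

```c
/* Toy mutation-based compiler-fuzzing driver: read a seed program, apply a
 * trivial mutation (swap '+' and '-'), compile the mutant, and flag the
 * case if the compiler itself exits abnormally.  The "cc" command and the
 * file names are illustrative assumptions, not part of the cited tool. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void mutate(char *src) {
    /* One example of a "trivial" mutation: flip arithmetic operators. */
    for (char *p = src; *p; ++p) {
        if (*p == '+')      *p = '-';
        else if (*p == '-') *p = '+';
    }
}

int main(void) {
    char program[4096] =
        "int main(void) { int x = 40 + 2; return x - 42; }\n";

    mutate(program);                      /* derive a mutant from the seed */

    FILE *f = fopen("mutant.c", "w");     /* write the mutant to disk      */
    if (!f) { perror("fopen"); return 1; }
    fputs(program, f);
    fclose(f);

    /* Run the compiler under test; a crash or abnormal exit on a program
     * that remains syntactically valid is a candidate bug report.        */
    int status = system("cc -c mutant.c -o mutant.o 2> compiler.log");
    if (status != 0)
        fprintf(stderr, "suspicious: compiler failed on mutant (see compiler.log)\n");
    else
        printf("mutant compiled cleanly\n");
    return 0;
}
```

A full platform would loop over many seeds and mutations, add oracles for miscompilation and slowdowns, and reduce and deduplicate the failing cases, as the abstract describes.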
APA, Harvard, Vancouver, ISO, and other styles
45

Smolonogina, E. A., and I. O. Sitnikova. "Parameters of Lexicographic Description of New Vocabulary of Modern German in Electronic Dictionary of Neologisms of Leibniz Institute for German Language." Nauchnyi dialog 11, no. 2 (March 19, 2022): 140–58. http://dx.doi.org/10.24224/2227-1295-2022-11-2-140-158.

Full text
Abstract:
The article considers how neologisms are recorded lexicographically at the present stage of German-language lexicography, namely in specialized explanatory dictionaries in electronic format. A fairly successful example of lexicographic practice was chosen as the object of the study: the electronic dictionary of neologisms of the modern German language on the lexicographic online portal of the Leibniz Institute for the German Language. The dictionary is a “living”, constantly updated edition based on the criteria for the definition and classification of neologisms developed by its compilers. The authors of the article analyze the parameters of the lexicographic description of neologisms in this dictionary and raise the question of whether the parameters used by the compilers when forming the microtext (dictionary entry) are mandatory or optional. Standard parameters are distinguished from the specific parameters of lexicographic description that are characteristic of a dictionary of new words: example-definition, first attestation, language reflection, encyclopedic information, representation in dictionaries, etc. The analysis of this lexicographic practice sets the task of summarizing the practical experience of lexicographers in order to determine an optimal set of parameters for the codification of neologisms.
APA, Harvard, Vancouver, ISO, and other styles
46

Grossberg, David M. "Orthopraxy in Tannaitic Literature." Journal for the Study of Judaism 41, no. 4-5 (2010): 517–61. http://dx.doi.org/10.1163/157006310x503621.

Full text
Abstract:
M. Sanhedrin 10:1 is well-known as a succinct statement of rabbinic doctrine. Yet as a statement of doctrine, this mishnah’s language is remarkably pragmatic: it proscribes saying certain things but does not explicitly proscribe believing them. I propose that this use of practical rather than doctrinal phraseology was an intentional editorial stance of the Mishnah’s compilers. A close philological examination of parallel texts in the Tosefta and Seder Olam reveals that earlier generations of the textual tradition underlying this mishnah phrased these same prohibitions using doctrinal terms such as “denying” or “not acknowledging.” Moreover, this choice of pragmatic language is evident throughout the Mishnah, even when fundamentals of Judaic faith such as belief in one God and in the oral Torah are being addressed. The Mishnah’s compilers, perhaps in response to trends like early Christian antinomianism and heresiology, chose to produce a work dedicated to orthopraxy rather than orthodoxy.
APA, Harvard, Vancouver, ISO, and other styles
47

APPEL, ANDREW W., and TREVOR JIM. "Shrinking lambda expressions in linear time." Journal of Functional Programming 7, no. 5 (September 1997): 515–40. http://dx.doi.org/10.1017/s0956796897002839.

Full text
Abstract:
Functional-language compilers often perform optimizations based on beta and delta reduction. To avoid speculative optimizations that can blow up the code size, we might wish to use only shrinking reduction rules guaranteed to make the program smaller: these include dead-variable elimination, constant folding, and a restricted beta rule that inlines only functions that are called just once. The restricted beta rule leads to a shrinking rewrite system that has not previously been studied. We show some efficient normalization algorithms that are immediately useful in optimizing compilers; and we give a confluence proof for our system, showing that the choice of normalization algorithm does not affect final code quality.
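Constant folding and dead-variable elimination are two of the shrinking rules named above: each application removes nodes and can never enlarge the program. The toy constant folder below is our own sketch over a minimal expression tree, not the paper's CPS-style language, and only shows why such a rewrite is "shrinking".

```c
/* Toy constant folder over a minimal expression tree: a shrinking rewrite,
 * since each applied rule replaces an operator node and two constant
 * children by a single leaf.  This is an illustration on a tiny IR, not
 * the cited paper's intermediate language. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { NUM, VAR, ADD, MUL } kind;

typedef struct expr {
    kind k;
    int value;              /* for NUM             */
    char name;              /* for VAR             */
    struct expr *l, *r;     /* for ADD / MUL       */
} expr;

static expr *node(kind k, int v, char n, expr *l, expr *r) {
    expr *e = malloc(sizeof *e);
    e->k = k; e->value = v; e->name = n; e->l = l; e->r = r;
    return e;
}
static expr *num(int v)                    { return node(NUM, v, 0, NULL, NULL); }
static expr *var(char n)                   { return node(VAR, 0, n, NULL, NULL); }
static expr *bin(kind k, expr *l, expr *r) { return node(k, 0, 0, l, r); }

/* Fold constants bottom-up; non-constant subtrees are left untouched. */
static expr *fold(expr *e) {
    if (e->k == ADD || e->k == MUL) {
        e->l = fold(e->l);
        e->r = fold(e->r);
        if (e->l->k == NUM && e->r->k == NUM) {
            int v = (e->k == ADD) ? e->l->value + e->r->value
                                  : e->l->value * e->r->value;
            free(e->l); free(e->r); free(e);
            return num(v);
        }
    }
    return e;
}

static void print(const expr *e) {
    switch (e->k) {
    case NUM: printf("%d", e->value); break;
    case VAR: printf("%c", e->name);  break;
    default:
        printf("(");
        print(e->l);
        printf(e->k == ADD ? " + " : " * ");
        print(e->r);
        printf(")");
    }
}

int main(void) {
    /* (2 + 3) * x folds to 5 * x -- strictly smaller, never larger. */
    expr *e = bin(MUL, bin(ADD, num(2), num(3)), var('x'));
    e = fold(e);
    print(e);
    printf("\n");
    return 0;
}
```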
APA, Harvard, Vancouver, ISO, and other styles
48

Ye, Qianchuan, and Benjamin Delaware. "Oblivious algebraic data types." Proceedings of the ACM on Programming Languages 6, POPL (January 16, 2022): 1–29. http://dx.doi.org/10.1145/3498713.

Full text
Abstract:
Secure computation allows multiple parties to compute joint functions over private data without leaking any sensitive data, typically using powerful cryptographic techniques. Writing secure applications using these techniques directly can be challenging, resulting in the development of several programming languages and compilers that aim to make secure computation accessible. Unfortunately, many of these languages either lack or have limited support for rich recursive data structures, like trees. In this paper, we propose a novel representation of structured data types, which we call oblivious algebraic data types, and a language for writing secure computations using them. This language combines dependent types with constructs for oblivious computation, and provides a security-type system which ensures that adversaries can learn nothing more than the result of a computation. Using this language, authors can write a single function over private data, and then easily build an equivalent secure computation according to a desired public view of their data.
APA, Harvard, Vancouver, ISO, and other styles
49

Moss, J. Eliot B., Trek Palmer, Timothy Richards, Edward K. Walters, and Charles C. Weems. "CISL: A Class-Based Machine Description Language for Co-Generation of Compilers and Simulators." International Journal of Parallel Programming 33, no. 2-3 (June 2005): 231–46. http://dx.doi.org/10.1007/s10766-005-3587-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

JACKSON, ETHAN K. "A Module System for Domain-Specific Languages." Theory and Practice of Logic Programming 14, no. 4-5 (July 2014): 771–85. http://dx.doi.org/10.1017/s1471068414000337.

Full text
Abstract:
Domain-specific languages (DSLs) are routinely created to simplify difficult or specialized programming tasks. They expose useful abstractions and design patterns in the form of language constructs, provide static semantics to eagerly detect misuse of these constructs, and dynamic semantics to completely define how language constructs interact. However, implementing and composing DSLs is a non-trivial task, and there is a lack of tools and techniques. We address this problem by presenting a complete module system over LP for DSL construction, reuse, and composition. LP is already useful for DSL design, because it supports executable language specifications using notations familiar to language designers. We extend LP with a module system that is simple (with a few concepts), succinct (for key DSL specification scenarios), and composable (on the level of languages, compilers, and programs). These design choices reflect our use of LP for industrial DSL design. Our module system has been implemented in the FORMULA language, and was used to build key Windows 8 device drivers via DSLs. Though we present our module system as it actually appears in our FORMULA language, our emphasis is on concepts adaptable to other LP languages.
APA, Harvard, Vancouver, ISO, and other styles