Academic literature on the topic 'Language compilers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Language compilers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Language compilers"

1

Paraskevopoulou, Zoe, John M. Li, and Andrew W. Appel. "Compositional optimizations for CertiCoq." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–30. http://dx.doi.org/10.1145/3473591.

Full text
Abstract:
Compositional compiler verification is a difficult problem that focuses on separate compilation of program components with possibly different verified compilers. Logical relations are widely used in proving correctness of program transformations in higher-order languages; however, they do not scale to compositional verification of multi-pass compilers due to their lack of transitivity. The only known technique to apply to compositional verification of multi-pass compilers for higher-order languages is parametric inter-language simulations (PILS), which is however significantly more complicated than traditional proof techniques for compiler correctness. In this paper, we present a novel verification framework for lightweight compositional compiler correctness. We demonstrate that by imposing the additional restriction that program components are compiled by pipelines that go through the same sequence of intermediate representations, logical relation proofs can be transitively composed in order to derive an end-to-end compositional specification for multi-pass compiler pipelines. Unlike traditional logical-relation frameworks, our framework supports divergence preservation—even when transformations reduce the number of program steps. We achieve this by parameterizing our logical relations with a pair of relational invariants. We apply this technique to verify a multi-pass, optimizing middle-end pipeline for CertiCoq, a compiler from Gallina (Coq’s specification language) to C. The pipeline optimizes and closure-converts an untyped functional intermediate language (ANF or CPS) to a subset of that language without nested functions, which can be easily code-generated to low-level languages. Notably, our pipeline performs more complex closure-allocation optimizations than the state of the art in verified compilation. Using our novel verification framework, we prove an end-to-end theorem for our pipeline that covers both termination and divergence and applies to whole-program and separate compilation, even when different modules are compiled with different optimizations. Our results are mechanized in the Coq proof assistant.
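The closure-conversion step described in this abstract (lifting nested functions out of an untyped functional intermediate language) can be pictured with a small sketch. The following Python fragment illustrates the general technique under simplified assumptions; it is not CertiCoq's verified ANF/CPS middle end, and the node and label names are invented for the example.

```python
# Closure conversion on a toy lambda-calculus AST: every nested Lam is lifted
# into a top-level code table and replaced by a MkClosure node pairing the
# lifted code with the variables it captures.  A sketch of the general
# technique only; not CertiCoq's verified pipeline.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

@dataclass
class MkClosure:            # closure value = code label + captured environment
    label: str
    captured: list

def free_vars(e, bound=frozenset()):
    if isinstance(e, Var):
        return set() if e.name in bound else {e.name}
    if isinstance(e, Lam):
        return free_vars(e.body, bound | {e.param})
    if isinstance(e, App):
        return free_vars(e.fn, bound) | free_vars(e.arg, bound)
    return set()

def closure_convert(e, code_table):
    """Rewrite every Lam into a MkClosure over code lifted into `code_table`."""
    if isinstance(e, Var):
        return e
    if isinstance(e, App):
        return App(closure_convert(e.fn, code_table),
                   closure_convert(e.arg, code_table))
    if isinstance(e, Lam):
        captured = sorted(free_vars(e))             # what the closure must carry
        body = closure_convert(e.body, code_table)  # convert inner lambdas first
        label = f"code_{len(code_table)}"
        # The lifted function is top level: its captured variables are supplied
        # through an explicit environment, so no nesting remains.  (A full
        # pipeline would also rewrite captured uses into env projections.)
        code_table[label] = {"env": captured, "param": e.param, "body": body}
        return MkClosure(label, [Var(v) for v in captured])
    raise TypeError(f"unknown node {e!r}")

code_table = {}
prog = App(Lam("x", Lam("y", Var("x"))), Var("z"))   # (\x. \y. x) z
print(closure_convert(prog, code_table))
for label, code in code_table.items():
    print(label, code)
```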
APA, Harvard, Vancouver, ISO, and other styles
2

Hartel, Pieter H., Marc Feeley, Martin Alt, Lennart Augustsson, Peter Baumann, Marcel Beemster, Emmanuel Chailloux, et al. "Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark." Journal of Functional Programming 6, no. 4 (July 1996): 621–55. http://dx.doi.org/10.1017/s0956796800001891.

Full text
Abstract:
Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
APA, Harvard, Vancouver, ISO, and other styles
3

Serrano, Manuel. "Of JavaScript AOT compilation performance." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–30. http://dx.doi.org/10.1145/3473575.

Full text
Abstract:
The fastest JavaScript production implementations use just-in-time (JIT) compilation and the vast majority of academic publications about implementations of dynamic languages published during the last two decades focus on JIT compilation. This does not imply that static compilers (AoT) cannot be competitive; as comparatively little effort has been spent creating fast AoT JavaScript compilers, a scientific comparison is lacking. This paper presents the design and implementation of an AoT JavaScript compiler, focusing on a performance analysis. The paper reports on two experiments, one based on standard JavaScript benchmark suites and one based on new benchmarks chosen for their diversity of styles, authors, sizes, provenance, and coverage of the language. The first experiment shows an advantage to JIT compilers, which is expected after the decades of effort that these compilers have paid to these very tests. The second shows more balanced results, as the AoT compiler generates programs that reach competitive speeds and that consume significantly less memory. The paper presents and evaluates techniques that we have either invented or adapted from other systems, to improve AoT JavaScript compilation.
APA, Harvard, Vancouver, ISO, and other styles
4

Hardin, Thérèse, Luc Maranget, and Bruno Pagano. "Functional runtime systems within the lambda-sigma calculus." Journal of Functional Programming 8, no. 2 (March 1998): 131–76. http://dx.doi.org/10.1017/s0956796898002986.

Full text
Abstract:
We define a weak λ-calculus, λσw, as a subsystem of the full λ-calculus with explicit substitutions λσ⇑. We claim that λσw could be the archetypal output language of functional compilers, just as the λ-calculus is their universal input language. Furthermore, λσ⇑ could be the adequate theory to establish the correctness of functional compilers. Here we illustrate these claims by proving the correctness of four simplified compilers and runtime systems modelled as abstract machines. The four machines we prove are the Krivine machine, the SECD, the FAM and the CAM. Thus, we give the first formal proofs of Cardelli's FAM and of its compiler.
APA, Harvard, Vancouver, ISO, and other styles
5

Pompougnac, Hugo, Ulysse Beaugnon, Albert Cohen, and Dumitru Potop Butucaru. "Weaving Synchronous Reactions into the Fabric of SSA-form Compilers." ACM Transactions on Architecture and Code Optimization 19, no. 2 (June 30, 2022): 1–25. http://dx.doi.org/10.1145/3506706.

Full text
Abstract:
We investigate the programming of reactive systems combining closed-loop control with performance-intensive components such as Machine Learning (ML). Reactive control systems are often safety-critical and associated with real-time execution requirements, a domain of predilection for synchronous programming languages. Extending the high levels of assurance found in reactive control systems to computationally intensive code remains an open issue. We tackle it by unifying concepts and algorithms from synchronous languages with abstractions commonly found in general-purpose and ML compilers. This unification across embedded and high-performance computing enables a high degree of reuse of compiler abstractions and code. We first recall commonalities between dataflow synchronous languages and the static single assignment (SSA) form of general-purpose/ML compilers. We highlight the key mechanisms of synchronous languages that SSA does not cover—denotational concepts such as synchronizing computations with an external time base, cyclic and reactive I/O, as well as the operational notions of relaxing control flow dominance and the modeling of absent values. We discover that initialization-related static analyses and code generation aspects can be fully decoupled from other aspects of synchronous semantics such as memory management and causality analysis, the latter being covered by existing dominance-based algorithms of SSA-form compilers. We show how the SSA form can be seamlessly extended to enable all SSA-based transformations and optimizations on reactive programs with synchronous concurrency. We derive a compilation flow suitable for both high-performance and reactive aspects of a control application, by embedding the Lustre dataflow synchronous language into the SSA-based MLIR/LLVM compiler infrastructure. This allows the modeling of signal processing and deep neural network inference in the (closed) loop of feedback-directed control systems. With only minor efforts leveraging the MLIR infrastructure, the generated code matches or outperforms state-of-the-art synchronous language compilers on computationally intensive ML applications.
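To make the dataflow-synchronous/SSA correspondence that this abstract builds on concrete, here is a hypothetical hand translation of a small Lustre-style node into a step function: each stream variable is assigned exactly once per reaction, and the `pre`/initialization operators become explicit state threaded between reactions. This is an illustrative sketch, not the MLIR/LLVM code produced by the paper's compilation flow.

```python
# A Lustre-style node
#     node counter(reset: bool) returns (n: int);
#     let n = 0 -> if reset then 0 else (pre n) + 1; tel
# hand-compiled to a step function.  Each local is assigned once per reaction
# (SSA-like), and `pre n` plus the `->` initializer become explicit state.
# Hypothetical sketch; not the code generated by the paper's MLIR-based flow.
from dataclasses import dataclass

@dataclass
class CounterState:
    init: bool = True      # first reaction? (models the `->` operator)
    pre_n: int = 0         # value of n from the previous reaction (`pre n`)

def counter_step(state: CounterState, reset: bool) -> int:
    """One synchronous reaction of the node."""
    if state.init:
        n = 0                          # left-hand side of `0 -> ...`
    elif reset:
        n = 0
    else:
        n = state.pre_n + 1            # right-hand side uses `pre n`
    state.init = False
    state.pre_n = n                    # commit state for the next reaction
    return n

state = CounterState()
outputs = [counter_step(state, reset) for reset in [False, False, False, True, False]]
print(outputs)   # [0, 1, 2, 0, 1]
```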
APA, Harvard, Vancouver, ISO, and other styles
6

Michael, Alexandra E., Anitha Gollamudi, Jay Bosamiya, Evan Johnson, Aidan Denlinger, Craig Disselkoen, Conrad Watt, et al. "MSWasm: Soundly Enforcing Memory-Safe Execution of Unsafe Code." Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 425–54. http://dx.doi.org/10.1145/3571208.

Full text
Abstract:
Most programs compiled to WebAssembly (Wasm) today are written in unsafe languages like C and C++. Unfortunately, memory-unsafe C code remains unsafe when compiled to Wasm—and attackers can exploit buffer overflows and use-after-frees in Wasm almost as easily as they can on native platforms. Memory-Safe WebAssembly (MSWasm) proposes to extend Wasm with language-level memory-safety abstractions to precisely address this problem. In this paper, we build on the original MSWasm position paper to realize this vision. We give a precise and formal semantics of MSWasm, and prove that well-typed MSWasm programs are, by construction, robustly memory safe. To this end, we develop a novel, language-independent memory-safety property based on colored memory locations and pointers. This property also lets us reason about the security guarantees of a formal C-to-MSWasm compiler—and prove that it always produces memory-safe programs (and preserves the semantics of safe programs). We use these formal results to then guide several implementations: Two compilers of MSWasm to native code, and a C-to-MSWasm compiler (that extends Clang). Our MSWasm compilers support different enforcement mechanisms, allowing developers to make security-performance trade-offs according to their needs. Our evaluation shows that on the PolyBenchC suite, the overhead of enforcing memory safety in software ranges from 22% (enforcing spatial safety alone) to 198% (enforcing full memory safety), and 51.7% when using hardware memory capabilities for spatial safety and pointer integrity. More importantly, MSWasm’s design makes it easy to swap between enforcement mechanisms; as fast (especially hardware-based) enforcement techniques become available, MSWasm will be able to take advantage of these advances almost for free.
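The 'colored memory locations and pointers' property can be pictured with a toy runtime model: every allocation receives a fresh color, pointers carry (segment, offset, color), and accesses trap on out-of-bounds offsets or stale colors. The Python sketch below is only an illustrative approximation, not MSWasm's formal semantics or any of its enforcement back ends.

```python
# Toy model of colored memory: each allocation is a segment tagged with a
# fresh color; a pointer is (segment id, offset, color).  Accesses check both
# bounds (spatial safety) and color (temporal safety, e.g. use-after-free).
# Illustrative only; not MSWasm's formal semantics or enforcement back ends.
import itertools

class MemorySafetyError(Exception):   # distinct name to avoid the builtin MemoryError
    pass

class ColoredMemory:
    def __init__(self):
        self.segments = {}                    # segment id -> (color, bytearray)
        self.fresh = itertools.count(1)

    def alloc(self, size):
        sid, color = next(self.fresh), next(self.fresh)
        self.segments[sid] = (color, bytearray(size))
        return (sid, 0, color)                # a fat pointer / handle

    def free(self, ptr):
        sid, _, _ = ptr
        _, data = self.segments[sid]
        self.segments[sid] = (None, data)     # retire the color: later uses trap

    def _check(self, ptr):
        sid, off, color = ptr
        seg_color, data = self.segments[sid]
        if seg_color != color:
            raise MemorySafetyError("use-after-free: stale color")
        if not 0 <= off < len(data):
            raise MemorySafetyError("out-of-bounds access")
        return data

    def store(self, ptr, byte):
        self._check(ptr)[ptr[1]] = byte

    def load(self, ptr):
        return self._check(ptr)[ptr[1]]

mem = ColoredMemory()
p = mem.alloc(4)
mem.store(p, 42)
print(mem.load(p))                            # 42
mem.free(p)
try:
    mem.load(p)                               # trapped: stale color
except MemorySafetyError as e:
    print(e)
```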
APA, Harvard, Vancouver, ISO, and other styles
7

Dold, Axel, Friedrich von Henke, and Wolfgang Goerigk. "A Completely Verified Realistic Bootstrap Compiler." International Journal of Foundations of Computer Science 14, no. 04 (August 2003): 659–80. http://dx.doi.org/10.1142/s0129054103001947.

Full text
Abstract:
This paper reports on a large verification effort in constructing an initial fully trusted bootstrap compiler executable for a realistic system programming language and real target processor. The construction and verification process comprises three tasks: the verification of the compiling specification (a relation between abstract source and target programs) with respect to the language semantics and a realistic correctness criterion. This proof has been completely mechanized using the PVS verification system and is one of the largest case-studies in formal verification we are aware of. Second, the implementation of the specification in the high-level source language following a transformational approach, and finally, the implementation and verification of a binary executable written in the compiler's target language. For the latter task, a realistic technique has been developed, which is based on rigorous a-posteriori syntactic code inspection and which guarantees, for the first time, trusted execution of generated machine programs. The context of this work is the joint German research effort Verifix aiming at developing methods for the construction of correct compilers for realistic source languages and real target processors.
APA, Harvard, Vancouver, ISO, and other styles
8

Clément, Basile, and Albert Cohen. "End-to-end translation validation for the Halide language." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (December 8, 2022): 1–30. http://dx.doi.org/10.1145/3527328.

Full text
Abstract:
This paper considers the correctness of domain-specific compilers for tensor programming languages through the study of Halide, a popular representative. It describes a translation validation algorithm for affine Halide specifications, independently of the scheduling language. The algorithm relies on “prophetic” annotations added by the compiler to the generated array assignments. The annotations provide a refinement mapping from assignments in the generated code to the tensor definitions from the specification. Our implementation leverages an affine solver and a general SMT solver, and scales to complete Halide benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
9

Bernard, Thomas A. M., Clemens Grelck, and Chris R. Jesshope. "On the Compilation of a Language for General Concurrent Target Architectures." Parallel Processing Letters 20, no. 01 (March 2010): 51–69. http://dx.doi.org/10.1142/s0129626410000053.

Full text
Abstract:
The challenge of programming many-core architectures efficiently and effectively requires models and methods to co-design chip architectures and their software tool chain, using an approach that is both vertical and general. In this paper, we present compilation schemes for a general model of concurrency captured in a parallel language designed for system-level programming and as a target for higher level compilers. We also expose the challenges of integrating these transformation rules into a sequential-oriented compiler. Moreover, we discuss resource mapping inherent to those challenges. Our aim has been to reuse as much of the existing sequential compiler technology as possible in order to harness decades of prior research in compiling sequential languages.
APA, Harvard, Vancouver, ISO, and other styles
10

Gagan, B. R., Shivaprakash T, Thirumalai Shaktivel C, Vaishak P, and Kushal Kumar B. N. "Design of a New Language Seeks Literature Survey." International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (March 31, 2022): 1623–27. http://dx.doi.org/10.22214/ijraset.2022.40949.

Full text
Abstract:
In a scientific study, computing is a must-have tool. In general, scientists have various difficulties, requirements, and views when it comes to computation, which need to be addressed by the programming language that they use; this cannot be satisfied by general-purpose languages. Also, researchers need to concentrate on the issue they are working on rather than the optimizations for the calculations, so instead of using a general-purpose language, if there exists a language whose compiler would take care of those optimizations, it would make their work easier and faster. This is a survey of the work undertaken to design such programming languages and their compilers. The primary goal of this research is to examine the function of work, implementation strategy, steps taken for improving the performance, the procedure of benchmarking, and finally, the outcome of the papers studied. The survey's main conclusions are: the most common language mentioned among the papers was Python, which appears to be more popular among developers due to its simple syntax and library support for computing. On the other hand, Python lacks performance; to compensate for this performance issue, the community has developed tools like Cython, Numba, Pythran, etc., which can be used to speed up Python. Domain-specific languages such as Wolfram, Seq, and ELI highlighted various methods for overcoming problems. Some languages like Wolfram and ELI moved from interpreter to compiler to get the performance boost. Most of the compilers use LLVM as the backend for optimizations and code generation.
Keywords: scientific computation, compiler, programming language
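As a concrete instance of the Python-acceleration tools the survey mentions, the snippet below applies Numba's JIT decorator to a small numeric kernel and times it against the pure-Python version. It assumes Numba is installed; the kernel and the timing harness are purely illustrative.

```python
# Speeding up a numeric kernel with Numba's JIT compiler, one of the tools the
# survey lists for compensating for Python's interpreter overhead.
# Assumes `pip install numba`; the kernel itself is just an illustration.
import time
from numba import njit

def py_sum_of_squares(n):
    total = 0.0
    for i in range(n):
        total += i * i
    return total

@njit
def jit_sum_of_squares(n):
    total = 0.0
    for i in range(n):
        total += i * i
    return total

n = 10_000_000
jit_sum_of_squares(10)        # trigger JIT compilation outside the timed run
for label, f in [("pure Python", py_sum_of_squares), ("Numba JIT", jit_sum_of_squares)]:
    t0 = time.perf_counter()
    f(n)
    print(label, round(time.perf_counter() - t0, 3), "s")
```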
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Language compilers"

1

Seefried, Sean (Computer Science & Engineering, Faculty of Engineering, UNSW). "Language extension via dynamically extensible compilers." Awarded by: University of New South Wales, Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/29524.

Full text
Abstract:
This dissertation provides the motivation for and evidence in favour of an approach to language extension via dynamic loading of plug-ins. There is a growing realisation that language features are often a superior choice to software libraries for implementing applications. Among the benefits are increased usability, safety and efficiency. Unfortunately, designing and implementing new languages is difficult and time consuming. Thus, reuse of language infrastructure is an attractive implementation avenue. The central question then becomes, what is the best method to extend languages? Much research has focussed on methods of extension based on using features of the language itself such as macros or reflection. This dissertation focuses on a complementary solution: plug-in compilers. In this approach languages are extended at run-time via dynamic extensions to compilers, called plug-ins. Plug-ins can be used to extend the expressiveness, safety and efficiency of languages. However, a plug-in compiler provides other benefits. Plug-in compilers encourage modularity, lower the barrier of entry to development, and facilitate the distribution and use of experimental language extensions. This dissertation describes how plug-in support is added, to both the front and back-end of a compiler, and demonstrates their application through a pair of case studies.
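The plug-in compiler idea can be sketched in miniature: the compiler exposes hook points in its pipeline and loads extra passes at run time. The Python fragment below is a hypothetical illustration of that architecture, not the dissertation's actual implementation; the plug-in protocol shown here is invented for the example.

```python
# Sketch of a dynamically extensible compiler pipeline: plug-ins are loaded at
# run time (here via importlib) and register extra passes on the front or back
# end.  Hypothetical illustration; the real system hooks a full compiler.
import importlib
import sys

class PluginCompiler:
    def __init__(self):
        self.frontend_passes = []     # e.g. desugaring new surface syntax
        self.backend_passes = []      # e.g. extra optimisations on the IR

    def load_plugin(self, module_name):
        """A plug-in is any module exposing a `register(compiler)` function."""
        plugin = importlib.import_module(module_name)
        plugin.register(self)

    def compile(self, source):
        ir = source.split()                       # stand-in "parser"
        for p in self.frontend_passes:
            ir = p(ir)
        for p in self.backend_passes:
            ir = p(ir)
        return " ".join(ir)                       # stand-in "code generator"

# What a plug-in module would look like (normally shipped separately and
# discovered on disk; inlined and faked here so the sketch is self-contained):
class _upper_case_plugin:
    @staticmethod
    def register(compiler):
        compiler.backend_passes.append(lambda ir: [tok.upper() for tok in ir])

sys.modules["upper_case_plugin"] = _upper_case_plugin   # pretend it is installed

compiler = PluginCompiler()
compiler.load_plugin("upper_case_plugin")
print(compiler.compile("add x y"))               # ADD X Y
```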
APA, Harvard, Vancouver, ISO, and other styles
2

Junaidu, Sahalu B. "A parallel functional language compiler for message-passing multicomputers." Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13450.

Full text
Abstract:
The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised and it is the largest successfully parallelised Haskell program having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler is the best in programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
APA, Harvard, Vancouver, ISO, and other styles
3

Reig Galilea, Fermín Javier. "Compiler architecture using a portable intermediate language." Connect to e-thesis, 2002. http://theses.gla.ac.uk/686/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2002.
Ph.D. thesis submitted to the Department of Computing Science, University of Glasgow, 2002. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
4

Cardone, Richard Joseph. "Language and compiler support for mixin programming." Access restricted to users with UT Austin EID. Full text (PDF) from UMI/Dissertation Abstracts International, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fross, Bradley K. "Splash-2 shared-memory architecture for supporting high level language compilers." Thesis, Virginia Tech, 1995. http://hdl.handle.net/10919/42064.

Full text
Abstract:
Modern computer technology has been evolving for nearly fifty years, and has seen many architectural innovations along the way. One of the latest technologies to come about is the reconfigurable processor-based custom computing machine (CCM). CCMs use field programmable gate arrays (FPGAs) as their processing cores, giving them the flexibility of software systems with performance comparable to that of dedicated custom hardware. Hardware description languages are currently used to program CCMs. However, research is being performed to investigate the use of high-level languages (HLLs), such as the C programming language, to create CCM programs. Many aspects of CCM architectures, such as local memory systems, are not conducive to HLL compiler usage. This thesis proposes and evaluates the use of a shared-memory architecture on a Splash-2 CCM to promote the development and usage of HLL compilers for CCM systems.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
6

Seaton, Christopher Graham. "Specialising dynamic techniques for implementing the Ruby programming language." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/specialising-dynamic-techniques-for-implementing-the-ruby-programming-language(0899248b-bbec-4d4c-9507-f775f023407c).html.

Full text
Abstract:
The Ruby programming language is dynamically typed, uses dynamic and late bound dispatch for all operators, method calls and many control structures, and provides extensive metaprogramming and introspective tooling functionality. Unlike other languages where these features are available, in Ruby their use is not avoided and key parts of the Ruby ecosystem use them extensively, even for inner-loop operations. This makes a high-performance implementation of Ruby problematic. Existing implementations either do not attempt to dynamically optimise Ruby programs, or achieve relatively limited success in optimising Ruby programs containing these features. One way that the community has worked around the limitations of existing Ruby implementations is to write extension modules in the C programming language. These are statically compiled and then dynamically linked into the Ruby implementation. Compared to equivalent Ruby, this C code is often more efficient for computationally intensive code. However the interface that these C extensions provides is defined by the non-optimising reference implementation of Ruby. Implementations which want to optimise by using different internal representations must do extensive copying to provide the same interface. This then limits the performance of the C extensions in those implementations. This leaves Ruby in the difficult position where it is not only difficult to implement the language efficiently, but the previous workaround for that problem, C extensions, also limits efforts to improve performance. This thesis describes an implementation of the Ruby programming language which embraces the Ruby language and optimises specifically for Ruby as it is used in practice. It provides a high performance implementation of Ruby's dynamic features, at the same time as providing a high performance implementation of C extensions. The implementation provides a high level of compatibility with existing Ruby implementations and does not limit the available features in order to achieve high performance. Common to all the techniques that are described in this thesis is the concept of specialisation. The conventional approach taken to optimise a dynamic language such as Ruby is to profile the program as it runs. Feedback from the profiling can then be used to specialise the program for the data and control flow it is actually experiencing. This thesis extends and advances that idea by specialising for conditions beyond normal data and control flow. Programs that call a method, or lookup a variable or constant by dynamic name rather than literal syntax can be specialised for the dynamic name by generalising inline caches. Debugging and introspective tooling is implemented by specialising the code for debug conditions such as the presence of a breakpoint or an attached tracing tool. C extensions are interpreted and dynamically optimised rather than being statically compiled, and the interface which the C code is programmed against is provided as an abstraction over the underlying implementation which can then independently specialise. The techniques developed in this thesis have a significant impact on performance of both synthetic benchmarks and kernels from real-world Ruby programs. The implementation of Ruby which has been developed achieves an order of magnitude or better increase in performance compared to the next-best implementation. 
In many cases the techniques are 'zero-overhead', in that the generated machine code is exactly the same for when the most dynamic features of Ruby are used, as when only static features are used.
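The generalised inline caches mentioned above can be pictured with a small dynamic-dispatch model: a per-call-site cache remembers the method found for recently seen receiver classes, so repeated calls on the same shapes skip the full lookup. The Python sketch below illustrates the general technique only; it is not the Truffle/Graal machinery used by the thesis.

```python
# Toy polymorphic inline cache for dynamic method dispatch: the cache maps a
# receiver's class to the method found by the slow lookup, so monomorphic and
# low-degree polymorphic call sites skip the full search on later calls.
# Hypothetical sketch of the general technique, not TruffleRuby's machinery.

class InlineCache:
    def __init__(self, method_name, max_entries=2):
        self.method_name = method_name
        self.entries = {}                 # receiver class -> resolved method
        self.max_entries = max_entries
        self.slow_lookups = 0

    def call(self, receiver, *args):
        klass = type(receiver)
        target = self.entries.get(klass)
        if target is None:                              # cache miss: slow path
            self.slow_lookups += 1
            target = getattr(klass, self.method_name)   # full method lookup
            if len(self.entries) < self.max_entries:
                self.entries[klass] = target            # specialise this call site
        return target(receiver, *args)

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

site = InlineCache("area")                  # one cache per call site
shapes = [Circle(1), Square(2), Circle(3), Square(4), Circle(5)]
print([round(site.call(s), 2) for s in shapes])
print("slow lookups:", site.slow_lookups)   # 2, despite 5 dynamic calls
```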
APA, Harvard, Vancouver, ISO, and other styles
7

Moon, Hae-Kyung. "Compiler construction for a simple Pascal-like language." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897511.

Full text
Abstract:
In this thesis a compiler called SPASCAL is implemented which translates source programs in a simple Pascal-like language called SPASCAL into target programs in the VAX assembly language. This thesis clearly describes the main aspects of a compiler: lexical analysis and syntactic analysis, including the symbol-table routines and the error-handling routines. This thesis uses regular expressions to define the lexical structure and a context-free grammar to define the syntactic structure of SPASCAL. The compiler is constructed using syntax-directed translation, context-free grammars and a set of semantic rules. The SPASCAL compiler is written in standard C under UNIX.
Department of Computer Science
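The regular-expression-driven lexical analysis described in this abstract can be sketched generically. The fragment below is a hypothetical Python illustration built around one alternation of named groups; the token set is invented and is not SPASCAL's actual lexical definition.

```python
# A tiny regular-expression lexer in the style described above: token classes
# are defined by regular expressions and the scanner repeatedly matches the
# next token.  Hypothetical token set; not SPASCAL's actual lexer.
import re

TOKEN_SPEC = [
    ("NUMBER",  r"\d+"),
    ("IDENT",   r"[A-Za-z_][A-Za-z0-9_]*"),
    ("ASSIGN",  r":="),
    ("OP",      r"[+\-*/]"),
    ("LPAREN",  r"\("),
    ("RPAREN",  r"\)"),
    ("SEMI",    r";"),
    ("SKIP",    r"[ \t\n]+"),
    ("ERROR",   r"."),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))
KEYWORDS = {"begin", "end", "if", "then", "while", "do"}

def tokenize(source):
    for m in MASTER.finditer(source):
        kind, text = m.lastgroup, m.group()
        if kind == "SKIP":
            continue
        if kind == "ERROR":
            raise SyntaxError(f"unexpected character {text!r}")
        if kind == "IDENT" and text.lower() in KEYWORDS:
            kind = text.upper()                      # keywords get their own kind
        yield (kind, text)

print(list(tokenize("begin x := x + 42; end")))
```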
APA, Harvard, Vancouver, ISO, and other styles
8

Hessaraki, Alireza. "CCC86, a generic 8086 C-language cross compiler plus communication package." Virtual Press, 1987. http://liblink.bsu.edu/uhtbin/catkey/544004.

Full text
Abstract:
The Cross Compiler is an excellent and valuable program development tool. It provides the user with a low-level compiled language that allows character (byte), integer (8086 word) and pointer (8086 one-word address) manipulation. It also allows recursion, has modern control flow and a rich set of operators. The Communication Program, which includes a file transfer utility, allows the student to download or upload their C programs to a PC. It allows use of a modem. File transfers can be done using XON/XOFF or XMODEM. It also supports the INS 8250 UART chip, plus the 16450 high-speed device found in hardware such as the IBM AT Serial/Parallel Adapter.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
9

Calnan, Paul W., III. "EXTRACT: Extensible Transformation and Compiler Technology." Digital WPI, 2003. https://digitalcommons.wpi.edu/etd-theses/484.

Full text
Abstract:
Code transformation is widely used in programming. Most developers are familiar with using a preprocessor to perform syntactic transformations (symbol substitution and macro expansion). However, it is often necessary to perform more complex transformations using semantic information contained in the source code. In this thesis, we developed EXTRACT, a general-purpose code transformation language. Using EXTRACT, it is possible to specify, in a modular and extensible manner, a variety of transformations on Java code such as insertion, removal, and restructuring. In support of this, we also developed JPath, a path language for identifying portions of Java source code. Combined, these two technologies make it possible to identify source code that is to be transformed and then specify how that code is to be transformed. We evaluate our technology using three case studies: a type name qualifier which transforms Java class names into fully-qualified class names; a contract checker which enforces pre- and post-conditions across behavioral subtypes; and a code obfuscator which mangles the names of a class's methods and fields such that they cannot be understood by a human, without breaking the semantic content of the class.
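The 'identify, then transform' workflow that JPath and EXTRACT provide for Java can be mimicked in miniature with Python's own ast module (selection by node type and name, then rewriting), in the spirit of the obfuscator case study. This is an analogy under different tooling, not EXTRACT's language, and it assumes Python 3.9+ for ast.unparse.

```python
# Miniature analogue of the "identify source code, then transform it" workflow:
# Python's ast module plays the role of JPath (selection) and EXTRACT
# (transformation).  Illustrative analogy only; EXTRACT itself targets Java.
import ast

class RenameFunctions(ast.NodeTransformer):
    """Mangle the names of selected functions and of calls to them."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.name = self.mapping.get(node.name, node.name)
        return node

    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id in self.mapping:
            new_name = ast.Name(self.mapping[node.func.id], ast.Load())
            node.func = ast.copy_location(new_name, node.func)
        return node

source = """
def area(r):
    return 3.14 * r * r

print(area(2))
"""
tree = ast.parse(source)
tree = RenameFunctions({"area": "_f0"}).visit(tree)
ast.fix_missing_locations(tree)
print(ast.unparse(tree))     # the transformed program, with `area` mangled to `_f0`
```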
APA, Harvard, Vancouver, ISO, and other styles
10

Cook, Philip John. "Incremental compilation in language-based environments." [St. Lucia, Qld.], 2006. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19173.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Language compilers"

1

Kaplan, Randy M. Constructing language processors for little languages. New York: Wiley, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Constructing language processors for little languages. New York: Wiley, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kiong, Derek Beng Kee. Compiler technology: Tools, translators, and language implementation. Boston: Kluwer Academic Publishers, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Programming language processors: Compilers and interpreters. New York: Prentice Hall, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Paakki, Jukka, ed. Automating language implementation. New York: Ellis Horwood, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Design of compilers: Techniques of programming language translation. Boca Raton: CRC Press, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lemone, Karen A. Fundamentals of compilers: An introduction to computer language translation. Boca Raton: CRC Press, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fundamentals of compilers: An introduction to computer language translation. Boca Raton: CRC Press, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

SpringerLink (Online service), ed. Programming Language Concepts. London: Springer London, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Writing compilers and interpreters. 2nd ed. New York: Wiley Computer Publishing, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Language compilers"

1

Koskimies, Kai. "Software engineering aspects in language implementation." In Compiler Compilers and High Speed Compilation, 39–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51364-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schumi, Richard, and Jun Sun. "SpecTest: Specification-Based Compiler Testing." In Fundamental Approaches to Software Engineering, 269–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71500-7_14.

Full text
Abstract:
Compilers are error-prone due to their high complexity. They are relevant for not only general purpose programming languages, but also for many domain specific languages. Bugs in compilers can potentially render all programs at risk. It is thus crucial that compilers are systematically tested, if not verified. Recently, a number of efforts have been made to formalise and standardise programming language semantics, which can be applied to verify the correctness of the respective compilers. In this work, we present a novel specification-based testing method named SpecTest to better utilise these semantics for testing. By applying an executable semantics as test oracle, SpecTest can discover deep semantic errors in compilers. Compared to existing approaches, SpecTest is built upon a novel test coverage criterion called semantic coverage which brings together mutation testing and fuzzing to specifically target less tested language features. We apply SpecTest to systematically test two compilers, i.e., the Java compiler and the Solidity compiler. SpecTest improves the semantic coverage of both compilers considerably and reveals multiple previously unknown bugs.
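The oracle idea at the heart of SpecTest can be shown in miniature: an executable semantics (here a reference interpreter) judges whatever the compiler under test produces for randomly generated programs. The sketch below is a toy differential-testing loop over arithmetic expressions; it omits SpecTest's mutation operators and semantic-coverage criterion and does not target Java or Solidity.

```python
# Differential testing with an executable semantics as oracle: random
# expressions are run through a reference interpreter and through a toy
# "compiler" (to a stack machine); any disagreement flags a compiler bug.
# Toy setup only; SpecTest targets real compilers and adds coverage-guided
# mutation and fuzzing on top of this basic oracle loop.
import random

def gen_expr(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.randint(0, 9)
    return (random.choice("+-*"), gen_expr(depth - 1), gen_expr(depth - 1))

def interpret(e):                     # the executable semantics (oracle)
    if isinstance(e, int):
        return e
    op, l, r = e
    l, r = interpret(l), interpret(r)
    return l + r if op == "+" else l - r if op == "-" else l * r

def compile_to_stack(e, code=None):   # the compiler under test
    code = [] if code is None else code
    if isinstance(e, int):
        code.append(("push", e))
    else:
        op, l, r = e
        compile_to_stack(l, code)
        compile_to_stack(r, code)
        code.append((op,))
    return code

def run_stack(code):
    stack = []
    for instr in code:
        if instr[0] == "push":
            stack.append(instr[1])
        else:
            r, l = stack.pop(), stack.pop()
            stack.append(l + r if instr[0] == "+" else
                         l - r if instr[0] == "-" else l * r)
    return stack[0]

for _ in range(1000):
    e = gen_expr()
    assert run_stack(compile_to_stack(e)) == interpret(e), f"compiler bug on {e}"
print("no disagreements found")
```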
APA, Harvard, Vancouver, ISO, and other styles
3

Mughal, Khalid Azim. "Generation of incremental indirect threaded code for language-based programming environments." In Compiler Compilers and High Speed Compilation, 230–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51364-7_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lipps, Peter, Ulrich Möncke, and Reinhard Wilhelm. "OPTRAN - A language/system for the specification of program transformations: System overview and experiences." In Compiler Compilers and High Speed Compilation, 52–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51364-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Strout, Michelle Mills, Saumya Debray, Kate Isaacs, Barbara Kreaseck, Julio Cárdenas-Rodríguez, Bonnie Hurwitz, Kat Volk, et al. "Language-Agnostic Optimization and Parallelization for Interpreted Languages." In Languages and Compilers for Parallel Computing, 36–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35225-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kuncak, Viktor, Patrick Lam, and Martin Rinard. "A Language for Role Specifications." In Languages and Compilers for Parallel Computing, 366–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-35767-x_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hukerikar, Saurabh, and Christian Engelmann. "Language Support for Reliable Memory Regions." In Languages and Compilers for Parallel Computing, 73–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52709-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chamberlain, Bradford L., E. Christopher Lewis, and Lawrence Snyder. "Language Support for Pipelining Wavefront Computations." In Languages and Compilers for Parallel Computing, 318–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44905-1_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Stichnoth, James M., and Thomas Gross. "A communication backend for parallel language compilers." In Languages and Compilers for Parallel Computing, 224–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0014202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sura, Zehra, Chi-Leung Wong, Xing Fang, Jaejin Lee, Samuel P. Midkiff, and David Padua. "Automatic Implementation of Programming Language Consistency Models." In Languages and Compilers for Parallel Computing, 172–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11596110_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Language compilers"

1

Zaafrani, Abderrazek, and Xinmin Tian. "Performance Portability of XL HPF Compiler on IBM SP2 and SMP Multiprocessors." In International Symposium on Computer Architecture and High Performance Computing. Sociedade Brasileira de Computação, 1999. http://dx.doi.org/10.5753/sbac-pad.1999.19767.

Full text
Abstract:
High Performance Fortran (HPF) is a data-parallel programming language that allows the programmer to specify the data decomposition onto the processors while the compiler takes care of the tedious tasks of communication generation and computation partitioning. Shifting some of the complex tasks from the user to the compiler should encourage programmers to write and port code to parallel machines, especially if the compiler implements these tasks efficiently. In this paper, performance results and analysis of a subset of the SPEC92 is presented for the XL HPF compiler on IBM SP2 machines. In addition to obtaining good performance from the compiler, one of the main concerns of HPF users is portability. Experimental results and analysis are presented in this paper to investigate performance portability (consistency) first across multiprocessor architectures and then across compilers. For performance portability across multiprocessor machines, the same XL HPF compiler used for the IBM SP2 distributed memory machine experiment is also used to compile and execute the same applications but on IBM SMP machines. The comparable speedup and behaviour obtained for both machines indicates that HPF compilers can be portable across different architectures. For performance portability across compilers, various HPF programming techniques and recommendations are introduced to increase the chances of obtaining performance consistency with different HPF compilers.
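The data-decomposition bookkeeping that HPF shifts onto the compiler can be made concrete with the standard BLOCK distribution rule. The sketch below renders that rule and the owner-computes partitioning in Python for illustration; it is not XL HPF's generated SP2/SMP code, and the array size and processor count are arbitrary.

```python
# The bookkeeping an HPF compiler performs for `!HPF$ DISTRIBUTE A(BLOCK)`:
# decide which contiguous block of the array each processor owns, then apply
# the owner-computes rule to partition loop iterations.  Illustrative sketch
# of the distribution rule only, not XL HPF's generated code.

def block_bounds(n, nprocs, p):
    """Half-open index range [lo, hi) of array elements owned by processor p."""
    block = -(-n // nprocs)                 # ceil(n / nprocs), as in BLOCK
    lo = min(p * block, n)
    hi = min(lo + block, n)
    return lo, hi

def owner(i, n, nprocs):
    """Which processor owns element i under the same BLOCK distribution."""
    block = -(-n // nprocs)
    return i // block

n, nprocs = 10, 4
a = list(range(n))
# owner-computes: each processor updates only the block it owns
partial = []
for p in range(nprocs):
    lo, hi = block_bounds(n, nprocs, p)
    partial.append([a[i] * 2 for i in range(lo, hi)])       # p's share of `a = a * 2`

print([block_bounds(n, nprocs, p) for p in range(nprocs)])  # [(0, 3), (3, 6), (6, 9), (9, 10)]
print([owner(i, n, nprocs) for i in range(n)])              # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
print(sum(partial, []))                                     # the fully updated array
```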
APA, Harvard, Vancouver, ISO, and other styles
2

Davidson, J. "Session details: Compilers." In PLDI06: ACM SIGPLAN Conference on Programming Language Design and Implementation 2006. New York, NY, USA: ACM, 2006. http://dx.doi.org/10.1145/3245510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pearce, David J. "Language Design Meets Verifying Compilers (Keynote)." In GPCE '22: 21st ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3564719.3570917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rompf, Tiark, Arvind K. Sujeeth, Kevin J. Brown, HyoukJoong Lee, Hassan Chafi, and Kunle Olukotun. "Surgical precision JIT compilers." In PLDI '14: ACM SIGPLAN Conference on Programming Language Design and Implementation. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2594291.2594316.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Padhye, Rohan, Koushik Sen, and Paul N. Hilfinger. "ChocoPy: a programming language for compilers courses." In the 2019 ACM SIGPLAN Symposium. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3358711.3361627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Biggar, Paul, Edsko de Vries, and David Gregg. "A practical solution for scripting language compilers." In the 2009 ACM symposium. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1529282.1529709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fehr, Mathieu, Jeff Niu, River Riddle, Mehdi Amini, Zhendong Su, and Tobias Grosser. "IRDL: an IR definition language for SSA compilers." In PLDI '22: 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3519939.3523700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pingali, Keshav. "Session details: Domain specific languages and their compilers." In PLDI '11: ACM SIGPLAN Conference on Programming Language Design and Implementation. New York, NY, USA: ACM, 2011. http://dx.doi.org/10.1145/3249141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Allan, Chris, Oege de Moor, Damien Sereni, Ganesh Sittampalam, Julian Tibble, Clark Verbrugge, Pavel Avgustinov, et al. "abc: the AspectBench Compiler for AspectJ: a workbench for aspect-oriented programming language and compilers research." In Companion to the 20th annual ACM SIGPLAN conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1094855.1094877.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ferreira, Mariana, and Bruno Lopes. "Integrating tools to reason about Reo circuits." In Workshop Brasileiro de Lógica. Sociedade Brasileira de Computação, 2022. http://dx.doi.org/10.5753/wbl.2022.222914.

Full text
Abstract:
Critical systems are present in many applications and require high reliability. However, there are still challenges in the verification and certification of these systems. The graphical language Reo is based on coordination and models the communication of software components. A set of existing tools offers compilers and reasoners based on proof assistants and model checkers for Reo-specified systems. This paper proposes the integration of these compilers through an interface that allows building Reo circuits, converting the model to the compilers' input language, simplifying the use of logic tools, and allowing the creation of new channels in addition to the canonical ones. The underlying theory, the integrated tools, the features of the interface, and some examples are presented.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Language compilers"

1

Willcock, Jeremiah J. A Language for Specifying Compiler Optimizations for Generic Software. Office of Scientific and Technical Information (OSTI), January 2007. http://dx.doi.org/10.2172/926400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Amarasinghe, Saman. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing. Office of Scientific and Technical Information (OSTI), March 2015. http://dx.doi.org/10.2172/1176882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gulden, Samuel L. The Development of a Compiler Design Course With Ada as The Implementation Language. Fort Belvoir, VA: Defense Technical Information Center, January 1988. http://dx.doi.org/10.21236/ada265126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Furey, John, Austin Davis, and Jennifer Seiter-Moser. Natural language indexing for pedoinformatics. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/41960.

Full text
Abstract:
The multiple schemata for the classification of soils rely on differing criteria, but the major soil science systems, including the United States Department of Agriculture (USDA) and the international harmonized World Reference Base for Soil Resources soil classification systems, are primarily based on inferred pedogenesis. Largely these classifications are compiled from individual observations of soil characteristics within soil profiles, and the vast majority of this pedologic information is contained in nonquantitative text descriptions. We present initial text mining analyses of parsed text in the digitally available USDA soil taxonomy documentation and the Soil Survey Geographic database. Previous research has shown that latent information structure can be extracted from scientific literature using Natural Language Processing techniques, and we show that this latent information can be used to expedite query performance by using syntactic elements and part-of-speech tags as indices. Technical vocabulary often poses a text mining challenge due to the rarity of its diction in the broader context. We introduce an extension to the common English vocabulary that allows for nearly-complete indexing of USDA Soil Series Descriptions.
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Lian. The object-oriented design of a hardware description language analyser for the DIADES silicon compiler system. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jones, Larry, and Frank Glandorf. Integrated Information Support System (IISS). Volume 8. User Interface Subsystem. Part 14. Forms Language Compiler Development Specification. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada182582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Morenc, Carol, Sandy Barker, and Penny Robie. Integrated Information Support System (IISS). Volume 8. User Interface Subsystem. Part 15. Forms Language Compiler Product Specification. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada182583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Glandorf, Frank. Integrated Information Support System (IISS). Volume 8. User Interface Subsystem. Part 16. Forms Language Compiler Unit Test Plan. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada182584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [n.p.], June 2021. http://dx.doi.org/10.31812/123456789/4431.

Full text
Abstract:
The article is devoted to a comparative analysis of popular online dictionaries and an overview of the main tools these resources offer for studying a language. The use of dictionaries in learning a foreign language is an important step towards understanding the language. The effectiveness of this process increases with the use of online dictionaries, which provide many tools for improving the educational process. Based on the Alexa Internet resource, the most popular online dictionaries were found to be: Cambridge Dictionary, Wordreference, Merriam–Webster, Wiktionary, TheFreeDictionary, Dictionary.com, Glosbe, Collins Dictionary, Longman Dictionary, Oxford Dictionary. A deep analysis of these online dictionaries showed that they share standard functions such as word explanations, transcription, audio pronunciation, semantic connections, and examples of use. In the dictionaries examined, we also identified additional tools for learning foreign languages (mostly English) that can be effective. In total, we describe sixteen functions of these online learning platforms that can be useful in learning a foreign language. We have compiled a comparison table based on the following functions: machine translation, multilingualism, video pronunciation, word images, discussion, collaborative editing, word ranking, hints, learning tools, thesaurus, paid services, content sharing, hyperlinks in definitions, registration, word lists, mobile version, etc. Based on the additional tools of the online dictionaries, we created a diagram that shows the functionality of the analyzed platforms.
APA, Harvard, Vancouver, ISO, and other styles