
Journal articles on the topic "Compilers (Computer programs)"



Consult the top 50 journal articles for your research on the topic "Compilers (Computer programs)".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Feng, Jing Ge, Ye Ping He and Qiu Ming Tao. "Evaluation of Compilers’ Capability of Automatic Vectorization Based on Source Code Analysis". Scientific Programming 2021 (November 30, 2021): 1–15. http://dx.doi.org/10.1155/2021/3264624.

Abstract
Automatic vectorization is an important technique for compilers to improve the parallelism of programs. With the widespread usage of SIMD (Single Instruction Multiple Data) extensions in modern processors, automatic vectorization has become a hot topic in compiler research. Accurately evaluating the effectiveness of automatic vectorization in typical compilers is quite valuable for compiler optimization and design. This paper evaluates the effectiveness of automatic vectorization, analyzes its limitations and their main causes, and improves the automatic vectorization technology. The paper first classifies the programs by two main factors: program characteristics and transformation methods. Then, it evaluates the effectiveness of automatic vectorization in three well-known compilers (GCC, LLVM, and ICC, including multiple versions from the past 5 years) through the TSVC (Test Suite for Vectorizing Compilers) benchmark. Furthermore, it analyzes the limitations of automatic vectorization based on source code analysis, and introduces the differences between academic research and engineering practice in automatic vectorization and their main causes. Finally, it gives some suggestions as to how to improve automatic vectorization capability.
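As a toy illustration of the loop-carried dependences that make some of these loops resist vectorization, the following Python sketch (hypothetical helper, not from the paper) flags a flow dependence between iterations by comparing read and write index offsets on the same array:

```python
def has_carried_dependence(writes, reads):
    """Detect a loop-carried flow dependence for one array.

    writes/reads: index offsets (relative to loop index i) at which
    the loop body writes/reads the same array, e.g. a[i] = a[i-1] + 1
    is writes={0}, reads={-1}.  A read offset smaller than a write
    offset means iteration i reads a value produced by an earlier
    iteration, so iterations cannot run in lockstep SIMD lanes.
    """
    return any(r < w for w in writes for r in reads)

# a[i] = a[i-1] + b[i]  ->  carried flow dependence, resists vectorization
print(has_carried_dependence(writes={0}, reads={-1}))  # True
# a[i] = a[i+1] * 2     ->  anti-dependence only, still vectorizable
print(has_carried_dependence(writes={0}, reads={1}))   # False
```

A loop like `a[i] = a[i-1] + b[i]` is flagged because each iteration consumes a value produced by the previous one, while `a[i] = a[i+1] * 2` reads ahead and can still be executed in lockstep.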
2

Susungi, Adilla and Claude Tadonki. "Intermediate Representations for Explicitly Parallel Programs". ACM Computing Surveys 54, no. 5 (June 2021): 1–24. http://dx.doi.org/10.1145/3452299.

Abstract
While compilers generally support parallel programming languages and APIs, their internal program representations are mostly designed from the sequential programs standpoint (exceptions include source-to-source parallel compilers, for instance). This makes the integration of compilation techniques dedicated to parallel programs more challenging. In addition, parallelism has various levels and different targets, each of them with specific characteristics and constraints. With the advent of multi-core processors and general purpose accelerators, parallel computing is now a common and pervasive consideration. Thus, software support to parallel programming activities is essential to make this technical transition more realistic and beneficial. The case of compilers is fundamental as they deal with (parallel) programs at a structural level, thus the need for intermediate representations. This article surveys and discusses attempts to provide intermediate representations for the proper support of explicitly parallel programs. We highlight the gap between available contributions and their concrete implementation in compilers and then exhibit possible future research directions.
3

AlFayez, Fayez. "Algorithms for Pre-Compiling Programs by Parallel Compilers". Computer Systems Science and Engineering 44, no. 3 (2023): 2165–76. http://dx.doi.org/10.32604/csse.2023.026238.
4

Dold, Axel, Friedrich von Henke and Wolfgang Goerigk. "A Completely Verified Realistic Bootstrap Compiler". International Journal of Foundations of Computer Science 14, no. 04 (August 2003): 659–80. http://dx.doi.org/10.1142/s0129054103001947.

Abstract
This paper reports on a large verification effort in constructing an initial, fully trusted bootstrap compiler executable for a realistic system programming language and a real target processor. The construction and verification process comprises three tasks. The first is the verification of the compiling specification (a relation between abstract source and target programs) with respect to the language semantics and a realistic correctness criterion; this proof has been completely mechanized using the PVS verification system and is one of the largest case studies in formal verification we are aware of. The second is the implementation of the specification in the high-level source language following a transformational approach, and the third is the implementation and verification of a binary executable written in the compiler's target language. For the latter task, a realistic technique has been developed, which is based on rigorous a-posteriori syntactic code inspection and which guarantees, for the first time, trusted execution of generated machine programs. The context of this work is the joint German research effort Verifix, which aims at developing methods for the construction of correct compilers for realistic source languages and real target processors.
5

Ciric, Miroslav and Svetozar Rancic. "Parsing in different languages". Facta universitatis - series: Electronics and Energetics 18, no. 2 (2005): 299–307. http://dx.doi.org/10.2298/fuee0502299c.

Abstract
A compiler is a translator that accepts as input a formatted source file or files, and produces as output a file that may be run directly on a computer. Given the same ANSI C++ compliant input file, two different ANSI C++ compliant compilers running on the same operating system produce two different executable programs that should execute in exactly the same way. To some degree, this is achieved by the standardization of the C++ language, but it is also possible because computer programming languages like C++ can be compiled using reliable technologies with long traditions and well-understood characteristics. LALR(k), as a practical version of LR, is such a reliable technology for parsing. The traditional LALR(1) tool YACC has proved its value during years of successful applications. Nowadays there are a few commercial and noncommercial alternatives that are very interesting and promising. This paper examines some of them, along with their ability to parse different programming languages.
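As a small illustration of the grammar-driven parsing the paper surveys, here is a toy precedence-climbing expression evaluator in Python (a much-simplified relative of the LALR(1) parsers that tools like YACC generate, and not one of the tools the paper examines):

```python
import re

# Operator precedences for a tiny expression grammar.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def tokenize(src):
    return re.findall(r"\d+|[-+*/()]", src)

def parse(tokens, min_prec=1):
    """Precedence climbing: consume one operand, then greedily absorb
    operators whose precedence is at least min_prec."""
    tok = tokens.pop(0)
    if tok == "(":
        lhs = parse(tokens)
        tokens.pop(0)          # consume ")"
    else:
        lhs = int(tok)
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        rhs = parse(tokens, PREC[op] + 1)   # left-associative binary ops
        lhs = {"+": lhs + rhs, "-": lhs - rhs,
               "*": lhs * rhs, "/": lhs // rhs}[op]
    return lhs

print(parse(tokenize("2+3*4")))    # 14
print(parse(tokenize("(2+3)*4")))  # 20
```

A production LALR parser would build the same derivations table-driven from a declarative grammar rather than by recursive calls.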
6

Steele, James K. and Ronald R. Biederman. "Powder Diffraction Pattern Simulation and Analysis". Advances in X-ray Analysis 37 (1993): 101–7. http://dx.doi.org/10.1154/s0376030800015561.

Abstract
The graphics capability and speed available in modern personal computers have encouraged an increase in the use of a direct pattern comparison approach to the analysis of x-ray and electron diffraction patterns. Several researchers over the past 30 years have presented programs and algorithms which calculate and display powder patterns for x-ray diffraction. These programs originally required a mainframe computer, which was expensive and generally not available to all researchers. With the recent advances in the speed of personal computers, language compilers, and high resolution graphics, especially within the past 5 years, real time calculation and display of calculated patterns is becoming widely available. The power of this approach will be demonstrated through the use of an IBM compatible personal computer code developed by the authors.
7

Lin, Jenny, Vidya Narayanan, Yuka Ikarashi, Jonathan Ragan-Kelley, Gilbert Bernstein and James McCann. "Semantics and Scheduling for Machine Knitting Compilers". ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–26. http://dx.doi.org/10.1145/3592449.

Abstract
Machine knitting is a well-established fabrication technique for complex soft objects, and both companies and researchers have developed tools for generating machine knitting patterns. However, existing representations for machine knitted objects are incomplete (do not cover the complete domain of machine knittable objects) or overly specific (do not account for symmetries and equivalences among knitting instruction sequences). This makes it difficult to define correctness in machine knitting, let alone verify the correctness of a given program or program transformation. The major contribution of this work is a formal semantics for knitout, a low-level Domain Specific Language for knitting machines. We accomplish this by using what we call the fenced tangle, which extends concepts from knot theory to allow for a mathematical definition of knitting program equivalence that matches the intuition behind knit objects. Finally, using this formal representation, we prove the correctness of a sequence of rewrite rules, and demonstrate how these rewrite rules can form the foundation for higher-level tasks such as compiling a program for a specific machine and optimizing for time/reliability, all while provably generating the same knit object under our proposed semantics. By establishing formal definitions of correctness, this work provides a strong foundation for compiling and optimizing knit programs.
8

Burgin, Mark. "Triadic Automata and Machines as Information Transformers". Information 11, no. 2 (February 13, 2020): 102. http://dx.doi.org/10.3390/info11020102.

Abstract
Algorithms and abstract automata (abstract machines) are used to describe, model, explore and improve computers, cell phones, computer networks, such as the Internet, and processes in them. Traditional models of information processing systems—abstract automata—are aimed at performing transformations of data. These transformations are performed by their hardware (abstract devices) and controlled by their software (programs)—both of which stay unchanged during the whole computational process. However, in physical computers, the software is also changed by special tools such as interpreters, compilers, optimizers and translators. In addition, people change the hardware of their computers by extending the external memory. Moreover, the hardware of computer networks is incessantly altering—new computers and other devices are added while other computers and other devices are disconnected. To better represent these peculiarities of computers and computer networks, we introduce and study a more complete model of computations, which is called a triadic automaton or machine. In contrast to traditional models of computations, triadic automata (machines) perform computational processes transforming not only data but also the hardware and programs that control data transformation. In addition, we further develop the taxonomy of classes of automata and machines, as well as of individual automata and machines, according to the information they produce.
9

Rushinek, Avi and Sara F. Rushinek. "Operating systems, compilers, assemblers and application programs: audit trails of user satisfaction". Microprocessors and Microsystems 9, no. 5 (June 1985): 241–49. http://dx.doi.org/10.1016/0141-9331(85)90272-8.
10

LaRose, Ryan. "Overview and Comparison of Gate Level Quantum Software Platforms". Quantum 3 (March 25, 2019): 130. http://dx.doi.org/10.22331/q-2019-03-25-130.

Abstract
Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms - Forest (pyQuil), Qiskit, ProjectQ, and the Quantum Developer Kit (Q#) - that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.
11

PHILLIPS, C. and R. PERROTT. "PROBLEMS WITH DATA PARALLELISM". Parallel Processing Letters 11, no. 01 (March 2001): 77–94. http://dx.doi.org/10.1142/s0129626401000440.

Abstract
The gradual evolution of language features and approaches used for the programming of distributed memory machines underwent substantial advances in the 1990s. One of the most promising and widely praised approaches was based on data parallelism and resulted in High Performance Fortran. This paper reports on an experiment using that approach based on a commercial distributed memory machine, available compilers and simple test programs. The results are disappointing and not encouraging. The variety of components involved and the lack of detailed knowledge available for the compilers compound the difficulties of obtaining results and making comparisons. The results show great variation and question the premise that communication is the decisive factor in performance determination. The results are also a contribution towards the difficult task of predicting performance on a distributed memory computer.
12

Jerbi, Khaled, Mickaël Raulet, Olivier Déforges and Mohamed Abid. "Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs". VLSI Design 2012 (August 16, 2012): 1–14. http://dx.doi.org/10.1155/2012/298396.

Abstract
In this paper, we introduce the Reconfigurable Video Coding (RVC) standard based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called Cal Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of the CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still image decoders are summarized. We show that the obtained results can largely satisfy the real time constraints for an embedded design on FPGA as we obtain a throughput of 73 FPS for MPEG 4 decoder and 34 FPS for coding and decoding process of the LAR coder using a video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.
13

Nagy, Stefan. "The Fun in Fuzzing". Queue 20, no. 6 (December 31, 2022): 80–87. http://dx.doi.org/10.1145/3580504.

Abstract
Stefan Nagy, an assistant professor in the Kahlert School of Computing at the University of Utah, takes us on a tour of recent research in software fuzzing, or the systematic testing of programs via the generation of novel or unexpected inputs. The first paper he discusses extends the state of the art in coverage-guided fuzzing with the semantic notion of "likely invariants," inferred via techniques from property-based testing. The second explores encoding domain-specific knowledge about certain bug classes into test-case generation. His last selection takes us through the looking glass, randomly generating entire C programs and using differential analysis to compare traces of optimized and unoptimized executions, in order to find bugs in the compilers themselves.
14

Stepanov, Daniil and Vladimir Itsykson. "Backend Bug Finder — a platform for effective compiler fuzzing". Information and Control Systems, no. 6 (December 27, 2022): 31–40. http://dx.doi.org/10.31799/1684-8853-2022-6-31-40.

Abstract
Introduction: The standard way to check the quality of a compiler is manual testing. However, it cannot cover the vast diversity of programs that can be written in a target programming language. Today, in addition to manually written tests, there are many automated compiler testing methods, among which fuzzing is one of the most powerful and useful. A compiler fuzzer is a tool that generates a random program in a target language and checks how the compiler handles it. Purpose: To develop a platform for compiler fuzzing and, based on it, a tool for Kotlin compiler testing. Results: We have developed Backend Bug Finder, a platform for compiler fuzzing. We have chosen a mutation-based approach as the method for generating random programs. First, an existing program is fed to the mutator as input and transformed in some way. Mutations can be both trivial, for example, replacing arithmetic operators with others, and complex, changing the structure of the program. Next, the resulting program is fed to the compiler, and the compiler's behavior is checked. The developed test oracle can detect three types of errors: crashes, miscompilations, and performance degradations. If an error is detected, the test case is fed into the post-processing module, where reduction and deduplication algorithms are applied. To validate the approach, we have developed a tool for fuzzing the Kotlin compiler based on the platform, which showed the applicability of the proposed approach for finding errors in modern compilers. Practical relevance: Over a year and a half of work, our tool has found thousands of different Kotlin compiler bugs, more than 200 of which were sent to the developers, and more than 80 of which have been fixed.
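The mutate-compile-check loop described above can be sketched in a few lines. This hypothetical miniature uses Python's own bytecode compiler as a stand-in target and only the trivial operator-swapping mutation class; it does not resemble Backend Bug Finder's actual implementation:

```python
import random

def mutate(src):
    """Trivial mutator: swap one arithmetic operator for another
    (the simplest mutation class the abstract mentions)."""
    ops = ["+", "-", "*", "//"]
    present = [o for o in ops if o in src]
    if not present:
        return src
    old = random.choice(present)
    new = random.choice([o for o in ops if o != old])
    return src.replace(old, new, 1)

def oracle(src):
    """Crash oracle: does the target compiler survive the input?"""
    try:
        compile(src, "<fuzz>", "exec")   # stand-in for the real compiler
        return "ok"
    except SyntaxError:
        return "rejected"
    except Exception:
        return "crash"                   # would be reduced and reported

seed = "x = 1 + 2 * 3"
random.seed(0)
for _ in range(5):
    case = mutate(seed)
    print(case, "->", oracle(case))
```

A real compiler fuzzer would additionally compare run-time behavior across compiler versions or optimization levels to catch miscompilations, not just crashes.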
15

Wu, Jiang, Jianjun Xu, Xiankai Meng, Haoyu Zhang and Zhuo Zhang. "Enabling Reliability-Driven Optimization Selection with Gate Graph Attention Neural Network". International Journal of Software Engineering and Knowledge Engineering 30, no. 11n12 (November 2020): 1641–65. http://dx.doi.org/10.1142/s0218194020400240.

Abstract
Modern compilers provide a huge number of optional compilation optimization options. It is necessary to select the appropriate compilation optimization options for different programs or applications. To mitigate this problem, machine learning is widely used as an efficient technology. How to ensure the integrity and effectiveness of program information is the key to problem mitigation. In addition, when selecting the best compilation optimization option, the optimization goals are often execution speed, code size, and CPU consumption. There is not much research on program reliability. This paper proposes a Gate Graph Attention Neural Network (GGANN)-based compilation optimization option selection model. The data flow and function-call information are integrated into the abstract syntax tree as the program graph-based features. We extend the deep neural network based on GGANN and build a learning model that learns the heuristics method for program reliability. The experiment is performed under the Clang compiler framework. Compared with the traditional machine learning method, our model improves the average accuracy by 5–11% in the optimization option selection for program reliability. At the same time, experiments show that our model has strong scalability.
16

Zhang, Du and Michael D. Kramer. "GAPS: A Genetic Programming System". International Journal on Artificial Intelligence Tools 12, no. 02 (June 2003): 187–206. http://dx.doi.org/10.1142/s0218213003001198.

Abstract
One of the major approaches in the field of evolutionary computation is genetic programming. Genetic programming tackles the issue of how to automatically create a computer program for a given problem from some initial problem statement. The goal is accomplished by genetically breeding a population of computer programs in terms of genetic operations. In this paper, we describe a genetic programming system called GAPS. GAPS has the following features: (1) it implements the standard generational algorithm for genetic programming, with some refinement on controlling intron growth during the evolution process and improved termination criteria; (2) it includes an extensible language tailored to the needs of genetic programming; and (3) it is a complete, standalone system that allows genetic programming tasks to be carried out without requiring other tools such as compilers. Results with GAPS have been satisfactory.
17

Amir, Amihood and Carl H. Smith. "The Syntax of Parallelism". Fundamenta Informaticae 19, no. 3-4 (October 1, 1993): 383–402. http://dx.doi.org/10.3233/fi-1993-193-409.

Abstract
One of the problems associated with the introduction of parallel processors is the so called “dusty deck” problem. A solution entails the development of optimizing compilers that transform programs previously written for a conventional serial processor into functionally equivalent programs that exploit the parallel processing capabilities of the new multiprocessor machines. We introduce a function Composition Model that models parallel architectures as a hierarchy of syntactic function definitions. Position in the hierarchy is equivalent to parallel time complexity in the modelled architecture. Other parallel concepts such as global vs. local communications, concurrency or exclusivity of read and write, and the number of processors used in a computation, are modelled as well. We rigorously prove that a compiler that optimizes a program for parallelism on a CREW PRAM is not effectively computable, even if it is also given an optimal serial program for the same task and a time bounding function. It turns out that the function composition model is similar to some traditional models, such as the Grzegorczyk Hierarchy. Our parallel interpretation of the Grzegorczyk Hierarchy offers new insights and admits a new cleaner and more elegant definition of the hierarchy with a single base class, as opposed to Grzegorczyk’s three.
18

Rodríguez, Gabriel, Louis-Noël Pouchet and Juan Touriño. "Representing Integer Sequences Using Piecewise-Affine Loops". Mathematics 9, no. 19 (September 24, 2021): 2368. http://dx.doi.org/10.3390/math9192368.

Abstract
A formal, high-level representation of programs is typically needed for static and dynamic analyses performed by compilers. However, the source code of target applications is not always available in an analyzable form, e.g., to protect intellectual property. To reason on such applications, it becomes necessary to build models from observations of its execution. This paper details an algebraic approach which, taking as input the trace of memory addresses accessed by a single memory reference, synthesizes an affine loop with a single perfectly nested reference that generates the original trace. This approach is extended to support the synthesis of unions of affine loops, useful for minimally modeling traces generated by automatic transformations of polyhedral programs, such as tiling. The resulting system is capable of processing hundreds of gigabytes of trace data in minutes, minimally reconstructing 100% of the static control parts in PolyBench/C applications and 99.99% in the Pluto-tiled versions of these benchmarks. As an application example of the trace modeling method, trace compression is explored. The affine representations built for the memory traces of PolyBench/C codes achieve compression factors of the order of 10^6 and 10^3 with respect to gzip for the original and tiled versions of the traces, respectively.
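A drastically simplified, one-dimensional version of the reconstruction problem can be sketched as follows (a single loop with one reference; the paper's method handles multidimensional affine nests and unions thereof):

```python
def fit_affine_1d(trace):
    """Try to model an address trace as base + i*stride; return
    (base, stride) if the model is exact, else None.  A one-loop,
    one-reference toy version of the paper's synthesis problem."""
    if len(trace) < 2:
        return None
    base, stride = trace[0], trace[1] - trace[0]
    if all(a == base + i * stride for i, a in enumerate(trace)):
        return base, stride
    return None

# Trace of a[i] accesses for an 8-byte element type starting at 0x1000:
trace = [0x1000 + 8 * i for i in range(16)]
print(fit_affine_1d(trace))           # (4096, 8)
print(fit_affine_1d([0, 8, 24, 32]))  # None: not one affine reference
```

Note how the affine model is also a compressor: the 16-entry trace collapses to two integers plus a trip count, the same effect that yields the large compression factors reported above.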
19

Erbsen, Andres, Jade Philipoom, Dustin Jamner, Ashley Lin, Samuel Gruetter, Clément Pit-Claudel and Adam Chlipala. "Foundational Integration Verification of a Cryptographic Server". Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 1704–29. http://dx.doi.org/10.1145/3656446.

Abstract
We present verification of a bare-metal server built using diverse implementation techniques and languages against a whole-system input-output specification in terms of machine code, network packets, and mathematical specifications of elliptic-curve cryptography. We used very different formal-reasoning techniques throughout the stack, ranging from computer algebra, symbolic execution, and verification-condition generation to interactive verification of functional programs including compilers for C-like and functional languages. All these component specifications and domain-specific reasoning techniques are defined and justified against common foundations in the Coq proof assistant. Connecting these components is a minimalistic specification style based on functional programs and assertions over simple objects, omnisemantics for program execution, and basic separation logic for memory layout. This design enables us to bring the components together in a top-level correctness theorem that can be audited without understanding or trusting the internal interfaces and tools. Our case study is a simple cryptographic server for flipping of a bit of state through public-key authenticated network messages, and its proof shows total functional correctness including static bounds on memory usage. This paper also describes our experiences with the specific verification tools we build upon, along with detailed analysis of reasons behind the widely varying levels of productivity we experienced between combinations of tools and tasks.
20

Dévai, Gergely, Zoltán Gera and Zoltán Kelemen. "Language abstractions for low level optimization techniques". Computer Science and Information Systems 11, no. 4 (2014): 1499–514. http://dx.doi.org/10.2298/csis130224080d.

Abstract
Programmers are often forced to implement performance-critical applications at a low abstraction level. This leads to programs that are hard to develop and maintain because the program text is mixed with low level optimization tricks and is far from the algorithm it implements. Even if compilers are smart nowadays and provide the user with many automatically applied optimizations, practice shows that in some cases it is hopeless to optimize the program automatically without the programmer's knowledge. A complementary approach is to allow the programmer to fine tune the program by providing him with language features that make the optimization easier. These are language abstractions that make optimization techniques explicit without adding too much syntactic noise to the program text. This paper presents such language abstractions for two well-known optimizations: bitvectors and SIMD (Single Instruction Multiple Data). The language features are implemented in the embedded domain specific language Feldspar which is specifically tailored for digital signal processing applications. While we present these language elements as part of Feldspar, the ideas behind them are general enough to be applied in other language definition projects as well.
21

BONE, PAUL, ZOLTAN SOMOGYI and PETER SCHACHTE. "Estimating the overlap between dependent computations for automatic parallelization". Theory and Practice of Logic Programming 11, no. 4-5 (July 2011): 575–91. http://dx.doi.org/10.1017/s1471068411000184.

Abstract
Researchers working on the automatic parallelization of programs have long known that too much parallelism can be even worse for performance than too little, because spawning a task to be run on another CPU incurs overheads. Autoparallelizing compilers have therefore long tried to use granularity analysis to ensure that they only spawn off computations whose cost will probably exceed the spawn-off cost by a comfortable margin. However, this is not enough to yield good results, because data dependencies may also limit the usefulness of running computations in parallel. If one computation blocks almost immediately and can resume only after another has completed its work, then the cost of parallelization again exceeds the benefit. We present a set of algorithms for recognizing places in a program where it is worthwhile to execute two or more computations in parallel that pay attention to the second of these issues as well as the first. Our system uses profiling information to compute the times at which a procedure call consumes the values of its input arguments and the times at which it produces the values of its output arguments. Given two calls that may be executed in parallel, our system uses the times of production and consumption of the variables they share to determine how much their executions would overlap if they were run in parallel, and therefore whether executing them in parallel is a good idea or not. We have implemented this technique for Mercury in the form of a tool that uses profiling data to generate recommendations about what to parallelize, for the Mercury compiler to apply on the next compilation of the program. We present preliminary results that show that this technique can yield useful parallelization speedups, while requiring nothing more from the programmer than representative input data for the profiling run.
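The overlap computation the abstract describes can be caricatured with a tiny cost model (a hypothetical simplification: one worst-case stall per pair of calls, rather than the per-variable interleaving the paper analyzes):

```python
def parallel_time(dur_p, dur_q, prod, cons, spawn_cost=1.0):
    """Estimate completion time of running call q in parallel with p.

    prod[v]: time (from p's start) at which p produces shared var v.
    cons[v]: time (from q's start) at which q first needs v.
    q stalls until every value it needs has been produced; a single
    worst-case stall is assumed.
    """
    stall = max((prod[v] - cons[v] for v in cons), default=0.0)
    stall = max(stall, 0.0)
    return spawn_cost + max(dur_p, dur_q + stall)

seq = 10.0 + 10.0  # sequential execution time of the two calls
# p produces its result only at the very end and q needs it at once:
# almost no overlap, so parallel execution loses to sequential.
print(parallel_time(10.0, 10.0, prod={"x": 10.0}, cons={"x": 0.0}), seq)
# p produces early and q consumes late: near-perfect overlap.
print(parallel_time(10.0, 10.0, prod={"x": 1.0}, cons={"x": 9.0}), seq)
```

The decision rule is then simply to parallelize when the estimated parallel time (including spawn cost) beats the sequential sum by a comfortable margin.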
22

WOLSKI, RICH. "STATIC SCHEDULING OF HIERARCHICAL PROGRAM GRAPHS". Parallel Processing Letters 05, no. 04 (December 1995): 611–22. http://dx.doi.org/10.1142/s0129626495000540.

Abstract
Many parallel compilation systems represent programs internally as Directed Acyclic Graphs (DAGs). However, the storage of these DAGs becomes prohibitive when the program being compiled is large. In this paper we describe a compile-time scheduling methodology for hierarchical DAG programs represented in the IFX intermediate form. The method we present is itself hierarchical, reducing the storage that would otherwise be required by a single flat DAG representation. We describe the scheduling model and demonstrate the method using the Optimizing Sisal Compiler and two scientific applications.
23

Fahringer, T. and A. Požgaj. "P3T+: A Performance Estimator for Distributed and Parallel Programs". Scientific Programming 8, no. 2 (2000): 73–93. http://dx.doi.org/10.1155/2000/217384.

Abstract
Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, which is a performance estimator for mostly regular HPF (High Performance Fortran) programs but also partially covers message passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of a communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and the usefulness of P3T+.
24

Li, Yichen, Dongwei Xiao, Zhibo Liu, Qi Pang and Shuai Wang. "Metamorphic Testing of Secure Multi-party Computation (MPC) Compilers". Proceedings of the ACM on Software Engineering 1, FSE (July 12, 2024): 1216–37. http://dx.doi.org/10.1145/3643781.

Full text
Abstract
The demanding need to perform privacy-preserving computations among multiple data owners has led to the prosperous development of secure multi-party computation (MPC) protocols. MPC offers protocols for parties to jointly compute a function over their inputs while keeping those inputs private. To date, MPC has been widely adopted in various real-world, privacy-sensitive sectors, such as healthcare and finance. Moreover, to ease the adoption of MPC, industrial and academic MPC compilers have been developed to automatically translate programs describing arbitrary MPC procedures into low-level MPC executables. Compiling high-level descriptions into high-efficiency MPC executables is challenging: the compilation often involves converting high-level languages into several intermediate representations (IR), e.g., arithmetic or boolean circuits, optimizing the computation/communication cost, and picking proper MPC protocols (and underlying virtual machines) for a particular task and threat model. Various optimizations and heuristics are employed during the compilation procedure to improve the efficiency of the generated MPC executables. Despite the prosperous adoption of MPC compilers by industrial vendors and academia, a principled and systematic understanding of the correctness of MPC compilers does not yet exist. To fill this critical gap, this paper introduces MT-MPC, a metamorphic testing (MT) framework specifically designed for MPC compilers to effectively uncover erroneous compilations. Our approach proposes three metamorphic relations (MRs) tailored for MPC programs to mutate high-level MPC programs (compiler inputs). We then examine whether MPC compilers yield semantically equivalent MPC executables for the original and mutated MPC programs by comparing their execution results. Real-world MPC compilers exhibit a high level of engineering quality. Nevertheless, we detected 4,772 inputs that can result in erroneous compilations in three popular MPC compilers available on the market. While the discovered error-triggering inputs do not cause the MPC compilers to crash directly, they can lead to the generation of incorrect MPC executables, jeopardizing the underlying dependability of the computation. With substantial manual effort and help from the MPC compiler developers, we uncovered thirteen bugs in these MPC compilers by debugging them using the error-triggering inputs. Our proposed testing frameworks and findings can be used to guide developers in their efforts to improve MPC compilers.
25

Taylor, Matthew, Katherine Coons, Behnam Robatmili, Bertrand Maher, Doug Burger and Kathryn McKinley. "Evolving Compiler Heuristics to Manage Communication and Contention". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 5, 2010): 1690–93. http://dx.doi.org/10.1609/aaai.v24i1.7711.

Full text
Abstract
As computer architectures become increasingly complex, hand-tuning compiler heuristics becomes increasingly tedious and time consuming for compiler developers. This paper presents a case study that uses a genetic algorithm to learn a compiler policy. The target policy implicitly balances communication and contention among processing elements of the TRIPS processor, a physically realized prototype chip. We learn specialized policies for individual programs as well as general policies that work well across all programs. We also employ a two-stage method that first classifies the code being compiled based on salient characteristics, and then chooses a specialized policy based on that classification. This work is particularly interesting for the AI community because it 1) emphasizes the need for increased collaboration between AI researchers and researchers from other branches of computer science and 2) discusses a machine learning setup where training on the custom hardware requires weeks of training, rather than the more typical minutes or hours.
26

Mirzaev, Farrukh and Khurshid Nematov. "Performance of Free Software and Freeware in Telecommunication System". Journal La Multiapp 1, no. 5 (January 12, 2021): 20–24. http://dx.doi.org/10.37899/journallamultiapp.v1i5.276.

Full text
Abstract
This article discusses the performance of free software and freeware in telecommunication systems. Software is the term for data that is digitally formatted and stored, including computer programs, their documentation, and various information that can be read and written by a computer; in other words, it is the intangible part of a computer system. The term emphasizes the distinction from computer hardware. Building software requires a programming language, written by the programmer, which is then compiled with a compiler application so that it becomes code that can be recognized by the hardware machine.
27

Jain, Aayush, Huijia Lin and Amit Sahai. "Indistinguishability Obfuscation from Well-Founded Assumptions". Communications of the ACM 67, no. 3 (February 22, 2024): 97–105. http://dx.doi.org/10.1145/3611095.

Full text
Abstract
At least since the initial public proposal of public-key cryptography based on computational hardness conjectures, cryptographers have contemplated the possibility of a "one-way compiler" that translates computer programs into "incomprehensible" but equivalent forms. And yet, the search for such a "one-way compiler" remained elusive for decades. We examine a formalization of this concept with the notion of indistinguishability obfuscation (iO). Roughly speaking, iO requires that the compiled versions of any two equivalent programs (with the same size and running time) be indistinguishable to any efficient adversary. Finally, we show how to construct iO in such a way that we can prove the security of our iO scheme based on well-studied computational hardness conjectures in cryptography.
28

Lee, Jaejin, David A. Padua and Samuel P. Midkiff. "Basic compiler algorithms for parallel programs". ACM SIGPLAN Notices 34, no. 8 (August 1999): 1–12. http://dx.doi.org/10.1145/329366.301105.

Full text
29

Soni, Sumit and Pravesh Kumar. "Compiler Construction for a Network Identification". Journal of Advance Research in Electrical & Electronics Engineering (ISSN: 2208-2395) 1, no. 4 (April 30, 2014): 1317. http://dx.doi.org/10.53555/nneee.v1i4.240.

Full text
Abstract
Compiler construction is used here to improve network processing systems through advanced compiler design. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language. Advanced compiler design is useful in industrial applications such as telecommunication. Our approach increases the flexibility, productivity, and usability of the system, targeting it with minimal effort. We use bit-level addressing for the network processor and show how a compiler backend has been designed, improved, and made fully operational. A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language), often having a binary form known as object code. The most common reason for wanting to transform source code is to create an executable program.
30

HAN, JIA WEI. "ON THE POWER OF QUERY-INDEPENDENT COMPILATION". International Journal of Software Engineering and Knowledge Engineering 02, no. 02 (June 1992): 277–92. http://dx.doi.org/10.1142/s0218194092000142.

Full text
Abstract
Recursive query processing techniques can be classified into three categories: interpretation, query-dependent compilation and query-independent compilation. Query-dependent compilation compiles IDB (Intensional Data Base) programs based on possible query instantiations into query-specific programs, while query-independent compilation compiles IDB programs into query-independent and easily analyzable relational expressions. Previous studies show that linear recursions can be query-independently compiled into highly regular forms. This study analyzes the power of query-independent compilation and shows that (i) query-independent compilation captures more binding information than other methods for irregular linear recursions; (ii) the compilation provides succinct information for selection of efficient query processing methods; and (iii) it facilitates constraint-based processing of complex queries. Finally, query-independent compilation can be applied to more complex recursions as well.
31

Wells, Mark B., Margaret A. Hug and Rollo Silver. "Libraries as programs preserved within compiler continuations". ACM SIGPLAN Notices 20, no. 7 (July 1985): 83–91. http://dx.doi.org/10.1145/17919.806830.

Full text
32

Armstrong, Brian, Seon Wook Kim, Insung Park, Michael Voss and Rudolf Eigenmann. "Compiler-based tools for analyzing parallel programs". Parallel Computing 24, no. 3-4 (May 1998): 401–20. http://dx.doi.org/10.1016/s0167-8191(98)00019-2.

Full text
33

Marouf, Lzzam A., Omar W. Abdul-Wahab and Luma A. Kamel. "FUZZY CONTROLLERS FOR SINGLE POINT CONTROLLER-1 (SPC-1) SYSTEMS". Journal of Engineering 10, no. 1 (March 13, 2024): 91–101. http://dx.doi.org/10.31026/j.eng.2004.01.08.

Full text
Abstract
Advances in computer technology have introduced computers everywhere. One of the fields that computers have entered is process control and data acquisition systems. At the same time, fuzzy control is emerging as an alternative to conventional control for different systems. This paper is concerned with applying fuzzy control to a locally designed and manufactured process controller, designated the single point controller-1 (SPC-1). It is basically a flexible, general-purpose, stand-alone, single-point controller. The CPU section of the SPC-1 is the AT89C51 general-purpose microcontroller. The fuzzy control algorithms were implemented as programs executed by this microcontroller. These programs were written in C and translated to machine language by the Keil 8051 C compiler µVision V5.1.
34

He, Xuzhen. "Accelerated linear algebra compiler for computationally efficient numerical models: Success and potential area of improvement". PLOS ONE 18, no. 2 (February 24, 2023): e0282265. http://dx.doi.org/10.1371/journal.pone.0282265.

Full text
Abstract
The recent dramatic progress in machine learning is partially attributed to the availability of high-performant computers and development tools. The accelerated linear algebra (XLA) compiler is one such tool that automatically optimises array operations (mostly fusion to reduce memory operations) and compiles the optimised operations into high-performant programs specific to target computing platforms. Like machine-learning models, numerical models are often expressed in array operations, and thus their performance can be boosted by XLA. This study is the first of its kind to examine the efficiency of XLA for numerical models, and the efficiency is examined stringently by comparing its performance with that of optimal implementations. Two shared-memory computing platforms are examined: the CPU platform and the GPU platform. To obtain optimal implementations, the computing speed and its optimisation are rigorously studied by considering different workloads and the corresponding computer performance. Two simple equations are found to faithfully model the computing speed of numerical models with very few easily measurable parameters. Regarding operation optimisation within XLA, results show that models expressed in low-level operations (e.g., slice, concatenation, and arithmetic operations) are successfully fused, while high-level operations (e.g., convolution and roll) are not. Regarding compilation within XLA, results show that for the CPU platform of certain computers and certain simple numerical models on the GPU platform, XLA achieves high efficiency (>80%) for large problems and acceptable efficiency (10%–80%) for medium-size problems; the gap is from the overhead cost of Python. Unsatisfactory performance is found for the CPU platform of other computers (operations are compiled in a non-optimal way) and for high-dimensional complex models on the GPU platform, where each GPU thread in XLA handles 4 (single precision) or 2 (double precision) output elements, hoping to exploit the high-performance instructions that can read/write 4 or 2 floating-point numbers with one instruction. However, these instructions are rarely used in the generated code for complex models and performance is negatively affected. Therefore, flags should be added to control the compilation for these non-optimal scenarios.
35

Bowles, Andrew. "Trends in applying abstract interpretation". Knowledge Engineering Review 7, no. 2 (June 1992): 157–71. http://dx.doi.org/10.1017/s0269888900006275.

Full text
Abstract
Abstract interpretation is a principled approach to inferring properties of a program's execution by simulating that execution using an interpreter which computes over some abstraction of the program's usual, concrete domain, and which collects the information of interest during the execution. Abstract interpretation has been used as the basis of research in logic and functional programming, particularly in applications concerned with compiler optimizations. However, abstract interpretation has the potential to be used in other applications, such as debugging or verification of programs. In this paper we review the use of abstract interpretation both in compiler optimizations and in other applications, attempting to give a flavour of the kind of information it is possible to infer and some of the issues involved.
36

Liu, Amanda, Gilbert Bernstein, Adam Chlipala and Jonathan Ragan-Kelley. "A Verified Compiler for a Functional Tensor Language". Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 320–42. http://dx.doi.org/10.1145/3656390.

Full text
Abstract
Producing efficient array code is crucial in high-performance domains like image processing and machine learning. It requires the ability to control factors like compute intensity and locality by reordering computations into different stages and granularities with respect to where they are stored. However, traditional pure, functional tensor languages struggle to do so. In a previous publication, we introduced ATL as a pure, functional tensor language capable of systematically decoupling compute and storage order via a set of high-level combinators known as reshape operators. Reshape operators are a unique functional-programming construct since they manipulate storage location in the generated code by modifying the indices that appear on the left-hand sides of storage expressions. We present a formal correctness proof for an implementation of the compilation algorithm, marking the first verification of a lowering algorithm targeting imperative loop nests from a source functional language that enables separate control of compute and storage ordering. One of the core difficulties of this proof required properly formulating the complex invariants to ensure that these storage-index remappings were well-formed. Notably, this exercise revealed a soundness bug in the original published compilation algorithm regarding the truncation reshape operators. Our fix is a new type system that captures safety conditions that were previously implicit and enables us to prove compiler correctness for well-typed source programs. We evaluate this type system and compiler implementation on a range of common programs and optimizations, including but not limited to those previously studied to demonstrate performance comparable to established compilers like Halide.
37

DE LA BANDA, MARÍA GARCÍA, WARWICK HARVEY, KIM MARRIOTT, PETER J. STUCKEY and BART DEMOEN. "Checking modes of HAL programs". Theory and Practice of Logic Programming 5, no. 6 (October 31, 2005): 623–67. http://dx.doi.org/10.1017/s1471068404002327.

Full text
Abstract
Recent constraint logic programming (CLP) languages, such as HAL and Mercury, require type, mode and determinism declarations for predicates. This information allows the generation of efficient target code and the detection of many errors at compile-time. Unfortunately, mode checking in such languages is difficult. One of the main reasons is that, for each predicate mode declaration, the compiler is required to appropriately re-order literals in the predicate's definition. The task is further complicated by the need to handle complex instantiations (which interact with type declarations and higher-order predicates) and automatic initialization of solver variables. Here we define mode checking for strongly typed CLP languages which require reordering of clause body literals. In addition, we show how to handle a simple case of polymorphic modes by using the corresponding polymorphic types.
38

Leinenbach, Dirk and Elena Petrova. "Pervasive Compiler Verification – From Verified Programs to Verified Systems". Electronic Notes in Theoretical Computer Science 217 (July 2008): 23–40. http://dx.doi.org/10.1016/j.entcs.2008.06.040.

Full text
39

Michel, Jesse, Kevin Mu, Xuanda Yang, Sai Praveen Bangaru, Elias Rojas Collins, Gilbert Bernstein, Jonathan Ragan-Kelley, Michael Carbin and Tzu-Mao Li. "Distributions for Compositionally Differentiating Parametric Discontinuities". Proceedings of the ACM on Programming Languages 8, OOPSLA1 (April 29, 2024): 893–922. http://dx.doi.org/10.1145/3649843.

Full text
Abstract
Computations in physical simulation, computer graphics, and probabilistic inference often require the differentiation of discontinuous processes due to contact, occlusion, and changes at a point in time. Popular differentiable programming languages, such as PyTorch and JAX, ignore discontinuities during differentiation. This is incorrect for parametric discontinuities: conditionals containing at least one real-valued parameter and at least one variable of integration. We introduce Potto, the first differentiable first-order programming language to soundly differentiate parametric discontinuities. We present a denotational semantics for programs and program derivatives and show the two accord. We describe the implementation of Potto, which enables separate compilation of programs. Our prototype implementation overcomes previous compile-time bottlenecks, achieving an 88.1x and 441.2x speedup in compile time and a 2.5x and 7.9x speedup in runtime, respectively, on two increasingly large image stylization benchmarks. We showcase Potto by implementing a prototype differentiable renderer with separately compiled shaders.
40

Esraa, Esraa. "Predicting Loop Vectorization through Machine Learning Algorithms". Fusion: Practice and Applications 15, no. 2 (2024): 36–45. http://dx.doi.org/10.54216/fpa.150203.

Full text
Abstract
Automatic vectorization is often utilized to improve the speed of compute-intensive programs on current CPUs. However, there is enormous room for improvement in present compiler auto-vectorization capabilities. Executing optimized code on these resource-constrained platforms is essential for both energy and performance efficiency. While vectorization offers major performance improvements, conventional compiler auto-vectorization techniques often fail. This study investigated the potential of machine learning algorithms to enhance vectorization. The study proposes an ensemble learning method employing Random Forest (RF), Feedforward Neural Network (FNN), and Support Vector Machine (SVM) algorithms to estimate the effectiveness of vectorization over Test Suite for Vectorizing Compilers (TSVC) loops. Unlike existing methods that depend on static program features, we leverage dynamic features extracted from hardware counter events to build efficient and robust machine learning models. Our approach aims to improve the performance of e-business microcontroller platforms while identifying profitable vectorization opportunities. We assess our method using a benchmark group of 155 loops with two commonly used compilers (GCC and Clang). The results demonstrated high accuracy in predicting vectorization benefits in e-business applications.
41

Mehta, Sanyam and Pen-Chung Yew. "Improving compiler scalability: optimizing large programs at small price". ACM SIGPLAN Notices 50, no. 6 (August 7, 2015): 143–52. http://dx.doi.org/10.1145/2813885.2737954.

Full text
42

Black, Paul and Iqbal Gondal. "Cross-Compiler Bipartite Vulnerability Search". Electronics 10, no. 11 (June 7, 2021): 1356. http://dx.doi.org/10.3390/electronics10111356.

Full text
Abstract
Open-source libraries are widely used in software development, and the functions from these libraries may contain security vulnerabilities that can provide gateways for attackers. This paper provides a function similarity technique to identify vulnerable functions in compiled programs and proposes a new technique called Cross-Compiler Bipartite Vulnerability Search (CCBVS). CCBVS uses a novel training process and bipartite matching to filter SVM model false positives, improving the quality of similar-function identification. This research uses debug symbols in programs compiled from open-source software products to generate the ground truth. This automatic extraction of ground truth allows experimentation with a wide range of programs. The results presented in the paper show that an SVM model trained on a wide variety of programs compiled for Windows and Linux, x86 and Intel 64 architectures can be used to predict function similarity, and that the use of bipartite matching substantially improves function similarity matching performance.
43

Tian, Xinmin, Jay P. Hoeflinger, Grant Haab, Yen-Kuang Chen, Milind Girkar and Sanjiv Shah. "A compiler for exploiting nested parallelism in OpenMP programs". Parallel Computing 31, no. 10-12 (October 2005): 960–83. http://dx.doi.org/10.1016/j.parco.2005.03.007.

Full text
44

Yang, Yi, Ping Xiang, Jingfei Kong and Huiyang Zhou. "An optimizing compiler for GPGPU programs with input-data sharing". ACM SIGPLAN Notices 45, no. 5 (May 2010): 343–44. http://dx.doi.org/10.1145/1837853.1693505.

Full text
45

Huang, Lei, Deepak Eachempati, Marcus W. Hervey and Barbara Chapman. "Exploiting global optimizations for OpenMP programs in the OpenUH compiler". ACM SIGPLAN Notices 44, no. 4 (February 14, 2009): 289–90. http://dx.doi.org/10.1145/1594835.1504219.

Full text
46

Newton, Ryan R., Lewis D. Girod, Michael B. Craig, Samuel R. Madden and John Gregory Morrisett. "Design and evaluation of a compiler for embedded stream programs". ACM SIGPLAN Notices 43, no. 7 (June 27, 2008): 131–40. http://dx.doi.org/10.1145/1379023.1375675.

Full text
47

Marowka, Ami. "BSP2OMP: A Compiler for Translating BSP Programs to OpenMP". International Journal of Parallel, Emergent and Distributed Systems 24, no. 4 (August 2009): 293–310. http://dx.doi.org/10.1080/17445760902719927.

Full text
48

Ierotheou, C. S., S. P. Johnson, P. F. Leggett, M. Cross, E. W. Evans, H. Jin, M. Frumkin and J. Yan. "The Semi-Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit". Scientific Programming 9, no. 2-3 (2001): 163–73. http://dx.doi.org/10.1155/2001/327048.

Full text
Abstract
The shared-memory programming model can be an effective way to achieve parallelism on shared memory parallel computers. Historically, however, the lack of a programming standard using directives and the limited scalability have affected its take-up. Recent advances in hardware and software technologies have resulted in improvements to both the performance of parallel programs with compiler directives and the issue of portability with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs as well as the good performance achievable on up to 300 processors for hybrid message passing-directive parallelisations.
49

Campanoni, Simone, Kevin Brownell, Svilen Kanev, Timothy M. Jones, Gu-Yeon Wei and David Brooks. "Automatically accelerating non-numerical programs by architecture-compiler co-design". Communications of the ACM 60, no. 12 (November 27, 2017): 88–97. http://dx.doi.org/10.1145/3139461.

Full text
50

Lenglet, Romain, Thierry Coupaye and Eric Bruneton. "Composing transformations of compiled Java programs with Jabyce". Computer Science and Information Systems 1, no. 2 (2004): 83–125. http://dx.doi.org/10.2298/csis0402083l.

Full text
Abstract
This article introduces Jabyce, a software framework for the implementation and composition of transformations of compiled Java programs. The most distinguishing features of Jabyce are 1) its interaction orientation, i.e., it represents elements of transformed programs as interactions (method calls), which generally consumes less memory and CPU time than representing programs as graphs of objects; and 2) its component orientation, i.e., it allows for the design and composition of transformers as software components based on the Fractal component model. This latter point is strongly connected to infrastructural and architectural issues, and software engineering aspects such as composing, scaling, maintaining and evolving transformers. Jabyce is compared with other existing compiled Java program transformation systems using an extension of a previous well-known categorization of program transformation systems.