Selection of scientific literature on the topic "Compilers (Computer programs)"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Compilers (Computer programs)".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.

Journal articles on the topic "Compilers (Computer programs)"

1

Feng, Jing Ge, Ye Ping He, and Qiu Ming Tao. „Evaluation of Compilers’ Capability of Automatic Vectorization Based on Source Code Analysis“. Scientific Programming 2021 (30.11.2021): 1–15. http://dx.doi.org/10.1155/2021/3264624.

Annotation:
Automatic vectorization is an important technique for compilers to improve the parallelism of programs. With the widespread usage of SIMD (Single Instruction Multiple Data) extensions in modern processors, automatic vectorization has become a hot topic in compiler research. Accurately evaluating the effectiveness of automatic vectorization in typical compilers is quite valuable for compiler optimization and design. This paper evaluates the effectiveness of automatic vectorization, analyzes its limitations and their main causes, and improves the automatic vectorization technology. The paper first classifies the programs by two main factors: program characteristics and transformation methods. Then, it evaluates the effectiveness of automatic vectorization in three well-known compilers (GCC, LLVM, and ICC, including multiple versions from the last five years) through the TSVC (Test Suite for Vectorizing Compilers) benchmark. Furthermore, the paper analyzes the limitations of automatic vectorization based on source code analysis and introduces the differences between academic research and engineering practice in automatic vectorization and their main causes. Finally, it gives some suggestions as to how to improve automatic vectorization capability.
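The loop patterns that benchmarks like TSVC exercise can be illustrated with a short, hedged sketch in C (illustrative only, not taken from the paper; the function names and the GCC flag mentioned in the comments are assumptions to be adapted to the compiler at hand):

    /* vectorization_sketch.c -- illustrative only; not from the paper.
     * A loop like saxpy() is typically auto-vectorized by GCC/Clang/ICC at
     * -O2/-O3 (e.g. gcc -O3 -fopt-info-vec reports the vectorized loop),
     * while the recurrence in prefix_sum() usually is not, because each
     * iteration depends on the previous one. */
    #include <stddef.h>

    void saxpy(size_t n, float a, const float *x, float *y) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];      /* independent iterations: vectorizable */
    }

    void prefix_sum(size_t n, float *x) {
        for (size_t i = 1; i < n; i++)
            x[i] = x[i] + x[i - 1];      /* loop-carried dependence: usually not vectorized */
    }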
2

Susungi, Adilla, and Claude Tadonki. „Intermediate Representations for Explicitly Parallel Programs“. ACM Computing Surveys 54, no. 5 (June 2021): 1–24. http://dx.doi.org/10.1145/3452299.

Annotation:
While compilers generally support parallel programming languages and APIs, their internal program representations are mostly designed from the sequential programs standpoint (exceptions include source-to-source parallel compilers, for instance). This makes the integration of compilation techniques dedicated to parallel programs more challenging. In addition, parallelism has various levels and different targets, each of them with specific characteristics and constraints. With the advent of multi-core processors and general purpose accelerators, parallel computing is now a common and pervasive consideration. Thus, software support to parallel programming activities is essential to make this technical transition more realistic and beneficial. The case of compilers is fundamental as they deal with (parallel) programs at a structural level, thus the need for intermediate representations. This article surveys and discusses attempts to provide intermediate representations for the proper support of explicitly parallel programs. We highlight the gap between available contributions and their concrete implementation in compilers and then exhibit possible future research directions.
3

AlFayez, Fayez. „Algorithms for Pre-Compiling Programs by Parallel Compilers“. Computer Systems Science and Engineering 44, no. 3 (2023): 2165–76. http://dx.doi.org/10.32604/csse.2023.026238.

4

Dold, Axel, Friedrich von Henke, and Wolfgang Goerigk. „A Completely Verified Realistic Bootstrap Compiler“. International Journal of Foundations of Computer Science 14, no. 04 (August 2003): 659–80. http://dx.doi.org/10.1142/s0129054103001947.

Annotation:
This paper reports on a large verification effort in constructing an initial fully trusted bootstrap compiler executable for a realistic system programming language and real target processor. The construction and verification process comprises three tasks: the verification of the compiling specification (a relation between abstract source and target programs) with respect to the language semantics and a realistic correctness criterion. This proof has been completely mechanized using the PVS verification system and is one of the largest case-studies in formal verification we are aware of. Second, the implementation of the specification in the high-level source language following a transformational approach, and finally, the implementation and verification of a binary executable written in the compiler's target language. For the latter task, a realistic technique has been developed, which is based on rigorous a-posteriori syntactic code inspection and which guarantees, for the first time, trusted execution of generated machine programs. The context of this work is the joint German research effort Verifix aiming at developing methods for the construction of correct compilers for realistic source languages and real target processors.
5

Ciric, Miroslav, and Svetozar Rancic. „Parsing in different languages“. Facta universitatis - series: Electronics and Energetics 18, no. 2 (2005): 299–307. http://dx.doi.org/10.2298/fuee0502299c.

Annotation:
A compiler is a translator that accepts as input a formatted source file or files, and produces as output a file that may be run directly on a computer. Given the same ANSI C++ compliant input file, two different ANSI C++ compliant compilers running on the same operating system produce two different executable programs that should execute in exactly the same way. To some degree, this is achieved by the standardization of the C++ language, but it is also possible because computer programming languages like C++ can be compiled using reliable technologies with long traditions and well-understood characteristics. LALR(k), as a practical version of LR, is such a reliable technology for parsing. The traditional LALR(1) tool YACC has proved its value during years of successful application. Nowadays there are a few commercial and noncommercial alternatives that are very interesting and promising. This paper examines some of them, including their ability to produce parsers in different programming languages.
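For readers who have not used an LALR(1) tool, the sketch below is a minimal Yacc/Bison-style grammar for integer expressions with C semantic actions. It is a textbook-style illustration, not an example from the paper; the hand-written yylex and the build command in the comment are assumptions to keep it self-contained.

    /* expr.y -- minimal Yacc/Bison sketch (illustrative; not from the paper).
     * Build (with GNU Bison): bison -y expr.y && cc y.tab.c -o expr          */
    %{
    #include <stdio.h>
    #include <stdlib.h>
    int yylex(void);
    void yyerror(const char *msg) { fprintf(stderr, "error: %s\n", msg); }
    %}
    %token NUMBER
    %left '+' '-'
    %left '*' '/'
    %%
    input : /* empty */
          | input line
          ;
    line  : expr '\n'        { printf("= %d\n", $1); }
          ;
    expr  : NUMBER
          | expr '+' expr    { $$ = $1 + $3; }
          | expr '-' expr    { $$ = $1 - $3; }
          | expr '*' expr    { $$ = $1 * $3; }
          | expr '/' expr    { $$ = $1 / $3; }
          | '(' expr ')'     { $$ = $2; }
          ;
    %%
    /* A tiny hand-written lexer so the example is self-contained. */
    int yylex(void) {
        int c = getchar();
        while (c == ' ' || c == '\t') c = getchar();
        if (c == EOF) return 0;
        if (c >= '0' && c <= '9') {
            ungetc(c, stdin);
            if (scanf("%d", &yylval) != 1) return 0;
            return NUMBER;
        }
        return c;   /* single-character tokens: + - * / ( ) and newline */
    }
    int main(void) { return yyparse(); }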
6

Steele, James K., and Ronald R. Biederman. „Powder Diffraction Pattern Simulation and Analysis“. Advances in X-ray Analysis 37 (1993): 101–7. http://dx.doi.org/10.1154/s0376030800015561.

Annotation:
The graphics capability and speed available in modern personal computers have encouraged an increase in the use of a direct pattern comparison approach to the analysis of x-ray and electron diffraction patterns. Several researchers over the past 30 years have presented programs and algorithms which calculate and display powder patterns for x-ray diffraction. These programs originally required a mainframe computer which was expensive and generally not available to all researchers. With the recent advances in the speed of personal computers, language compilers, and high-resolution graphics, especially within the past 5 years, real-time calculation and display of calculated patterns is becoming widely available. The power of this approach will be demonstrated through the use of an IBM-compatible personal computer code developed by the authors.
7

Lin, Jenny, Vidya Narayanan, Yuka Ikarashi, Jonathan Ragan-Kelley, Gilbert Bernstein, and James McCann. „Semantics and Scheduling for Machine Knitting Compilers“. ACM Transactions on Graphics 42, no. 4 (26.07.2023): 1–26. http://dx.doi.org/10.1145/3592449.

Annotation:
Machine knitting is a well-established fabrication technique for complex soft objects, and both companies and researchers have developed tools for generating machine knitting patterns. However, existing representations for machine knitted objects are incomplete (do not cover the complete domain of machine knittable objects) or overly specific (do not account for symmetries and equivalences among knitting instruction sequences). This makes it difficult to define correctness in machine knitting, let alone verify the correctness of a given program or program transformation. The major contribution of this work is a formal semantics for knitout, a low-level Domain Specific Language for knitting machines. We accomplish this by using what we call the fenced tangle , which extends concepts from knot theory to allow for a mathematical definition of knitting program equivalence that matches the intuition behind knit objects. Finally, using this formal representation, we prove the correctness of a sequence of rewrite rules; and demonstrate how these rewrite rules can form the foundation for higher-level tasks such as compiling a program for a specific machine and optimizing for time/reliability, all while provably generating the same knit object under our proposed semantics. By establishing formal definitions of correctness, this work provides a strong foundation for compiling and optimizing knit programs.
8

Burgin, Mark. „Triadic Automata and Machines as Information Transformers“. Information 11, no. 2 (13.02.2020): 102. http://dx.doi.org/10.3390/info11020102.

Annotation:
Algorithms and abstract automata (abstract machines) are used to describe, model, explore and improve computers, cell phones, computer networks, such as the Internet, and processes in them. Traditional models of information processing systems—abstract automata—are aimed at performing transformations of data. These transformations are performed by their hardware (abstract devices) and controlled by their software (programs)—both of which stay unchanged during the whole computational process. However, in physical computers, their software is also changed by special tools such as interpreters, compilers, optimizers and translators. In addition, people change the hardware of their computers by extending the external memory. Moreover, the hardware of computer networks is incessantly changing—new computers and other devices are added while other computers and other devices are disconnected. To better represent these peculiarities of computers and computer networks, we introduce and study a more complete model of computations, which is called a triadic automaton or machine. In contrast to traditional models of computations, triadic automata (machines) perform computational processes transforming not only data but also the hardware and programs which control data transformation. In addition, we further develop a taxonomy of classes of automata and machines, as well as of individual automata and machines, according to the information they produce.
9

Rushinek, Avi, and Sara F. Rushinek. „Operating systems, compilers, assemblers and application programs: audit trails of user satisfaction“. Microprocessors and Microsystems 9, no. 5 (June 1985): 241–49. http://dx.doi.org/10.1016/0141-9331(85)90272-8.

10

LaRose, Ryan. „Overview and Comparison of Gate Level Quantum Software Platforms“. Quantum 3 (25.03.2019): 130. http://dx.doi.org/10.22331/q-2019-03-25-130.

Annotation:
Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms - Forest (pyQuil), Qiskit, ProjectQ, and the Quantum Developer Kit (Q#) - that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.

Dissertations on the topic "Compilers (Computer programs)"

1

Biglari-Abhari, Morteza. „Performance improvement through predicated execution in VLIW machines“. Title page, contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phb593.pdf.

2

Deighan, John. „VPI PROLOG compiler project report“. Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-01262010-020109/.

3

Park, Eun Jung. „Methodology of dynamic compiler option selection based on static program analysis implementation and evaluation /“. Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 74 p, 2007. http://proquest.umi.com/pqdweb?did=1407501141&sid=12&Fmt=2&clientId=8331&RQT=309&VName=PQD.

4

Walker, Kenneth William. „The implementation of an optimizing compiler for Icon“. Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185599.

Annotation:
There are many optimizations that can be applied while translating Icon programs. These optimizations and the analyses needed to apply them are of interest for two reasons. First, Icon's unique combination of characteristics requires developing new techniques for implementing them. Second, these optimizations are used in a variety of languages, and Icon can be used as a medium for extending the state of the art. Many of these optimizations require detailed control of the generated code. Previous production implementations of the Icon programming language have been interpreters. The virtual machine code of an interpreter is seldom flexible enough to accommodate these optimizations, and modifying the virtual machine to add the flexibility destroys the simplicity that justified using an interpreter in the first place. These optimizations can only reasonably be implemented in a compiler. In order to explore these optimizations for Icon programs, a compiler was developed. This dissertation describes the compiler and the optimizations it employs. It also describes a run-time system designed to support the analyses and optimizations. Icon variables are untyped. The compiler contains a type inferencing system that determines what values variables and expressions may take on during program execution. This system is effective in the presence of values with pointer semantics and of assignments to components of data structures. The compiler stores intermediate results in temporary variables rather than on a stack. A simple and efficient algorithm was developed for determining the lifetimes of intermediate results in the presence of goal-directed evaluation. This allows an efficient allocation of temporary variables to intermediate results. The compiler uses information from type inferencing and liveness analysis to simplify generated code. Performance measurements on a variety of Icon programs show these optimizations to be effective.
5

Calnan, Paul W. „EXTRACT, Extensible Transformation and Compiler Technology“. Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0429103-152947.

6

Cardone, Richard Joseph. „Language and compiler support for mixin programming“. Access restricted to users with UT Austin EID Full text (PDF) from UMI/Dissertation Abstracts International, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077428.

7

Jones, Philip E. C. „Common subexpression detection in dataflow programs /“. Title page, contents and summary only, 1989. http://web4.library.adelaide.edu.au/theses/09SM/09smj78.pdf.

8

Müller-Olm, Markus. „Modular compiler verification : a refinement algebraic approach advocating stepwise abstraction /“. Berlin [u.a.] : Springer, 1997. http://www.loc.gov/catdir/enhancements/fy0815/97013428-d.html.

9

Lapinskii, Viktor. „Algorithms for compiler-assisted design space exploration of clustered VLIW ASIP datapaths /“. Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008376.

10

Junaidu, Sahalu B. „A parallel functional language compiler for message-passing multicomputers“. Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13450.

Annotation:
The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised, and it is the largest successfully parallelised Haskell program, having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler is the best in programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.

Books on the topic "Compilers (Computer programs)"

1

Cytron, Ron K., and Richard J. LeBlanc, eds. Crafting a compiler. Boston: Addison-Wesley, 2010.

2

Safonov, V. O. Trustworthy compilers. Hoboken, N.J: John Wiley & Sons, 2010.

3

Reis, Anthony J. Dos. Compiler construction using Java, JavaCC, and Yacc. Hoboken, N.J: Wiley-IEEE Computer Society, 2011.

4

Kiong, Derek Beng Kee. Compiler technology: Tools, translators, and language implementation. Boston: Kluwer Academic Publishers, 1997.

5

Kaplan, Randy M. Constructing language processors for little languages. New York: Wiley, 1994.

6

Wilhelm, R. Compiler design. Wokingham, England: Addison-Wesley Publishing Co., 1995.

7

Tofte, Mads. Compiler generators: What they can do, what they might do, and what they will probably never do. Berlin: Springer-Verlag, 1990.

8

CC '92 (1992 Paderborn, Germany). Compiler construction: 4th International Conference CC'92, Paderborn, FRG, October 5-7, 1992 : proceedings. Berlin: Springer-Verlag, 1992.

9

Meduna, Alexander. Elements of compiler design. Boca Raton, FL: Auerbach Publications, 2008.

10

CC 2003 (2003 Warsaw, Poland). Compiler construction: 12th international conference, CC 2003 held as part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2003, Warsaw, Poland, April 7-11, 2003 : proceedings. Berlin: Springer, 2003.


Book chapters on the topic "Compilers (Computer programs)"

1

Klijnsma, Tosca, and Tim Steenvoorden. „Semantic Equivalence of Task-Oriented Programs in TopHat“. In Lecture Notes in Computer Science, 100–125. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21314-4_6.

Annotation:
Task-oriented programming (top) is a new programming paradigm for specifying multi-user workflows. To reason formally about top programs, a formal language called TopHat has been designed, together with its operational semantics. For proving properties about task-oriented programs, it is desirable to know when two TopHat-programs are semantically equivalent. This paper aims to answer this question. We show that a task can be in either one of five conditions, and for every two tasks in the same condition, we define what it means for them to be semantically equivalent. Using this definition, we study a number of transformation laws for TopHat-programs, which can be used by developers and compilers to optimise top-programs. We show that the "Image missing" operation on types in TopHat is a functor but cannot be a monad. We support our findings with proofs formalised in the dependently typed programming language Idris.
2

Sakagami, Hitoshi. „Three-Dimensional Fluid Code with XcalableMP“. In XcalableMP PGAS Programming Language, 165–79. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7683-6_6.

Annotation:
In order to turn parallel computers into generally convenient tools for computational scientists, a high-level, easy-to-use, and portable parallel programming paradigm is mandatory. XcalableMP, which is proposed by the XcalableMP Specification Working Group, is a directive-based language extension for Fortran and C to easily describe parallelization in programs for distributed-memory parallel computers. The Omni XcalableMP compiler, which is provided as a reference XcalableMP compiler, is currently implemented as a source-to-source translator. It converts XcalableMP programs to standard MPI programs, which can be easily compiled by the native Fortran compiler and executed on most parallel computers. A three-dimensional Eulerian fluid code written in Fortran is parallelized with XcalableMP using two different programming models with the ordinary domain decomposition method, and its performance is measured on the K computer. Programs converted by the Omni XcalableMP compiler prevent native Fortran compiler optimizations and show lower performance than hand-coded MPI programs. Finally, almost the same performance is obtained by using specific compiler options of the native Fortran compiler in the case of the global-view programming model, but the performance degradation is not improved by specifying any native compiler options when the code is parallelized with the local-view programming model.
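As a rough illustration of the global-view model described above, the following C sketch shows how XcalableMP-style directives distribute an array and a loop across nodes. The directive spellings are written from the XcalableMP specification's global-view examples and should be checked against the specification; the node count and array size are arbitrary assumptions.

    /* xmp_sketch.c -- rough global-view XcalableMP illustration (assumptions:
     * 4 nodes, array size N; verify directive spellings against the
     * XcalableMP specification). Compile with an XMP compiler such as Omni
     * XcalableMP; a plain C compiler will simply ignore the pragmas.        */
    #include <stdio.h>
    #define N 1000

    #pragma xmp nodes p(4)
    #pragma xmp template t(0:N-1)
    #pragma xmp distribute t(block) onto p

    double a[N];
    #pragma xmp align a[i] with t(i)

    int main(void) {
        /* Each node initializes and updates only its own block of a[]. */
    #pragma xmp loop on t(i)
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * i;

        double sum = 0.0;
    #pragma xmp loop on t(i) reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }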
3

Zangerl, Peter, Peter Thoman, and Thomas Fahringer. „Compiler Generated Progress Estimation for OpenMP Programs“. In Lecture Notes in Computer Science, 107–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25636-4_9.

4

Craig, Stephen-John, and Michael Leuschel. „A Compiler Generator for Constraint Logic Programs“. In Lecture Notes in Computer Science, 148–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-39866-0_17.

5

Chen, Christopher K., Margo I. Seltzer, and Mark R. Greenstreet. „Shellac: A Compiler Synthesizer for Concurrent Programs“. In Lecture Notes in Computer Science, 33–51. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25803-9_3.

6

Kiefer, Moritz, Vladimir Klebanov, and Mattias Ulbrich. „Relational Program Reasoning Using Compiler IR“. In Lecture Notes in Computer Science, 149–65. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48869-1_12.

7

Xia, Songtao, and James Hook. „Certifying Temporal Properties for Compiled C Programs“. In Lecture Notes in Computer Science, 161–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24622-0_15.

8

Vitek, Jan, R. Nigel Horspool, and James S. Uhl. „Compile-time analysis of object-oriented programs“. In Lecture Notes in Computer Science, 236–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55984-1_22.

9

Subramanian, Ram, and Santosh Pande. „Efficient program partitioning based on compiler controlled communication“. In Lecture Notes in Computer Science, 4–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/bfb0097884.

10

Scherer, Alex, Thomas Gross, and Willy Zwaenepoel. „Adaptive Parallelism for OpenMP Task Parallel Programs“. In Languages, Compilers, and Run-Time Systems for Scalable Computers, 113–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-40889-4_9.


Conference papers on the topic "Compilers (Computer programs)"

1

Bagly, Anton Pavlovich, Nikita Maksimovich Krivosheev, and Boris Yakovlevich Steinberg. „Automation of programs parallelization with optimization of data transfers“. In 24th Scientific Conference “Scientific Services & Internet – 2022”. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/abrau-2022-17.

Annotation:
This paper is concerned with the development of a parallelizing compiler for computer systems with distributed memory. Industrial parallelizing compilers create programs for shared-memory systems. Transforming sequential programs for systems with distributed memory requires the development of new functionality. This is becoming topical for future computer systems with hundreds of cores or more.
2

Steinberg, Boris Yakovlevich. „On the creation of parallelizing compilers for distributed memory computing systems“. In 25th Scientific Conference “Scientific Services & Internet – 2023”. Keldysh Institute of Applied Mathematics, 2023. http://dx.doi.org/10.20948/abrau-2023-23.

Annotation:
This work describes the conditions for creating optimizing parallelizing compilers for computing systems with distributed memory. It considers optimizing program transformations that are specific to distributed memory, as well as program transformations that are needed for distributed-memory computing systems but can also improve compilers for shared-memory systems. Arguments are given in favor of building parallelizing compilers for distributed-memory computing systems on a high-level internal representation and with a high-level output language.
3

Greynolds, Alan W. „Squeezing large fortran programs into personal computers“. In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.fm2.

Annotation:
Even though it is one of the oldest programming languages around, fortran is still the language of choice in much of the scientific community. In fact, its popularity has increased with the recent availability of first class compilers and fast floating point support for personal computers. By sticking closely to the fortran-77 standard and making use of a device-independent graphics protocol, it is relatively easy to write sophisticated programs that can be executed on anything from large supercomputers to desk-top microcomputers. However, when dealing with large programs (>10,000 lines of code) with large data arrays (greater than a megabyte), limitations in personal computer hardware and software need to be addressed. A new optical analysis program of this size is currently running, with almost no modification in source code, on not only DEC VAX minicomputers but also personal computers based on both the Intel and Motorola microprocessors.
4

Sun, Yu, and Wei Zhang. „On-Line Trace Based Automatic Parallelization of Java Programs on Multicore Platforms“. In 2011 INTERACT-15: 15th Workshop on Interaction between Compilers and Computer Architectures. IEEE, 2011. http://dx.doi.org/10.1109/interact.2011.11.

5

Dimitrov, Dimitar, and Ivaylo Penev. „DESIGN OF A TRAINING COMPILER FOR INCREASING THE EFFICIENCY OF LANGUAGE PROCESSORS LEARNING“. In eLSE 2021. ADL Romania, 2021. http://dx.doi.org/10.12753/2066-026x-21-077.

Annotation:
The paper presents the design of a training compiler developed for the purposes of education in compilers and language processors in computer science courses. The presented compiler has the following main advantages compared to known training compilers used in various universities: a simplified modular structure and the building of an explicit abstract syntax tree of the input program. The modules in the compiler structure are a lexical analyzer, a syntactic analyzer, a semantic analyzer, and a code generator. This separation allows students to effectively study the main stages of compilation: lexical analysis, parsing, semantic analysis, and code generation. Building and visualizing an explicit abstract syntax tree helps students understand the translation of the program in the compiler's front end and make the transition to the compiler's back end. The compiler translates a program written in a high-level language into virtual machine code. An interpreter to execute the generated virtual machine code is also presented. The presented design is compared to other known training compilers used in various university courses. The input language is procedurally oriented and is a subset of the C and Java languages, which makes it easier for students to use. The language has enough resources to solve many practical problems. The input program for the compiler is a sequence of definitions of variables and functions. The language of the training compiler is strongly typed. Variables, constants, and expressions are bound to specific types. Input-output operations require arguments of a certain type; arithmetic-logical operations are defined for specific argument types and a specific type of returned result. At the end of the paper, the results of the training compiler's work in translating a sample input program into code for a virtual machine are presented. The results demonstrate the output of each compiler module: a token stream, an abstract syntax tree, and a set of virtual machine instructions. The structure of the presented training compiler can be used for different input languages in training on compilers and language processors.
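The front-end pipeline described above (lexical analysis producing a token stream that a parser then turns into an abstract syntax tree) can be made concrete with a small sketch in C; the token set and names below are hypothetical illustrations, not the training compiler's actual interfaces.

    /* lexer_sketch.c -- illustrative front-end fragment (hypothetical token
     * set; not the training compiler's real interface). It turns an input
     * string into the token stream a parser would consume to build an AST. */
    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { TOK_NUMBER, TOK_IDENT, TOK_PLUS, TOK_ASSIGN, TOK_SEMI, TOK_EOF } TokenKind;

    typedef struct {
        TokenKind kind;
        char text[32];              /* lexeme, e.g. "42" or "x" */
    } Token;

    /* Scan one token starting at *src, advancing the cursor. */
    static Token next_token(const char **src) {
        const char *s = *src;
        while (isspace((unsigned char)*s)) s++;
        Token tok = { TOK_EOF, "" };
        size_t n = 0;
        if (*s == '\0') {
            tok.kind = TOK_EOF;
        } else if (isdigit((unsigned char)*s)) {
            tok.kind = TOK_NUMBER;
            while (isdigit((unsigned char)*s) && n < sizeof tok.text - 1) tok.text[n++] = *s++;
        } else if (isalpha((unsigned char)*s)) {
            tok.kind = TOK_IDENT;
            while (isalnum((unsigned char)*s) && n < sizeof tok.text - 1) tok.text[n++] = *s++;
        } else {
            switch (*s) {
            case '+': tok.kind = TOK_PLUS;   break;
            case '=': tok.kind = TOK_ASSIGN; break;
            case ';': tok.kind = TOK_SEMI;   break;
            default:  fprintf(stderr, "unexpected '%c'\n", *s); exit(1);
            }
            tok.text[n++] = *s++;
        }
        tok.text[n] = '\0';
        *src = s;
        return tok;
    }

    int main(void) {
        const char *program = "x = 40 + 2;";
        for (Token t = next_token(&program); t.kind != TOK_EOF; t = next_token(&program))
            printf("token kind=%d text=\"%s\"\n", t.kind, t.text);
        return 0;
    }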
6

Ward, A. C., and W. P. Seering. „Quantitative Inference in a Mechanical Design “Compiler”“. In ASME 1989 Design Technical Conferences. American Society of Mechanical Engineers, 1989. http://dx.doi.org/10.1115/detc1989-0011.

Annotation:
This paper introduces the theory underlying a computer program that takes as input a schematic of a mechanical or hydraulic power transmission system, plus specifications and a utility function, and returns catalog numbers from predefined catalogs for the optimal selection of components implementing the design. Unlike programs for designing single components or systems, this program provides the designer with a high level "language" in which to compose new designs. It then performs much of the detailed design process. The process of "compilation", or transformation from a high to a low level description, is based on a formalization of quantitative inferences about hierarchically organized sets of artifacts and operating conditions. This allows design compilation without the exhaustive enumeration of alternatives. The paper introduces the formalism, illustrating its use with examples. It then outlines some differences from previous work, and summarizes early tests and conclusions.
7

Beckmann, Leo H. J. F. „A small-computer program for optical design and analysis, written in 'C'“. In International Lens Design. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/ild.1990.ltuc5.

Annotation:
A modular computer program for the design and analysis of optical systems on a small computer has been developed under the name "Opdesign". While the origins date back into the late seventies and the use of BASIC on, successively, programmable calculators and 8-bit home computers, the current program is written in 'C' and runs on different types of personal computers. Concurrently, the speed of calculations, in particular a skew-ray tracing, went up from one ray surface per second to 30..60 ray surfaces per second depending on the compiler and the hardware. With a program size (executable code) in excess of 250 kByte, the package covers system data (input, general properties), aberration calculation (third order-, wavefront-, exact ray aberrations), a number of tools for design manipulations, and automatic design improvement (by damped least squares method).
8

Andreyeva, Tatiana Anatolievna, and Lidia Vasiljevna Gorodnyaya. „Can the Contribution of Software Decisions to Program Performance Be Measured?“ In 25th Scientific Conference “Scientific Services & Internet – 2023”. Keldysh Institute of Applied Mathematics, 2023. http://dx.doi.org/10.20948/abrau-2023-2.

Annotation:
The article concerns the measurement of the effect that programming solutions have on programming productivity and program performance, and their connection with educational programming and program improvement in practice. When program effectiveness is measured directly, what is really measured is the productivity of a complex consisting of a computer, a compiler, and a program, rather than the productivity of the programming solutions. These measurements do not fully reflect the contribution of programming solutions. Functional models, however, can provide a metric scale capable of separating features of programming languages and systems from features of programs and programming solutions. One possible approach is to measure the productivity of programs and programming solutions on the basis of normalized functional forms and thereby reduce the dependence of measurements on hardware and programming systems.
9

Midorikawa, Edson Toshimi, and Liria Matsumoto Sato. „COMMUNION: towards a cooperative strategy for high-performance memory management“. In Simpósio Brasileiro de Arquitetura de Computadores e Processamento de Alto Desempenho. Sociedade Brasileira de Computação, 1997. http://dx.doi.org/10.5753/sbac-pad.1997.22645.

Annotation:
The memory system is the most critical component of modern high-performance computer systems, because of its growing inability to keep up with processor requests. Technological trends have produced a large and growing gap between CPU speeds and DRAM speeds. Much research has focused on this memory system problem, including program optimization techniques, data locality enhancement, hardware and software prefetching, decoupled architectures, multithreading, and speculative loads and execution. These techniques have achieved relative success, but each focuses on only one component of the hardware or software system. We present here a new strategy for memory management in high-performance computer systems, named COMMUNION. The basic idea behind this strategy is cooperation. We introduce some possibilities of interaction among the system programs that are responsible for generating and executing application programs. We investigate two specific interactions: between the compiler and the operating system, and among the compiling system components. The experimental results show that it is possible to achieve improvements of about 10 times in execution time and about 5 times in memory demand. In the interaction between compiler and operating system, named Compiler-Aided Page Replacement (CAPR), we achieved a reduction of about 10% in the space-time product, with an increase of only 0.5% in total execution time. All these results show that it is possible to manage main memory with better efficiency than what is provided by current systems.
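COMMUNION's compiler-OS cooperation is a research prototype, but a present-day analogue of the same idea, in which the program (or a compiler acting on its behalf) hands the operating system an access-pattern hint that page replacement and read-ahead can exploit, can be sketched with the POSIX advice interface. This illustrates the general hinting mechanism only, not the paper's CAPR implementation; the file path handling and error checks are kept minimal on purpose.

    /* madvise_sketch.c -- not COMMUNION's CAPR mechanism; an analogous hint
     * interface available today. The program maps a file and tells the OS it
     * will be read once, sequentially, so the kernel may read ahead and evict
     * pages behind the cursor. POSIX-specific. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* The access-pattern hint: read once, front to back. */
        posix_madvise(data, (size_t)st.st_size, POSIX_MADV_SEQUENTIAL);

        long long checksum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            checksum += data[i];
        printf("checksum = %lld\n", checksum);

        munmap(data, (size_t)st.st_size);
        close(fd);
        return 0;
    }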
10

„Program Committee“. In 9th Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT'05). IEEE, 2005. http://dx.doi.org/10.1109/interact.2005.12.
