Ready-made bibliography on the topic "Compilers (Computer programs)"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Compilers (Computer programs)".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if the relevant parameters are available in the work's metadata.

Journal articles on the topic "Compilers (Computer programs)"

1. Feng, Jing Ge, Ye Ping He, and Qiu Ming Tao. "Evaluation of Compilers’ Capability of Automatic Vectorization Based on Source Code Analysis". Scientific Programming 2021 (November 30, 2021): 1–15. http://dx.doi.org/10.1155/2021/3264624.

Abstract:
Automatic vectorization is an important technique for compilers to improve the parallelism of programs. With the widespread usage of SIMD (Single Instruction Multiple Data) extensions in modern processors, automatic vectorization has become a hot topic in the research of compiler techniques. Accurately evaluating the effectiveness of automatic vectorization in typical compilers is quite valuable for compiler optimization and design. This paper evaluates the effectiveness of automatic vectorization, analyzes the limitation of automatic vectorization and the main causes, and improves the automatic vectorization technology. This paper firstly classifies the programs by two main factors: program characteristics and transformation methods. Then, it evaluates the effectiveness of automatic vectorization in three well-known compilers (GCC, LLVM, and ICC, including their multiple versions in recent 5 years) through the TSVC (Test Suite for Vectorizing Compilers) benchmark. Furthermore, this paper analyzes the limitation of automatic vectorization based on source code analysis, and introduces the differences between academic research and engineering practice in automatic vectorization and the main causes. Finally, it gives some suggestions as to how to improve automatic vectorization capability.
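
A minimal C illustration of the kind of loop pattern such evaluations exercise (not drawn from the paper or from TSVC; the compiler behaviour described in the comments is typical, not guaranteed): the first loop is usually auto-vectorized because its iterations are independent, while the second has a loop-carried dependence that normally blocks vectorization.

    #include <stddef.h>

    /* Usually auto-vectorized by optimizing compilers (e.g., GCC/Clang at -O3,
       ICC at -O2): every iteration is independent, so it maps onto SIMD lanes. */
    void axpy(float *restrict y, const float *restrict x, float a, size_t n) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* Usually NOT vectorized as written: y[i] depends on y[i - 1], a loop-carried
       dependence of the kind TSVC-style suites use to probe compiler limits. */
    void running_sum(float *y, size_t n) {
        for (size_t i = 1; i < n; i++)
            y[i] = y[i] + y[i - 1];
    }

Vectorization reports (for example GCC's -fopt-info-vec or Clang's -Rpass=loop-vectorize) can be used to check which of these loops a particular compiler version actually vectorizes.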

2. Susungi, Adilla, and Claude Tadonki. "Intermediate Representations for Explicitly Parallel Programs". ACM Computing Surveys 54, no. 5 (June 2021): 1–24. http://dx.doi.org/10.1145/3452299.

Abstract:
While compilers generally support parallel programming languages and APIs, their internal program representations are mostly designed from the sequential programs standpoint (exceptions include source-to-source parallel compilers, for instance). This makes the integration of compilation techniques dedicated to parallel programs more challenging. In addition, parallelism has various levels and different targets, each of them with specific characteristics and constraints. With the advent of multi-core processors and general purpose accelerators, parallel computing is now a common and pervasive consideration. Thus, software support to parallel programming activities is essential to make this technical transition more realistic and beneficial. The case of compilers is fundamental as they deal with (parallel) programs at a structural level, thus the need for intermediate representations. This article surveys and discusses attempts to provide intermediate representations for the proper support of explicitly parallel programs. We highlight the gap between available contributions and their concrete implementation in compilers and then exhibit possible future research directions.

3. AlFayez, Fayez. "Algorithms for Pre-Compiling Programs by Parallel Compilers". Computer Systems Science and Engineering 44, no. 3 (2023): 2165–76. http://dx.doi.org/10.32604/csse.2023.026238.

4. Dold, Axel, Friedrich von Henke, and Wolfgang Goerigk. "A Completely Verified Realistic Bootstrap Compiler". International Journal of Foundations of Computer Science 14, no. 04 (August 2003): 659–80. http://dx.doi.org/10.1142/s0129054103001947.

Abstract:
This paper reports on a large verification effort in constructing an initial fully trusted bootstrap compiler executable for a realistic system programming language and real target processor. The construction and verification process comprises three tasks: the verification of the compiling specification (a relation between abstract source and target programs) with respect to the language semantics and a realistic correctness criterion. This proof has been completely mechanized using the PVS verification system and is one of the largest case-studies in formal verification we are aware of. Second, the implementation of the specification in the high-level source language following a transformational approach, and finally, the implementation and verification of a binary executable written in the compiler's target language. For the latter task, a realistic technique has been developed, which is based on rigorous a-posteriori syntactic code inspection and which guarantees, for the first time, trusted execution of generated machine programs. The context of this work is the joint German research effort Verifix aiming at developing methods for the construction of correct compilers for realistic source languages and real target processors.

5. Ciric, Miroslav, and Svetozar Rancic. "Parsing in different languages". Facta universitatis - series: Electronics and Energetics 18, no. 2 (2005): 299–307. http://dx.doi.org/10.2298/fuee0502299c.

Abstract:
A compiler is a translator that accepts as input formatted source file or files, and produces as output a file that may be run directly on a computer. Given the same ANSI C++ compliant input file, two different ANSI C++ compliant compilers running on the same operating system produce two different executable programs that should execute in exactly the same way. To some degree, this is achieved by the standardization of the C++ language, but it is also possible because computer programming languages like C++ can be compiled using reliable technologies with long traditions and understood characteristics. LALR(k), as a practical version of LR, is such a reliable technology for parsing. The traditional LALR(1) tool YACC has proved its value during years of successful applications. Nowadays there are a few commercial and noncommercial alternatives that are very interesting and promising. This paper will examine some of them with the ability of parsing in different programming languages.

6. Steele, James K., and Ronald R. Biederman. "Powder Diffraction Pattern Simulation and Analysis". Advances in X-ray Analysis 37 (1993): 101–7. http://dx.doi.org/10.1154/s0376030800015561.

Abstract:
The graphics capability and speed available in modern personal computers has encouraged an increase in the use of a direct pattern comparison approach to the analysis of x-ray and electron diffraction patterns. Several researchers over the past 30 years have presented programs and algorithms which calculate and display powder patterns for x-ray diffraction. These programs originally required a mainframe computer which was expensive and generally not available to all researchers. With the recent advances in the speed of personal computers, language compilers, and high resolution graphics, especially within the past 5 years, real-time calculation and display of calculated patterns is becoming widely available. The power of this approach will be demonstrated through the use of an IBM compatible personal computer code developed by the authors.
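
For context on what such simulation codes evaluate (standard crystallography relations, not specific to the authors' program), peak positions follow from Bragg's law and relative peak intensities from the kinematic structure factor:

    \lambda = 2 d_{hkl} \sin\theta_{hkl}

    F_{hkl} = \sum_j f_j \, e^{2\pi i (h x_j + k y_j + l z_j)}, \qquad I_{hkl} \propto |F_{hkl}|^2

Here d_{hkl} is the interplanar spacing, \lambda the wavelength, f_j the atomic scattering factors, and (x_j, y_j, z_j) the fractional atomic coordinates; a pattern simulator sweeps the (hkl) reflections and plots broadened peaks at the resulting angles.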

7. Lin, Jenny, Vidya Narayanan, Yuka Ikarashi, Jonathan Ragan-Kelley, Gilbert Bernstein, and James McCann. "Semantics and Scheduling for Machine Knitting Compilers". ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–26. http://dx.doi.org/10.1145/3592449.

Abstract:
Machine knitting is a well-established fabrication technique for complex soft objects, and both companies and researchers have developed tools for generating machine knitting patterns. However, existing representations for machine knitted objects are incomplete (do not cover the complete domain of machine knittable objects) or overly specific (do not account for symmetries and equivalences among knitting instruction sequences). This makes it difficult to define correctness in machine knitting, let alone verify the correctness of a given program or program transformation. The major contribution of this work is a formal semantics for knitout, a low-level Domain Specific Language for knitting machines. We accomplish this by using what we call the fenced tangle , which extends concepts from knot theory to allow for a mathematical definition of knitting program equivalence that matches the intuition behind knit objects. Finally, using this formal representation, we prove the correctness of a sequence of rewrite rules; and demonstrate how these rewrite rules can form the foundation for higher-level tasks such as compiling a program for a specific machine and optimizing for time/reliability, all while provably generating the same knit object under our proposed semantics. By establishing formal definitions of correctness, this work provides a strong foundation for compiling and optimizing knit programs.

8. Burgin, Mark. "Triadic Automata and Machines as Information Transformers". Information 11, no. 2 (February 13, 2020): 102. http://dx.doi.org/10.3390/info11020102.

Abstract:
Algorithms and abstract automata (abstract machines) are used to describe, model, explore and improve computers, cell phones, computer networks, such as the Internet, and processes in them. Traditional models of information processing systems—abstract automata—are aimed at performing transformations of data. These transformations are performed by their hardware (abstract devices) and controlled by their software (programs)—both of which stay unchanged during the whole computational process. However, in physical computers, their software is also changing by special tools such as interpreters, compilers, optimizers and translators. In addition, people change the hardware of their computers by extending the external memory. Moreover, the hardware of computer networks is incessantly altering—new computers and other devices are added while other computers and other devices are disconnected. To better represent these peculiarities of computers and computer networks, we introduce and study a more complete model of computations, which is called a triadic automaton or machine. In contrast to traditional models of computations, triadic automata (machine) perform computational processes transforming not only data but also hardware and programs, which control data transformation. In addition, we further develop taxonomy of classes of automata and machines as well as of individual automata and machines according to information they produce.

9. Rushinek, Avi, and Sara F. Rushinek. "Operating systems, compilers, assemblers and application programs: audit trails of user satisfaction". Microprocessors and Microsystems 9, no. 5 (June 1985): 241–49. http://dx.doi.org/10.1016/0141-9331(85)90272-8.

10. LaRose, Ryan. "Overview and Comparison of Gate Level Quantum Software Platforms". Quantum 3 (March 25, 2019): 130. http://dx.doi.org/10.22331/q-2019-03-25-130.

Abstract:
Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms - Forest (pyQuil), Qiskit, ProjectQ, and the Quantum Developer Kit (Q#) - that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.

Doctoral dissertations on the topic "Compilers (Computer programs)"

1. Biglari-Abhari, Morteza. "Performance improvement through predicated execution in VLIW machines". Title page, contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phb593.pdf.

2. Deighan, John. "VPI PROLOG compiler project report". Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-01262010-020109/.

3. Park, Eun Jung. "Methodology of dynamic compiler option selection based on static program analysis: implementation and evaluation". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 74 p, 2007. http://proquest.umi.com/pqdweb?did=1407501141&sid=12&Fmt=2&clientId=8331&RQT=309&VName=PQD.

4. Walker, Kenneth William. "The implementation of an optimizing compiler for Icon". Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185599.

Abstract:
There are many optimizations that can be applied while translating Icon programs. These optimizations and the analyses needed to apply them are of interest for two reasons. First, Icon's unique combination of characteristics requires developing new techniques for implementing them. Second, these optimizations are used in a variety of languages and Icon can be used as a medium for extending the state of the art. Many of these optimizations require detailed control of the generated code. Previous production implementations of the Icon programming language have been interpreters. The virtual machine code of an interpreter is seldom flexible enough to accommodate these optimizations and modifying the virtual machine to add the flexibility destroys the simplicity that justified using an interpreter in the first place. These optimizations can only reasonably be implemented in a compiler. In order to explore these optimizations for Icon programs, a compiler was developed. This dissertation describes the compiler and the optimizations it employs. It also describes a run-time system designed to support the analyses and optimizations. Icon variables are untyped. The compiler contains a type inferencing system that determines what values variables and expressions may take on during program execution. This system is effective in the presence of values with pointer semantics and of assignments to components of data structures. The compiler stores intermediate results in temporary variables rather than on a stack. A simple and efficient algorithm was developed for determining the lifetimes of intermediate results in the presence of goal-directed evaluation. This allows an efficient allocation of temporary variables to intermediate results. The compiler uses information from type inferencing and liveness analysis to simplify generated code. Performance measurements on a variety of Icon programs show these optimizations to be effective.

5. Calnan, Paul W. "EXTRACT, Extensible Transformation and Compiler Technology". Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0429103-152947.

6. Cardone, Richard Joseph. "Language and compiler support for mixin programming". Access restricted to users with UT Austin EID; full text (PDF) from UMI/Dissertation Abstracts International, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077428.

7. Jones, Philip E. C. "Common subexpression detection in dataflow programs". Title page, contents and summary only, 1989. http://web4.library.adelaide.edu.au/theses/09SM/09smj78.pdf.

8. Müller-Olm, Markus. "Modular compiler verification: a refinement algebraic approach advocating stepwise abstraction". Berlin [u.a.]: Springer, 1997. http://www.loc.gov/catdir/enhancements/fy0815/97013428-d.html.

9. Lapinskii, Viktor. "Algorithms for compiler-assisted design space exploration of clustered VLIW ASIP datapaths". Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008376.

10. Junaidu, Sahalu B. "A parallel functional language compiler for message-passing multicomputers". Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13450.

Abstract:
The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised and it is the largest successfully parallelised Haskell program having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler is the best in programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.

Books on the topic "Compilers (Computer programs)"

1. Cytron, Ron K., and Richard J. LeBlanc, eds. Crafting a compiler. Boston: Addison-Wesley, 2010.

2. Safonov, V. O. Trustworthy compilers. Hoboken, N.J.: John Wiley & Sons, 2010.

3. Reis, Anthony J. Dos. Compiler construction using Java, JavaCC, and Yacc. Hoboken, N.J.: Wiley-IEEE Computer Society, 2011.

4. Kiong, Derek Beng Kee. Compiler technology: Tools, translators, and language implementation. Boston: Kluwer Academic Publishers, 1997.

5. Kaplan, Randy M. Constructing language processors for little languages. New York: Wiley, 1994.

6. Wilhelm, R. Compiler design. Wokingham, England: Addison-Wesley Publishing Co., 1995.

7. Tofte, Mads. Compiler generators: What they can do, what they might do, and what they will probably never do. Berlin: Springer-Verlag, 1990.

8. CC '92 (1992, Paderborn, Germany). Compiler construction: 4th International Conference CC'92, Paderborn, FRG, October 5–7, 1992: proceedings. Berlin: Springer-Verlag, 1992.

9. Meduna, Alexander. Elements of compiler design. Boca Raton, FL: Auerbach Publications, 2008.

10. CC 2003 (2003, Warsaw, Poland). Compiler construction: 12th international conference, CC 2003, held as part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2003, Warsaw, Poland, April 7–11, 2003: proceedings. Berlin: Springer, 2003.

Book chapters on the topic "Compilers (Computer programs)"

1. Klijnsma, Tosca, and Tim Steenvoorden. "Semantic Equivalence of Task-Oriented Programs in TopHat". In Lecture Notes in Computer Science, 100–125. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21314-4_6.

Abstract:
Task-oriented programming (top) is a new programming paradigm for specifying multi-user workflows. To reason formally about top programs, a formal language called TopHat has been designed, together with its operational semantics. For proving properties about task-oriented programs, it is desirable to know when two TopHat-programs are semantically equivalent. This paper aims to answer this question. We show that a task can be in either one of five conditions, and for every two tasks in the same condition, we define what it means for them to be semantically equivalent. Using this definition, we study a number of transformation laws for TopHat-programs, which can be used by developers and compilers to optimise top-programs. We show that the "Image missing" operation on types in TopHat is a functor but cannot be a monad. We support our findings with proofs formalised in the dependently typed programming language Idris.

2. Sakagami, Hitoshi. "Three-Dimensional Fluid Code with XcalableMP". In XcalableMP PGAS Programming Language, 165–79. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7683-6_6.

Abstract:
In order to adapt parallel computers to general convenient tools for computational scientists, a high-level and easy-to-use portable parallel programming paradigm is mandatory. XcalableMP, which is proposed by the XcalableMP Specification Working Group, is a directive-based language extension for Fortran and C to easily describe parallelization in programs for distributed memory parallel computers. The Omni XcalableMP compiler, which is provided as a reference XcalableMP compiler, is currently implemented as a source-to-source translator. It converts XcalableMP programs to standard MPI programs, which can be easily compiled by the native Fortran compiler and executed on most parallel computers. A three-dimensional Eulerian fluid code written in Fortran is parallelized by XcalableMP using two different programming models with the ordinary domain decomposition method, and its performances are measured on the K computer. Programs converted by the Omni XcalableMP compiler prevent native Fortran compiler optimizations and show lower performance than that of hand-coded MPI programs. Finally, almost the same performances are obtained by using specific compiler options of the native Fortran compiler in the case of a global-view programming model, but performance degradation is not improved by specifying any native compiler options when the code is parallelized by a local-view programming model.
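
For readers unfamiliar with what such a source-to-source translation amounts to, the sketch below shows, in plain C with MPI, the kind of block distribution and halo exchange that a global-view, domain-decomposed program ultimately boils down to. It is an illustrative skeleton, not actual output of the Omni XcalableMP compiler; the array names and sizes are invented.

    #include <mpi.h>
    #include <stdlib.h>

    #define NX 1024   /* global grid size along the distributed axis (illustrative) */

    int main(int argc, char **argv) {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int local = NX / nprocs;                    /* block distribution */
        double *u  = calloc(local + 2, sizeof *u);  /* +2 ghost (halo) cells */
        double *un = calloc(local + 2, sizeof *un);

        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

        for (int step = 0; step < 100; step++) {
            /* Halo exchange: the communication a distributed-memory compiler must generate. */
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[local + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[local], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Local stencil update on the owned block only. */
            for (int i = 1; i <= local; i++)
                un[i] = 0.5 * (u[i - 1] + u[i + 1]);
            double *tmp = u; u = un; un = tmp;
        }

        free(u); free(un);
        MPI_Finalize();
        return 0;
    }

A directive-based global-view program expresses only the data distribution and the loop; the translator inserts the communication, such as the MPI_Sendrecv calls above, on the programmer's behalf.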

3. Zangerl, Peter, Peter Thoman, and Thomas Fahringer. "Compiler Generated Progress Estimation for OpenMP Programs". In Lecture Notes in Computer Science, 107–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25636-4_9.

4. Craig, Stephen-John, and Michael Leuschel. "A Compiler Generator for Constraint Logic Programs". In Lecture Notes in Computer Science, 148–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-39866-0_17.

5. Chen, Christopher K., Margo I. Seltzer, and Mark R. Greenstreet. "Shellac: A Compiler Synthesizer for Concurrent Programs". In Lecture Notes in Computer Science, 33–51. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25803-9_3.

6. Kiefer, Moritz, Vladimir Klebanov, and Mattias Ulbrich. "Relational Program Reasoning Using Compiler IR". In Lecture Notes in Computer Science, 149–65. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48869-1_12.

7. Xia, Songtao, and James Hook. "Certifying Temporal Properties for Compiled C Programs". In Lecture Notes in Computer Science, 161–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24622-0_15.

8. Vitek, Jan, R. Nigel Horspool, and James S. Uhl. "Compile-time analysis of object-oriented programs". In Lecture Notes in Computer Science, 236–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55984-1_22.

9. Subramanian, Ram, and Santosh Pande. "Efficient program partitioning based on compiler controlled communication". In Lecture Notes in Computer Science, 4–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/bfb0097884.

10. Scherer, Alex, Thomas Gross, and Willy Zwaenepoel. "Adaptive Parallelism for OpenMP Task Parallel Programs". In Languages, Compilers, and Run-Time Systems for Scalable Computers, 113–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-40889-4_9.

Conference papers on the topic "Compilers (Computer programs)"

1. Bagly, Anton Pavlovich, Nikita Maksimovich Krivosheev, and Boris Yakovlevich Steinberg. "Automation of programs parallelization with optimization of data transfers". In 24th Scientific Conference “Scientific Services & Internet – 2022”. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/abrau-2022-17.

Abstract:
This paper is concerned with the development of a parallelizing compiler targeting computer systems with distributed memory. Industrial parallelizing compilers create programs for shared-memory systems; transforming sequential programs for distributed-memory systems requires the development of new functionality. This is becoming topical for future computer systems with hundreds of cores and more.
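
To make the shared-memory versus distributed-memory distinction concrete (an illustrative example, not taken from the paper): for a shared-memory target a parallelizing compiler can often emit a single OpenMP directive, because every thread sees the same arrays, whereas for a distributed-memory target it must also decide where the data lives and generate the transfers, as in the MPI sketch under the XcalableMP chapter above.

    /* Shared-memory parallelization: one directive, no explicit data movement.
       A distributed-memory parallelizer must instead partition a[] and b[] across
       nodes and insert the communication that keeps the partitions consistent. */
    void scale(double *a, const double *b, double s, int n) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            a[i] = s * b[i];
    }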

2. Steinberg, Boris Yakovlevich. "On the creation of parallelizing compilers for distributed memory computing systems". In 25th Scientific Conference “Scientific Services & Internet – 2023”. Keldysh Institute of Applied Mathematics, 2023. http://dx.doi.org/10.20948/abrau-2023-23.

Abstract:
This work describes the conditions for creating optimizing parallelizing compilers for distributed-memory computing systems. It considers optimizing program transformations that are specific to distributed memory, as well as program transformations that are needed for distributed-memory computing systems but can also improve compilers for shared-memory systems. Arguments are given in favour of building parallelizing compilers for distributed-memory computing systems on a high-level internal representation and with a high-level output language.

3. Greynolds, Alan W. "Squeezing large fortran programs into personal computers". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.fm2.

Abstract:
Even though it is one of the oldest programming languages around, fortran is still the language of choice in much of the scientific community. In fact, its popularity has increased with the recent availability of first class compilers and fast floating point support for personal computers. By sticking closely to the fortran-77 standard and making use of a device-independent graphics protocol, it is relatively easy to write sophisticated programs that can be executed on anything from large supercomputers to desk-top microcomputers. However, when dealing with large programs (>10,000 lines of code) with large data arrays (greater than a megabyte), limitations in personal computer hardware and software need to be addressed. A new optical analysis program of this size is currently running, with almost no modification in source code, on not only DEC VAX minicomputers but also personal computers based on both the Intel and Motorola microprocessors.

4. Sun, Yu, and Wei Zhang. "On-Line Trace Based Automatic Parallelization of Java Programs on Multicore Platforms". In 2011 INTERACT-15: 15th Workshop on Interaction between Compilers and Computer Architectures. IEEE, 2011. http://dx.doi.org/10.1109/interact.2011.11.

5. Dimitrov, Dimitar, and Ivaylo Penev. "Design of a Training Compiler for Increasing the Efficiency of Language Processors Learning". In eLSE 2021. ADL Romania, 2021. http://dx.doi.org/10.12753/2066-026x-21-077.

Abstract:
The paper presents the design of a training compiler which is developed for the purposes of education in compilers and language processors in computer science courses. The presented compiler has the following main advantages compared to known training compilers used in various universities - a simplified modular structure and the building of an explicit abstract syntactic tree of the input program. The modules in the compiler structure are lexical analyzer, syntactic analyzer, semantic analyzer and code generator. This separation allows students to effectively study the main stages of compilation - lexical analysis, parsing, semantic analysis and code generation. Building and visualizing an explicit abstract syntax tree helps students to understand the translation of the program into the compiler's front-end and make the transition to the compiler's back-end. The compiler translates a program written in a high-level language into virtual machine code. An interpreter to execute the generated virtual machine code is also presented. The presented design is compared to other known training compilers used in various university courses. The input language is procedurally oriented and is a subset of the C and Java languages, which makes it easier for students to use it. Language has enough resources to solve many practical problems. The input program for the compiler is a sequence of definitions of variables and functions. The language of the training compiler is strongly typed. Variables, constants and expressions are related to a specific type. Input-output operations require a certain type of arguments, arithmetic-logical operations are defined for specific types of arguments and type of returned result. At the end of the paper are presented the results of the work of the training compiler in translating a sample input program to code for a virtual machine. The results demonstrate the output of each compiler module - a token stream, an abstract syntax tree, and a set of virtual machine instructions. The structure of the presented training compiler can be used for different input languages in training on compilers and language processors.
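
A compact sketch of the pipeline the abstract describes (lexical analysis producing tokens, parsing producing an explicit abstract syntax tree, code generation walking that tree); the type names, the example expression, and the virtual machine instructions are invented for illustration and are not taken from the paper.

    #include <stdio.h>

    /* Token kinds a lexical analyzer might emit (illustrative subset). */
    typedef enum { TK_NUM, TK_IDENT, TK_PLUS, TK_STAR, TK_EOF } TokenKind;
    typedef struct { TokenKind kind; const char *text; } Token;

    /* Node of the explicit abstract syntax tree built by the parser. */
    typedef struct Ast {
        const char *op;            /* "+", "*", or a literal/identifier */
        struct Ast *left, *right;  /* NULL for leaves */
    } Ast;

    /* The expression 1 + 2 * x is parsed, respecting precedence, into a tree
       whose root is (+) with children 1 and (*), the latter with children 2 and x.
       The semantic analyzer then checks types on this tree, and the code generator
       walks it to emit stack-machine instructions such as:
           PUSH 1, PUSH 2, LOAD x, MUL, ADD                                   */
    int main(void) {
        Ast x   = { "x", NULL, NULL };
        Ast two = { "2", NULL, NULL };
        Ast one = { "1", NULL, NULL };
        Ast mul = { "*", &two, &x };
        Ast add = { "+", &one, &mul };
        printf("root operator: %s\n", add.op);  /* prints "+" */
        return 0;
    }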

6. Ward, A. C., and W. P. Seering. "Quantitative Inference in a Mechanical Design “Compiler”". In ASME 1989 Design Technical Conferences. American Society of Mechanical Engineers, 1989. http://dx.doi.org/10.1115/detc1989-0011.

Abstract:
This paper introduces the theory underlying a computer program that takes as input a schematic of a mechanical or hydraulic power transmission system, plus specifications and a utility function, and returns catalog numbers from predefined catalogs for the optimal selection of components implementing the design. Unlike programs for designing single components or systems, this program provides the designer with a high level “language” in which to compose new designs. It then performs much of the detailed design process. The process of “compilation”, or transformation from a high to a low level description, is based on a formalization of quantitative inferences about hierarchically organized sets of artifacts and operating conditions. This allows design compilation without the exhaustive enumeration of alternatives. The paper introduces the formalism, illustrating its use with examples. It then outlines some differences from previous work, and summarizes early tests and conclusions.

7. Beckmann, Leo H. J. F. "A small-computer program for optical design and analysis, written in 'C'". In International Lens Design. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/ild.1990.ltuc5.

Abstract:
A modular computer program for the design and analysis of optical systems on a small computer has been developed under the name "Opdesign". While the origins date back into the late seventies and the use of BASIC on, successively, programmable calculators and 8-bit home computers, the current program is written in 'C' and runs on different types of personal computers. Concurrently, the speed of calculations, in particular a skew-ray tracing, went up from one ray surface per second to 30..60 ray surfaces per second depending on the compiler and the hardware. With a program size (executable code) in excess of 250 kByte, the package covers system data (input, general properties), aberration calculation (third order-, wavefront-, exact ray aberrations), a number of tools for design manipulations, and automatic design improvement (by damped least squares method).

8. Andreyeva, Tatiana Anatolievna, and Lidia Vasiljevna Gorodnyaya. "Can the Contribution of Software Decisions to Program Performance Be Measured?" In 25th Scientific Conference “Scientific Services & Internet – 2023”. Keldysh Institute of Applied Mathematics, 2023. http://dx.doi.org/10.20948/abrau-2023-2.

Abstract:
The article concerns measurement of the effect that programming solutions have on the productivity of programming and on program performance, and their connection with educational programming and program improvement in practice. When program effectiveness is measured directly, what is really measured is the productivity of a complex consisting of a computer, a compiler, and a program, rather than the productivity of the programming solutions. These measurements do not fully reflect the contribution of programming solutions. Functional models, however, can provide a metric scale capable of separating features of programming languages and systems from features of programs and programming solutions. One possible approach is to measure the productivity of programs and programming solutions on the basis of normalized functional forms and thereby to reduce the dependence of measurements on hardware and programming systems.

9. Midorikawa, Edson Toshimi, and Liria Matsumoto Sato. "COMMUNION: towards a cooperative strategy for high-performance memory management". In Simpósio Brasileiro de Arquitetura de Computadores e Processamento de Alto Desempenho. Sociedade Brasileira de Computação, 1997. http://dx.doi.org/10.5753/sbac-pad.1997.22645.

Abstract:
The memory system is the most critical component of modern high-performance computer systems, because of its growing inability to keep up with the processor requests. Technological trends have produced a large and growing gap between CPU speeds and DRAM speeds. Much research has focused on this memory system problem, including program optimizing techniques, data locality enhancement, hardware and software prefetching, decoupled architectures, multithreading, speculative loads and execution. These techniques have achieved a relative success, but they focus on only one component in the hardware or software systems. We present here a new strategy for memory management in high-performance computer systems, named COMMUNION. The basic idea behind this strategy is cooperation. We introduce some possibilities of interaction among system programs that are responsible to generate and execute application programs. So, we investigate two specific interactions: between the compiler and the operating system, and among the compiling system components. The experimental results show that it's possible to achieve improvements of about 10 times in execution time, and about 5 times in memory demand. In the interaction between compiler and operating system, named Compiler-Aided Page Replacement (CAPR), we achieved a reduction of about 10% in the space-time product, with an increase of only 0.5% in the total execution time. All these results show that it's possible to manage main memory with better efficiency than what is provided by current systems.

10. "Program Committee". In 9th Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT'05). IEEE, 2005. http://dx.doi.org/10.1109/interact.2005.12.