To see other types of publications on this topic, follow the link: Compilers (Computer programs).

Theses on the topic "Compilers (Computer programs)"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 theses for your research on the topic "Compilers (Computer programs)".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Biglari-Abhari, Morteza. "Performance improvement through predicated execution in VLIW machines". Title page, contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phb593.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Deighan, John. "VPI PROLOG compiler project report". Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-01262010-020109/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Park, Eun Jung. "Methodology of dynamic compiler option selection based on static program analysis: implementation and evaluation". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 74 p., 2007. http://proquest.umi.com/pqdweb?did=1407501141&sid=12&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Walker, Kenneth William. "The implementation of an optimizing compiler for Icon". Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185599.

Full text
Abstract:
There are many optimizations that can be applied while translating Icon programs. These optimizations and the analyses needed to apply them are of interest for two reasons. First, Icon's unique combination of characteristics requires developing new techniques for implementing them. Second, these optimizations are used in a variety of languages, and Icon can be used as a medium for extending the state of the art. Many of these optimizations require detailed control of the generated code. Previous production implementations of the Icon programming language have been interpreters. The virtual machine code of an interpreter is seldom flexible enough to accommodate these optimizations, and modifying the virtual machine to add the flexibility destroys the simplicity that justified using an interpreter in the first place. These optimizations can only reasonably be implemented in a compiler. In order to explore these optimizations for Icon programs, a compiler was developed. This dissertation describes the compiler and the optimizations it employs. It also describes a run-time system designed to support the analyses and optimizations. Icon variables are untyped. The compiler contains a type inferencing system that determines what values variables and expressions may take on during program execution. This system is effective in the presence of values with pointer semantics and of assignments to components of data structures. The compiler stores intermediate results in temporary variables rather than on a stack. A simple and efficient algorithm was developed for determining the lifetimes of intermediate results in the presence of goal-directed evaluation. This allows an efficient allocation of temporary variables to intermediate results. The compiler uses information from type inferencing and liveness analysis to simplify generated code. Performance measurements on a variety of Icon programs show these optimizations to be effective.
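
The flavor of such a type inferencing system can be sketched in a few lines of C: a flow-insensitive type-set analysis that unions possible types over assignments until a fixpoint. The bitmask lattice, the toy three-address program, and every name here are our hypothetical simplification, not Walker's actual algorithm.

    #include <stdio.h>

    /* Hypothetical type lattice: one bit per Icon type. */
    enum { T_NULL = 1, T_INT = 2, T_STR = 4, T_LIST = 8 };

    #define NVARS 3
    #define NASGN 4

    /* "dst := src" in a toy three-address form; src < 0 means a constant
     * whose type set is given directly in const_type. */
    struct asgn { int dst, src; unsigned const_type; };

    int main(void) {
        const struct asgn prog[NASGN] = {
            {0, -1, T_INT},   /* x := 1   */
            {1, -1, T_STR},   /* y := "a" */
            {0,  1, 0},       /* x := y   */
            {2,  0, 0},       /* z := x   */
        };
        unsigned type[NVARS] = {0};
        int changed = 1;
        while (changed) {                     /* iterate to a fixpoint */
            changed = 0;
            for (int i = 0; i < NASGN; i++) {
                unsigned in = prog[i].src < 0 ? prog[i].const_type
                                              : type[prog[i].src];
                if ((type[prog[i].dst] | in) != type[prog[i].dst]) {
                    type[prog[i].dst] |= in;
                    changed = 1;
                }
            }
        }
        for (int v = 0; v < NVARS; v++)       /* x and z end up INT|STR */
            printf("var %d : type set 0x%x\n", v, type[v]);
        return 0;
    }
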
APA, Harvard, Vancouver, ISO, and other styles
5

Calnan, Paul W. "EXTRACT, Extensible Transformation and Compiler Technology". Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0429103-152947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cardone, Richard Joseph. "Language and compiler support for mixin programming". Access restricted to users with UT Austin EID; full text (PDF) from UMI/Dissertation Abstracts International, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jones, Philip E. C. "Common subexpression detection in dataflow programs". Title page, contents and summary only, 1989. http://web4.library.adelaide.edu.au/theses/09SM/09smj78.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Müller-Olm, Markus. "Modular compiler verification: a refinement algebraic approach advocating stepwise abstraction". Berlin [u.a.]: Springer, 1997. http://www.loc.gov/catdir/enhancements/fy0815/97013428-d.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lapinskii, Viktor. "Algorithms for compiler-assisted design space exploration of clustered VLIW ASIP datapaths". Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Junaidu, Sahalu B. "A parallel functional language compiler for message-passing multicomputers". Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13450.

Full text
Abstract:
The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised, and it is the largest successfully parallelised Haskell program, having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language, which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program, and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs, but the quantum-based scheduler is the best for programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
APA, Harvard, Vancouver, ISO, and other styles
11

Sarcar, Amritam. "Runtime assertion checking for JML on the Eclipse platform using AST merging". To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Chan, Robin Isaac Man-Hang. "An Estelle-C compiler for automatic protocol implementation". Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26184.

Full text
Abstract:
Over the past few years, much experience has been gained in semi-automatic protocol implementation using an existing Estelle-C compiler developed at the University of British Columbia. However, with the continual evolution of the Estelle language, that compiler is now obsolete. The present study found syntactic and semantic differences between the Estelle language as implemented by the existing compiler and that specified in the latest ISO document substantial enough to warrant the construction of a new Estelle-C compiler. The result is a new compiler which translates Estelle as defined in the second version of the ISO Draft Proposal 9074 into the programming language C. The new Estelle-C compiler addresses issues such as dynamic reconfiguration of modules and maintenance of priority relationships among nested modules. A run-time environment capable of supporting the new Estelle features is also presented. The implementation strategy used in the new Estelle-C compiler is illustrated using the alternating bit protocol found in the ISO Draft Proposal 9074 document.
APA, Harvard, Vancouver, ISO, and other styles
13

Coleman, Jesse J. "The design, construction, and implementation of an engineering software command processor and macro compiler". Online version of thesis, 1995. http://hdl.handle.net/1850/12219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Calnan, Paul W., III. "EXTRACT: Extensible Transformation and Compiler Technology". Digital WPI, 2003. https://digitalcommons.wpi.edu/etd-theses/484.

Full text
Abstract:
Code transformation is widely used in programming. Most developers are familiar with using a preprocessor to perform syntactic transformations (symbol substitution and macro expansion). However, it is often necessary to perform more complex transformations using semantic information contained in the source code. In this thesis, we developed EXTRACT, a general-purpose code transformation language. Using EXTRACT, it is possible to specify, in a modular and extensible manner, a variety of transformations on Java code such as insertion, removal, and restructuring. In support of this, we also developed JPath, a path language for identifying portions of Java source code. Combined, these two technologies make it possible to identify source code that is to be transformed and then specify how that code is to be transformed. We evaluate our technology using three case studies: a type name qualifier, which transforms Java class names into fully-qualified class names; a contract checker, which enforces pre- and post-conditions across behavioral subtypes; and a code obfuscator, which mangles the names of a class's methods and fields such that they cannot be understood by a human, without breaking the semantic content of the class.
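
To make AST-level, semantics-aware transformation concrete, the sketch below shows one such rewrite pass, constant folding over a toy expression tree, in C. EXTRACT itself transforms Java source selected by JPath patterns; the language, node layout, and names below are our illustrative substitutes, not the thesis's API.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy AST: leaves are integer constants, inner nodes are '+'. */
    struct node { int is_const; int value; struct node *l, *r; };

    static struct node *lit(int v) {
        struct node *n = calloc(1, sizeof *n);
        n->is_const = 1; n->value = v; return n;
    }
    static struct node *add(struct node *l, struct node *r) {
        struct node *n = calloc(1, sizeof *n);
        n->l = l; n->r = r; return n;
    }

    /* The transformation: rewrite const+const subtrees bottom-up. */
    static struct node *fold(struct node *n) {
        if (n->is_const) return n;
        n->l = fold(n->l); n->r = fold(n->r);
        if (n->l->is_const && n->r->is_const) {
            struct node *f = lit(n->l->value + n->r->value);
            free(n->l); free(n->r); free(n);
            return f;
        }
        return n;
    }

    int main(void) {
        struct node *e = add(lit(2), add(lit(3), lit(4)));   /* 2+(3+4) */
        e = fold(e);
        printf("folded to constant %d\n", e->value);         /* 9 */
        return 0;
    }
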
APA, Harvard, Vancouver, ISO, and other styles
15

Hopper, Michael A. "A compiler framework for multithreaded parallel systems". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Liang, Wen. "A compiler optimization framework for Concordia Parallel C". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39488.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Moon, Hae-Kyung. "Compiler construction for a simple Pascal-like language". Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897511.

Full text
Abstract:
In this thesis a compiler called SPASCAL is implemented which translates source programs in a simple Pascal-like language called SPASCAL into target programs in the VAX assembly language. This thesis clearly describes the main aspects of a compiler: lexical analysis and syntactic analysis, including the symbol-table routines and the error-handling routines. This thesis uses regular expressions to define the lexical structure and a context-free grammar to define the syntactic structure of SPASCAL. The compiler is constructed using syntax-directed translation, context-free grammars, and a set of semantic rules. The SPASCAL compiler is written in standard C under UNIX.
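
For readers unfamiliar with the lexical-analysis stage described above, here is a minimal hand-coded scanner in C for the token classes that regular expressions such as [A-Za-z][A-Za-z0-9]* and [0-9]+ define. It is a generic illustration under our own conventions, not SPASCAL's actual lexer.

    #include <ctype.h>
    #include <stdio.h>

    /* Tokenize identifiers ([A-Za-z][A-Za-z0-9]*) and integers ([0-9]+). */
    static void scan(const char *s) {
        while (*s) {
            if (isspace((unsigned char)*s)) { s++; continue; }
            const char *start = s;
            if (isalpha((unsigned char)*s)) {
                while (isalnum((unsigned char)*s)) s++;
                printf("IDENT  %.*s\n", (int)(s - start), start);
            } else if (isdigit((unsigned char)*s)) {
                while (isdigit((unsigned char)*s)) s++;
                printf("NUMBER %.*s\n", (int)(s - start), start);
            } else {
                printf("PUNCT  %c\n", *s++);   /* single-char operators */
            }
        }
    }

    int main(void) { scan("x1 := x1 + 42"); return 0; }
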
APA, Harvard, Vancouver, ISO, and other styles
18

Lu, Jing. "Semi-automatic protocol implementation using an Estelle-C compiler, LAPB and RTS protocols as examples". Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29419.

Full text
Abstract:
Formal Description Techniques allow for the use of automated tools during the specification and development of communication protocols. Estelle is a standardized formal description technique developed by ISO to remove ambiguities in the specification of communication protocols and services. The UBC Estelle-C compiler automates the implementation of protocols by producing an executable C implementation directly from an Estelle specification. In this thesis, we investigate the automated protocol implementation methodology using the Estelle-C compiler. First, we describe the improvements made to the compiler to support the latest version of Estelle. Then, we present and discuss the semi-automated implementations of the LAPB protocol from the CCITT X.25 Recommendation and the RTS protocol from the CCITT X.400 MHS series using this compiler. Finally, we compare the automatic and manual implementations of the LAPB and RTS protocols in terms of functional coverage, development time, code size, and performance. The results strongly indicate the overall advantages of the automatic protocol implementation method over the manual approach.
APA, Harvard, Vancouver, ISO, and other styles
19

Reig Galilea, Fermín Javier. "Compiler architecture using a portable intermediate language". Connect to e-thesis, 2002. http://theses.gla.ac.uk/686/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2002.
Ph.D. thesis submitted to the Department of Computing Science, University of Glasgow, 2002. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
20

Wendt, Alan Lee. "An optimizing code generator generator". Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184771.

Full text
Abstract:
This dissertation describes a system that constructs efficient, retargetable code generators and optimizers. chop reads nonprocedural descriptions of a computer's instruction set and of a naive code generator for the computer, and it writes an integrated code generator and peephole optimizer for it. The resulting code generators are very efficient because they interpret no tables; they are completely hard-coded. Nor do they build complex data structures to communicate between code generation and optimization phases. Interphase communication is reduced to the point that the code generator's output is often encoded in the program counter and conveyed to the optimizer by jumping to the right label. chop's code generator and optimizer are based on a very simple formalism, namely rewriting rules. An instrumented version of the compiler infers the optimization rules as it compiles a training suite, and it records them for translation into hard code and inclusion in the production version. I have replaced the Portable C Compiler's code generator with one generated by chop. Despite a costly interface, the resulting compiler runs 30% to 50% faster than the original Portable C Compiler (pcc) and generates comparable code. This figure is diluted by common lexical analysis, parsing, and semantic analysis and by comparable code emission. Allowing for these, the new code generator appears to run approximately seven times faster than that of the original pcc.
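
The rewriting-rule formalism can be sketched operationally: a rule matches a window of adjacent instructions and replaces it with a cheaper sequence. The C fragment below is our illustration with literal string patterns; chop's real rules are inferred over a register-transfer description, and none of these names come from the dissertation.

    #include <stdio.h>
    #include <string.h>

    /* One peephole rewriting rule: when two adjacent instructions match
     * lhs1 and lhs2, they may be replaced by rhs. */
    struct rule { const char *lhs1, *lhs2, *rhs; };

    static const struct rule rules[] = {
        {"push r1",   "pop r1",    "(deleted)"},   /* push;pop cancels */
        {"add r1,#1", "add r1,#1", "add r1,#2"},   /* merge increments */
    };

    static void peephole(const char *code[], int n) {
        for (int i = 0; i + 1 < n; i++)
            for (size_t r = 0; r < sizeof rules / sizeof rules[0]; r++)
                if (strcmp(code[i], rules[r].lhs1) == 0 &&
                    strcmp(code[i + 1], rules[r].lhs2) == 0)
                    printf("%s ; %s  =>  %s\n",
                           code[i], code[i + 1], rules[r].rhs);
    }

    int main(void) {
        const char *code[] = {"push r1", "pop r1", "add r1,#1", "add r1,#1"};
        peephole(code, 4);
        return 0;
    }
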
APA, Harvard, Vancouver, ISO, and other styles
21

Farkas, Alex Miklós. "Program construction and evolution in a persistent integrated programming environment". Title page, contents and abstract only, 1995. http://web4.library.adelaide.edu.au/theses/09PH/09phf229.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Cai, Qiong (Computer Science & Engineering, Faculty of Engineering, UNSW). "Profile-guided redundancy elimination". Awarded by: University of New South Wales. School of Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/25156.

Full text
Abstract:
Program optimizations analyze and transform programs so that better performance can be achieved. Classical optimizations mainly use the static properties of the programs to analyze program code and make sure that the optimizations work for every possible combination of the program and the input data. This approach is conservative in those cases where the programs show the same runtime behaviors for most of their execution time. On the other hand, profile-guided optimizations use runtime profiling information to discover the aforementioned common behaviors of the programs and explore more optimization opportunities, which are missed by classical, non-profile-guided optimizations. Redundancy elimination is one of the most powerful optimizations in compilers. In this thesis, a new partial redundancy elimination (PRE) algorithm and a partial dead code elimination (PDE) algorithm are proposed for a profile-guided redundancy elimination framework. During the design and implementation of the algorithms, we address three critical issues: optimality, feasibility, and profitability. First, we prove that both our speculative PRE algorithm and our region-based PDE algorithm are optimal for given edge profiling information: the total number of dynamic occurrences of redundant expressions or dead code cannot be further reduced by any other code motion. Moreover, our speculative PRE algorithm is lifetime optimal, which means that the lifetimes of newly introduced temporary variables are minimized. Second, we show that both algorithms are practical and can be efficiently implemented in production compilers. For the SPEC CPU2000 benchmarks, the average compilation overhead for our PRE algorithm is 3%, and the average overhead for our PDE algorithm is less than 2%. Moreover, edge profiling rather than expensive path profiling is sufficient to guarantee the optimality of the algorithms. Finally, we demonstrate that the proposed profile-guided redundancy elimination techniques can provide speedups on real machines by conducting a thorough performance evaluation. To the best of our knowledge, this is the first performance evaluation of profile-guided redundancy elimination techniques on real machines.
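
A minimal before/after view of what a PRE transformation does is sketched below in C; the example and names are ours, not the thesis's. The thesis's speculative variant additionally hoists expressions onto paths that may not use them when edge profiles say that is profitable.

    #include <stdio.h>

    /* Before: b*c is computed on the taken branch and again afterwards,
     * so on the path where cond is true the second evaluation is fully
     * redundant, and on the other path it is partially redundant. */
    static int before(int b, int c, int cond) {
        int x = 0;
        if (cond) x = b * c;
        int y = b * c;
        return x + y;
    }

    /* After PRE: the expression is hoisted into t, and every dynamic
     * path now evaluates b*c exactly once. */
    static int after(int b, int c, int cond) {
        int t = b * c;
        int x = 0;
        if (cond) x = t;
        int y = t;
        return x + y;
    }

    int main(void) {
        printf("%d %d\n", before(3, 4, 1), after(3, 4, 1));  /* both 24 */
        return 0;
    }
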
APA, Harvard, Vancouver, ISO, and other styles
23

Torri, Stephen A., Hamilton, John A. "Generic reverse engineering architecture with compiler and compression classification components". Auburn, Ala., 2009. http://hdl.handle.net/10415/1583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Crowfoot, Norman C. "A visual aid for designing regular expression parsers". Online version of thesis, 1988. http://hdl.handle.net/1850/10446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Cook, Philip John. "Incremental compilation in language-based environments". [St. Lucia, Qld.], 2006. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19173.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Hessaraki, Alireza. "CCC86, a generic 8086 C-language cross compiler plus communication package". Virtual Press, 1987. http://liblink.bsu.edu/uhtbin/catkey/544004.

Full text
Abstract:
The Cross Compiler is an excellent and valuable program development tool. It provides the user a low-level compiled language that allows character (byte), integer (8086 word), and pointer (8086 one-word address) manipulation. It also allows recursion, has modern control flow, and provides a rich set of operators. The Communication Program, which includes a file-transfer utility, allows students to download or upload their C programs to a PC. It allows use of a modem. File transfer can be done using XON/XOFF or XMODEM. It also supports the INS 8250 UART chip, plus the 16450 high-speed device found in hardware such as the IBM AT Serial/Parallel Adapter.
APA, Harvard, Vancouver, ISO, and other styles
27

McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model". Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Full text
Abstract:
A useful compiler has been designed that takes a high level neural network specification and constructs a low level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized adaptrode model, the wiring specification algorithms could also be used to specify the connections in other types of networks.
APA, Harvard, Vancouver, ISO, and other styles
28

Baudendistel, Kurt. "Compiler development for fixed-point processors". Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mbhambhu, Tsakani Joseph. "A Java bytecode compiler for the 8051 micro-controller". Thesis, Stellenbosch: Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52645.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2002.
This report describes the development of a Java Bytecode Compiler (JBC) for the 8051 micro-controller. Bytecodes are found in the class file generated when a Java source file is compiled with the Java compiler (javac). On Java platforms, the Java Virtual Machine (JVM) interprets and executes the bytecodes. Currently existing Java platforms do not support programming the 8051 using Java. As an 8-bit micro-controller with only 64 KB of total memory, the 8051's word size and memory are too limited to implement a JVM. Moreover, full applications of the 8051 require that it handle hardware interrupts and access I/O ports and special registers. This thesis proposes a JBC to compile the standard bytecodes found in the class file and generate equivalent assembly code that can run on the 8051. The JBC was tested on the 8051-compatible AT89C52*44 micro-controller with a program that simulates an irrigation controller. The code generated by the JBC executes correctly but is large and runs more slowly than code written in assembly. The conclusions drawn are that the JBC can be used to compile Java programs intended for the 8051 and its family of micro-controllers. In particular, it is a good tool for people who prefer Java to other languages. The JBC is suitable for smaller programs that do not have efficiency as a major requirement.
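
To ground the idea of expanding stack bytecodes into 8051 assembly, here is a hedged sketch of the kind of template table such a compiler might use. The operand-stack convention (R0 as a pointer into internal RAM, one 8-bit word per slot) and the templates are our illustration, not the thesis's actual code generator.

    #include <stdio.h>

    /* Hypothetical expansion templates: each JVM opcode becomes a fixed
     * 8051 sequence operating on an operand stack kept in internal RAM,
     * with R0 as the stack pointer (one 8-bit word per slot). */
    struct template8051 { const char *opcode; const char *expansion; };

    static const struct template8051 jbc[] = {
        {"iconst_1",
         "MOV @R0, #1      ; push the constant 1\n"
         "INC R0"},
        {"iadd",
         "DEC R0           ; pop right operand\n"
         "MOV A, @R0\n"
         "DEC R0           ; pop left operand\n"
         "ADD A, @R0\n"
         "MOV @R0, A       ; push the 8-bit sum\n"
         "INC R0"},
    };

    int main(void) {
        for (size_t i = 0; i < sizeof jbc / sizeof jbc[0]; i++)
            printf("%s:\n%s\n\n", jbc[i].opcode, jbc[i].expansion);
        return 0;
    }
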
APA, Harvard, Vancouver, ISO, and other styles
30

Orgill, Gregory Leavitt. "Java Performance of the Rijndael Encryption Algorithm Across Compilers and Virtual Machines". Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd681.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Song, Jun. "Compiling ACE for Distributed-Memory Machines". PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4568.

Full text
Abstract:
Distributed-memory machines offer a very high level of performance, flexibility, and scalability. But the memory organization of this kind of machine dictates that processes on different processors must communicate explicitly by sending and receiving messages. As a result, the programmer faces the enormously difficult task of detailed planning of algorithm-irrelevant, low-level communication issues. This level of programming resembles writing assembly programs for a sequential machine. ACE is a message-passing language with abstract communication statements. It was defined by Dr. Jingke Li at Portland State University. Communication in ACE is still explicit, but it is abstracted to a higher level. The abstraction can help balance the needs of ease of programming and high performance. This thesis discusses how those high-level communication abstractions can be transformed into low-level communication routines. It presents the design and implementation of a compiler that transforms an ACE program into a C program with low-level communication routines. The compiler is implemented for the Intel iPSC/2 hypercube multiprocessor machine. Compared to their low-level counterparts, ACE programs are easier to write and are more understandable. Compared to their high-level counterparts, more efficient code can be generated, since communication information is expressed explicitly in ACE and the compiler itself is much less complex. ACE also enables users to fine-tune some critical communication segments. Some well-known parallel algorithms written in ACE were compiled by the compiler as examples, and experimental results on their performance are included.
APA, Harvard, Vancouver, ISO, and other styles
32

Heller, Richard. "Checkpointing without operating system intervention: implementing Griewank's algorithm". Ohio: Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1176494831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Shear, Raymond F. "Implementation of a Modula 2 subset compiler supporting a 'C' language interface using commonly available UNIX tools". Online version of thesis, 1989. http://hdl.handle.net/1850/10505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Muff, Urs C. "Backtracking model languages". Access author, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lerner, Sorin. "Automatically proving the correctness of program analyses and transformations". Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/7001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

De, Subrato Kumar. "Design of a retargetable compiler for digital signal processors". Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Burke, Patrick William. "A New Look at Retargetable Compilers". Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699988/.

Full text
Abstract:
Consumers demand new and innovative personal computing devices every 2 years, when their cellular phone service contracts are renewed. Yet a 2-year development cycle for the concurrent development of both hardware and software is nearly impossible. As more components and features are added to the devices, maintaining this 2-year cycle with current tools will become commensurately harder. This dissertation delves into the feasibility of simplifying the development of such systems by employing heterogeneous systems on a chip in conjunction with a retargetable compiler such as the hybrid computer retargetable compiler (Hy-C). An example of a simple architecture description of sufficient detail for use with a retargetable compiler like Hy-C is provided. As a software engineer with 30 years of experience, I have witnessed numerous system failures. A plethora of software development paradigms and tools have been employed to prevent software errors, but none have been completely successful. Much of the discussion centers on software development in the military contracting market, as that is my background. The dissertation reviews those tools, as well as some existing retargetable compilers, in an attempt to determine how those errors occurred and how a system like Hy-C could assist in reducing future software errors. In the end, a simple retargetable solution like Hy-C is shown to be powerful enough to provide a very capable product in a fast-growing market.
APA, Harvard, Vancouver, ISO, and other styles
38

Kim, Minjang. "Dynamic program analysis algorithms to assist parallelization". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45758.

Full text
Abstract:
All market-leading processor vendors have started to pursue multicore processors as an alternative to high-frequency single-core processors for better energy and power efficiency. This transition to multicore processors no longer provides programmers the free performance gain enabled by increased clock frequency. Parallelization of existing serial programs has become the most powerful approach to improving application performance. Not surprisingly, parallel programming is still extremely difficult for many programmers, mainly because thinking in parallel is simply beyond human perception. However, we believe that software tools based on advanced analyses can significantly reduce this parallelization burden. Much active research and many tools exist for already-parallelized programs, such as finding concurrency bugs. Instead we focus on program analysis algorithms that assist the actual parallelization steps: (1) finding parallelization candidates, (2) understanding the parallelizability and profits of the candidates, and (3) writing parallel code. A few commercial tools are introduced for these steps. A number of researchers have proposed various methodologies and techniques to assist parallelization. However, many weaknesses and limitations still exist. In order to assist the parallelization steps more effectively and efficiently, this dissertation proposes Prospector, which consists of several new and enhanced program analysis algorithms. First, an efficient loop profiling algorithm is implemented. Frequently executed loops are candidates for profitable parallelization, and detailed execution profiling of loops provides a guide for selecting initial parallelization targets. Second, an efficient and rich data-dependence profiling algorithm is presented. Data dependence is the most essential factor that determines parallelizability. Prospector exploits dynamic data-dependence profiling, which is an alternative and complementary approach to traditional static-only analyses. However, even state-of-the-art dynamic dependence analysis algorithms can only successfully profile programs with a small memory footprint. Prospector introduces an efficient data-dependence profiling algorithm that supports large programs and inputs while providing highly detailed profiling information. Third, a new speedup prediction algorithm is proposed. Although loop profiling can give a qualitative estimate of the expected profit, obtaining accurate speedup estimates needs more sophisticated analysis. Prospector introduces a new dynamic emulation method to predict parallel speedups from annotated serial code. Prospector also provides a memory performance model to predict speedup saturation due to increased memory traffic. Compared to the latest related work, Prospector significantly improves both prediction accuracy and coverage. Finally, Prospector provides algorithms that extract hidden parallelism and advice on writing parallel code. We present a number of case studies of how Prospector assists manual parallelization in particular cases, including privatization, reduction, mutex, and pipelining.
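
What a data-dependence profiler reports can be seen on two tiny loops; this C fragment is our illustration, not Prospector's output.

    #include <stdio.h>

    #define N 8

    /* Loop 1: every iteration writes a[i] from read-only input b -- a
     * profiler observes no cross-iteration dependence, so the loop is a
     * DOALL parallelization candidate. */
    static void independent(int *a, const int *b) {
        for (int i = 0; i < N; i++)
            a[i] = b[i] * 2;
    }

    /* Loop 2: a[i] reads the a[i-1] written one iteration earlier -- a
     * loop-carried flow dependence the profiler would report, ruling
     * out naive DOALL parallelization. */
    static void carried(int *a) {
        for (int i = 1; i < N; i++)
            a[i] = a[i - 1] + 1;
    }

    int main(void) {
        int a[N] = {0}, b[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        independent(a, b);
        carried(a);
        for (int i = 0; i < N; i++) printf("%d ", a[i]);
        printf("\n");
        return 0;
    }
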
APA, Harvard, Vancouver, ISO, and other styles
39

Railing, Brian Paul. "Collecting and representing parallel programs with high performance instrumentation". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54431.

Full text
Abstract:
Computer architecture faces looming challenges: finding program parallelism, process-technology limits, and a limited power budget. To navigate these challenges, a deeper understanding of parallel programs is required. I will discuss the task graph representation and how it enables programmers and compiler optimizations to understand and exploit dynamic aspects of the program. I will present Contech, a high-performance framework for generating dynamic task graphs from arbitrary parallel programs. The Contech framework supports a variety of languages and parallelization libraries, and has been tested on both x86 and ARM. I will demonstrate how this framework encompasses a diversity of program analyses, particularly by modeling a dynamically reconfigurable, heterogeneous multi-core processor.
APA, Harvard, Vancouver, ISO, and other styles
40

Shende, Sameer Suresh. "The role of instrumentation and mapping in performance measurement". View abstract or download file of text, 2001. http://wwwlib.umi.com/cr/uoregon/fullcit?p3024533.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2001.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 141-156). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
41

Balasubramaniam, Mahadevan. "Performance analysis and evaluation of dynamic loop scheduling techniques in a competitive runtime environment for distributed memory architectures". Master's thesis, Mississippi State: Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-04022003-154254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Porto, João Paulo. "Técnicas e arquitetura para captura de traços e execução especulativa". [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275770.

Full text
Abstract:
Advisor: Guido Costa Souza de Araújo
Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação
The usual, Instruction-Level Parallelism (ILP)-oriented microprocessor development model is known to have reached a hard-to-break limit. Finding scalable and efficient solutions that keep several instructions in flight simultaneously has proven to be more difficult than imagined. In this sense, computer architects and micro-architects have been seeking alternatives to develop new architectures. Among all, the TLP-based solutions are gaining strength. In short, TLP strives to break a sequential program into quasi-independent tasks in order to execute them in parallel. TLP can be extracted either by hardware or software. Ideally, a hybrid solution would be employed, with the software being responsible for identifying TLP opportunities and the hardware offering support for the parallel code execution. With such a solution, the hardware is free from the heavy speculation burden, whilst the software can be parallelized with more guarantees. In this Thesis, automatic parallelization and TLP strategies were studied. The research first focused on dynamic execution traces. Existing techniques, such as MRET and Trace Trees, proved unsuitable for our goals, which led us to develop a new trace identification technique called Compact Trace Trees, which showed to be faster than Trace Trees. Compact Trace Trees also present trace specialization, which MRET lacks. Besides Compact Trace Trees, this Thesis presents a new trace representation called Trace Execution Automata, an automaton representing the execution traces. This technique revealed nearly 80% memory size savings when compared to the usual, code duplication representation. Next, the Thesis' focus shifted to parallelizing loops statically. Our initial result in this direction, using Java and without any hardware support, clearly revealed that static parallelization of sequential programs could reach a 48% average speedup when compared to their sequential execution. Finally, a new Decoupled Software Pipelining-based execution model with automatic data coherence amongst parallelized programs' threads is proposed by the Thesis. Despite the lack of a full model evaluation, the initial results are promising. Differently from other proposals, the hardware support necessary for this architecture is simple and builds upon the existing cache coherence protocol, without any modifications to this sensitive system component.
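
As a hedged illustration of the Decoupled Software Pipelining idea mentioned above, the C sketch below splits a loop into a producer stage and a consumer stage communicating through a bounded queue; the queue, stage bodies, and sizes are our invention, not the thesis's runtime.

    #include <pthread.h>
    #include <stdio.h>

    #define N   16   /* loop trip count                   */
    #define QSZ 4    /* capacity of the inter-stage queue */

    static int q[QSZ];
    static int head, tail;   /* head: next read slot, tail: next write slot */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    static void push(int v) {                 /* blocking enqueue */
        pthread_mutex_lock(&m);
        while ((tail + 1) % QSZ == head) pthread_cond_wait(&not_full, &m);
        q[tail] = v; tail = (tail + 1) % QSZ;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
    }

    static int pop(void) {                    /* blocking dequeue */
        pthread_mutex_lock(&m);
        while (head == tail) pthread_cond_wait(&not_empty, &m);
        int v = q[head]; head = (head + 1) % QSZ;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&m);
        return v;
    }

    /* Stage 1 of the pipeline: the "produce" half of the original loop. */
    static void *stage1(void *arg) {
        (void)arg;
        for (int i = 0; i < N; i++) push(i * i);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, stage1, NULL);
        for (int i = 0; i < N; i++)           /* stage 2: the "consume" half */
            printf("consumed %d\n", pop());
        pthread_join(t, NULL);
        return 0;
    }
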
Doctorate
Doctor of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
43

Baixo, André Oliveira Loureiro do, 1986-. "Mecanismo para execução especulativa de aplicações paralelizadas por técnicas DOPIPE usando replicação de estágios". [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275691.

Full text
Abstract:
Advisor: Guido Costa Souza de Araújo
Master's thesis, Universidade Estadual de Campinas, Instituto de Computação
Maximal utilization of cores in multicore architectures is key to realizing the potential performance available from modern microprocessors. In order to achieve scalable performance, parallelization techniques rely on carefully tuning speculative architecture support, the runtime environment, and software-based transformations. Hardware and software mechanisms have already been proposed to address this problem. They either require deep (and risky) changes to the existing hardware and cache coherence protocols, or exhibit poor performance scalability for a range of applications. Recent work on DOPIPE-based parallelization techniques (e.g., DSWP) has suggested that the combination of page-based data versioning with software speculation can result in good speed-ups. Although a software-only solution seems very attractive from an industry point of view, it does not enable the whole potential of the microarchitecture in detecting and exploiting parallelism. The addition of cache tags as an enabler for data versioning, as recently announced in the industry, could allow a better exploitation of parallelism at the microarchitecture level. In this work we present an execution model that supports both DOPIPE-based speculation and traditional speculative parallelization techniques. It is based on a simple cache-tagging approach for data versioning, which integrates smoothly with typical cache coherence protocols and does not require any changes to them. Experimental results, using SPEC and PARSEC benchmarks, reveal a geometric mean speedup of 21.6x for nine sequential programs on a 24-core simulated CMP, while demonstrating improved scalability when compared to a software-only approach.
Master's
Computer Science
Master of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
44

Elms, Kim. "Debugging optimised code using function interpretation". Thesis, Queensland University of Technology, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lucas, Divino César Soares, 1985-. "Modeling the performance impact of hot code misprediction in Cross-ISA virtual machines = Modelagem do impacto de erros de predição de código quente no desempenho de máquinas virtuais". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275628.

Full text
Abstract:
Advisors: Guido Costa Souza de Araújo, Edson Borin
Master's thesis, Universidade Estadual de Campinas, Instituto de Computação
Virtual machines (VMs) are systems that aim to eliminate the compatibility gap between two, possibly distinct, interfaces, thus enabling them to communicate. Acting as a mediator, the VM lies at an important position that enables it to foster innovative solutions for many problems. Such systems usually rely on emulation techniques, such as interpretation and dynamic binary translation, to execute guest application code. In order to select the best emulation technique for each code segment, the VM typically needs to predict whether the cost of compiling the code will be outweighed by its future execution time. This problem, in the common case, reduces to predicting whether the given code region will be frequently executed or not, a problem called Hot Code Prediction. Generally, if the predictor flags a given code region as hot, the VM instantly takes the decision to compile it. However, a problem arises with this strategy: the predictor's response is only a decision made by means of a heuristic, and thus it can be incorrect. Whenever the predictor flags a code region that will be infrequently executed (cold code) as hot code, we say that it is making a hotness misprediction. Whenever a misprediction happens, the technique the VM uses to emulate the code will not have its cost amortized by executing the optimized code, and thus the VM will in fact spend more time executing its own code than the guest application code. In this work we measure the impact of hotness mispredictions in a VM emulating several kinds of applications. In our analysis we evaluate the threshold-based hot code predictor, a technique commonly used to predict hot code fragments. To do so, we developed a mathematical model to simulate the behavior of such a predictor, and we use it to estimate the impact of mispredictions in several benchmarks. We show that this predictor frequently mispredicts the code hotness, and as a result the VM emulation performance becomes dominated by miscompilations. Moreover, we show how the threshold choice can affect the number of mispredictions and how this impacts the VM performance. We also show how the compilation, interpretation, and steady-state execution cost of translated instructions affect the VM performance. At the end we show that using the SPEC CPU 2006 benchmarks to measure the performance of a VM using the threshold-based predictor can lead to misleading results.
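
The threshold-based predictor analyzed above admits a very small operational model. In this C sketch, the counters, threshold value, and region counts are ours, not the thesis's mathematical model: region 0 is genuinely hot, while region 1 triggers exactly the kind of hotness misprediction the abstract describes.

    #include <stdio.h>

    #define THRESHOLD 50    /* compile a region once it has run this often */
    #define NREGIONS  2

    static int counter[NREGIONS];
    static int compiled[NREGIONS];

    /* Called each time a region is about to be emulated; returns 1 when
     * the hot-code prediction fires and the VM compiles the region. */
    static int maybe_compile(int region) {
        if (compiled[region]) return 0;
        if (++counter[region] >= THRESHOLD) {
            compiled[region] = 1;
            return 1;
        }
        return 0;
    }

    int main(void) {
        /* Region 0 runs 10000 times: the prediction is correct and the
         * compilation cost is amortized over the remaining executions. */
        for (int i = 0; i < 10000; i++)
            if (maybe_compile(0)) printf("compiled region 0 (truly hot)\n");
        /* Region 1 runs exactly 50 times: the predictor fires on the last
         * execution, but the compiled code is never reused -- a hotness
         * misprediction whose compile time is pure overhead. */
        for (int i = 0; i < 50; i++)
            if (maybe_compile(1)) printf("compiled region 1 (mispredicted)\n");
        return 0;
    }
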
Master's
Computer Science
Master of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Shen Carmen C., Duren, Russell Walker. "Evaluating Impulse C and multiple parallelism partitions for a low-cost reconfigurable computing system". Waco, Tex.: Baylor University, 2008. http://hdl.handle.net/2104/5280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Peters, Arthur. "The Basic Scheme for the Evaluation of Functional Logic Programs". PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/914.

Full text
Abstract:
Functional logic languages provide a powerful programming paradigm combining the features of functional languages and logic languages. However, current implementations of functional logic languages are complex, slow, or both. This thesis presents a scheme, called the Basic Scheme, for compiling and executing functional logic languages based on non-deterministic graph rewriting. This thesis also describes the implementation and optimization of a prototype of the Basic Scheme. The prototype is simple and performs well compared to other current implementations.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhuang, Xiaotong. "Compiler Optimizations for Multithreaded Multicore Network Processors". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11566.

Full text
Abstract:
Network processors are new types of multithreaded multicore processors geared towards achieving both fast processing speed and flexibility of programming. The architecture of network processors accommodates many special properties of packet processing, including multiple threads, multiple processor cores on the same chip, special functional units, a simplified ISA, and a simplified pipeline. These architectural peculiarities raise new challenges for compiler design and optimization. Due to very high clock speeds, the CPU-memory gap on such processors is huge, making registers extremely precious. Moreover, the register file is split into two banks, and for any ALU instruction, the two source operands must come from different banks. We present and compare three different approaches to register allocation and bank assignment. We also address the problem of sharing registers across threads in order to maximize the utilization of hardware resources. Context switches on the IXP network processor only happen when long-latency operations are encountered; as a result, context switches are highly frequent. The designers of the IXP network processor therefore made context switches extremely lightweight, i.e., only the program counter (PC) is stored together with the context. Since registers are not saved and restored during context switches, it becomes difficult to share registers across threads. On a conventional processor, each thread can assume that it can use the entire register file, because registers are always part of the context. With lightweight context switches, however, each thread must take a separate piece of the register file, making register usage inefficient. Programs executing on network processors typically have runtime constraints. Scheduling of multiple programs sharing a CPU must be orchestrated by the OS and the hardware using certain sharing policies. Real-time applications demand a real-time-aware OS kernel to meet their specified deadlines. However, due to stringent performance requirements on network processors, neither OS nor hardware mechanisms are typically feasible. In this work, we demonstrate that a compiler approach can achieve some of the OS scheduling and real-time scheduling functionality without introducing a hefty overhead.
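
The two-bank operand constraint can be stated in a few lines: a bank assignment is valid only if every ALU instruction's two sources land in different banks, which is essentially a graph 2-coloring problem. The checker below is our simplification of what the allocator must guarantee, not the IXP toolchain's code.

    #include <stdio.h>

    #define NREGS 4

    /* Each virtual register is assigned to bank A (0) or bank B (1).
     * For every ALU instruction, the two source operands must come
     * from different banks. */
    struct alu { int src1, src2; };

    static int valid(const int bank[NREGS], const struct alu *code, int n) {
        for (int i = 0; i < n; i++)
            if (bank[code[i].src1] == bank[code[i].src2])
                return 0;   /* bank conflict: recolor or insert a copy */
        return 1;
    }

    int main(void) {
        struct alu code[] = {{0, 1}, {1, 2}, {2, 3}};
        int ok[NREGS]  = {0, 1, 0, 1};   /* alternating banks: no conflicts   */
        int bad[NREGS] = {0, 1, 1, 0};   /* r1 and r2 share bank B: conflict  */
        printf("ok  assignment valid? %d\n", valid(ok, code, 3));
        printf("bad assignment valid? %d\n", valid(bad, code, 3));
        return 0;
    }
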
APA, Harvard, Vancouver, ISO, and other styles
49

Restovic Valderrama, Maria Inés. "Compilador ASN.1 e codificador/decodificador para BER". [s.n.], 1992. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259533.

Full text
Abstract:
Advisor: Manuel de Jesus Mendes
Master's thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
This work presents a tool called "Compilador ASN.1", whose main objective is to provide a concrete representation for the abstract syntax ASN.1, so that application-protocol PDU specifications written in ASN.1 can be translated into the C language. One of the main functions of the presentation layer is to encode and decode the PDUs' data values, based on the BER standard. Therefore, a second compiler task is to provide the specific encoding and decoding routines for each compiled PDU, using a set of functions available in two complementary libraries that carry out these conversions.
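
As a concrete instance of the BER rules such encode/decode libraries implement, here is a minimal C encoder for a non-negative INTEGER (identifier octet, short-form length octet, big-endian content octets); it is our sketch of the standard encoding, not the thesis's library.

    #include <stdio.h>

    /* BER definite-length encoding of a non-negative INTEGER: identifier
     * octet 0x02 (UNIVERSAL, primitive, tag 2), one short-form length
     * octet, then big-endian two's-complement content octets. */
    static int ber_encode_int(unsigned char *out, long v) {
        unsigned char tmp[sizeof v + 1];
        int len = 0;
        do { tmp[len++] = (unsigned char)(v & 0xFF); v >>= 8; } while (v > 0);
        if (tmp[len - 1] & 0x80) tmp[len++] = 0x00;  /* keep sign bit clear */
        out[0] = 0x02;                  /* INTEGER tag       */
        out[1] = (unsigned char)len;    /* short-form length */
        for (int i = 0; i < len; i++)   /* most significant octet first */
            out[2 + i] = tmp[len - 1 - i];
        return 2 + len;
    }

    int main(void) {
        unsigned char buf[16];
        int n = ber_encode_int(buf, 5);     /* INTEGER 5 -> 02 01 05 */
        for (int i = 0; i < n; i++) printf("%02X ", buf[i]);
        printf("\n");
        return 0;
    }
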
Master's
Automation
Master of Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Xiaoqing. "Component-based language implementation with object-oriented syntax and aspect-oriented semantics". Birmingham, Ala.: University of Alabama at Birmingham, 2007. http://www.mhsl.uab.edu/dt/2007p/wu.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Alabama at Birmingham, 2007.
Additional advisors: Jeff Gray, Marjan Mernik, Alan Sprague, Murat Tanik. Description based on contents viewed June 25, 2007; title from title screen. Includes bibliographical references (p. 132-138).
APA, Harvard, Vancouver, ISO, and other styles