Dissertations / Theses on the topic 'Computer program theory'

To see the other types of publications on this topic, follow the link: Computer program theory.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Computer program theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Khamiss, A.-A. M. "Program construction in Martin-Löf's theory of types." Thesis, University of Essex, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jervis, Clive Andrew. "A theory of program correctness with three valued logic." Thesis, University of Leeds, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ireland, Andrew. "Mechanization of program construction in Martin-Löf's theory of types." Thesis, University of Stirling, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Duong, Chay N. "A study of new-wave theory and an implementation of the new wave theory into GTSELOS computer program." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/21492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Newlands, D. A. "Structured development of an asynchronous forth processor using trace theory." Deakin University. School of Sciences, 1989. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20050915.140144.

Full text
Abstract:
This thesis examines the use of a structured design methodology in the design of asynchronous circuits so that high level constructs can be specified purely in terms of signal exchanges and without the intrusion of lower level concepts. Trace theory is used to specify a multi-processor Forth machine at a high level; part of the design is then further elaborated using trace theory operations to ensure that the behaviours of the lower level constructs will combine to give the high level specified behaviour without locking or other hazards. A novel form of threaded language is developed to take advantage of the machine architecture. At suitable points the design is tested by simulation. The stack element which is designed is reduced to an electric circuit which is itself tested by simulation to verify the design.
APA, Harvard, Vancouver, ISO, and other styles
6

Doshi, Vishal D. (Vishal Devendra). "Functional compression : theory and application." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43038.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and, (S.M. in Technology and Policy)--Massachusetts Institute of Technology Engineering Systems Division, Technology and Policy Program, 2008.
Includes bibliographical references (p. 75-77).
We consider the problem of functional compression. The objective is to separately compress possibly correlated discrete sources such that an arbitrary deterministic function of those sources can be computed given the compressed data from each source. This is motivated by problems in sensor networks and database privacy. Our architecture gives a quantitative definition of privacy for database statistics. Further, we show that it can provide significant coding gains in sensor networks. We consider both the lossless and lossy computation of a function. Specifically, we present results of the rate regions for three instances of the problem where there are two sources: 1) lossless computation where one source is available at the decoder, 2) under a special condition, lossless computation where both sources are separately encoded, and 3) lossy computation where one source is available at the decoder. Wyner and Ziv (1976) considered the third problem for the special case f(X, Y) = X and derived a rate distortion function. Yamamoto (1982) extended this result to a general function. Both of these results are in terms of an auxiliary random variable. Orlitsky and Roche (2001), for the zero distortion case, gave this variable a precise interpretation in terms of the properties of the characteristic graph; this led to a particular coding scheme. We extend that result by providing an achievability scheme that is based on the coloring of the characteristic graph. This suggests a layered architecture where the functional layer controls the coloring scheme, and the data layer uses existing distributed source coding schemes. We extend this graph coloring method to provide algorithms and rates for all three problems.
by Vishal D. Doshi.
S.M.
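
To give a concrete feel for the coloring-based achievability scheme mentioned in the abstract, here is a minimal sketch (our illustration, not code from the thesis; the joint distribution and the function f below are toy assumptions). We build the characteristic graph of a source X with respect to side information Y and a function f, then color it greedily; an encoder can then transmit colors rather than source symbols.

```python
from itertools import combinations

def characteristic_graph(xs, ys, p, f):
    """Connect x1 and x2 when they must be distinguished: some y co-occurs
    with both, and f disagrees on them."""
    edges = set()
    for x1, x2 in combinations(xs, 2):
        if any(p(x1, y) > 0 and p(x2, y) > 0 and f(x1, y) != f(x2, y)
               for y in ys):
            edges.add((x1, x2))
    return edges

def greedy_coloring(xs, edges):
    """Assign each vertex the smallest color unused by its neighbours."""
    adj = {x: set() for x in xs}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    color = {}
    for x in xs:
        taken = {color[n] for n in adj[x] if n in color}
        color[x] = next(c for c in range(len(xs) + 1) if c not in taken)
    return color

# Toy example: f depends only on the parity of X, so two colors suffice
# and the encoder need not send X itself.
xs, ys = range(4), range(2)
p = lambda x, y: 0.125                     # uniform joint pmf (assumption)
f = lambda x, y: x % 2
print(greedy_coloring(xs, characteristic_graph(xs, ys, p, f)))
# -> {0: 0, 1: 1, 2: 0, 3: 1}
```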
APA, Harvard, Vancouver, ISO, and other styles
7

Lee, Kathryn Green. "Comparison of the theory, application, and results of one- and two-dimensional flow models." Auburn, Ala., 2006. http://repo.lib.auburn.edu/2006%20Summer/Theses/LEE_KATHRYN_42.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ghica, Dan R. "A games-based foundation for compositional software model checking /." Oxford : Oxford University Computing Laboratory, 2002. http://web.comlab.ox.ac.uk/oucl/publications/tr/rr-02-13.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Heyer, Tim. "Semantic Inspection of Software Artifacts: From Theory to Practice." Doctoral thesis, Linköping: Linköping University, 2001. http://www.ep.liu.se/diss/science_technology/07/25/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rushton, Matthew V. "Static and dynamic type systems." Diss., 2004. http://hdl.handle.net/10066/1483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Weissenbacher, Georg. "Program analysis with interpolants." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:6987de8b-92c2-4309-b762-f0b0b9a165e6.

Full text
Abstract:
This dissertation discusses novel techniques for interpolation-based software model checking, an approximate method which uses Craig interpolation to compute invariants of programs. Our work addresses two aspects of program analyses based on model checking: verification (the construction of correctness proofs for programs) and falsification (the detection of counterexamples that violate the specification). In Hoare's calculus, a proof of correctness comprises assertions which establish that a program adheres to its specification. The principal challenge is to derive appropriate assertions and loop invariants. Contemporary software verification tools use Craig interpolation (as opposed to traditional predicate transformers such as the weakest precondition) to derive approximate assertions. The performance of the model checker is contingent on the Craig interpolants computed. We present novel interpolation techniques which provide the following advantages over existing methods. Firstly, the resulting interpolants are sound with respect to the bit-level semantics of programs, which is an improvement over interpolation systems that use linear arithmetic over the reals to approximate bit-vector arithmetic and/or do not support bit-level operations. Secondly, our interpolation systems afford us a choice of interpolants and enable us to fine-tune their logical strength and structure. In contrast, existing procedures are limited to a single ad-hoc choice of an interpolant. Interpolation-based verification tools are typically forced to refine an initial approximation repeatedly in order to achieve the accuracy required to establish or refute the correctness of a program. The detection of a counterexample containing a repetitive construct may necessitate one refinement step (involving the computation of additional interpolants) for each iteration of the loop. We present a heuristic that aims to avoid the repeated and computationally expensive construction of interpolants, thus enabling the detection of deeply buried defects such as buffer overflows. Finally, we present an implementation of our techniques and evaluate them on a set of standardised device driver and buffer overflow benchmarks.
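
As background for readers new to Craig interpolation, here is a small worked instance (ours, not an example from the dissertation): for an unsatisfiable conjunction A ∧ B, an interpolant is a formula I over the variables shared by A and B such that A implies I and I ∧ B is unsatisfiable.

```latex
A \;\equiv\; (x < 0) \wedge (y = x), \qquad
B \;\equiv\; (y \ge 0), \qquad
I \;\equiv\; (y < 0).
```

Here I follows from A, contradicts B, and mentions only the shared variable y; over the integers I' ≡ (y ≤ -1) would also serve. This freedom in logical strength and structure is exactly what the interpolation systems described above let one fine-tune.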
APA, Harvard, Vancouver, ISO, and other styles
12

Merten, Samuel A. "A Verified Program for the Enumeration of All Maximal Independent Sets." Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1479829000576398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Bicker, Marcelle M. "A toolkit for uncertainty reasoning and representation using fuzzy set theory in PROLOG expert systems /." Online version of thesis, 1987. http://hdl.handle.net/1850/10294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Escalante, Osuna Carlos. "Estimating the cost of GraphLog queries." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0002/NQ32743.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Majakwara, Jacob. "Application of multiserver queueing to call centres." Thesis, Rhodes University, 2010. http://hdl.handle.net/10962/d1015461.

Full text
Abstract:
The simplest and most widely used queueing model in call centres is the M/M/k system, sometimes referred to as Erlang-C. For many applications the model is an over-simplification: the Erlang-C model ignores, among other things, busy signals, customer impatience, and services that span multiple visits. Although the Erlang-C formula is easily implemented, it is not easy to obtain insight from its answers (for example, to find an approximate answer to questions such as "how many additional agents do I need if the arrival rate doubles?"). An approximation of the Erlang-C formula that gives structural insight into this type of question would be of use to better understand economies of scale in call centre operations. Erlang-C based predictions can also turn out highly inaccurate because of violations of underlying assumptions, and these violations are not straightforward to model. For example, non-exponential service times lead one to the M/G/k queue which, in stark contrast to the M/M/k system, is difficult to analyse. This thesis deals mainly with the general M/GI/k model with abandonment: the arrival process conforms to a Poisson process, service durations are independent and identically distributed with a general distribution, there are k servers, and customer abandoning times are independent and identically distributed with a general distribution. This thesis endeavours to analyse call centres using the M/GI/k model with abandonment; the data used will be simulated using the EZSIM software. The paper by Brown et al. [3], "Statistical Analysis of a Telephone Call Centre: A Queueing-Science Perspective," is the basis upon which this thesis is built.
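
For reference, the Erlang-C formula itself is short; the sketch below (our illustration, with invented arrival and service rates) computes the probability that a call waits and the average speed of answer, and makes the "how many additional agents" question easy to explore numerically.

```python
from math import factorial

def erlang_c(k, lam, mu):
    """Probability an arriving call must wait in an M/M/k queue (Erlang-C).
    lam: arrival rate, mu: service rate per agent, k: agents; needs lam < k*mu."""
    a = lam / mu                               # offered load in Erlangs
    if a >= k:
        raise ValueError("unstable system: need lam < k * mu")
    top = (a ** k / factorial(k)) * (k / (k - a))
    bottom = sum(a ** i / factorial(i) for i in range(k)) + top
    return top / bottom

lam, mu = 48.0, 1.0                            # 48 calls/hr, 1 call/hr per agent (assumed)
for k in (50, 55, 60):
    pw = erlang_c(k, lam, mu)
    asa = pw / (k * mu - lam)                  # mean wait (average speed of answer), hours
    print(k, "agents:", round(pw, 3), "wait prob,", round(60 * asa, 2), "min ASA")
```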
APA, Harvard, Vancouver, ISO, and other styles
16

Bacak, Gökşen. "Vertex Coloring of a Graph." [s.l.] : [s.n.], 2004. http://library.iyte.edu.tr/tezler/master/matematik/T000416.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Beşeri, Tina. "Edge Coloring of a Graph." [s.l.]: [s.n.], 2004. http://library.iyte.edu.tr/tezler/master/matematik/T000439.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Katirai, Hooman. "A theory and toolkit for the mathematics of privacy : methods for anonymizing data while minimizing information loss." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34526.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (leaves 85-86).
Privacy laws are an important facet of our society. But they can also serve as formidable barriers to medical research. The same laws that prevent casual disclosure of medical data have also made it difficult for researchers to access the information they need to conduct research into the causes of disease. But it is possible to overcome some of these legal barriers through technology. The US law known as HIPAA, for example, allows medical records to be released to researchers without patient consent if the records are provably anonymized prior to their disclosure. It is not enough for records to be seemingly anonymous. For example, one researcher estimates that 87.1% of the US population can be uniquely identified by the combination of their zip, gender, and date of birth - fields that most people would consider anonymous. One promising technique for provably anonymizing records is called k-anonymity. It modifies each record so that it matches k other individuals in a population - where k is an arbitrary parameter. This is achieved by, for example, changing specific information, such as a date of birth, to a less specific counterpart, such as a year of birth.
(cont.) Previous studies have shown that achieving k-anonymity while minimizing information loss is an NP-hard problem; thus a brute force search is out of the question for most real world data sets. In this thesis, we present an open source Java toolkit that seeks to anonymize data while minimizing information loss. It uses an optimization framework and methods typically used to attack NP-hard problems including greedy search and clustering strategies. To test the toolkit a number of previously unpublished algorithms and information loss metrics have been implemented. These algorithms and measures are then empirically evaluated using a data set consisting of 1000 real patient medical records taken from a local hospital. The theoretical contributions of this work include: (1) A new threat model for privacy - that allows an adversary's capabilities to be modeled using a formalism called a virtual attack database. (2) Rationally defensible information loss measures - we show that previously published information loss measures are difficult to defend because they fall prey to what is known as the "weighted indexing problem." To remedy this problem we propose a number of information-loss measures that are in principle more attractive than previously published measures.
(cont.) (3) We show that suppression and generalization - two concepts that were previously thought to be distinct - are in fact the same thing, insofar as each generalization can be represented by a suppression and vice versa. (4) We show that Domain Generalization Hierarchies can be harvested to assist the construction of a Bayesian network to measure information loss. (5) A database can be thought of as a sub-sample of a population. We outline a technique that allows one to predict k-anonymity in a population. This allows us, under some conditions, to release records that match fewer than k individuals in a database while still achieving k-anonymity against an adversary according to some probability and confidence interval. While we have chosen to focus our thesis on the anonymization of medical records, our methodologies, toolkit and command line tools are equally applicable to any tabular data such as the data one finds in relational databases - the most common type of database today.
by Hooman Katirai.
S.M.
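
The generalization step described in the abstract (a date of birth becomes a year of birth, a ZIP code loses trailing digits) is easy to sketch. The toy code below is our illustration with hypothetical field names; the thesis's toolkit frames the same task as an optimization problem over such steps.

```python
from collections import Counter

QUASI_IDS = ("zip", "gender", "dob")          # hypothetical quasi-identifier fields

def generalize(record):
    """One generalization step: truncate the ZIP to 3 digits, keep only the birth year."""
    r = dict(record)
    r["zip"] = r["zip"][:3] + "**"
    r["dob"] = r["dob"][:4]                   # '1980-05-17' -> '1980'
    return r

def is_k_anonymous(records, k):
    """Every quasi-identifier combination must occur at least k times."""
    groups = Counter(tuple(r[q] for q in QUASI_IDS) for r in records)
    return all(count >= k for count in groups.values())

rows = [{"zip": "02139", "gender": "F", "dob": "1980-05-17"},
        {"zip": "02141", "gender": "F", "dob": "1980-11-02"},
        {"zip": "02144", "gender": "F", "dob": "1980-01-30"}]
print(is_k_anonymous(rows, 2))                           # False: every row is unique
print(is_k_anonymous([generalize(r) for r in rows], 3))  # True after generalizing
```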
APA, Harvard, Vancouver, ISO, and other styles
19

Jami, Valentina. "Development of Computer Program for Wind Resource Assessment, Rotor Design and Rotor Performance." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1513703072278665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Shah, Vijay Pravin. "An advanced signal processing toolkit for Java applications." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-11102002-141018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Scholz, Jason B. "Real-time performance estimation and optimization of digital communication links /." Title page, contents and abstract only, 1992. http://web4.library.adelaide.edu.au/theses/09PH/09phs368.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Salim, Hamid M. "Cyber safety : a systems thinking and systems theory approach to managing cyber security risks." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90804.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 148-156).
If we are to manage cyber security risks more effectively in today's complex and dynamic Web 2.0 environment, then a new way of thinking is needed to complement traditional approaches. According to Symantec's 2014 Internet Security Threat Report, in 2012 more than ten million identities that included real names, dates of birth, and social security numbers were exposed by a single breach. In 2013 there were eight breaches that each exposed over ten million identities. These breaches were recorded despite the significant resources expended on managing cyber security risks each year by businesses and governments. The objective of this thesis was twofold. The first objective was to understand why traditional approaches for managing cyber security risks were not yielding desired results; the second was to propose a new method for managing cyber security risks more effectively. The thesis investigates widely used approaches and standards, and puts forward a method based on the premise that traditional technology-centric approaches have become ineffective on their own. This lack of efficacy can be attributed primarily to the fact that Web 2.0 is a dynamic and complex socio-technical system that is continuously evolving. This thesis proposes a new method for managing cyber security risks based on a model for accident or incident analysis used in the systems safety field. The model is called System-Theoretic Accident Model and Processes (STAMP), and it is rooted in systems thinking and systems theory. Based on a case study written specifically for this thesis, the largest cyber-attack reported in 2007, on a major US-based retailer, is analyzed using the STAMP model. The STAMP-based analysis revealed insights at both the systemic and the detailed level which would otherwise not be available if traditional approaches were used for analysis. Further, STAMP generated specific recommendations for managing cyber security risks more effectively.
by Hamid M. Salim.
S.M. in Engineering and Management
S.M.
APA, Harvard, Vancouver, ISO, and other styles
23

Kim, Pilho. "E-model event-based graph data model theory and implementation /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29608.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Madisetti, Vijay; Committee Member: Jayant, Nikil; Committee Member: Lee, Chin-Hui; Committee Member: Ramachandran, Umakishore; Committee Member: Yalamanchili, Sudhakar. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
24

Ornelas, Gilbert. "Set-valued extensions of fuzzy logic classification theorems /." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Lin, Chia-Yang. "Conceptual model builder." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2708.

Full text
Abstract:
Whenever one designs a new database system, an Entity-Relationship Diagram (ER diagram) is needed to present the structure of the database. A graphically well-arranged ER diagram helps you easily understand the entities, attributes, domains, primary keys, foreign keys, constraints, and relationships inside a database. This data-modeling tool is an ideal choice for companies and developers.
APA, Harvard, Vancouver, ISO, and other styles
26

Gedela, Naga Venkata Praveen Babu. "Measurement and Its Historical Context." Kent State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=kent1226037175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kaiser, Alexander. "Monotonicity in shared-memory program verification." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:1d16b4b5-524a-40db-b7bf-062374f8679c.

Full text
Abstract:
Predicate abstraction is a key enabling technology for applying model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analysing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of the present thesis is to close this gap. We propose an algorithmic solution implementing predicate abstraction that targets safety properties in non-recursive programs executed by an unbounded number of threads, which communicate via shared memory or higher-level mechanisms, such as mutexes and broadcasts. As system-level code makes frequent use of such primitives, their correct usage is critical to ensure reliability. Monotonicity - the property that thread actions remain executable when other threads are added to the current global state - is a natural and common feature of human-written concurrent software. It is also useful: if every thread's memory is finite, monotonicity often guarantees the decidability of safety properties even when the number of running threads is unspecified. In this thesis, we show that the process of obtaining finite-data thread abstractions for model checking is not always compatible with monotonicity. Predicate-abstracting certain mainstream asynchronous software such as the ticket busy-wait lock algorithm results in non-monotone multi-threaded Boolean programs, despite the monotonicity of the input program: the monotonicity is lost in the abstraction. As a result, the unbounded-thread Boolean programs do not give rise to well quasi-ordered systems [1], for which sound and complete safety checking algorithms are available. In fact, safety checking turns out to be undecidable for the obtained class of abstract programs, despite the finiteness of the individual threads' state spaces. Our solution is to restore the monotonicity in the abstract program, using an inexpensive closure operator that precisely preserves all safety properties from the (non-monotone) abstract program without the closure. As a second contribution, we present a novel, sound and complete, yet empirically much improved algorithm for verifying abstractions, applicable to general well quasi-ordered systems. Our approach is to gradually widen the set of safety queries during the search by program states that involve fewer threads and are thus easier to decide, and are likely to finalise the decision on earlier queries. To counter the negative impact of "bad guesses", i.e. program states that turn out feasible, the search is supported by a parallel engine that generates such states; these are never selected for widening. We present an implementation of our techniques and extensive experiments on multi-threaded C programs, including device driver code from FreeBSD and Solaris. The experiments demonstrate that by exploiting monotonicity, model checking techniques - enabled by predicate abstraction - scale to realistic programs even of a few thousand lines of multi-threaded C code.
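
For reference, here is the ticket busy-wait lock mentioned above, in a minimal Python rendering (our sketch; real implementations use an atomic fetch-and-increment, emulated here with a small lock). Adding threads only adds waiting ticket holders, so the concrete algorithm is monotone; it is the finite predicate abstraction of the relation between each ticket and the serving counter that loses this property.

```python
import threading
from itertools import count

class TicketLock:
    """Ticket busy-wait lock: each thread draws an increasing ticket and
    spins until the 'now serving' counter reaches it (FIFO hand-off)."""
    def __init__(self):
        self._tickets = count()
        self._draw = threading.Lock()   # stands in for atomic fetch-and-increment
        self.serving = 0

    def acquire(self):
        with self._draw:
            my_ticket = next(self._tickets)
        while self.serving != my_ticket:
            pass                        # busy-wait; a real lock would yield or pause

    def release(self):
        self.serving += 1               # admit the next ticket holder
```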
APA, Harvard, Vancouver, ISO, and other styles
28

Faitelson, David. "Program synthesis from domain specific object models." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:0c5a992e-dad4-435c-a576-e3ed504bcdbd.

Full text
Abstract:
Automatically generating a program from its specification eliminates a large source of errors that is often unavoidable in a manual approach. While a general purpose code generator is impossible to build, it is possible to build a practical code generator for a specific domain. This thesis investigates the theory behind Booster — a domain specific, object based specification language and automatic code generator. The domain of Booster is information systems — systems that consist of a rich object model in which the objects refer to each other to form a complicated network of associations. The operations of such systems are conceptually simple (changing the attributes of objects, adding or removing new objects and creating or destroying associations) but they are tricky to implement correctly. The thesis focuses on the theoretical foundation of the Booster approach, in particular on three contributions: semantics, model completion, and code generation. The semantics of a Booster model is a single abstract data type (ADT) where the invariants and the methods of all the classes in the model are promoted to the level of the ADT. This is different from the traditional view that considers each class as a separate ADT. The thesis argues that the Booster semantics is a better model of object oriented systems. The second important contribution is the idea of model completion — a process that augments the postconditions of methods with additional predicates that follow from the system’s invariant and the method’s original intention. The third contribution describes a simple but effective code generation technique that is based on interpreting postconditions as executable statements and uses weakest preconditions to ensure that the generated code refines its specification.
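
As a pointer for readers unfamiliar with the last idea, the standard weakest-precondition rules (textbook definitions, not Booster's own notation) are what make "the generated code refines its specification" checkable:

```latex
\mathrm{wp}(x := e,\; Q) \;=\; Q[e/x], \qquad
\mathrm{wp}(S_1 ; S_2,\; Q) \;=\; \mathrm{wp}\big(S_1,\; \mathrm{wp}(S_2,\; Q)\big).
```

For example, code x := x + 1 generated for the postcondition Q ≡ (x > 0) is correct under any precondition implying wp(x := x + 1, x > 0) = (x + 1 > 0), i.e. x ≥ 0 over the integers.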
APA, Harvard, Vancouver, ISO, and other styles
29

Masten-Cain, Kathryn. "Toward a Grounded Theory of Community Networking." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc500035/.

Full text
Abstract:
This dissertation presents a preliminary grounded theory of community networking based on 63 evaluations of community networking projects funded by the National Telecommunications and Information Administration’s Technology Opportunities Program (TOP) between 1994 and 2007. The substantive grounded theory developed is that TOP projects differed in their contribution to positive outcomes for intended disadvantaged community beneficiaries based on the extent and manner in which they involved the disadvantaged community during four grant process phases: partnership building, project execution, evaluation, and close-out. Positive outcomes for the community were facilitated by using existing communication channels, such as schools, to connect with intended beneficiaries; local financial institutions to provide infrastructure to support local trade; and training to connect community members to jobs. Theoretical contributions include situating outcomes for disadvantaged communities within the context of the grant process; introducing the “vulnerable community” concept; and identifying other concepts and properties that may be useful in further theoretical explorations. Methodological contributions include demonstrating grounded theory as a viable method for exploring large text-based datasets; paving the way for machine learning approaches to analyzing qualitative data; and illustrating how project evaluations can be used in a similar fashion as interview data. Practical contributions include providing information to guide community networking-related policies and initiatives from the perspectives of stakeholders at all levels, including establishing funded projects as local employment opportunities and re-conceptualizing sustainability in terms of human networks rather than technological networks.
APA, Harvard, Vancouver, ISO, and other styles
30

McHard, Richard William. "Sperner properties of the ideals of a Boolean lattice." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957320841&SrchMode=2&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268340717&clientId=48051.

Full text
Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 170-172). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
31

Krishnaswami, Neelakantan R. "Verifying Higher-Order Imperative Programs with Higher-Order Separation Logic." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/164.

Full text
Abstract:
In this thesis I show that it is possible to give modular correctness proofs of interesting higher-order imperative programs using higher-order separation logic. To do this, I develop a model higher-order imperative programming language, and develop a program logic for it. I demonstrate the power of my program logic by verifying a series of examples. This includes both realistic patterns of higher-order imperative programming such as the subject-observer pattern, as well as examples demonstrating the use of higher-order logic to reason modularly about highly aliased data structures such as the union-find disjoint set algorithm.
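
As a reminder of what makes the union-find example above challenging for modular specification, here is the classic structure in ordinary imperative code (a textbook sketch, not the thesis's verified program):

```python
class UnionFind:
    """Disjoint-set forest with path halving (a form of path compression)
    and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                               # merge rb's tree into ra's
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(5)
uf.union(0, 1); uf.union(1, 2)
print(uf.find(2) == uf.find(0))   # True: 0, 1, 2 share a representative
```

Path halving silently rewrites parent pointers that many clients alias, which is precisely the kind of sharing that higher-order separation logic is used to reason about modularly.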
APA, Harvard, Vancouver, ISO, and other styles
32

Cash, Heather. "A Library of Functions in C++ for Building and Manipulating Large Graphs." Honors in the Major Thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1213.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
33

Zhao, Wang. "Domain knowledge transformation (DKT) for conceptual design of mechanical systems /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9841351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Neron, Pierre. "A Quest for Exactness: Program Transformation for Reliable Real Numbers." PhD thesis, Ecole Polytechnique X, 2013. http://tel.archives-ouvertes.fr/tel-00924379.

Full text
Abstract:
This thesis presents an algorithm that eliminates square roots and divisions from loop-free programs used in embedded systems, while preserving their semantics. Eliminating these operations avoids run-time rounding errors, which can cause completely unexpected program behaviour. The transformation respects the constraints of embedded code, in particular the requirement that the resulting program run in fixed memory. It relies on two fundamental algorithms developed in this thesis. The first eliminates square roots and divisions from Boolean expressions containing comparisons of arithmetic expressions. The second solves a particular anti-unification problem, which we call constrained anti-unification. The program transformation is defined and proved in the PVS proof assistant, and is also implemented as a strategy of that system. Constrained anti-unification is also used to extend the transformation to programs containing functions, making it possible to eliminate square roots and divisions from specifications written in PVS. The robustness of the method is demonstrated on a substantial example: the elimination of square roots and divisions in an aircraft conflict detection program.
APA, Harvard, Vancouver, ISO, and other styles
35

Haller, Leopold Carl Robert. "Abstract satisfaction." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:68f76f3a-485b-4c98-8d02-5e8d6b844b4e.

Full text
Abstract:
This dissertation shows that satisfiability procedures are abstract interpreters. This insight provides a unified view of program analysis and satisfiability solving and enables technology transfer between the two fields. The framework underlying these developments provides systematic recipes that show how intuition from satisfiability solvers can be lifted to program analyzers, how approximation techniques from program analyzers can be integrated into satisfiability procedures and how program analyzers and satisfiability solvers can be combined. Based on this work, we have developed new tools for checking program correctness and for solving satisfiability of quantifier-free first-order formulas. These tools outperform existing approaches. We introduce abstract satisfaction, an algebraic framework for applying abstract interpretation to obtain sound, but potentially incomplete satisfiability procedures. The framework allows the operation of satisfiability procedures to be understood in terms of fixed point computations involving deduction and abduction transformers on lattices. It also enables satisfiability solving and program correctness to be viewed as the same algebraic problem. Using abstract satisfaction, we show that a number of satisfiability procedures can be understood as abstract interpreters, including Boolean constraint propagation, the DPLL and CDCL algorithms, Stålmarck's procedure, the DPLL(T) framework and solvers based on congruence closure and the Bellman-Ford algorithm. Our work leads to a novel understanding of satisfiability architectures as refinement procedures for abstract analyses and allows us to relate these procedures to independent developments in program analysis. We use this perspective to develop Abstract Conflict-Driven Clause Learning (ACDCL), a rigorous, lattice-based generalization of CDCL, the central algorithm of modern satisfiability research. The ACDCL framework provides a solution to the open problem of lifting CDCL to new problem domains and can be instantiated over many lattices that occur in practice. We provide soundness and completeness arguments for ACDCL that apply to all such instantiations. We evaluate the effectiveness of ACDCL by investigating two practical instantiations: FP-ACDCL, a satisfiability procedure for the first-order theory of floating point arithmetic, and CDFPL, an interval-based program analyzer that uses CDCL-style learning to improve the precision of a program analysis. FP-ACDCL is faster than competing approaches in 80% of our benchmarks and it is faster by more than an order of magnitude in 60% of the benchmarks. Out of 33 safe programs, CDFPL proves 16 more programs correct than a mature interval analysis tool and can conclusively determine the presence of errors in 24 unsafe benchmarks. Compared to bounded model checking, CDFPL is on average at least 260 times faster on our benchmark set.
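
To make the lattice viewpoint concrete, here is a toy interval domain with its meet, join, and a single deduction transformer (our illustration; the domains and learning machinery in CDFPL are far richer):

```python
from math import inf

TOP = (-inf, inf)                  # no information
BOTTOM = None                      # empty interval: a contradiction

def meet(a, b):
    """Greatest lower bound: interval intersection (deduction refines facts)."""
    if a is BOTTOM or b is BOTTOM:
        return BOTTOM
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else BOTTOM

def join(a, b):
    """Least upper bound: interval hull (used where control paths merge)."""
    if a is BOTTOM:
        return b
    if b is BOTTOM:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def deduce_sum(x, y, z):
    """Deduction transformer for the constraint z = x + y: refine z from x and y."""
    return meet(z, (x[0] + y[0], x[1] + y[1]))

x, y, z = (0, 5), (1, 2), TOP
print(deduce_sum(x, y, z))         # (1, 7): z is pinned down by x and y
```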
APA, Harvard, Vancouver, ISO, and other styles
36

Pajovic, Milutin. "The development and application of random matrix theory in adaptive signal processing in the sample deficient regime." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93775.

Full text
Abstract:
Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 237-243).
This thesis studies the problems associated with adaptive signal processing in the sample deficient regime using random matrix theory. The scenarios in which the sample deficient regime arises include, among others, the cases where the number of observations available in a period over which the channel can be approximated as time-invariant is limited (wireless communications), the number of available observations is limited by the measurement process (medical applications), or the number of unknown coefficients is large compared to the number of observations (modern sonar and radar systems). Random matrix theory, which studies how different encodings of eigenvalues and eigenvectors of a random matrix behave, provides suitable tools for analyzing how the statistics estimated from a limited data set behave with respect to their ensemble counterparts. The applications of adaptive signal processing considered in the thesis are (1) adaptive beamforming for spatial spectrum estimation, (2) tracking of time-varying channels and (3) equalization of time-varying communication channels. The thesis analyzes the performance of the considered adaptive processors when operating in the deficient sample support regime. In addition, it gains insights into behavior of different estimators based on the estimated second order statistics of the data originating from time-varying environment. Finally, it studies how to optimize the adaptive processors and algorithms so as to account for deficient sample support and improve the performance. In particular, random matrix quantities needed for the analysis are characterized in the first part. In the second part, the thesis studies the problem of regularization in the form of diagonal loading for two conventionally used spatial power spectrum estimators based on adaptive beamforming, and shows the asymptotic properties of the estimators, studies how the optimal diagonal loading behaves and compares the estimators on the grounds of performance and sensitivity to optimal diagonal loading. In the third part, the performance of the least squares based channel tracking algorithm is analyzed, and several practical insights are obtained. Finally, the performance of multi-channel decision feedback equalizers in time-varying channels is characterized, and insights concerning the optimal selection of the number of sensors, their separation and constituent filter lengths are presented.
by Milutin Pajovic.
Ph. D.
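
A minimal sketch of the diagonal loading studied in the second part (illustrative NumPy with invented dimensions and a unit-norm steering vector; how to choose the loading level delta optimally is one of the questions the thesis analyzes). With fewer snapshots than sensors the sample covariance is singular, and loading makes the beamformer well posed:

```python
import numpy as np

def mvdr_weights(snapshots, steering, delta):
    """Sample-matrix-inversion MVDR beamformer with diagonal loading.
    snapshots: N x L data array (L snapshots), steering: length-N vector,
    delta: diagonal loading level (the regularizer)."""
    N, L = snapshots.shape
    R_hat = snapshots @ snapshots.conj().T / L   # sample covariance (rank <= L)
    R_dl = R_hat + delta * np.eye(N)             # diagonal loading
    w = np.linalg.solve(R_dl, steering)
    return w / (steering.conj() @ w)             # enforce distortionless response

rng = np.random.default_rng(0)
N, L = 8, 6                                      # sample-deficient regime: L < N
v = np.ones(N) / np.sqrt(N)                      # assumed steering vector
X = rng.standard_normal((N, L))                  # toy noise-only snapshots
w = mvdr_weights(X, v, delta=0.1)
print(abs(v.conj() @ w))                         # ~1.0: constraint satisfied
```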
APA, Harvard, Vancouver, ISO, and other styles
37

Poernomo, Iman Hafiz 1976. "Variations on a theme of Curry and Howard : the Curry-Howard isomorphism and the proofs-as-programs paradigm adapted to imperative and structured program synthesis." Monash University, School of Computer Science and Software Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/9405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Merry, Alexander. "Reasoning with !-graphs." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:416c2e6d-2932-4220-8506-50e6b403b660.

Full text
Abstract:
The aim of this thesis is to present an extension to the string graphs of Dixon, Duncan and Kissinger that allows the finite representation of certain infinite families of graphs and graph rewrite rules, and to demonstrate that a logic can be built on this to allow the formalisation of inductive proofs in the string diagrams of compact closed and traced symmetric monoidal categories. String diagrams provide an intuitive method for reasoning about monoidal categories. However, this does not negate the ability for those using them to make mistakes in proofs. To this end, there is a project (Quantomatic) to build a proof assistant for string diagrams, at least for those based on categories with a notion of trace. The development of string graphs has provided a combinatorial formalisation of string diagrams, laying the foundations for this project. The prevalence of commutative Frobenius algebras (CFAs) in quantum information theory, a major application area of these diagrams, has led to the use of variable-arity nodes as a shorthand for normalised networks of Frobenius algebra morphisms, so-called "spider notation". This notation greatly eases reasoning with CFAs, but string graphs are inadequate to properly encode this reasoning. This dissertation firstly extends string graphs to allow for variable-arity nodes to be represented at all, and then introduces !-box notation – and structures to encode it – to represent string graph equations containing repeated subgraphs, where the number of repetitions is arbitrary. This can be used to represent, for example, the "spider law" of CFAs, allowing two spiders to be merged, as well as the much more complex generalised bialgebra law that can arise from two interacting CFAs. This work then demonstrates how we can reason directly about !-graphs, viewed as (typically infinite) families of string graphs. Of particular note is the presentation of a form of graph-based induction, allowing the formal encoding of proofs that previously could only be represented as a mix of string diagrams and explanatory text.
APA, Harvard, Vancouver, ISO, and other styles
39

Blažková, Klára. "Teorie vnitřního prostředí [Theory of the Indoor Environment]." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2013. http://www.nusl.cz/ntk/nusl-225941.

Full text
Abstract:
The aim of this master's thesis is to find an optimal solution, in terms of window shielding factors, for shaping the internal environment of an interior. Within this work, the computer program Teruna was used to create a mathematical model of the room examined in the experimental section. The theoretical part deals with the theory of the internal environment. The project covers the design and application of a mathematical model for two variants of an office room on the premises of a factory building.
APA, Harvard, Vancouver, ISO, and other styles
40

Suoja, Nicole Marie. "Directional wavenumber characteristics of short sea waves." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88473.

Full text
Abstract:
Thesis (Ph. D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution), 2000.
Includes bibliographical references (leaves 134-141).
by Nicole Marie Suoja.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
41

Fruth, Matthias. "Formal methods for the analysis of wireless network protocols." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:df2c08f4-001c-42d3-a2f4-9922f081fb49.

Full text
Abstract:
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, probabilistic model checking, a formal method for verification and performance analysis, is used. Contrary to test and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols. First, we present a new hybrid approach that effectively combines probabilistic model checking and state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods and their implementation in a tool. Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties. Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse Quality of Service and energy properties of it. Furthermore, we assess the applicability of different interference models. Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies. This is the first application of probabilistic model checking both to protocols of the ZigBee standard and protocols for slot allocation.
APA, Harvard, Vancouver, ISO, and other styles
42

Jaoua, Ali. "Recouvrement avant de programmes sous les hypothèses de spécifications déterministes et non déterministes [Forward recovery of programs under deterministic and non-deterministic specification hypotheses]." Toulouse 3, 1987. http://www.theses.fr/1987TOU30227.

Full text
Abstract:
A study of the specification and functional abstraction of programs. Program coherence is defined in relational terms, along with the various levels of coherence of program states and the characterizations of these levels. A practical forward-recovery methodology is proposed, based on the idea of preserving a given level of coherence by means of executable assertions. A hybrid program-validation methodology is also presented, based on the formal verification of certain critical properties of the program and on forward recovery for the non-critical properties.
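
A minimal sketch of the forward-recovery idea, with hypothetical invariant and repair functions standing in for the executable assertions and the chosen coherence level (our illustration, not the thesis's relational formalism):

```python
def forward_recover(state, invariant, repair):
    """Forward recovery: when an executable assertion (the coherence check)
    fails, repair the state in place and continue, rather than roll back.
    'invariant' and 'repair' are hypothetical application-supplied functions."""
    if not invariant(state):
        state = repair(state)              # restore the chosen coherence level
        assert invariant(state), "repair failed to restore coherence"
    return state

# Toy example: the invariant ties a cached total to the list it summarizes.
state = {"items": [3, 4], "total": 9}                  # corrupted cached total
ok = lambda s: s["total"] == sum(s["items"])
fix = lambda s: {**s, "total": sum(s["items"])}
print(forward_recover(state, ok, fix))                 # total repaired to 7
```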
APA, Harvard, Vancouver, ISO, and other styles
43

Huffman, Brian Charles. "HOLCF '11: A Definitional Domain Theory for Verifying Functional Programs." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/113.

Full text
Abstract:
HOLCF is an interactive theorem proving system that uses the mathematics of domain theory to reason about programs written in functional programming languages. This thesis introduces HOLCF '11, a thoroughly revised and extended version of HOLCF that advances the state of the art in program verification: HOLCF '11 can reason about many program definitions that are beyond the scope of other formal proof tools, while providing a high degree of proof automation. The soundness of the system is ensured by adhering to a definitional approach: New constants and types are defined in terms of previous concepts, without introducing new axioms. Major features of HOLCF '11 include two high-level definition packages: the Fixrec package for defining recursive functions, and the Domain package for defining recursive datatypes. Each of these uses the domain-theoretic concept of least fixed points to translate user-supplied recursive specifications into safe low-level definitions. Together, these tools make it easy for users to translate a wide variety of functional programs into the formalism of HOLCF. Theorems generated by the tools also make it easy for users to reason about their programs, with a very high level of confidence in the soundness of the results. As a case study, we present a fully mechanized verification of a model of concurrency based on powerdomains. The formalization depends on many features unique to HOLCF '11, and is the first verification of such a model in a formal proof tool.
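
The least-fixed-point device behind Fixrec and the Domain package can be illustrated with a small Kleene iteration on a lattice of finite sets (our sketch; HOLCF itself works with continuous functions on domains inside Isabelle, not in Python):

```python
def lfp(f, bottom=frozenset()):
    """Kleene iteration: least fixed point of a monotone function on a
    finite lattice, reached by iterating from bottom until stabilization."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Recursive spec 'evens = {0} ∪ {n+2 | n ∈ evens, n < 8}' solved as an lfp.
step = lambda s: frozenset({0}) | {n + 2 for n in s if n < 8}
print(sorted(lfp(step)))          # [0, 2, 4, 6, 8]
```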
APA, Harvard, Vancouver, ISO, and other styles
44

Ramsey, Terry 1946. "The calendar heap: A new implementation of the calendar queue." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/291354.

Full text
Abstract:
A new implementation of the calendar queue is described in this thesis. The calendar queue as previously implemented depended upon the use of multiple linked lists for the control of queue discipline. In the calendar heap implementation, the heap has been used to replace the previous functions of the linked list. Testing of the claim of O(1) execution time for the calendar queue was done. Comparisons of execution times of the calendar queue and the calendar heap have been made. Descriptions of the implementation as well as the complete C code for the calendar heap are included.
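
To show the flavour of the data structure, here is a toy "calendar of heaps" in the spirit of the thesis (our sketch, with a fixed bucket count and a simplified dequeue; the real calendar queue resizes its calendar and tracks the current day so that both operations stay O(1) on average):

```python
import heapq

class CalendarHeap:
    """Toy priority queue: events hash into 'day' buckets by timestamp, and
    each bucket is a small binary heap (the thesis's replacement for the
    original linked-list buckets)."""
    def __init__(self, num_buckets=8, day_width=1.0):
        self.width = day_width
        self.buckets = [[] for _ in range(num_buckets)]
        self.size = 0

    def enqueue(self, time, event):
        b = int(time / self.width) % len(self.buckets)
        heapq.heappush(self.buckets[b], (time, event))
        self.size += 1

    def dequeue(self):
        # Simplified: scan for the globally earliest event; a real calendar
        # queue walks from the current day to avoid the scan.
        b = min((i for i in range(len(self.buckets)) if self.buckets[i]),
                key=lambda i: self.buckets[i][0][0])
        self.size -= 1
        return heapq.heappop(self.buckets[b])

q = CalendarHeap()
for t, e in [(3.2, "c"), (0.5, "a"), (1.7, "b")]:
    q.enqueue(t, e)
print([q.dequeue()[1] for _ in range(3)])   # ['a', 'b', 'c']
```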
APA, Harvard, Vancouver, ISO, and other styles
45

Tessler, Michael. "Specifications of a software environment for the computer-aided design of control systems." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kreuger, Per. "Computational Issues in Calculi of Partial Inductive Definitions." Doctoral thesis, Decisions, Networks and Analytics lab, 1995. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-21196.

Full text
Abstract:
We study the properties of a number of algorithms proposed to explore the computational space generated by a very simple and general idea: the notion of a mathematical definition, together with a number of suggested formal interpretations of this idea. Theories of partial inductive definitions (PID) constitute a class of logics based on the notion of an inductive definition. Formal systems based on this notion can be used to generalize Horn logic, and naturally allow and suggest extensions which differ in interesting ways from generalizations based on first order predicate calculus. For example, the notion of completion generated by a calculus of PID and the resulting notion of negation is completely natural and does not require externally motivated procedures such as "negation as failure". For this reason, computational issues arising in these calculi deserve closer inspection. This work discusses a number of finitary theories of PID and analyzes the algorithmic and semantical issues that arise in each of them. There has been significant work on implementing logic programming languages in this setting, and we briefly present the programming language and knowledge modelling tool GCLA II, in which many of the computational problems discussed arise naturally in practice.

Also published as SICS Dissertation no. SICS-D-19

APA, Harvard, Vancouver, ISO, and other styles
47

Park, Seongmin. "A hypertext learning system for theory of computation." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/897499.

Full text
Abstract:
The Hypertext concept was introduced about 50 years ago. This thesis presents the development of a reference system using the Hypertext concept. HYATS (HYpertext Automata and Turing Theory Learning System) is a system which helps users learn many topics in the area of theory of computation. The system is implemented in Guide, a general purpose Hypertext system running in the PC-Windows environment. HYATS also includes a Turing machine simulating program written by Dominique Atger as her Master's Thesis in 1993, so that users can actually experiment with the Turing machines learned through HYATS. HYATS is thus not only a reference system but also a complete learning package. The motivation behind this project is to study the basic concepts of a Hypertext system so that it will also contribute to G-Net research. HYATS can be used as a prototype for the future development of versions using other Hypertext systems such as NoteCards.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
48

Islam, Mustafa R. "A hypertext graph theory reference system." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/879844.

Full text
Abstract:
The G-Net system is being developed by the members of the G-Net research group under the supervision of Dr. K. Jay Bagga. The principal objective of the G-Net system is to provide an integrated tool for dealing with various aspects of graph theory. The G-Net system is divided into two parts: GETS (Graph theory Experiments Tool Set), which provides a set of tools to experiment with graph theory, and HYGRES (HYpertext Graph theory Reference Service), the second subcomponent, which aids graph theory study and research. In this research a hypertext application is built to present graph theory concepts, graph models, and algorithms. In other words, HYGRES (Guide version) provides the hypertext facilities for organizing a graph theory database in a very natural and interactive way. A hypertext application development tool called Guide is used to implement this version of HYGRES. This project integrates the existing version of GETS so that it can also provide important services to HYGRES. The motivation behind this project is to study the initial criteria for developing a hypertext system, which can be used for the future development of a stand-alone version of the G-Net system.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
49

Gill, David Michael. "Automatic theorem proving programs and group presentations." Thesis, University of St Andrews, 1995. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Timothy J. "CHITRA93 : a tool to analyze system behavior by visualizing and modeling ensembles of traces /." Master's thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-10242009-020057/.

Full text
APA, Harvard, Vancouver, ISO, and other styles