Dissertations / Theses on the topic 'Computer programs – Verification'

Consult the top 50 dissertations / theses for your research on the topic 'Computer programs – Verification.'

1

Koskinen, Eric John. "Temporal verification of programs." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607698.

2

Duracz, Jan Andrzej. "Verification of floating point programs." Thesis, Aston University, 2010. http://publications.aston.ac.uk/15778/.

Abstract:
In this thesis we present an approach to automated verification of floating point programs. Existing techniques for automated generation of correctness theorems are extended to produce proof obligations for accuracy guarantees and absence of floating point exceptions. A prototype automated real number theorem prover is presented, demonstrating a novel application of function interval arithmetic in the context of subdivision-based numerical theorem proving. The prototype is tested on correctness theorems for two simple yet nontrivial programs, proving exception freedom and tight accuracy guarantees automatically. The experiments show how function intervals can be used to combat the information loss problems that limit the applicability of traditional interval arithmetic in the context of hard real number theorem proving.
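To make the core mechanism concrete: interval arithmetic evaluates an expression over enclosures of the true real-valued result, so a bound proved on the enclosure is a bound on every concrete value. The sketch below illustrates only this basic idea; the class, the outward-rounding scheme, and the example bound are illustrative assumptions, not the thesis's prover, which works with function intervals.

```python
# Minimal interval-arithmetic sketch (requires Python 3.9+ for math.nextafter).
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Round outward so the result encloses every real sum.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

    def within(self, bound):
        # True only if every value in the enclosure has magnitude <= bound.
        return max(abs(self.lo), abs(self.hi)) <= bound

# x is only known to lie in [0.1, 0.2]; bound |x*x + x| conservatively.
x = Interval(0.1, 0.2)
print((x * x + x).within(0.25))   # True: the whole enclosure fits the bound
```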
3

Wickerson, John Peter. "Concurrent verification for sequential programs." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/265613.

Abstract:
This dissertation makes two contributions to the field of software verification. The first explains how verification techniques originally developed for concurrency can be usefully applied to sequential programs. The second describes how sequential programs can be verified using diagrams that have a parallel nature. The first contribution involves a new treatment of stability in verification methods based on rely-guarantee. When an assertion made in one thread of a concurrent system cannot be invalidated by the actions of other threads, that assertion is said to be 'stable'. Stability is normally enforced through side-conditions on rely-guarantee proof rules. This dissertation proposes instead to encode stability information into the syntactic form of the assertion. This approach, which we call explicit stabilisation, brings several benefits. First, we empower rely-guarantee with the ability to reason about library code for the first time. Second, when the rely-guarantee method is redeployed in a sequential setting, explicit stabilisation allows more details of a module's implementation to be hidden when verifying clients. Third, explicit stabilisation brings a more nuanced understanding of the important issue of stability in concurrent and sequential verification; such an understanding grows ever more important as verification techniques grow ever more complex. The second contribution is a new method of presenting program proofs conducted in separation logic. Building on work by Jules Bean, the ribbon proof is a diagrammatic alternative to the standard 'proof outline'. By emphasising the structure of a proof, ribbon proofs are intelligible and hence useful pedagogically. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they are highly scalable; this we illustrate with a ribbon proof of the Version 7 Unix memory manager. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. We describe the ribbon proof system, prove its soundness and completeness, and outline a prototype tool for mechanically checking the diagrams it produces.
4

Lang, Matthew. "Maximality modular verification and implementability." Columbus, Ohio: Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1243962353.

5

Sobel, Ann E. Kelley. "Modular verification of concurrent systems." The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487267546983528.

6

Nagarajan, R. "Typed concurrent programs: specification and verification." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369244.

7

Ponzio, Stephen J. (Stephen John). "Restricted branching programs and hardware verification." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/35042.

8

Simic, Stella. "Bit-precise Verification of Numerical Properties in Fixed-point Programs." Thesis, IMT Alti Studi Lucca, 2022. http://e-theses.imtlucca.it/365/1/Simic_phdthesis.pdf.

Abstract:
Numerical software is prone to inaccuracies due to the finite representation of numbers. These inaccuracies propagate, possibly non-linearly, throughout the statements of a program, making it hard to predict the accumulated errors. Moreover, in programs that contain control structures, numerical errors can affect the control flow. As a result of these inaccuracies, reachability, and thus safety, may be altered with respect to the intended infinite-precision computation. This thesis considers programs that use fixed-point arithmetic to compute over non-integer quantities in finite precision. We first define a semantics of fixed-point operations in terms of operations over bit-vectors. The proposed semantics generalizes current attempts at a standardization of fixed-point arithmetic. We then consider the problem of bit-precise numerical accuracy certification of fixed-point programs with control structures and arithmetic over variables of arbitrary, mixed precision and possibly non-deterministic value. By applying a set of parametrized transformation rules based on computable expressions for the errors incurred by single program statements, we reduce the problem of assessing whether a fixed-point program can exceed a given error bound to a reachability problem in a bit-vector program. We present an experimental evaluation of the certification technique, implemented in a prototype analyzer in a bounded model checking-based verification workflow. Our experiments on a set of fixed-point arithmetic routines commonly used in industry show that the proposed technique can successfully certify numerical errors, and can do so bit-precisely, making it the only verification technique of its kind.
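For readers unfamiliar with the underlying arithmetic, the sketch below shows the standard reduction of fixed-point values to integer (bit-vector) operations and the point at which rounding error enters. The Q-format, parameter choices, and truncation behaviour are illustrative assumptions, not the thesis's exact semantics.

```python
# Minimal fixed-point sketch: a Q(f) value stores round(x * 2^f) as an integer.
F = 8  # number of fractional bits (an illustrative choice)

def to_fix(x: float, f: int = F) -> int:
    return round(x * (1 << f))

def from_fix(v: int, f: int = F) -> float:
    return v / (1 << f)

def fix_mul(a: int, b: int, f: int = F) -> int:
    # The exact product has 2f fractional bits; truncating the low f bits
    # is exactly where rounding error enters.
    return (a * b) >> f

a, b = to_fix(0.3), to_fix(0.7)
approx = from_fix(fix_mul(a, b))
exact = 0.3 * 0.7
print(approx, exact, abs(approx - exact))  # the representation/rounding error
```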
9

Mahtab, Tazeen 1981. "Automated verification of model-based programs under uncertainty." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28453.

Abstract:
Highly robust embedded systems have been enabled through software executives that have the ability to reason about their environment. Those that employ the model-based autonomy paradigm automatically diagnose and plan future actions, based on models of themselves and their environment. This includes autonomous systems that must operate in harsh and dynamic environments, such as deep space. Such systems must be robust to a large space of possible failure scenarios. This large state space poses difficulties for traditional scenario-based testing, leading to a need for new approaches to verification and validation. We propose a novel verification approach that generates an analysis of the most likely failure scenarios for a model-based program. By finding only the most likely failures, we increase the relevance and reduce the quantity of information the developer must examine. First, we provide the ability to verify a stochastic system that encodes both off-nominal and nominal scenarios. We incorporate uncertainty into the verification process by acknowledging that all such programs may fail, but in different ways, with different likelihoods. The verification process is one of finding the most likely executions that fail the specification. Second, we provide a capability for verifying executable specifications that are fault-aware. We generalize offline plant model verification to the verification of model-based programs, which consist of both a plant model that captures the physical plant's nominal and off-nominal states and a control program that specifies its desired behavior. Third, we verify these specifications through execution of the RMPL executive itself. We therefore circumvent the difficulty of formalizing the behavior of complex software executives. We present the RMPL Verifier, a tool for verification of model-based programs written in the Reactive Model-based Programming Language (RMPL) for the Titan execution kernel. Using greedy forward-directed search, this tool finds as counterexamples to the program's goal specification the most likely executions that do not achieve the goal within a given time bound.
10

Hanna, Ziyad. "A symbolic execution framework for algorithm-level modelling and verification of computer microarchitecture." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.560923.

Abstract:
This dissertation addresses the challenge of modelling and functional verification for complex computer micro-architecture designs. It is evident that the emerging span and complexity of new computer architectures outstrip existing design modelling and verification methods. Several attempts in industry and academia, including High Level Modelling, still do not scale to address this problem. Typically they lack precise and clear language semantics for formal analysis, and do not have native support for concurrency, or the design language and methodology do not fit. Therefore, the gap between what current solutions provide and what industry needs is increasing. In this research we aim to leap ahead of the common incremental research in this area, and develop a new framework for algorithm level modelling and verification. We introduce a high level and executable Architectural Specification Language (ASL) for modelling the functional behaviour of the architectural algorithms. The semantics of our models is based on the theory of Abstract State Machines with synchronous parallel execution and finite choice, which we find naturally suitable for hardware modelling. Our framework is also powered by native symbolic execution algorithms for enabling high-level verification, design explorations and refinement checks of the high level models down to the design implementation. We developed a new framework that implements our ideas through ASL and supports symbolic execution. We demonstrate the utility of our language and symbolic execution on examples and case studies in various modelling domains, and show a promising framework and methodology. We believe our approach will make it easier to explore micro-architectural algorithm behavior and easier to validate this using dynamic or formal techniques, thus yielding a promising attack on the modelling and verification problem, and enabling more productive convergence to high quality implementations.
11

Zhou, Zhiquan. "Verification of program properties: from testing to semi-proving." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31245134.

12

Müller-Olm, Markus. "Modular compiler verification: a refinement algebraic approach advocating stepwise abstraction." Berlin [u.a.]: Springer, 1997. http://www.loc.gov/catdir/enhancements/fy0815/97013428-d.html.

13

Nakade, Radha Vi. "Verification of Task Parallel Programs Using Predictive Analysis." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6176.

Abstract:
Task parallel programming languages provide a way for creating asynchronous tasks that can run concurrently. The advantage of using task parallelism is that the programmer can write code that is independent of the underlying hardware. The runtime determines the number of processor cores that are available and the most efficient way to execute the tasks. When two or more concurrently executing tasks access a shared memory location and at least one of the accesses is a write, a data race occurs in the program. Data races can introduce non-determinism in the program output, making it important to have data race detection tools. To detect data races in task parallel programs, a new sound and complete technique based on computation graphs is presented in this work. The data race detection algorithm runs in O(N²) time, where N is the number of nodes in the graph. A computation graph is a directed acyclic graph that represents the execution of the program. For detecting data races, the computation graph stores shared heap locations accessed by the tasks. An algorithm for creating computation graphs augmented with memory locations accessed by the tasks is also described here. This algorithm runs in O(N) time, where N is the number of operations performed in the tasks. This work also presents an implementation of this technique for the Java implementation of the Habanero programming model. The results of this data race detector are compared to Java Pathfinder's precise race detector extension and its permission-regions-based race detector extension. The results show a significant reduction in the time required for data race detection using this technique.
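A minimal sketch of the underlying check (illustrative only; the thesis's tool, data structures, and complexity bounds differ): two accesses to the same location race when neither event reaches the other in the computation graph and at least one access is a write.

```python
# Toy race detection over a computation graph: a DAG of program events.
import itertools
from collections import defaultdict

edges = defaultdict(set)      # event -> successor events (happens-before)
accesses = []                 # (event, location, is_write)

def add_edge(u, v):
    edges[u].add(v)

def reaches(u, v, seen=None):
    # Plain DFS reachability; memoization would tighten the quadratic bound.
    if u == v:
        return True
    seen = seen or set()
    seen.add(u)
    return any(w not in seen and reaches(w, v, seen) for w in edges[u])

def races():
    for (e1, l1, w1), (e2, l2, w2) in itertools.combinations(accesses, 2):
        if l1 == l2 and (w1 or w2) and not reaches(e1, e2) and not reaches(e2, e1):
            yield (e1, e2, l1)

# Two unordered tasks both writing x: reported as a race.
add_edge("fork", "task1"); add_edge("fork", "task2")
accesses += [("task1", "x", True), ("task2", "x", True)]
print(list(races()))  # [('task1', 'task2', 'x')]
```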
14

Newcomb, Tom C. "Model checking data-independent systems with arrays." Thesis, University of Oxford, 2003. http://ora.ox.ac.uk/objects/uuid:7fc75da9-e653-4578-b061-8a1cc30ba609.

Abstract:
We say a program is data-independent with respect to a data type X if the operations it can perform on values of type X are restricted to just equality testing, although the system may also input, store and move around (via assignment) values of type X within its variables. This property can be exploited to give procedures for the automatic verification, called model checking, of such programs independently of the instance for the type X. This thesis considers data-independent programs with arrays, which are useful for modelling memory systems such as cache protocols. The main question of interest is the following parameterised model-checking problem: whether a program satisfies its specification for all non-empty finite instances of its types. In order to obtain these results, we present a UNITY-like programming language with arrays that is suited to the study of decidability of various model-checking problems, whilst being useful for prototyping memory systems such as caches. Its semantics are given in terms of transition systems, and we use the modal μ-calculus, a branching-time temporal logic with recursion, as our specification language. We describe a model-checking procedure for programs that use arrays indexed by one data-independent type X and storing values from another Y. This allows us to prove properties about parameterised systems: for example, that memory systems can be verified independently of memory size and data values. This decidability result is shown to extend to data-independent programs with many types and multidimensional arrays which are acyclic, meaning it is not possible to form loops of types in the 'indexed by' relation. Conversely, it is shown that even reachability model-checking problems are undecidable for classes of programs that allow cyclic-array programs. We give practical motivation for these decidability results by demonstrating how one could verify a fault-tolerant interface on a set of unreliable memories, and the cache protocol in the Pentium Pro processor. Significantly, the verifications are performed independently of many of these systems' parameters. These case studies suggest two extensions to the language: an array reset instruction, which sets every element of an array to a particular value, and an array assignment or copy instruction. Both are shown to restrict decidability of model checking problems; however we can obtain some interesting decidability results for arrays with reset by restricting the number of arrays to just one, or by allowing the arrays only to store fixed finite types, such as the booleans.
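The symmetry that data independence buys can be seen in a toy example: because the program below only stores, moves, and equality-tests its data, consistently renaming the values cannot change its control flow, which is what justifies verifying one small instance in place of all instances. This is a hand-rolled illustration, not the thesis's UNITY-like language.

```python
# A data-independent computation over type X: only ==, store, and move.
def cache_hits(accesses):
    last = None
    hits = 0
    for v in accesses:
        if v == last:          # equality testing is the only operation on X
            hits += 1
        last = v               # store / move around
    return hits

rename = {"a": 1, "b": 2}
trace = ["a", "a", "b", "a"]
print(cache_hits(trace))                          # 1
print(cache_hits([rename[v] for v in trace]))     # 1: identical behaviour
```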
15

Chiu, Ping-kuen Peter. "Interval logic and modified labelled-net for system specification and verification." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1985. http://hub.hku.hk/bib/B31206839.

16

Acosta, Zapién Carlos Eduardo. "A constraint-based approach to verification of programs with floating-point numbers." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

17

Luo, Xueyi. "A tool for computer verification of properties of certain classes of visibility graphs." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897510.

Abstract:
The segment endpoint visibility graph is a representation scheme for art gallery problems, guard problems, and other shortest path or shortest circuit problems. In visibility graph research, drawing graphs is a time-consuming task. VGE (Visibility Graphs Editor) was developed to let visibility graph researchers create and modify graphs interactively in the X-window environment. An appropriate graphical user interface allows the researcher to edit a graph, save and open a file, and make a hard copy of a graph. VGE is written in C under the X-window environment using the EZD [3] graphics tool; the thesis also discusses the uses of EZD. Although it is still only a prototype, VGE is a successful tool for analyzing visibility graphs.
18

Escalante, Marco Antonio. "Probabilistic timing verification and timing analysis for synthesis of digital interface controllers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0023/NQ36637.pdf.

19

Grobler, Leon D. "A kernel to support computer-aided verification of embedded software." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2479.

Abstract:
Formal methods, such as model checking, have the potential to improve the reliability of software. Abstract models of systems are subjected to formal analysis, often showing subtle defects not discovered by traditional testing.
20

Xie, Gaoyan. "Fundamental studies on a decompositional and hybrid approach to automatic verification of component-based systems." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Dissertations/Summer2005/g%5Fxie%5F072205.pdf.

21

Cyriac, Aiswarya. "Verification of communicating recursive programs via split-width." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2014. http://tel.archives-ouvertes.fr/tel-01015561.

Abstract:
This thesis investigates automata-theoretic techniques for the verification of physically distributed machines communicating via unbounded reliable channels. Each of these machines may run several recursive programs (multi-threading). A recursive program may also use several unbounded stack and queue data-structures for its local-computation needs. Such real-world systems are so powerful that all verification problems become undecidable. We introduce and study a new parameter called split-width for the under-approximate analysis of such systems. Split-width is the minimum number of splits required in the behaviour graphs to obtain disjoint parts which can be reasoned about independently. Thus it provides a divide-and-conquer approach for their analysis. With the parameter split-width, we obtain optimal decision procedures for various verification problems on these systems like reachability, inclusion, etc. and also for satisfiability and model checking against various logical formalisms such as monadic second-order logic, propositional dynamic logic and temporal logics. It is shown that behaviours of a system have bounded split-width if and only if they have bounded clique-width. Thus, by Courcelle's results on uniformly bounded-degree graphs, split-width is not only sufficient but also necessary to get decidability for MSO satisfiability checking. We then study the feasibility of distributed controllers for our generic distributed systems. We propose several controllers, some finite state and some deterministic, which ensure that the behaviours of the system have bounded split-width. Such a distributedly controlled system yields decidability for the various verification problems by inheriting the optimal decision procedures for split-width. These also extend or complement many known decidable subclasses of systems studied previously.
22

Ranganath, Venkatesh Prasad. "Scalable and accurate approaches for program dependence analysis, slicing, and verification of concurrent object oriented programs." Diss., Manhattan, Kan. : Kansas State University, 2006. http://hdl.handle.net/2097/248.

23

Bradfield, Julian Charles. "Verifying temporal properties of systems with applications to petri nets." Thesis, University of Edinburgh, 1991. http://hdl.handle.net/1842/6565.

Abstract:
This thesis provides a powerful general-purpose proof technique for the verification of systems, whether finite or infinite. It extends the idea of finite local model-checking, which was introduced by Stirling and Walker: rather than traversing the entire state space of a model, as is done for model-checking in the sense of Emerson, Clarke et al. (checking whether a (finite) model satisfies a formula), local model-checking asks whether a particular state satisfies a formula, and only explores the nearby states far enough to answer that question. The technique used was a tableau method, constructing a tableau according to the formula and the local structure of the model. This tableau technique is here generalized to the infinite case by considering sets of states, rather than single states; because the logic used, the propositional modal mu-calculus, separates simple modal and boolean connectives from powerful fix-point operators (which make the logic more expressive than many other temporal logics), it is possible to give a relatively straightforward set of rules for constructing a tableau. Much of the subtlety is removed from the tableau itself, and put into a relation on the state space defined by the tableau; the success of the tableau then depends on the well-foundedness of this relation. This development occupies the second and third chapters: the second considers the modal mu-calculus, and explains its power, while the third develops the tableau technique itself. The generalized tableau technique is exhibited on Petri nets, and various standard notions from net theory are shown to play a part in the use of the technique on nets; in particular, the invariant calculus has a major role. The requirement for a finite presentation of tableaux for infinite systems raises the question of the expressive power of the mu-calculus. This is studied in some detail, and it is shown that on reasonably powerful models of computation, such as Petri nets, the mu-calculus can express properties that are not merely undecidable, but not even arithmetical. The concluding chapter discusses some of the many questions still to be answered, such as the incorporation of formal reasoning within the tableau system, and the power required of such reasoning.
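As a taste of the logic's expressiveness (a standard textbook example, not drawn from the thesis): the property "p holds infinitely often along every path" needs an alternation of fixpoints, with the greatest fixpoint re-imposing, forever, the least-fixpoint obligation of reaching p again.

```latex
% "On all paths, p holds infinitely often":
% nu Z (forever) wraps mu Y (eventually reach p again), with [-]
% quantifying over all successor states.
\nu Z.\; \mu Y.\; [-]\bigl((p \wedge Z) \vee Y\bigr)
```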
24

Wilson, Thomas. "The Omnibus language and integrated verification approach." Thesis, University of Stirling, 2007. http://hdl.handle.net/1893/260.

Abstract:
This thesis describes the Omnibus language and its supporting framework of tools. Omnibus is an object-oriented language which is superficially similar to the Java programming language but uses value semantics for objects and incorporates a behavioural interface specification language. Specifications are defined in terms of a subset of the query functions of the classes for which a frame-condition logic is provided. The language is well suited to the specification of modelling types and can also be used to write implementations. An overview of the language is presented and then specific aspects such as subtleties in the frame-condition logic, the implementation of value semantics and the role of equality are discussed. The challenges of reference semantics are also discussed. The Omnibus language is supported by an integrated verification tool which provides support for three assertion-based verification approaches: run-time assertion checking, extended static checking and full formal verification. The different approaches provide different balances between rigour and ease of use. The Omnibus tool allows these approaches to be used together in different parts of the same project. Guidelines are presented in order to help users avoid conflicts when using the approaches together. The use of the integrated verification approach to meet two key requirements of safe software component reuse, to have clear descriptions and some form of certification, are discussed along with the specialised facilities provided by the Omnibus tool to manage the distribution of components. The principles of the implementation of the tool are described, focussing on the integrated static verifier module that supports both extended static checking and full formal verification through the use of an intermediate logic. The different verification approaches are used to detect and correct a range of errors in a case study carried out using the Omnibus language. The case study is of a library system where copies of books, CDs and DVDs are loaned out to members. The implementation consists of 2278 lines of Omnibus code spread over 15 classes. To allow direct comparison of the different assertion-based verification approaches considered, run-time assertion checking, extended static checking and then full formal verification are applied to the application in its entirety. This directly illustrates the different balances between error coverage and ease-of-use which the approaches offer. Finally, the verification policy system is used to allow the approaches to be used together to verify different parts of the application.
25

Vroon, Daron. "Automatically Proving the Termination of Functional Programs." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19734.

Abstract:
Establishing the termination of programs is a fundamental problem in the field of software verification. For transformational programs, termination is used to extend partial correctness to total correctness. For reactive systems, termination reasoning is used to establish liveness properties. In the context of theorem proving, termination is used to establish the consistency of definitional axioms and to automate proofs by induction. Of course, termination is an undecidable problem, as Turing himself proved. However, the question remains: how automatic can a general termination analysis be in practice? In this dissertation, we develop two new general frameworks for reasoning about termination and demonstrate their effectiveness in automating the task of proving termination in the domain of applicative first-order functional languages. The foundation of the first framework is the development of the first known complete set of algorithms for ordinal arithmetic over an ordinal notation. We provide algorithms for ordinal ordering (<), addition, subtraction, multiplication, and exponentiation on the ordinals up to epsilon-naught. We prove correctness and complexity results for each algorithm. We also create a library for automating arithmetic reasoning over epsilon-naught in the ACL2 theorem proving system. This ordinal library enables new termination proofs that were not possible in previous versions of ACL2. The foundation of the second framework is an algorithm for fully automating termination reasoning with no user assistance. This algorithm uses a combination of theorem proving and static analysis to create a Calling Context Graph (CCG), a novel abstraction that captures the looping behavior of the program. Calling Context Measures (CCMs) are then used to prove that no infinite path through the CCG can be an actual computation of the program. We implement this algorithm in ACL2, and empirically evaluate its effectiveness on the regression suite, a collection of over 11,000 user-defined functions from a wide variety of applications.
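To make the first framework tangible, here is a small sketch of ordinals below epsilon-naught in Cantor normal form, showing comparison and the absorption behaviour of ordinal addition. The representation and function names are illustrative assumptions, not ACL2's actual ordinal library.

```python
# An ordinal is a list of (exponent, coefficient) pairs with strictly
# decreasing exponents, where exponents are themselves ordinals;
# [] denotes 0, and the finite ordinal n is [([], n)] since w^0 * n = n.

def cmp_ord(x, y):
    """Return -1, 0, or 1, comparing CNF ordinals lexicographically."""
    for (a, c), (b, d) in zip(x, y):
        e = cmp_ord(a, b)
        if e != 0:
            return e
        if c != d:
            return -1 if c < d else 1
    return (len(x) > len(y)) - (len(x) < len(y))

def add_ord(x, y):
    """Ordinal addition: terms of x below y's leading exponent are absorbed."""
    if not y:
        return x
    lead = y[0][0]
    keep = [t for t in x if cmp_ord(t[0], lead) > 0]
    same = [t for t in x if cmp_ord(t[0], lead) == 0]
    if same:  # merge equal leading exponents: w^a*c + w^a*d = w^a*(c+d)
        return keep + [(lead, same[0][1] + y[0][1])] + y[1:]
    return keep + y

three = [([], 3)]                 # the finite ordinal 3
omega = [([([], 1)], 1)]          # omega = w^1 * 1
print(cmp_ord(add_ord(three, omega), omega))   # 0: 3 + omega == omega
print(add_ord(omega, three))                   # omega + 3 keeps its tail
```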
26

Rezine, Othmane. "Verification of networks of communicating processes : Reachability problems and decidability issues." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-334788.

Abstract:
Computer systems are used in almost all aspects of our lives and our dependency on them keeps on increasing. When computer systems are used to handle critical tasks, any software failure can cause severe human and/or material losses. Therefore, for such applications, it is important to detect software errors at an early stage of software development. Furthermore, the growing use of concurrent and distributed programs exponentially increases the complexity of computer systems, making the problem of detecting software errors even harder (if not impossible). This calls for defining systematic and efficient techniques to evaluate the safety and the correctness of programs. The aim of Model-Checking is to analyze automatically whether a given program satisfies its specification. Early applications of Model-Checking were restricted to systems whose behaviors can be captured by finite graphs, so-called finite-state systems. Since many computer systems cannot be modeled as finite-state machines, there has been a growing interest in extending the applicability of Model-Checking to infinite-state systems. The goal of this thesis is to extend the applicability of Model-Checking to three instances of infinite-state systems: Ad-Hoc Networks, Dynamic Register Automata and Multi Pushdown Systems. Each one of these instances models challenging types of networks of communicating processes. In both Ad-Hoc Networks and Dynamic Register Automata, communication is carried out through message passing. In each type of network, a graph topology models the communication links between processes in the network. The graph topology is static in the case of Ad-Hoc Networks, while it is dynamic in the case of Dynamic Register Automata. The number of processes in both types of networks is unbounded. Finally, we consider Multi Pushdown Systems, a model used to study the behaviors of concurrent programs composed of sequential recursive programs communicating through a shared memory.
27

Gerber, Erick D. B. "A model checker for the LF system." Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/19597.

Abstract:
Computer aided verification techniques, such as model checking, can be used to improve the reliability of software. Model checking is an algorithmic approach to illustrate the correctness of temporal logic specifications in the formal description of hardware and software systems. In contrast to traditional testing tools, model checking relies on an exhaustive search of all the possible configurations that these systems may exhibit. Traditionally model checking is applied to abstract or high level designs of software. However, often interpreting or translating these abstract designs to implementations introduces subtle errors. In recent years one trend in model checking has been to apply the model checking algorithm directly to the implementations instead. This thesis is concerned with building an efficient model checker for a small concurrent language developed at the University of Stellenbosch. This special purpose language, LF, is aimed at the development of small embedded systems. The design of the language was carefully considered to promote safe programming practices. Furthermore, the language and its runtime support system were designed to allow direct model checking of LF programs. To achieve this, the model checker extends the existing runtime support infrastructure to generate the state space of an executing LF program.
28

Hay, Karen June. "A proof methodology for verification of real-time and fault-tolerance properties of distributed programs." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186261.

Abstract:
From the early days of programming, the dependability of software has been a concern. The development of distributed systems that must respond in real-time and continue to function correctly in spite of hardware failure has increased the concern while making the task of ensuring dependability more complex. This dissertation presents a technique for improving confidence in software designed to execute on a distributed system of fail-stop processors. The methodology presented is based on a temporal logic augmented with time intervals and probability distributions. A temporal logic augmented with time intervals, Bounded Time Temporal Logic (BTTL), supports the specification and verification of real-time properties such as "The program will poll the sensor every t to T time units." Analogously, a temporal logic augmented with probability distributions, Probabilistic Bounded Time Temporal Logic (PBTTL), supports reasoning about fault-tolerant properties such as "The program will complete with probability less than or equal to p", and a combination of these properties such as "The program will complete within t and T time units with probability less than or equal to p." The syntax and semantics of the two logics, BTTL and PBTTL, are carefully developed. This includes development of a program state model, state transition model, message passing system model and failure system model. An axiomatic program model is then presented and used for the development of a set of inference rules. The inference rules are designed to simplify use of the logic for reasoning about typical programming language constructs and commonly occurring programming scenarios. In addition to offering a systematic approach for verifying typical behaviors, the inference rules are intended to support the derivation of formulas expressing timing and probabilistic relationships between the execution times and probabilities of individual statements, groups of statements, message passing and failure recovery. Use of the methodology is demonstrated in examples of varying complexity, including five real-time examples and four combined real-time and fault-tolerant examples.
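The flavour of such specifications can be conveyed with two schematic formulas; the notation here is illustrative, since the thesis defines its own syntax for BTTL and PBTTL.

```latex
% BTTL-style: every poll is followed by another poll within t to T time units.
\Box \bigl( \mathit{poll} \rightarrow \Diamond_{[t,T]}\, \mathit{poll} \bigr)
% PBTTL-style: completion within t to T time units with probability at most p.
\Pr \bigl[ \Diamond_{[t,T]}\, \mathit{complete} \bigr] \le p
```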
29

Gaither, Danielle. "Improving Software Quality through Syntax and Semantics Verification of Requirements Models." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1404542/.

Abstract:
Software defects can frequently be traced to poorly-specified requirements. Many software teams manage their requirements using tools such as checklists and databases, which lack a formal semantic mapping to system behavior. Such a mapping can be especially helpful for safety-critical systems. Another limitation of many requirements analysis methods is that much of the analysis must still be done manually. We propose techniques that automate portions of the requirements analysis process, as well as clarify the syntax and semantics of requirements models using a variety of methods, including machine learning tools and our own tool, VeriCCM. The machine learning tools used help us identify potential model elements and verify their correctness. VeriCCM, a formalized extension of the causal component model (CCM), uses formal methods to ensure that requirements are well-formed, as well as providing the beginnings of a full formal semantics. We also explore the use of statecharts to identify potential abnormal behaviors from a given set of requirements. At each stage, we perform empirical studies to evaluate the effectiveness of our proposed approaches.
30

Huffman, Brian Charles. "HOLCF '11: A Definitional Domain Theory for Verifying Functional Programs." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/113.

Abstract:
HOLCF is an interactive theorem proving system that uses the mathematics of domain theory to reason about programs written in functional programming languages. This thesis introduces HOLCF '11, a thoroughly revised and extended version of HOLCF that advances the state of the art in program verification: HOLCF '11 can reason about many program definitions that are beyond the scope of other formal proof tools, while providing a high degree of proof automation. The soundness of the system is ensured by adhering to a definitional approach: New constants and types are defined in terms of previous concepts, without introducing new axioms. Major features of HOLCF '11 include two high-level definition packages: the Fixrec package for defining recursive functions, and the Domain package for defining recursive datatypes. Each of these uses the domain-theoretic concept of least fixed points to translate user-supplied recursive specifications into safe low-level definitions. Together, these tools make it easy for users to translate a wide variety of functional programs into the formalism of HOLCF. Theorems generated by the tools also make it easy for users to reason about their programs, with a very high level of confidence in the soundness of the results. As a case study, we present a fully mechanized verification of a model of concurrency based on powerdomains. The formalization depends on many features unique to HOLCF '11, and is the first verification of such a model in a formal proof tool.
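The domain-theoretic idea behind a package like Fixrec can be sketched operationally: a recursive definition denotes the least fixed point of a functional, i.e. the limit of finite approximations built up from the everywhere-undefined function. The toy below illustrates that reading only; it is not HOLCF's mechanism, and the names are invented.

```python
# Least-fixed-point reading of recursion: iterate the functional from bottom.
def fix(functional, depth):
    """Apply the functional `depth` times starting from the bottom element."""
    f = lambda n: None            # bottom: "undefined" on every input
    for _ in range(depth):
        f = functional(f)
    return f

# fact = F(fact), where F adds one more layer of the recursion.
F = lambda rec: lambda n: 1 if n == 0 else (
        None if rec(n - 1) is None else n * rec(n - 1))

print(fix(F, 3)(2))   # 2: defined, since three approximations suffice
print(fix(F, 3)(5))   # None: still "undefined" at this approximation
```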
31

Salmon, Yann. "Analyse d'atteignabilité pour les programmes fonctionnels avec stratégie d'évaluation en profondeur." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S085/document.

Abstract:
Proving that programs behave correctly is difficult; one uses proof tools, which must rely on over-approximation (because of Rice's theorem). Automaton completion is such a tool, which over-approximates the set of reachable terms during the execution of a program represented as a term rewriting system. An evaluation strategy dictates which subterm of a term should be rewritten first; taking this into account allows for a better approximation. Our thesis puts forward an adaptation of automaton completion to the innermost strategy, which is used by OCaml, among others. We prove the soundness and the precision of our adaptation and show how it fits into a broader framework for the analysis of functional (OCaml) programs.
32

Thiagarajan, Hariharan. "Dependence analysis for inferring information flow properties in Spark ADA programs." Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/13187.

Abstract:
With the increase in development of safety and security critical systems, it is important to have more sophisticated methods for engineering such systems. It can be difficult to understand and verify critical properties of these systems because of their ever growing size and complexity. Even a small error in a complex system may result in casualty or significant monetary loss. Consequently, there is a rise in the demand for scalable and accurate techniques to enable faster development and verification of high assurance systems. This thesis focuses on discovering dependencies between various parts of a system and leveraging that knowledge to infer information flow properties and to verify security policies specified for the system. The primary contribution of this thesis is a technique to build dependence graphs for languages which feature abstraction and refinement. Inter-procedural slicing and inter-procedural chopping are the techniques used to analyze the properties of the system statically. The approach outlined in this thesis provides a domain-specific language to query the information flow properties and to specify security policies for a critical system. The specified policies can then be verified using existing static analysis techniques. All the above contributions are integrated with a development environment used to develop the critical system. The resulting software development tool helps programmers develop, infer, and verify safety and security systems in a single unified environment.
33

Castro, David. "Structured arrows : a type-based framework for structured parallelism." Thesis, University of St Andrews, 2018. http://hdl.handle.net/10023/16093.

Abstract:
This thesis deals with the important problem of parallelising sequential code. Despite the importance of parallelism in modern computing, writing parallel software still relies on many low-level and often error-prone approaches. These low-level approaches can lead to serious execution problems such as deadlocks and race conditions. Due to the non-deterministic behaviour of most parallel programs, testing parallel software can be both tedious and time-consuming. A way of providing guarantees of correctness for parallel programs would therefore provide significant benefit. Moreover, even if we ignore the problem of correctness, achieving good speedups is not straightforward, since this generally involves rewriting a program to consider a (possibly large) number of alternative parallelisations. This thesis argues that new languages and frameworks are needed. These language and frameworks must not only support high-level parallel programming constructs, but must also provide predictable cost models for these parallel constructs. Moreover, they need to be built around solid, well-understood theories that ensure that: (a) changes to the source code will not change the functional behaviour of a program, and (b) the speedup obtained by doing the necessary changes is predictable. Algorithmic skeletons are parametric implementations of common patterns of parallelism that provide good abstractions for creating new high-level languages, and also support frameworks for parallel computing that satisfy the correctness and predictability requirements that we require. This thesis presents a new type-based framework, based on the connection between structured parallelism and structured patterns of recursion, that provides parallel structures as type abstractions that can be used to statically parallelise a program. Specifically, this thesis exploits hylomorphisms as a single, unifying construct to represent the functional behaviour of parallel programs, and to perform correct code rewritings between alternative parallel implementations, represented as algorithmic skeletons. This thesis also defines a mechanism for deriving cost models for parallel constructs from a queue-based operational semantics. In this way, we can provide strong static guarantees about the correctness of a parallel program, while simultaneously achieving predictable speedups.
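A hylomorphism, the unifying construct mentioned above, pairs an unfold that generates structure from a seed with a fold that consumes it, fused so the intermediate structure never materialises. The sketch below is a toy illustration under that reading; the names and the left-fold formulation are assumptions, not the thesis's typed framework.

```python
# A fused unfold-then-fold: no intermediate list is ever built.
def hylo(algebra, coalgebra, seed, unit):
    """Unfold the seed step by step, folding each element as it appears."""
    acc = unit
    step = coalgebra(seed)
    while step is not None:       # None plays the role of Nil
        x, seed = step            # a Cons cell: element plus next seed
        acc = algebra(acc, x)
        step = coalgebra(seed)
    return acc

# factorial as a hylomorphism: unfold n into n, n-1, ..., 1 and fold with *.
fact = lambda n: hylo(lambda a, b: a * b,
                      lambda k: None if k == 0 else (k, k - 1),
                      n, 1)
print(fact(5))  # 120
```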
34

Krishnaswami, Neelakantan R. "Verifying Higher-Order Imperative Programs with Higher-Order Separation Logic." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/164.

Abstract:
In this thesis I show that it is possible to give modular correctness proofs of interesting higher-order imperative programs using higher-order separation logic. To do this, I develop a model higher-order imperative programming language, and develop a program logic for it. I demonstrate the power of my program logic by verifying a series of examples. This includes realistic patterns of higher-order imperative programming such as the subject-observer pattern, as well as examples demonstrating the use of higher-order logic to reason modularly about highly aliased data structures such as the union-find disjoint set algorithm.
35

Csallner, Christoph. "Combining over- and under-approximating program analyses for automatic software testing." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24764.

36

Sisco, Zachary David. "Verifying Data-Oriented Gadgets in Binary Programs to Build Data-Only Exploits." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1533308865314126.

37

Herms, Paolo. "Certification of a Tool Chain for Deductive Program Verification." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00789543.

Abstract:
This thesis belongs to the domain of software verification. The goal of verifying software is to ensure that an implementation, a program, satisfies the requirements, the specification. This is especially important for critical computer programs, such as control systems for air planes, trains and power plants. Here a malfunctioning occurring during operation would have catastrophic consequences. Software requirements can concern safety or functioning. Safety requirements, such as not accessing memory locations outside valid bounds, are often implicit, in the sense that any implementation is expected to be safe. On the other hand, functional requirements specify what the program is supposed to do. The specification of a program is often expressed informally by describing in English or some other natural language the mission of a part of the program code. Usually program verification is then done by manual code review, simulation and extensive testing. But this does not guarantee that all possible execution cases are captured. Deductive program proving is a complete way to ensure soundness of the program. Here a program along with its specification is a mathematical object and its desired properties are logical theorems to be formally proved. This way, if the underlying logic system is consistent, we can be absolutely sure that the proven property holds for the program in any case. Generation of verification conditions is a technique helping the programmer to prove the properties he wants about his programs. Here a VCG tool analyses a program and its formal specification and produces a mathematical formula, whose validity implies the soundness of the program with respect to its specification. This is particularly interesting when the generated formulas can be proved automatically by external SMT solvers. This approach is based on works of Hoare and Dijkstra and is well-understood and shown correct in theory. Deductive verification tools have nowadays reached a maturity allowing them to be used in industrial context where a very high level of assurance is required. But implementations of this approach must deal with all kinds of language features and can therefore become quite complex and contain errors -- in the worst case stating that a program is correct even if it is not. This raises the question of the level of confidence granted to these tools themselves. The aim of this thesis is to address this question. We develop, in the Coq system, a certified verification-condition generator (VCG) for ACSL-annotated C programs. Our first contribution is the formalisation of an executable VCG for the Whycert intermediate language, an imperative language with loops, exceptions and recursive functions, and its soundness proof with respect to the blocking big-step operational semantics of the language. A second contribution is the formalisation of the ACSL logical language and the semantics of ACSL annotations of Compcert's Clight. From the compilation of ACSL-annotated Clight programs to Whycert programs and its semantics preservation proof, combined with a Whycert axiomatisation of the Compcert memory model, results our main contribution: an integrated certified tool chain for verification of C programs on top of Compcert. By combining our soundness result with the soundness of the Compcert compiler we obtain a Coq theorem relating the validity of the generated proof obligations with the safety of the compiled assembly code.
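The verification-condition idea at the heart of the tool chain can be sketched in a few lines for a toy language: a weakest-precondition transformer turns a program and a postcondition into one formula whose validity implies correctness. This only illustrates the Hoare/Dijkstra approach the abstract describes; Whycert itself is a Coq development, and the names below are invented.

```python
# Toy weakest-precondition VCG; formulas are plain strings for readability.
def wp(stmt, post):
    kind = stmt[0]
    if kind == "assign":              # ("assign", x, expr)
        _, x, e = stmt
        return post.replace(x, f"({e})")   # naive substitution post[e/x]
    if kind == "seq":                 # ("seq", s1, s2)
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    if kind == "assert":              # ("assert", formula)
        _, phi = stmt
        return f"({phi}) and ({post})"
    raise ValueError(kind)

prog = ("seq", ("assign", "x", "x + 1"), ("assert", "x > 0"))
# The generated VC: prove this formula valid and the program meets its spec.
print(wp(prog, "True"))   # ((x + 1) > 0) and (True)
```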
38

Bull, J. J. D. "A comparison of two different model checking techniques." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53235.

Abstract:
Model checking is a computer-aided verification technique that is used to verify properties about the formal description of a system automatically. This technique has been applied successfully to detect subtle errors in reactive systems. Such errors are extremely difficult to detect by using traditional testing techniques. The conventional method of applying model checking is to construct a model manually either before or after the implementation of a system. Constructing such a model requires time, skill and experience. An alternative method is to derive a model from an implementation automatically. In this thesis two techniques of applying model checking to reactive systems are compared, both of which have problems as well as advantages. Two specific strategies are compared in the area of protocol development: 1. Structuring a protocol as a transition system, modelling the system, and then deriving an implementation from the model. 2. Automatically translating implementation code to a verifiable model. Structuring a reactive system as a transition system makes it possible to verify the control flow of the system at implementation level, as opposed to verifying the control flow at abstract level. The result is a closer correspondence between implementation and specification (model). At the same time testing, which is restricted to small, independent code fragments that manipulate data, is simplified significantly. The construction of a model often takes too long; therefore, verification results may no longer be applicable when they become available. To address this problem, the technique of automated model extraction was suggested. This technique aims to reduce the time required to construct a model by minimising manual input during model construction. A transition system is a low-level formalism and direct execution through interpretation is feasible. However, the overhead of interpretation is the major disadvantage of this technique. With automated model extraction there are disadvantages too. For example, differences between the implementation and specification languages, such as constructs present in the implementation language that cannot be expressed in the modelling language, make the development of an automated model extraction tool extremely difficult. In conclusion, the two techniques are compared against a set of software development considerations. Since a specific technique is not always preferable, guidelines are proposed to help select the best approach in different circumstances.
APA, Harvard, Vancouver, ISO, and other styles
39

Might, Matthew Brendon. "Environment Analysis of Higher-Order Languages." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16289.

Full text
Abstract:
Any analysis of higher-order languages must grapple with the tri-faceted nature of lambda: in one construct, the fundamental control, environment and data structures of a language meet and intertwine. With the control facet tamed nearly two decades ago, this work brings the environment facet to heel, defining the environment problem and developing its solution: environment analysis. Environment analysis allows a compiler to reason about the equivalence of environments, i.e., name-to-value mappings, that arise during a program's execution. In this dissertation, two different techniques, abstract counting and abstract frame strings, make this possible. A third technique, abstract garbage collection, makes both of these techniques more precise and, counter to intuition, often faster as well. An array of optimizations, and even deeper analyses that depend upon environment analysis, provide motivation for this work. In an abstract interpretation, a single abstract entity represents a set of concrete entities. When the entities under scrutiny are bindings (single name-to-value mappings, the atoms of environments), the crux of environment analysis is determining when the equality of two abstract bindings implies the equality of their concrete counterparts. Abstract counting does this by tracking the size of the represented sets, looking for singletons, in order to apply the following principle: if {x} = {y}, then x = y. Abstract frame strings enable environmental reasoning by statically tracking the possible stack change between the births of two environments; when this change is effectively empty, the environments are equivalent. Abstract garbage collection improves precision by intermittently removing unreachable environment structure during abstract interpretation.
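As a rough illustration of abstract counting, the following toy Python sketch (my own construction; the class, names, and saturating-count representation are assumptions, not Might's code) tracks how many concrete bindings each abstract binding stands for, saturating at "many", so that equality reasoning is licensed only for singletons.

```python
from collections import Counter

MANY = 2  # abstract counts saturate: 0, 1, or "many" (>= 2)

class AbstractCounts:
    """Toy abstract-counting domain for bindings of variables."""

    def __init__(self):
        self.count = Counter()

    def bind(self, var):
        """Record one more concrete binding of `var` in this abstract state."""
        self.count[var] = min(self.count[var] + 1, MANY)

    def must_equal(self, var):
        """Two references to `var`'s abstract binding denote the same
        concrete binding only when the represented set is a singleton:
        if {x} = {y}, then x = y."""
        return self.count[var] == 1

counts = AbstractCounts()
counts.bind("k")             # one concrete binding of k so far
assert counts.must_equal("k")
counts.bind("k")             # a second binding (e.g. in a loop): now "many"
assert not counts.must_equal("k")
```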
APA, Harvard, Vancouver, ISO, and other styles
40

Boulgakov, Alexandre. "Improving scalability of exploratory model checking." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:76acb8bf-52e7-4078-ab4f-65f3ea07ba3d.

Full text
Abstract:
As software and hardware systems grow more complex and we rely more on their correctness and reliability, it becomes increasingly important to formally verify certain properties of these systems. If done naïvely, verifying a system can easily require exponentially more work than running it, in order to account for all possible executions. However, there are often symmetries or other properties of a system that can be exploited to reduce the amount of necessary work. In this thesis, we present a number of approaches that do this in the context of the CSP model checker FDR. CSP is named for Communicating Sequential Processes: parallel combinations of state machines with synchronised communications. In the FDR model, the component processes are typically converted to explicit state machines, while their parallel combination is evaluated lazily during model checking. Our contributions are motivated by this model but are applicable to other models as well. We first address the scalability of the component machines by proposing a lazy compiler for a subset of CSPM selected to model parameterised state machines. This is a typical case where the state-space explosion can make model checking impractical, since the size of the state space is exponential in the number and size of the parameters. A lazy approach to evaluating these systems allows only the reachable subset of the state space to be explored. For example, in studying security protocols, it is common to model an intruder parameterised by knowledge of each of a list of facts; even a relatively small 100 facts results in an intractable 2^100 states, but the rest of the system can ensure that only a small number of these states are reachable. Next, we address the scalability of the overall combination by presenting novel algorithms for bisimulation reduction with respect to strong bisimulation, divergence-respecting delay bisimulation, and divergence-respecting weak bisimulation. Since a parallel composition is related to the Cartesian product of its components, performing a relatively time-consuming bisimulation reduction on the components can reduce its size significantly; an efficient bisimulation algorithm is therefore very desirable. This thesis is motivated by practical implementations, and we discuss an implementation of each of the proposed algorithms in FDR. We thoroughly evaluate their performance and demonstrate their effectiveness.
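For a flavour of the bisimulation side, here is a naive partition-refinement sketch of strong-bisimulation minimisation in Python (illustrative only; the thesis's algorithms, and FDR's, are considerably more refined and also handle the divergence-respecting variants). States are merged exactly when no action can tell them apart.

```python
def strong_bisim_classes(states, actions, succ):
    """Partition `states` into strong-bisimulation classes.
    succ(state, action) -> frozenset of successor states."""
    blocks = [set(states)]           # start with one block; refine to stability
    changed = True
    while changed:
        changed = False
        block_id = {s: i for i, b in enumerate(blocks) for s in b}
        new_blocks = []
        for b in blocks:
            def sig(s):
                # Signature: per action, the set of successor blocks.
                return tuple(frozenset(block_id[t] for t in succ(s, a))
                             for a in actions)
            groups = {}
            for s in b:
                groups.setdefault(sig(s), set()).add(s)
            if len(groups) > 1:
                changed = True
            new_blocks.extend(groups.values())
        blocks = new_blocks
    return blocks

# Example: states 0 and 1 loop on 'a'; state 2 deadlocks. So 0 ~ 1, but not 2.
succ = lambda s, a: frozenset({(s + 1) % 2}) if s in (0, 1) else frozenset()
assert sorted(map(sorted, strong_bisim_classes([0, 1, 2], ["a"], succ))) \
       == [[0, 1], [2]]
```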
APA, Harvard, Vancouver, ISO, and other styles
41

Ngô, Van Chan. "Formal verification of a synchronous data-flow compiler : from Signal to C." PhD thesis, Université Rennes 1, 2014. http://tel.archives-ouvertes.fr/tel-01067477.

Full text
Abstract:
Synchronous languages such as Signal, Lustre and Esterel are dedicated to designing safety-critical systems. Their compilers are large, complicated programs that may be incorrect in some contexts and may then silently produce bad compiled code from source programs. Such bad compiled code can invalidate safety properties that were guaranteed on the source programs by applying formal methods. Adopting the translation validation approach, this thesis aims at formally proving the correctness of the highly optimizing, industrial Signal compiler. The correctness proof represents both the source program and the compiled code in a common semantic framework, then formalizes a relation between the source program and its compiled code to express that the semantics of the source program are preserved in the compiled code.
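The following schematic Python sketch conveys the shape of translation validation; it is my own caricature (the thesis works with the formal semantics of Signal and the generated C, not with exhaustive testing), with both "programs" reduced to step functions compared over a finite input set.

```python
# Translation validation, schematically: instead of proving the compiler
# correct once and for all, each compilation run is checked for semantic
# preservation between one source program and its compiled code.
# Both step functions below are hypothetical stand-ins.

def source_step(x):      # "source program": one synchronous step
    return x + x

def compiled_step(x):    # "compiled code": strength-reduced version
    return 2 * x

def validate(src, obj, inputs):
    """Report counterexamples to semantic preservation, if any."""
    return [i for i in inputs if src(i) != obj(i)]

assert validate(source_step, compiled_step, range(1000)) == []
```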
APA, Harvard, Vancouver, ISO, and other styles
42

Heym, Wayne David. "Computer Program Verification: Improvements for Human Reasoning." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487929745336224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Latham, J. T. "Abstraction in program verification." Thesis, University of Manchester, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.372026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Veselinov, Roman Nikolov. "Formalization and verification of rewriting-based security policies." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-043008-165615/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ubhayakar, Sonali S. "Evaluation of program specification and verification systems." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FUbhayakar.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ubhayakar, Sonali S. "Evaluation of program specification and verification systems." Thesis, Monterey, California. Naval Postgraduate School, 2003. http://hdl.handle.net/10945/893.

Full text
Abstract:
Computer systems that earn a high degree of trust must be backed by rigorous verification methods. A verification system is an interactive environment for writing formal specifications and checking formal proofs; it allows large, complicated proofs to be managed and checked interactively. We desire evaluation criteria that provide a means of determining which verification system is suitable for a specific research environment and which needs of a particular project the tool satisfies. The purpose of this thesis is therefore to develop a methodology and a set of evaluation criteria for judging the suitability of verification systems for improving the assurance that systems meet security objectives. A specific verification system is evaluated with respect to the defined methodology. The main goals are to evaluate whether the verification system has the capability to express the properties of software systems, and whether it can provide inter-level mapping, a feature required for understanding how a system meets security objectives.
Naval Postgraduate School author (civilian).
APA, Harvard, Vancouver, ISO, and other styles
47

Taghdiri, Mana 1979. "Automating modular program verification by refining specifications." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43055.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 205-211).
Modular analyses of software systems rely on the specifications of the analyzed modules. In many analysis techniques (e.g. ESC/Java), the specifications have to be provided by users; this puts a considerable burden on users and thus limits the applicability of such techniques. To avoid this problem, some modular analysis techniques automatically extract module summaries that capture specific aspects of the modules' behaviors. However, such summaries are only useful in checking a restricted class of properties. We describe a static modular analysis that automatically extracts procedure specifications in order to check heap-manipulating programs against rich data-structure properties. Extracted specifications are context-dependent; their precision depends on both the property being checked and the calling context in which they are used. Starting from a rough over-approximation of the behavior of each call site, our analysis computes an abstraction of the procedure being analyzed and checks it against the property. Specifications are refined further, as needed, in response to spurious counterexamples. The analysis terminates when either the property has been validated (with respect to a finite domain) or a non-spurious counterexample has been found. Furthermore, we describe a lightweight static technique to extract specifications of heap-manipulating procedures. These specifications are neither context-dependent nor reliant on any domain finitizations. They summarize the general behavior of procedures in terms of their effect on the program state, bounding the values of all variables and fields in the post-state of the procedure by relational expressions in terms of their values in the pre-state. The analysis maintains both upper and lower bounds, so that in some cases an exact result can be obtained.
by Mana Taghdiri.
Ph.D.
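The refinement loop the abstract describes has roughly the following shape; this Python sketch is my own schematic rendering (the driver, the toy interval "specification", and all names are hypothetical, not Taghdiri's code).

```python
def check_with_refinement(property_holds, is_spurious, refine, spec, limit=10):
    """Check a procedure against a property using call-site specifications,
    refining them whenever a counterexample turns out to be spurious."""
    for _ in range(limit):
        cex = property_holds(spec)       # None means validated under `spec`
        if cex is None:
            return "validated (within the finite domain)"
        if not is_spurious(cex, spec):
            return f"real counterexample: {cex}"
        spec = refine(spec, cex)         # tighten the over-approximation
    return "gave up: refinement limit reached"

# Toy instantiation: the "spec" is an interval over-approximating a callee's
# return value; refinement halves it until checking succeeds.
def property_holds(spec):
    lo, hi = spec
    return None if hi < 10 else ("return value could be", hi)

def is_spurious(cex, spec):
    return spec != (0, 5)                # pretend the true behaviour is (0, 5)

def refine(spec, cex):
    lo, hi = spec
    return (lo, hi // 2)

print(check_with_refinement(property_holds, is_spurious, refine, (0, 100)))
```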
APA, Harvard, Vancouver, ISO, and other styles
48

Dennis, Gregory D. (Gregory David) 1980. "A relational framework for bounded program verification." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55097.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 131-138).
All software verification techniques, from theorem proving to testing, share the common goal of establishing a program's correctness with both (1) a high degree of confidence and (2) a low cost to the user, two criteria in tension with one another. Theorem proving offers the benefit of high confidence, but requires significant expertise and effort from the user. Testing, on the other hand, can be performed for little cost, but low-cost testing does not yield high confidence in a program's correctness. Although many static analyses can quickly and with high confidence check a program's conformance to a specification, they achieve these goals by sacrificing the expressiveness of the specification. To date, static analyses have been largely limited to the detection of shallow properties that apply to a very large class of programs, such as the absence of array-bound errors and conformance to API usage conventions. Few static analyses are capable of checking strong specifications: specifications whose satisfaction relies upon the program's precise behavior. This thesis presents a new program-analysis framework that allows a procedure in an object-oriented language to be automatically checked, with high confidence, against a strong specification of its behavior. The framework is based on an intermediate relational representation of code and an analysis that examines all executions of a procedure up to a bound on the size of the heap and the number of loop unrollings. If a counterexample is detected within the bound, it is reported to the user as a trace of the procedure, though defects outside the bound will be missed. Unlike testing, many static analyses are not equipped with coverage metrics to detect which program behaviors the analysis failed to exercise. Our framework, in contrast, includes such a metric. When no counterexamples are found, the metric can report how thoroughly the code was covered. This information can, in turn, help the user change the bound on the analysis or strengthen the specification to make subsequent analyses more comprehensive.
by Gregory D. Dennis.
Ph.D.
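A bounded check of this kind can be caricatured as exhaustive exploration up to bounds on input size and loop unrolling; the Python sketch below is my own illustration (the thesis instead translates code to a relational representation and delegates to a constraint solver), with a deliberately seeded bug so that a counterexample trace is found within the bound.

```python
from itertools import product

def buggy_max(xs, unroll):
    """'Procedure under test': intended to return max(xs), but seeded with
    a bug; honours the loop-unrolling bound by ignoring later iterations."""
    best = 0                        # bug: should start from xs[0]
    for x in xs[:unroll]:
        if x > best:
            best = x
    return best

def bounded_check(max_len, max_unroll, domain=(-1, 0, 1)):
    """Examine every input up to the bounds; return a counterexample or None.
    Defects that need longer inputs or more unrollings would be missed."""
    for n in range(1, max_len + 1):
        for xs in product(domain, repeat=n):
            if buggy_max(list(xs), max_unroll) != max(xs):
                return list(xs)     # counterexample trace
    return None

print(bounded_check(max_len=3, max_unroll=3))   # finds e.g. [-1]
```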
APA, Harvard, Vancouver, ISO, and other styles
49

Matias, Matthew John. "Program Verification of FreeRTOS using Microsoft Dafny." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1400085349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yilma, Yirgu. "Non-monotonic concurrent constraint program verification using phase semantics." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ48190.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
