Dissertations / Theses on the topic 'Model and Program Verification'

To see the other types of publications on this topic, follow the link: Model and Program Verification.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Model and Program Verification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Beyene, Tewodros Awgichew. "Constraint-based verification of imperative programs." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/7965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Work presented in the context of the European Master’s program in Computational Logic, as a partial requirement for obtaining the Master of Science degree in Computational Logic.
The continuous reduction in the cost of computing ever since the first days of computers has resulted in the ubiquity of computing systems today; hardly any sphere of daily human life remains that is not directly or indirectly influenced by computer systems. But this high reliance on computers has not come without risks to society or challenges for computer scientists. As many computer systems of today are safety-critical, it is crucial for computer scientists to make sure that computer systems, both the hardware and software components, behave correctly under all circumstances. In this study, we are interested in techniques of program verification that are aimed at ensuring the correctness of the software component. In this work, constraint programming techniques are used to devise a program verification framework where constraint solvers play the role of typical verification tools. The programs considered are written in a subset of Java, and their specifications are written in a subset of the Java Modeling Language (JML). In our framework, the program verification process has two principal steps: constraint generation and constraint solving. A program, together with its specification, is first parsed into a system of constraints. The system of constraints is then processed by constraint solvers, and the correctness of the original program is proved to hold, or not, based on the outcome of the constraint solving. The performance of our framework is compared with other well-known program verification tools on standard benchmarks, and it performs quite well in most cases.
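To make the two-step process concrete, here is a minimal sketch of the constraint-generation-and-solving idea. It uses the off-the-shelf z3-solver Python package rather than the thesis's own framework; the one-line program, its JML-style postcondition, and all names are illustrative assumptions.

```python
# Illustrative sketch, not the thesis's framework. Program:
#     //@ ensures x > \old(x);
#     x = x + 1;
# Constraint generation encodes the statement plus the negated
# postcondition; unsatisfiability then proves the program correct.
from z3 import Int, Solver, Not, unsat

x_old = Int('x_old')                 # value of x on entry
x_new = Int('x_new')                 # value of x on exit

s = Solver()
s.add(x_new == x_old + 1)            # constraint from the program text
s.add(Not(x_new > x_old))            # negated JML-style postcondition

print("verified" if s.check() == unsat else f"counterexample: {s.model()}")
```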
2

Kaiser, Alexander. "Monotonicity in shared-memory program verification." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:1d16b4b5-524a-40db-b7bf-062374f8679c.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Predicate abstraction is a key enabling technology for applying model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analysing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of the present thesis is to close this gap. We propose an algorithmic solution implementing predicate abstraction that targets safety properties in non-recursive programs executed by an unbounded number of threads, which communicate via shared memory or higher-level mechanisms, such as mutexes and broadcasts. As system-level code makes frequent use of such primitives, their correct usage is critical to ensure reliability. Monotonicity - the property that thread actions remain executable when other threads are added to the current global state - is a natural and common feature of human-written concurrent software. It is also useful: if every thread’s memory is finite, monotonicity often guarantees the decidability of safety properties even when the number of running threads is unspecified. In this thesis, we show that the process of obtaining finite-data thread abstractions for model checking is not always compatible with monotonicity. Predicate-abstracting certain mainstream asynchronous software such as the ticket busy-wait lock algorithm results in non-monotone multi-threaded Boolean programs, despite the monotonicity of the input program: the monotonicity is lost in the abstraction. As a result, the unbounded-thread Boolean programs do not give rise to well quasi-ordered systems [1], for which sound and complete safety checking algorithms are available. In fact, safety checking turns out to be undecidable for the obtained class of abstract programs, despite the finiteness of the individual threads’ state spaces. Our solution is to restore the monotonicity in the abstract program, using an inexpensive closure operator that precisely preserves all safety properties from the (non-monotone) abstract program without the closure. As a second contribution, we present a novel, sound and complete, yet empirically much improved algorithm for verifying abstractions, applicable to general well quasi-ordered systems. Our approach is to gradually widen the set of safety queries during the search with program states that involve fewer threads and are thus easier to decide, and that are likely to finalise the decision on earlier queries. To counter the negative impact of "bad guesses", i.e. program states that turn out to be feasible, the search is supported by a parallel engine that generates such states; these are never selected for widening. We present an implementation of our techniques and extensive experiments on multi-threaded C programs, including device driver code from FreeBSD and Solaris. The experiments demonstrate that by exploiting monotonicity, model checking techniques - enabled by predicate abstraction - scale to realistic programs even of a few thousand lines of multi-threaded C code.
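As a rough illustration of the monotonicity and covering notions above (a sketch of the general well-quasi-ordering machinery, not of the thesis's tool), a global state of an unbounded-thread program can be viewed as a multiset of thread-local states, and safety checking needs to keep only minimal states:

```python
# Illustrative sketch of covering in a well quasi-ordered search.
# Adding threads only enlarges a global state, which is exactly what
# monotonicity exploits. Representation and names are assumptions.
from collections import Counter

def covers(a, b):
    """a covers b if b is a sub-multiset of a."""
    return all(a[local] >= n for local, n in b.items())

def minimize(states):
    """Discard any state that strictly covers another; for monotone
    systems, keeping only minimal states preserves safety answers."""
    return [s for s in states
            if not any(covers(s, t) and s != t for t in states)]

s1, s2 = Counter({'q1': 2}), Counter({'q1': 1})
print(covers(s1, s2))        # True: two threads in q1 cover one
print(minimize([s1, s2]))    # only the smaller state is kept
```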
3

Bezuidenhout, Johannes Abraham. "Automated program generation : bridging the gap between model and implementation." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/19584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MSc)--University of Stellenbosch, 2007.
ENGLISH ABSTRACT: The general goal of this thesis is the investigation of a technique that allows model checking to be directly integrated into the software development process, preserving the benefits of model checking while addressing some of its limitations. A technique was developed that allows a complete executable implementation to be generated from an enhanced model specification. This included the development of a program, the Generator, that completely automates the generation process. In addition, it is illustrated how structuring the specification as a transition system formally separates the control flow from the details of manipulating data. This simplifies the verification process, which focuses on checking the control flow in detail. By combining this structuring approach with automated implementation generation we ensure that the verified system behaviour is preserved in the actual implementation. An additional benefit is that data manipulation, which is generally not suited to model checking, is restricted to separate, independent code fragments that can be verified using verification techniques for sequential programs. These data manipulation code segments can also be optimised for the implementation without affecting the verification of the control structure. This technique was used to develop a reactive system, an FTP server, and this experiment illustrated that efficient code can be automatically generated while preserving the benefits of model checking.
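The following toy sketch, which is not the thesis's Generator, illustrates the structuring idea: control flow lives in an explicit transition table that a model checker can explore, while data manipulation is confined to small, independently verifiable functions. The FTP-flavoured states and events are invented for illustration.

```python
# Invented illustration of a transition-system-structured specification:
# the control flow is a table, the data code is isolated in small
# functions, and a generator can emit an executable loop from the table.
def recv_user(ctx, event): ctx['user'] = event[1]
def recv_pass(ctx, event): ctx['authed'] = (event[1] == 'secret')

TRANSITIONS = {
    # (state, event_kind) -> (data_action, next_state)
    ('START', 'USER'): (recv_user, 'NEED_PASS'),
    ('NEED_PASS', 'PASS'): (recv_pass, 'LOGGED_IN'),
}

def run(events):
    state, ctx = 'START', {}
    for event in events:
        action, state = TRANSITIONS[(state, event[0])]
        action(ctx, event)   # data code, outside the checked control flow
    return state, ctx

print(run([('USER', 'anna'), ('PASS', 'secret')]))
```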
4

Wardana, Awang Noor Indra. "Development of automatic program verification for continuous function chart based on model checking /." Kassel : Kassel Univ. Press, 2009. http://d-nb.info/998980234/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

de Carvalho Gomes, Pedro. "Automatic Extraction of Program Models for Formal Software Verification." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis we present a study of the generation of abstract program models from programs in real-world programming languages for use in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models and properties. The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, the properties that are established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as ConFlEx, and evaluate the tool on a number of test cases. The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs. The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization". To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem to a reachability problem on Coloured Petri Nets. We define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets is implemented as the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe, then verifying the extracted nets with a standard Coloured Petri Net analysis tool.
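A minimal sketch of the control-flow-graph idea underlying the first part, on an invented, bytecode-like toy instruction set (this is not ConFlEx, which works on real Java bytecode and additionally tracks method invocations and exceptional flow):

```python
# Invented illustration: extract a control flow graph from a toy,
# bytecode-like instruction list. The edge set over-approximates the
# possible successors of each instruction.
def extract_cfg(instructions):
    cfg = {i: set() for i in range(len(instructions))}
    for i, (op, arg) in enumerate(instructions):
        if op == 'goto':
            cfg[i].add(arg)                  # unconditional jump
        elif op == 'if':
            cfg[i].update({arg, i + 1})      # branch: taken or fall-through
        elif op != 'return' and i + 1 < len(instructions):
            cfg[i].add(i + 1)                # fall-through edge
    return cfg

prog = [('load', 'x'), ('if', 3), ('goto', 4), ('call', 'm'), ('return', None)]
print(extract_cfg(prog))
```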


6

He, Nannan. "Exploring Abstraction Techniques for Scalable Bit-Precise Verification of Embedded Software." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/27683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Conventional testing has become inadequate to satisfy the rigorous reliability requirements of embedded software, which is playing an increasingly important role in many safety-critical applications. Automatic formal verification is a viable avenue for ensuring the reliability of such software. Recently, more and more formal verification techniques have begun modeling a non-Boolean data variable as a bit-vector with bounded width (i.e. a vector of multiple bits like 32 or 64 bits) to implement bit-precise verification. One major challenge in the scalable application of such bit-precise verification to real-world embedded software is that the state space for verification can be intractably large. In this dissertation, several abstraction techniques are explored to deal with this scalability challenge in the bit-precise verification of embedded software. First, we propose a tight integration of program slicing, which is an important static program analysis technique, with bounded model checking (BMC). While many software verification tools apply program slicing as a separate preprocessing step, we integrate slicing operations into our model construction and reduction process and enhance them with compilation optimization techniques to compute accurate program slices. We also apply a proof-based abstraction-refinement framework to further remove those program segments irrelevant to the property being verified. Next, we present a method of using symbolic simulation for scalable formal verification. The simulation introduces X as a symbolic value to abstract concrete variables' values. The method embeds this symbolic simulation in a counterexample-guided abstraction-refinement framework to automatically construct and verify an abstract model, which has a smaller state space than that of the original concrete program. This dissertation also presents our efforts on using two common testability metrics, the controllability metric (CM) and the observability metric (OM), as high-level structural guidance for scalable bit-precise verification. A new abstraction approach is proposed, based on the concept of under- and over-approximation, to efficiently solve bit-vector formulas generated from embedded software verification instances. These instances include both complicated arithmetic computations and intensive control structures. Our approach applies CM and OM to assist the abstraction-refinement procedure in two ways: (1) it uses CM and OM to guide the construction of a simple under-approximate model, which includes only a subset of execution paths in a verification instance, so that a counterexample that refutes the instance can be obtained with reduced effort, and (2) in order to reduce the cost of using proof-based refinement alone, it uses OM heuristics to guide the restoration of additional verification-relevant formula constraints with low computational cost for refinement. Experiments show a significant reduction of the solving time compared to state-of-the-art solvers for bit-vector arithmetic. This dissertation finally proposes an efficient algorithm to discover non-uniform encoding widths of individual variables in the verification model, which may be smaller than their original modeling width but sufficient for the verification.
Our algorithm distinguishes itself from existing approaches in that it is path-oriented; it takes advantage of CM and OM values to guide the computation of the initial, non-uniform encoding widths, and the effective adjustment of these widths along different paths, until the property is verified. It can exclude from the search those paths that are deemed less favorable or have been searched in previous steps, thus simplifying the problem. Experiments demonstrate that our algorithm can significantly speed up verification, especially in searching for a counterexample that violates the property under verification.
Ph. D.
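A toy sketch of the reduced-width idea from the final contribution: attempt the proof at a narrow bit-width and widen only on demand. It uses the z3-solver Python package as a stand-in solver; the property and widths are invented, and deciding when a narrow width actually suffices for the full-width property is precisely the hard part the dissertation's algorithm addresses.

```python
# Invented illustration, not the dissertation's algorithm.
# Assumes the z3-solver package.
from z3 import BitVec, Solver, Not, URem, unsat

def proved_at(width):
    x = BitVec('x', width)
    s = Solver()
    s.add(Not((x & 1) == URem(x, 2)))   # property: lowest bit == x mod 2
    return s.check() == unsat

for width in (4, 8, 16, 32):            # non-uniform widths: narrow first
    if proved_at(width):
        print(f"property proved with {width}-bit encoding")
        break
```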
7

Mahtab, Tazeen 1981. "Automated verification of model-based programs under uncertainty." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 89-91).
Highly robust embedded systems have been enabled through software executives that have the ability to reason about their environment. Those that employ the model-based autonomy paradigm automatically diagnose and plan future actions, based on models of themselves and their environment. This includes autonomous systems that must operate in harsh and dynamic environments, like deep space. Such systems must be robust to a large space of possible failure scenarios. This large state space poses difficulties for traditional scenario-based testing, leading to a need for new approaches to verification and validation. We propose a novel verification approach that generates an analysis of the most likely failure scenarios for a model-based program. By finding only the most likely failures, we increase the relevance and reduce the quantity of information the developer must examine. First, we provide the ability to verify a stochastic system that encodes both off-nominal and nominal scenarios. We incorporate uncertainty into the verification process by acknowledging that all such programs may fail, but in different ways, with different likelihoods. The verification process is one of finding the most likely executions that fail the specification. Second, we provide a capability for verifying executable specifications that are fault-aware. We generalize offline plant model verification to the verification of model-based programs, which consist of both a plant model that captures the physical plant's nominal and off-nominal states and a control program that specifies its desired behavior. Third, we verify these specifications through execution of the RMPL executive itself. We therefore circumvent the difficulty of formalizing the behavior of complex software executives. We present the RMPL Verifier, a tool for verification of model-based programs written in the Reactive Model-based Programming Language (RMPL) for the Titan execution kernel. Using greedy forward-directed search, this tool finds as counterexamples to the program's goal specification the most likely executions that do not achieve the goal within a given time bound.
by Tazeen Mahtab.
M.Eng. and S.B.
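The greedy forward-directed search can be pictured as a best-first search ordered by trace probability. The sketch below is an invented illustration, not the RMPL Verifier:

```python
# Invented illustration: best-first search for the most likely
# execution reaching a failure state, ordering the frontier by
# accumulated -log probability (smallest first).
import heapq, math

def most_likely_failure(initial, successors, is_failure):
    frontier = [(0.0, initial, [initial])]   # (cost, state, trace)
    seen = set()
    while frontier:
        cost, state, trace = heapq.heappop(frontier)
        if is_failure(state):
            return math.exp(-cost), trace    # probability of the trace
        if state in seen:
            continue
        seen.add(state)
        for prob, nxt in successors(state):
            heapq.heappush(frontier,
                           (cost - math.log(prob), nxt, trace + [nxt]))
    return None

succ = {'ok': [(0.99, 'ok2'), (0.01, 'fail')], 'ok2': [(1.0, 'ok')], 'fail': []}
print(most_likely_failure('ok', lambda s: succ[s], lambda s: s == 'fail'))
```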
8

Newcomb, Tom C. "Model checking data-independent systems with arrays." Thesis, University of Oxford, 2003. http://ora.ox.ac.uk/objects/uuid:7fc75da9-e653-4578-b061-8a1cc30ba609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We say a program is data-independent with respect to a data type X if the operations it can perform on values of type X are restricted to just equality testing, although the system may also input, store and move around (via assignment) values of type X within its variables. This property can be exploited to give procedures for the automatic verification, called model checking, of such programs independently of the instance for the type X. This thesis considers data-independent programs with arrays, which are useful for modelling memory systems such as cache protocols. The main question of interest is the following parameterised model-checking problem: whether a program satisfies its specification for all non-empty finite instances of its types. In order to obtain these results, we present a UNITY-like programming language with arrays that is suited to the study of decidability of various model-checking problems, whilst being useful for prototyping memory systems such as caches. Its semantics are given in terms of transition systems, and we use the modal μ-calculus, a branching-time temporal logic with recursion, as our specification language. We describe a model-checking procedure for programs that use arrays indexed by one data-independent type X and storing values from another Y. This allows us to prove properties about parameterised systems: for example, that memory systems can be verified independently of memory size and data values. This decidability result is shown to extend to data-independent programs with many types and multidimensional arrays which are acyclic, meaning it is not possible to form loops of types in the 'indexed by' relation. Conversely, it is shown that even reachability model-checking problems are undecidable for classes of programs that allow cyclic-array programs. We give practical motivation for these decidability results by demonstrating how one could verify a fault-tolerant interface on a set of unreliable memories, and the cache protocol in the Pentium Pro processor. Significantly, the verifications are performed independently of many of these systems' parameters. These case studies suggest two extensions to the language: an array reset instruction, which sets every element of an array to a particular value, and an array assignment or copy instruction. Both are shown to restrict decidability of model-checking problems; however we can obtain some interesting decidability results for arrays with reset by restricting the number of arrays to just one, or by allowing the arrays only to store fixed finite types, such as the booleans.
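A toy illustration of the parameterised flavour of these results: for a suitable data-independent program, correctness for all finite instances can be reduced to checks on small instances. The cutoff and the one-array "memory" below are invented for illustration and are not taken from the thesis.

```python
# Invented toy: a one-array memory whose logic only tests equality on
# its index and value types; we check every instance up to a cutoff.
from itertools import product

def memory_ok(n_indices, n_values):
    mem = {}
    for i, v in product(range(n_indices), range(n_values)):
        mem[i] = v                    # write v at index i
        if mem[i] != v:               # read-after-write must return v
            return False
    return True

CUTOFF = 4                            # assumed sufficient for this toy
print(all(memory_ok(i, v)
          for i, v in product(range(1, CUTOFF + 1), repeat=2)))
```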
9

Rezine, Othmane. "Verification of networks of communicating processes : Reachability problems and decidability issues." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-334788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computer systems are used in almost all aspects of our lives, and our dependency on them keeps increasing. When computer systems are used to handle critical tasks, any software failure can cause severe human and/or material losses. Therefore, for such applications, it is important to detect software errors at an early stage of software development. Furthermore, the growing use of concurrent and distributed programs exponentially increases the complexity of computer systems, making the problem of detecting software errors even harder (if not impossible). This calls for defining systematic and efficient techniques to evaluate the safety and correctness of programs. The aim of model checking is to analyze automatically whether a given program satisfies its specification. Early applications of model checking were restricted to systems whose behaviors can be captured by finite graphs, so-called finite-state systems. Since many computer systems cannot be modeled as finite-state machines, there has been a growing interest in extending the applicability of model checking to infinite-state systems. The goal of this thesis is to extend the applicability of model checking to three instances of infinite-state systems: Ad-Hoc Networks, Dynamic Register Automata and Multi Pushdown Systems. Each of these instances models challenging types of networks of communicating processes. In both Ad-Hoc Networks and Dynamic Register Automata, communication is carried out through message passing. In each type of network, a graph topology models the communication links between processes in the network. The graph topology is static in the case of Ad-Hoc Networks, while it is dynamic in the case of Dynamic Register Automata. The number of processes in both types of networks is unbounded. Finally, we consider Multi Pushdown Systems, a model used to study the behaviors of concurrent programs composed of sequential recursive programs communicating through a shared memory.
10

de Carvalho Gomes, Pedro. "Sound Modular Extraction of Control Flow Graphs from Java Bytecode." Licentiate thesis, KTH, Teoretisk datalogi, TCS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Control flow graphs (CFGs) are abstract program models that preserve the control flow information. They have been widely utilized for many static analyses in the past decades. Unfortunately, previous studies of CFG construction for modern languages, such as Java, have either neglected advanced features that influence the control flow, or do not provide a correctness argument. This is a bearable issue for some program analyses, but not for formal methods, where the soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, when developing open systems, i.e., systems in which at least one component is missing, one may want to extract CFGs to verify the available components. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies involving missing components. In this work we present two variants of a CFG extraction algorithm from Java bytecode considering precise exceptional flow, which are sound w.r.t. the JVM behavior. The first algorithm extracts CFGs from fully-provided (closed) programs only. It proceeds in two phases. Initially the Java bytecode is translated into a stack-less intermediate representation named BIR, which provides explicit representation of exceptions, and is more compact than the original bytecode. Next, we define the transformation from BIR to CFGs, which, among other features, considers the propagation of uncaught exceptions within method calls. We then establish its correctness: the behavior of the extracted CFGs is shown to be a sound over-approximation of the behavior of the original programs. Thus, temporal safety properties that hold for the CFGs also hold for the program. We prove this by suitably combining the properties of the two transformations with those of a previous idealized CFG extraction algorithm, whose correctness has been proven directly. The second variant of the algorithm is defined for open systems. We generalize the extraction algorithm for closed systems for a modular set-up, and resolve inter-dependencies involving missing components by using user-provided interfaces. We establish its correctness by defining a refinement relation between open systems, which constrains the instantiation of missing components. We prove that if the relation holds, then the CFGs extracted from the components of the original open system are sound over-approximations of the CFGs for the same components in the refined system. Thus, temporal safety properties that hold for an open system also hold for closed systems that refine it. We have implemented both algorithms as the ConFlEx tool. It uses Sawja, an external library for the static analysis of Java bytecode, to transform bytecode into BIR, and to resolve virtual method calls. We have extended Sawja to support open systems, and improved its exception type analysis. Experimental results have shown that the algorithm for closed systems generates more precise CFGs than the modular algorithm. This was expected, due to the heavy over-approximations the latter has to perform to be sound. Also, both algorithms are linear in the number of bytecode instructions. Therefore, ConFlEx is efficient for the extraction of CFGs from either open or closed Java bytecode programs.


11

de Carvalho Gomes, Pedro, and Attilio Picoco. "Sound Extraction of Control-Flow Graphs from open Java Bytecode Systems." KTH, Teoretisk datalogi, TCS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Formal verification techniques have been widely deployed as means to ensure the quality of software products. Unfortunately, they suffer from the combinatorial explosion of the state space. That is, programs have a large number of states, sometimes infinitely many. A common approach to alleviate the problem is to perform the verification over abstract models of the program. Control-flow graphs (CFGs) are one of the most common models and have been widely studied in the past decades. Unfortunately, previous work on modern programming languages, such as Java, has either neglected features that influence the control flow, or does not provide a correctness argument about the CFG construction. This is an unbearable issue for formal verification, where soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, one may want to extract CFGs from the available components of an open system, i.e., a system in which at least one of the components is missing. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies between software components. In the current work we present a framework to extract control-flow graphs from open Java bytecode systems in a modular fashion. Our strategy requires the user to provide interfaces for the missing components. First, we present a formal definition of open Java bytecode systems. Next, we generalize a previous algorithm that performs the extraction of CFGs for closed programs to a modular set-up. The algorithm uses the user-provided interfaces to resolve inter-dependencies involving missing components. Eventually the missing components arrive, the open system becomes closed, and it can execute. However, the arrival of a component may affect the soundness of CFGs which have been extracted previously. Thus, we define a refinement relation, which is a set of constraints upon the arrival of components, and prove that the relation guarantees the soundness of CFGs extracted with the modular algorithm. Therefore, the control-flow safety properties verified over the original CFGs still hold in the refined model. We implemented the modular extraction framework in the ConFlEx tool. We have also implemented the reuse of previous extraction results, to enable the incremental extraction of a newly arrived component. Our technique performs substantial over-approximations to achieve soundness. Despite this, our test cases show that ConFlEx is efficient. Also, the extraction of the CFGs gets considerable speed-up by reusing results from previous analyses.



Verification of Control-Flow Properties of Programs with Procedures (CVPP)
12

Fourie, Jean Francois. "Reducing communication in distributed model checking." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MSc (Mathematical Sciences. Computer Science))--University of Stellenbosch, 2009.
ENGLISH ABSTRACT: Model checkers are programs that automatically verify, without human assistance, that certain user-specified properties hold in concurrent software systems. Since these programs often have expensive time and memory requirements, an active area of research is the development of distributed model checkers that run on clusters. Of particular interest is how the communication between the machines can be reduced to speed up their running time. In this thesis the design decisions involved in an on-the-fly distributed model checker are identified and discussed. Furthermore, the implementation of such a program is described. The central idea behind the algorithm is the generation and distribution of data throughout the nodes of the cluster. We introduce several techniques to reduce the communication among the nodes, and study their effectiveness by means of a set of models.
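A sketch of the data-distribution idea: a partition function assigns every state an owning node, and only successors owned by other nodes generate communication. The partition function and API below are invented illustrations, not the thesis's implementation.

```python
# Invented illustration: a good partition keeps most successors local,
# which is the kind of communication reduction the thesis studies.
def owner(state, n_nodes):
    return sum(map(ord, state)) % n_nodes       # toy partition function

def expand(state, successors, my_id, n_nodes, local_queue, outboxes):
    for nxt in successors(state):
        dest = owner(nxt, n_nodes)
        if dest == my_id:
            local_queue.append(nxt)             # no communication needed
        else:
            outboxes[dest].append(nxt)          # sent to the owning node

local, outboxes = [], {0: [], 1: [], 2: []}
expand('s0', lambda s: [s + 'a', s + 'b'], my_id=0, n_nodes=3,
       local_queue=local, outboxes=outboxes)
print(local, outboxes)
```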
13

Wardana, Awang Noor Indra [Verfasser]. "Development of Automatic Program Verification for Continuous Function Chart based on Model Checking / Awang Noor Indra Wardana." Kassel : Kassel University Press, 2009. http://d-nb.info/1011715880/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Nakade, Radha Vi. "Verification of Task Parallel Programs Using Predictive Analysis." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Task parallel programming languages provide a way of creating asynchronous tasks that can run concurrently. The advantage of using task parallelism is that the programmer can write code that is independent of the underlying hardware. The runtime determines the number of processor cores that are available and the most efficient way to execute the tasks. When two or more concurrently executing tasks access a shared memory location, and at least one of the accesses is a write, a data race occurs in the program. Data races can introduce non-determinism in the program output, making it important to have data race detection tools. To detect data races in task parallel programs, a new sound and complete technique based on computation graphs is presented in this work. The data race detection algorithm runs in O(N²) time, where N is the number of nodes in the graph. A computation graph is a directed acyclic graph that represents the execution of the program. For detecting data races, the computation graph stores the shared heap locations accessed by the tasks. An algorithm for creating computation graphs augmented with the memory locations accessed by the tasks is also described here. This algorithm runs in O(N) time, where N is the number of operations performed in the tasks. This work also presents an implementation of this technique for the Java implementation of the Habanero programming model. The results of this data race detector are compared to Java PathFinder's precise race detector extension and its permission-regions-based race detector extension. The results show a significant reduction in the time required for data race detection using this technique.
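The essence of the computation-graph check can be sketched as follows: two accesses race if they touch the same location, at least one is a write, and neither graph node reaches the other. This invented Python sketch mirrors the O(N²) pairwise check described above; it is not the thesis's Habanero implementation.

```python
# Invented illustration of computation-graph race detection.
def reaches(graph, src, dst):
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(graph[n])
    return False

def races(graph, accesses):
    # accesses: node -> set of (location, kind), kind in {'r', 'w'}
    nodes, found = list(accesses), []
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            for loc_a, ka in accesses[a]:
                for loc_b, kb in accesses[b]:
                    if loc_a == loc_b and 'w' in (ka, kb) \
                       and not reaches(graph, a, b) \
                       and not reaches(graph, b, a):
                        found.append((a, b, loc_a))
    return found

g = {'fork': ['t1', 't2'], 't1': ['join'], 't2': ['join'], 'join': []}
acc = {'t1': {('x', 'w')}, 't2': {('x', 'r')}, 'fork': set(), 'join': set()}
print(races(g, acc))   # t1 and t2 are unordered -> race on x
```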
15

Berglund, Lasse. "Executive Summaries in Software Model Checking." Thesis, KTH, Teoretisk datalogi, TCS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Model checking is a technique used to verify whether a model meets a given specification by exhaustively and automatically checking each reachable state in the model. It is a well-developed technique, but it suffers from some issues, perhaps most importantly the state space explosion problem: models may contain so many states that the model checking procedure becomes intractable. In this thesis we investigate whether procedure summaries can be used to improve the performance of model checking. Procedure summaries are concise representations of parts of a program, such as a function or method. We present a design and an implementation of dynamically generated summaries as an extension of Java PathFinder, a virtual machine executing Java bytecode that is able to model check programs written in Java by backtracking execution, to explore different schedulings, etc. We find that our summaries incur an overhead that outweighs the benefits in most cases, but the approach shows promise in certain cases, in particular when stateless model checking is used. We also provide some statistics related to cases when our summaries are applicable that could provide guidance for future work within this field.
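The core idea of a procedure summary can be sketched as memoisation of a method's effect: on a repeated call state, the recorded summary is replayed instead of re-executing the body. This is an invented, much-simplified illustration, not the Java PathFinder extension built in the thesis (which must account for heap effects and thread interleavings).

```python
# Invented illustration: cache a method's effect per call state so that
# re-exploring the same state can skip executing the body.
summaries = {}

def call_with_summary(method, state):
    key = (method.__name__, state)
    if key not in summaries:             # first visit: execute and record
        summaries[key] = method(state)
    return summaries[key]                # later visits: replay the summary

def increment(state):
    print("executing body")              # visible only on a summary miss
    return state + 1

print(call_with_summary(increment, 41))  # executes the body
print(call_with_summary(increment, 41))  # replayed from the summary
```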
16

Haziza, Frédéric. "Few is Just Enough! : Small Model Theorem for Parameterized Verification and Shape Analysis." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-264171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This doctoral thesis considers the automatic verification of parameterized systems, i.e. systems with an arbitrary number of communicating components, such as mutual exclusion protocols, cache coherence protocols or heap manipulating programs. The components may be organized in various topologies such as words, multisets, rings, or trees. The task is to show correctness regardless of the size of the system, and we consider two methods to prove safety: (i) a backward reachability analysis, using the well-quasi-ordered framework and monotonic abstraction, and (ii) a forward analysis which only needs to inspect a small number of components in order to show correctness of the whole system. The latter relies on an abstraction function that views the system from the perspective of a fixed number of components. The abstraction is used during the verification procedure in order to dynamically detect cut-off points beyond which the search of the state space need not continue. Our experimentation on a variety of benchmarks demonstrates that the method is highly efficient and that it works well even for classes of systems whose verification problem is undecidable in general. It has, for example, been successfully applied to verify a fine-grained model of Szymanski's mutual exclusion protocol. Finally, we applied the methods to solve the complex problem of verifying highly concurrent data structures, in a challenging setting: we do not a priori bound the number of threads, the size of the data structure, or the domain of the data to store, nor do we require the presence of a garbage collector. We successfully verified the concurrent Treiber's stack and Michael & Scott's queue in the aforementioned setting. To the best of our knowledge, these verification problems have been considered challenging in the parameterized verification community and could not be carried out automatically by other existing methods.
17

Prestero, Timothy (Timothy Jason) 1970. "Verification of a six-degree of freedom simulation model for the REMUS autonomous underwater vehicle." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/65068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution); and, (S.M.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2001.
Includes bibliographical references (p. 125-127).
Improving the performance of modular, low-cost autonomous underwater vehicles (AUVs) in such applications as long-range oceanographic survey, autonomous docking, and shallow-water mine countermeasures requires improving the vehicles' maneuvering precision and battery life. These goals can be achieved through the improvement of the vehicle control system. A vehicle dynamics model based on a combination of theory and empirical data would provide an efficient platform for vehicle control system development, and an alternative to the typical trial-and-error method of vehicle control system field tuning. As there exists no standard procedure for vehicle modeling in industry, the simulation of each vehicle system represents a new challenge. Developed by von Alt and associates at the Woods Hole Oceanographic Institution, the REMUS AUV is a small, low-cost platform serving in a range of oceanographic applications. This thesis describes the development and verification of a six-degree-of-freedom, non-linear simulation model for the REMUS vehicle, the first such model for this platform. In this model, the external forces and moments resulting from hydrostatics, hydrodynamic lift and drag, added mass, and the control inputs of the vehicle propeller and fins are all defined in terms of vehicle coefficients. This thesis describes the derivation of these coefficients in detail. The equations determining the coefficients, as well as those describing the vehicle rigid-body dynamics, are left in non-linear form to better simulate the inherently non-linear behavior of the vehicle. Simulation of the vehicle motion is achieved through numeric integration of the equations of motion. The simulator output is then checked against vehicle dynamics data collected in experiments performed at sea. The simulator is shown to accurately model the motion of the vehicle.
by Timothy Prestero.
S.M.
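The numeric-integration step can be illustrated in miniature: a one-degree-of-freedom surge model with quadratic drag, integrated with forward Euler. The coefficients are invented, and this is a stand-in for, not a reproduction of, the thesis's six-degree-of-freedom model.

```python
# Invented illustration: forward-Euler integration of a 1-DOF surge
# model with quadratic drag; made-up coefficients.
MASS, DRAG, THRUST = 30.0, 5.0, 10.0    # kg, kg/m, N (assumed values)

def derivative(v):
    return (THRUST - DRAG * v * abs(v)) / MASS   # dv/dt from force balance

v, dt = 0.0, 0.01
for step in range(2000):                # 20 s of simulated time
    v += dt * derivative(v)             # one Euler integration step

print(f"surge speed after 20 s: {v:.3f} m/s")   # approaches sqrt(T/D)
```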
18

Faitelson, David. "Program synthesis from domain specific object models." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:0c5a992e-dad4-435c-a576-e3ed504bcdbd.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Automatically generating a program from its specification eliminates a large source of errors that is often unavoidable in a manual approach. While a general purpose code generator is impossible to build, it is possible to build a practical code generator for a specific domain. This thesis investigates the theory behind Booster — a domain-specific, object-based specification language and automatic code generator. The domain of Booster is information systems — systems that consist of a rich object model in which the objects refer to each other to form a complicated network of associations. The operations of such systems are conceptually simple (changing the attributes of objects, adding or removing new objects and creating or destroying associations) but they are tricky to implement correctly. The thesis focuses on the theoretical foundation of the Booster approach, in particular on three contributions: semantics, model completion, and code generation. The semantics of a Booster model is a single abstract data type (ADT) where the invariants and the methods of all the classes in the model are promoted to the level of the ADT. This is different from the traditional view that considers each class as a separate ADT. The thesis argues that the Booster semantics is a better model of object oriented systems. The second important contribution is the idea of model completion — a process that augments the postconditions of methods with additional predicates that follow from the system’s invariant and the method’s original intention. The third contribution describes a simple but effective code generation technique that is based on interpreting postconditions as executable statements and uses weakest preconditions to ensure that the generated code refines its specification.
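The weakest-precondition view of code generation can be sketched with an off-the-shelf solver: for an assignment x := e, wp(x := e, Q) is Q with x replaced by e, and the generated code refines its specification if the precondition implies this wp. The sketch below uses the z3-solver Python package and an invented example; it is not Booster.

```python
# Invented illustration of a weakest-precondition refinement check.
# Assumes the z3-solver package.
from z3 import Int, Implies, Not, Solver, substitute, unsat

x = Int('x')
post = x > 0                          # postcondition Q
wp = substitute(post, (x, x + 1))     # wp(x := x + 1, Q)  ==  x + 1 > 0
pre = x >= 0                          # claimed precondition

s = Solver()
s.add(Not(Implies(pre, wp)))          # valid iff the negation is unsat
print("refines spec" if s.check() == unsat else "spec violated")
```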
19

Cyriac, Aiswarya. "Verification of communicating recursive programs via split-width." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0004/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigates automata-theoretic techniques for the verification of physically distributed machines communicating via unbounded reliable channels. Each of these machines may run several recursive programs (multi-threading). A recursive program may also use several unbounded stack and queue data-structures for its local-computation needs. Such real-world systems are so powerful that all verification problems become undecidable. We introduce and study a new parameter called split-width for the under-approximate analysis of such systems. Split-width is the minimum number of splits required in the behaviour graphs to obtain disjoint parts which can be reasoned about independently. Thus it provides a divide-and-conquer approach for their analysis. With the parameter split-width, we obtain optimal decision procedures for various verification problems on these systems like reachability, inclusion, etc. and also for satisfiability and model checking against various logical formalisms such as monadic second-order logic, propositional dynamic logic and temporal logics. It is shown that behaviours of a system have bounded split-width if and only if they have bounded clique-width. Thus, by Courcelle's results on uniformly bounded-degree graphs, split-width is not only sufficient but also necessary to get decidability for MSO satisfiability checking. We then study the feasibility of distributed controllers for our generic distributed systems. We propose several controllers, some finite state and some deterministic, which ensure that the behaviours of the system have bounded split-width. Such a distributedly controlled system yields decidability for the various verification problems by inheriting the optimal decision procedures for split-width. These also extend or complement many known decidable subclasses of systems studied previously.
20

Shaner, Steve M. "Modular verification of higher-order methods with mandatory calls specified by model programs." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
21

Bart, Anicet. "Constraint modelling and solving of some verification problems." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0031/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Constraint programming offers efficient languages and tools for solving combinatorial and computationally hard problems such as the ones arising in program verification. In this thesis, we tackle two families of program verification problems using constraint programming. In both contexts, we first give a formal account of our contributions before carrying out experiments. The first contribution is about a synchronous reactive language, represented by a block-diagram algebra. Such programs operate on infinite streams and model real-time processes. We propose a constraint model together with a new global constraint. Our new filtering algorithm is inspired by Abstract Interpretation. It computes over-approximations of the infinite stream values computed by the block-diagrams. We evaluated our verification process on the FAUST language (a language for processing real-time audio streams) and we tested it on examples from the FAUST standard library. The second contribution considers probabilistic processes represented by Parametric Interval Markov Chains, a specification formalism that extends Markov Chains. We propose constraint models for checking qualitative and quantitative reachability properties. Our models for the qualitative case improve on the state-of-the-art models, while for the quantitative case ours are the first proposed to date. We implemented and evaluated our verification constraint models as mixed integer linear programs and satisfiability modulo theories programs. Experiments were carried out on a benchmark from the PRISM library.
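For a flavour of the quantitative constraint models: reachability probabilities of a fixed Markov chain satisfy one linear equation per state; in the thesis's Parametric Interval Markov Chains the transition probabilities themselves additionally become constrained variables. The toy two-state chain below, encoded with the z3-solver Python package, is an invented illustration, not one of the thesis's models.

```python
# Invented illustration: encode reachability of 'goal' in a fixed
# Markov chain as constraints. Assumes the z3-solver package.
from z3 import Real, Solver, sat

p0, p1 = Real('p0'), Real('p1')      # probability of reaching 'goal'
s = Solver()
s.add(p0 == 0.5 * p1 + 0.5 * 0.0)    # state 0: to state 1 or a sink
s.add(p1 == 0.8 * 1.0 + 0.2 * p0)    # state 1: to goal or back to 0
s.add(p0 >= 0.4)                      # quantitative property to check

if s.check() == sat:
    print("property satisfiable, e.g. p0 =", s.model()[p0])
```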
22

Gerber, Erick D. B. "A model checker for the LF system." Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/19597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MSc)--University of Stellenbosch, 2007.
ENGLISH ABSTRACT: Computer aided verification techniques, such as model checking, can be used to improve the reliability of software. Model checking is an algorithmic approach for establishing the correctness of temporal logic specifications with respect to the formal description of hardware and software systems. In contrast to traditional testing tools, model checking relies on an exhaustive search of all the possible configurations that these systems may exhibit. Traditionally, model checking is applied to abstract or high-level designs of software. However, interpreting or translating these abstract designs to implementations often introduces subtle errors. In recent years, one trend in model checking has been to apply the model checking algorithm directly to the implementations instead. This thesis is concerned with building an efficient model checker for a small concurrent language developed at the University of Stellenbosch. This special-purpose language, LF, is aimed at the development of small embedded systems. The design of the language was carefully considered to promote safe programming practices. Furthermore, the language and its runtime support system were designed to allow LF programs to be model checked directly. To achieve this, the model checker extends the existing runtime support infrastructure to generate the state space of an executing LF program.
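The core of such a model checker is exhaustive state-space generation. The Python sketch below illustrates the idea on a hypothetical two-process mutual exclusion model; it is a toy breadth-first exploration with a visited set, not the LF runtime's actual mechanism.

```python
# A minimal sketch of state-space generation: exhaustively explore every
# reachable state of a transition system via breadth-first search. The
# tiny two-process mutex model is a hypothetical stand-in for an LF program.
from collections import deque

def successors(state):
    """Each process: 0 = idle, 1 = trying, 2 = critical."""
    p, q = state
    out = []
    for who, (a, b) in (("p", (p, q)), ("q", (q, p))):
        if a == 0:
            nxt = 1
        elif a == 1 and b != 2:          # enter only if the peer is not critical
            nxt = 2
        elif a == 2:
            nxt = 0
        else:
            continue
        out.append((nxt, b) if who == "p" else (b, nxt))
    return out

def explore(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        assert not (s[0] == 2 and s[1] == 2), "mutual exclusion violated"
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

print(len(explore((0, 0))), "reachable states, mutual exclusion holds")
```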
23

Cyriac, Aiswarya, and Aiswarya Cyriac. "Verification of communicating recursive programs via split-width." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2014. http://tel.archives-ouvertes.fr/tel-01015561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigates automata-theoretic techniques for the verification of physically distributed machines communicating via unbounded reliable channels. Each of these machines may run several recursive programs (multi-threading). A recursive program may also use several unbounded stack and queue data structures for its local computation needs. Such real-world systems are so powerful that all verification problems become undecidable. We introduce and study a new parameter called split-width for the under-approximate analysis of such systems. Split-width is the minimum number of splits required in the behaviour graphs to obtain disjoint parts that can be reasoned about independently. Thus it provides a divide-and-conquer approach for their analysis. With the parameter split-width, we obtain optimal decision procedures for various verification problems on these systems, such as reachability and inclusion, and also for satisfiability and model checking against various logical formalisms such as monadic second-order logic, propositional dynamic logic and temporal logics. It is shown that the behaviours of a system have bounded split-width if and only if they have bounded clique-width. Thus, by Courcelle's results on uniformly bounded-degree graphs, split-width is not only sufficient but also necessary to obtain decidability of MSO satisfiability checking. We then study the feasibility of distributed controllers for our generic distributed systems. We propose several controllers, some finite-state and some deterministic, which ensure that the behaviours of the system have bounded split-width. Such a distributedly controlled system yields decidability for the various verification problems by inheriting the optimal decision procedures for split-width. These also extend or complement many known decidable subclasses of systems studied previously.
24

Weitz, Noah. "Analysis of Verification and Validation Techniques for Educational CubeSat Programs." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Since their creation, CubeSats have become a valuable educational tool for university science and engineering programs. Unfortunately, while aerospace companies invest resources to develop verification and validation methodologies based on larger-scale aerospace projects, university programs tend to focus resources on spacecraft development. This paper looks at two different types of methodologies in an attempt to improve CubeSat reliability: generating software requirements and utilizing system and software architecture modeling. Both the Consortium Requirements Engineering (CoRE) method for software requirements and the Monterey Phoenix modeling language for architecture modeling were tested for usability in the context of PolySat, Cal Poly's CubeSat research program. In the end, neither CoRE nor Monterey Phoenix provided the desired results for improving PolySat's current development procedures. While a modified version of CoRE discussed in this paper does allow for basic software requirements to be generated, the resulting specification does not provide any more granularity than PolySat's current institutional knowledge. Furthermore, while Monterey Phoenix is a good tool to introduce students to model-based systems engineering (MBSE) concepts, the resulting graphs generated for a PolySat specific project were high-level and did not find any issues previously discovered through trial and error methodologies. While neither method works for PolySat, the aforementioned results do provide benefits for university programs looking to begin developing CubeSats.
25

Palikareva, Hristina. "Techniques and tools for the verification of concurrent systems." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:fc2028e1-2a45-459a-afdd-70001893f3d8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Model checking is an automatic formal verification technique for establishing the correctness of systems. It has been widely used in industry for analysing and verifying complex safety-critical systems in application domains such as avionics, medicine and computer security, where manual testing is infeasible and even minor errors could have dire consequences. In our increasingly parallelised world, concurrency has become pivotal and seamlessly woven into programming paradigms; it remains, however, extremely challenging when it comes to modelling and establishing correctness of intended behaviour. Tools for model checking concurrent systems face severe limitations due to scalability problems arising from the need to examine all possible interleavings (schedules) of executions of parallel components. Moreover, concurrency poses additional challenges to model checking, giving rise to phenomena such as nondeterminism, deadlock, livelock, etc. In this thesis we focus on adapting and developing novel model-checking techniques for concurrent systems in the setting of the process algebra CSP and its primary model checker FDR. CSP allows for compact modelling and precise analysis of event-based concurrency, grounded on synchronous message passing as a fundamental mechanism of inter-component communication. In particular, we investigate techniques based on symbolic model checking, static analysis and abstraction, all of them exploiting the compositionality inherent in CSP and aiming to increase the scale of systems that can be tractably analysed. Firstly, we investigate symbolic model-checking techniques based on Boolean satisfiability (SAT), which we adapt for the traces model of CSP. We tailor bounded model checking (BMC), which can be used for bug detection, and temporal k-induction, which aims at establishing inductiveness of properties and is capable of both bug finding and establishing the correctness of systems. Secondly, we propose a static analysis framework for establishing livelock freedom of CSP processes, with lessons for other concurrent formalisms. As opposed to traditional exhaustive state-space exploration, our framework employs a system of rules on the syntax of a process to calculate a sound approximation of its fair/co-fair sets of events. The rules either safely classify a process as livelock-free or report inconclusiveness, thereby trading accuracy for speed. Finally, we develop a series of abstraction/refinement schemes for the traces, stable-failures and failures-divergences models of CSP and embed them into a fully automated and compositional CEGAR framework. For each of those techniques we present an implementation and an experimental evaluation on a set of CSP benchmarks.
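To make the bounded model checking idea above concrete, here is a deliberately naive Python sketch. Where FDR-style BMC encodes a k-step unrolling as a SAT formula, this toy enumerates paths of an explicit transition system; the counter system, the bad state, and all names are hypothetical.

```python
# A small illustrative sketch of bounded model checking (BMC): unroll
# executions up to depth k and search for a path reaching a bad state.
# Real BMC encodes this unrolling symbolically for a SAT solver.

INIT = 0

def step(s):                      # toy system: counter mod 8, step by 1 or 3
    return [(s + 1) % 8, (s + 3) % 8]

def bmc(bad, k):
    """Return a counterexample path of length <= k hitting `bad`, or None."""
    paths = [[INIT]]
    for _ in range(k):
        nxt = []
        for p in paths:
            if p[-1] == bad:
                return p          # bug found at this depth
            nxt.extend(p + [t] for t in step(p[-1]))
        paths = nxt
    return None

print(bmc(bad=6, k=5))            # e.g. [0, 3, 6]: a depth-2 counterexample
```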
26

Tuch, Harvey Computer Science &amp Engineering Faculty of Engineering UNSW. "Formal memory models for verifying C systems code." Publisher:University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/41233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Systems code is almost universally written in the C programming language or a variant. C has a very low level of type and memory abstraction and formal reasoning about C systems code requires a memory model that is able to capture the semantics of C pointers and types. At the same time, proof-based verification demands abstraction, in particular from the aliasing and frame problems. In this thesis, we study the mechanisation of a series of models, from semantic to separation logic, for achieving this abstraction when performing interactive theorem-prover based verification of C systems code in higher-order logic. We do not commit common oversimplifications, but correctly deal with C's model of programming language values and the heap, while developing the ability to reason abstractly and efficiently. We validate our work by demonstrating that the models are applicable to real, security- and safety-critical code by formally verifying the memory allocator of the L4 microkernel. All formalisations and proofs have been developed and machine-checked in the Isabelle/HOL theorem prover.
27

De, Oliveira Steven. "Finding constancy in linear routines." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS207/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The criticality of programs constantly reaches new boundaries as they are relied on to take decisions in place of the user (autonomous cars, robot surgeons, etc.). This raises the need to develop safe programs and to verify the already existing ones. Anyone willing to formally prove the soundness of a program faces the two challenges of scalability and undecidability. Millions of lines of code, complexity of the algorithm, concurrency, and even simple polynomial expressions are among the issues formal verification has to deal with. In order to succeed, formal methods rely on state abstraction to analyse approximations of the behaviour of the analysed program. The analysis of loops is a full axis of formal verification, as this construction is still not well understood today. Though some loops can be easily handled when they perform simple operations, there still exist seemingly basic loops whose behaviour has not been solved yet (the Syracuse sequence, for example, is suspected to be undecidable). The most common approach for the treatment of loops is the use of loop invariants, i.e. relations on variables that are true at the beginning of the loop and after every step. In general, invariants are expected to use the same set of expressions as the loop: if a loop manipulates the memory through a structure, for example, invariants will naturally use expressions involving memory operations. However, there exist loops containing only linear instructions that admit only polynomial invariants (for example, the sum $\sum_{i=0}^{n} i$ can be computed by a linear loop and is a degree-2 polynomial in $n$), hence using expressions that are syntactically absent from the loop. Are linear loops more expressive than they appear? This thesis presents new insights on loops containing linear and polynomial instructions. It is already known that linear loops are polynomially expressive, in the sense that if a variable evolves linearly, then any monomial of this variable evolves linearly. The first contribution of this thesis is the characterisation of a class of polynomial loops that is exactly as expressive as linear loops, in the sense that there exists a linear loop with the exact same behaviour. Then, two new methods for generating invariants are presented. The first method is based on abstract interpretation and is focused on a specific kind of linear loops called linear filters. Linear filters play a role in many embedded systems (plane sensors, for example) and require the use of floating-point operations, which may be imprecise and lead to errors if badly handled. Also, the presence of non-deterministic assignments makes their analysis even more complex. The second method takes a different approach and generates invariants for any kind of linear loop. It starts from a new theorem presented in this thesis that characterises invariants as the eigenvectors of the linear transformation at hand, and is generalised to deal with conditions, nested loops and non-determinism in assignments. Generating invariants is not an end in itself but a means; this thesis investigates the kind of problems that the invariants generated by the second method can solve. The first problem treated is the Kannan-Lipton Orbit problem, for which non-reachability certificates can be generated using the eigenvectors of the transformation considered. The thesis also presents the tool PILAT implementing this technique and compares its efficiency with other state-of-the-art invariant synthesizers. The practical usefulness of the invariants generated by PILAT is demonstrated by using the tool in concert with CaFE, a model checker for C programs based on temporal logics.
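The eigenvector characterisation lends itself to a very short numerical illustration. The sketch below is a hedged toy inspired by (not reproducing) the PILAT approach: the loop body, the matrix, and the initial state are all hypothetical.

```python
# A sketch of the eigenvector view of linear-loop invariants: if a loop
# body performs x <- A @ x, then a left eigenvector phi of A with
# eigenvalue 1 yields the linear invariant phi . x == constant.
import numpy as np

# Hypothetical loop body: (a, b) <- (3a - 2b, a).
A = np.array([[3.0, -2.0],
              [1.0,  0.0]])

vals, vecs = np.linalg.eig(A.T)        # left eigenvectors of A
x0 = np.array([5.0, 1.0])
for lam, phi in zip(vals, vecs.T):
    if np.isclose(lam, 1.0):
        x, c0 = x0.copy(), phi @ x0
        for _ in range(6):
            x = A @ x
            assert np.isclose(phi @ x, c0)   # phi . x is preserved by the loop
        print("invariant linear form:", phi, "constant value:", c0)
```

Here the eigenvalue-1 left eigenvector is proportional to (1, -2), recovering the invariant a - 2b = constant, which indeed holds since a' - 2b' = (3a - 2b) - 2a = a - 2b.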
28

Zeng, Reng. "Methods for Modeling and Analyzing Concurrent Software." FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements, and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort in developing a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at one time. However, predictive tools need to consider the tradeoffs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
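As a rough illustration of the pair-of-threads, single-variable view mentioned above, the sketch below scans a hypothetical access trace for classic unserializable interleaving triples. It is a simplification for exposition, not McPatom's prediction algorithm, which works on partial order models rather than single observed traces.

```python
# A simplified sketch of atomicity-violation patterns: for one shared
# variable and two threads, an access triple (t1, t2, t1) whose
# interleaving is unserializable, e.g. read-write-read. Trace format
# (thread, op, variable) is hypothetical.
UNSERIALIZABLE = {("r", "w", "r"), ("w", "w", "r"),
                  ("w", "r", "w"), ("r", "w", "w")}

def atomicity_violations(trace, var):
    acc = [(t, op) for (t, op, v) in trace if v == var]
    hits = []
    for i in range(len(acc) - 2):
        (t1, a), (t2, b), (t3, c) = acc[i:i + 3]
        if t1 == t3 and t2 != t1 and (a, b, c) in UNSERIALIZABLE:
            hits.append((i, (a, b, c)))
    return hits

trace = [("T1", "r", "x"), ("T2", "w", "x"), ("T1", "r", "x")]
print(atomicity_violations(trace, "x"))   # [(0, ('r', 'w', 'r'))]
```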
29

Wichers, Sacha. "Verification of numerical models for hydrothermal plume water through field measurements at TAG." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/39173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution), 2005.
Includes bibliographical references (p. 63-65).
Hydrothermal vents discharge superheated, mineral rich water into our oceans, thereby providing a habitat for exotic chemosynthetic biological communities. Hydrothermal fluids are convected upwards until they cool and reach density equilibrium, at which point they advect laterally with the current. The neutrally buoyant plume layer can have length scales on the order of several kilometers, and it therefore provides the best means to detect the presence of vent fields on the seafloor, which typically have length scales on the order of a few meters. This thesis uses field measurements of the velocity, temperature and particulate anomalies associated with the TAG hydrothermal plume to demonstrate that tidal currents exert a strong impact on the plume shape, and to provide new constraints on the thermal power of the TAG hydrothermal system. The results show that the power output of the TAG system is on the order of 6000 MW, which is up to two orders of magnitude greater than previous estimates, and that there is considerably more entrainment than had previously been assumed.
30

Kerbrat, Alain. "Méthodes symboliques pour la vérification de processus communicants : étude et mise en oeuvre." Université Joseph Fourier (Grenoble), 1994. http://tel.archives-ouvertes.fr/tel-00005100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work deals with the formal verification of parallel programs. Among the methods commonly used, we focus on those based on the construction of a model of the program to be verified; verification proper is then carried out on this model. This approach is limited by the explosion in the size of the model as soon as the program under consideration is of realistic complexity. Our goal is the study and implementation of techniques that make verification possible despite this explosion. The techniques we present are linked by a common characteristic: the use of symbolic methods for representing the model. We first study model reduction techniques. These reductions are performed with respect to equivalence relations based on the notion of bisimulation. In particular, we study an algorithm that minimises a model while it is being generated (Minimal Model Generation). In a second part, we consider two symbolic techniques for representing models: on the one hand, Binary Decision Diagrams, which allow efficient manipulation of Boolean formulas, and on the other hand, systems of linear inequalities, known as convex polyhedra, for the manipulation of integer variables. Using these techniques makes it possible to represent and manipulate models whose size is often prohibitive for classical enumerative methods. We present the implementation of methods for comparing and reducing models with Binary Decision Diagrams, in particular the Minimal Model Generation algorithm. Applying the corresponding tool to several examples of LOTOS programs showed the benefits, but also the limits, of this symbolic representation. Finally, we present a static analysis method for protocols based on convex polyhedra. This analysis computes upper approximations of program invariants and checks the validity of properties expressed in terms of the program variables.
31

Bull, J. J. D. "A comparison of two different model checking techniques." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MSc)--University of Stellenbosch, 2003.
ENGLISH ABSTRACT: Model checking is a computer-aided verification technique that is used to verify properties about the formal description of a system automatically. This technique has been applied successfully to detect subtle errors in reactive systems; such errors are extremely difficult to detect using traditional testing techniques. The conventional method of applying model checking is to construct a model manually, either before or after the implementation of a system. Constructing such a model requires time, skill and experience. An alternative method is to derive a model from an implementation automatically. In this thesis two techniques of applying model checking to reactive systems are compared, both of which have problems as well as advantages. Two specific strategies are compared in the area of protocol development: 1. Structuring a protocol as a transition system, modelling the system, and then deriving an implementation from the model. 2. Automatically translating implementation code to a verifiable model. Structuring a reactive system as a transition system makes it possible to verify the control flow of the system at implementation level, as opposed to verifying the control flow at an abstract level. The result is a closer correspondence between implementation and specification (model). At the same time testing, which is restricted to small, independent code fragments that manipulate data, is simplified significantly. The construction of a model often takes too long; therefore, verification results may no longer be applicable when they become available. To address this problem, the technique of automated model extraction was suggested. This technique aims to reduce the time required to construct a model by minimising manual input during model construction. A transition system is a low-level formalism and direct execution through interpretation is feasible; however, the overhead of interpretation is the major disadvantage of this technique. Automated model extraction has disadvantages too. For example, differences between the implementation and specification languages, such as constructs present in the implementation language that cannot be expressed in the modelling language, make the development of an automated model extraction tool extremely difficult. In conclusion, the two techniques are compared against a set of software development considerations. Since a specific technique is not always preferable, guidelines are proposed to help select the best approach in different circumstances.
32

Ben, Henda Noomene. "Infinite-state Stochastic and Parameterized Systems." Doctoral thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

A major current challenge consists in extending formal methods in order to handle infinite-state systems. Infiniteness stems from the fact that the system operates on unbounded data structures such as stacks, queues, clocks and integers, as well as from parameterization.

Systems with unbounded data structures are natural models for reasoning about communication protocols, concurrent programs, real-time systems, etc., while parameterized systems are more suitable if the system consists of an arbitrary number of identical processes, which is the case for cache coherence protocols, distributed algorithms and so forth.

In this thesis, we consider model checking problems for certain fundamental classes of probabilistic infinite-state systems, as well as the verification of safety properties in parameterized systems. First, we consider probabilistic systems with unbounded data structures. In particular, we study probabilistic extensions of Lossy Channel Systems (PLCS), Vector Addition Systems with States (PVASS) and Noisy Turing Machines (PNTM). We show how the semantics of such models can be described by infinite-state Markov chains, and then define certain abstract properties which allow several qualitative and quantitative problems to be model checked.

Then, we consider parameterized systems and provide a method which allows checking safety for several classes that differ in the topologies (linear or tree) and the semantics (atomic or non-atomic). The method is based on deriving an over-approximation which allows the use of a symbolic backward reachability scheme. For each class, the over-approximation we define guarantees monotonicity of the induced approximate transition system with respect to an appropriate order. This property is convenient in the sense that it preserves upward closedness when computing sets of predecessors.
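The symbolic backward reachability scheme mentioned above rests on representing upward-closed sets by their minimal elements. The following Python sketch illustrates the idea on a hypothetical two-counter toy system; the transition rules are invented for the example and the monotonicity of the system is what guarantees termination.

```python
# A schematic sketch of symbolic backward reachability for monotone
# (well quasi-ordered) systems: upward-closed sets are represented by
# minimal generators, and pre-images are iterated to a fixpoint.

def leq(a, b):                       # componentwise order on counter pairs
    return a[0] <= b[0] and a[1] <= b[1]

def minimize(states):                # keep only minimal generators
    return {s for s in states if not any(leq(t, s) and t != s for t in states)}

def pre(s):
    """Minimal predecessors of the upward closure of s under two rules:
       (x, y) -> (x - 1, y + 1)  and  (x, y) -> (x, y + 1)."""
    x, y = s
    return [(x + 1, max(y - 1, 0)), (x, max(y - 1, 0))]

def backward_reach(bad):
    frontier, basis = set(bad), set(bad)
    while frontier:
        new = {p for s in frontier for p in pre(s)}
        new = {p for p in new if not any(leq(b, p) for b in basis)}
        basis = minimize(basis | new)
        frontier = new
    return basis

print(backward_reach({(0, 3)}))      # {(0, 0)}: every state can reach y >= 3
```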

33

Padoan, Tommaso. "Tableaux, automata and games for true concurrency properties." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3425438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the formal verification of software systems, model checking is one of the most studied and applied techniques. Systems represented by mathematical models are checked against properties expressed as logical formulae. When the subjects of the verification are concurrent and distributed systems, models and specification logics capable of faithfully capturing the concurrent features of computations, like causal dependency and independence between actions of the systems, can sometimes be convenient or even essential. In this thesis we study the foundations of the model-checking problem in a logic for true concurrency, whose formulae predicate over the executability of actions in computations and over their causality and concurrency relations. The logic represents the logical counterpart of history-preserving bisimilarity, a popular behavioural equivalence in the true-concurrency spectrum and one of the strongest known to be decidable. It includes least and greatest fixpoint operators in mu-calculus style, making it able to express properties of infinite computations. The logic is interpreted over event structures, a classical semantic model for true concurrency, which describes the behaviour of systems in terms of the events that can occur in computations and the causal dependencies and conflicts between them. It can be naturally used over any operational model that admits a causal semantics. Of particular interest here are Petri nets, a widely adopted operational model allowing a natural representation of concurrent and distributed systems. We prove that the model-checking problem is decidable over a class of event structures satisfying a suitable regularity condition, and provide three decision procedures based on three different approaches. Along the lines of some classical work for the modal mu-calculus, we first develop a local model-checker in the form of a tableau system, for which we prove termination, soundness and completeness. The tableau system allows for a clean and intuitive proof of decidability, but a direct implementation of the procedure can be extremely inefficient. On the way to a practical implementation, we then present an automata-based model-checking technique. Given a formula and a model, a parity tree automaton is constructed whose language is non-empty if and only if the model satisfies the formula. The automaton is usually infinite. We discuss how it can be quotiented to a finite automaton preserving its language via a suitable equivalence over its states. We show how the method instantiates on finite safe Petri nets, where such equivalence can be effectively computed. Finally, we devise a general game-theoretic approach to the solution of systems of fixpoint equations over a wide class of lattices. This applies, in principle, to a multitude of verification tasks, which reduce to the computation of fixpoints of monotone functions. Since the model-checking problem of interest in this thesis is an instance of those tasks, we show how the method can be instantiated and applied to it.
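Many of the verification tasks mentioned at the end reduce to fixpoints of monotone functions over lattices. As a baseline intuition (the thesis's game-based machinery replaces naive iteration like this), here is a minimal Kleene iteration sketch over the powerset lattice of a hypothetical four-state system.

```python
# A small sketch of solving a least fixpoint equation over a finite
# lattice by Kleene iteration: compute mu X. BAD | pre(X), i.e. the
# states that can reach a bad state. States and edges are hypothetical.
STATES = {0, 1, 2, 3}
EDGES = {(0, 1), (1, 2), (2, 2), (3, 3)}
BAD = {2}

def pre(xs):                      # states with a successor inside xs
    return {s for (s, t) in EDGES if t in xs}

def lfp(f, bottom=frozenset()):
    cur = bottom
    while True:
        nxt = frozenset(f(cur))
        if nxt == cur:
            return cur
        cur = nxt

print(sorted(lfp(lambda xs: BAD | pre(xs))))   # [0, 1, 2]: state 3 cannot reach 2
```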
34

Bekkouche, Mohammed. "Combinaison des techniques de Bounded Model Checking et de programmation par contraintes pour l'aide à la localisation d'erreurs : exploration des capacités des CSP pour la localisation d'erreurs." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4096/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A model checker can produce a counterexample trace for an erroneous program, which is often difficult to exploit to locate the errors in the source code. In this thesis, we propose an error localization algorithm working from counterexamples, named LocFaults, combining Bounded Model Checking (BMC) approaches with a constraint satisfaction problem (CSP). This algorithm analyses the paths of the CFG (Control Flow Graph) of the erroneous program to compute the subsets of suspicious statements that would allow the program to be corrected. Indeed, we generate a system of constraints for the paths of the control flow graph on which at most k conditional statements may be wrong. We then compute the MCSs (Minimal Correction Sets) of bounded size on each of these paths. Removing one of these sets of constraints yields a maximal satisfiable subset, in other words, a maximal subset of constraints satisfying the postcondition. To compute the MCSs, we extend the generic algorithm proposed by Liffiton and Sakallah in order to deal with programs with numerical statements more efficiently. This approach has been experimentally evaluated on a set of academic and realistic programs.
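The notion of a Minimal Correction Set can be made concrete with a tiny brute-force enumerator. The sketch below is purely illustrative: LocFaults computes bounded-size MCSs path by path with a far more efficient Liffiton-Sakallah-style algorithm, and the three constraints here are hypothetical predicates over one integer.

```python
# An illustrative brute-force sketch of Minimal Correction Sets (MCSs):
# inclusion-minimal subsets of constraints whose removal makes the rest
# satisfiable.
from itertools import combinations

CONSTRAINTS = {            # an unsatisfiable toy set: x < 5, x > 3, x > 7
    "c1": lambda x: x < 5,
    "c2": lambda x: x > 3,
    "c3": lambda x: x > 7,
}

def satisfiable(names):    # finite-domain check stands in for a real solver
    return any(all(CONSTRAINTS[n](x) for n in names) for x in range(-50, 50))

def mcses(max_size):
    found = []
    for k in range(1, max_size + 1):
        for cand in combinations(CONSTRAINTS, k):
            rest = [n for n in CONSTRAINTS if n not in cand]
            if satisfiable(rest) and not any(set(m) <= set(cand) for m in found):
                found.append(cand)
    return found

print(mcses(2))            # [('c1',), ('c3',)]: drop x < 5, or drop x > 7
```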
35

Djoudi, Adel. "Binary level static analysis." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Automatic software verification methods have seen increasing success since the early 2000s, following several industrial successes (Microsoft, Airbus, etc.). Static program analysis aims to automatically infer verified properties of programs from their descriptions. Standard static analysis techniques apply to the software source code, written for instance in C or Java. However, access to source code is not possible for many security-related applications, whether because the source code is not available (mobile code, computer viruses) or because the developer does not disclose it (off-the-shelf components, third-party certification). In this dissertation we are interested in the design and development of a static binary analysis platform for security analysis. Our contributions are made at three levels: semantics, implementation and static analysis. First, the semantics of the analysed binary programs is based on a generic, simple and concise formalism called DBA, which is extended with some specification and abstraction mechanisms in this dissertation. A well-defined semantics of binary programs also requires an adequate memory model. We propose a new memory model adapted to binary-level requirements and inspired by recent work on low-level C. This new model allows us to enjoy the abstraction of the region-based memory model while keeping the expressiveness of the flat model. Second, our binary code analysis platform BinSec offers three basic services: disassembly, simulation and static analysis. Each machine instruction is translated into a block of semantically equivalent DBA instructions, and the platform handles a large part of the x86 instruction set. A simplification step eliminates useless intermediate calculations in order to ease further analyses; our simplifications notably eliminate up to 75% of flag updates. Finally, we developed a static analysis engine for binary programs based on abstract interpretation. Besides abstract domains specifically adapted to binary analysis, we focused on user control of the trade-offs between accuracy/correctness and efficiency. In addition, we offer an original approach for recovering high-level conditions from low-level conditions in order to enhance analysis precision. The approach is sound, efficient, platform-independent, and achieves a very high recovery ratio.
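The translation of one machine instruction into a block of explicit intermediate instructions can be pictured with a toy sketch. This is not BinSec's actual DBA formalism; the mnemonics, IR constructors, and flag names below are hypothetical.

```python
# A schematic sketch of lifting one machine instruction into a block of
# intermediate instructions, making side effects such as flag updates
# explicit (which is what later simplification passes can then prune).
def translate(instr):
    op, dst, src = instr
    if op == "add":
        return [
            ("tmp", ":=", (dst, "+", src)),        # the arithmetic itself
            ("CF",  ":=", ("carry", dst, src)),    # explicit flag updates
            ("ZF",  ":=", ("tmp", "==", 0)),
            (dst,   ":=", "tmp"),
        ]
    raise NotImplementedError(op)

for step in translate(("add", "eax", "ebx")):
    print(step)
```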
36

Boulgakov, Alexandre. "Improving scalability of exploratory model checking." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:76acb8bf-52e7-4078-ab4f-65f3ea07ba3d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As software and hardware systems grow more complex and we begin to rely more on their correctness and reliability, it becomes exceedingly important to formally verify certain properties of these systems. If done naïvely, verifying a system can easily require exponentially more work than running it, in order to account for all possible executions. However, there are often symmetries or other properties of a system that can be exploited to reduce the amount of necessary work. In this thesis, we present a number of approaches that do this in the context of the CSP model checker FDR. CSP is named for Communicating Sequential Processes, or parallel combinations of state machines with synchronised communications. In the FDR model, the component processes are typically converted to explicit state machines while their parallel combination is evaluated lazily during model checking. Our contributions are motivated by this model but applicable to other models as well. We first address the scalability of the component machines by proposing a lazy compiler for a subset of CSPM selected to model parameterised state machines. This is a typical case where the state space explosion can make model checking impractical, since the size of the state space is exponential in the number and size of the parameters. A lazy approach to evaluating these systems allows only the reachable subset of the state space to be explored. As an example, in studying security protocols, it is common to model an intruder parameterised by knowledge of each of a list of facts; even a relatively small 100 facts results in an intractable 2^100 states, but the rest of the system can ensure that only a small number of these states are reachable. Next, we address the scalability of the overall combination by presenting novel algorithms for bisimulation reduction with respect to strong bisimulation, divergence-respecting delay bisimulation, and divergence-respecting weak bisimulation. Since a parallel composition is related to the Cartesian product of its components, performing a relatively time-consuming bisimulation reduction on the components can reduce its size significantly; an efficient bisimulation algorithm is therefore very desirable. This thesis is motivated by practical implementations, and we discuss an implementation of each of the proposed algorithms in FDR. We thoroughly evaluate their performance and demonstrate their effectiveness.
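Bisimulation reduction can be illustrated with a naive partition refinement sketch: start from a single block and split blocks whose states disagree on which blocks they can move into, until stable. FDR uses far more efficient algorithms than this; the labelled transition system below is a hypothetical example.

```python
# A compact sketch of naive partition refinement for strong bisimulation.
EDGES = {                       # state -> set of (label, successor)
    0: {("a", 1)}, 1: {("b", 0)},
    2: {("a", 3)}, 3: {("b", 2)},
    4: {("a", 4)},
}

def signature(s, blocks):
    block_of = {q: i for i, b in enumerate(blocks) for q in b}
    return frozenset((lbl, block_of[t]) for (lbl, t) in EDGES[s])

def bisim_classes(states):
    blocks = [set(states)]
    while True:
        new = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(signature(s, blocks), set()).add(s)
            new.extend(groups.values())
        if len(new) == len(blocks):   # no block was split: partition is stable
            return new
        blocks = new

print(bisim_classes(EDGES))     # e.g. [{0, 2}, {4}, {1, 3}]: 0 and 2 are bisimilar
```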
37

Šimáček, Jiří. "Verifikace Programů se složitými datovými strukturami." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-261270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis addresses the verification of infinite-state systems, in particular the verification of programs that use complex dynamically linked data structures. Many different approaches to this problem have appeared in the past, but none of them has so far been robust enough to work in all cases encountered in practice. In order to provide a higher degree of automation and, at the same time, to enable the verification of programs with more complex data structures, we propose in this work a new approach based mainly on tree automata, but also partially inspired by some ideas taken from methods based on separation logic. In addition, we present several improvements in the implementation of tree automata operations, which are crucial for the practical applicability of the proposed verification method. Specifically, we give an optimised algorithm for computing simulations over labelled transition systems, with which simulations over tree automata can be computed more efficiently. We also present a new algorithm for testing inclusion of tree automata, together with experiments showing that this algorithm outperforms other existing approaches.
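The basic object such methods manipulate, a bottom-up run of a tree automaton over a term, fits in a few lines. The automaton below (recognising well-formed binary trees of nils) is a hypothetical toy, not one of the shape abstractions used in the thesis.

```python
# An illustrative sketch of a bottom-up tree-automaton run: a term over
# binary "node" and nullary "nil" is accepted if evaluation of its
# subterms reaches a final state.
RULES = {                       # (symbol, child states) -> state
    ("nil", ()): "q",
    ("node", ("q", "q")): "q",
}
FINAL = {"q"}

def run(term):
    """term = (symbol, (subterm, ...)); returns the reached state or None."""
    sym, kids = term
    states = tuple(run(k) for k in kids)
    return RULES.get((sym, states))

tree = ("node", (("nil", ()), ("node", (("nil", ()), ("nil", ())))))
print(run(tree) in FINAL)       # True: the term is accepted
```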
38

Fruth, Matthias. "Formal methods for the analysis of wireless network protocols." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:df2c08f4-001c-42d3-a2f4-9922f081fb49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, probabilistic model checking, a formal method for verification and performance analysis, is used. Contrary to test and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols. First, we present a new hybrid approach that effectively combines probabilistic model checking and state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods and their implementation in a tool. Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties. Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse Quality of Service and energy properties of it. Furthermore, we assess the applicability of different interference models. Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies. This is the first application of probabilistic model checking both to protocols of the ZigBee standard and protocols for slot allocation.
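At the heart of the probabilistic model checking used throughout is the computation of reachability probabilities. The following sketch shows the basic value iteration on a hypothetical discrete-time Markov chain, loosely in the spirit of a randomised backoff step; it is a textbook baseline, not the thesis's tooling.

```python
# A minimal sketch of value iteration: approximate the probability of
# eventually reaching a target state in a discrete-time Markov chain.
P = {                          # state -> list of (probability, successor)
    "try":     [(0.8, "sent"), (0.2, "collide")],
    "collide": [(1.0, "try")],
    "sent":    [(1.0, "sent")],
}
TARGET = "sent"

def reach_probability(eps=1e-12):
    prob = {s: 1.0 if s == TARGET else 0.0 for s in P}
    while True:
        nxt = {s: 1.0 if s == TARGET else
               sum(p * prob[t] for (p, t) in P[s]) for s in P}
        if max(abs(nxt[s] - prob[s]) for s in P) < eps:
            return nxt
        prob = nxt

print(reach_probability()["try"])   # ~1.0: a send eventually succeeds
```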
39

Ferrarezi, Rodrigo César. "Framework para modelagem e verificação formal de programas de controle de sistemas instrumentados de segurança." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-31122015-112539/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Due to the high complexity of today's productive systems, the design of control systems that comply with the applicable industrial standards, together with the possible negative impacts on human beings, on the environment and on equipment, demands the development of control solutions that are both safe and stable, since some systems have to operate nonstop. One approach to developing systems with such requirements is the use of Safety Instrumented Systems (SIS) complying with the standards IEC 61508 and IEC 61511. However, as in the development of any kind of software, SIS control programs are also prone to specification and design errors, even when the control programs are developed according to the applicable standards. Besides design errors, it must be taken into consideration that the SIS prevention and mitigation layers, as prescribed by the standards, can be developed individually and may thus present unanticipated or undesirable behaviours when operating together. One way to improve the reliability of these control programs, which is also required by the safety standards IEC 61508 and IEC 61511 as part of the SIS development cycle, is the application of formal verification techniques to the control software models. Another is to use a unified approach for modelling these control systems, and thus have the opportunity to understand their interactions better. Currently, one of the most prominent techniques for the verification of systems is model checking. This technique performs an exhaustive search of the state space of an event-driven system, verifying the properties specified as propositions established in temporal logic. In this work, the TCTL logic is used due to its ability to express properties in the dense-time domain. As a computational tool, the GHENeSys environment is used, as it provides a unified environment for modelling, simulation and verification of systems, combining the benefits of Petri nets for modelling with model checking techniques for formal verification.
40

Kotoun, Michal. "Symbolické automaty v analýze programů s řetězci." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-433553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Many applications receive, send, and process data in textual form. Correct and safe processing of these data is typically ensured by so-called string sanitization. Using formal verification methods, it is possible to analyse such string operations and check whether they are correctly designed and implemented. Our goal is to create a general tool for the analysis of systems whose configurations can be encoded as words over a suitable alphabet, together with its specialisation for the analysis of string-manipulating programs. We first describe finite automata and transducers, and then various classes and subclasses of symbolic transducers, in particular their limitations. Based on this, a new type of symbolic transducer is designed for use in program analysis. We then describe regular model checking, especially its variant based on automata abstraction (ARMC), which is known to overcome the state explosion problem in automata very successfully and allows the analysis to reach a fixpoint. Next, we design our own analysis, based on the principles of ARMC, of programs written in the imperative paradigm, especially string-manipulating programs. This is followed by a description of the tool's implementation, with emphasis on its practical properties. The important parts of the AutomataDotNet library, on which the tool builds, are also described. The thesis concludes with a discussion of experiments with the tool carried out on examples from the LibStranger library.
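The transducer view of string sanitization can be pictured with a one-state example: read an input character, emit a replacement. Real symbolic transducers label edges with predicates over character classes rather than concrete characters; the HTML-escaping table below is purely illustrative.

```python
# A toy sketch of a sanitizer as a one-state transducer: one transition
# per input symbol, each emitting an output word.
def sanitize(s):
    table = {'"': "&quot;", "'": "&#39;", "<": "&lt;", ">": "&gt;"}
    out = []
    for ch in s:                    # one transition per input character
        out.append(table.get(ch, ch))
    return "".join(out)

print(sanitize('<a href="x">'))     # &lt;a href=&quot;x&quot;&gt;
```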
41

Bobot, François. "Logique de séparation et vérification déductive." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00652508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is part of the effort to prove programs correct by means of deductive verification. Deductive verification consists in producing, from a program's source code (what it does) and its specification (what it is supposed to do), a conjecture which, if valid, shows that the program and its specification agree. Automated theorem provers are mainly used to establish the validity of these formulas. When proving programs that use heap-allocated data structures, it is elegant and effective to write specifications in separation logic, which appeared about a decade ago. This, however, requires proving conjectures containing the connectives of separation logic, whereas automated provers have mostly made progress in first-order logic, which does not include them. This thesis proposes techniques that let the ideas of separation logic appear in specifications while retaining the ability to use first-order provers. The conjectures thus produced, however, are not in the same first-order logic as that of the provers. To enable greater automation, this thesis also defines new conversions between polymorphic first-order logic and the many-sorted first-order logic used by most provers. The first part led to an implementation in the Jessie tool; the second led to a substantial contribution to the Why3 tool, particularly to the architecture and implementation of the transformations that perform these simplifications and conversions.
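As a rough illustration of the separating conjunction that such encodings must capture, the following sketch (my own, not from the thesis) evaluates P * Q over heaps represented as finite maps, by searching for a partition of the heap into two disjoint parts; real encodings into first-order logic avoid this explicit search.

```python
from itertools import combinations

# A naive semantics for the separating conjunction P * Q over heaps
# modeled as finite maps from locations to values. Illustrative only.

def sep_conj(p, q, heap):
    """h |= P * Q iff h splits into disjoint h1, h2 with h1 |= P and h2 |= Q."""
    locs = list(heap)
    for r in range(len(locs) + 1):
        for part in combinations(locs, r):
            h1 = {l: heap[l] for l in part}
            h2 = {l: heap[l] for l in locs if l not in part}
            if p(h1) and q(h2):
                return True
    return False

def points_to(loc, val):
    """The assertion loc |-> val: holds exactly of the singleton heap."""
    return lambda h: h == {loc: val}

heap = {1: 10, 2: 20}
print(sep_conj(points_to(1, 10), points_to(2, 20), heap))  # True
print(sep_conj(points_to(1, 10), points_to(1, 10), heap))  # False: no disjoint split
```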
42

Latham, J. T. "Abstraction in program verification." Thesis, University of Manchester, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.372026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Nutz, Alexander. "Data flow in program verification." Freiburg : Universität, 2019. http://d-nb.info/1209878453/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Gonzalez, Perez Carlos Alberto. "Pragmatic model verification." Thesis, Nantes, Ecole des Mines, 2014. http://www.theses.fr/2014EMNA0189/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Model-Driven Engineering (MDE) is a popular approach to software development that promotes the use of models as first-class citizens in the development process. In an MDE-based process, software is developed by creating models that are successively transformed into other models and eventually into source code. When MDE is applied to the development of complex software systems, the complexity of models and model transformations increases, putting at risk both the reliability of the development process and the soundness of the resulting software. Traditionally, ensuring software correctness and the absence of errors has been addressed by software verification approaches, based on formal analysis techniques, and by software testing. To ensure the reliability of MDE-based development processes, these techniques have been adapted to establish the correctness of models and model transformations. The objective of this thesis is to provide new mechanisms that improve the landscape of approaches devoted to the verification of static models, and to analyze how these static model verification approaches can assist in testing model transformations.
45

Fan, Yang, Hidehiko Masuhara, Tomoyuki Aotani, Flemming Nielson, and Hanne Riis Nielson. "AspectKE*: Security aspects with program analysis for distributed systems." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4136/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Enforcing security policies in distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is the set of predicates and functions that extract results of static program analysis, which are useful for defining security aspects that must know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overhead minimal. We implemented a compiler for AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
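To illustrate the idea of advice that consults static-analysis results before permitting an operation, here is a small sketch (entirely hypothetical; AspectKE*'s actual syntax and primitives differ) in which a wrapper around a tuple-space read denies access when the analysis predicts the requesting process may later forward the data to a forbidden channel:

```python
# Hypothetical sketch of analysis-guided advice around a tuple-space
# operation; AspectKE* is a separate AOP language and its real syntax
# and API are not shown here.

# Result of a (hypothetical) load-time static analysis: for each process
# kind, the set of channels its future behavior may write to.
future_writes = {
    "logger": {"audit_log"},
    "plugin": {"audit_log", "external_net"},  # untrusted component
}

def read_tuple(space, pattern):
    """Base operation: read a tuple matching a pattern (stub)."""
    return space.get(pattern)

def secured_read(space, pattern, process_kind, forbidden=frozenset({"external_net"})):
    """Advice: deny the read if static analysis says the process's
    future behavior could leak the data to a forbidden channel."""
    if future_writes.get(process_kind, set()) & forbidden:
        raise PermissionError(f"{process_kind} may leak {pattern}")
    return read_tuple(space, pattern)

space = {"patient_record": ("alice", "confidential")}
print(secured_read(space, "patient_record", "logger"))  # allowed
# secured_read(space, "patient_record", "plugin")       # raises PermissionError
```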
46

Clouet, Myriam. "Langage de spécification et outil de vérification pour le consentement et la nécessité des données fondés sur une classification relative au respect de la vie privée." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Privacy is a fundamental right enshrined in many laws around the world, so verifying that a system complies with privacy requirements is crucial. Privacy is, however, a complex notion. Ensuring that a software system respects privacy requires verifying that the expected privacy properties hold throughout the system lifecycle, which usually involves several abstraction levels and complicates the verification process. Many solutions have been proposed to enhance privacy, and it is often hard to identify precisely whether they target the same problem. This thesis addresses these issues by proposing a way to classify privacy papers and an approach to verify privacy properties at two development stages. It proposes GePyR, a new generic and specializable representation of privacy, and an instantiable ontology, PyCO, that models key privacy elements and their relationships. This classification is evaluated against the literature through a Systematic Mapping Study. The thesis also formalizes two privacy properties, purpose compliance and necessity compliance. It proposes a new specification language, CSpeL, that allows engineers to formally specify the key system elements relevant to these properties, and introduces a new tool, CASTT, to verify the properties on execution traces, on a model, or on a program. The approach is applied to two use cases, each at two abstraction levels (model and code), to evaluate its correctness, efficiency, and usefulness.
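As a rough illustration of checking purpose compliance on an execution trace, the sketch below (my own; the event shapes and names are hypothetical, and CSpeL/CASTT work quite differently in detail) walks a trace of data uses and flags any use whose purpose was not consented to by the data subject:

```python
# Illustrative purpose-compliance check over an execution trace.
# Names and event shapes are hypothetical, not CASTT's actual input format.

consents = {
    # data subject -> purposes they consented to
    "alice": {"billing", "support"},
    "bob":   {"billing"},
}

trace = [
    # (event, data subject, purpose of the processing)
    ("use_email", "alice", "support"),
    ("use_email", "bob",   "marketing"),  # no consent for this purpose
]

def purpose_compliant(trace, consents):
    """Return the list of trace events whose purpose lacks consent."""
    violations = []
    for event, subject, purpose in trace:
        if purpose not in consents.get(subject, set()):
            violations.append((event, subject, purpose))
    return violations

print(purpose_compliant(trace, consents))
# [('use_email', 'bob', 'marketing')]
```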
47

Axelsson, Roland. "Verification of Non-Regular Program Properties." Diss., LMU München, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-116775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Durand-Gasselin, Antoine. "Automata based logics for program verification." Paris 7, 2013. http://www.theses.fr/2013PA077238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Formal software verification consists in rigorously analyzing programs using mathematical logic in order to demonstrate that they meet their specification, providing strong confidence in the absence of bugs. The choice of the logical framework for the analysis is crucial and must address two issues: the logic must be expressive enough to express the specification and reflect the action of each instruction, yet the formalism must be simple enough that the verification process can be automated efficiently as far as possible. In this thesis we study two logical frameworks and show how to manipulate them. First we focus on first-order logics over automatic structures, such as Presburger arithmetic, which allows properties about integers to be expressed. We show that formulas can be effectively represented as automata, and we detail a generic automata-based algorithm that decides such first-order logics with a complexity that closely matches their hardness. Then we present streaming string transducers, which are automata with a finite number of write-only registers, and investigate their expressiveness. We present this elegant extension of finite-state automata for defining string transformations as an intuitive computational model, and we establish that their expressive power precisely captures the transformations definable in monadic second-order logic.
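For a feel of streaming string transducers, here is a tiny sketch (my own illustration, not from the thesis) of the classic register-based computation of string reversal: a single write-only register is updated copylessly on each input symbol and emitted only at the end, something no one-way finite automaton with bounded memory could do.

```python
# A one-state streaming string transducer computing string reversal.
# The register r is write-only: it is updated by prepending the current
# symbol and never inspected until the input ends. Illustrative sketch.

def sst_reverse(word):
    r = ""               # the single register, initially empty
    for a in word:
        r = a + r        # copyless update: r := a . r
    return r             # output function: emit r at end of input

print(sst_reverse("verification"))  # "noitacifirev"
```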
49

Pace, Kyongsuk P. "FNMOC model verification system." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA349774.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Taghdiri, Mana. "Automating modular program verification by refining specifications." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 205-211).
Modular analyses of software systems rely on the specifications of the analyzed modules. In many analysis techniques (e.g. ESC/Java), the specifications have to be provided by users. This puts a considerable burden on users and thus limits the applicability of such techniques. To avoid this problem, some modular analysis techniques automatically extract module summaries that capture specific aspects of the modules' behaviors. However, such summaries are only useful in checking a restricted class of properties. We describe a static modular analysis that automatically extracts procedure specifications in order to check heap-manipulating programs against rich data structure properties. Extracted specifications are context-dependent; their precision depends on both the property being checked, and the calling context in which they are used. Starting from a rough over-approximation of the behavior of each call site, our analysis computes an abstraction of the procedure being analyzed and checks it against the property. Specifications are further refined, as needed, in response to spurious counterexamples. The analysis terminates when either the property has been validated (with respect to a finite domain), or a non-spurious counterexample has been found. Furthermore, we describe a lightweight static technique to extract specifications of heap-manipulating procedures. These specifications are neither context-dependent, nor do they require any domain finitizations. They summarize the general behavior of procedures in terms of their effect on program state. They bound the values of all variables and fields in the post-state of the procedure by relational expressions in terms of their values in the pre-state. The analysis maintains both upper and lower bounds so that in some cases an exact result can be obtained.
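The refinement loop the abstract describes can be sketched generically (my own schematic, not the thesis's actual algorithm): start from coarse call-site summaries, check the property, and refine the summary blamed by each spurious counterexample.

```python
# Schematic counterexample-guided refinement of procedure specifications.
# check, is_spurious, and refine are placeholders for the checker's
# components; they are passed in rather than invented here.

def verify_with_refinement(program, prop, initial_specs,
                           check, is_spurious, refine, max_iters=100):
    """Iteratively check 'program' against 'prop' using call-site specs,
    refining the spec blamed by each spurious counterexample."""
    specs = dict(initial_specs)
    for _ in range(max_iters):
        cex = check(program, prop, specs)   # None means property validated
        if cex is None:
            return ("valid", specs)
        blamed = is_spurious(cex, program)  # call site whose spec is too coarse
        if blamed is None:
            return ("counterexample", cex)  # real violation
        specs[blamed] = refine(specs[blamed], cex)
    return ("unknown", specs)
```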
by Mana Taghdiri.
Ph.D.
