Dissertations / Theses on the topic 'Vérification de programmes et de modèles'
Sellami, Yanis. "Raisonnement abductif modulo des théories et application à la vérification de programmes." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM018.
This thesis introduces a generic method to compute the prime implicates of a logical formula, i.e., its most general clausal consequences, in any given decidable theory. The principle is to recursively add to an initially empty set literals taken from a preselected set of hypotheses, until it can be proven that the disjunction of these literals is a consequence of the formula under consideration. Proofs of termination, correctness and completeness of this algorithm are provided, ensuring that the clauses the method computes are indeed implicates and that all the prime implicates constructible from the initial set are returned. The algorithm is then applied to program verification, where it is used to generate loop invariants that make it possible to verify assertions on programs. The Abdulot framework, a C++ implementation of the system, is also introduced. Technical considerations required for the design of such systems are detailed, such as their integration within a well-furnished ecosystem of provers and SMT solvers. This implementation is used for an experimental evaluation of the method and shows its practical relevance, as it can solve a large range of problems. This thesis also presents introductory work on an implicant minimizer and applies it in the context of separation logic. The method described in this part could also be used to perform bi-abduction in separation logic, with an algorithm that is described but has not been implemented.
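As an illustration of the idea (not the Abdulot implementation itself), the following Python sketch grows clauses from a hypothesis set until entailment holds, using a brute-force propositional entailment check that only scales to tiny instances; all names are hypothetical.

```python
from itertools import product

def entails(clauses, clause, nvars):
    """Check clauses |= clause by brute-force truth tables (tiny instances only).
    Literals are nonzero ints: v or -v."""
    for assign in product([False, True], repeat=nvars):
        def val(lit):
            b = assign[abs(lit) - 1]
            return b if lit > 0 else not b
        # a counter-model satisfies all clauses but falsifies the candidate clause
        if all(any(val(l) for l in c) for c in clauses) and not any(val(l) for l in clause):
            return False
    return True

def implicates(clauses, hypotheses, nvars):
    """Grow subsets of `hypotheses` into implicates of `clauses`,
    keeping only subset-minimal (prime) ones found during the search."""
    found = []
    def grow(candidate, remaining):
        if entails(clauses, candidate, nvars):
            if not any(set(p) <= set(candidate) for p in found):
                found.append(tuple(candidate))
            return
        for i, lit in enumerate(remaining):
            grow(candidate + [lit], remaining[i + 1:])
    grow([], list(hypotheses))
    return found
```

For the formula x1 ∨ x2 and hypotheses {x1, x2, x3}, the only prime implicate built from the hypotheses is x1 ∨ x2 itself.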
Sangnier, Arnaud. "Vérification de systèmes avec compteurs et pointeurs." Cachan, Ecole normale supérieure, 2008. http://www.theses.fr/2008DENS0051.
In recent years, formal methods have proven to be a successful approach to ensuring that the behavior of a computer system respects certain properties. Among the existing techniques, model checking has been extensively studied and successfully applied to many models, such as counter systems, lossy channel systems, pushdown automata, timed automata, etc. In this thesis, we consider two different models to verify programs manipulating integer variables and pointer variables. In the first part, we deal with counter systems. We define the model and the different restrictions that have been proposed. We then introduce a restricted class of counter systems, called reversal-bounded counter machines, for which many reachability problems are decidable. We show that this class can be extended while keeping the decidability results, and we prove that one can decide whether a Vector Addition System with States is reversal-bounded, which is not possible for general counter systems. We then study the model checking of counter systems with different temporal logics. The temporal logics we consider can speak about the data manipulated by the system. In particular, we show that model checking deterministic one-counter automata against formulae of LTL with registers is decidable, and becomes undecidable for non-deterministic one-counter automata and for two-counter automata. In the second part, we introduce the model of pointer systems, used to represent programs manipulating singly linked lists. We propose an algorithm to translate any pointer system into a bisimilar counter system. This allows us to reuse existing techniques over counter systems to analyze such programs. We then propose an extension of CTL* to verify temporal properties of such programs, and we study the decidability of the model-checking problem for this new logic.
Finally, we present the tool TOPICS (Translation of Programs Into Counter Systems), which translates a C-like program with pointers and integer variables into a counter system.
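The notion of reversal-boundedness used above can be illustrated on a single counter: a run is r-reversal-bounded if the counter alternates between increasing and decreasing phases at most r times. A minimal sketch (illustrative, not from the thesis):

```python
def reversals(run):
    """Count alternations between increasing and decreasing phases
    in a sequence of counter values."""
    count, direction = 0, 0   # direction: +1 increasing, -1 decreasing, 0 unset
    for prev, cur in zip(run, run[1:]):
        if cur == prev:
            continue
        d = 1 if cur > prev else -1
        if direction != 0 and d != direction:
            count += 1          # the counter changed direction: one reversal
        direction = d
    return count

def is_reversal_bounded(run, r):
    """A run is r-reversal-bounded if it performs at most r reversals."""
    return reversals(run) <= r
```

The run 0,1,2,1,0,1 performs two reversals (up, down, up), while a monotone run performs none.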
Blanchard, Allan. "Aide à la vérification de programmes concurrents par transformation de code et de spécifications." Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2073/document.
Formal verification of concurrent programs is a hard task. Different methods exist to perform it, but very few are applied to the verification of programs written in real-life programming languages. On the other hand, formal verification of sequential programs has been successfully applied for many years and allows us to get high confidence in our systems. As an alternative to dedicated concurrent program analyses, we propose a method to transform concurrent programs into sequential ones, making them analyzable by tools dedicated to sequential programs. This work takes place within the analysis framework FRAMA-C, dedicated to the analysis of C code specified with ACSL. The different analyses provided by FRAMA-C are plugins to the framework, currently mostly dedicated to sequential programs. We apply this method to the verification of concurrent code taken from a hypervisor. We describe the automation of the method, implemented as a new FRAMA-C plugin that produces, from a specified concurrent program, an equivalent specified sequential program. We present the basis of a formalization of the method, with the objective of proving its validity. This validity holds only for the class of sequentially consistent programs. We therefore also propose a prototype constraint solver for weak memory models, which can determine whether a program is in this class, depending on the targeted hardware.
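The behaviors such a transformation must account for are, under sequential consistency, exactly the interleavings of the threads' atomic actions. A toy Python exploration (illustrative only, unrelated to the FRAMA-C plugin) makes the classic lost-update bug visible:

```python
from itertools import combinations

def interleavings(a, b):
    """All merges of action lists a and b preserving per-thread order."""
    n, m = len(a), len(b)
    for idx in combinations(range(n + m), n):   # positions taken by thread a
        out, ai, bi = [], 0, 0
        for k in range(n + m):
            if k in idx:
                out.append(a[ai]); ai += 1
            else:
                out.append(b[bi]); bi += 1
        yield out

def check(a, b, init, prop):
    """Run every interleaving from a copy of `init`; collect states violating `prop`."""
    bad = []
    for sched in interleavings(a, b):
        state = dict(init)
        for action in sched:
            action(state)
        if not prop(state):
            bad.append(state)
    return bad
```

With two threads each performing a non-atomic `x += 1` (read into a register, then write back), four of the six interleavings lose an update and end with x == 1 instead of 2.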
Jahier, Erwan. "Analyse dynamique de programme : Mise en oeuvre automatisée d'analyseurs performants et spécifications de modèles d'exécution." Rennes, INSA, 2000. http://www.theses.fr/2000ISAR0009.
Several studies show that most of the software production cost is spent during the maintenance phase. During that phase, in order to locate bugs, optimize programs, or add new functionalities, it is essential to understand programs, and in particular their runtime behavior. Dynamic analysis tools such as debuggers, profilers, or monitors are very useful in that respect. However, such tools are expensive to implement because: (1) they generally require modifying the compiling system, which is tedious and not always possible; (2) the needs in dynamic analysis tools vary from one user to another, depending on their competence, their experience with the programming system, and their knowledge of the code to maintain; (3) such tools are generally difficult to reuse. It is therefore desirable that each user be able to easily specify the dynamic analyses they need. Hence, we propose an architecture that eases the implementation of dynamic analysis tools. This architecture is based on: (1) a systematic instrumentation of the program which gives a detailed image of the execution, the trace; (2) a set of trace-processing primitives that let one analyze the trace efficiently. The resulting analyzers have performance of the same order of magnitude as their equivalents implemented "by hand" by modifying the compiling system. They can be implemented by programmers without any knowledge of the compiling system. This architecture lets them implement the tools they need, adapted to their level of comprehension of the code they are in charge of maintaining. Furthermore, the modular structure of the proposed architecture should ease the reuse of analyzers. This work was carried out in the context of the logical and functional programming language Mercury. However, the concepts we used do not depend on the programming paradigm. The trace on which we base the implementation of our dynamic analysis tools should reflect as closely as possible the runtime behavior of programs.
Therefore, we also propose a framework to specify execution traces. This framework is based on an operational semantics of the language to analyze. Such formal specifications of the trace let us experimentally validate tracers and prove their correctness. This work was carried out in the context of the logic programming language Prolog.
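The architecture described, instrumentation producing a trace plus trace-processing primitives, can be mimicked in a few lines of Python using the interpreter's built-in tracing hook; this sketch is illustrative and unrelated to the Mercury implementation:

```python
import sys
from collections import Counter

def trace_calls(fn, *args):
    """Instrument an execution: record one event per function call, then
    hand the trace to analyses written as ordinary trace processors."""
    events = []
    def tracer(frame, kind, arg):
        if kind == 'call':
            events.append(frame.f_code.co_name)
        return tracer
    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, events

def profile(events):
    """A trace processor: a call-count profiler is just a fold over the trace."""
    return Counter(events)
```

Running `trace_calls(fib, 4)` on a naive recursive Fibonacci records nine calls to `fib`, which `profile` then counts; other analyses (coverage, monitors) are just different folds over the same trace.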
Bobot, François. "Logique de séparation et vérification déductive." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00652508.
Fernandes, Pires Anthony. "Amélioration des processus de vérification de programmes par combinaison des méthodes formelles avec l’Ingénierie Dirigée par les Modèles." Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0023/document.
During software development, and more specifically embedded avionics application development, verification is very expensive. A promising lead to reduce its cost is the use of formal methods: mathematical techniques which allow rigorous, high-value verification tasks to be performed during software development. They are already applied in industry. However, the high level of expertise required for their use is a major obstacle to their widespread adoption. In addition to the verification cost issue, today's software and its development are subject to increasing complexity. Model Driven Engineering (MDE) addresses these difficulties by offering models, and tasks to capitalize on these models, all along the development lifecycle. The goal of this PhD thesis is to establish a link between formal methods and MDE in order to propose to non-expert users a formal and automatic software verification approach which helps to improve software verification processes. We propose to automatically generate annotations, corresponding to the expected behavioral properties of the software, from the design model down to the source code. These annotations can then be verified using deductive proof tools in order to ensure that the behavior of the code conforms to the design model. This PhD thesis takes place in the industrial context of Atos, so it is necessary to take its technical specificities into account. We use UML for design modeling, the C language for software implementation and the Frama-C tool for the proof of this implementation. We also take into account the constraints of the avionics field in which Atos operates, and specifically the certification constraints.
The contributions of this PhD thesis are: the definition of a subset of UML state machines dedicated to the behavioral design of embedded avionics software and in line with current industrial practices; the definition of a C implementation pattern; the definition of generation patterns for the behavioral properties from the design model to the source code; and the implementation of the whole approach in a prototype consistent with the working environment of the potential users associated with Atos. The proposed approach is then assessed with respect to the starting goal of the thesis, the expectations of the software engineering community and related work.
Bardou, Romain. "Vérification de programmes avec pointeurs à l'aide de régions et de permissions." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112220/document.
Deductive verification consists in annotating programs with a specification, i.e. logic formulas which describe the behavior of the program, and proving that programs satisfy their specification. Tools such as the Why platform take a program and its specification as input and compute logic formulas such that, if they are valid, the program satisfies its specification. These logic formulas can be proven automatically or using proof assistants. When a program is written in a language supporting pointer aliasing, i.e. if several variables may denote the same memory cell, then reasoning about the program becomes particularly tricky: it is necessary to specify which pointers may or may not be equal, and invariants of data structures, in particular, are harder to maintain. This thesis proposes a type system which allows the heap to be structured in a modular fashion in order to control pointer aliases and data invariants. It is based on the notions of region and permission. Programs are then translated to Why such that pointers are separated as much as possible, to facilitate reasoning. This thesis also proposes an inference mechanism to alleviate the need to write the region operations introduced by the language. A model is introduced to describe the semantics of the language and prove its safety. In particular, it is proven that if the type of a pointer says that its invariant holds, then this invariant indeed holds in the model. This work has been implemented as a tool named Capucine. Several examples have been written to illustrate the language, and were verified using Capucine.
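The flavor of the region-and-permission discipline can be conveyed by a toy dynamic checker (purely illustrative: Capucine enforces this statically through types, not at runtime, and all names here are hypothetical):

```python
class RegionError(Exception):
    pass

class Heap:
    """A toy discipline in the spirit of regions and permissions: every
    cell lives in a region, and writing requires holding that region's
    sole capability."""
    def __init__(self):
        self.regions = {}          # region name -> {cell: value}
        self.owned = set()         # regions whose capability is currently held

    def new_region(self, name):
        self.regions[name] = {}
        self.owned.add(name)       # creating a region grants its capability

    def write(self, region, cell, value):
        if region not in self.owned:
            raise RegionError(f"no capability for region {region}")
        self.regions[region][cell] = value

    def transfer(self, region):
        """Give the capability away (e.g. to a callee); writes now fail."""
        self.owned.discard(region)
```

Because at most one capability per region exists, two pointers into distinct owned regions can never alias, which is the invariant the static type system exploits.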
Bardou, Romain. "Vérification de programmes avec pointeurs à l'aide de régions et de permissions." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00647331.
Clouet, Myriam. "Langage de spécification et outil de vérification pour le consentement et la nécessité des données fondés sur une classification relative au respect de la vie privée." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG023.
Privacy is a fundamental right implemented in many laws around the world. Therefore, verifying that a system complies with privacy is crucial. However, privacy is also a complex notion. Besides, ensuring compliance of a software system with respect to privacy requires verifying that the expected privacy properties hold during the whole system lifecycle. This usually involves different abstraction levels, which complicates the verification process. Many different solutions have been proposed to enhance privacy, and it is often quite hard to identify precisely whether they target the same problem. This thesis addresses these issues by proposing a way to classify privacy papers and an approach to verify privacy properties at two different development stages. It proposes GePyR, a new generic and specializable representation, and an instantiable ontology, PyCO, that models key privacy elements and their relationships. This classification is evaluated on the literature, using a Systematic Mapping Study. This thesis also formalizes two privacy properties, purpose compliance and necessity compliance. It proposes a new specification language, named CSpeL, that allows engineers to formally specify key system elements with regard to these properties, and introduces a new tool, CASTT, to verify the aforementioned properties on execution traces, on a model or on a program. This approach is applied to two use cases, each at two abstraction levels (model and code), in order to evaluate its correctness, its efficiency and its usefulness.
Ngo, Van Chan. "Vérification Formelle d'un Compilateur Synchrone: de Signal vers C." Phd thesis, Université Rennes 1, 2014. http://tel.archives-ouvertes.fr/tel-01058041.
Chorfi, Redha. "Abstraction et vérification de programmes informatiques." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25710/25710.pdf.
Garavel, Hubert. "Compilation et vérification de programmes LOTOS." Grenoble 1, 1989. http://tel.archives-ouvertes.fr/tel-00004339.
Cassez, Franck. "Compilation et vérification des programmes ELECTRE." Nantes, 1993. http://www.theses.fr/1993NANT2013.
Djoko, Djoko Simplice. "Analyses et vérification des programmes à aspects." Phd thesis, Université de Nantes, 2009. http://tel.archives-ouvertes.fr/tel-00752116.
Atig, Mohamed Faouzi. "Vérification de Programmes Concurrents : Décidabilité et Complexité." Paris 7, 2010. http://www.theses.fr/2010PA077066.
This thesis addresses verification problems for concurrent and recursive systems, as well as concurrent systems with store buffers. We establish the required theoretical basis for automated analyses: decidability and complexity results for reachability problems. First, we are interested in verifying concurrent programs where each process corresponds to a sequential program with (recursive) procedure calls. The difficulty in analyzing such programs comes from the interaction between recursion and concurrency, which makes reachability problems undecidable in general. However, in practice programs obey additional constraints that can be exploited to make the reachability problem decidable; their study is the subject of this thesis. These conditions may be seen as constraints imposed on the order of the actions of the analyzed programs. Moreover, these decidability results can be used to perform an under-approximate analysis that effectively detects bad behaviors of the analyzed programs. Second, we study concurrent programs running under weak memory models. In such programs, the order between actions of the same process is relaxed (for performance reasons) by allowing the permutation of certain types of memory operations. This makes reasoning about the behaviors of concurrent programs much more difficult, and it is not clear how to apply standard reasoning techniques. Our work shows that, depending on the type of relaxation, the reachability problem is decidable (though with high complexity) in some cases, while in others it turns out to be undecidable.
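One classic constraint of the kind mentioned is a bound on the number of context switches, which yields a decidable under-approximation of reachability. A small Python sketch (illustrative, not from the thesis) explores runs of straight-line threads up to k switches:

```python
def reachable(threads, init, k):
    """Final states reachable with at most k context switches: an
    under-approximation that grows toward full reachability as k increases.
    `threads` is a list of action lists; each action maps a state to a state."""
    finals = set()
    stack = [(init, tuple(0 for _ in threads), None, 0)]  # state, pcs, active, switches
    while stack:
        state, pcs, active, switches = stack.pop()
        if all(pcs[i] >= len(a) for i, a in enumerate(threads)):
            finals.add(state)
            continue
        for t, actions in enumerate(threads):
            if pcs[t] >= len(actions):
                continue
            cost = switches + (active is not None and active != t)
            if cost > k:
                continue        # this interleaving exceeds the switch budget
            stack.append((actions[pcs[t]](state),
                          pcs[:t] + (pcs[t] + 1,) + pcs[t + 1:],
                          t, cost))
    return finals
```

On the lost-update example (state = (x, r1, r2), each thread reads x into its register then writes register + 1), one switch only permits the two serial runs ending with x = 2, while two switches expose the buggy outcome x = 1.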
Declerck, David. "Vérification par model-checking de programmes concurrents paramétrés sur des modèles mémoires faibles." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS336/document.
Modern multiprocessors and microprocessors implement weak or relaxed memory models, in which the apparent order of memory operations does not follow the sequential consistency (SC) proposed by Leslie Lamport. Any concurrent program running on such an architecture and designed with an SC model in mind may exhibit new behaviors during its execution, some of which may be incorrect. For instance, a mutual exclusion algorithm that is correct under an interleaving semantics may no longer guarantee mutual exclusion when implemented on a weaker architecture. Reasoning about the semantics of such programs is a difficult task. Moreover, most concurrent algorithms are designed for an arbitrary number of processes. We would like to ensure the correctness of concurrent algorithms regardless of the number of processes involved. For this purpose, we rely on the Model Checking Modulo Theories (MCMT) framework, developed by Ghilardi and Ranise, which allows for the verification of safety properties of parameterized concurrent programs, that is to say, programs involving an arbitrary number of processes. We extend this technology with a theory for reasoning about weak memory models. The result of this work is an extension of the Cubicle model checker called Cubicle-W, which allows the verification of safety properties of parameterized transition systems running under a weak memory model similar to TSO.
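The extra behaviors introduced by store buffers show up on the classic store-buffering litmus test: under a TSO-like model both threads may read 0, an outcome impossible under SC. The following sketch (illustrative, not Cubicle-W) enumerates event orders of the test P0: x := 1; r0 := y and P1: y := 1; r1 := x, where each write sits in its process's buffer until an explicit flush reaches memory:

```python
from itertools import permutations

def sb_litmus(buffered):
    """Outcomes (r0, r1) of the store-buffering test. Events: 'w' = write
    enters the buffer, 'f' = flush to memory, 'r' = read from memory.
    With buffered=False, each flush follows its write immediately (SC)."""
    events = ['w0', 'f0', 'r0', 'w1', 'f1', 'r1']
    outcomes = set()
    for order in permutations(events):
        pos = {e: i for i, e in enumerate(order)}
        # program order per process, and a flush cannot precede its write
        if not (pos['w0'] < pos['r0'] and pos['w1'] < pos['r1']
                and pos['w0'] < pos['f0'] and pos['w1'] < pos['f1']):
            continue
        if not buffered and (pos['f0'] != pos['w0'] + 1
                             or pos['f1'] != pos['w1'] + 1):
            continue
        r0 = 1 if pos['f1'] < pos['r0'] else 0   # P0 reads y from memory
        r1 = 1 if pos['f0'] < pos['r1'] else 0   # P1 reads x from memory
        outcomes.add((r0, r1))
    return outcomes
```

Delaying both flushes past both reads produces (0, 0), which the SC restriction rules out; it is precisely this kind of extra reachable state a TSO-aware model checker must consider.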
Bolduc, Claude. "Oméga-Algèbre : théorie et application en vérification de programmes." Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23728/23728.pdf.
Kleene algebra is the algebraic theory of finite automata and regular expressions. Recently, Kozen proposed a framework based on Kleene algebra with tests (a variant of Kleene algebra) for verifying that a program satisfies a security policy specified by a security automaton. Unfortunately, this approach does not make it possible to verify liveness properties of reactive programs (programs that execute forever). The goal of this thesis is to extend Kozen's program verification framework to remove this limitation. To that end, we develop the theory of omega algebra, an extension of Kleene algebra suitable for reasoning about nonterminating executions. The main part of this thesis concerns omega algebra; in particular, we show some completeness results about its Horn theory.
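Binary relations over a finite set, encoded as boolean matrices, form a concrete Kleene algebra in which laws such as the star unfolding a* = 1 + a·a* can be checked directly; in omega algebra, the omega operator would additionally denote the greatest solution of X = a·X. A small sketch (illustrative only):

```python
def mat_mul(a, b):
    """Relational composition as boolean matrix product."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(a, b):
    """Union of relations: pointwise disjunction."""
    n = len(a)
    return [[a[i][j] or b[i][j] for j in range(n)] for i in range(n)]

def identity(n):
    return [[i == j for j in range(n)] for i in range(n)]

def star(a):
    """Least fixed point of X = 1 + a.X: reflexive-transitive closure."""
    n = len(a)
    x = identity(n)
    while True:
        nxt = mat_add(identity(n), mat_mul(a, x))
        if nxt == x:
            return x
        x = nxt
```

For the single-edge relation {(0, 1)}, the closure adds only the diagonal, and the unfolding law holds term for term.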
Listed on the honor roll (Tableau d'honneur) of the Faculté des études supérieures.
Brown, Naïma. "Vérification et mise en œuvre distribuée des programmes unity." Nancy 1, 1994. http://www.theses.fr/1994NAN10384.
Pavlova, Mariela. "Vérification de programmes en code octet et ses applications." Nice, 2007. http://www.theses.fr/2007NICE4010.
Program verification techniques based on programming logics and verification condition generators provide a powerful means to reason about programs. Whereas these techniques have very often been employed in the context of high-level languages in order to benefit from their structural nature, it is often required, especially in the context of mobile code, to prove the correctness of compiled programs. Thus it is highly desirable to have a means of bringing the benefits of source code verification to code consumers. We propose a mechanism that allows evidence to be transferred from source programs to compiled programs. It builds upon a specification language for bytecode, a verification condition generator that operates on annotated programs, and a compiler that transforms source annotations into bytecode annotations. We show that the verification condition generator is sound, and that the proof obligations at source and bytecode levels nearly coincide. We illustrate the benefits of our framework in two case studies.
Kerbrat, Alain. "Méthodes symboliques pour la vérification de processus communicants : étude et mise en oeuvre." Université Joseph Fourier (Grenoble), 1994. http://tel.archives-ouvertes.fr/tel-00005100.
Hurlin, Clément. "Spécification et vérification de programmes orientés objets en logique de séparation." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00424979.
This adaptation includes a verification system for Java-like programs. The system consists of a set of Hoare triples that form a verification algorithm. The correctness proof of this system has been carried out, and the system has been evaluated on several ambitious examples (including the Iterator class of the Java library and a lock-coupling algorithm).
In addition to the extension described above, several analyses using separation logic have been devised.
The first analysis specifies the allowed sequences of method calls (called "protocols") in classes. It describes complex protocols (such as that of the Iterator class) in fine detail. In addition, we propose a new technique for checking, using protocols, that the specifications of a program are correct.
The second analysis shows that one separation logic formula does not imply another. This is useful in program verifiers, which frequently have to prove implications between formulas. The interest of this analysis is its low complexity: it can be used often without consuming many resources.
The third analysis automatically parallelizes programs. It takes as input programs proved in separation logic and outputs parallelized, optimized, and proved programs. Our analysis uses the separating semantics of the "*" operator to detect when two subprograms access disjoint parts of the heap; in that case, parallelization is possible. The detection algorithm is implemented by a rewriting system.
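The disjointness test at the heart of such a parallelization can be caricatured in Python: over-approximate each block's footprint (the heap cells it touches) and run the blocks in parallel only when the footprints are disjoint. This is purely illustrative; the thesis derives disjointness from separation logic proofs, not from runtime sets.

```python
from threading import Thread

def footprint(stmts):
    """Over-approximate the heap cells a block reads or writes."""
    return {cell for _op, cell, *_ in stmts}

def run(stmts, heap):
    """Execute a block of toy heap commands."""
    for op, cell, *rest in stmts:
        if op == 'write':
            heap[cell] = rest[0]
        elif op == 'incr':
            heap[cell] += 1

def parallelize(block1, block2, heap):
    """If the footprints are disjoint (the '*' of separation logic applies),
    the blocks commute and may run in parallel; otherwise run sequentially."""
    if footprint(block1).isdisjoint(footprint(block2)):
        t1 = Thread(target=run, args=(block1, heap))
        t2 = Thread(target=run, args=(block2, heap))
        t1.start(); t2.start(); t1.join(); t2.join()
        return 'parallel'
    run(block1, heap)
    run(block2, heap)
    return 'sequential'
```

Two increments of distinct cells are parallelized; two increments of the same cell fall back to sequential execution, preserving the final value.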
Ratel, Christophe. "Définition et réalisation d'un outil de vérification formelle de programmes LUSTRE." Phd thesis, Grenoble 1, 1992. http://tel.archives-ouvertes.fr/tel-00341223.
Botbol, Vincent. "Analyse statique de programmes concurrents avec variables numériques." Electronic Thesis or Diss., Sorbonne université, 2018. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2018SORUS390.pdf.
Verifying distributed systems is a difficult problem at both the theoretical and practical levels, in particular when systems are capable of local numerical computations. The goal of this thesis is to provide a formal verification method for such systems. We present a general model, based on abstract interpretation, allowing the construction of static analyses for systems of communicating processes. Our methodology is inspired by regular model checking, where the set of program states is represented as lattice automata and the program semantics is encoded using rewriting systems applied to the language recognized by the automata. This model makes it possible to express communications between processes as well as the dynamic creation and destruction of processes. Using the abstract interpretation methodology, we provide a sound over-approximation of the reachability set of programs, allowing us to verify numerical safety properties. We implemented this method, enabling the automatic analysis of programs that use the distributed computation library MPI/C.
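The numerical side of such analyses typically rests on abstract domains like intervals, with widening to force convergence of the fixpoint iteration. A minimal sketch (illustrative, unrelated to the lattice-automata machinery of the thesis):

```python
class Interval:
    """Interval abstract domain: an over-approximation [lo, hi] of a value."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __eq__(self, other):
        return (self.lo, self.hi) == (other.lo, other.hi)

    def join(self, other):
        """Least upper bound: the enclosing interval."""
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def widen(self, other):
        """Jump to +/- infinity on unstable bounds to force convergence."""
        lo = self.lo if other.lo >= self.lo else float('-inf')
        hi = self.hi if other.hi <= self.hi else float('inf')
        return Interval(lo, hi)

def analyze_loop(init, body, max_iter=100):
    """Post-fixpoint of: x = init; while *: x = body(x), using widening."""
    x = init
    for _ in range(max_iter):
        nxt = x.widen(x.join(body(x)))
        if nxt == x:
            return x
        x = nxt
    return x
```

For a loop that repeatedly executes `x := x + 1` from x = 0, widening stabilizes on [0, +inf) in two iterations: a sound over-approximation proving, e.g., that x never becomes negative.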
Ben, Ahmed Daho Okacha. "Vérification de programmes de commande numérique pour l'usinage de poches : une application du partitionnement du plan." Limoges, 1998. http://www.theses.fr/1998LIMO0020.
Bouyer-Decitre, Patricia. "Modèles et algorithmes pour la vérification des systèmes temporisés." Cachan, Ecole normale supérieure, 2002. http://www.theses.fr/2002DENS0008.
Kanig, Johannes. "Spécification et preuve de programmes d'ordre supérieur." Paris 11, 2010. http://www.theses.fr/2010PA112183.
In this thesis, we first present a theoretical system that enables the proof of higher-order programs with side effects. This system consists of three major parts: first, a programming language with a traditional type, effect and region system, with effect polymorphism; second, a higher-order specification language that also contains a means to describe modifications of the state; finally, a weakest precondition calculus that, starting from an annotated program, produces proof obligations, that is, formulas whose validity implies the correctness of the program with respect to its specification. We also present two restrictions of the initial system. The first disallows region aliasing, obtaining better modularity of the calculus; the second restricts the system to singleton regions, containing only a single reference. Both restrictions enable important simplifications of the proof obligations, but restrict the set of accepted programs. We also present an implementation of this theoretical system, called Who. This tool relies in particular on translations of the proof obligations to higher-order logic and first-order logic; these translations are detailed in this thesis. The translation to higher-order logic allows the Coq proof assistant to be used to validate proof obligations; the translation to first-order logic allows the use of automated theorem provers. Higher-order programs, found in the standard library of OCaml and elsewhere, have been proved correct using Who, as well as a continuation-based implementation of the Koda-Ruskey algorithm.
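The weakest precondition calculus mentioned can be illustrated, for a toy first-order language without effects, as a predicate transformer over states; this is a sketch only, since Who handles higher-order programs, regions and effects:

```python
def wp(prog, post):
    """Weakest precondition as a predicate transformer over state dicts.
    Programs are tuples: ('assign', x, expr), ('seq', p1, p2), ('if', c, p1, p2),
    where expr and c are functions from a state dict to a value."""
    kind = prog[0]
    if kind == 'assign':
        _, x, e = prog
        # wp(x := e, Q) = Q[x <- e]: evaluate Q in the updated state
        return lambda s: post({**s, x: e(s)})
    if kind == 'seq':
        _, p1, p2 = prog
        # wp(p1; p2, Q) = wp(p1, wp(p2, Q))
        return wp(p1, wp(p2, post))
    if kind == 'if':
        _, c, p1, p2 = prog
        return lambda s: (wp(p1, post)(s) if c(s) else wp(p2, post)(s))
    raise ValueError(kind)
```

For the three-assignment swap of x and y through a temporary t, the computed precondition of the postcondition "x = 2 and y = 1" holds exactly in states where x = 1 and y = 2.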
Kunz, César. "Préservation des preuves et transformation de programmes." Phd thesis, École Nationale Supérieure des Mines de Paris, 2009. http://pastel.archives-ouvertes.fr/pastel-00004940.
Le, Viet Hoang. "Une couverture combinant tests et preuves pour la vérification formelle." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0023/document.
Currently, industrial-strength software development usually relies on unit testing or unit proof in order to ensure high-level requirements. Combining these techniques has already been demonstrated to be more effective than using either of them alone. Yet the verification engineer is unable to combine them because of the lack of a common notion of coverage for testing and proving. Defining such a notion is the main objective of this thesis. We introduce here a new notion of coverage, named "label-mutant coverage". It subsumes most existing structural coverage criteria for unit testing, including statement coverage, branch coverage and MC/DC coverage, while allowing one to decide whether the chosen criterion is satisfied by relying on a formal verification technique, either testing or proving or both. It also subsumes functional coverage criteria. Furthermore, we introduce a method that makes use of automatic tools for testing or proving in order to reduce the verification cost while satisfying the chosen coverage criterion. This method is implemented inside Frama-C, a framework for the verification of C code. This way, it offers the engineer a way to control and perform the expected verifications.
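The label mechanism behind such coverage criteria can be sketched as predicates attached to program points, each covered when execution reaches it with the predicate true; the label names and the harness below are hypothetical and purely illustrative:

```python
covered = set()

def label(name, condition):
    """A coverage label: recorded when execution reaches this point with
    `condition` true. Branch coverage of `if c:` is encoded by one label
    for c and one for not c."""
    if condition:
        covered.add(name)

def classify(x):
    # two labels encoding branch coverage of the test x > 0
    label('pos', x > 0)
    label('nonpos', x <= 0)
    return 'positive' if x > 0 else 'non-positive'

def coverage(tests, labels):
    """Fraction of the given labels covered by running the test inputs."""
    covered.clear()
    for t in tests:
        classify(t)
    return len(covered & labels) / len(labels)
```

A single positive input covers half the labels; adding a non-positive input reaches full branch coverage. In the label-mutant setting, a label left uncovered by testing may instead be discharged by proving it unreachable.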
Clochard, Martin. "Méthodes et outils pour la spécification et la preuve de propriétés difficiles de programmes séquentiels." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS071/document.
This thesis is set in the domain of deductive verification of programs, which consists in transforming a property to be verified about a program into a logical statement, and then proving this statement. Effective verification of a program can pose many practical difficulties; indeed, some programs can be quite short and use only simple constructions, and yet prove very difficult to verify. This leads us to the following question: in the context of a deductive program verification environment based on automatic provers, what methods can be applied to reduce the effort required both to specify complex behaviors and to prove that a program respects these expected behaviors? To carry out our study, we placed ourselves in the context of the Why3 deductive verification environment. The verification of programs in Why3 is based on the generation of verification conditions and on the use of external provers, automatic or interactive, to prove them. We have developed several methods, some general and others specific to certain classes of programs, to reduce manual effort. Our contributions are as follows. First, we add features to Why3 to assist the verification process, including a lightweight declarative proof mechanism based on the notion of cut indicators. Then we present a method for checking the absence of arithmetic overflow, for use cases which are difficult to process by standard methods. Finally, we are interested in the development of a generic library for the specification and proof of code-generating programs.
Petiot, Guillaume. "Contribution à la vérification de programmes C par combinaison de tests et de preuves." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2045/document.
Software verification often relies on a formal specification encoding the program properties to check. Formally specifying and deductively verifying programs is difficult and time-consuming, and requires some knowledge of theorem provers. Indeed, a proof failure for a program can be due to a noncompliance between the code and its specification, a loop or callee contract being insufficient to prove another property, or a prover incapacity. It is often difficult for the user to decide which of these three reasons causes a given proof failure, since this feedback is not (or rarely) provided by the theorem prover and thus requires a thorough review of the code and the specification. This thesis develops a method to automatically diagnose proof failures and facilitate the specification and verification task. This work takes place within the analysis framework for C programs FRAMA-C, which provides the specification language ACSL, the deductive verification plugin WP, and the structural test generator PATHCRAWLER. The proposed method consists in diagnosing proof failures using structural test generation on an instrumented version of the program under verification.
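The diagnosis idea can be caricatured in a few lines: when a proof fails, search for a counterexample by testing; finding one indicates a noncompliance between code and specification, while finding none points toward an insufficient contract or a prover incapacity. This sketch is illustrative only, not the PATHCRAWLER-based method, and random testing stands in for structural test generation:

```python
import random

def diagnose(f, spec, trials=1000, seed=0):
    """After a proof failure for `spec`, hunt for a counterexample input.
    `spec(x, y)` states the expected relation between input x and output y."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-1000, 1000)
        if not spec(x, f(x)):
            return ('noncompliance', x)      # the code really violates the spec
    return ('no counterexample', None)       # suspect the contract or the prover

# A buggy "absolute value" that forgets to negate, and its specification.
buggy_abs = lambda x: x
abs_spec = lambda x, y: y == abs(x)
```

Here the buggy version is quickly caught on a negative input, while the correct `abs` survives all trials, shifting suspicion away from the code.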
Guider, Romain. "Analyse statique de programmes Java [et] application à la parallélisation." Nice, 2000. http://www.theses.fr/2000NICE5434.
Evangelista, Sami. "Méthodes et outils de vérification pour les réseaux de Petri de haut niveau : Application à la vérification de programmes Ada concurrents." Paris, CNAM, 2006. http://www.theses.fr/2006CNAM0543.
This thesis falls within the frame of automatic verification of concurrent software based on an intermediate formal language, colored Petri nets. We particularly endeavor to define, or adapt, methods that tackle the state explosion induced by an exhaustive exploration of the state space. We work at two levels: at a structural level, by defining automatic abstraction rules for the model, and at a semantic level, by reducing the reachability graph of the system. In order to validate the practical interest of the proposed techniques, we implemented them in two tools: Helena, a model checker for high-level Petri nets, and Quasar, a platform for the verification of concurrent Ada software
Munch-Maccagnoni, Guillaume. "Syntaxe et modèles d'une composition non-associative des programmes et des preuves." Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00918642.
Munch, Guillaume. "Syntaxe et modèles d'une composition non-associative des programmes et des preuves." Paris 7, 2013. http://www.theses.fr/2013PA077130.
The thesis is a contribution to the understanding of the nature, role, and mechanisms of polarisation in programming languages, proof theory and categorical models. Polarisation corresponds to the idea that we can relax the associativity of composition, as we show by relating duploids, our direct model of polarisation, to adjunctions. As a consequence, polarisation underlies many models of computation, which we further show by decomposing continuation-passing-style models of delimited control into three fundamental steps, allowing us to reconstruct four call-by-name variants of the shift and reset operators. It also explains constructiveness-related phenomena in proof theory, which we illustrate by providing a formulae-as-types interpretation for polarisation in general and for an involutive negation in particular. The cornerstone of our approach is an interactive term-based representation of proofs and programs (L calculi) which exposes the structure of polarities. It is based on the correspondence between abstract machines and sequent calculi, and it aims at synthesising various trends: the modelling of control, evaluation order and effects in programming languages, the quest for a relationship between categorical duality and continuations, and the interactive notion of construction in proof theory. We give a gentle introduction to our approach which only assumes elementary knowledge of simply-typed lambda calculus and rewriting
Hubert, Thierry. "Analyse statique et preuve de programmes industriels critiques." Paris 11, 2008. http://www.theses.fr/2008PA112061.
In this thesis, we contributed to the development of the Why platform in order to design a proof method for the safety of critical industrial programs. First, this thesis presents the Why platform as it existed before the thesis. This platform, based on the weakest precondition calculus, is used directly on the source code and generates the verification conditions that must be validated to ensure program safety. The first contribution is a case study proving a program that implements a complex graph algorithm: Schorr-Waite. The second contribution consists of a separation analysis for pointers. This analysis, based on a separation of the memory into regions, is a type-based analysis, hence completely static. The third contribution is a method for simplifying verification conditions. Indeed, verification conditions often contain a large number of hypotheses that are useless for their validation. To address this problem, a relevance analysis of the hypotheses was developed to simplify the verification conditions. This thesis ends with the case study of a critical industrial program developed at Dassault Aviation, used to validate our approach
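The weakest precondition calculus that underlies such platforms is easy to sketch on a toy language (a minimal semantic illustration under assumed encodings, not the Why implementation): wp(x := e, Q) is Q with e substituted for x, and wp(S1; S2, Q) = wp(S1, wp(S2, Q)).

```python
def wp(stmt, post):
    """Weakest precondition over a toy imperative language.
    Statements: ('assign', var, expr), ('seq', s1, s2), ('if', cond, s1, s2).
    Expressions, conditions and predicates are functions on a state dict."""
    kind = stmt[0]
    if kind == 'assign':
        _, var, expr = stmt
        # wp(x := e, Q) = Q[e/x], realized here by evaluating Q in the updated state
        return lambda env: post({**env, var: expr(env)})
    if kind == 'seq':
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    if kind == 'if':
        _, cond, s1, s2 = stmt
        return lambda env: (wp(s1, post)(env) if cond(env)
                            else wp(s2, post)(env))
    raise ValueError(kind)

# Program: tmp := x; x := y; y := tmp   (swap)
swap = ('seq', ('assign', 'tmp', lambda e: e['x']),
        ('seq', ('assign', 'x', lambda e: e['y']),
                ('assign', 'y', lambda e: e['tmp'])))
# Postcondition: x == 1 and y == 2
post = lambda e: e['x'] == 1 and e['y'] == 2
pre = wp(swap, post)   # holds exactly in states where x == 2 and y == 1
```

A verification-condition generator produces a logical formula rather than an executable predicate, and hands it to provers; the recursion over the program structure is the same.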
Jeannet, Bertrand. "Partitionnement dynamique dans l'analyse de relations linéaires et application à la vérification de programmes synchrones." Grenoble INPG, 2000. http://www.theses.fr/2000INPG0074.
Zinelabdine, Aoufa. "Modélisation des programmes d'investissement : méthodologie et étude de cas." Châtenay-Malabry, Ecole centrale de Paris, 1986. http://www.theses.fr/1986ECAP0016.
Baro, Sylvain. "Conception et implémentation d'un système d'aide à la spécification et à la preuve de programmes ML." Phd thesis, Université Paris-Diderot - Paris VII, 2003. http://tel.archives-ouvertes.fr/tel-00008416.
Heyd, Barbara. "Application de la théorie des types et du démonstrateur COQ à la vérification de programmes parallèles." Nancy 1, 1997. http://www.theses.fr/1997NAN10291.
This thesis is an approach to the formal specification and verification of distributed systems, and in particular to computer-assisted verification. In this work, we use the Coq prover to verify concurrent programs, and the chosen specification mechanism is the UNITY logic. First, we describe the Calculus of Inductive Constructions, which is the logic used in the Coq prover. Then we introduce the UNITY theory, but we linger over the problems of its logic, due to the ambiguity of quantifications in the Hoare triples. We also propose an extension of the logic which solves these problems: a program context is introduced and a new definition of reachable states is given. After proving the correctness and completeness of this new logic, we present the Coq-UNITY tool, which results from the implementation of this extension in the Coq system. To illustrate the feasibility of our approach, we present several examples of different complexities: Euclidean division, the reader-writer, the parking lot and the lift control. These four examples are the first case studies; each of them highlights a particular problem either in proofs or in specifications. A last example describes a telecommunication protocol, namely the ABT/DT protocol. This example shows that the context is an important notion: once the context is well defined, it becomes very helpful in the verification
Contet, Jean-Michel. "Modèles multi-agents réactifs pour la navigation multi-véhicules : spécification formelle et vérification." Phd thesis, Université de Technologie de Belfort-Montbeliard, 2009. http://tel.archives-ouvertes.fr/tel-00472415.
Chapurlat, Vincent. "Vérification et validation de modèles de systèmes complexes: application à la Modélisation d'Entreprise." Habilitation à diriger des recherches, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00204981.
The research undertaken since the start of the doctorate in 1991 belongs to the field of complex system modelling, and then of the verification and validation of these models. The objective is to assure, or failing that to reassure, the modeller about the quality of the models, their relevance with respect to the system under consideration, and their compliance with the requirements that guided their construction. The research has thus consisted in developing approaches for modelling, for the formal specification of properties, for property verification by proof using Conceptual Graphs, and for behavioural simulation. The main application domains have been distributed command-and-control systems, then enterprise modelling more broadly; current work attempts to integrate a risk dimension into enterprise modelling and to open up more widely to complex systems engineering. The results are languages and an integrated modelling framework, a specification language called LUSP, and a suite of formal proof and simulation mechanisms, which have led to several supervised theses and works, and to transfers to industry.
Finally, the teaching activity has tried to remain consistent with the dual profile, in production engineering and in systems engineering, acquired through or inspired by the research theme. It took place in various universities, engineering schools and specialised curricula. The results are proposals for, and support of, new teaching themes, pedagogical engineering activity, and involvement in various administrative responsibilities.
Dreyfus, Alois. "Contributions à la vérification et à la validation efficaces fondées sur des modèles." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2076/document.
The thesis contributes to the development of automatic methods for model-based verification and validation of computer systems. It is divided into two parts: verification and test generation. In the verification part, for the problem of regular model checking, undecidable in general, two new approximation techniques are defined in order to provide efficient (semi-)algorithms. Over-approximations of the set of reachable states are computed, with the objective of ensuring the termination of the exploration of the state space. Reachable states (or over-approximations of this set of states) are represented by regular languages or, equivalently, by finite-state automata. The first technique consists in over-approximating the set of reachable states by merging states of automata, based on simple syntactic criteria, or on a combination of these criteria. The second approximation technique also merges automata states, by using transducers. For the second technique, we develop a new approach to refine approximations, inspired by the CEGAR paradigm (Counter-Example-Guided Abstraction Refinement). These proposals have been tested on examples of mutual exclusion protocols. In the test generation part, a technique is defined that combines random generation with coverage criteria, starting from context-free models (context-free grammars, pushdown automata). Generating tests from these models (instead of from graphs) reduces the model abstraction level, and therefore allows more tests to be executable on the real system. These proposals have been tested on the JSON (JavaScript Object Notation) grammar, as well as on pushdown automata of mutually recursive functions, of an XPath query, and of the Shunting-Yard algorithm
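The state-merging idea behind the first technique can be sketched as quotienting a finite automaton by a partition of its states (a minimal illustration with a made-up automaton; the thesis' syntactic merging criteria are more elaborate): every run of the original automaton survives in the quotient, so the quotient's language over-approximates the original one.

```python
def quotient(init, finals, delta, block_of):
    """Merge automaton states according to a partition (block_of maps each
    state to its block).  Each original transition induces a transition
    between blocks, so the quotient simulates every original run."""
    q_delta = {}
    for (s, a), targets in delta.items():
        q_delta.setdefault((block_of[s], a), set()).update(
            block_of[t] for t in targets)
    return ({block_of[s] for s in init},
            {block_of[s] for s in finals},
            q_delta)

def accepts(init, finals, delta, word):
    """Standard NFA membership test by subset simulation."""
    current = set(init)
    for a in word:
        current = {t for s in current for t in delta.get((s, a), ())}
    return bool(current & finals)

# Automaton accepting exactly the word "ab"
init, finals = {0}, {2}
delta = {(0, 'a'): {1}, (1, 'b'): {2}}
# Merging states 0 and 1 keeps "ab" accepted, but adds extra
# words such as "b" and "aab": a strict over-approximation
q_init, q_finals, q_delta = quotient(init, finals, delta, {0: 'A', 1: 'A', 2: 'B'})
```

In regular model checking the over-approximated language stands for a superset of the reachable states, which is sound for proving safety properties and may force the fixpoint iteration to terminate.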
Kmimech, Mourad. "Vérification d’assemblages de composants logiciels : Application aux modèles de composants UML2.0 et Ugatze." Pau, 2010. http://www.theses.fr/2010PAUU3017.
The component approach aims at reuse through a coherent and easy assembly of components, but obtaining a coherent component assembly is not an easy exercise. To achieve this, we advocate a contractual approach distinguishing syntactic, structural, semantic, synchronization and quality-of-service contracts. We have successfully applied this approach to two semi-formal contractual component models: UML2.0 and Ugatze. Indeed, we propose two approaches: VerifComponentUML2.0 and VerifComponentUgatze. The VerifComponentUML2.0 approach aims at verifying syntactic, structural, synchronization and quality-of-service contracts on a UML2.0 component assembly through two formal component models, Acme/Armani and Wright. VerifComponentUML2.0 comes with two tools: Wr2fdr and Wright2Ada. The Wr2fdr tool translates Wright contracts to CSP in order to verify synchronization using the FDR model checker. Wright2Ada is a model-driven engineering tool transforming Wright into Ada, in order to open UML2.0 to the static and dynamic analysis tools associated with Ada. The VerifComponentUgatze approach provides a framework for checking syntactic and structural contracts of a Ugatze component assembly through Acme/Armani
Lewicki, Alexandre. "Conception de modèles haut niveau pour l'optimisation et la vérification de systèmes Bluetooth." Nice, 2008. http://www.theses.fr/2008NICE4110.
The work conducted in this thesis was to design high-level functional models used in a wireless system design flow. The MCSE methodology was followed to design those models, and the results have been used for Bluetooth system design and verification. The first part of the work presents the MCSE methodology used for the design of the models. Starting from the specification of a concrete use case, a temperature sensor, we designed a functional model of the system through successive refinement steps. The models were then translated into SystemC, a C++ library that allows describing both the hardware and software parts of a system. The results of exploiting the models in a wireless network simulation can be used for protocol analysis, performance analysis and performance exploration. The second part of the work was to introduce the functional models into a hardware verification environment. Two different techniques, for design engineers and for verification engineers, have been established. This approach brings enhanced verification features, with the possibility to write complex tests
Rousset, Nicolas. "Automatisation de la Spécification et de la Vérification d'applications Java Card." Paris 11, 2008. http://www.theses.fr/2008PA112065.
This work is about static verification of formally-annotated Java Card programs, by deductive methods. It aims at making such an approach practicable in an industrial setting. Implementations have been performed inside the Krakatoa prototype, and experiments were conducted on industrial applets. The first part concerns the improvement of automation in the verification step. The first contribution is a precise interpretation of the semantics of the Java Card language: transactions and card tear. The second contribution proposes a policy of non-null references, allowing to verify the validity of memory accesses by static typing. The third contribution is an interprocedural analysis for inferring annotations by abstract interpretation, allowing to obtain loop invariants, and pre- and post-conditions for methods. The second part is about the design of specifications. The first contribution proposes links between JML-like annotations and abstract specifications. Functional properties are expressed using algebraic specifications, whose link with the program is defined by a refinement relation. The second contribution proposes a structured use of UML diagrams allowing to generate annotations, to verify specific safety properties (e.g. structural invariants, protocol descriptions). Finally, a perspective is opened towards the definition and automatic propagation of annotations to assist security audits of Java Card applets
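Inferring loop invariants by abstract interpretation can be sketched on the interval domain (a minimal, hypothetical illustration of the general technique, not the Krakatoa analysis): iterate the loop's abstract transfer function with widening until a fixpoint, then narrow to recover precision.

```python
INF = float('inf')

def widen(old, new):
    """Interval widening: any unstable bound jumps to infinity,
    which forces the iteration to terminate."""
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def lfp_widen(init, transfer):
    """Iterate the abstract transfer function with widening until
    stabilization; the result over-approximates all loop entries."""
    inv = init
    while True:
        nxt = transfer(inv)
        joined = (min(inv[0], nxt[0]), max(inv[1], nxt[1]))
        if joined == inv:
            return inv
        inv = widen(inv, joined)

def narrow(init, transfer, inv):
    """A narrowing pass recovers precision lost by widening."""
    while True:
        nxt = transfer(inv)
        refined = (min(init[0], nxt[0]), max(init[1], nxt[1]))
        if refined == inv:
            return inv
        inv = refined

# Loop: i := 0; while i < 10: i := i + 1
def transfer(iv):
    lo, hi = iv[0], min(iv[1], 9)   # intersect with the guard i < 10 (integers)
    return (lo + 1, hi + 1)         # then apply i := i + 1

inv = narrow((0, 0), transfer, lfp_widen((0, 0), transfer))
# inv == (0, 10): the inferred loop invariant 0 <= i <= 10
```

Widening alone yields (0, ∞) here; the narrowing pass restores the upper bound, which is the usual division of labor between the two operators.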
Lazaar, Nadjib. "Méthodologie et outil de Test, de localisation de fautes et de correction automatique des programmes à contraintes." Rennes 1, 2011. http://www.theses.fr/2011REN1S115.
The success of several constraint-based modelling languages such as OPL (Optimization Programming Language) of IBM Ilog, COMET of Dynadec, SICStus Prolog, and Gecode calls for better software engineering practices, particularly in the testing phase. These languages aim at solving industrial combinatorial problems that arise in optimization, planning, or scheduling. Recently, a new trend has emerged that also proposes to use CP programs to address critical applications in e-commerce, air-traffic control and management, or critical software development, which must be thoroughly verified before being used in applications. While constraint program debugging has drawn the attention of many researchers, little support in terms of software engineering and testing has been proposed to help verify this kind of programs. In the present thesis, we define a testing theory for constraint programming. For that, we propose a general framework of constraint program development which supposes that a first declarative and simple constraint model is available from the analysis of the problem specifications. Then, this model is refined using classical techniques such as constraint reformulation; surrogate, redundant, implied and global constraint addition; or symmetry breaking, to form an improved constraint model that must be tested before being used to address real-sized problems. We think that most of the faults are introduced in this refinement step, and we propose a process which takes the first declarative model as an oracle for detecting non-conformities, deriving practical test purposes from this process. We then enhance the proposed testing framework with a methodology for automatic tuning, with fault localization and correction, in constraint programs. We implemented these approaches in a new tool called CPTEST, which was used to automatically detect, localize and correct faults on classical benchmark programs and on a real-world CP problem: the car-sequencing problem
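The oracle-based conformity check at the heart of such a framework can be sketched by brute force on a toy problem (hypothetical models, for illustration only, not from CPTEST): enumerate assignments and flag a non-conformity whenever the refined model rejects a solution that the declarative oracle model accepts.

```python
from itertools import product

def solutions(model, domain, n):
    """Brute-force solution set of an n-variable constraint model."""
    return {v for v in product(domain, repeat=n) if model(v)}

# Declarative oracle model, straight from the specification: x + y == z
oracle = lambda v: v[0] + v[1] == v[2]
# "Refined" model with a faulty added constraint z > y,
# which wrongly excludes every solution where x == 0
refined = lambda v: oracle(v) and v[2] > v[1]

dom = range(4)
lost = solutions(oracle, dom, 3) - solutions(refined, dom, 3)
# Each element of `lost` is a non-conformity witness, e.g. (0, 2, 2)
```

A real tool would use a constraint solver rather than enumeration and would account for refinements, such as symmetry breaking, that legitimately remove some solutions; the witness notion is the same.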
Girard, Pierre. "Formalisation et mise en œuvre d'une analyse statique de code en vue de la vérification d'applications sécurisées." Toulouse, ENSAE, 1996. http://www.theses.fr/1996ESAE0010.
Benmerzoug, Djamel. "Modèles et outils formels pour l'intégration d'applications d'entreprises." Paris 6, 2009. http://www.theses.fr/2009PA066344.
Schrammel, Peter. "Méthodes logico-numériques pour la vérification des systèmes discrets et hybrides." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00809357.
Kamsu-Foguem, Bernard. "Modélisation et vérification des propriétés de systèmes complexes : Application aux processus d'entreprise." Montpellier 2, 2004. http://www.theses.fr/2004MON20050.