To see other types of publications on this topic, follow the link: Software analysis and verification.

Dissertations / Theses on the topic 'Software analysis and verification'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Software analysis and verification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Jobredeaux, Romain J. "Formal verification of control software." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53841.

Abstract:
In a context of heightened requirements for safety-critical embedded systems and ever-increasing costs of verification and validation, this research proposes to advance the state of formal analysis for control software. Formal methods are a field of computer science that uses mathematical techniques and formalisms to rigorously analyze the behavior of programs. This research develops a framework and tools to express and prove high level properties of control law implementations. One goal is to bridge the gap between control theory and computer science. An annotation language is extended with symbols and axioms to describe control-related concepts at the code level. Libraries of theorems, along with their proofs, are developed to enable an interactive proof assistant to verify control-related properties. Through integration in a prototype tool, the process of verification is made automatic, and applied to several example systems.
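The properties such code-level annotations typically encode have a compact numerical core. As a rough illustration (a minimal numpy sketch with made-up matrices, not the thesis's annotation language or proof-assistant workflow): a quadratic ellipsoid {x : x^T P x <= 1} is an invariant of closed-loop dynamics x_{k+1} = A x_k exactly when P - A^T P A is positive semidefinite.

```python
import numpy as np

# Hypothetical closed-loop dynamics x_{k+1} = A @ x and candidate
# ellipsoid invariant {x : x.T @ P @ x <= 1}; both matrices are made up.
A = np.array([[0.9, 0.1],
              [-0.2, 0.8]])
P = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Invariance holds iff P - A.T @ P @ A is positive semidefinite: then
# x.T @ (A.T @ P @ A) @ x <= x.T @ P @ x <= 1 for every x in the set.
residual = P - A.T @ P @ A
ok = bool(np.all(np.linalg.eigvalsh(residual) >= -1e-9))
print("ellipsoid invariant:", ok)
```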
2

Babić, Domagoj. "Exploiting structure for scalable software verification." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1502.

Abstract:
Software bugs are expensive. Recent estimates by the US National Institute of Standards and Technology claim that the cost of software bugs to the US economy alone is approximately 60 billion USD annually. As society becomes increasingly software-dependent, bugs also reduce our productivity and threaten our safety and security. Decreasing these direct and indirect costs represents a significant research challenge as well as an opportunity for businesses. Automatic software bug-finding and verification tools have the potential to completely revolutionize the software engineering industry by improving reliability and decreasing development costs. Since software analysis is in general undecidable, automatic tools have to use various abstractions to make the analysis computationally tractable. Abstraction is a double-edged sword: coarse abstractions, in general, yield easier verification, but also less precise results. This thesis focuses on exploiting the structure of software for abstracting away irrelevant behavior. Programmers tend to organize code into objects and functions, which effectively represent natural abstraction boundaries. Humans use such structural abstractions to simplify their mental models of software and for constructing informal explanations of why a piece of code should work. A natural question to ask is: how can automatic bug-finding tools exploit the same natural abstractions? This thesis offers possible answers. More specifically, I present three novel ways to exploit structure at three different steps of the software analysis process. First, I show how symbolic execution can preserve the data-flow dependencies of the original code while constructing compact symbolic representations of programs. Second, I propose structural abstraction, which exploits the structure preserved by the symbolic execution. Structural abstraction solves a long-standing open problem: scalable interprocedural path- and context-sensitive program analysis. Finally, I present an automatic tuning approach that exploits the fine-grained structural properties of software (namely, data- and control-dependency) for faster property checking. This novel approach resulted in a 500-fold speedup over the best previous techniques. Automatic tuning not only redefined the limits of automatic software analysis tools, but has also already found its way into other domains (like model checking), demonstrating the generality and applicability of this idea.
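The "compact symbolic representations" mentioned above come from aggressive structure sharing during symbolic execution. A toy sketch of that idea (hash-consing; an illustration of the principle, not the thesis's actual implementation):

```python
# Build symbolic expressions as a maximally-shared DAG: structurally
# identical subterms are constructed once and reused, so the symbolic
# store stays compact instead of exploding into a tree.
_table = {}

def mk(op, *args):
    key = (op,) + args
    return _table.setdefault(key, key)   # reuse an existing node if any

x = mk("var", "x")
e1 = mk("add", x, mk("const", 1))
e2 = mk("add", x, mk("const", 1))
print(e1 is e2)      # True: the second build reused the first node
print(len(_table))   # 3 distinct nodes: var, const, add
```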
3

White, Maurice Walter. "Verification and evaluation of structural analysis and design software." Thesis, Virginia Tech, 1991. http://hdl.handle.net/10919/41489.

4

de Carvalho Gomes, Pedro. "Automatic Extraction of Program Models for Formal Software Verification." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176286.

Abstract:
In this thesis we present a study of the generation of abstract program models from programs in real-world programming languages that are employed in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models and properties. The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, the properties that are established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as ConFlEx, and evaluate the tool on a number of test cases. The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs. The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization". To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem into a reachability problem on Coloured Petri Nets. We define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets is implemented as the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe, then verifying the extracted nets with a standard Coloured Petri Net analysis tool.
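As a rough sketch of what control-flow-graph extraction looks like (over a made-up toy instruction set; the thesis works on Java bytecode via an intermediate representation and also models exceptional flow, which this omits):

```python
# Map each instruction index to the set of its possible successors.
# The instruction set ("goto", "ifz", "return", ...) is hypothetical.
def extract_cfg(instrs):
    cfg = {i: set() for i in range(len(instrs))}
    for i, (op, arg) in enumerate(instrs):
        if op == "goto":
            cfg[i].add(arg)                 # unconditional jump
        elif op == "ifz":
            cfg[i].update({arg, i + 1})     # branch taken or fall-through
        elif op != "return" and i + 1 < len(instrs):
            cfg[i].add(i + 1)               # plain fall-through
    return cfg

prog = [("load", "x"), ("ifz", 4), ("push", 1), ("goto", 5),
        ("push", 0), ("return", None)]
print(extract_cfg(prog))
```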

5

Molin, Oscar. "Design verification through software architecture recovery : Meeting ISO 26262 requirements on software using static analysis." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-202149.

Abstract:
Emerging functional safety standards in the automotive industry will create new challenges for companies sitting on large deposits of legacy code. When refactoring existing code for compliance with standards such as ISO 26262, great savings could be made if work products required by the standard could be automatically generated from existing source code. In this thesis, we explore the possibilities to generate graphical software architectures, data-flow graphs and software architectural descriptions directly from existing C source code. By parsing the source code to find structures and the relations between them, we were able to create relational graphs that represent the software of an entire system or that of just one component, using different levels of abstraction where appropriate. We create a proof-of-concept tool chain that can generate two kinds of graphical architecture views and one data-flow view. Although these tools are by no means ready for production, they do show promise and are already useful as development tools for better software understanding. Finally, we test the tool chain on current production ECU (Electronic Control Unit) software used in heavy trucks and buses and evaluate the results against the requirements of the ISO 26262 standard. This thesis was done at Scania CV AB in Södertälje, Sweden.
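The recovery idea (parse the source, find structures and their relations, render a graph) fits in a few lines. A deliberately naive sketch using regexes over a toy C snippet (the thesis tool chain uses real parsing) that emits a Graphviz call graph:

```python
import re

# Toy illustration: recover a call graph from C source and print it in
# Graphviz dot syntax. The C snippet and function names are made up.
src = """
void read_sensor(void) {}
void actuate(void) {}
void control(void) { read_sensor(); actuate(); }
"""
bodies = dict(re.findall(r"void\s+(\w+)\s*\([^)]*\)\s*\{([^}]*)\}", src))
print("digraph calls {")
for caller, body in bodies.items():
    for callee in re.findall(r"(\w+)\s*\(", body):
        if callee in bodies:                 # keep edges to known functions
            print(f"  {caller} -> {callee};")
print("}")
```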
6

Limouee, Maryam. "Verification of NYSlab, a software for the analysis of jointed pavements." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

7

Mrvaljevic, Pavle. "Tool orchestration for modeling, verification and analysis of collaborating autonomous machines." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48884.

Abstract:
System-of-systems (SoS) is a collective of multiple system units that have a common purpose. In this thesis, the Volvo Electric Site is investigated as an example case study in which safety and performance properties of collaborating autonomous machines are evaluated and analyzed. Formal methods in software engineering aim to prove the correctness of a system by evaluating its mathematical model. We use an actor-based framework, AdaptiveFlow, for modeling system functionalities and timing features. The aim is to link an abstract model evaluation to a simulation of real-world cases deployed in the VCE Simulator. In addition, it is necessary to make sure that AdaptiveFlow provides correct-by-design scenarios. The verification is conducted by developing an orchestration method between the AdaptiveFlow framework and the VCE Simulator. A tool named VMap is developed throughout this thesis for automated mapping of the input models of AdaptiveFlow onto the VCE Simulator, making the orchestration possible. Furthermore, AdaptiveFlow is used in two different ways, as a design tool and as an analysis tool. The models created in AdaptiveFlow are directly mapped to the VCE Simulator by using the VMap tool, with the VCE Simulator serving as a testbed for checking these models. The outcome of this thesis is the establishment of a mapping pattern between AdaptiveFlow inputs and the VCE Simulator through the development of the VMap tool. It was shown that there is a natural mapping between AdaptiveFlow models and VCE Simulator inputs, and by using VMap we can quickly arrive at the desired scenarios. Through the development of three different cases, the results show that it is possible to design safe and optimal scenarios by orchestrating AdaptiveFlow and the VCE Simulator with the VMap tool, and they show a correlation between the results from AdaptiveFlow and the VCE Simulator.
8

Motta, Teixeira Leopoldo. "Verification and refactoring of configuration knowledge for software product lines." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2323.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
A software product line (SPL) is defined as a set of software systems that share common features but are sufficiently distinct from each other, developed from a set of reusable assets. Feature models and configuration knowledge are used to enable the automatic generation of products from these assets. A feature model represents the set of possible product configurations of an SPL, while the configuration knowledge establishes the mapping between features and implementation, for example by associating feature expressions, in the form of logical propositions, with assets. The productivity benefits of the SPL approach make it possible for an SPL to generate thousands of products. In this context, errors made when specifying the configuration knowledge may result in invalid products: the safe composition problem. This problem can be hard to detect manually, since the feature model and configuration knowledge can become very complex. Generating every product of an SPL may not be practical, given that some SPLs can generate thousands of products. Moreover, even configuration knowledge that does not allow the generation of invalid products can have problems in its internal structure, such as complexity and duplication, especially in the context of large SPLs, where maintenance can become difficult; we need to make sure that we do not introduce errors when fixing these problems. In this work, we propose an automatic approach to verify safe composition for SPLs based on configuration knowledge. The approach is based on translating specific instances of the feature and configuration models into propositional logic, using a theory encoded with Alloy. The tool support provided by the Alloy Analyzer assists the verification. We also propose a catalog of simple refactorings for configuration knowledge models, as a way to avoid errors when fixing problems in the internal structure of such models. This catalog is formalized using a general theory for configuration knowledge specified with the Prototype Verification System (PVS). We evaluated the verification approach using seven releases of an SPL, with feature models that allow the generation of up to 272 products. The results demonstrate the advantage of using this approach instead of generating every product of the SPL, since the average time to compile a single product of the SPL is larger than the time to analyze the largest release analyzed. We also evaluated the refactoring catalog by proving soundness of the proposed refactorings in the PVS theorem prover.
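The core of the safe-composition check can be illustrated with a tiny propositional model (hypothetical features and presence conditions; the thesis encodes this in Alloy and checks it with the Alloy Analyzer rather than by brute-force enumeration):

```python
from itertools import product

# Safe composition, toy version: every configuration allowed by the
# feature model must satisfy the implementation-level dependencies.
FEATURES = ["base", "gui", "net"]

def feature_model(c):
    # base is mandatory; net requires gui (both constraints made up)
    return c["base"] and (not c["net"] or c["gui"])

def well_formed(c):
    # hypothetical asset dependency: Window.java (present when gui is
    # selected) needs Core.java (present when base is selected)
    return not c["gui"] or c["base"]

configs = (dict(zip(FEATURES, bits))
           for bits in product([False, True], repeat=len(FEATURES)))
print("safe:", all(well_formed(c) for c in configs if feature_model(c)))
```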
9

de Carvalho Gomes, Pedro. "Sound Modular Extraction of Control Flow Graphs from Java Bytecode." Licentiate thesis, KTH, Teoretisk datalogi, TCS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105275.

Abstract:
Control flow graphs (CFGs) are abstract program models that preserve the control flow information. They have been widely utilized for many static analyses in the past decades. Unfortunately, previous studies about the CFG construction from modern languages, such as Java, have either neglected advanced features that influence the control flow, or do not provide a correctness argument. This is a bearable issue for some program analyses, but not for formal methods, where the soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, when developing open systems, i.e., systems in which at least one component is missing, one may want to extract CFGs to verify the available components. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies involving missing components. In this work we present two variants of a CFG extraction algorithm from Java bytecode considering precise exceptional flow, which are sound w.r.t. the JVM behavior. The first algorithm extracts CFGs from fully-provided (closed) programs only. It proceeds in two phases. Initially the Java bytecode is translated into a stack-less intermediate representation named BIR, which provides explicit representation of exceptions, and is more compact than the original bytecode. Next, we define the transformation from BIR to CFGs, which, among other features, considers the propagation of uncaught exceptions within method calls. We then establish its correctness: the behavior of the extracted CFGs is shown to be a sound over-approximation of the behavior of the original programs. Thus, temporal safety properties that hold for the CFGs also hold for the program. We prove this by suitably combining the properties of the two transformations with those of a previous idealized CFG extraction algorithm, whose correctness has been proven directly. The second variant of the algorithm is defined for open systems. We generalize the extraction algorithm for closed systems for a modular set-up, and resolve inter-dependencies involving missing components by using user-provided interfaces. We establish its correctness by defining a refinement relation between open systems, which constrains the instantiation of missing components. We prove that if the relation holds, then the CFGs extracted from the components of the original open system are sound over-approximations of the CFGs for the same components in the refined system. Thus, temporal safety properties that hold for an open system also hold for closed systems that refine it. We have implemented both algorithms as the ConFlEx tool. It uses Sawja, an external library for the static analysis of Java bytecode, to transform bytecode into BIR, and to resolve virtual method calls. We have extended Sawja to support open systems, and improved its exception type analysis. Experimental results have shown that the algorithm for closed systems generates more precise CFGs than the modular algorithm. This was expected, due to the heavy over-approximations the latter has to perform to be sound. Also, both algorithms are linear in the number of bytecode instructions. Therefore, ConFlEx is efficient for the extraction of CFGs from either open, or closed Java bytecode programs.

10

de Carvalho Gomes, Pedro, and Attilio Picoco. "Sound Extraction of Control-Flow Graphs from open Java Bytecode Systems." KTH, Teoretisk datalogi, TCS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104076.

Abstract:
Formal verification techniques have been widely deployed as means to ensure the quality of software products. Unfortunately, they suffer from the combinatorial explosion of the state space. That is, programs have a large, sometimes infinite, number of states. A common approach to alleviate the problem is to perform the verification over abstract models of the program. Control-flow graphs (CFG) are one of the most common models, and have been widely studied in the past decades. Unfortunately, previous work on modern programming languages, such as Java, has either neglected features that influence the control flow, or does not provide a correctness argument about the CFG construction. This is an unbearable issue for formal verification, where soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, one may want to extract CFGs from the available components of an open system, i.e., a system in which at least one of the components is missing. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies between software components. In the current work we present a framework to extract control-flow graphs from open Java Bytecode systems in a modular fashion. Our strategy requires the user to provide interfaces for the missing components. First, we present a formal definition of open Java bytecode systems. Next, we generalize a previous algorithm that performs the extraction of CFGs for closed programs to a modular set-up. The algorithm uses the user-provided interfaces to resolve inter-dependencies involving missing components. Eventually the missing components will arrive, the open system will become closed, and it can execute. However, the arrival of a component may affect the soundness of CFGs which have been extracted previously. Thus, we define a refinement relation, which is a set of constraints upon the arrival of components, and prove that the relation guarantees the soundness of CFGs extracted with the modular algorithm. Therefore, the control-flow safety properties verified over the original CFGs still hold in the refined model. We implemented the modular extraction framework in the ConFlEx tool. Also, we have implemented the reuse of previous extractions, to enable the incremental extraction of a newly arrived component. Our technique performs substantial over-approximations to achieve soundness. Despite this, our test cases show that ConFlEx is efficient. Also, the extraction of the CFGs gets a considerable speed-up by reusing results from previous analyses.

Project: Verification of Control-Flow Properties of Programs with Procedures (CVPP)
11

He, Nannan. "Exploring Abstraction Techniques for Scalable Bit-Precise Verification of Embedded Software." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/27683.

Abstract:
Conventional testing has become inadequate to satisfy rigorous reliability requirements of embedded software that is playing an increasingly important role in many safety critical applications. Automatic formal verification is a viable avenue for ensuring the reliability of such software. Recently, more and more formal verification techniques have begun modeling a non-Boolean data variable as a bit-vector with bounded width (i.e. a vector of multiple bits like 32- or 64-bits) to implement bit-precise verification. One major challenge in the scalable application of such bit-precise verification on real-world embedded software is that the state space for verification can be intractably large. In this dissertation, several abstraction techniques are explored to deal with this scalability challenge in the bit-precise verification of embedded software. First, we propose a tight integration of program slicing, which is an important static program analysis technique, with bounded model checking (BMC). While many software verification tools apply program slicing as a separate preprocessing step, we integrate slicing operations into our model construction and reduction process and enhance them with compilation optimization techniques to compute accurate program slices. We also apply a proof-based abstraction-refinement framework to further remove those program segments irrelevant to the property being verified. Next, we present a method of using symbolic simulation for scalable formal verification. The simulation involves distinguishing X as symbolic values to abstract concrete variables' values. Also, the method embeds this symbolic simulation in a counterexample-guided abstraction-refinement framework to automatically construct and verify an abstract model, which has a smaller state space than that of the original concrete program. This dissertation also presents our efforts on using two common testability metrics, the controllability metric (CM) and the observability metric (OM), as the high-level structural guidance for scalable bit-precise verification. A new abstraction approach is proposed based on the concept of under- and over-approximation to efficiently solve bit-vector formulas generated from embedded software verification instances. These instances include both complicated arithmetic computations and intensive control structures. Our approach applies CM and OM to assist the abstraction refinement procedure in two ways: (1) it uses CM and OM to guide the construction of a simple under-approximate model, which includes only a subset of execution paths in a verification instance, so that a counterexample that refutes the instance can be obtained with reduced effort, and (2) in order to reduce the cost of using proof-based refinement alone, it uses OM heuristics to guide the restoration of additional verification-relevant formula constraints with low computational cost for refinement. Experiments show a significant reduction of the solving time compared to state-of-the-art solvers for the bit-vector arithmetic. This dissertation finally proposes an efficient algorithm to discover non-uniform encoding widths of individual variables in the verification model, which may be smaller than their original modeling width but sufficient for the verification. Our algorithm distinguishes itself from existing approaches in that it is path-oriented; it takes advantage of CM and OM values to guide the computation of the initial, non-uniform encoding widths, and the effective adjustment of these widths along different paths, until the property is verified. It can restrict the search from those paths that are deemed less favorable or have been searched in previous steps, thus simplifying the problem. Experiments demonstrate that our algorithm can significantly speed up the verification especially in searching for a counterexample that violates the property under verification.
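The slicing step the dissertation integrates with bounded model checking can be sketched in miniature. A toy backward slice over straight-line, single-assignment statements (far simpler than the dissertation's model construction, and purely illustrative):

```python
# Keep only the assignments the target transitively depends on.
# Each statement is (defined variable, list of used variables).
def backward_slice(stmts, target):
    needed, keep = {target}, []
    for var, uses in reversed(stmts):
        if var in needed:            # this def feeds the target
            keep.append((var, uses))
            needed |= set(uses)
    return list(reversed(keep))

stmts = [("a", ["in1"]), ("b", ["in2"]), ("c", ["a"]), ("d", ["b"])]
print(backward_slice(stmts, "c"))    # b and d are sliced away
```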
Ph. D.
12

Gaither, Danielle. "Improving Software Quality through Syntax and Semantics Verification of Requirements Models." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1404542/.

Abstract:
Software defects can frequently be traced to poorly-specified requirements. Many software teams manage their requirements using tools such as checklists and databases, which lack a formal semantic mapping to system behavior. Such a mapping can be especially helpful for safety-critical systems. Another limitation of many requirements analysis methods is that much of the analysis must still be done manually. We propose techniques that automate portions of the requirements analysis process, as well as clarify the syntax and semantics of requirements models using a variety of methods, including machine learning tools and our own tool, VeriCCM. The machine learning tools used help us identify potential model elements and verify their correctness. VeriCCM, a formalized extension of the causal component model (CCM), uses formal methods to ensure that requirements are well-formed, as well as providing the beginnings of a full formal semantics. We also explore the use of statecharts to identify potential abnormal behaviors from a given set of requirements. At each stage, we perform empirical studies to evaluate the effectiveness of our proposed approaches.
13

Manjunathaiah, M. "Compile-time analysis of array sections for parallelization and parallel program verification." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243078.

14

Graves, Jamie Robert. "Forensic verification of operating system activity via novel data, acquisition and analysis techniques." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/6699.

Abstract:
Digital Forensics is a nascent field that faces a number of technical, procedural and cultural difficulties that must be overcome if it is to be recognised as a scientific discipline, and not just an art. Technical problems involve the need to develop standardised tools and techniques for the collection and analysis of digital evidence. This thesis is mainly concerned with the technical difficulties faced by the domain, in particular the exploration of techniques that could form the basis of trusted standards to scientifically verify data. This study presents a set of techniques and methodologies that can be used to describe the fitness of system calls originating from the Windows NT platform as a form of evidence. It does so in a manner that allows for open investigation into the manner in which the activities described by this form of evidence can be verified. The performance impact on the Device Under Test (DUT) is explored via the division of the Windows NT system calls into service subsets. Of particular interest to this work is the file subset, as the system calls can be directly linked to user interaction. The subsequent quality of data produced by the collection tool is examined via the use of the Basic Local Alignment Search Tool (BLAST) sequence alignment algorithm. In doing so, this study asserts that system calls provide a recording, or time line, of evidence extracted from the operating system, which represents actions undertaken. In addition, it asserts that these interactions can be compared against known profiles (fingerprints) of activity using BLAST, which can provide a set of statistics relating to the quality of match, and a measure of the similarities of sequences under scrutiny. These are based on Karlin-Altschul statistics, which provide, amongst other values, a P-Value to describe how often a sequence will occur within a search space. The manner in which these statistics are calculated is augmented by the novel generation of the NM1.5_D7326 scoring matrix based on empirical data gathered from the operating system, which is compared against the de facto, biologically generated, BLOSUM62 scoring matrix. The impact on the Windows 2000 and Windows XP DUTs of monitoring most of the service subsets, including the file subset, is statistically insignificant when simple user interactions are performed on the operating system. For the file subset, p = 0.58 on Windows 2000 Service Pack 4, and p = 0.84 on Windows XP Service Pack 1. This study shows that if the event occurred in a sequence that originated on an operating system that was not subjected to high process load or system stress, a great deal of confidence can be placed in a gapped match, using either the NM1.5_D7326 or BLOSUM62 scoring matrices, indicating an event occurred, as all fingerprints of interest (FOI) were identified. The worst-case BLOSUM62 P-Value = 1.10E-125, and worst-case NM1.5_D7326 P-Value = 1.60E-72, showing that these matrices are comparable in their sensitivity during normal system conditions. This cannot be said for sequences gathered during high process load or system stress conditions. The NM1.5_D7326 scoring matrix failed to identify any FOI. The BLOSUM62 scoring matrix returned a number of matches that may have been the FOI, as discerned via the supporting statistics, but were not positively identified within the evaluation criteria. The techniques presented in this thesis are useful, structured and quantifiable. They provide the basis for a set of methodologies that can be used for providing objective data for additional studies into this form of evidence, which can further explore the details of the calibration and analysis methods, thus supplying the basis for a trusted form of evidence, which may be described as fit-for-purpose.
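A rough flavour of the matching idea, with difflib standing in for BLAST and hypothetical call names (the thesis uses real alignment statistics and a custom scoring matrix):

```python
from difflib import SequenceMatcher

# Compare a captured system-call trace against a known fingerprint of
# an activity and report a similarity score (toy stand-in for BLAST).
fingerprint = ["NtCreateFile", "NtWriteFile", "NtClose"]
trace = ["NtOpenKey", "NtCreateFile", "NtWriteFile", "NtClose", "NtOpenKey"]

score = SequenceMatcher(None, fingerprint, trace).ratio()
print(f"similarity: {score:.2f}")    # 0.75 for this pair
```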
15

Stahlbauer, Andreas. "Abstract Transducers for Software Analysis and Verification." PhD diss., supervised by Sven Apel and Willem Visser. Passau: Universität Passau, 2020. http://d-nb.info/1219730890/34.

16

Shaffer, Alan B. "An application of Alloy to static analysis for secure information flow and verification of software systems." Monterey, Calif. : Naval Postgraduate School, 2008. http://edocs.nps.edu/npspubs/scholarly/dissert/2008/Dec/08Dec%5FShaffer_PhD.pdf.

Abstract:
Dissertation (Ph.D. in Computer Science), Naval Postgraduate School, December 2008. Dissertation supervisor: Mikhail Auguston. Includes bibliographical references (p. 87-93). Also available in print.
17

Weissenbacher, Georg. "Program analysis with interpolants." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:6987de8b-92c2-4309-b762-f0b0b9a165e6.

Abstract:
This dissertation discusses novel techniques for interpolation-based software model checking, an approximate method which uses Craig interpolation to compute invariants of programs. Our work addresses two aspects of program analyses based on model checking: verification (the construction of correctness proofs for programs) and falsification (the detection of counterexamples that violate the specification). In Hoare's calculus, a proof of correctness comprises assertions which establish that a program adheres to its specification. The principal challenge is to derive appropriate assertions and loop invariants. Contemporary software verification tools use Craig interpolation (as opposed to traditional predicate transformers such as the weakest precondition) to derive approximate assertions. The performance of the model checker is contingent on the Craig interpolants computed. We present novel interpolation techniques which provide the following advantages over existing methods. Firstly, the resulting interpolants are sound with respect to the bit-level semantics of programs, which is an improvement over interpolation systems that use linear arithmetic over the reals to approximate bit-vector arithmetic and/or do not support bit-level operations. Secondly, our interpolation systems afford us a choice of interpolants and enable us to fine-tune their logical strength and structure. In contrast, existing procedures are limited to a single ad-hoc choice of an interpolant. Interpolation-based verification tools are typically forced to refine an initial approximation repeatedly in order to achieve the accuracy required to establish or refute the correctness of a program. The detection of a counterexample containing a repetitive construct may necessitate one refinement step (involving the computation of additional interpolants) for each iteration of the loop. We present a heuristic that aims to avoid the repeated and computationally expensive construction of interpolants, thus enabling the detection of deeply buried defects such as buffer overflows. Finally, we present an implementation of our techniques and evaluate them on a set of standardised device driver and buffer overflow benchmarks.
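A worked example helps pin down what a Craig interpolant is. For A = (y = x + 1 and x >= 0) and B = (y <= 0), which share only y, the formula I = (y >= 1) is an interpolant: A implies I, and I together with B is unsatisfiable. The sketch below checks these conditions by brute force over a small integer range (real tools derive I from a refutation proof instead):

```python
from itertools import product

A = lambda x, y: y == x + 1 and x >= 0   # left part of the unsat formula
B = lambda y: y <= 0                     # right part; shares only y with A
I = lambda y: y >= 1                     # candidate interpolant over y

rng = range(-50, 51)
assert all(I(y) for x, y in product(rng, rng) if A(x, y))   # A implies I
assert not any(B(y) for y in rng if I(y))                   # I and B unsat
print("interpolant checked")
```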
18

Weitz, Noah. "Analysis of Verification and Validation Techniques for Educational CubeSat Programs." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1854.

Abstract:
Since their creation, CubeSats have become a valuable educational tool for university science and engineering programs. Unfortunately, while aerospace companies invest resources to develop verification and validation methodologies based on larger-scale aerospace projects, university programs tend to focus resources on spacecraft development. This paper looks at two different types of methodologies in an attempt to improve CubeSat reliability: generating software requirements and utilizing system and software architecture modeling. Both the Consortium Requirements Engineering (CoRE) method for software requirements and the Monterey Phoenix modeling language for architecture modeling were tested for usability in the context of PolySat, Cal Poly's CubeSat research program. In the end, neither CoRE nor Monterey Phoenix provided the desired results for improving PolySat's current development procedures. While a modified version of CoRE discussed in this paper does allow for basic software requirements to be generated, the resulting specification does not provide any more granularity than PolySat's current institutional knowledge. Furthermore, while Monterey Phoenix is a good tool to introduce students to model-based systems engineering (MBSE) concepts, the resulting graphs generated for a PolySat specific project were high-level and did not find any issues previously discovered through trial and error methodologies. While neither method works for PolySat, the aforementioned results do provide benefits for university programs looking to begin developing CubeSats.
19

Pamplin, Jason Andrew. "Formal Object Interaction Language: Modeling and Verification of Sequential and Concurrent Object-Oriented Software." Georgia State University, 2007. http://etd.gsu.edu/theses/available/etd-04222007-205349/.

Abstract:
Thesis (Ph. D.)--Georgia State University, 2007.
Title from file title page. Ying Zhu, committee chair; Xiaolin Hu, Geoffrey Hubona, Roy Johnson, Rajshekhar Sunderraman, committee members. Electronic text (216 p. : ill. (some col.)) : digital, PDF file. Description based on contents viewed Nov. 29, 2007. Includes bibliographical references (p. 209-216).
20

Dernehl, Christian Michael. "Verification of embedded software models by combining abstract interpretation, symbolic execution and stability analysis." PhD diss., supervised by Stefan Kowalewski and Dieter Moormann. Aachen: Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1210710358/34.

21

Jedryszek, Jakub. "A model-driven development and verification approach for medical devices." Thesis, Kansas State University, 2014. http://hdl.handle.net/2097/18222.

Abstract:
Master of Science
Department of Computing and Information Sciences
John Hatcliff
Medical devices are safety-critical systems whose failure may put human life in danger. They are becoming more advanced and thus more complex. This leads to bigger and more complicated code bases that are hard to maintain and verify. Model-driven development provides a high-level and abstract description of the system in the form of models that omit details which are not relevant during the design phase. This allows certain types of verification and hazard analysis to be performed on the models. These models can then be translated into code. However, errors that do not exist in the models may be introduced during the implementation phase. Automated translation from verified models to code may prevent such errors to some extent. This thesis proposes an approach for model-driven development and verification of medical devices. Models are created in AADL (Architecture Analysis & Design Language), a language for software and hardware architecture modeling. AADL models are translated to SPARK Ada, a contract-based programming language, which is suitable for software verification. The generated code base is further extended by developers to implement the internals of specific devices. The created programs can be verified using the SPARK tools. A PCA (Patient Controlled Analgesia) pump medical device is used to illustrate the primary artifacts and process steps. The foundation for this work is the "Integrated Clinical Environment Patient-Controlled Analgesia Infusion Pump System Requirements" document and the AADL models created by Brian Larson. In addition to the proposed model-driven development approach, a PCA pump prototype was created using the BeagleBoard-xM device as a platform. Some components of the PCA pump prototype were verified by the SPARK tools and Bakar Kiasan.
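A toy version of the model-to-code step (the thesis translates AADL to SPARK Ada; the Python, the model dictionary and the component names below are purely illustrative) shows how generated skeletons separate the verified structure from hand-written internals:

```python
# Emit a code skeleton per thread component of a hypothetical model;
# developers later fill in the step() bodies, as the thesis describes
# for the generated SPARK Ada code base.
model = {"system": "PCA_Pump",
         "threads": [("RateController", 10), ("AlarmMonitor", 50)]}

print(f"# generated from model: {model['system']}")
for name, period_ms in model["threads"]:
    print(f"class {name}:")
    print(f"    PERIOD_MS = {period_ms}")
    print("    def step(self):")
    print("        raise NotImplementedError  # developer-provided internals")
    print()
```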
22

Maurica, Andrianampoizinimaro Fonenantsoa. "Analyses de terminaison des calculs flottants." Thesis, La Réunion, 2017. http://www.theses.fr/2017LARE0030/document.

Abstract:
The infamous Blue Screen of Death of Windows appropriately introduces the problem at hand. This bug is often caused by a non-terminating device driver: the program runs infinitely, blocking in the process all the resources it allocated for its calculations. This thesis develops techniques that allow one to decide, before runtime, termination of a given program for any possible value of its inputs. In particular, we are interested in programs that manipulate floating-point numbers. These numbers are ubiquitous in current processors and are used by nearly all software developers. Yet, they are often misunderstood and, hence, a source of bugs. Indeed, floating-point computations are tainted with errors, because they are performed within a finite amount of memory. For example, although true in the reals, the equality 0.1 + 0.2 = 0.3 is false in the floats. Not handled properly, these errors can lead to catastrophic events, such as the Patriot missile incident that killed 28 people. The theories we develop are illustrated, and put to the test, by code snippets taken from widely used programs. Notably, we were able to exhibit termination bugs due to incorrect floating-point computations in some packages of the Ubuntu distribution.
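Both halves of the abstract's claim are easy to reproduce (standard IEEE-754 double behaviour, shown here in Python; the loop guard is added so the demo itself terminates):

```python
# The rounding-error pitfall: the real-number identity fails in doubles.
print(0.1 + 0.2 == 0.3)      # False
print(repr(0.1 + 0.2))       # 0.30000000000000004

# The loop pattern whose termination is at stake: 0.1 is not exactly
# representable, so x steps past 1.0 without ever equaling it.
x, steps = 0.0, 0
while x != 1.0 and steps < 20:   # without the guard this never exits
    x += 0.1
    steps += 1
print(steps, x)                  # 20 iterations, x already beyond 1.0
```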
23

Natraj, Shailendra. "An Empirical Evaluation & Comparison of Effectiveness & Efficiency of Fault Detection Testing Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4047.

Abstract:
Context: This thesis analyzes a replication of a software experiment conducted by Natalia and Sira at the Technical University of Madrid, Spain. The empirical study was conducted to verify and validate the experimental data and to evaluate the effectiveness and efficiency of the testing techniques. The analysis blocks considered were observable fault, failure visibility and observed faults. The statistical data analysis involved the ANOVA and Classification packages of SPSS. Objective: To evaluate and compare the results obtained from the statistical data analysis, and to establish the verification and validation of the effectiveness and efficiency of the testing techniques by using ANOVA and classification-tree analysis for percentage subject, percentage defect-subject and values (Yes / No) for each of the blocks. RQ1: Empirical evaluation of the effectiveness of fault detection testing techniques for the blocks (observable fault, failure visibility and observed faults), using ANOVA and classification trees. RQ2: Empirical evaluation of the efficiency of fault detection techniques, based on time and number of test cases, using ANOVA. RQ3: Comparison and interpretation of the results obtained for both effectiveness and efficiency. Method: The research focuses on statistical data analysis to empirically evaluate the effectiveness and efficiency of the fault detection techniques for the experimental data collected at UPM (Technical University of Madrid, Spain). Empirical strategy used: software experiment. Results: The analysis results obtained for the observable fault types were standardized (Ch. 5). Within the observable fault block, both techniques, functional and structural, were equally effective. In the failure visibility block, the results were partially standardized: the program types nametbl and ntree were more effective for fault detection than cmdline. The results for the observed fault block were partially standardized and diverse; the significant factors in this block were program type, fault type and technique. In the efficiency block, subjects took less time to isolate the fault in the program type cmdline, and fault detection was also more efficient in cmdline with the help of the generated test cases. Conclusion: This research will help practitioners in industry and academia understand the factors influencing the effectiveness and efficiency of testing techniques. This work also presents a comprehensive analysis and comparison of the results for the blocks observable fault, failure visibility and observed faults, and discusses the factors influencing the efficiency of the fault detection techniques.
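For readers without SPSS, the same kind of one-way ANOVA can be run with scipy; the group values below are placeholders, not the experiment's data:

```python
from scipy.stats import f_oneway

# Hypothetical fault-detection percentages per technique, one value per
# subject group; only the analysis pattern mirrors the thesis.
functional = [78, 82, 75, 80]
structural = [76, 81, 77, 79]
review     = [60, 65, 58, 63]

f, p = f_oneway(functional, structural, review)
print(f"F = {f:.2f}, p = {p:.4f}")   # small p: technique matters
```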
24

Olorisade, Babatunde Kazeem. "Summarizing the Results of a Series of Experiments : Application to the Effectiveness of Three Software Evaluation Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3799.

Abstract:
Software quality has become and persistently remains a big issue among software users and developers, so the importance of software evaluation cannot be overemphasized. An accepted fact in software engineering is that software must undergo evaluation during development to ascertain and improve its quality. In fact, there are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects, so it may not be realistic or cost-effective to remove all software defects prior to product release. It is therefore crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and likely to yield optimum quality results for different products; it boils down to choosing the most appropriate technique for different situations. However, not much knowledge is available on the strengths and weaknesses of the available evaluation techniques. Most of the information related to these techniques focuses on how to apply them, not on their applicability conditions: practical information, suitability, strengths, weaknesses, etc. This research contributes to the available applicability knowledge of software evaluation techniques. More precisely, it focuses on code reading by stepwise abstraction as representative of the static techniques, as well as equivalence partitioning (a functional technique) and decision coverage (a structural technique) as representatives of the dynamic techniques. The specific focus of the research is to summarize the results of a series of experiments conducted to investigate the effectiveness of these techniques, among other factors. By effectiveness we mean the potential of each of the dynamic techniques to generate test cases capable of revealing software faults, or the ability of the static technique to generate abstractions that aid the detection of faults. The experiments used two versions of three different programs with seven different faults seeded into each program. This work pools the results of the eight experiments, performed and analyzed separately, and jointly summarizes them to extract common knowledge, using a qualitative deduction approach created in this work, as it was decided not to use formal aggregation at this stage. Since the experiments were performed by different researchers, in different years and in some cases at different sites, several problems had to be tackled before the results could be summarized: the data files exist in different languages, the structures of the files differ, different names are used for data fields, the analyses were done using different confidence levels, and so on. The first step, taken at the inception of this research, was to apply all the techniques to the programs used during the experiments in order to detect the faults. The purpose of this personal experience with the experiment was to become acquainted with the faults, failures, programs and experiment situations in general, and to better understand the data recorded from the experiments. Afterwards, the data files were recreated to conform to a uniform language, data meaning, file style and structure, and a well-structured directory was created to keep all the data, analysis and experiment files for all the experiments in the series. These steps paved the way for a feasible synthesis of the results. Using our method, technique, program, fault, program-technique, program-fault and technique-fault were selected as the main and interaction effects carrying knowledge relevant to the summary. The result, as reported in this thesis, indicates that the functional and structural techniques are equally effective as far as the programs and faults in these experiments are concerned, and both perform better than code review. The analysis also revealed that the effectiveness of the techniques is influenced by the fault type and the program type: some faults were better detected in certain programs, some were better detected with certain techniques, and even the same technique yields different results in different programs.
25

Santos, Bruno Roberto. "Um método para verificação formal e dinâmica de sistemas de software concorrentes." Universidade Federal de Alagoas, 2016. http://www.repositorio.ufal.br/handle/riufal/1540.

Abstract:
This work presents a method for formal and dynamic verification of concurrent software. The objective is to provide a method capable of identifying problems in programs whose execution is based on multiple threads, and of analyzing behavioral properties. The method detects problems in concurrent software and checks its conformity with the desired behavior, based on information collected dynamically, i.e., at runtime. The information collected consists of the software's execution flow, together with data about the way its components communicate during the run. Because the collected data reflect an actual execution of the software, greater confidence can be placed in them. This information is analyzed to identify deadlocks and race conditions, in a process called dynamic analysis. In addition, the information is used to automatically generate a model that describes the behavior of the software, which is used for the verification of behavioral properties; this process is called formal verification. The automatic model generation eliminates the need for manual construction of the model, which requires much effort and knowledge of formal methods and can increase the cost and development time of the software. Dynamic analysis, however, is known to cover only the current behavior of a concurrent system: the behavior that occurs during one particular execution, leaving out all the other possible behaviors that arise from non-determinism. Due to non-determinism, concurrent software can produce different results for the same input on each execution, so reproducing the behavior that leads concurrent software to failure is a complex task. This work proposes a method for formal and dynamic verification of concurrent software capable of capturing the non-deterministic behavior of these systems, while reducing development costs by eliminating the need for manual construction of models. The method is validated by a case study consisting of tests on three software systems.
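One classic dynamic check of the kind described (simplified, with hypothetical lock names; the thesis combines such analyses with model-based formal verification) is to record a lock-order graph at runtime and flag cycles as potential deadlocks:

```python
# Each edge (a, b) means: some thread requested lock b while holding
# lock a. A cycle in this graph signals a potential deadlock.
edges = {("L1", "L2"), ("L2", "L1")}   # observed at runtime (made up)

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def reach(src, dst, seen=()):
        return any(n == dst or (n not in seen and reach(n, dst, seen + (n,)))
                   for n in graph.get(src, ()))
    return any(reach(b, a) for a, b in edges)

print("potential deadlock:", has_cycle(edges))   # True for this trace
```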
Fundação de Amparo a Pesquisa do Estado de Alagoas
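For a flavor of the kind of runtime information such a dynamic analysis consumes, here is a minimal, self-contained Java sketch of a lock-order graph, a standard deadlock-potential detection idea (the class and its structure are our own illustration, not the tool described in the thesis). Instrumentation would report every lock acquisition and release; a cycle in the resulting graph signals a potential deadlock.

    import java.util.*;

    public class LockGraph {
        private final Map<String, Set<String>> edges = new HashMap<>();
        private final Map<Long, Deque<String>> held = new HashMap<>();

        // Called (e.g., by instrumentation) whenever a thread acquires a lock:
        // add an edge from every lock already held to the newly taken lock.
        public synchronized void acquired(long threadId, String lock) {
            Deque<String> stack = held.computeIfAbsent(threadId, t -> new ArrayDeque<>());
            for (String outer : stack)
                edges.computeIfAbsent(outer, k -> new HashSet<>()).add(lock);
            stack.push(lock);
        }

        public synchronized void released(long threadId, String lock) {
            held.get(threadId).remove(lock);
        }

        // DFS for a cycle in the lock-order graph.
        public synchronized boolean hasDeadlockPotential() {
            Set<String> done = new HashSet<>(), path = new HashSet<>();
            for (String n : edges.keySet())
                if (dfs(n, done, path)) return true;
            return false;
        }

        private boolean dfs(String n, Set<String> done, Set<String> path) {
            if (path.contains(n)) return true;   // back edge: cycle found
            if (!done.add(n)) return false;      // already fully explored
            path.add(n);
            for (String m : edges.getOrDefault(n, Set.of()))
                if (dfs(m, done, path)) return true;
            path.remove(n);
            return false;
        }
    }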
APA, Harvard, Vancouver, ISO, and other styles
26

Andrej, Sekáč. "Performance evaluation based on data from code reviews." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12734.

Full text
Abstract:
Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control, but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods to approximate a review expert's performance-evaluation function. Due to the limited size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on performance. In this research we propose features and also analyse their relevance to the development-performance evaluation task. Methods. This thesis uses Radial Basis Function networks as the regression algorithm for approximating the performance evaluation and Metric Based Regularisation as the semi-supervised learning method. For the analysis of the feature set and goodness of fit we use statistical tools together with manual analysis. Results. The semi-supervised learning method achieved an accuracy similar to supervised versions of the algorithm. The feature analysis showed a significant negative correlation between the performance evaluation and three other features. A manual verification of the learned models on unlabelled data achieved 73.68% accuracy. Conclusions. We have not managed to prove that the semi-supervised learning method used performs better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to the change size, and the amount of code lines modified in later parts of development are, with high probability, relevant to the performance evaluation task. The achieved model accuracy of close to 75% leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in the approximation of performance evaluation.
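As an illustration of one of the features named in the conclusions, the Java sketch below computes a comments-to-change-size ratio (method and field names are hypothetical, not Gerrit's actual schema):

    public class ReviewFeatures {
        // Illustrative feature: review comments per changed line.
        static double commentDensity(int reviewComments, int linesAdded, int linesDeleted) {
            int changeSize = linesAdded + linesDeleted;
            return changeSize == 0 ? 0.0 : (double) reviewComments / changeSize;
        }

        public static void main(String[] args) {
            // e.g. 12 comments on a 300-line change -> density 0.04
            System.out.println(commentDensity(12, 250, 50));
        }
    }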
APA, Harvard, Vancouver, ISO, and other styles
27

Becker, Martin [Verfasser], Samarjit [Akademischer Betreuer] Chakraborty, Daniel [Gutachter] Müller-Gritschneder, Marco [Gutachter] Caccamo, and Samarjit [Gutachter] Chakraborty. "Towards Source-Level Timing Analysis of Embedded Software Using Functional Verification Methods / Martin Becker ; Gutachter: Daniel Müller-Gritschneder, Marco Caccamo, Samarjit Chakraborty ; Betreuer: Samarjit Chakraborty." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1220319732/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Csallner, Christoph. "Combining over- and under-approximating program analyses for automatic software testing." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24764.

Full text
Abstract:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Smaragdakis, Yannis; Committee Member: Dwyer, Matthew; Committee Member: Orso, Alessandro; Committee Member: Pande, Santosh; Committee Member: Rugaber, Spencer.
APA, Harvard, Vancouver, ISO, and other styles
29

Nimal, Vincent P. J. "Static analyses over weak memory." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:469907ec-6f61-4015-984e-7ca8757b992c.

Full text
Abstract:
Writing concurrent programs with shared memory is often not trivial. Correctly synchronising the threads and handling the non-determinism of executions require a good understanding of the interleaving semantics. Yet, interleavings are not sufficient to correctly model the executions of modern, multicore processors. These executions follow rules that are weaker than those observed by the interleavings, often leading to reorderings in the sequence of updates and readings from memory; the executions are subject to a weaker memory consistency. Reorderings can produce executions that would not be observable with interleavings, and these possible executions also depend on the architecture that the processors implement. It is therefore necessary to locate and understand these reorderings in the context of a running program, or to prevent them in an automated way. In this dissertation, we aim to automate the reasoning behind weak memory consistency and perform transformations over the code so that developers need not consider all the specifics of the processors when writing concurrent programs. We claim that we can do automatic static analysis for axiomatically-defined weak memory models. The method that we designed also allows re-use of automated verification tools like model checkers or abstract interpreters that were not designed for weak memory consistency, by modification of the input programs. We define in detail an abstraction that allows us to reason statically about weak memory models over programs. We locate the parts of the code where the semantics could be affected by the weak memory consistency. We then provide a method to explicitly reveal the resulting reorderings so that usual verification techniques can handle the program semantics under a weaker memory consistency. We also provide a technique that synthesises synchronisations so that the program would behave as if only interleavings were allowed. Finally, we test these approaches on artificial and real software. We justify our choice of an axiomatic model with the scalability of the approach and the runtime performance of the programs modified by our method.
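A classic store-buffering litmus test illustrates the reorderings in question. In the Java sketch below (our illustration, not an example from the dissertation), the data-racy version is allowed to end with both threads having read 0; declaring x and y volatile plays the role of the synchronisation that a fence-synthesis pass would insert.

    public class StoreBuffering {
        static int x = 0, y = 0;   // declare volatile to forbid r1 == r2 == 0
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            for (int i = 0; i < 100_000; i++) {
                x = 0; y = 0;
                Thread t1 = new Thread(() -> { x = 1; r1 = y; });
                Thread t2 = new Thread(() -> { y = 1; r2 = x; });
                t1.start(); t2.start(); t1.join(); t2.join();
                if (r1 == 0 && r2 == 0) {
                    // Neither store was visible to the other thread's load.
                    System.out.println("reordering observed at iteration " + i);
                    return;
                }
            }
            System.out.println("no reordering observed (not a proof of absence)");
        }
    }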
APA, Harvard, Vancouver, ISO, and other styles
30

Settenvini, Matteo. "Algorithmic Analysis of Name-Bounded Programs : From Java programs to Petri Nets via π-calculus." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3112.

Full text
Abstract:
Context. Name-bounded analysis is a type of static analysis that allows us to take a concurrent program, abstract away from it, and check for some interesting properties, such as deadlock-freedom, or to watch the propagation of variables across different components or layers of the system. Objectives. In this study we investigate the difficulties of giving a representation of computer programs in a name-bounded variation of π-calculus. Methods. A preliminary literature review is conducted to assess the presence (or lack thereof) of other successful translations from real-world programming languages to π-calculus, as well as the presence of relevant prior art in the modelling of concurrent systems. Results. This thesis gives a novel translation from a relevant subset of the Java programming language to its corresponding name-bounded π-calculus equivalent. In particular, the strengths of our translation are being able to dispose of names representing inactive objects when there are no circular references, and a transparent handling of polymorphism and dynamic method resolution. The resulting processes can then be further transformed into their Petri-net representation, enabling us to check important properties, such as reachability and coverability of program states. Conclusions. We conclude that some important properties that are not, in general, easy to check for concurrent programs can in fact be feasibly determined by giving a more constrained model, first in π-calculus and then as Petri nets.
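For intuition, a minimal explicit-state reachability check over Petri-net markings can be sketched in a few lines of Java (our simplification: it terminates only when the reachable marking set is finite, whereas coverability analysis as used for such nets also handles the unbounded case):

    import java.util.*;

    public class PetriNet {
        // Each transition: tokens consumed and produced per place.
        record Transition(int[] consume, int[] produce) {}
        final List<Transition> transitions = new ArrayList<>();

        boolean reachable(int[] initial, int[] target) {
            Deque<int[]> frontier = new ArrayDeque<>();
            Set<List<Integer>> seen = new HashSet<>();
            frontier.add(initial);
            while (!frontier.isEmpty()) {
                int[] m = frontier.poll();
                if (Arrays.equals(m, target)) return true;
                if (!seen.add(key(m))) continue;   // marking already explored
                for (Transition t : transitions) {
                    int[] next = fire(m, t);
                    if (next != null) frontier.add(next);
                }
            }
            return false;
        }

        private static int[] fire(int[] m, Transition t) {
            int[] next = m.clone();
            for (int p = 0; p < m.length; p++) {
                next[p] -= t.consume()[p];
                if (next[p] < 0) return null;      // transition not enabled
                next[p] += t.produce()[p];
            }
            return next;
        }

        private static List<Integer> key(int[] m) {
            List<Integer> k = new ArrayList<>();
            for (int v : m) k.add(v);
            return k;
        }
    }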
APA, Harvard, Vancouver, ISO, and other styles
31

Urban, Caterina. "Static analysis by abstract interpretation of functional temporal properties of programs." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0017/document.

Full text
Abstract:
The overall aim of this thesis is the development of mathematically sound and practically efficient methods for automatically proving the correctness of computer software. More specifically, this thesis is grounded in the theory of abstract interpretation, a powerful mathematical framework for approximating the behavior of programs. In particular, this thesis focuses on proving program liveness properties, which represent requirements that must be eventually or repeatedly realized during program execution. Program termination is the most prominent liveness property. This thesis designs new program approximations, in order to automatically infer sufficient preconditions for program termination and synthesize so-called piecewise-defined ranking functions, which provide upper bounds on the waiting time before termination. The approximations are parametric in the choice between the expressivity and the cost of the underlying approximations, which maintain information about the set of possible values of the program variables along with the possible numerical relationships between them. This thesis also contributes an abstract interpretation framework for proving liveness properties, which comes as a generalization of the framework proposed for termination. In particular, the framework is dedicated to liveness properties expressed in temporal logic, which are used to ensure that some desirable event happens once or infinitely many times during program execution. As for program termination, piecewise-defined ranking functions are used to infer sufficient preconditions for these properties, and to provide upper bounds on the waiting time before a desirable event. The results presented in this thesis have been implemented into a prototype analyzer. Experimental results show that it performs well on a wide variety of benchmarks, it is competitive with the state of the art, and it is able to analyze programs that are out of the reach of existing methods.
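To make the central notion concrete, the following textbook-style example (ours, not taken from the thesis) shows a piecewise-defined ranking function for a simple loop, written in LaTeX:

    % Loop: while (x != 0) { if (x > 0) x := x - 1; else x := x + 1; }
    f(x) =
      \begin{cases}
        x  & \text{if } x \ge 0 \\
        -x & \text{if } x < 0
      \end{cases}
    % f is non-negative and strictly decreases by 1 on every iteration,
    % so the loop terminates after at most f(x_0) = |x_0| steps; this is
    % exactly the kind of upper bound on the waiting time before
    % termination that such an analysis synthesises.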
APA, Harvard, Vancouver, ISO, and other styles
32

Vyvial, Pavel. "Statická detekce častých chyb JBoss aplikačního serveru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237160.

Full text
Abstract:
First, a few bugs from a list of common bugs were chosen and patterns describing these bugs were inferred. Then, detectors searching for such patterns were implemented as plug-ins to the FindBugs static analyzer. Finally, the detectors were used to detect bugs in a development version of JBoss AS. Results are presented at the end of this paper.
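The abstract does not list the chosen patterns, but a representative example of the kind of bug such detectors flag is a null check that follows a dereference (hypothetical code, for illustration only):

    public class NullAfterDeref {
        // If s can be null, the dereference may throw before the check;
        // if it cannot, the check is dead code. Either way the method
        // matches a suspicious pattern a detector can report.
        static int length(String s) {
            int n = s.length();          // s dereferenced here
            if (s == null) return 0;     // checked only afterwards
            return n;
        }
    }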
APA, Harvard, Vancouver, ISO, and other styles
33

Duan, Daliang. "Epsp : un environnement support de genie logiciel base sur l'approche du prototypage de systeme et sur le langage prolog." Toulouse 3, 1987. http://www.theses.fr/1987TOU30223.

Full text
Abstract:
Epsp offers an informal interface for building a system prototype in Prolog (PSP). The main purpose of this interface is to make Prolog as transparent as possible to the user. Epsp uses an executable specification, written in Prolog, as the prototype. Epsp supports refinement from the specification through successive transformations of the PSP. Epsp consists of several software tools and is developed in Prolog and Pascal.
APA, Harvard, Vancouver, ISO, and other styles
34

Chrszon, Philipp, Clemens Dubslaff, Sascha Klüppelholz, and Christel Baier. "Family-Based Modeling and Analysis for Probabilistic Systems." Springer, 2016. https://tud.qucosa.de/id/qucosa%3A70790.

Full text
Abstract:
Feature-based formalisms provide an elegant way to specify families of systems that share a base functionality and differ in certain features. They can also facilitate an all-in-one analysis, where all systems of the family are analyzed at once on a single family model instead of one-by-one. This paper presents the basic concepts of the tool ProFeat, which provides a guarded-command language for modeling families of probabilistic systems and an automatic translation of family models to the input language of the probabilistic model checker PRISM. This translational approach enables a family-based quantitative analysis with PRISM. Besides modeling families of systems that differ in system parameters such as the number of identical processes or channel sizes, ProFeat also provides special support for the modeling and analysis of (probabilistic) product lines with dynamic feature switches, multi-features and feature attributes. By means of several case studies we show how ProFeat eases family-based modeling and compare the one-by-one and all-in-one analysis approaches.
APA, Harvard, Vancouver, ISO, and other styles
35

Žárský, Jan. "Instrumentace Java programů, kontrakty pro paralelismus." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445489.

Full text
Abstract:
Contracts for concurrency describe the required atomicity of method sequences in concurrent programs. This work proposes a dynamic analyzer to verify programs written in Java against contracts for concurrency. The analyzer was designed to detect violations of parametric contracts with spoilers. The proposed analyzer was implemented as an extension to the RoadRunner framework. Support for accessing method arguments and return values was added to RoadRunner as part of the solution. The analyzer was fully implemented and verified on a set of testing programs.
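A classic instance of such a contract, shown below in Java as our own illustration: the sequence containsKey(); get() must execute atomically, and a concurrent remove() acts as the spoiler that violates it.

    import java.util.concurrent.ConcurrentHashMap;

    public class ContractExample {
        static final ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        static int unsafeLookup(String key) {
            if (map.containsKey(key))    // contract clause: containsKey get
                return map.get(key);     // may NPE if another thread removed key in between
            return -1;
        }

        static int safeLookup(String key) {
            Integer v = map.get(key);    // a single atomic call satisfies the contract
            return v == null ? -1 : v;
        }
    }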
APA, Harvard, Vancouver, ISO, and other styles
36

Maïga, Oumar. "An integrated language for the specification, simulation, formal analysis and enactment of discrete event systems." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22662/document.

Full text
Abstract:
This thesis proposes a methodology which integrates formal methods in the specification, design, verification and validation processes of complex, concurrent and distributed systems from a discrete-event perspective. The methodology is based on the graphical language HiLLS (High Level Language for System Specification) that we defined. HiLLS integrates software engineering and system-theoretic views for the specification of systems. Precisely, HiLLS integrates concepts and notations from DEVS (Discrete Event System Specification), UML (Unified Modeling Language) and Object-Z. The objectives of HiLLS include the definition of a highly communicable graphical concrete syntax and multiple semantic domains for simulation, prototyping, enactment and accessibility to formal analysis. Enactment refers to the process of creating an instance of the system executing in real clock time. HiLLS allows hierarchical and modular construction of discrete event system models while facilitating the modeling process thanks to the simple and rigorous description of the static, dynamic, structural and functional aspects of the models. Simulation semantics is defined for HiLLS by establishing a semantic mapping between HiLLS and DEVS; in this way each HiLLS model can be simulated by a DEVS simulator. This approach allows DEVS users to use HiLLS as a modeling language in the modeling phase and to use their own stand-alone or distributed DEVS implementation package to simulate the models. An enactment of HiLLS models is defined by adapting the observer design pattern to their implementation. The formal verification of HiLLS models is made by establishing morphisms between each level of abstraction of HiLLS and a formal method adapted for the formal verification of the properties at this level. The formal models on which the formal verification is performed are obtained from HiLLS specifications by using the mapping functions. The three levels of abstraction of HiLLS are: the Composite level, the Unitary level and the Traces level. These levels correspond respectively to the following levels of the system specification hierarchy proposed by Zeigler: CN (Coupled Network), IOS (Input Output System) and IORO (Input Output Relation Observation). We have established morphisms between the Composite level and CSP (Communicating Sequential Processes) and between the Unitary level and Z, and we expect to use temporal logics like LTL, CTL and TCTL to express traces-level properties. HiLLS allows the specification of both static-structure and dynamic-structure systems. In the case of dynamic-structure systems, the Composite level integrates both state-based and process-based properties. To handle state-based and process-based properties at the same time, a morphism is established between the dynamic Composite level and CSPZ (a combination of CSP and Z). The verification and validation process combines simulation, model checking and theorem proving techniques in a common framework. The model checking and theorem proving of HiLLS models are based on an integrated tooling framework composed of tools supporting the notations of the formal methods selected in the established morphisms. We apply our methodology to the modeling of the Alternating Bit Protocol (ABP) and an Automated Teller Machine (ATM).
APA, Harvard, Vancouver, ISO, and other styles
37

Vašíček, Ondřej. "Adaptér OSLC pro analýzu softwaru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445498.

Full text
Abstract:
The goal of this work is to provide an easy way of extending an analysis tool with an interface that conforms to the OSLC standard. Such an interface makes it possible to integrate a tool with other tools or systems, to use it remotely through web services, and to connect it easily to a database for queries and persistent data storage. This is achieved by designing and implementing an OSLC adapter using the Eclipse Lyo toolkit. The adapter uses the OSLC Automation domain as its interface and is generic enough to cover, through this interface, the functionality of most analysis tools by means of their existing command-line interfaces. This work gives an introduction to OSLC, Eclipse Lyo and related concepts. It then defines the requirements and differences of various analysis tools and discusses the design of the adapter and the factors that influenced the design decisions. Finally, it presents the implemented adapter and its evaluation using an automated test suite and experiments with a number of analysis tools. The most significant indicator of the adapter's quality is that it is already used in practice to add an OSLC interface to the tools ANaConDA, Perun and Spectra (all three developed at VeriFIT), as well as HiLiTE (Honeywell).
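The core idea, stripped of all OSLC and Eclipse Lyo specifics, can be sketched in plain Java: run the analysis tool through its existing command-line interface and map the outcome to an Automation-style verdict (the tool name, arguments and verdict mapping below are invented for illustration; the real adapter builds proper OSLC resources with Lyo).

    import java.io.IOException;

    public class CliAnalysisRunner {
        enum Verdict { PASSED, FAILED, ERROR }

        // Launch the wrapped tool and translate its exit code into a verdict.
        static Verdict run(String... command) {
            try {
                Process p = new ProcessBuilder(command).inheritIO().start();
                int exit = p.waitFor();
                return exit == 0 ? Verdict.PASSED : Verdict.FAILED;
            } catch (IOException | InterruptedException e) {
                return Verdict.ERROR;
            }
        }

        public static void main(String[] args) {
            // e.g. wrap a hypothetical analyser invocation
            System.out.println(run("analyser", "--input", "program.c"));
        }
    }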
APA, Harvard, Vancouver, ISO, and other styles
38

Letko, Zdeněk. "Analýza a testování vícevláknových programů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-261265.

Full text
Abstract:
The dissertation first presents a taxonomy of concurrency errors and an overview of techniques for their dynamic detection. New metrics for measuring the synchronisation and concurrent behaviour of programs are then proposed, together with a methodology for deriving them. These techniques are useful especially in testing based on state-space exploration and in saturation-based testing. The thesis further introduces a new noise-injection heuristic whose goal is to maximise the interleaving of instructions observed during testing. This heuristic is compared with existing heuristics on several benchmarks; the results show that the new heuristic outperforms the existing ones in certain cases. Finally, the thesis presents an innovative application of stochastic optimisation algorithms to the process of testing multithreaded applications. The principle of the method is a search for suitable combinations of test parameters and noise-injection methods. The method was implemented as a prototype and evaluated on a set of benchmark programs. The results show that it has the potential to significantly improve the testing of multithreaded programs.
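A minimal sketch of the noise-injection idea in Java (parameter names and values are invented; the thesis searches for good combinations of exactly such parameters): instrumentation calls maybeNoise() before shared-memory accesses, and the random delays perturb the scheduler so that more interleavings are observed across runs.

    import java.util.Random;

    public class NoiseInjector {
        private static final Random RND = new Random();
        static volatile int frequency = 10;   // percent of program points disturbed
        static volatile int strength = 5;     // maximum delay in milliseconds

        // Inserted by instrumentation before selected shared-memory accesses.
        static void maybeNoise() {
            if (RND.nextInt(100) < frequency) {
                try {
                    Thread.sleep(RND.nextInt(strength + 1));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }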
APA, Harvard, Vancouver, ISO, and other styles
39

Letko, Zdeněk. "Dynamická detekce a léčení časově závislých chyb nad daty v prostředí Java." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235989.

Full text
Abstract:
Finding concurrency bugs in complex software is difficult. As a contribution to coping with this problem, the thesis proposes an architecture for fully automated dynamic detection and healing of data races and atomicity violations in Java. Two distinct algorithms for detecting data races are presented. One of them is a novel algorithm called AtomRace, which detects data races as a special case of atomicity violations. The healing is based on suppressing a recurrence of the detected problem and can be performed by introducing additional synchronization or by legally influencing the Java scheduler; basically, it forces certain parts of the code to be executed atomically. The proposed architecture uses bytecode instrumentation to be able to track and influence the execution. The architecture and algorithms were implemented and tested on multiple case studies.
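For intuition, the Java fragment below shows the kind of problem targeted and what healing amounts to (our illustration, not code from the thesis): an unsynchronised read-modify-write is a data race, and healing suppresses it by forcing the block to execute atomically.

    public class CounterRace {
        private int count = 0;

        void unsafeIncrement() {        // two threads may interleave the read
            count = count + 1;          // and the write, so updates get lost
        }

        synchronized void healedIncrement() {   // healing: enforce atomicity
            count = count + 1;
        }
    }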
APA, Harvard, Vancouver, ISO, and other styles
40

Kattenbelt, Mark Alex. "Automated quantitative software verification." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:62430df4-7fdf-4c4f-b3cd-97ba8912c9f5.

Full text
Abstract:
Many software systems exhibit probabilistic behaviour, either added explicitly, to improve performance or to break symmetry, or implicitly, through interaction with unreliable networks or faulty hardware. When employed in safety-critical applications, it is important to rigorously analyse the behaviour of these systems. This can be done with a formal verification technique called model checking, which establishes properties of systems by algorithmically considering all execution scenarios. In the presence of probabilistic behaviour, we consider quantitative properties such as "the worst-case probability that the airbag fails to deploy within 10ms", instead of qualitative properties such as "the airbag eventually deploys". Although many model checking techniques exist to verify qualitative properties of software, quantitative model checking techniques typically focus on manually derived models of systems and cannot directly verify software. In this thesis, we present two quantitative model checking techniques for probabilistic software. The first is a quantitative adaptation of a successful model checking technique called counter-example guided abstraction refinement which uses stochastic two-player games as abstractions of probabilistic software. We show how to achieve abstraction and refinement in a probabilistic setting and investigate theoretical extensions of stochastic two-player game abstractions. Our second technique instruments probabilistic software in such a way that existing, non-probabilistic software verification methods can be used to compute bounds on quantitative properties of the original, uninstrumented software. Our techniques are the first to target real, compilable software in a probabilistic setting. We present an experimental evaluation of both approaches on a large range of case studies and evaluate several extensions and heuristics. We demonstrate that, with our methods, we can successfully compute quantitative properties of real network clients comprising approximately 1,000 lines of complex ANSI-C code — the verification of such software is far beyond the capabilities of existing quantitative model checking techniques.
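As a toy instance of such a quantitative property (ours, unrelated to the case studies in the thesis): if a network send succeeds with probability 0.9 per attempt and is retried up to n times, the probability that every attempt fails is 0.1^n, which the small Java program below tabulates.

    public class RetryProbability {
        public static void main(String[] args) {
            double pFail = 0.1;   // per-attempt failure probability (assumed)
            for (int n = 1; n <= 5; n++) {
                // Attempts are assumed independent, so failures multiply.
                System.out.printf("P(no success after %d attempts) = %.5f%n",
                                  n, Math.pow(pFail, n));
            }
        }
    }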
APA, Harvard, Vancouver, ISO, and other styles
41

Chrszon, Philipp, Clemens Dubslaff, Sascha Klüppelholz, and Christel Baier. "ProFeat: Feature-oriented engineering for family-based probabilistic model checking." Springer, 2017. https://tud.qucosa.de/id/qucosa%3A70792.

Full text
Abstract:
The concept of features provides an elegant way to specify families of systems. Given a base system, features encapsulate additional functionalities that can be activated or deactivated to enhance or restrict the base system’s behaviors. Features can also facilitate the analysis of families of systems by exploiting commonalities of the family members and performing an all-in-one analysis, where all systems of the family are analyzed at once on a single family model instead of one-by-one. Most prominent, the concept of features has been successfully applied to describe and analyze (software) product lines. We present the tool ProFeat that supports the feature-oriented engineering process for stochastic systems by probabilistic model checking. To describe families of stochastic systems, ProFeat extends models for the prominent probabilistic model checker Prism by feature-oriented concepts, including support for probabilistic product lines with dynamic feature switches, multi-features and feature attributes. ProFeat provides a compact symbolic representation of the analysis results for each family member obtained by Prism to support, e.g., model repair or refinement during feature-oriented development. By means of several case studies we show how ProFeat eases family-based quantitative analysis and compare one-by-one and all-in-one analysis approaches.
APA, Harvard, Vancouver, ISO, and other styles
42

Taylor, Ramsay G. "Verification of hardware dependent software." Thesis, University of Sheffield, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.575744.

Full text
Abstract:
Many good processes exist for ensuring the integrity of software systems. Some are analysis processes that seek to confirm that certain properties hold for the system, and these rely on the ability to infer a correct model of the behaviour of the software. To ensure that such inference is possible, many high-integrity systems are written in "safe" language subsets that restrict the program to constructs whose behaviour is sufficiently abstract and well defined that it can be determined independent of the execution environment. This necessarily prevents any assumptions about the system hardware, but consequently makes it impossible to use these techniques on software that must interact with the hardware, such as device drivers. This thesis addresses this shortcoming by taking the opposite approach: if the analyst accepts absolute hardware dependence - that the analysis will only be valid for a particular target system, namely the hardware that the driver is intended to control - then the specification of the system can be used to infer the behaviour of the software that interacts with it. An analysis process is developed that operates on disassembled executable files and formal system specifications to produce CSP-OZ formal models of the software's behaviour. This analysis process is implemented in a prototype called Spurinna, which is then used in conjunction with the verification tools Z2SAL, the SAL suite, and Isabelle/HOL to demonstrate the verification of properties of the software.
APA, Harvard, Vancouver, ISO, and other styles
43

Kirschenbaum, Jason P. "Investigations in Automating Software Verification." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306862918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Maroneze, André Oliveira. "Certified Compilation and Worst-Case Execution Time Estimation." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S030/document.

Full text
Abstract:
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and the WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed into three steps: a program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
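For readers unfamiliar with IPET, its standard formulation (textbook form with generic names, not the thesis' Coq development) maximises the total cost over edge execution counts subject to flow and loop-bound constraints:

    % IPET: the WCET estimate is the optimum of an integer linear program
    % over edge execution counts x_e, where c_e is the local cost of edge e.
    \mathrm{WCET} = \max \sum_{e \in E} c_e \, x_e
    % subject to flow conservation at every basic block b:
    \sum_{e \in \mathit{in}(b)} x_e = \sum_{e \in \mathit{out}(b)} x_e
    % and, for each loop, a bound n relating back edges to entry edges:
    x_{\mathit{back}} \le n \cdot x_{\mathit{entry}}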
APA, Harvard, Vancouver, ISO, and other styles
45

Hughes, Roger Brett. "Automated interactive software verification and synthesis." Thesis, Brunel University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Jackson, David Mark. "Logical verification of reactive software systems." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305989.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Leonardsson, Carl. "Verification of Software under Relaxed Memory." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297201.

Full text
Abstract:
The work covered in this thesis concerns automatic analysis of the correctness of parallel programs running under relaxed memory models. When a parallel program is compiled and executed on a modern architecture, various optimizations may cause it to behave in unexpected ways. In particular, accesses to the shared memory may appear in the execution in the opposite order to how they appear in the control flow of the original program source code. The memory model determines which memory accesses can be reordered in a program on a given system. Any memory model that allows some observable memory access reordering is called a relaxed memory model. The reorderings may cause bugs and make the production of parallel programs more difficult. In this work, we consider three main approaches to the analysis of correctness of programs running under relaxed memory models: (i) an exact analysis for finite-state programs running under the TSO memory model (Paper I), based on the well quasi-ordering framework; (ii) an over-approximate analysis for integer programs running under TSO (Paper II), based on predicate abstraction combined with a buffer abstraction; and (iii) two under-approximate analysis techniques for programs running under the TSO, PSO or POWER memory models (Papers III and IV), based on stateless model checking and dynamic partial order reduction. In addition to determining whether a program is correct under a given memory model, the problem of automatic fence synthesis is also considered. A memory fence is an instruction that can be inserted into a program in order to locally disable some memory access reorderings. The fence synthesis problem is the problem of automatically inferring a minimal set of memory fences which restores sufficient order in a given program to ensure its correctness.
UPMARC
APA, Harvard, Vancouver, ISO, and other styles
48

Tagore, Aditi. "Techniques to Improve Automated Software Verification." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397661277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ballis, Demis. "Rule-Based Software Verification and Correction." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1948.

Full text
Abstract:
The increasing complexity of software systems has led to the development of sophisticated formal methodologies for verifying and correcting data and programs. In general, establishing whether a program behaves correctly w.r.t. the original programmer's intention, or checking the consistency and correctness of a large set of data, are not trivial tasks, as witnessed by many case studies in the literature. In this dissertation, we face two challenging problems of verification and correction: the verification and correction of declarative programs, and the verification and correction of Web sites (i.e. large collections of semistructured data). Firstly, we propose a general correction scheme for automatically correcting declarative, rule-based programs which exploits a combination of bottom-up as well as top-down inductive learning techniques. Our hybrid methodology is able to infer program corrections that are hard, or even impossible, to obtain with a simpler, automatic top-down or bottom-up learner. Moreover, the scheme is also particularized to some well-known declarative programming paradigms: namely, the functional logic and the functional programming paradigms. Secondly, we formalize a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site, and then automatically check whether these conditions are fulfilled. We provide a rule-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which detects both incorrect/forbidden patterns and lack of information, that is, incomplete/missing Web pages. Useful information is gathered during the verification process which can be used to repair the Web site. So, after a verification phase, one can also semi-automatically infer some possible corrections in order to fix the Web site. The methodology is based on a novel rewrit
Ballis, D. (2005). Rule-Based Software Verification and Correction [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1948
Palancia
APA, Harvard, Vancouver, ISO, and other styles
50

Addy, Edward A. "Verification and validation in software product line engineering." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=1068.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains vi, 75 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 35-39).
APA, Harvard, Vancouver, ISO, and other styles
