Dissertations / Theses on the topic 'Verification and testing'

Consult the top 50 dissertations / theses for your research on the topic 'Verification and testing.'

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Rotting Tjädermo, Viktor, and Alex Tanskanen. "System Upgrade Verification : An automated test case study." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165125.

Full text
Abstract:
We live in a society where automation is becoming more common, whether it be cars or artificial intelligence. Software needs to be updated using patches; however, these patches can break components. This study takes such a patch in the context of Ericsson, identifies what needs to be tested, investigates whether the tests can be automated and assesses how maintainable they are. Interviews were used to identify the system and software parts in need of testing. Tests were then implemented in an automated test suite to test the functionality of either a system or software. The goal was to reduce troubleshooting time for employees without interrupting sessions for users, as well as to set up a working test suite. Once the automated testing was completed and implemented in the test suite, the study concluded by measuring the maintainability of the scripts using both metrics and human assessment through interviews. The results showed that the test suite was maintainable, both from the metric point of view and from the human assessment.
2

Wu, Weixin. "Mining constraints for Testing and Verification." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/31056.

Full text
Abstract:
With the advances in VLSI and System-On-Chip (SOC) technologies, the complexity of hardware systems has increased manifold. The increasing complexity poses serious challenges to digital hardware design. Functional verification has become one of the most expensive and time-consuming components of the current product development cycle. Today, design verification alone often surpasses 70% of the total development cost and the situation has been projected to continue to worsen. The two most widely used formal methods for design verification are Equivalence Checking and Model Checking. During the design phase, hardware goes through several stages of optimizations for area, speed, power, etc. Determining the functional correctness of the design after each optimization step by means of exhaustive simulation can be prohibitively expensive. An alternative way to prove functional correctness of the optimized design is to determine the design's functional equivalence with respect to some golden model which is known to be functionally correct. Efficient techniques to perform this process are known as Equivalence Checking. Equivalence Checking requires that the implementation circuit be functionally equivalent to the specification circuit. The complexity of Equivalence Checking can be exponential in the circuit size in the worst case. Since Equivalence Checking of sequential circuits still remains a challenging problem, in this thesis we first address this problem using efficient learning techniques. In contrast to traditional learning methods, our method employs a mining algorithm to efficiently discover global constraints among several nodes in a sequential circuit. In a Boolean satisfiability (SAT) based framework for bounded sequential equivalence checking, by taking advantage of the repeated search space, our mining algorithm is only performed on a small window of the unrolled circuit, and the mined relations can be reused subsequently. These powerful relations, when added as new constraint clauses to the original formula, help to significantly increase the deductive power of the SAT engine, thereby pruning a larger portion of the search space. Likewise, the memory required and the time taken to solve these problems are reduced. We also propose a pseudo-functional test generation method based on effective functional constraint extraction. We use mining techniques to extract a set of multi-node functional constraints which consists of illegal states and internal signal correlations. The functional constraints are then imposed on an ATPG tool to generate pseudo-functional delay tests.
Master of Science
3

Jayabharathi, Rathish. "Hierarchical timing verification and delay fault testing /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
4

Nilsson, Daniel. "System for firmware verification." Thesis, University of Kalmar, School of Communication and Design, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hik:diva-2372.

Full text
Abstract:

Software verification is an important part of software development, and the most practical way to do this today is through dynamic testing. This report explains concepts connected to verification and testing and also presents the testing framework Trassel, developed during the writing of this report. Constructing domain specific languages and tools by using an existing language as a starting ground can be a good strategy for solving certain problems; this was tried with Trassel, where the description language for writing test cases was written as a DSL using Python as the host language.

5

Argote, Garcia Gonzalo. "Formal verification and testing of software architectural models." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/1308.

Full text
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in the past years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined which includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
6

Zhou, Zhiquan, and 周智泉. "Verification of program properties: from testing to semi-proving." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31245134.

Full text
7

Sudol, Alicia. "A methodology for modeling the verification, validation, and testing process for launch vehicles." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54429.

Full text
Abstract:
Completing the development process and getting to first flight has become a difficult hurdle for launch vehicles. Program cancellations in the last 30 years were largely due to cost overruns and schedule slips during the design, development, testing and evaluation (DDT&E) process. Unplanned rework cycles that occur during verification, validation, and testing (VVT) phases of development contribute significantly to these overruns, accounting for up to 75% of development cost. Current industry standard VVT planning is largely subjective with no method for evaluating the impact of rework. The goal of this research is to formulate and implement a method that will quantitatively capture the impact of unplanned rework by assessing the reliability, cost, schedule, and risk of VVT activities. First, the fidelity level of each test is defined and the probability of rework between activities is modeled using a dependency structure matrix. Then, a discrete event simulation projects the occurrence of rework cycles and evaluates the impact on reliability, cost, and schedule for a set of VVT activities. Finally, a quadratic risk impact function is used to calculate the risk level of the VVT strategy based on the resulting output distributions. This method is applied to alternative VVT strategies for the Space Shuttle Main Engine to demonstrate how the impact of rework can be mitigated, using the actual test history as a baseline. Results indicate rework cost to be the primary driver in overall project risk, and yield interesting observations regarding the trade-off between the upfront cost of testing and the associated cost of rework. Ultimately, this final application problem demonstrates the merits of this methodology in evaluating VVT strategies and providing a risk-informed decision making framework for the verification, validation, and testing process of launch vehicle systems.
8

Belsick, Charlotte Ann. "Space Vehicle Testing." DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/888.

Full text
Abstract:
Requirement verification and validation is a critical component of building and delivering space vehicles, with testing as the preferred method. This Master's Project presents the space vehicle test process from planning through test design and execution. It starts with an overview of requirements, validation, and verification. The four different verification methods are explained, including examples of what can go wrong if the verification is done incorrectly. Since the focus of this project is on test, test verification is emphasized. The philosophy behind testing, including the "why" and the methods, is presented. The different levels of testing, the test objectives, and the typical tests are discussed in detail. Descriptions of the different types of tests are provided, including configurations and test challenges. While most individuals focus on hardware only, software is an integral part of any space product. As such, software testing, including mistakes and examples, is also presented. Since testing is often not performed flawlessly the first time, sections on anomalies, including determining root cause, corrective action, and retest, are included. A brief discussion of defect detection in test is presented. The project is presented in full in the Appendix as a PowerPoint document.
9

Ipate, Florentin Eugen. "Theory of X-machines with applications in specification and testing." Thesis, University of Sheffield, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319486.

Full text
10

Woo, Yan, and 胡昕. "A dynamic integrity verification scheme for tamper-resistancesoftware." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B34740478.

Full text
11

Seward, Balaji B. "Small engine emissions testing laboratory development and emissions sampling system verification." Morgantown, W. Va. : [West Virginia University Libraries], 2010. http://hdl.handle.net/10450/11024.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2010.
Title from document title page. Document formatted into pages; contains xvi, 110 p. : ill. Includes abstract. Includes bibliographical references (p. 108-110).
12

Banga, Mainak. "Testing and Verification Strategies for Enhancing Trust in Third Party IPs." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30085.

Full text
Abstract:
Globalization in the semiconductor industry has accelerated the trend of outsourcing component design and manufacturing across geographical boundaries. While cost reduction and short time to market are the driving factors behind this trend, the authenticity of the final product remains a major question. Third party deliverables are based solely on mutual trust, and any manufacturer with a malicious intent can fiddle with the original design to make it work otherwise than expected in certain specific situations. In case such a backfire happens, the consequences can be disastrous, especially for mission critical systems such as space exploration, defense equipment such as missiles, or life-saving equipment such as medical gadgets, where a single failure can translate to a loss of lives or millions of dollars. Thus, accompanied with outsourcing comes the question of trustworthy design - "how to ensure that the integrity of the product manufactured by a third party has not been compromised". This dissertation aims towards developing verification methodologies and implementing non-destructive testing strategies to ensure the authenticity of a third party IP. This can be accomplished at various levels in the IC product life cycle. At the design stage, special testability features can be incorporated in the circuit to enhance its overall testability, thereby making the otherwise hard-to-test portions of the design testable at the post-silicon stage. We propose two different approaches to enhance the testability of the overall circuit. The first allows improved at-speed testing for the design, while the second aims to exaggerate the effect of unwanted tampering (if present) on the IC. At the verification level, techniques like sequential equivalence checking can be employed to compare the third-party IP against a genuine specification and filter out components showing any deviation from the intended behavior. At the post-silicon stage, power discrepancies beyond a certain threshold between two otherwise identical ICs can indicate the presence of a malicious insertion in one of them. We have addressed all of these in this dissertation and suggested techniques that can be employed at each stage. Our experiments show promising results for detecting such alterations/insertions in the original design.
Ph. D.
13

Härkönen, J. (Janne). "Improving product development process through verification and validation." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291661.

Full text
Abstract:
The workload of Verification and Validation (V&V) has increased constantly in the high technology industries. The changes in the business environment, with fast time-to-market and demands to decrease research and development costs, have increased the importance of an efficient product creation process, including V&V. The significance of V&V related know-how and testing is increasing in the high tech business environment. As a consequence, companies in the ICT sector face pressure to improve their product development process and their verification and validation activities. The main motive for this research arises from the fact that research has been scarce on verification and validation from the product development process perspective. This study approaches the above mentioned goal from four perspectives: current challenges and success factors, V&V maturity in different NPD phases, benchmarking the automotive sector, and shifting the emphasis of NPD efforts. This dissertation is qualitative in nature and is based on interviewing experienced industrial managers and reflecting their views against the scientific literature. The researcher has analysed the obtained material and made conclusions. The main implication of this doctoral dissertation is a visible need to shift the emphasis of V&V activities to early NPD. These activities should be viewed and managed over the entire NPD process. There is a need for companies to understand the V&V maturity in different NPD phases and develop activities based on this understanding. Verification and validation activities must be seen as an integral element of successful NPD. Benchmarking other sectors may enable identifying development potential for the NPD process. The automotive sector, being a mature sector, has developed practices for successfully handling requirements during NPD. The role of V&V is different in different NPD phases. Set-based V&V can provide the required understanding during early product development. In addition, developing parallel technological alternatives and platforms during early NPD also supports shifting the emphasis towards earlier development phases.
14

Antti, William. "Virtualized Functional Verification of Cross-Platform Software Applications." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74599.

Full text
Abstract:
With so many developers writing code, and more choosing to become developers every day, tools are needed to aid the work process. With testing being done for multiple different devices and sources, there is a need to make it better and more efficient. In this thesis, connecting a variety of different tools such as version control, project management, issue tracking and test systems is explored as a possible solution. A possible solution was implemented and then analyzed through a questionnaire answered by developers. For example, as many as 75% answered 5 when asked if they liked the connection between the issue tracking system and the test results, and 75% also gave a 5 when asked about the way the test results were presented. The answers they gave about the implementation made it possible to conclude that a solution can be achieved that addresses some of the presented problems: a better way to connect various tools to present and analyze test results coming from multiple different sources.
15

Chin, Quee Shawn L. "Design verification for tissue engineered vascular grafts." Thesis, Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/19689.

Full text
16

Scott, Hanna E. T. "A Balance between Testing and Inspections : An Extended Experiment Replication on Code Verification." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1751.

Full text
Abstract:
An experiment replication comparing the performance of traditional structural code testing with inspection meeting preparation using scenario-based reading. The original experiment was conducted by Per Runeson and Anneliese Andrews in 2003 at Washington State University.
17

Gonzalez Perez, Carlos Alberto. "Pragmatic model verification." Thesis, Nantes, Ecole des Mines, 2014. http://www.theses.fr/2014EMNA0189/document.

Full text
Abstract:
Model-Driven Engineering (MDE) is a popular approach to the development of software which promotes the use of models as first-class citizens in the software development process. In an MDE-based software development process, software is developed by creating models that are successively transformed into other models and eventually into the software source code. When MDE is applied to the development of complex software systems, the complexity of models and model transformations increases, thus risking both the reliability of the software development process and the soundness of the resulting software. Traditionally, ensuring software correctness and the absence of errors has been addressed by means of software verification approaches, based on the utilization of formal analysis techniques, and software testing approaches. In order to ensure the reliability of MDE-based software development processes, these techniques have somehow been adapted to try to ensure the correctness of models and model transformations. The objective of this thesis is to provide new mechanisms to improve the landscape of approaches devoted to the verification of static models, and to analyze how these static model verification approaches can be of assistance when testing model transformations.
18

Angerhofer, Bernhard J. "Collaborative supply chain modelling and performance measurement." Thesis, Brunel University, 2002. http://bura.brunel.ac.uk/handle/2438/4993.

Full text
Abstract:
For many years, supply chain research focused on operational aspects and therefore mainly on the optimisation of parts of the production and distribution processes. Recently, there has been an increasing interest in supply chain management and collaboration between supply chain partners. However, there is no model that takes into consideration all aspects required to adequately represent and measure the performance of a collaborative supply chain. This thesis proposes a model of a collaborative supply chain, consisting of six constituents, all of which are required in order to provide a complete picture of such a collaborative supply chain. In conjunction with that, a collaborative supply chain performance indicator is developed. It is based on three types of measures to allow the adequate measurement of collaborative supply chain performance. The proposed model of a collaborative supply chain and the collaborative supply chain performance indicator are implemented as a computer simulation. This is done in the form of a decision support environment, whose purpose is to show how changes in any of the six constituents affect collaborative supply chain performance. The decision support environment is configured and populated with information and data obtained in a case study. Verification and validation testing in three different scenarios demonstrates that the decision support environment adequately fulfils its purpose.
19

El, Maarabani Mazen. "Verification and test of interoperability security policies." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00717602.

Full text
Abstract:
Nowadays, there is an increasing need for interaction in the business community. In such a context, organizations collaborate with each other in order to achieve a common goal. In such an environment, each organization has to design and implement an interoperability security policy. This policy has two objectives: (i) it specifies the information or the resources to be shared during the collaboration and (ii) it defines the privileges of the organizations' users. To guarantee a certain level of security, it is mandatory to check whether the organizations' information systems behave as required by the interoperability security policy. In this thesis we propose a method to test the behavior of a system with respect to its interoperability security policies. Our methodology is based on two approaches: active testing and passive testing. We found that these two approaches are complementary when checking contextual interoperability security policies. Note that a security policy is said to be contextual if the activation of each security rule is constrained by conditions. Active testing consists in generating a set of test cases from a formal model. Thus, we first propose a method to integrate the interoperability security policies into a formal model. This model specifies the functional behavior of an organization. The functional model is represented using the Extended Finite Automata formalism, whereas the interoperability security policies are specified using the OrBAC model and its extension O2O. In addition, we propose a model-checking based method to check whether the behavior of a model respects some interoperability security policies. To generate the test cases, we used a dedicated tool developed in our department. The tool allows generating abstract test cases expressed in the TTCN notation to facilitate their portability. In the passive testing approach, we specify the interoperability policy that the system under test has to respect with Linear Temporal Logic. We then analyze the collected traces of the system execution in order to deduce a verdict on their conformity with respect to the interoperability policy. Finally, we show the applicability of our methods through a hospital network case study. This application demonstrates the effectiveness and reliability of the proposed approaches.
20

Ranganathan, Krishna. "DVTG - Design Verification Test Generation from Rosetta Specifications." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin994691304.

Full text
21

Moschoglou, Georgios Moschos. "Software testing tools and productivity." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014862.

Full text
Abstract:
Testing statistics state that testing consumes more than half of a programmer's professional life, although few programmers like testing, fewer like test design and only 5% of their education will be devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software using no tool and testing software using a command-line based testing tool - to the length of time and number of test cases needed to achieve an 80% statement coverage for 22 graduate students in the Computer Science Department. The second experiment compares three conditions - testing software using no tool, testing software using a command-line based testing tool, and testing software using a GUI interactive tool with added functionality - to the length of time and number of test cases needed to achieve 95% statement coverage for 39 graduate and undergraduate students in the same department.
Department of Computer Science
22

Tekin, Yasar. "An Automated Tool For Requirements Verification." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605401/index.pdf.

Full text
Abstract:
In today's world, only those software organizations that consistently produce high quality products can succeed. This situation enforces the effective usage of defect prevention and detection techniques. One of the most effective defect detection techniques used in the software development life cycle is verification of software requirements applied at the end of the requirements engineering phase. If the existing verification techniques can be automated to meet today's work environment needs, the effectiveness of these techniques can be increased. This study focuses on the development and implementation of an automated tool that automates verification of software requirements modeled in Aris eEPC and Organizational Chart for automatically detectable defects. The application of reading techniques on a project and comparison of results of manual and automated verification techniques applied to a project are also discussed.
23

Sundbaum, Niklas. "Automated Verification of Load Test Results in a Continuous Delivery Deployment Pipeline." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169656.

Full text
Abstract:
Continuous delivery is a software development methodology that aims to reduce development cycle time by putting a strong emphasis on automation, quality and rapid feedback. This thesis develops an automated method for detecting performance regressions as part of a continuous delivery deployment pipeline. The chosen method is based on control charts, a tool commonly used within statistical process control. This method is implemented as part of a continuous delivery deployment pipeline and its ability to detect performance regressions is then evaluated by injecting various performance bottlenecks in a sample application. The results from this thesis show that using a control chart based approach is a viable option when trying to automate verification of load test results in the context of continuous delivery.
24

Deng, Xianghua. "Contract-based verification and test case generation for open systems." Diss., Manhattan, Kan. : Kansas State University, 2007. http://hdl.handle.net/2097/345.

Full text
25

Debesay, Teclemicael Tewelde. "Experimental verification of the finite element analysis of a dynamically loaded semi-trailer." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49960.

Full text
Abstract:
Thesis (MScEng)--University of Stellenbosch, 2004.
ENGLISH ABSTRACT: The aim of the thesis is to compare results obtained from a finite element analysis method (FEM) to experimental results for a 12.2 m long semi-trailer driven off-road. Semi-trailers are of great importance in the transport industry. Furthermore, the need for optimal and reliable semi-trailers in this crucial stream of industry is indispensable. The work focuses on comparing the two sets of results so that the finite element method may be used as a design analysis and redesigning tool as a substitute for testing. The semi-trailer was driven on a relatively rough off-road track at different speeds of 70 km/h, 50 km/h and 40 km/h, loaded with about 12 tonnes of brick pallets. The forces at the suspension of the semi-trailer and the strains at different parts were measured with the help of strain gauges and other data acquisition equipment. A finite element model of the semi-trailer was modelled in Nastran for Windows. The trailer parameters in the finite element model were tuned to curve-fit the test results. A comparison of the two results was made based on the average of absolute values and the standard deviation, to verify the validity of the finite element model.
26

Piirainen, R. M. (Risto-Matti). "Automatic verification of 3GPP throughput counters In PDCP/RLC/MAC layer capacity testing." Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201710112984.

Full text
Abstract:
Counters provide information about the functionality of the base station. That information is highly valuable for mobile operators, who re-configure their networks partly based on it. Mobile operators also monitor counters to see what the base station is capable of. Each base station has hundreds of different counters, measuring numerous different things continuously. Counter information is provided by the base station software, which takes care of keeping all the counters up-to-date. Nowadays base stations are very efficient, and they are capable of handling thousands of different requests in the blink of an eye. The increasing complexity of the product poses an enormous challenge for counters, since calculating the values for each counter gets more complex. This research was conducted in a Finnish large-scale telecommunications company. Although counters are extremely important for customers, they are not verified effectively in the case company's LTE PDCP/RLC/MAC layer capacity tests. The goal of this research is to create a mechanism which makes it possible to easily enable automatic counter verification in any automated capacity test case. Design science research was applied to achieve this goal. In this research, a literature review is conducted to gain an understanding of LTE, 3GPP throughput counters, and the capacity testing environment of the case company. Then a new counter verification system for LTE PDCP/RLC/MAC layer capacity tests is designed and implemented. After the system is implemented, expected counter values are calculated for each test case and counter that is part of this research. Evaluation of the system is made against the system requirements and the accuracy of the limit value calculations. As a conclusion, it can be said that the implementation of the system was a success, but in some of the test cases counters provided unexpected results. The implemented system was able to catch the faults, but the root causes of the problems are not clear. In total, 185 test case-counter combinations were verified, and in almost 13% of them counter verification failed the test case because the counter provided an unexpected value. In the future, it would be beneficial to perform a root cause analysis of the issues this research pointed out.
27

Shahan, Michael R. "Development and verification of a laboratory for the emissions testing of locomotive engines." Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5975.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains xi, 118 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 102-103).
28

Reich, Jason S. "Property-based testing and properties as types : a hybrid approach to supercompiler verification." Thesis, University of York, 2013. http://etheses.whiterose.ac.uk/5650/.

Full text
Abstract:
This thesis describes a hybrid approach to compiler verification. Property-based testing and mechanised proof are combined to support the verification of a supercompiler, a particular source-to-source program optimisation. A careful developer may use formal methods to ensure that their program code is correct to specifications. Poorly constructed compilers (and their associated machinery) can produce object code that does not have the same meaning as the source program. Therefore, to ensure the correctness of the executable program, each component of the compilation pipeline needs to be verified. Lazy SmallCheck, a property-based testing library, is extended with support for existential quantification, functional values and a technique for displaying partial counterexamples. Lazy SmallCheck is then applied to the efficient generation of test programs for a small first-order functional language, specified using declarative statements of program validity. We extend the technique with several definitions of canonical programs to reduce the test-data space. A supercompiler is implemented for a core higher-order language, contrasting implementations found in other publications. We also survey the techniques and themes seen in the literature on compiler proof. These surveys inform the development of an abstract verified supercompiler in a dependently-typed language. In this work, we represent correctness properties as types. This abstract model is then adapted to integrate mechanical proof and the results of property-based testing to verify a working supercompiler implementation. While more work is required to improve the framework's ease-of-use and the speed of verification, the results show that this approach to hybrid verification is feasible.
29

Belt, P. (Pekka). "Improving verification and validation activities in ICT companies—product development management approach." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291487.

Full text
Abstract:
The main motive for this research arises from the fact that research has been scarce on verification and validation (V&V) activities from the management viewpoint, even though V&V has been covered from the technical viewpoint. There was a clear need for studying the management aspects due to the development of the information and communications technology (ICT) sector and the increased significance of V&V activities. ICT has developed into a turbulent, high clock-speed sector and the importance of V&V activities has increased significantly. As a consequence, companies in the ICT sector require ideas for improving their verification and validation activities from the product development management viewpoint. This study approaches the above mentioned goal from four perspectives: current V&V management challenges, organisational and V&V maturities, benchmarking another sector, and uncertainty during new product development (NPD). This dissertation is qualitative in nature and is based on interviewing experienced industrial managers and reflecting their views against the scientific literature. The researcher has analysed the obtained material and made conclusions. The main implications of this doctoral dissertation can be summarised as a need to overcome the current tendency to organise through functional silos, and the low maturity of V&V activities. Verification and validation activities should be viewed and managed over the entire NPD process. This requires new means for cross-functional integration. The maturity of the overall management system needs to be adequate to enable higher efficiency and effectiveness of V&V activities. There are pressures to shift the emphasis of V&V to early NPD and simultaneously delay decision-making in NPD projects to a stage where enough information is available. Understanding-enhancing V&V methods are a potential way to advance towards these goals.
30

Zhang, Lei. "Modeling and Verification of Simulation tools for Carburizing and Carbonitriding." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/484.

Full text
Abstract:
"The CHTE surface hardening simulation tools, CarboNitrideTool© and CarbTool© have been enhanced to improve the accuracy of the simulation and to predict the microstructure and microhardness profiles after the heat treatment process. These tools can be used for the prediction of both gas and low pressure carburizing processes. The steel alloys in the data base include 10XX, 48XX, 51XX, 86XX, 93XX and Pyrowear 53. They have been used by CHTE members to design efficient carburizing cycles to maximum the profit by controlling the cost and time. In the current software, the model has successfully predicted the carbon concentration profiles for gas carburizing process and many low pressure carburizing processes. In some case, the simulation toll may not work well with the low pressure carburizing process, especially with AISI 9310 alloy. In the previous simulation, a constant carbon flux boundary condition was used. However, it has been experimentally proven that the flux is a function of time. The high carbon potential may cause soot and carbides at the outer edge. The soot and carbides will impede the diffusion of carbon during the low pressure carburizing process. The constant carbon flux cannot be appropriately used as the boundary condition. An improved model for the process is proposed. In the modeling, carbon potential and mass transfer coefficient are calculated and used as the boundary condition. CarbonitrideToolⒸ has been developed for the prediction of both carbon and nitrogen profiles for carbonitriding process. The microstructure and hardness profile is also needed by the industry. The nitrogen is an austenite stabilizer which result in high amount of retained austenite (RA). RA plays important role in the hardness. The model has been developed to predict the Martensite start temperature (Ms) which can be used for RA prediction. Mixture rule is used then to predict the hardness profiles. Experiments has been conducted to verify the simulation. The hardness profile is also predicted for tempered carburized alloys. Hollomon-Jaffe equation was used. A matrix of tempering experiments are conducted to study the Hollomon Jaffe parameter for AISI 8620 and AISI 9310 alloy. Constant C value is calculated with a new mathematical method. With the calculation result, the hardness profile can be predicted with input of tempering time and temperature. Case depth and surface hardness are important properties for carburized steel that must be well controlled. The traditional testing is usually destructive. Samples are sectioned and measured by either OES or microhardness tester. It is time consuming and can only be applied on sampled parts. The heat treating industry needs a physics based, verified simulation tool for surface hardening processes to accurately predict concentration profiles, microstructure and microhardness profiles. There is also a need for non-destructive measurement tool to accurately determine the surface hardness and case depth. Magnetic Barkhausen Noise (MBN) is one of the promising way to test the case depth and hardness. MBN measures the pulses generating by the interaction between magnetic domain walls in the ferromagnetic material and the pinning sites such as carbides, impurities and dislocation. These signals are analyzed to evaluate the properties of the carburized steel. "
31

MacGahan, Christopher. "Mathematical Methods for Enhanced Information Security in Treaty Verification." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621280.

Full text
Abstract:
Mathematical methods have been developed to perform arms-control-treaty verification tasks for enhanced information security. The purpose of these methods is to verify and classify inspected items while shielding the monitoring party from confidential aspects of the objects that the host country does not wish to reveal. Advanced medical-imaging methods used for detection and classification tasks have been adapted for list-mode processing, useful for discriminating projection data without aggregating sensitive information. These models make decisions off of varying amounts of stored information, and their task performance scales with that information. Development has focused on the Bayesian ideal observer, which assumes complete probabilistic knowledge of the detector data, and Hotelling observer, which assumes a multivariate Gaussian distribution on the detector data. The models can effectively discriminate sources in the presence of nuisance parameters. The channelized Hotelling observer has proven particularly useful in that quality performance can be achieved while reducing the size of the projection data set. The inclusion of additional penalty terms into the channelizing-matrix optimization offers a great benefit for treaty-verification tasks. Penalty terms can be used to generate non-sensitive channels or to penalize the model's ability to discriminate objects based on confidential information. The end result is a mathematical model that could be shared openly with the monitor. Similarly, observers based on the likelihood probabilities have been developed to perform null-hypothesis tasks. To test these models, neutron and gamma-ray data was simulated with the GEANT4 toolkit. Tasks were performed on various uranium and plutonium inspection objects. A fast-neutron coded-aperture detector was simulated to image the particles.
32

Misselhorn, Werner Ekhard. "Verification of hardware-in-the-loop as a valid testing method for suspension development." Diss., Pretoria : [s.n.], 2004. http://upetd.up.ac.za/thesis/available/etd-07282005-082527.

Full text
33

Chunduri, Annapurna. "An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12805.

Full text
Abstract:
Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today's automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester's experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.
34

Andrieu, Christian W. "Testing, validation, and verification of an expert system advisor for aircraft maintenance scheduling (ESAAMS)." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28599.

Full text
35

Vahlberg, Mikael. "Verification of Risk Algorithm Implementations in a Clearing System Using a Random Testing Framework." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-139544.

Full text
Abstract:
Clearing is keeping track of transactions until they are settled. Standardized derivatives such as options and futures can be cleared through a clearinghouse if you are a clearing member. The clearinghouse steps in as an intermediary between trades and manages all occurring counterparty risk. To be able to keep track of all transactions and also monitor members' risk exposure, a clearinghouse uses advanced clearing software. Counterparty risk is mainly handled by collecting collateral from each clearing member; the initial collateral that a clearinghouse requires from a member trading with derivatives is called initial margin. Initial margin is calculated by a risk algorithm incorporated in the clearing software. Cinnober Financial Technology delivers clearing solutions to clearinghouses worldwide; software providers to the financial industry have high demands on software quality. Ensuring high software quality can be done by performing various types of software testing. The goal of this thesis is to implement an extendable random testing framework that can test risk algorithm implementations that are part of a clearing system under development by Cinnober. By using the implemented framework, we aim to verify whether the risk algorithm SPAN calculates a fair initial margin amount. We also intend to increase the quality assurance of the risk domain that is responsible for all risk calculations. In this thesis we implement a random testing framework suitable for testing risk algorithms. Furthermore, we implement a framework extension for SPAN that is used to test the SPAN algorithm's initial margin calculations. The implementation consists of two main parts, the first being a random generation entity that feeds the clearing system with randomized input data. The second part is a verification entity called a test oracle; it is responsible for verifying the SPAN algorithm's calculation results. The random testing framework for risk algorithms was successfully implemented. By running the SPAN extension of the framework, we managed to find four issues related to the accuracy of the SPAN algorithm. This discovery led to the conclusion that the current SPAN algorithm implementation does not calculate fair initial margin. It also led to an immediate increase in quality assurance because the issues will be corrected. As a result of the framework's extensible characteristics, long-term quality also increases.
36

Feliachi, Abderrahmane. "Semantics-Based Testing for Circus." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112372/document.

Full text
Abstract:
The work presented in this thesis is a contribution to formal specification and verification methods. Formal specifications are used to describe a software, or more generally a system, in a mathematical unambiguous way. Formal verification techniques are defined on the basis of these specifications to ensure the correctness of the resulting system. However, formal methods are often not convenient and easy to use in real system developments. One of the reasons is that many specification formalisms are not rich enough to cover both data-oriented and behavioral requirements. Some specification languages were proposed to cover this kind of requirements. The Circus language distinguishes itself among these languages by a rich syntax and a fully integrated semantics.The aim of this thesis is to provide a formal environment for specifying and verifying complex systems. Specifications are written in Circus and verification is performed either by testing or by theorem proving. Similar specifications and verification environment have already been proposed. A specificity of our approach is to combine supports for proofs and test generation. Moreover, most test generation methods are based on a syntactic characterization of the studied languages. Our proposed environment is different since it is based on the denotational and operational semantics of Circus. The Isabelle/HOL theorem prover is the formal platform on top of which we built our specification and verification environment.The first main contribution of our work is the Isabelle/Circus specification and proof environment based on the denotational semantics of Circus. On top of Isabelle/HOL we provide a machine-checked shallow embedding of UTP, the semantics basis of Circus. This embedding is used to formalize the denotational semantics of the Circus language. The Isabelle/Circus environment associates to this semantics some parsing facilities that help writing Circus specifications. The proof support of Isabelle/HOL can be used directly to reason on these specifications thanks to the shallow embedding of the semantics. We present an application of the environment to refinement proofs on Circus processes (involving both data and behavioral aspects). The second main contribution is the CirTA testing framework build on top of Isabelle/Circus. The framework provides two symbolic test generation tactics that allow checking two notions of refinement: traces inclusion and deadlocks reduction. The framework is based on a shallow symbolic formalization of the operational semantics of Circus using Isabelle/Circus. Several symbolic definition and test generation tactics are defined in the CirTA framework. The formal infrastructure allows us to represent explicitly test theories as well as test selection hypothesis. Proof techniques and symbolic computations are the basis of test generation tactics. The test generation environment was used for a case study to test an existing message monitoring system. A specification of the system is written in Circus, and used to generate tests following the defined conformance relations. The tests are then compiled in forms of JUnit test methods and executed against a Java implementation of the monitoring system.This thesis is a step towards, on one hand, the development of sophisticated testing tools making use of proof techniques and, on the other hand, the integration of testing and proving within formally verified software developments
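To make the last step concrete, here is a hedged sketch of what a generated trace-inclusion test could look like once compiled to JUnit; the class and message names (MessageMonitor, recv.ping, and so on) are illustrative assumptions for this example, not CirTA's actual output format.

    import static org.junit.Assert.assertTrue;
    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;

    // Hypothetical shape of a generated trace-inclusion test; all names are illustrative.
    public class MonitorTraceTest {

        // Stand-in for the Java implementation under test.
        static class MessageMonitor {
            private final List<String> log = new java.util.ArrayList<>();
            void receive(String msg) { log.add("recv." + msg); }
            List<String> trace() { return log; }
        }

        @Test
        public void acceptedTraceIsIncludedInSpecification() {
            MessageMonitor sut = new MessageMonitor();
            sut.receive("ping");
            sut.receive("ack");

            // Traces allowed by the (symbolically unfolded) specification,
            // instantiated here with concrete witness values.
            List<List<String>> specTraces = Arrays.asList(
                Arrays.asList("recv.ping", "recv.ack"),
                Arrays.asList("recv.ping"));

            assertTrue("observed trace must be a specification trace",
                       specTraces.contains(sut.trace()));
        }
    }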
APA, Harvard, Vancouver, ISO, and other styles
37

Jagadeesan, Harini. "Design and Verification of Privacy and User Re-authentication Systems." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/32394.

Full text
Abstract:
In the Internet age, privacy and security have become major concerns, since an increasing number of transactions are made over an unsecured network and there is a greater chance for private data to be misused. Further, insider attacks can result in the loss of valuable data. Hence there is a strong need for continual, non-intrusive, quick user re-authentication. A number of studies have previously been conducted on authentication using behavioral attributes. Few successful re-authentication mechanisms are currently available, since they rely on either the mouse or the keyboard alone and target particular applications. Successful re-authentication, however, depends on a large number of factors, such as user excitation level and fatigue, and using just the keyboard or the mouse does not mitigate these factors successfully.

Both the keyboard and the mouse contain valuable, hard-to-duplicate information about the user's behavior, which can be used for analysis and identification of the current user. We propose an application-independent system that uses this information for user re-authentication. The system authenticates the user continually based on his/her behavioral attributes obtained from both keyboard and mouse operations. This re-authentication system is simple, continual, non-intrusive and easily deployable. To utilize the mouse and keyboard information for re-authentication, we propose a novel heuristic based on the ratio of mouse-to-keyboard interaction. This heuristic allows us to extract suitable user-behavioral attributes, and the extracted data is compared with an already trained database for user re-authentication.
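As a rough illustration of the heuristic (not the thesis implementation), the sketch below computes a session's mouse-to-keyboard ratio and compares it with a stored per-user value; the class name ReauthHeuristic, the tolerance and the profile values are assumptions made for the example.

    import java.util.Map;

    // Illustrative sketch: compare a session's mouse-to-keyboard interaction
    // ratio against a stored user profile.
    public class ReauthHeuristic {

        /** Fraction of interaction events that came from the mouse. */
        static double mouseToKeyboardRatio(int mouseEvents, int keyboardEvents) {
            int total = mouseEvents + keyboardEvents;
            return total == 0 ? 0.0 : (double) mouseEvents / total;
        }

        /** Accept the session if the observed ratio stays within a per-user tolerance. */
        static boolean matchesProfile(double observedRatio,
                                      Map<String, Double> trainedProfile,
                                      String userId,
                                      double tolerance) {
            Double expected = trainedProfile.get(userId);
            return expected != null && Math.abs(observedRatio - expected) <= tolerance;
        }

        public static void main(String[] args) {
            Map<String, Double> profile = Map.of("alice", 0.35); // hypothetical trained value
            double observed = mouseToKeyboardRatio(140, 260);    // 0.35
            System.out.println(matchesProfile(observed, profile, "alice", 0.05)); // true
        }
    }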

The accuracy of the system is calculated as the ratio of correct identifications to the total number of identifications. At present, the accuracy is around 96% for application-based user re-authentication and around 82% for application-independent user re-authentication. We perform black-box testing, white-box testing and Spec# verification procedures that demonstrate the robustness of the proposed system. On testing POCKET, a privacy-protection software package for children, it was found that the security of POCKET was inadequate at the user level. Our system enhances POCKET security at the user level and ensures that the child's privacy is protected.
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
38

Williams, Steve. "Advanced Test Range Verification at RF Without Flights." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605960.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Flight and weapons test ranges typically include multiple Telemetry Sites (TM Sites) that receive telemetry from platforms being flown on the range. The received telemetry is processed and forwarded to a Range Control Center (RCC), which is responsible for flight safety and for delivering captured best-source telemetry to those responsible for the platform being flown. When range equipment or operations are impaired in their ability to receive telemetry or process it correctly, expensive and/or one-of-a-kind platforms may have to be destroyed in flight to maintain safety margins, resulting in substantial monetary loss, valuable data loss, schedule disruption and potential safety concerns. Less severe telemetry disruptions can also result in missing or garbled telemetry data, negatively impacting platform test, analysis and design modification cycles. This paper provides a high-level overview of a physics-compliant Range Test System (RTS) built upon Radio Frequency (RF) Channel Simulator technology. The system is useful in verifying range operation with most range equipment configured to function as in an actual mission. The system generates RF signals with the appropriate RF link effects associated with range and range rate between the flight platform and multiple telemetry tracking stations. It also emulates flight and RF characteristics of the platform, including signal parameters, antenna modeling, body shielding and accurate flight parameters. The system is useful for hardware, software, firmware and process testing, regression testing and fault-detection testing, as well as for range customer assurance and range personnel training against nominal and worst-case conditions.
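For orientation, the two dominant link effects that such a channel simulator must reproduce as range and range rate change are free-space path loss and Doppler shift; the sketch below is a hedged physics illustration only (the carrier frequency, ranges and class names are assumed for the example), not the RTS implementation.

    // Illustrative physics sketch of the two dominant RF link effects.
    public class LinkEffects {
        static final double C = 299_792_458.0; // speed of light, m/s

        /** Free-space path loss in dB for range d (m) and carrier frequency f (Hz). */
        static double freeSpacePathLossDb(double d, double f) {
            return 20 * Math.log10(d) + 20 * Math.log10(f) + 20 * Math.log10(4 * Math.PI / C);
        }

        /** Doppler-shifted carrier for range rate v (m/s, positive when the range is opening). */
        static double dopplerShiftedFrequency(double f, double rangeRate) {
            return f * (1.0 - rangeRate / C);
        }

        public static void main(String[] args) {
            double f = 2.25e9; // assumed S-band telemetry carrier, Hz
            System.out.printf("FSPL at 50 km: %.1f dB%n", freeSpacePathLossDb(50_000, f));
            System.out.printf("Doppler at +300 m/s: %.1f Hz shift%n",
                              f - dopplerShiftedFrequency(f, 300));
        }
    }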
APA, Harvard, Vancouver, ISO, and other styles
39

Ubah, Ifeanyi. "A Language-Recognition Approach to Unit Testing Message-Passing Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215655.

Full text
Abstract:
This thesis addresses the problem of unit testing components in message-passing systems. A message-passing system is one that comprises components communicating with each other solely via the exchange of messages. Testing aids developers in detecting and fixing potential errors, and with unit testing in particular the focus is on independently verifying the correctness of single components, such as functions and methods, in a system whose behavior is well understood. With the aid of unit testing frameworks such as those of the xUnit family, this process can not only be automated and done iteratively, but also easily interleaved with the development process, facilitating rapid feedback and early detection of errors in the system. However, such frameworks work in an imperative manner and, as such, are unsuitable for verifying message-passing systems, where the behavior of a component is encoded in its stream of exchanged messages. In this work, we recognize that, similar to streams of symbols in the field of formal languages and abstract machines, one can specify properties of a component's message stream such that they form a language. Unit testing a component thus becomes the description of an automaton that recognizes the specified language. We propose a platform-independent, language-recognition approach to creating unit testing frameworks for describing and verifying the behavior of message-passing components, and use this approach to create a prototype implementation for the Kompics component model. We show that this approach can be used to perform both black-box and white-box testing of components, and that it is easy to work with while preventing common mistakes in practice.
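A minimal sketch of the underlying idea (not the Kompics prototype's API): the test is an automaton over the observed message stream, and the component passes if its stream is accepted. The message names and the accepted pattern are assumptions made for the example.

    import java.util.List;

    // Illustrative sketch: a unit-test assertion as an automaton over a message stream.
    public class MessageStreamAutomaton {

        enum State { START, REQUESTED, DONE, REJECT }

        /** Accepts streams matching "request, then zero or more retries, then response". */
        static boolean accepts(List<String> stream) {
            State s = State.START;
            for (String msg : stream) {
                switch (s) {
                    case START:     s = msg.equals("request") ? State.REQUESTED : State.REJECT; break;
                    case REQUESTED: s = msg.equals("retry") ? State.REQUESTED
                                      : msg.equals("response") ? State.DONE : State.REJECT; break;
                    default:        s = State.REJECT; // anything after "response" is rejected
                }
            }
            return s == State.DONE;
        }

        public static void main(String[] args) {
            System.out.println(accepts(List.of("request", "retry", "response"))); // true
            System.out.println(accepts(List.of("response")));                     // false
        }
    }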
APA, Harvard, Vancouver, ISO, and other styles
40

Cetin, Cagri. "Design, Testing and Implementation of a New Authentication Method Using Multiple Devices." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5660.

Full text
Abstract:
Authentication protocols are very common mechanisms to confirm the legitimacy of someone's or something's identity in digital and physical systems. This thesis presents a new and robust authentication method based on users' multiple devices. Due to the popularity of mobile devices, users are becoming more likely to have more than one device (e.g., smartwatch, smartphone, laptop, tablet, smart-car, smart-ring, etc.). The authentication system presented here takes advantage of these multiple devices to implement authentication mechanisms. In particular, the system requires the devices to collaborate with each other in order for the authentication to succeed. This new authentication protocol is robust against theft-based attacks on a single device; an attacker would need to steal multiple devices in order to compromise the authentication system. The new authentication protocol comprises an authenticator and at least two user devices, where the user devices are associated with each other. To perform an authentication on a user device, the user needs to respond to a challenge by using his/her associated device. After describing how this authentication protocol works, this thesis discusses three different versions of the protocol that have been implemented. In the first implementation, the authentication process is performed using two smartphones, and a QR code is used as the challenge. In the second implementation, NFC technology is used for challenge transmission instead of a QR code. The last implementation demonstrates usability across different platforms: instead of two smartphones, a laptop computer and a smartphone are used in combination. Furthermore, the authentication protocol has been verified using an automated protocol-verification tool to check whether it satisfies authenticity and secrecy properties. Finally, these implementations are tested and analyzed to demonstrate the performance variations over the different versions of the protocol.
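The following is a hedged sketch of the general challenge-response idea described above, not the thesis protocol itself: the authenticator issues a fresh challenge (which could be shown as a QR code or sent over NFC), and only the associated second device, which holds the key, can produce the expected response. The key material, class names and HMAC construction are assumptions made for the illustration.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Arrays;
    import java.util.Base64;

    // Simplified sketch of a two-device challenge-response round.
    public class MultiDeviceAuth {
        static byte[] hmac(byte[] key, byte[] data) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data);
        }

        public static void main(String[] args) throws Exception {
            byte[] secondDeviceKey = "demo-key-shared-with-authenticator".getBytes(StandardCharsets.UTF_8);

            // Authenticator: issue a fresh challenge (e.g. displayed as a QR code or sent over NFC).
            byte[] challenge = new byte[16];
            new SecureRandom().nextBytes(challenge);

            // Associated second device: scan the challenge and compute the response.
            byte[] response = hmac(secondDeviceKey, challenge);

            // Authenticator: recompute and compare; both devices were needed to succeed.
            boolean authenticated = Arrays.equals(response, hmac(secondDeviceKey, challenge));
            System.out.println("authenticated = " + authenticated
                + ", challenge = " + Base64.getEncoder().encodeToString(challenge));
        }
    }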
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Zilong [Verfasser], and Rupak [Akademischer Betreuer] Majumdar. "Algorithms and Tools for Verification and Testing of Asynchronous Programs / Zilong Wang. Betreuer: Rupak Majumdar." Kaiserslautern : Technische Universität Kaiserslautern, 2016. http://d-nb.info/1096220946/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chantatub, Wachara. "The integration of software specification, verification, and testing techniques with software requirements and design processes." Thesis, University of Sheffield, 1995. http://etheses.whiterose.ac.uk/1850/.

Full text
Abstract:
Specifying, verifying, and testing software requirements and design are very important tasks in the software development process and must be taken seriously. By investing more up-front effort in these tasks, software projects gain the benefits of reduced maintenance costs, higher software reliability, and more user-responsive software. However, many individuals involved in these tasks still find that the available techniques are either too difficult and far from practical or, if not difficult, inadequate for the tasks. This thesis proposes practical and capable techniques for specifying and verifying software requirements and design and for generating test requirements for acceptance and system testing. The proposed software requirements and design specification techniques emerge from integrating three categories of software specification languages, namely an informal specification language (e.g. English), semiformal specification languages (Entity-Relationship Diagrams, Data Flow Diagrams, and Data Structure Diagrams), and a formal specification language (Z with an extended subset). The four specification languages mentioned above are used to specify both software requirements and design. Both the software requirements and the design of a system are defined graphically in Entity-Relationship Diagrams, Data Flow Diagrams, and Data Structure Diagrams, and defined formally in Z specifications. The proposed software requirements and design verification techniques are a combination of informal and formal proofs. The informal proofs are applied to check the consistency of the semiformal specification and to check the consistency, correctness, and completeness of the formal specification against the semiformal specification. The formal proofs are applied to mathematically prove the consistency of the formal specification. Finally, the proposed technique for generating test requirements for acceptance and system testing from the formal requirements specification is presented. Two sets of test requirements are generated: test requirements for testing the critical requirements, and test requirements for testing the operations of the system.
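To fix ideas about the formal layer, the fragment below shows the general flavour of a Z specification of the kind described (typeset with a Z LaTeX style such as fuzz or zed-csp); the given sets, schema names and operation are invented for the illustration and do not come from the thesis.

    \begin{zed}
      [PERSON, BOOK]
    \end{zed}

    \begin{schema}{Library}
      members : \power PERSON \\
      loans : PERSON \pfun \power BOOK
    \where
      \dom loans \subseteq members
    \end{schema}

    \begin{schema}{Borrow}
      \Delta Library \\
      p? : PERSON \\
      b? : BOOK
    \where
      p? \in \dom loans \\
      members' = members \\
      loans' = loans \oplus \{ p? \mapsto loans~p? \cup \{ b? \} \}
    \end{schema}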
APA, Harvard, Vancouver, ISO, and other styles
43

Onyango, Mbakisya A. "Verification of mechanistic prediction models for permanent deformation in asphalt mixes using accelerated pavement testing." Diss., Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tan, Kaige. "Building verification database and extracting critical scenarios for self-driving car testing on virtual platform." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263927.

Full text
Abstract:
This degree project, conducted at Volvo Cars, investigates how to build a test database for an Autonomous Driving (AD) function on a virtual platform and how to extract critical scenarios from the test database in order to reduce the number of test cases through optimization. The virtual platform under study is the model-in-the-loop (MIL) based Simulation Platform Active Safety (SPAS) environment, and the optimization tool used is modeFrontier. To build the test database, the project follows an analysis process in which three levels of abstraction for scenarios are proposed in order to fulfill all requirements for an AD function. The approach is applied to transform requirements from a specific Operational Design Domain (ODD) and their linguistic representation into a test suite that contains concrete scenarios and test cases. A meta-model is built to help analyze the system structure and parameter requirements at the level of logical scenarios. The practicability of a scenario-based approach to AD function test case generation is demonstrated with the example of building the Traffic Congestion Support (TCS) test database. Obtaining the test database and successfully analyzing the parameters of the TCS function on the MIL platform lead to the main goal of the thesis project: finding edge cases in the test database by optimizing objective functions. After defining the objective functions and building the workflow in modeFrontier, the optimization process is implemented with two different algorithms. pilOPT is evaluated as a better solution for the AD function than Multi-Objective Simulated Annealing (MOSA) in terms of computational time and edge-case finding. In addition, a noise model is added to the ideal sensor model in SPAS to study the influence of noise on a real test track. The results show a large difference in the Time-to-Collision value, which is a defined objective function in the project. This indicates that more test cases degenerate into critical scenarios when noise is taken into consideration, showing that the influence of noise cannot be neglected during testing.
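The Time-to-Collision objective mentioned above is conventionally defined as the remaining gap divided by the closing speed; the formulation below is the customary textbook definition, written here for orientation rather than taken from the thesis.

    \[
      \mathrm{TTC}(t) \;=\; \frac{x_{\text{lead}}(t) - x_{\text{ego}}(t) - \ell_{\text{lead}}}
                                 {v_{\text{ego}}(t) - v_{\text{lead}}(t)},
      \qquad v_{\text{ego}}(t) > v_{\text{lead}}(t),
    \]

Minimising such an objective over the scenario parameters is one natural way to steer the search toward the critical edge cases the project looks for.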
APA, Harvard, Vancouver, ISO, and other styles
45

Lansing, Eric. "Verification of Polymeric Material Change in the Air Intake System." Thesis, KTH, Skolan för kemivetenskap (CHE), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-213012.

Full text
Abstract:
The air intake manifold is an integral part of modern internal combustion engines. It is currently manufactured in glass-fibre-reinforced PA66, and inquiries have been raised regarding a change of material to glass-fibre-reinforced PP. The proposed material is evaluated for a new engine project. The thermochemical environment in the air intake system puts high demands on the material. Ageing treatments and tensile testing were conducted on samples of the new material, as well as on the currently used PA66, to evaluate the mechanical response of each material to treatments designed to simulate the air intake environment. Furthermore, understanding of the chemical environment is lacking and needs to be studied, so experiments were performed to study the chemistry of the intake environment. The results indicated that PP can retain sufficient mechanical rigidity and strength when subjected to parameters made to simulate the air intake. The results regarding the chemical environment in the air intake system, however, provided only limited information.
APA, Harvard, Vancouver, ISO, and other styles
46

Lelli, leitao Valeria. "Testing and maintenance of graphical user interfaces." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0022/document.

Full text
Abstract:
The software engineering community has, from its beginnings, paid special attention to the quality and reliability of software systems. Software testing techniques have been developed to characterize and detect errors in code; fault models identify and characterize the errors that can affect the different parts of a piece of software. Software quality criteria and measurement techniques have also been developed to assess the quality of code and to detect error-prone code early, using static and dynamic analyses of the software at rest and at run time. In this thesis, we argue that the same attention has to be paid to the quality and reliability of graphical user interfaces (GUIs), from a software engineering point of view. The thesis therefore makes two contributions on this topic: the classification and mutation of GUI errors, and the quality of GUI code. First, GUIs can be affected by errors stemming from development mistakes. The first contribution of this thesis is a fault model, built from standard GUI concepts, that identifies and classifies GUI faults. We show that GUI faults are diverse and imply different testing techniques to be detected. Second, like any code artifact, GUI code should be analyzed statically to detect implementation defects and design smells. As the second contribution, we focus on design smells that can affect GUIs specifically. Through an empirical study of existing Java code, we identify and characterize a new type of design smell, called Blob listener. It occurs when a GUI listener, which gathers events to treat and transform into commands, can produce more than one command. We propose a systematic static code analysis procedure that searches for Blob listeners in Java Swing interface code, implemented in a tool called InspectorGuidget. The experiments we conducted show positive results regarding the ability of InspectorGuidget to detect Blob listeners. To counteract the use of Blob listeners, we also propose good coding practices for the development of GUI listeners.
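The sketch below illustrates the kind of Swing code the Blob listener smell refers to, as characterised above: a single listener registered on several widgets that branches on the event source to produce several distinct commands. The widget names and commands are invented for the example.

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;

    // Illustrative Blob listener: one listener, several widgets, several commands.
    public class SaveToolbar implements ActionListener {
        private final JButton saveButton = new JButton("Save");
        private final JButton printButton = new JButton("Print");
        private final JButton quitButton = new JButton("Quit");

        public SaveToolbar() {
            saveButton.addActionListener(this);
            printButton.addActionListener(this);
            quitButton.addActionListener(this);
        }

        @Override
        public void actionPerformed(ActionEvent e) {
            Object src = e.getSource();
            if (src == saveButton) {         // command 1
                System.out.println("saving document");
            } else if (src == printButton) { // command 2
                System.out.println("printing document");
            } else if (src == quitButton) {  // command 3
                System.exit(0);
            }
        }
    }

In line with the good practices the abstract alludes to, one would expect the remedy to be the opposite shape: one listener (or lambda) per widget, each producing exactly one command.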
APA, Harvard, Vancouver, ISO, and other styles
47

Shultz, Jacque. "Authenticating turbocharger performance utilizing ASME performance test code correction methods." Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/8451.

Full text
Abstract:
Master of Science
Department of Mechanical and Nuclear Engineering
Kirby S. Chapman
Continued regulatory pressure necessitates the use of precisely designed turbochargers to create the design trapped equivalence ratio within the large-bore stationary engines used in the natural gas transmission industry. The upgraded turbochargers scavenge the exhaust gases from the cylinder and create the air manifold pressure and back pressure on the engine necessary to achieve a specific trapped mass. This combination serves to achieve the emissions reduction required by regulatory agencies. Many engine owner/operators request that an upgraded turbocharger be tested and verified prior to re-installation on the engine. Verification of the mechanical integrity and airflow performance prior to engine installation is necessary to prevent field hardware iterations, and confirming the as-built turbocharger design specification before transporting it to the field can decrease downtime and installation costs. There are, however, technical challenges to overcome in comparing test-cell data to field conditions. This thesis discusses the corrections and testing methodology required to verify turbocharger on-site performance from data collected in a precisely designed testing apparatus. As the litmus test of the testing system, test performance data is corrected to site conditions per the design air specification. Prior to field installation, the turbocharger is fitted with instrumentation to collect field operating data to authenticate the turbocharger testing system and correction methods. The correction method utilized herein is the ASME Performance Test Code 10 (PTC 10) for Compressors and Exhausters, 1997 version.
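For orientation only (this is a generic compressor similarity argument, not the PTC 10 procedure itself, which works with dimensionless parameters and allowable departure limits): test-cell data are commonly referred to the specified inlet condition through corrected flow and corrected speed, for example

    \[
      \dot m_{\mathrm{corr}} \;=\; \dot m \,
        \frac{\sqrt{T_{\mathrm{in}}/T_{\mathrm{ref}}}}{p_{\mathrm{in}}/p_{\mathrm{ref}}},
      \qquad
      N_{\mathrm{corr}} \;=\; \frac{N}{\sqrt{T_{\mathrm{in}}/T_{\mathrm{ref}}}},
    \]

so that, at matched corrected flow and corrected speed, the pressure ratio and efficiency measured in the test cell can be read across to the site condition.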
APA, Harvard, Vancouver, ISO, and other styles
48

Yilmaz, Levent. "Specifying and Verifying Collaborative Behavior in Component-Based Systems." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/26494.

Full text
Abstract:
In a parameterized collaboration design, one views software as a collection of components that play specific roles in interacting, giving rise to collaborative behavior. From this perspective, collaboration designs revolve around reusing collaborations that typify certain design patterns. Unfortunately, verifying that active, concurrently executing components obey the synchronization and communication requirements needed for the collaboration to work is a serious problem. At least two major complications arise in concurrent settings: (1) it may not be possible to analytically identify components that violate the synchronization constraints required by a collaboration, and (2) evolving participants in a collaboration independently often gives rise to unanticipated synchronization conflicts. This work presents a solution technique that addresses both of these problems. Local (that is, role-to-role) synchronization consistency conditions are formalized and associated decidable inference mechanisms are developed to determine mutual compatibility and safe refinement of synchronization behavior. More specifically, given generic parameterized collaborations and components with specific roles, mutual compatibility analysis verifies that the provided and required synchronization models are consistent and integrate correctly. Safe refinement, on the other hand, guarantees that the local synchronization behavior is maintained consistently as the roles and the collaboration are refined during development. This form of local consistency is necessary, but insufficient to guarantee a consistent collaboration overall. As a result, a new notion of global consistency (that is, among multiple components playing multiple roles) is introduced: causal process constraint analysis. A method for capturing, constraining, and analyzing global causal processes, which arise due to causal interference and interaction of components, is presented. Principally, the method allows one to: (1) represent the intended causal processes in terms of interactions depicted in UML collaboration graphs; (2) formulate constraints on such interactions and their evolution; and (3) check that the causal process constraints are satisfied by the observed behavior of the component(s) at run-time.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Mangels, Tatiana [Verfasser], Jan [Akademischer Betreuer] Peleska, and Rolf [Akademischer Betreuer] Drechsler. "Integrated Module Testing and Module Verification / Tatiana Mangels. Gutachter: Jan Peleska ; Rolf Drechsler. Betreuer: Jan Peleska." Bremen : Staats- und Universitätsbibliothek Bremen, 2013. http://d-nb.info/1072078864/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Bolien, Mario. "Hybrid testing of an aerial refuelling drogue." Thesis, University of Bath, 2018. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.761036.

Full text
Abstract:
Hybrid testing is an emerging technique for system emulation that uses a transfer system composed of actuators and sensors to couple physical tests of a critical component or substructure to a numerical simulation of the remainder of a system and its complete operating environment. The realisation of modern real-time hybrid tests for multi-body contact-impact problems often proves infeasible due to (i) hardware with bandwidth limitations and (ii) the unavailability of control schemes that provide satisfactory force and position tracking in the presence of sharp non-linearities or discontinuities. Where this is the case, the possibility of employing a pseudo-dynamic technique remains, enabling tests to be conducted on an enlarged time scale, thus relaxing both bandwidth and response-time constraints and providing inherent loop stability. Exploiting the pseudo-dynamic technique, this thesis presents the development of Robotic Pseudo-Dynamic Testing (RPsDT), a dedicated method that specifically targets the realisation of hybrid tests for multi-body contact-impact problems using commercial off-the-shelf (COTS) industrial robotic manipulators. The RPsDT method is evaluated in on-ground studies of air-to-air refuelling (AAR) maneuvers with probe-hose-drogue systems, where the critical contact and coupling phase is tested pseudo-dynamically with full-scale refuelling hardware while the flight regime is emulated in simulation. It is shown that the RPsDT method can faithfully reproduce the dominant contact-impact phenomena between probe and drogue, while minor discrepancies result from the absence of rate-dependent damping in the force feedback measurements. In combination with full-speed robot-controlled contact tests, reliable estimates for impact forces, strain distributions and drogue responses to off-centre hits are obtained, providing extensive improvements over current predictive capabilities for the in-flight behaviour of refuelling hardware, and it is concluded that the technique shows great promise for industrial applications.
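To make the pseudo-dynamic idea concrete, here is a hedged sketch of the classic stepping loop (a generic textbook formulation, not the thesis rig software): the equation of motion is integrated numerically on an enlarged time scale, each computed displacement is commanded quasi-statically to the physical substructure, and the measured restoring force is fed back into the next step. The mass, stiffness, load and class names are assumptions for the example.

    // Minimal pseudo-dynamic stepping loop (undamped central-difference scheme).
    public class PseudoDynamicLoop {

        /** Stand-in for the robot/transfer system: command x, read back the restoring force. */
        static double commandAndMeasure(double displacement) {
            double stiffness = 5.0e4;        // N/m, placeholder for the real specimen
            return stiffness * displacement; // measured restoring force
        }

        public static void main(String[] args) {
            double m = 20.0, dt = 0.01;      // mass (kg) and numerical time step (s)
            double xPrev = 0.0, x = 0.0;
            for (int n = 0; n < 5; n++) {
                double fExt = (n == 0) ? 500.0 : 0.0; // external load from the numerical model
                double r = commandAndMeasure(x);      // applied quasi-statically, so rate effects are lost
                double xNext = 2 * x - xPrev + dt * dt * (fExt - r) / m; // central-difference step
                System.out.printf("step %d: x = %.6f m, r = %.1f N%n", n, x, r);
                xPrev = x;
                x = xNext;
            }
        }
    }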
APA, Harvard, Vancouver, ISO, and other styles