Dissertations / Theses on the topic 'Software testing, verification and validation'

Consult the top 50 dissertations / theses for your research on the topic 'Software testing, verification and validation.'


1

Tekin, Yasar. "An Automated Tool For Requirements Verification." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605401/index.pdf.

Abstract:
In today's world, only those software organizations that consistently produce high-quality products can succeed. This situation enforces the effective usage of defect prevention and detection techniques. One of the most effective defect detection techniques used in the software development life cycle is verification of software requirements, applied at the end of the requirements engineering phase. If the existing verification techniques can be automated to meet today's work environment needs, their effectiveness can be increased. This study focuses on the development and implementation of an automated tool for verifying software requirements modeled in Aris eEPC and Organizational Chart for automatically detectable defects. The application of reading techniques on a project, and a comparison of the results of manual and automated verification techniques applied to the same project, are also discussed.
2

Imanian, James A. "Automated test case generation for reactive software systems based on environment models." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FImanian.pdf.

Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2005.
Thesis Advisor(s): Mikhail Auguston, James B. Michael. Includes bibliographical references (p. 55-56). Also available online.
3

Natraj, Shailendra. "An Empirical Evaluation & Comparison of Effectiveness & Efficiency of Fault Detection Testing Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4047.

Abstract:
Context: This thesis analyses a replication of a software experiment conducted by Natalia and Sira at the Technical University of Madrid, Spain. The empirical study was conducted to verify and validate the experimental data and to evaluate the effectiveness and efficiency of the testing techniques. The blocks considered for the analysis were observable faults, failure visibility and observed faults. The statistical data analysis used the ANOVA and classification-tree packages of SPSS. Objective: To evaluate and compare the results obtained from the statistical data analysis, and to verify and validate the effectiveness and efficiency of the testing techniques by applying ANOVA and classification-tree analysis to the percentage of subjects, percentage of defect-subjects and (yes/no) values for each of the blocks. RQ1: Empirical evaluation of the effectiveness of fault detection testing techniques for the blocks (observable faults, failure visibility and observed faults) using ANOVA and classification trees. RQ2: Empirical evaluation of the efficiency of fault detection techniques, based on time and number of test cases, using ANOVA. RQ3: Comparison and interpretation of the results obtained for both effectiveness and efficiency. Method: Statistical data analysis to empirically evaluate the effectiveness and efficiency of the fault detection techniques for the experimental data collected at UPM (Technical University of Madrid, Spain). Empirical strategy used: software experiment. Results: The analysis results for the observable fault types were standardized (Ch. 5). Within the observable fault block, both techniques, functional and structural, were equally effective. In the failure visibility block, the results were partially standardized; the program types nametbl and ntree were more effective for fault detection than cmdline. The results for the observed fault block were partially standardized and diverse; the significant factors in this block were program type, fault type and technique. In the efficiency block, subjects took less time to isolate faults in the program type cmdline, and fault detection with the generated test cases was also most efficient for cmdline. Conclusion: This research helps practitioners in industry and academia understand the factors influencing the effectiveness and efficiency of testing techniques. The work also presents a comprehensive analysis and comparison of the results for the blocks observable faults, failure visibility and observed faults, and discusses the factors influencing the efficiency of the fault detection techniques.
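As a rough illustration of the kind of analysis the thesis describes, the sketch below runs a one-way ANOVA over invented effectiveness scores for the two techniques; the data, and the use of Python's scipy in place of SPSS, are assumptions made here for illustration only.

```python
# Hypothetical data: percentage of seeded faults each subject detected
# with the functional and the structural technique (invented numbers,
# not the UPM experiment data).
from scipy import stats

functional = [72, 65, 80, 58, 77, 69]
structural = [70, 63, 79, 61, 74, 66]

f_stat, p_value = stats.f_oneway(functional, structural)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A large p-value (no significant difference) would mirror the finding
# that both techniques are equally effective for observable faults.
```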
4

Cong, Kai. "Post-silicon Functional Validation with Virtual Prototypes." Thesis, Portland State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3712209.

Abstract:

Post-silicon validation has become a critical stage in the system-on-chip (SoC) development cycle, driven by increasing design complexity, higher levels of integration and decreasing time-to-market. According to recent reports, post-silicon validation effort comprises more than 50% of the overall development effort of a 65nm SoC. Though post-silicon validation covers many aspects ranging from electronic properties of hardware to performance and power consumption of whole systems, a central task remains validating the functional correctness of both hardware and its integration with software. There are several key challenges to achieving accelerated and low-cost post-silicon functional validation. First, there is only limited silicon observability and controllability; second, there is no good test coverage estimation over a silicon device; third, it is difficult to generate good post-silicon tests before a silicon device is available; fourth, there are no effective software robustness testing approaches to ensure the quality of hardware/software integration.

We propose a systematic approach to accelerating post-silicon functional validation with virtual prototypes. Post-silicon test coverage is estimated in the pre-silicon stage by evaluating the test cases on the virtual prototypes. Such analysis is first conducted on the initial test suite assembled by the user and subsequently on the expanded test suite which includes test cases that are automatically generated. Based on the coverage statistics of the initial test suite on the virtual prototypes, test cases are automatically generated to improve the test coverage. In the post-silicon stage, our approach supports coverage evaluation of test cases on silicon devices to ensure fidelity of early coverage evaluation. The generated test cases are issued to silicon devices to detect inconsistencies between virtual prototypes and silicon devices using conformance checking. We further extend the test case generation framework to generate and inject fault scenario with virtual prototypes for driver robustness testing. Besides virtual prototype-based fault injection, an automatic driver fault injection approach is developed to support runtime fault generation and injection for driver robustness testing. Since virtual prototype enables early driver development, our automatic driver fault injection approach can be applied to driver testing in both pre-silicon and post-silicon stages.
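To make the conformance-checking idea concrete, here is a toy sketch, not the dissertation's framework: the same generated tests are replayed on two device models and any divergence in observable state is flagged. The DeviceModel class and its register semantics are invented stand-ins for a virtual prototype and a silicon harness.

```python
# DeviceModel stands in for both a virtual prototype and a silicon
# test harness; the register map and the deliberate behavioural
# "quirk" are invented for illustration.
class DeviceModel:
    def __init__(self, quirk=0):
        self.quirk = quirk
        self.regs = {}

    def reset(self):
        self.regs = {"CTRL": 0, "STATUS": 0}

    def apply(self, op):
        reg, value = op
        self.regs[reg] = (value + self.quirk) & 0xFF  # toy semantics

    def snapshot(self):
        return dict(self.regs)

def check_conformance(virtual, silicon, tests):
    """Replay each test on both models and flag differing end states."""
    inconsistent = []
    for test in tests:
        for device in (virtual, silicon):
            device.reset()
            for op in test:
                device.apply(op)
        if virtual.snapshot() != silicon.snapshot():
            inconsistent.append(test)  # candidate defect in one model
    return inconsistent

tests = [[("CTRL", 1)], [("CTRL", 1), ("STATUS", 7)]]
print(check_conformance(DeviceModel(), DeviceModel(quirk=1), tests))
```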

For preliminary evaluation, we have applied our coverage evaluation and test generation to several network adapters and their virtual prototypes. We have conducted coverage analysis for a suite of common tests on both the virtual prototypes and silicon devices. The results show that our approach can estimate the test coverage with high fidelity. Based on the coverage estimation, we have employed our automatic test generation approach to generate additional tests. When the generated test cases were issued to both virtual prototypes and silicon devices, we observed significant coverage improvement, and we detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a virtual prototype or silicon device defect. After applying the virtual prototype-based fault injection approach to the virtual prototypes of three widely used network adapters, we generated and injected thousands of fault scenarios and found 2 driver bugs. For automatic driver fault injection, we have applied our approach to 12 widely used drivers with either virtual prototypes or silicon devices. After testing all these drivers, we found 28 distinct bugs.

5

De, Sousa Barroca José Duarte. "Verification and validation of knowledge-based clinical decision support systems - a practical approach : A descriptive case study at Cambio CDS." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104935.

Abstract:
The use of clinical decision support (CDS) systems has grown progressively during the past decades. CDS systems are associated with improved patient safety and outcomes, better prescription and diagnosing practices by clinicians and lower healthcare costs. Quality assurance of these systems is critical, given the potentially severe consequences of any errors. Yet, after several decades of research, there is still no consensual or standardized approach to their verification and validation (V&V). This project is a descriptive and exploratory case study aiming to provide a practical description of how Cambio CDS, a market-leading developer of CDS services, conducts its V&V process. Qualitative methods including semi-structured interviews and coding-based textual data analysis were used to elicit the description of the V&V approaches used by the company. The results showed that the company’s V&V methodology is strongly influenced by the company’s model-driven development approach, a strong focus and leveraging of domain knowledge and good testing practices with a focus on automation and test-driven development. A few suggestions for future directions were discussed.
6

Lacerda, Jésus Thiago Sousa. "Investigação de operadores essenciais de mutação para programas orientados a aspectos." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/585.

Abstract:
Context: The literature on software testing reports on the application of the Mutation Analysis criterion, or mutation testing, as a promising approach for revealing faults in aspect-oriented (AO) programs. However, it is widely known that this criterion is highly costly due to the large number of generated mutants and the effort required to identify equivalent mutants. We highlight that the little existing research on mutation testing for AO programs gives little attention to cost reduction strategies. Objective: this work aims at investigating cost reduction for mutation testing of AO programs. In particular, we intend to reduce the cost of mutation testing by identifying a reduced set of mutation operators that is capable of keeping the effectiveness of the criterion in guaranteeing the quality of the designed test sets. Method: to achieve this goal, we applied an approach called the Sufficient Procedure, which yields sufficient sets of mutation operators. Test sets that are adequate with respect to the mutants produced by sufficient operators are able to reveal the majority of the faults simulated by the whole set of mutants. Results: by applying the Sufficient Procedure, we obtained substantial cost reductions for three groups of AO programs. The cost reductions in the experiments range from 52% to 62%. The final mutation scores yielded by test sets that are adequate to the mutants produced by the sufficient operators range from 92% to 94%. Conclusion: with the achieved results, we conclude that it is possible to reduce the cost of mutation testing applied to AO programs without significant losses in the capacity of revealing prespecified fault types. The Sufficient Procedure has shown itself able to support cost reduction while maintaining the effectiveness of the criterion.
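The arithmetic behind these figures can be shown in a minimal sketch; the mutant and kill counts below are invented, chosen only to land near the reported 52-62% cost reduction and 92-94% mutation scores.

```python
# Invented kill counts, not the thesis data.
def mutation_score(killed, generated, equivalent):
    return killed / (generated - equivalent)

full_set = {"generated": 1000, "equivalent": 120}
sufficient = {"generated": 430}

# Cost here is approximated by the number of mutants to examine.
cost_reduction = 1 - sufficient["generated"] / full_set["generated"]

# Tests adequate for the sufficient operators kill 820 of the 880
# non-equivalent mutants of the FULL operator set (invented numbers).
score_on_full_set = mutation_score(820, full_set["generated"],
                                   full_set["equivalent"])

print(f"cost reduction:    {cost_reduction:.0%}")     # 57%
print(f"score on full set: {score_on_full_set:.0%}")  # 93%
```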
7

Addy, Edward A. "Verification and validation in software product line engineering." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=1068.

Abstract:
Thesis (Ph. D.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains vi, 75 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 35-39).
8

von, Spakovsky Alexis P., Reffela Davidson, Ashley Mathis, and David Patterson. "Software Independent Verification and Validation (SIVandV) simplified." Monterey, California. Naval Postgraduate School, 2006. http://hdl.handle.net/10945/10063.

Abstract:
Joint Applied Project
SIV&V has been in existence for some 40 years, and many people still know little about its existence. Software IV&V certifies the quality of the software and independently validates and verifies that it meets or exceeds the customer's expectations. Independent V&V for component or element software development activities encompasses the following: 1) review and thorough evaluation of the software development, 2) review of and comment on software documentation, 3) participation in all software requirements and design reviews, and 4) participation in software integration and testing for each software build. This thesis explores and explains the benefits and rationale for Software Independent Verification and Validation. It identifies SIV&V processes that are used to support the acquisition of weapon systems. "SIV&V Simplified" translates, into understandable terms, why SIV&V is considered "cheap insurance" and why it is needed. Additionally, this thesis serves as a tutorial, providing suggested policy and guidance, suggested Computer-Aided Software Engineering (CASE) tools, criteria, and lessons learned for implementing a successful SIV&V program.
9

Arno, Matthew G. (Matthew Gordon). "Verification and validation of safety related software." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/33517.

10

Argote, Garcia Gonzalo. "Formal verification and testing of software architectural models." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/1308.

Abstract:
Ensuring the correctness of software has been a major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in past years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined which includes the evaluation of test cases, based on Petri net testing theory, to be used in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
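For readers unfamiliar with what a model checker such as Spin does under the hood, the toy sketch below shows the simplest flavour of it: exhaustive breadth-first exploration of a transition system while checking a safety property. Real Spin verifies temporal-logic properties over Promela models; the model and property here are invented.

```python
# Toy transition system: a buffer that can grow to 3 items; the
# safety property (invariant) forbids more than 2.
from collections import deque

def reachable_violations(initial, transitions, invariant):
    seen, frontier, violations = {initial}, deque([initial]), []
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            violations.append(state)
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return violations

def transitions(n):                      # add or remove one item
    return [m for m in (n + 1, n - 1) if 0 <= m <= 3]

print(reachable_violations(0, transitions, lambda n: n <= 2))  # [3]
```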
11

Clutterbuck, D. L. "The validation and verification of low-level code." Thesis, University of Southampton, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373570.

12

Ibrahim, Alaa E. "Scenario-based verification and validation of dynamic UML specifications." Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=1799.

Abstract:
Thesis (M.S.)--West Virginia University, 2001.
Title from document title page. Document formatted into pages; contains x, 143 p. : ill. (some col.). Vita. Includes abstract. Includes bibliographical references (p. 96-99).
13

Härkönen, J. (Janne). "Improving product development process through verification and validation." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291661.

Abstract:
The workload of verification and validation (V&V) has increased constantly in the high-technology industries. The changes in the business environment, with fast time-to-market and demands to decrease research and development costs, have increased the importance of an efficient product creation process, including V&V. The significance of V&V-related know-how and testing is increasing in the high-tech business environment. As a consequence, companies in the ICT sector face pressure to improve their product development process and their verification and validation activities. The main motive for this research arises from the fact that research on verification and validation from the product development process perspective has been scarce. This study approaches the above-mentioned goal from four perspectives: current challenges and success factors, V&V maturity in different NPD phases, benchmarking the automotive sector, and shifting the emphasis of NPD efforts. This dissertation is qualitative in nature and is based on interviews with experienced industrial managers, reflecting their views against the scientific literature. The researcher has analysed the obtained material and drawn conclusions. The main implication of this doctoral dissertation is a visible need to shift the emphasis of V&V activities to early NPD. These activities should be viewed and managed over the entire NPD process. Companies need to understand the V&V maturity of different NPD phases and develop activities based on this understanding. Verification and validation activities must be seen as an integral element of successful NPD. Benchmarking other sectors may enable identifying development potential for the NPD process. The automotive sector, being a mature sector, has developed practices for successfully handling requirements during NPD. The role of V&V is different in different NPD phases. Set-based V&V can provide the required understanding during early product development. In addition, developing parallel technological alternatives and platforms during early NPD also supports shifting the emphasis towards earlier development phases.
14

Woo, Yan, and 胡昕. "A dynamic integrity verification scheme for tamper-resistance software." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B34740478.

15

SAYAO, MIRIAM. "REQUIREMENTS VERIFICATION AND VALIDATION: NATURAL LANGUAGE PROCESSING AND SOFTWARE AGENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10927@1.

Abstract:
In the software development process, initial activities can involve requirements elicitation, modeling and analysis (verification and validation). The use of natural language in the recording of requirements facilitates communication among stakeholders, besides offering customers and users the possibility of validating requirements without extra knowledge. On the other hand, in the current global economy, software development by geographically distributed teams is becoming the rule. In this scenario, requirements verification and validation for medium- or high-complexity software can involve the treatment of hundreds or even thousands of requirements. At this order of complexity it is important to provide computational support so that the software engineer can carry out quality activities. In this work we propose a strategy which combines natural language processing (NLP) techniques and software agents to support analysis activities. We generate textual or graphical views from groups of related requirements; these views support completeness analysis and the identification of duplications and dependencies among requirements. We use content analysis techniques to support the identification of omissions in non-functional requirements. Also, we propose a strategy to construct the lexicon, using NLP techniques. We use software agents to implement web services that incorporate these strategies, and also agents that act as personal assistants for the stakeholders of the software project.
16

Sayre, David B. "A Runtime Verification and Validation Framework for Self-Adaptive Software." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/1000.

Abstract:
The concepts that make self-adaptive software attractive also make it more difficult for users to gain confidence that these systems will consistently meet their goals under uncertain context. To improve user confidence in self-adaptive behavior, machine-readable conceptual models have been developed to instrument the adaptation behavior of the target software system and primary feedback loop. By comparing these machine-readable models to the self-adaptive system, runtime verification and validation may be introduced as another method to increase confidence in self-adaptive systems; however, the existing conceptual models do not provide the semantics needed to institute this runtime verification or validation. This research confirms that the introduction of runtime verification and validation for self-adaptive systems requires the expansion of existing conceptual models with quality-of-service metrics, a hierarchy of goals, and states with temporal transitions. Based on these expanded semantics, runtime verification and validation was introduced as a second-level feedback loop to improve the performance of the primary feedback loop and quantitatively measure the quality of service achieved in a state-based, self-adaptive system. A web-based purchasing application running in a cloud-based environment was the focus of experimentation. In order to meet changing customer purchasing demand, the self-adaptive system monitored external context changes and increased or decreased available application servers. The runtime verification and validation system operated as a second-level feedback loop to monitor quality-of-service goals based on internal context, and corrected self-adaptive behavior when goals were violated. Two competing quality-of-service goals were introduced to maintain customer satisfaction while minimizing cost. The research demonstrated that the addition of a second-level runtime verification and validation feedback loop did quantitatively improve self-adaptive system performance even with simple, static monitoring rules.
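A minimal sketch of such a second-level loop follows, assuming an invented latency metric and invented scale-out/scale-in thresholds that stand in for the two competing goals (customer satisfaction versus cost):

```python
# All names and numbers are illustrative, not the dissertation's system.
def second_level_loop(latency_samples, goal_ms=200.0,
                      min_servers=1, max_servers=10):
    servers, corrections = min_servers, []
    for latency in latency_samples:          # internal-context monitor
        if latency > goal_ms and servers < max_servers:
            servers += 1                     # goal violated: scale out
            corrections.append(("scale_out", servers))
        elif latency < goal_ms / 2 and servers > min_servers:
            servers -= 1                     # cost goal: scale back in
            corrections.append(("scale_in", servers))
    return corrections

print(second_level_loop([120.0, 250.0, 310.0, 90.0, 60.0]))
```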
17

Sudol, Alicia. "A methodology for modeling the verification, validation, and testing process for launch vehicles." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54429.

Abstract:
Completing the development process and getting to first flight has become a difficult hurdle for launch vehicles. Program cancellations in the last 30 years were largely due to cost overruns and schedule slips during the design, development, testing and evaluation (DDT&E) process. Unplanned rework cycles that occur during verification, validation, and testing (VVT) phases of development contribute significantly to these overruns, accounting for up to 75% of development cost. Current industry standard VVT planning is largely subjective with no method for evaluating the impact of rework. The goal of this research is to formulate and implement a method that will quantitatively capture the impact of unplanned rework by assessing the reliability, cost, schedule, and risk of VVT activities. First, the fidelity level of each test is defined and the probability of rework between activities is modeled using a dependency structure matrix. Then, a discrete event simulation projects the occurrence of rework cycles and evaluates the impact on reliability, cost, and schedule for a set of VVT activities. Finally, a quadratic risk impact function is used to calculate the risk level of the VVT strategy based on the resulting output distributions. This method is applied to alternative VVT strategies for the Space Shuttle Main Engine to demonstrate how the impact of rework can be mitigated, using the actual test history as a baseline. Results indicate rework cost to be the primary driver in overall project risk, and yield interesting observations regarding the trade-off between the upfront cost of testing and the associated cost of rework. Ultimately, this final application problem demonstrates the merits of this methodology in evaluating VVT strategies and providing a risk-informed decision making framework for the verification, validation, and testing process of launch vehicle systems.
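The flavour of the method can be suggested with a toy Monte Carlo sketch: a single rework probability (one cell of a dependency structure matrix), simulated rework cycles, and a quadratic risk function. All numbers are invented, not the Space Shuttle Main Engine data.

```python
import random

def simulate_vvt(p_rework=0.3, cost_per_cycle=1.0, runs=10_000, seed=1):
    """Monte Carlo projection of the total cost of one VVT activity
    whose rework probability comes from a dependency-matrix cell."""
    random.seed(seed)
    total = 0.0
    for _ in range(runs):
        cost, cycles = cost_per_cycle, 0
        while random.random() < p_rework and cycles < 20:
            cost += cost_per_cycle          # an unplanned rework cycle
            cycles += 1
        total += cost
    return total / runs

def quadratic_risk(cost, budget=1.2):
    overrun = max(0.0, cost - budget)
    return overrun ** 2                     # risk grows quadratically

mean_cost = simulate_vvt()
print(f"expected cost {mean_cost:.2f}, risk {quadratic_risk(mean_cost):.4f}")
```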
18

Alphonce, Magori. "Use of software verification & validation (V&V) techniques for software process improvement." Thesis, University West, Department of Technology, Mathematics and Computer Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-574.

19

Antti, William. "Virtualized Functional Verification of Cross-Platform Software Applications." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74599.

Abstract:
With so many developers writing code, and more choosing to become developers every day, tools that aid the work process are needed. With all the testing being done for multiple different devices and sources, there is a need to make it better and more efficient. In this thesis, connecting the variety of different tools such as version control, project management, issue tracking and test systems is explored as a possible solution. A possible solution was implemented and then analyzed through a questionnaire answered by developers. For example, 75% answered 5 (the highest rating) when asked if they liked the connection between the issue tracking system and the test results, and 75% also gave a 5 when asked whether they liked the way the test results were presented. The answers they gave about the implementation made it possible to conclude that a solution can be achieved that solves some of the presented problems: a better way to connect various tools to present and analyze test results coming from multiple different sources.
20

Moschoglou, Georgios Moschos. "Software testing tools and productivity." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014862.

Abstract:
Testing statistics state that testing consumes more than half of a programmer's professional life, although few programmers like testing, fewer like test design, and only 5% of their education will be devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software using no tool and testing software using a command-line based testing tool - in terms of the length of time and number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compares three conditions - testing software using no tool, testing software using a command-line based testing tool, and testing software using an interactive GUI tool with added functionality - in terms of the length of time and number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department.
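For context, the statement coverage targeted in these experiments can be measured programmatically today; below is a hedged sketch using the coverage.py library as a modern stand-in for the 1996 tools, where the module under test, triangle, is hypothetical.

```python
import coverage

cov = coverage.Coverage()
cov.start()

import triangle                  # hypothetical module under test
triangle.classify(3, 4, 5)       # one hypothetical test case

cov.stop()
total = cov.report()             # prints a per-file table, returns %
print(f"statement coverage: {total:.0f}% (experiment targets: 80%/95%)")
```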
Department of Computer Science
21

Schulte, Jan. "A Software Verification & Validation Management Framework for the Space Industry." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1194.

Abstract:
Software for space applications has special requirements in terms of reliability and dependability. As the verification & validation activities (VAs) of these software systems account for more than 50% of the development effort, and the industry is faced with political and market pressure to deliver software faster and cheaper, new ways need to be established to reduce this verification & validation effort. In a research project together with RUAG Aerospace Sweden AB and the Swedish Space Corporation, Blekinge Tekniska Högskola is trying to find out how to optimize the VAs with respect to effectiveness and efficiency. The goal of this thesis is therefore to develop a coherent framework for the management and optimization of verification & validation activities (VAMOS), which is evaluated at RUAG Aerospace Sweden AB in Göteborg.
22

Napier, John. "Assessing diagnostics for fault tolerant software." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368398.

23

Cordova, Lucas Pascual. "Development and Validation of Feedback-Based Testing Tutor Tool to Support Software Testing Pedagogy." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31749.

Abstract:
Current testing education tools provide coverage deficiency feedback that either mimics industry code coverage tools or enumerates through the associated instructor tests that were absent from the student’s test suite. While useful, these types of feedback mechanisms are akin to revealing the solution and can inadvertently lead a student down a trial-and-error path, rather than using a systematic approach. In addition to an inferior learning experience, a student may become dependent on the presence of this feedback in the future. Considering these drawbacks, there exists an opportunity to develop and investigate alternative feedback mechanisms that promote positive reinforcement of testing concepts. We believe that using an inquiry-based learning approach is a better alternative (to simply providing the answers) where students can construct and reconstruct their knowledge through discovery and guided learning techniques. To facilitate this, we present Testing Tutor, a web-based assignment submission platform to support different levels of testing pedagogy via a customizable feedback engine. This dissertation is based on the experiences of using Testing Tutor at different levels of the curriculum. The results indicate that the groups using conceptual feedback produced higher-quality test suites (achieved higher average code coverage, fewer redundant tests, and higher rates of improvement) than the groups that received traditional code coverage feedback. Furthermore, students also produced higher quality test suites when the conceptual feedback was tailored to task-level for lower division student groups and self-regulating-level for upper division student groups. We plan to perform additional studies with the following objectives: 1) improve the feedback mechanisms; 2) understand the effectiveness of Testing Tutor’s feedback mechanisms at different levels of the curriculum; and 3) understand how Testing Tutor can be used as a tool for instructors to gauge learning and determine whether intervention is necessary to improve students’ learning.
24

Nilsson, Daniel. "System for firmware verification." Thesis, University of Kalmar, School of Communication and Design, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hik:diva-2372.

Abstract:

Software verification is an important part of software development, and the most practical way to do this today is through dynamic testing. This report explains concepts connected to verification and testing and also presents the testing framework Trassel, developed during the writing of this report. Constructing domain-specific languages and tools by using an existing language as a starting ground can be a good strategy for solving certain problems. This was tried with Trassel, where the description language for writing test cases was written as a DSL using Python as the host language.
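A minimal sketch of the internal-DSL style described, with Python as the host language; the vocabulary (TestCase, step, run) is invented here and is not Trassel's actual API.

```python
# Invented fluent test-case DSL hosted in plain Python.
class TestCase:
    def __init__(self, name):
        self.name, self.steps = name, []

    def step(self, action, expect=None):
        self.steps.append((action, expect))
        return self                      # chaining keeps test cases flat

    def run(self):
        for action, expect in self.steps:
            result = action()
            assert expect is None or result == expect, self.name
        print(f"{self.name}: ok")

TestCase("adder") \
    .step(lambda: 1 + 1, expect=2) \
    .step(lambda: sum(range(4)), expect=6) \
    .run()
```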

25

Bazaz, Anil. "A Framework for Deriving Verification and Validation Strategies to Assess Software Security." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/27006.

Abstract:
In recent years, the number of exploits targeting software applications has increased dramatically. These exploits have caused substantial economic damage. Ensuring that software applications are not vulnerable to exploits has, therefore, become a critical requirement. The last line of defense is to test beforehand whether a software application is vulnerable to exploits. One can accomplish this by testing for the presence of vulnerabilities. This dissertation presents a framework for deriving verification and validation (V&V) strategies to assess the security of a software application by testing it for the presence of vulnerabilities. This framework can be used to assess the security of any software application that executes above the level of the operating system. It affords a novel approach, which consists of testing whether the software application permits violation of constraints imposed by computer system resources or of assumptions made about the usage of these resources. A vulnerability exists if a constraint or an assumption can be violated. Distinctively different from other approaches found in the literature, this approach simplifies the process of assessing the security of a software application. The framework is composed of three components: (1) a taxonomy of vulnerabilities, which is an informative classification of vulnerabilities, where vulnerabilities are expressed in the form of violable constraints and assumptions; (2) an object model, which is a collection of potentially vulnerable process objects that can be present in a software application; and (3) a V&V strategies component, which combines information from the taxonomy and the object model and provides approaches for testing software applications for the presence of vulnerabilities. This dissertation also presents a step-by-step process for using the framework to assess software security.
Ph. D.
26

Levin, Anders, and Jörgen Johannesson. "Validation and verification of a third degree optimization method." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-25.

Abstract:



This combined master thesis in Mathematics and Computer Science deals with a method for finding the local minimum of a unimodal function inside a given interval by using a fifth-degree polynomial. This fifth-degree polynomial is created from the function value and the first and second derivative values at the end-points of the interval. In this report the presented method is derived mathematically to converge, and it is then proven that the method has a convergence rate of three. Finally, the method is tested against two reference methods to assess its usefulness. For this, some software development methods are described in the report and some test strategies are given. The tests are done with six different functions and with three different implementations of the method. The conclusions from the tests are that it is often better to use one of the reference methods instead of the presented method, even though the presented method has a better convergence rate, and that the method needs to handle the case where the new approximations are always found on one side of the interval. We could also see from the tests that none of the methods was good at finding a correct approximation, so there is a need for more reliable methods. It is therefore suggested in the report that a search for other interpolating polynomials ought to be carried out in order to improve the method. It would also be interesting to test against a method with an even higher convergence rate; to do that, another numerical representation is needed, and it would be interesting to see whether that changes the outcome.
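One iteration of such a method can be sketched numerically: fit the unique quintic matching f, f' and f'' at both endpoints, then take an interior stationary point as the next approximation. The test function is invented and this is an illustration, not the thesis implementation.

```python
import numpy as np

def quintic_step(f, df, ddf, a, b):
    """One iteration: interpolate f, f', f'' at a and b with a quintic
    and return its interior stationary point with the smallest f."""
    rows, rhs = [], []
    for x in (a, b):
        rows.append([x**k for k in range(6)])                         # f
        rhs.append(f(x))
        rows.append([k * x**(k - 1) if k else 0.0 for k in range(6)]) # f'
        rhs.append(df(x))
        rows.append([k * (k - 1) * x**(k - 2) if k > 1 else 0.0
                     for k in range(6)])                              # f''
        rhs.append(ddf(x))
    coeffs = np.linalg.solve(np.array(rows), np.array(rhs))
    dp = np.polynomial.Polynomial(coeffs).deriv()
    candidates = [r.real for r in dp.roots()
                  if abs(r.imag) < 1e-12 and a < r.real < b]
    return min(candidates, key=f)

# Invented unimodal test function on [2, 3]; true minimiser ~ 2.596.
f   = lambda x: np.cos(x) + x**2 / 10
df  = lambda x: -np.sin(x) + x / 5
ddf = lambda x: -np.cos(x) + 0.2
print(quintic_step(f, df, ddf, 2.0, 3.0))
```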

27

Ipate, Florentin Eugen. "Theory of X-machines with applications in specification and testing." Thesis, University of Sheffield, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319486.

28

Chantatub, Wachara. "The integration of software specification, verification, and testing techniques with software requirements and design processes." Thesis, University of Sheffield, 1995. http://etheses.whiterose.ac.uk/1850/.

Abstract:
Specifying, verifying, and testing software requirements and design are very important tasks in the software development process and must be taken seriously. By investing more up-front effort in these tasks, software projects will gain the benefits of reduced maintenance costs, higher software reliability, and more user-responsive software. However, many individuals involved in these tasks still find that the techniques available are either too difficult and far from practical or, if not difficult, inadequate for the tasks. This thesis proposes practical and capable techniques for specifying and verifying software requirements and design and for generating test requirements for acceptance and system testing. The proposed software requirements and design specification techniques emerge from integrating three categories of software specification languages, namely an informal specification language (e.g. English), semiformal specification languages (Entity-Relationship Diagrams, Data Flow Diagrams, and Data Structure Diagrams), and a formal specification language (Z with an extended subset). The four specification languages mentioned above are used to specify both software requirements and design. Both the software requirements and the design of a system are defined graphically in Entity-Relationship Diagrams, Data Flow Diagrams, and Data Structure Diagrams, and defined formally in Z specifications. The proposed software requirements and design verification techniques are a combination of informal and formal proofs. The informal proofs are applied to check the consistency of the semiformal specification and to check the consistency, correctness, and completeness of the formal specification against the semiformal specification. The formal proofs are applied to mathematically prove the consistency of the formal specification. Finally, the proposed technique for generating test requirements for acceptance and system testing from the formal requirements specification is presented. Two sets of test requirements are generated: test requirements for testing the critical requirements, and test requirements for testing the operations of the system.
29

Santelices, Raul A. "Change-effects analysis for effective testing and validation of evolving software." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44737.

Abstract:
The constant modification of software during its life cycle poses many challenges for developers and testers because changes might not behave as expected or may introduce erroneous side effects. For those reasons, it is of critical importance to analyze, test, and validate software every time it changes. The most common method for validating modified software is regression testing, which identifies differences in the behavior of software caused by changes and determines the correctness of those differences. Most research to this date has focused on the efficiency of regression testing by selecting and prioritizing existing test cases affected by changes. However, little attention has been given to finding whether the test suite adequately tests the effects of changes (i.e., behavior differences in the modified software) and which of those effects are missed during testing. In practice, it is necessary to augment the test suite to exercise the untested effects. The thesis of this research is that the effects of changes on software behavior can be computed with enough precision to help testers analyze the consequences of changes and augment test suites effectively. To demonstrate this thesis, this dissertation uses novel insights to develop a fundamental understanding of how changes affect the behavior of software. Based on these foundations, the dissertation defines and studies new techniques that detect these effects in cost-effective ways. These techniques support test-suite augmentation by (1) identifying the effects of individual changes that should be tested, (2) identifying the combined effects of multiple changes that occur during testing, and (3) optimizing the computation of these effects.
30

Saeed, Farrakh, and Muhammad Saeed. "Systematic Review of Verification and Validation in Dynamic Programming Languages." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3239.

Abstract:
Verification and validation provide support to improve the quality of software. Verification and validation ensure that the product is stable and developed according to the requirements of the end user. This thesis presents a systematic review of dynamic programming languages and of the verification & validation practices used for dynamic languages, covering results published over the period 1985-2008. The study starts from the identification of dynamic aspects, along with the differences between static and dynamic languages, and then gives an overview of the verification and validation practices for dynamic languages. Moreover, to validate the verification and validation results, a survey consisting of (i) interviews and (ii) an online survey was conducted. The analysis of the systematic review found that dynamic languages are making progress in some areas, such as integration with common development frameworks, language enhancement, and dynamic aspects, but that they still lag behind static languages in performance. Some factors were also found in this study that could raise the popularity of dynamic languages in industry. Based on the analysis of the systematic review, interviews and online survey, it is concluded that there is no difference between the methodologies available for verification and validation, and that dynamic languages support maintaining software quality through their characteristics and dynamic features. Moreover, they also support testing software developed with static languages. It is concluded that test-driven development should be adopted when working with dynamic languages and should be regarded as a mandatory part of them.
31

Belt, P. (Pekka). "Improving verification and validation activities in ICT companies—product development management approach." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291487.

Abstract:
The main motive for this research arises from the fact that research on verification and validation (V&V) activities from the management viewpoint has been scarce, even though V&V has been covered from the technical viewpoint. There was a clear need for studying the management aspects due to the development of the information and communications technology (ICT) sector and the increased significance of V&V activities. ICT has developed into a turbulent, high clock-speed sector, and the importance of V&V activities has increased significantly. As a consequence, companies in the ICT sector require ideas for improving their verification and validation activities from the product development management viewpoint. This study approaches the above-mentioned goal from four perspectives: current V&V management challenges, organisational and V&V maturities, benchmarking another sector, and uncertainty during new product development (NPD). This dissertation is qualitative in nature and is based on interviews with experienced industrial managers, reflecting their views against the scientific literature. The researcher has analysed the obtained material and drawn conclusions. The main implications of this doctoral dissertation are the need to overcome the current tendency to organise through functional silos, and the low maturity of V&V activities. Verification and validation activities should be viewed and managed over the entire NPD process. This requires new means of cross-functional integration. The maturity of the overall management system needs to be adequate to enable higher efficiency and effectiveness of V&V activities. There are pressures to shift the emphasis of V&V to early NPD and simultaneously delay decision-making in NPD projects to a stage where enough information is available. Understanding-enhancing V&V methods are a potential way to advance towards these goals.
32

Masi, Riccardo. "Software verification and validation methods with advanced design patterns and formal code analysis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract:
This thesis focuses on the description and improvement of the host company's software life cycle, with a focus on the verification and validation phase. The host company is an international group and a world leader in the supply of advanced technologies for the ceramic, metal, packaging, and food and beverage industries, and for the production of plastic containers and advanced materials. The software life cycle is an extremely important development process for building state-of-the-art software products, and it is a process that requires methodology, control, and appropriate documentation. For companies, quality assurance in software development has become a very expensive activity, and the verification and validation phase is essential to reducing these costs. The starting point of the thesis is the analysis and evaluation of the answers obtained through a company survey submitted to the software developers during the first phase of the internship. Subsequently, a typical software life cycle management process is described, with particular attention to the verification and validation phase, explained through practical examples. The thesis then analyzes in detail the different methodologies and strategies of the software verification and validation process, starting from static analysis, passing through classical methodologies of dynamic analysis, and concluding with innovative verification and validation solutions to automate the process. The main goal of the thesis is the optimization and standardization of the host company's automation software life cycle, proposing innovative solutions for every single phase of the process and possible future research and updates.
33

Andrieu, Christian W. "Testing, validation, and verification of an expert system advisor for aircraft maintenance scheduling (ESAAMS)." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28599.

34

Chunduri, Annapurna. "An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12805.

Abstract:
Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.
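The gap/redundancy notion at the core of the proposed strategy reduces to set arithmetic, illustrated below with invented requirement and test-case names.

```python
# Hypothetical requirement IDs for one distributed function and the
# requirements each test scenario exercises.
requirements = {"R1", "R2", "R3", "R4"}
test_coverage = {
    "TC1": {"R1", "R2"},
    "TC2": {"R2"},          # strict subset of TC1: redundant
    "TC3": {"R3"},
}

covered = set().union(*test_coverage.values())
gaps = requirements - covered               # requirements never tested
redundant = [t for t, reqs in test_coverage.items()
             if any(reqs < other
                    for name, other in test_coverage.items() if name != t)]

print(f"test gaps: {gaps}, redundant tests: {redundant}")
# -> test gaps: {'R4'}, redundant tests: ['TC2']
```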
APA, Harvard, Vancouver, ISO, and other styles
35

Belsick, Charlotte Ann. "Space Vehicle Testing." DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/888.

Full text
Abstract:
Requirement verification and validation is a critical component of building and delivering space vehicles, with testing as the preferred method. This Master's Project presents the space vehicle test process from planning through test design and execution. It starts with an overview of requirements, validation, and verification. The four different verification methods are explained, including examples of what can go wrong when verification is done incorrectly. Since the focus of this project is on test, test verification is emphasized. The philosophy behind testing, including the "why" and the methods, is presented. The different levels of testing, the test objectives, and the typical tests are discussed in detail. Descriptions of the different types of tests are provided, including configurations and test challenges. While most individuals focus on hardware only, software is an integral part of any space product; as such, software testing, including mistakes and examples, is also presented. Since testing is often not performed flawlessly the first time, sections on anomalies, including determining root cause, corrective action, and retest, are included. A brief discussion of defect detection in test is presented. The full project is presented in the Appendix as a PowerPoint document.
APA, Harvard, Vancouver, ISO, and other styles
36

Angerhofer, Bernhard J. "Collaborative supply chain modelling and performance measurement." Thesis, Brunel University, 2002. http://bura.brunel.ac.uk/handle/2438/4993.

Full text
Abstract:
For many years, supply chain research focused on operational aspects and therefore mainly on the optimisation of parts of the production and distribution processes. Recently, there has been increasing interest in supply chain management and collaboration between supply chain partners. However, there is no model that takes into consideration all the aspects required to adequately represent and measure the performance of a collaborative supply chain. This thesis proposes a model of a collaborative supply chain, consisting of six constituents, all of which are required in order to provide a complete picture of such a collaborative supply chain. In conjunction with that, a collaborative supply chain performance indicator is developed. It is based on three types of measures to allow adequate measurement of collaborative supply chain performance. The proposed model of a collaborative supply chain and the collaborative supply chain performance indicator are implemented as a computer simulation. This is done in the form of a decision support environment, whose purpose is to show how changes in any of the six constituents affect collaborative supply chain performance. The decision support environment is configured and populated with information and data obtained in a case study. Verification and validation testing in three different scenarios demonstrates that the decision support environment adequately fulfils its purpose.
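As a hedged illustration of how a composite indicator can combine several measure types, the sketch below normalises three invented measures and weights them into a single score; the measure names and weights are assumptions, not the thesis's actual indicator:

```python
# Illustrative composite indicator: each measure is normalised to [0, 1]
# and combined with a weight. Names, values and weights are assumptions.
measures = {
    "fulfilment":    {"value": 0.92, "weight": 0.4},  # e.g. order fulfilment rate
    "cost":          {"value": 0.75, "weight": 0.3},  # normalised cost efficiency
    "collaboration": {"value": 0.60, "weight": 0.3},  # partner information sharing
}

indicator = sum(m["value"] * m["weight"] for m in measures.values())
print(f"collaborative supply chain performance: {indicator:.2f}")  # 0.77
```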
APA, Harvard, Vancouver, ISO, and other styles
37

Rotting, Tjädermo Viktor, and Alex Tanskanen. "System Upgrade Verification : An automated test case study." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165125.

Full text
Abstract:
We live in a society where automation is becoming more common, whether in cars or artificial intelligence. Software needs to be updated using patches; however, these patches can break existing components. This study takes such a patch in the context of Ericsson, identifies what needs to be tested, investigates whether the tests can be automated, and assesses how maintainable they are. Interviews were used to identify the system and software parts in need of testing. Tests were then implemented in an automated test suite to test the functionality of either a system or software. The goal was to reduce employees' troubleshooting time without interrupting sessions for users, as well as to set up a working test suite. Once the automated tests were completed and implemented in the test suite, the study concluded by measuring the maintainability of the scripts using both metrics and human assessment through interviews. The results showed that the test suite was maintainable, both from the metric point of view and from human assessment.
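One widely used metric-based proxy for script maintainability is the classic Maintainability Index, computed from Halstead volume, cyclomatic complexity and lines of code. The sketch below applies the standard formula to invented metric values (nothing here is Ericsson's data):

```python
import math

def maintainability_index(halstead_volume: float, cyclomatic: int, loc: int) -> float:
    """Classic (unnormalised) Maintainability Index; higher is more maintainable."""
    return 171 - 5.2 * math.log(halstead_volume) - 0.23 * cyclomatic - 16.2 * math.log(loc)

# Assumed metrics for one test script; the values are purely illustrative.
mi = maintainability_index(halstead_volume=850.0, cyclomatic=9, loc=120)
print(f"MI = {mi:.1f}")  # ~56, commonly read as "moderately maintainable"
```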
APA, Harvard, Vancouver, ISO, and other styles
38

Grobler, Leon D. "A kernel to support computer-aided verification of embedded software." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2479.

Full text
Abstract:
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2006.
Formal methods, such as model checking, have the potential to improve the reliability of software. Abstract models of systems are subjected to formal analysis, often revealing subtle defects not discovered by traditional testing.
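To give a minimal flavour of explicit-state model checking (illustrative only; this is not the kernel the thesis describes), the sketch below enumerates the reachable states of a toy two-process model and checks a mutual-exclusion safety property in every state:

```python
from collections import deque

# Toy mutual-exclusion model: each process is 'idle', 'waiting' or 'critical'.
# The transitions are invented; a real kernel would derive them from a model.
def successors(state):
    p1, p2 = state
    step = {"idle": "waiting", "waiting": "critical", "critical": "idle"}
    yield (step[p1], p2)
    yield (p1, step[p2])

def safe(state):
    return state != ("critical", "critical")  # mutual exclusion must hold

def check(initial):
    """Breadth-first reachability: return a counterexample state, or None if safe."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

print(check(("idle", "idle")))  # ('critical', 'critical'): the naive model is unsafe
```

The violated state is exactly the kind of subtle defect a model checker reports as a counterexample, which testing might never happen to exercise.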
APA, Harvard, Vancouver, ISO, and other styles
39

Alberi, Thomas James. "A Proposed Standardized Testing Procedure for Autonomous Ground Vehicles." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32321.

Full text
Abstract:
Development of unmanned vehicles will increase as the need to save lives rises. In both military and civilian applications, humans can be taken out of the loop through the implementation of safe and intelligent autonomous vehicles. Although hardware and software development continue to play a large role in the autonomous vehicle industry, validation of these systems will always be necessary. The ability to test these vehicles thoroughly and efficiently will ensure their proper and flawless operation. On November 3, 2007, the Defense Advanced Research Projects Agency held the Urban Challenge to drive the development of autonomous ground vehicles for military use. This event required vehicles built by teams across the world to autonomously navigate a 60-mile course in an urban environment in less than 6 hours. This thesis addresses the testing of autonomous ground vehicles that exhibit the advanced behaviors necessary for operating in such an event. Specifically, the experiences of Team Victor Tango and other Urban Challenge teams are covered in detail. The testing facilities, safety measures, procedures, and validation methods utilized by these teams provide valuable information on the development of their vehicles. Combining all these aspects results in a proposed testing strategy for autonomous ground vehicles.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
40

Weitz, Noah. "Analysis of Verification and Validation Techniques for Educational CubeSat Programs." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1854.

Full text
Abstract:
Since their creation, CubeSats have become a valuable educational tool for university science and engineering programs. Unfortunately, while aerospace companies invest resources to develop verification and validation methodologies based on larger-scale aerospace projects, university programs tend to focus their resources on spacecraft development. This thesis looks at two different types of methodologies in an attempt to improve CubeSat reliability: generating software requirements and utilizing system and software architecture modeling. Both the Consortium Requirements Engineering (CoRE) method for software requirements and the Monterey Phoenix modeling language for architecture modeling were tested for usability in the context of PolySat, Cal Poly's CubeSat research program. In the end, neither CoRE nor Monterey Phoenix provided the desired results for improving PolySat's current development procedures. While a modified version of CoRE discussed in this thesis does allow basic software requirements to be generated, the resulting specification does not provide any more granularity than PolySat's current institutional knowledge. Furthermore, while Monterey Phoenix is a good tool for introducing students to model-based systems engineering (MBSE) concepts, the resulting graphs generated for a PolySat-specific project were high-level and did not uncover any of the issues previously discovered through trial-and-error methodologies. While neither method works for PolySat, the results do provide benefits for university programs looking to begin developing CubeSats.
APA, Harvard, Vancouver, ISO, and other styles
41

Scott, Hanna E. T. "A Balance between Testing and Inspections : An Extended Experiment Replication on Code Verification." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1751.

Full text
Abstract:
An experiment replication comparing the performance of traditional structural code testing with inspection meeting preparation using scenario-based reading. The original experiment was conducted by Per Runeson and Anneliese Andrews in 2003 at Washington State University.
APA, Harvard, Vancouver, ISO, and other styles
42

Deng, Xianghua. "Contract-based verification and test case generation for open systems." Diss., Manhattan, Kan. : Kansas State University, 2007. http://hdl.handle.net/2097/345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Frazier, Edward Snead. "Assessing Security Vulnerabilities: An Application of Partial and End-Game Verification and Validation." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/31849.

Full text
Abstract:
Modern software applications are becoming increasingly complex, prompting a need for expandable software security assessment tools. The violable constraints/assumptions presented by Bazaz [1] are expandable and can be modified to fit the changing landscape of software systems. Partial and End-Game Verification, Validation, and Testing (VV&T) strategies utilize the violable constraints/assumptions and are established by this research as viable software security assessment tools. The application of Partial VV&T to the Horticulture Club Sales Assistant is documented in this work. Development artifacts relevant to Partial VV&T review are identified. Each artifact is reviewed for the presence of constraints/assumptions by translating the constraints/assumptions to target the specific artifact and software system. A constraint/assumption review table and accompanying status nomenclature are presented that support the application of Partial VV&T. Both the constraint/assumption review table and status nomenclature are generic, allowing them to be used in applying Partial VV&T to any software system. Partial VV&T, using the constraint/assumption review table and associated status nomenclature, is able to effectively identify software vulnerabilities. End-Game VV&T is also applied to the Horticulture Club Sales Assistant. Base test strategies presented by Bazaz [1] are refined to target system-specific resources such as user input, database interaction, and network connections. The refined test strategies are used to detect violations of the constraints/assumptions within the Horticulture Club Sales Assistant; End-Game VV&T is able to identify such violations, indicating vulnerabilities within the Horticulture Club Sales Assistant. Addressing the vulnerabilities identified by Partial and End-Game VV&T will enhance the overall security of a software system.
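A constraint/assumption review table of the kind described reduces naturally to a small data structure. The sketch below is a hypothetical rendering; the status nomenclature and entries are invented for illustration, not taken from the thesis:

```python
from dataclasses import dataclass

# Hypothetical status nomenclature; the thesis defines its own terms.
STATUSES = {"satisfied", "violated", "not-applicable", "needs-review"}

@dataclass
class ReviewEntry:
    constraint: str   # violable constraint/assumption, translated to the artifact
    artifact: str     # development artifact under review
    status: str
    note: str = ""

table = [
    ReviewEntry("user input is length-bounded", "input handler design", "violated",
                "no bound enforced on the name field"),
    ReviewEntry("database queries are parameterised", "data access layer", "satisfied"),
]

# Vulnerabilities surface as the 'violated' rows of the review table.
for entry in table:
    assert entry.status in STATUSES
    if entry.status == "violated":
        print(f"potential vulnerability: {entry.constraint} ({entry.note})")
```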
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
44

Thurn, Christian. "Verification and Validation of Object Oriented Software Design : Guidelines on how to Choose the Best Method." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2590.

Full text
Abstract:
The earlier in the development process a fault is found, the cheaper it is to correct. Verification and validation methods are therefore important tools, but there are many methods to choose between. This thesis sheds light on how to choose among four common verification and validation methods, including reviews, inspections, and Fault Tree Analysis. Review and inspection methods are evaluated in an empirical study. The results of the study show that there are differences between the methods in terms of defect detection. Based on this study and a literature study, guidelines on how to choose the best method in a given context are given.
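When such methods are compared empirically, a simple effectiveness measure is the fraction of known defects each method detects. A toy computation, with all counts invented:

```python
# Toy comparison of defect-detection effectiveness (all counts invented).
seeded_defects = 20
found = {"review": 9, "inspection": 13, "fault tree analysis": 7}

for method, n in sorted(found.items(), key=lambda kv: -kv[1]):
    rate = n / seeded_defects
    print(f"{method:>20}: {n:2d}/{seeded_defects} defects ({rate:.0%})")
```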
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Timothy. "Credible autocoding of control software." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53954.

Full text
Abstract:
Formal methods is a discipline that uses a collection of mathematical techniques and formalisms to model and analyze software systems. Motivated by the new formal-methods-based certification recommendations for safety-critical embedded software and the significant increase in the cost of verification and validation (V&V), this research is about creating a software development process for control systems that can provide mathematical guarantees of high-level functional properties on the code. The process, dubbed credible autocoding, leverages control theory in the automatic generation of control software documented with proofs of its stability and performance. The main output of this research is an automated credible autocoding prototype that transforms the Simulink model of the controller into C code documented with a code-level proof of the stability of the controller. The code-level proof, expressed in a formal specification language, is embedded into the code as annotations. The annotations guarantee that the auto-generated code conforms to the input model to the extent that key properties are satisfied. They also provide sufficient information to enable an independent, automatic, formal verification of the auto-generated controller software.
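The core proof obligation carried by such annotations is typically a quadratic Lyapunov invariant of the closed-loop system. The sketch below checks that kind of invariant numerically for an invented stable system; the thesis's prototype instead emits C code with formal annotations, so this is only a hedged illustration of the underlying mathematics:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Invented stable closed-loop dynamics x+ = A x; a credible autocoder would
# emit the controller code together with annotations asserting that the
# level sets of V(x) = x' P x are invariant.
A = np.array([[0.6, 0.2],
              [-0.1, 0.7]])

# P solves the discrete Lyapunov equation A' P A - P = -I, so V decreases.
P = solve_discrete_lyapunov(A.T, np.eye(2))

# Proof obligation behind the annotation: V(A x) <= V(x) for all x.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=2)
    assert x @ A.T @ P @ A @ x <= x @ P @ x + 1e-9
print("Lyapunov invariant V(Ax) <= V(x) held on all sampled states")
```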
APA, Harvard, Vancouver, ISO, and other styles
46

Marculescu, Bogdan. "Interactive Search-Based Software Testing : Development, Evaluation, and Deployment." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Snyder, Nicholas B. "DESIGN, VALIDATION, AND VERIFICATION OF THE CAL POLY EDUCATIONAL CUBESAT KIT STRUCTURE." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2148.

Full text
Abstract:
In this thesis, the development of a structure for use in an educational CubeSat kit is explored. The potential uses of this kit include augmenting existing curricula with hands-on learning, developing new ways of training students in proper space systems engineering practices, and contributing to academic capacity building at Cal Poly and its collaborators. The design improves on existing CubeSat kit structures by increasing access to internal components through a modular backplane system, as well as by adding the ability to be environmentally tested. Manufacturing of the structure is completed with both additive (Fused Deposition Modeling with ABS polymer and Selective Laser Melting with AlSi10Mg metal) and subtractive (milling with Al-6061) technologies. Modal, harmonic, and random vibration analyses and tests are done to ensure the structure passes vibration testing qualification loads, as outlined by the National Aeronautics and Space Administration's General Environmental Verification Standard. Successful testing of the structure, defined as deforming less than 0.5 millimeters and maintaining a factor of safety above 2, is achieved with all materials of interest. The structure thus becomes the first publicly available CubeSat kit structure designed to survive environmental testing. Achieving this goal with a structure made of the cheap, widely available material ABS showcases the potential of 3D-printed polymers in CubeSat structures.
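The quoted pass criterion reduces to two simple checks. A toy version, with the stress and deformation values invented (the yield strength is an approximate Al-6061 figure):

```python
# Invented post-analysis results for one material case.
yield_strength_mpa = 275.0   # approximate Al-6061-T6 yield strength
max_von_mises_mpa = 112.0    # assumed result from the vibration analysis
max_deformation_mm = 0.31    # assumed result

factor_of_safety = yield_strength_mpa / max_von_mises_mpa
passed = factor_of_safety > 2.0 and max_deformation_mm < 0.5
print(f"FoS = {factor_of_safety:.2f}, deformation = {max_deformation_mm} mm -> "
      f"{'PASS' if passed else 'FAIL'}")   # FoS = 2.46 -> PASS
```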
APA, Harvard, Vancouver, ISO, and other styles
48

Viviani, Carlos Alessandro Bassi. "Proposta de metodologia para verificação e validação software de equipamentos eletromédicos." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259926.

Full text
Abstract:
Advisor: Vera Lúcia da Silveira Nantes Button
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Today a great part of electromedical equipment (EME) has some kind of control performed by software. This control can be restricted to one or more subsystems of the equipment, or it can be total. Since software became a key factor in EME control, it represents an intrinsic risk and must be analyzed with the same accuracy and criteria as the equipment's hardware. Rigorous analysis of such equipment is usually concentrated on the functioning of the hardware itself and not on the software control systems. A significant amount of critical software is developed by small enterprises, mainly in the EME industry. The main goal of this study was to present a methodology for organizing the EME control software test process, as well as to define all the documentation necessary for managing that test process, based on the IEEE 829:2008 standard. As a secondary goal, the processes and agents involved in EME certification in Brazil and worldwide were reported. Several EME malfunction problems, especially ones caused by errors in control software, were found in the literature. The Brazilian EME regulators were presented, along with the certification, commercialization and post-market surveillance processes for medical products. To point out the problems found and documented regarding EME, the concept of recall was presented, together with how this process occurs in Brazil and in the world. The proposed methodology, which prioritizes systematic testing, can be used for the verification and validation of any kind of EME control software and is divided into two fundamental parts: the test process and the generation of documents. The methodology was applied to a commercial hospital heart monitor in order to validate it and thus guarantee that the equipment complied with the manufacturer's requirements and with the standard it is subject to; in this way the equipment can be considered safe for clinical use from the software security point of view. The characteristics and technical specifications necessary for the test process were obtained from the EME user manual, from the manufacturer's technical specifications, and from the EME-specific standard, since such equipment is subject to the compulsory certification established by ANVISA Resolution No. 32. As a result of this research, a set of documents based on IEEE 829:2008 was produced and used from test planning to the recording of results. These documents are: 1) Test plan - a detailed model of the workflow during the test process; 2) Test design specification - refines the approach presented in the test plan and identifies the functionalities and characteristics to be tested by the project and its associated tests; 3) Test case specification - defines the test cases, including input data, expected results, actions, and general conditions for test execution; 4) Test procedure specification - specifies the steps for executing a set of test cases; 5) Test log - presents the chronological records of relevant details related to test execution; 6) Test incident report - documents events that occurred during the test activity and required later analysis; and 7) Test summary report - briefly summarizes the results of the test activities associated with one or more test design specifications and presents evaluations based on these results. This documentation constitutes a macro model that can be applied to any EME, and it was used to test a hospital heart monitor as a case study. The functional tests applied to the heart monitor's embedded systems were considered effective in various simulated conditions of use, both normal and critical, including ones that could represent a risk to users of the equipment. This study is an important contribution to the organization of the EME control software verification and validation process. Applying the proposed methodology to the heart monitor under test performed its verification and validation from the control software quality point of view; since no defects and only one minor failure were observed, the monitor was qualified as fit for safe use.
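IEEE 829's document set maps naturally onto structured records. As a hypothetical sketch, a test case specification can be captured as data; the field names paraphrase the standard and the values are invented, not taken from the dissertation:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    """Minimal IEEE 829-style test case specification; fields paraphrase the standard."""
    identifier: str
    items: list[str]            # features / functions under test
    inputs: dict[str, object]   # input specifications
    expected: str               # expected-results specification
    environment: str = "bench setup with a patient simulator"  # assumed
    dependencies: list[str] = field(default_factory=list)

tc = TestCaseSpec(
    identifier="TC-ECG-007",    # hypothetical case for a heart monitor
    items=["heart-rate alarm limits"],
    inputs={"simulated_rate_bpm": 185, "high_alarm_bpm": 150},
    expected="audible and visual high-rate alarm within 10 s",
)
print(tc.identifier, "->", tc.expected)
```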
Master's
Biomedical Engineering
Master in Electrical and Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
49

Bollen, Rob. "Verification and validation of computer simulations with the purpose of licensing a pebble bed modular reactor." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/53216.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2002.
The Pebble Bed Modular Reactor is a new and inherently safe concept for a nuclear power generation plant. In order to obtain the necessary licences to build and operate this reactor, numerous design and safety analyses need to be performed. The results of these analyses must be supported with substantial proof to give the nuclear authorities a sufficient level of confidence in the results to grant the required licences. Besides the obvious need for a sufficient level of confidence in the safety analyses, the analyses concerned with investment protection also need to be reliable from the investors' point of view. The process followed to provide confidence in these analyses is the verification and validation process. It is aimed at presenting reliable material against which to compare the results of the simulations. This comparison material consists of a combination of results from experimental data, extracts from actual plant data, analytical solutions, and independently developed solutions for the simulation of the event to be analysed. Besides comparison with these alternative sources of information, confidence in the results is also built by providing validated statements on the accuracy of the results and on the boundary conditions with which the simulations need to comply. Numerous standards exist that address the verification and validation of computer software, for instance from organisations such as the American Society of Mechanical Engineers (ASME) and the Institute of Electrical and Electronics Engineers (IEEE). However, the focal points of the verification and validation of the design and safety analyses performed on typical PBMR modes and states, and the requirements imposed by both the local and overseas nuclear regulators, are not entirely covered by these standards. For this reason, PBMR developed a systematic and disciplined approach for the preparation of the Verification and Validation Plan, aimed at capturing the essence of the analyses. This approach makes a definite division between software development and the development of technical analyses, while still using similar processes for the verification and validation. The reasoning behind this is that technical analyses are performed by engineers and scientists who should be responsible only for the verification and validation of the models and data they use, not for the software they depend on; software engineers should be concerned with delivering qualified software to be used in the technical analyses. The PBMR verification and validation process is applicable to both hand calculations and computer-aided analyses, addressing specific requirements in clearly defined stages of the software and Technical Analysis life cycle. The verification and validation effort of the Technical Analysis activity is divided into the verification and validation of models and data, the review of calculational tasks, and the verification and validation of software, with the applicable information to be validated captured in registers or databases. The resulting processes are as simple as possible, concise and practical. Effective use of resources is ensured, and internationally accepted standards have been incorporated, helping all stakeholders, including investors, nuclear regulators and the public, to have confidence in the process.
APA, Harvard, Vancouver, ISO, and other styles
50

Yilmaz, Levent. "Specifying and Verifying Collaborative Behavior in Component-Based Systems." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/26494.

Full text
Abstract:
In a parameterized collaboration design, one views software as a collection of components that play specific roles in interacting, giving rise to collaborative behavior. From this perspective, collaboration designs revolve around reusing collaborations that typify certain design patterns. Unfortunately, verifying that active, concurrently executing components obey the synchronization and communication requirements needed for the collaboration to work is a serious problem. At least two major complications arise in concurrent settings: (1) it may not be possible to analytically identify components that violate the synchronization constraints required by a collaboration, and (2) evolving participants in a collaboration independently often gives rise to unanticipated synchronization conflicts. This work presents a solution technique that addresses both of these problems. Local (that is, role-to-role) synchronization consistency conditions are formalized and associated decidable inference mechanisms are developed to determine mutual compatibility and safe refinement of synchronization behavior. More specifically, given generic parameterized collaborations and components with specific roles, mutual compatibility analysis verifies that the provided and required synchronization models are consistent and integrate correctly. Safe refinement, on the other hand, guarantees that the local synchronization behavior is maintained consistently as the roles and the collaboration are refined during development. This form of local consistency is necessary, but insufficient to guarantee a consistent collaboration overall. As a result, a new notion of global consistency (that is, among multiple components playing multiple roles) is introduced: causal process constraint analysis. A method for capturing, constraining, and analyzing global causal processes, which arise due to causal interference and interaction of components, is presented. Principally, the method allows one to: (1) represent the intended causal processes in terms of interactions depicted in UML collaboration graphs; (2) formulate constraints on such interactions and their evolution; and (3) check that the causal process constraints are satisfied by the observed behavior of the component(s) at run-time.
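Mutual compatibility and safe refinement can be pictured, in a deliberately simplified form, as containment checks between sets of allowed interaction traces; the actual analysis uses richer formalisms than enumerated traces, so the sketch below is only illustrative:

```python
# Toy role synchronization models as finite sets of allowed call sequences.
# A real analysis would use automata or process algebra, not enumerated traces.
required = {            # what the collaboration's role expects
    ("open", "write", "close"),
    ("open", "close"),
}
provided = {            # what a candidate component actually offers
    ("open", "write", "close"),
    ("open", "close"),
    ("open", "write", "write", "close"),
}

def mutually_compatible(required, provided):
    """Every required interaction sequence must be offered by the component."""
    return required <= provided

def safe_refinement(old, new):
    """A refined component must keep supporting all previously offered traces."""
    return old <= new

print(mutually_compatible(required, provided))                     # True
print(safe_refinement(provided, provided - {("open", "close")}))   # False: behavior dropped
```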
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles