Doctoral dissertations on the topic "Software defects"


Below are the top 50 doctoral dissertations on the topic "Software defects".


1

Couto, César Francisco de Moura. "Predicting software defects with causality tests = Predizendo defeitos de software com testes de causalidade". Universidade Federal de Minas Gerais, 2013. http://hdl.handle.net/1843/ESBF-9GMMLN.

Full text available
Abstract:
Defect prediction is a central area of research in software engineering that aims to identify the components of a software system that are more likely to present defects. Despite the large investment in research aiming to identify an effective way to predict defects in software systems, there is still no widely used solution to this problem. Current defect prediction approaches present at least two main problems. First, most approaches do not consider the idea of causality between software metrics and defects. More specifically, the studies performed to evaluate defect prediction techniques do not investigate in depth whether the discovered relationships indicate cause-effect relations or whether they are statistical coincidences. The second problem concerns the output of current defect prediction models. Typically, most indicate the number or the existence of defects in a component in the future. Clearly, the availability of this information is important to foster software quality. However, predicting defects as soon as they are introduced in the code is more useful to maintainers than simply signaling the future occurrence of defects. To tackle these questions, in this thesis we propose a defect prediction approach centered on more robust evidence of causality between source code metrics (as predictors) and the occurrence of defects. More specifically, we rely on a statistical hypothesis test proposed by Clive Granger to evaluate whether past variations in source code metric values can be used to forecast changes in time series of defects. The Granger causality test was originally proposed to evaluate causality between time series of economic data. Our approach triggers alarms whenever changes made to the source code of a target system are likely to present defects. We evaluated our approach in several life stages of four Java-based systems. We reached an average precision greater than 50% in three out of the four systems we evaluated. Moreover, when compared with baselines that are not based on causality tests, our approach achieved better precision.
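The core statistical step of the approach above — testing whether past metric changes help forecast a defect time series — can be sketched as a Granger-style F-test. This is a minimal illustration under assumed series and lag choices, not the authors' implementation.

```python
import numpy as np

def granger_f_stat(defects, metric, lags=2):
    """F statistic for H0: past values of `metric` add no predictive power
    for the defect series beyond the series' own past (Granger's idea)."""
    d = np.asarray(defects, dtype=float)
    m = np.asarray(metric, dtype=float)
    n = len(d)
    y = d[lags:]
    own = np.column_stack([d[lags - k:n - k] for k in range(1, lags + 1)])
    cross = np.column_stack([m[lags - k:n - k] for k in range(1, lags + 1)])
    X_r = np.column_stack([np.ones(len(y)), own])   # restricted: own lags only
    X_u = np.column_stack([X_r, cross])             # unrestricted: + metric lags

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        return e @ e

    rss_r, rss_u = rss(X_r), rss(X_u)
    df_den = len(y) - X_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_den)
```

An F statistic well above the critical value suggests that changes in the metric precede changes in the defect series, which is the kind of evidence the thesis treats as (Granger) causality.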
2

Wang, Hui. "Software Defects Classification Prediction Based On Mining Software Repository". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-216554.

Full text available
Abstract:
An important goal during the software development cycle is to find and fix existing defects as early as possible. This has much to do with software defect prediction and management. Nowadays, many big software development companies have their own development repository, which typically includes a version control system and a bug tracking system. This has no doubt proved useful for software defect prediction. Since the 1990s researchers have been mining software repositories to get a deeper understanding of the data, and over the past few years they have come up with several software defect prediction models. These prediction models fall into two basic categories. One category predicts how many defects still exist, based on the defect data already captured in the earlier stages of the software life-cycle. The other category predicts how many defects there will be in a newer version of the software, based on the defect data of earlier versions. The complexities of software development raise many issues related to software defects, and we have to consider these issues as much as possible to get precise prediction results, which makes the modeling more complex. This thesis presents the current research status of software defect classification prediction and the key techniques in this area, including software metrics, classifiers, data pre-processing, and the evaluation of prediction results. We then propose a way to predict software defect classification based on mining a software repository. A way to collect all the defects introduced during the development of the software from the Eclipse version control system, and to map these defects to the defect information contained in the defect tracking system to obtain statistical information about software defects, is described. Then the Eclipse metrics plug-in is used to get the software metrics of files and packages which contain defects. After analyzing and preprocessing the dataset, the R tool is used to build prediction models on the training dataset, in order to predict software defect classification at different levels on the testing dataset, evaluate the performance of the models, and compare different models' performance.
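A typical first step in the kind of repository mining described above is linking commits in version control to bug-tracker identifiers. The sketch below assumes a hypothetical commit-message convention ("Bug 12345"); it is a generic illustration, not the thesis's tooling.

```python
import re

# Hypothetical convention: fix commits mention the tracker id as "Bug 12345" or "bug #12345".
BUG_ID = re.compile(r"\b[Bb]ug\s+#?(\d+)\b")

def link_commits_to_bugs(commits):
    """Map bug id -> list of commit hashes whose message mentions that id.

    `commits` is an iterable of (sha, message) pairs.
    """
    links = {}
    for sha, message in commits:
        for bug_id in BUG_ID.findall(message):
            links.setdefault(int(bug_id), []).append(sha)
    return links
```

The resulting mapping is what lets defect counts be attributed to the files and packages touched by each fix commit.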
3

Nakamura, Taiga. "Recurring software defects in high end computing". College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7217.

Full text available
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
4

Hickman, Björn, and Victor Holmqvist. "Predict future software defects through machine learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301864.

Full text available
Abstract:
The thesis aims to investigate the implications of software defect prediction through machine learning for project management. In addition, the study aims to examine which features of a code base are useful for making such predictions. The features examined are of both organisational and technical nature, which previous studies indicate correlate with the introduction of software defects. The machine learning algorithms used in the study are random forest, logistic regression and naive Bayes. The data was collected from an open source git repository, VSCode, where the correct classifications of reported defects originated from GitHub Issues. The results of the study indicate that both technical features of a code base and organisational factors can be useful when predicting future software defects. All three algorithms showed similar performance. Furthermore, the ML models presented in this study show some promise as a complementary tool in project management decision making, more specifically decisions regarding planning, risk assessment and resource allocation. However, further studies in this area are of interest, in order to confirm the findings of this study and its limitations.
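Of the three model families named above, naive Bayes is compact enough to sketch from scratch. The Gaussian variant below is a minimal, generic illustration (made-up feature matrix, not the thesis's code or data).

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances,
    prediction by maximum log-posterior under a feature-independence assumption."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_, self.var_, self.prior_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.theta_.append(Xc.mean(axis=0))
            self.var_.append(Xc.var(axis=0) + 1e-9)  # smoothing avoids zero variance
            self.prior_.append(len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = []
        for mu, var, prior in zip(self.theta_, self.var_, self.prior_):
            log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            scores.append(np.log(prior) + log_lik.sum(axis=1))
        return self.classes_[np.argmax(scores, axis=0)]
```

In a defect-prediction setting, the rows of `X` would be per-file metric vectors and `y` the defective/non-defective labels derived from the issue tracker.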
5

Shippey, Thomas Joshua. "Exploiting abstract syntax trees to locate software defects". Thesis, University of Hertfordshire, 2015. http://hdl.handle.net/2299/16365.

Full text available
Abstract:
Context. Software defect prediction aims to reduce the large costs involved with faults in a software system. A wide range of traditional software metrics have been evaluated as potential defect indicators. These traditional metrics are derived from the source code or from the software development process. Studies have shown that no metric clearly outperforms another, and identifying defect-prone code using traditional metrics has reached a performance ceiling. Less traditional metrics have been studied, with these metrics being derived from the natural language of the source code. These newer, less traditional and finer grained metrics have shown promise within defect prediction. Aims. The aim of this dissertation is to study the relationship between short Java constructs and the faultiness of source code. To study this relationship this dissertation introduces the concepts of a Java sequence and a Java code snippet. Sequences are created by using the Java abstract syntax tree: the ordering of the nodes within the abstract syntax tree creates the sequences, while small subsequences of a sequence are the code snippets. The dissertation tries to find a relationship between the code snippets and faulty and non-faulty code. This dissertation also looks at the evolution of the code snippets as a system matures, to discover whether the code snippets significantly associated with faulty code change over time. Methods. To achieve the aims of the dissertation, two main techniques have been developed: finding defective code and extracting Java sequences and code snippets. Finding defective code has been split into two areas: finding the defect fix and defect insertion points. To find the defect fix points an implementation of the bug-linking algorithm has been developed, called S+e. Two algorithms were developed to extract the sequences and the code snippets. The code snippets are analysed using the binomial test to find which ones are significantly associated with faulty and non-faulty code. These techniques have been applied to five different Java datasets: ArgoUML, AspectJ and three releases of Eclipse.JDT.core. Results. There are significant associations between some code snippets and faulty code. Frequently occurring fault-prone code snippets include those associated with identifiers, method calls and variables. There are some code snippets significantly associated with faults that are always in faulty code. There are 201 code snippets significantly associated with faults across all five of the systems. The technique is unable to find any significant associations between code snippets and non-faulty code. The relationship between code snippets and faults seems to change as the system evolves, with more snippets becoming fault-prone as Eclipse.JDT.core evolved over the three releases analysed. Conclusions. This dissertation has introduced the concept of code snippets into software engineering and defect prediction. The use of code snippets offers a promising approach to identifying potentially defective code. Unlike previous approaches, code snippets are based on a comprehensive analysis of low level code features and potentially allow the full set of code defects to be identified. Initial research into the relationship between code snippets and faults has shown that some code constructs or features are significantly related to software faults. The significant associations between code snippets and faults have provided additional empirical evidence for some already-researched bad constructs within defect prediction. The code snippets have shown that some constructs significantly associated with faults are located in all five systems; although this set is small, finding any defect indicators that transfer successfully from one system to another is rare.
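The binomial test named in the methods section can be written down directly with the standard library. The one-sided exact form below, with hypothetical counts, is a generic sketch rather than the dissertation's analysis code.

```python
from math import comb

def binom_p_upper(k, n, p):
    """One-sided exact binomial test: P(X >= k) for X ~ Binomial(n, p).

    A small value suggests a snippet occurs in faulty code more often
    than the base rate p alone would explain."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))
```

For example, a snippet found in 9 of 10 sampled faulty files against a 50% base rate yields p ≈ 0.011, which would count as a significant association at the usual 5% level.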
6

Zheng, Xue Lin. "A Framework for Early Detection of Requirements Defects". Thesis, Griffith University, 2008. http://hdl.handle.net/10072/366377.

Full text available
Abstract:
This thesis is about early detection of requirements defects. Software-centred systems' defects can cause loss of life, loss of property, loss of data and economic losses. Requirements defects are a major source of system defects, so their early detection prevents software-centred systems' defects and thus reduces the various types of losses. In the past thirty years, many methods have been developed to detect requirements defects. The most prominent methods include inspections, automated static analysis, simulation, formal specifications and more recently model-checking. Each method has different strengths and weaknesses. The lack of integration of the different detection techniques produces a knowledge gap that causes problems with the repeatability, scalability, effectiveness, and efficiency of the detection process. This knowledge gap is enlarged by the lack of a well-specified defect classification scheme that specifies quality rules, collects defects, specifies defect patterns, and classifies the patterns. This thesis proposes a framework for early defect detection based on Behavior Trees, a representation which makes it practical to integrate the various detection techniques. Individual requirements are translated one at a time into Requirements Behavior Trees. These Requirements Behavior Trees are then integrated into an Integrated Behavior Tree that can be inspected, statically analysed, model checked and simulated. The framework is based on the hypothesis that if a well-specified defect classification scheme is developed, different types of detectors are integrated to detect patterns that suit their capabilities, and processes are developed to cover the complete requirements lifecycle, then the framework's detection results will be more effective, more efficient, more repeatable and more scalable than those of existing methods. The framework includes a Behavior Trees defect classification scheme. The scheme defines defect patterns for requirements written in English and requirements specified by Behavior Trees. The scheme has a variety of defect patterns, each of which captures the characteristics of a type of defect. Defect patterns are grouped together based on the quality rules that they violate. This framework and the hypothesis have been tested using four case studies. The case studies found that, compared to the Perspective-based Reading method and three conventional requirements analysis methods, the proposed framework was more effective and able to detect a broader range of defect types. However, because of the lack of tool support, the efficiency of the method is still questionable; it should improve with better tool support.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Full Text
7

Phaphoom, Nattakarn. "Pair Programming and Software Defects : A Case Study". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3513.

Full text available
Abstract:
Pair programming is a programming technique in which two programmers sit literally side by side working on the same task at the same computer. One member of the pair, called the "driver", is in charge of writing the code. The other member plays the role of the "navigator", working on more strategic tasks, such as looking for tactical errors, thinking about the overall structure, and finding better alternatives. Pair programming is claimed to improve product quality, reduce defects, and shorten time to market. On the other hand, it has been criticized for its cost efficiency. To increase the body of evidence regarding the real benefits of pair programming, this thesis investigates its effect on software defects and on the efficiency of defect correction. The analysis is based on 14 months of data on project artifacts and developers' activities collected from a large Italian manufacturing company. The team of 16 developers adopts a customized version of extreme programming and practices pair programming on a daily basis. We investigate the sources of defects and the defect correction activities for approximately 8% of the defects discovered during that time, and the enhancement activities for approximately 9% of new requirements. We then analyze whether pair programming has an effect on defect rate, on the duration and effort of defect correction, and on the precision of localizing defects. The result shows that pair programming reduces the introduction of new defects when code needs to be modified for defect corrections and enhancements.
8

Almossawi, Ali. "Investigating the architectural drivers of defects in open-source software systems : an empirical study of defects and reopened defects in GNOME". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76566.

Full text available
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 64-67).
In major software systems that are developed by competent software engineers, the existence of defects in production is unlikely to be an acceptable situation. And yet, we find that in several such systems, defects remain a reality. Furthermore, the number of changes that are fixed only to then be reopened is noticeable. The implications of having defects in a system can be frustrating for all stakeholders, and when they require constant rework, they can lead to the problematic code-test-code-test mode of development. For management, such conditions can result in slipped schedules and an increase in development costs and for upper management and users, they can result in losing confidence in the product. This study looks at the drivers of defects in the mature open-source project GNOME and explores the relationship between the various drivers of these defects and software quality. Using defect-activity and source-code data for 32 systems over a period of eight years, the work presents a multiple regression model capable of explaining 16.2% of defects and a logistic regression model capable of explaining between 13.6% and 18.1% of reopened defects. The study also shows that although defects in general and reopened defects appear to move together, defects in general correlate with a measure of complexity that captures how components connect to each other whereas reopened defects correlate with a measure that captures the inner complexities of components, thereby suggesting that different types of defects are correlated with different forms of complexity.
by Ali Almossawi.
S.M. in Engineering and Management
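The explanatory power reported above (a regression model "capable of explaining 16.2% of defects") is an R² figure. A minimal way to compute R² for a multiple regression with an intercept, using made-up data rather than the GNOME dataset, is:

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by a linear model with intercept:
    R^2 = 1 - RSS / TSS."""
    A = np.column_stack([np.ones(len(y)), X])        # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ beta
    return 1 - (residual @ residual) / ((y - y.mean()) @ (y - y.mean()))
```

With complexity metrics as columns of `X` and defect counts as `y`, an R² of 0.162 means the metrics jointly account for 16.2% of the variance in defects, as in the study's multiple regression model.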
9

Vandehei, Bailey R. "Leveraging Defects Life-Cycle for Labeling Defective Classes". DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2111.

Full text available
Abstract:
Data from software repositories are a very useful asset for building different kinds of models and recommender systems aimed at supporting software developers. Specifically, the identification of likely defect-prone files (i.e., classes in Object-Oriented systems) helps in prioritizing, testing, and analysis activities. This work focuses on automated methods for labeling a class in a version as defective or not. The most used methods for automated class labeling belong to the SZZ family and fail in various circumstances. Thus, recent studies suggest the use of the affected version (AV) as provided by developers and available in the issue tracker, such as JIRA. However, in many circumstances, the AV might not be used because it is unavailable or inconsistent. The aim of this study is twofold: 1) to measure the AV availability and consistency in open-source projects, and 2) to propose, evaluate, and compare to SZZ a new method for labeling defective classes, based on the idea that defects have a stable life-cycle in terms of the proportion of versions needed to discover and to fix the defect. Results related to 212 open-source projects from the Apache ecosystem, featuring a total of about 125,000 defects, show that the AV cannot be used in the majority (51%) of defects. Therefore, it is important to investigate automated methods for labeling defective classes. Results related to 76 open-source projects from the Apache ecosystem, featuring a total of about 6,250,000 classes that are affected by 60,000 defects and spread over 4,000 versions and 760,000 commits, show that the proposed method for labeling defective classes is, on average among projects and defects, more accurate in terms of Precision, Kappa, F1 and MCC than all previously proposed SZZ methods. Moreover, the improvement in accuracy from combining SZZ with defects' life-cycle information is statistically significant but practically irrelevant; overall and on average, labeling via the defects' life-cycle is more accurate than any SZZ method.
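The life-cycle idea above — that the gap between a defect's introduction and its fix spans a stable proportion of the version history — can be sketched as a labeling heuristic. The version indices, the use of the median, and the function names below are illustrative assumptions, not the thesis's exact procedure.

```python
from statistics import median

def estimate_introduced_version(fix_versions, past_lifecycles, n_versions):
    """Estimate, for each fixed defect, the version index that introduced it,
    assuming defects survive a stable proportion of the version history.

    fix_versions    : version index at which each defect was fixed
    past_lifecycles : observed lifetime proportions of already-labeled defects
    n_versions      : total number of versions in the project history
    """
    typical = median(past_lifecycles)            # proportion of versions a defect survives
    span = max(1, round(typical * n_versions))   # lifetime expressed in versions
    return [max(0, fix - span) for fix in fix_versions]
```

Every version between the estimated introduction and the fix would then have the class labeled defective, replacing the AV when it is missing or inconsistent.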
10

Arantes, Alessandro Oliveira. "REACTOR: Combining static analysis, testing and reverse engineering to detect software defects". Instituto Nacional de Pesquisas Espaciais (INPE), 2016. http://urlib.net/sid.inpe.br/mtc-m21b/2016/04.20.19.30.

Full text available
Abstract:
The use of computer systems to replace human labor in critical systems is increasingly common, and as these systems become more autonomous in decision making, they demand a high degree of quality and robustness. INPE develops embedded systems for scientific satellites and stratospheric balloons; consequently, the processes of verification and validation require special care in detecting and preventing defects. Given the complexity and the domain of the systems in question, these processes consume specialist manpower for long periods. In this scenario, techniques that can automatically support the test process provide a significant gain in specialist productivity and efficiency. To this end, this work performs reverse engineering of source code in order to support a combination of two V&V processes, static source code analysis and software testing, and thereby detect a wider range of defects. The proposed method, called REACTOR (Reverse Engineering for stAtic Code analysis and Testing to detect sOftwaRe defects), complements the traditional way static code analyzers work by using dynamic information obtained from an automated test case generator, which combines three different black-box techniques; it is also possible to infer a set of estimated expected results, similar to a test oracle. The combination of such techniques is not trivial, however, especially for tasks that commonly demand actions that are not easily automated. Furthermore, static analysis by itself cannot reveal several types of defects that can only be detected by combining static analysis with dynamic information. The REACTOR method has been implemented in a software tool, also called REACTOR, which spares testers a large amount of manual labor by automating the process and relying only on the application's source code. In addition, REACTOR was applied to several case studies, including one from the space application domain, and it performed better than three other well-known static code analyzers.
11

Jaber, Khaled M. "Supporting Project Tasks, Resources, Documents, and Defects Analysis in Software Project Management". Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1461609929.

Full text available
12

Hassan, Syed Karimuddin and Syed Muhammad. "Defect Detection in SRS using Requirement Defect Taxonomy". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5253.

Full text available
Abstract:
Context: Defects in the software requirements specification (SRS) can cause problems later in a project, because implementing poor requirements demands extra time, effort, resources and budget. Reading techniques such as checklist-based reading (CBR) help guide reviewers in identifying defects in the SRS during individual requirement inspections. Checklists list potential defects and problems to look for, but they often lack clear definitions and examples of each problem, and their levels of abstraction differ. There is therefore a need to identify existing defect types and classifiers and to consolidate them into a single taxonomy. Objectives: We developed a taxonomy of the defects found in requirement specifications and compared it with the checklist-based approach. The main objective was to investigate and compare the effectiveness and efficiency of the two inspection techniques (checklist and taxonomy) with M.Sc. software engineering students and industry practitioners, by performing both a controlled student experiment and a controlled industry experiment. Methods: A literature review, a controlled student experiment and a controlled industry experiment were the research methods used to fulfil the objectives of this study. The INSPEC and Google Scholar databases were used to find articles in the literature. The controlled student experiment was conducted with M.Sc. software engineering students, and the controlled industry experiment with industry practitioners, to evaluate the effectiveness and efficiency of the two treatments, checklist and taxonomy. Results: An extensive literature review helped us to identify several types of defects with their definitions and examples. We studied various defect classifiers, checklists, requirement defects and inspection techniques, and then built a taxonomy of requirement defects.
We evaluated whether the taxonomy performed better than the checklist using controlled experiments with students and practitioners. Neither the student experiment (p = 0.90 for effectiveness, p = 0.10 for efficiency) nor the practitioner experiment (p = 1.0 for effectiveness, p = 0.70 for efficiency) showed significant differences in effectiveness or efficiency. With so few practitioners, however, a statistical test carries little weight, so we also used the standard formulas to calculate effectiveness and efficiency directly. Two of the three reviewers using the taxonomy found more defect types than the three reviewers using the checklist; reviewers using the taxonomy found 10-15% more defects; and two of the three taxonomy reviewers were more productive (measured in hours) than the checklist reviewers. Although these results are better than those of the student experiment, the small number of subjects makes it hard to claim that reviewers using the taxonomy are more effective and efficient than those using the checklist. The post-experiment questionnaire revealed that the taxonomy is easier to use and to understand than the checklist technique, but harder to remember while inspecting an SRS. Conclusions: Previous researchers created taxonomies for their own purposes or on industry demand, and those taxonomies lack clear, understandable definitions. To overcome this problem, we built a taxonomy of requirement defects that includes definitions and examples. No claims are made on the basis of the student experiment because of its insignificant results for effectiveness and efficiency. The controlled industry experiment, however, showed that the taxonomy performed slightly better than the checklist in both efficiency (defect detection rate) and effectiveness (number of defects found).
From this we conclude that the taxonomy helps guide reviewers in identifying defects in an SRS, though not decisively; a further, larger-scale study with practitioners is recommended to obtain conclusive results.
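The standard measures referred to above are conventionally computed as the fraction of known defects an inspector detects (effectiveness) and the number of defects found per inspection hour (efficiency). A minimal sketch with hypothetical reviewer numbers, not taken from the experiment:

```python
def effectiveness(defects_found: int, total_defects: int) -> float:
    """Fraction of the known defects that an inspector detected."""
    return defects_found / total_defects

def efficiency(defects_found: int, hours_spent: float) -> float:
    """Defect detection rate: defects found per inspection hour."""
    return defects_found / hours_spent

# Hypothetical reviewer: 12 of 20 seeded defects found in 1.5 hours.
print(effectiveness(12, 20))  # 0.6
print(efficiency(12, 1.5))    # 8.0
```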
skarimuddin@yahoo.com, hassanshah357@gmail.com
Style APA, Harvard, Vancouver, ISO itp.
13

Powell, Daniel. "Formal Methods For Verification Based Software Inspection". Griffith University. School of Computing and Information Technology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030925.154706.

Pełny tekst źródła
Streszczenie:
Useful, independently repeatable processes are utilised in all branches of science and traditional engineering disciplines, but seldom in software engineering. This is particularly so for processes used to detect and correct defects in software systems. Code inspection, as introduced by Michael Fagan at IBM in the mid-1970s, is widely recognised as an effective technique for finding defects in software. Despite its reputation, code inspection as currently practised is not a strictly repeatable process, owing to the problems inspectors face when they attempt to paraphrase the complicated semantics of a unit of computer code. Verification-based software inspection, as advocated by the cleanroom software engineering community, requires that arguments of correctness be formulated from the code and its specification. These arguments rely on the reader being able to extract the semantics from the code. This thesis addresses the requirement for an independently repeatable, scalable and substantially automated method for yielding semantics from computer code in a complete, unambiguous and consistent manner, in order to facilitate, and make repeatable, verification-based code inspection. Current literature on the use of code inspection for verification of software is surveyed, and empirical studies comparing inspection to software testing and program proof are referenced. Current uses of formal methods in software engineering are discussed, with particular reference to their applications in verification. Forming the basis of the presented method is a systematic, and hence repeatable, approach to the derivation of program semantics. The theories and techniques proposed for deriving semantics from program code extend current algorithmic and heuristic techniques for deriving invariants.
Additionally, the techniques introduced yield weaker forms of invariant information which are also useful for verification, defect detection and correction. Methods for using these weaker invariant forms, and tools to support these methods, are introduced, as are algorithmic and heuristic techniques for investigating loop progress and termination. Some of these techniques have been automated in supporting tools, and hence the resulting defects can be repeatably identified. Throughout this thesis a strong emphasis is placed on describing implementable algorithms to realise the derivation techniques discussed. A number of these algorithms are implemented in a tool to support the application of the verification methods presented. The techniques and tools presented in this thesis are well suited, but not limited, to supporting rigorous methods of defect detection as well as formal and semi-formal reasoning about correctness. The automation of these techniques in tools to support practical, formal code reading and correctness arguments will assist in addressing the needs of trusted component technologies and the general requirement for quality in software.
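The role invariants play in such verification can be illustrated by checking a candidate loop invariant dynamically on every iteration. This is a generic sketch of invariant checking, not the thesis's derivation algorithms:

```python
def check_invariant(n: int, d: int) -> tuple[int, int]:
    """Integer division by repeated subtraction, with the candidate
    loop invariant  n == q*d + r  and  r >= 0  asserted on every
    iteration, and combined with the exit condition at loop end."""
    q, r = 0, n
    while r >= d:
        assert n == q * d + r and r >= 0, "invariant violated"
        q, r = q + 1, r - d
    # invariant + negated guard together establish correctness:
    assert n == q * d + r and 0 <= r < d
    return q, r

print(check_invariant(17, 5))  # (3, 2)
```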
Style APA, Harvard, Vancouver, ISO itp.
14

Powell, Daniel. "Formal Methods For Verification Based Software Inspection". Thesis, Griffith University, 2003. http://hdl.handle.net/10072/366466.

Pełny tekst źródła
Streszczenie:
Useful, independently repeatable processes are utilised in all branches of science and traditional engineering disciplines, but seldom in software engineering. This is particularly so for processes used to detect and correct defects in software systems. Code inspection, as introduced by Michael Fagan at IBM in the mid-1970s, is widely recognised as an effective technique for finding defects in software. Despite its reputation, code inspection as currently practised is not a strictly repeatable process, owing to the problems inspectors face when they attempt to paraphrase the complicated semantics of a unit of computer code. Verification-based software inspection, as advocated by the cleanroom software engineering community, requires that arguments of correctness be formulated from the code and its specification. These arguments rely on the reader being able to extract the semantics from the code. This thesis addresses the requirement for an independently repeatable, scalable and substantially automated method for yielding semantics from computer code in a complete, unambiguous and consistent manner, in order to facilitate, and make repeatable, verification-based code inspection. Current literature on the use of code inspection for verification of software is surveyed, and empirical studies comparing inspection to software testing and program proof are referenced. Current uses of formal methods in software engineering are discussed, with particular reference to their applications in verification. Forming the basis of the presented method is a systematic, and hence repeatable, approach to the derivation of program semantics. The theories and techniques proposed for deriving semantics from program code extend current algorithmic and heuristic techniques for deriving invariants.
Additionally, the techniques introduced yield weaker forms of invariant information which are also useful for verification, defect detection and correction. Methods for using these weaker invariant forms, and tools to support these methods, are introduced, as are algorithmic and heuristic techniques for investigating loop progress and termination. Some of these techniques have been automated in supporting tools, and hence the resulting defects can be repeatably identified. Throughout this thesis a strong emphasis is placed on describing implementable algorithms to realise the derivation techniques discussed. A number of these algorithms are implemented in a tool to support the application of the verification methods presented. The techniques and tools presented in this thesis are well suited, but not limited, to supporting rigorous methods of defect detection as well as formal and semi-formal reasoning about correctness. The automation of these techniques in tools to support practical, formal code reading and correctness arguments will assist in addressing the needs of trusted component technologies and the general requirement for quality in software.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Computing and Information Technology
Full Text
Style APA, Harvard, Vancouver, ISO itp.
15

Bowes, David Hutchinson. "Factors affecting the performance of trainable models for software defect prediction". Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/10978.

Pełny tekst źródła
Streszczenie:
Context. Reports suggest that defects in code cost the US in excess of $50 billion per year to put right. Defect prediction is an important part of software engineering: it allows developers to prioritise the code that needs to be inspected when trying to reduce the number of defects, and a small change in the number of defects found can have a significant impact on the cost of producing software. Aims. The aim of this dissertation is to investigate the factors which affect the performance of defect prediction models. Identifying the causes of variation in the way that variables are computed should help to improve the precision of defect prediction models and hence the cost-effectiveness of defect prediction. Methods. This dissertation is by published work. The first three papers examine variation in the independent variables (code metrics) and the dependent variable (number/location of defects). The fourth and fifth papers investigate the effect that different learners and datasets have on the predictive performance of defect prediction models. The final paper investigates the reported use of different machine learning approaches in studies published between 2000 and 2010. Results. The first and second papers show that the independent variables are sensitive to the measurement protocol used, which suggests that the way data is collected affects the performance of defect prediction. The third paper shows that dependent-variable data may be untrustworthy, as there is no reliable method for labelling a unit of code as defective or not. The fourth and fifth papers show that the dataset and learner used when producing defect prediction models have an effect on the performance of the models. The final paper shows that the approaches used by researchers to build defect prediction models vary, with good practices being ignored in many papers. Conclusions.
The measurement protocols for the independent and dependent variables used in defect prediction need to be clearly described so that results can be compared like with like. It is possible that one research group's predictive results score higher than another's because of the way they calculated the metrics, rather than because of the method used to build the model that predicts defect-prone modules. The machine learning approaches used by researchers also need to be clearly reported, in order to improve the quality of defect prediction studies and allow a larger corpus of reliable results to be gathered.
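The finding that learner and dataset both affect measured performance can be sketched with a toy example (hypothetical module data and deliberately simple learners, not the dissertation's models or datasets):

```python
# Eight hypothetical modules: (lines_of_code, is_defective).
modules = [
    (120, 1), (30, 0), (400, 1), (55, 0),
    (210, 0), (40, 0), (90, 0), (350, 1),
]

def majority_class(train):
    """Baseline learner: always predict the most common label."""
    label = round(sum(y for _, y in train) / len(train))
    return lambda loc: label

def loc_threshold(train, threshold=100):
    """Size-based learner: flag any module over a LOC threshold."""
    return lambda loc: 1 if loc > threshold else 0

def accuracy(model, data):
    return sum(model(loc) == y for loc, y in data) / len(data)

# Same data, different learners, different performance values.
for name, make in [("majority", majority_class), ("loc>100", loc_threshold)]:
    print(name, accuracy(make(modules), modules))
```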
Style APA, Harvard, Vancouver, ISO itp.
16

Allanqawi, Khaled Kh S. Kh. "A framework for the classification and detection of design defects and software quality assurance". Thesis, Kingston University, 2015. http://eprints.kingston.ac.uk/34534/.

Pełny tekst źródła
Streszczenie:
In the software development life cycles of today's heterogeneous environments, a pitfall businesses face is that software defect tracking, measurement and quality assurance do not start early enough in the development process. In fact, the cost of fixing a defect in a production environment is much higher than in the initial phases of the Software Development Life Cycle (SDLC), which is particularly true for Service Oriented Architecture (SOA). Thus the aim of this study is to develop a new framework for defect tracking and detection and quality estimation in the early stages, particularly the design stage, of the SDLC. Part of the objective of this work is to conceptualize, borrow and customize from known frameworks, such as object-oriented programming, to build a solid framework that uses automated, rule-based intelligent mechanisms to detect and classify defects in the software design of SOA. The framework on design defects and software quality assurance (DESQA) blends various design defect metrics and quality measurement approaches and provides measurements for both defect and quality factors. Unlike existing frameworks, it incorporates mechanisms for converting defect metrics into software quality measurements. The framework is evaluated using a research tool, a sample used to complete the Design Defects Measuring Matrix, and a data collection process. In addition, an evaluation using a case study demonstrates the use of the framework on a number of designs and produces an overall picture of defects and quality. The implementation part demonstrates how the framework can predict the quality level of the designed software. The results showed that a good level of quality estimation can be achieved based on the number of design attributes, the number of quality attributes and the number of SOA design defects.
Assessment shows that the metrics provide guidelines indicating the progress a software system has made and the quality of its design. Using these guidelines, we can develop more usable and maintainable software systems to meet the demand for efficient software applications. Another valuable result of this study is that developers try to keep backwards compatibility when they introduce new functionality; sometimes they defer necessary breaking changes to those newly introduced elements until future versions, giving their clients time to adapt their systems. This is a valuable practice for developers because it gives them more time to assess the quality of their software before releasing it. Further improvements to this research include investigating other design attributes and SOA design defects which can be computed, extending the tests we performed.
Style APA, Harvard, Vancouver, ISO itp.
17

von, Oldenburg Tim. "Making scope explorable in Software Development Environments to reduce defects and support program understanding". Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-24006.

Pełny tekst źródła
Streszczenie:
Programming language tools help software developers to understand a program and to recognize possible pitfalls. Used with the right knowledge, they can be instrumented to achieve better software quality. However, creating language tools that integrate well into the development environment and workflow is challenging. This thesis uses a user-centered design process to identify the needs of professional developers through in-depth interviews, address those needs in a concept, and finally implement and evaluate that concept. Taking 'scope' as an exemplary source of misconceptions in programming, a "Scope Inspector" plug-in for the Atom IDE, targeting experienced JavaScript developers in the open source community, is implemented.
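The kind of scope misconception the thesis targets in JavaScript has a close analogue in other languages; here it is sketched in Python, where closures capture variables, not values (an illustrative example, not taken from the thesis):

```python
# Classic scope pitfall: all three closures share the same variable `i`,
# so by the time they run, every one sees its final value.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2]

# Fix: bind the current value explicitly via a default argument,
# which is evaluated at definition time.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]
```

A scope-aware tool can surface exactly this distinction, showing which binding each closure actually refers to.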
Style APA, Harvard, Vancouver, ISO itp.
18

Da, Costa S. C. "The prediction of risk of welding defects at the procedure stage using computer knowledge based systems". Thesis, Cranfield University, 1992. http://dspace.lib.cranfield.ac.uk/handle/1826/4446.

Pełny tekst źródła
Streszczenie:
The purpose of this research was to develop a methodology to evaluate the likelihood of defective welds as a procedure proposal is entered into a computerised database system. The approach developed was assessed for hydrogen induced cold cracking (HICC), since this defect is a major problem in welding technology. An expert system was used to implement the methodology. The information for the expert system knowledge base was partly gathered from previous work in this area, and the technique for analysing and incorporating knowledge was organised in a structured form, including the major area to be attacked. The final system was implemented using an expert system shell. The global task of analysing a welding procedure was broken down into three stages. A welding procedure specification comprised the first stage. In the second stage, an interface between the expert system software and a database was implemented. Having proved the feasibility and advantages of integrating the expert system shell with a relational database, the remainder of the work was devoted to developing a strategy for operating the expert system and, in particular, dealing with uncertainty. Detailed validation of the knowledge base, and of the system as a whole, was confined to a single defect type in the belief that the modularity of the system would allow extension to other defect types, with the strategies developed in the present work remaining applicable. Results have shown that the system performs well in the specified area. Validation trials using simulated welding conditions generated by the expert system have shown a very good correlation with practical results for different classes of steels. The integration between approved welding procedure records and procedure qualification records could be the basis for complete welding database management. Practical application of this system could be extended for educational purposes and training.
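Knowledge bases for HICC risk typically build on composition measures such as the IIW carbon equivalent; a minimal rule-based sketch follows. The CE formula is the standard IIW one, but the 0.45 threshold is a common rule of thumb used here for illustration, not a value taken from the thesis, and a real knowledge base would also weigh hydrogen level, restraint and preheat:

```python
def carbon_equivalent(c, mn, cr, mo, v, ni, cu):
    """IIW carbon equivalent from weight-% composition:
    CE = C + Mn/6 + (Cr+Mo+V)/5 + (Ni+Cu)/15."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

def hicc_risk(ce, threshold=0.45):
    """Crude single-factor rule: flag elevated HICC risk above an
    assumed CE threshold."""
    return "elevated" if ce > threshold else "low"

ce = carbon_equivalent(c=0.18, mn=1.2, cr=0.1, mo=0.05, v=0.0, ni=0.1, cu=0.1)
print(round(ce, 3), hicc_risk(ce))  # 0.423 low
```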
Style APA, Harvard, Vancouver, ISO itp.
19

Van, Rooyen Gert-Jan. "Baseband compensation principles for defects in quadrature signal conversion and processing". Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49869.

Pełny tekst źródła
Streszczenie:
Thesis (PhD)--University of Stellenbosch, 2005.
ENGLISH ABSTRACT: Keywords: software-defined radio, SDR, quadrature mixing, quadrature modulation, quadrature demodulation, digital compensation, software radio, direct-digital synthesis, DDS. An often-stated goal of software-defined transceiver systems is to perform digital signal conversion as close to the antenna as possible by using high-rate converters. In this dissertation, alternative design principles are proposed, and it is shown that the signal processing techniques based on these principles improve on the prior system's accuracy, while maintaining system flexibility. Firstly, it is proposed that digital compensation can be used to reverse the effects of hardware inaccuracies in the RF front-end of a software-defined radio. Novel compensation techniques are introduced that suppress the signal artefacts introduced by typical frontend hardware. The extent to which such artefacts may be suppressed, is only limited by the accuracy by which they may be measured and digitally represented. A general compensation principle is laid down, which formalises the conditions under which optimal compensation may be achieved. Secondly, it is proposed that, in the design of such RF front-ends, a clear distinction should be drawn between signal processing complexity and frequency translation. It is demonstrated that conventional SDR systems often neglect this principle. As an alternative, quadrature mixing is shown to provide a clear separation between the frequency translation and signal processing problems. However, effective use of quadrature mixing as design approach necessitates the use of accurate compensation techniques to circumvent the hardware inaccuracies typically found in such mixers. Quadrature mixers are proposed as general-purpose front-ends for software-defined radios, and quadrature modulation and demodulation techniques are presented as alternatives to existing schemes. 
The inherent hardware inaccuracies are analysed and simulated, and appropriate compensation techniques are derived and tested. Finally, the theory is verified with a prototype system.
AFRIKAANSE OPSOMMING: Sleutelwoorde: sagteware-gedefinieerde radio, SDR, haaksfasige menging, haaksfasige modulasie, haaksfasige demodulasie, digitale kompensasie, sagteware-radio, direk-digitale sintese, DDS. 'n Gewilde stelling is dat digitale seinomsetting in sagteware-gedefinieerde kommunikasiestelsels so na as moontlik aan die antenna moet geskied deur gebruik te maak van hoëspoed omsetters. Hierdie verhandeling stel alternatiewe ontwerpsbeginsels voor, en toon aan dat hierdie beginsels die eersgenoemde stelsel se akkuraatheid verbeter, terwyl stelselbuigsaamheid gehandhaaf word. Dit word eerstens voorgestel dat digitale kompensasie gebruik word om die effekte van hardeware-onakkuraathede in die RF-koppelvlak van sagteware-gedefinieerde radio's om te keer. Nuwe kompensasietegnieke, wat seinartefakte weens koppelvlak-onakkuraathede kan onderdruk, word aangebied. Die mate waartoe hierdie artefakte onderdruk kan word, word slegs beperk deur die akkuraatheid waarmee dit gemeet en digitaal voorgestel kan word. 'n Algemene kompensasiebeginsel word neergelê waarin die voorwaardes vir optimale kompensasie vasgelê word. Tweedens word voorgestel dat 'n duidelike onderskeid getref word tussen seinverwerkingskompleksiteit en seinverskuiwing in RF-koppelvlakke. Daar word getoon dat konvensionele SDR-stelsels dikwels nie hierdie beginsel handhaaf nie. 'n Alternatief, naamlik haaksfasige menging, word voorgehou as 'n tegniek wat duidelik onderskei tussen seinverskuiwing en seinverwerking. Akkurate kompensasietegnieke is egter nodig om effektief van sulke mengers gebruik te maak. Haaksfasige mengers word voorgestel as veeldoelige koppelvlakke vir sagteware-gedefinieerde radio's, en haaksfasige modulasie- en demodulasietegnieke word voorgestel as plaasvervangers vir bestaande tegnieke. Die inherente hardeware-onakkuraathede word geanaliseer en gesimuleer, en geskikte kompensasietegnieke word afgelei en getoets. 
Laastens word die teoretiese resultate met 'n praktiese prototipe bevestig.
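The compensation principle described above, that artefact suppression is limited only by how accurately the artefacts can be measured, can be sketched numerically for gain/phase imbalance in a quadrature mixer. The imbalance model below is a textbook simplification (Q branch carries gain alpha and phase skew phi relative to the I branch), not the thesis's exact formulation:

```python
import math

def impair(i, q, alpha=1.1, phi=0.15):
    """Model quadrature-mixer imbalance: the Q branch picks up a gain
    error alpha and phase skew phi; the I branch is the reference."""
    return i, alpha * (q * math.cos(phi) + i * math.sin(phi))

def compensate(i, q, alpha=1.1, phi=0.15):
    """Invert the 2x2 imbalance matrix, assuming alpha and phi have
    been measured; recovery is exact up to measurement accuracy."""
    return i, (q / alpha - i * math.sin(phi)) / math.cos(phi)

i0, q0 = 0.7, -0.3
ii, qq = impair(i0, q0)
ri, rq = compensate(ii, qq)
print(abs(ri - i0) < 1e-12, abs(rq - q0) < 1e-12)  # True True
```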
Style APA, Harvard, Vancouver, ISO itp.
20

Suh, Caitlin D. "The Use of High-Throughput Virtual Screening Software in the Proposal of A Novel Treatment for Congenital Heart Defects". Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/cmc_theses/2260.

Pełny tekst źródła
Streszczenie:
Conventional screening of potential drug candidates through wet-lab affinity experiments with libraries of thousands of modified molecules is time- and resource-consuming, and it contributes to the widening time gap between the discovery of disease-causing mutations and the implementation of resulting novel treatments. It is necessary to explore whether the preliminary use of high-throughput virtual screening (HTVS) software such as PyRx can curb both the time and money spent discovering novel treatments for diseases such as congenital heart defects (CHDs). For example, AXIN2, a protein involved in a negative feedback loop inhibiting the Wnt/β-catenin signaling pathway important for cardiogenesis, has recently been associated with CHD. The loss-of-function mutation L10F on the tankyrase-binding domain of AXIN2 has been shown to upregulate the pathway through loss of inhibition ability, leading to the accumulation of intracellular β-catenin. In a different paper, however, AXIN2 has been shown to be stabilized using XAV-939, a small-molecule drug which targets tankyrase. PyRx and VMD will be used to modify the drug in order to increase its binding affinity to AXIN2, stabilizing the protein and reinstating its inhibitory property to treat CHDs. When used in conjunction with wet-lab experiments, HTVS software may decrease the costs and time required to bring a potentially life-saving treatment into use.
Style APA, Harvard, Vancouver, ISO itp.
21

Oliveira, Itelvina Silva de. "Teste baseado em defeitos para ambientes de data warehouse". Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1413.

Pełny tekst źródła
Streszczenie:
As organizações necessitam gerenciar informações para obter a melhoria contínua dos seus processos de negócios e agregar conhecimento que ofereça suporte ao processo decisório. Estas informações, muitas vezes, são disponibilizadas por ambientes de Data Warehouse (DW), nos quais os dados são manipulados e transformados. A qualidade dos dados nesses ambientes é essencial para a correta tomada de decisão, tornando-se imprescindível a aplicação de testes. O objetivo deste trabalho é elaborar e validar a aplicação de uma abordagem de teste para DW com o emprego de critérios da técnica de teste baseado em defeitos. A aplicação da abordagem possibilitou testar três fases de desenvolvimento do DW, nas quais estão as Fontes de Dados, processo ETL (Extraction, Transformation and Load) e dados do DW. O critério de teste Análise de Mutantes foi aplicado ao processo ETL por meio de operadores de mutação SQL e a Análise de Instâncias de Dados Alternativas foi aplicada nas fontes de dados e nos dados do DW por meio de classes de defeito nos dados. Essas classes foram geradas por meio da análise e associação dos problemas de qualidade de dados nas fases de desenvolvimento do DW. Os resultados obtidos em estudos de caso permitiram a validação da aplicabilidade e eficácia da técnica de teste baseado em defeitos para ambientes de DW, possibilitando assim revelar quais defeitos podem ocorrer na geração do DW que poderiam prejudicar a qualidade dos dados armazenados nesses ambientes.
Organizations need to manage information to continuously improve their business processes and to aggregate knowledge that supports decision-making. This information is often provided by Data Warehouse (DW) environments, in which data are handled and transformed. The quality of the data in these environments is essential for correct decisions, making the application of tests indispensable. The objective of this work is to develop and validate a testing approach for DW environments using criteria from fault-based testing techniques. Applying the approach made it possible to test three phases of DW development: the data sources, the ETL (Extraction, Transformation and Load) process, and the DW data. The Mutation Analysis test criterion was applied to the ETL process through SQL mutation operators, and Alternative Data Instances Analysis was applied to the data sources and the DW data through fault classes on the data. These classes were generated by analyzing and associating data quality problems across the DW development stages. The results obtained in the case studies allowed the applicability and effectiveness of the fault-based testing technique for DW environments to be assessed, revealing which faults can occur during DW generation and harm the quality of the data stored in these environments.
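Mutation Analysis over ETL queries can be sketched as follows. The operator set shown is illustrative, not the thesis's actual SQL mutation operators: each rule rewrites one construct of a query, producing a mutant that the test data must be able to distinguish from the original:

```python
# Assumed, illustrative SQL mutation operators: (original, replacement).
OPERATORS = [
    ("INNER JOIN", "LEFT JOIN"),  # join-type mutation
    (">=", ">"),                  # relational-operator mutation
    ("SUM(", "COUNT("),           # aggregate-function mutation
]

def mutants(query: str):
    """Yield one mutant per applicable operator (first occurrence only)."""
    for old, new in OPERATORS:
        if old in query:
            yield query.replace(old, new, 1)

etl = ("SELECT region, SUM(amount) FROM sales s "
       "INNER JOIN dim d ON s.k = d.k WHERE amount >= 0 GROUP BY region")
for m in mutants(etl):
    print(m)
```

A mutant that produces the same loaded data as the original for every test input survives, signalling that the test data is too weak to expose that class of ETL fault.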
Style APA, Harvard, Vancouver, ISO itp.
22

Ahmed, Israr, i Shahid Nadeem. "Minimizing Defects Originating from Elicitation, Analysis and Negotiation (E and A&N) Phase in Bespoke Requirements Engineering". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4070.

Pełny tekst źródła
Streszczenie:
Defect prevention (DP) in the early stages of the software development life cycle (SDLC) is far more cost-effective than in later stages. The requirements elicitation and analysis & negotiation (E and A&N) phases of the requirements engineering (RE) process are critical and are a major source of requirements defects. A poor E and A&N process may lead to a software requirements specification (SRS) full of defects such as missing, ambiguous, inconsistent, misunderstood and incomplete requirements. If these defects are identified and fixed only in later stages of the SDLC, they cause major rework at extra cost and effort. Organizations spend about half of their total project budget on avoidable rework, and the majority of defects originate from RE activities. This study is an attempt to prevent requirements-level defects from penetrating into later stages of the SDLC. For this purpose, empirical and literature studies are presented in this thesis. The empirical study was carried out with the help of six companies from Pakistan and Sweden through interviews, and the literature study through literature reviews. This study explores the most common requirements defect types, their causes, defect severity levels (major or minor), DP techniques (DPTs) and methods, the defect identification techniques used in the software development industry, and the problems with these DPTs. It also describes possible major differences between Swedish and Pakistani software companies in terms of defect types and the rate of defects originating from the E and A&N phases. On the basis of the results, solutions are proposed to prevent requirements defects during the RE process, minimizing the defects that originate from the E and A&N phases of RE in bespoke requirements engineering (BESRE).
Style APA, Harvard, Vancouver, ISO itp.
23

Руденко, Александр Антонович. "Вероятностные модели и методы оценивания надежности программных средств с учетом вторичных дефектов". Thesis, Полтавский национальный технический университет им. Ю. Кондратюка, 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/19065.

Pełny tekst źródła
Streszczenie:
Диссертация на соискание ученой степени кандидата технических наук по специальности 05.13.06 – информационные технологии – Национальный технический университет "Харьковский политехнический институт", Харьков, 2015. Диссертация посвящена разработке моделей, методов оценки надежности программно-технических комплексов, информационной технологии на основе учета внесения вторичных дефектов. Как показывает проведенный анализ, необходимость обеспечения точности оценки надежности программного обеспечения обуславливает актуальность научных исследований, посвященных разработке и совершенствованию методов и моделей оценки. В существующих моделях оценки надежности не учитывается фактор вторичных дефектов или этому аспекту не уделяется внимание вообще. Это может привести, с одной стороны, к неэффективному применению и распределению методов и средств повышения надежности, а с другой, к недооценке рисков, связанных с возникновением отказов. Усовершенствованы вероятностные модели оценки надежности программных средств на основе учета параметров вторичных дефектов, путем модификаций функций риска этих моделей, что позволяет адекватно отображать процессы тестирования и сопровождения программных средств. В рамках исследования был проведен анализ классификаций моделей, анализ вероятностных моделей повышения надежности на предмет возможности их модификаций с тем, чтобы учитывать вторичные дефекты. Наиболее целесообразно в контексте поставленной задачи использовать модель Джелински-Моранды. Разработан метод оценивания числа вторичных дефектов программных средств, основанный на анализе статистических данных проявления первичных дефектов программных средств, что позволяет повысить точность количественных оценок эксплуатационных показателей. Потребность в разработке метода вызвана трудностями аналитического нахождения вторичных дефектов на основе моделей оценки надежности программных средств. 
В методе оценивания числа вторичных дефектов по статистическим данным выявления дефектов учитываются факторы раннего и поздних этапов тестирования (эксплуатации), что соответствует реалиям соответствующих этапов жизненного цикла программ.
The dissertation on obtaining the scientific degree of candidate of technical sciences in the specialty 05.13.06 – information technologies – National technical University "Kharkiv Polytechnic Institute", Kharkov, 2015. The dissertation dedicated to the developing of models, methods of reliability estimation of software-technical complexes of information technology on the basis of making secondary defects. Scientific results are: improving probabilistic models of reliability estimation of software based on the parameters of secondary defects by modifying the risk function of these models that allows to reflect processes of testing and maintenance of software; method of estimating secondary defects of software tools that is based on the analysis of statistical data of manifestation of primary defects of software tools that allows to raise the accuracy of the quantitative assessment of performance indicators; the method of calculating the average intensity of manifestation of defects and the average change in the intensity of manifestation of defects with the help of modified model Jelinski-Moranda that, unlike existing, takes into account factor of secondary defects that allows to verify the reliability of software tools. Information technology of assessment the secure of software tools taking into account the secondary defects is devised basing on the method of estimating the number of secondary defects according to the statistics of defect detection and the method of calculating the average intensity of manifestation of defects and the average change in the intensity of manifestation of defects. The proposed models and methods allow to raise the accuracy of estimation of reliability of software and hardware complexes that is achieved by taking into account the factor of secondary defects.
Style APA, Harvard, Vancouver, ISO itp.
24

Руденко, Олександр Антонович. "Імовірнісні моделі та методи оцінювання надійності програмних засобів з урахуванням вторинних дефектів". Thesis, ТОВ "Фірма "Техсервіс", 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/19064.

Pełny tekst źródła
Streszczenie:
Дисертація на здобуття наукового ступеня кандидата технічних наук за спеціальністю 05.13.06 – інформаційні технології – Національний технічний університет "Харківський політехнічний інститут", Харків, 2015. Дисертація присвячена розробці моделей, методів оцінювання надійності програмно-технічних комплексів, інформаційної технології на основі врахування внесення вторинних дефектів. Науковими результатами є: удосконалення імовірнісних моделей оцінки надійності програмних засобів на основі врахування параметрів вторинних дефектів шляхом модифікації функцій ризику цих моделей, що дозволяє адекватно відображати процеси тестування і супроводу програмних засобів; метод оцінювання числа вторинних дефектів програмних засобів, що ґрунтується на аналізі статистичних даних прояву первинних дефектів програмних засобів, що дозволяє підвищити точність оцінок кількісних експлуатаційних показників; метод обчислення середньої інтенсивності прояву дефектів і середньої зміни інтенсивності прояву дефектів за допомогою модифікованої моделі Джелінські-Моранди, у якому, на відміну від існуючих, враховується фактор вторинних дефектів, що дозволяє верифікувати показники надійності програмних засобів. На основі методу оцінювання числа вторинних дефектів за статистичними даними виявлення дефектів та методу обчислення середньої інтенсивності прояву дефектів і середньої зміни інтенсивності прояву дефектів розроблена інформаційна технологія оцінювання надійності програмних засобів з урахуванням вторинних дефектів. Запропоновані моделі і методи дозволяють підвищити точність оцінювання надійності програмно-технічних комплексів, що досягається за рахунок урахування фактора вторинних дефектів.
Dissertation for the scientific degree of Candidate of Technical Sciences in specialty 05.13.06 (information technologies), National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2015. The dissertation is devoted to developing models and methods for estimating the reliability of software-technical complexes, and an information technology, based on accounting for the introduction of secondary defects. The scientific results are: improved probabilistic models for estimating software reliability that account for secondary-defect parameters by modifying the models' risk functions, which allows the testing and maintenance processes of software to be reflected adequately; a method for estimating the number of secondary defects in software, based on the analysis of statistical data on the manifestation of primary defects, which raises the accuracy of quantitative operational indicators; and a method for calculating the average intensity of defect manifestation and the average change in that intensity using a modified Jelinski-Moranda model which, unlike existing ones, takes the secondary-defect factor into account and thus allows software reliability indicators to be verified. Based on the method for estimating the number of secondary defects from defect-detection statistics and the method for calculating the average intensity of defect manifestation and its average change, an information technology for assessing software reliability with account for secondary defects is developed. The proposed models and methods raise the accuracy of reliability estimates for software and hardware complexes, which is achieved by taking the secondary-defect factor into account.
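The modified Jelinski-Moranda model described above builds on the classic model, in which the failure intensity before the i-th fix is proportional to the number of remaining defects. A minimal sketch, assuming a hypothetical per-fix secondary-defect probability `p` (the thesis's exact risk-function modification is not reproduced here):

```python
def jm_intensity(N, phi, i):
    # Classic Jelinski-Moranda: before the i-th fix, N - (i - 1) defects
    # remain, each manifesting with per-defect intensity phi.
    return phi * (N - i + 1)

def jm_intensity_secondary(N, phi, i, p):
    # Hypothetical imperfect-debugging variant: each of the i - 1 fixes
    # introduces a secondary defect with probability p, so the expected
    # number of remaining defects shrinks by only (1 - p) per fix.
    return phi * (N - (1 - p) * (i - 1))
```

With `p = 0` the variant reduces to the classic model; with `p > 0` the intensity decays more slowly, mirroring the thesis's point that secondary defects degrade reliability growth.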
Style APA, Harvard, Vancouver, ISO itp.
25

Ye, Xin. "Automated Software Defect Localization". Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1462374079.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
26

Porto, Faimison Rodrigues. "Cross-project defect prediction with meta-Learning". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-21032018-163840/.

Pełny tekst źródła
Streszczenie:
Defect prediction models assist testers in prioritizing the most defect-prone parts of the software. The approach called Cross-Project Defect Prediction (CPDP) refers to the use of known external projects to compose the training set. This approach is useful when a company's historical defect data are insufficient or unsuitable for composing a training set. Although the principle is attractive, the predictive performance is a limiting factor. In recent years, several methods were proposed aiming at improving the predictive performance of CPDP models. However, to the best of our knowledge, there is no evidence of which CPDP methods typically perform best, nor of which CPDP methods perform better for a specific application domain. In fact, there is no machine learning algorithm suitable for all domains. The task of selecting an appropriate algorithm for a given application domain is investigated in the meta-learning literature. A meta-learning model is characterized by its capacity to learn from previous experiences and to adapt its inductive bias dynamically according to the target domain. In this work, we investigate the feasibility of using meta-learning for the recommendation of CPDP methods. Three main goals were pursued in this thesis. First, we provide an experimental analysis of the feasibility of using Feature Selection (FS) methods as an internal procedure to improve the performance of two specific CPDP methods. Second, we investigate which CPDP methods typically perform best, and whether the typically best methods perform best on the same project datasets. The results reveal that the most suitable CPDP method for a project can vary according to the project's characteristics, which leads to the third investigation of this work. We investigate the particularities inherent to the CPDP context and propose a meta-learning solution able to learn from previous experiences and recommend a suitable CPDP method according to the characteristics of the project being predicted. We evaluate the learning capacity of the proposed solution and its performance relative to the typically best CPDP methods.
Modelos de predição de defeitos auxiliam profissionais de teste na priorização de partes do software mais propensas a conter defeitos. A abordagem de predição de defeitos cruzada entre projetos (CPDP) refere-se à utilização de projetos externos já conhecidos para compor o conjunto de treinamento. Essa abordagem é útil quando a quantidade de dados históricos de defeitos é inapropriada ou insuficiente para compor o conjunto de treinamento. Embora o princípio seja atrativo, o desempenho de predição é um fator limitante nessa abordagem. Nos últimos anos, vários métodos foram propostos com o intuito de melhorar o desempenho de predição de modelos CPDP. Contudo, na literatura, existe uma carência de estudos comparativos que apontam quais métodos CPDP apresentam melhores desempenhos. Além disso, não há evidências sobre quais métodos CPDP apresentam melhor desempenho para um domínio de aplicação específico. De fato, não existe um algoritmo de aprendizado de máquina que seja apropriado para todos os domínios de aplicação. A tarefa de decisão sobre qual algoritmo é mais adequado a um determinado domínio de aplicação é investigado na literatura de meta-aprendizado. Um modelo de meta-aprendizado é caracterizado pela sua capacidade de aprender a partir de experiências anteriores e adaptar seu viés de indução dinamicamente de acordo com o domínio alvo. Neste trabalho, nós investigamos a viabilidade de usar meta-aprendizado para a recomendação de métodos CPDP. Nesta tese são almejados três principais objetivos. Primeiro, é conduzida uma análise experimental para investigar a viabilidade de usar métodos de seleção de atributos como procedimento interno de dois métodos CPDP, com o intuito de melhorar o desempenho de predição. Segundo, são investigados quais métodos CPDP apresentam um melhor desempenho em um contexto geral. 
Nesse contexto, também é investigado se os métodos com melhor desempenho geral apresentam melhor desempenho para os mesmos conjuntos de dados (ou projetos de software). Os resultados revelam que os métodos CPDP mais adequados para um projeto podem variar de acordo com as características do projeto sendo predito. Essa constatação conduz à terceira investigação realizada neste trabalho. Foram investigadas as várias particularidades inerentes ao contexto CPDP a fim de propor uma solução de meta-aprendizado capaz de aprender com experiências anteriores e recomendar métodos CPDP adequados, de acordo com as características do software. Foram avaliados a capacidade de meta-aprendizado da solução proposta e a sua performance em relação aos métodos base que apresentaram melhor desempenho geral.
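The recommendation idea behind this kind of meta-learning can be sketched as nearest-neighbour voting over project meta-features. Everything below is invented for illustration (the meta-features, the `method_A`/`method_B` labels, and the numbers are assumptions, not the thesis's actual meta-dataset):

```python
import math

# Hypothetical meta-dataset: per-project meta-features (e.g. normalized size,
# defect ratio, metric skew) paired with the CPDP method that worked best there.
META = [
    ((0.9, 0.10, 1.2), "method_A"),
    ((0.8, 0.12, 1.1), "method_A"),
    ((0.2, 0.45, 0.3), "method_B"),
    ((0.3, 0.40, 0.4), "method_B"),
]

def recommend(meta_features, k=3):
    # k-NN over meta-features: vote among the k most similar past projects.
    nearest = sorted(META, key=lambda mf: math.dist(mf[0], meta_features))[:k]
    votes = {}
    for _, method in nearest:
        votes[method] = votes.get(method, 0) + 1
    return max(votes, key=votes.get)
```

A project whose meta-features resemble the first two rows gets `method_A` recommended; real meta-learners replace the voting with a trained meta-model.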
Style APA, Harvard, Vancouver, ISO itp.
27

Jain, Achin. "Software defect content estimation: A Bayesian approach". Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26932.

Pełny tekst źródła
Streszczenie:
Software inspection is a method to detect errors in software artefacts early in the development cycle. At the end of the inspection process the inspectors need to decide whether the inspected artefact is of sufficient quality or not. Several methods have been proposed to assist in making this decision, such as capture-recapture methods and Bayesian approaches. In this study these methods are analyzed and compared, and a new Bayesian approach for software inspection is proposed. All of the estimation models rely on an underlying assumption that the inspectors are independent. However, this assumption of independence is not necessarily true in practice, as most inspection teams interact with each other and share their findings. We therefore studied a new Bayesian model for defect estimation in which the inspectors share their findings, and compared it with the Bayesian model (Gupta et al. 2003) in which inspectors examine the artefact independently. The simulations were carried out under realistic software conditions with a small number of difficult defects and a few inspectors. The models were evaluated on the basis of decision accuracy and median relative error, and our results suggest that the dependent-inspector assumption improves the decision accuracy (DA) over the previous Bayesian model and the CR models.
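The capture-recapture baseline compared here treats two inspectors as two "captures"; the classic Lincoln-Petersen estimator is its smallest instance. A minimal sketch with invented counts (not data from the thesis):

```python
def lincoln_petersen(n1, n2, m):
    # n1, n2: defects found by inspectors A and B; m: defects found by both.
    # Estimated total number of defects in the artefact: n1 * n2 / m.
    if m == 0:
        raise ValueError("no overlap between inspectors; estimate undefined")
    return n1 * n2 / m

# Inspector A finds 8 defects, B finds 6, with 4 found by both:
total_est = lincoln_petersen(8, 6, 4)   # estimated total defects
found = 8 + 6 - 4                       # distinct defects actually found
remaining_est = total_est - found       # estimated undetected defects
```

Note that when inspectors share findings, the overlap `m` is inflated and the total is underestimated, which is one motivation for the dependent-inspector Bayesian model studied in the thesis.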
Style APA, Harvard, Vancouver, ISO itp.
28

CAVALCANTI, Diego Tavares. "Estudo do uso de vocabulários para analisar o impacto de relatórios de defeitos a código-fonte". Universidade Federal de Campina Grande, 2012. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1839.

Pełny tekst źródła
Streszczenie:
Localizar e corrigir defeitos são tarefas comuns no processo de manutenção de software. Entretanto, a atividade de localizar entidades de código que são possivelmente defeituosas e que necessitam ser modificadas para a correção de um defeito, não é trivial. Geralmente, desenvolvedores realizam esta tarefa por meio de um processo manual de leitura e inspeção do código, bem como de informações cadastradas em relatórios de defeitos. De fato, é necessário que os desenvolvedores tenham um bom conhecimento da arquitetura e do design do software a fim de realizarem tal tarefa. Entretanto, este conhecimento fica espalhado por entre a equipe e requer tempo para ser adquirido por novatos. Assim, é necessário o desenvolvimento de técnicas que auxiliem na tarefa de análise de impacto de relatórios de defeitos no código, independente da experiência do desenvolvedor que irá executá-la. Neste trabalho, apresentamos resultados de um estudo empírico no qual avaliamos se a análise automática de vocabulários de relatórios de defeitos e de software pode ser útil na tarefa de localizar defeitos no código. Nele, analisamos similaridade de vocabulários como fator para sugerir classes que são prováveis de serem impactadas por um dado relatório de defeito. Realizamos uma avaliação com oito projetos maduros de código aberto, desenvolvidos em Java, que utilizam Bugzilla e JIRA como seus repositórios de defeitos. Nossos resultados indicam que a análise de ambos os vocabulários é, de fato, uma fonte valiosa de informação, que pode ser utilizada para agilizar a tarefa de localização de defeitos. Para todos os sistemas estudados, ao considerarmos apenas análise de vocabulário, vimos que, mesmo com um ranking contendo apenas 8% das classes de um projeto, foi possível encontrar classes relacionadas ao defeito buscado em até 75% dos casos. 
Portanto, podemos concluir que, mesmo que não possamos utilizar vocabulários de software e de relatórios de defeitos como únicas fontes de informação, eles certamente podem melhorar os resultados obtidos, ao serem combinados com técnicas complementares.
Locating and fixing bugs described in bug reports are routine tasks in software development processes. A major effort must be undertaken to successfully locate the (possibly faulty) entities in the code that must be worked on. Generally, developers map bug reports to code through manual reading and inspection of both bug reports and the code itself. In practice, they must rely on their knowledge of the software architecture and design to perform the mapping in an efficient and effective way. However, it is well known that architectural and design knowledge is spread out among developers. Hence, the success of such a task depends directly on choosing the right developer. In this paper, we present results of an empirical study we performed to evaluate whether the automated analysis of bug reports and software vocabularies can be helpful in the task of locating bugs. We conducted our study on eight versions of six mature Java open-source projects that use Bugzilla and JIRA as bug tracking systems. In our study, we used Information Retrieval techniques to assess the similarity of bug report and code entity vocabularies. For each bug report, we ranked all code entities according to the measured similarity. Our results indicate that vocabularies are indeed a valuable source of information that can be used to narrow down the bug-locating task. For all the studied systems, considering vocabulary similarity only, a Top 8% list of entities contains about 75% of the target entities. We conclude that while vocabularies cannot be the sole source of information, they can certainly improve results if combined with other techniques.
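The vocabulary-similarity ranking can be sketched with a hand-rolled TF-IDF and cosine measure. Class names, tokens, and the bug report below are invented for illustration; the study itself used full IR tooling over real projects:

```python
import math
from collections import Counter

def tfidf(docs):
    # docs: name -> token list. Returns (vectors, idf); idf is smoothed
    # with +1 so terms occurring in every document still carry some weight.
    df = Counter()
    for toks in docs.values():
        df.update(set(toks))
    n = len(docs)
    idf = {t: math.log(n / d) + 1.0 for t, d in df.items()}
    vecs = {name: {t: c * idf[t] for t, c in Counter(toks).items()}
            for name, toks in docs.items()}
    return vecs, idf

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Class vocabularies (identifiers split into words) and a bug report:
classes = {
    "NetClient":  "socket connect timeout retry error".split(),
    "ParserUtil": "parse token stream buffer error".split(),
}
vecs, idf = tfidf(classes)
report = Counter("timeout error on connect".split())
rvec = {t: c * idf[t] for t, c in report.items() if t in idf}
ranking = sorted(vecs, key=lambda name: cosine(rvec, vecs[name]), reverse=True)
```

Here the class sharing the report's rarer terms (`NetClient`) ranks first; the Top-8% lists in the study are exactly such rankings, cut off early.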
Style APA, Harvard, Vancouver, ISO itp.
29

Sherwood, Patricia Ann. "Inspections : software development process for building defect free software applied in a small-scale software development environment /". Online version of thesis, 1990. http://hdl.handle.net/1850/10598.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
30

Tran, Qui Can Cuong. "Empirical evaluation of defect identification indicators and defect prediction models". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2553.

Pełny tekst źródła
Streszczenie:
Context. Quality assurance plays a vital role in the software engineering development process. It can be seen as the set of activities that observe the execution of a software project to validate whether it behaves as expected. Quality assurance activities contribute to the success of a software project by reducing quality risks, and accurate, timely planning, launching and controlling of these activities helps to improve project performance. However, quality assurance also consumes time and money, partly because its activities may not focus on the areas most likely to contain defects. Recent, more accurate findings suggest that quality assurance should concentrate on the parts of the system with the highest defect potential, supported by defect predictors, in order to save time and cost. Many available models recommend using the project's historical information as a defect indicator to predict the number of defects in a software project. Objectives. In this thesis, new models are defined to predict the number of defects in the classes of single software systems. In addition, the new models are built on combinations of product metrics used as defect predictors. Methods. A systematic review drawing on several article sources, including IEEE Xplore, the ACM Digital Library, and SpringerLink, was conducted to find existing models related to the topic. Open-source projects were used as training sets from which to extract information about occurred defects and system evolution. The training data were then used to define the prediction models. Afterwards, the models were applied to other systems providing test data, i.e. information not used for training, to validate the accuracy and correctness of the models. Results. Two models were built: one predicts the number of defects in a class, the other predicts whether a class contains a bug. Conclusions. The proposed models combine product metrics as defect predictors and can be used either to predict the number of defects in a class or to predict whether a class contains bugs. This combination of product metrics can improve the accuracy of defect prediction and of quality assurance activities by giving hints on potentially defect-prone classes before defect-search activities are performed. It can therefore improve software development and quality assurance in terms of time and cost.
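Combining product metrics into a single defect-proneness score, as the models above do, can be sketched with a tiny logistic regression trained by stochastic gradient descent. The metric values and labels below are made up, and the thesis's actual models and metric sets differ:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    # X: per-class feature lists (e.g. normalized LOC, complexity);
    # y: 0/1 defect labels. The last weight is the bias term.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xs, label in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, xs)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            err = label - p
            for i, xi in enumerate(xs):
                w[i] += lr * err * xi
            w[-1] += lr * err
    return w

def predict(w, xs):
    # Probability that the class described by metrics xs is defective.
    z = sum(wi * xi for wi, xi in zip(w, xs)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: two clean classes, two defective ones.
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 1, 1]
w = train_logreg(X, y)
```

After training, classes with metric profiles near the defective examples score above 0.5, which is the "bug / no bug" decision the second model makes.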
Style APA, Harvard, Vancouver, ISO itp.
31

Akinwale, Olusegun. "DuoTracker tool support for software defect data collection and analysis /". abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447633.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Gray, David Philip Harry. "Software defect prediction using static code metrics : formulating a methodology". Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/11067.

Pełny tekst źródła
Streszczenie:
Software defect prediction is motivated by the huge costs incurred as a result of software failures. In an effort to reduce these costs, researchers have been utilising software metrics to try and build predictive models capable of locating the most defect-prone parts of a system. These areas can then be subject to some form of further analysis, such as a manual code review. It is hoped that such defect predictors will enable software to be produced more cost effectively, and/or be of higher quality. In this dissertation I identify many data quality and methodological issues in previous defect prediction studies. The main data source is the NASA Metrics Data Program Repository. The issues discovered with these well-utilised data sets include many examples of seemingly impossible values, and much redundant data. The redundant, or repeated data points are shown to be the cause of potentially serious data mining problems. Other methodological issues discovered include the violation of basic data mining principles, and the misleading reporting of classifier predictive performance. The issues discovered lead to a new proposed methodology for software defect prediction. The methodology is focused around data analysis, as this appears to have been overlooked in many prior studies. The aim of the methodology is to be able to obtain a realistic estimate of potential real-world predictive performance, and also to have simple performance baselines with which to compare against the actual performance achieved. This is important as quantifying predictive performance appropriately is a difficult task. The findings of this dissertation raise questions about the current defect prediction body of knowledge. So many data-related and/or methodological errors have previously occurred that it may now be time to revisit the fundamental aspects of this research area, to determine what we really know, and how we should proceed.
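The repeated-data-point issue the proposed methodology targets is straightforward to audit. A minimal sketch, with invented feature tuples standing in for dataset instances:

```python
from collections import Counter

def duplicate_report(instances):
    # Verbatim-repeated instances that land on both sides of a train/test
    # split let a classifier score well by memorisation rather than learning,
    # which is the data mining problem described above.
    counts = Counter(instances)
    repeated = {row: c for row, c in counts.items() if c > 1}
    return {"total": len(instances), "unique": len(counts), "repeated": repeated}

report = duplicate_report([
    (12, 3, 0.4), (12, 3, 0.4), (55, 9, 1.2), (12, 3, 0.4), (7, 1, 0.1),
])
```

Running such a check before any model training, and deduplicating or at least reporting the overlap, is the kind of data analysis step the methodology foregrounds.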
Style APA, Harvard, Vancouver, ISO itp.
33

Hameed, Muhammad Muzaffar, i Muhammad Zeeshan ul Haq. "DefectoFix : An interactive defect fix logging tool". Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5268.

Pełny tekst źródła
Streszczenie:
Despite the large efforts made during the development phase to produce a fault-free system, most software implementations still require testing of the entire system. The main problem in software testing is automation that could verify the system without manual intervention. Recent work in software testing concerns automated fault injection using fault models from a repository. This requires a lot of effort, which adds to the complexity of the system. To address this issue, this thesis proposes the DefectoFix framework. DefectoFix is an interactive defect-fix logging tool comprising five components: a Version Control System (VCS), source code files, a differencing algorithm, Defect Fix Model (DFM) creation, and additional information (project name, class name, file name, revision number, diff model). The proposed differencing algorithm extracts detailed information by detecting differences in source code files, performing comparisons at the sub-tree level. The extracted differences, together with the additional information, are stored as a DFM in a repository. DFMs can later be used for the automated fault injection process. The DefectoFix framework was validated with a tool developed in the Ruby programming language. Our case study confirms that the proposed framework generates correct DFMs and is useful in automated fault injection and software validation activities.
Style APA, Harvard, Vancouver, ISO itp.
34

Kasianenko, Stanislav. "Predicting Software Defectiveness by Mining Software Repositories". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78729.

Pełny tekst źródła
Streszczenie:
One of the important aims of the continuous software development process is to localize and remove all existing program bugs as fast as possible. This goal is closely related to software engineering and defectiveness estimation. Many big companies have started to store source code in software repositories as the latter grew in popularity. These repositories usually include static source code as well as detailed data on defects in software units, which allows all the data to be analyzed without interrupting the programming process. The main problem with large, complex software is the impossibility of controlling everything manually while the price of an error can be very high. This may result in developers missing defects at the testing stage and in increased maintenance cost. The general research goal is to find a way of predicting future software defectiveness with high precision. Reducing maintenance and development costs will help shorten time-to-market and increase software quality. To address the problem of estimating residual defects, an approach was found to predict the residual defectiveness of software by means of machine learning. As the primary machine learning algorithm, a regression decision tree was chosen as a simple and reliable solution. Data for this tree are extracted from a static source code repository and divided into two parts: software metrics and defect data. Software metrics are formed from static code, and defect data are extracted from reported issues in the repository. In addition to already reported bugs, these are augmented with unreported bugs found in the repository's "discussions" section and parsed by a natural language processor. Metrics were filtered with a correlation algorithm to remove those unrelated to the defect data, and the remaining metrics were weighted so that the most correlated combination could be used as a training set for the decision tree.
As a result, the built decision tree model forecasts defectiveness with an 89% chance for the particular product. The experiment was conducted on a Java project in a GitHub repository and predicted the number of possible bugs in a single file (Java class). It resulted in a designed method for predicting possible defectiveness from the static code of a single large (more than 1000 files) software version.
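A regression decision tree of the kind used here can be illustrated by its smallest case: a single-split stump minimising squared error over defect counts. Metric names and numbers below are invented:

```python
def best_stump(rows, targets):
    # rows: list of {metric: value} dicts per file; targets: defect counts.
    # Returns (metric, threshold, left_mean, right_mean) minimising the
    # summed squared error of predicting each side by its mean.
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best = None
    for metric in rows[0]:
        values = sorted({r[metric] for r in rows})
        for lo, hi in zip(values, values[1:]):
            thr = (lo + hi) / 2  # candidate split: midpoint of adjacent values
            left = [t for r, t in zip(rows, targets) if r[metric] <= thr]
            right = [t for r, t in zip(rows, targets) if r[metric] > thr]
            cost = sse(left) + sse(right)
            if best is None or cost < best[0]:
                best = (cost, metric, thr,
                        sum(left) / len(left), sum(right) / len(right))
    return best[1:]

rows = [{"loc": 50, "cc": 3}, {"loc": 60, "cc": 9},
        {"loc": 500, "cc": 2}, {"loc": 600, "cc": 8}]
bugs = [0, 0, 4, 6]
metric, thr, left_mean, right_mean = best_stump(rows, bugs)
```

A full regression tree applies this split search recursively; here the stump already picks `loc` over `cc` because size correlates with the toy defect counts.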
Style APA, Harvard, Vancouver, ISO itp.
35

Portnoy, William. "Distributable defect localization using Markov models /". Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6883.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Liljeson, Mattias, i Alexander Mohlin. "Software defect prediction using machine learning on test and source code metrics". Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4162.

Pełny tekst źródła
Streszczenie:
Context. Software testing is the process of finding faults in software while executing it. The results of the testing are used to find and correct faults. Software defect prediction estimates where faults are likely to occur in source code. The results from the defect prediction can be used to optimize testing and ultimately improve software quality. Machine learning, which concerns computer programs learning from data, is used to build prediction models which can then be used to classify data. Objectives. In this study we, in collaboration with Ericsson, investigated whether software metrics from source code files combined with metrics from their respective tests predict faults with better prediction performance compared to using only metrics from the source code files. Methods. A literature review was conducted to identify inputs for an experiment. The experiment was applied on one repository from Ericsson to identify the best performing set of metrics. Results. The prediction performance results of three metric sets are presented and compared with each other. Wilcoxon's signed rank tests are performed on four different performance measures for each metric set and each machine learning algorithm to demonstrate significant differences in the results. Conclusions. We conclude that metrics from tests can be used to predict faults. However, the combination of source code metrics and test metrics does not outperform using only source code metrics. Moreover, we conclude that models built with metrics from the test metric set, with minimal information about the source code, can in fact predict faults in the source code.
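The Wilcoxon signed-rank statistic used for these comparisons can be computed by hand for paired scores. The paired error counts below are invented; the study applied the test to four performance measures across several learning algorithms:

```python
def wilcoxon_w(x, y):
    # Paired Wilcoxon signed-rank statistic: rank |y - x| (zero differences
    # dropped, ties given average ranks) and return the smaller of the
    # positive and negative signed rank sums.
    diffs = [b - a for a, b in zip(x, y) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_pos, w_neg)

# Per-release error counts of two models; the equal pair (7, 7) is dropped.
a = [10, 10, 9, 20, 7]
b = [8, 12, 5, 15, 7]
w = wilcoxon_w(a, b)
```

A full significance test compares W with a critical value or computes a p-value (e.g. `scipy.stats.wilcoxon`); only the statistic itself is shown here.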
Style APA, Harvard, Vancouver, ISO itp.
37

Mahmood, Zaheed. "An analysis of software defect prediction studies through reproducibility and replication". Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/20826.

Pełny tekst źródła
Streszczenie:
Context. Software defect prediction is essential in reducing software development costs and in helping companies save their reputation. Defect prediction uses mathematical models to identify patterns associated with defects within code. Resources spent reviewing the entire code can be minimised by focusing on defective parts of the code. Recent findings suggest many published prediction models may not be reliable. Critical scientific methods for identifying reliable research are Replication and Reproduction. Replication can test the external validity of studies while Reproduction can test their internal validity. Aims. The aims of my dissertation are first to study the use and quality of replications and reproductions in defect prediction. Second, to identify factors that aid or hinder these scientific methods. Methods. My methodology is based on tracking the replication of 208 defect prediction studies identified in a highly cited Systematic Literature Review (SLR) [Hall et al. 2012]. I analyse how often each of these 208 studies has been replicated and determine the type of replication carried out. I use quality, citation counts, publication venue, impact factor, and data availability from all the 208 papers to see if any of these factors are associated with the frequency with which they are replicated. I further reproduce the original studies that have been replicated in order to check their internal validity. Finally, I identify factors that affect reproducibility. Results. Only 13 (6%) of the 208 studies are replicated, most of which fail a quality check. Of the 13 replicated original studies, 62% agree with their replications and 38% disagree. The main feature of a study associated with being replicated is that original papers appear in the Transactions of Software Engineering (TSE) journal. The number of citations an original paper had was also an indicator of the probability of being replicated. 
In addition, studies conducted using closed-source data have more replications than those based on open-source data. Of the 5 original studies I reproduced, the results of 4 differed from the original results by more than 5%. Four factors are likely to have caused these failures: i) the lack of a single version of the data initially used by the original study; ii) the different dataset versions available have different properties that impact model performance; iii) unreported data preprocessing; and iv) inconsistent results from alternative versions of the same tools. Conclusions. Very few defect prediction studies are replicated. The lack of replication and the failure of reproduction mean that it remains unclear how reliable defect prediction is. Further investigation into these failures provides key aspects researchers need to consider when designing primary studies and performing replication and reproduction studies. Finally, I provide practical steps for improving the likelihood of replication and the chances of validating a study by reporting key factors.
Style APA, Harvard, Vancouver, ISO itp.
38

Curhan, Lisa A. 1961. "Software defect tracking during new product development of a computer system". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34824.

Pełny tekst źródła
Streszczenie:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2005.
Includes bibliographical references (p. 74-75).
Software defects (colloquially known as "bugs") have a major impact on the market acceptance and profitability of computer systems. Sun Microsystems markets both hardware and software for a wide variety of customer needs. The integration of hardware and software is a key core capability for Sun, and minimizing the quantity and impact of software defects on this integration during new product development is essential to executing a timely, high-quality product. To analyze the effect of software defects on the product development cycle for a midrange computer system, I have used a particular computer platform, the Productl server, as a case study. The objective of this work was to use Sun's extensive database of software defects as a source for data mining in order to draw conclusions about the types of software defects that tend to occur during new product development and early production ramp. I also interviewed key players on the Productl development team for more insight into the causes and impacts of software defects for this platform. Some of the major themes that resulted from this study include: The impact of defects is not necessarily proportional to their quantity; some types of defects have a much higher cost to fix due to customer impact, time needed to fix, or the wide distribution of the software in which they are embedded. Software requirements need to be vetted extensively before production of new code; this is especially critical for platform-specific requirements. The confluence of new features, new software structure and new hardware can lead to a greater density of software defects; the higher number of defects associated with the new System Controller code supports this conclusion. Current limitations of defect data mining: automated extraction of information is most efficient when it can be applied to numbers and short text strings, but the evaluation of software defects for root cause cannot be easily summarized in a few words or numbers. Therefore, an intelligent classification methodology for root causes of software defects, to be included in Sun's defect database, would be extremely useful to increase the utility of the database for institutional learning. Software defect data mining seems to be underutilized at Sun; I have barely touched the surface of the information that can be extracted from our "BugDB" defect database. This data resource is rich with history, and we should extract and analyze this type of data frequently.
by Lisa A. Curhan.
S.M.
39

Isunza Navarro, Abgeiba Yaroslava. "Evaluation of Attention Mechanisms for Just-In-Time Software Defect Prediction". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288724.

Full text of the source
Abstract:
Just-In-Time Software Defect Prediction (JIT-DP) focuses on predicting errors in software at change-level with the objective of helping developers identify defects while the development process is still ongoing, and improving the quality of software applications. This work studies deep learning techniques by applying attention mechanisms that have been successful in, among others, Natural Language Processing (NLP) tasks. We introduce two networks named Convolutional Neural Network with Bidirectional Attention (BACNN) and Bidirectional Attention Code Network (BACoN) that employ a bi-directional attention mechanism between the code and message of a software change. Furthermore, we examine BERT [17] and RoBERTa [57] attention architectures for JIT-DP. More specifically, we study the effectiveness of the aforementioned attention-based models to predict defective commits compared to the current state of the art, DeepJIT [37] and TLEL [101]. Our experiments evaluate the models by using software changes from the OpenStack open source project. The results showed that attention-based networks outperformed the baseline models in terms of accuracy in the different evaluation settings. The attention-based models, particularly BERT and RoBERTa architectures, demonstrated promising results in identifying defective software changes and proved to be effective in predicting defects in changes of new software releases.
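The scaled dot-product attention at the heart of such architectures can be sketched in a few lines of NumPy. Here toy "code token" embeddings attend to "commit message" embeddings in one direction; all shapes and names are illustrative, not taken from BACNN or BACoN:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V, weights

# Toy embeddings: 5 code tokens attend over 3 commit-message tokens.
rng = np.random.default_rng(0)
code_emb = rng.normal(size=(5, 8))   # 5 code tokens, dimension 8
msg_emb = rng.normal(size=(3, 8))    # 3 message tokens, dimension 8
ctx, w = scaled_dot_product_attention(code_emb, msg_emb, msg_emb)
assert ctx.shape == (5, 8)                       # one context vector per code token
assert np.allclose(w.sum(axis=1), 1.0)           # each row is a distribution
```

A bi-directional variant would additionally let the message tokens attend over the code tokens and combine the two context matrices.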
40

DUTTA, BINAMRA. "Enterprise Software Metrics: How To Add Business Value". Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1239239432.

Full text of the source
41

Siahaan, Antony. "Defect correction based domain decomposition methods for some nonlinear problems". Thesis, University of Greenwich, 2011. http://gala.gre.ac.uk/7144/.

Full text of the source
Abstract:
Defect correction schemes, as a class of nonoverlapping domain decomposition methods, offer several advantages in the way they split a complex problem into several subdomain problems of lower complexity. The schemes need a nonlinear solver to take care of the residual at the interface. The adaptive-α solver can converge locally in the ∞-norm, where the sufficient condition requires a relatively small local neighbourhood and the problem must have a strongly diagonally dominant Jacobian matrix with a very small condition number. Yet its advantage in computational cost can be highly significant, since it needs only a scalar as the approximation of the Jacobian matrix. Other nonlinear solvers employed for the schemes are a Newton-GMRES method, a Newton method with a finite difference Jacobian approximation, and nonlinear conjugate gradient solvers with the Fletcher-Reeves and Polak-Ribière search direction formulas. The schemes are applied to three nonlinear problems. The first problem is heat conduction in a multichip module, where the domain is assembled from many components of different conductivities and physical sizes. Here the implementations of the schemes satisfy the component meshing and gluing concept. A finite difference approximation of the residual of the governing equation turns out to be a better defect equation than the equality of normal derivatives. Of all the nonlinear solvers implemented in the defect correction scheme, the nonlinear conjugate gradient method with the Fletcher-Reeves search direction has the best performance. The second problem is a 2D single-phase fluid flow with heat transfer, where the PHOENICS CFD code is used to run the subdomain computation. The Newton method with a finite difference Jacobian is a reasonable interface solver for coupling these subdomain computations. The final problem is multiphase heat and moisture transfer in a porous textile.
The PHOENICS code is also used to solve the system of partial differential equations governing the multiphase process in each subdomain, while the coupling of the subdomain solutions is handled by the defect correction schemes through some FORTRAN code. A scheme using a modified-α method fails to obtain decent solutions in both the single-layer and two-layer cases. On the other hand, the scheme using the above Newton method produces satisfactory results for both cases, leading initially distant interface data to a good convergent solution. However, it is found that, in general, the number of nonlinear iterations of the defect correction schemes increases with mesh refinement.
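One of the interface solvers mentioned above, a Newton method with a finite-difference Jacobian approximation, can be sketched on a generic toy system (purely illustrative; the thesis applies it to the interface equations of the decomposed domains):

```python
import numpy as np

def newton_fd(F, x, tol=1e-10, h=1e-7, max_iter=50):
    """Newton's method with a forward-difference Jacobian approximation."""
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        n = len(x)
        J = np.empty((n, n))
        for j in range(n):               # build the Jacobian column by column
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - f) / h
        x = x - np.linalg.solve(J, f)    # Newton update
    return x

# Toy nonlinear system: x0^2 + x1^2 = 2 and x0 = x1, with root (1, 1).
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
root = newton_fd(F, np.array([2.0, 0.5]))
assert np.allclose(root, [1.0, 1.0], atol=1e-8)
```

The forward-difference step `h` trades Jacobian accuracy against the cost of extra residual evaluations, which is the same trade-off an interface solver faces when each residual evaluation requires a full subdomain solve.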
42

Land, Lesley Pek Wee, Information Systems, Technology & Management, Australian School of Business, UNSW. "Software group reviews and the impact of procedural roles on defect detection performance". Awarded by: University of New South Wales. School of Information Systems, Technology and Management, 2000. http://handle.unsw.edu.au/1959.4/21838.

Full text of the source
Abstract:
Software reviews (inspections) have received widespread attention for ensuring the quality of software, by finding and repairing defects in software products. A typical review process consists of two stages critical for defect detection: individual review followed by group review. This thesis addresses two attributes to improve our understanding of the task model: (1) the need for review meetings, and (2) the use of roles in meetings. The controversy over review meeting effectiveness has been consistently raised in the literature. Proponents maintain that the review meeting is the crux of the review process, resulting in group synergism and qualitative benefits (e.g. user satisfaction). Opponents argue against meetings because the costs of organising and conducting them are high, and there is no net meeting gain. The persistence of these diverse views is the main motivation behind this thesis. Although commonly prescribed in meetings, roles have not yet been empirically validated. Three procedural roles (moderator, reader, recorder) were considered. A conceptual framework on software reviews was developed, from which the main research questions were identified. Two experiments were conducted. Review performance was operationalised in terms of true defects and false positives. The review product was COBOL code. The results indicated that in terms of true defects, group reviews outperformed the average individual but not nominal group reviews (aggregates of individual reviews). However, groups have the ability to filter false positives from the individuals' findings. Roles provided limited benefits in improving group reviews. Their main function is to reduce process loss, by encouraging systematic consideration of the individuals' findings. When two or more reviewers find a defect during individual reviews, it is likely to be carried through to the meeting (plurality effect).
Groups employing roles reported more 'new' false positives (not identified during preparation) than groups without roles. Overall, subjects' ability at defect detection was low. This thesis suggests that reading technologies may be helpful for improving reviewer performance. The inclusion of an author role may also reduce the level of false positive detection. The results have implications for the design and support of the software review process.
43

Freitas, Diogo Machado de. "Geração evolucionária de heurísticas para localização de defeitos de software". Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9010.

Full text of the source
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Fault localization is a stage of the software life cycle that demands important resources, such as the time and effort spent on a project. There are several initiatives towards automating the fault localization process and reducing the associated resources. Many techniques are based on heuristics that use information (a spectrum) obtained from the execution of test cases in order to measure how suspicious each program element is of being defective. Spectrum data generally refer to code coverage and test results (pass or fail). The present work presents two approaches based on the Genetic Programming algorithm for the fault localization problem: a method to compose a new heuristic from a set of existing ones; and a method for constructing heuristics based on data from program mutation analysis. The innovative aspects of both methods refer to the joint investigation of: (i) specialization of heuristics for certain programs; (ii) application of an evolutionary approach to the generation of heuristics with non-linear equations; (iii) creation of heuristics based on the combination of traditional heuristics; (iv) use of coverage and mutation spectra extracted from the test activity; (v) analysis and comparison of the efficacy of methods that use coverage and mutation spectra for fault localization; and (vi) quality analysis of the mutation spectra as a data source for fault localization. The results point to the competitiveness of both approaches in their contexts.
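A concrete example of the kind of spectrum-based heuristic such evolved formulas combine is the classic Tarantula metric, which scores each program element from how often failing versus passing tests execute it (a standard formula from the fault-localization literature, not one produced by this work):

```python
def tarantula(ef, ep, nf, np_):
    """Tarantula suspiciousness from a coverage spectrum.
    ef/ep: failing/passing tests that execute the element;
    nf/np_: failing/passing tests that do not."""
    total_f, total_p = ef + nf, ep + np_
    fail_rate = ef / total_f if total_f else 0.0
    pass_rate = ep / total_p if total_p else 0.0
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0

# An element covered by all 3 failing tests but only 1 of 7 passing tests
# gets a high suspiciousness score:
s = tarantula(ef=3, ep=1, nf=0, np_=6)
assert abs(s - 0.875) < 1e-12    # 1.0 / (1.0 + 1/7) = 7/8
```

Ranking all program elements by such a score is what lets a developer inspect the most suspicious code first; the thesis's evolutionary approach searches for non-linear combinations of formulas like this one.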
44

Sun, Boya. "PRECISION IMPROVEMENT AND COST REDUCTION FOR DEFECT MINING AND TESTING". Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1321827962.

Full text of the source
45

Saxena, Kaustubh. "Investigation of the Effect of the Number of Inspectors on the Software Defect Estimates". Thesis, North Dakota State University, 2012. https://hdl.handle.net/10365/26714.

Full text of the source
Abstract:
Capture-recapture models help software managers by providing post-inspection estimates of the defects remaining in a software artifact, to determine if a re-inspection is necessary. These estimates are calculated using the number of unique faults found per inspector and the overlap of faults found by inspectors during an inspection cycle. A common belief is that the accuracy of the capture-recapture estimates improves with the inspection team size. This, however, has not been empirically studied. This paper empirically investigates the effect of the number of inspectors on the estimates produced by capture-recapture models, using inspection data with varying numbers and types of inspectors. The results show that the SC (Sample Coverage) estimators are best suited to software inspections and need the fewest inspectors to achieve accurate and precise estimates. Our results also provide a detailed analysis of the number of inspectors necessary to obtain estimates within 5-20% of the actual defect count.
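The basic capture-recapture idea behind these estimators can be illustrated with the two-inspector Lincoln-Petersen estimator, a textbook special case (the SC estimators evaluated in the thesis are more sophisticated):

```python
def lincoln_petersen(n1, n2, overlap):
    """Estimate the total defect count from two inspectors' findings:
    N_hat = n1 * n2 / m, where m defects were found by both."""
    if overlap == 0:
        raise ValueError("no overlap: the estimate is unbounded")
    return n1 * n2 / overlap

# Inspector A finds 8 defects, inspector B finds 6, and 4 are found by both:
n_hat = lincoln_petersen(8, 6, 4)
assert n_hat == 12.0                  # estimated total defects in the artifact
found = 8 + 6 - 4                     # union of the two inspectors' findings
remaining = n_hat - found
assert remaining == 2.0               # estimated defects still undetected
```

A manager would trigger a re-inspection when `remaining` exceeds some quality threshold; a small overlap relative to the individual counts signals that many defects are still hidden.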
46

Silva, Maria Goreti Simão da. "Mineração de repositórios de software: modelos de previsão de pedidos de evolução de software". Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10983.

Full text of the source
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
The discovery, or confirmation, of trends and patterns in the evolution of software systems has given relevance to the mining of software repositories. Software Engineering thus resorts to specific approaches for mining data originating from software construction, such as source code, version history (logs), and error reports (defect tracking), among others. The members of the development team are a valuable resource in the process of developing and maintaining software. To optimise their work, integrated software tools linked to development activities have emerged that allow data (such as evolution requests and version control repositories, among others) to be stored automatically. These data can then be retrieved and properly processed to obtain information that is important for improving the software development process. Mining the repositories of the development process makes it possible to detect trends and patterns both in the development process and in the developed artefacts, thus constituting an important tool to support the management of that process. With this study we intend to use the information contained in repositories of evolution requests to create models that predict the distribution of those requests over time. Such models are useful for managing the software development and maintenance process, in that they make it possible to predict periods in which the density of requests will be higher, in contrast with others in which there are fewer requests; this information is relevant for allocating human resources to the development and maintenance process. The approach to be used aims to study which types of models are most adequate, depending on the volume of existing historical data and the release pattern to which the repository refers.
In particular, we want to know: - Is the choice of the "best" model relatively stable, or very volatile? The implication is that we may have to update models very frequently, or hardly at all. - Do models that incorporate seasonal information become dominant? If so, how much historical data is needed for the seasonal information to be relevant?
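One of the simplest candidate models for such request time series, a seasonal-naive forecast, can be sketched as follows (an illustrative baseline, not a model taken from the thesis itself):

```python
def seasonal_naive(history, season, horizon):
    """Forecast each future period with the value observed one season earlier."""
    if len(history) < season:
        raise ValueError("need at least one full season of history")
    return [history[-season + (h % season)] for h in range(horizon)]

# Monthly counts of evolution requests with a yearly (12-month) pattern:
monthly = [30, 28, 35, 40, 38, 33, 25, 22, 31, 36, 41, 39]
forecast = seasonal_naive(monthly, season=12, horizon=3)
assert forecast == [30, 28, 35]   # the next 3 months mirror last year's values
```

A seasonal model like this only becomes competitive once at least one full season of history is available, which is exactly the question the study raises about how much historical data seasonal information needs.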
47

Kristiansen, Jan Maximilian Winther. "Software Defect Analysis : An Empirical Study of Causes and Costs in the Information Technology Industry". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11120.

Full text of the source
Abstract:
The area of software defects is not thoroughly studied in current research, even though it is estimated to be one of the most expensive problems in industry. Hence, certain researchers characterise the lack of research as a scandal within software engineering. Little research has been performed on the root causes of defects, even though we have classification schemes which aim to classify the what, where and why of software defects. We want to investigate the root causes of software defects through both qualitative and quantitative methods. We collected defect reports from three different types of projects in the defect tracking system of Company X. The first project was concerned with the development of a general core of functionality which other projects could use. The second was aimed at the mass-software market, while the third was software tailored to the needs of a client. These defect reports were analysed by both qualitative and quantitative methods. The qualitative methods were based on grounded theory and tried to establish a theory of why some defects require extensive effort to correct, through analysis of the discussions in the defect reports. The quantitative methods were used to describe differences between defects which required extensive or little effort to correct. In the qualitative analysis, we found four main root causes which explain why a group of defects require extensive effort to correct: the location of the defect is hard to determine, the defect requires long discussion or clarification, incorrect corrections introduce new defects, and missing functionality must be implemented or existing functionality re-implemented. A comparison between the four root causes and the project types revealed that the root causes were influenced by the project types. The first project had a larger degree of discussion and incorrect corrections than the second and third projects.
The second and third projects were more concerned with hard-to-locate defects and with implementation of missing functionality or re-implementation of functionality. Similarly, a comparison against another organisation showed there were differences with regard to the root causes of extensive effort. This showed how systematic analysis of defect reports can yield software process improvement opportunities. In the quantitative analysis, we found differences between defects requiring extensive or little effort to correct across the project types. The defects of the first project that required extensive effort to correct were due to incorrect algorithms or methods, were injected during the design phase, and carried a high risk of regressions. In the second project, the defects requiring extensive effort to correct were due to algorithms, methods, functions, classes and objects, concerned the core, platform, and user interface layers, were injected during the design phase, and carried lower regression risks. In the third project, the defects which required extensive effort to correct were due to assignment and initialisation of variables, or to functions, classes and objects, were related to the core layer, were injected during the coding phase, and carried an average regression risk of medium. The defects requiring little effort to correct in the core project concerned assignment or initialisation of variables and checking statements, carried lower regression risk, and were injected during the coding phase. In the second project, easy-to-correct defects concerned checking statements in the code and had a low regression risk. In the third project, defects which required little effort to correct were due to checking statements and interfaces with third-party libraries, carried lower regression risk, and stemmed from requirements. The quantitative analysis contained high levels of unspecified values for defects requiring little effort to correct.
The levels of unspecified attributes were lower for defects which required extensive effort to correct. We concluded that there were differences among project types with regard to the root causes of defects, and that there were similar differences between the different levels of effort required to correct defects. However, the study was not able to measure how these differences influenced the root causes, as it was performed in a descriptive manner.
48

Davis, James Collins. "On the Impact and Defeat of Regular Expression Denial of Service". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98593.

Full text of the source
Abstract:
Regular expressions (regexes) are a widely-used yet little-studied software component. Engineers use regexes to match domain-specific languages of strings. Unfortunately, many regex engine implementations perform these matches with worst-case polynomial or exponential time complexity in the length of the string. Because they are commonly used in user-facing contexts, super-linear regexes are a potential denial of service vector known as Regular expression Denial of Service (ReDoS). Part I gives the necessary background to understand this problem. In Part II of this dissertation, I present the first large-scale empirical studies of super-linear regex use. Guided by case studies of ReDoS issues in practice (Chapter 3), I report that the risk of ReDoS affects up to 10% of the regexes used in practice (Chapter 4), and that these findings generalize to software written in eight popular programming languages (Chapter 5). ReDoS appears to be a widespread vulnerability, motivating the consideration of defenses. In Part III I present the first systematic comparison of ReDoS defenses. Based on the necessary conditions for ReDoS, a ReDoS defense can be erected at the application level, the regex engine level, or the framework/runtime level. In my experiments I report that application-level defenses are difficult and error prone to implement (Chapter 6), that finding a compatible higher-performing regex engine is unlikely (Chapter 7), that optimizing an existing regex engine using memoization incurs (perhaps acceptable) space overheads (Chapter 8), and that incorporating resource caps into the framework or runtime is feasible but faces barriers to adoption (Chapter 9). In Part IV of this dissertation, we reflect on our findings. By leveraging empirical software engineering techniques, we have exposed the scope of potential ReDoS vulnerabilities, and given strong motivation for a solution. To assist practitioners, we have conducted a systematic evaluation of the solution space. 
We hope that our findings assist in the elimination of ReDoS, and more generally that we have provided a case study in the value of data-driven software engineering.
Doctor of Philosophy
Software commonly performs pattern-matching tasks on strings. For example, when validating input in a Web form, software commonly tests whether an input fits the pattern of a credit card number or an email address. Software engineers often implement such string-based pattern matching using a tool called regular expressions (regexes). Regexes permit software engineers to succinctly describe the sequences of characters that make up common "languages" like the set of valid Visa credit card numbers (16 digits, starting with a 4) or the set of valid emails (some characters, an '@', and more characters including at least one '.'). Using regexes on untrusted user input in this manner may be a dangerous decision because some regexes take a long time to evaluate. These slow regexes can be exploited by attackers in order to carry out a denial of service attack known as Regular expression Denial of Service (ReDoS). To date, ReDoS has led to outages affecting hundreds of websites and tens of thousands of users. While the risk of ReDoS is well known in theory, in this dissertation I present the first large-scale empirical studies measuring the extent to which slow regular expressions are used in practice. I found that about 10% of real regular expressions extracted from hundreds of thousands of software projects can exhibit longer-than-expected worst-case behavior in popular programming languages including JavaScript, Python, and Ruby. Motivated by these findings, I then consider a range of ReDoS solution approaches: application refactoring, regex engine replacement, regex engine optimization, and resource caps. I report that application refactoring is error-prone, and that regex engine replacement seems unlikely due to incompatibilities between regex engines. Some resource caps are more successful than others, but all resource cap approaches struggle with adoption.
My novel regex engine optimizations seem the most promising approach for protecting existing regex engines, offering significant time reductions with acceptable space overheads.
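The contrast described above between a well-behaved regex and a super-linear one can be made concrete. The patterns below are illustrative, using Python's backtracking `re` engine, one of the engine families the dissertation examines:

```python
import re

# Safe, linear-time pattern: a Visa-like number (16 digits starting with 4).
assert re.fullmatch(r"4\d{15}", "4111111111111111") is not None
assert re.fullmatch(r"4\d{15}", "5111111111111111") is None

# Super-linear pattern: the nested quantifiers in (a+)+$ force a backtracking
# engine to try exponentially many ways of splitting the run of 'a's before
# it can reject an input that almost matches.
evil = re.compile(r"(a+)+$")
assert evil.match("a" * 18) is not None        # matches immediately
assert evil.match("a" * 18 + "!") is None      # rejected only after ~2^18 attempts
```

Each additional 'a' in the almost-matching input roughly doubles the rejection time, and that asymmetry between cheap-to-send input and expensive-to-reject matching is exactly what a ReDoS attack exploits.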
49

Nakagawa, Elisa Yumi. "Um Sistema de Injeção de Defeitos de Software Baseado em Operadores de Mutação". Universidade de São Paulo, 1998. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-14032018-162534/.

Full text of the source
Abstract:
Fault injection is a technique that has been widely used in the development of computer systems that need to be highly reliable. In this area, there are studies related to both hardware and software fault injection. It should be pointed out that there are few works in the literature related to software fault injection, as well as to software fault models and injection methods. The objective of this work is to study software fault models and investigate injection methods based on concepts and principles taken from the Mutation Analysis criterion. Considering the increasing complexity of computer systems, the design and implementation of tools supporting fault injection become necessary. In this perspective, a software fault injection tool, named ITool, is presented in this work. This tool is based on a fault injection scheme that defines the mapping of a software fault taxonomy (DeMillo's Taxonomy) to the mutation operators of the Mutation Analysis criterion for the C language. To illustrate the relevance and feasibility of the ideas presented in this work, a pilot experiment was carried out using the Space program, a real system developed by ESA (European Space Agency).
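The mutation-operator idea underlying this kind of injection scheme can be shown in miniature. The sketch below applies a single arithmetic-operator-replacement operator to a Python snippet (illustrative only; ITool itself maps DeMillo's taxonomy to mutation operators for C):

```python
import re

def arithmetic_mutants(source):
    """Generate mutants by replacing each '+' with '-' (a classic
    arithmetic-operator-replacement operator), one site at a time."""
    sites = [m.start() for m in re.finditer(r"\+", source)]
    return [source[:i] + "-" + source[i + 1:] for i in sites]

original = "def price(net, tax): return net + net * tax + 1"
mutants = arithmetic_mutants(original)
assert len(mutants) == 2                         # one mutant per '+' site
assert "return net - net * tax + 1" in mutants[0]

# A test case "kills" the first mutant by observing the injected fault:
scope = {}
exec(mutants[0], scope)
assert scope["price"](100, 0.1) != 100 + 100 * 0.1 + 1
```

Each mutant simulates one pre-specified fault; a test suite that distinguishes every mutant from the original program gives evidence that those fault types are absent.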
50

Ferrari, Fabiano Cutigi. "Uma contribuição para o teste baseado em defeitos de software orientado a aspectos". Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-19012011-090923/.

Full text of the source
Abstract:
Aspect-Oriented Programming (AOP) is a contemporary software development technique that strongly relies on the Separation of Concerns principle. It aims to tackle software modularisation problems by introducing the aspect as a new implementation unit that encapsulates behaviour required to realise so-called crosscutting concerns. Despite the benefits that may be achieved with AOP, its implementation mechanisms represent new potential sources of faults that should be handled during the testing phase. In this context, mutation testing is a widely investigated fault-based test selection criterion that can help to demonstrate the absence of prespecified faults in the software. It is believed to be an adequate tool to deal with the testing-related specificities of contemporary programming techniques such as AOP. However, to date, the few initiatives for customising mutation testing for aspect-oriented (AO) programs show either limited coverage with respect to the types of simulated faults, or a need for both adequate tool support and proper evaluation. This thesis tackles these limitations by defining a comprehensive mutation-based testing approach for AO programs written in the AspectJ language. It starts with a fault-proneness investigation in order to define a fault taxonomy for AO software. The taxonomy encompasses a range of fault types and underpinned the definition of a set of mutation operators for AO programs. Automated tool support is also provided. A series of quantitative studies shows that the proposed fault taxonomy is able to categorise faults identified in several available AO systems. Moreover, the studies show that the mutation operators are able to simulate faults that may not be revealed by pre-existing, non-mutation-based test suites. Furthermore, the effort required to augment the test suites to provide adequate coverage of mutants does not tend to overwhelm the testers. This provides evidence of the feasibility of the proposed approach and represents a step towards the practical fault-based testing of AO programs.
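The core idea the abstract describes can be sketched generically: a mutation operator makes a small syntactic change to the program (a mutant), and a test suite is mutation-adequate when some test distinguishes (kills) each non-equivalent mutant. The sketch below is a minimal illustration in plain Java, not code from the thesis; the names `discount`, `discountMutant`, and `kills` are invented, and the operator shown (relational-operator replacement, `>=` to `>`) is a classic mutation operator rather than one of the AspectJ-specific operators the thesis defines for pointcuts and advice.

```java
import java.util.function.IntUnaryOperator;

public class MutationDemo {
    // Original implementation under test: prices of 100 or more get a 10-unit discount.
    static int discount(int price) {
        return price >= 100 ? price - 10 : price;
    }

    // Mutant produced by relational-operator replacement: '>=' mutated to '>'.
    static int discountMutant(int price) {
        return price > 100 ? price - 10 : price;
    }

    // A test suite "kills" a mutant if the mutant's output differs from the
    // original's on at least one test input.
    static boolean kills(IntUnaryOperator original, IntUnaryOperator mutant, int[] inputs) {
        for (int in : inputs) {
            if (original.applyAsInt(in) != mutant.applyAsInt(in)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] weakSuite = {50, 200};         // never exercises the boundary price == 100
        int[] strongSuite = {50, 100, 200};  // exercises the boundary
        System.out.println(kills(MutationDemo::discount, MutationDemo::discountMutant, weakSuite));   // false
        System.out.println(kills(MutationDemo::discount, MutationDemo::discountMutant, strongSuite)); // true
    }
}
```

The surviving mutant under `weakSuite` signals a missing boundary test, which is exactly the feedback mutation testing provides; the thesis extends this mechanism to AO-specific constructs such as pointcut expressions, where an analogous small change can silently alter which join points are advised.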
APA, Harvard, Vancouver, ISO and other styles

Add to bibliography