Dissertations / Theses on the topic 'Software Metric'




Consult the top 50 dissertations / theses for your research on the topic 'Software Metric.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Rodríguez Martínez, Cecilia. "Software quality studies using analytical metric analysis." Thesis, KTH, Kommunikationssystem, CoS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-120325.

Full text
Abstract:
Today engineering companies expend a large amount of resources on the detection and correction of bugs (defects) in their software. These bugs are usually due to errors and mistakes made by programmers while writing the code or the specifications. No tool is able to detect all of these bugs, and some remain undetected despite testing of the code. For these reasons, many researchers have tried to find indicators in a program's source code that can be used to predict the presence of bugs. Every bug in the source code is a potential failure of the program to perform as expected. Therefore, programs are tested with many different cases in an attempt to cover all the possible paths through the program and detect all of these bugs. Early prediction of bugs informs programmers about the likely location of bugs in the code. Thus, programmers can test the more error-prone files more carefully and save a lot of time by not testing error-free files. This thesis project created a tool that is able to predict error-prone source code written in C++. To achieve this, we utilized one predictor which has been extremely well studied: software metrics. Many studies have demonstrated that there is a relationship between software metrics and the presence of bugs. In this project a neuro-fuzzy hybrid model based on fuzzy c-means and a radial basis function neural network has been used. The efficiency of the model has been tested on a software project at Ericsson. Testing of this model showed that the program does not achieve high accuracy, due to the lack of independent samples in the data set. However, the experiments did show that classification models provide better predictions than regression models. The thesis concludes by suggesting future work that could improve the performance of this program.
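As a rough illustration of the kind of model this abstract describes, the sketch below clusters per-file metric vectors with fuzzy c-means and uses the resulting cluster centres as radial basis function centres for a simple classifier. The toy data, parameters and linear output layer are our illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means; returns cluster centres for X (samples x features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # fuzzy memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]   # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))               # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centres

def rbf_features(X, centres, gamma=0.001):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-gamma * d ** 2)                     # Gaussian RBF activations

# Toy data: rows are per-file metric vectors (e.g. size, complexity); y = buggy or not.
X = np.array([[10, 1], [12, 2], [90, 15], [85, 12], [50, 7], [11, 1]], float)
y = np.array([0, 0, 1, 1, 1, 0])

centres = fuzzy_c_means(X)
Phi = np.c_[rbf_features(X, centres), np.ones(len(X))]  # RBF layer plus bias
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)             # linear output layer
print(((Phi @ w) > 0.5).astype(int))                    # predicted bug-proneness
```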
APA, Harvard, Vancouver, ISO, and other styles
2

Sigfusson, Johann Tor. "Software metric extension of the Enterprise Modelling technique." Thesis, University of Skövde, Department of Computer Science, 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-243.

Full text
Abstract:

The objective of this project is to make it possible to evaluate real-time operating systems. A requirement specification for a real-time operating system is represented with the help of the Enterprise Modelling technique. What is needed is a way to measure whether the requirements defined in the requirement specification can be fulfilled by an existing real-time operating system.

This dissertation is concerned with whether it is possible to extend the Enterprise Modelling (EM) technique with software metrics. An emphasis is put on integrating an existing metrics paradigm with the EM technique.

The study shows that a paradigm called Goal Question Metric (GQM) can be used to extend the EM technique with software metrics.

Other results are that the extended EM model is well suited to identifying metrics, because of its goal-oriented technique, its strong coupling to the enterprise, and the actors and activities related to the product. This can be used to validate that relevant metrics are chosen, based on the needs of components related to the enterprise.
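For readers unfamiliar with the paradigm, a minimal sketch of a GQM tree follows; the goal, questions and metrics are invented for this example and are not taken from the dissertation.

```python
# A toy Goal-Question-Metric tree in the spirit of the GQM paradigm: metrics are
# derived top-down from a goal via questions. All entries here are invented.
gqm = {
    "goal": "Assess whether an RTOS can fulfil the timing requirements in the EM model",
    "questions": [
        {"question": "How predictable is task scheduling?",
         "metrics": ["worst-case context-switch time", "jitter of periodic tasks"]},
        {"question": "How heavy is the communication between activities?",
         "metrics": ["messages per second per actor", "maximum queue depth"]},
    ],
}

for q in gqm["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))
```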

APA, Harvard, Vancouver, ISO, and other styles
3

Matthews, S. G. "Metric domains for completeness." Thesis, University of Warwick, 1985. http://wrap.warwick.ac.uk/60775/.

Full text
Abstract:
Completeness is a semantic, non-operational notion of program correctness suggested (but not pursued) by W. W. Wadge. Program verification can be simplified using completeness, firstly by removing the approximation relation from proofs, and secondly by removing partial objects from proofs. The dissertation proves the validity of this approach by demonstrating how it can work in the class of metric domains. We show how the use of Tarski's least fixed point theorem can be replaced by a non-operational unique fixed point theorem for many well-behaved programs. The proof of this theorem is also non-operational. After this we consider the problem of deciding what it means for a function to be "complete". It is shown that combinators such as function composition are not complete, although they are traditionally assumed to be so. Complete versions of these combinators are given. Absolute functions are proposed as a general model for the notion of a complete function. The theory of mategories is introduced as a vehicle for studying absolute functions.
APA, Harvard, Vancouver, ISO, and other styles
4

Gonzalez, Marco A. "A new change propagation metric to assess software evolvability." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44607.

Full text
Abstract:
The development of software-intensive systems faces many challenges; one of the most important from an economic perspective is to reduce maintenance costs. This thesis proposes a modified change propagation metric as a tool to assist the analysis of the evolvability and maintainability of a software system, and ultimately to support the reduction of its maintenance cost. The technical complexity of software systems has a great impact on their ability to support increased functionality and adaptability to the environment. One approach to understanding and mastering the complexity of large software systems, varying from thousands to millions of lines of source code, is through software architecture. This study examines a sample of software systems through the dependencies of their static structural view. The dependencies and their importance are expressed as a design structure matrix (DSM) that is used as an indicator of the strength of dependence and connection among the different modules. In this thesis, we propose a "modified change propagation" metric as a set of incremental improvements over the original propagation cost (PC) metric proposed by MacCormack (2008). Our improved metric uses dependencies weighted with strength, to convey more information about the incidence of strongly connected relationships, and it discounts weak dependencies. Moreover, the original propagation metric assumed that the system is acyclic, but we found that in practice very few real systems are free of cycles; furthermore, if cyclic dependencies are heavy rather than weak, these cycles should be treated differently. Finally, our metric is normalized to minimize the effect of both changes in the total depth of the dependency graph and increases in the size of the code. Our modified change propagation metric can help software designers assess the maintainability of a software system at design time and over a proposed release sequence, by comparing change propagation measures for different designs of the software architecture, for instance after refactoring. We validated our metric both on a system developed at UBC and on several large open-source repositories for which we were able to obtain long release histories.
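As a sketch of the baseline the thesis builds on, the following computes a MacCormack-style propagation cost from a design structure matrix, plus a thresholded variant that discounts weak dependencies before taking the closure; the threshold rule is our assumption, not the thesis's exact definition.

```python
import numpy as np

def propagation_cost(A):
    """MacCormack-style propagation cost: density of the visibility matrix,
    i.e. the transitive closure of the binary dependency matrix A
    (including each module's visibility of itself)."""
    A = (np.asarray(A) > 0).astype(int)
    n = len(A)
    V = ((np.eye(n, dtype=int) + A) > 0).astype(int)
    for _ in range(n):                       # iterate until the closure is stable
        nxt = ((V + V @ A) > 0).astype(int)
        if np.array_equal(nxt, V):
            break
        V = nxt
    return V.sum() / n**2

def weighted_propagation_cost(W, threshold=0.5):
    """Thresholded variant in the spirit of the modified metric: dependencies
    weaker than the threshold are discounted before computing the closure."""
    return propagation_cost(np.asarray(W) >= threshold)

# Toy DSM: W[i][j] = strength of module i's dependency on module j.
W = np.array([[0.0, 0.9, 0.1],
              [0.0, 0.0, 0.8],
              [0.2, 0.0, 0.0]])
print(propagation_cost(W), weighted_propagation_cost(W))  # cycle inflates the first
```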
APA, Harvard, Vancouver, ISO, and other styles
5

Gray, Christopher L. "A Coupling-Complexity Metric Suite for Predicting Software Quality." DigitalCommons@CalPoly, 2008. https://digitalcommons.calpoly.edu/theses/14.

Full text
Abstract:
Coupling Between Objects and Cyclomatic Complexity have long been used to measure software quality and predict maintainability and reliability of software systems prior to release. In particular, Coupling Between Objects has been shown to correlate with fault-proneness and maintainability of a system at the class level. We propose a new set of metrics based on a fusion of Coupling Between Objects and Cyclomatic Complexity that can be superior to Coupling Between Objects alone at predicting class quality. The new metrics use Cyclomatic Complexity to 1) augment Coupling Between Objects counting to assign a strength of a coupling between two classes and 2) determine the complexity of a method invocation chain through the transitive relation of invocations involved in a coupling. This results in a measure that identifies objects that are coupled to highly complex methods or method invocation chains. The metrics were implemented as an Eclipse Plug-in and an analysis of two industry Java projects, ConnectorJ and Hibernate, demonstrates the correlation between the new metrics and post-release defects identified in system change logs.
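A minimal sketch of the fusion idea: weight each coupling by the cyclomatic complexity of the method being invoked, so couplings to complex code count for more. The data and the weighting rule are illustrative assumptions, not the thesis's exact metric definitions.

```python
from collections import defaultdict

# Per-method cyclomatic complexity and class-to-method call couplings (invented).
complexity = {"A.run": 7, "B.save": 2, "C.parse": 12}
calls = [("Client", "A.run"), ("Client", "C.parse"), ("Client", "B.save")]

def weighted_cbo(calls, complexity):
    """Sum the cyclomatic complexity of every method a class is coupled to, so a
    coupling to a complex method weighs more than a coupling to a trivial one."""
    score = defaultdict(int)
    for caller_class, callee in calls:
        score[caller_class] += complexity.get(callee, 1)
    return dict(score)

print(weighted_cbo(calls, complexity))  # {'Client': 21}
```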
APA, Harvard, Vancouver, ISO, and other styles
6

Lincke, Rüdiger. "Validation of a standard- and metric-based software quality model /." Växjö : Växjö University Press, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-5846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Konuralp, Zeynep. "Software Process Improvement In A Software Development Environment." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609059/index.pdf.

Full text
Abstract:
A software process improvement study is presented. The literature on software development processes and their improvement is reviewed. The current peer review process at the Software Engineering Directorate of the X Company, Ankara, Türkiye (XCOM) is studied and static software development metrics based on a recent proposal have been evaluated. The improvement suggestions based on the static software metrics are compared with the author's improvement suggestions discussed with the senior staff. An improved peer review process is proposed. The static software development metrics have been evaluated on the improved process to see the impact of the improvements. The improved process has already been implemented at XCOM and preliminary results have been obtained.
APA, Harvard, Vancouver, ISO, and other styles
8

Dahmann, Franz-Dietmar. "Correlation between quality management metric and people capability maturity model." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FDahmann.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 2003.
Thesis advisor(s): John Osmundson, J. Bret Michael. Includes bibliographical references (p. 83-84). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
9

Gray, Christopher L. Janzen David. "A coupling-complexity metric suite for predicting software quality : a thesis /." [San Luis Obispo, Calif. : California Polytechnic State University], 2008. http://digitalcommons.calpoly.edu/theses/14/.

Full text
Abstract:
Thesis (M.S.)--California Polytechnic State University, 2008.
Major professor: David Janzen, Ph.D. "Presented to the faculty of California Polytechnic State University, San Luis Obispo." "In partial fulfillment of the requirements for the degree [of] Master of Science in Computer Science." "June 2008." Includes bibliographical references (leaves 57-62). Also available online. Also available on microfiche (1 sheet).
APA, Harvard, Vancouver, ISO, and other styles
10

Long, Cary D. "A proposed software maintenance metric for the object oriented programming paradigm." Master's thesis, This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-02022010-020231/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Olson, Andrew Stephen. "A Software System for Solving Metric Embedding Problems Using Linear Programming." Miami University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=miami1145461043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Fouad, Shereen. "Metric learning for incorporating privileged information in prototype-based models." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4615/.

Full text
Abstract:
Prototype-based classification models, and particularly Learning Vector Quantization (LVQ) frameworks with adaptive metrics, are powerful supervised classification techniques with good generalization behaviour. This thesis proposes three advanced learning methodologies, in the context of LVQ, aimed at better classification performance under various classification settings. The first contribution presents a direct and novel methodology for incorporating valuable privileged knowledge in the LVQ training phase, but not in testing. This is done by manipulating the global metric in the input space, based on distance relations revealed by the privileged information. Several experiments have been conducted that serve as illustration and demonstrate the benefit of incorporating privileged information for classification accuracy. Subsequently, the thesis presents a relevant extension of LVQ models, with metric learning, to the case of ordinal classification problems. Unlike in existing nominal LVQ, in ordinal LVQ the class order information is explicitly utilized during training. Competitive results have been obtained on several benchmarks, improving upon standard LVQ as well as benchmark ordinal classifiers. Finally, a novel ordinal-based metric learning methodology is presented that is principally intended to incorporate privileged information in ordinal classification tasks. The model has been verified experimentally on a number of benchmark and real-world data sets.
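For orientation, a plain LVQ1 baseline is sketched below; the thesis's contributions extend such models with adaptive metrics, privileged information and ordinal constraints, none of which are reproduced here.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """Plain LVQ1: pull the nearest prototype toward same-class samples and push
    it away from other-class samples (fixed Euclidean metric)."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(P - X[i], axis=1)))  # winner prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])
    return P

X = np.array([[0., 0.], [0., 1.], [5., 5.], [6., 5.]])
y = np.array([0, 0, 1, 1])
P = train_lvq1(X, y, prototypes=np.array([[0.5, 0.5], [5.5, 5.0]]),
               proto_labels=np.array([0, 1]))
print(P)  # prototypes have moved toward their classes
```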
APA, Harvard, Vancouver, ISO, and other styles
13

Berry, Michael CSE UNSW. "Assessment of software measurement." Awarded by:University of New South Wales. CSE, 2006. http://handle.unsw.edu.au/1959.4/25134.

Full text
Abstract:
Background and purpose. This thesis documents a program of five studies concerned with the assessment of software measurement. The goal of this program is to assist the software industry in improving the information support for managers, analysts and software engineers, by providing evidence of where opportunities for improving measurement and analysis exist. Methods. The first study examined the assessment of software measurement frameworks using models of best practice based on performance/success factors. The software measurement frameworks of thirteen organisations were surveyed. The association between a factor and the outcome experienced with the organisations' frameworks was then evaluated. The subsequent studies were more info-centric and investigated the use of information quality models to assess the support provided for software processes. For these studies, information quality models targeting specific software processes were developed using practitioner focus groups. The models were instantiated in survey instruments and the responses were analysed to identify opportunities to improve the information support provided. The final study compared the use of two different information quality models for assessing and improving information support. Assessments of the same quantum of information were made using a targeted model and a generic model. The assessments were then evaluated by an expert panel in order to identify which information quality model was more effective for improvement purposes. Results. The study of performance factors for software measurement frameworks confirmed the association of some factors with success and quantified that association. In particular, it demonstrated the importance of evaluating contextual factors. The conclusion is that factor-based models may appropriately be used for risk analysis and for identifying constraints on measurement performance. Note, however, that a follow-up study showed that some initially successful frameworks subsequently failed. This implies an instability in the dependent variable, success, that could reduce the value of factor-based models for predicting success. The studies of targeted information quality models demonstrated the effectiveness of targeted assessments for identifying improvement opportunities and suggest that they are likely to be more effective for improvement purposes than generic information quality models. The studies also showed the effectiveness of importance-performance analysis for prioritizing improvement opportunities.
APA, Harvard, Vancouver, ISO, and other styles
14

Selig, Calvin Lee. "ADLIF-a structured design language for metric analysis." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45917.

Full text
Abstract:

Since the inception of software engineering, the major goal has been to control the development and maintenance of reliable software. To this end, many different design methodologies have been presented as a means to improve software quality through semantic clarity and syntactic accuracy during the specification and design phases of the software life cycle. On the other end of the life cycle, software quality metrics have been proposed to supply quantitative measures of the resultant software. This study is an attempt to unify the two concepts by providing a means to determine the quality of a design before its implementation.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
15

Powale, Kalkin. "Automotive Powertrain Software Evaluation Tool." Master's thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-233186.

Full text
Abstract:
Software is a key differentiator and driver of innovation in the automotive industry. The major challenges for software development are increasing complexity, shorter time-to-market, rising development cost and the demand for quality assurance. Complexity is increasing due to emission legislation, product variants and new communication technologies being interfaced with the vehicle. The shorter development time is due to competition in the market, which requires faster feedback loops of verification and validation of developed functionalities. The increase in development cost has two factors: the first is pre-launch cost, which involves the cost of error correction in the development stages; the other is post-launch cost, which involves warranty and guarantee costs. As development time passes, the cost of error correction also increases, hence it is important to detect errors as early as possible. All these factors affect software quality; there are several cases where an Original Equipment Manufacturer (OEM) has had to recall a product because of a quality defect. Hence, the requirement for software quality assurance has increased. The solution to these software challenges can be early quality evaluation in a continuous integration framework environment. The AUTomotive Open System ARchitecture (AUTOSAR), the most prominent reference architecture in today's automotive industry, is used to describe software components and interfaces. AUTOSAR provides standardised software component architecture elements and was created to address the issue of growing complexity. The existing AUTOSAR environment does have software quality measures, such as schema validations and protocols for acceptance tests; however, it lacks quality specifications for non-functional qualities such as maintainability, modularity, etc. A tool is required which will evaluate AUTOSAR-based software architectures and give objective feedback regarding quality. This thesis aims to provide such a quality measurement tool. The tool reads the architecture information from an AUTOSAR Extensible Markup Language (ARXML) file and provides configuration ability, continuous evaluation and objective feedback regarding software quality characteristics. The tool was applied to a transmission control project, and the results were validated by industry experts.
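A minimal sketch of the kind of extraction such a tool performs: reading an ARXML file (which is plain XML) and counting components and ports as crude structural indicators. The tag names follow the AUTOSAR schema; the quality model built on top of such counts is the thesis's contribution and is not reproduced here.

```python
import xml.etree.ElementTree as ET

def count_components_and_ports(arxml_path):
    """Count software component types and port prototypes in an ARXML file;
    ports per component can serve as a crude coupling/modularity indicator."""
    tree = ET.parse(arxml_path)
    comps, ports = 0, 0
    for el in tree.iter():
        tag = el.tag.rsplit('}', 1)[-1]        # strip the XML namespace prefix
        if tag == "APPLICATION-SW-COMPONENT-TYPE":
            comps += 1
        elif tag in ("P-PORT-PROTOTYPE", "R-PORT-PROTOTYPE"):
            ports += 1
    return comps, ports

# comps, ports = count_components_and_ports("system.arxml")  # hypothetical file
# print("ports per component:", ports / max(comps, 1))
```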
APA, Harvard, Vancouver, ISO, and other styles
16

Noor, Tanzeem Bin. "A Similarity-based Test Case Quality Metric using Historical Failure Data." IEEE, 2015. http://hdl.handle.net/1993/31045.

Full text
Abstract:
A test case is a set of input data and expected output, designed to verify whether the system under test satisfies all requirements and works correctly. An effective test case reveals a fault when the actual output differs from the expected output (i.e., the test case fails). The effectiveness of test cases is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in the next releases; therefore, they are ranked higher. However, in practice, a failing test case may not be exactly the same as a previously failed test case, but quite similar. In this thesis, I define a metric that estimates test case quality using its similarity to previously failing test cases. Moreover, I evaluate the effectiveness of the proposed test quality metric through a detailed empirical study.
February 2016
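A minimal sketch of the idea, assuming Jaccard similarity over test-case tokens; the thesis defines its own similarity measure over historical failure data.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity_quality(test_tokens, failing_history):
    """Score a test case by its maximum similarity to previously failing tests:
    the closer it is to a known failure, the higher its estimated quality."""
    return max((jaccard(test_tokens, old) for old in failing_history), default=0.0)

history = [["login", "timeout", "retry"], ["upload", "large", "file"]]
print(similarity_quality(["login", "retry", "logout"], history))  # 0.5
```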
APA, Harvard, Vancouver, ISO, and other styles
17

Machniak, Martin J. "Development of a Quality Management Metric (QMM) measuring software program management quality." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA374316.

Full text
Abstract:
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, December 1999.
"December 1999". Thesis advisor(s): J. Bret Michael, John Osmundson. Includes bibliographical references (p. 143-144). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
18

McDaniel, Patrick Drew. "The analysis of Di, a detailed design metric, on large-scale software." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/774746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Mastromarino, Francesco. "Analisi delle metriche del software: l'utilizzo nei processi agili." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12327/.

Full text
Abstract:
Our work aims to acquaint the reader with the concept of software metrics and the aspects connected to them, with particular attention to the application of such metrics in agile software development (ASD) processes. These metrics represent a standard for measuring, quantifying and evaluating different aspects of software development. The advantages they offer are numerous; chief among them, they make it possible to monitor the progress of a software project (or product) and to assure its quality. More precisely, in Agile processes the use of metrics is considered essential, since they improve the forecasting and management of software products. The thesis then analyses the relationships between these metrics and the prediction of software faults (fault prediction). In addition, our work offers practical examples carried out with software metrics tools, highly useful and effective support instruments for the software development team. The tests were performed, in particular, on JIRA (a tool for agile processes) and on QUAMOCO (a tool for traditional processes).
APA, Harvard, Vancouver, ISO, and other styles
20

Almeida, Alberto Teixeira Bigotte de. "An empirical study of the fault-predictive ability of software control-structure metrics." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA231860.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 1990.
Thesis Advisor(s): Shimeall, Timothy J. Second Reader: Bradbury, Leigh W. "June 1990." Description based on signature page on October 16, 2009. DTIC Descriptor(s): Computer programs, costs, faults, measurement, test methods DTIC Indicator(s): Computer program verification, metric system, Theses. Author(s) subject terms: Software metrics, text-based metrics, faults, testing, empirical studies. Includes bibliographical references (p. 69-72). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
21

Kasianenko, Stanislav. "Predicting Software Defectiveness by Mining Software Repositories." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78729.

Full text
Abstract:
One of the important aims of the continuous software development process is to localize and remove all existing program bugs as fast as possible. This goal is highly related to software engineering and defectiveness estimation. Many big companies started to store source code in software repositories as the latter grew in popularity. These repositories usually include static source code as well as detailed data on defects in software units, which allows analyzing all the data without interrupting the programming process. The main problem with large, complex software is the impossibility of controlling everything manually, while the price of an error can be very high. This might result in developers missing defects at the testing stage and an increase in maintenance cost. The general research goal is to find a way of predicting future software defectiveness with high precision. Reducing maintenance and development costs will contribute to reducing the time-to-market and increasing software quality. To address the problem of estimating residual defects, an approach was developed to predict the residual defectiveness of software by means of machine learning. As the primary machine learning algorithm, a regression decision tree was chosen as a simple and reliable solution. Data for this tree is extracted from a static source code repository and divided into two parts: software metrics and defect data. Software metrics are formed from the static code, and defect data is extracted from reported issues in the repository. In addition to already reported bugs, the data are augmented with unreported bugs found in the repository's "discussions" section and parsed by a natural language processor. Metrics were filtered to remove those that were not related to the defect data, by applying a correlation algorithm. The remaining metrics were weighted so that the most correlated combination could be used as a training set for the decision tree. As a result, the built decision tree model can forecast defectiveness with 89% accuracy for the particular product. This experiment was conducted using a GitHub repository of a Java project and predicted the number of possible bugs in a single file (Java class). The experiment resulted in a method for predicting possible defectiveness from the static code of a single large (more than 1000 files) software version.
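A compact sketch of the pipeline the abstract describes, with invented toy data: filter metrics by their correlation with defect counts, then fit a regression decision tree.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: rows are per-file metric vectors, target is defects per file.
# The data, correlation cutoff and tree settings are illustrative only.
X = np.array([[120, 4, 2], [300, 9, 7], [80, 2, 1], [450, 14, 9], [200, 6, 3]], float)
y = np.array([1, 5, 0, 8, 2], float)

# Correlation filter: keep only metrics related to the defect data.
keep = [j for j in range(X.shape[1])
        if abs(np.corrcoef(X[:, j], y)[0, 1]) >= 0.3]

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X[:, keep], y)
print(model.predict(X[:, keep][:2]))  # predicted defect counts for two files
```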
APA, Harvard, Vancouver, ISO, and other styles
22

Mondal, Subhajit. "Extension of the E(Θ) metric for evaluation of reliability." Kansas State University, 2005. http://hdl.handle.net/2097/144.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
David A. Gustafson
The calculation of reliability based on running test cases concerns the probability of the software not generating faulty output after the testing process. The metric used to measure this reliability is referred to as the E(Θ) value. The concept of E(Θ) gives precise formulae to calculate the probability of failure of software after testing, whether debug or operational. This report aims at extending E(Θ) into the realm of multiple faults spread across multiple sub-domains. This generalization involves the introduction of a new set of formulae for calculating E(Θ) which can account for faults spread over both single and multiple sub-domains in a code. The validity of the formulae is verified by matching the theoretical results against empirical data generated by running a test case simulator. The report further examines the possibility of an upper bound on the derived formulae and its possible ramifications.
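The flavour of such formulae can be shown for the simplest single sub-domain case. The derivation below assumes a uniform prior on the failure rate and t test cases that all passed; it is an illustration consistent with the abstract, not the report's generalized formulae.

```latex
% Expected failure probability after t passed tests, uniform prior on \theta:
E(\Theta) \;=\; \frac{\int_0^1 \theta\,(1-\theta)^t \, d\theta}
                     {\int_0^1 (1-\theta)^t \, d\theta}
          \;=\; \frac{1}{t+2}
```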
APA, Harvard, Vancouver, ISO, and other styles
23

Cormier, Catherine. "Seniority as a Metric in Reputation Systems for E-Commerce." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20105.

Full text
Abstract:
In order to succeed, it is imperative that all e-commerce systems include an effective and reliable trust and reputation modeling system. This is particularly true of decentralized e-commerce systems in which autonomous software agents engage in commercial transactions. Many researchers have sought to overcome the complexities of modeling a subjective, human concept like trust, resulting in several trust and reputation models. While these models each present a unique offering and solution to the problem, several issues persist. Most of the models require direct experience in the e-commerce system in order to make effective trust decisions, which leaves new agents, and agents who only casually use the e-commerce system, vulnerable. Additionally, the reputation ratings of agents who are relatively new to the system are often indistinguishable from the scores of poorly performing agents. Finally, more tactics are required to defend against agents who exploit the characteristics of the open, distributed system for their own malicious ends. To address these issues, a new metric is devised and presented: seniority. Based on agent age and activity level within the e-commerce system, seniority provides a means of judging the credibility of other agents with little or no prior experience in the system. As the results of experimental analysis reveal, employing a reputation model that uses seniority provides considerable value to new agents, casual buyer agents and all other purchasing agents in the e-commerce system. This new metric therefore offers a significant contribution toward the development of enhanced and new trust and reputation models for deployment in real-world distributed e-commerce environments.
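A minimal sketch of a seniority-style score combining agent age and activity level; the saturating combination below is our assumption, not the thesis's formulation.

```python
import math

def seniority(age_days, transactions, half_cred=50):
    """Illustrative seniority score: activity saturates toward 1 as transactions
    accumulate, longevity saturates toward 1 over roughly a year; the product
    rewards agents that are both long-lived and active."""
    activity = transactions / (transactions + half_cred)
    longevity = 1 - math.exp(-age_days / 365)
    return activity * longevity

print(round(seniority(age_days=400, transactions=120), 3))
```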
APA, Harvard, Vancouver, ISO, and other styles
24

O'Neill, Simon John. "A fundamental study into the theory and application of the partial metric spaces." Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/73518/.

Full text
Abstract:
Our aim is to establish the partial metric spaces within the context of Theoretical Computer Science. We present a thesis in which the big "idea" is to develop a more (classically) analytic approach to problems in Computer Science. The partial metric spaces are the means by which we discuss our ideas. We build directly on the initial work of Matthews and Wadge in this area. Wadge introduced the notion of healthy programs corresponding to complete elements in a semantic domain, and of size being the extent to which a point is complete. To extend these concepts to a wider context, Matthews placed this work in a generalised metric framework. The resulting partial metric axioms are the starting point for our own research. In an original presentation, we show that T0-metrics are either quasi-metrics, if we discard symmetry, or partial metrics, if we allow non-zero self-distances. These self-distances are how we capture Wadge's notion of size (or weight) in an abstract setting, and Edalat's computational models of metric spaces are examples of partial metric spaces. Our contributions to the theory of partial metric spaces include abstracting their essential topological characteristics to develop the hierarchical spaces, investigating their T0-topological properties, and developing metric notions such as completions. We identify a quantitative domain to be a continuous domain with a T0-metric inducing the Scott topology, and introduce the weighted spaces as a special class of partial metric spaces derived from an auxiliary weight function. Developing a new area of application, we model deterministic Petri nets as dynamical systems, which we analyse to prove liveness properties of the nets. Generalising to the framework of weighted spaces, we can develop model-independent analytic techniques. To develop a framework in which we can perform the more difficult analysis required for non-deterministic Petri nets, we identify the measure-theoretic aspects of partial metric spaces as fundamental, and use valuations as the link between weight functions and information measures. We are led to develop a notion of local sobriety, which itself appears to be of interest.
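For reference, the partial metric axioms that the abstract builds on are, in Matthews' formulation, for a map p from X x X to the non-negative reals:

```latex
\begin{align*}
x = y &\iff p(x,x) = p(x,y) = p(y,y) && \text{(T$_0$ separation)}\\
p(x,x) &\le p(x,y) && \text{(small self-distances)}\\
p(x,y) &= p(y,x) && \text{(symmetry)}\\
p(x,z) &\le p(x,y) + p(y,z) - p(y,y) && \text{(modified triangle inequality)}
\end{align*}
```

A point is total (complete) exactly when its self-distance p(x,x) is zero, which is how non-zero self-distances encode Wadge's notion of size.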
APA, Harvard, Vancouver, ISO, and other styles
25

Yurga, Tolga. "A Metrics-based Approach To The Testing Process And Testability Of Object-oriented Software Systems." Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610540/index.pdf.

Full text
Abstract:
This dissertation investigates the factors that affect the testability and testing cost of object-oriented software systems. Developing a software program in a way that eases the testing process by increasing testability is crucial. Whether the testing effort and cost consumed or planned is adequate is another critical question this dissertation aims to answer, by composing a new way to evaluate the links between software design parameters and testing effort via source-based metrics. An automated metric plug-in is used as the primary tool for obtaining the metric measurements. Our study is based on the investigation of many open-source projects written in Java. With the help of statistical evaluation of the project data, we both propose a new model to assess testing effort and testability, and find significant relations and associations between software design and the testing effort and testability of object-oriented software systems via source-based metrics.
APA, Harvard, Vancouver, ISO, and other styles
26

Eriksson, Viktor. "Extraction of radio frequency quality metric from digital video broadcast streams by cable using software defined radio." Thesis, Linköpings universitet, Kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94187.

Full text
Abstract:
The purpose of this master thesis was to investigate how efficient the extraction of radio frequency quality metrics from digital video broadcast (DVB) streams can become using software defined radio. Software defined radio (SDR) is a fairly new technology that offers the possibility of very flexible receivers and transmitters where it is possible to upgrade the modulation and demodulation over time. Agama is interested in SDR for use in the Agama Analyzer, a widely deployed monitoring probe running on top of standard services. Using SDR, Agama could use that in all deployments, such as DVB by cable/terrestrial/satellite (DVB-C/T/S), which would simplify logistics. This thesis is an implementation of an SDR able to receive DVB-C. The SDR must perform a number of adaptive algorithms in order to prevent the received symbols from being significantly different from the transmitted ones. The main parts of the SDR include timing recovery, carrier recovery and equalization. Timing recovery performs synchronization between the transmitted and received symbols, and carrier recovery performs synchronization between the carrier wave of the transmitter and the local oscillator in the receiver. The thesis discusses various methods to perform the different types of synchronization and equalization in order to find the most suitable methods.
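As a taste of one of these blocks, a minimal decision-directed carrier-phase tracker for QPSK is sketched below; DVB-C actually uses QAM constellations, and the thesis compares several more elaborate recovery methods, so this is an illustration of the principle only.

```python
import numpy as np

def dd_carrier_recovery(rx, alpha=0.05):
    """First-order decision-directed phase loop: de-rotate each symbol by the
    current phase estimate, slice to the nearest constellation point, and use
    the residual angle as the loop error."""
    const = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)   # QPSK points
    phase, out = 0.0, []
    for s in rx:
        z = s * np.exp(-1j * phase)                  # de-rotate
        dec = const[np.argmin(np.abs(const - z))]    # hard decision
        phase += alpha * np.angle(z * np.conj(dec))  # track residual offset
        out.append(z)
    return np.array(out)

# QPSK symbols with a constant 0.3 rad carrier offset are gradually corrected.
rng = np.random.default_rng(1)
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 200)))
print(np.round(dd_carrier_recovery(tx * np.exp(1j * 0.3))[-3:], 2))
```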
APA, Harvard, Vancouver, ISO, and other styles
27

Eralp, Ozgur. "Design And Implementation Of A Software Development Process Measurement System." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12604771/index.pdf.

Full text
Abstract:
This thesis presents a software measurement program. The literature on software measurement is reviewed and conditions for an effective implementation are investigated. A specific measurement system is designed and implemented in ASELSAN, Inc. This has involved organizational as well as technical work. A software tool has been developed to assist in aggregating measurements obtained from the various CASE tools in use. Results of the implementation have started to be achieved: much useful feedback has been returned to the organization as a result of analyzing the measurement data.
APA, Harvard, Vancouver, ISO, and other styles
28

Bozkurt, Candas. "Pommes: A Tool For Quantitative Project Management." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606030/index.pdf.

Full text
Abstract:
The metric collection process and project management activities cannot be performed in an integrated fashion on most software projects. In the software engineering world, there are project management tools that have embedded project metrics, and there are various metric collection tools that collect specific metrics to satisfy the requirements of different software life cycle phase activities (configuration management, requirements management, application development tools, etc.). These tools, however, do not communicate with each other through any interface or common database. This thesis focuses on the development of a tool to define, export, collect and use metrics for software project planning, tracking and oversight processes. To satisfy these objectives, POMMES, with functionality for generic metric definition, collection and analysis, and for import, update and export of project metrics from 3rd-party project management tools, was developed and implemented in a software organization during this thesis work.
APA, Harvard, Vancouver, ISO, and other styles
29

Košata, Václav. "Srovnání řešení BI na bázi SaaS." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-114272.

Full text
Abstract:
The diploma thesis is focused on a specific way of distributing Business Intelligence applications on a Software-as-a-Service basis. This different concept opens a possibility for small and medium-size companies which cannot afford a robust and expensive solution. The theoretical part provides an introduction to the basic characteristics of BI systems and cloud applications. Additionally, descriptions of the selected criteria are given for a comparison of the specifics of applications delivered as a service. The integration, analytical and reporting functions of Belladati, Zoho Reports and Bime are tested in the practical part of the thesis. The main chapter is devoted to a comparison of the products based on the selected criteria. The main asset of the work is the discovery of the strengths and weaknesses of each solution, found during practical testing on the test data. The aim of the comparison is not to find the best product, but to bring out their specific properties. The output can serve as background material when selecting a cloud-based BI application.
APA, Harvard, Vancouver, ISO, and other styles
30

Duc, Anh Nguyen. "The impact of design complexity on software cost and quality." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5708.

Full text
Abstract:
Context: Early prediction of software cost and quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and some quality attributes. Although many studies investigate the relationship between design complexity and cost and quality, it is unclear what we have learned from these studies, because no systematic synthesis exists to date. Aim: The research presented in this thesis is intended to contribute to the body of knowledge about cost and quality prediction. A major part of this thesis presents a systematic review that provides a detailed discussion of the state of the art of research on the relationship between software design metrics and cost and software quality. Method: This thesis starts with a literature review in which the important complexity dimensions and potential predictors of external software quality attributes are identified. Second, we aggregated Spearman correlation coefficients and estimated odds ratios from univariate logistic regression models from 59 different data sets from 57 primary studies, using a tailored meta-analysis approach. Finally, we attempt to evaluate and explain the disagreement among the selected studies. Results: There are not enough studies to quantitatively summarize the relationship between design complexity and development cost. Fault proneness and maintainability are the main characteristics studied, accounting for 75% of the studies. Within the fault proneness and maintainability studies, coupling and scale are the two most frequently used complexity dimensions. Vote counting shows evidence of a positive impact of some design metrics on these two quality attributes. Meta-analysis shows that the aggregated effect size of lines of code (LOC) is stronger than those of WMC, RFC and CBO. The aggregated effect sizes of LCOM, DIT and NOC are at a trivial to small level. In subgroup analysis, the defect collection phase explains more than 50% of the observed variation in five out of seven investigated metrics. Conclusions: Coupling and scale metrics are more strongly correlated with fault proneness than cohesion and inheritance metrics. No design metrics are stronger single predictors than LOC. We found that there is strong disagreement between the individual studies, and that the defect collection phase is able to partially explain the differences between studies.
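The standard mechanism behind such aggregation is Fisher's z transform with inverse-variance weights; a minimal sketch follows (the thesis tailors the approach further).

```python
import numpy as np

def pooled_spearman(rs, ns):
    """Aggregate per-study correlation coefficients: transform to Fisher z,
    take the inverse-variance weighted mean (weight n - 3), transform back."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)
    w = ns - 3
    return np.tanh((w * z).sum() / w.sum())

print(round(pooled_spearman([0.42, 0.55, 0.31], [40, 120, 60]), 3))
```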
APA, Harvard, Vancouver, ISO, and other styles
31

Andrade, Bruno André Mansilha. "Towards automatic non-metric traits analysis of skulls based on 3D models." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/18714.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering
The purpose of this dissertation is to improve the CraMs application and the craniometric analysis of 3D models through the quantification and classification of structures and morphological characteristics. An opportunity for the development of this project presented itself in 2012, in a collaboration with anthropologists, to create an application that would assist them in performing craniometric measurements and in the process of marking points. The use of an application can alleviate some of the problems of the manual methods used by anthropologists, which can produce irregular measurement results and can damage the specimens while handling them. This idea led to the development of a computer application, CraMs, in the scope of two Master dissertations in the academic years 2012-2014. This new approach relies on the acquisition of craniums using a 3D scanner; the acquired models are afterwards used to make standardized measurements and analyses. The work developed here concentrates on the issues identified by the specialists and on the expansion of the existing functionalities, in order to create new methods and improve usability. These methods focus on the morphological analysis of the specimens and on extracting the structures uniformly, namely the nasal aperture width, the anterior nasal spine, the postbregmatic depression and the cranial shape, for a standard classification with the purpose of identifying the individual's ancestry and gender.
APA, Harvard, Vancouver, ISO, and other styles
32

Zhu, Huan. "Investigation and Evaluation of Object Oriented Analysis techniques." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1427.

Full text
Abstract:
The technique of Object Oriented Analysis (OOA) emerged only in the last decade. Although the technique is still new, its popularity has been increasing and it has already entered the mainstream of object-oriented system development. This thesis summarizes four OOA methods and investigates the behaviour of each method under different criteria. By comparing the four methods, the differences between them are shown, and analysts can select the appropriate one to meet their requirements.
APA, Harvard, Vancouver, ISO, and other styles
33

Remiáš, Richard. "Systém pro podporu metrik v projektech vývoje softwaru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236616.

Full text
Abstract:
This work is aimed at the design and implementation of a system for supporting metrics in software development projects. The procedure for designing and applying measurement methods is described. Furthermore, the analysis of metrics data in three domains is described: the frequency domain, the time domain and the relationship domain, together with forms of visualization. Finally, the requirements for the metrics support system are listed, along with the design of the architecture and details of the implementation.
APA, Harvard, Vancouver, ISO, and other styles
34

Kalaji, Abdul Salam. "Search-based software engineering : a search-based approach for testing from extended finite state machine (EFSM) models." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4575.

Full text
Abstract:
The extended finite state machine (EFSM) is a powerful modelling approach that has been applied to represent a wide range of systems. Despite its popularity, testing from an EFSM is a substantial problem for two main reasons: path feasibility and path test case generation. The path feasibility problem concerns generating transition paths through an EFSM that are feasible and satisfy a given test criterion. In an EFSM, guards and assignments in a path's transitions may cause some selected paths to be infeasible. The problem of path test case generation is to find a sequence of inputs that can exercise the transitions in a given feasible path. However, the guards and assignments of the transitions in a given path can impose difficulties when producing such data, narrowing the range of acceptable inputs down to a possibly tiny range. While search-based approaches have proven efficient in automating aspects of testing, they have received little attention when testing from EFSMs. This thesis proposes an integrated search-based approach to automatically test from an EFSM. The proposed approach generates paths through an EFSM that are potentially feasible and satisfy a test criterion, and then generates test cases that can exercise the generated feasible paths. The approach is evaluated by being used to test from five EFSM case studies. The experimental results demonstrate the value of the proposed approach.
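A minimal sketch of search-based test data generation for a single guarded transition: minimize a branch-distance function by hill climbing until inputs satisfying the guard are found. The guard and its distance encoding are invented for this example.

```python
import random

def branch_distance(x, y):
    """Distance to satisfying an illustrative guard: x > 10 and x + y == 20.
    Zero exactly when the guard holds (integer inputs assumed)."""
    return max(0, 10 - x + 1) + abs(x + y - 20)

def hill_climb(seed=0, iters=5000):
    rng = random.Random(seed)
    x, y = rng.randint(-50, 50), rng.randint(-50, 50)
    best = branch_distance(x, y)
    for _ in range(iters):
        nx, ny = x + rng.randint(-3, 3), y + rng.randint(-3, 3)
        d = branch_distance(nx, ny)
        if d <= best:                 # accept non-worsening moves
            x, y, best = nx, ny, d
        if best == 0:
            break
    return (x, y), best

print(hill_climb())  # e.g. ((11, 9), 0): inputs that traverse the guarded transition
```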
APA, Harvard, Vancouver, ISO, and other styles
35

Maltbie, Nicholas. "Integrating Explainability in Deep Learning Application Development: A Categorization and Case Study." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623169431719474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zhou, Luyuan. "Security Risk Analysis based on Data Criticality." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-93055.

Full text
Abstract:
Nowadays, security risk assessment has become an integral part of network security, as everyday life has become interconnected with and dependent on computer networks. There are various types of data in the network, often with different criticality in terms of availability, confidentiality or integrity of information. Critical data is riskier when it is exploited, so data criticality has an impact on network security risks. The challenge of diminishing security risks in a specific network is how to conduct network security risk analysis based on data criticality. An interesting aspect of the challenge is how to integrate the security metric and the threat modeling, and how to consider and combine the various elements that affect network security during security risk analysis. To the best of our knowledge, there exist no security risk analysis techniques based on threat modeling that consider the criticality of data. By extending security risk analysis with data criticality, we consider its impact on the network in security risk assessment. To acquire the corresponding security risk value, a method for integrating data criticality into graphical attack models via relevant metrics is needed. In this thesis, an approach for calculating the security risk value considering data criticality is proposed. Our solution integrates the impact of data criticality in the network by extending the attack graph with data criticality. There are vulnerabilities in the network that pose potential threats to it. First, the combination of these vulnerabilities and data criticality is identified and precisely described. Thereafter the interaction between the vulnerabilities through the attack graph is taken into account and the final security metric is calculated and analyzed. The new security metric can be used by network security analysts to rank the security levels of objects in the network. By doing this, they can find objects that need additional attention in their daily network protection work. The security metric could also be used to help them prioritize vulnerabilities that need to be fixed when the network is under attack. In general, network security analysts can find effective ways to resolve exploits in the network based on the value of the security metric.
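A toy sketch of the combination the abstract describes: scale each attack-graph node's reachability-weighted score by the criticality of the data it touches, then rank the results for analysts. The graph structure, scores and combination rule are illustrative assumptions, not the thesis's model.

```python
# Each node is an exploit step with a base score in [0, 1]; criticality scales
# the risk of nodes that expose critical data. All values here are invented.
graph = {
    "phishing":  {"score": 0.6, "criticality": 0.2, "next": ["web-shell"]},
    "web-shell": {"score": 0.5, "criticality": 0.5, "next": ["db-exfil"]},
    "db-exfil":  {"score": 0.4, "criticality": 1.0, "next": []},
}

def path_risk(node, p=1.0, acc=None):
    """Walk the attack graph; each step's risk is the probability of reaching it
    times its score, weighted by the criticality of the data it touches."""
    acc = {} if acc is None else acc
    n = graph[node]
    p *= n["score"]
    acc[node] = max(acc.get(node, 0.0), p * n["criticality"])
    for nxt in n["next"]:
        path_risk(nxt, p, acc)
    return acc

risks = path_risk("phishing")
print(sorted(risks.items(), key=lambda kv: -kv[1]))  # rank objects by risk
```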
APA, Harvard, Vancouver, ISO, and other styles
37

Bradley, Malcolm. "Whole life cost methods for computer systems." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/7152.

Full text
Abstract:
This thesis provides an analysis of cost-of-ownership issues and techniques, and provides the supporting data to enable future system designers to make rational decisions on design options. It represents the experience gained while collecting cost and cost-relationship data in the Rolls-Royce group over a period of more than four years, in a time of continuous change in both the company and the wider IT industry. The thesis is arranged in chapters, each representing a milestone conference or journal paper; the exception is chapter 11, the conclusion and summary of the work in the thesis. The chapter topics cover, firstly, the background of whole life cost and the aims and objectives of the research. A relationship between whole life cost and quality is considered, and why whole life cost is a useful measure of quality; this is examined in practical terms of tools and methods. Case studies are used to illustrate the measurement and use of whole life cost. The impact of obsolescence risk is considered next, identifying the causes and implications of obsolescence. Case studies are used to show how the IT help desk can be used to identify and reduce whole life costs, in both a deterministic and a probabilistic approach. This is followed by an examination of the costs of database systems at Rolls-Royce and Associates. Case studies of database systems are also used to show the need to collect in-service data, and genetic algorithms are shown to be a useful tool for analysing the data. Whole life costing techniques applied to engineering systems at Rolls-Royce are examined; it is shown that a reliability-centred maintenance database is a cost-effective tool for collecting data, and network monitoring software is shown to be an effective tool for reducing the cost of ownership of IT systems. The overall conclusion is that whole life cost techniques have been shown to work for computer-based systems; further work in this area is still needed to enable costs to be fully understood and optimised.
APA, Harvard, Vancouver, ISO, and other styles
38

Rotting, Tjädermo Viktor, and Alex Tanskanen. "System Upgrade Verification : An automated test case study." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165125.

Full text
Abstract:
We live in a society where automation is becoming more common, whether in cars or artificial intelligence. Software needs to be updated using patches; however, these patches can break existing components. This study takes such a patch in the context of Ericsson, identifies what needs to be tested, investigates whether the tests can be automated, and assesses how maintainable they are. Interviews were used to identify the system and software parts in need of testing. Tests were then implemented in an automated test suite to verify the functionality of the system or software. The goal was to set up a working test suite and to reduce employees' troubleshooting time without interrupting user sessions. Once the automated tests were complete and included in the test suite, the study concluded by measuring the maintainability of the scripts using both metrics and human assessment through interviews. The results showed the test suite to be maintainable, from both the metric point of view and the human assessment.
APA, Harvard, Vancouver, ISO, and other styles
39

Smith, Mary Lou. "Assessing software metrics." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ38410.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

BATISTA, CARLOS FREUD ALVES. "SOFTWARE SECURITY METRICS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10990@1.

Full text
Abstract:
PETRÓLEO BRASILEIRO S. A.
The ever-growing dependence on information technology (IT) makes secure software a key element for the continuity of the services of our present-day society. In recent years, public and private institutions have increased their investments in information security, but the number of attacks has been growing faster than our capacity to confront them, putting at risk intellectual property, the relationship of trust with customers, and the operation of services and businesses supported by IT services. Security experts state that a good share of today's information security incidents stem from vulnerabilities found in software, a component present in most information systems. To make software trustworthy with respect to security, the creation and use of security metrics will be fundamental to managing and understanding the impact of security programmes in companies. However, security metrics are shrouded in mystery and considered quite difficult to implement. This work intends to show that it is still not possible to have quantitative metrics capable of indicating the security level that software under development will come to have. Other practices are therefore needed to assure security levels a priori, that is, before the software is put into use.
Today's growing dependency on information technology (IT) makes software security a key element of IT services. In recent years, public and private institutions have raised their investment in information security; however, the number of attacks is growing faster than our power to face them, putting at risk intellectual property, customers' confidence, and businesses that rely on IT services. Experts say that most information security incidents occur due to the vulnerabilities that exist in software systems in the first place. Security metrics are essential to assess software dependability with respect to security, and also to understand and manage the impacts of security initiatives in organizations. However, security metrics are shrouded in mystery and very hard to implement. This work intends to show that there are no adequate metrics capable of indicating the security level that a software system will achieve; hence, we need other practices to assess the security of software while developing it and before deploying it.
APA, Harvard, Vancouver, ISO, and other styles
41

Singh, Rajat. "Software Metrics Tool." Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/29766.

Full text
Abstract:
In the current world, software applications are becoming more important with each passing day. They are present in all walks of life, and it is difficult to imagine a world without them. Software is part of every known industry, whether manufacturing, healthcare, or finance, and it is available in all forms: on personal laptops, mobiles, and tablets. A challenging task, however, is determining the quality of software, for which multiple measures are available in the form of software metrics. The objective of this thesis is to present an extensible tool for calculating software metrics. The proposed tool is a web application that calculates metrics and statistics for the source code files provided, and it allows the user to extend the tool by adding new metrics.
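One plausible way to realize the extensibility the abstract describes is a plug-in registry in which each metric is a single registered function. This is a hypothetical sketch of that idea, not the thesis's actual design.

```python
METRICS = {}

def metric(name):
    """Decorator: adding a new metric to the tool is one registered function."""
    def register(fn):
        METRICS[name] = fn
        return fn
    return register

@metric("lines_of_code")
def lines_of_code(source: str) -> int:
    # Count non-blank lines.
    return sum(1 for line in source.splitlines() if line.strip())

@metric("comment_lines")
def comment_lines(source: str) -> int:
    # Count C++-style line comments.
    return sum(1 for line in source.splitlines() if line.strip().startswith("//"))

def analyse(source: str) -> dict:
    """Run every registered metric over one source file's text."""
    return {name: fn(source) for name, fn in METRICS.items()}

print(analyse("int main() {\n    // entry point\n    return 0;\n}\n"))
```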
APA, Harvard, Vancouver, ISO, and other styles
42

Coppick, John. "Software Metrics for Object-Oriented Software." TopSCHOLAR®, 1990. http://digitalcommons.wku.edu/theses/1920.

Full text
Abstract:
Within this thesis the application of software complexity metrics in the object-oriented paradigm is examined. Several factors which may affect the complexity of software objects are identified and discussed. The specific applications of Maurice Halstead's Software Science and Thomas McCabe's cyclomatic-complexity metric are discussed in detail. The goals here are to identify methods for applying existing software metrics to objects and to provide a basis of analysis for future studies of the measurement and control of software complexity in the object-oriented paradigm of software development. Halstead's length, vocabulary, volume, program level, and effort metrics are defined for objects, and a limit for the McCabe cyclomatic complexity of an object is suggested. Also, tools for calculating these metrics have been developed in LISP on a Texas Instruments Explorer.
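Halstead's measures are defined by standard formulas over operator and operand counts, so they can be computed with a few lines once the counts for an object have been collected. Which tokens count as operators or operands of an object is exactly the question the thesis addresses and is left to the caller here; the example counts below are invented.

```python
import math

def halstead(n1, n2, N1, N2):
    """n1, n2: distinct operators/operands; N1, N2: total occurrences."""
    vocabulary = n1 + n2                     # n = n1 + n2
    length = N1 + N2                         # N = N1 + N2
    volume = length * math.log2(vocabulary)  # V = N * log2(n)
    level = (2 / n1) * (n2 / N2)             # estimated program level L
    effort = volume / level                  # E = V / L
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "level": level, "effort": effort}

# Invented counts for a small object/method:
print(halstead(n1=10, n2=8, N1=25, N2=20))
```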
APA, Harvard, Vancouver, ISO, and other styles
43

NASCIMENTO, Laísa Helena Oliveira do. "Abordagens para avaliação experimental de testes baseado em modelos de aplicações reativas." Universidade Federal de Campina Grande, 2008. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1560.

Full text
Abstract:
Software testing processes have been gaining ever more ground in industry. Companies have invested in the definition and formalization of their processes and, amid this change of behaviour, Model-Based Testing (MBT) presents itself as a promising testing technique. However, the use of MBT is still low, and researchers have focused on ways to overcome the barriers to wider adoption by industry. The business world is driven by processes and results; thus, the use of MBT needs to adapt to existing processes, and case studies that demonstrate the advantages of its use need to be conducted. In this work, the Goal Question Metric paradigm is used to define measurement models whose main focus is evaluating and monitoring the performance of MBT without impacting the existing testing process. The measurement models consider metrics such as effort, percentage of testable requirements covered, percentage of modified test cases, and percentage of failures, among others. The models are not tied to the MBT process presented and can be applied in any process that allows the collection of the data needed to calculate the metrics. To validate the models, case studies were conducted within the Motorola testing environment.
Software testing processes have become more common in industry. Companies are investing in the definition and formalization of their test processes and, in this context, Model-Based Testing (MBT) appears as an interesting testing technique. However, industrial adoption of MBT remains low, and researchers are focusing on how to overcome the barriers to wide adoption. Processes and results move the business world, so MBT must be adaptable to actual testing processes; for this, experiments to evaluate the results achieved with its use must be conducted. In this work, measurement models based on the Goal Question Metric methodology are proposed. The purpose is to evaluate the use of MBT without increasing the costs of the existing testing process. The models focus on aspects such as effort, testable requirements coverage, modified test cases, and failures, among others. The models are not tied to the MBT process presented; they can be applied with any process that allows the metrics to be collected. In order to validate the measurement models, case studies were conducted in the Motorola testing environment.
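A GQM measurement model is essentially a tree of goals, questions, and metrics, and can be sketched as plain data. The goal, questions, and metrics below are illustrative paraphrases of the aspects the abstract names, not the thesis's actual models.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    unit: str

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

# Illustrative model, paraphrasing aspects named in the abstract.
evaluate_mbt = Goal(
    purpose="Evaluate MBT performance without disturbing the existing test process",
    questions=[
        Question("How much effort does model-based testing add?",
                 [Metric("testing effort", "person-hours")]),
        Question("How well are testable requirements covered?",
                 [Metric("testable requirements covered", "%")]),
        Question("How stable are the generated test cases?",
                 [Metric("test cases modified", "%")]),
        Question("How effective is the testing at exposing failures?",
                 [Metric("failures found", "%")]),
    ],
)
print(evaluate_mbt.purpose, "->", len(evaluate_mbt.questions), "questions")
```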
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Xiaowei. "Object-oriented software metrics." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0013/MQ41734.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Datar, Ranjani Milind. "Metrics for software reuse." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/958791.

Full text
Abstract:
A major reengineering goal is software reuse. Effective reuse of knowledge, processes, and products from previous software developments can reduce costs and increase both productivity and quality in software projects. This thesis extensively tests five projects produced by the graduate software engineering class at Ball State University, each built to the same set of requirements. Each project is also analyzed on subjective criteria, for example documentation, use of mnemonic variable names, and ease of understanding. Based on the outcome of the testing and the subjective analysis, reusable parts are identified. Metrics are collected on all of these projects. The thesis compares the metrics collected on the modules identified for reuse with the same metrics collected on the non-reusable modules, to determine whether there is a statistically significant difference between the two groups, and thereby identifies metrics that are good predictors of reusable modules. The metrics found to be good predictors include the number of in-parameters, the number of data structure manipulations, and central calls.
Department of Computer Science
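The statistical comparison the abstract describes, testing whether a metric differs significantly between reusable and non-reusable modules, might be sketched as below; the sample values are invented for illustration.

```python
from scipy import stats

# Invented samples of one candidate metric (number of in-parameters)
# for modules judged reusable vs. non-reusable.
reusable = [2, 3, 2, 4, 3, 2, 3]
non_reusable = [6, 5, 7, 4, 6, 8, 5]

t, p = stats.ttest_ind(reusable, non_reusable)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The metric differs significantly between the two groups,")
    print("so it is a candidate predictor of reusability.")
```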
APA, Harvard, Vancouver, ISO, and other styles
46

Kiri, V. A. "Studies of survival data using variable metric optimisation methods : Applications of survivor models in the area of industrial, medical and social sciences, using software designed to exploit the practical merits of recent optimisation algorithms." Thesis, University of Bradford, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Schilbach, Jan. "Statische Codemetriken als Bestandteil dreidimensionaler Softwarevisualisierungen." Master's thesis, Universitätsbibliothek Leipzig, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-78090.

Full text
Abstract:
Static code metrics are important indicators of the quality of a software system, and each illuminates different aspects of that system. It is therefore necessary to use several code metrics in order to assess the quality of a software system as a whole. A desirable representation would additionally combine the structure of the overall system and the assessment of its individual elements in a single view. This thesis therefore investigates which metaphors are suitable for enabling such a representation. A second goal of the thesis was to generate such a visualization automatically; to this end, a generator that fulfils this requirement was developed. Techniques from generative software development were used in the design of this generator, and techniques from model-driven software development, above all from the openArchitectureWare framework, were used in its implementation. The generator can be integrated into Eclipse and is able to automatically extract the structure and metric values from a Java project. These values are then transformed into a three-dimensional model based on the open Extensible 3D standard. The generator also enabled the evaluation of two different metaphors, which was carried out as part of the thesis.
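To make the pipeline concrete, a metrics-to-X3D mapping can be sketched in a few lines, turning each class into a box whose height encodes one metric value. This is a hand-written illustration with invented class names and values; the thesis's generator is model-driven and built on openArchitectureWare rather than coded this way.

```python
# Hypothetical sketch: map per-class metric values onto an X3D scene in which
# each class becomes a box whose height encodes one metric (e.g. complexity).
classes = {"Parser": 12, "Lexer": 5, "AstNode": 2}  # class -> metric value

def to_x3d(metrics):
    shapes = []
    for i, (name, value) in enumerate(metrics.items()):
        shapes.append(
            f'  <Transform translation="{i * 3} {value / 2} 0">\n'
            f'    <Shape>\n'
            f'      <Box size="2 {value} 2"/>\n'
            f'      <Appearance><Material diffuseColor="0.2 0.5 0.8"/></Appearance>\n'
            f'    </Shape>\n'
            f'  </Transform> <!-- {name} -->\n'
        )
    return '<X3D version="3.2">\n<Scene>\n' + "".join(shapes) + "</Scene>\n</X3D>"

print(to_x3d(classes))
```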
APA, Harvard, Vancouver, ISO, and other styles
48

Hingane, Amruta. "A POT of software metrics a physiological overturn of technology of software metrics /." [Kent, Ohio] : Kent State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1227118085.

Full text
Abstract:
Thesis (M.S.)--Kent State University, 2008.
Title from PDF t.p. (viewed Jan. 21, 2010). Advisor: Austin Melton. Keywords: Software metrics; comparison; classical measurement. Includes bibliographical references (p. 89-93).
APA, Harvard, Vancouver, ISO, and other styles
49

Hingane, Amruta Laxman. "A POT of Software Metrics: A Physiological Overturn of Technology of Software Metrics." Kent State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=kent1227118085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Kwan, Pak Leung. "Design metrics forensics : an analysis of the primitive metrics in the Zage design metrics." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897490.

Full text
Abstract:
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). The questions investigated in this thesis are: Why can De be an indicator of potentially error-prone modules? Why can Di be an indicator of potentially error-prone modules? Are there any significant factors that dominate the design metrics? In this thesis, the report of the STANFINS data is evaluated using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study was chosen because it contains approximately 532 programs, 3,000 packages, and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) selected by CSC development teams; error reports were also provided by CSC personnel.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles