Dissertations / Theses on the topic 'Software quality evaluation'

Consult the top 50 dissertations / theses for your research on the topic 'Software quality evaluation.'

1

Jah, Muzamil. "Software metrics : usability and evaluation of software quality." Thesis, University West, Department of Economics and IT, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-548.

Full text
Abstract:

It is difficult to understand, let alone improve, the quality of software without knowledge of its development process and its products. A measurement process is needed to predict software development outcomes and to evaluate software products. This thesis provides a brief view of software quality, software metrics, and metrics methods that predict and measure specified quality factors of software. It further discusses quality as defined by standards such as ISO, the principal elements required for software quality, and software metrics as the measurement technique used to predict software quality. The thesis was carried out by evaluating source code written in Java using software metrics such as size metrics, complexity metrics, and defect metrics. The results show that the quality of software can be analyzed, studied, and improved through the use of software metrics.
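As an illustration of the kind of measurement described above, the sketch below (in Python, not part of the thesis) counts lines of code and uses decision keywords as a rough cyclomatic-complexity proxy for Java source files; the source folder name and the keyword set are assumptions made for the example.

    # Illustrative sketch only: simple size and complexity measures for Java files.
    # The "src" folder and the decision-keyword list are assumptions for this example.
    import re
    from pathlib import Path

    KEYWORDS = re.compile(r"\b(if|for|while|case|catch)\b")
    OPERATORS = re.compile(r"(&&|\|\|)")

    def measure_java_file(path: Path) -> dict:
        text = path.read_text(encoding="utf-8", errors="ignore")
        lines = text.splitlines()
        loc = sum(1 for line in lines if line.strip())            # non-blank lines
        comments = sum(1 for line in lines
                       if line.strip().startswith(("//", "/*", "*")))
        # McCabe-style approximation: 1 + number of decision points in the file
        complexity = 1 + len(KEYWORDS.findall(text)) + len(OPERATORS.findall(text))
        return {"file": path.name, "loc": loc,
                "comment_lines": comments, "approx_complexity": complexity}

    if __name__ == "__main__":
        for java_file in Path("src").rglob("*.java"):
            print(measure_java_file(java_file))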

2

Barney, Sebastian. "Software Quality Alignment : Evaluation and Understanding." Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00492.

Full text
Abstract:
Background: The software development environment is growing increasingly complex, with a greater diversity of stakeholders involved in product development. Moves towards global software development with onshoring, offshoring, insourcing and outsourcing have seen a range of stakeholders introduced to the software development process, each with their own incentives and understanding of their product. These differences between the stakeholders can be especially problematic with regard to aspects of software quality. The aspects are often not clearly and explicitly defined for a product, but still essential for its long-term sustainability. Research shows that software projects are more likely to succeed when the stakeholders share a common understanding of software quality. Objectives: This thesis has two main objectives. The first is to develop a method to determine the level of alignment between stakeholders with regard to the priority given to aspects of software quality. Given the ability to understand the levels of alignment between stakeholders, the second objective is to identify factors that support and impair this alignment. Both the method and the identified factors will help software development organisations create work environments that are better able to foster a common set of priorities with respect to software quality. Method: The primary research method employed throughout this thesis is case study research. In total, six case studies are presented, all conducted in large or multinational companies. A range of data collection techniques have been used, including questionnaires, semi-structured interviews and workshops. Results: A method to determine the level of alignment between stakeholders on the priority given to aspects of software quality is presented—the Stakeholder Alignment Assessment Method for Software Quality (SAAM-SQ). It is developed by drawing upon a systematic literature review and the experience of conducting a related case study. The method is then refined and extended through the experience gained from its repeated application in a series of case studies. These case studies are further used to identify factors that support and impair alignment in a range of different software development contexts. The contexts studied include onshore insourcing, onshore outsourcing, offshore insourcing and offshore outsourcing. Conclusion: SAAM-SQ is found to be robust, being successfully applied to case studies covering a range of different software development contexts. The factors identified from the case studies as supporting or impairing alignment confirm and extend research in the global software development domain.
3

ALVARO, Alexandre. "A software component quality framework." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1372.

Full text
Abstract:
A major challenge in Component-Based Software Engineering (CBSE) is the quality of the components used in a system. The reliability of a component-based system depends on the reliability of the components from which it is composed. In CBSE, the search, selection, and evaluation of software components is considered a key point for the effective development of component-based systems. So far, the software industry has concentrated on the functional aspects of software components, leaving aside one of the most demanding tasks: the evaluation of their quality. If assuring the quality of components developed in-house is already a costly task, assuring quality when using externally developed components, for which source code and detailed documentation are often unavailable, becomes an even greater challenge. Thus, this thesis introduces a Software Component Quality Framework based on well-defined, complementary modules intended to assure the quality of software components. Finally, an experimental study was designed and executed in order to analyse the feasibility of the proposed framework.
4

CARVALHO, Fernando Ferreira de. "An embedded software component quality evaluation methodology." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2412.

Full text
Abstract:
One of the greatest challenges for the embedded industry is to deliver products with a high level of quality and functionality, at low cost and with a short development time, bringing them to market quickly and thereby increasing the return on investment. The cost and development-time requirements have been addressed quite successfully by component-based software engineering (CBSE) combined with component reuse. However, adopting the CBSE approach without properly verifying the quality of the components used can have catastrophic consequences (Jezequel et al., 1997). Appropriate mechanisms for searching, selecting, and evaluating the quality of components are considered key points in the adoption of CBSE. In this light, this thesis proposes a methodology for evaluating the quality of embedded software components under different aspects. The idea is to resolve the lack of consistency among the ISO/IEC 9126, 14598, and 25000 standards, including the software component context and extending it to the embedded systems domain. These standards provide high-level definitions of characteristics and metrics for software products, but they do not provide ways to use them effectively, making it very difficult to apply them without acquiring further information from other sources. The methodology is composed of four complementary modules in pursuit of quality: an evaluation process, a quality model, evaluation techniques grouped by quality levels, and a metrics approach. In this way, it supports embedded systems developers in the component selection process, assessing which component best fits the system requirements. It can also be used by third-party evaluators hired by suppliers in order to gain credibility for their components. The methodology makes it possible to evaluate the quality of an embedded component before it is stored in a repository system, especially in the context of the robust framework for software reuse proposed by Almeida (Almeida, 2004).
5

Kristensson, David. "Quality assurance in software development & evaluation of existing quality systems." Thesis, University West, Department of Economics and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-522.

Full text
6

Owrak, Ali. "A quality evaluation model for service-oriented software." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499901.

Full text
7

BERTRAN, ISELA MACIA. "EVALUATION OF SOFTWARE QUALITY BASED ON UML MODELS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13748@1.

Full text
Abstract:
One of the goals of software engineering is the development of high quality software at a small cost an in a short period of time. In this context, several techniques have been defined for controlling the quality of software designs. Furthermore, many metrics-based mechanisms have been defined for detecting software design flaws. Most of these mechanisms and techniques focus on analyzing the source code. However, in order to reduce unnecessary rework it is important to use quality analysis techniques that allow the detection of design flaws earlier in the development cycle. We believe that these techniques should analyze design flaws starting from software models. This dissertation proposes: (i) a set of strategies to detect, in UML models, specific and recurrent design problems: Long Parameter List, God Class, Data Class, Shotgun Surgery, Misplaced Class and God Package; (ii) and the use of QMOOD quality model to analyze class diagrams. To automate the application of these mechanisms we implemented a tool: the QCDTool. The detection strategies and QMOOD model were evaluated in the context of two experimental studies. The first study analyzed the accuracy, precision and recall of the proposed detection strategies. The second study analyzed the utility of use QMOOD quality model in the class diagrams. The results of the first study have shown the benefits and drawbacks of the application in class diagrams of some of the proposed detection strategies. The second study shows that it was possible to identify, based on class diagrams, variations of the design properties and consequently, of the quality attributes in the analyzed systems.
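As a reading aid, the sketch below shows a generic metrics-based detection strategy of the kind used for flagging a God Class; the metric names (WMC, ATFD, TCC) follow the common literature, and the threshold and example values are illustrative assumptions, not the dissertation's calibrated strategies.

    # Illustrative God Class detection strategy; thresholds and data are assumptions.
    from dataclasses import dataclass

    @dataclass
    class ClassMetrics:
        name: str
        wmc: int      # weighted methods per class (complexity)
        atfd: int     # accesses to foreign data
        tcc: float    # tight class cohesion, in [0, 1]

    def is_god_class(m: ClassMetrics, wmc_high=47, atfd_few=5, tcc_low=0.33) -> bool:
        # Flag classes that are complex, use many foreign attributes, and have low cohesion.
        return m.wmc >= wmc_high and m.atfd > atfd_few and m.tcc < tcc_low

    classes = [
        ClassMetrics("OrderManager", wmc=63, atfd=9, tcc=0.12),   # hypothetical values
        ClassMetrics("Invoice", wmc=12, atfd=1, tcc=0.70),
    ]
    for c in classes:
        if is_god_class(c):
            print(f"Potential God Class: {c.name}")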
8

Jabangwe, Ronald. "Software Quality Evaluation for Evolving Systems in Distributed Development Environments." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00613.

Full text
Abstract:
Context: There is an overwhelming prevalence of companies developing software in global software development (GSD) contexts. The existing body of knowledge, however, falls short of providing comprehensive empirical evidence on the implication of GSD contexts on software quality for evolving software systems. Therefore there is limited evidence to support practitioners that need to make informed decisions about ongoing or future GSD projects. Objective: This thesis work seeks to explore changes in quality, as well as to gather confounding factors that influence quality, for software systems that evolve in GSD contexts. Method: The research work in this thesis includes empirical work that was performed through exploratory case studies. This involved analysis of quantitative data consisting of defects as an indicator for quality, and measures that capture software evolution, and qualitative data from company documentations, interviews, focus group meetings, and questionnaires. An extensive literature review was also performed to gather information that was used to support the empirical investigations. Results: Offshoring software development work, to a location that has employees with limited or no prior experience with the software product, as observed in software transfers, can have a negative impact on quality. Engaging in long periods of distributed development with an offshore site and eventually handing over all responsibilities to the offshore site can be an alternative to software transfers. This approach can alleviate a negative effect on quality. Finally, the studies highlight the importance of taking into account the GSD context when investigating quality for software that is developed in globally distributed environments. This helps with making valid inferences about the development settings in GSD projects in relation to quality. Conclusion: The empirical work presented in this thesis can be useful input for practitioners that are planning to develop software in globally distributed environments. For example, the insights on confounding factors or mitigation practices that are linked to quality in the empirical studies can be used as input to support decision-making processes when planning similar GSD projects. Consequently, lessons learned from the empirical investigations were used to formulate a method, GSD-QuID, for investigating quality using defects for evolving systems. The method is expected to help researchers avoid making incorrect inferences about the implications of GSD contexts on quality for evolving software systems, when using defects as a quality indicator. This in turn will benefit practitioners that need the information to make informed decisions for software that is developed in similar circumstances.
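To make the use of defects as a quality indicator concrete, a minimal sketch follows; the releases, defect counts, size figures, and site labels are hypothetical and are not taken from the studied systems.

    # Hypothetical example: defect density per release as a simple quality indicator.
    releases = [
        {"release": "R1", "defects": 120, "kloc": 250, "site": "onshore"},
        {"release": "R2", "defects": 180, "kloc": 265, "site": "during transfer"},
        {"release": "R3", "defects": 140, "kloc": 280, "site": "offshore"},
    ]

    for r in releases:
        density = r["defects"] / r["kloc"]   # defects per thousand lines of code
        print(f"{r['release']} ({r['site']}): {density:.2f} defects/KLOC")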
9

Powale, Kalkin. "Automotive Powertrain Software Evaluation Tool." Master's thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-233186.

Full text
Abstract:
Software is a key differentiator and driver of innovation in the automotive industry. The major challenges for software development are increasing complexity, shorter time-to-market, rising development cost, and growing demand for quality assurance. Complexity is increasing due to emission legislation, product variants, and new communication technologies being interfaced with the vehicle. Development time is shrinking because of market competition, which requires faster feedback loops for the verification and validation of developed functionality. The increase in development cost has two components: pre-launch cost, which covers error correction during the development stages, and post-launch cost, which covers warranty and guarantee claims. As development time passes, the cost of correcting an error also increases, so it is important to detect errors as early as possible. All these factors affect software quality, and there are several cases where an Original Equipment Manufacturer (OEM) has had to recall a product because of a quality defect. Hence, the demand for software quality assurance has increased. A solution to these challenges can be early quality evaluation within a continuous integration environment. The AUTomotive Open System ARchitecture (AUTOSAR), the most prominent reference architecture in today's automotive industry, is used to describe software components and interfaces, and it provides standardised software component architecture elements. It was created to address the issue of growing complexity. The existing AUTOSAR environment does include software quality measures, such as schema validation and acceptance-test protocols, but it lacks quality specifications for non-functional qualities such as maintainability and modularity. A tool is therefore required that evaluates AUTOSAR-based software architectures and gives objective feedback regarding quality. This thesis aims to provide such a quality measurement tool for the evaluation of AUTOSAR-based software architectures. The tool reads the architecture information from an AUTOSAR Extensible Markup Language (ARXML) file and provides configurability, continuous evaluation, and objective feedback on software quality characteristics. The tool was applied to a transmission control project, and the results were validated by industry experts.
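The sketch below shows one way such a tool might read component and port information from an ARXML file to derive a simple structural indicator (ports per component). It is not the thesis's tool; the element names reflect common AUTOSAR 4.x schemas but should be treated as assumptions, and the file name is hypothetical.

    # Illustrative ARXML reading; element names and the file name are assumptions.
    import xml.etree.ElementTree as ET
    from collections import Counter

    def local_name(tag: str) -> str:
        return tag.split("}")[-1]          # strip the XML namespace prefix

    def ports_per_component(arxml_path: str) -> Counter:
        counts = Counter()
        root = ET.parse(arxml_path).getroot()
        for element in root.iter():
            if local_name(element.tag) == "APPLICATION-SW-COMPONENT-TYPE":
                name_el = next((e for e in element if local_name(e.tag) == "SHORT-NAME"), None)
                name = name_el.text if name_el is not None else "<unnamed>"
                ports = [e for e in element.iter()
                         if local_name(e.tag) in ("P-PORT-PROTOTYPE", "R-PORT-PROTOTYPE")]
                counts[name] = len(ports)
        return counts

    if __name__ == "__main__":
        for component, n_ports in ports_per_component("TransmissionControl.arxml").items():
            print(f"{component}: {n_ports} ports")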
10

Mårtensson, Frans. "Software Architecture Quality Evaluation : Approaches in an Industrial Context." Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00313.

Full text
Abstract:
Software architecture has been identified as an increasingly important part of software development. The software architecture helps the developer of a software system to define the internal structure of the system. Several methods for evaluating software architectures have been proposed in order to assist the developer in creating a software architecture that will have a potential to fulfil the requirements on the system. Many of the evaluation methods focus on evaluation of a single quality attribute. However, in an industrial system there are normally requirements on several quality aspects of the system. Therefore, an architecture evaluation method that addresses multiple quality attributes, e.g., performance, maintainability, testability, and portability, would be more beneficial. This thesis presents research towards a method for evaluation of multiple quality attributes using one software architecture evaluation method. A prototype-based evaluation method is proposed that enables evaluation of multiple quality attributes using components of a system and an approximation of its intended runtime environment. The method is applied in an industrial case study where communication components in a distributed realtime system are evaluated. The evaluation addresses performance, maintainability, and portability for three alternative components using a single set of software architecture models and a prototype framework. The prototype framework enables the evaluation of different components and component configurations in the software architecture while collecting data in an objective way. Finally, this thesis presents initial work towards incorporating evaluation of testability into the method. This is done through an investigation of how testability is interpreted by different organizational roles in a software developing organization and which measures of source code that they consider affecting testability.
11

Mårtensson, Frans. "Software architecture quality evaluation : approaches in an industrial context /." Karlskrona : Blekinge Institute of Technology, 2006. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/3e821fbd7a66542cc1257169002ad63c?OpenDocument.

Full text
12

Barkmann, Henrike. "Quantitative Evaluation of Software Quality Metrics in Open-Source Projects." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2562.

Full text
Abstract:

The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires considerable effort. To help solve this problem, we developed tools for metrics analysis of a large number of software projects (146 projects with about 70,000 classes and interfaces and over 11 million lines of code). Moreover, the validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. Based on our statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. In addition, we present early results on typical metrics values and possible thresholds.
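A minimal sketch of the correlation check described above follows; the per-class metric values are invented for illustration, and SciPy is assumed to be available.

    # Illustrative correlation analysis between two object-oriented metrics.
    from scipy.stats import spearmanr

    # Hypothetical per-class measurements: weighted methods per class and lines of code.
    wmc = [5, 12, 33, 7, 21, 54, 9, 16, 40, 3]
    loc = [60, 150, 420, 95, 260, 700, 110, 180, 530, 35]

    rho, p_value = spearmanr(wmc, loc)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    if abs(rho) > 0.8 and p_value < 0.05:
        print("Strongly correlated metrics need not be validated independently.")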

13

Wilburn, Cathy A. "Using the Design Metrics Analyzer to improve software quality." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/902489.

Full text
Abstract:
Effective software engineering techniques are needed to increase the reliability of software systems, to increase the productivity of development teams, and to reduce the costs of software development. Companies search for an effective software engineering process as they strive to reach higher process maturity levels and produce better software. To aid in this quest for better methods of software engineering, the Design Metrics Research Team at Ball State University has analyzed university and industry software in order to detect error-prone modules. The research team has developed, tested, and validated its design metrics and found them to be highly successful. These metrics were typically collected and calculated by hand. So that they can be collected more consistently, more accurately, and faster, the Design Metrics Analyzer for Ada (DMA) was created. The DMA collects metrics from the submitted files at the subprogram level. The metrics results are then analyzed to yield a list of stress points: modules that are considered error-prone or difficult for developers. This thesis describes the Design Metrics Analyzer, explains its output, and describes how it functions. Ways that the DMA can be used in the software development life cycle are also discussed.
Department of Computer Science
14

Bogen, Manfred Adolf. "A framework for quality of service evaluation in distributed environments." Thesis, University of Nottingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324479.

Full text
15

Zhu, Liming, Computer Science & Engineering, Faculty of Engineering, UNSW. "Software architecture evaluation for framework-based systems." Awarded by: University of New South Wales, Computer Science and Engineering, 2007. http://handle.unsw.edu.au/1959.4/28250.

Full text
Abstract:
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: a new technique was designed to extract general scenarios and tactics from framework-related architectural patterns, intended to complement the current scenario development process. The feasibility of the technique was validated through a case study, and significant improvements in scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: a new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study and significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute-specific analysis (performance only): a model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. It leads to benchmarks that produce more representative measures of the eventual application, and it reduces the complexity behind the load testing suite and the framework-specific performance data collecting utilities. 4) Trade-off and sensitivity analysis: a new technique was designed to improve the Analytic Hierarchy Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify: 1) the trade-offs implied by an architecture alternative, along with their magnitude; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling changes in quality attribute priorities.
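For readers unfamiliar with AHP, the sketch below shows the core calculation used in such trade-off analysis: deriving priority weights and a consistency ratio from a pairwise comparison matrix. The matrix entries are illustrative, not data from the commercial project.

    # Illustrative AHP weight derivation; the comparison matrix is hypothetical.
    import numpy as np

    # Pairwise comparisons of three framework alternatives on Saaty's 1-9 scale.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    k = np.argmax(eigenvalues.real)
    weights = np.abs(eigenvectors[:, k].real)
    weights /= weights.sum()                       # normalised principal eigenvector

    n = A.shape[0]
    consistency_index = (eigenvalues[k].real - n) / (n - 1)
    random_index = 0.58                            # Saaty's random index for n = 3
    print("weights:", np.round(weights, 3),
          "consistency ratio:", round(consistency_index / random_index, 3))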
16

Yelleswarapu, Mahesh Chandra. "An Assessment of the Usability Quality Attribute in Open Source Software." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2193.

Full text
Abstract:
Usability is one of the important quality attributes. Open source software products are well known for their efficiency and effectiveness, but lack of usability in OSS (Open Source Software) products will result in poor usage of the product. In OSS development there is no usability team, and one could therefore expect the usability of these products to be low. In order to find out whether this was really the case, we carried out a usability evaluation of four OSS products using a questionnaire based on a review of the existing literature. The questionnaire was presented to 17 people who work with open source products. The evaluation showed that the overall usability was above average for all four products. It seems, however, that the lack of a usability team has made the OSS products less easy to use for inexperienced users. Based on the responses to the questionnaire and a literature review, a set of guidelines and hints for increasing the usability of OSS products was defined.
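A minimal sketch of how such questionnaire responses can be aggregated into a per-product score is given below; the product names, the 1-5 scale, and the response values are hypothetical, not the thesis's data.

    # Hypothetical aggregation of Likert-scale usability responses per product.
    from statistics import mean

    responses = {   # product -> mean item score per respondent (1-5 scale)
        "ProductA": [4.1, 3.8, 4.4, 3.9],
        "ProductB": [2.9, 3.2, 3.0, 3.4],
    }

    midpoint = 3.0   # "above average" threshold on a 1-5 scale
    for product, scores in responses.items():
        score = mean(scores)
        verdict = "above average" if score > midpoint else "at or below average"
        print(f"{product}: usability {score:.2f} ({verdict})")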
17

Kwan, Pak Leung. "Design metrics forensics : an analysis of the primitive metrics in the Zage design metrics." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897490.

Full text
Abstract:
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). The questions investigated in this thesis are: Why can De be an indicator of potential error modules? Why can Di be an indicator of potential error modules? Are there any significant factors that dominate the design metrics? In this thesis, the report of the STANFINS data is evaluated using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study was chosen because it contains approximately 532 programs, 3,000 packages, and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) which were selected by CSC development teams. Error reports were also provided by CSC personnel.
Department of Computer Science
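The sketch below illustrates the kind of analysis described in the abstract above: computing D(G) = De + Di per module and relating it to error counts with a correlation coefficient and a simple regression. The numbers are invented; they are not the STANFINS data.

    # Illustrative analysis of D(G) = De + Di against error counts; data are made up.
    import numpy as np
    from scipy.stats import pearsonr

    de = np.array([4, 9, 2, 7, 12, 5])       # external (architectural) design metric
    di = np.array([10, 22, 6, 15, 30, 11])   # internal (detailed) design metric
    errors = np.array([1, 5, 0, 3, 8, 2])    # error reports per module

    dg = de + di                              # composite design metric D(G)
    r, p = pearsonr(dg, errors)
    slope, intercept = np.polyfit(dg, errors, 1)   # simple linear regression
    print(f"Pearson r = {r:.2f} (p = {p:.3f}); errors ~= {slope:.2f} * D(G) + {intercept:.2f}")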
18

Rossi, Pablo Hernan. "Software design measures for distributed enterprise Information systems." RMIT University. Computer Science and Information Technology, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20081211.164307.

Full text
Abstract:
Enterprise information systems are increasingly being developed as distributed information systems. Quality attributes of distributed information systems, as in the centralised case, should be evaluated as early and as accurately as possible in the software engineering process. In particular, software measures associated with quality attributes of such systems should consider the characteristics of modern distributed technologies. Early design decisions have a deep impact on the implementation of distributed enterprise information systems and thus on the ultimate quality of the software as an operational entity. Because the distributed-software engineering process affords software engineers a number of design alternatives, it is important to develop tools and guidelines that can be used to assess and compare design artefacts quantitatively. This dissertation contributes to the field of software engineering by proposing and evaluating software design measures for distributed enterprise information systems. In previous research, measures developed for distributed software have focused on code attributes and thus only provide feedback towards the end of the software engineering process. In contrast, this thesis proposes a number of specific design measures that provide quantitative information before the implementation. These measures capture attributes of the structure and behaviour of distributed information systems that are deemed important for assessing their quality attributes, based on analysis of the problem domain. The measures were evaluated theoretically and empirically as part of a well-defined methodology. On the one hand, we followed a formal framework based on measurement theory in order to carry out the theoretical validation of the proposed measures. On the other hand, the suitability of the measures to be used as indicators of quality attributes was evaluated empirically with a robust statistical technique for exploratory research. The data sets analysed were gathered after running several experiments and replications with a distributed enterprise information system. The results of the empirical evaluation show that most of the proposed measures are correlated with the quality attributes of interest, and that most of these measures may be used, individually or in combination, for the estimation of these quality attributes, namely efficiency, reliability and maintainability. The design of a distributed information system is modelled as a combination of its structure, which reflects static characteristics, and its behaviour, which captures complementary dynamic aspects. The behavioural measures showed slightly better individual and combined results than the structural measures in the experimentation. This was in line with our expectations, since the measures were evaluated as indicators of non-functional quality attributes of the operational system. On the other hand, the structural measures provide useful feedback that is available earlier in the software engineering process. Finally, we developed a prototype application to collect the proposed measures automatically and examined typical real-world scenarios where the measures may be used to make design decisions as part of the software engineering process.
19

Wnukiewicz, Karol Kazimierz. "The role of quality requirements in software architecture design." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2253.

Full text
Abstract:
An important issue during architectural design is that, besides functional requirements, software architecture is greatly influenced by quality requirements [9][2][7], which are often neglected. The earlier quality requirements are considered, the less effort is needed later in the software lifecycle to ensure sufficient software quality levels; errors due to their lack of fulfilment are the most expensive and difficult to correct. Attention to quality requirements is therefore crucial during architectural design. The problem is not only to gather a system's quality requirements but to establish a methodology that helps to deal with them during software development. The literature has paid some attention to software architecture in the context of quality requirements, but effective solutions in this area are still lacking. To alleviate the problem, this thesis lays out important concepts and notions of quality requirements in a way that they can be used to drive design decisions and to evaluate whether an architecture fulfils these requirements. Important concepts of the software architecture area are presented to show how important quality requirements are during design and what the consequences are when they are missing from a software system. Moreover, a quality requirement-oriented design method is proposed as an outcome of the literature survey; this method takes quality requirements into account first, before the core functionality is placed. Besides the conceptual solution to the identified problems, the thesis also suggests a practical method for handling quality requirements during design, and a recommendation framework for choosing the most suitable architectural pattern from a set of quality attributes is proposed. Since the literature provides insufficient qualitative information about quality requirement issues in terms of software architectures, empirical research is conducted as a means of gathering the required data, and a systematic approach to support and analyse architectural designs in terms of quality requirements is prepared. Finally, a quality requirement-oriented and pattern-based design method is proposed as a result of investigating patterns as a tool for addressing quality requirements at different abstraction levels of a design. The research is concerned with the analysis of software architectures against one or more desired software qualities that ought to be achieved at the architectural level.
20

Bhattrai, Gopendra R. "An empirical study of software design balance dynamics." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/958786.

Full text
Abstract:
The Design Metrics Research Team in the Computer Science Department at Ball State University has been engaged in developing and validating quality design metrics since 1987. Since then a number of design metrics have been developed and validated. One of the design metrics developed by the research team is design balance (DB). This thesis is an attempt to validate the metric DB. In this thesis, results of the analysis of five systems are presented. The main objective of this research is to examine if DB can be used to evaluate the complexity of a software design and hence the quality of the resulting software. Two of the five systems analyzed were student projects and the remaining three were from industry. The five systems analyzed were written in different languages, had different sizes and exhibited different error rates.
Department of Computer Science
21

Dixon, Mark Brian. "An automated approach to the measurement and evaluation of software quality during development." Thesis, Leeds Beckett University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267449.

Full text
22

Stineburg, Jeffrey. "Software reliability prediction based on design metrics." Virtual Press, 1999. http://liblink.bsu.edu/uhtbin/catkey/1154775.

Full text
Abstract:
This study presents a new model for predicting software reliability based on design metrics. An introduction to the problem of software reliability is followed by a brief overview of software reliability models, including a discussion of some of the issues associated with them. The intractability of validating life-critical software is presented: such validation is shown to require extended periods of test time that are impractical in real-world situations. This problem is also inherent in the fault-tolerant software systems currently being implemented in critical applications. The design metrics developed at Ball State University are proposed as the basis of a new model for predicting software reliability from information available during the design phase of development. The thesis investigates the proposition that a relationship exists between the design metric D(G) and the errors that are found in the field. A study performed on a subset of a large defense software system discovered evidence to support the proposition.
Department of Computer Science
23

Dorsey, Edward Vernon. "The automated assessment of computer software documentation quality using the objectives/principles/attributes framework." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-03302010-020606/.

Full text
24

Sathi, Veer Reddy, and Jai Simha Ramanujapura. "A Quality Criteria Based Evaluation of Topic Models." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13274.

Full text
Abstract:
Context. Software testing is the process where a particular software product or system is executed in order to find bugs or issues that may otherwise degrade its performance. Software testing is usually done based on pre-defined test cases. A test case can be defined as a set of terms or conditions that are used by software testers to determine whether a particular system under test operates as it is supposed to. However, in numerous situations, test cases can be so many that executing each and every one is practically impossible because of various constraints. This causes testers to prioritize the functions that are to be tested, and this is where the ability of topic models can be exploited. Topic models are unsupervised machine learning algorithms that can explore large corpora of data and classify them by identifying the hidden thematic structure in those corpora. Using topic models for test case prioritization can save a lot of time and resources. Objectives. In our study, we provide an overview of the amount of research that has been done in relation to topic models. We want to uncover various quality criteria, evaluation methods, and metrics that can be used to evaluate topic models. Furthermore, we compare the performance of two topic models that are optimized for different quality criteria on a particular interpretability task, and thereby determine the topic model that produces the best results for that task. Methods. A systematic mapping study was performed to gain an overview of the previous research on the evaluation of topic models, focusing on identifying the quality criteria, evaluation methods, and metrics that have been used to evaluate them. The results of the mapping study were then used to identify the most used quality criteria, and the evaluation methods related to those criteria were used to generate two optimized topic models. An experiment was conducted in which the topics generated by those two models were provided to a group of 20 subjects; the task was designed to evaluate the interpretability of the generated topics. The performance of the two topic models was then compared using Precision, Recall, and F-measure. Results. Based on the results of the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created, one optimized for the quality criterion generalizability (TG) and one for interpretability (TI), using the Perplexity and Point-wise Mutual Information (PMI) measures respectively. For the selected metrics, TI showed better performance in Precision and F-measure than TG, while the performance of the two models was comparable in Recall. The total run time of TI was also found to be significantly higher than that of TG: 46 hours and 35 minutes for TI versus 3 hours and 30 minutes for TG. Conclusions. Looking at the F-measure, it can be concluded that the interpretability topic model (TI) performs better than the generalizability topic model (TG); however, while TI performed better in Precision, Recall was comparable, and the computational cost of creating TI is significantly higher than for TG. Hence, we conclude that the selection of the topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability of the model and precision is important, such as for the prioritization of test cases based on content, then TI would be the right choice, provided time is not a limiting factor. However, if the task aims at generating topics that provide a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the most suitable choice, making it better for time-critical tasks.
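For reference, the sketch below shows how Precision, Recall, and F-measure are computed from raw counts of the kind used to compare TI and TG; the counts themselves are placeholders, not the experiment's results.

    # Placeholder counts; shows only how the comparison metrics are computed.
    def precision_recall_f1(tp: int, fp: int, fn: int):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    for model, (tp, fp, fn) in {"TI": (42, 8, 15), "TG": (38, 17, 16)}.items():
        p, r, f1 = precision_recall_f1(tp, fp, fn)
        print(f"{model}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")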
25

Aslan, Serdar. "Digital Educational Games: Methodologies for Development and Software Quality." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73368.

Full text
Abstract:
Development of a game in the form of software for game-based learning poses significant technical challenges for educators, researchers, game designers, and software engineers. The game development consists of a set of complex processes requiring multi-faceted knowledge in multiple disciplines such as digital graphic design, education, gaming, instructional design, modeling and simulation, psychology, software engineering, visual arts, and the learning subject area. Planning and managing such a complex multidisciplinary development project require unifying methodologies for development and software quality evaluation and should not be performed in an ad hoc manner. This dissertation presents such methodologies named: GAMED (diGital educAtional gaMe dEvelopment methoDology) and IDEALLY (dIgital eDucational gamE softwAre quaLity evaLuation methodologY). GAMED consists of a body of methods, rules, and postulates and is embedded within a digital educational game life cycle. The life cycle describes a framework for organization of the phases, processes, work products, quality assurance activities, and project management activities required to develop, use, maintain, and evolve a digital educational game from birth to retirement. GAMED provides a modular structured approach for overcoming the development complexity and guides the developers throughout the entire life cycle. IDEALLY provides a hierarchy of 111 indicators consisting of 21 branch and 90 leaf indicators in the form of an acyclic graph for the measurement and evaluation of digital educational game software quality. We developed the GAMED and IDEALLY methodologies based on the experiences and knowledge we have gained in creating and publishing four digital educational games that run on the iOS (iPad, iPhone, and iPod touch) mobile devices: CandyFactory, CandySpan, CandyDepot, and CandyBot. The two methodologies provide a quality-centered structured approach for development of digital educational games and are essential for accomplishing demanding goals of game-based learning. Moreover, classifications provided in the literature are inadequate for the game designers, engineers and practitioners. To that end, we present a taxonomy of games that focuses on the characterization of games.
Ph. D.
26

Santos, Daniel Soares. "Quality Evaluation Model for Crisis and Emergency Management Systems-of-Systems." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-10072017-162919/.

Full text
Abstract:
Systems-of-Systems (SoS) play an important, even essential, role for society as a whole. They are complex software-intensive systems resulting from the interoperability of independent constituent systems that work together to achieve more complex missions. SoS have emerged especially in critical application domains, and therefore a high level of quality must be assured during their development and evolution. However, dealing with the quality of SoS still presents great challenges, as SoS have a set of unique characteristics that can directly affect their quality, and there are no comprehensive models that can support the quality evaluation of SoS. Motivated by this scenario, the main contribution of this Master's project is an SoS evaluation model, specifically addressing the crisis/emergency management domain, built in the context of a large international research project. The proposed model covers important evaluation activities and considers SoS characteristics and challenges not usually addressed by other models. The model was applied to evaluate a crisis/emergency management SoS, and our results have shown its viability for the effective management of SoS quality.
27

Huamaní, Vargas Andre Henry, and Navarro Javier Danilo Watanabe. "Propuesta de implementación de un modelo para la evaluación de la calidad del producto de software para una empresa consultora TI." Master's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/652540.

Full text
Abstract:
This research work studies the Software Certification service line of an IT consulting company and addresses the problems in the evaluation of software development products that directly impact the fulfilment of its SLAs, generating large economic losses. The quantitative analysis identified inadequate evaluation techniques, with an impact on the fulfilment of the thresholds defined in each SLA, which has generated large economic losses in recent years due to penalties. Given this situation, the implementation of a software product quality evaluation model is proposed, providing guidelines in accordance with international standards and practices for quality evaluation and helping to increase product quality and customer satisfaction, since the economic losses in evidence are increasing annually. This work gives a general review of existing software product quality evaluation standards, carries out an evaluation of the company's compliance with the ISO/IEC 25010 standard, and proposes an improvement plan. In conclusion, the implementation of the proposal is recommended as strategic support for the fulfilment of the company's strategic objectives, reducing the risk of economic losses and increasing the capacity to execute new STDs (Technical Development Requests), which will allow the company to be more profitable and provide a better quality of service.
Research work
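A minimal sketch of an ISO/IEC 25010-style aggregation follows: sub-scores for the standard's eight product quality characteristics are combined with weights into an overall score. The weights and scores are hypothetical; the proposed model's own scheme is not reproduced here.

    # Hypothetical weighted aggregation over the ISO/IEC 25010 product quality characteristics.
    characteristics = {   # characteristic -> (weight, score on a 0-100 scale)
        "functional suitability": (0.20, 78),
        "performance efficiency": (0.15, 64),
        "compatibility":          (0.05, 90),
        "usability":              (0.10, 72),
        "reliability":            (0.20, 58),
        "security":               (0.10, 81),
        "maintainability":        (0.15, 49),
        "portability":            (0.05, 88),
    }

    assert abs(sum(w for w, _ in characteristics.values()) - 1.0) < 1e-9
    overall = sum(weight * score for weight, score in characteristics.values())
    print(f"Overall product quality score: {overall:.1f}/100")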
28

Land, Lesley Pek Wee, Information Systems Technology & Management, Australian School of Business, UNSW. "Software group reviews and the impact of procedural roles on defect detection performance." Awarded by: University of New South Wales, School of Information Systems, Technology and Management, 2000. http://handle.unsw.edu.au/1959.4/21838.

Full text
Abstract:
Software reviews (inspections) have received widespread attention for ensuring the quality of software by finding and repairing defects in software products. A typical review process consists of two stages critical for defect detection: individual review followed by group review. This thesis addresses two attributes to improve our understanding of the task model: (1) the need for review meetings, and (2) the use of roles in meetings. The controversy over review meeting effectiveness has been consistently raised in the literature. Proponents maintain that the review meeting is the crux of the review process, resulting in group synergism and qualitative benefits (e.g. user satisfaction). Opponents argue against meetings because the costs of organising and conducting them are high and there is no net meeting gain. The persistence of these diverse views is the main motivation behind this thesis. Although commonly prescribed in meetings, roles have not yet been empirically validated. Three procedural roles (moderator, reader, recorder) were considered. A conceptual framework on software reviews was developed, from which the main research questions were identified, and two experiments were conducted. Review performance was operationalised in terms of true defects and false positives; the review product was COBOL code. The results indicated that, in terms of true defects, group reviews outperformed the average individual but not nominal group reviews (the aggregate of individual reviews). However, groups have the ability to filter false positives from the individuals' findings. Roles provided limited benefits in improving group reviews; their main function is to reduce process loss by encouraging systematic consideration of the individuals' findings. When two or more reviewers find a defect during individual reviews, it is likely to be carried through to the meeting (plurality effect). Groups employing roles reported more 'new' false positives (not identified during preparation) than groups without roles. Overall, subjects' ability at defect detection was low. This thesis suggests that reading technologies may be helpful for improving reviewer performance, and the inclusion of an author role may also reduce the level of false positive detection. The results have implications for the design and support of the software review process.
29

Karlin, Ievgen. "An Evaluation of NLP Toolkits for Information Quality Assessment." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-22606.

Full text
Abstract:
Documentation is often the first source that can help a user solve problems or that states the conditions of use of a product. That is why it should be clear and understandable. But what does "understandable" mean? And how can one detect whether a text is unclear? This thesis answers those questions. The main idea of the present work is to measure the clarity of text information using natural language processing capabilities. There are three global steps to achieve this goal: to define criteria of poor clarity of text information, to evaluate different natural language toolkits and find the one suitable for us, and to implement a prototype system that, given a text, measures text clarity. The thesis project is planned to be included in VizzAnalyzer (a quality analysis tool that processes information at the structural level), and its main task is to perform a clarity analysis of text information extracted by VizzAnalyzer from different XML files.
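As a simple illustration of measuring text clarity, the sketch below computes two crude readability proxies (average sentence length and the share of long words); these are generic indicators, not the criteria or the NLP toolkit pipeline selected in the thesis.

    # Generic clarity proxies; not the thesis's criteria or toolkit.
    import re

    def clarity_indicators(text: str) -> dict:
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        avg_sentence_length = len(words) / max(len(sentences), 1)
        long_word_ratio = sum(1 for w in words if len(w) >= 10) / max(len(words), 1)
        return {"avg_sentence_length": round(avg_sentence_length, 1),
                "long_word_ratio": round(long_word_ratio, 2)}

    sample = ("To reconfigure the peripheral interconnection subsystem, "
              "consult the administrator documentation before initialisation.")
    print(clarity_indicators(sample))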
APA, Harvard, Vancouver, ISO, and other styles
30

Santos, Carlos. "Open source software projects' attractiveness, activeness, and efficiency as a path to software quality : an empirical evaluation of their relationships and causes /." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1879096191&sid=2&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--Southern Illinois University Carbondale, 2009.
"Department of Management." Keywords: Business strategy, Open source software, Software community, Software development, Software quality, Structural equation modeling. Includes bibliographical references (p. 119-124). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
31

Santos, Jr Carlos D. "OPEN SOURCE SOFTWARE PROJECTS' ATTRACTIVENESS, ACTIVENESS, AND EFFICIENCY AS A PATH TO SOFTWARE QUALITY: AN EMPIRICAL EVALUATION OF THEIR RELATIONSHIPS AND CAUSES." OpenSIUC, 2009. https://opensiuc.lib.siu.edu/dissertations/2.

Full text
Abstract:
An organizational strategy to develop software has appeared in the market. Organizations release software source code open and hope to attract volunteers to improve their software, forming what we call an open source project. Examples of organizations that have used this strategy include IBM (Eclipse), SAP (Netweaver) and Mozilla (Thunderbird). Moreover, thousands of these projects have been created as a consequence of the growing amount of software source code released by individuals. This significant phenomenon deserves attention for its sudden appearance, newness and usefulness to public and private organizations. To explain the dynamics of open source projects, this research theoretically identified and empirically analyzed a construct – attractiveness – found crucial to them due to its influence on how they are populated and operate, subsequently impacting the qualities of the software produced and of the support provided. Both attractiveness' causes and consequences were put under scrutiny, as well as its indicators. On the side of the consequences, it was theoretically proposed and empirically tested whether the attractiveness of these projects affects their levels of activeness, efficiency, likelihood of task completion, and time for task completion, though not linearly, as task complexity could moderate the relationships between them. Also, it was argued at the theoretical level that activeness, efficiency, likelihood of task completion, and time for task completion mediate the relationship between attractiveness and software/support quality. On the side of attractiveness' causes, it was proposed and tested that four open source projects' characteristics (license type, intended audience, type of project and project's life-cycle stage) impact attractiveness directly. Additionally, these projects' characteristics were argued to influence projects' levels of activeness, efficiency, likelihood of task completion, and time for task completion (and so an empirical evaluation of their associations was performed). The empirical tests of all these relationships between constructs were carried out using Structural Equation Modeling with Maximum Likelihood on three samples of over 4,600 projects each, collected from the largest repository of open source software, Sourceforge.net (a repeated cross-sectional approach). The results confirmed the importance of attractiveness, suggesting a direct influence on projects' dynamics, as opposed to the moderated-by-task-complexity indirect paths first proposed. Furthermore, all four project characteristics studied were found to significantly influence projects' attractiveness, activeness, efficiency, likelihood of task completion, and time for task completion (with the exception of license type on time for task completion). Besides providing a statistical test of these propositions, this study discovered the direction of the influence of each project characteristic on projects' attractiveness, activeness, efficiency, likelihood of task completion and time for task completion. Lastly, conclusions, limitations, and future directions are discussed based on these findings.
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Chia-en. "Performance Engineering of Software Web Services and Distributed Software Systems." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc500103/.

Full text
Abstract:
The promise of service-oriented computing and the availability of Web services promote the delivery and creation of new services based on existing services, in order to meet new demands and new markets. As Web and internet-based services move into Clouds, the inter-dependency of services and their complexity will increase substantially. There are standards and frameworks for specifying and composing Web services based on functional properties. However, mechanisms to individually address non-functional properties of services and their compositions have not been well established. Furthermore, the Cloud ontology depicts service layers from a high level, such as Application and Software, to a low level, such as Infrastructure and Platform. Each component that resides in one layer can be useful to another layer as a service. It hints at the amount of complexity resulting from not only horizontal but also vertical integrations in building and deploying a composite service. To meet these requirements and facilitate using Web services, we first propose a WSDL extension to permit the specification of non-functional or Quality of Service (QoS) properties. On top of this foundation, a QoS-aware framework is established to adapt publicly available tools for Web services, augmented by ontology management tools, along with tools for performance modeling, to exemplify how non-functional properties such as response time, throughput, or utilization of services can be addressed in the service acquisition and composition process. To facilitate Web service composition standards, in this work we extended the framework with additional qualitative information to the service descriptions using the Business Process Execution Language (BPEL). Engineers can use BPEL to explore design options and have the QoS properties analyzed for the composite service. The main issue in our research is performance evaluation in software systems and engineering. Web service computation is studied in the first half of this dissertation, and performance antipattern detection and elimination in the second part. Performance analysis of a software system is complex due to the large number of components and the interactions among them. Without the knowledge of experienced experts, it is difficult to diagnose performance anomalies and pinpoint the root causes of the problems. Software performance antipatterns are similar to design patterns in that they provide guidance on what to avoid and how to fix performance problems when they appear. Although the idea of applying antipatterns is promising, there are gaps in matching the symptoms and generating feedback solutions for redesign. In this work, we analyze performance antipatterns to extract detectable features, influential factors, and resource involvements so that we can lay the foundation to detect their presence. We propose a system abstraction layering model and suggestive profiling methods for performance antipattern detection and elimination. The proposed solutions can be used during the refactoring phase and can be included in the software development life cycle. The proposed tools and utilities are implemented and their use is demonstrated with the RUBiS benchmark.
APA, Harvard, Vancouver, ISO, and other styles
33

Tigulla, Anil Reddy, and Satya Srinivas Kalidasu. "Evaluating Efficiency Quality Attribute in Open Source Web browsers." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2584.

Full text
Abstract:
Context: Nowadays end users rely on different types of computer applications, such as web browsers and data processing tools like MS Office and Notepad, to do their day-to-day work. In the real world, the usage of Open Source Software (OSS) products by both industry and end users is gradually increasing. The success of any OSS product depends on its quality standards. 'Efficiency' is one of the key quality factors that portray the standard of a product, and it is observed that this factor is given little importance during development. Therefore our research context lies in evaluating the efficiency quality attribute in OSS web browsers. Objectives: As discussed above, the context of this research lies in evaluating the efficiency of OSS web browsers; the initial objective was to identify the available efficiency measures from the current literature and observe which types of measures are suitable for web browsers. The next objective was to compute values for the identified efficiency measures by considering a set of predefined web browsers from all categories. Later we proposed Efficiency Baseline Criteria (EBC), and based on these criteria and the experiment results obtained, the efficiency of OSS web browsers was evaluated. Therefore the main objective of conducting this research is to formulate EBC guidelines, which can later be used by OSS developers to test their web browsers and ensure that quality standards are strictly adhered to during the development of OSS products. Methods: Initially a Literature Review (LR) was conducted in order to identify the related efficiency quality attributes and observe the sub-attribute functionalities that are useful when measuring efficiency values of web browsers. Methods and procedures discussed in this LR were used as input for identifying efficiency measures related to web browsers. Later an experiment was performed in order to calculate efficiency values for a closed source (CSS) and proprietary set of web browsers (i.e. Case A) and OSS web browsers (i.e. Case B) using different tools and procedures. The authors themselves calculated the efficiency values for both Case A and Case B web browsers. Based on the results for Case A web browsers, EBC was proposed, and finally a statistical analysis (i.e. the Mann-Whitney U-test) was performed in order to evaluate the hypothesis formulated in the experiment section. Results: From the LR study, it is observed that the efficiency quality attribute is classified into two main categories (i.e. Time Behavior and Resource Utilization). Further, under the category of time behavior, a total of 3 attributes were identified (i.e. response time, throughput and turnaround time). From the results of the LR, we also observed the measuring process of each attribute for different web browsers. Later an experiment was performed on two different sets of web browsers (i.e. Case A and Case B web browsers). Based on the LR results, only 3 efficiency attributes (i.e. response time, memory utilization and throughput) were identified as most suitable to the case of web browsers. These 3 efficiency attributes were further classified into 10 sub-categories, and efficiency values were calculated for both Case A and Case B for these 10 identified scenarios. From the Case A results, EBC values were generated. Finally, hypothesis testing was done by first performing a K-S test, whose results suggested choosing a non-parametric test (i.e. the Mann-Whitney U-test). The Mann-Whitney U-test was then performed for all the scenarios; the normalized Z scores are greater than 1.96, which suggests rejecting the null hypothesis for all 10 scenarios. The EBC values were also compared with the Case B results, which further suggests that the efficiency standards of OSS web browsers are not equivalent to those of Case A web browsers. Conclusions: Based on the quantitative results, we conclude that the efficiency standards of OSS web browsers are not equivalent to those of Case A web browsers, and that efficiency standards are not adhered to during the development process. Hence OSS developers should focus on implementing efficiency standards during the development stages themselves in order to increase the quality of the end products. The major contribution of the two researchers to this area of research is the "Efficiency Baseline Criteria". The proposed EBC values are useful for OSS developers to test the efficiency standards of their web browsers and also help them to analyze their shortcomings. As a result, appropriate preventive measures can be planned in advance.
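As an illustration of the statistical procedure described in the abstract, the sketch below runs a two-sided Mann-Whitney U test on one hypothetical efficiency scenario. The response-time samples are invented for illustration; they are not the thesis data, and scipy is assumed to be available.

```python
from scipy import stats

# Hypothetical page-load response times in ms for one scenario:
# Case A = closed source/proprietary browsers, Case B = OSS browsers.
case_a = [310, 295, 330, 305, 320, 300, 315, 325]
case_b = [420, 455, 410, 440, 465, 430, 450, 445]

# Two-sided Mann-Whitney U test; p < 0.05 corresponds to the |Z| > 1.96
# threshold mentioned in the abstract and rejects equal efficiency distributions.
u_stat, p_value = stats.mannwhitneyu(case_a, case_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```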
APA, Harvard, Vancouver, ISO, and other styles
34

Hirata, Thiago Massao. "Processo de avaliação de componentes de software fornecidos por terceiros baseado no uso de modelos de qualidade." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-01042008-102639/.

Full text
Abstract:
The objective of this work was to define a process for software component evaluation that can be used by organizations which employ commercial off-the-shelf or open source components developed by third parties in the development of software systems. Component-Based Development (CBD) is an approach to reduce costs and time-to-market of software projects. The adoption of this practice by organizations has encouraged the growth of the software component market and the multiplication of open source component projects, freely distributed over the Internet. However, the use of software components developed by third parties carries an associated risk: a chosen component may not have the expected quality, or may not present the desired behavior under real conditions of use. In this context, the objective of the component evaluation process is to gather data on the quality of a component and to interpret these data, either to gain confidence in a component or to select a component when more than one option is available. The central piece of the evaluation is the quality model, which defines the quality attributes of each quality factor and the metrics to support the evaluation.
APA, Harvard, Vancouver, ISO, and other styles
35

Johansson, Per, and Henric Holmberg. "On the Modularity of a System." Thesis, Malmö högskola, Teknik och samhälle, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20183.

Full text
Abstract:
This thesis considers the creation and design of an architecture for a software system for the treatment of depression and other mental illnesses over the Internet, called Melencolia. One of the requirements of this project is to create a system which can be extended in the future. From this requirement we have derived the concept of modularity. In order to create a modular architecture for Melencolia, we have examined what the concept entails and concluded that modularity is a quality characteristic of multiple quality attributes, among them maintainability and reusability. With Attribute-Driven Design (ADD), an architecture can be created that is focused on a particular quality attribute. Since modularity is not a quality attribute but rather a quality characteristic, we had to change the input to ADD from a quality attribute to a quality characteristic. Furthermore, we derive and propose a new method for evaluating quality characteristics of software architectures. Finally, we apply this method to the architecture of Melencolia, obtaining an indication of how well the proposed architecture satisfies modularity.
APA, Harvard, Vancouver, ISO, and other styles
36

Tran, Qui Can Cuong. "Empirical evaluation of defect identification indicators and defect prediction models." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2553.

Full text
Abstract:
Context. Quality assurance plays a vital role in the software engineering development process. It can be considered one of the activities that observe the execution of a software project to validate whether it behaves as expected. Quality assurance activities contribute to the success of software projects by reducing the risks related to software quality. Accurate planning, launching and controlling of quality assurance activities on time can help to improve the performance of software projects. However, quality assurance activities also consume time and cost. One of the reasons is that they may not focus on the potentially defect-prone areas. In some of the latest and more accurate findings, researchers suggest that quality assurance activities should focus on the scope that has the potential for defects, and that defect predictors should be used to support them in order to save time and cost. Many available models recommend that the project's history information be used as a defect indicator to predict the number of defects in a software project. Objectives. In this thesis, new models are defined to predict the number of defects in the classes of single software systems. In addition, the new models are built based on a combination of product metrics as defect predictors. Methods. In the systematic review a number of article sources are used, including IEEE Xplore, ACM Digital Library, and Springer Link, in order to find the existing models related to the topic. In this context, open source projects are used as training sets to extract information about occurred defects and the system evolution. The training data is then used for the definition of the prediction models. Afterwards, the defined models are applied to other systems that provide test data, i.e. information that was not used for the training of the models, to validate the accuracy and correctness of the models. Results. Two models are built: one to predict the number of defects of a class, and one to predict whether a class contains bugs or not. Conclusions. The proposed models combine product metrics as defect predictors and can be used either to predict the number of defects of a class or to predict whether a class contains bugs. This combination of product metrics as defect predictors can improve the accuracy of defect prediction and quality assurance activities by giving hints on potentially defect-prone classes before defect search activities are performed. Therefore, it can improve software development and quality assurance in terms of time and cost.
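To make the two model types concrete, the sketch below trains one regression model for defect counts per class and one classification model for defect-proneness on a handful of product metrics. It is not the thesis's model definition; the metric set, training values and scikit-learn estimators are illustrative assumptions only.

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Product metrics per class: [lines of code, cyclomatic complexity, coupling].
X_train = [[120, 8, 4], [300, 21, 9], [45, 3, 1], [500, 35, 14], [80, 5, 2], [260, 18, 7]]
defect_counts = [1, 4, 0, 7, 0, 3]                # defects found per class (training system)
is_defect_prone = [c > 0 for c in defect_counts]  # binary label: contains bugs or not

count_model = LinearRegression().fit(X_train, defect_counts)        # predicts defect counts
class_model = LogisticRegression().fit(X_train, is_defect_prone)    # predicts bug / no bug

# Apply the trained models to classes of a different (test) system.
X_test = [[150, 10, 5], [60, 4, 2]]
print("predicted defect counts:", count_model.predict(X_test))
print("predicted defect-prone:", class_model.predict(X_test))
```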
APA, Harvard, Vancouver, ISO, and other styles
37

Askaroglu, Emra. "Automatic Quality Of Service (qos) Evaluation For Domain Specific Web Service Discovery Framework." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613316/index.pdf.

Full text
Abstract:
Web Service technology is one of the most rapidly developing contemporary technologies. Nowadays, Web services are being used by a large number of projects and academic studies all over the world. As the use of Web service technology increases, it becomes harder to find the most suitable web service which meets the Quality of Service (QoS) as well as the functional requirements of the user. In addition, the quality of the web services (QoS) that take part in a software system becomes very important. In this thesis, we develop a method to track the QoS primitives of Web services and an algorithm to automatically calculate QoS values for Web services. The proposed method is realized within a domain-specific web service discovery system, namely DSWSD-S, Domain Specific Web Service Discovery with Semantics. This system searches the Internet, finds web services that are related to a domain, and calculates QoS values through a set of parameters. When a web service is queried, our system returns suitable web services with their QoS values. How to calculate, keep track of, and store QoS values constitutes the main part of this study.
APA, Harvard, Vancouver, ISO, and other styles
38

Abbasi, Munir A. "Interoperability of wireless communication technologies in hybrid networks : evaluation of end-to-end interoperability issues and quality of service requirements." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5562.

Full text
Abstract:
Hybrid Networks employing wireless communication technologies have nowadays brought closer the vision of communication “anywhere, any time with anyone”. Such communication technologies consist of various standards, protocols, architectures, characteristics, models, devices, modulation and coding techniques. All these different technologies naturally may share some common characteristics, but there are also many important differences. New advances in these technologies are emerging very rapidly, with the advent of new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed, and of particular importance are the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel per Carrier (SCPC), Digital Video Broadcasting of Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel through Satellite (DVB-RCS). Due to the differences amongst wireless technologies, these technologies do not generally interoperate easily with each other because of various interoperability and Quality of Service (QoS) issues. The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delays, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services and availability, on hybrid wireless communication networks (employing both satellite broadband and terrestrial wireless technologies). The thesis provides an introduction to wireless communication technologies followed by a review of previous research studies on Hybrid Networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC). Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, physical and MAC layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of the studies cover all aspects of end-to-end interoperability issues and QoS requirements; such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, unicast and multicast performance, at end-to-end level, on Hybrid wireless networks. Interoperability issues are discussed in detail and a comparison of the different technologies and protocols was done using appropriate testing tools, assessing various performance measures including: bandwidth, delay, jitter, latency, packet loss, throughput and availability testing. The standards, protocol suite/ models and architectures for Wi-Fi, WiMAX, DVB-RCS, SCPC, alongside with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, the testing was conducted using various realistic test scenarios on real networks, comprising variable numbers and types of nodes. The data, traces, packets, and files were captured from various live scenarios and sites. 
The test results were analysed in order to measure and compare the characteristics of wireless technologies, devices, protocols and applications. The motivation of this research is to study all the end-to-end interoperability issues and Quality of Service requirements for rapidly growing Hybrid Networks in a comprehensive and systematic way. The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, instead of hypothetical ideas/scenarios or simulations, which informed the design of a test methodology for empirical data gathering by real network testing, suitable for the measurement of hybrid network single-link or end-to-end issues using proven test tools. This systematic investigation of the issues encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, performance of audio and video sessions, multicast and unicast performance, and stress testing. This testing covers the most common test scenarios in hybrid networks and gives recommendations for achieving good end-to-end interoperability and QoS in hybrid networks. Contributions of the study include the identification of gaps in the research, a description of interoperability issues, a comparison of the most common test tools, the development of a generic test plan, a new testing process and methodology, and analysis and network design recommendations for end-to-end interoperability issues and QoS requirements. This covers the complete cycle of this research. It is found that UDP is more suitable for hybrid wireless networks compared to TCP, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic, which has strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is the delay of approximately 600 to 680 ms due to the long distance factor (and the finite speed of light) when communicating over geostationary satellites. The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request techniques, flow scheduling, and bandwidth allocation.
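The quoted 600-680 ms satellite bottleneck can be sanity-checked with a back-of-the-envelope propagation-delay calculation; the figures below are generic physics, not measurements from the thesis.

```python
# Pure propagation delay over a geostationary link at the speed of light.
C_KM_PER_S = 299_792          # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786      # geostationary altitude above the equator, km

one_way_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000   # ground -> satellite -> ground
round_trip_ms = 2 * one_way_ms                          # request and reply both cross the link
print(f"one-way (up + down): {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# Roughly 239 ms one way and 478 ms round trip from propagation alone; coding,
# queuing and terrestrial segments push the observed figure toward 600-680 ms.
```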
APA, Harvard, Vancouver, ISO, and other styles
39

Brito, Junior Ozonias de Oliveira. "Abordagens para avaliação de software educativo e sua coerência com os modelos de qualidade de software." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9281.

Full text
Abstract:
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq
The evaluation of Educational Software (ES) is important to ensure the correct use of this tool as a facilitator of the teaching-learning process. It also makes it possible to verify the adequacy of the ES with respect to the pedagogical goals set by teachers; its correct operation according to Software Engineering; and its fit to the characteristics, needs and skills of its users, according to Usability Engineering. Therefore, the aim of this study was to analyse 14 approaches for ES evaluation in order to identify their coherence and comprehensiveness in relation to established models of the classical literature for Software Quality (ISO 9126-1 and the SWEBOK Guide), Quality in Use (ISO 9241-1 and Usability Heuristics) and the Precepts of Pedagogy. The process of appropriating knowledge about the approaches was based on a characterization study, from which it was possible to identify great heterogeneity in the adopted criteria, measurement instruments and diagnoses obtained. Thus, a comparison between the quality models was carried out in order to identify the intersection of criteria across different models. As a result of the characterization of the 14 analysed approaches and the intersection of the Software Quality, Quality in Use and Pedagogical quality models, a dictionary of terms was elaborated to map the criteria of each quality model to those present in the ES evaluation approaches considered in this study. It is expected that this instrument will be useful to assist the actors involved in the ES evaluation process, since it uses information adopted and understood as a reference standard in the literature.
APA, Harvard, Vancouver, ISO, and other styles
40

Domanos, Kyriakos. "Integrating Data Distribution Service in an Existing Software Architecture: Evaluation of the performance with different Quality of Service configurations." Thesis, Linköpings universitet, Fysik, elektroteknik och matematik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171556.

Full text
Abstract:
The Data Distribution Service (DDS) is a flexible, decentralized, peer-to-peer communication middleware. This thesis presents a performance analysis of DDS usage in the Toyota Smartness platform that is used in Toyota's Autonomous Guided Vehicles (AGVs). The purpose is to determine whether DDS is suitable for internal communication between modules that reside within the Smartness platform and for external communication between AGVs that are connected to the same network. An introduction to the main concepts of DDS and the Toyota Smartness platform architecture is given, together with a presentation of earlier research on DDS. A number of different approaches for integrating DDS into the Smartness platform are explored, and a set of different configurations that DDS provides is evaluated. The tests that were performed in order to evaluate the usage of DDS are described in detail, and the results that were collected are presented, compared and discussed. The advantages and disadvantages of using DDS are listed, and some ideas for future work are proposed.
APA, Harvard, Vancouver, ISO, and other styles
41

Perera, Dinesh Sirimal. "Design metrics analysis of the Harris ROCC project." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/935930.

Full text
Abstract:
The Design Metrics Research Team at Ball State University has developed a quality design metric D(G), which consists of an internal design metric Di and an external design metric De. This thesis discusses applying design metrics to the ROCC (Radar On-line Command Control) project received from Harris Corporation. Thus, the main objective of this thesis is to analyze the behavior of D(G) and the primitive components of this metric. Error and change history reports are vital inputs to the validation of design metrics' performance. Since correct identification of the types of changes/errors is critical for our evaluation, several different types of analyses were performed in an attempt to qualify the metric's performance in each case. This thesis covers the analysis of 666 FORTRAN modules with approximately 142,296 lines of code.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Haorui. "Efficiency of hospitals : Evaluation of Cambio COSMIC system." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1625.

Full text
Abstract:

In the modern world, healthcare has become a central concern in human life. People pay attention to their health protection and treatment, but at the same time they have to bear high healthcare expenditure. It is a serious problem that government income cannot cover the large expense of the healthcare industry; especially in some developing countries, the healthcare problem has become a problem for national development. This thesis approaches the problem directly by examining a channel for improving the efficiency of the healthcare system: Cambio COSMIC. The aim of the COSMIC case study is to find out how the architect designed the system from the stakeholders' requirements so as to improve the efficiency of the healthcare system, and to indicate how the system's success in improving that efficiency can be measured.
APA, Harvard, Vancouver, ISO, and other styles
43

Mousavi, Seyedamirhossein. "Maintainability Evaluation of Single Page Application Frameworks : Angular2 vs. React." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-60901.

Full text
Abstract:
Web applications are subject to intense market forces, fast delivery, and rapid requirement and code change. These factors make maintainability a significant concern in any software development, and especially in web application development. In this report we develop a functionally equivalent prototype of an existing Angular app using ReactJS, and afterwards compare their maintainability as defined by ISO/IEC 25010. The maintainability comparison is made by calculating the maintainability index (MI) for each of the applications using the Plato analysis tool. The results do not show a significant difference in the calculated values for the final products. Source code analysis shows that changes in data flow need more modification in the Angular app, but with the object-oriented approach provided by Angular we can have smaller chunks of code, and thus higher maintainability per file and a better average value. We conclude that, given the lack of research and models in this area, MI is a consistent measurement model and Plato is a suitable tool for analysis. Although maintainability is highly bound to the implementation, the bundled functionality provided by the Angular framework is more appropriate for large enterprises and complex products, whereas React works better for smaller products.
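For readers unfamiliar with the metric being compared, the sketch below computes the classic three-factor maintainability index that tools such as Plato base their MI on; the exact coefficients and normalisation a particular tool applies may differ, and the per-file input values here are hypothetical.

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic three-factor MI (Oman & Hagemeister), normalised to 0-100."""
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(loc))
    return max(0.0, mi * 100 / 171)

# Hypothetical per-file averages for the two prototypes being compared.
print("Angular file:", round(maintainability_index(900, 7, 140), 1))
print("React file:  ", round(maintainability_index(1100, 9, 180), 1))
```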
APA, Harvard, Vancouver, ISO, and other styles
44

Al-Naeem, Tariq Abdullah Computer Science & Engineering Faculty of Engineering UNSW. "A quality-driven decision-support framework for architecting e-business applications." Awarded by: University of New South Wales. Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/23419.

Full text
Abstract:
Architecting e-business applications is a complex design activity. This is mainly due to the numerous architectural decisions to be made, including the selection of alternative technologies, software components, design strategies, patterns, standards, protocols, platforms, etc. Further complexities arise due to the fact that these alternatives often vary considerably in their support for different quality attributes. Moreover, there are often different groups of stakeholders involved, each having their own quality goals and criteria. Furthermore, different architectural decisions often include interdependent alternatives, where the selection of one alternative for one particular decision impacts the selections to be made for alternatives from other decisions. There have been several research efforts aiming at providing sufficient mechanisms and tools for facilitating the architectural evaluation and design process. These approaches, however, address architectural decisions in isolation, focusing on evaluating a limited set of alternatives belonging to one architectural decision. This has been the primary motivation behind the development of the Architectural DEcision-Making Support (ADEMS) framework, which aims at supporting stakeholders and architects during the architectural decision-making process by helping them determine a suitable combination of architectural alternatives. The ADEMS framework is an iterative process that leverages rigorous quantitative decision-making techniques available in the Management Science literature, particularly Multiple Attribute Decision-Making (MADM) methods and Integer Programming (IP). Furthermore, due to the number of architectural decisions involved as well as the variety of available alternatives, the architecture design space is expected to be huge. For this purpose, a query language has been developed, known as the Architecture Query Language (AQL), to aid architects in exploring and analyzing the design space in further depth, and also in examining different 'what-if' architectural scenarios. In addition, to support the use of the ADEMS framework, a support tool has been implemented for carrying out the sophisticated set of mathematical computations and comparisons of the large number of architectural combinations, which might otherwise be hard to conduct using manual techniques. The primary contribution of the tool is its help in identifying, evaluating, and ranking all potential combinations of alternatives based on how well they satisfy the quality preferences provided by the different stakeholders. Finally, to assess the feasibility of ADEMS, three different case studies have been conducted relating to the architectural evaluation of different e-business and enterprise applications. Results obtained for the three case studies were quite positive, as they showed an acceptable accuracy level for the decisions recommended by ADEMS, at reasonable time and effort costs for the different system stakeholders.
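The kind of quantitative trade-off ADEMS automates can be illustrated with a toy weighted-sum scoring of one architectural decision; this is a generic MADM sketch, not the ADEMS implementation, and the decision, alternatives, weights and scores are invented.

```python
# Stakeholder-weighted quality priorities (hypothetical).
quality_weights = {"performance": 0.5, "security": 0.3, "modifiability": 0.2}

# Candidate alternatives for one hypothetical architectural decision
# ("how to manage session state"), scored 0-1 against each quality attribute.
alternatives = {
    "client-side state": {"performance": 0.9, "security": 0.4, "modifiability": 0.7},
    "server-side state": {"performance": 0.6, "security": 0.8, "modifiability": 0.8},
    "database-backed":   {"performance": 0.4, "security": 0.9, "modifiability": 0.6},
}

# Weighted-sum score per alternative, highest first.
ranking = sorted(
    ((sum(quality_weights[q] * score for q, score in scores.items()), name)
     for name, scores in alternatives.items()),
    reverse=True,
)
for total, name in ranking:
    print(f"{name}: {total:.2f}")
```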
APA, Harvard, Vancouver, ISO, and other styles
45

Bhargava, Manjari. "Analysis of multiple software releases of AFATDS using design metrics." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/834502.

Full text
Abstract:
The development of high-quality software the first time greatly depends upon the ability to judge the potential quality of the software early in the life cycle. The Software Engineering Research Center design metrics research team at Ball State University has developed a metrics approach for analyzing software designs. Given a design, these metrics highlight stress points and determine overall design quality. The purpose of this study is to analyze multiple software releases of the Advanced Field Artillery Tactical Data System (AFATDS) using design metrics. The focus is on examining the transformations of design metrics at each of three releases of AFATDS to determine the relationship of design metrics to the complexity and quality of a maturing system. The software selected as a test case for this research is the Human Interface code from Concept Evaluation Phase releases 2, 3, and 4 of AFATDS. To automate the metric collection process, a metric tool called the Design Metric Analyzer was developed. Further analysis of the design metrics data indicated that the standard deviation and mean for the metric were higher for release 2, relatively lower for release 3, and again higher for release 4. This is interpreted as a decrease in complexity and an improvement in the quality of the software from release 2 to release 3, and an increase in complexity in release 4. Dialogue with project personnel regarding design metrics confirmed most of these observations.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
46

Umar, Azeem, and Kamran Khan Tatari. "Appropriate Web Usability Evaluation Method during Product Development." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2498.

Full text
Abstract:
Web development is different from traditional software development. As in all software applications, usability is one of the core components of web applications. Usability engineering and web engineering are rapidly growing fields. Companies can improve their market position by making their products and services more accessible through usability engineering. User testing is often skipped when a deadline approaches, and this is very much true in web application development. Achieving good usability is one of the main concerns of web development, and several methods have been proposed in the literature for evaluating web usability. There is not yet agreement in the software development community about which usability evaluation method is more useful than another. Extensive usability evaluation is usually not feasible in web development; on the other hand, an unusable website increases the total cost of ownership. Improved usability is one of the major factors in achieving a sufficient level of user satisfaction. It can be achieved by utilizing an appropriate usability evaluation method, but cost-effective usability evaluation tools are still lacking. In this thesis we study usability inspection and usability testing methods. Furthermore, an effort has been made to find an appropriate usability evaluation method for web applications during product development, and we propose such a method based on observation of the common opinion of the web industry.
There is no standard framework or mechanism for selecting a usability evaluation method for software development. In the context of web development projects, where time and budget are more limited than in traditional software development projects, it becomes even harder to select an appropriate usability evaluation method. Certainly it is not feasible for a web development project to utilize multiple usability inspection methods and multiple usability testing methods during product development. A good choice can be a combined method composed of one usability inspection method and one usability testing method. The thesis contributes by identifying the usability evaluation methods which are common in the literature and in the current web industry.
APA, Harvard, Vancouver, ISO, and other styles
47

Munir, Hussan, and Misagh Moayyed. "Systematic Literature Review and Controlled Pilot Experimental Evaluation of Test Driven Development (TDD) vs. Test-Last Development (TLD)." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4117.

Full text
Abstract:
Context: Test-Driven Development (TDD) is a software development approach where test cases are written before the actual code, in iterative cycles. TDD has gained the attention of many software practitioners during the last decade since it promises several benefits in the software development process. However, empirical evidence of its dominance in terms of internal code quality, external code quality and productivity is fairly limited. Objectives: The aim of this study is to explore what has been achieved so far in the field of Test-Driven Development. The study reports the benefits and limitations of TDD compared to TLD, and the outcome variables in all the reported studies along with their measurement criteria. Additionally, an experiment is conducted to see the impact of Test-Driven Development (TDD) on internal code quality, external code quality and productivity compared to Test-Last Development (TLD). Methods: Two research methodologies are used, specifically a systematic literature review according to the Kitchenham guidelines and a controlled pilot experiment. In the systematic literature review, a number of article sources are considered and used, including Inspec, Compendex, ACM, IEEE Xplore, Science Direct (Elsevier) and the ISI Web of Science. A review protocol is created first to ensure the objectivity and repeatability of the whole process. Second, a controlled experiment is conducted with professional software developers to explore the assumed benefits of Test-Driven Development (TDD) compared to Test-Last Development (TLD). Results: 9 distinct categories related to Test-Driven Development (TDD) that are investigated and reported in the literature are found. All the reported experiments reveal very little or no difference in internal code quality, external code quality and productivity for Test-Driven Development (TDD) over Test-Last Development (TLD). However, results were found to be contradictory when research methods are taken into account, because case studies tend to find more positive results in favor of Test-Driven Development (TDD) than experiments do, possibly due to the fact that experiments are mostly conducted in artificially created software development environments, mostly with students as test subjects. On the other hand, the experimental results and statistical analysis show no statistically significant result in favor of TDD compared to TLD. All the values found related to the number of acceptance test cases passed (Mann-Whitney U test, exact sig. 0.185), McCabe's cyclomatic complexity (Mann-Whitney U test, exact sig. 0.063), branch coverage (Mann-Whitney U test, exact sig. 0.212), productivity in terms of number of lines of code per person hour (independent sample T-test, sig. 0.686), and productivity in terms of number of user stories implemented per person hour (independent sample T-test, sig. 0.835) in the experiment are statistically insignificant. However, the static code analysis result (independent sample T-test, sig. 0.03) was found to be statistically significant, but due to the low statistical power of the test it was not possible to reject the null hypothesis. The results of the survey revealed that the majority of developers in the experiment prefer TLD over TDD, given the smaller learning curve as well as the minimal effort needed to understand and employ TLD compared to TDD. Conclusion: The systematic literature review confirms that the reported benefits of TDD compared to Test-Last Development are very small. However, case studies tend to find more positive results in favor of Test-Driven Development (TDD) compared to Test-Last Development (TLD). Similarly, the experimental findings confirm that TDD has small benefits over TLD. However, given the small effect size, there is an indication that Test-Driven Development (TDD) encourages less complex code compared to Test-Last Development (TLD).
APA, Harvard, Vancouver, ISO, and other styles
48

FU, YU. "Mobile application rating based on AHP and FCEM : Using AHP and FCEM in mobile application features rating." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14928.

Full text
Abstract:
Context. Software evaluation is a research hotspot in both academia and industry. Users are the ultimate beneficiaries of software products, so their evaluations become more and more important. In the real world, users' evaluation outcomes serve as a reference for end-users selecting products and for project managers comparing their product with competing products. A mobile application is a special kind of software facing the same situation. It is necessary to find and test an evaluation method for mobile applications which is based on users' feedback and gives more reference information to different stakeholders. Objectives. The aim of this thesis is to apply and evaluate AF (the combined AHP and FCEM rating method) in mobile application feature rating. Three kinds of people and three processes are involved in applying a rating method: rating designers in the rating design process, rating providers in the rating process, and end-users in the selection process. Each process has corresponding research objectives and research questions to test the applicability of the AF method and the satisfaction of using AF and of using AF rating outcomes. Methods. The research method of this thesis is a mixed method, combining an experiment, a questionnaire, and interviews to achieve the research aim. The experiment is used to construct a rating environment that simulates mobile application evaluation in the real world and to test the applicability of the AF method. The questionnaire is used as a supporting method for collecting ratings from rating providers. Interviews are used to obtain satisfaction feedback from rating providers and end-users. Results. In this thesis, all AF use conditions are met, and an AF evaluation system can be built for mobile application feature rating. Compared with the rating outcomes of the existing method, the rating outcomes of AF are correct and complete. Although end-users feel positive about using AF rating outcomes to select a product, the satisfaction of rating providers is negative due to the complex rating process and heavy time cost. Conclusions. AF can be used in mobile application feature rating. Although there are many obvious advantages, such as more scientific feature weights and more rating outcomes for different stakeholders, there are also shortcomings to improve, such as the complex rating process, heavy time cost, and poor information presentation. There is no evidence that AF can replace the existing rating methods in app stores. However, AF still has research value for future work.
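A minimal sketch of how AHP-derived feature weights can be combined with a fuzzy comprehensive evaluation (one common reading of an AHP+FCEM rating pipeline) is shown below. It is not the thesis's instrument: the pairwise comparison matrix, features, grades and membership values are all hypothetical, and the priority vector uses the geometric-mean approximation.

```python
import numpy as np

# AHP pairwise comparison matrix for three hypothetical features
# (user interface, stability, battery use), filled with Saaty-scale judgements.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Geometric-mean approximation of the AHP priority (weight) vector.
w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
w = w / w.sum()

# Fuzzy membership matrix R: rows = features, columns = grades (good, fair, poor),
# e.g. the share of rating providers assigning each grade to each feature.
R = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.4, 0.2],
              [0.2, 0.5, 0.3]])

# Fuzzy comprehensive evaluation: weighted grade distribution for the whole app.
B = w @ R
print("feature weights:", np.round(w, 3))
print("grade membership (good, fair, poor):", np.round(B, 3))
```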
APA, Harvard, Vancouver, ISO, and other styles
49

Soad, Gustavo Willians. "Avaliação de qualidade em aplicativos educacionais móveis." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-27092017-173643/.

Full text
Abstract:
Studies indicate that the use of mobile learning applications has grown continuously, allowing students and teachers greater flexibility and convenience in carrying out educational activities and practices. Although several institutions have already adopted the mobile learning (m-learning) modality, its adoption still brings organizational, cultural and technological problems and challenges. One of these problems is how to adequately evaluate the quality of the mobile learning applications developed. In fact, existing methods for evaluating software quality are still very generic and do not consider aspects specific to the pedagogical and mobile contexts. In this scenario, the present work presents the MoLEva method, developed to evaluate the quality of mobile learning applications. The method is based on the ISO/IEC 25000 standard and is composed of: (i) a quality model; (ii) metrics; and (iii) judgment criteria. To validate the method, two case studies were performed: the first consisted of applying MoLEva to evaluate the ENEM application; the second consisted of applying the method to evaluate applications for language learning. From the results obtained, it was possible to identify problems and improvement points in the evaluated applications. In addition, the case studies conducted provided good indications of the feasibility of using the MoLEva method for evaluating mobile learning applications.
APA, Harvard, Vancouver, ISO, and other styles
50

Popelka, Vladimír. "Aplikace procesní analýzy při řízení kvality a testování software." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-72729.

Full text
Abstract:
This thesis deals with questions regarding quality assurance and software testing. The subject of its theoretical part is the specification of the general concept of quality, a description of the standards used in the field of software product quality evaluation and, finally, the evaluation of the software development process itself. The thesis introduces the theoretical framework of software quality assurance, especially a detailed analysis of the whole software testing discipline. An added value of the theoretical part is the characterization of the process approach and of selected methods used for the improvement of processes. The practical part of the thesis comprises the exemplification: it shows the process approach to software quality management applied to a selected IT company. The main aim of the practical part is to create a purposeful project for the optimization of quality assurance and software testing processes. The core of the matter is to carry out a process analysis of the present state of the software testing methodology. For the purposes of the process analysis and the optimization project, models of the key processes are created; these processes are then depicted based on a defined pattern. The description of the state of the art of the software product quality assurance processes is further supplemented by an evaluation of the maturity of these processes. The project for the optimization of software testing and quality assurance processes builds on the process analysis of the present state of the software testing methodology, as well as on the evaluation of the maturity of the process models. The essence of process optimization is the incorporation of change requests and improvement intentions for individual processes into the resulting methodology draft. For the measurement of selected quality assurance and software testing processes, efficiency indicators are configured and applied to particular processes. The research on the state of the art, as well as the elaboration of this whole project for the optimization of software testing and quality assurance processes, is conducted in conformity with the principles of the DMAIC model of the Six Sigma method.
APA, Harvard, Vancouver, ISO, and other styles
