Dissertations / Theses on the topic 'Software evaluation'

Consult the top 50 dissertations / theses for your research on the topic 'Software evaluation.'

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

1

Kumar, Nadella Navin. "Evaluation of ISDS software." Master's thesis, Virginia Polytechnic Institute and State University, 1991. http://scholar.lib.vt.edu/theses/available/etd-01262010-020122/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jah, Muzamil. "Software metrics : usability and evaluation of software quality." Thesis, University West, Department of Economics and IT, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-548.

Full text
Abstract:

It is difficult to understand, let alone improve, the quality of software without knowledge of its development process and its products. There must be a measurement process to predict software development and to evaluate software products. This thesis provides a brief view of software quality, software metrics, and metrics methods that predict and measure specified quality factors of software. It further discusses quality as defined by standards such as ISO, the principal elements required for software quality, and software metrics as a measurement technique for predicting software quality. The work was performed by evaluating source code developed in Java using software metrics such as size metrics, complexity metrics, and defect metrics. The results show that the quality of software can be analyzed, studied and improved through the use of software metrics.
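As a rough illustration of the kind of size and complexity metrics discussed above, the sketch below counts lines of code and decision points in a Java source file. The file name and the decision-keyword list are illustrative assumptions, not the exact metric suite used in the thesis.

```python
import re
from pathlib import Path

# Keywords counted as decision points -- a crude McCabe-style approximation,
# not the precise complexity metric applied in the thesis.
DECISION_KEYWORDS = re.compile(r"\b(if|for|while|case|catch)\b|&&|\|\|")

def simple_java_metrics(path: str) -> dict:
    """Compute crude size and complexity indicators for one Java file."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    code_lines = [l for l in lines
                  if l.strip() and not l.strip().startswith(("//", "*", "/*"))]
    decisions = sum(len(DECISION_KEYWORDS.findall(l)) for l in code_lines)
    return {
        "total_lines": len(lines),      # physical size
        "code_lines": len(code_lines),  # non-comment, non-blank LOC
        "decision_points": decisions,   # proxy for complexity
    }

if __name__ == "__main__":
    print(simple_java_metrics("Example.java"))  # hypothetical input file
```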

APA, Harvard, Vancouver, ISO, and other styles
3

Powale, Kalkin. "Automotive Powertrain Software Evaluation Tool." Master's thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-233186.

Full text
Abstract:
Software is a key differentiator and driver of innovation in the automotive industry. The major challenges for software development are increasing complexity, shorter time-to-market, rising development cost and the demand for quality assurance. Complexity is increasing due to emission legislation, product variants and new communication technologies being interfaced with the vehicle. Development time is shrinking because of competition in the market, which requires faster feedback loops for the verification and validation of developed functionality. The increase in development cost has two contributors: pre-launch cost, which covers error correction during the development stages, and post-launch cost, which covers warranty and guarantee claims. As development time passes, the cost of error correction also increases, so it is important to detect errors as early as possible. All these factors affect software quality; there are several cases where an Original Equipment Manufacturer (OEM) has recalled its product because of a quality defect. Hence, the need for software quality assurance has increased. A solution to these challenges can be early quality evaluation in a continuous integration framework environment. The AUTomotive Open System ARchitecture (AUTOSAR), the most prominent reference architecture in today's automotive industry, is used to describe software components and interfaces and provides standardised software component architecture elements. It was created to address the issue of growing complexity. The existing AUTOSAR environment does have software quality measures, such as schema validations and protocols for acceptance tests, but it lacks quality specifications for non-functional qualities such as maintainability and modularity. A tool is therefore required that evaluates AUTOSAR-based software architectures and gives objective feedback regarding quality. This thesis aims to provide such a quality measurement tool for the evaluation of AUTOSAR-based software architectures. The tool reads the architecture information from AUTOSAR Extensible Markup Language (ARXML) files and provides configurability, continuous evaluation and objective feedback on software quality characteristics. The tool was applied to a transmission control project, and the results were validated by industry experts.
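To give a feel for the kind of architecture information such a tool extracts, here is a minimal sketch that counts application software component types declared in an ARXML file. The namespace URI and element name follow common AUTOSAR 4.x conventions but should be treated as assumptions, and the thesis tool of course computes far richer quality metrics than a component count.

```python
import xml.etree.ElementTree as ET

# Namespace and element name assumed from typical AUTOSAR 4.x ARXML files;
# adjust them to the schema version actually in use.
AR_NS = {"ar": "http://autosar.org/schema/r4.0"}

def count_sw_components(arxml_path: str) -> int:
    """Return the number of application software component types declared."""
    root = ET.parse(arxml_path).getroot()
    components = root.findall(".//ar:APPLICATION-SW-COMPONENT-TYPE", AR_NS)
    return len(components)

if __name__ == "__main__":
    # "system.arxml" is a hypothetical input file.
    print("components:", count_sw_components("system.arxml"))
```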
APA, Harvard, Vancouver, ISO, and other styles
4

Gabriel, Pedro Hugo do Nascimento. "Software languages engineering: experimental evaluation." Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/4854.

Full text
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Informatics Engineering (Engenharia Informática).
Domain-Specific Languages (DSLs) are programming languages that offer, through appropriate notations and abstractions, expressive power focused on, and usually restricted to, a particular problem domain. They are expected to contribute improved productivity, reliability, maintainability and portability when compared with General Purpose Programming Languages (GPLs). However, as with any software product, if a DSL does not pass through all development stages (namely domain analysis, design, implementation and evaluation), some of its alleged advantages may be impossible to achieve with a significant level of satisfaction, which may lead to the production of inadequate or inefficient languages. This dissertation focuses on the Evaluation phase. To characterise the DSL community's commitment to evaluation, we conducted a systematic review covering publications in the main fora dedicated to DSLs from 2001 to 2008. The review allowed us to analyse and classify papers with respect to the validation efforts conducted by DSL producers, and revealed a reduced concern for this matter. Another important outcome was the identification of the absence of a concrete approach to the evaluation of DSLs that would allow a sound assessment of the actual improvements brought by their usage. Therefore, the main goal of this dissertation is the production of a Systematic Evaluation Methodology for DSLs. To achieve this objective, a survey of the major techniques used in Experimental Software Engineering and Usability Engineering was carried out. The proposed methodology was validated through its use in several case studies, in which DSL evaluation was performed in accordance with it.
APA, Harvard, Vancouver, ISO, and other styles
5

Dillon, Andrew. "The Evaluation of software usability." London: Taylor and Francis, 2001. http://hdl.handle.net/10150/105344.

Full text
Abstract:
This item is not the definitive copy. Please use the following citation when referencing this material: Dillon, A. (2001) Usability evaluation. In W. Karwowski (ed.) Encyclopedia of Human Factors and Ergonomics, London: Taylor and Francis. Introduction: Usability is a measure of interface quality that refers to the effectiveness, efficiency and satisfaction with which users can perform tasks with a tool. Evaluating usability is now considered an essential part of the system development process, and a variety of methods have been developed to support the human factors professional in this work.
APA, Harvard, Vancouver, ISO, and other styles
6

Brophy, Dennis J., and James D. O'Leary. "Software evaluation for developing software reliability engineering and metrics models." Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA361889.

Full text
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, March 1999.
"March 1999". Thesis advisor(s): Norman F. Schneidewind, Douglas Brinkley. Includes bibliographical references (p. 59-60). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
7

Brophy, Dennis J., and James D. O'Leary. "Software evaluation for developing software reliability engineering and metrics models." Thesis, Monterey, California ; Naval Postgraduate School, 1999. http://hdl.handle.net/10945/13581.

Full text
Abstract:
Today's software is extremely complex, often comprising millions of lines of instructions, and programs are expected to operate smoothly on a wide variety of platforms. There are continuous attempts to assess the reliability of existing software packages and to predict the reliability of software under development. The quantitative aspects of these assessments deal with evaluating, characterizing and predicting how well software will operate. Experience has shown that it is extremely difficult to build something as large and complex as modern software and predict with any accuracy how it is going to behave in the field. This thesis proposes to create an integrated system to predict software reliability for mission-critical systems. This will be accomplished by developing a flexible DBMS to track failures and integrating it with statistical analysis programs and software reliability prediction tools that perform the calculations and display trend analyses. It further proposes a software metrics model for fault prediction by determining and manipulating metrics extracted from the code.
APA, Harvard, Vancouver, ISO, and other styles
8

Barney, Sebastian. "Software Quality Alignment : Evaluation and Understanding." Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00492.

Full text
Abstract:
Background: The software development environment is growing increasingly complex, with a greater diversity of stakeholders involved in product development. Moves towards global software development with onshoring, offshoring, insourcing and outsourcing have seen a range of stakeholders introduced to the software development process, each with their own incentives and understanding of their product. These differences between the stakeholders can be especially problematic with regard to aspects of software quality. The aspects are often not clearly and explicitly defined for a product, but still essential for its long-term sustainability. Research shows that software projects are more likely to succeed when the stakeholders share a common understanding of software quality. Objectives: This thesis has two main objectives. The first is to develop a method to determine the level of alignment between stakeholders with regard to the priority given to aspects of software quality. Given the ability to understand the levels of alignment between stakeholders, the second objective is to identify factors that support and impair this alignment. Both the method and the identified factors will help software development organisations create work environments that are better able to foster a common set of priorities with respect to software quality. Method: The primary research method employed throughout this thesis is case study research. In total, six case studies are presented, all conducted in large or multinational companies. A range of data collection techniques have been used, including questionnaires, semi-structured interviews and workshops. Results: A method to determine the level of alignment between stakeholders on the priority given to aspects of software quality is presented—the Stakeholder Alignment Assessment Method for Software Quality (SAAM-SQ). It is developed by drawing upon a systematic literature review and the experience of conducting a related case study. The method is then refined and extended through the experience gained from its repeated application in a series of case studies. These case studies are further used to identify factors that support and impair alignment in a range of different software development contexts. The contexts studied include onshore insourcing, onshore outsourcing, offshore insourcing and offshore outsourcing. Conclusion: SAAM-SQ is found to be robust, being successfully applied to case studies covering a range of different software development contexts. The factors identified from the case studies as supporting or impairing alignment confirm and extend research in the global software development domain.
APA, Harvard, Vancouver, ISO, and other styles
9

SHAUGHNESSY, MICHAEL RYAN. "EDUCATIONAL SOFTWARE EVALUATION: A CONTEXTUAL APPROACH." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1021653053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Clemens, Ronald F. "TEMPO software modification for SEVER evaluation." Thesis, Monterey, California: Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep_Clemens.pdf.

Full text
Abstract:
Thesis (M.S. in Systems Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Langford, Gary O. "September 2009." Description based on title screen as viewed on November 4, 2009. Author(s) subject terms: Decision, decision analysis, decision process, system engineering tool, SEVER, resource allocation, military planning, software tool, strategy evaluation. Includes bibliographical references (p. 111-113). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
11

Shaughnessy, Michael. "Educational software evaluation: a contextual approach." Cincinnati, Ohio: University of Cincinnati, 2002. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=ucin1021653053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zhu, Liming (Computer Science & Engineering, Faculty of Engineering, UNSW). "Software architecture evaluation for framework-based systems." Awarded by: University of New South Wales, Computer Science and Engineering, 2007. http://handle.unsw.edu.au/1959.4/28250.

Full text
Abstract:
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: a new technique was designed to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. Its feasibility was validated through a case study, and significant improvements in scenario quality were observed in a controlled experiment conducted by a colleague. 2) Architecture representation: a new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study and significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute-specific analysis (performance only): a model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. It leads to benchmarks that produce more representative measures of the eventual application, and it reduces the complexity behind the load testing suite and the framework-specific performance data collection utilities. 4) Trade-off and sensitivity analysis: a new technique was designed to improve the Analytic Hierarchy Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify: 1) the trade-offs implied by an architecture alternative, along with their magnitude; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling changes in quality attribute priorities.
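For readers unfamiliar with AHP, the sketch below shows the core calculation used in such trade-off analyses: deriving priority weights from a pairwise comparison matrix via the geometric-mean approximation. The three framework alternatives and comparison values are invented for illustration and are not taken from the thesis.

```python
from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the geometric mean of each row, then normalising."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical pairwise comparisons of three framework alternatives
# (matrix[i][j] = how strongly alternative i is preferred over j).
comparisons = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

print(ahp_weights(comparisons))  # roughly [0.65, 0.23, 0.12]
```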
APA, Harvard, Vancouver, ISO, and other styles
13

Shepperd, Martin John. "System architecture metrics : an evaluation." n.p., 1990. http://ethos.bl.uk/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Carleson, Hannes, and Marcus Lyth. "Evaluation of Problem Driven Software Process Improvement." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189216.

Full text
Abstract:
Software development is constantly growing in complexity, and several new tools have been created with the aim of managing this. However, even with this ever-evolving range of tools and methodology, organizations often struggle with how to implement a new development process, especially when implementing agile methods. The most common reason for this is that teams implement agile tools in an ad-hoc manner, without fully considering the effects this can cause. This leads to teams trying to correct their choice of methodology somewhere during the post-planning phase, which can be devastating for a project as it adds further complexity by introducing new problems during the transition process. Moreover, within the existing range of tools aimed at managing this process transition, none have been thoroughly evaluated, which in turn forms the problem that this thesis is centred around. This thesis explores a method transition scenario and evaluates a Software Process Improvement method oriented around the problems that the improvement process is aiming to solve. The goal is to establish whether problem-oriented Software Process Improvement is viable, as well as to provide further data for the extensive research being done in this field. We wish to prove that the overall productivity of a software development team can be increased, even during a project, by carefully managing the transition to new methods using a problem-driven approach. The research method used is of a qualitative and inductive character. Data is collected by performing a case study, via action research, and literature studies. The case study consists of iteratively managing a transition to new methods, at an organization in the middle of a project, using a problem-driven approach to Software Process Improvement. Three iterations of method improvement are applied to the project, and each iteration acts as an evaluation of how well Problem Driven Software Process Improvement works. Using the evaluation model created for this degree project, the researchers have found that problem-driven Software Process Improvement is an effective tool for managing and improving the processes of a development team. Productivity has increased, with a focus on tasks of the highest priority being finished first. Transparency has increased, with both the development team and the company having a clearer idea of work in progress and what is planned. Communication has grown, with developers talking more freely about user stories and tasks during planning and stand-up meetings. The researchers acknowledge that the results of the study are of a limited scope and also recognize that further evaluation, in the form of more iterations, is needed for a complete evaluation.
APA, Harvard, Vancouver, ISO, and other styles
15

Berander, Patrik. "Understanding and Evaluation of Software Process Deviations." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5971.

Full text
Abstract:
Software process improvement is often mentioned in today's software marketplace. To be able to do process improvement, the organisation must have a process to improve from. These processes are commonly deviated from, and the PDU/PAY organisation at Ericsson AB has experienced that this happens too often within their organisation. The aim of this master thesis was to investigate why such deviations occur and how they could be prevented at PDU/PAY. A survey including a qualitative and a quantitative part was conducted at PDU/PAY to investigate this issue. The result was that processes were often deviated from due to lack of: management commitment, user involvement, synchronisation between processes, change management, anchoring of processes, and communication of processes. In addition to the conducted studies, an improvement proposal is given to the PDU/PAY organisation. This includes one organisational part and one part that is directly related to the actual work with processes. The proposal is intended to give PDU/PAY an essence of how to improve their work with their organisational processes.
APA, Harvard, Vancouver, ISO, and other styles
16

Osqui, Mitra M. 1980. "Evaluation of software energy consumption on microprocessors." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8344.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2002.
Includes bibliographical references (leaves 72-75).
In the area of wireless communications, energy consumption is the key design consideration. Significant effort has been placed in optimizing hardware for energy efficiency, while relatively less emphasis has been placed on software energy reduction. For overall energy efficiency, reduction of system energy consumption in both hardware and software must be addressed. One goal of this research is to evaluate the factors that affect software energy efficiency and identify techniques that can be employed to produce energy-optimal software. In order to present a strong argument, two state-of-the-art low-power processors were used for evaluation: the Intel StrongARM SA-1100 and the next-generation Intel Xscale processor. A key step in analyzing the performance of software is to perform a comprehensive tabulation of the energy consumption per instruction, while taking into account the different modes of operation. This leads to a comprehensive energy profile for the instruction set of the processors of interest. With information on the energy consumption per instruction, we can evaluate the feasibility of energy-efficient programming and use the results to gain greater insight into the power consumption of the two processors under consideration. Benchmark programs will be tested on both processors to illustrate the effectiveness of the energy profiling results. The next goal is to look at the leakage current and the current consumed during the idle modes of the processors and how these impact the overall picture of energy consumption. Thus energy consumption will be explored for the two processors from both a dynamic and a static perspective.
by Mitra M. Osqui.
S.M.
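The core of such instruction-level energy profiling is a simple weighted sum: total energy is approximately the dynamic energy of the executed instructions plus static (leakage) energy over the run time. The sketch below illustrates that calculation; the per-instruction energy values and instruction counts are invented placeholders, not measurements from the SA-1100 or Xscale.

```python
# Hypothetical per-instruction dynamic energy costs in nanojoules.
# Real values would come from measurements like those described in the thesis.
ENERGY_PER_INSTRUCTION_NJ = {"add": 1.0, "mul": 1.3, "load": 2.1, "store": 2.0, "branch": 1.1}

def estimate_energy_nj(instruction_counts, leakage_power_mw, run_time_ms):
    """Estimate program energy as dynamic (per-instruction) plus static (leakage) energy."""
    dynamic_nj = sum(ENERGY_PER_INSTRUCTION_NJ[op] * count
                     for op, count in instruction_counts.items())
    static_nj = leakage_power_mw * run_time_ms * 1_000  # mW * ms = uJ, converted to nJ
    return dynamic_nj + static_nj

# Invented instruction mix for a small benchmark loop.
counts = {"add": 40_000, "mul": 10_000, "load": 25_000, "store": 12_000, "branch": 8_000}
print(estimate_energy_nj(counts, leakage_power_mw=0.5, run_time_ms=2.0))
```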
APA, Harvard, Vancouver, ISO, and other styles
17

Lamce, Bora. "Automation and Evaluation of Software Fault Prediction." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39995.

Full text
Abstract:
Delivering fault-free software to the client requires exhaustive testing, which in today's ever-growing software systems can be expensive and often impossible. Software fault prediction aims to improve software quality while reducing the testing effort by identifying fault-prone modules in the early stages of the development process. However, software fault prediction activities are yet to be implemented in the daily work routine of practitioners, largely as a result of a lack of automation of this process. This thesis presents an Eclipse plug-in as a fault prediction automation tool that can predict fault-prone modules using two prediction methods, Naive Bayes and Logistic Regression, while also reflecting on the performance of these prediction methods compared to each other. Evaluating the prediction methods on open source projects led to the conclusion that Logistic Regression performed better than Naive Bayes. As part of the prediction process, this thesis also reflects on the metrics that are easiest to gather automatically for fault prediction, concluding that LOC, McCabe complexity and the CK metrics suite are the easiest to gather automatically.
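The comparison described above is implemented in the thesis as an Eclipse plug-in; purely as an illustration of the underlying classifier comparison, the sketch below contrasts Naive Bayes and Logistic Regression with scikit-learn on synthetic stand-ins for LOC, McCabe and CK values (the data and the threshold used to label modules are made up).

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic module metrics: [LOC, McCabe complexity, CK-WMC]; label 1 = fault-prone.
rng = np.random.default_rng(0)
X = rng.integers(low=[10, 1, 1], high=[500, 30, 40], size=(200, 3)).astype(float)
y = (X[:, 0] + 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 50, 200) > 500).astype(int)

for name, model in [("Naive Bayes", GaussianNB()),
                    ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```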
APA, Harvard, Vancouver, ISO, and other styles
18

Frisch, Blade William Martin. "A User Experience Evaluation of AAC Software." Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1594112876812982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Phalp, Keith T. "An evaluation of software modelling in practice." Thesis, Bournemouth University, 1995. http://eprints.bournemouth.ac.uk/438/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

CARVALHO, Fernando Ferreira de. "An embedded software component quality evaluation methodology." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2412.

Full text
Abstract:
Universidade de Pernambuco
One of the biggest challenges for the embedded industry is to deliver products with a high level of quality and functionality at low cost and with a short development time, bringing them to market quickly and thereby increasing the return on investment. The cost and development-time requirements have been addressed quite successfully by Component-Based Software Engineering (CBSE) combined with component reuse. However, using the CBSE approach without properly verifying the quality of the components involved can have catastrophic consequences (Jezequel et al., 1997). Appropriate mechanisms for searching, selecting and evaluating component quality are considered key points in adopting the CBSE approach. In this light, this thesis proposes a methodology for evaluating the quality of embedded software components under different aspects. The idea is to resolve the lack of consistency among the ISO/IEC 9126, 14598 and 25000 standards, incorporating the software component context and extending it to the embedded systems domain. These standards provide high-level definitions of characteristics and metrics for software products, but they do not provide ways to use them effectively, making it very difficult to apply them without acquiring further information from other sources. The methodology consists of four complementary modules in pursuit of quality: an evaluation process, a quality model, evaluation techniques grouped by quality levels, and a metrics approach. In this way, it assists the embedded systems developer in the component selection process, assessing which component best fits the system requirements. It can also be used by third-party evaluators hired by suppliers in order to establish credibility for their components. The methodology further makes it possible to evaluate the quality of an embedded component before it is stored in a repository system, especially in the context of the robust framework for software reuse proposed by Almeida (Almeida, 2004).
APA, Harvard, Vancouver, ISO, and other styles
21

ALVARO, Alexandre. "A software component quality framework." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1372.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A major challenge in Component-Based Software Engineering (CBSE) is the quality of the components used in a system. The reliability of a component-based system depends on the reliability of the components of which it is composed. In CBSE, the search, selection and evaluation of software components is considered a key point for the effective development of component-based systems. Until now, the software industry has concentrated on the functional aspects of software components, leaving aside one of the most arduous tasks, which is the evaluation of their quality. If quality assurance for components developed in-house is already a costly task, assuring quality when using externally developed components, for which the source code and detailed documentation are often not available, becomes an even greater challenge. Thus, this thesis introduces a software component quality framework based on well-defined, complementary modules aimed at assuring the quality of software components. Finally, an experimental study was designed and executed in order to analyse the viability of the proposed framework.
APA, Harvard, Vancouver, ISO, and other styles
22

Lim, Edwin C. "Software metrics for monitoring software engineering projects." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1994. https://ro.ecu.edu.au/theses/1100.

Full text
Abstract:
As part of the undergraduate course offered by Edith Cowan University, the Department of Computer Science has (as part of a year's study) a software engineering group project. The structure of this project was divided into two units, Software Engineering 1 and Software Engineering 2. In Software Engineering 1, students were given the group project where they had to complete and submit the Functional Requirement and Detail System Design documentation. In Software Engineering 2, students commenced with the implementation of the software, testing and documentation. The software was then submitted for assessment and presented to the client. To aid the students with the development of the software, the department had adopted EXECOM's APT methodology as its standard guideline. Furthermore, the students were divided into groups of 4 to 5, each group working on the same problem. A staff adviser was assigned to each project group. The purpose of this research exercise was to fulfil two objectives. The first objective was to ascertain whether there is a need to improve the final year software engineering project for future students by enhancing any aspect that may be regarded as deficient. The second objective was to ascertain the factors that have the most impact on the quality of the delivered software. The quality of the delivered software was measured using a variety of software metrics. Measurement of software has mostly been ignored until recently, or used without a true understanding of its purpose. A subsidiary objective was to gain an understanding of the worth of software measurement in the student environment. One of the conclusions derived from the study suggests that teams who spent more time on software design and testing tended to produce better quality software with fewer defects. The study also showed that adherence to the APT methodology led to the project being on schedule and to general team satisfaction with the project management. One of the recommendations made to the project co-ordinator was that staff advisers should have sufficient knowledge of the software engineering process.
APA, Harvard, Vancouver, ISO, and other styles
23

Nordenberg, Marcus. "PET – Plate Evaluation Tool." Thesis, Mittuniversitetet, Avdelningen för data- och systemvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-30226.

Full text
Abstract:
This report describes the work on developing the tool PET (Plate Evaluation Tool), an application intended to scan, present, store and evaluate painted plates on the painting lines at SSAB Special Steels in Oxelösund. The open-source computer vision library OpenCV is used to stitch an image together as plates pass one or two cameras. Grazing light from a light bar that produces homogeneous illumination is used to make defects stand out more clearly. The tool then identifies defects on the underside of the plate in order to help the operator see them. The results are presented, among other places, in a web tool built on the Java EE standard and PrimeFaces. The web tool provides traceability, which simplifies much of the work around possible customer complaints or other quality improvements and shows what impact these have. For the operators, the results are presented in a tailor-made view written in Java, where the operator can analyse the image in more detail. The report then investigates and compares which filters and parameters are most suitable and best detect selected defects in the plates. The project also examines the possibility of using this technique to read stamps punched into the plate. The application has been very well received by those who use it the most, namely the operators on the painting lines concerned. The two filters that were written have proved able to identify the defects they are meant to find, and the quality of the stitched images has proved good enough for the stamps to be read.
APA, Harvard, Vancouver, ISO, and other styles
24

Oswald, Matthias. "News-Aggregatoren-Software Evaluierung der Markführer /." St. Gallen, 2005. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/01652999001/$FILE/01652999001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sochacki, Gustav. "Evaluation of Software Projects : A Recommendation for Implementation The Iterating Evaluation Model." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2935.

Full text
Abstract:
Software process improvement (SPI) is generally associated with large organizations. Large organizations have the possibility to fund software process improvement programs as large-scale activities. Often these improvement programs do not show progress until some time has elapsed; the Capability Maturity Model can take one year to implement, and not until then can measures be made to see how much quality increased. Small organizations do not have the same funding opportunities but are still in need of software process improvement programs. Generally it is better to initiate a software process improvement program as early as possible, no matter the size of the organization. Although the funding capabilities of small organizations are smaller than those of large organizations, the total required funding will also be smaller. A small organization will grow and over time become a mid-sized or large organization, so by starting an improvement program at an early stage the overall funding should be minimized; this becomes more visible once the organization has grown large. This master thesis presents the idea of implementing a software process improvement program, or at least parts of it, by evaluating each software project. By evaluating a project, the most critical needs identified are addressed in the next project, and this process is iterated for each concluded project. The master thesis introduces the Iterating Evaluation Model, based on an interview survey, and compares it to an already existing model, the Experience Factory.
APA, Harvard, Vancouver, ISO, and other styles
26

Jabangwe, Ronald. "Software Quality Evaluation for Evolving Systems in Distributed Development Environments." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00613.

Full text
Abstract:
Context: There is an overwhelming prevalence of companies developing software in global software development (GSD) contexts. The existing body of knowledge, however, falls short of providing comprehensive empirical evidence on the implication of GSD contexts on software quality for evolving software systems. Therefore there is limited evidence to support practitioners that need to make informed decisions about ongoing or future GSD projects. Objective: This thesis work seeks to explore changes in quality, as well as to gather confounding factors that influence quality, for software systems that evolve in GSD contexts. Method: The research work in this thesis includes empirical work that was performed through exploratory case studies. This involved analysis of quantitative data consisting of defects as an indicator for quality, and measures that capture software evolution, and qualitative data from company documentations, interviews, focus group meetings, and questionnaires. An extensive literature review was also performed to gather information that was used to support the empirical investigations. Results: Offshoring software development work, to a location that has employees with limited or no prior experience with the software product, as observed in software transfers, can have a negative impact on quality. Engaging in long periods of distributed development with an offshore site and eventually handing over all responsibilities to the offshore site can be an alternative to software transfers. This approach can alleviate a negative effect on quality. Finally, the studies highlight the importance of taking into account the GSD context when investigating quality for software that is developed in globally distributed environments. This helps with making valid inferences about the development settings in GSD projects in relation to quality. Conclusion: The empirical work presented in this thesis can be useful input for practitioners that are planning to develop software in globally distributed environments. For example, the insights on confounding factors or mitigation practices that are linked to quality in the empirical studies can be used as input to support decision-making processes when planning similar GSD projects. Consequently, lessons learned from the empirical investigations were used to formulate a method, GSD-QuID, for investigating quality using defects for evolving systems. The method is expected to help researchers avoid making incorrect inferences about the implications of GSD contexts on quality for evolving software systems, when using defects as a quality indicator. This in turn will benefit practitioners that need the information to make informed decisions for software that is developed in similar circumstances.
APA, Harvard, Vancouver, ISO, and other styles
27

Mårtensson, Frans, and Per Jönsson. "Software Architecture Simulation." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4087.

Full text
Abstract:
A software architecture is one of the first steps towards a software system. A software architecture can be designed in different ways. During the design phase, it is important to select the most suitable design of the architecture, in order to create a good foundation for the system. The selection process is performed by evaluating architecture alternatives against each other. We investigate the use of continuous simulation of a software architecture as a support tool for architecture evaluation. For this purpose, we study a software architecture of an existing software system in an experiment, where we create a model of it using a tool for continuous simulation, and simulate the model. Based on the results from the simulation, we conclude that the system is too complex to be modeled for continuous simulation. Problems we identify are that we need discrete functionality to be able to correctly simulate the system, and that it is very time-consuming to develop a model for evaluation purposes. Thus, we find that continuous simulation is not appropriate for evaluating a software architecture, but that the modeling process is a valuable tool for increasing knowledge and understanding about an architecture.
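As a hint of what continuous (system-dynamics style) simulation of an architecture looks like, the sketch below integrates the request backlog of a single component with Euler steps. The arrival and service rates are invented for illustration and have no connection to the system studied in the thesis.

```python
def simulate_queue(arrival_rate, service_rate, dt=0.01, t_end=10.0):
    """Continuously simulate the request backlog of one component.

    d(queue)/dt = arrival_rate - throughput, where throughput is capped by
    the service rate and cannot drain the queue below zero.
    """
    queue, t, history = 0.0, 0.0, []
    while t < t_end:
        throughput = min(service_rate, arrival_rate + queue / dt)
        queue = max(0.0, queue + (arrival_rate - throughput) * dt)
        history.append((round(t, 2), round(queue, 3)))
        t += dt
    return history

# Arrivals exceed capacity, so the backlog grows linearly over time.
trace = simulate_queue(arrival_rate=120.0, service_rate=100.0)
print(trace[-1])  # final (time, queue length)
```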
APA, Harvard, Vancouver, ISO, and other styles
28

Raguindin, Ferdinand M. "Selecting a software capability evaluation for weapons acquisition." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA289788.

Full text
Abstract:
Thesis (M.S. in Management) Naval Postgraduate School, September 1994.
Thesis advisor(s): Martin J. McCaffrey, James Emery. "September 1994." Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
29

Seckin, Haldun. "Software Process Improvement Based On Static Process Evaluation." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607155/index.pdf.

Full text
Abstract:
This study investigates software development process improvement approaches. In particular, the static process evaluation methodology proposed by S. Güceglioglu is applied to the requirements analysis and validation process used in Project X at MYCOMPANY, and an improved process is proposed. That methodology is an extension of the ISO/IEC 9126 approach for software quality assessment, and is based on evaluating a set of well-defined metrics on the static model of software development processes. The improved process proposed for Project X is evaluated using Güceglioglu's methodology, and the measurement results of the applied and the improved processes are compared to determine whether the improvement is successful.
APA, Harvard, Vancouver, ISO, and other styles
30

Cheng, Chow Kian, and Rahadian Bayu Permadi. "Towards an Evaluation Framework for Software Process Improvement." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3625.

Full text
Abstract:
Software has gained an essential role in our daily life over the last decades, and this demands high-quality software. To produce high-quality software, many practitioners and researchers pay close attention to the software development process, and large investments are poured into improving it. Software Process Improvement (SPI) is a research area aimed at addressing the assessment and improvement issues in the software development process. One of the most important aspects of software process improvement is measuring the results gained from the process change that has been embarked upon: without measuring the results, it is hard to tell whether the goals have been achieved or not. However, measurement for software process improvement is not a trivial task, and there is no common systematic methodology that can be used to help measure the performance of software process improvement initiatives. This thesis is intended to provide basic key concepts for the effective measurement and evaluation of the outcome of software process improvement. A major part of the thesis presents a systematic review of evaluating the outcome of software process improvement. The systematic review is aimed at identifying the major issues in software process improvement evaluation and at gathering the requirements for a software process improvement measurement and evaluation framework. Based on the results of the systematic review, a measurement and evaluation model is formulated. The objective of the model is to provide the groundwork for a software process improvement measurement and evaluation framework. The model is deemed applicable in a broad spectrum of scenarios by providing concepts that are independent of specific SPI initiatives.
APA, Harvard, Vancouver, ISO, and other styles
31

Carpatorea, Iulian Nicolae. "A graphical traffic scenario editing and evaluation software." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-19438.

Full text
Abstract:
An interactive tool is developed for the purpose of rapid exploration of diverse traffic scenarios. The focus is on rapidity of design and evaluation rather than on physical realism. Core aspects are the ability to define the essential elements of a traffic scenario, such as a road network and vehicles. Cubic Bezier curves are used to design the roads and vehicle trajectories. A prediction algorithm is used to visualize future vehicle poses and collisions, and thus provides a means for evaluating the scenario. The program was created using C++ with the help of the Qt libraries.
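For reference, a cubic Bezier curve is defined by four control points P0..P3 as B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3 for t in [0, 1]. The sketch below evaluates such a curve for a road centre line or vehicle trajectory; the control points are arbitrary examples, and the thesis tool itself is written in C++/Qt rather than Python.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a 2D cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Arbitrary control points sketching a gentle right-hand bend.
road = [(0, 0), (40, 0), (70, 20), (100, 50)]
centre_line = [cubic_bezier(*road, t / 20) for t in range(21)]
print(centre_line[0], centre_line[10], centre_line[20])
```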
APA, Harvard, Vancouver, ISO, and other styles
32

Owrak, Ali. "A quality evaluation model for service-oriented software." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Dutta, Rahul Kumar. "A Framework for Software Security Testing and Evaluation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121645.

Full text
Abstract:
Security is a growing concern in the automotive industry. As more smart electronic devices become connected to each other, this dependency is pushing us to connect them with moving objects such as cars, buses and trucks. As such, safety and security issues related to automotive objects are becoming more relevant in the realm of Internet-connected devices and objects. In this thesis, we emphasize certain factors that introduce security vulnerabilities in the implementation phase of the Software Development Life Cycle (SDLC). Input invalidation is one of them, and it is the one we address in our work. We implement a security evaluation framework that allows us to improve security in automotive software by identifying and removing software security vulnerabilities that arise for input-invalidation reasons during the SDLC. We propose to use this framework in the implementation and testing phases so that critical security-by-design deficiencies in the software can be easily addressed and mitigated.
APA, Harvard, Vancouver, ISO, and other styles
34

BERTRAN, ISELA MACIA. "EVALUATION OF SOFTWARE QUALITY BASED ON UML MODELS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13748@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
One of the goals of software engineering is the development of high-quality software at a small cost and in a short period of time. In this context, several techniques have been defined for controlling the quality of software designs, and many metrics-based mechanisms have been defined for detecting software design flaws. Most of these mechanisms and techniques focus on analyzing the source code. However, in order to reduce unnecessary rework, it is important to use quality analysis techniques that allow the detection of design flaws earlier in the development cycle. We believe that these techniques should analyze design flaws starting from software models. This dissertation proposes: (i) a set of strategies to detect, in UML models, specific and recurrent design problems: Long Parameter List, God Class, Data Class, Shotgun Surgery, Misplaced Class and God Package; and (ii) the use of the QMOOD quality model to analyze class diagrams. To automate the application of these mechanisms we implemented a tool: the QCDTool. The detection strategies and the QMOOD model were evaluated in the context of two experimental studies. The first study analyzed the accuracy, precision and recall of the proposed detection strategies, showing the benefits and drawbacks of applying some of them to class diagrams. The second study analyzed the utility of using the QMOOD quality model on class diagrams, showing that it was possible to identify, based on class diagrams, variations in design properties and, consequently, in the quality attributes of the analyzed systems.
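Detection strategies of this kind are typically logical compositions of metric thresholds. The sketch below shows a God Class rule in the spirit of Lanza and Marinescu's well-known strategy (high WMC, ATFD above a few, low TCC); the thresholds and the metric values fed in are illustrative assumptions, not the ones defined in the dissertation.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    wmc: int     # weighted methods per class (complexity)
    atfd: int    # accesses to foreign data
    tcc: float   # tight class cohesion, in [0, 1]

def is_god_class(m: ClassMetrics, wmc_high=47, atfd_few=5, tcc_low=1 / 3) -> bool:
    """Flag a class as a God Class candidate when it is complex,
    uses many attributes of other classes, and has low cohesion."""
    return m.wmc >= wmc_high and m.atfd > atfd_few and m.tcc < tcc_low

# Illustrative metric values extracted from a class diagram.
candidates = [ClassMetrics("OrderManager", wmc=62, atfd=9, tcc=0.15),
              ClassMetrics("Invoice", wmc=12, atfd=1, tcc=0.60)]
print([c.name for c in candidates if is_god_class(c)])  # ['OrderManager']
```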
APA, Harvard, Vancouver, ISO, and other styles
35

Agbogidi, Oghenetega. "Practical Evaluation of a Software Defined Cellular Network." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984175/.

Full text
Abstract:
This thesis proposes a design of a rapidly deployable cellular network prototype that provides voice and data communications and it is interoperable with legacy devices and the existing network infrastructure. The prototype is based on software defined radio and makes use of IEEE 802.11 unlicensed wireless radio frequency (RF) band for backhaul link and an open source GSM implementation software. The prototype is also evaluated in environments where there is limited control of the radio frequency landscape, and using Voice Over Internet Protocol (VoIP) performance metrics to measure the quality of service. It is observed that in environments where the IEEE 802.11 band is not heavily utilized, a large number of calls are supported with good quality of service. However, when this band is heavily utilized only a few calls can be supported as the quality of service rapidly degrades with increasing number of calls, which is due to interference. It is concluded that in order to achieve tolerable voice quality, unused licensed spectrum is needed for backhaul communication between base stations.
APA, Harvard, Vancouver, ISO, and other styles
36

Dädeby, Oskar. "Dynamic Blast Load Analysis using RFEM : Software evaluation." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-84784.

Full text
Abstract:
The purpose of this Master thesis is to evaluate the RFEM software and determine whether it can be used for dynamic analyses with blast loads from explosions. Determining the blast resistance of a structure is a growing market, so it would be beneficial for Sweco Eskilstuna if RFEM could be used for this type of work. The verification involved comparing the RFEM software against a real experiment consisting of a set of blast-tested reinforced concrete beams. Using the structural properties and test setup from the experiment, the same structure was replicated in RFEM, and a dynamic analysis was run with the same dynamic load measured in the experiment, for two load cases caused by two different explosive charges. The structural response from the experiment, in the form of displacement and acceleration time diagrams, could then be compared with the response simulated by RFEM. By analysing the displacement and acceleration from both the experiment and RFEM, the accuracy of the software for this specific situation was determined. The comparison was considered acceptable if the maximum displacement was consistent with the experimental result within the same time frame, and if the initial acceleration was consistent with the experimental result. These criteria needed to be met to verify that RFEM can simulate a dynamic analysis. If the software managed to complete a dynamic analysis for the two load cases, it could then be evaluated further by determining whether post-blast effects could be assessed and whether the modelling method was reliable. The acceleration from RFEM was in good agreement with the experiment in the initial part of the blast, matching closely for both load cases after 3 ms; thereafter the RFEM acceleration behaved chaotically and showed no similarity for the remainder of the blast. The displacement matched the maximum experimental displacement within a margin of 0.5 mm for both load cases, within a 1 ms time margin. In conclusion, RFEM managed to simulate a blast load analysis, and the displacement and acceleration gave acceptable results according to the criteria. With the chosen method a fast simulation was achieved, and the fact that the same model complied with two different load cases indicated that the first result was not a coincidence. The steps in the modelling method were straightforward, but two parameters were found to reduce its reliability. The first was the material model chosen for the concrete, which was a plastic material model: the two alternative material models, linear elastic and non-linear elastic, both caused failed simulations, and a better choice would have been a diagram model ensuring that the concrete loses its tensile capacity at maximum capacity, but this was not available in a dynamic analysis with multiple load increments. The second was the movement of the beam in the supports; this was not recorded in the experiment but was judged to contribute to the test, and the results differed considerably depending on how much the beam was allowed to move.
In the end, the best possible result was chosen to match the first load case, and the same RFEM model was then used in the second test. The second load case showed results as good as the first, but the large variation in results depending on the movement of the beam in the supports left this part unclear. For the evaluation, the question of whether RFEM can provide a post-blast analysis also needed to be addressed, and the answer is no: the failure mode chosen to suit the modelling method required analysing the plastic strain in the reinforcement bars, and this information is not available through the add-on module DYNAM-PRO, so the model could not show whether the structure resisted the blast. Future work following this thesis, which was made to test the software, would be to build a model that gives a more detailed post-blast analysis. For this, more work would be needed by the developer Dlubal to further improve the add-on module, with more extractable results and more detailed tools for dynamic load cases, since some important functionality is only usable in a static load case. Other than that, RFEM managed to complete the dynamic analysis, and with further improvement of the modelling method a more detailed analysis could be made and then used in real projects in the future.
APA, Harvard, Vancouver, ISO, and other styles
37

Shvadlenko, Irina. "Evaluation of Environmental Education Software “Protecting Your Environment”." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1108407292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kota, Shivaram. "Software requirements for a facilities design software and evaluation of the factory programs suite." Ohio : Ohio University, 2000. http://www.ohiolink.edu/etd/view.cgi?ohiou1172257029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Khan, Muhammad Bilal Ahmad, and Song Shang. "Evaluation of Model Based Testing and Conformiq Qtronic." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-28257.

Full text
Abstract:

Model-based testing is a modern automated testing methodology used to generate test suites automatically from abstract behavioural or environmental models of the System Under Test (SUT). The generated test cases are abstract, like the models, but they can be transformed into test scripts or language-specific test cases for execution. Model-based testing can be applied in different ways, and several of its implementation dimensions change with the nature of the SUT. Because model-based testing works directly with models, it can be applied at early stages of development, which helps validate both models and requirements and saves test-development time at later stages. With automatic test-case generation, requirement changes are easy to handle, since they require fewer changes to the models and reduce rework. It is also easy to generate large numbers of test cases with full coverage criteria, which is hard to achieve with traditional testing methodologies. Testing non-functional requirements is one area where model-based testing is lacking; quality-related aspects of the SUT are difficult to test with it.

The effectiveness and performance of model-based testing are directly related to the efficiency of the CASE tool implementing it. A variety of model-based CASE tools are currently in use in different industries. The Qtronic tool is one that generates test cases automatically from an abstract model of the SUT.

In this master's thesis, the Qtronic test-case generation technique, generation time, coverage criteria and quality of the generated test cases are evaluated in detail by modelling the Session Initiation Protocol (SIP) and the File Transfer Protocol (FTP), and by generating test cases from the models both manually and with the Qtronic tool. To evaluate Qtronic, detailed experiments and comparisons of manually generated test cases and test cases generated by Qtronic were conducted. The results of the case studies show the efficiency of Qtronic over traditional manual test-case generation in many respects. We also show that model-based testing is not effective for every system under test; for some simple systems, manual test-case generation may be the better choice.
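For readers unfamiliar with the technique, the following minimal sketch shows the core idea behind model-based test generation: abstract test cases are derived by covering the transitions of a behavioural model. This is purely illustrative; it is not Qtronic's algorithm, and the FTP-like model and state names are invented for the example.

```python
# Illustrative sketch only: deriving abstract test cases from a small
# state-machine model by covering every transition once.

from collections import deque

# Abstract model of an FTP-like login dialogue: state -> [(input, next_state)]
model = {
    "Idle":      [("connect", "Connected")],
    "Connected": [("user/pass ok", "LoggedIn"), ("user/pass bad", "Idle")],
    "LoggedIn":  [("get file", "LoggedIn"), ("quit", "Idle")],
}

def transition_covering_paths(model, start="Idle"):
    """Breadth-first search for one path per transition, giving abstract test cases."""
    tests = []
    for src, edges in model.items():
        for label, dst in edges:
            # Find a shortest path from the start state to `src` ...
            queue, seen = deque([(start, [])]), {start}
            while queue:
                state, path = queue.popleft()
                if state == src:
                    tests.append(path + [label])   # ... then append the target transition.
                    break
                for lbl, nxt in model.get(state, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [lbl]))
    return tests

for case in transition_covering_paths(model):
    print(" -> ".join(case))
```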

APA, Harvard, Vancouver, ISO, and other styles
40

Shepperd, Martin John. "System architecture metrics : an evaluation." Thesis, Open University, 1991. http://oro.open.ac.uk/57340/.

Full text
Abstract:
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose, and despite the advancing years there is still a lack of maturity. The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity and Henry and Kafura's information flow, all of which might be regarded as having achieved classic status. Despite their popularity these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured; instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research, in particular the scant regard paid to the role of underlying models. This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs, from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional, that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
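Two of the classic metrics dissected in this thesis have simple, widely published formulas: McCabe's cyclomatic complexity V(G) = E - N + 2P and Henry and Kafura's information flow, length x (fan-in x fan-out)^2. The sketch below applies the textbook definitions with made-up inputs; it does not implement the design-level metrics developed in the thesis.

```python
# Standard textbook formulas for two of the metrics discussed above;
# illustrative only, not the metrics proposed in the dissertation.

def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe: V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

def henry_kafura(length, fan_in, fan_out):
    """Henry and Kafura information flow: length * (fan_in * fan_out)^2."""
    return length * (fan_in * fan_out) ** 2

# A procedure with 9 CFG edges, 8 nodes, 1 connected component:
print(cyclomatic_complexity(edges=9, nodes=8))        # 3
# A 50-line module called by 4 modules and calling 3:
print(henry_kafura(length=50, fan_in=4, fan_out=3))   # 7200
```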
APA, Harvard, Vancouver, ISO, and other styles
41

Borowski, Jimmy. "Software Architecture Simulation : Performance evaluation during the design phase." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5882.

Full text
Abstract:
Due to the increasing size and complexity of software systems, software architectures have become a crucial part of development projects. A lot of effort has been put into defining formal ways of describing architecture specifications using Architecture Description Languages (ADLs). Since no common ADL today offers tools for evaluating performance, an attempt has been made to develop such a tool based on an event-based simulation engine. Common ADLs were investigated and the work was grounded in the fundamentals of the software architecture field. The tool was evaluated both in terms of the correctness of its predictions and in terms of usability, to show that it actually is possible to evaluate performance using high-level architectures as models.
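The abstract does not detail the simulation engine, but a tool of this kind is typically built around a discrete-event loop. The sketch below is a generic, assumed illustration of such a kernel; it is not the tool developed in the thesis, and all names are invented.

```python
# Minimal discrete-event simulation kernel of the kind the abstract alludes to;
# purely illustrative, not the thesis's implementation.

import heapq

def simulate(events, horizon=float("inf")):
    """Process (time, description, action) events in time order.

    An `action(schedule)` callback may schedule follow-up events.
    Returns the processed (time, description) trace.
    """
    queue = list(events)
    heapq.heapify(queue)
    trace = []

    def schedule(time, description, action=None):
        heapq.heappush(queue, (time, description, action))

    while queue:
        time, description, action = heapq.heappop(queue)
        if time > horizon:
            break
        trace.append((time, description))
        if action:
            action(schedule)
    return trace

# A request arrives at t=0 and occupies a component for 5 time units:
def arrival(schedule):
    schedule(5.0, "request done")

print(simulate([(0.0, "request arrives", arrival)]))
```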
APA, Harvard, Vancouver, ISO, and other styles
42

Islam, A. K. M. Moinul, and Michael Unterkalmsteiner. "Software Process Improvement Measurement and Evaluation Framework (SPI-MEF)." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2493.

Full text
Abstract:
During the last decades, the dependency on software has increased. Many of today's modern devices embed software to control their functions. This increasing dependency has also helped shape the software development process to produce better-quality software. Many researchers and practitioners have invested heavily in improving the software development process. The research area within software engineering that addresses assessment and improvement issues in development processes is called Software Process Improvement (SPI). One of the essential aspects of software process improvement is measuring the outcome of the implemented changes. The measurement and evaluation of software process improvement provides the means for the organization to articulate the level of achievement of its goals. Although measuring and evaluating the outcome of software process improvement is of paramount importance, there are no common guidelines or systematic methods for doing so. This makes it difficult for practitioners to implement software process improvement measurement programs, and it raises the challenge of developing and implementing an effective framework for measuring and evaluating the outcome of software process improvement initiatives. This thesis presents a measurement and evaluation framework for software process improvement. SPI-MEF provides guidelines in the form of systematic steps to evaluate the outcome of software process improvement. The framework is based on key concepts elaborated in previous work. In this thesis, a validation of SPI-MEF is also conducted involving representatives from academia and industry. The validation aims to judge the framework's usability, applicability and usefulness. Finally, a refinement of the framework is carried out based on the input from the validation.
APA, Harvard, Vancouver, ISO, and other styles
43

Gadgil, Kalyani Surendra. "Performance Benchmarking Software-Defined Radio Frameworks: GNURadio and CRTSv.2." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/97568.

Full text
Abstract:
In this thesis, we benchmark the Cognitive Radios Test System version 2.0 (CRTSv.2) to analyze its software performance with respect to its internal structure and design choices. With the help of system monitoring and profiling tools, CRTSv.2 is tested to quantitatively evaluate its features and understand its shortcomings. Using GNU Radio, a popular, easy-to-use software radio framework, as a reference, we ascertain that CRTSv.2 has a low memory footprint, fewer dependencies and, overall, is a lightweight framework that can potentially be used for real-time signal processing. Several open-source measurement tools such as valgrind, perf and top are used to evaluate CPU utilization and memory footprint and to postulate the origins of latencies. Based on our evaluation, we observe that CRTSv.2 shows a CPU utilization of approximately 9% whereas GNU Radio's is 59%. CRTSv.2 has a lower heap memory consumption of approximately 3 MB compared with GNU Radio's 25 MB. This study establishes a methodology to evaluate the performance of two SDR frameworks systematically and quantitatively.
Master of Science
When picking the best person for a job, we rely on the person's performance in past projects of a similar nature. The same can be said for software. Software radios provide the capability to perform signal processing functions in software, making them prime candidates for solving modern problems such as spectrum scarcity, internet-of-things (IoT) adoption, vehicle-to-vehicle communication, etc. To operate and configure software radios, software frameworks are provided that let the user modify the waveform, perform signal processing and manage data. In this thesis, we consider two such frameworks, GNU Radio and CRTSv.2. A software performance evaluation is conducted to assess the framework overheads contributing to the operation of an orthogonal frequency-division multiplexing (OFDM) digital modulation scheme. This provides a quantitative analysis of a signals-specific use case which researchers can use to evaluate the optimal framework for their research. The analysis can be generalized to other signal processing capabilities by separating the total framework overhead from the signal processing costs.
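A rough idea of the kind of measurement described above can be given with a short sampling script. The sketch below uses the third-party psutil package to sample CPU utilization and resident memory of a running process; this is an assumed illustration, whereas the thesis itself relied on valgrind, perf and top, and the process id is hypothetical.

```python
# Illustrative sketch (assumes the third-party `psutil` package is installed);
# not the measurement setup used in the thesis.

import time
import psutil

def sample_process(pid, samples=10, interval=1.0):
    """Average CPU utilisation (%) and peak resident memory (MB) of one process."""
    proc = psutil.Process(pid)
    proc.cpu_percent(None)            # prime the CPU counter
    cpu, peak_rss = [], 0
    for _ in range(samples):
        time.sleep(interval)
        cpu.append(proc.cpu_percent(None))
        peak_rss = max(peak_rss, proc.memory_info().rss)
    return sum(cpu) / len(cpu), peak_rss / 2**20

if __name__ == "__main__":
    avg_cpu, peak_mb = sample_process(pid=12345)  # hypothetical SDR process id
    print(f"CPU {avg_cpu:.1f} %, peak RSS {peak_mb:.1f} MB")
```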
APA, Harvard, Vancouver, ISO, and other styles
44

Reinthaler, Stephan Ulrich. "Evaluation der Produktionsplanungssoftware Repetetive Manufacturing Optimization von Oracle." Institut für Transportwirtschaft und Logistik, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1780/1/document.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Rosén, Nils. "Evaluation methods for procurement of business critical software systems." Thesis, University of Skövde, School of Humanities and Informatics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-3091.

Full text
Abstract:

The purpose of this thesis is to explore what software evaluation methods are currently available that can assist organizations and companies in procuring a software solution for some particular task or purpose for a specific type of business. The thesis is based on a real-world scenario where a company, Volvo Technology Corporation (VTEC), is in the process of selecting a new intellectual property management system for their patent department. For them to make an informed decision as to which system to choose, an evaluation of market alternatives needs to be done. First, a set of software evaluation methods and techniques are chosen for further evaluation. An organizational study, by means of interviews where questions are based on the ISO 9126-1 Software quality model, is then conducted, eliciting user opinions about the current system and what improvements a future system should have. The candidate methods are then evaluated based on the results from the organizational study and other pertinent factors in order to reach a conclusion as to which method is best suited for this selection problem. The Analytical Hierarchy Process (AHP) is deemed the best choice.
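Since the thesis concludes in favour of the Analytical Hierarchy Process (AHP), a minimal sketch of how AHP derives criterion weights from a pairwise comparison matrix may help the reader. It uses the common geometric-mean approximation of the principal eigenvector; the criteria and judgements are invented, and this is not the evaluation performed in the thesis.

```python
# Minimal AHP weight calculation using the geometric-mean approximation of
# the principal eigenvector; illustrative only, with made-up judgements.

from math import prod

def ahp_weights(matrix):
    """Derive priority weights from a reciprocal pairwise comparison matrix."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Pairwise judgements for three criteria, e.g. usability, cost, support:
# usability is 3x as important as cost and 5x as important as support, etc.
comparisons = [
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
]
print([round(w, 3) for w in ahp_weights(comparisons)])  # roughly [0.648, 0.230, 0.122]
```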

APA, Harvard, Vancouver, ISO, and other styles
46

Jagannathan, Srivatsan. "Comparison and Evaluation of Open-source Cloud Management Software." Thesis, KTH, Kommunikationsnät, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-99004.

Full text
Abstract:
The number of cloud management software packages for private infrastructure-as-a-service clouds is increasing day by day. Their features vary significantly, which makes it difficult for cloud consumers to choose software based on their business requirements. An example of the problem is choosing software with a power management feature. Power management increases the efficiency of energy consumption by consolidating virtual machines and turning off unused physical servers, and it is not provided by many cloud management packages. OpenNebula is one of the most widely used open-source cloud management software packages among research institutions and enterprises. However, its performance characteristics are not well studied in the existing literature; an example of the problem is choosing a hardware configuration on which research institutions and enterprises should run OpenNebula. The first objective of this thesis is to develop a framework for comparing features of various cloud management software. To develop this framework, existing work is reviewed, and the cloud management software is installed on the KTH LCN testbed for hands-on experience. Both open-source and commercial software are analyzed. The major contribution related to the framework is identifying features provided by commercial software that are not available in the open-source software. These features are: (1) co-location of VMs, i.e. running a group of VMs on the same physical machine (for example, if a web server VM has to access an application server VM to get web pages, they can be placed on the same physical machine); (2) anti-co-location of VMs, i.e. not allowing a pair of VMs to run on a single physical machine (for example, the primary and back-up web server VMs should always run on different physical machines); (3) combining the resources of the physical machines (e.g., number of CPU cores, physical memory) into a resource pool and compartmentalizing it along an organizational structure (e.g., HR, development, testing, etc.). The second objective of this thesis is to evaluate the performance of the OpenNebula cloud management software. For the performance evaluation, existing work is reviewed to identify metrics, and OpenNebula is installed on the KTH LCN testbed. Its performance was evaluated for different virtual machine operations, virtual machine types, numbers of virtual machines and changes in system load. The major lessons learned from the performance evaluation are: (1) the duration of a live migration does not change with the load; (2) the duration of a live migration increases linearly as the memory assigned to the VM increases; (3) the duration of the add and delete operations increases linearly as the number of VMs increases.
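The co-location and anti-co-location features identified in (1) and (2) above can be illustrated with a small placement check. The sketch below is an assumed illustration of the rules only; the function names and data structures are not an OpenNebula or commercial API.

```python
# Illustrative sketch of the placement rules described above; all names and
# structures are assumptions, not an actual cloud management API.

def placement_ok(placement, colocate=(), anti_colocate=()):
    """Check a {vm: host} mapping against co-location and anti-co-location rules."""
    for group in colocate:          # VMs that must share one physical host
        if len({placement[vm] for vm in group}) != 1:
            return False
    for group in anti_colocate:     # VMs that must all be on different hosts
        hosts = [placement[vm] for vm in group]
        if len(set(hosts)) != len(hosts):
            return False
    return True

placement = {"web": "host1", "app": "host1", "web-backup": "host2"}
print(placement_ok(placement,
                   colocate=[("web", "app")],                # web and app server together
                   anti_colocate=[("web", "web-backup")]))   # primary and backup apart
# -> True
```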
APA, Harvard, Vancouver, ISO, and other styles
47

Marculescu, Bogdan. "Interactive Search-Based Software Testing : Development, Evaluation, and Deployment." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Shyr, Casper. "Development and evaluation of software for applied clinical genomics." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58043.

Full text
Abstract:
High-throughput next-generation DNA sequencing has evolved rapidly over the past 20 years. The Human Genome Project published its first draft of the human genome in 2000 at an enormous cost of 3 billion dollars, and was an international collaborative effort that spanned more than a decade. Subsequent technological innovations have decreased that cost by six orders of magnitude, down to a thousand dollars, while throughput has increased by over 100 times to a current delivery of gigabases of data per run. In bioinformatics, significant efforts to capitalize on the new capacities have produced software for the identification of deviations from the reference sequence, including single nucleotide variants, short insertions/deletions, and more complex chromosomal characteristics such as copy number variations and translocations. Clinically, hospitals are starting to incorporate sequencing technology as part of exploratory projects to discover underlying causes of diseases with suspected genetic etiology, and to provide personalized clinical decision support based on patients' genetic predispositions. As with any new large-scale data, a need has emerged for mechanisms to translate knowledge from computationally oriented informatics specialists to the clinically oriented users who interact with it. In the genomics field, the complexity of the data, combined with the gap in perspectives and skills between computational biologists and clinicians, presents an unsolved grand challenge for bioinformaticians: translating patient genomic information to facilitate clinical decision-making. This doctoral thesis focuses on a comparative design analysis of clinical decision support systems and prototypes interacting with patient genomes in various sectors of healthcare, with the ultimate aim of improving the treatment and well-being of patients. Through a combination of usability methodologies across multiple distinct clinical user groups, the thesis highlights recurring domain-specific challenges and introduces ways to overcome the roadblocks to translating next-generation sequencing from the research laboratory to a multidisciplinary hospital environment. To improve the interpretation efficiency of patient genomes, and informed by the design analysis findings, a novel computational approach to prioritize exome variants based on automated appraisal of patient phenotypes is introduced. Finally, the thesis research incorporates applied genome analysis via clinical collaborations to inform interface design and enable mastery of genome analysis.
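The abstract mentions, without detail, a computational approach that prioritizes exome variants from patient phenotypes. As a generic, assumed illustration of phenotype-driven ranking (explicitly not the method developed in the thesis), one can score candidate genes by the overlap between the patient's phenotype terms and the terms annotated to each gene; the gene names and terms below are invented.

```python
# Generic illustration of phenotype-driven ranking (NOT the thesis's method):
# score each candidate gene by the Jaccard overlap between the patient's
# phenotype terms and the terms annotated to that gene.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_genes(patient_terms, gene_annotations):
    """Return (gene, score) pairs, best match first."""
    scores = {g: jaccard(patient_terms, terms) for g, terms in gene_annotations.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical HPO-style terms:
patient = {"seizures", "developmental delay", "hypotonia"}
annotations = {
    "GENE_A": {"seizures", "hypotonia", "ataxia"},
    "GENE_B": {"short stature", "scoliosis"},
}
print(rank_genes(patient, annotations))   # GENE_A scores highest
```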
Science, Faculty of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
49

Mårtensson, Frans. "Software Architecture Quality Evaluation : Approaches in an Industrial Context." Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00313.

Full text
Abstract:
Software architecture has been identified as an increasingly important part of software development. The software architecture helps the developer of a software system to define the internal structure of the system. Several methods for evaluating software architectures have been proposed to assist the developer in creating an architecture with the potential to fulfil the requirements on the system. Many of these evaluation methods focus on a single quality attribute. However, an industrial system normally has requirements on several quality aspects, so an architecture evaluation method that addresses multiple quality attributes, e.g., performance, maintainability, testability, and portability, would be more beneficial. This thesis presents research towards a method for evaluating multiple quality attributes using one software architecture evaluation method. A prototype-based evaluation method is proposed that enables evaluation of multiple quality attributes using components of a system and an approximation of its intended runtime environment. The method is applied in an industrial case study where communication components in a distributed real-time system are evaluated. The evaluation addresses performance, maintainability, and portability for three alternative components using a single set of software architecture models and a prototype framework. The prototype framework enables the evaluation of different components and component configurations in the software architecture while collecting data in an objective way. Finally, this thesis presents initial work towards incorporating the evaluation of testability into the method. This is done through an investigation of how testability is interpreted by different organizational roles in a software-developing organization and which source code measures they consider to affect testability.
APA, Harvard, Vancouver, ISO, and other styles
50

Andersson, Björn, and Marie Persson. "Software Reliability Prediction – An Evaluation of a Novel Technique." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3589.

Full text
Abstract:
Along with continuously increasing computerization, our expectations of software and hardware reliability increase considerably. Software reliability has therefore become one of the most important software quality attributes. Software reliability modeling based on test data is done to estimate whether the current reliability level meets the requirements for the product, and such modeling also provides possibilities to predict reliability. The costs of software development and testing, together with profit considerations related to software reliability, are among the main motivations for software reliability prediction. Software reliability prediction currently uses different models for this purpose, whose parameters have to be set to tune the model to fit the test data. A slightly different prediction model, Time Invariance Estimation (TIE), is developed to challenge the models used today. An experiment is set up to investigate whether TIE could be found useful in a software reliability prediction context. The experiment is based on a comparison between the ordinary reliability prediction models and TIE.
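For readers unfamiliar with reliability growth modelling, the sketch below evaluates the classic Goel-Okumoto model, m(t) = a(1 - e^(-bt)), to predict the expected cumulative number of failures at a given test time. The parameters are invented, and this is not the TIE technique proposed in the thesis.

```python
# Sketch of the kind of prediction such models make, using the classic
# Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t)); this is
# NOT the TIE technique evaluated in the thesis, and the numbers are made up.

import math

def expected_failures(t, a, b):
    """Expected cumulative number of failures observed by test time t."""
    return a * (1.0 - math.exp(-b * t))

a, b = 120.0, 0.05          # assumed: 120 total latent faults, detection rate 0.05/hour
observed_at_40h = expected_failures(40, a, b)
remaining = a - observed_at_40h
print(f"expected by 40 h: {observed_at_40h:.1f} failures, "
      f"expected remaining: {remaining:.1f}")
```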
APA, Harvard, Vancouver, ISO, and other styles