Dissertations / Theses on the topic 'Computer software – Quality control'

Consult the top 50 dissertations / theses for your research on the topic 'Computer software – Quality control.'

1

Wilburn, Cathy A. "Using the Design Metrics Analyzer to improve software quality." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/902489.

Full text
Abstract:
Effective software engineering techniques are needed to increase the reliability of software systems, to increase the productivity of development teams, and to reduce the costs of software development. Companies search for an effective software engineering process as they strive to reach higher process maturity levels and produce better software. To aid in this quest for better methods of software engineering, the Design Metrics Research Team at Ball State University has analyzed university and industry software to be able to detect error-prone modules. The research team has developed, tested and validated their design metrics and found them to be highly successful. These metrics were typically collected and calculated by hand. So that these metrics can be collected more consistently, more accurately and faster, the Design Metrics Analyzer for Ada (DMA) was created. The DMA collects metrics from the submitted files at the subprogram level. The metrics results are then analyzed to yield a list of stress points, which are modules that are considered to be error-prone or difficult for developers. This thesis describes the Design Metrics Analyzer, explains its output, and describes how it functions. Ways that the DMA can be used in the software development life cycle are also discussed.
Department of Computer Science
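The abstract does not spell out the DMA's rules for flagging stress points. As a rough, hypothetical illustration of the general idea only, thresholding per-subprogram metric values to flag potentially error-prone modules, the sketch below uses invented module names, values and a made-up rule, not the DMA's actual criteria:

```python
from statistics import mean, stdev

def flag_stress_points(module_metrics, z_threshold=1.0):
    """Flag modules whose design-metric value is unusually high.

    module_metrics: dict mapping module name -> numeric design-metric value.
    A module is reported as a stress point when its value lies more than
    z_threshold standard deviations above the mean (illustrative rule only).
    """
    values = list(module_metrics.values())
    mu, sigma = mean(values), stdev(values)
    return [name for name, value in module_metrics.items()
            if sigma > 0 and (value - mu) / sigma > z_threshold]

# Example usage with made-up values:
metrics = {"parser": 4.1, "scheduler": 9.7, "logger": 3.8, "io_driver": 4.4}
print(flag_stress_points(metrics))   # ['scheduler']
```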
2

Walsh, Martha Geiger. "A system of automated tools to support control of software development through software configuration management." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9892.

Full text
3

Krishnamurthy, Janaki. "Quality Market: Design and Field Study of Prediction Market for Software Quality Control." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/352.

Full text
Abstract:
Given the increasing competition in the software industry and the critical consequences of software errors, it has become important for companies to achieve high levels of software quality. While cost reduction and timeliness of projects continue to be important measures, software companies are placing increasing attention on identifying user needs and better defining software quality from a customer perspective. Software quality goes beyond just correcting the defects that arise from any deviations from the functional requirements. System engineers also have to focus on a large number of quality requirements such as security, availability, reliability, maintainability, performance and temporal correctness requirements. The fulfillment of these run-time observable quality requirements is important for customer satisfaction and project success. Generating early forecasts of potential quality problems can have significant benefits for quality improvement. One approach to better software quality is to improve the overall development cycle in order to prevent the introduction of defects and improve run-time quality factors. Many methods and techniques are available to forecast the quality of an ongoing project, such as statistical models, opinion polls, and survey methods. These methods have known strengths and weaknesses, and accurate forecasting is still a major issue. This research utilized a novel approach based on prediction markets, which have proved useful in a variety of situations. In a prediction market for software quality, individual estimates from diverse project stakeholders such as project managers, developers, testers, and users were collected at various points in time during the project. Analogous to the financial futures markets, a security (or contract) was defined that represents the quality requirements, and various stakeholders traded the securities using the prevailing market price and their private information. The equilibrium market price represents the best aggregate of diverse opinions. Among many software quality factors, this research focused on predicting software correctness. The goal of the study was to evaluate whether a suitably designed prediction market would generate a more accurate estimate of software quality than a survey method which polls subjects. Data were collected using a live software project in three stages: the requirements phase, an early release phase and a final release phase. The efficacy of the market was tested by (i) comparing the market outcomes to the final project outcome, and (ii) comparing the market outcomes to the results of an opinion poll. Analysis of the data suggests that predictions generated using the prediction market are significantly different from those generated using polls at the early release and final release stages. The prediction market estimates were also closer to the actual probability estimates for quality than the poll estimates. Overall, the results suggest that suitably designed prediction markets provide better forecasts of potential quality problems than polls.
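The abstract does not detail the market mechanism. The sketch below illustrates how a prediction market can aggregate individual estimates into a single price using a logarithmic market scoring rule (LMSR), a common market-maker design that may well differ from the one used in this study; all names and numbers are illustrative.

```python
import math

class LMSRMarket:
    """Toy logarithmic market scoring rule for a binary 'quality target met?' security.

    The price of the YES outcome acts as the market's aggregate probability
    estimate. This is a generic illustration, not the thesis's design.
    """
    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = prices move more slowly
        self.q_yes = 0.0            # outstanding YES shares
        self.q_no = 0.0             # outstanding NO shares

    def _cost(self, q_yes, q_no):
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        """Return the cost a trader pays to buy `shares` of 'yes' or 'no'."""
        old = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - old

market = LMSRMarket()
market.buy("yes", 30)   # an optimistic tester buys YES shares
market.buy("no", 10)    # a sceptical developer buys NO shares
print(round(market.price_yes(), 2))  # aggregate probability, about 0.55
```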
4

Hammons, Rebecca L. "Continuing professional education for software quality assurance." Muncie, Ind. : Ball State University, 2009. http://cardinalscholar.bsu.edu/759.

Full text
5

Kwan, Pak Leung. "Design metrics forensics : an analysis of the primitive metrics in the Zage design metrics." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897490.

Full text
Abstract:
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). The questions investigated in this thesis are: Why can De be an indicator of potential error modules? Why can Di be an indicator of potential error modules? Are there any significant factors that dominate the design metrics? In this thesis, the report of the STANFINS data is evaluated using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study was chosen because it contains approximately 532 programs, 3,000 packages and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) which were selected by CSC development teams. Error reports were also provided by CSC personnel.
Department of Computer Science
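As a rough sketch of the kind of correlation and regression analysis the abstract describes, assuming De and Di have already been computed per module, the snippet below relates D(G) = De + Di to reported error counts; the module values are invented, not STANFINS data.

```python
from scipy import stats

# Invented per-module values: (De, Di, reported errors) -- placeholder data only.
modules = [
    (3.0, 5.0, 1), (8.0, 9.0, 6), (2.0, 4.0, 0),
    (7.0, 6.0, 4), (5.0, 8.0, 3), (9.0, 11.0, 8),
]

d_g = [de + di for de, di, _ in modules]              # D(G) = De + Di
errors = [err for _, _, err in modules]

r, p_value = stats.pearsonr(d_g, errors)              # correlation analysis
slope, intercept, *_ = stats.linregress(d_g, errors)  # simple linear regression

print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
print(f"errors ~ {slope:.2f} * D(G) + {intercept:.2f}")
```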
6

Stineburg, Jeffrey. "Software reliability prediction based on design metrics." Virtual Press, 1999. http://liblink.bsu.edu/uhtbin/catkey/1154775.

Full text
Abstract:
This study has presented a new model for predicting software reliability based on design metrics. An introduction to the problem of software reliability is followed by a brief overview of software reliability models. A description of the models is given, including a discussion of some of the issues associated with them. The intractability of validating life-critical software is presented. Such validation is shown to require extended periods of test time that are impractical in real-world situations. This problem is also inherent in fault-tolerant software systems of the type currently being implemented in critical applications. The design metrics developed at Ball State University are proposed as the basis of a new model for predicting software reliability from information available during the design phase of development. The thesis investigates the proposition that a relationship exists between the design metric D(G) and the errors that are found in the field. A study, performed on a subset of a large defense software system, discovered evidence to support the proposition.
Department of Computer Science
7

Black, Angus Hugh. "Software quality assurance in a remote client/contractor context." Thesis, Rhodes University, 2006. http://hdl.handle.net/10962/d1006615.

Full text
Abstract:
With the reliance on information technology, and on the software that this technology utilizes, increasing every day, it is of paramount importance that the software developed be of an acceptable quality. This quality can be achieved through the utilization of various software engineering standards and guidelines. The question is to what extent these standards and guidelines need to be utilized and how they should be implemented. This research focuses on how guidelines developed by standardization bodies and the Unified Process developed by Rational can be integrated to achieve a suitable process and version control system within the context of a remote client/contractor small-team environment.
8

Pipkin, Jeffrey A. "Applying design metrics to large-scale telecommunications software." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1036178.

Full text
Abstract:
The design metrics developed by the Design Metrics team at Ball State University are a suite of metrics that can be applied during the design phase of software development. The benefit of the metrics lies in the fact that they can be applied early in the software development cycle. The suite includes the external design metric De, the internal design metric Di, the design metric D(G), the design balance metric DB, and the design connectivity metric DC. The suite of design metrics has been applied to large-scale industrial software as well as student projects. Bell Communications Research of New Jersey has made available a software system that can be used to apply design metrics to large-scale telecommunications software. This thesis presents the suite of design metrics and attempts to determine if the characteristics of telecommunications software are accurately reflected in the conventions used to compute the metrics.
Department of Computer Science
9

Bhattrai, Gopendra R. "An empirical study of software design balance dynamics." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/958786.

Full text
Abstract:
The Design Metrics Research Team in the Computer Science Department at Ball State University has been engaged in developing and validating quality design metrics since 1987. Since then a number of design metrics have been developed and validated. One of the design metrics developed by the research team is design balance (DB). This thesis is an attempt to validate the metric DB. In this thesis, results of the analysis of five systems are presented. The main objective of this research is to examine if DB can be used to evaluate the complexity of a software design and hence the quality of the resulting software. Two of the five systems analyzed were student projects and the remaining three were from industry. The five systems analyzed were written in different languages, had different sizes and exhibited different error rates.
Department of Computer Science
10

Walia, Gursimran Singh. "Using error modeling to improve and control software quality: an empirical investigation." Diss., Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-04032009-070637.

Full text
11

Lim, Edwin C. "Software metrics for monitoring software engineering projects." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1994. https://ro.ecu.edu.au/theses/1100.

Full text
Abstract:
As part of the undergraduate course offered by Edith Cowan University, the Department of Computer Science has (as part of a year's study) a software engineering group project. The structure of this project was divided into two units, Software Engineering 1 and Software Engineering 2. In Software Engineering 1, students were given the group project, for which they had to complete and submit the Functional Requirement and Detail System Design documentation. In Software Engineering 2, students commenced with the implementation of the software, testing and documentation. The software was then submitted for assessment and presented to the client. To aid the students with the development of the software, the department had adopted EXECOM's APT methodology as its standard guideline. Furthermore, the students were divided into groups of 4 to 5, each group working on the same problem. A staff adviser was assigned to each project group. The purpose of this research exercise was to fulfil two objectives. The first objective was to ascertain whether there is a need to improve the final-year software engineering project for future students by enhancing any aspect that may be regarded as deficient. The second objective was to ascertain the factors that have the most impact on the quality of the delivered software. The quality of the delivered software was measured using a variety of software metrics. Measurement of software has mostly been ignored until recently, or used without a true understanding of its purpose. A subsidiary objective was to gain an understanding of the worth of software measurement in the student environment. One of the conclusions derived from the study suggests that teams who spent more time on software design and testing tended to produce better quality software with fewer defects. The study also showed that adherence to the APT methodology led to the project being on schedule and general team satisfaction with the project management. One of the recommendations made to the project co-ordinator was that staff advisers should have sufficient knowledge of the software engineering process.
12

Moschoglou, Georgios Moschos. "Software testing tools and productivity." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014862.

Full text
Abstract:
Testing statistics state that testing consumes more than half of a programmer's professional life, although few programmers like testing, fewer like test design, and only 5% of their education is devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software using no tool and testing software using a command-line based testing tool - with respect to the length of time and number of test cases needed to achieve 80% statement coverage for 22 graduate students in the Computer Science Department. The second experiment compares three conditions - testing software using no tool, testing software using a command-line based testing tool, and testing software using a GUI interactive tool with added functionality - with respect to the length of time and number of test cases needed to achieve 95% statement coverage for 39 graduate and undergraduate students in the same department.
Department of Computer Science
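Statement coverage is the stopping criterion in both experiments. A minimal pure-Python sketch of measuring it is shown below; a real study would use a dedicated coverage tool, and the function and file names here are illustrative.

```python
import sys

def run_with_statement_coverage(test_func, target_filename):
    """Run test_func and record which lines of target_filename execute."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_filename == target_filename:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        test_func()
    finally:
        sys.settrace(None)
    return executed

# coverage = len(executed & executable_lines) / len(executable_lines),
# where executable_lines is the set of statement line numbers in the file.
```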
13

Perera, Dinesh Sirimal. "Design metrics analysis of the Harris ROCC project." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/935930.

Full text
Abstract:
The Design Metrics Research Team at Ball State University has developed a quality design metric D(G), which consists of an internal design metric Di, and an external design metric De. This thesis discusses applying design metrics to the ROCC (Radar On-line Command Control) project received from Harris Corporation. Thus, the main objective of this thesis is to analyze the behavior of D(G), and the primitive components of this metric. Error and change history reports are vital inputs to the validation of design metrics' performance. Since correct identification of types of changes/errors is critical for our evaluation, several different types of analyses were performed in an attempt to qualify the metric performance in each case. This thesis covers the analysis of 666 FORTRAN modules with approximately 142,296 lines of code.
Department of Computer Science
14

Pryor, Alan N. "A Discrimination of Software Implementation Success Criteria." Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc2196/.

Full text
Abstract:
Software implementation projects struggle with the delicate balance of low cost, on-time delivery and quality. The methodologies and processes used to create and maintain a quality software system are expensive to deploy and result in long development cycle times. However, without their deployment into the software implementation life cycle, a software system will be undependable and unsuccessful. The purpose of this research is to identify a succinct set of software implementation success criteria and assess the key independent constructs (activities) carried out to ensure a successful implementation project. The research will assess the success of a software implementation project as the dependent construct of interest and use the software process model (methodology) as the independent construct. This field research involved three phases: (1) criteria development, (2) data collection, and (3) testing of hypotheses and discriminant analysis. The first phase resulted in the development of the measurement instruments for the independent and dependent constructs. The measurement instrument for the independent construct was representative of the criteria from highly regarded software implementation process models and methodologies, e.g., ISO 9000 and the Software Engineering Institute's Capability Maturity Model (SEI CMM). The dependent construct was developed from the categories and criteria of the DeLone and McLean (1992) MIS List of Success Measures. The data collection and assessment phase employed a field survey research strategy applied to 80 companies involved in internal software implementation. Both successful and unsuccessful software implementation projects (identified by the DeLone/McLean model) participated. Results from 165 projects were collected, 28 unsuccessful and 137 successful. The third phase used ANOVA to test the first 11 hypotheses and employed discriminant analysis for the 12th hypothesis to identify the "best set" of variables (criteria) that discriminate between successful and unsuccessful software implementation projects. Twelve discriminating variables out of 67 were identified and supported as significant discriminators between successful and unsuccessful projects. Three of the 11 constructs were found not to be significant investments for the successful projects.
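A rough sketch of the discriminant-analysis step described above, using linear discriminant analysis on invented project data (the dissertation's 67 criteria and survey instrument are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows = projects, columns = process-criteria scores (invented numbers).
X = np.array([
    [4, 5, 3], [5, 4, 4], [4, 4, 5], [2, 1, 2], [1, 2, 1], [2, 2, 2],
])
y = np.array([1, 1, 1, 0, 0, 0])   # 1 = successful, 0 = unsuccessful

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Coefficients indicate how strongly each criterion discriminates
# between successful and unsuccessful projects.
print(lda.coef_)
print(lda.predict([[3, 4, 3]]))    # classify a new project
```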
15

Roems, Raphael. "The implications of deviating from software testing processes : a case study of a software development company in Cape Town, South Africa." Thesis, Cape Peninsula University of Technology, 2017. http://hdl.handle.net/20.500.11838/2686.

Full text
Abstract:
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2017.
Ensuring that predetermined quality standards are met is an issue with which software development companies, and the software development industry at large, are struggling. The software testing process is an important process within the larger software development process, and is done to ensure that software functionality meets user requirements and that software defects are detected and fixed prior to users receiving the developed software. Software testing processes have progressed to the point where there are formal processes, dedicated software testing resources and defect management software in use at software development organisations. The research determined the implications that the case study software development organisation could face when deviating from software testing processes, with a focus on the functions performed by the software tester role. The analytical dimensions of the duality of structure framework, based on Structuration Theory, were used as a lens to understand and interpret the socio-technical processes associated with software development processes at the case study organisation. Results include the identification of software testing processes, resources and tools, together with the formal software development processes and methodologies being used. Critical e-commerce website functionality and software development resource costs were identified. Tangible and intangible costs which arise due to software defects were also identified. Recommendations include the prioritisation of critical functionality for test execution on the organisation’s e-commerce website platform. The necessary risk management should also be undertaken in scenarios with time constraints on software testing, balancing risk with quality, features, budget and schedule. Numerous process improvements were recommended for the organisation, to assist in preventing deviations from prescribed testing processes. A guideline was developed as a research contribution to illustrate the relationships of the specific research areas and the impact on software project delivery.
16

Underwood, B. Alan. "A framework for the certification of critical application systems." Thesis, Queensland University of Technology, 1994.

Find full text
17

Chen, Dejiu. "Systems Modeling and Modularity Assessment for Embedded Computer Control Applications." Doctoral thesis, KTH, Maskinkonstruktion, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3792.

Full text
Abstract:
The development of embedded computer control systems (ECS) requires a synergetic integration of heterogeneous technologies and multiple engineering disciplines. With an increasing amount of functionality and expectations for high product quality, short time-to-market, and low cost, the success of complexity control and built-in flexibility turns out to be one of the major competitive edges for many ECS products. For this reason, modeling and modularity assessment constitute two critical subjects of ECS engineering. In the development of ECS, model-based design is currently being exploited in most of the sub-systems engineering activities. However, the lack of support for formalization and systematization associated with the overall systems modeling leads to problems in comprehension, cross-domain communication, and integration of technologies and engineering activities. In particular, design changes and exploitation of "components" are often risky due to the inability to characterize components' properties and their system-wide contexts. Furthermore, the lack of engineering theories for modularity assessment in the context of ECS makes it difficult to identify parameters of concern and to perform early system optimization. This thesis aims to provide a more complete basis for the engineering of ECS in the areas of systems modeling and modularization. It provides solution domain models for embedded computer control systems and the software subsystems. These meta-models describe the key system aspects, design levels, components, component properties and relationships with ECS-specific semantics. By constituting the common basis for abstracting and relating different concerns, these models will also help to provide better support for obtaining holistic system views and for incorporating useful technologies from other engineering and research communities so as to improve the process and to perform system optimization. Further, a modeling framework is derived, aiming to provide a perspective on the modeling aspect of ECS development and to codify important modeling concepts and patterns. In order to extend the scope of engineering analysis to cover flexibility-related attributes and multi-attribute tradeoffs, this thesis also provides a metrics system for quantifying component dependencies that are inherent in the functional solutions. Such dependencies are considered the key factors affecting complexity control, concurrent engineering, and flexibility. The metrics system targets early system-level design and takes into account several domain-specific features such as replication and timing accuracy. Keywords: Domain-Specific Architectures, Model-based System Design, Software Modularization and Components, Quality Metrics.
18

Mohamed, Essack. "A knowledge approach to software testing." Thesis, Stellenbosch : University of Stellenbosch, 2004. http://hdl.handle.net/10019.1/16391.

Full text
Abstract:
Thesis (MPhil)--University of Stellenbosch, 2004.
ENGLISH ABSTRACT: The effort to achieve quality is the largest component of software cost. Software testing is costly - ranging from 50% to 80% of the cost of producing a first working version. It is resource intensive and an intensely time consuming activity in the overall Systems Development Life Cycle (SDLC) and hence could arguably be the most important phase of the process. Software testing is pervasive. It starts at the initiation of a product with nonexecution type testing and continues to the retirement of the product life cycle beyond the post-implementation phase. Software testing is the currency of quality delivery. To understand testing and to improve testing practice, it is essential to see the software testing process in its broadest terms – as the means by which people, methodology, tools, measurement and leadership are integrated to test a software product. A knowledge approach recognises knowledge management (KM) enablers such as leadership, culture, technology and measurements that act in a dynamic relationship with KM processes, namely, creating, identifying, collecting, adapting, organizing, applying, and sharing. Enabling a knowledge approach is a worthy goal to encourage sharing, blending of experiences, discipline and expertise to achieve improvements in quality and adding value to the software testing process. This research was developed to establish whether specific knowledge such as domain subject matter or business expertise, application or technical skills, software testing competency, and whether the interaction of the testing team influences the degree of quality in the delivery of the application under test, or if one is the dominant critical knowledge area within software testing. This research also set out to establish whether there are personal or situational factors that will predispose the test engineer to knowledge sharing, again, with the view of using these factors to increase the quality and success of the ‘testing phase’ of the SDLC. KM, although relatively youthful, is entering its fourth generation with evidence of two paradigms emerging - that of mainstream thinking and that of the complex adaptive system theory. This research uses pertinent and relevant extracts from both paradigms appropriate to gain quality/success in software testing.
19

West, James F. "An examination of the application of design metrics to the development of testing strategies in large-scale SDL models." Virtual Press, 2000. http://liblink.bsu.edu/uhtbin/catkey/1191725.

Full text
Abstract:
There exist a number of well-known and validated design metrics, and the fault prediction available through these metrics has been well documented for systems developed in languages such as C and Ada. However, the mapping and application of these metrics to SDL systems has not been thoroughly explored. The aim of this project is to test the applicability of these metrics in classifying components for testing purposes in a large-scale SDL system. A new model has been developed for this purpose. This research was conducted using a number of SDL systems, most notably actual production models provided by Motorola Corporation.
Department of Computer Science
20

Land, Lesley Pek Wee, Information Systems, Technology & Management, Australian School of Business, UNSW. "Software group reviews and the impact of procedural roles on defect detection performance." Awarded by: University of New South Wales. School of Information Systems, Technology and Management, 2000. http://handle.unsw.edu.au/1959.4/21838.

Full text
Abstract:
Software reviews (inspections) have received widespread attention for ensuring the quality of software by finding and repairing defects in software products. A typical review process consists of two stages critical for defect detection: individual review followed by group review. This thesis addresses two attributes to improve our understanding of the task model: (1) the need for review meetings, and (2) the use of roles in meetings. The controversy over review meeting effectiveness has been consistently raised in the literature. Proponents maintain that the review meeting is the crux of the review process, resulting in group synergism and qualitative benefits (e.g. user satisfaction). Opponents argue against meetings because the costs of organising and conducting them are high, and there is no net meeting gain. The persistence of these diverse views is the main motivation behind this thesis. Although commonly prescribed in meetings, roles have not yet been empirically validated. Three procedural roles (moderator, reader, recorder) were considered. A conceptual framework on software reviews was developed, from which the main research questions were identified. Two experiments were conducted. Review performance was operationalised in terms of true defects and false positives. The review product was COBOL code. The results indicated that in terms of true defects, group reviews outperformed the average individual but not nominal group reviews (aggregate of individual reviews). However, groups have the ability to filter false positives from the individuals' findings. Roles provided limited benefits in improving group reviews. Their main function is to reduce process loss by encouraging systematic consideration of the individuals' findings. When two or more reviewers find a defect during individual reviews, it is likely to be carried through to the meeting (plurality effect). Groups employing roles reported more 'new' false positives (not identified during preparation) than groups without roles. Overall, subjects' ability at defect detection was low. This thesis suggests that reading technologies may be helpful for improving reviewer performance. The inclusion of an author role may also reduce the level of false positive detection. The results have implications for the design and support of the software review process.
21

Milicic, Drazen, and Pontus Svensson. "Sparks to a living quality organization." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2731.

Full text
22

Vieira, Daniel Vicente. "Desenvolvimento de um software para avaliação de qualidade de imagens tomográficas usando o Phantom Catphan500®." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-22032018-090436/.

Full text
Abstract:
Since the introduction of the CT scanner as a diagnostic imaging modality, the scientific community has seen new and more complex CT technologies. These improvements brought the need for new and improved techniques to evaluate the safety and performance of these scanners. Nowadays, the interpretation of images generated during the implementation of CT quality control procedures is done visually in much of the cases. Therefore, it is slow and partially subjective. In this work, software was written in MatLab to process images of the Catphan500 CT phantom, in order to improve the CT quality control workflow and its accuracy. The software evaluates the slice thickness, slice increment, and pixel size, calculates the CT number linearity, and assesses the Modulation Transfer Function (MTF), the noise and the Noise Power Spectrum (NPS). Image sets of the phantom were obtained from 10 different scanners using 27 different protocols in order to validate the software. Comparative results correlating the software output and corresponding data previously obtained by the current quality control program routine were used to conduct this validation. For this comparison, two statistical tests were employed: the Student's t-test (for slice thickness, slice increment, pixel size, and the coefficients of the CT number linearity evaluation, with a chosen p-value of 0.01) and the Fisher F-test (for the noise, with a chosen p-value of 0.05). The functions MTF and NPS are not currently measured by the quality control routine, so there was no previous result for comparison. Instead, the NPS was fitted as a function of the MTF (using the theoretical relationship between both functions) and the quality of the fit was evaluated using the reduced chi-square. Of the 101 t values and 25 F values calculated, 2 and 1 respectively were outside the acceptance interval. This result agrees with the chosen p-values, and therefore the software results are in good agreement with the traditional quality control routine results. The fits of NPS and MTF presented large uncertainties in the fitting parameters (uncertainties of the same order of magnitude as the parameters themselves). However, the reduced chi-square evaluation indicates a good fit (with the exception of one fit, which showed an anomaly in the measured NPS and was disregarded). Therefore, the obtained MTF and NPS were in agreement with the theoretical expectations.
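A compact sketch of the NPS-versus-MTF fit and the reduced chi-square check might look like the following; the assumed model NPS(f) = a·f·MTF(f)² is a commonly quoted filtered-backprojection relationship and may not be the exact expression used in the thesis, and all measurement values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def nps_model(f, a, mtf_interp):
    # Assumed relationship: NPS(f) = a * f * MTF(f)**2  (illustrative only).
    return a * f * mtf_interp(f) ** 2

# Placeholder measurements: spatial frequency, MTF, NPS and NPS uncertainty.
freq = np.linspace(0.05, 1.0, 20)
mtf = np.exp(-2.0 * freq)                        # stand-in MTF curve
nps = 3.0 * freq * mtf**2 + np.random.normal(0, 0.01, freq.size)
sigma = np.full(freq.size, 0.01)

mtf_interp = lambda f: np.interp(f, freq, mtf)
popt, pcov = curve_fit(lambda f, a: nps_model(f, a, mtf_interp),
                       freq, nps, p0=[1.0], sigma=sigma, absolute_sigma=True)

residuals = nps - nps_model(freq, popt[0], mtf_interp)
chi2_red = np.sum((residuals / sigma) ** 2) / (freq.size - len(popt))
print(f"a = {popt[0]:.2f}, reduced chi-square = {chi2_red:.2f}")
```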
23

Bhargava, Manjari. "Analysis of multiple software releases of AFATDS using design metrics." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/834502.

Full text
Abstract:
The development of high-quality software the first time greatly depends upon the ability to judge the potential quality of the software early in the life cycle. The Software Engineering Research Center design metrics research team at Ball State University has developed a metrics approach for analyzing software designs. Given a design, these metrics highlight stress points and determine overall design quality. The purpose of this study is to analyze multiple software releases of the Advanced Field Artillery Tactical Data System (AFATDS) using design metrics. The focus is on examining the transformations of design metrics at each of three releases of AFATDS to determine the relationship of design metrics to the complexity and quality of a maturing system. The software selected as a test case for this research is the Human Interface code from Concept Evaluation Phase releases 2, 3, and 4 of AFATDS. To automate the metric collection process, a metric tool called the Design Metric Analyzer was developed. Further analysis of the design metrics data indicated that the standard deviation and mean for the metric were higher for release 2, relatively lower for release 3, and again higher for release 4. This means that there was a decrease in complexity and an improvement in the quality of the software from release 2 to release 3, and an increase in complexity in release 4. Dialog with project personnel regarding the design metrics confirmed most of these observations.
Department of Computer Science
24

Feng, Xin, and 馮昕. "MIST: towards a minimum set of test cases." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B24521012.

Full text
25

Kossoski, Clayton. "Proposta de um método de teste para processos de desenvolvimento de software usando o paradigma orientado a notificações." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1405.

Full text
Abstract:
CAPES
The Notification Oriented Paradigm (NOP) is an alternative for the development of software applications and proposes to solve certain problems in the usual programming paradigms, namely the Declarative Paradigm (DP) and the Imperative Paradigm (IP). Indeed, NOP unifies the main advantages of DP and IP while solving (in terms of model) several of their deficiencies and inconveniences related to logical-causal calculation, in both monoprocessor and multiprocessor environments. NOP has been materialized in terms of programming and modeling, but still did not have a formalized method to guide developers in designing software tests. This dissertation proposes a test method for software projects that use NOP in their development. The proposed software testing method was developed for use in the phases of unit testing and integration testing. The unit testing considers the smallest testable entities of NOP and requires specific techniques for generating test cases. The integration testing considers the operation of the NOP entities together to carry out use cases and can be accomplished in two steps: (1) tests on the features described in the requirements and the use case, and (2) tests that directly exercise the NOP entities that make up the use case (such as Premisses, Conditions and Rules). The test method was applied in a case study involving the modeling and development of a simple air combat application, and the results of this research show that the proposed method has great importance in testing NOP programs in both unit and integration testing.
26

Zhu, Liming, Computer Science & Engineering, Faculty of Engineering, UNSW. "Software architecture evaluation for framework-based systems." Awarded by: University of New South Wales. Computer Science and Engineering, 2007. http://handle.unsw.edu.au/1959.4/28250.

Full text
Abstract:
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: A new technique was designed aiming to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. The feasibility of the technique was validated through a case study. Significant improvements of scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: A new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study. This approach has significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute specific analysis (performance only): A model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. This technique leads to the benchmark producing more representative measures of the eventual application. It reduces the complexity behind the load testing suite and framework-specific performance data collecting utilities. 4) Trade-off and sensitivity analysis: A new technique was designed seeking to improve the Analytical Hierarchical Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify: 1) the trade-offs implied by an architecture alternative, along with the magnitude of these trade-offs; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling quality attribute priority changes.
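For the AHP step mentioned in point 4, a minimal sketch of deriving priority weights from a pairwise comparison matrix (the standard principal-eigenvector calculation, with an invented judgement matrix rather than the project's actual data) could be:

```python
import numpy as np

def ahp_weights(pairwise):
    """Return AHP priority weights (principal eigenvector, normalised)."""
    values, vectors = np.linalg.eig(pairwise)
    principal = np.argmax(values.real)
    w = np.abs(vectors[:, principal].real)
    return w / w.sum()

# Invented pairwise judgements for three framework alternatives.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])
print(ahp_weights(A).round(3))   # roughly [0.65, 0.23, 0.12]
```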
27

Clause, James Alexander. "Enabling and supporting the debugging of software failures." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39514.

Full text
Abstract:
This dissertation evaluates the following thesis statement: Program analysis techniques can enable and support the debugging of failures in widely-used applications by (1) capturing, replaying, and, as much as possible, anonymizing failing executions and (2) highlighting subsets of failure-inducing inputs that are likely to be helpful for debugging such failures. To investigate this thesis, I developed techniques for recording, minimizing, and replaying executions captured from users' machines, anonymizing execution recordings, and automatically identifying failure-relevant inputs. I then performed experiments to evaluate the techniques in realistic scenarios using real applications and real failures. The results of these experiments demonstrate that the techniques can reduce the cost and difficulty of debugging.
28

Lin, Burch. "Neural networks and their application to metrics research." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014859.

Full text
Abstract:
In the development of software, time and resources are limited. As a result, developers collect metrics in order to more effectively allocate resources to meet time constraints. For example, if one could collect metrics to determine, with accuracy, which modules were error-prone and which were error-free, one could allocate personnel to work only on those error-prone modules. There are three items of concern when using metrics. First, with the many different metrics that have been defined, one may not know which metrics to collect. Secondly, the amount of metrics data collected can be staggering. Thirdly, interpretation of multiple metrics may provide a better indication of error-proneness than any single metric. This thesis researched the accuracy of a neural network, an unconventional model, in building a model that can determine whether a module is error-prone from an input of a suite of metrics. The accuracy of the neural network model was compared with the accuracy of a linear regression model, a standard statistical model, that has the same input and output. In other words, we attempted to find whether metrics correlated with error-proneness. The metrics were gathered from three different software projects. The suite of metrics that was used to build the models was a subset of a larger collection of metrics that was reduced using factor analysis. The conclusion of this thesis is that, from the projects analyzed, neither the neural network model nor the logistic regression model provides acceptable accuracy for real use. We cannot conclude whether one model provides better accuracy than the other.
Department of Computer Science
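A small sketch of the kind of comparison described, a neural network versus a regression classifier predicting error-proneness from a factor-reduced metric suite, is shown below on invented data; the thesis's metric suites and projects are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # 5 factor-reduced metrics per module
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression().fit(X_tr, y_tr)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("regression model accuracy:", round(logit.score(X_te, y_te), 2))
print("neural network accuracy:  ", round(net.score(X_te, y_te), 2))
```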
29

Baah, George Kofi. "Statistical causal analysis for fault localization." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45762.

Full text
Abstract:
The ubiquitous nature of software demands that software is released without faults. However, software developers inadvertently introduce faults into software during development. To remove the faults in software, one of the tasks developers perform is debugging. However, debugging is a difficult, tedious, and time-consuming process. Several semi-automated techniques have been developed to reduce the burden on the developer during debugging. These techniques consist of experimental, statistical, and program-structure based techniques. Most of the debugging techniques address the part of the debugging process that relates to finding the location of the fault, which is referred to as fault localization. The current fault-localization techniques have several limitations. Some of the limitations of the techniques include (1) problems with program semantics, (2) the requirement for automated oracles, which in practice are difficult if not impossible to develop, and (3) the lack of theoretical basis for addressing the fault-localization problem. The thesis of this dissertation is that statistical causal analysis combined with program analysis is a feasible and effective approach to finding the causes of software failures. The overall goal of this research is to significantly extend the state of the art in fault localization. To extend the state-of-the-art, a novel probabilistic model that combines program-analysis information with statistical information in a principled manner is developed. The model known as the probabilistic program dependence graph (PPDG) is applied to the fault-localization problem. The insights gained from applying the PPDG to fault localization fuels the development of a novel theoretical framework for fault localization based on established causal inference methodology. The development of the framework enables current statistical fault-localization metrics to be analyzed from a causal perspective. The analysis of the metrics show that the metrics are related to each other thereby allowing the unification of the metrics. Also, the analysis of metrics from a causal perspective reveal that the current statistical techniques do not find the causes of program failures instead the techniques find the program elements most associated with failures. However, the fault-localization problem is a causal problem and statistical association does not imply causation. Several empirical studies are conducted on several software subjects and the results (1) confirm our analytical results, (2) demonstrate the efficacy of our causal technique for fault localization. The results demonstrate the research in this dissertation significantly improves on the state-of-the-art in fault localization.
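The statistical fault-localization metrics analysed in the dissertation rank program elements by how strongly their execution is associated with failing tests. As a generic illustration, the sketch below uses the well-known Tarantula formula, which is one such association-based metric and not the dissertation's causal PPDG technique; the coverage data are invented.

```python
def tarantula_suspiciousness(coverage, outcomes):
    """Rank statements by association with failing tests.

    coverage: dict stmt -> set of test ids that executed it.
    outcomes: dict test id -> True if the test failed.
    """
    failed = {t for t, bad in outcomes.items() if bad}
    passed = set(outcomes) - failed
    scores = {}
    for stmt, tests in coverage.items():
        f_ratio = len(tests & failed) / len(failed) if failed else 0.0
        p_ratio = len(tests & passed) / len(passed) if passed else 0.0
        scores[stmt] = f_ratio / (f_ratio + p_ratio) if (f_ratio + p_ratio) else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

coverage = {"s1": {1, 2, 3}, "s2": {2, 3}, "s3": {3}}
outcomes = {1: False, 2: False, 3: True}            # test 3 fails
print(tarantula_suspiciousness(coverage, outcomes))  # 's3' ranks highest
```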
30

Kilic, Eda. "Quality of Service Aware Dynamic Admission Control in IEEE 802.16j Non-transparent Relay Networks." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611631/index.pdf.

Full text
Abstract:
Today, telecommunication is improving rapidly. People are online anywhere, anytime. Due to increasing demand for communication, wireless technologies are progressing quickly, trying to provide more services over a wide range. In order to address the mobility and connectivity requirements of users in wide areas, Worldwide Interoperability for Microwave Access (Wimax) has been introduced as a fourth-generation telecommunication technology. Wimax, which is also called Metropolitan Area Network (MAN), is based on the IEEE 802.16 standard, where a Base Station (BS) provides last-mile broadband wireless access to the end users, known as Mobile Stations (MS). However, in places where high constructions exist, the signal rate between MS and BS decreases or the signal can even be lost completely due to shadow fading. As a response to this issue, an intermediate node specification, namely the Relay Station, has recently been defined in the IEEE 802.16j standard for relaying, which provides both throughput enhancement and coverage extension. However, this update has introduced a new problem: call admission control in non-transparent relay networks that support coverage extension. In this thesis, a Quality of Service (QoS) aware dynamic admission control algorithm for IEEE 802.16j non-transparent relay networks is introduced. Our objectives are admitting more service flows, utilizing the bandwidth, giving individual control to each relay station (RS) on call acceptance and rejection, and finally not affecting ongoing service flow quality in an RS due to the dense population of service flows in other RSs. The simulation results show that the proposed algorithm outperforms other existing call admission control algorithms. Moreover, this algorithm can be regarded as a pioneering call admission control algorithm for IEEE 802.16j non-transparent networks.
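The thesis's admission control algorithm is not detailed in the abstract. The sketch below is a deliberately simplified, per-relay-station bandwidth check with invented data structures and thresholds; it omits the QoS classes and flow protection the thesis considers.

```python
class RelayStation:
    """Toy per-RS admission check based on remaining bandwidth only."""

    def __init__(self, name, capacity_kbps):
        self.name = name
        self.capacity = capacity_kbps
        self.allocated = 0

    def admit(self, flow_kbps, reserve_ratio=0.1):
        # Keep a reserve so ongoing flows are not starved by new admissions.
        usable = self.capacity * (1.0 - reserve_ratio)
        if self.allocated + flow_kbps <= usable:
            self.allocated += flow_kbps
            return True
        return False

rs = RelayStation("RS-1", capacity_kbps=10_000)
print(rs.admit(4_000))   # True  -> flow admitted
print(rs.admit(5_500))   # False -> would exceed the usable capacity
```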
31

Haskins, Bertram Peter. "A feasibility study on the use of agent-based image recognition on a desktop computer for the purpose of quality control in a production environment." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2006. http://hdl.handle.net/11462/66.

Full text
Abstract:
Thesis (M. Tech.) - Central University of Technology, Free State, 2006
A multi-threaded, multi-agent image recognition software application called RecMaster has been developed specifically for the purpose of quality control in a production environment. This entails using the system as a monitor to identify invalid objects moving on a conveyor belt and to pass on the relevant information to an attached device, such as a robotic arm, which will remove the invalid object. The main purpose of developing this system was to prove that a desktop computer could run an image recognition system efficiently, without the need for high-end, high-cost, specialised computer hardware. The programme operates by assigning each agent a task in the recognition process and then waiting for resources to become available. Tasks related to edge detection, colour inversion, image binarisation and perimeter determination were assigned to individual agents. Each agent is loaded onto its own processing thread, with some of the agents delegating their subtasks to other processing threads. This enables the application to utilise the available system resources more efficiently. The application is very limited in its scope, as it requires a uniform image background as well as little to no variance in camera zoom levels and object to lens distance. This study focused solely on the development of the application software, and not on the setting up of the actual imaging hardware. The imaging device, on which the system was tested, was a web cam capable of a 640 x 480 resolution. As such, all image capture and processing was done on images with a horizontal resolution of 640 pixels and a vertical resolution of 480 pixels, so as not to distort image quality. The application locates objects on an image feed - which can be in the format of a still image, a video file or a camera feed - and compares these objects to a model of the object that was created previously. The coordinates of the object are calculated and translated into coordinates on the conveyor system. These coordinates are then passed on to an external recipient, such as a robotic arm, via a serial link. The system has been applied to the model of a DVD, and tested against a variety of similar and dissimilar objects to determine its accuracy. The tests were run on both an AMD- and Intel-based desktop computer system, with the results indicating that both systems are capable of efficiently running the application. On average, the AMD-based system tended to be 81% faster at matching objects in still images, and 100% faster at matching objects in moving images. The system made matches within an average time frame of 250 ms, making the process fast enough to be used on an actual conveyor system. On still images, the results showed an 87% success rate for the AMD-based system, and 73% for Intel. For moving images, however, both systems showed a 100% success rate.
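As an illustration of two of the agent tasks mentioned (image binarisation and a perimeter estimate), here is a small NumPy sketch. It is not the RecMaster code, which works on 640 x 480 frames and assigns each task to its own thread; the frame and threshold below are invented.

```python
import numpy as np

def binarise(gray_image, threshold=128):
    """Turn a greyscale image (2-D uint8 array) into a 0/1 object mask."""
    return (gray_image >= threshold).astype(np.uint8)

def perimeter_estimate(mask):
    """Count object pixels that touch at least one background pixel."""
    padded = np.pad(mask, 1, constant_values=0)
    neighbours = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                  padded[1:-1, :-2] & padded[1:-1, 2:])
    return int(np.sum((mask == 1) & (neighbours == 0)))

# Tiny example: a 5x5 bright square in a dark 9x9 frame.
frame = np.zeros((9, 9), dtype=np.uint8)
frame[2:7, 2:7] = 200
mask = binarise(frame)
print(perimeter_estimate(mask))   # 16 border pixels for the 5x5 square
```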
APA, Harvard, Vancouver, ISO, and other styles
32

Van, der Linde P. L. "A comparative study of three ICT network programs using usability testing." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2013. http://hdl.handle.net/11462/186.

Full text
Abstract:
Thesis (M. Tech. (Information Technology)) -- Central University of Technology, Free State, 2013
This study compared the usability of three Information and Communication Technology (ICT) network programs in a learning environment. The researcher wanted to establish which program was most suitable from a usability perspective among second-year Information Technology (IT) students at the Central University of Technology (CUT), Free State. The Software Usability Measurement Inventory (SUMI) testing technique can measure software quality from a user perspective. The technique is supported by an extensive reference database to measure a software product’s quality in use and is embedded in an effective analysis and reporting tool called SUMI scorer (SUMISCO). SUMI was applied in a controlled laboratory environment where second-year IT students of the CUT utilized SUMI as part of their networking subject, System Software 1 (SPG1), to evaluate each of the three ICT network programs. The results, strengths and weaknesses, as well as usability improvements, as identified by SUMISCO, are discussed to determine the best ICT network program from a usability perspective according to SPG1 students.
APA, Harvard, Vancouver, ISO, and other styles
33

Nilsson, Daniel, and Henrik Norin. "Adaptive QoS Management in Dynamically Reconfigurable Real-Time Databases." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2800.

Full text
Abstract:

During the last few years the need for real-time database services has increased due to the growing number of data-intensive applications that need to enforce real-time constraints. The COMponent-based Embedded real-Time database (COMET) is a real-time database developed to meet these demands. COMET is developed using the AspeCtual COmponent-based Real-time system Development (ACCORD) design method, and consists of a number of components and aspects which can be composed into different configurations depending on system demands; for example, Quality of Service (QoS) management can be used in unpredictable environments.

In embedded systems with requirements on high up-time it may not be possible to temporarily shut down the system for reconfiguration. Instead it is desirable to enable dynamic reconfiguration of the system, exchanging components during run-time. This in turn requires the feedback control of the system to adjust to the new conditions, since a new time-variant system has been created.

This thesis project implements improvements in COMET to create a more stable database suitable for further development. A mechanism for dynamic reconfiguration of COMET is implemented, thus enabling components and aspects to be swapped during run-time. Adaptive feedback control algorithms are also implemented in order to better adjust to workload variations and database reconfiguration.
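The abstract mentions adaptive feedback control algorithms that adjust to workload variations and reconfiguration, but does not present them here. As a loose illustration of the general idea, the sketch below shows a simple proportional controller nudging the admitted transaction load toward a utilisation set-point; the class name, gain and set-point are assumptions, not COMET's actual controllers.

```java
// Illustrative sketch of the kind of feedback loop the abstract alludes to:
// a proportional controller steering admitted load toward a utilisation set-point.
public class QosFeedbackSketch {

    static double admittedLoad = 50.0;      // transactions per second currently admitted
    static final double SET_POINT = 0.8;    // target CPU utilisation
    static final double GAIN = 20.0;        // proportional gain (a tuning assumption)

    static double measureUtilisation() {
        // Placeholder for a real sensor; here utilisation simply grows with admitted load.
        return Math.min(1.0, admittedLoad / 100.0);
    }

    public static void main(String[] args) {
        for (int period = 0; period < 5; period++) {
            double error = SET_POINT - measureUtilisation();
            admittedLoad = Math.max(0.0, admittedLoad + GAIN * error);
            System.out.printf("period %d: utilisation=%.2f admittedLoad=%.1f%n",
                    period, measureUtilisation(), admittedLoad);
        }
    }
}
```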

APA, Harvard, Vancouver, ISO, and other styles
34

Araujo, Sandro de. "Proposição para adaptação de termos do CMMI-DEV 1.3 para aplicação em PDPS de empresas de manufatura." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/810.

Full text
Abstract:
In an increasingly aggressive and competitive global market, manufacturing companies are seeking ways to remain competitive. The Product Development Process (PDP) plays an important role in the strategy of companies that pursue a competitive advantage. For a PDP to become a competitive advantage, however, it must reach a minimum level of maturity, which reflects its potential for capability growth, the richness of the organisation's process and the consistency with which it is applied across all projects. Several models exist for assessing the maturity of a PDP, but the Capability Maturity Model Integration (CMMI) provides an integrated solution that covers development and maintenance activities for products and services. CMMI was originally created to assess information technology companies, however, and does not cover the terminology used in manufacturing companies. The aim of this work is therefore to propose a strategy that adapts part of the CMMI-DEV 1.3 model, making its goals and practices easier for manufacturing companies to understand and apply. To this end, a literature review is presented covering CMMI-DEV 1.3, the PDP of manufacturing companies, and strategies used to adapt terms of methods, models or tools between different specialty areas, including a conceptual detailing of their items, in order to identify the part of the model to be adapted in this work. After this delimitation, the terms are correlated with similar terms found in the manufacturing literature and validated through peer review. To verify the effectiveness of the adaptation strategy, interviews were conducted with seven professionals from four manufacturing companies and one academic, all with three to fifteen years of experience in PDP. Among its results, the work contributes a proposal for adapting terms of the CMMI-DEV 1.3 model, used in IT companies, to the PDP of manufacturing companies.
APA, Harvard, Vancouver, ISO, and other styles
35

Leal, Gislaine Camila Lapasini. "Know-cap: um método para capitalização de conhecimento no desenvolvimento de software." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1709.

Full text
Abstract:
The knowledge-intensive nature of software production and its growing demand suggest the need to define mechanisms to properly manage the knowledge involved, in order to meet requirements of schedule, cost and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. In software development specifically, capitalization facilitates access to knowledge, minimizes its loss, shortens the learning curve, and helps avoid repeated errors and rework. This thesis presents Know-Cap, a method developed to systematize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, so that it can be used in the execution of new tasks. The method was proposed on the basis of a set of methodological procedures: literature review, systematic review and analysis of related work. The feasibility and suitability of Know-Cap were analysed through an application study, conducted on a real case, and an analytical study carried out in software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
APA, Harvard, Vancouver, ISO, and other styles
36

Bissi, Wilson. "WS-TDD: uma abordagem ágil para o desenvolvimento de serviços WEB." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1829.

Full text
Abstract:
Test Driven Development (TDD) is an agile practice that gained popularity when it was defined as a fundamental part of eXtreme Programming (XP). The practice determines that tests should be written before the code is implemented. TDD and its effects have been widely studied and compared with Test Last Development (TLD) in several works. However, few studies address TDD in the development of Web Services (WS), owing to the complexity of testing the dependencies among distributed components and the particular characteristics of Service Oriented Architecture (SOA). This work defines and validates an approach to WS development based on the practice of TDD, named WS-TDD. The approach guides developers in using TDD during WS development, suggesting tools and techniques to deal with the dependencies and particularities of SOA, with a focus on creating automated unit and integration tests in Java. To define and validate the proposed approach, four research methods were carried out: (i) an in-person questionnaire; (ii) an experiment; (iii) an in-person interview with each participant in the experiment; and (iv) triangulation of the results with the people who took part in the three previous methods. According to the results, WS-TDD proved more efficient than TLD, increasing internal software quality and developer productivity. External software quality, however, decreased, with a larger number of defects than under TLD. The proposed approach thus emerges as a simple and practical alternative for adopting TDD in WS development, bringing benefits to internal quality and contributing to developer productivity, at the cost of lower external software quality. An illustrative test-first sketch in the spirit of the approach is shown below.
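The abstract centres on writing automated unit and integration tests in Java before the service code exists. As an illustrative sketch only (the thesis's own services and tooling are not shown here), a test-first JUnit 5 example for a hypothetical price-lookup web service client could look as follows; PriceGateway, PriceService and the stubbed gateway are invented for illustration.

```java
// Illustrative test-first sketch in the spirit of WS-TDD (not the thesis's code):
// the test is written before PriceService exists, and a stub replaces the remote
// dependency so the unit test does not need a live web service.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

interface PriceGateway {                 // boundary to the remote web service
    double fetchPrice(String productId);
}

class PriceService {                     // written only after the test exists
    private final PriceGateway gateway;
    PriceService(PriceGateway gateway) { this.gateway = gateway; }

    double priceWithTax(String productId, double taxRate) {
        return gateway.fetchPrice(productId) * (1.0 + taxRate);
    }
}

class PriceServiceTest {
    @Test
    void addsTaxToThePriceReturnedByTheRemoteService() {
        PriceGateway stub = productId -> 100.0;          // stubbed dependency
        PriceService service = new PriceService(stub);
        assertEquals(110.0, service.priceWithTax("sku-1", 0.10), 1e-9);
    }
}
```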
APA, Harvard, Vancouver, ISO, and other styles
37

Salman, Rosine Hanna. "Exploring Capability Maturity Models and Relevant Practices as Solutions Addressing IT Service Offshoring Project Issues." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1843.

Full text
Abstract:
Western countries' information technology and software intensive firms are increasingly producing software and IT services in developing countries. With this swift advancement in offshoring, there are many issues that can be investigated which will enable companies to maximize their benefits from offshoring. However, significant challenges can occur throughout the lifecycle of offshoring IT service projects that turn the potential benefits into losses. This research investigated CMM/CMMI best practices and their effects on managing and mitigating critical issues associated with offshore development. Using a web based survey, data was collected from 451 Information Technology and software development firms in the US. The survey instrument was validated by an expert panel which included practitioners and researchers. The survey population consisted of Information Technology and software engineering managers who work on offshore IT and software development projects. Statistical methods including Chi Square and Cramer's V were used to test the research hypotheses. The results of the analysis show that IT companies applying CMM/CMMI models have fewer issues associated with IT offshoring. When US IT companies utilize and incorporate different practices from TSP and People CMM into CMMI for DEV/SVC and CMMI for ACQ, they have fewer offshoring issues related to language barriers and cultural differences. The results of this research contribute to the existing body of knowledge on the offshoring of IT services from the client management perspective and provide practitioners with increased knowledge regarding IT offshoring decisions.
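The abstract names chi-square tests and Cramér's V as the measures of association between CMM/CMMI use and offshoring issues. As a reminder of how the two statistics relate, the self-contained sketch below computes both for a small contingency table; the counts are invented for illustration and are not data from the study.

```java
// Illustrative only: chi-square statistic and Cramér's V for a contingency table.
// The counts below are made up; they are not data from the survey in the thesis.
public class CramersVSketch {

    static double chiSquare(double[][] observed) {
        int rows = observed.length, cols = observed[0].length;
        double total = 0;
        double[] rowSum = new double[rows], colSum = new double[cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++) {
                rowSum[i] += observed[i][j];
                colSum[j] += observed[i][j];
                total += observed[i][j];
            }
        double chi2 = 0;
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++) {
                double expected = rowSum[i] * colSum[j] / total;
                chi2 += Math.pow(observed[i][j] - expected, 2) / expected;
            }
        return chi2;
    }

    static double cramersV(double[][] observed, double chi2) {
        double n = 0;
        for (double[] row : observed) for (double v : row) n += v;
        int k = Math.min(observed.length, observed[0].length);
        return Math.sqrt(chi2 / (n * (k - 1)));
    }

    public static void main(String[] args) {
        // rows: uses CMMI / does not; columns: few offshoring issues / many issues
        double[][] table = { { 120, 60 }, { 80, 140 } };
        double chi2 = chiSquare(table);
        System.out.printf("chi-square = %.2f, Cramer's V = %.3f%n",
                chi2, cramersV(table, chi2));
    }
}
```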
APA, Harvard, Vancouver, ISO, and other styles
38

Souza, Rafael Gorski Moreno. "Problem-Based SRS: método para especificação de requisitos de software baseado em problemas." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1811.

Full text
Abstract:
Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large body of studies addresses theoretical aspects, proposed techniques and recommended practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement has been introduced. Accurately capturing the business intents of the stakeholders, however, remains a challenge and is a major factor in software project failures. This master's dissertation presents a novel method, referred to as Problem-Based SRS, aimed at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to the customer's real business problems. In this approach, knowledge about the software requirements is constructed from knowledge about the customer's problems. Problem-Based SRS organizes activities and outcome objects in a process with five main steps. It supports the requirements engineering team in systematically analysing the business context and specifying the software requirements, taking into account a first glance and vision of the software. The quality aspects of the specifications are evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving software requirements specification.
APA, Harvard, Vancouver, ISO, and other styles
39

Niemann, Johan. "Development of a reconfigurable assembly system with enhanced control capabilities and virtual commissioning." Thesis, Bloemfontein : Central University of Technology, Free State, 2013. http://hdl.handle.net/11462/184.

Full text
Abstract:
Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013
The South African (SA) manufacturing industry needs to develop levels of sophistication and expertise in automation similar to those of its international rivals in order to compete in global markets. To achieve this, manufacturing plants need to be managed extremely efficiently to ensure the quality of manufactured products, and these plants must also have the relevant infrastructure. Furthermore, this industry must compensate for rapid product introduction, product changes and short product lifespans. To support this need, the industry must engage in the current trend in automation known as reconfigurable manufacturing. The aim of the study is to develop a reconfigurable assembly system with enhanced control capabilities by utilizing virtual commissioning. In addition, this system must be capable of assembling multiple different products of a product range; reconfiguring to accommodate the requirements of these products; autonomously rerouting the product flow and distributing workload among assembly cells; handling erroneous products; and implementing enhanced control methods. To achieve this, a literature study was done to confirm the type of components to be used, reveal design issues and establish the characteristics such a system must adhere to. Software named DELMIA was used to create a virtual simulation environment to verify the system and simultaneously scrutinize the methods of verification. On completion, simulations were conducted to verify software functions, device movements and operations, and the control software of the system. Based on the simulation results, the physical system was built and then verified with a multi-agent system as overhead control to validate the entire system. The final results showed that the project objectives are achievable, and it was also found that DELMIA is an excellent tool for system verification that will expedite the design of a system. These results indicate that companies can design and verify their systems earlier through virtual commissioning. In addition, their systems will be more flexible, and new products or product changes can be introduced more frequently, with minimum cost and downtime. This will enable SA manufacturing companies to be more competitive, ensure increased productivity, save time and so gain an advantage over their international competition.
APA, Harvard, Vancouver, ISO, and other styles
40

Garcia, Sotelo Gerardo Javier. "Get the right price every day." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2729.

Full text
Abstract:
The purpose of this project is to manage restaurants using a software system called GRIPED (Get the Right Price Every day). The system is designed to cover quality control, food cost control and portion control for better management of a restaurant.
APA, Harvard, Vancouver, ISO, and other styles
41

Handley, Stephen Michael. "Monte Carlo simulations using MCNPX of proton and anti-proton beam profiles for radiation therapy." Oklahoma City : [s.n.], 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
42

Albanez, Altamar Urbanetz de Araújo. "Associação entre CMMI-DEV 1.2 e ISO/TS 16949." Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/558.

Full text
Abstract:
The automotive sector is one of the most demanding in terms of quality, requiring ISO/TS 16949 certification. Although companies in this sector master the certification, some lose it in subsequent audits or achieve little improvement beyond what they already have. There is evidence that they lack the maturity to obtain or maintain the certification, as well as guidelines for continuous improvement. Previous work found that certified companies had at least maturity level 2, on a scale from 1 (minimum) to 5 (maximum), which corresponds to a company with a defined and manageable process. What enables a company to improve its indicators, however, is having a controlled and integrated process. The lack of maturity of a product development process (PDP) leads to scrap and rework, compromising the efficient use of resources, affecting development time and cost and, indirectly, the quality of the process and of the final product. Certified companies nevertheless have no guidelines for improving their processes; for that, the ISO standard would require some associated resource to provide guidance on the aspects that need improvement. Considering that CMMI is an effective method for diagnosing maturity and that it takes the integration of the PDP into account, this work aims to identify the association between ISO/TS 16949 certification and the CMMI-DEV 1.2 model. It presents a review of PDPs, quality certification and process maturity, and then relates the variables involved in an ISO 9001 certification process and those assessed in ISO/TS 16949 to the variables involved in assessing maturity level 2 of the CMMI-DEV 1.2 model. The work makes explicit which items are covered by ISO/TS 16949 and highlights the CMMI items that could be used as a complementary diagnosis for companies that wish to improve quality while, in parallel, adding efficiency and productivity to their production processes.
APA, Harvard, Vancouver, ISO, and other styles
43

Abbas, Noura. "Software quality and governance in agile software development." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/158357/.

Full text
Abstract:
Looking at software engineering from a historical perspective, we can see how software development methodologies have evolved over the past 50 years. Using the right software development methodology with the right settings has always been a challenge. Therefore, there has always been a need for empirical evidence about what worked well and what did not, and what factors affect the different variables of the development process. Probably the most noticeable change to software development methodology in the last 15 years has been the introduction of the word “agile”. As any area matures, there is a need to understand its components and relations, as well as a need for empirical evidence about how well agile methods work in real-life settings. In this thesis, we empirically investigate the impact of agile methods on different aspects of quality, including product quality, process quality and stakeholders’ satisfaction, as well as the different factors that affect these aspects. Quantitative and qualitative research methods were used for this research, including semi-structured interviews and surveys. Quality was studied in two projects that used agile software development. The empirical study showed that both projects were successful with multiple releases, and with improved product quality and stakeholders’ satisfaction. The data analysis produced a list of 13 refined grounded hypotheses out of which 5 were supported throughout the research. One project was studied in-depth by collecting quantitative data about the process used via a newly designed iteration monitor. The iteration monitor was used by the team over three iterations and it helped identify issues and trends within the team in order to improve the process in the following iterations. Data about other organisations collected via surveys was used to generalise the obtained results. A variety of statistical analysis techniques were applied and these suggested that when agile methods have a good impact on quality they also have a good impact on productivity and satisfaction, and when agile methods have a good impact on these aspects they also reduce cost. More importantly, the analysis clustered 58 agile practices into 15 factors including incremental and iterative development, agile quality assurance, and communication. These factors can be used as a guide for agile process improvement. The previous results raised questions about agile project governance, and to answer these questions the agile projects governance survey was conducted. This survey collected 129 responses, and its statistically significant results suggested that retrospectives are more effective when applied properly, having more impact when the whole team participated and comments were recorded; that organisation size has a negative relationship with success; and that good practices go together, in that a team which does one aspect well tends to do all aspects well. Finally, the research results supported the hypotheses that agile software development can produce good-quality software, achieve stakeholders’ satisfaction, motivate teams, and assure a quick and effective response to stakeholders’ requests, and that it proceeds in stages, matures, and improves over time.
APA, Harvard, Vancouver, ISO, and other styles
44

Masoud, F. A. "Quality metrics in software engineering." Thesis, University of Liverpool, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381358.

Full text
Abstract:
In the first part of this study, software metrics are classified into three categories: primitive, abstract and structured. A comparative and analytical study of metrics from these categories was performed to provide software developers, users and management with a correct and consistent evaluation of a representative sample of the software metrics available in the literature. This analysis and comparison was performed in an attempt to assist the software developers, users and management in selecting suitable quality metric(s) for their specific software quality requirements, and to examine various definitions used to calculate these metrics. In the second part of this study, an approach towards attaining software quality is developed. This approach is intended to help all the people concerned with the evaluation of software quality in the earlier stages of software systems development. The approach developed is intended to be uniform, consistent, unambiguous and comprehensive, and one which makes the concept of software quality more meaningful and visible. It will help the developers both to understand the concepts of software quality and to apply and control it according to the expectations of users, management, customers, etc. The clear definitions provided for the software quality terms should help to prevent misinterpretation, and the definitions will also serve as a touchstone against which new ideas can be tested.
APA, Harvard, Vancouver, ISO, and other styles
45

Lindgren, Markus. "Bridging the software quality gap." Thesis, Umeå universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-61507.

Full text
Abstract:
There is a gap in the understanding of software quality between developers and non-technical stakeholders, the software quality gap, which leads to disagreements about the amount of time that should be used for quality improvements. The technical debt metaphor reduces this gap somewhat by describing software quality in economic terms, enabling developers and non-technical stakeholders to communicate about quality. However, the metaphor is vague and not very concrete in explaining the gap. The purpose of this thesis is to concretize the technical debt metaphor using Domain-Driven Design, an approach in which communicating software characteristics is central, in order to reduce the software quality gap further. Using the terminology of Domain-Driven Design, a new concept is defined: model debt, which measures and communicates the software quality of the domain model. An application is built, the ModelDebtOMeter, which extracts the domain model of a software system and visualizes it along with its corresponding model debt. The model debt of a legacy system is amortized and the results of the amortizations are presented to the non-technical stakeholders of the system using the ModelDebtOMeter, allowing the non-technical stakeholders to evaluate the model debt concept. The results show that the non-technical stakeholders think that model debt is an understandable concept which reduces, but does not eliminate, the software quality gap.
APA, Harvard, Vancouver, ISO, and other styles
46

Cavalcante, Marcia Beatriz. "The impact of team software organizations on software quality and productivity." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0006/MQ44140.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Williams, Daniel Dee. "Design analysis techniques for software quality enhancement." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Summer2007/d_williams_072407.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Alshammari, Bandar M. "Quality metrics for assessing security-critical computer programs." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49780/1/Bandar_Alshammari_Thesis.pdf.

Full text
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
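The abstract's metrics combine compositional properties with data-flow analysis of high- and low-security variables. As a much-simplified illustration only (not one of the metrics defined in the thesis), the sketch below computes the fraction of classified attributes a class exposes through its public interface; the class model and attribute names are invented.

```java
// Simplified illustration, not the thesis's metric suite: the ratio of classified
// (high-security) attributes that a class exposes through public accessors.
import java.util.List;

public class ClassifiedExposureSketch {

    record Attribute(String name, boolean classified, boolean publiclyAccessible) {}

    /** Fraction of classified attributes reachable through the public interface. */
    static double classifiedAttributeExposure(List<Attribute> attributes) {
        long classified = attributes.stream().filter(Attribute::classified).count();
        if (classified == 0) return 0.0;
        long exposed = attributes.stream()
                .filter(a -> a.classified() && a.publiclyAccessible())
                .count();
        return (double) exposed / classified;
    }

    public static void main(String[] args) {
        List<Attribute> account = List.of(
                new Attribute("accountNumber", true, true),   // classified and exposed
                new Attribute("pinHash", true, false),        // classified, kept private
                new Attribute("displayName", false, true));   // not classified
        System.out.printf("exposure = %.2f%n", classifiedAttributeExposure(account));
        // Lower values suggest better encapsulation of high-security data.
    }
}
```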
APA, Harvard, Vancouver, ISO, and other styles
49

Karami, Daryoosh. "Knowledge-based software engineering : a software quality management expert system prototype." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Moland, Kathryn J. "An Effective Software Development Methodology for Quality Software Development in a Scheduling Department." NSUWorks, 1997. http://nsuworks.nova.edu/gscis_etd/731.

Full text
Abstract:
The research described in this document represents work performed in the area of software development methodologies as applied to quality software development in a scheduling department. It addressed traditional methods and current trends in software development, in addition to quality and software development practices at various companies. The literature suggested a correlation between using a software development methodology and quality software. However, there was limited literature that measured quantitatively the correlation between the effectiveness of the software development methodology and quality software. A software development methodology was developed for the scheduling department of a government contractor company in Aiken, South Carolina, based on its needs and emerging technologies. An instrument was utilized to measure the effectiveness of the developed methodology. The methodology was compared with two other methodologies: a standard methodology from the literature and the current method of software development in the scheduling department. A population of computer professionals was divided into three equal groups. Each group was asked to apply the methodology to the case study. Individuals in each group were asked to review the case study and software development methodology. Then, using the instrument, the individuals were asked to evaluate the effectiveness of the software development methodology, thereby providing a means of evaluating effectiveness without conducting years of testing. The responses of the three groups were compared to one another. The results indicated a significantly higher level of approval for those methodologies that guided the development activities, standardized the development process, and identified the development phases and deliverables. It was concluded that utilizing a software development methodology that guides, standardizes, and defines the development phases and deliverables will result in an improved software development process and software quality. Further investigation could validate the findings of this research: comparing the results actually achieved by utilizing the methodology developed for the scheduling department with those achieved by utilizing some other methodology could further confirm these findings. Additional research could examine, over an extended time period, the success of the software development process and software quality of those projects utilizing the methodology described in this dissertation.
APA, Harvard, Vancouver, ISO, and other styles