Theses / dissertations on the topic "Computer software - Evaluation"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Below are the 50 best theses / dissertations for research on the topic "Computer software - Evaluation".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever it is present in the metadata.

Browse theses / dissertations from the most diverse academic disciplines and compile a correct bibliography.

1

Clemens, Ronald F. "TEMPO software modification for SEVER evaluation". Thesis, Monterey, California: Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep_Clemens.pdf.

Full text of the source
Abstract:
Thesis (M.S. in Systems Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Langford, Gary O. "September 2009." Description based on title screen as viewed on November 4, 2009. Author(s) subject terms: Decision, decision analysis, decision process, system engineering tool, SEVER, resource allocation, military planning, software tool, strategy evaluation. Includes bibliographical references (p. 111-113). Also available in print.
2

Manley, Gary W. "The classification and evaluation of Computer-Aided Software Engineering tools". Thesis, Monterey, California: Naval Postgraduate School, 1990. http://hdl.handle.net/10945/34910.

Full text of the source
Abstract:
Approved for public release; distribution unlimited.
The use of Computer-Aided Software Engineering (CASE) tools has been viewed as a remedy for the software development crisis, achieving improved productivity and system quality through the automation of all or part of the software engineering process. The proliferation and tremendous variety of tools available have stretched the understanding of experienced practitioners and have had a profound impact on the software engineering process itself. Understanding what a tool does and comparing it to similar tools is a formidable task given the existing diversity of functionality. This thesis investigates what tools are available, proposes a general classification scheme to help those investigating tools decide where a tool falls within the software engineering process, and identifies a tool's capabilities and limitations. This thesis also provides guidance for the evaluation of a tool and evaluates three commercially available tools.
3

Schilling, Walter W. "A cost effective methodology for quantitative evaluation of software reliability using static analysis /". Connect to full text in OhioLINK ETD Center, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1189820658.

Full text of the source
4

Shepperd, Martin John. "System architecture metrics : an evaluation". n.p., 1990. http://ethos.bl.uk/.

Full text of the source
5

Osqui, Mitra M. 1980. "Evaluation of software energy consumption on microprocessors". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8344.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2002.
Includes bibliographical references (leaves 72-75).
In the area of wireless communications, energy consumption is the key design consideration. Significant effort has been placed in optimizing hardware for energy efficiency, while relatively less emphasis has been placed on software energy reduction. For overall energy efficiency, reduction of system energy consumption in both hardware and software must be addressed. One goal of this research is to evaluate the factors that affect software energy efficiency and identify techniques that can be employed to produce energy-optimal software. In order to present a strong argument, two state-of-the-art low-power processors were used for evaluation: the Intel StrongARM SA-1100 and the next-generation Intel XScale processor. A key step in analyzing the performance of software is to perform a comprehensive tabulation of the energy consumption per instruction, while taking into account the different modes of operation. This leads to a comprehensive energy profiling of the instruction set of the processors of interest. With information on the energy consumption per instruction, we can evaluate the feasibility of energy-efficient programming and use the results to gain greater insight into the power consumption of the two processors under consideration. Benchmark programs will be tested on both processors to illustrate the effectiveness of the energy profiling results. The next goal is to look at the leakage current and the current consumed during idle modes of the processors, and how that impacts the overall picture of energy consumption. Thus, energy consumption will be explored for the two processors from both a dynamic and a static energy consumption perspective.
by Mitra M. Osqui.
S.M.
6

Phalp, Keith T. "An evaluation of software modelling in practice". Thesis, Bournemouth University, 1995. http://eprints.bournemouth.ac.uk/438/.

Full text of the source
7

Zhu, Liming (Computer Science & Engineering, Faculty of Engineering, UNSW). "Software architecture evaluation for framework-based systems". Awarded by: University of New South Wales, Computer Science and Engineering, 2007. http://handle.unsw.edu.au/1959.4/28250.

Full text of the source
Abstract:
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: A new technique was designed aiming to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. The feasibility of the technique was validated through a case study. Significant improvements of scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: A new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study. This approach has significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute specific analysis (performance only): A model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. This technique leads to the benchmark producing more representative measures of the eventual application. It reduces the complexity behind the load testing suite and framework-specific performance data collecting utilities. 
4) Trade-off and sensitivity analysis: A new technique was designed seeking to improve the Analytical Hierarchical Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify: 1) trade-offs implied by an architecture alternative, along with the magnitude of these trade-offs; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling quality attribute priority changes.
8

Xue, James Wen Jun. "Performance evaluation and resource management in enterprise systems". Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/2303/.

Full text of the source
Abstract:
This thesis documents research conducted as part of an EPSRC (EP/C53 8277/01) project whose aim was to understand, capture and define the service requirements of cluster-supported enterprise systems. This research includes developing techniques to verify that the infrastructure is delivering on its agreed service requirements, and a means of dynamically adjusting the operating policies if the service requirements are not being met. The research in this thesis falls into three broad categories: 1) the performance evaluation of data persistence in distributed enterprise applications; 2) Internet workload management and request scheduling; 3) dynamic resource allocation in server farms. Techniques for request scheduling and dynamic resource allocation are developed, with the aim of maximising the total revenue from different applications run in an Internet service hosting centre. Given that data is one of the most important assets of a company, it is essential that enterprise systems should be able to create, retrieve, update and delete data effectively. Web-based applications require application data and session data, and the persistence of these data is critical to the success of the business. However, data persistence comes at a cost as it introduces a performance overhead to the system. This thesis reports on research using state-of-the-art enterprise computing architectures to study the performance overheads of data persistence. Internet service providers (ISPs) are bound by quality of service (QoS) agreements with their clients. Since different applications serve various types of request, each with an associated value, some requests are more important than others in terms of revenue contribution. This thesis reports on the development of a priority, queue-based request scheduling scheme, which positions waiting requests in their relevant queues based on their priorities.
In so doing, more important requests are processed sooner even though they may arrive in the system later than others. An experimental evaluation of this approach is conducted using an event-driven simulator; the results demonstrate promising improvements over a number of existing methods in terms of revenue contribution. Due to the bursty nature of web-based workloads, it is very difficult to manage server resources in an Internet hosting centre. Static approaches such as resource provisioning either result in wasted resource (i.e., underutilisation in lightly loaded situations) or offer no help if resources are overutilised. Therefore, dynamic approaches to resource management are needed. This thesis proposes a bottleneck-aware, dynamic server switching policy, which is used in combination with an admission control scheme. The objective of this scheme is to optimise the total revenue in the system, while maintaining the QoS agreed across all server pools in the hosting centre. A performance evaluation is conducted via extensive simulation, and the results show a considerable improvement from the bottleneck-aware server switching policy over a proportional allocation policy and a system that implements no dynamic server switching.
9

DeRusso, Jamie Lynn. "Evaluating software used in a balanced literacy program". Honors in the Major Thesis, University of Central Florida, 2003. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/314.

Full text of the source
Abstract:
This item is only available in print in the UCF Libraries.
Bachelors
Education
Elementary Education
10

Frisch, Blade William Martin. "A User Experience Evaluation of AAC Software". Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1594112876812982.

Full text of the source
11

Seckin, Haldun. "Software Process Improvement Based On Static Process Evaluation". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607155/index.pdf.

Full text of the source
Abstract:
This study investigates software development process improvement approaches. In particular, the static process evaluation methodology proposed by S. Güceglioglu is applied to the requirements analysis and validation process used in Project X in MYCOMPANY, and an improved process is proposed. That methodology is an extension of the ISO/IEC 9126 approach for software quality assessment, and is based on evaluating a set of well-defined metrics on the static model of software development processes. The improved process proposed for Project X is evaluated using Güceglioglu's methodology. The measurement results for the applied and improved processes are compared to determine whether the improved process is successful or not.
12

Majeed, Salman. "Evaluation of Collaborative Reputation System against Privacy-Invasive Software". Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4338.

Full text of the source
Abstract:
As computers are becoming an integral part of daily life, threats to the privacy and personal information of users are increasing. Privacy-Invasive Software (PIS) is a common problem nowadays. A reputation system named the PISKeeper system has been developed as a countermeasure against PIS. This thesis aims at evaluating this system in order to know how, and to what extent, the PISKeeper system helps users in selecting the right software for their computers. A quantitative approach was adopted to evaluate the PISKeeper system. An experiment was designed and executed on computer users from different age groups and levels of experience in a controlled lab environment. The results proved that the PISKeeper system helped users in selecting the right software for their computer by providing essential information about the specific software and comments from previous users of that software. Apart from evaluating the PISKeeper system, this thesis also aims to suggest improvements for the system. Sometimes PIS is bundled with legitimate software and users are informed about this in the End User License Agreement (EULA). Usually users do not read the EULA before accepting it, giving consent to whatever is written in it. This thesis therefore also suggests an alternative way to present EULA content, so the user may be informed about the behavior of the software in a more convenient way.
13

Somaiya, Sandeep R. "SENATE : a software system for evaluation of simulation results /". Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-03302010-020337/.

Full text of the source
14

Lim, Edwin C. "Software metrics for monitoring software engineering projects". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1994. https://ro.ecu.edu.au/theses/1100.

Full text of the source
Abstract:
As part of the undergraduate course offered by Edith Cowan University, the Department of Computer Science has (as part of a year's study) a software engineering group project. The structure of this project was divided into two units, Software Engineering 1 and Software Engineering 2. In Software Engineering 1, students were given the group project, for which they had to complete and submit the Functional Requirement and Detail System Design documentation. In Software Engineering 2, students commenced with the implementation of the software, testing and documentation. The software was then submitted for assessment and presented to the client. To aid the students with the development of the software, the department had adopted EXECOM's APT methodology as its standard guideline. Furthermore, the students were divided into groups of 4 to 5, each group working on the same problem. A staff adviser was assigned to each project group. The purpose of this research exercise was to fulfil two objectives. The first objective was to ascertain whether there is a need to improve the final-year software engineering project for future students by enhancing any aspect that may be regarded as deficient. The second objective was to ascertain the factors that have the most impact on the quality of the delivered software. The quality of the delivered software was measured using a variety of software metrics. Measurement of software has mostly been ignored until recently, or used without true understanding of its purpose. A subsidiary objective was to gain an understanding of the worth of software measurement in the student environment. One of the conclusions derived from the study suggests that teams who spent more time on software design and testing tended to produce better-quality software with fewer defects. The study also showed that adherence to the APT methodology led to the project being on schedule and general team satisfaction with the project management.
One of the recommendations made to the project co-ordinator was that staff advisers should have sufficient knowledge of the software engineering process.
15

Stineburg, Jeffrey. "Software reliability prediction based on design metrics". Virtual Press, 1999. http://liblink.bsu.edu/uhtbin/catkey/1154775.

Full text of the source
Abstract:
This study has presented a new model for predicting software reliability based on design metrics. An introduction to the problem of software reliability is followed by a brief overview of software reliability models. A description of the models is given, including a discussion of some of the issues associated with them. The intractability of validating life-critical software is presented. Such validation is shown to require extended periods of test time that are impractical in real-world situations. This problem is also inherent in fault-tolerant software systems of the type currently being implemented in critical applications today. The design metrics developed at Ball State University are proposed as the basis of a new model for predicting software reliability from information available during the design phase of development. The thesis investigates the proposition that a relationship exists between the design metric D(G) and the errors that are found in the field. A study, performed on a subset of a large defense software system, discovered evidence to support the proposition.
Department of Computer Science
16

Fung, Casey Kin-Chee. "A methodology for the collection and evaluation of software error data /". The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487263399022312.

Full text of the source
17

Ubhayakar, Sonali S. "Evaluation of program specification and verification systems". Thesis, Monterey, California. Naval Postgraduate School, 2003. http://hdl.handle.net/10945/893.

Full text of the source
Abstract:
Computer systems that earn a high degree of trust must be backed by rigorous verification methods. A verification system is an interactive environment for writing formal specifications and checking formal proofs. Verification systems allow large complicated proofs to be managed and checked interactively. We desire evaluation criteria that provide a means of finding which verification system is suitable for a specific research environment and what needs of a particular project the tool satisfies. Therefore, the purpose of this thesis is to develop a methodology and set of evaluation criteria to evaluate verification systems for their suitability to improve the assurance that systems meet security objectives. A specific verification system is evaluated with respect to the defined methodology. The main goals are to evaluate whether the verification system has the capability to express the properties of software systems and to evaluate whether the verification system can provide inter-level mapping, a feature required for understanding how a system meets security objectives.
Naval Postgraduate School author (civilian).
18

Wilburn, Cathy A. "Using the Design Metrics Analyzer to improve software quality". Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/902489.

Full text of the source
Abstract:
Effective software engineering techniques are needed to increase the reliability of software systems, to increase the productivity of development teams, and to reduce the costs of software development. Companies search for an effective software engineering process as they strive to reach higher process maturity levels and produce better software. To aid in this quest for better methods of software engineering, the Design Metrics Research Team at Ball State University has analyzed university and industry software to be able to detect error-prone modules. The research team has developed, tested and validated their design metrics and found them to be highly successful. These metrics were typically collected and calculated by hand. So that these metrics can be collected more consistently, more accurately and faster, the Design Metrics Analyzer for Ada (DMA) was created. The DMA collects metrics from the files submitted based on a subprogram level. The metrics results are then analyzed to yield a list of stress points, which are modules that are considered to be error-prone or difficult for developers. This thesis describes the Design Metrics Analyzer, explains its output and how it functions. Also, ways that the DMA can be used in the software development life cycle are discussed.
Department of Computer Science
19

Kwan, Pak Leung. "Design metrics forensics : an analysis of the primitive metrics in the Zage design metrics". Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897490.

Full text of the source
Abstract:
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). Questions to be investigated in this thesis are: Why can De be an indicator of potential error modules? Why can Di be an indicator of potential error modules? Are there any significant factors that dominate the design metrics? In this thesis, the report of the STANFINS data is evaluated using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study was chosen because it contains approximately 532 programs, 3,000 packages and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) which were selected by CSC development teams. Error reports were also provided by CSC personnel.
Department of Computer Science
20

Zitser, Misha 1979. "Securing software : an evaluation of static source code analyzers". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/18025.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (leaves 100-105).
This thesis evaluated five static analysis tools (PolySpace C Verifier, ARCHER, BOON, Splint, and UNO) using 14 code examples that illustrated actual buffer overflow vulnerabilities found in various versions of Sendmail, BIND, and WU-FTPD. Each code example included a "BAD" case with one or more buffer overflow vulnerabilities and a "PATCHED" case without buffer overflows. The buffer overflows varied and included stack, heap, bss and data buffers; access above and below buffer bounds; access using pointers, indices, and functions; and scope differences between buffer creation and use. Detection rates for the "BAD" examples were low except for Splint and PolySpace C Verifier, which had average detection rates of 57% and 87% respectively. However, average false alarm rates, as measured using the "PATCHED" programs, were high for these two systems: Splint gave on average one false alarm per 50 lines of code, and PolySpace one false alarm per 10 lines of code. This result shows that current approaches can detect buffer overflows, but that false alarm rates need to be lowered substantially.
by Misha Zitser.
M.Eng.
21

Wooten, Amy Jo. "Improving the distributed evolution of software through heuristic evaluation". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66816.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 65-67).
In order to create the increasingly complex software systems needed to deal with today's technological challenges, we must be able to build on previous work. However, existing software solutions are quite often not an exact fit. Software developers have found multiple ways of approaching the problem of designing software that can be adapted as well as otherwise changed. Most of this effort has been aimed at the structural properties of the software, by creating open-architecture systems. However, there are still significant usability hurdles to overcome. A developer-oriented evaluation of open architecture interfaces could help meet some of these challenges. In this thesis, I present a set of guidelines for designing a developer-oriented interface for software open architectures, developed through a survey of several related fields. I use these guidelines to design and implement an interface to the Maritime Open Architecture Autonomy, one such software framework. Finally, through two case studies, I demonstrate the usefulness of these guidelines as the basis of a low-cost method of usability evaluation. Study observations and limitations are presented, as well as suggestions for further research into heuristic evaluation.
by Amy Jo Wooten.
M.Eng.
22

McGuire, Heather. "The design, development and evaluation of a computer software workshop for volunteers". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59245.pdf.

Full text of the source
23

Shah, Seyyed Madasar Ali. "Model transformation dependability evaluation by the automated creation of model generators". Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3407/.

Full text of the source
Abstract:
This thesis is on the automatic creation of model generators to assist the validation of model transformations. The model-driven software development methodology advocates models as the main artefact to represent software during development. Such models are automatically converted, by transformation tools, to apply in different stages of development. In one application of the method, it becomes possible to synthesise software implementations from design models. However, the transformations used to convert models are man-made, and so prone to development error. An error in a transformation can be transmitted to the created software, potentially creating many invalid systems. Evaluating that model transformations are reliable is fundamental to the success of modelling as a principal software development practice. Models generated via the technique presented in this thesis can be applied to validate transformations. Several existing transformation validation techniques employ some form of conversion; however, those techniques do not apply to validating the conversions used therein. A defining feature of the current presentation is the utilization of transformations, making the technique self-hosting. That is, an implementation of the presented technique can create generators to assist the validation of model transformations, and to assist the validation of that implementation of the technique itself.
24

Stevens, K. Todd. "A taxonomy for the evaluation of computer documentation". Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/44691.

Full text of the source
Abstract:
Software quality is a highly visible topic in the software engineering community. In response to assessing the quality of the documentation of software, this thesis presents a taxonomy of documentation characteristics which can be used to evaluate the quality of computer documentation. Previous work in the area has been limited to individual characteristics of documentation and English prose in general, and was not organized in such a fashion as to be usable in an evaluation procedure. This thesis takes these characteristics, adds others, and systematically establishes a hierarchical structure of characteristics that allows one to assess the quality of documentation. The tree structure has three distinct levels (viz. Qualities, Factors, and Quantifiers), with a root node (or highest characteristic) of Documentation Adequacy. The Qualities are abstract, non-measurable characteristics. The Factors are characteristics that support the assessment of the Qualities; Qualities are decomposed into Factors. The Quantifiers, which are measurable document characteristics, support the assessment of the Factors. In the thesis, the levels are described and then the characteristics are each defined in terms of the evaluation of documentation quality. Finally, an example application is presented as the evaluation taxonomy is tailored to a specific set of documents, those generated by the Automated Design Description System (ADDS).
Master of Science
25

Bhattrai, Gopendra R. "An empirical study of software design balance dynamics". Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/958786.

Full text of the source
Abstract:
The Design Metrics Research Team in the Computer Science Department at Ball State University has been engaged in developing and validating quality design metrics since 1987. Since then a number of design metrics have been developed and validated. One of the design metrics developed by the research team is design balance (DB). This thesis is an attempt to validate the metric DB. In this thesis, results of the analysis of five systems are presented. The main objective of this research is to examine if DB can be used to evaluate the complexity of a software design and hence the quality of the resulting software. Two of the five systems analyzed were student projects and the remaining three were from industry. The five systems analyzed were written in different languages, had different sizes and exhibited different error rates.
Department of Computer Science
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Cancellieri, Michela. "Computer-aided design, synthesis and evaluation of novel antiviral compounds". Thesis, Cardiff University, 2014. http://orca.cf.ac.uk/69187/.

Texto completo da fonte
Resumo:
RNA viruses are a major cause of disease and in the last fifteen years have accounted for frequent outbreaks, infecting both humans and animals. Examples of emerging or re-emerging viral pathogens are the Foot-and-Mouth disease virus (FMDV) for animals, and Chikungunya virus (CHIKV), Coxsackie virus B3 (CVB3) and Respiratory Syncytial virus (RSV) for humans, all responsible for infections associated with mild to severe complications. Although both vaccines and small-molecule compounds are at different stages of development, no selective antiviral drugs have been approved so far; therefore, for all four of these viruses improved treatment strategies are required. Promising targets are the viral non-structural proteins, which are commonly evaluated for the identification of new antivirals. Starting from the study of different viral proteins, several computer-aided techniques were applied, aiming first to identify hit molecules and secondly to synthesise new series of potential antiviral compounds. The available crystal structures of some of the proteins that play a role in viral replication were used for structure- and ligand-based virtual screenings of commercially available compounds against CVB3, FMDV and RSV. New families of potential anti-CHIKV compounds were rationally designed and synthesized, in order to establish a structure-activity relationship study on a lead structure previously found in our group. Finally, a de-novo drug design approach was performed to find a suitable scaffold for the synthesis of a series of zinc-ejecting compounds against RSV. Inhibition of virus replication was evaluated for all the new compounds, several of which showed antiviral potential.
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Hawley, Jeffrey Allan. "Software architecture of the non-rigid image registration evaluation project". Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/1229.

Texto completo da fonte
Resumo:
In medical image registration the goal is to find point-by-point correspondences between a source image and a target image such that the two images are aligned. There are rigid and non-rigid registration algorithms. Rigid registration uses rigid transformation methods, which preserve distances between every pair of points. Non-rigid registration uses transformation methods that do not have to preserve the distances. Image registration has many medical applications: tracking tumors, anatomical changes over time, differences between characteristics like age and gender, etc. A gold standard transformation to compare and evaluate the registration algorithms would be ideal for verifying whether two images are perfectly aligned. However, there is hardly ever a gold standard transformation for non-rigid registration algorithms, because pointwise correspondence between two registered points is not unique. In the absence of a gold standard, various evaluation methods are used to gauge registration performance. However, each evaluation method only evaluates the error in the transformation from a limited perspective and therefore has its advantages and drawbacks. The Non-Rigid Image Registration Evaluation Project (NIREP) was created to provide one central tool that has a collection of evaluation methods to evaluate non-rigid image registration algorithms and rank them based on the outputs of those methods, without having to use a gold standard.
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Conrad, Paul Jefferson. "Analysis of PSP-like processes for software engineering". CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/2962.

Texto completo da fonte
Resumo:
The purpose of this thesis is to provide the California State University, San Bernardino, Department of Computer Science with an analysis and recommended solution to improving the software development process.
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Hammerton, James Alistair. "Exploiting holistic computation : an evaluation of the sequential RAAM". Thesis, University of Birmingham, 1999. http://etheses.bham.ac.uk//id/eprint/4948/.

Texto completo da fonte
Resumo:
In recent years it has been claimed that connectionist methods of representing compositional structures, such as lists and trees, support a new form of symbol processing known as holistic computation. In a holistic computation the constituents of an object are acted upon simultaneously, rather than on a one-by-one basis as is typical in traditional symbolic systems. This thesis presents firstly, a critical examination of the concept of holistic computation, as described in the literature, along with a revised definition of the concept that aims to clarify the issues involved. In particular it is argued that holistic representations are not necessary for holistic computation and that holistic computation is not restricted to connectionist systems. Secondly, an evaluation of the capacity of a particular connectionist representation, the Sequential RAAM, to generate representations that support holistic symbol processing is presented. It is concluded that the Sequential RAAM is not as effective a vehicle for holistic symbol processing as it initially appeared, but that there may be some scope for improving its performance.
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Barkmann, Henrike. "Quantitative Evaluation of Software Quality Metrics in Open-Source Projects". Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2562.

Texto completo da fonte
Resumo:

The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires quite some effort. To help solve this problem, we develop tools for metrics analysis of a large number of software projects (146 projects with ca. 70,000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. Based on our statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. Besides, we present early results on typical metrics values and possible thresholds.
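The correlation analysis this abstract describes can be illustrated with a small sketch. The metric values below are invented for the example (they are not data from the study), and the thesis's own tooling is not reproduced here; the sketch just shows Pearson correlation between two classic object-oriented metrics per class.

```python
# Hypothetical illustration: Pearson correlation between two
# object-oriented metrics measured per class (invented sample values).
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

wmc = [5, 12, 7, 20, 9]   # weighted methods per class (invented)
cbo = [2, 6, 3, 11, 4]    # coupling between objects (invented)
r = pearson(wmc, cbo)     # r close to 1 means the metrics are strongly correlated
```

A strongly correlated pair, in the abstract's terms, need not be validated independently.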

Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Carleson, Hannes, e Marcus Lyth. "Evaluation of Problem Driven Software Process Improvement". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189216.

Texto completo da fonte
Resumo:
Software development is constantly growing in complexity and several new tools have been created with the aim to manage this. However, even with this ever evolving range of tools and methodology, organizations often struggle with how to implement a new development process, especially when implementing agile methods. The most common reason for this is that teams implement agile tools in an ad-hoc manner, without fully considering the effects this can cause. This leads to teams trying to correct their choice of methodology somewhere during the post-planning phase, which can be devastating for a project as it adds further complexity by introducing new problems during the transition process. Moreover, within the existing range of tools aimed at managing this process transition, none have been thoroughly evaluated, which in turn forms the problem that this thesis is centred around. This thesis explores a method transition scenario and evaluates a Software Process Improvement method oriented around the problems that the improvement process is aiming to solve. The goal is to establish whether problem-oriented Software Process Improvement is viable, as well as to provide further data for the extensive research being done in this field. We wish to prove that the overall productivity of a software development team can be increased, even during a project, by carefully managing the transition to new methods using a problem-driven approach. The research method used is of qualitative and inductive character. Data is collected by performing a case study, via action research, and literature studies. The case study consists of iteratively managing a transition to new methods, at an organization in the middle of a project, using a problem-driven approach to Software Process Improvement.
Three iterations of method improvement are applied to the project, and each iteration acts as an evaluation of how well Problem Driven Software Process Improvement works. By using the evaluation model created for this degree project, the researchers have found that problem-driven Software Process Improvement is an effective tool for managing and improving the processes of a development team. Productivity has increased, with focus on tasks with the highest priority being finished first. Transparency has increased, with both the development team and the company having a clearer idea of work in progress and what is planned. Communication has grown, with developers talking more freely about user stories and tasks during planning and stand-up meetings. The researchers acknowledge that the results of the study are of a limited scope and also recognize that further evaluation, in the form of more iterations, is needed for a complete evaluation.
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Toubache, Kamel. "An evaluation and comparison of software process engineering support environments and tools /". Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=69516.

Texto completo da fonte
Resumo:
This thesis proposes the evaluation and comparison of software process engineering support tools and environments. The two dimensions our work focuses on are the goals pursued when processes are engineered, and the roles, related to software processes, played in a software production setting. A framework is proposed to help assess how well software process support tools and environments meet specified goals, and support identified roles. Software process support approaches as well as the tools and environments implementing them are studied in order to provide for the necessary knowledge needed to make sensible judgments. The results of our evaluation show that three important issues are still inadequately addressed: the place of the human element, the tremendous degree of dynamism needed, and the continuous improvement of processes. Our findings suggest that presently we do not have the broad, and widely available mechanical support necessary to help software companies move beyond the defined level (level 3) of the Software Engineering Institute (SEI) Capability Maturity Model.
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Kawala-Janik, Aleksandra. "Efficiency evaluation of external environments control using bio-signals". Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/9810/.

Texto completo da fonte
Resumo:
There are many types of bio-signals with various control application prospects. This dissertation regards a possible application domain of the electroencephalographic (EEG) signal. The implementation of EEG signals as a source of information for the control of external devices has recently become a growing concern in the scientific world. Application of electroencephalographic signals in Brain-Computer Interfaces (BCI) (a variant of Human-Computer Interfaces (HCI)), as an implement which enables direct and fast communication between the human brain and an external device, has recently become very popular. BCI solutions currently available on the market require complex signal processing methodology, which results in the need for expensive equipment with high computing power. In this work, a study on using various types of EEG equipment was conducted in order to select the most appropriate one. The analysis of EEG signals is very complex due to the presence of various internal and external artifacts. The signals are also sensitive to disturbances and non-stochastic, which makes the analysis a complicated task. The research was performed on customised equipment (built by the author of this dissertation), on a professional medical device and on the Emotiv EPOC headset. This work concentrated on the application of an inexpensive, easy-to-use Emotiv EPOC headset as a tool for acquiring EEG signals. The project also involved application of an embedded system platform, the TS-7260. That solution imposed limits on choosing an appropriate signal processing method, as embedded platforms are characterised by little efficiency and low computing power. That aspect was the most challenging part of the whole work. Implementation of the embedded platform makes it possible to extend the possible future applications of the proposed BCI. It also gives more flexibility, as the platform is able to simulate various environments. The study did not involve the use of traditional statistical or complex signal processing methods.
The novelty of the solution relied on the implementation of basic mathematical operations. The efficiency of this method is also presented in this dissertation. Another important aspect of the conducted study is that the research was carried out not only in a laboratory, but also in an environment reflecting real-life conditions. The results proved the efficiency and suitability of the proposed solution in real-life environments. Further study will focus on improvement of the signal processing method and on application of other bio-signals, in order to extend the possible applicability and ameliorate its effectiveness.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Werbelow, Wayne Louis. "The application of artificial intelligence techniques to software maintenance". Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9890.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Dorsey, Edward Vernon. "The automated assessment of computer software documentation quality using the objectives/principles/attributes framework". Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-03302010-020606/.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Sweeney, Rebecca Elizabeth. "Integrated software system for the collection and evaluation of wellness information". [Johnson City, Tenn. : East Tennessee State University], 2002. http://etd-submit.etsu.edu/etd/theses/available/etd-0717102-071055/unrestricted/SweeneyR072902.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Tukseferi, Gerald. "TOWARD AN EVALUATION OF THE COGNITIVE PROCESSES USED BY SOFTWARE TESTERS". Thesis, Mälardalens högskola, Inbyggda system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49048.

Texto completo da fonte
Resumo:
Software testing is a complex activity based on reasoning, decision making, abstraction and collaboration, performed by allocating multiple cognitive resources. While studies of behavioural and cognitive aspects are emerging and rising in software engineering research, the study of cognition in software testing has yet to find its place in the research literature. Thus, this study explores the cognitive processes used by humans engaged in the software testing process by evaluating an existing model: the Test Design Cognitive Model (TDCM). To achieve our objective, we conducted an experiment with five participants. The subjects were asked to test a specific Java program while verbalizing their thoughts. The results were analyzed using the verbal protocol analysis guidelines. The results suggest that all the participants followed similar patterns to those hypothesised by the TDCM model. However, certain patterns specific to each participant were also evident. Two participants exhibited a failure pattern, in which both followed the same sequence of actions after evaluating the test case and verifying that it fails. The experiment protocol and the preliminary results provide a framework from which to investigate tester knowledge and expertise, and they are a first step in understanding, evaluating and refining the cognitive processes of software testing.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Segerstroem, K. Johanna (Karina Johanna) Carleton University Dissertation Psychology. "Should we integrate?; a user evaluation of web and desktop integration strategies". Ottawa, 1999.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Graser, Thomas Jeffrey. "Reference architecture representation environment (RARE) : systematic derivation and evaluation of domain-specific, implementation-independent software architectures /". Access restricted to users with UT Austin EID Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3023549.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Carpatorea, Iulian Nicolae. "A graphical traffic scenario editing and evaluation software". Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-19438.

Texto completo da fonte
Resumo:
An interactive tool is developed for the purpose of rapid exploration of diverse traffic scenarios. The focus is on rapidity of design and evaluation rather than on physical realism. Core aspects are the ability to define the essential elements for a traffic scenario, such as a road network and vehicles. Cubic Bezier curves are used to design the roads and vehicle trajectories. A prediction algorithm is used to visualize vehicle future poses and collisions and thus provide means for evaluation of said scenario. The program was created using C++ with the help of Qt libraries.
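The cubic Bezier primitive this abstract builds on can be sketched briefly. This is a hypothetical illustration in Python, not the thesis's actual C++/Qt code, and the control points are invented for the example.

```python
# Hypothetical sketch: evaluating a cubic Bezier curve, the primitive
# used above for roads and vehicle trajectories. Control points P0..P3,
# parameter t in [0, 1], Bernstein polynomial form.

def cubic_bezier(p0, p1, p2, p3, t):
    """Return the point B(t) on the cubic Bezier curve P0..P3."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Sample an invented "road segment" as a polyline of 11 points.
segment = [cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), i / 10) for i in range(11)]
```

Sampling the curve at many parameter values like this is one straightforward way to render a road or predict poses along a trajectory.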
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Buisman, Jacco. "Game Theory and Bidding for Software Projects An Evaluation of the Bidding Behaviour of Software Engineers". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5850.

Texto completo da fonte
Resumo:
The conception phase is one of the most important phases of software projects. In this phase it is determined which software development company will perform a software project. To obtain a software project, companies can have several bidding strategies. This thesis investigates if and how game theory can be a helpful tool to evaluate bidding for software projects. This thesis can be divided into two different parts: a theoretical and a practical. The theoretical part investigates the applicable parts of game theory in this thesis, explains what software projects are, explains the difference between costing and bidding and provides results of a literature survey about bidding behaviour. The practical part introduces a study to investigate strategies and bidding behaviour of software engineers, explains the experimental design that found the study, provides the results of the performed study and a discussion of the results. This thesis concludes that game theory contains some concepts that make it possible to evaluate bidding for software projects.
Jacco Buisman Bonairestraat 32 9715 SE Groningen The Netherlands
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Kasu, Velangani Deepak Reddy. "Programmer Difficulties within a Software Development Environment". University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1544002452972811.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Horne, Joanna Kathryn. "Development and evaluation of computer-based techniques for assessing children in educational settings". Thesis, University of Hull, 2002. http://hydra.hull.ac.uk/resources/hull:11840.

Texto completo da fonte
Resumo:
This thesis reports and discusses an integrated programme of research on computerised assessment in education, focussing on two themes. The aim of the first study was to develop and evaluate a computerised baseline assessment system for four to five year olds (CoPS Baseline). The aim of the second study was to develop and evaluate a computerised dyslexia screening system for the secondary school age group (LASS Secondary). CoPS Baseline was shown to be a reliable and valid assessment of pupils' skills in literacy, mathematics, communication and personal and social development on entry to school at age four or five. It was also found to be predictive of children's later reading, spelling, writing and mathematics ability up to three years after the initial testing. LASS Secondary was shown to be a reliable and valid assessment of students' reading, spelling, reasoning, auditory memory, visual memory, phonological processing and phonic skills from the ages of 11 to 15. It was also seen to be a good indicator of dyslexia, with significant differences between the scores of dyslexic students and non-SEN students on the sentence reading, spelling, auditory memory, non-word reading and syllable segmentation tests. CoPS Baseline and LASS Secondary were also found to be more objective than conventional assessment administered by a person, time-saving in their test administration and scoring, and more enjoyable and motivating for children, particularly children who have specific difficulties. Computer-based techniques have been shown to be beneficial in the assessment of children in educational settings. 
However, further research is proposed in the areas of: gender and ethnic differences in computerised versus conventional assessment; the addition of reading comprehension, verbal intelligence, mathematics and motor skills tests to the LASS Secondary system; follow-up tests of students assessed on LASS Secondary to provide information about teaching outcomes; and the development of tests suitable for use with deaf / hearing-impaired individuals in order to assess literacy skills and identify dyslexia.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Pell, Barney Darryl. "Strategy generation and evaluation for meta-game playing". Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308363.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Vrazalic, Lejla. "Towards holistic human-computer interaction evaluation research and practice development and validation of the distributed usability evaluation method /". Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050106.151954/index.html.

Texto completo da fonte
Resumo:
Thesis (Ph.D.)--University of Wollongong, 2004.
Typescript. This thesis is subject to a 2 year embargo (16/09/2004 to 16/09/2006) and may only be viewed and copied with the permission of the author. For further information please Contact the Archivist. Includes bibliographical references: p. 360-374.
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Czerny, Maximilian. "Automated software testing : Evaluation of Angluin's L* algorithm and applications in practice". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146018.

Texto completo da fonte
Resumo:
Learning-based testing can ensure software quality without a formal documentation or maintained specification of the system under test. Therefore, an automaton learning algorithm is the key component to automatically generate efficient test cases for black-box systems. In the present report Angluin’s automaton learning algorithm L* and the extension called L* Mealy are examined and evaluated in the application area of learning-based software testing. The purpose of this work is to estimate the applicability of the L* algorithm for learning real world software and to describe constraints of this approach. To achieve this, a framework to test the L* implementation on various deterministic finite automata (DFA) was written and an adaptation called L* Mealy was integrated into the learning-based testing platform LBTest. To follow the learning process, the queries that the learner needs to perform on the system to learn are tracked and measured. Both algorithms show a polynomial growth on these queries in case studies from real world business software or on randomly generated DFAs. The test data indicate a better learning performance in practice than the theoretical predictions imply. In contrast to other existing learning algorithms, the L* adaptation L* Mealy performs slowly in LBTest due to a polynomially growing interval between the types of queries that the learner needs to derive a hypothesis.
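The query types this abstract tracks and measures can be sketched in miniature. The code below is a hypothetical illustration of the two queries Angluin's L* issues to a "teacher" (membership and equivalence); it is not LBTest's API nor the report's framework, and the target language is invented for the example.

```python
# Hypothetical sketch of L*'s two query types against a black-box system.
# Target language (invented example): binary strings with an even number of 1s.
from itertools import product

def member(word):
    """Membership query: is this word accepted by the system under learning?"""
    return word.count("1") % 2 == 0

def equivalent(hypothesis, alphabet="01", max_len=6):
    """Equivalence query, approximated by bounded testing: return a
    counterexample word on which the hypothesis and the system disagree,
    or None if none is found up to max_len."""
    for n in range(max_len + 1):
        for letters in product(alphabet, repeat=n):
            word = "".join(letters)
            if hypothesis(word) != member(word):
                return word
    return None

# A wrong hypothesis ("accept everything") is refuted with a counterexample.
cex = equivalent(lambda w: True)
```

In a full L* run, each such counterexample refines the learner's observation table until an equivalence query succeeds; the report's query counts measure exactly this loop.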
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Könik, Stella. "Music education software : an HCI evaluation of existing software and prototype implementation of an ear training programme based on a proposed approach". Master's thesis, University of Cape Town, 2003. http://hdl.handle.net/11427/6408.

Texto completo da fonte
Resumo:
Includes bibliographical references.
This research paper uses Human Computer Interaction (HCI) evaluation methods, namely questionnaires and interviews, to obtain information about the kind of music software currently in common use in music departments, and the level of satisfaction of the students with this software. The evaluation results are then used to examine the possibilities for some improvement of this software. The evaluation reveals that certain software, such as that used for the area of aural (ear) training in music study, may not be used to its fullest extent as a result of a lack of student interest in the software. This research considers one possible reason for the lack of interest, namely that students find no relevance or meaning in the content of such software.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Zhou, Ke. "On the evaluation of aggregated web search". Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/7104/.

Texto completo da fonte
Resumo:
Aggregating search results from a variety of heterogeneous sources, or so-called verticals such as news, image and video, into a single interface is a popular paradigm in web search. This search paradigm is commonly referred to as aggregated search. The heterogeneity of the information, the richer user interaction, and the more complex presentation strategy make the evaluation of the aggregated search paradigm quite challenging. The Cranfield paradigm, the use of test collections and evaluation measures to assess the effectiveness of information retrieval (IR) systems, is the de-facto standard evaluation strategy in the IR research community and it has its origins in work dating to the early 1960s. This thesis focuses on applying this evaluation paradigm to the context of aggregated web search, contributing to the long-term goal of a complete, reproducible and reliable evaluation methodology for aggregated search in the research community. The Cranfield paradigm for aggregated search consists of building a test collection and developing a set of evaluation metrics. In the context of aggregated search, a test collection should contain results from a set of verticals, some information needs relating to this task and a set of relevance assessments. The metrics proposed should utilize the information in the test collection in order to measure the performance of any aggregated search pages. The more complex user behavior of aggregated search should be reflected in the test collection through assessments and modeled in the metrics. Therefore, firstly, we aim to better understand the factors involved in determining relevance for aggregated search and subsequently build a reliable and reusable test collection for this task. By conducting several user studies to assess vertical relevance and creating a test collection by reusing existing test collections, we create a testbed with both the vertical-level (user orientation) and document-level relevance assessments.
In addition, we analyze the relationship between both types of assessments and find that they are correlated in terms of measuring the system performance for the user. Secondly, by utilizing the created test collection, we aim to investigate how to model the aggregated search user in a principled way in order to propose reliable, intuitive and trustworthy evaluation metrics to measure the user experience. We start our investigations by studying solely evaluating one key component of aggregated search: vertical selection, i.e. selecting the relevant verticals. Then we propose a general utility-effort framework to evaluate the ultimate aggregated search pages. We demonstrate the fidelity (predictive power) of the proposed metrics by correlating them to the user preferences of aggregated search pages. Furthermore, we meta-evaluate the reliability and intuitiveness of a variety of metrics and show that our proposed aggregated search metrics are the most reliable and intuitive metrics, compared to adapted diversity-based and traditional IR metrics. To summarize, in this thesis, we mainly demonstrate the feasibility to apply the Cranfield Paradigm for aggregated search for reproducible, cheap, reliable and trustworthy evaluation.
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Kincaid, David Thomas. "Evaluation of computer hardware and software in the private country club sector of Virginia". Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/42007.

Texto completo da fonte
Resumo:

The world has seen incredible changes in recent years, and the most notable has been the introduction of computers into our society. One industry that has greatly benefited from the use of computers in their field has been the hospitality industry. The country club sector is one area of the hospitality industry that has been greatly improved through the use of computers. This study evaluated the software and hardware for private country clubs, and related that to the usage of these products by the private country clubs in Virginia.

The study utilized a survey to investigate the types of and methods by which computers have impacted these country clubs. The survey's results were offered to each country club that was surveyed, for their usage in whatever manner they find helpful.


Master of Science
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Rosén, Nils. "Evaluation methods for procurement of business critical software systems". Thesis, University of Skövde, School of Humanities and Informatics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-3091.

Texto completo da fonte
Resumo:

The purpose of this thesis is to explore what software evaluation methods are currently available that can assist organizations and companies in procuring a software solution for some particular task or purpose for a specific type of business. The thesis is based on a real-world scenario where a company, Volvo Technology Corporation (VTEC), is in the process of selecting a new intellectual property management system for their patent department. For them to make an informed decision as to which system to choose, an evaluation of market alternatives needs to be done. First, a set of software evaluation methods and techniques are chosen for further evaluation. An organizational study, by means of interviews where questions are based on the ISO 9126-1 Software quality model, is then conducted, eliciting user opinions about the current system and what improvements a future system should have. The candidate methods are then evaluated based on the results from the organizational study and other pertinent factors in order to reach a conclusion as to which method is best suited for this selection problem. The Analytical Hierarchy Process (AHP) is deemed the best choice.
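The core computation behind the AHP method this thesis selects can be sketched briefly. This is a hedged illustration, not the thesis's own evaluation: it shows one common way to derive criterion weights from a pairwise-comparison matrix (via its principal eigenvector, approximated here by power iteration), and the 3x3 matrix is an invented example rather than data from the study.

```python
# Hypothetical AHP sketch: criterion weights from a reciprocal
# pairwise-comparison matrix via power iteration.

def ahp_weights(matrix, iterations=100):
    """Approximate the principal eigenvector of the comparison matrix
    and normalize it so the weights sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Invented judgments: criterion A is 3x as important as B, 5x as important as C.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)  # weights[0] dominates, as the judgments imply
```

Candidate systems are then scored against each criterion and the weighted scores are aggregated, which is what makes AHP suitable for a multi-criteria selection problem like the one described above.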

Estilos ABNT, Harvard, Vancouver, APA, etc.