
Dissertations / Theses on the topic 'Evaluation model'

The following are the top 50 dissertations and theses on the topic 'Evaluation model.'

1

Tudevdagva, Uranchimeg. "Structure Oriented Evaluation Model for E-Learning." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-146901.

Full text
Abstract:
Volume 14 of the publication series EINGEBETTETE, SELBSTORGANISIERENDE SYSTEME is devoted to the structure-oriented evaluation of e-learning. For the future knowledge society, adapted methods of knowledge transfer are required alongside the creation of intelligent technologies. In this context, e-learning becomes a key technology for the development of any education system. E-learning is a complex process involving many different groups with specific tasks and roles. The dynamics of an e-learning process require adjusted quality management, and for that, corresponding evaluation methods are needed. In the present work, Dr. Tudevdagva develops a new evaluation approach for e-learning. The advantage of her method is that, in contrast to linear evaluation methods, no weight factors are needed, and the logical goal structure of an e-learning process can be incorporated into the evaluation. Based on general measure theory, structure-oriented score calculation rules are derived. The resulting score function satisfies the same calculation rules as normalised measures. In statistical generalisation, these rules allow the structure-oriented calculation of empirical evaluation scores from checklist data. These scores describe the degree to which an e-learning process has reached its overall goal. Moreover, a consistent evaluation of embedded partial processes of an e-learning course becomes possible. The presented score calculation rules are part of an eight-step evaluation model, which is illustrated by pilot samples. By its embedding in general measure theory, U. Tudevdagva's structure-oriented evaluation model (SURE model) is quite universally applicable; in a similar manner, an evaluation of the efficiency of administration or organisation processes becomes possible.
APA, Harvard, Vancouver, ISO, and other styles
2

Nordholm, Johan. "Model-Based Testing: An Evaluation." Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5188.

Full text
Abstract:

Testing is a critical activity in the software development process in order to obtain systems of high quality. Tieto typically develops complex systems, which are currently tested through a large number of manually designed test cases. Recent development within software testing has resulted in methods and tools that can automate the test case design, the generation of test code and the test result evaluation based on a model of the system under test. This testing approach is called model-based testing (MBT).

This thesis is a feasibility study of the model-based testing concept and has been performed at the Tieto office in Karlstad. The feasibility study included the use and evaluation of the model-based testing tool Qtronic, developed by Conformiq, which automatically designs test cases given a model of the system under test as input. The experiments for the feasibility study were based on the incremental development of a test object, which was the client protocol module of a simplified model for an ATM (Automated Teller Machine) client-server system. The experiments were evaluated both individually and by comparison with the previous experiment since they were based on incremental development. For each experiment the different tasks in the process of testing using Qtronic were analyzed to document the experience gained as well as to identify strengths and weaknesses.

The project has shown the promise inherent in using a model-based testing approach. The application of model-based testing and the project results indicate that the approach should be further evaluated since experience will be crucial if the approach is to be adopted within Tieto’s organization.

3

Klose, Daniel Peter. "Protein model construction and evaluation." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1444214/.

Full text
Abstract:
The prediction of protein secondary and tertiary structure is becoming increasingly important as the number of sequences available to the biological community far exceeds the number of unique native structures. The following chapters describe the conception, construction, evaluation, and application of a series of algorithms for the prediction and evaluation of two- and three-dimensional protein structure. Chapter 1 gives a brief overview of protein structure and the resources required to predict protein features. Chapter 2 describes the investigation of the effect of sequence identity and alignments on the prediction of two-dimensional protein structure in the form of long- and short-range protein contacts, a feature known to correlate with solvent accessibility. It also describes the identification of a feature referred to as the 'Empty Quarter', which forms the basis of an evaluation function described in Chapter 3 and developed in Chapter 4. Chapter 3 introduces the Dynamic Domain Threading method used during round six of the CASP exercise. Phobic, a protein evaluation function based on predicted solvent accessibility, is described in Chapter 4. The de novo prediction of α/β proteins is described in Chapter 5; the method introduces a new approach to the old problem of combinatorial modelling and breaks the size limit previously imposed on de novo prediction. The final experimental chapter describes the prediction of solvent accessibility and secondary structure using a novel combination of the fuzzy k-nearest neighbour and support vector machine methods. Chapter 7 closes this piece of work with a review of the field and suggests potential improvements to the way the work is conducted.
4

Sharma, Nishchay. "Knock Model Evaluation – Gas Engine." Thesis, KTH, Maskinkonstruktion (Avd.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237133.

Full text
Abstract:
Knocking is a type of abnormal combustion that depends on several physical factors and results in high-frequency pressure oscillations inside the combustion chamber of a spark-ignited internal combustion engine (ICE). These oscillations can damage the engine and hamper its efficiency, which is why it is important for automakers to understand knocking behaviour so that it can be avoided during engine operation. Because of the catastrophic outcomes of knocking, much past research has addressed the prediction of its occurrence. Knocking can have several causes, but when it occurs due to auto-ignition of fuel in the end-gas it is called spark-knock. Various mathematical models predict the phenomenon of spark-knock. In this thesis, several previously published knock prediction models for heavy-duty natural-gas engines are studied and analyzed. The main objective of this project is to assess the accuracy of different types of knock prediction models. Amongst all the types of knock prediction models, emphasis has been given to empirical correlation models, particularly those based on chemical kinetics pertaining to the combustion process of methane. These models claim to predict ignition delay time based on the concentration of air and fuel in the unburned zone of the cylinder. The models are assessed on the knocking behaviour they represent across the engine operating range. Results pertaining to the knock prediction models are evaluated in a 1D engine simulation model using AVL BOOST. The BOOST performance prediction model is calibrated against experimentally measured engine test-cell data, and the same data are used to assess the knock prediction models. The knock prediction model whose results correlate with experimental observations is analyzed further, while the other models are discarded. Using the validated model, variation in knock occurrence is evaluated against changes in combustion phasing.
Two of the parameters used to define combustion phasing are spark advance and combustion duration. It was found that when the brake mean effective pressure is kept constant, the knock prediction parameter increases linearly with increasing spark advance and decreases linearly with increasing combustion duration. The variation of the knock prediction parameter with spark advance showed an increasing gradient with increasing engine torque.
Knock in a combustion engine is a type of abnormal combustion. It is a complicated phenomenon that depends on several physical factors and results in high-frequency pressure oscillations inside the combustion chamber. These oscillations can damage the engine, and the phenomenon hampers engine efficiency. Knock can arise in two ways in an Otto engine, and this thesis deals with auto-ignition. Auto-ignition, in this case, is when the end-gas begins to burn without having been affected by the flame front or the spark from the spark plug. There are several different mathematical models that can predict the knock phenomenon to varying degrees. In this thesis, some of the previously published knock prediction models for Otto combustion are studied and modelled for analysis. The main objective of the project is thus to assess the accuracy of different types of knock models. Extra focus has been placed on empirical correlation models, particularly those based on chemical kinetics pertaining to the combustion process of methane. These models predict the time it takes for the end-gas to auto-ignite, based on its concentration of air and fuel. The knock models are then assessed on the behaviour they predict across the engine's operating range and its agreement with known engine calibration strategies. The knock prediction results of the different models are evaluated and validated in an engine simulation model in the AVL BOOST software. The BOOST model is calibrated against experimentally measured engine test data. Based on the results from the selected knock models, the model that best correlated with known engine calibration strategies was analysed more deeply. The selected model was an ECM model, and it was evaluated further with respect to variation in the predicted knock parameter. This was done by modifying two combustion parameters: spark timing and combustion duration.
It was found that the models predicted a linear increase as ignition is advanced and a linear decrease with longer combustion duration, which is in agreement with engine test data. Furthermore, variations in spark timing resulted in a higher gradient in the knock prediction at higher engine loads and a corresponding decrease at lower loads.
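Empirical ignition-delay correlations of the kind this abstract describes are commonly combined with a knock integral (the classical Livengood-Wu approach): auto-ignition is predicted when the integral of dt/τ over the end-gas history reaches 1 before combustion ends. The sketch below is illustrative only, with a generic Arrhenius-type correlation and placeholder coefficients, not the thesis's actual model:

```python
import math

def ignition_delay(p_bar, T_kelvin, A=0.02, n=1.7, B=3800.0):
    """Arrhenius-type ignition-delay correlation tau = A * p^-n * exp(B/T).

    A, n, B are placeholder values; real models fit them to the
    chemical kinetics of the fuel (here, methane combustion).
    """
    return A * p_bar ** (-n) * math.exp(B / T_kelvin)

def knock_integral(trace, dt):
    """Livengood-Wu integral over an end-gas (pressure, temperature) trace.

    Accumulates dt / tau at each time step; a value >= 1.0 before the
    end of combustion indicates predicted auto-ignition (knock).
    """
    return sum(dt / ignition_delay(p, T) for p, T in trace)
```

Because τ shrinks rapidly with rising end-gas temperature, advancing the spark (which raises peak pressures and temperatures) drives the integral up, consistent with the linear trends reported above.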
5

Miroshnychenko, Dmytro. "Mechanical behaviour of PVC : model evaluation." Thesis, Loughborough University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250881.

Full text
6

Al-Dawood, Abdullah Saad. "Transportation and economic development evaluation model." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/39905.

Full text
Abstract:
The system dynamics methodology is used to develop a computer simulation model to determine whether to add lanes to a congested highway or to build a new, more direct facility. Fundamental to this evaluation is the incorporation of non-user measures of effectiveness alongside the traditional highway-user measures of effectiveness, such as the benefit-cost ratio. In the system dynamics methodology, three alternative forms of the model of a system are used: verbal, visual, and mathematical. The verbal description is diagrammatic and shows cause-and-effect relationships between many variables in a simple, concise manner. The visual model, or "causal diagram," is translated into a mathematical model and system equations. The model comprises four sectors: (1) a population sector, (2) an economic sector, (3) a university sector, and (4) a transportation sector. The model applies to the area of Blacksburg, Christiansburg, and Roanoke (city and county), with special treatment of Virginia Tech through the university model. The simulation results for the non-user benefits, along with the user benefits, are used to evaluate the alternatives in the Blacksburg-Christiansburg-Roanoke corridor.
Ph. D.
7

Aldrete, Sánchez Rafael Manuel. "Feasibility evaluation model for toll highways /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
8

Clark, Thomas K. "Logging Subsystem Performance: Model and Evaluation." PDXScholar, 1994. https://pdxscholar.library.pdx.edu/open_access_etds/4724.

Full text
Abstract:
Transaction logging is an integral part of ensuring proper transformation of data from one state to another in modern data management. Because of this, the throughput of the logging subsystem can be critical to the throughput of an application. The purpose of this research is to break the log bottleneck at minimum cost. We first present a model for evaluating a logging subsystem, where a logging subsystem is made up of a log device, a log backup device, and the interconnect algorithm between the two, which we term the log backup method. Included in the logging model is a set of criteria for evaluating a logging subsystem and a system for weighting the criteria in order to facilitate comparisons of two logging subsystem configurations to determine the better of the two. We then present an evaluation of each of the pieces of the logging subsystem in order to increase the bandwidth of both the log device and log backup device, while selecting the best log backup method, at minimum cost. We show that the use of striping and RAID is the best alternative for increasing log device bandwidth. Along with our discussion of RAID, we introduce a new RAID algorithm that is designed to overcome the performance problems of small writes in a RAID log. In order to increase the effective bandwidth of the log backup device, we suggest the use of inexpensive magnetic tape drives and striping in the log backup device, where the bandwidth of the log backup device is increased to the point that it matches the bandwidth of the log device. For the log backup interconnect algorithm, we present the novel approach of backing up the log synchronously, where the log backup device is essentially a mirror of the log device, as well as evaluating other log backup interconnect algorithms. Finally, we present a discussion of a prototype implementation of some of the ideas in the thesis. 
The prototype was implemented in a commercial database system, using a beta version of INFORMIX-OnLine Dynamic Server™ version 6.0.
9

Sirikijpanichkul, Ackchai. "An agent-based location evaluation model." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/20672/1/Ackchai_Sirikijpanichkul_Thesis.pdf.

Full text
Abstract:
Truck transportation is considered a favourable mode by shippers for carrying freight over most distance ranges, as it offers more flexibility in fleet size, capacity, scheduling, routing, and access. Although the truck is the popular mode for freight transportation, road-rail intermodal freight transportation has become an attractive alternative to the road-only mode, since the latter no longer assures reliable service due to traffic congestion; it also raises public concern about environmental and road-safety impacts. Intermodal freight transportation is defined as a system that carries freight from origin to destination using two or more transportation modes, with transfers between modes occurring at an intermodal freight terminal. The success of the terminal depends on four major factors: location, efficiency, financial sustainability, and rail level of service. Among these, location is one of the most crucial success factors and needs to be considered carefully, as it has direct and indirect impacts on a number of stakeholders, including terminal users, terminal operators, transport network infrastructure providers, and the community. Limitations of previous terminal location evaluation models in representing individual preference and behaviour, as well as in accommodating negotiation and communication between the players, present an opportunity to develop a new model that is more flexible and capable of providing a solution that is not necessarily optimal, but acceptable to every player without requiring explicit trade-offs. This thesis aims to demonstrate the feasibility of applying an agent-based approach to the evaluation of intermodal freight terminal location and to investigate terminal effectiveness against stakeholder equity and other important aspects arising from the different stakeholders' viewpoints. Agent technologies were introduced to model the stakeholders as individual agents.
The agent concept was adopted to develop a decentralised location evaluation system that balances terminal effectiveness with stakeholder equity. The proposed agent-based location evaluation model was structured as a hierarchical control system comprising three decision levels: local, stakeholder, and policy. The policy level is the highest decision level and is represented by a policy maker; the remaining levels can be viewed as operational decision levels. The local level is the lowest control level; at this level, each stakeholder was classified into stakeholder groups based on their characteristics and interests, and the terminal scenarios were evaluated on benefit-maximisation criteria. The stakeholder level sits above the local level and represents the level at which negotiations and decisions between groups of stakeholders with different points of view are made; here, a negotiation process was used to determine the terminal location based on stakeholder preference and equity. The determined terminal site was then evaluated against constraints to ensure that all agents are satisfied. The terminal location decision for South East Queensland (SEQ) was used as the case study of this thesis. The SEQ strategic freight transport model was developed, calibrated, and validated to provide inputs for the evaluation of terminal location. The results indicated that, for the developed agent-based location evaluation model, Yatala was selected as the most appropriate terminal location, yielding the highest effectiveness and equity (as measured by level of satisfaction and the Gini coefficient, respectively). Other location evaluation models, including the P-median, P-centre, and maximum covering models, were also compared with the developed agent-based location evaluation model.
The agent-based location evaluation model was found to outperform the other location evaluation models. Finally, a sensitivity analysis was conducted to evaluate the consistency of model outputs against uncertainties in the input parameters. In most cases, the terminal location decisions obtained from the developed agent-based location evaluation model were not sensitive to changes in those parameters; however, the results suggested that an increased unit cost of truck travel delay did affect the final terminal location decisions. This thesis demonstrated the feasibility of applying a decentralised approach to the terminal location decision problem using a multi-agent concept and evaluated it against other well-known location problems. A new framework and methodology for the planning of intermodal terminal location evaluation was also formulated. Finally, the problems of terminal location evaluation and optimisation of intermodal freight terminal operation were integrated into a single evaluation model.
10

Sirikijpanichkul, Ackchai. "An agent-based location evaluation model." Queensland University of Technology, 2008. http://eprints.qut.edu.au/20672/.

Full text
11

Kim, JongKwan. "The Calibration and Uncertainty Evaluation of Spatially Distributed Hydrological." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1437.

Full text
Abstract:
In the last decade, spatially distributed hydrological models have advanced rapidly with the widespread availability of remotely sensed and geomatics information. In particular, the calibration and evaluation of spatially distributed hydrological models have been attempted in order to reduce the differences between models and to improve realism through various techniques. Despite steady efforts, the study of calibration and evaluation for spatially distributed hydrological models remains largely unexplored, in that no research has considered the interactions of snow and water balance components with traditional measurement methods as error functions. As one of the factors related to runoff, melting snow is important, especially in mountainous regions with heavy snowfall; however, no study considering both snow and water components simultaneously has investigated the procedures of calibration and evaluation for spatially distributed models. Additionally, novel approaches to error functions are needed to reflect the characteristics of spatially distributed hydrological models in the comparison between simulated and observed values. Lastly, the shift from lumped model calibration to distributed model calibration has raised model complexity: the number of unknown parameters can rapidly increase, depending on the degree of distribution. Therefore, a strategy is required to determine the optimal degree of model distribution for a study basin. This study attempts to address the issues raised above. It utilizes the Research Distributed Hydrological Model (HL-RDHM) developed by the National Weather Service's Office of Hydrologic Development (OHD-NWS). This model simultaneously simulates both snow and water balance components.
It consists largely of two different modules, the Snow-17 snow component and the Sacramento Soil Moisture Accounting (SAC-SMA) water component, and is applied over the Durango River basin in Colorado, an area driven primarily by snow. As its main contribution, this research develops and tests various methods to calibrate and evaluate spatially distributed hydrological models with different, non-commensurate variables and measurements. Additionally, this research provides guidance on how to decide an appropriate degree of model distribution (resolution) for a specific water catchment.
12

Bush, Charles D. "Teacher Perceptions About New Evaluation Model Implementations." Thesis, Northcentral University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10622533.

Full text
Abstract:

The challenge of designing and implementing teacher evaluation reform throughout the U.S. has been marked by different policies, teacher evaluation components, and difficulties with implementation. The purpose of this qualitative embedded single case study was to explore teacher perceptions about new evaluation model implementations and how new model implementations affect the relationships between teachers and administration. The main unit of analysis was teachers at one school experiencing the implementation of new evaluation reform. The sub-units were the experience levels of teachers, specifically New Teachers, Mid-Career Teachers, and Seasoned Teachers. Findings in this research demonstrated a protectiveness of the low-income school in which the participants work, and a lack of trust that the state understands the needs of a low-performing school. The findings indicated that teachers perceive that a lack of local control or input into the development or implementation of a new evaluation tool may create feelings of mistrust and suspicion of ulterior motives. Results also suggested that teachers perceive that a new teacher evaluation model may add stress at the site, provide tools for feedback and accountability, and possibly negatively affect relationships with students. Finally, the findings indicated striking differences in the perceptions of teachers with different levels of teaching experience. Teachers of all experience levels perceived similar, positive relationships between teachers and administrators. However, perceptions of the current evaluation tool were markedly different based on years of experience. New Teachers and Mid-Career Teachers stressed a desire to receive feedback and the need for feedback to improve their practice. Conversely, Seasoned Teachers stated a clear lack of need or desire for feedback. Additionally, all experience-level groups perceived that there may be some added stress during the implementation of a new evaluation tool.
Seasoned Teachers and Mid-Career Teachers perceived the possibility of a new tool as a negative event, while New Teachers viewed it as an opportunity for accountability and alignment.

13

Ahlberg, Jörgen. "Model-based coding : extraction, coding, and evaluation of face model parameters /." Linköping : Univ, 2002. http://www.bibl.liu.se/liupubl/disp/disp2002/tek761s.pdf.

Full text
14

Shah, Seyyed Madasar Ali. "Model transformation dependability evaluation by the automated creation of model generators." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3407/.

Full text
Abstract:
This thesis concerns the automatic creation of model generators to assist the validation of model transformations. The model-driven software development methodology advocates models as the main artefact representing software during development. Such models are automatically converted, by transformation tools, for use in different stages of development; in one application of the method, it becomes possible to synthesise software implementations from design models. However, the transformations used to convert models are man-made, and so prone to development error. An error in a transformation can be transmitted to the created software, potentially creating many invalid systems. Establishing that model transformations are reliable is fundamental to the success of modelling as a principal software development practice. Models generated via the technique presented in this thesis can be applied to validate transformations. Several existing transformation validation techniques employ some form of conversion, yet those techniques cannot be applied to validate the conversions used therein. A defining feature of the present work is its utilization of transformations, making the technique self-hosting: an implementation of the presented technique can create generators both to assist the validation of model transformations and to assist the validation of that implementation of the technique itself.
APA, Harvard, Vancouver, ISO, and other styles
15

Ten, Eyck Patrick. "Problems in generalized linear model selection and predictive evaluation for binary outcomes." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/6003.

Full text
Abstract:
This manuscript consists of three papers which formulate novel generalized linear model methodologies. In Chapter 1, we introduce a variant of the traditional concordance statistic that is associated with logistic regression. This adjusted c-statistic, as we call it, utilizes the differences in predicted probabilities as weights for each event/non-event observation pair. We highlight an extensive comparison of the adjusted and traditional c-statistics using simulations and apply these measures in a modeling application. In Chapter 2, we feature the development and investigation of three model selection criteria based on cross-validatory c-statistics: Model Misspecification Prediction Error, Fitting Sample Prediction Error, and Sum of Prediction Errors. We examine the properties of the corresponding selection criteria based on the cross-validatory analogues of the traditional and adjusted c-statistics via simulation and illustrate these criteria in a modeling application. In Chapter 3, we propose and investigate an alternate approach to pseudo-likelihood model selection in the generalized linear mixed model framework. After outlining the problem with the pseudo-likelihood model selection criteria found using the natural approach to generalized linear mixed modeling, we feature an alternate approach, implemented using a SAS macro, that obtains and applies the pseudo-data from the full model for fitting all candidate models. We justify the propriety of the resulting pseudo-likelihood selection criteria using simulations and implement this new method in a modeling application.
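The pairing idea behind the adjusted c-statistic can be sketched in a few lines. This is a minimal illustration, assuming the weight for each event/non-event pair is the absolute difference in predicted probabilities; the thesis's exact weighting may differ:

```python
import numpy as np

def c_statistics(p_events, p_nonevents):
    """Traditional and probability-difference-weighted concordance.

    p_events: predicted probabilities for observations with the event.
    p_nonevents: predicted probabilities for observations without it.
    """
    p_events = np.asarray(p_events, dtype=float)
    p_nonevents = np.asarray(p_nonevents, dtype=float)
    # All event/non-event pairs: diff > 0 is concordant, diff == 0 is a tie.
    diff = p_events[:, None] - p_nonevents[None, :]
    concordant = (diff > 0).astype(float) + 0.5 * (diff == 0)
    c_traditional = concordant.mean()
    # Weighted variant: pairs with larger probability separation count more.
    w = np.abs(diff)
    c_weighted = (w * (diff > 0)).sum() / w.sum()
    return c_traditional, c_weighted

c_trad, c_adj = c_statistics([0.8, 0.6], [0.4, 0.7])
```

Here the weighted version rewards a model whose concordant pairs are separated by large probability gaps, which is one way a weighted c-statistic can distinguish models the traditional statistic scores identically.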
APA, Harvard, Vancouver, ISO, and other styles
16

Barkley, Ellise Jane-Ann. "An integrated approach to evaluation: A participatory model for reflection, evaluation, analysis and documentation (the 'READ' model) in community arts." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/97728/3/Ellise%20Barkley%20Thesis.pdf.

Full text
Abstract:
The research focuses on the development and critical review of the READ model, a dynamic and rigorous model for evaluation of community arts. The model's innovation lies in the integration of four key appraisal and learning strategies, Reflection, Evaluation, Analysis and Documentation (READ), to cater for the multi-faceted demands of creative community partnership initiatives. Positioned within the contemporary debate on cultural value and impact measurement, the research contributes to the discourse on effective community arts evaluation and offers an integrated, stakeholder-oriented model of relevance to the Creative Industries, and the fields of evaluation, cultural development, project management and sustainability.
APA, Harvard, Vancouver, ISO, and other styles
17

Ansin, Elin. "An evaluation of the Cox-Snell residuals." Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-256665.

Full text
Abstract:
It is common practice to use Cox-Snell residuals to check for overall goodness of fit in survival models. We evaluate the presumed relation of unit exponentially distributed residuals for a good model fit, and evaluate under some violations of the model. This is done graphically with the usual graphs of Cox-Snell residuals, and formally using the Kolmogorov-Smirnov goodness of fit test. It is observed that residuals from a correctly fitted model follow a unit exponential distribution. However, the Cox-Snell residuals do not seem to be sensitive to the violations of the model.
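The check described here can be illustrated with a minimal parametric example. This is a sketch, not the thesis's code: an exponential model fitted by maximum likelihood on simulated data, with `scipy` assumed available. The Cox-Snell residuals are the fitted cumulative hazards at the observed times:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
t = rng.exponential(scale=2.0, size=500)   # uncensored survival times

# MLE for an exponential model: rate = n / sum(t).
rate_hat = len(t) / t.sum()

# Cox-Snell residuals = fitted cumulative hazard H(t_i) = rate * t_i.
# Under a correctly specified model they follow a unit exponential.
r = rate_hat * t

# Formal check: Kolmogorov-Smirnov test against the unit exponential.
ks_stat, ks_p = stats.kstest(r, "expon")
```

A small KS statistic (and a large p-value) is consistent with the unit exponential relation; the usual graphical companion is a plot of the empirical cumulative hazard of the residuals against the 45-degree line.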
APA, Harvard, Vancouver, ISO, and other styles
18

Tuten, Paul M. "A Model for the Evaluation of IS/IT Investments." NSUWorks, 2009. http://nsuworks.nova.edu/gscis_etd/327.

Full text
Abstract:
Evaluation is a vital yet challenging part of IS/IT management and governance. The benefits (or lack thereof) associated with IS/IT investments have been widely debated within academic and industrial communities alike. Investments in information technology may or may not result in desirable outcomes. Yet, organizations must rely on information systems to remain competitive. Effective evaluation serves as one pathway to ensuring success. However, despite a growing multitude of measures and methods, practitioners continue to struggle with this intractable problem. Responding to the limited success of existing methods, scholars have argued that academicians should first develop a better understanding of the process of IS/IT evaluation. In addition, scholars have also posited that IS/IT evaluation practice should be tailored to fit a given organization's particular context. Of course, one cannot simply tell practitioners to "be contextual" when conducting evaluations and then hope for improved outcomes. Instead, having developed an improved understanding of the IS/IT evaluation process, researchers should articulate unambiguous guidelines to practitioners. The researcher addressed this need using a multi-phase research methodology. To start, the researcher conducted a literature review to identify and describe the relevant contextual elements operating in the IS/IT evaluation process: the purpose of conducting the evaluation (why); the subject of the evaluation (what); the specific aspects to be evaluated (which); the particular evaluation methods and techniques used (how); the timing of the evaluation (when); the individuals involved in, or affected by, the evaluation (who); and the environmental conditions under which the organization operates (where). Based upon these findings, the researcher followed a modeling-as-theorizing approach to develop a conceptual model of IS/IT evaluation.
Next, the conceptual model was validated by applying it to multiple case studies selected from the extant literature. Once validated, the researcher utilized the model to develop a series of methodological guidelines to aid organizations in conducting evaluations. The researcher summarized these guidelines in the form of a checklist for professional practitioners. The researcher believes this holistic, conceptual model of IS/IT evaluation serves as an important step in advancing theory. In addition, the researcher's guidelines for conducting IS/IT evaluation based on organizational goals and conditions represents a significant contribution to industrial practice. Thus, the implications of this study come full circle: an improved understanding of evaluation should result in improved evaluation practices.
APA, Harvard, Vancouver, ISO, and other styles
19

Li, Zhengrong. "Model-based Tests for Standards Evaluation and Biological Assessments." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29108.

Full text
Abstract:
Implementation of the Clean Water Act requires agencies to monitor aquatic sites on a regular basis and evaluate the quality of these sites. Sites are evaluated individually even though there may be numerous sites within a watershed. In some cases, sampling frequency is inadequate and the evaluation of site quality may have low reliability. This dissertation evaluates testing procedures for determination of site quality based on model-based procedures that allow for other sites to contribute information to the data from the test site. Test procedures are described for situations that involve multiple measurements from sites within a region and single measurements when stressor information is available or when covariates are used to account for individual site differences. Tests based on analysis of variance methods are described for fixed effects and random effects models. The proposed model-based tests compare limits (tolerance limits or prediction limits) for the data with the known standard. When the sample size for the test site is small, using model-based tests improves the detection of impaired sites. The effects of sample size, heterogeneity of variance, and similarity between sites are discussed. Reference-based standards and corresponding evaluation of site quality are also considered. Regression-based tests provide methods for incorporating information from other sites when there is information on stressors or covariates. Extension of some of the methods to multivariate biological observations and stressors is also discussed. Redundancy analysis is used as a graphical method for describing the relationship between biological metrics and stressors. A clustering method for finding stressor-response relationships is presented and illustrated using data from the Mid-Atlantic Highlands. Multivariate elliptical and univariate regions for assessment of site quality are discussed.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Preacher, Kristopher J. "The Role of Model Complexity in the Evaluation of Structural Equation Models." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1054130634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Kale, Ravindra V. "Evaluation of an Active Colonoscopy Training Model." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1350759066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Maier, Philipp. "Website Evaluation Model and Key Performance Indicators /." St. Gallen, 2008. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/04608352001/$FILE/04608352001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Alfadli, Ibrahim Mohammed M. "Citizen centred e-Government services evaluation model." Thesis, Durham University, 2015. http://etheses.dur.ac.uk/11250/.

Full text
Abstract:
Electronic government (e-Government) is attracting the interest of governments around the globe due to its great importance in facilitating and providing services to citizens. Although most countries invest massive budgets to provide the latest technologies, they face many obstacles, including the notable absence of assessment and evaluation of e-Government services from the citizen's point of view. The objective of this research is to identify an e-Government evaluation model based on previous research and studies, and to evaluate each model by verifying its attributes, factors, and how they relate to each other. This research concentrates on evaluating online services provided to citizens by governments. It will develop a citizen-centred model to evaluate e-Government services, and will help government organizations to find the strengths and weaknesses of their online services. One of the main aspects of developing an evaluation model is to consider the citizens. The citizen is one of the most important reasons for governments putting their services online (e-Services). Therefore, finding ways of evaluating e-Services is crucial for governments in order to achieve better results from their perspectives as well as citizen satisfaction. The iMGov Model is based around the concepts of three phases in terms of Placing an Order, Processing an Order, and Delivering an Order. The new model will be compared with existing evaluation models. In conclusion, this research will produce an adequate e-Government evaluation model to measure e-Government services provided to citizens.
APA, Harvard, Vancouver, ISO, and other styles
24

Wagner, Tansy Lynn. "An Evaluation of an Avian Diversity Model." Fogler Library, University of Maine, 1999. http://www.library.umaine.edu/theses/pdf/WagnerTL1999.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Parajuli, Prem B. "SWAT bacteria sub-model evaluation and application." Diss., Manhattan, Kan. : Kansas State University, 2007. http://hdl.handle.net/2097/373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Cacina, Nasuh. "Test and evaluation of surf forecasting model." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Danas, Ryan. "User Evaluation Framework for Model Finding Research." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/1009.

Full text
Abstract:
We report the results of a series of crowd-sourced user studies in the formal-methods domain. Specifically, we explore the efficacy of the notion of "minimal counterexample" (or more colloquially, "minimal bug report") when reasoning about logical specifications. Our results here suggest that minimal counterexamples are beneficial in some specific cases, and harmful in others. Furthermore, our analysis leads to refined hypotheses about the role of minimal counterexamples that can be further evaluated in future studies. User-based evaluation has little precedent in the formal methods community. Therefore, as a further contribution, we discuss and analyze our research methodology, and offer guidelines for future user studies in formal methods research.
APA, Harvard, Vancouver, ISO, and other styles
28

Jaako, J. (Janne). "Developing EHD-model for mobile game evaluation." Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201706022410.

Full text
Abstract:
Heuristics are a much researched, scientifically studied way to solve usability issues in user interfaces. However, the way of conducting usability evaluation is based on abstract models. As heuristics as guidelines are abstract and their interpretation depends on the researcher, the gap between scientifically created heuristics and the practitioners' methods of developing games has grown. This study introduces the Explicit Heuristics Design-model (EHD). It has been developed by using design science research methodology. This model can be implemented by using heuristic evaluation and expert evaluation methods. The EHD-model enables the extraction of explicit heuristics data from games. The hypothesis proposed a connection between the heuristics and the revenue. The final evaluation phase of the EHD-model revealed that a correlation between the heuristics and the revenue generated by Android mobile fighting games exists. The EHD-model was also able to point out the most important heuristics affecting the generated revenue. This study reduces the gap between the scientific community and practitioners. It offers the scientific community a new view and results concerning heuristics, and offers practitioners a tool to enhance the profitability of a game.
APA, Harvard, Vancouver, ISO, and other styles
29

Onen, Ahmet. "Model-Based Grid Modernization Economic Evaluation Framework." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/46981.

Full text
Abstract:
A smart grid cost/benefit analysis answers a series of economic questions that address the incremental benefits of each stage or decision point. Each stage of the economic analysis provides information about the incremental benefits of that stage with respect to the previous stage. With this approach, stages that provide little or no economic benefit can be identified. In this study, a series of applications, including quasi-steady-state power flows over time-varying loads and costs of service, Monte Carlo simulations, reconfiguration for restoration, and coordinated control, is used to evaluate the cost-benefits of a series of smart grid investments. In the electric power system planning process, engineers seek to identify the most cost-effective means of serving the load within reliability and power quality criteria. In order to accurately assess the cost of a given project, the feeder losses must be calculated. In the past, the feeder losses were estimated based upon the peak load and a calculated load factor for the year. The cost of these losses would then be calculated based upon an expected, fixed per-kWh generation cost. This dissertation presents a more accurate means of calculating the cost of losses, using hourly feeder load information and time-varying electric energy cost data. The work here attempts to quantify the improvement in accuracy and presents an example where the economic evaluation of a planning project requires the more accurate loss calculation. Smart grid investments can also affect response to equipment failures, where there are two types of response to consider: blue-sky day and storm. Storm response and power restoration can be very expensive for electric utilities. The deployment of automated switches can benefit the utility by decreasing storm restoration hours. The automated switches also improve system reliability by decreasing customer interruption duration.
In this dissertation a Monte Carlo simulation is used to mimic storm equipment failure events, followed by reconfiguration for restoration and power flow evaluations. The Monte Carlo simulation is driven by actual storm statistics taken from 89 different storms, where equipment failure rates are time-varying. The customer outage status and durations are examined. Changes in reliability for the system with and without automated switching devices are investigated. Time-varying coordinated control of Conservation Voltage Reduction (CVR) is implemented. The coordinated control runs in the control center and makes use of measurements from throughout the system to determine control settings that move the system toward optimum performance as the load varies. The coordinated control provides set points to local controllers. A major difference between the coordinated control and local control is that the set points provided by the coordinated control are time-varying. Reductions in energy use and losses under coordinated control are compared with local control. Eliminating low-voltage problems with coordinated control is also addressed. An overall economic study is implemented in the final stage of the work. A series of five evaluations of the economic benefits of smart grid automation investments is presented. Benefits that can be quantified in terms of dollar savings, referred to here as "hard dollar" benefits, are considered. Smart Grid investment evaluations include investments in improved efficiency, more cost-effective use of existing system capacity with automated switches, and coordinated control of capacitor banks and voltage regulators. These Smart Grid evaluations are sequentially ordered, resulting in a series of incremental hard dollar benefits. Hard dollar benefits come from improved efficiency, delaying large capital equipment investments, shortened storm restoration times, and reduced customer energy use.
The evaluation shows that when time varying loads are considered in the design, investments in automation can improve performance and significantly lower costs resulting in "hard dollar" savings.
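The storm analysis described above can be caricatured in a few lines. This is a toy Monte Carlo sketch under invented hourly failure rates and repair times, not the dissertation's model: time-varying failure rates drive outage events, and automated switching shortens each restoration by an assumed factor:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical hourly equipment failure rates over a 12-hour storm.
hourly_rate = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 2.5,
                        2.0, 1.5, 1.0, 0.6, 0.3, 0.1])

def simulate_storm(n_trials=2000, switch_speedup=1.0):
    """Mean total customer interruption hours per storm.
    switch_speedup < 1 models faster automated reconfiguration."""
    totals = np.empty(n_trials)
    for i in range(n_trials):
        failures = rng.poisson(hourly_rate)            # failures per hour
        repair = rng.exponential(4.0, failures.sum())  # manual repair hours
        totals[i] = (repair * switch_speedup).sum()
    return totals.mean()

manual = simulate_storm()
automated = simulate_storm(switch_speedup=0.5)
```

Comparing `manual` and `automated` gives an incremental "hard dollar" quantity once interruption hours are priced; the real study additionally reconfigures the network and re-solves power flows after each failure.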
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
30

Kirk, Kristin Cherish. "Assessing Nonprofit Websites: Developing an Evaluation Model." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/94581.

Full text
Abstract:
Nonprofit organizations are pivotal actors in society, and their websites can play important roles in aiding organizations in their socially-beneficial missions by serving as a platform to present information, to interact with stakeholders and to perform online transactions. This dissertation analyzed nonprofit websites in the United States (U.S.) and in Thailand in a series of three articles. The first developed a website evaluative instrument, based on an e-commerce model, and applied it to nonprofit websites through a manual decoding process. That article's findings suggested that Thai websites are not considerably different than U.S. nonprofit websites, except more American websites offer online transactions. The second article analyzed two different types of nonprofits in Thailand using the same model to assess website development in an emerging market. That analysis suggested local Thai nonprofits' websites lagged significantly behind those of internationally connected nonprofit organizations in the country in the features they offered. The third article compared the adapted model employed in the second analysis, which used manual decoding for website examination, to a commercially available, automated evaluation service. That analysis highlighted the differences between the two assessment tools and found them to be complementary, but independently insufficient to ensure robust nonprofit website evaluation.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
31

Johnson, Darius R. "Model-assisted Nondestructive Evaluation for Microstructure Quantification." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1430394815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hayes, Thomas S. "Evaluation of a refined lattice dome model." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/56187.

Full text
Abstract:
A general review of lattice dome geometry and connection details leads to a modeling approach that introduces intermediate elements to represent connections. The method provides improved modeling of joint behavior and flexibility for comparative studies. The discussion of lattice domes is further specialized for parallel lamella geometry. A procedure is developed for minimizing the number of different member lengths. This procedure is incorporated into a program, which generates the geometric data for a specified dome. The model is developed from a background which considers commercial space frame systems, static and dynamic loads, and modeling techniques using ABAQUS, a finite element program. An optional output of the generation program creates input data for ABAQUS. Modal analysis, static design loads, and earthquake loads are used in the evaluation of the model.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
33

Tekir, Selma Koltuksuz Ahmet. "An Implementation Model For Open Sources Evaluation/." [s.l.] : [s.n.], 2004. http://library.iyte.edu.tr/tezler/master/bilgisayaryazilimi/T000419.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jansson, Anna. "Evaluation of a New Lateral Boundary Condition in the MIUU Meso-Scale Model." Thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-302873.

Full text
Abstract:
The MIUU meso-scale model has been used to evaluate a new lateral boundary condition. The new lateral boundary condition is a combination of two lateral boundary conditions used in regional models, the flow relaxation scheme and the tendency modification scheme. The impact of different lateral boundary formulations on meso-scale phenomena, such as convective boundary layers, nocturnal jets, sea breezes and mountain waves (Bora winds) has been studied. When, for instance, stably stratified air with a constant wind speed is advected through the lateral boundaries into a meso-scale model with a flat and homogenous land surface, the convective boundary layer is reduced in height and the nocturnal jet is reduced in magnitude up to a distance of 750 km from the inflow lateral boundary. This is the case, when the most common lateral boundary condition is used, namely the flow relaxation scheme, where the flow relaxation parameter is constant with height and a function of the horizontal grid points only. In the other tests a flow relaxation parameter is used that is very small up to a certain level above ground, increasing with height to a maximum value higher up, and being constant above this upper level. Then, the convective boundary layer and the nocturnal jet are fully developed already at 23 km from the inflow lateral boundary. When, for instance, islands are not represented in the large-scale model, due to the coarse grid resolution, but well represented in the meso-scale model, stably stratified air can be advected into the meso-scale model even during daytime. Then, artificial thermal circulations can arise at the lateral boundaries of the meso-scale model, and collide with a real sea-breeze circulation that develops at the coast-line. These artificial thermal circulations disappear only when the flow relaxation parameter is very small in the lowest levels. When mountain waves (Bora winds) are simulated in a relatively small model domain, the critical layer, i.e. 
the layer where the nonlinear large-amplitude mountain wave is generated and broken, is surprisingly displaced irrespective of the tested lateral boundary formulations. In many simulations large-scale fields have to be introduced into meso-scale models. If only the flow relaxation scheme is used, the flow relaxation parameter has to be “constant-in-height” and relatively large in order to introduce large-scale temperature and wind changes with the right time-scale at all levels. However, with the new lateral boundary condition, the flow relaxation parameter can be kept very small in the lowest kilometers above ground. A small value of the flow relaxation parameter means that the convective boundary layer and the nocturnal jet at the lateral boundaries are not affected by the boundary conditions, and furthermore, no artificial thermal circulations are created. At the same time, large-scale temperature and wind changes are correctly introduced at all heights during the prescribed time into the meso-scale model through the tendency modification scheme.
The meso-scale MIUU model has been used to test different lateral boundary conditions. A new lateral boundary condition has been constructed. This new boundary condition is a combination of two boundary conditions, namely the flow relaxation scheme and the tendency modification scheme. The influence of different boundary-condition formulations on meso-scale phenomena such as convective boundary layers, nocturnal jets, sea breezes and mountain waves (Bora winds) has been studied. When stably stratified air with constant wind speed is advected through the lateral boundaries into a meso-scale model domain with a flat and homogeneous land surface, the height of the convective boundary layer and the strength of the nocturnal jet are affected by the boundary condition. The boundary condition can affect the temperature and velocity fields up to a distance of 750 km from the inflow boundary. This happens when the most common lateral boundary condition is used, namely the flow relaxation scheme. In this scheme the flow relaxation parameter is constant with height, i.e. a function of the horizontal grid points only. Sensitivity studies of the value and shape of the flow relaxation parameter have been performed. A flow relaxation parameter that is very small up to a certain level and then increases with height affects the temperature and velocity fields much less. The influence of the boundary condition is then minimal already at 23 km from the inflow boundary, and the convective boundary layer and the nocturnal jet can become fully developed. If, for example, islands that are well represented in the meso-scale model are not represented in the large-scale model due to its coarse resolution, stably stratified air can be advected into the meso-scale model domain even during daytime. An artificial thermal circulation can then arise at the lateral boundaries of the meso-scale model and collide with a real sea-breeze circulation. This can destroy the meso-scale model solution completely. 
This artificial thermal circulation disappears only when the flow relaxation parameter is very small in the lowest levels. When mountain waves (Bora winds) are simulated in a relatively small model domain, the critical layer, i.e. the layer where the nonlinear large-amplitude waves are generated and broken, is displaced compared with the reference case in which the lateral boundaries were far from the studied area. Surprisingly, this happens irrespective of which lateral boundary formulation is used. In many simulations, large-scale processes such as fronts and geostrophic wind changes must be introduced into the meso-scale model. If only the flow relaxation scheme is used, the flow relaxation parameter must be constant with height and relatively large in order to introduce large-scale temperature and wind changes into the meso-scale model with the right time constant and at all heights. In the new lateral boundary condition, the flow relaxation parameter need not be as large, nor constant with height. Temperature and wind changes are nevertheless correctly introduced with the exact time-scale at all levels into the meso-scale model. This is achieved through the so-called tendency modification scheme. Moreover, the convective boundary layer, the nocturnal jet and sea breezes can develop correctly near the lateral boundaries.
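The height-dependent flow relaxation parameter described above can be sketched numerically. This is a schematic, assuming a simple Davies-type blend of the interior solution toward the large-scale field at a lateral boundary column; the heights, maximum relaxation value and winds are invented for illustration:

```python
import numpy as np

# Model levels (km) and a relaxation parameter that is near zero in the
# lowest levels and grows to its maximum aloft, as in the new scheme.
z = np.linspace(0.0, 10.0, 11)                      # heights, km
alpha = np.clip((z - 2.0) / 6.0, 0.0, 1.0) * 0.5    # 0 below 2 km, 0.5 from 8 km up

u_model = np.full_like(z, 10.0)   # interior wind at a boundary column, m/s
u_large = np.full_like(z, 4.0)    # large-scale (host model) wind, m/s

# Davies-type relaxation step: nudge the boundary column toward the
# large-scale field, more strongly where alpha is large.
u_new = (1.0 - alpha) * u_model + alpha * u_large
```

The lowest levels are left untouched, so boundary layers and sea breezes can develop freely, while the upper levels are pulled toward the large-scale flow; in the combined scheme the tendency modification step then supplies the large-scale time changes at all heights.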
APA, Harvard, Vancouver, ISO, and other styles
35

Sochacki, Gustav. "Evaluation of Software Projects : A Recommendation for Implementation The Iterating Evaluation Model." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2935.

Full text
Abstract:
Software process improvement (SPI) is generally associated with large organizations. Large organizations have the possibility to fund software process improvement programs as large-scale activities. Often these improvement programs do not show progress until some time has elapsed. The Capability Maturity Model can take one year to implement, and not until then can measures be made to see how much quality increased. Small organizations do not have the same funding opportunities but are still in need of software process improvement programs. Generally it is better to initiate a software process improvement program as early as possible, no matter the size of the organization. Although the funding capabilities of small organizations are less than those of large organizations, the total required funding will still be smaller than in large organizations. The small organization will grow and over time become a midsized or large organization, so by starting an improvement program at an early stage the overall funding should be minimized. This becomes more visible when the organization has grown large. This master thesis presents the idea of implementing a software process improvement program, or at least parts of it, by evaluating the software project. By evaluating a project, the specific needs that are most critical are implemented in the next project. This process is iterated for each concluded project. The master thesis introduces the Iterating Evaluation Model based on an interview survey. This model is compared to an already existing model, the Experience Factory.
APA, Harvard, Vancouver, ISO, and other styles
36

Clouse, Randy Wayne. "Evaluation of GLEAMS considering parameter uncertainty." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/44516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Karasneh, Abed Al-Fatah A. "An integrated model of knowledge management systems." Thesis, University of Exeter, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Schneider, Sebastian Stefan. "Model for the evaluation of engineering design methods." München Verl. Dr. Hut, 2008. http://d-nb.info/99356934X/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Grenier, Amanda. "Home care : evaluation of a case management model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0023/MQ50698.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hammer, Roger Elliot Jr. "Wholesale replenishment models: model evaluation." Thesis, 1987. http://hdl.handle.net/10945/22584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Perez-Breva, Luis, and Osamu Yoshimi. "Model Selection in Summary Evaluation." 2002. http://hdl.handle.net/1721.1/7181.

Full text
Abstract:
A difficulty in the design of automated text summarization algorithms is in the objective evaluation. Viewing summarization as a tradeoff between length and information content, we introduce a technique based on a hierarchy of classifiers to rank, through model selection, different summarization methods. This summary evaluation technique allows for broader comparison of summarization methods than the traditional techniques of summary evaluation. We present an empirical study of two simple, albeit widely used, summarization methods that shows the different usages of this automated task-based evaluation system and confirms the results obtained with human-based evaluation methods over smaller corpora.
APA, Harvard, Vancouver, ISO, and other styles
42

Awn, Najmy. "Evaluation of the cuprizone model." Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-000D-F0D7-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lee, Hsin-Wei, and 李欣衛. "Online store platform evaluation model." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/55423234511331466703.

Full text
Abstract:
Master's thesis
Chinese Culture University
Department of Information Management
Academic year 99 (ROC calendar)
The rapid development of e-commerce in the Internet age has gradually shifted mass consumer habits toward online shopping, and many traditional shops, seeing the huge potential of this market, have invested in online stores to develop new business opportunities. As online stores multiply, selecting an appropriate online store platform becomes very important. In addition, since the services offered differ from platform to platform, helping platform providers improve their platforms' functionality is also a purpose of this study. Building on previous studies of the key factors for assessing small online store platforms, this study constructs an evaluation model that helps owners of traditional small businesses, or Internet entrepreneurs opening an online shop, select an appropriate platform. Using the Analytic Hierarchy Process (AHP), the study creates an evaluation model for online store platforms, establishing the main evaluation criteria and sub-criteria from the relevant literature and from existing platform services, and then evaluates well-known domestic online store platform operators (Yahoo! Kimo Super Mall, PChome Shopping Street, and Rakuten Ichiba) to identify a suitable platform. The results show that transaction security is the dimension online store owners emphasize most, so platform operators should strengthen their transaction security mechanisms to prevent data leakage. In addition, cash flow (payment) services receive considerable attention from store owners; improving them increases customer convenience and the intention to purchase quickly.
APA, Harvard, Vancouver, ISO, and other styles
44

Cai, Feng-Yu, and 蔡逢裕. "A Software Risk Evaluation Model." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/12492570628659922435.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Department of Information Management
Academic year 90 (ROC calendar)
With the opening of Southeast Asian markets, more and more high-technology manufacturers have moved abroad. As hardware manufacturing techniques mature, manufacturing costs decline and profits shrink, so the core of the information industry is shifting from hardware manufacturing to semiconductors, software, and marketing. Among these, the software industry not only performs impressively in its own right but can also be combined with hardware to produce high value-added products, making it a future core industry for the nation. How to reduce software risk is therefore an extremely important subject. In the field of risk management, many scholars concentrate on identifying the most common risk factors to help project managers carry out their management activities. However, two matters are neglected. First, software development processes are not all the same: each stage of development should emphasize different risk factors if the degree of software risk is to be understood correctly and properly. Second, when project managers assess software risk using those factors, they lack an objective reference. The purpose of this research is to build a software risk evaluation model. We collect the scores managers assign when evaluating software projects by intuition and, much as the finances of an individual or corporation are judged in the financial domain, build a model for evaluating software risk. The model evaluates software risk at each stage of the system development life cycle: system selection and planning, system analysis, system design, and system implementation and operation, so that risky projects can be identified early. The results show that the model's estimated error probability lies between 0.28 and 0.36, indicating that project managers can identify high-risk projects by intuition.
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Mei-Chien, and 劉美倩. "To build the model of Information Asset Evaluation Model." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/04963390921631948111.

Full text
Abstract:
Master's thesis
Chinese Culture University
In-service Master's Program, Graduate Institute of Information Management
Academic year 92 (ROC calendar)
Calculating intangible assets touches on four dimensions: intellectual capital, accounting, risk assessment of assets (BS 7799), and IT baseline protection management. All four provide standards for analyzing and evaluating assets, and each contains many evaluation rules. This research surveys the asset definitions and measurements related to these dimensions in order to build an information asset evaluation model that can be used to estimate asset values in the market. The model integrates the collected methodologies according to a standard evaluation process, taking into account security risk assessment factors, the protection value of intangible IT assets, evaluation methods, and knowledge values. The research proposes five key factors that influence the evaluation model: calculating asset incurrence, selecting a suitable way to measure asset rental, assessing risk, determining how much can still be invested in the asset, and deciding whether the asset should be replaced by a new one. Taking all these factors into consideration makes it possible to evaluate the true value of an asset.
APA, Harvard, Vancouver, ISO, and other styles
46

Guo, Dennis, and 吳武泰. "A Study on the Fire Suppression Evaluation Model Used in the Fire Risk Evaluation Computer Model." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/92611093069430990408.

Full text
Abstract:
Master's thesis
Central Police University
Graduate Institute of Fire Science
Academic year 88 (ROC calendar)
According to statistical analysis of the past ten years' fire records, residential fire is the most severe fire problem in Taiwan. On the other hand, as building features grow more complex and research on performance-based design and codes progresses, more cases can be designed alternatively by choosing the optimum combination of passive and active fire protection measures based on cost-benefit analysis, which is the key factor in the evaluation of fire risk and its potential hazards. There are still difficulties in using the alternative design method, owing both to the complex relationships between the factors and to the different ways people quantify them; many different studies and applications have been completed even under the same theory. The better we understand the phenomena of fire growth, smoke movement, and human behavior, the more easily we can establish mathematical or computer models that are more accurate and complex than earlier ones. Also, thanks to the higher performance of personal computers, the PC has recently become popular in fire risk evaluation and fire hazard assessment. In fire safety science, it is very difficult to determine the overall safety level of a building. The traditional way to evaluate a building's fire safety level considers mostly built-in factors, such as fire resistance, building maintenance, fire protection or fire suppression equipment, and fire load, without considering external factors such as the fire brigade. The FiRECAM (Fire Risk Evaluation and Cost Assessment Model) computer program was developed by the National Research Council of Canada in cooperation with the Victoria University of Technology. This software can calculate the fire risk of residential and office buildings while taking the effectiveness of the fire brigade into account.
In this research, FiRECAM was selected as a research tool, used together with other studies and evaluation methods of fire suppression effectiveness. Data and parameters of local fire brigades were collected and input into the sub-models of FiRECAM to enhance our building fire risk assessment technique. The results can be applied to performance-based code systems and to alternative designs by designers, and can also help the authorities evaluate whether the fire brigade is capable of handling a building fire in its district.
APA, Harvard, Vancouver, ISO, and other styles
47

郭建廷. "A Study on the Fire Suppression Evaluation Model Used in the Fire Risk Evaluation Computer Model." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/01449411809252032851.

Full text
Abstract:
Master's thesis
Central Police University
Department of Fire Science
Academic year 87 (ROC calendar)
According to statistical analysis of the past ten years' fire records, residential fire is the most severe fire problem in Taiwan. On the other hand, as building features grow more complex and research on performance-based design and codes progresses, more cases can be designed alternatively by choosing the optimum combination of passive and active fire protection measures based on cost-benefit analysis, which is the key factor in the evaluation of fire risk and its potential hazards. There are still difficulties in using the alternative design method, owing both to the complex relationships between the factors and to the different ways people quantify them; many different studies and applications have been completed even under the same theory. The better we understand the phenomena of fire growth, smoke movement, and human behavior, the more easily we can establish mathematical or computer models that are more accurate and complex than earlier ones. Also, thanks to the higher performance of personal computers, the PC has recently become popular in fire risk evaluation and fire hazard assessment. In fire safety science, it is very difficult to determine the overall safety level of a building. The traditional way to evaluate a building's fire safety level considers mostly built-in factors, such as fire resistance, building maintenance, fire protection or fire suppression equipment, and fire load, without considering external factors such as the fire brigade. The FiRECAM™ (Fire Risk Evaluation and Cost Assessment Model) computer program was developed by the National Research Council of Canada in cooperation with the Victoria University of Technology. This software can calculate the fire risk of residential and office buildings while taking the effectiveness of the fire brigade into account.
In this research, FiRECAM™ was selected as a research tool, used together with other studies and evaluation methods of fire suppression effectiveness. Data and parameters of local fire brigades were collected and input into the sub-models of FiRECAM™ to enhance our building fire risk assessment technique. The results can be applied to performance-based code systems and to alternative designs by designers, and can also help the authorities evaluate whether the fire brigade is capable of handling a building fire in its district.
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Kun-Hong. "An evaluation of capacity amplification model." 2004. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0009-0112200611292883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lin, Yang-Chun, and 林陽君. "The Evaluation Model for Factory Tour." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/53487294295398766388.

Full text
Abstract:
Master's thesis
National Kaohsiung Hospitality College
Graduate Institute of Travel Management
Academic year 97 (ROC calendar)
Traditional manufacturing once created an economic miracle for Taiwan but has now lost its original competitive advantage. Facing international market competition, traditional manufacturers are forced to transform into other industries. Abroad, manufacturers facing the trends of internationalization and service orientation have combined their factories with tourism and transformed into a modern leisure industry. Although traditional factories need to transform into factory tours, no evaluation model for factory tours has yet been developed; this paper therefore focuses on developing such a model. The methodology uses the Analytic Hierarchy Process (AHP) to analyze the indicators of four dimensions. The results show that "natural resource attractions" is the most important dimension, followed by "potential for market development", "ability to operate and manage tourism", and "creating customer value". Within "natural resource attractions", "product uniqueness" is the most important indicator; within "potential for market development", "association with the tourism industry" ranks highest; and within "creating customer value", "interpretation skill" ranks highest. This study builds a model whose indicators can serve practitioners in evaluation and provide reference material for factory tour research.
APA, Harvard, Vancouver, ISO, and other styles
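Several of the abstracts above derive criteria weights with the Analytic Hierarchy Process. As a rough illustration of that weighting step, the sketch below computes AHP priority weights from a pairwise comparison matrix via row geometric means and checks Saaty's consistency ratio; the 3x3 matrix, the criteria it stands for, and all values are hypothetical and do not come from any of the cited theses:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria,
# e.g. "resource attractions" vs "market potential" vs "management
# ability". Entry A[i, j] says how much more important criterion i
# is than criterion j on Saaty's 1-9 scale. Values are illustrative.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

def ahp_weights(A):
    """Approximate the principal eigenvector via row geometric means."""
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return gm / gm.sum()

def consistency_ratio(A, w):
    """Saaty's CR: lambda_max from A @ w, CI = (lambda_max - n)/(n - 1), CR = CI/RI."""
    n = A.shape[0]
    lam = np.mean((A @ w) / w)            # estimate of lambda_max
    ci = (lam - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index for size n
    return ci / ri

w = ahp_weights(A)
cr = consistency_ratio(A, w)
print(w, cr)  # weights sum to 1; CR < 0.1 is conventionally acceptable
```

In this toy matrix the first criterion dominates, so it receives the largest weight; judgments with CR above 0.1 would normally be revised before the weights are used.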
50

Wang, Chun-Ping, and 王鈞平. "The Evaluation Model for Resort Hotel." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/42412433612167010475.

Full text
Abstract:
Master's thesis
Nanhua University
Graduate Institute of Tourism Management
Academic year 92 (ROC calendar)
The major purpose of this research is to create evaluation indices and a method for assessing the development potential of leisure hotels and resorts. Based on the literature and the characteristics of the hotel industry, we establish three evaluation dimensions (basic factors, resources and potential, and basic facilities) and thirteen evaluation criteria, and use the Analytic Hierarchy Process (AHP) to derive the weight of each criterion. The results show that among the major criteria, "resources and potential" is the most important to scholars and experts, followed by "basic factors" and "basic facilities". Among the secondary criteria, "attraction of cultural landscape" deserves the most attention under "resources and potential"; "code limitations and law" carries the highest weight under "basic factors"; and under "basic facilities", "traffic accessibility" is the evaluation factor most necessary for successfully developing leisure hotels and resorts. The results provide a reference for institutions developing leisure hotels and resorts, as well as a development-potential evaluation model for future related research.
APA, Harvard, Vancouver, ISO, and other styles