Dissertations / Theses on the topic 'Accuracy of model'

To see the other types of publications on this topic, follow the link: Accuracy of model.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Accuracy of model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Fann, Chee Meng. "Development of an artillery accuracy model." Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FFann.pdf.

Full text
Abstract:
Thesis (M.S. in Engineering Science (Mechanical Engineering))--Naval Postgraduate School, December 2006.
Thesis Advisor(s): Morris Driels. "December 2006." Includes bibliographical references (p. 91). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
2

Gunner, J. C. "A model of building price forecasting accuracy." Thesis, University of Salford, 1997. http://usir.salford.ac.uk/26702/.

Full text
Abstract:
The purpose of this research was to derive a statistical model comprising the significant factors influencing the accuracy of a designer's price forecast and, as an aid, to provide a theoretical framework for further study. To this end, data comprising 181 building contract details were collected from the Singapore office of an international firm of quantity surveyors over the period 1980 to 1991. Bivariate analysis showed a number of independent variables having a significant effect on bias, which was in general agreement with previous work in this domain. The research also identified a number of independent variables having a significant effect on the consistency, or precision, of designers' building price forecasts. With information gleaned from the bivariate results, attempts were made to build a multivariate model which would explain a significant portion of the errors occurring in building price forecasts. The results of the models built were inconclusive because they failed to satisfy the assumptions inherent in ordinary least squares regression. The main failure in the models was in satisfying the assumption of homoscedasticity, that is, that the conditional variances of the residuals are equal around the mean. Five recognised methodologies were applied to the data in attempts to remove heteroscedasticity, but none were successful. A different approach to model building was then adopted and a tenable model was constructed which satisfied all of the regression assumptions and internal validity checks. The statistically significant model also revealed that the variable of Price Intensity was the sole underlying influence when tested against all other independent variables in the data of this work and after partialling out the effect of all other independent variables. From this, a Price Intensity theory of accuracy is developed, and a further review of the previous work in this field suggests that it may be of universal application.
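Since the modelling difficulty reported above centres on the homoscedasticity assumption of ordinary least squares, a brief hedged sketch of a standard check for it (a Breusch-Pagan test) may help readers; the variable names and simulated data below are illustrative assumptions, not the thesis's data.
```python
# Illustrative only: a Breusch-Pagan check for heteroscedasticity in an OLS
# price-forecast-error model. Column names (price_intensity, gfa) and the
# simulated data are assumptions, not taken from the thesis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "price_intensity": rng.uniform(500, 3000, 181),  # price per unit floor area
    "gfa": rng.uniform(1000, 50000, 181),            # gross floor area
})
# Simulated forecast error whose spread grows with price intensity
df["forecast_error"] = (0.002 * df["price_intensity"]
                        + rng.normal(0, 0.0005 * df["price_intensity"].to_numpy()))

X = sm.add_constant(df[["price_intensity", "gfa"]])
ols = sm.OLS(df["forecast_error"], X).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.4f}")  # small p => heteroscedasticity
```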
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Wenwei. "Enhancing model accuracy for control: two case studies." Free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Frazier, Alicia. "Accuracy and precision of a sectioned hollow model." Oklahoma City : [s.n.], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lehmann, Christopher, and Alexander Alfredsson. "Intrinsic Equity Valuation: An Empirical Assessment of Model Accuracy." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-30377.

Full text
Abstract:
The discounted cash flow model and relative valuation models are increasingly prevalent in today's investment-heavy environment. In other words, theoretically inferior models are used in practice. It is this paradox that has led us to compare the discounted cash flow model (DCFM), the discounted dividend model (DDM), the residual income-based model (RIVM) and the abnormal earnings growth model (AEGM) and their relative accuracy against observed stock prices. Adding to previous research, we investigate their performance in relation to the OMX30 index. What is more, we test how the performance of each model is affected by an extension of the forecast horizon. The study finds that AEGM outperforms the other models, both before and after extending the horizon. Our analysis was conducted by looking at accuracy, spread and the inherent speculative nature of each model. Taking all this into account, RIVM outperforms the other models. In this sense, one can question the rationale behind investors' decision to primarily use the discounted cash flow model in equity valuation.
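For orientation, here is a minimal sketch of one of the four models being compared, the residual income valuation model; the inputs (book value, earnings forecasts, cost of equity, payout assumption) are hypothetical and the terminal value is omitted for brevity, so this is not the authors' implementation.
```python
# Minimal residual income valuation (RIVM) sketch: equity value = book value
# plus discounted expected residual income. All inputs are hypothetical and
# the terminal value is omitted for brevity.
import numpy as np

book_value = 100.0                              # current book value per share
forecast_eps = np.array([12.0, 13.0, 14.5])     # forecast earnings per share
cost_of_equity = 0.09
payout_ratio = 0.5                              # assumed dividend payout

value, book = book_value, book_value
for t, eps in enumerate(forecast_eps, start=1):
    residual_income = eps - cost_of_equity * book      # earnings above required return
    value += residual_income / (1 + cost_of_equity) ** t
    book += eps * (1 - payout_ratio)                   # clean-surplus book value update
print(f"RIVM intrinsic value (ex terminal value): {value:.2f} per share")
```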
APA, Harvard, Vancouver, ISO, and other styles
6

Mitchinson, Pelham James. "Crowding indices : experimental methodology and predictive accuracy." Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bilen, Oytun Peksel. "Advanced Model of Acoustic Trim; Effect on NTF Accuracy." Thesis, KTH, MWL Marcus Wallenberg Laboratoriet, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-77768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hone, David M. "Time and space resolution and mixed layer model accuracy." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/9080.

Full text
Abstract:
The oceanic turbulent boundary layer is a critical region to understand for oceanic and atmospheric prediction. This thesis answers two fundamental questions: (1) what is the response of the ocean mixed layer system to transient forcing at the air-sea surface? (2) what is the necessary time and space resolution in an ocean mixed layer model to resolve important transient responses? Beginning with a replication of de Szoeke and Rhines' work, additional physical processes were added to include more realistic viscous dissipation and anisotropy in the three-dimensional turbulent kinetic energy (TKE) budget. These refinements resulted in modification of de Szoeke and Rhines' findings. Firstly, TKE unsteadiness is important for a minimum of 10^5 seconds. Secondly, viscous dissipation should not be approximated as simply proportional to shear production. Thirdly, entrainment shear production remains significant for a minimum of one pendulum day. The required temporal model resolution is dependent on the phenomena to be studied. This study focused on the diurnal, synoptic, and annual cycles, which the one-hour time step of the Naval Postgraduate School model adequately resolves. The study of spatial resolution showed, unexpectedly, that model skill was comparable for 1 m, 10 m and even 20 m vertical grid spacing.
APA, Harvard, Vancouver, ISO, and other styles
9

Tjoa, Robertus Tjin Hok. "Assessment of the accuracy of a computational casting model." Dissertation (Mechanical Engineering), Carleton University, Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Jacob Scott. "Accuracy of a Simplified Analysis Model for Modern Skyscrapers." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4055.

Full text
Abstract:
A new simplified skyscraper analysis model (SSAM) was developed and implemented in a spreadsheet to be used for preliminary skyscraper design and teaching purposes. The SSAM predicts linear and nonlinear response to gravity, wind, and seismic loading of "modern" skyscrapers which involve a core, megacolumns, outrigger trusses, belt trusses, and diagonals. The SSAM may be classified as a discrete method that constructs a reduced system stiffness matrix involving selected degrees of freedom (DOF's). The steps in the SSAM consist of: 1) determination of megacolumn areas, 2) construction of stiffness matrix, 3) calculation of lateral forces and displacements, and 4) calculation of stresses. Seven configurations of a generic skyscraper were used to compare the accuracy of the SSAM against a space frame finite element model. The SSAM was able to predict the existence of points of contraflexure in the deflected shape which are known to exist in modern skyscrapers. The accuracy of the SSAM was found to be very good for displacements (translations and rotations), and reasonably good for stress in configurations that exclude diagonals. The speed of execution, data preparation, data extraction, and optimization were found to be much faster with the SSAM than with general space frame finite element programs.
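The steps listed in the abstract (stiffness matrix, lateral forces, displacements) follow the usual reduced-DOF solve K u = f; the sketch below shows that sequence for a generic four-storey stick model with made-up stiffnesses and loads, and is not the SSAM spreadsheet itself.
```python
# Generic sketch of the "construct stiffness matrix -> apply lateral forces
# -> solve for displacements" sequence. Storey stiffnesses and loads are
# invented; this is not the SSAM described in the thesis.
import numpy as np

k = np.array([8.0e8, 7.5e8, 7.0e8, 6.5e8])   # storey stiffnesses (N/m), hypothetical
f = np.array([1.2e6, 1.1e6, 1.0e6, 0.9e6])   # lateral storey forces (N), hypothetical

n = len(k)
K = np.zeros((n, n))
for i in range(n):
    K[i, i] += k[i]
    if i + 1 < n:                            # coupling with the storey above
        K[i, i] += k[i + 1]
        K[i, i + 1] -= k[i + 1]
        K[i + 1, i] -= k[i + 1]

u = np.linalg.solve(K, f)                    # lateral displacements (m)
print(np.round(u * 1000, 2), "mm")
```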
APA, Harvard, Vancouver, ISO, and other styles
11

Kazan, Baran. "Additional Classes Effect on Model Accuracy using Transfer Learning." Thesis, Högskolan i Gävle, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-33970.

Full text
Abstract:
This empirical research study discusses how much a model's accuracy changes when a new image class is added, by using a pre-trained model with the same labels and measuring the precision of the previous classes to observe the changes. The purpose is to determine whether using transfer learning is beneficial for users who do not have enough data to train a model. The pre-trained model that was used to create a new model was Inception V3. It has the same labels as the eight different classes that were used to train the model. To test this model, classes of wild and non-wild animals were taken as samples. The algorithm used to train the model was implemented in a single class written in the Python programming language with the PyTorch and TensorBoard libraries. The TensorBoard library was used to collect and represent the results. The results showed that the accuracy of the first two classes was 94.96% in training and 97.07% in validation. When training the model with a total of eight classes, the accuracy was 91.89% in training and 95.40% in validation. The precision of both classes was 100% when the model solely had cat and dog classes. After adding six additional classes to the model, the precision changed to 95.82% for cats and 97.16% for dogs.
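A minimal transfer-learning sketch in the spirit of this study is shown below: a pre-trained Inception V3 is reused and only its final layers are replaced when the class count changes. The class count, frozen backbone and optimizer settings are illustrative assumptions, not the thesis's exact training code.
```python
# Hedged sketch: reuse a pre-trained Inception V3 and swap its classifier
# head for a new number of classes (e.g. after adding classes). Training
# details are illustrative, not the thesis's exact setup.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 8                        # e.g. 2 original classes + 6 added ones
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)

for p in model.parameters():           # freeze the pre-trained feature extractor
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_classes)                 # new head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...train only the new head on the enlarged class set, then compare per-class
# precision before and after adding classes, as the study does.
```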
APA, Harvard, Vancouver, ISO, and other styles
12

Vasudev, R. Sashin, and Ashok Reddy Vanga. "Accuracy of Software Reliability Prediction from Different Approaches." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1298.

Full text
Abstract:
Many models have been proposed for software reliability prediction, but none of them captures all the necessary software characteristics. We have proposed a mixed approach using both analytical and data-driven models to assess the accuracy of reliability prediction, based on a case study. This report follows a qualitative research strategy. Data were collected from a case study conducted at three different companies. Based on the case study, an analysis is made of the approaches used by the companies, together with other data related to the organizations' Software Quality Assurance (SQA) teams. Of the three organizations, the first two used for the case study are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data were collected by interviewing an employee of each organization who leads a team and has been in a managing position for at least the last two years.
APA, Harvard, Vancouver, ISO, and other styles
13

Miles, Luke G. "Global Digital Elevation Model Accuracy Assessment in the Himalaya, Nepal." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1313.

Full text
Abstract:
Digital Elevation Models (DEMs) are digital representations of surface topography or terrain. Collection of DEM data can be done directly through surveying and taking ground control point (GCP) data in the field, or indirectly with remote sensing using a variety of techniques. The accuracies of DEM data can be problematic, especially in rugged terrain or when differing data acquisition techniques are combined. For the present study, ground data were taken in various protected areas in the mountainous regions of Nepal. Elevation, slope, and aspect were measured at nearly 2000 locations. These ground data were imported into a Geographic Information System (GIS) and compared to DEMs created by NASA researchers using two data sources: the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Slope and aspect were generated within a GIS and compared to the GCP ground reference data to evaluate the accuracy of the satellite-derived DEMs, and to determine the utility of elevation and derived slope and aspect for research such as vegetation analysis and erosion management. The SRTM and ASTER DEMs each have benefits and drawbacks for various uses in environmental research, but generally the SRTM system was superior. Future research should focus on refining these methods to increase error discrimination.
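The accuracy figures quoted (metres per land-cover class) are the kind of statistic produced by comparing DEM elevations at the GCP locations with the surveyed values; a hedged sketch with synthetic data is given below, and it is not the author's workflow.
```python
# Synthetic sketch of a per-land-cover vertical accuracy (RMSE) check of a
# DEM against surveyed ground control points. Data and class names are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
gcp = pd.DataFrame({
    "surveyed_z": rng.uniform(1000, 5000, n),                     # field elevations (m)
    "cover": rng.choice(["forest", "urban", "water", "mountain"], n),
})
gcp["dem_z"] = gcp["surveyed_z"] + rng.normal(0, 12, n)           # synthetic DEM error

gcp["error"] = gcp["dem_z"] - gcp["surveyed_z"]
rmse_by_cover = gcp.groupby("cover")["error"].apply(lambda e: np.sqrt((e ** 2).mean()))
print(rmse_by_cover.round(2))
print("overall RMSE (m):", round(np.sqrt((gcp["error"] ** 2).mean()), 2))
```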
APA, Harvard, Vancouver, ISO, and other styles
14

FULTON, JOHN PATRICK. "A SPATIAL MODEL FOR EVALUATING VARIABLE-RATE FERTILIZER APPLICATION ACCURACY." UKnowledge, 2003. http://uknowledge.uky.edu/gradschool_diss/248.

Full text
Abstract:
The popularity of variable-rate technology (VRT) has grown. However, the limitations and errors of this technology are generally unknown. Therefore, a spatial data model was developed to generate "as-applied" surfaces to advance precision agricultural (PA) practices. A test methodology based on ASAE Standard S341.2 was developed to perform uniform-rate (UR) and variable-rate (VR) tests to characterize distribution patterns, testing four VRT granular applicators (two spinner spreaders and two pneumatic applicators). Single-pass UR patterns exhibited consistent shapes for three of the applicators, with pattern shifts observed for the fourth applicator. Simulated overlap analysis showed that three of the applicators performed satisfactorily with most CVs less than 20%, while one applicator performed poorly (CVs > 25%). The spinner spreaders over-applied at the margins but the pneumatic applicators under-applied, suggesting a required adjustment to the effective swath spacing. Therefore, it is recommended that CVs accompany overlap pattern plots to ensure proper calibration of VRT application. Quantification of the rate response characteristics for the various applicators illustrated varying delay and transition times. Only one applicator demonstrated consistent delay and transition times. A sigmoidal function was used to model the rate response for applicators. One applicator exhibited a linear response during a decreasing rate change. Rate changes were quicker for the two newer VR control systems, signifying advancement in hydraulic control valve technology. This research illustrates the need for standard testing protocols for VRT systems to help guide VRT software developers, equipment manufacturers, and users. The spatial data model uses GIS functionality to merge applicator descriptive patterns with a spatial field application file (FAF) to generate an "as-applied" surface representing the actual distribution of granular fertilizer. Field data was collected and used to validate the "as-applied" spatial model. Comparisons between the actual and predicted application rates for several fields were made, demonstrating good correlations for one applicator (several R2 > 0.70), moderate success for another applicator (0.60 < R2 < 0.66), and poor relationships for the third applicator (R2 < 0.49). A comparison of the actual application rates to the prescription maps generated R2 values between 0.16 and 0.81, demonstrating inconsistent VRT applicator performance. Thus, "as-applied" surfaces provide a means to properly evaluate VRT while enhancing researchers' ability to compare VR management approaches.
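The coefficient of variation (CV) criterion mentioned comes from overlapping a single-pass transverse spread pattern at the effective swath spacing; the short sketch below illustrates that calculation with an invented pattern and spacing, not the dissertation's measured data.
```python
# Illustration of an overlap CV calculation: a single-pass spread pattern is
# overlapped at a chosen swath spacing and the CV of the combined rate is
# reported. Pattern shape and spacing are invented for the example.
import numpy as np

x = np.linspace(-15, 15, 301)                 # transverse position (m)
single_pass = np.exp(-(x / 8.0) ** 2)         # hypothetical spread pattern

swath = 12.0                                  # effective swath spacing (m)
offsets = np.arange(-3, 4) * swath            # neighbouring passes
combined = sum(np.interp(x, x + d, single_pass, left=0, right=0) for d in offsets)

cv = 100 * combined.std() / combined.mean()   # CV of the overlapped rate, percent
print(f"overlap CV = {cv:.1f}%")              # <20% is the satisfactory range cited
```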
APA, Harvard, Vancouver, ISO, and other styles
15

De, Lange Billy. "High accuracy numerical model of the SALT mirror support truss." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/18042.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2011.
ENGLISH ABSTRACT: Although a numerical model of the mirror support truss of the Southern African Large Telescope (SALT) was already developed during its design, this thesis focuses on the development of the methods and techniques that would result in a more accurate numerical model of the actual structure, one that could be used as a basis for a numerical control system. This control system will compensate for deflections in the structure by adjusting the positioning of the individual mirror segments of the primary mirror. The two main components from which the support truss is constructed are the steel nodes and the struts that connect to them. For this project a smaller, simpler laboratory model was designed and built to have geometrical properties similar to those of the support truss. The methods and techniques that were investigated were carried out on this model. By using numerical design optimisation techniques, improved numerical models of the different strut types were obtained. This was done by performing tests on the struts so that their actual responses could be obtained. Numerical models of the struts were then created and set up so that they could be optimised using structural optimisation software. Once accurate strut models had been obtained, they were used to construct a numerical model of the assembled structure. No additional optimisation was performed on the assembled structure, and tests were done on the physical structure to obtain its responses. These served as validation criteria for the numerical models of the struts. Because of unforeseen deformations of the structure, not all of the measured structural responses could be used. The remaining results showed, however, that the predictive accuracy of the top-node displacement of the assembled structure improved to below 1.5%, from over 60%. From these results it was concluded that the accuracy of the entire structure's numerical model could be significantly improved by optimising the individual strut types.
APA, Harvard, Vancouver, ISO, and other styles
16

Rooney, Thomas J. A. "On improving the forecast accuracy of the hidden Markov model." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/22977.

Full text
Abstract:
The forecast accuracy of a hidden Markov model (HMM) may be low due, first, to the measure of forecast accuracy being ignored in the parameter-estimation method and, second, to overfitting caused by the large number of parameters that must be estimated. A general approach to forecasting is described which aims to resolve these two problems and so improve the forecast accuracy of the HMM. First, the application of extremum estimators to the HMM is proposed. Extremum estimators aim to improve the forecast accuracy of the HMM by minimising an estimate of the forecast error on the observed data. The forecast accuracy is measured by a score function and the use of some general classes of score functions is proposed. This approach contrasts with the standard use of a minus log-likelihood score function. Second, penalised estimation for the HMM is described. The aim of penalised estimation is to reduce overfitting and so increase the forecast accuracy of the HMM. Penalties on both the state-dependent distribution parameters and the transition probability matrix are proposed. In addition, a number of cross-validation approaches for tuning the penalty function are investigated. Empirical assessment of the proposed approach on both simulated and real data demonstrated that, in terms of forecast accuracy, penalised HMMs fitted using extremum estimators generally outperformed unpenalised HMMs fitted using maximum likelihood.
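The "extremum estimator" idea, choosing parameters by minimising an estimate of forecast error rather than maximising a likelihood, can be illustrated without the full HMM machinery. The sketch below applies it to a deliberately simple exponential-smoothing forecaster with a squared-error score; it is a stand-in for the idea only, not the thesis's penalised HMM.
```python
# Stand-in illustration of an extremum estimator: pick the parameter that
# minimises a one-step-ahead forecast score on the observed data, instead of
# maximising a likelihood. The forecaster here is simple exponential
# smoothing, not an HMM, and the series is synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0, 1, 300)) + rng.normal(0, 0.5, 300)   # synthetic series

def one_step_mse(alpha, series):
    level, errors = series[0], []
    for obs in series[1:]:
        errors.append(obs - level)                 # forecast is the current level
        level = alpha * obs + (1 - alpha) * level  # update after observing obs
    return np.mean(np.square(errors))

res = minimize_scalar(one_step_mse, bounds=(0.01, 0.99), args=(y,), method="bounded")
print(f"alpha minimising the forecast score: {res.x:.3f}")
```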
APA, Harvard, Vancouver, ISO, and other styles
17

Ok, Ali Ozgun. "Accuracy Assessment of the DEM and Orthoimage Generated from ASTER." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606588/index.pdf.

Full text
Abstract:
In this study, DEMs and orthoimages were generated from ASTER imagery and their accuracies were assessed. The study site covers an area of approximately 60 x 60 km and encloses the city of Ankara. First, DEMs were generated from stereo ASTER images. In order to find the best GCP combination, different numbers of GCPs (8, 16, 24, and 32) were used. The accuracies of the generated DEMs were then assessed based on the check points (CP), slopes and land cover types. It was found that 16 GCPs were a good compromise to produce the most accurate DEM. The post-processing and blunder removal increased the overall accuracy by up to 38%. It was also found that there is a strong linear relationship between the accuracies of DEMs and the slopes of the terrain. The accuracies computed for water, urban, forest, mountainous, and other areas were found to be 5.01 m, 8.03 m, 12.69 m, 17.14 m, and 10.21 m, respectively. The overall accuracy was computed as 10.92 m. The orthorectification of the ASTER image was carried out using 12 different mathematical models. Based on the results, the models First Order 2D Polynomial, Direct Linear Transformation and First Order Polynomial with Relief produced the worst results. On the other hand, the model Second Order Rational Function appears to be the best model to orthorectify ASTER images. However, the developed model Second Order Polynomial with Relief provides simplicity and consistency and requires fewer GCPs when compared to the Second Order Rational Function model.
APA, Harvard, Vancouver, ISO, and other styles
18

Yongtao, Yu. "Exchange rate forecasting model comparison: A case study in North Europe." Thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154948.

Full text
Abstract:
In the past, many studies comparing exchange rate forecasting models have been carried out. Most of these studies reach a similar result: the random walk model has the best forecasting performance. In this thesis, I want to find a model that beats the random walk model in forecasting the exchange rate. In my study, the vector autoregressive model (VAR), the restricted vector autoregressive model (RVAR), the vector error correction model (VEC), and the Bayesian vector autoregressive model are employed in the analysis. These multivariate time series models are compared with the random walk model by evaluating the forecasting accuracy of the exchange rate for three North European countries, both in the short term and the long term. For the short term, it can be concluded that the random walk model has the best forecasting accuracy. However, for the long term, the random walk model is beaten. An equal-accuracy test confirms that this difference really exists.
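A hedged sketch of the kind of horse race described, out-of-sample RMSE of a VAR forecast against a no-change (random walk) forecast, is shown below; the two simulated series stand in for an exchange rate and a fundamental, and the lag length is an arbitrary choice.
```python
# Sketch: compare out-of-sample RMSE of a VAR forecast with a random-walk
# (no-change) forecast. Series are simulated stand-ins for an exchange rate
# and a fundamental; the lag order is an arbitrary assumption.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
n = 200
fundamental = np.cumsum(rng.normal(0, 1, n))
fx = 0.3 * fundamental + np.cumsum(rng.normal(0, 1, n))
data = np.column_stack([fx, fundamental])

train, test = data[:150], data[150:]
res = VAR(train).fit(2)

h = len(test)
var_fc = res.forecast(train[-res.k_ar:], steps=h)[:, 0]   # forecast of fx
rw_fc = np.full(h, train[-1, 0])                          # random walk: last observed value

rmse = lambda f: np.sqrt(np.mean((test[:, 0] - f) ** 2))
print(f"VAR RMSE: {rmse(var_fc):.3f}   random-walk RMSE: {rmse(rw_fc):.3f}")
```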
APA, Harvard, Vancouver, ISO, and other styles
19

Horn, Sandra L. "Aggregating Form Accuracy and Percept Frequency to Optimize Rorschach Perceptual Accuracy." University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1449513233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Modin, Larsson Jim. "Predictive Accuracy of Linear Models with Ordinal Regressors." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-273958.

Full text
Abstract:
This paper considers four approaches to ordinal predictors in linear regression to evaluate how these contrast with respect to predictive accuracy. The two most typical treatments, namely dummy coding and classic linear regression on assigned level scores, are compared with two improved methods: penalized smoothed coefficients and a generalized additive model with cubic splines. A simulation study is conducted to assess all four on the basis of predictive performance. Our results show that the dummy-based methods surpass the numeric ones at low sample sizes, although, as sample size increases, the differences between the methods diminish. Tendencies of overfitting are identified among the dummy methods. We conclude by stating that the choice of method ought not only to be context-driven, but made in the light of all these characteristics.
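The two baseline treatments compared, numeric level scores versus dummy coding of an ordinal predictor, can be contrasted in a few lines; the simulation below is only a toy stand-in for the paper's study, with an invented nonlinear effect and train/test split.
```python
# Toy contrast of the two baseline treatments of an ordinal predictor:
# numeric level scores vs. dummy coding, judged by out-of-sample MSE.
# Data-generating process and train/test split are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
level = rng.integers(1, 6, n)                  # ordinal predictor with 5 levels
y = np.sqrt(level) + rng.normal(0, 0.3, n)     # mildly non-linear true effect

train, test = slice(0, 300), slice(300, n)
X_num = sm.add_constant(level.astype(float))
X_dum = sm.add_constant(pd.get_dummies(level, drop_first=True).astype(float).values)

for name, X in [("numeric scores", X_num), ("dummy coding", X_dum)]:
    fit = sm.OLS(y[train], X[train]).fit()
    mse = np.mean((y[test] - fit.predict(X[test])) ** 2)
    print(f"{name}: test MSE = {mse:.4f}")
```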
APA, Harvard, Vancouver, ISO, and other styles
21

Hakoyama, Shotaro. "Rater Characteristics in Performance Evaluation Accuracy." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1399905636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Bilodeau, Bernard. "Accuracy of a truncated barotropic spectral model : numerical versus analytical solutions." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Huang, Junxiong. "A model for translation accuracy evaluation and measurement: a quantitative approach." PhD thesis, Australia: Macquarie University, 2008. http://hdl.handle.net/1959.14/82531.

Full text
Abstract:
"2007"
Thesis (PhD)--Macquarie University, Division of Linguistics and Psychology, Dept. of Linguistics, 2008.
Bibliography: p. 303-317.
Introduction -- Literature review -- Identification of the unit of translation -- Towards a model for standardized TQA -- Mean criteria of the world -- Creating the mark deduction scheme -- Testing the model -- Applying the model -- Conclusion.
Translation quality assessment (TQA) has been part of the translating process since Marcus Tullius Cicero (106-43 BCE), and earnest studies on TQA have been conducted for several decades, but there has been no breakthrough in standardized TQA. Though the importance of TQA has been stressed, agreement on specific means of TQA has not been reached. As Chesterman and Wagner summarize, "Central to translation [...]," "[q]uality assessment is so complicated - especially if it is to be objective and reproducible" (2002: 80-81). The approaches to TQA published throughout the past millennia, by and large, are qualitative. "Whereas there is general agreement on the requirement for a translation to be 'good,' 'satisfactory,' or 'acceptable,' the definition of acceptability and of the means of determining it are matters of ongoing debate and there is precious little agreement on specifics" (Williams, 2004: xiv). Most published TQA approaches are neither objective nor reproducible. -- My study proposes a model for fuzzy standardized TQA through a quantitative approach, which expresses TQA results in numerical terms in a consistent manner. My model is statistics-based, practice-based and practice-oriented. It has been independently tested by eleven professors from four countries, fifteen senior United Nations translators, and fifty reader evaluators. My contrastive analysis of 23,000 pages of bilingual and multilingual texts has identified the unit of translation - the orthographic sentence in context - which is also verified by the results of an international survey among 66 professional translators, the majority of whom also confirm that they evaluate translations sentence by sentence in context. Halliday and Matthiessen's functional grammar theory, among others, provides my model for quantitative TQA with its theoretical basis, while the international survey provides the necessary data. My model proposes a set of six Fuzzy Functional Translation Grammar terms, a grammar concept general enough to cover all grammar units in the translated orthographic sentence. Each term represents one type of error which contains from one to three sub-categories. Each error is assigned a value - the mean of the professional markers' deductions for relevant artificial errors and original errors. A marking scheme with sixteen variables under eight attributes is thus created. Ten marks are assigned to each unit of TQA, the sentence. For easy calculation, an arithmetic formula popularly used in statistics (Σx/n) is adopted. With the assistance of a simple calculator, the evaluator can calculate the grade of a sentence, a sentence group, and the overall grade for an entire TT, regardless of its length. -- Perfect reliability or validity in any form of measurement is unattainable. There will always be some random error or noise in the data (McClendon, 2004: 7). Since it is the first of its type, I do not claim that my model is perfect. Variation has been found in the results of the testing performed by scholars and professional translators, but further testing based on two "easy" (markers' comment) sentences by the 50 reader evaluators respectively achieves 98% and 100% consistency, which indicates that markers' competence may equal constancy or that proper marker training and/or strict marker examination will minimize inconsistency among professional markers.
My model, whose formulas withstand testing at the theoretical level and in practice, is not only ready for application, but it has profound implications beyond TQA, such as use in machine translation, and for other subjects like the role of the sentence in translation studies and translating practice.
Mode of access: World Wide Web.
317 leaves
APA, Harvard, Vancouver, ISO, and other styles
24

Ogawa, Hiroyuki. "Testing the accuracy of a three-dimensional acoustic coupled mode model." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Horii, M. Michael. "A Predictive Model for Multi-Band Optical Tracking System (MBOTS) Performance." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579658.

Full text
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
In the wake of sequestration, Test and Evaluation (T&E) groups across the U.S. are quickly learning to make do with less. For Department of Defense ranges and test facility bases in particular, the timing of sequestration could not be worse. Aging optical tracking systems are in dire need of replacement. What's more, the increasingly challenging missions of today require advanced technology, flexibility, and agility to support an ever-widening spectrum of scenarios, including short-range (0 − 5 km) imaging of launch events, long-range (50 km+) imaging of debris fields, directed energy testing, high-speed tracking, and look-down coverage of ground test scenarios, to name just a few. There is a pressing need for optical tracking systems that can be operated on a limited budget with minimal resources, staff, and maintenance, while simultaneously increasing throughput and data quality. Here we present a mathematical error model to predict system performance. We compare model predictions to site-acceptance test results collected from a pair of multi-band optical tracking systems (MBOTS) fielded at White Sands Missile Range. A radar serves as a point of reference to gauge system results. The calibration data and the triangulation solutions obtained during testing provide a characterization of system performance. The results suggest that the optical tracking system error model adequately predicts system performance, thereby supporting pre-mission analysis and conserving scarce resources for innovation and development of robust solutions. Along the way, we illustrate some methods of time-space-position information (TSPI) data analysis, define metrics for assessing system accuracy, and enumerate error sources impacting measurements. We conclude by describing technical challenges ahead and identifying a path forward.
APA, Harvard, Vancouver, ISO, and other styles
26

Jonsson, Eskil. "Ice Sheet Modeling: Accuracy of First-Order Stokes Model with Basal Sliding." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-360245.

Full text
Abstract:
Some climate models still lack features such as dynamical modelling of ice sheets because of its computational cost, which results in poor accuracy in estimates of, for example, sea level rise. The need for low-cost, high-order models initiated the development of the First-Order Stokes (or Blatter-Pattyn) model, which retains much of the accuracy of the full-Stokes model but is also cost-effective. This model has proven accurate for ice sheets and glaciers with frozen bedrocks, or no-slip basal boundary conditions. However, experimental evidence seems to be lacking regarding its accuracy under sliding, or stress-free, bedrock conditions (ice-shelf conditions). Hence, it became of interest to investigate this. Numerical experiments were set up by formulating the first-order Stokes equations as a variational finite element problem, followed by implementing them using the open-source FEniCS framework. Two types of geometries were used, with both no-slip and slip basal boundary conditions. Specifically, experiments B and D from the Ice Sheet Model Intercomparison Project for Higher-Order ice sheet Models (ISMIP-HOM) were used to benchmark the model. Local model errors were investigated and a convergence analysis was performed for both experiments. The results yielded an inherent model error of about 0.06% for ISMIP-HOM B and 0.006% for ISMIP-HOM D, mostly relating to the different types of geometries used. Errors in stress-free regions were greater and varied on the order of 1%. This was deemed fairly accurate, and probably enough justification to replace models such as the Shallow Shelf Approximation with the First-Order Stokes model in some regions. However, more rigorous tests with real-world geometries may be warranted. Also noteworthy were inconsistent results in the vertical velocity under slippery conditions (ISMIP-HOM D), which could be due either to coding errors or to an inherent problem with the decoupling of the horizontal and vertical velocities in the First-Order Stokes model. This should be further investigated.
APA, Harvard, Vancouver, ISO, and other styles
27

Do, Changhee. "Improvement in accuracy using records lacking sire information in the animal model." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Qiu, Yi. "An investigation into the microplane constitutive model for concrete." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Oertel, David [Verfasser]. "Deep-Sea Model-Aided Navigation Accuracy for Autonomous Underwater Vehicles Using Online Calibrated Dynamic Models / David Oertel." München : Verlag Dr. Hut, 2018. http://d-nb.info/1156510554/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hughes, Alistair Paul. "The accuracy of linear flux models in predicting reaction rate profiles in a model biochemical reaction system." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/9116.

Full text
Abstract:
Includes bibliographical references
Metabolic flux analysis (MFA) is commonly used in the modelling of biochemical reactions. MFA models have attracted considerable interest because of the simplicity of the computational procedures required and because difficult-to-measure intracellular reaction data can be excluded. There are many examples of the use of MFA models in the literature across a number of applications, ranging from the medical industry through to the development of novel biochemical processes. However, little to no mention is made in these studies of the applicability of the MFA model to a specified set of reaction data. Furthermore, the techniques and routines used to compute the flux models are not well described. The objectives of this research were to determine the sensitivity of MFA models to various operating and kinetic parameters and to highlight the considerations required when setting up the computational routine used to solve the flux balances. The study was conducted using a model pathway populated with a set of hypothetical elemental reactions and branch points. The model pathway was used to negate the effects of complex regulatory biochemical architectures, which are not well described in the literature. The use of the model pathway ensured that the reaction system was thermodynamically feasible and that the mass balances were consistent. The exclusion of the complex regulatory reactions did not affect the accuracy of the results generated in this study. A set of reaction mechanisms was used to describe each reaction step, populated with parameters referenced from the literature. The cellular and reactor mass balances were generated using correlations presented in the literature.
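The flux-balance calculation that MFA rests on, solving S v = 0 at steady state with some exchange fluxes measured, can be shown on a toy network; the three-metabolite example below is invented for illustration and is not the thesis's model pathway.
```python
# Toy flux balance: at steady state S v = 0; unknown fluxes are solved from
# measured exchange fluxes. The network and measurements are invented.
import numpy as np

# Rows: metabolites A, B, C.  Columns (reactions):
#   r0: -> A,  r1: A -> B,  r2: A -> C,  r3: B ->,  r4: C ->
S = np.array([
    [ 1, -1, -1,  0,  0],
    [ 0,  1,  0, -1,  0],
    [ 0,  0,  1,  0, -1],
], dtype=float)

measured = {0: 10.0, 3: 6.0}                 # hypothetical uptake of A, secretion via B
free = [j for j in range(S.shape[1]) if j not in measured]

# Move measured fluxes to the right-hand side and solve for the remaining ones.
b = -S[:, list(measured)] @ np.array(list(measured.values()))
v_free, *_ = np.linalg.lstsq(S[:, free], b, rcond=None)
print(dict(zip(free, np.round(v_free, 3))))  # expect r1 = 6, r2 = 4, r4 = 4
```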
APA, Harvard, Vancouver, ISO, and other styles
31

LaFond, Lee James. "Decision consistency and accuracy indices for the bifactor and testlet response theory models." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1346.

Full text
Abstract:
The primary goal of this study was to develop a new procedure for estimating decision consistency and accuracy indices using the bifactor and testlet response theory (TRT) models. This study is the first to investigate decision consistency and accuracy from a multidimensional perspective, and the results have shown that the bifactor model at least behaved in a way that met the author's expectations and represents a potentially useful procedure. The TRT model, on the other hand, did not meet the author's expectations and generally showed poor model performance. The multidimensional decision consistency and accuracy indices proposed in this study appear to provide good performance, at least for the bifactor model, in the case of a substantial testlet effect. For practitioners examining a test containing testlets for decision consistency and accuracy, a recommended first step is to check for dimensionality. If the testlets show a significant degree of multidimensionality, then the proposed multidimensional indices can be recommended, as the simulation study showed an improved level of performance over unidimensional IRT models. However, if there is not a significant degree of multidimensionality, then the unidimensional IRT models and indices would perform as well as, or even better than, the multidimensional models. Another goal of this study was to compare methods for numerical integration used in the calculation of decision consistency and accuracy indices. This study investigated a new method (the M method) that samples ability estimates through a Monte Carlo approach. In summary, the M method seems to be just as accurate as the other commonly used methods for numerical integration. However, it has some practical advantages over the D and P methods. As previously mentioned, it is not nearly as computationally intensive as the D method. Also, the P method requires large sample sizes. In addition, the P method has a conceptual disadvantage in that the conditioning variable, in theory, should be the true theta, not an estimated theta. The M method avoids both of these issues and seems to provide equally accurate estimates of decision consistency and accuracy indices, which makes it a strong option, particularly in multidimensional cases.
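The Monte Carlo ("M method") style of numerical integration can be illustrated in a generic, unidimensional form: abilities are sampled at random instead of evaluated at fixed quadrature points, and consistency and accuracy are averaged over the draws. The normal measurement model, cut score and conditional SEM below are assumptions for illustration, not the bifactor or TRT procedure itself.
```python
# Generic Monte-Carlo illustration of decision consistency/accuracy:
# sample abilities, compute the conditional pass probability, and average.
# Measurement model, cut score and SEM are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
cut, sem = 0.5, 0.4                        # hypothetical cut score and conditional SEM

theta = rng.normal(0, 1, 100_000)          # sampled abilities (the "M" step)
p_pass = 1 - norm.cdf(cut, loc=theta, scale=sem)     # P(observed score >= cut | theta)

consistency = np.mean(p_pass ** 2 + (1 - p_pass) ** 2)          # same decision twice
accuracy = np.mean(np.where(theta >= cut, p_pass, 1 - p_pass))  # decision matches truth
print(f"consistency ~ {consistency:.3f}, accuracy ~ {accuracy:.3f}")
```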
APA, Harvard, Vancouver, ISO, and other styles
32

Kang, Inhan. "Modeling the Interaction of Numerosity and Perceptual Variables with the Diffusion Model." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555421458277728.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Xue. "Incorporating chromatin interaction data to improve prediction accuracy of gene expression." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/589.

Full text
Abstract:
Genome structure can be classified into three categories: primary structure, secondary structure and tertiary structure, and all three are important for gene transcription regulation. In this research, we utilize the structural information to characterize the correlations and interactions among genes, and incorporate such information into a Linear Mixed-Effects (LME) model to improve the accuracy of gene expression prediction. In particular, we use chromatin features as predictors and each gene is an observation. Before model training and testing, genes are grouped according to the genome structural information. We use four gene grouping methods: 1) grouping genes according to sliding windows on the primary structure; 2) grouping anchor genes in the chromatin loop structure; 3) grouping genes in the CTCF-anchored domain; and 4) grouping genes in the chromatin domains obtained from Hi-C experiments. We compare the prediction accuracy between the LME model and a linear regression model. If all chromatin feature predictors are included in the models, based on the primary structure only (Method 1), the LME models improve prediction accuracy by up to 1%. Based on the tertiary structure only (Methods 2-4), for the genes that can be grouped according to the tertiary interaction data, the LME models improve prediction accuracy by up to 2.1%. For individual chromatin feature predictors, the LME models improve prediction accuracy by 2% to 26%, and the improvement is more significant for chromatin features that have lower original predictive ability. For future research we propose a model that combines the primary and tertiary structure to infer the correlations among genes to further improve the prediction.
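A rough sketch of the modelling setup, gene expression regressed on chromatin features with a random intercept for the gene group, is given below using statsmodels' MixedLM; the feature names, group structure and data are invented, and this is not the authors' code.
```python
# Hedged sketch: linear mixed-effects model of expression on chromatin
# features with a random intercept per gene group (e.g. a Hi-C domain).
# Feature names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_genes, n_groups = 600, 30
df = pd.DataFrame({
    "h3k4me3": rng.normal(0, 1, n_genes),
    "dnase": rng.normal(0, 1, n_genes),
    "domain": rng.integers(0, n_groups, n_genes),
})
domain_effect = rng.normal(0, 0.8, n_groups)[df["domain"]]
df["expression"] = (1.5 * df["h3k4me3"] + 0.7 * df["dnase"]
                    + domain_effect + rng.normal(0, 0.5, n_genes))

model = smf.mixedlm("expression ~ h3k4me3 + dnase", df, groups=df["domain"])
print(model.fit().summary())
```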
APA, Harvard, Vancouver, ISO, and other styles
34

Miller, Matthew Lowell. "Analysis of Viewshed Accuracy with Variable Resolution LIDAR Digital Surface Models and Photogrammetrically-Derived Digital Elevation Models." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/35692.

Full text
Abstract:
The analysis of visibility between two points on the earth's terrain is a common use of GIS software. Most commercial GIS software packages include the ability to generate a viewshed, or a map of terrain surrounding a particular location that would be visible to an observer. Viewsheds are often generated using "bare-earth" Digital Elevation Models (DEMs) derived from the process of photogrammetry. More detailed models, known as Digital Surface Models (DSMs), are often generated using Light Detection and Ranging (LIDAR), which uses an airborne laser to scan the terrain. In addition to having greater accuracy than photogrammetric DEMs, LIDAR DSMs include surface features such as buildings and trees. This project used a visibility algorithm to predict visibility between observer and target locations using both photogrammetric DEMs and LIDAR DSMs of varying resolution. A field survey of the locations was conducted to determine the accuracy of the visibility predictions and to gauge the extent to which the presence of surface features in the DSMs affected the accuracy. The use of different resolution terrain models allowed for the analysis of the relationship between accuracy and optimal grid size. Additionally, a series of visibility predictions were made using Monte Carlo methods to add random error to the terrain elevation to estimate the probability of a target's being visible. Finally, the LIDAR DSMs were used to determine the linear distance of terrain along the lines-of-sight between the observer and targets that were obscured by trees or bushes. A logistic regression was performed between that distance and the visibility of the target to determine the extent to which a greater amount of vegetation along the line-of-sight impacted the target's visibility.
Master of Science
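The Monte Carlo element of the viewshed analysis above can be sketched as repeated line-of-sight tests with random elevation error added to the terrain profile; the profile, observer/target heights and error standard deviation below are invented for illustration.
```python
# Simplified Monte-Carlo line-of-sight sketch: perturb the terrain profile
# with random elevation error and report how often the target stays visible.
# Profile, heights and error model are invented.
import numpy as np

rng = np.random.default_rng(7)
profile = np.array([100.0, 104, 109, 111, 108, 113, 117, 115, 112, 110])  # terrain (m)
obs_h, tgt_h = 2.0, 2.0                       # observer / target heights above ground

def visible(prof):
    z0, z1 = prof[0] + obs_h, prof[-1] + tgt_h
    line = z0 + (z1 - z0) * np.linspace(0, 1, len(prof))   # sight-line elevation
    return np.all(prof[1:-1] <= line[1:-1])

runs = 5000
hits = sum(visible(profile + rng.normal(0, 1.5, profile.size)) for _ in range(runs))
print(f"estimated probability the target is visible: {hits / runs:.2f}")
```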
APA, Harvard, Vancouver, ISO, and other styles
35

Guang, G.-J. "Model discretisation and accuracy assessment in an automated, adaptive finite element simulation system." Thesis, Swansea University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637183.

Full text
Abstract:
The finite element method has played an important role in helping the understanding of physics, from material mechanics to plasma flow, and is an extremely versatile tool for faster and better prototyping of today's industrial products, ranging from sub-micron semiconductor devices to large-scale flight vehicles and reservoir dams. The outstanding power of the finite element method lies in its capability to solve geometrically complicated problems. However, this capability can only be fulfilled by an appropriately constructed mesh. With the recent emergence of the adaptive finite element method, users are relieved from the difficulties involved in appropriate/optimal mesh design, and an automatic adaptive finite element analysis seems within reach. However, the realisation of adaptive finite element methods requires extensive theoretical and numerical development, together with, in order to properly integrate them into a smoothly operating system, the redesign of system philosophy and infrastructure. It is this aspect of the finite element method that makes a modern finite element system drastically different from the more traditional mesh-based ones. This thesis is on the design and development of such an automated, adaptive finite element simulation system. The emphasis is on its automation and adaptivity. Central to the system is the geometry-based philosophy. The system comprises two crucial procedures, namely, model discretisation and accuracy assessment. Mesh generation and mesh adaptation techniques are systematically reviewed. A geometry-based automatic 3D mesh generator, based on the two-stage scheme of the unstructured approach exploiting the novel Delaunay simplexification algorithm, has been researched and successfully developed. A mesh adaptor has also been developed to assume the responsibility of mesh adaptation. The mesh adaptor is a combination of the regeneration-based and node-based schemes of the h-adaptation approach. Other supporting modules such as the discretisation controller, automatic attribute assigner and solution mapper have also been developed to form the complete model discretisation procedure.
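The Delaunay step at the heart of the unstructured approach can be illustrated with SciPy's built-in triangulation; the example below is 2D for brevity (the thesis works in 3D, where the same call returns tetrahedra) and the point set is invented.
```python
# Minimal Delaunay illustration (2D for brevity; in 3D the same call yields
# tetrahedra). Boundary and interior points are invented.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(8)
boundary = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
interior = rng.random((50, 2))                 # hypothetical interior nodes
points = np.vstack([boundary, interior])

tri = Delaunay(points)                         # simplexification of the point set
print("number of triangles:", len(tri.simplices))
print("nodes of the first element:", tri.simplices[0])
```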
APA, Harvard, Vancouver, ISO, and other styles
36

Thalieb, Rio M. "An accuracy analysis of Army Material System Analysis Activity discrete reliability growth model." Thesis, Monterey, California : Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22849.

Full text
Abstract:
The accuracy of the discrete reliability growth model developed by the Army Material System Analysis Activity (AMSAA) is analysed. The mean, standard deviation, and 95 percent confidence interval of the estimate of reliability resulting from simulating the AMSAA discrete reliability growth model are computed. The mean of the reliability estimate from the AMSAA discrete reliability growth model is compared with the mean of the reliability estimate using the Exponential discrete reliability growth model developed at the Naval Postgraduate School, and with the actual reliability which was used to generate test data for the replications in the simulations. The testing plan simulated in this study assumes that mission tests (go/no-go) are performed until a predetermined number of failures occurs, at which time a modification is made. The main results are that the AMSAA discrete reliability growth model always performs well with concave growth patterns and has difficulty tracking the actual reliability when it has a convex or constant growth pattern and the number of failures specified equals one. Keywords: Reliability growth, Estimate, Mean, Standard deviation, Thesis
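The test plan simulated in the thesis, go/no-go trials until a fixed number of failures followed by a modification, can be sketched as below; the growth pattern and failure threshold are assumptions, and the AMSAA estimator itself is not reproduced here.
```python
# Data-generation sketch only (not the AMSAA estimator): run go/no-go trials
# until a fixed number of failures, modify, and step reliability up along an
# assumed growth pattern.
import numpy as np

rng = np.random.default_rng(9)
true_reliability = [0.70, 0.80, 0.87, 0.92]    # assumed (concave) growth pattern
failures_per_stage = 1                         # modify after this many failures

for stage, r in enumerate(true_reliability):
    trials = successes = failures = 0
    while failures < failures_per_stage:
        trials += 1
        if rng.random() < r:
            successes += 1
        else:
            failures += 1
    print(f"stage {stage}: {trials} trials, observed reliability {successes / trials:.2f}")
```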
APA, Harvard, Vancouver, ISO, and other styles
37

Porter, Jason L. "Comparison of intraoral and extraoral scanners on the accuracy of digital model articulation." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4881.

Full text
Abstract:
Introduction: Orthodontists increasingly rely on digital models in clinical practice. The ability of modern scanners to articulate digital models must be scientifically evaluated. Methods: Twenty-five digital articulated models were produced from four digital scanners in five experimental groups. The resulting inter-arch measurements were compared to the gold standard. An acceptable range of 0.5 mm more or less than the gold standard was used for evaluation. Results: iTero® and iTero® Element yielded all acceptable inter-arch measurements. The 3M™ True Definition and Ortho Insight 3D® with Regisil® bite registration produced four of six acceptable inter-arch measurements. The Ortho Insight 3D® with Coprwax™ bite registration yielded three of six acceptable inter-arch measurements. Conclusions: The iTero® and iTero® Element produced the most accurately articulated models. The 3M™ True Definition and Ortho Insight 3D® with Regisil® were the next most accurate. The Ortho Insight 3D® scanner with Coprwax™ was the least accurate method tested.
APA, Harvard, Vancouver, ISO, and other styles
38

Karimi, Arizo. "VARs and ECMs in forecasting – a comparative study of the accuracy in forecasting Swedish exports." Thesis, Uppsala University, Department of Economics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9223.

Full text
Abstract:

In this paper, the forecast performance of an unrestricted vector autoregressive (VAR) model was compared against the forecast accuracy of a vector error correction model (VECM) when computing out-of-sample forecasts for Swedish exports. The co-integrating relation used to estimate the error correction specification was based upon an economic theory of international trade suggesting that a long-run equilibrium relation among the variables included in an export demand equation should exist. The results obtained provide evidence of a long-run equilibrium relationship between the Swedish export volume and its main determinants. The models were estimated for manufactured goods using quarterly data for the period 1975-1999 and, once estimated, the models were used to compute out-of-sample forecasts up to four, eight and twelve quarters ahead for the Swedish export volume using both multi-step and one-step-ahead forecast techniques. The main results suggest that the differences in forecasting ability between the two models are small; however, according to the relevant evaluation criteria, the unrestricted VAR model in general yields somewhat better forecasts than the VECM when forecasting Swedish exports over the chosen forecast horizons.

APA, Harvard, Vancouver, ISO, and other styles
39

Dyussekeneva, Karima. "New product sales forecasting : the relative accuracy of statistical, judgemental and combination forecasts." Thesis, University of Bath, 2011. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.550612.

Full text
Abstract:
This research investigates three approaches to new product sales forecasting: statistical, judgmental and the integration of these two approaches. The aim of the research is to find a simple, easy-to-use, low-cost and accurate tool which can be used by managers to forecast the sales of new products. A review of the literature suggested that the Bass diffusion model was an appropriate statistical method for new product sales forecasting. For the judgmental approach, after considering different methods and constraints, such as bias, complexity, lack of accuracy, high cost and time involvement, the Delphi method was identified from the literature as a method which has the potential to mitigate bias and produce accurate predictions at a low cost in a relatively short time. However, the literature also revealed that neither of the methods, statistical or judgmental, can be guaranteed to give the best forecasts independently, and a combination of them is often the best approach to obtaining the most accurate predictions. The study aims to compare these three approaches by applying them to actual sales data. To forecast the sales of new products, the Bass diffusion model was fitted to the sales history of similar (analogous) products that had been launched in the past, and the resulting model was used to produce forecasts for the new products at the time of their launch. These forecasts were compared with forecasts produced through the Delphi method and also through a combination of statistical and judgmental methods. All results were also compared to benchmark levels of accuracy based on previous research, and to forecasts based on various combinations of the analogous products' historic sales data. Although no statistically significant difference was found in the accuracy of the forecasts produced by the three approaches, the results were more accurate than those obtained using parameters suggested by previous researchers. The limitations of the research are discussed at the end of the thesis, together with suggestions for future research.
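For the statistical leg of the comparison, the Bass diffusion model can be fitted to an analogous product's cumulative sales and then used to project the new product; the sketch below uses SciPy's curve_fit on a synthetic series, and the parameter values are illustrative rather than those estimated in the thesis.
```python
# Hedged sketch: fit the Bass diffusion model to (synthetic) cumulative sales
# of an analogous product, then project per-period sales for a new product.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, p, q, m):
    """Cumulative adoptions at time t under the Bass model."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(1, 21, dtype=float)
rng = np.random.default_rng(10)
sales_cum = bass_cumulative(t, 0.03, 0.4, 1000.0) + rng.normal(0, 10, t.size)

(p, q, m), _ = curve_fit(bass_cumulative, t, sales_cum, p0=[0.01, 0.3, 800.0])
print(f"estimated p = {p:.3f}, q = {q:.3f}, m = {m:.0f}")
forecast = np.diff(bass_cumulative(np.arange(0, 25, dtype=float), p, q, m))  # per period
```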
APA, Harvard, Vancouver, ISO, and other styles
40

Aguilar, Huacan Boris Abner. "Improving of the accuracy and efficiency of implicit solvent models in Biomolecular Modeling." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64409.

Full text
Abstract:
Biomolecular modeling plays an important role in many practical applications such as biotechnology and structure-based drug design. One of its essential requirements is an accurate description of the solvent (water). The challenge is to make this description computationally facile, that is, reasonably fast, simple, robust and easy to incorporate into existing software packages. The most rigorous procedure for modeling the effect of aqueous solvent is to explicitly model every water molecule in the system. For many practical applications this approach is computationally too intensive, as the number of required water atoms is on average an order of magnitude larger than the number of atoms of the molecule of interest. Implicit solvent models, in which solvent molecules are represented by a continuum function, have become a popular alternative to explicit solvent methods because they are computationally more efficient. The Generalized Born (GB) implicit solvent has become quite popular due to its relative simplicity and computational efficiency. However, recent studies showed serious deficiencies of many GB variants when applied to biomolecular modeling, such as an over-stabilization of alpha-helical secondary structures and salt bridges. In this dissertation we present two new GB models aimed at computing solvation properties with a reasonable compromise between accuracy and speed. The first GB model, called NSR6, is based on numerical surface integration over the standard molecular surface. When applied to a set of small drug-like molecules, NSR6 produced an accuracy, with respect to experiment, that is essentially at the same level as that of the expensive explicit solvent treatment. Furthermore, we developed an analytic GB model, called AR6, based on an approximation of the volume integral over the standard molecular volume. The accuracy of the AR6 model is tested relative to the numerically exact NSR6. Overall, AR6 produces good accuracy and is suitable for molecular dynamics simulations, which are the main intended application.
Ph. D.
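The NSR6 and AR6 models themselves are defined in the dissertation, but both build on the standard generalized Born energy expression. Purely as a point of reference, the sketch below evaluates the classic Still-style pairwise GB polar solvation energy, assuming partial charges, effective Born radii and an interatomic distance matrix are already available; the constants are the usual textbook values, not parameters of NSR6 or AR6.

import numpy as np

def gb_polar_energy(q, R, r, eps_in=1.0, eps_out=78.5):
    """Still-style generalized Born polar solvation energy.
    q: partial charges (e), R: effective Born radii (Angstrom),
    r: NxN interatomic distance matrix (Angstrom); result in kcal/mol."""
    ke = 332.06                      # Coulomb constant, kcal*Angstrom/(mol*e^2)
    RiRj = np.outer(R, R)
    f_gb = np.sqrt(r ** 2 + RiRj * np.exp(-r ** 2 / (4.0 * RiRj)))
    return -0.5 * ke * (1.0 / eps_in - 1.0 / eps_out) * np.sum(np.outer(q, q) / f_gb)

# Tiny two-atom example with invented charges, radii and separation.
q = np.array([0.4, -0.4])
R = np.array([1.6, 1.8])
r = np.array([[0.0, 3.0], [3.0, 0.0]])
print(gb_polar_energy(q, R, r))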
APA, Harvard, Vancouver, ISO, and other styles
41

Burdych, Filip. "Modelování predikce bankrotu stavebních podniků [Modelling bankruptcy prediction of construction companies]." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2017. http://www.nusl.cz/ntk/nusl-319227.

Full text
Abstract:
This master thesis deals with bankruptcy prediction models for construction companies doing business in the Czech Republic. Terms important for understanding the issue are defined in the theoretical part. In the analytical part, five current bankruptcy prediction models are tested on the analysed sample and the resulting accuracy is compared with the originally reported one. On the basis of the knowledge acquired, a brand-new bankruptcy prediction model is developed.
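The abstract does not name the five models tested, so purely as an illustration of how such an accuracy check works, the sketch below scores a small hypothetical sample of firms with the well-known Altman (1968) Z-score and counts how often the traditional distress threshold agrees with the observed outcome. The ratios and outcomes are invented; the Altman model is named here only as an example, not necessarily one of the five models from the thesis.

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classic Altman (1968) Z-score from five financial ratios."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

# Hypothetical sample: (ratios, actually went bankrupt)
firms = [
    ((0.10, 0.05, 0.03, 0.40, 1.10), True),
    ((0.25, 0.30, 0.12, 1.50, 1.80), False),
    ((-0.05, -0.10, 0.01, 0.20, 0.90), True),
]

# Z < 1.81 is the traditional "distress zone"; count correct classifications.
correct = sum((altman_z(*ratios) < 1.81) == bankrupt for ratios, bankrupt in firms)
print(f"accuracy = {correct / len(firms):.2f}")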
APA, Harvard, Vancouver, ISO, and other styles
42

Elmubarak, Mona. "Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM." University of the Western Cape, 2018. http://hdl.handle.net/11394/6472.

Full text
Abstract:
Magister Scientiae Dentium - MSc(Dent)
BACKGROUND: Plaster models form an integral part of the traditional orthodontic records. They are necessary for diagnosis and treatment planning, case presentations as well as for the evaluation of treatment progress. The accuracy of the measurements taken for space assessment is crucial prior to treatment planning. The introduction of digital models overcomes some problems experienced with plaster models. Digital models have been shown to be an acceptable alternative to plaster models. AIM: The aim of the study was to determine the accuracy of traditional measurement techniques when compared to CAD/CAM measurements in the assessment of tooth widths and arch perimeter from plaster models. METHOD: The mesio-distal tooth widths and arch perimeter of thirty archived plaster models were measured using a digital caliper to the nearest 0.01 mm and a divider to the nearest 0.1 mm. Corresponding digital models were produced by scanning the casts with a CAD/CAM system (InEos X5), and the space analysis was completed using InEos Blue software. Measurements were repeated one week after the initial measurement. The methods were compared using descriptive analysis (mean difference and standard deviation). RESULTS: Operator reliability was high for the digital models as well as for the plaster models when the measurement tool was the digital caliper (assessed using the Pearson correlation coefficient and the paired t-test). The mean values of the tooth width measurements for CAD/CAM, digital caliper and divider were 6.82 (±0.04), 6.94 (±0.04) and 7.11 (±0.04). There was a significant difference between the measurements made by the CAD/CAM and the divider, and significant differences between the digital caliper and divider measurements (p < 0.05) were also observed. No significant difference was found when comparing CAD/CAM to digital caliper. Positive correlations were displayed between CAD/CAM, digital caliper and divider, with the digital caliper measurements showing the highest correlation with the CAD/CAM; the difference between these two measurement tools was not significant (p > 0.05). Arch perimeter measurements showed no statistically significant difference between CAD/CAM, digital caliper and divider (p < 0.05). CONCLUSION: Archived plaster models stored as records can be converted to digital models, as the measurements will have the same accuracy. A space analysis can be performed with the CAD/CAM system on digital models with similar reliability to a caliper on plaster models.
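A minimal sketch of the kind of agreement analysis described here, using SciPy: the same teeth are measured with two methods and compared with a paired t-test, the Pearson correlation, and the mean difference with its standard deviation. The numbers below are invented stand-ins for the thesis data.

import numpy as np
from scipy import stats

# Hypothetical mesio-distal widths (mm) of the same teeth measured by two methods.
caliper = np.array([6.91, 7.02, 6.85, 7.10, 6.97, 6.88])
cadcam  = np.array([6.80, 6.93, 6.74, 6.99, 6.85, 6.79])

t_stat, p_value = stats.ttest_rel(caliper, cadcam)   # paired t-test
r, _ = stats.pearsonr(caliper, cadcam)               # correlation between methods
mean_diff = np.mean(caliper - cadcam)
sd_diff = np.std(caliper - cadcam, ddof=1)
print(t_stat, p_value, r, mean_diff, sd_diff)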
APA, Harvard, Vancouver, ISO, and other styles
43

Elmubarak, Mona. "Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM." University of the Western Cape, 2018. http://hdl.handle.net/11394/6521.

Full text
Abstract:
Magister Scientiae - MSc
Background: Plaster models form an integral part of the traditional orthodontic records. They are necessary for diagnosis and treatment planning, case presentations as well as for the evaluation of treatment progress. The accuracy of the measurements taken for space assessment is crucial prior to treatment planning. The introduction of digital models overcomes some problems experienced with plaster models. Digital models have been shown to be an acceptable alternative to plaster models. Aim: The aim of the study was to determine the accuracy of traditional measurement techniques when compared to CAD/CAM measurements in the assessment of tooth widths and arch perimeter from plaster models. Method: The mesio-distal tooth widths and arch perimeter of thirty archived plaster models were measured using a digital caliper to the nearest 0.01 mm and a divider to the nearest 0.1 mm. Corresponding digital models were produced by scanning the casts with a CAD/CAM system (InEos X5), and the space analysis was completed using InEos Blue software. Measurements were repeated one week after the initial measurement. The methods were compared using descriptive analysis (mean difference and standard deviation). Results: Operator reliability was high for the digital models as well as for the plaster models when the measurement tool was the digital caliper (assessed using the Pearson correlation coefficient and the paired t-test). The mean values of the tooth width measurements for CAD/CAM, digital caliper and divider were 6.82 (±0.04), 6.94 (±0.04) and 7.11 (±0.04). There was a significant difference between the measurements made by the CAD/CAM and the divider, and significant differences between the digital caliper and divider measurements (p < 0.05) were also observed. No significant difference was found when comparing CAD/CAM to digital caliper. Positive correlations were displayed between CAD/CAM, digital caliper and divider, with the digital caliper measurements showing the highest correlation with the CAD/CAM; the difference between these two measurement tools was not significant (p > 0.05). Arch perimeter measurements showed no statistically significant difference between CAD/CAM, digital caliper and divider (p < 0.05). Conclusion: Archived plaster models stored as records can be converted to digital models, as the measurements will have the same accuracy. A space analysis can be performed with the CAD/CAM system on digital models with similar reliability to a caliper on plaster models.
APA, Harvard, Vancouver, ISO, and other styles
44

Taba, Isabella Bahareh. "Improving eye-gaze tracking accuracy through personalized calibration of a user's aspherical corneal model." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/40247.

Full text
Abstract:
The eyes present us with a window through which we view the world and gather information. Eye-gaze tracking systems are the means by which a user's point of gaze (POG) can be measured and recorded. Despite active research in gaze tracking systems and major advances in this field, calibration remains one of the primary challenges in the development of eye tracking systems. In order to facilitate gaze measurement and tracking, eye-gaze trackers rely on simplifications in modeling the human eye. These simplifications include using a spherical corneal model and using population averages for eye parameters in place of individual measurements, but their use contributes to system errors and imposes inaccuracies on the process of point-of-gaze estimation. This research introduces a new one-time per-user calibration method for gaze estimation systems. The purpose of the calibration method developed in this thesis is to estimate individual eye parameters based on an aspherical corneal model. Replacing average measurements with individual measurements promises to improve the accuracy and reliability of the system. The approach presented in this thesis involves estimating eye parameters by statistical modeling through least squares curve fitting. Compared to a current approach referred to here as the Hennessey calibration method, this approach offers significant advantages, including improved, individual calibration. Through analysis and comparison of the new calibration method with the Hennessey calibration method, the research data presented in this thesis show an improvement in gaze estimation accuracy of approximately 27%. Research has shown that the average accuracy of the Hennessey calibration method is about 1.5 cm on an LCD screen at a distance of 60 cm, while the new system, as tested on eight different subjects, achieved an average accuracy of 1.1 cm. A statistical analysis (t-test) of the comparative accuracy of the new calibration method versus the Hennessey calibration method demonstrated that the new system represents a statistically significant improvement.
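The thesis estimates per-user eye parameters by least-squares curve fitting; the sketch below shows that general mechanic with scipy.optimize.least_squares on a deliberately simplified gaze model. The parameterization, features and fixation targets are toy stand-ins and not the aspherical corneal model actually developed in the thesis.

import numpy as np
from scipy.optimize import least_squares

def predicted_pog(params, features):
    """Toy gaze model mapping eye-tracker features to screen coordinates.
    params are the per-user parameters being calibrated (illustrative only)."""
    a, b, c, d = params
    x, y = features[:, 0], features[:, 1]
    return np.column_stack([a * x + b, c * y + d])

def residuals(params, features, targets):
    return (predicted_pog(params, features) - targets).ravel()

# Hypothetical calibration data: features recorded while the user fixates known screen points.
rng = np.random.default_rng(0)
features = rng.random((9, 2))
targets = 1.9 * features + 0.05          # known fixation targets (toy ground truth)

fit = least_squares(residuals, x0=[1.0, 0.0, 1.0, 0.0], args=(features, targets))
print(fit.x)                             # per-user parameters used for subsequent gaze estimation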
APA, Harvard, Vancouver, ISO, and other styles
45

Gramz, James. "Using Evolutionary Programming to increase the accuracy of an ensemble model for energy forecasting." Thesis, Marquette University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1554240.

Full text
Abstract:

Natural gas companies are always trying to increase the accuracy of their forecasts. We introduce evolutionary programming as an approach to forecasting natural gas demand more accurately. The created Evolutionary Programming Engine and Evolutionary Programming Ensemble Model use the current GasDay models, along with weather and historical flow, to create an overall forecast of the amount of natural gas a company will need to supply to its customers on a given day. The existing ensemble model uses the GasDay component models, then tunes their individual forecasts and combines them to create an overall forecast.

The inputs into the Evolutionary Programming Engine and Evolutionary Programming Ensemble Model were determined based on currently used inputs and domain knowledge about what variables are important for natural gas forecasting. The ensemble model design is based on if-statements that allow different equations to be used on different days to create a more accurate forecast, given the expected weather conditions.

This approach is compared to what GasDay currently uses based on a series of error metrics and comparisons on different types of weather days and during different months. Three different operating areas are evaluated, and the results show that the created Evolutionary Programming Ensemble Model is capable of creating improved forecasts compared to the existing ensemble model, as measured by Root Mean Square Error (RMSE) and Standard Error (Std Error). However, the if-statements in the ensemble models were not able to produce individually reasonable forecasts, which could potentially cause errant forecasts if a different set of if-statements is true on a given day.
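A minimal sketch of the evolutionary-programming idea behind this work: candidate ensemble weightings are mutated with Gaussian noise and the survivors with the lowest RMSE against observed demand are kept. The component forecasts, population size and mutation scale below are invented; GasDay's actual component models, inputs and if-statement structure are not public.

import numpy as np

rng = np.random.default_rng(0)

def rmse(pred, actual):
    return np.sqrt(np.mean((pred - actual) ** 2))

# Hypothetical daily forecasts from three component models (columns) and observed demand.
component_forecasts = rng.normal(100, 10, size=(365, 3))
actual = component_forecasts @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 2, 365)

def ensemble(weights):
    return component_forecasts @ weights

# Evolutionary programming: mutate weights, keep the better half of parents plus offspring.
pop = [rng.random(3) for _ in range(20)]
for _ in range(200):
    offspring = [np.clip(w + rng.normal(0, 0.05, 3), 0, None) for w in pop]
    pop = sorted(pop + offspring, key=lambda w: rmse(ensemble(w), actual))[:20]

best = pop[0]
print(best / best.sum(), rmse(ensemble(best), actual))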

APA, Harvard, Vancouver, ISO, and other styles
46

Williams, Brian J. "Effects of storm-related parameters on the accuracy of the nested tropical cyclone model." Thesis, Monterey, California. Naval Postgraduate School, 1986. http://hdl.handle.net/10945/21818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Le Bihan, Thomas. "Accuracy of PSA's DPF soot load estimator calibrated by means of a DoE model." Thesis, KTH, Maskinkonstruktion (Inst.), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145030.

Full text
Abstract:
The combustion process in the Diesel engine naturally produces soot. These soot emissions are known to be harmful both to the environment and especially to our health. Aware of this situation, governments worldwide have defined emission limits for vehicles. These emission regulations are continuously revised and have become harder and harder for car manufacturers to meet. In that context, PSA introduced the Diesel Particulate Filter (DPF) to the automotive industry on a large scale in 2000. The DPF technology chosen by PSA requires constant monitoring of the soot load in the filter. This monitoring is done by a module which estimates in real time the amount of soot emitted by the engine and, at the same time, the self-regenerated part. In order to work properly, this module needs to be calibrated beforehand on an engine test bench. This solution has shown great results so far, but the time needed in the test cell for the calibration is very long. One idea is therefore to use an engine model obtained by means of a Design of Experiments (DoE). This kind of model can provide data such as fuel consumption or pollutant emissions over a certain range of operating points. Nowadays, DoE engine models are used in the engine tuning phase, which occurs before the soot load estimator is calibrated. The available engine model was consequently used to simulate the tests originally done in the test cell, and the results of the simulations were used to calibrate the soot load estimator of the DPF. A potential problem of this method was that the engine operating points requested from the model would be close to its border of accuracy or even outside it. The results given by the model were first compared to those provided by a real engine. Being close enough, the data collected from the model were used to calibrate the soot load estimator. This estimator was finally tested on real driving cycles. The accuracy of its load estimation was compared to the DPF weight (before/after) and was promising enough for PSA to keep working in that direction after this thesis.
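The estimator described above is, at its core, a running mass balance over engine-out soot and passively regenerated soot. The sketch below shows that balance in its simplest form; the rates and the stored load are invented numbers, and PSA's actual calibrated maps are not reproduced here.

def update_soot_load(load_g, engine_out_rate_gps, regen_rate_gps, dt_s):
    """One time step of a simple DPF soot mass balance (grams)."""
    return max(0.0, load_g + (engine_out_rate_gps - regen_rate_gps) * dt_s)

# Hypothetical 1-second samples of engine-out soot and passive regeneration rates (g/s).
samples = [(2.0e-4, 0.0), (2.4e-4, 0.5e-4), (1.8e-4, 1.0e-4)]

load = 5.0  # grams assumed to be already stored in the filter
for engine_out, regen in samples:
    load = update_soot_load(load, engine_out, regen, dt_s=1.0)
print(f"estimated soot load: {load:.4f} g")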
APA, Harvard, Vancouver, ISO, and other styles
48

Christensen, Nikolaj Kruse, Ty Paul A. Ferre, Gianluca Fiandaca, and Steen Christensen. "Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy." COPERNICUS GESELLSCHAFT MBH, 2017. http://hdl.handle.net/10150/623198.

Full text
Abstract:
We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model.

The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we demonstrate the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that are beyond the calibration base: a pumping well's recharge area and groundwater age, respectively, are predicted by applying the same stress as for the hydrologic model calibration; and head and stream discharge are predicted for a different stress situation.

As expected, in this case the predictive capability of a groundwater model is better when it is based on a sharp geophysical model instead of a smoothness constraint. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that the model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.
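The key translation step in this workflow maps inverted resistivities to hydraulic conductivities through a spatially distributed petrophysical relationship whose shape factors are calibrated against the hydrological data. The exact functional form is defined in the paper; the power-law placeholder below is only meant to show where a calibrated shape factor enters, with invented resistivities, factors and exponent.

import numpy as np

def resistivity_to_K(rho_ohmm, shape_factor, exponent=1.6):
    """Illustrative petrophysical translation from resistivity to hydraulic conductivity (m/s).
    shape_factor is the spatially distributed, calibrated parameter; the exponent is assumed fixed."""
    return shape_factor * (rho_ohmm / 100.0) ** exponent

# Hypothetical voxel resistivities from the AEM inversion and per-zone shape factors.
rho = np.array([15.0, 40.0, 120.0])      # clay-rich to sand-rich deposits
shape = np.array([2e-4, 2e-4, 5e-4])     # calibrated jointly with the groundwater model
print(resistivity_to_K(rho, shape))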
APA, Harvard, Vancouver, ISO, and other styles
49

Rohde, Johannes Bernhard Rudolf [Verfasser]. "Essays on model risk : the role of volatility for the accuracy of financial risk models / Johannes Bernhard Rudolf Rohde." Hannover : Technische Informationsbibliothek (TIB), 2015. http://d-nb.info/1081965088/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Matzke, Nicholas J. "Probabilistic Historical Biogeography| New Models for Founder-Event Speciation, Imperfect Detection, and Fossils Allow Improved Accuracy and Model-Testing." Thesis, University of California, Berkeley, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3616487.

Full text
Abstract:

Historical biogeography has a diversity of methods for inferring ancestral geographic ranges on phylogenies, but many of the methods have conflicting assumptions, and there is no common statistical framework by which to judge which models are preferable. Probabilistic modeling of geographic range evolution, pioneered by Ree and Smith (2008, Systematic Biology) in their program LAGRANGE, could provide such a framework, but this potential has not been implemented until now.

I have created an R package, "BioGeoBEARS," described in chapter 1 of the dissertation, that implements in a likelihood framework several commonly used models, such as the LAGRANGE Dispersal-Extinction-Cladogenesis (DEC) model and the Dispersal-Vicariance Analysis (DIVA, Ronquist 1997, Systematic Biology) model. Standard DEC is a model with two free parameters specifying the rate of "dispersal" (range expansion) and "extinction" (range contraction). However, while dispersal and extinction rates are free parameters, the cladogenesis model is fixed, such that the geographic range of the ancestral lineage is inherited by the two daughter lineages through a variety of scenarios fixed to have equal probability. This fixed nature of the cladogenesis model means that it has been indiscriminately applied in all DEC analyses, and has not been subjected to any inference or formal model testing.

BioGeoBEARS also adds a number of features not previously available in most historical biogeography software, such as distance-based dispersal, a model of imperfect detection, and the ability to include fossils either as ancestors or tips on a time-calibrated tree.

Several important conclusions may be drawn from this research. First, formal model selection procedures can be applied in phylogenetic inferences of historical biogeography, and the relative importance of different processes can be measured. These techniques have great potential for strengthening quantitative inference in historical biogeography. No longer are biogeographers forced to simply assume, consciously or not, that some processes (such as vicariance or dispersal) are important and others are not; instead, this can be inferred from the data. Second, founder-event speciation appears to be a crucial explanatory process in most clades, the only exception being some intracontinental taxa showing a large degree of sympatry across widespread ranges. This is not the same thing as claiming that founder-event speciation is the only important process; founder event speciation as the only important process is inferred in only one case (Microlophus lava lizards from the Galapagos). The importance of founder-event speciation will not be surprising to most island biogeographers. However, the results are important nonetheless, as there are still some vocal advocates of vicariance-dominated approaches to biogeography, such as Heads (2012, Molecular Panbiogeography of the Tropics), who allows vicariance and range-expansion to play a role in his historical inferences, but explicitly excludes founder-event speciation a priori. The commonly-used LAGRANGE DEC and DIVA programs actually make assumptions very similar to those of Heads, even though many users of these programs likely consider themselves dispersalists or pluralists. Finally, the inclusion of fossils and imperfect detection within the same likelihood and model-choice framework clears the path for integrating paleobiogeography and neontological biogeography, strengthening inference in both.

Model choice is now standard practice in phylogenetic analysis of DNA sequences: a program such as ModelTest is used to compare models such as Jukes-Cantor, HKY, GTR+I+G, and to select the best model before inferring phylogenies or ancestral states. It is clear that the same should now happen in phylogenetic biogeography. BioGeoBEARS enables this procedure. Perhaps more importantly, however, is the potential for users to create and test new models. Probabilistic modeling of geographic range evolution on phylogenies is still in its infancy, and undoubtedly there are better models out there, waiting to be discovered. It is also undoubtedly true that different clades and different regions will favor different processes, and that further improvements will be had by linking the evolution of organismal traits (e.g., loss of flight) with the evolution of geographic range, within a common inference framework. In a world of rapid climate change and habitat loss, biogeographical methods must maximize both flexibility and statistical rigor if they are to play a role. This research takes several steps in that direction.

BioGeoBEARS is open-source and is freely available at the Comprehensive R Archive Network (http://cran.r-project.org/web/packages/BioGeoBEARS/index.html). A step-by-step tutorial, using the Psychotria dataset, is available at PhyloWiki (http://phylo.wikidot.com/biogeobears).

(Abstract shortened by UMI.)
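The model comparison that BioGeoBEARS enables comes down to likelihood-based criteria such as AIC. BioGeoBEARS itself is an R package; the snippet below only illustrates the AIC and Akaike-weight arithmetic, in Python, using hypothetical maximized log-likelihoods for DEC and a founder-event variant (DEC+J). The numerical values are invented and will differ for any real dataset.

import math

# Hypothetical maximized log-likelihoods and free-parameter counts (d, e [, j]).
models = {"DEC": (-78.4, 2), "DEC+J": (-69.1, 3)}

aic = {name: 2 * k - 2 * lnL for name, (lnL, k) in models.items()}
best = min(aic.values())
raw = {name: math.exp(-0.5 * (a - best)) for name, a in aic.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}
print(aic, weights)   # lower AIC and higher Akaike weight indicate the preferred model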

APA, Harvard, Vancouver, ISO, and other styles
