Dissertations / Theses on the topic 'Common accuracy of the model'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Common accuracy of the model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Бакун, Сабіна Антонівна. "Система оцінки кредитоспроможності фізичних осіб з використанням методів регресійного аналізу." Master's thesis, Київ, 2018. https://ela.kpi.ua/handle/123456789/23984.

Full text
Abstract:
Theme: "System for evaluating the creditworthiness of individuals using regression analysis methods". Master's thesis explanatory note: 107 p., 32 fig., 32 tab., 5 appendices, 19 sources. Relevance: the consumer lending market in Ukraine is growing rapidly, and with it the number of unreturned loans, which causes considerable losses to banking institutions. The development and application of systems for assessing the creditworthiness of individuals when deciding whether to issue a loan is therefore a timely problem. The purpose of this work is to study and improve existing methods of constructing scoring models and to develop a decision support system for assessing the creditworthiness of individuals using logistic regression. The object of the study is a set of statistical data on consumer loans issued by a bank to individuals. Research methods: logistic regression, maximum likelihood estimation, gradient descent, and matrix operations. The software product was implemented in the C# programming language in the Microsoft Visual Studio 2012 development environment. For comparative analysis of the results, decision-tree models and a scorecard were built in the SAS Enterprise Miner system. Results obtained: a decision support system for predicting the creditworthiness of individuals was developed using logistic regression and maximum likelihood estimation, and an approach for using categorical data in regression models was proposed.
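The abstract describes a scoring pipeline built on logistic regression fitted by maximum likelihood via gradient descent. As a rough illustration of that combination only (in Python rather than the thesis's C#, with invented data and hypothetical feature names), a minimal sketch might look like this:

```python
# Minimal sketch (not the thesis's system): logistic-regression scoring fitted by
# gradient descent on the negative log-likelihood. All data here are synthetic.
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit a logistic regression by gradient descent; X is (n, p), y holds 0/1 labels."""
    X = np.column_stack([np.ones(len(X)), X])   # add an intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probability of default
        grad = X.T @ (p - y) / len(y)           # gradient of mean negative log-likelihood
        w -= lr * grad
    return w

def credit_score(X, w):
    """Return the model's probability estimates for new applicants."""
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))

# Toy usage with two hypothetical applicant features (e.g. income and debt ratio).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)
w = fit_logistic(X, y)
print(credit_score(X[:5], w))   # creditworthiness scores for the first five applicants
```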
APA, Harvard, Vancouver, ISO, and other styles
2

Dusitsin, Krid, and Kurt Kosbar. "Accuracy of Computer Simulations that use Common Pseudo-random Number Generators." International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/609238.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California
In computer simulations of communication systems, linear congruential generators and shift registers are typically used to model noise and data sources. These generators are often assumed to be close to ideal (i.e. delta correlated), and an insignificant source of error in the simulation results. The samples generated by these algorithms have non-ideal autocorrelation functions, which may cause a non-uniform distribution in the data or noise signals. This error may cause the simulation bit-error-rate (BER) to be artificially high or low. In this paper, the problem is described through the use of confidence intervals. Tests are performed on several pseudo-random generators to assess which ones are acceptable for computer simulation.
APA, Harvard, Vancouver, ISO, and other styles
3

Berger, Julia Lizabeth. "Cybervetting: A Common Antecedents Model." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1431690206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fann, Chee Meng. "Development of an artillery accuracy model." Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FFann.pdf.

Full text
Abstract:
Thesis (M.S. in Engineering Science (Mechanical Engineering))--Naval Postgraduate School, December 2006.
Thesis Advisor(s): Morris Driels. "December 2006." Includes bibliographical references (p. 91). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
5

Mersch, Leslie N. "Accuracy Analysis of Common Adult Aging Methods Applied to Near Adult Human Skeletons." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439305302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Blackstock, Michael Anthony. "A common model for ubiquitous computing." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2478.

Full text
Abstract:
Ubiquitous computing (ubicomp) is a compelling vision for how people will interact with multiple computer systems in the course of their daily lives. To date, practitioners have created a variety of infrastructures, middleware and toolkits to provide the flexibility, ease of programming and the necessary coordination of distributed software and hardware components in physical spaces. However, no one approach has been adopted as a default or de facto standard. Consequently, the field risks losing momentum as fragmentation occurs. In particular, the goal of ubiquitous deployments may stall as groups deploy and trial incompatible point solutions in specific locations. In their defense, researchers in the field argue that it is too early to standardize and that room is needed to explore specialized domain-specific solutions. In the absence of an agreed-upon set of standards, we argue that the community must consider a methodology that allows systems to evolve and specialize, while at the same time allowing the development of portable applications and integrated deployments that work between sites. To address this we studied the programming models of many commercial and research ubicomp systems. Through this survey we gained an understanding of the shared abstractions required in a core programming model suitable for both application portability and systems integration. Based on this study we designed an extensible core model called the Ubicomp Common Model (UCM) to describe a representative sample of ubiquitous systems to date. The UCM is instantiated in a flexible and extensible platform called the Ubicomp Integration Framework (UIF) to adapt ubicomp systems to this model. Through application development and integration experience with a composite campus environment, we provide strong evidence that this model is adequate for application development and that the complexity of developing adapters to several representative systems is not onerous. The performance overhead of introducing the centralized UIF between applications and an integrated system is reasonable. Through careful analysis and the use of well understood approaches to integration, this thesis demonstrates the value of our methodology that directly leverages the significant contributions of past research in our quest for ubicomp application and systems interoperability.
APA, Harvard, Vancouver, ISO, and other styles
7

Gunner, J. C. "A model of building price forecasting accuracy." Thesis, University of Salford, 1997. http://usir.salford.ac.uk/26702/.

Full text
Abstract:
The purpose of this research was to derive a statistical model comprising the significant factors influencing the accuracy of a designer's price forecast and as an aid to providing a theoretical framework for further study. To this end, data comprising 181 building contract details was collected from the Singapore office of an international firm of quantity surveyors over the period 1980 to 1991. Bivariate analysis showed a number of independent variables having significant effect on bias, which was in general agreement with previous work in this domain. The research also identified a number of independent variables having significant effect on the consistency, or precision, of designers' building price forecasts. With information gleaned from bivariate results, attempts were made to build a multivariate model which would explain a significant portion of the errors occurring in building price forecasts. The results of the models built were inconclusive because they failed to satisfy the assumptions inherent in ordinary least squares regression. The main failure in the models was in satisfying the assumption of homoscedasticity, that is, the conditional variances of the residuals are equal around the mean. Five recognised methodologies were applied to the data in attempts to remove heteroscedasticity but none were successful. A different approach to model building was then adopted and a tenable model was constructed which satisfied all of the regression assumptions and internal validity checks. The statistically significant model also revealed that the variable of Price Intensity was the sole underlying influence when tested against all other independent variables in the data of this work and after partialling out the effect of all other independent variables. From this a Price Intensity theory of accuracy is developed and a further review of the previous work in this field suggests that this may be of universal application.
APA, Harvard, Vancouver, ISO, and other styles
8

Kim, Ja Young. "Factors affecting accuracy of comparable scores for augmented tests under Common Core State Standards." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2543.

Full text
Abstract:
Under the Common Core State Standard (CCSS) initiative, states that voluntarily adopt the common core standards work together to develop a common assessment in order to supplement and replace existing state assessments. However, the common assessment may not cover all state standards, so states within the consortium can augment the assessment using locally developed items that align with state-specific standards to ensure that all necessary standards are measured. The purpose of this dissertation was to evaluate the linking accuracy of the augmented tests using the common-item nonequivalent groups design. Pseudo-test analyses were conducted by splitting a large-scale math assessment in half, creating two parallel common assessments, and by augmenting two sets of state-specific items from a large-scale science assessment. Based upon some modifications of the pseudo-data, a simulated study was also conducted. For the pseudo-test analyses, three factors were investigated: (1) the difference in ability between the new and old test groups, (2) the differential effect size for the common assessment and state-specific item set, and (3) the number of common items. For the simulation analyses, the latent-trait correlations between the common assessment and state-specific item set as well as the differential latent-trait correlations between the common assessment and state-specific item set were used in addition to the three factors considered for the pseudo-test analyses. For each of the analyses, four equating methods were used: the frequency estimation, chained equipercentile, item response theory (IRT) true score, and IRT observed score methods. The main findings of this dissertation were as follows: (1) as the group ability difference increased, bias also increased; (2) when the effect sizes differed for the common assessment and state-specific item set, larger bias was observed; (3) increasing the number of common items resulted in less bias, especially for the frequency estimation method when the group ability differed; (4) the frequency estimation method was more sensitive to the group ability difference than the differential effect size, while the IRT equating methods were more sensitive to the differential effect size than the group ability difference; (5) higher latent-trait correlation between the common assessment and state-specific item set was associated with smaller bias, and if the latent-trait correlation exceeded 0.8, the four equating methods provided adequate linking unless the group ability difference was large; (6) differential latent-trait correlations for the old and new tests resulted in larger bias than the same latent-trait correlations for the old and new tests, and (7) when the old and new test groups were equivalent, the frequency estimation method provided the least bias, but IRT true score and observed score equating resulted in smaller bias than the frequency estimation and chained equipercentile methods when group ability differed.
APA, Harvard, Vancouver, ISO, and other styles
9

Linder, Martin. "Common Ancestors in a Generalized Moran model." Licentiate thesis, Uppsala University, Department of Mathematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-122402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Xu, Wenwei. "Enhancing model accuracy for control : two case studies /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Frazier, Alicia. "Accuracy and precision of a sectioned hollow model." Oklahoma City : [s.n.], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
12

Bilen, Oytun Peksel. "Advanced Model of Acoustic Trim; Effect on NTF Accuracy." Thesis, KTH, MWL Marcus Wallenberg Laboratoriet, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-77768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Hone, David M. "Time and space resolution and mixed layer model accuracy." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/9080.

Full text
Abstract:
The oceanic turbulent boundary layer is a critical region to understand for oceanic and atmospheric prediction. This thesis answers two fundamental questions: (1) what is the response of the ocean mixed layer system to transient forcing at the air-sea surface? (2) what is the necessary time and space resolution in an ocean mixed layer model to resolve important transient responses? Beginning with replication of de Szoeke and Rhines' work, additional physical processes were added to include more realistic viscous dissipation and anisotropy in the three-dimensional turbulent kinetic energy (TKE) budget. These refinements resulted in modification of de Szoeke and Rhines' findings. Firstly, TKE unsteadiness is important for a minimum of 10^5 seconds. Secondly, viscous dissipation should not be approximated as simply proportional to shear production. Thirdly, entrainment shear production remains significant for a minimum of one pendulum-day. The required temporal model resolution is dependent on the phenomena to be studied. This study focused on the diurnal, synoptic, and annual cycles, which the one-hour time step of the Naval Postgraduate School model adequately resolves. The study of spatial resolution showed unexpectedly that model skill was comparable for 1 m, 10 m and even 20 m vertical grid spacing.
APA, Harvard, Vancouver, ISO, and other styles
14

Tjoa, Robertus Tjin Hok. "Assessment of the accuracy of a computational casting model." Carleton University Dissertation (Engineering Mechanical), Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
15

Lee, Jacob Scott. "Accuracy of a Simplified Analysis Model for Modern Skyscrapers." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4055.

Full text
Abstract:
A new simplified skyscraper analysis model (SSAM) was developed and implemented in a spreadsheet to be used for preliminary skyscraper design and teaching purposes. The SSAM predicts linear and nonlinear response to gravity, wind, and seismic loading of "modern" skyscrapers which involve a core, megacolumns, outrigger trusses, belt trusses, and diagonals. The SSAM may be classified as a discrete method that constructs a reduced system stiffness matrix involving selected degrees of freedom (DOF's). The steps in the SSAM consist of: 1) determination of megacolumn areas, 2) construction of stiffness matrix, 3) calculation of lateral forces and displacements, and 4) calculation of stresses. Seven configurations of a generic skyscraper were used to compare the accuracy of the SSAM against a space frame finite element model. The SSAM was able to predict the existence of points of contraflexure in the deflected shape which are known to exist in modern skyscrapers. The accuracy of the SSAM was found to be very good for displacements (translations and rotations), and reasonably good for stress in configurations that exclude diagonals. The speed of execution, data preparation, data extraction, and optimization were found to be much faster with the SSAM than with general space frame finite element programs.
APA, Harvard, Vancouver, ISO, and other styles
16

Kazan, Baran. "Additional Classes Effect on Model Accuracy using Transfer Learning." Thesis, Högskolan i Gävle, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-33970.

Full text
Abstract:
This empirical research study discusses how much the model's accuracy changes when adding a new image class by using a pre-trained model with the same labels and measuring the precision of the previous classes to observe the changes. The purpose is to determine if using transfer learning is beneficial for users that do not have enough data to train a model. The pre-trained model that was used to create a new model was the Inception V3. It has the same labels as the eight different classes that were used to train the model. To test this model, classes of wild and non-wild animals were taken as samples. The algorithm used to train the model was implemented in a single class programmed in the Python programming language with the PyTorch and TensorBoard libraries. The TensorBoard library was used to collect and represent the results. Research results showed that the accuracy of the first two classes was 94.96% in training and 97.07% in validation. When training the model with a total of eight classes, the accuracy was 91.89% in training and 95.40% in validation. The precision of both classes was detected at 100% when the model solely had cat and dog classes. After adding six additional classes to the model, the precision changed to 95.82% for the cats and 97.16% for the dogs.
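The setup described above, a pretrained Inception V3 whose final layers are retrained for a new set of classes, matches a common PyTorch transfer-learning pattern. The following is a hedged sketch of that general pattern, not the thesis's code; the class count, learning rate and frozen-backbone choice are illustrative assumptions:

```python
# Hedged sketch (not the thesis code): reuse a pretrained Inception V3 and retrain
# only its classification heads for a new label set.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

num_classes = 8                                    # e.g. the eight animal classes
model = models.inception_v3(pretrained=True)       # older torchvision keyword; newer versions use weights=...

for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained backbone

# Replace both classification heads so they emit `num_classes` logits.
model.fc = nn.Linear(model.fc.in_features, num_classes)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
# A training loop over a DataLoader of 299x299 images would follow; in training mode
# the model returns (logits, aux_logits), and the two losses are usually summed.
```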
APA, Harvard, Vancouver, ISO, and other styles
17

Lehmann, Christopher, and Alexander Alfredsson. "Intrinsic Equity Valuation : An Emprical Assessment of Model Accuracy." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-30377.

Full text
Abstract:
The discounted cash flow model and relative valuation models are increasingly prevalent in today's investment-heavy environment. In other words, theoretically inferior models are used in practice. It is this paradox that has led us to compare the discounted cash flow model (DCFM), discounted dividend model (DDM), residual income-based model (RIVM) and the abnormal earnings growth model (AEGM) and their relative accuracy to observed stock prices. Adding to previous research, we investigate their performance in relation to the OMX30 index. What is more, we test how the performance of each model is affected by an extension of the forecast horizon. The study finds that AEGM outperforms the other models, both before and after extending the horizon. Our analysis was conducted by looking at accuracy, spread and the inherent speculative nature of each model. Taking all this into account, RIVM outperforms the other models. In this sense, one can question the rationale behind investors' decision to primarily use the discounted cash flow model in equity valuation.
APA, Harvard, Vancouver, ISO, and other styles
18

Mitchinson, Pelham James. "Crowding indices : experimental methodology and predictive accuracy." Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Miles, Luke G. "Global Digital Elevation Model Accuracy Assessment in the Himalaya, Nepal." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1313.

Full text
Abstract:
Digital Elevation Models (DEMs) are digital representations of surface topography or terrain. Collection of DEM data can be done directly through surveying and taking ground control point (GCP) data in the field or indirectly with remote sensing using a variety of techniques. The accuracies of DEM data can be problematic, especially in rugged terrain or when differing data acquisition techniques are combined. For the present study, ground data were taken in various protected areas in the mountainous regions of Nepal. Elevation, slope, and aspect were measured at nearly 2000 locations. These ground data were imported into a Geographic Information System (GIS) and compared to DEMs created by NASA researchers using two data sources: the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Slope and aspect were generated within a GIS and compared to the GCP ground reference data to evaluate the accuracy of the satellite-derived DEMs, and to determine the utility of elevation and derived slope and aspect for research such as vegetation analysis and erosion management. The SRTM and ASTER DEMs each have benefits and drawbacks for various uses in environmental research, but generally the SRTM system was superior. Future research should focus on refining these methods to increase error discrimination.
APA, Harvard, Vancouver, ISO, and other styles
20

FULTON, JOHN PATRICK. "A SPATIAL MODEL FOR EVALUATING VARIABLE-RATE FERTILIZER APPLICATION ACCURACY." UKnowledge, 2003. http://uknowledge.uky.edu/gradschool_diss/248.

Full text
Abstract:
The popularity of variable-rate technology (VRT) has grown. However, the limitations and errors of this technology are generally unknown. Therefore, a spatial data model was developed to generate "as-applied" surfaces to advance precision agricultural (PA) practices. A test methodology based on ASAE Standard S341.2 was developed to perform uniform-rate (UR) and variable-rate (VR) tests to characterize distribution patterns, testing four VRT granular applicators (two spinner spreaders and two pneumatic applicators). Single-pass UR patterns exhibited consistent shapes for three of the applicators, with pattern shifts observed for the fourth applicator. Simulated overlap analysis showed that three of the applicators performed satisfactorily with most CVs less than 20%, while one applicator performed poorly (CVs > 25%). The spinner spreaders over-applied at the margins but the pneumatic applicators under-applied, suggesting a required adjustment to the effective swath spacing. Therefore, it is recommended that CVs accompany overlap pattern plots to ensure proper calibration of VRT application. Quantification of the rate response characteristics for the various applicators illustrated varying delay and transition times. Only one applicator demonstrated consistent delay and transition times. A sigmoidal function was used to model the rate response for applicators. One applicator exhibited a linear response during a decreasing rate change. Rate changes were quicker for the two newer VR control systems, signifying advancement in hydraulic control valve technology. This research illustrates the need for standard testing protocols for VRT systems to help guide VRT software developers, equipment manufacturers, and users. The spatial data model uses GIS functionality to merge applicator descriptive patterns with a spatial field application file (FAF) to generate an "as-applied" surface representing the actual distribution of granular fertilizer. Field data was collected and used to validate the "as-applied" spatial model. Comparisons between the actual and predicted application rates for several fields were made, demonstrating good correlations for one applicator (several R² > 0.70), moderate success for another applicator (0.60 < R² < 0.66), and poor relationships for the third applicator (R² < 0.49). A comparison of the actual application rates to the prescription maps generated R² values between 0.16 and 0.81, demonstrating inconsistent VRT applicator performance. Thus, "as-applied" surfaces provide a means to properly evaluate VRT while enhancing researchers' ability to compare VR management approaches.
APA, Harvard, Vancouver, ISO, and other styles
21

De Lange, Billy. "High accuracy numerical model of the SALT mirror support truss." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/18042.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2011.
ENGLISH ABSTRACT: Although a numerical model of the mirror support truss of the Southern African Large Telescope (SALT) has already been developed during the design thereof, this thesis focuses on the development of the methods and techniques that would result in a more accurate numerical model of the actual structure that could be used as a basis for a numerical control system. This control system will compensate for deflections in the structure by adjusting the positioning of the individual mirror segments of the primary mirror. The two main components from which the support truss is constructed are the steel nodes, and the struts that connect to them. For this project a smaller, simpler laboratory model was designed and built to have geometrical properties similar to that of the support truss. The methods and techniques that were investigated were carried out on this model. By using numerical design optimisation techniques, improved numerical models of the different strut types were obtained. This was done by performing tests on the struts so that the actual responses of the struts could be obtained. Numerical models of the struts were then created and set up so that they could be optimised using structural optimisation software. Once accurate strut models had been obtained, these strut models were used to construct a numerical model of the assembled structure. No additional optimisation was performed on the assembled structure and tests were done on the physical structure to obtain its responses. These served as validation criteria for the numerical models of the struts. Because of unforeseen deformations of the structure, not all of the measured structural responses could be used. The remaining results showed, however, that the predictive accuracy of the top node displacement of the assembled structure improved to below 1.5%, from over 60%. From these results it was concluded that the accuracy of the entire structure's numerical model could be significantly improved by optimising the individual strut types.
APA, Harvard, Vancouver, ISO, and other styles
22

Rooney, Thomas J. A. "On improving the forecast accuracy of the hidden Markov model." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/22977.

Full text
Abstract:
The forecast accuracy of a hidden Markov model (HMM) may be low due, first, to the measure of forecast accuracy being ignored in the parameter-estimation method and, second, to overfitting caused by the large number of parameters that must be estimated. A general approach to forecasting is described which aims to resolve these two problems and so improve the forecast accuracy of the HMM. First, the application of extremum estimators to the HMM is proposed. Extremum estimators aim to improve the forecast accuracy of the HMM by minimising an estimate of the forecast error on the observed data. The forecast accuracy is measured by a score function and the use of some general classes of score functions is proposed. This approach contrasts with the standard use of a minus log-likelihood score function. Second, penalised estimation for the HMM is described. The aim of penalised estimation is to reduce overfitting and so increase the forecast accuracy of the HMM. Penalties on both the state-dependent distribution parameters and the transition probability matrix are proposed. In addition, a number of cross-validation approaches for tuning the penalty function are investigated. Empirical assessment of the proposed approach on both simulated and real data demonstrated that, in terms of forecast accuracy, penalised HMMs fitted using extremum estimators generally outperformed unpenalised HMMs fitted using maximum likelihood.
APA, Harvard, Vancouver, ISO, and other styles
23

Mehler, Anja. "Business model innovation in emerging markets : identifying common principles." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/96220.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: With developed economies experiencing slow growth, multinational corporations (MNCs) in various industries are looking to tap into the enormous potential of emerging economies. By identifying emerging markets as future markets, MNCs can increase their market share and profits, and grow through a diversified strategy that focuses on unconventional markets and customers with unserved needs. However, MNCs entering these markets cannot succeed by simply transferring business models, products, and services developed for mature economies as the needs of the new consumers in emerging markets require innovative and non-traditional business models and approaches. The research question for this study is to investigate if and to what level MNCs have to adapt their business model when entering or expanding their operations to emerging markets. Therefore, research has been done on four MNCs across a diverse range of industries. For collecting data, the research made use of a qualitative case-study research approach and is based primarily on findings from four in-depth interviews with strategy or marketing experts from MNCs across industries. Further information was obtained through deep research on publicly available information about the company. The research aimed to identify similarities in the business model of successful pioneers and to analyse common principles that could be of use for other MNCs when planning to enter unknown emerging markets. The interviews were conducted personally, telephonically, and via email. In a next step, the interviews were transcribed and common themes were extracted and combined with findings from further research. For collecting and ordering the information, Osterwalder & Pigneur’s (2010) business model canvas was applied. Finally, the findings were grouped, formulated and compared to existing literature in order to identify similarities, common principles or differences for new output propositions. The primary finding of the research was that specific factors, such as the difference in market conditions and environments, as well as in consumer preferences and needs, strongly influence the design of business models. A key differentiating factor was the choice between keeping traditional business models with a focus on global and centralized systems, processes, brands and products or designing business models that are adjusted or innovated to meet local market conditions and consumer trends. Another key finding was that a balanced portfolio of brands is a critical factor of success in emerging markets. To reach different market segments in emerging markets, MNCs need to offer mainstream as well as premium brands, all based on a strong brand identity and brand values. The partnership with local business partners and key stakeholders was identified as fundamental to be able to react to local business environments. Furthermore, the integration of local suppliers and communities, as well as the adjustment of the value chain to the local environment, has been seen as a key factor to reduce costs while gaining acceptance and building close relationships with the local community. In order to overcome local challenges of institutional voids and lacking knowledge in emerging markets, the research has shown that a collaborative strategy with local partners is of high importance. The research showed that MNCs with global brands follow both approaches. 
While some MNCs maintain a traditional business model for all its markets, other MNCs design their business model based on standardized systems and processes to the local environment. In terms of the level of innovation, it can be said that none of the researched MNCs showed an extremely high level of innovation. Common principles and activities that could be identified in the business model design for emerging markets between all researched MNCs, are as follows: (1) balanced portfolio of strong brands, (2) strong partnerships with local key stakeholders, (3) loyal relationships with consumers, (4) an efficient and cost-effective value chain, and (5) collaborative partnerships or acquisitions as a critical market entry strategy.
APA, Harvard, Vancouver, ISO, and other styles
24

Mugodo, James. "Plant species rarity and data restriction influence the prediction success of species distribution models." University of Canberra. Resource, Environmental & Heritage Sciences, 2002. http://erl.canberra.edu.au./public/adt-AUC20050530.112801.

Full text
Abstract:
There is a growing need for accurate distribution data for both common and rare plant species for conservation planning and ecological research purposes. A database of more than 500 observations for nine tree species with different ecological and geographical distributions and a range of frequencies of occurrence in south-eastern New South Wales (Australia) was used to compare the predictive performance of logistic regression models, generalised additive models (GAMs) and classification tree models (CTMs) using different data restriction regimes and several model-building strategies. Environmental variables (mean annual rainfall, mean summer rainfall, mean winter rainfall, mean annual temperature, mean maximum summer temperature, mean minimum winter temperature, mean daily radiation, mean daily summer radiation, mean daily June radiation, lithology and topography) were used to model the distribution of each of the plant species in the study area. Model predictive performance was measured as the area under the curve of a receiver operating characteristic (ROC) plot. The initial predictive performance of logistic regression models and generalised additive models (GAMs) using unrestricted, temperature restricted, major gradient restricted and climatic domain restricted data gave results that were contrary to current practice in species distribution modelling. Although climatic domain restriction has been used in other studies, it was found to produce models that had the lowest predictive performance. The performance of domain restricted models was significantly (p = 0.007) inferior to the performance of major gradient restricted models when the predictions of the models were confined to the climatic domain of the species. Furthermore, the effect of data restriction on model predictive performance was found to depend on the species as shown by a significant interaction between species and data restriction treatment (p = 0.013). As found in other studies however, the predictive performance of GAM was significantly (p = 0.003) better than that of logistic regression. The superiority of GAM over logistic regression was unaffected by different data restriction regimes and was not significantly different within species. The logistic regression models used in the initial performance comparisons were based on models developed using the forward selection procedure in a rigorous-fitting model-building framework that was designed to produce parsimonious models. The rigorous-fitting model-building framework involved testing for the significant reduction in model deviance (p = 0.05) and significance of the parameter estimates (p = 0.05). The size of the parameter estimates and their standard errors were inspected because large estimates and/or standard errors are an indication of model degradation from overfitting or effects such as multicollinearity. For additional variables to be included in a model, they had to contribute significantly (p = 0.025) to the model predictive performance. In an attempt to improve the performance of species distribution models using logistic regression models in a rigorous-fitting model-building framework, the backward elimination procedure was employed for model selection, but it yielded models with reduced performance.
A liberal-fitting model-building framework that used significant model deviance reduction at p = 0.05 (low significance models) and 0.00001 (high significance models) levels as the major criterion for variable selection was employed for the development of logistic regression models using the forward selection and backward elimination procedures. Liberal fitting yielded models that had a significantly greater predictive performance than the rigorous-fitting logistic regression models (p = 0.0006). The predictive performance of the former models was comparable to that of GAM and classification tree models (CTMs). The low significance liberal-fitting models had a much larger number of variables than the high significance liberal-fitting models, but with no significant increase in predictive performance. To develop liberal-fitting CTMs, the tree shrinking program in S-PLUS was used to produce a number of trees of different sizes (subtrees) by optimally reducing the size of a full CTM for a given species. The 10-fold cross-validated model deviance for the subtrees was plotted against the size of the subtree as a means of selecting an appropriate tree size. In contrast to liberal-fitting logistic regression, liberal-fitting CTMs had poor predictive performance. Species geographical range and species prevalence within the study area were used to categorise the tree species into different distributional forms. These were then used to compare the effect of plant species rarity on the predictive performance of logistic regression models, GAMs and CTMs. The distributional forms included restricted and rare (RR) species (Eucalyptus paliformis and Eucalyptus kybeanensis), restricted and common (RC) species (Eucalyptus delegatensis, Eucryphia moorei and Eucalyptus fraxinoides), widespread and rare (WR) species (Eucalyptus data) and widespread and common (WC) species (Eucalyptus sieberi, Eucalyptus pauciflora and Eucalyptus fastigata). There were significant differences (p = 0.076) in predictive performance among the distributional forms for the logistic regression and GAM. The predictive performance for the WR distributional form was significantly lower than the performance for the other plant species distributional forms. The predictive performance for the RC and RR distributional forms was significantly greater than the performance for the WC distributional form. The trend in model predictive performance among plant species distributional forms was similar for CTMs except that the CTMs had poor predictive performance for the RR distributional form. This study shows the importance of data restriction to model predictive performance, with major gradient data restriction being recommended for consistently high performance. Given the appropriate model selection strategy, logistic regression, GAM and CTM have similar predictive performance. Logistic regression requires a high significance liberal-fitting strategy to both maximise its predictive performance and to select a relatively small model that could be useful for framing future ecological hypotheses about the distribution of individual plant species. The results for the modelling of plant species for conservation purposes were encouraging since logistic regression and GAM performed well for the restricted and rare species, which are usually of greater conservation concern.
APA, Harvard, Vancouver, ISO, and other styles
25

Vasudev, R. Sashin, and Ashok Reddy Vanga. "Accuracy of Software Reliability Prediction from Different Approaches." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1298.

Full text
Abstract:
Many models have been proposed for software reliability prediction, but none of them captures a sufficient range of software characteristics. We propose a mixed approach using both analytical and data-driven models to assess the accuracy of reliability prediction, based on a case study. This report follows a qualitative research strategy. Data were collected from a case study conducted at three different companies. Based on the case study, an analysis is made of the approaches used by the companies, supplemented by other data related to each organization's Software Quality Assurance (SQA) team. Of the three organizations studied, the first two are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data were collected by interviewing an employee of each organization who leads a team and has held a managerial position for at least the last two years.
APA, Harvard, Vancouver, ISO, and other styles
26

Rose, Susan L. "Essays on almost common value auctions." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1149185948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ford, Jonathan M. "The Virtual Hip: An Anatomically Accurate Finite Element Model Based on the Visible Human Dataset." Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3451.

Full text
Abstract:
The purpose of this study is to determine if element decimation of a 3-D anatomical model affects the results of Finite Element Analysis (FEA). FEA has been increasingly applied to the biological and medical sciences. In order for an anatomical model to successfully run in FEA, the 3-D model’s complex geometry must be simplified, resulting in a loss of anatomical detail. The process of decimation reduces the number of elements within the structure and creates a simpler approximation of the model. Using the National Library of Medicine’s Visible Human Male dataset, a virtual 3-D representation of several structures of the hip were produced. The initial highest resolution model was processed through several levels of decimation. Each of these representative anatomical models were run in COMSOL 3.5a to measure the degree of displacement. These results were compared against the original model to determine what level of error was introduced due to model simplification.
APA, Harvard, Vancouver, ISO, and other styles
28

Horn, Sandra L. "Aggregating Form Accuracy and Percept Frequency to Optimize Rorschach Perceptual Accuracy." University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1449513233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Bilodeau, Bernard. "Accuracy of a truncated barotropic spectral model : numerical versus analytical solutions." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Huang, Junxiong. "A model for translation accuracy evaluation and measurement a quantitative approach /." Phd thesis, Australia : Macquarie University, 2008. http://hdl.handle.net/1959.14/82531.

Full text
Abstract:
"2007"
Thesis (PhD)--Macquarie University, Division of Linguistics and Psychology, Dept. of Linguistics, 2008.
Bibliography: p. 303-317.
Introduction -- Literature review -- Identification of the unit of translation -- Towards a model for standardized TQA -- Mean criteria of the world -- Creating the mark deduction scheme -- Testing the model -- Applying the model -- Conclusion.
Translation quality assessment (TQA) has been part of the translating process since Marcus Tullius Cicero (106-43 BCE), and earnest studies on TQA have been conducted for several decades, but there has been no breakthrough in standardized TQA. Though the importance of TQA has been stressed, agreement on specific means of TQA has not been reached. As Chesterman and Wagner summarize, "Central to translation [...]," "[q]uality assessment is so complicated - especially if it is to be objective and reproducible" (2002: 80-81). The approaches to TQA published throughout the past millennia, by and large, are qualitative. "Whereas there is general agreement on the requirement for a translation to be 'good,' 'satisfactory,' or 'acceptable,' the definition of acceptability and of the means of determining it are matters of ongoing debate and there is precious little agreement on specifics" (Williams, 2004: xiv). Most published TQA approaches are neither objective nor reproducible. -- My study proposes a model for fuzzy standardized TQA through a quantitative approach, which expresses TQA results in numerical terms in a consistent manner. My model is statistics-based, practice-based and practice-oriented. It has been independently tested by eleven professors from four countries, fifteen senior United Nations translators, and fifty reader evaluators. My contrastive analysis of 23,000 pages of bilingual and multilingual texts has identified the unit of translation - the orthographic sentence in context, which is also verified by the results of an international survey among 66 professional translators, the majority of whom also confirm that they evaluate translations sentence by sentence in context. Halliday and Matthiessen's functional grammar theory, among others, provides my model for quantitative TQA with its theoretical basis, while the international survey, the necessary data. My model proposes a set of six Fuzzy Functional Translation Grammar terms, a grammar concept general enough to cover all grammar units in the translated orthographic sentence. Each term represents one type of error which contains from one to three sub-categories. Each error is assigned a value - the mean of the professional markers' deductions for relevant artificial errors and original errors. A marking scheme with sixteen variables under eight attributes is thus created. Ten marks are assigned to each unit of TQA, the sentence. For easy calculation, an arithmetic formula popularly used in statistics (Σx/n) is adopted. With the assistance of a simple calculator, the evaluator can calculate the grade of a sentence, a sentence group, and the overall grade for an entire TT, regardless of its length. -- Perfect reliability or validity in any form of measurement is unattainable. There will always be some random error or noise in the data (McClendon, 2004: 7). Since it is the first of its type, I do not claim that my model is perfect. Variation has been found in the results of the testing performed by scholars and professional translators, but further testing based on two "easy" (markers' comment) sentences by the 50 reader evaluators respectively achieves 98% and 100% consistency, which indicates that markers' competence may equal constancy or that proper marker training and/or strict marker examination will minimize inconsistency among professional markers.
My model, whose formulas withstand testing at the theoretical level and in practice, is not only ready for application, but it has profound implications beyond TQA, such as use in machine translation, and for other subjects like the role of the sentence in translation studies and translating practice.
Mode of access: World Wide Web.
317 leaves
APA, Harvard, Vancouver, ISO, and other styles
31

Ogawa, Hiroyuki. "Testing the accuracy of a three-dimensional acoustic coupled mode model." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Stone, John 1967. "The common-law model for standard English in Johnson's dictionary." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23738.

Full text
Abstract:
Samuel Johnson's Dictionary has long been regarded as an epoch-making book, as great a scholarly achievement as the dictionaries of the Italian, French and Spanish academies, yet more enlightened in its pretensions and its politics. For Johnson does not claim to have fixed the language; his authority is not backed by the state; his decisions as to currency, propriety, meaning, and spelling are based on a jumble of general custom, literary precedent, and reason.
I argue that the intellectual origins of Johnsonian standard English lie in Sir Edward Coke's early seventeenth-century restatement of common law doctrine and terms. Salient issues are common law's need to give an account of its antiquated, medieval vocabulary and its place in the constitutional conflict of the seventeenth century. I give an account of other possible influences on Johnson--Latin and English grammars, pedagogy, philosophical speculation on the nature of language, English prose styles, and proposals for an English academy or similar reform--but cannot find in any of them a sufficiently close conceptual parallel.
APA, Harvard, Vancouver, ISO, and other styles
33

Deen, William. "A mechanistic model of common ragweed based on photothermal time." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0019/NQ47389.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Taucer, Anne Irene. "Biomechanics of common carotid arteries from mice heterozygous for mgR, the most common mouse model of Marfan syndrome." [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Modin, Larsson Jim. "Predictive Accuracy of Linear Models with Ordinal Regressors." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-273958.

Full text
Abstract:
This paper considers four approaches to ordinal predictors in linear regression and evaluates how they contrast with respect to predictive accuracy. The two most typical treatments, namely dummy coding and classic linear regression on assigned level scores, are compared with two improved methods: penalized smoothed coefficients and a generalized additive model with cubic splines. A simulation study is conducted to assess all four on the basis of predictive performance. Our results show that the dummy-based methods surpass the numeric ones at small sample sizes, although, as sample size increases, differences between the methods diminish. Tendencies of overfitting are identified among the dummy methods. We conclude by stating that the choice of method ought to be not only context-driven but made in the light of all these characteristics.
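As a concrete illustration of the two baseline treatments named above (not the paper's own simulation code), the sketch below fits a numeric-score model and a dummy-coded model to a synthetic ordinal predictor; the data-generating process and sample size are invented:

```python
# Hedged sketch: dummy coding versus regression on assigned level scores for an
# ordinal predictor. All data here are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
levels = rng.integers(1, 6, size=300)                          # ordinal predictor with 5 levels
y = 0.8 * np.sqrt(levels) + rng.normal(scale=0.3, size=300)    # mildly nonlinear true effect
df = pd.DataFrame({"y": y, "x": levels})

numeric_fit = smf.ols("y ~ x", data=df).fit()       # assigned scores treated as numeric
dummy_fit = smf.ols("y ~ C(x)", data=df).fit()      # dummy coding, one coefficient per level

print("numeric R^2:", numeric_fit.rsquared, " dummy R^2:", dummy_fit.rsquared)
```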
APA, Harvard, Vancouver, ISO, and other styles
36

Hakoyama, Shotaro. "Rater Characteristics in Performance Evaluation Accuracy." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1399905636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Yongtao, Yu. "Exchange rate forecasting model comparison: A case study in North Europe." Thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154948.

Full text
Abstract:
Many studies comparing exchange rate forecasting models have been carried out in the past. Most of them reach a similar result: the random walk model has the best forecasting performance. In this thesis, I want to find a model that beats the random walk model in forecasting the exchange rate. In my study, the vector autoregressive model (VAR), the restricted vector autoregressive model (RVAR), the vector error correction model (VEC) and the Bayesian vector autoregressive model are employed in the analysis. These multivariate time series models are compared with the random walk model by evaluating the forecasting accuracy of the exchange rate for three North European countries in both the short term and the long term. For the short term, it can be concluded that the random walk model has the best forecasting accuracy. For the long term, however, the random walk model is beaten. An equal accuracy test confirms that this difference really exists.
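The kind of comparison this abstract describes, a multivariate time-series model judged against a random-walk benchmark by out-of-sample forecast accuracy, can be sketched briefly with statsmodels. This is an illustrative sketch on simulated series, not the thesis's data or models:

```python
# Hedged sketch: random-walk forecast vs. a VAR(1) forecast compared by RMSE on a
# hold-out window. The "exchange rates" below are simulated, not real data.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
# Simulated log exchange rates for three countries (random-walk-like by construction).
data = np.cumsum(rng.normal(scale=0.01, size=(300, 3)), axis=0)
train, test = data[:-50], data[-50:]

rw_forecast = np.repeat(train[-1:], len(test), axis=0)     # random walk: carry last value forward
var_results = VAR(train).fit(maxlags=1)
var_forecast = var_results.forecast(train[-var_results.k_ar:], steps=len(test))

def rmse(forecast):
    return float(np.sqrt(np.mean((forecast - test) ** 2)))

print("random walk RMSE:", rmse(rw_forecast), "VAR RMSE:", rmse(var_forecast))
```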
APA, Harvard, Vancouver, ISO, and other styles
38

Jonsson, Eskil. "Ice Sheet Modeling: Accuracy of First-Order Stokes Model with Basal Sliding." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-360245.

Full text
Abstract:
Some climate models are still lacking features such as dynamical modelling of ice sheets due to their computational cost, which results in poor accuracy in estimates of, e.g., sea level rise. The need for low-cost high-order models initiated the development of the First-Order Stokes (or Blatter-Pattyn) model, which retains much of the accuracy of the full-Stokes model but is also cost-effective. This model has proven accurate for ice sheets and glaciers with frozen bedrocks, or no-slip basal boundary conditions. However, experimental evidence seems to be lacking regarding its accuracy under sliding, or stress-free, bedrock conditions (ice-shelf conditions). Hence, it became of interest to investigate this. Numerical experiments were set up by formulating the first-order Stokes equations as a variational finite element problem, followed by implementing them using the open-source FEniCS framework. Two types of geometries were used with both no-slip and slip basal boundary conditions. Specifically, experiments B and D from the Ice Sheet Model Intercomparison Project for Higher-Order ice sheet Models (ISMIP-HOM) were used to benchmark the model. Local model errors were investigated and a convergence analysis was performed for both experiments. The results yielded an inherent model error of about 0.06% for ISMIP-HOM B and 0.006% for ISMIP-HOM D, mostly relating to the different types of geometries used. Errors in stress-free regions were greater and varied on the order of 1%. This was deemed fairly accurate, and probably enough justification to replace models such as the Shallow Shelf Approximation with the First-Order Stokes model in some regions. However, more rigorous tests with real-world geometries may be warranted. Also noteworthy were inconsistent results in the vertical velocity under slippery conditions (ISMIP-HOM D), which could either be due to coding errors or an inherent problem with the decoupling of the horizontal and vertical velocities in the First-Order Stokes model. This should be further investigated.
Some climate models still lack features such as dynamic modelling of ice sheets because of its high computational cost, which results in low accuracy and poor estimates of e.g. sea level rise. The need for simple models with high accuracy started the development of the so-called First-Order Stokes (or Blatter-Pattyn) model. This model retains much of the accuracy of the more exact full-Stokes model but is also very cost-effective. The model has proven accurate for ice sheets and glaciers with frozen bedrock, i.e. no-slip boundary conditions. Experimental evidence, however, appears to be lacking regarding its accuracy under sliding, or stress-free, bedrock conditions (e.g. at ice shelves). Therefore we wanted to investigate this. Numerical experiments were set up by formulating the Blatter-Pattyn equations as a variational problem (via the finite element method) and implementing them with the open-source FEniCS framework. Two types of geometries were used, with both sliding and stress-free basal boundary conditions. Specifically, experiments B and D from the Ice Sheet Model Intercomparison Project for Higher-Order ice sheet Models (ISMIP-HOM) were used to test the model. Local errors were examined and a convergence analysis was performed for both experiments. The results gave a model error of about 0.06% for ISMIP-HOM B and 0.006% for ISMIP-HOM D, mostly related to the different types of geometries used. Errors in stress-free regions were larger and varied on the order of 1%. This was considered fairly accurate and probably sufficient to justify replacing models such as the Shallow Shelf Approximation with the Blatter-Pattyn model in some regions. However, more rigorous tests with more realistic geometries are required before firm conclusions can be drawn. Also noteworthy were inconsistent results in the vertical velocity under sliding conditions (ISMIP-HOM D), which may be due either to coding errors or to a model problem stemming from the decoupling of the horizontal and vertical velocities in the Blatter-Pattyn model. This should be investigated further.
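For readers unfamiliar with the toolchain, the sketch below shows the generic legacy-FEniCS workflow the thesis relies on: write a weak form, assemble, and solve. It deliberately uses a plain Poisson problem on a toy 2D domain rather than the first-order Stokes (Blatter-Pattyn) equations, which additionally involve nonlinear viscosity, 3D geometry and basal boundary conditions; every name and constant here is illustrative.

```python
# Minimal sketch of the legacy-FEniCS variational workflow (not the ice-sheet model).
from fenics import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    DirichletBC, Constant, Function, dot, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)                      # toy 2D domain
V = FunctionSpace(mesh, "P", 1)                    # piecewise-linear elements

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)                                  # stand-in forcing term
a = dot(grad(u), grad(v)) * dx                     # bilinear (stiffness) form
L = f * v * dx                                     # linear (load) form

bc = DirichletBC(V, Constant(0.0), "on_boundary")  # crude analogue of a no-slip condition
u_h = Function(V)
solve(a == L, u_h, bc)                             # assemble and solve the FE system
print("max of discrete solution:", u_h.vector().max())
```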
APA, Harvard, Vancouver, ISO, and other styles
39

Do, Changhee. "Improvement in accuracy using records lacking sire information in the animal model." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Allick, Steven. "The common forms of contemporary videogames : a proposed content analysis model." Thesis, Teesside University, 2012. http://hdl.handle.net/10149/254616.

Full text
Abstract:
The aim of this thesis was to investigate trope usage in videogames, including the emergence of undiscovered 'videogame' tropes, and to create a new model for videogame categorisation using these tropes. This model serves to complement genre as a means of distilling videogame contents. The investigative work had two parts: first, considering how videogames use existing rhetorical tropes such as metaphor as expressive and communicative devices, and second, analysing videogames as a source of shared literary tropes. Each shared literary trope was validated as a common form of expression (referred to simply as a 'common form') where its presence was demonstrated in a substantial sample of videogames. Common forms were gathered through a wide-ranging investigation of ten mainstream genres, taken one at a time and in isolation, to arrive at a pool of genre-specific common forms. The most closely related forms were then combined with the help of relationship modelling techniques, yielding a set of common forms capable of representing the contents of any videogame. The result is a powerful hierarchical content model that allows a game to be described in terms of its common form usage profile. Common forms can effectively describe games which span several genres and can differentiate between games which appear similar on the surface, e.g. within the same genre, hence aiding effective classification. Common forms were shown to exist at a number of different levels of the hierarchy, ranging from those specific to a particular game, to a game type (genre), and even to those which are universal and can therefore be observed in any modern videogame. Finally, it was possible to see the very core or 'heart' of the functioning videogame: the never-ending competition between player resources such as energy, ammunition or shields (the 'player status') and the threats, challenges or obstacles the game's systems throw at the player (the 'game status'). The model has considerable potential for application in educational settings such as college and university game development or appraisal classes, and further development and testing would provide an effective tool for industry use.
APA, Harvard, Vancouver, ISO, and other styles
41

Meyer, Abel Hermanus. "Common values and competitiveness within a corporate culture and performance model." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52167.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2001.
ENGLISH ABSTRACT: The utilisation of human capital and its impact on organisational performance must rank as one of the key managerial concerns in South Africa. The message from international competitive studies is clear: corporations become competitive when people and practices change. The global market has exposed the lack of competitive strength and effectiveness of South African corporations. Against this background, managing complex organisations in the private and public sector remains a daunting, pervasive and urgent task. By focusing on issues of organisational behaviour and global competitiveness, the study aims to contribute to the development of competitive (effective) corporations in South Africa. It is important to keep in mind that the present investigation was an exploratory study attempting to add to the body of knowledge about competitiveness. It aimed to enhance existing studies on global competitiveness and organisational effectiveness and to open up possibilities for new management strategies and interventions as well as further research. In particular, it builds on previous work on the impact of organisational behaviour on performance. An extremely important development in the study of corporate culture has been proof that the normative structure (corporate culture) has a significant impact on the performance of an organisation. Because of this impact, corporate culture has to be regarded as one of the key success factors in any corporation. Corporate culture is, however, no end in itself, but must be regarded as a hermeneutical (interpretative) key to corporate performance. The success of the corporation takes precedence over all other aspects of the organisation, even over its culture. The framework of corporate culture and competitiveness links patterns of behaviour and management practices with underlying assumptions, beliefs and values. It provides a clear description of the integrative mechanisms and dimensions of corporate culture and the way in which they impact on competitiveness. These behavioural factors are key determinants of organisational performance because of the close link between patterns of behaviour and underlying core values and beliefs. The model also defines the elements (people, change, projects, control) that need to be managed, as well as the traits (adaptability and innovation, mission, involvement, consistency) of the culture which determine the performance of the corporation. In terms of the corporate culture and competitiveness framework, the management activity of developing a set of common or core values is therefore a good starting point for any culture intervention strategy aimed at enhancing competitiveness (performance). A shared system of beliefs, values and symbols widely understood by an organisation's members has a positive impact on their ability to reach consensus and carry out coordinated actions. This impact, as well as the nature of the culture of the corporation, has to be understood by everybody in the organisation. It also has to assist them in making sense of corporate life in such a manner that it creates opportunities for everyone to impact on the performance of the corporation.
AFRIKAANSE OPSOMMING: The management of human resources and its impact on organisational performance is one of the core management issues in South Africa. The international message about competitiveness is clear: organisations' competitiveness changes when people and practices change. In general, South African organisations perform rather poorly in the international market owing to a lack of competitiveness and effectiveness. In this light it is clear that managing complex organisations remains a challenging, elusive but urgent task for management. By focusing on organisational behaviour and international competitiveness, the study aims to contribute to the development of competitive (effective) organisations in South Africa. It is important to bear in mind that the study was exploratory in nature and intended to gain further insight into competitiveness. It seeks to build on existing studies of international competitiveness and organisational effectiveness in order to develop new management interventions and strategies, and at the same time to indicate directions for further research. In particular, it builds on previous studies of the impact of organisational behaviour on effectiveness. An important development in the study of corporate culture was the finding that the normative structure (corporate culture) has a significant impact on the performance of organisations. Because of this relationship, corporate culture must be regarded as one of the key success factors in any organisation. Corporate culture, however, always remains only a means to achieving objectives and never the objective itself. It must therefore be regarded as a hermeneutic (interpretative) key to organisational effectiveness. The performance of any organisation must take precedence over all other aspects of the organisation, even the corporate culture. The framework of corporate culture and effectiveness explains the interaction between the components of culture and the organisation's effectiveness. The assumptions, beliefs and value systems of an organisation form the basis of a set of management practices and behavioural patterns. These behavioural patterns are key factors in organisational effectiveness because of the close link between behaviour and the underlying value system. The framework identifies the elements (people, change, projects and control) that must be managed, as well as four mechanisms (involvement, adaptability and innovation, consistency and mission/direction) of culture that determine the effectiveness of the organisation. Corporate culture intervention strategies aimed at performance improvement should, in terms of the corporate culture and effectiveness framework, begin with the development of a set of shared or core values. A shared system of beliefs, values and symbols understood and accepted by all members of the organisation will have a strong and positive effect on the ability to reach consensus and coordinated action. This effect, as well as the nature of the organisation's culture, must be understood by everyone in the organisation. It must also enable them to understand the organisation's choice of priorities and thereby create opportunities for everyone to have an impact on the effectiveness of the organisation.
APA, Harvard, Vancouver, ISO, and other styles
42

Moloney, Peter. "From Common Market to European Union: Creating a New Model State?" Thesis, Boston College, 2014. http://hdl.handle.net/2345/3797.

Full text
Abstract:
Thesis advisor: James Cronin
In 1957, the Treaty of Rome was signed by six West European states to create the European Economic Community (EEC). Designed to foster a common internal market for a limited range of industrial goods and to define a customs union within the Six, it did not at the time particularly stand out among contemporary international organizations. However, by 1992, within the space of a single generation, this initially limited trade zone had been dramatically expanded into the world's largest trade bloc and had pooled substantial sovereignty among its member states on a range of core state responsibilities. Most remarkably, this transformation resulted from a thoroughly novel political experiment that combined traditional interstate cooperation among its growing membership with an unprecedented transfer of sovereignty to centralized institutions. Though still lacking the traditional institutions and legitimacy of a fully-fledged state, in many policy areas the European Union (EU) that emerged in 1992 was nonetheless collectively a global force. My dissertation argues that the organization's unprecedented transfer of national sovereignty challenged the very definition of the modern European state and its function. In structure and ambition, it represented far more than just a regional trade bloc among independent states: it became a unique political entity that effectively remodelled the fundamental blueprint of the conventional European state structure familiar to scholars for generations. How did such a dramatic transformation happen so quickly? I argue that three forces in particular were at play: the external pressures of globalization, the search for a new Western European and German identity within the Cold War world, and the often unintended consequences of the interaction between member state governments and the Community's supranational institutions. In particular, I examine the history of the EEC's monetary union, common foreign policy, common social policy and the single market to explain the impact of the above forces of change on the EEC's rapid transformation.
Thesis (PhD) — Boston College, 2014
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: History
APA, Harvard, Vancouver, ISO, and other styles
43

Kerloc’h, Gaëtan Samuel Corentin. "The organizational model of liberated companies: what they have in common?" reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/19432.

Full text
Abstract:
This study addresses the current debate about the existence of an organizational model in liberated companies, which affects the model's generalization and its transmission over time. Our purpose is to determine whether there is any consistency across the various organizational patterns in liberated companies. To reach this goal, 114 liberated organizations and twenty-five of their primary organizational patterns were identified in the literature. Then, a survey was sent to these 114 organizations to see how consistent these twenty-five patterns were across this universe of companies. Data were collected from thirty-nine of the 114 targeted companies. These data identified eleven features that were present in most of the thirty-nine liberated companies that completed the questionnaire. Ten other patterns were found in the majority of the sample, while four patterns were identified as scarce. It was also determined that larger corporations operate differently. The analysis that was conducted will help leaders to understand the features of liberated companies.
This dissertation addresses the debate on the existence of an organizational model in liberated companies. The objective is to determine whether there is any consistency across the various organizational patterns of liberated companies. To answer this question, 114 liberated companies were first identified in the literature and twenty-five organizational features present in these companies were listed. A survey was then sent to the 114 identified companies to verify the consistency (or not) of the twenty-five organizational features. Data were collected from 39 of the 114 companies. The analysis of the results indicated that eleven features were present in most of the 39 companies that completed the questionnaire. Ten other patterns were found in the majority of the sample, while four patterns were identified as rare. A different way of operating was also identified for larger companies. The analysis carried out will be useful for managers who want to understand better which organizational features are present in liberated companies.
APA, Harvard, Vancouver, ISO, and other styles
44

Rydén, Linda. "The EU common agricultural policy and its effects on trade." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-21403.

Full text
Abstract:
The common agricultural policy (CAP) is a much discussed policy in the European Union (EU). It allocates great sums to the European agricultural sector every year and has been accused of being trade distorting and outdated. This thesis takes a closer look at the protectionist measures the CAP has used. The policy's effects on trade are assessed employing the sugar industry as a reference case. Sugar is heavily protected and is one of the most distorted sectors in agriculture. The CAP's effects on trade in the sugar industry for ten countries in and outside the EU from 1991 to 2011 are estimated using a gravity model. This particular type of estimation has, to the author's knowledge, not been performed for the sugar industry before, which makes the study unique. The results of the empirical testing indicate that trade diversion occurs if one country is a member of the CAP and its trading partner is not. When both trading partners are outside the CAP cooperation, they are estimated to have a higher trade volume. This result indicates that the CAP decreases trade. Current economic theory, in particular the North-South model of trade developed by Krugman (1979), suggests that protectionism of non-competitive sectors should be abolished and funds should instead be directed to innovation and new technology. The CAP is in this sense not adapted to modern economic thought.
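As an illustration of the estimation strategy, the sketch below runs a log-linear gravity regression on synthetic bilateral trade data with CAP-membership dummies alongside GDP and distance. The specification and all coefficients are invented for demonstration and are not the thesis's estimates.

```python
# Minimal sketch (synthetic data): log-linear gravity model with CAP-membership dummies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
log_gdp_i = rng.normal(12, 1, n)
log_gdp_j = rng.normal(12, 1, n)
log_dist = rng.normal(7, 0.5, n)
one_in_cap = rng.integers(0, 2, n)                     # exactly one partner inside the CAP
both_out = rng.integers(0, 2, n) * (1 - one_in_cap)    # neither partner in the CAP

# Synthetic "true" relationship with trade diversion when only one side is in the CAP.
log_trade = (1.0 + 0.8 * log_gdp_i + 0.8 * log_gdp_j - 1.1 * log_dist
             - 0.4 * one_in_cap + 0.3 * both_out + rng.normal(0, 0.5, n))

X = sm.add_constant(np.column_stack([log_gdp_i, log_gdp_j, log_dist,
                                     one_in_cap, both_out]))
res = sm.OLS(log_trade, X).fit()
print(res.summary(xname=["const", "log_gdp_i", "log_gdp_j", "log_dist",
                         "one_in_cap", "both_outside_cap"]))
```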
APA, Harvard, Vancouver, ISO, and other styles
45

Guang, G.-J. "Model discretisation and accuracy assessment in an automated, adaptive finite element simulation system." Thesis, Swansea University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637183.

Full text
Abstract:
The finite element method has played an important role in advancing the understanding of physics, from material mechanics to plasma flow, and is an extremely versatile tool for faster and better prototyping of today's industrial products, ranging from sub-micron semiconductor devices to large scale flight vehicles and reservoir dams. The outstanding power of the finite element method lies in its capability to solve geometrically complicated problems. However, this capability can only be fulfilled by an appropriately constructed mesh. With the recent emergence of the adaptive finite element method, users are relieved from the difficulties involved in appropriate/optimal mesh design and an automatic adaptive finite element analysis seems within reach. However, the realisation of adaptive finite element methods requires extensive theoretical and numerical development, together with a redesign of system philosophy and infrastructure in order to integrate them properly into a smoothly operating system. It is this aspect of the finite element method that makes a modern finite element system drastically different from the more traditional mesh-based ones. This thesis is on the design and development of such an automated, adaptive finite element simulation system. The emphasis is on its automation and adaptivity. Central to the system is the geometry-based philosophy. The system comprises two crucial procedures, namely model discretisation and accuracy assessment. Mesh generation and mesh adaptation techniques are systematically reviewed. A geometry-based automatic 3D mesh generator, based on the 2-stage scheme of the unstructured approach exploiting the novel Delaunay simplexification algorithm, has been researched and successfully developed. A mesh adaptor has also been developed to handle mesh adaptation. The mesh adaptor combines the regeneration-based and node-based schemes of the h-adaptation approach. Other supporting modules such as the discretisation controller, automatic attribute assigner and solution mapper are also developed to form the complete model discretisation procedure.
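The unstructured, simplex-based meshing idea at the heart of the generator can be illustrated in a few lines; the sketch below builds a 2D Delaunay triangulation of random points with SciPy and computes a crude element-quality measure. The real generator is 3D, geometry-driven and adaptive, so this is only a conceptual stand-in.

```python
# Minimal sketch: Delaunay triangulation of a random 2D point set plus a crude quality check.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
points = rng.random((200, 2))          # random nodes in the unit square
tri = Delaunay(points)                 # Delaunay simplexification

print("number of triangles:", len(tri.simplices))
# Element quality proxy: ratio of shortest to longest edge per triangle.
p = points[tri.simplices]              # (n_tri, 3, 2) vertex coordinates
edges = np.linalg.norm(p - np.roll(p, 1, axis=1), axis=2)
quality = edges.min(axis=1) / edges.max(axis=1)
print("worst edge-length ratio:", quality.min())
```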
APA, Harvard, Vancouver, ISO, and other styles
46

Thalieb, Rio M. "An accuracy analysis of Army Material System Analysis Activity discrete reliability growth model." Thesis, Monterey, California : Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22849.

Full text
Abstract:
The accuracy of the discrete reliability growth model developed by the Army Material System Analysis Activity (AMSAA) is analysed. The mean, standard deviation, and 95 percent confidence interval of the reliability estimate obtained by simulating the AMSAA discrete reliability growth model are computed. The mean of the reliability estimate from the AMSAA discrete reliability growth model is compared with the mean of the reliability estimate from the Exponential discrete reliability growth model developed at the Naval Postgraduate School, and with the actual reliability used to generate the test data for the replications in the simulations. The testing plan simulated in this study assumes that mission tests (go/no-go) are performed until a predetermined number of failures occurs, at which time a modification is made. The main results are that the AMSAA discrete reliability growth model always performs well with concave growth patterns, but has difficulty tracking the actual reliability when the growth pattern is convex or constant and the specified number of failures equals one. Keywords: Reliability growth, Estimate, Mean, Standard deviation, Thesis
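The following Monte Carlo outline mimics the kind of simulation study described: go/no-go trials until a fixed number of failures, a reliability-improving modification, and summary statistics of the resulting estimate over many replications. It deliberately uses a naive success-fraction estimator rather than the AMSAA (or the Naval Postgraduate School Exponential) model, and the growth pattern and constants are assumptions.

```python
# Minimal sketch: Monte Carlo study of a naive reliability estimate under growth testing.
import numpy as np

rng = np.random.default_rng(3)

def one_replication(r_start=0.70, r_gain=0.05, failures_per_fix=1, n_configs=5):
    r = r_start
    successes = trials = 0
    for _ in range(n_configs):
        successes = trials = fails = 0
        while fails < failures_per_fix:      # go/no-go trials until the failure quota
            trials += 1
            if rng.random() < r:
                successes += 1
            else:
                fails += 1
        r = min(r + r_gain, 0.999)           # design modification after the failure(s)
    return successes / trials                # naive estimate for the final configuration

estimates = np.array([one_replication() for _ in range(2000)])
print("mean of the estimate     :", estimates.mean())
print("standard deviation       :", estimates.std(ddof=1))
print("95% interval (percentile):", np.percentile(estimates, [2.5, 97.5]))
```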
APA, Harvard, Vancouver, ISO, and other styles
47

Porter, Jason L. "Comparison of intraoral and extraoral scanners on the accuracy of digital model articulation." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4881.

Full text
Abstract:
Introduction: Orthodontists increasingly rely on digital models in clinical practice. The ability of modern scanners to articulate digital models must be scientifically evaluated. Methods: Twenty-five articulated digital models were produced from four digital scanners in five experimental groups. The resulting inter-arch measurements were compared to the gold standard. An acceptable range of 0.5 mm more or less than the gold standard was used for evaluation. Results: The iTero® and iTero® Element yielded all acceptable inter-arch measurements. The 3M™ True Definition and Ortho Insight 3D® with Regisil® bite registration produced four of six acceptable inter-arch measurements. The Ortho Insight 3D® with Coprwax™ bite registration yielded three of six acceptable inter-arch measurements. Conclusions: The iTero® and iTero® Element produced the most accurately articulated models. The 3M™ True Definition and Ortho Insight 3D® with Regisil® were the next most accurate. The Ortho Insight 3D® scanner with Coprwax™ was the least accurate method tested.
APA, Harvard, Vancouver, ISO, and other styles
48

Horii, M. Michael. "A Predictive Model for Multi-Band Optical Tracking System (MBOTS) Performance." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579658.

Full text
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
In the wake of sequestration, Test and Evaluation (T&E) groups across the U.S. are quickly learning to make do with less. For Department of Defense ranges and test facility bases in particular, the timing of sequestration could not be worse. Aging optical tracking systems are in dire need of replacement. What's more, the increasingly challenging missions of today require advanced technology, flexibility, and agility to support an ever-widening spectrum of scenarios, including short-range (0 − 5 km) imaging of launch events, long-range (50 km+) imaging of debris fields, directed energy testing, high-speed tracking, and look-down coverage of ground test scenarios, to name just a few. There is a pressing need for optical tracking systems that can be operated on a limited budget with minimal resources, staff, and maintenance, while simultaneously increasing throughput and data quality. Here we present a mathematical error model to predict system performance. We compare model predictions to site-acceptance test results collected from a pair of multi-band optical tracking systems (MBOTS) fielded at White Sands Missile Range. A radar serves as a point of reference to gauge system results. The calibration data and the triangulation solutions obtained during testing provide a characterization of system performance. The results suggest that the optical tracking system error model adequately predicts system performance, thereby supporting pre-mission analysis and conserving scarce resources for innovation and development of robust solutions. Along the way, we illustrate some methods of time-space-position information (TSPI) data analysis, define metrics for assessing system accuracy, and enumerate error sources impacting measurements. We conclude by describing technical challenges ahead and identifying a path forward.
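As a toy illustration of the kind of accuracy assessment described, the sketch below triangulates a target from two stations with noisy bearing measurements and reports an RMS position error. It is generic TSPI-style analysis with invented geometry and noise levels, not the MBOTS error model itself.

```python
# Minimal sketch: 2D two-station triangulation with noisy bearings and an RMS error metric.
import numpy as np

rng = np.random.default_rng(4)
A = np.array([0.0, 0.0])          # station A position (km)
B = np.array([10.0, 0.0])         # station B position (km)
target = np.array([6.0, 8.0])     # true target position (km)
sigma = np.deg2rad(0.02)          # assumed 1-sigma bearing noise (rad)

def triangulate(th_a, th_b):
    # Intersect the two bearing rays: A + t*u_a = B + s*u_b.
    u_a = np.array([np.cos(th_a), np.sin(th_a)])
    u_b = np.array([np.cos(th_b), np.sin(th_b)])
    t, _ = np.linalg.solve(np.column_stack([u_a, -u_b]), B - A)
    return A + t * u_a

true_a = np.arctan2(*(target - A)[::-1])
true_b = np.arctan2(*(target - B)[::-1])

errors = []
for _ in range(5000):
    est = triangulate(true_a + rng.normal(0, sigma),
                      true_b + rng.normal(0, sigma))
    errors.append(np.linalg.norm(est - target))

print("RMS position error (km):", np.sqrt(np.mean(np.square(errors))))
```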
APA, Harvard, Vancouver, ISO, and other styles
49

Lindén, Erik, and David Elofsson. "Model-based turbocharger control : A common approach for SI and CI engines." Thesis, Linköpings universitet, Institutionen för systemteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70288.

Full text
Abstract:
In this master's thesis, a turbine model and a common control structure for the turbocharger for SI and CI engines are developed. To design the control structure, simulations are done on an existing diesel engine model with a VGT. In order to be able to run simulations for engines with a wastegated turbine, the model is extended to include mass flow and turbine efficiency for that configuration. The developed model has a mean absolute relative error of 3.6 % for the turbine mass flow and 7.4 % for the turbine efficiency. The aim was to control the intake manifold pressure with good transients and to use the same control structure for VGT and wastegate. By using a common structure, development and calibration time can be reduced. The non-linearities have been reduced by using an inverted turbine model in the control structure, which consists of a PI-controller with feedforward. The controller can be tuned to give a fast response for CI engines and a slower response, but with less overshoot, for SI engines, which is preferable.
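The control structure described (feedforward from an inverted model plus PI feedback) can be sketched on a toy plant as follows; the real thesis inverts a turbine model on a full engine model, so the plant, gains and constants below are purely illustrative.

```python
# Minimal sketch: PI control with model-inversion feedforward on a toy first-order plant.
import numpy as np

dt, T_end = 0.01, 5.0
tau, K = 0.8, 2.0                 # toy plant: tau*dp/dt = -p + K*u
Kp, Ki = 1.5, 2.0                 # PI gains (would be tuned separately for SI/CI engines)

p, integ = 0.0, 0.0
log = []
for k in range(int(T_end / dt)):
    t = k * dt
    p_ref = 1.0 if t >= 0.5 else 0.0          # step in desired intake manifold pressure
    e = p_ref - p
    integ += e * dt
    u_ff = p_ref / K                          # feedforward from the inverted static plant model
    u = u_ff + Kp * e + Ki * integ            # total actuator command (VGT/wastegate analogue)
    p += dt * (-p + K * u) / tau              # forward-Euler plant update
    log.append((t, p_ref, p))

print("final pressure vs. setpoint:", log[-1][2], "vs", log[-1][1])
```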
APA, Harvard, Vancouver, ISO, and other styles
50

Lango, Allen Hana. "The role of common genetic variation in model polygenic and monogenic traits." Thesis, University of Exeter, 2010. http://hdl.handle.net/10871/11714.

Full text
Abstract:
The aim of this thesis is to explore the role of common genetic variation, identified through genome-wide association (GWA) studies, in human traits and diseases, using height as a model polygenic trait, type 2 diabetes as a model common polygenic disease, and maturity onset diabetes of the young (MODY) as a model monogenic disease. The wave of the initial GWA studies, such as the Wellcome Trust Case-Control Consortium (WTCCC) study of seven common diseases, substantially increased the number of common variants associated with a range of different multifactorial traits and diseases. The initial excitement, however, seems to have been followed by some disappointment that the identified variants explain a relatively small proportion of the genetic variance of the studied trait, and that only few large effect or causal variants have been identified. Inevitably, this has led to criticism of the GWA studies, mainly that the findings are of limited clinical, or indeed scientific, benefit. Using height as a model, Chapter 2 explores the utility of GWA studies in terms of identifying regions that contain relevant genes, and in answering some general questions about the genetic architecture of highly polygenic traits. Chapter 3 takes this further into a large collaborative study and the largest sample size in a GWA study to date, mainly focusing on demonstrating the biological relevance of the identified variants, even when a large number of associated regions throughout the genome is implicated by these associations. Furthermore, it shows examples of different features of the genetic architecture, such as allelic heterogeneity and pleiotropy. Chapter 4 looks at the predictive value and, therefore, clinical utility, of variants found to associate with type 2 diabetes, a common multifactorial disease that is increasing in prevalence despite known environmental risk factors. This is a disease where knowledge of the genetic risk has potentially substantial clinical relevance. Finally, Chapter 5 approaches the monogenic-polygenic disease bridge in the direction opposite to that approached in the past: most studies have investigated genes mutated in monogenic diseases as candidates for harboring common variants predisposing to related polygenic diseases. This chapter looks at the common type 2 diabetes variants as modifiers of disease onset in patients with a monogenic but clinically heterogeneous disease, maturity onset diabetes of the young (MODY).
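As a small illustration of how the predictive value of such variants is typically operationalised, the sketch below computes a simple additive genetic risk score (risk-allele counts weighted by per-variant log odds ratios) on synthetic genotypes; effect sizes and frequencies are invented and bear no relation to the thesis's actual variants.

```python
# Minimal sketch (synthetic data): additive genetic risk score from weighted allele counts.
import numpy as np

rng = np.random.default_rng(5)
n_people, n_variants = 1000, 20
freqs = rng.uniform(0.1, 0.5, n_variants)            # assumed risk-allele frequencies
log_or = rng.normal(0.10, 0.05, n_variants)          # assumed per-allele log odds ratios

# Genotypes: 0, 1 or 2 copies of the risk allele, drawn per variant.
genotypes = rng.binomial(2, freqs, size=(n_people, n_variants))
risk_score = genotypes @ log_or                      # weighted risk-allele count

print("mean score:", risk_score.mean())
print("top-decile cut-off:", np.percentile(risk_score, 90))
```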
APA, Harvard, Vancouver, ISO, and other styles