Dissertations / Theses on the topic 'Test error estimation'

To see the other types of publications on this topic, follow the link: Test error estimation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 29 dissertations / theses for your research on the topic 'Test error estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Steeno, Gregory Sean. "Robust and Nonparametric Methods for Topology Error Identification and Voltage Calibration in Power Systems Engineering." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/39305.

Full text
Abstract:
There is a growing interest in robust and nonparametric methods with engineering applications, due to the nature of the data. Here, we study two power systems engineering applications that employ or recommend robust and nonparametric methods: topology error identification and voltage calibration. Topology errors are a well-known, well-documented problem for utility companies. A topology error occurs when a line's status in a power network, whether active or inactive, is misclassified. This will lead to an incorrect Jacobian matrix used to estimate the unknown parameters of a network in a nonlinear regression model. We propose a solution using nonlinear regression techniques to identify the correct status of every line in the network by deriving a statistical model of the power flows and injections while employing Kirchhoff's Current Law. Simulation results on the IEEE-118 bus system showed that the methodology was able to detect where topology errors occurred as well as identify gross measurement errors. The Friedman Two-Way Analysis of Variance by Ranks test is advocated to calibrate voltage measurements at a bus in a power network. However, it was found that the Friedman test was only slightly more robust or resistant in the presence of discordant measurements than the classical F-test. The resistance of a statistical test is defined as the fraction of bad data necessary to switch a statistical conclusion. We mathematically derive the maximum resistance to rejection and to acceptance of the Friedman test, as well as the Brown-Mood test, and show that the Brown-Mood test has a higher maximum resistance to rejection and to acceptance than the Friedman test. In addition, we simulate the expected resistance to rejection and to acceptance of both tests and show that on average the Brown-Mood test is slightly more robust to rejection while on average the Friedman test is more robust to acceptance.
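The notion of resistance defined in the abstract (the fraction of bad data needed to switch a statistical conclusion) can be illustrated empirically. The sketch below is hypothetical: three voltage "meters" read the same bus states, one meter carries a calibration offset, and we count how many discordant rows must be corrupted before the Friedman test's rejection is overturned. scipy's Friedman test stands in for the thesis's analytical derivations.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Hypothetical setup: 3 voltage meters each measure the same 10 bus states;
# a systematic offset on meter 1 should be detected by the Friedman test.
true_v = rng.normal(1.0, 0.02, size=10)
meters = np.column_stack([true_v + rng.normal(0, 0.001, 10) for _ in range(3)])
meters[:, 1] += 0.01                      # meter 1 is mis-calibrated

stat, p = friedmanchisquare(meters[:, 0], meters[:, 1], meters[:, 2])
reject = p < 0.05                         # conclude the meters disagree

# Empirical "resistance to rejection": smallest fraction of corrupted rows
# needed to overturn the rejection (bad data pulling meter 1 back in line).
flipped = meters.copy()
n_bad = 0
for i in range(len(flipped)):
    flipped[i, 1] = flipped[i, 0]         # discordant row now agrees
    n_bad += 1
    _, p_new = friedmanchisquare(flipped[:, 0], flipped[:, 1], flipped[:, 2])
    if p_new >= 0.05:
        break
resistance = n_bad / len(flipped)
```

With an offset well above the meter noise, the test rejects strongly, and the resistance comes out as a small fraction of the sample, mirroring the abstract's point that rank tests are only moderately resistant.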
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
2

Albertson, K. V. "Pre-test estimation in a regression model with a mis-specified error covariance matrix." Thesis, University of Canterbury. Economics, 1993. http://hdl.handle.net/10092/4315.

Full text
Abstract:
This thesis considers some finite sample properties of a number of preliminary test (pre-test) estimators of the unknown parameters of a linear regression model that may have been mis-specified as a result of incorrectly assuming that the disturbance term has a scalar covariance matrix, and/or as a result of the exclusion of relevant regressors. The pre-test itself is a test for exact linear restrictions and is conducted using the usual Wald statistic, which provides a Uniformly Most Powerful Invariant test of the restrictions in a well specified model. The parameters to be estimated are the coefficient vector, the prediction vector (i.e. the expectation of the dependent variable conditional on the regressors), and the regression scale parameter. Note that while the problem of estimating the prediction vector is merely a special case of estimating the coefficient vector when the model is well specified, this is not the case when the model is mis-specified. The properties of each of these estimators in a well specified regression model have been examined in the literature, as have the effects of a number of different model mis-specifications, and we survey these results in Chapter Two. We will extend the existing literature by generalising the error covariance matrix in conjunction with allowing for possibly excluded regressors. To motivate the consideration of a nonscalar error covariance matrix in the context of a pre-test situation we briefly examine the literature on autoregressive and heteroscedastic error processes in Chapter Three. In Chapters Four, Five, Six, and Seven we derive the cumulative distribution function of the test statistic, and exact formulae for the bias and risk (under quadratic loss) of the unrestricted, restricted and pre-test estimators, in a model with a general error covariance matrix and possibly excluded relevant regressors. 
These formulae are data dependent and, to illustrate the results, are evaluated for a number of regression models and forms of error covariance matrix. In particular we determine the effects of autoregressive errors and heteroscedastic errors on each of the regression models under consideration. Our evaluations confirm the known result that the presence of a nonscalar error covariance matrix introduces a distortion into the pre-test power function and we show the effects of this on the pre-test estimators. In addition, we show that one effect of the mis-specification is that the pre-test and restricted estimators may be strictly dominated by the corresponding unrestricted estimator even if there are no relevant regressors excluded from the model. If there are relevant regressors excluded from the model it appears that the additional mis-specification of the error covariance matrix has little qualitative impact unless the coefficients on the excluded regressors are small in magnitude or the excluded regressors are not correlated with the included regressors. As one of the effects of the mis-specification is to introduce a distortion into the pre-test power function, in Chapter Eight we consider the problem of determining the optimal critical value (under the criterion of minimax regret) for the pre-test when estimating the regression coefficient vector. We show that the mis-specification of the error covariance matrix may have a substantial impact on the optimal critical value chosen for the pre-test under this criterion, although, generally, the actual size of the pre-test is relatively unaffected by increasing degrees of mis-specification. Chapter Nine concludes this thesis and provides a summary of the results obtained in the earlier chapters. In addition, we outline some possible future research topics in this general area.
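The pre-test estimator analysed in this thesis has a simple operational form: compute the Wald statistic for the linear restrictions, then report the restricted least-squares estimator if the statistic falls below the chosen critical value and the unrestricted estimator otherwise. A minimal sketch (well-specified model, hypothetical data, a 5% chi-square critical value):

```python
import numpy as np

rng = np.random.default_rng(1)
# Pre-test estimation sketch: test the restriction b2 = 0 with a Wald
# statistic, then report the restricted or unrestricted OLS fit.
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, 0.0])        # the restriction happens to hold
y = X @ beta_true + rng.normal(size=n)

b_ols, res, *_ = np.linalg.lstsq(X, y, rcond=None)
s2 = res[0] / (n - X.shape[1])               # residual variance estimate
R = np.array([[0.0, 0.0, 1.0]])
r = np.array([0.0])
XtX_inv = np.linalg.inv(X.T @ X)
wald = float((R @ b_ols - r).T
             @ np.linalg.inv(R @ XtX_inv @ R.T)
             @ (R @ b_ols - r) / s2)

critical = 3.84                              # ~chi-square(1), 5% level
if wald < critical:
    # restricted LS: b_r = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (R b - r)
    b_pre = b_ols - XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T) @ (R @ b_ols - r)
else:
    b_pre = b_ols
```

The thesis's contribution is the finite-sample analysis of this two-stage rule when the error covariance is not scalar; the sketch only shows the mechanics of the estimator itself.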
APA, Harvard, Vancouver, ISO, and other styles
3

Kutluay, Umit. "Aerodynamic Parameter Estimation Using Flight Test Data." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613786/index.pdf.

Full text
Abstract:
This doctoral study aims to develop a methodology for use in determining aerodynamic models and parameters from actual flight test data for different types of autonomous flight vehicles. The stepwise regression method and the equation error method are utilized for aerodynamic model identification and parameter estimation. A closed-loop aerodynamic parameter estimation approach, which can be used to fine-tune the model parameters, is also applied in this study. A genetic algorithm is used as the optimization kernel for this purpose. In the optimization scheme, an input error cost function is used together with a final position penalty, as opposed to the widely utilized output error cost function. Available methods in the literature were developed for, and mostly applied to, the aerodynamic system identification problem of piloted aircraft; only a very limited number of studies on autonomous vehicles are available in the open literature. This doctoral study shows the applicability of the existing methods to the aerodynamic model identification and parameter estimation problem of autonomous vehicles. Practical considerations for the application of model structure determination methods to autonomous vehicles are also not well defined in the literature, and this study serves as a guide to these considerations.
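For a model that is linear in its parameters, the equation error method mentioned above reduces to a least-squares regression of the measured force or moment coefficient on the measured states. A minimal sketch with hypothetical pitching-moment derivatives (none of the numbers come from the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)
# Equation-error sketch: the pitching-moment coefficient is modelled as
# linear in its stability derivatives, so a least-squares fit to measured
# flight states recovers the parameters directly.
n = 200
alpha = rng.uniform(-0.1, 0.1, n)        # angle of attack [rad]
q_hat = rng.uniform(-0.05, 0.05, n)      # normalised pitch rate
delta_e = rng.uniform(-0.2, 0.2, n)      # elevator deflection [rad]
theta_true = np.array([0.02, -0.8, -15.0, -1.2])   # Cm0, Cm_alpha, Cm_q, Cm_de

X = np.column_stack([np.ones(n), alpha, q_hat, delta_e])
Cm_meas = X @ theta_true + rng.normal(0, 0.002, n)  # "measured" coefficient

theta_hat, *_ = np.linalg.lstsq(X, Cm_meas, rcond=None)
```

In flight-test practice the regressors themselves are noisy measurements, which is what makes the closed-loop refinement step in the thesis useful; the sketch assumes clean regressors.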
APA, Harvard, Vancouver, ISO, and other styles
4

Jin, Fei. "Essays in Spatial Econometrics: Estimation, Specification Test and the Bootstrap." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365612737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Langer, Michelle M. Thissen David. "A reexamination of Lord's Wald test for differential item functioning using item response theory and modern error estimation." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,2084.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2008.
Title from electronic title page (viewed Feb. 17, 2009). "... in partial fulfillment of the requirements for the degree of Doctor in Philosophy in the Department of Psychology Quantitative." Discipline: Psychology; Department/School: Psychology.
APA, Harvard, Vancouver, ISO, and other styles
6

Tanner, Whitney Ford. "Improved Standard Error Estimation for Maintaining the Validities of Inference in Small-Sample Cluster Randomized Trials and Longitudinal Studies." UKnowledge, 2018. https://uknowledge.uky.edu/epb_etds/20.

Full text
Abstract:
Data arising from Cluster Randomized Trials (CRTs) and longitudinal studies are correlated and generalized estimating equations (GEE) are a popular analysis method for correlated data. Previous research has shown that analyses using GEE could result in liberal inference due to the use of the empirical sandwich covariance matrix estimator, which can yield negatively biased standard error estimates when the number of clusters or subjects is not large. Many techniques have been presented to correct this negative bias; however, use of these corrections can still result in biased standard error estimates and thus test sizes that are not consistently at their nominal level. Therefore, there is a need for an improved correction such that nominal type I error rates will consistently result. First, GEEs are becoming a popular choice for the analysis of data arising from CRTs. We study the use of recently developed corrections for empirical standard error estimation and the use of a combination of two popular corrections. In an extensive simulation study, we find that nominal type I error rates can be consistently attained when using an average of two popular corrections developed by Mancl and DeRouen (2001, Biometrics 57, 126-134) and Kauermann and Carroll (2001, Journal of the American Statistical Association 96, 1387-1396) (AVG MD KC). Use of this new correction was found to notably outperform the use of previously recommended corrections. Second, data arising from longitudinal studies are also commonly analyzed with GEE. We conduct a simulation study, finding two methods to attain nominal type I error rates more consistently than other methods in a variety of settings: First, a recently proposed method by Westgate and Burchett (2016, Statistics in Medicine 35, 3733-3744) that specifies both a covariance estimator and degrees of freedom, and second, AVG MD KC with degrees of freedom equaling the number of subjects minus the number of parameters in the marginal model.
Finally, stepped wedge trials are an increasingly popular alternative to traditional parallel cluster randomized trials. Such trials often utilize a small number of clusters and numerous time intervals, and these components must be considered when choosing an analysis method. A generalized linear mixed model containing a random intercept and fixed time and intervention covariates is the most common analysis approach. However, the sole use of a random intercept applies assumptions that will be violated in practice. We show, using an extensive simulation study based on a motivating example and a more general design, alternative analysis methods are preferable for maintaining the validity of inference in small-sample stepped wedge trials with binary outcomes. First, we show the use of generalized estimating equations, with an appropriate bias correction and a degrees of freedom adjustment dependent on the study setting type, will result in nominal type I error rates. Second, we show the use of a cluster-level summary linear mixed model can also achieve nominal type I error rates for equal cluster size settings.
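A sketch of the AVG MD KC idea described above, for the simplest case of an identity-link GEE with an independence working structure (cluster-robust least squares): the Mancl-DeRouen correction rescales each cluster's residual by (I - H_i)^{-1}, Kauermann-Carroll by (I - H_i)^{-1/2}, and the two resulting sandwich estimators are averaged. The toy CRT design and all numbers are hypothetical, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy cluster-randomized design: 10 clusters of 5, cluster-level treatment,
# a shared cluster effect inducing within-cluster correlation.
n_clusters, m = 10, 5
treat = (np.arange(n_clusters) < 5).astype(float)   # 5 treated, 5 control
X = np.column_stack([np.ones(n_clusters * m), np.repeat(treat, m)])
cluster = np.repeat(np.arange(n_clusters), m)
y = 0.5 + np.repeat(rng.normal(0, 1, n_clusters), m) + rng.normal(0, 1, n_clusters * m)

bread = np.linalg.inv(X.T @ X)
beta = bread @ X.T @ y
resid = y - X @ beta

def corrected_meat(power):
    """Sum over clusters of X_i' (I-H_i)^{-power} e_i e_i' (I-H_i)^{-power} X_i."""
    meat = np.zeros((2, 2))
    for c in range(n_clusters):
        idx = cluster == c
        Xi, ei = X[idx], resid[idx]
        w, V = np.linalg.eigh(np.eye(m) - Xi @ bread @ Xi.T)  # symmetric, PD here
        g = Xi.T @ (V @ np.diag(w ** -power) @ V.T) @ ei
        meat += np.outer(g, g)
    return meat

V_md = bread @ corrected_meat(1.0) @ bread   # Mancl-DeRouen correction
V_kc = bread @ corrected_meat(0.5) @ bread   # Kauermann-Carroll correction
V_avg = (V_md + V_kc) / 2                    # the AVG MD KC estimator
se_treatment = float(np.sqrt(V_avg[1, 1]))
```

Since MD inflates the residuals more than KC, the averaged estimator sits between the two, which is the intuition behind its improved small-sample test sizes.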
APA, Harvard, Vancouver, ISO, and other styles
7

Lehmann, Rüdiger. "Observation error model selection by information criteria vs. normality testing." Hochschule für Technik und Wirtschaft Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-211721.

Full text
Abstract:
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that the model selection by AIC has some advantages over the AD test.
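The comparison described above can be reproduced in miniature: fit a straight line, score Gaussian and Laplace error models by AIC, and run the Anderson-Darling test on the residuals. For simplicity, both likelihoods are evaluated at the least-squares residuals (a proper Laplace MLE would be a least-absolute-deviations fit), so this is an illustrative sketch rather than the paper's Monte Carlo study; the data are hypothetical.

```python
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(4)
# Test problem from the abstract: fitting a straight line.
x = np.linspace(0, 1, 100)
y = 1.0 + 2.0 * x + rng.laplace(0, 0.1, x.size)    # errors actually Laplace

slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
n, k = resid.size, 3                               # slope, intercept, scale

# Maximised log-likelihoods under each error model (scale at its MLE)
sigma = resid.std()
ll_gauss = -0.5 * n * (np.log(2 * np.pi * sigma**2) + 1)
b = np.abs(resid).mean()
ll_laplace = -n * (np.log(2 * b) + 1)

aic_gauss = 2 * k - 2 * ll_gauss
aic_laplace = 2 * k - 2 * ll_laplace
best = "laplace" if aic_laplace < aic_gauss else "gauss"

ad = anderson(resid, dist="norm")                  # AD statistic + critical values
normality_rejected = ad.statistic > ad.critical_values[2]   # 5% level
```

The AIC route directly ranks candidate error models, whereas the AD test can only reject or fail to reject normality, which is one reason the paper finds information criteria advantageous.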
APA, Harvard, Vancouver, ISO, and other styles
8

Lehmann, Rüdiger. "Observation error model selection by information criteria vs. normality testing." Hochschule für Technik und Wirtschaft Dresden, 2015. https://htw-dresden.qucosa.de/id/qucosa%3A23301.

Full text
Abstract:
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that the model selection by AIC has some advantages over the AD test.
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Yu. "Estimation, Decision and Applications to Target Tracking." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1758.

Full text
Abstract:
This dissertation mainly consists of three parts. The first part proposes generalized linear minimum mean-square error (GLMMSE) estimation for nonlinear point estimation. The second part proposes a recursive joint decision and estimation (RJDE) algorithm for joint decision and estimation (JDE). The third part analyzes the performance of the sequential probability ratio test (SPRT) when the log-likelihood ratios (LLR) are independent but not identically distributed. Linear minimum mean-square error (LMMSE) estimation plays an important role in nonlinear estimation: it searches for the best estimator in the set of all estimators that are linear in the measurement. A GLMMSE estimation framework is proposed in this dissertation. It employs a vector-valued measurement transform function (MTF) and finds the best estimator among all estimators that are linear in the MTF. Several design guidelines for the MTF, based on a numerical example, are provided. A RJDE algorithm based on a generalized Bayes risk is proposed in this dissertation for dynamic JDE problems. It is computationally efficient for dynamic problems where data are made available sequentially. Further, since existing performance measures for estimation or decision are not effective for evaluating JDE algorithms, a joint performance measure is proposed for JDE algorithms for dynamic problems. The RJDE algorithm is demonstrated by applications to joint tracking and classification as well as joint tracking and detection in target tracking. The performance of the SPRT is characterized by two important functions: the operating characteristic (OC) and the average sample number (ASN). These two functions have been studied extensively under the assumption of independent and identically distributed (i.i.d.) LLRs, which is too stringent for many applications. This dissertation relaxes the requirement of identical distribution. Two inductive equations governing the OC and ASN are developed.
Unfortunately, they have non-unique solutions in the general case. They do have unique solutions in two special cases: (a) the LLR sequence converges in distribution and (b) the LLR sequence has periodic distributions. Further, the analysis can be readily extended to evaluate the performance of the truncated SPRT and the cumulative sum test.
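The OC and ASN functions named above are easy to approximate by Monte Carlo in the classical i.i.d. Gaussian case, which is the baseline the dissertation generalises away from. Thresholds follow Wald's approximations; the hypotheses and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
# SPRT sketch: H0: N(0,1) vs H1: N(0.5,1), Wald thresholds for
# alpha = beta = 0.05. OC = P(accept H0); ASN = mean stopping time.
mu0, mu1 = 0.0, 0.5
alpha = beta = 0.05
A = np.log((1 - beta) / alpha)     # accept H1 when cumulative LLR >= A
B = np.log(beta / (1 - alpha))     # accept H0 when cumulative LLR <= B

def sprt(true_mu, n_runs=2000, n_max=10_000):
    accept_h0, samples = 0, []
    for _ in range(n_runs):
        llr, n = 0.0, 0
        while B < llr < A and n < n_max:
            x = rng.normal(true_mu, 1.0)
            llr += (mu1 - mu0) * x - (mu1**2 - mu0**2) / 2  # Gaussian LLR step
            n += 1
        accept_h0 += llr <= B
        samples.append(n)
    return accept_h0 / n_runs, float(np.mean(samples))      # OC value, ASN

oc_h0, asn_h0 = sprt(mu0)
oc_h1, asn_h1 = sprt(mu1)
```

Under non-identically distributed LLRs the increment distribution changes with n, which is exactly why the closed-form i.i.d. results no longer apply and the dissertation's inductive equations are needed.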
APA, Harvard, Vancouver, ISO, and other styles
10

Lauer, Peccoud Marie-Reine. "Méthodes statistiques pour le controle de qualité en présence d'erreurs de mesure." Université Joseph Fourier (Grenoble), 1997. http://www.theses.fr/1997NICE5136.

Full text
Abstract:
When the quality of a batch of parts must be controlled from noisy measurements of a characteristic of those parts, decision errors detrimental to quality can occur. It is therefore essential to control the risks incurred in order to guarantee the final quality of the delivery. A part is considered defective or not according to whether the corresponding value g of the characteristic is above or below a given value g0. We assume that, owing to the imperfection of the measuring instrument, the observed measurement m of this value has the form f(g) + e, where f is an increasing function such that the value f(g0) is known, and e is a zero-mean random error of given variance. We first examine the problem of setting a rejection threshold m so that, when a batch is sorted by accepting or rejecting each part according to whether its measurement is below or above m, a given quality objective for the sorted batch is met. We then consider the problem of testing the overall quality of a batch from measurements on a sample of parts drawn from it. For both types of problem, under various quality objectives, we propose solutions, with particular attention to the case where the function f is affine and the error e and the variable g are Gaussian. Simulation results allow us to assess the performance of the proposed control procedures and their robustness to departures from the assumptions used in the theoretical developments.
APA, Harvard, Vancouver, ISO, and other styles
11

Öhman, Marie-Louise. "Aspects of analysis of small-sample right censored data using generalized Wilcoxon rank tests." Doctoral thesis, Umeå universitet, Statistiska institutionen, 1994. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-7313.

Full text
Abstract:
The estimated bias and variance of commonly applied and jackknife variance estimators, and the observed significance level and power of the standardised generalized Wilcoxon linear rank sum tests of Gehan and Prentice, are compared in a Monte Carlo simulation study. The variance estimators are the permutational-, the conditional permutational- and the jackknife variance estimators of the test statistic of Gehan, and the asymptotic- and the jackknife variance estimators of the test statistic of Prentice. In unbalanced small sample size problems with right censoring, the commonly applied variance estimators for the generalized Wilcoxon rank test statistics of Gehan and Prentice may be biased. In the simulation study it appears that variance properties and observed level and power may be improved by using the jackknife variance estimator. To establish the sensitivity to gross errors and misclassifications for standardised generalized Wilcoxon linear rank sum statistics in small samples with right censoring, the sensitivity curves of Tukey are used. For a certain combined sample, which might contain gross errors, a relatively simple method is needed to establish the applicability of the inference drawn from the selected rank test. One way is to use the change of decision point, which in this thesis is defined as the smallest proportion of altered positions resulting in an opposite decision. When little is known about the shape of a distribution function, non-parametric estimates for the location parameter are found by making use of censored one-sample- and two-sample rank statistics. Methods for constructing censored small sample confidence intervals and asymptotic confidence intervals for a location parameter are also considered. Generalisations of the solutions from uncensored one-sample and two-sample rank tests are utilised.
A Monte Carlo simulation study indicates that rank estimators may have smaller absolute estimated bias and smaller estimated mean squared error than a location estimator derived from the Product-Limit estimator of the survival distribution function. The ideas described and discussed are illustrated with data from a clinical trial of Head and Neck cancer.
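The jackknife variance estimator discussed above can be sketched generically (this is not the Gehan- or Prentice-specific estimator from the thesis): recompute the statistic on each leave-one-out sample and measure the spread of the replicates. For the sample mean the jackknife reproduces the usual s^2/n exactly, which makes a convenient check.

```python
import numpy as np

rng = np.random.default_rng(6)

def jackknife_variance(data, statistic):
    """(n-1)/n times the sum of squared deviations of the leave-one-out
    replicates of the statistic from their mean."""
    n = len(data)
    reps = np.array([statistic(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)

data = rng.exponential(1.0, size=30)
var_mean = jackknife_variance(data, np.mean)
# For the sample mean this equals the classical estimate s^2 / n.
```

For censored rank statistics the same leave-one-out recipe applies, with the statistic recomputed on the reduced censored sample each time.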
APA, Harvard, Vancouver, ISO, and other styles
12

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Dufresne, Jean-Louis. "Etude et developpement d'une procedure experimentale pour l'identification des parametres d'un modele thermique de capteurs solaires a air en regime dynamique." Paris 7, 1987. http://www.theses.fr/1987PA077107.

Full text
Abstract:
Construction of a test bench for studying the behaviour of a solar air collector under dynamic conditions (variations of the irradiance and of the flow rate through the collector). Analysis of solar irradiance data recorded every minute for one year at Orsay (France). An identification method for ten free parameters of a dynamic air-collector model, based on the measurement of a single system output: the extracted power. Discussion of the estimation of measurement errors. Application to an industrial collector (the ACRET collector).
APA, Harvard, Vancouver, ISO, and other styles
14

Tabatabaey, S. M. Mehdi (Seyed Mohammad Mehdi). "Preliminary test approach estimation: regression model with spherically symmetric errors." Carleton University, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
15

Lee, Dong Jin. "Essays on optimal tests for parameter instability." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3304195.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed June 16, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 158-164).
APA, Harvard, Vancouver, ISO, and other styles
16

Gutiérrez, Ayala Evelyn Patricia. "Estimation of the disease prevalence when diagnostic tests are subject to classification error: bayesian approach." Master's thesis, Pontificia Universidad Católica del Perú, 2016. http://tesis.pucp.edu.pe/repositorio/handle/123456789/7631.

Full text
Abstract:
The estimation of the prevalence of a disease, defined as the number of diseased cases in a population divided by the population size, can be carried out with great precision when a 100% accurate test, also called a gold standard, exists. In many cases, however, because of the high cost of diagnostic tests or technological limitations, no gold standard exists and it must be replaced by one or more cheaper diagnostic tests with lower sensitivity or specificity. This study focuses on two Bayesian approaches to prevalence estimation when results from a 100% accurate test are not attainable. The first is a model with two parameters that accounts for the association between test results. The second proposes the use of Bayesian model averaging to combine the results of four models, each of which makes a different assumption about the association between the results of the diagnostic tests. Both approaches are studied through simulations that assess their performance under different scenarios. Finally, these techniques are used to estimate the prevalence of chronic kidney disease in Peru with data from the CRONICAS cohort study (Francis et al., 2015).
Thesis
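The basic building block of such models can be sketched as follows: with one imperfect test of known sensitivity and specificity, the probability of a positive result is P(+) = pi*Se + (1-pi)*(1-Sp), and a posterior for the prevalence pi follows from a prior and the binomial likelihood. The numbers and the uniform prior below are hypothetical, and a grid approximation stands in for the thesis's richer multi-test and model-averaging machinery.

```python
import numpy as np

# Hypothetical single-test scenario with known test accuracy.
se, sp = 0.85, 0.95
n, y = 500, 110                      # tested subjects, observed positives

grid = np.linspace(0.0, 1.0, 2001)   # grid over the prevalence pi
p_pos = grid * se + (1 - grid) * (1 - sp)          # P(test+ | pi)
log_post = y * np.log(p_pos) + (n - y) * np.log1p(-p_pos)  # uniform prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

post_mean = np.sum(grid * post)
ci = grid[np.searchsorted(post.cumsum(), [0.025, 0.975])]  # 95% credible interval
```

Note that the posterior mean lands below the apparent prevalence y/n, since part of the observed positives is attributed to the test's imperfect specificity.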
APA, Harvard, Vancouver, ISO, and other styles
17

Feuardent, Valérie. "Amélioration des modèles par recalage : application aux structures spatiales." Cachan, Ecole normale supérieure, 1997. http://www.theses.fr/1997DENS0019.

Full text
Abstract:
Computational models must make it possible to predict the behaviour of complex structures accurately. The aim is to minimise the distance between computed results (finite-element models) and experimental data (free vibrations). To this end, the iterative localisation-correction technique based on the constitutive-relation error measure (including a dynamic error measure) is used. A reduced projection basis is proposed for updating models with a large number of degrees of freedom. This basis, built by means of a substructuring method, has the particular property of being associated with the experimental tests. The robustness of the method to noise is considered. In terms of results, the step that localises the modelling errors performs particularly well, and the inverse problem of determining the parameters in the correction process poses no particular difficulty. The errors made in modelling masses and stiffnesses are corrected within a few iterations.
APA, Harvard, Vancouver, ISO, and other styles
18

Mai, Hoang Bao An. "Analyse de performance d'un système d'authentification utilisant des codes graphiques." Thesis, Ecole centrale de Lille, 2014. http://www.theses.fr/2014ECLI0017/document.

Full text
Abstract:
We study in this thesis the impact of an authentication system based on 2D graphical codes that are corrupted by physically unclonable noise such as the noise introduced by a printing process. The core idea is that a printing process at very high resolution can be seen as a stochastic process and hence produces noise, due to the nature of different elements such as the randomness of paper fibers, the physical properties of the ink drops, the dot addressability of the printer, etc. We consider a scenario where the opponent may estimate the original graphical code and try to reproduce a forged copy using his own printing process in order to fool the receiver. Our first solution for performing authentication is to use hypothesis testing on the observed memoryless sequences of a printed graphical code, under the assumption that the print-and-acquisition channels of the legitimate printer and of the counterfeiter are known a priori and memoryless. In this context we derive reliable approximations of the error probabilities via exponential bounds and the large deviation principle. We then analyse a more realistic scenario that takes into account the estimation of the counterfeiter's printing channel, and we measure the impact of this estimation step on the performance of the authentication system. We show that it is possible both to compute the distribution of the probability of non-detection and to compute the average performance of the authentication system when the opponent channel has to be estimated. The last part of this thesis optimises the legitimate printer's channel through a minimax game.
APA, Harvard, Vancouver, ISO, and other styles
19

Sawade, Christoph. "Active evaluation of predictive models." PhD thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2013/6558/.

Full text
Abstract:
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. 
We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
Maschinelles Lernen befasst sich mit Algorithmen zur Inferenz von Vorhersagemodelle aus komplexen Daten. Vorhersagemodelle sind Funktionen, die einer Eingabe – wie zum Beispiel dem Text einer E-Mail – ein anwendungsspezifisches Zielattribut – wie „Spam“ oder „Nicht-Spam“ – zuweisen. Sie finden Anwendung beim Filtern von Spam-Nachrichten, bei der Text- und Gesichtserkennung oder auch bei der personalisierten Empfehlung von Produkten. Um ein Modell in der Praxis einzusetzen, ist es notwendig, die Vorhersagequalität bezüglich der zukünftigen Anwendung zu schätzen. Für diese Evaluierung werden Instanzen des Eingaberaums benötigt, für die das zugehörige Zielattribut bekannt ist. Instanzen, wie E-Mails, Bilder oder das protokollierte Nutzerverhalten von Kunden, stehen häufig in großem Umfang zur Verfügung. Die Bestimmung der zugehörigen Zielattribute ist jedoch ein manueller Prozess, der kosten- und zeitaufwendig sein kann und mitunter spezielles Fachwissen erfordert. Ziel dieser Arbeit ist die genaue Schätzung der Vorhersagequalität eines gegebenen Modells mit einer minimalen Anzahl von Testinstanzen. Wir untersuchen aktive Evaluierungsprozesse, die mit Hilfe einer Wahrscheinlichkeitsverteilung Instanzen auswählen, für die das Zielattribut bestimmt wird. Die Vorhersagequalität kann anhand verschiedener Kriterien, wie der Fehlerrate, des mittleren quadratischen Verlusts oder des F-measures, bemessen werden. Wir leiten die Wahrscheinlichkeitsverteilungen her, die den Schätzfehler bezüglich eines gegebenen Maßes minimieren. Der verbleibende Schätzfehler lässt sich anhand von Konfidenzintervallen quantifizieren, die sich aus der Verteilung des Schätzers ergeben. In vielen Anwendungen bestimmen individuelle Eigenschaften der Instanzen die Kosten, die für die Bestimmung des Zielattributs anfallen. So unterscheiden sich Dokumente beispielsweise in der Textlänge und dem technischen Anspruch. 
Diese Eigenschaften beeinflussen die Zeit, die benötigt wird, mögliche Zielattribute wie das Thema oder die Relevanz zuzuweisen. Wir leiten unter Beachtung dieser instanzspezifischen Unterschiede die optimale Verteilung her. Die entwickelten Evaluierungsmethoden werden auf verschiedenen Datensätzen untersucht. Wir analysieren in diesem Zusammenhang Bedingungen, unter denen die aktive Evaluierung genauere Schätzungen liefert als der Standardansatz, bei dem Instanzen zufällig aus der Testverteilung gezogen werden. Eine verwandte Problemstellung ist der Vergleich von zwei Modellen. Um festzustellen, welches Modell in der Praxis eine höhere Vorhersagequalität aufweist, wird eine Menge von Testinstanzen ausgewählt und das zugehörige Zielattribut bestimmt. Ein anschließender statistischer Test erlaubt Aussagen über die Signifikanz der beobachteten Unterschiede. Die Teststärke hängt von der Verteilung ab, nach der die Instanzen ausgewählt wurden. Wir bestimmen die Verteilung, die die Teststärke maximiert und damit die Wahrscheinlichkeit minimiert, sich für das schlechtere Modell zu entscheiden. Des Weiteren geben wir eine Möglichkeit an, den entwickelten Ansatz für den Vergleich von mehreren Modellen zu verwenden. Wir zeigen empirisch, dass die aktive Evaluierungsmethode im Vergleich zur zufälligen Auswahl von Testinstanzen in vielen Anwendungen eine höhere Teststärke aufweist. Im letzten Teil der Arbeit werden das Konzept der aktiven Evaluierung und das des aktiven Modellvergleichs auf Rankingprobleme angewendet. Wir leiten die optimalen Verteilungen für das Schätzen der Qualitätsmaße Discounted Cumulative Gain und Expected Reciprocal Rank her. Eine empirische Studie zur Evaluierung von Suchmaschinen zeigt, dass die neu entwickelten Verfahren signifikant genauere Schätzungen der Rankingqualität liefern als die untersuchten Referenzverfahren.
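The core idea of active evaluation described in this abstract can be illustrated with a small sketch (not Sawade's exact derivation): instances are drawn from an instrumental distribution q rather than the test distribution p, and importance weights p/q keep the error-rate estimate unbiased. The pool, the predictor, and the choice of q below are all invented for the example; p is assumed uniform over the pool, and a self-normalized estimator is used.

```python
import random

def active_risk_estimate(pool, predict, q_weights, n_label, seed=0):
    """Estimate the error rate by sampling instances from an instrumental
    distribution q and reweighting with p(x)/q(x) (here p is uniform)."""
    rng = random.Random(seed)
    total_q = sum(q_weights)
    probs = [w / total_q for w in q_weights]
    p_uniform = 1.0 / len(pool)
    num = den = 0.0
    for _ in range(n_label):
        # draw index i with probability probs[i] (querying its label)
        i = rng.choices(range(len(pool)), weights=probs, k=1)[0]
        x, y = pool[i]
        w = p_uniform / probs[i]           # importance weight p(x)/q(x)
        num += w * (predict(x) != y)
        den += w
    return num / den                        # self-normalized estimator

# toy pool: the model errs exactly on instances where x % 5 == 0
pool = [(x, x % 5 == 0) for x in range(1000)]
predict = lambda x: False                   # always predicts the majority label
# instrumental distribution that over-samples the likely-error region
q = [3.0 if x % 5 == 0 else 1.0 for x, _ in pool]
est = active_risk_estimate(pool, predict, q, n_label=2000)
true_err = sum(y for _, y in pool) / len(pool)   # exact error rate: 0.2
```

With the over-sampling instrumental distribution, the weighted estimate concentrates around the true error rate with lower variance than uniform sampling would give for the same labeling budget.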
APA, Harvard, Vancouver, ISO, and other styles
20

Garcia, Luz Mery González. "Modelos baseados no planejamento para análise de populações finitas." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-19062008-183609/.


Full text
Abstract:
Estudamos o problema de obtenção de estimadores/preditores ótimos para combinações lineares de respostas coletadas de uma população finita por meio de amostragem aleatória simples. Nesse contexto, estendemos o modelo misto para populações finitas proposto por Stanek, Singer & Lencina (2004, Journal of Statistical Planning and Inference) para casos em que se incluem erros de medida (endógenos e exógenos) e informação auxiliar. Admitindo que as variâncias são conhecidas, mostramos que os estimadores/preditores propostos têm erro quadrático médio menor dentro da classe dos estimadores lineares não viciados. Por meio de estudos de simulação, comparamos o desempenho desses estimadores/preditores empíricos, i.e., obtidos com a substituição das componentes de variância por estimativas, com aquele de competidores tradicionais. Também, estendemos esses modelos para análise de estudos com estrutura do tipo pré-teste/pós-teste. Também por intermédio de simulação, comparamos o desempenho dos estimadores empíricos com o desempenho do estimador obtido por meio de técnicas clássicas de análise de medidas repetidas e com o desempenho do estimador obtido via análise de covariância por meio de mínimos quadrados, concluindo que os estimadores/ preditores empíricos apresentaram um menor erro quadrático médio e menor vício. Em geral, sugerimos o emprego dos estimadores/preditores empíricos propostos para dados com distribuição assimétrica ou amostras pequenas.
We consider optimal estimation of finite population parameters from data obtained via simple random sampling. In this context, we extend the finite population mixed model proposed by Stanek, Singer & Lencina (2004, Journal of Statistical Planning and Inference) by including measurement errors (endogenous or exogenous) and auxiliary information. Assuming that the variance components are known, we show that the proposed estimators/predictors have the smallest mean squared error in the class of linear unbiased estimators. Using simulation studies, we compare the performance of the empirical estimators/predictors, obtained by replacing the variance components with estimates, with that of traditional competitors. We also extend the finite population mixed model to data obtained via pretest-posttest designs. Through simulation studies, we compare the performance of the empirical estimator of the difference in gain between groups with that of the usual repeated measures estimator and of the usual analysis of covariance estimator obtained via ordinary least squares. The empirical estimators have smaller mean squared error and bias than the alternative estimators under consideration. In general, we recommend the use of the proposed estimators/predictors for either asymmetric response distributions or small samples.
APA, Harvard, Vancouver, ISO, and other styles
21

Percy, Edward Richard Jr. "Corrected LM goodness-of-fit tests with applicaton to stock returns." The Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=osu1134416514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Schepsmeier, Ulf [Verfasser], Claudia [Akademischer Betreuer] Czado, Kjersti [Akademischer Betreuer] Aas, and Peter X. K. [Akademischer Betreuer] Song. "Estimating standard errors and efficient goodness-of-fit tests for regular vine copula models / Ulf Schepsmeier. Gutachter: Kjersti Aas ; Peter X. K. Song ; Claudia Czado. Betreuer: Claudia Czado." München : Universitätsbibliothek der TU München, 2014. http://d-nb.info/1047883562/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Tardivel, Patrick. "Représentation parcimonieuse et procédures de tests multiples : application à la métabolomique." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30316/document.

Full text
Abstract:
Considérons un vecteur gaussien Y de loi N (m,sigma²Idn) et X une matrice de dimension n x p avec Y observé, m inconnu, Sigma et X connus. Dans le cadre du modèle linéaire, m est supposé être une combinaison linéaire des colonnes de X. En petite dimension, lorsque n ≥ p et que ker (X) = 0, il existe alors un unique paramètre Beta* tel que m = X Beta* ; on peut alors réécrire Y sous la forme Y = X Beta* + Epsilon. Dans le cadre du modèle linéaire gaussien en petite dimension, nous construisons une nouvelle procédure de tests multiples contrôlant le FWER pour tester les hypothèses nulles Beta*i = 0 pour i appartient à [[1,p]]. Cette procédure est appliquée en métabolomique au travers du programme ASICS qui est disponible en ligne. ASICS permet d'identifier et de quantifier les métabolites via l'analyse des spectres RMN. En grande dimension, lorsque n < p on a ker (X) ≠ 0, ainsi le paramètre Beta* décrit précédemment n'est pas unique. Dans le cas non bruité lorsque Sigma = 0, impliquant que Y = m, nous montrons que les solutions du système linéaire d'équations Y = X Beta ayant un nombre minimal de composantes non nulles s'obtiennent via la minimisation de la "norme" lAlpha avec Alpha suffisamment petit.
Let Y be a Gaussian vector distributed according to N (m,sigma²Idn) and X a matrix of dimension n x p, with Y observed, m unknown, and sigma and X known. In the linear model, m is assumed to be a linear combination of the columns of X. In small dimension, when n ≥ p and ker (X) = 0, there exists a unique parameter Beta* such that m = X Beta*; we can then rewrite Y = X Beta* + Epsilon. In this small-dimensional Gaussian linear model framework, we construct a new multiple testing procedure controlling the FWER to test the null hypotheses Beta*i = 0 for i in [[1,p]]. This procedure is applied in metabolomics through the freeware ASICS, which is available online. ASICS makes it possible to identify and quantify metabolites through the analysis of NMR spectra. In high dimension, when n < p, we have ker (X) ≠ 0, so the parameter Beta* described above is no longer unique. In the noiseless case, when sigma = 0 and thus Y = m, we show that the solutions of the linear system of equations Y = X Beta having a minimal number of non-zero components are obtained by minimizing the lAlpha "norm" with Alpha small enough.
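The thesis derives its own FWER-controlling procedure; as a generic point of reference (not Tardivel's method), the sketch below implements Holm's step-down procedure, a standard way to control the family-wise error rate when testing several null hypotheses such as Beta*i = 0 from their p-values. The p-values are invented for the example.

```python
def holm_reject(pvalues, alpha=0.05):
    """Holm step-down procedure: controls the FWER at level alpha
    under any dependence structure among the tests."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])   # ascending p-values
    rejected = [False] * m
    for rank, i in enumerate(order):
        # compare the k-th smallest p-value to alpha / (m - k + 1)
        if pvalues[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break          # step-down: stop at the first non-rejection
    return rejected

pvals = [0.001, 0.04, 0.03, 0.005, 0.9]
flags = holm_reject(pvals)     # which null hypotheses are rejected
```

Holm is uniformly more powerful than the plain Bonferroni correction while giving the same FWER guarantee, which is why it is a common baseline for procedures like the one proposed in the thesis.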
APA, Harvard, Vancouver, ISO, and other styles
24

Higson, Edward John. "Bayesian methods and machine learning in astrophysics." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289728.

Full text
Abstract:
This thesis is concerned with methods for Bayesian inference and their applications in astrophysics. We principally discuss two related themes: advances in nested sampling (Chapters 3 to 5), and Bayesian sparse reconstruction of signals from noisy data (Chapters 6 and 7). Nested sampling is a popular method for Bayesian computation which is widely used in astrophysics. Following the introduction and background material in Chapters 1 and 2, Chapter 3 analyses the sampling errors in nested sampling parameter estimation and presents a method for estimating them numerically for a single nested sampling calculation. Chapter 4 introduces diagnostic tests for detecting when software has not performed the nested sampling algorithm accurately, for example due to missing a mode in a multimodal posterior. The uncertainty estimates and diagnostics in Chapters 3 and 4 are implemented in the $\texttt{nestcheck}$ software package, and both chapters describe an astronomical application of the techniques introduced. Chapter 5 describes dynamic nested sampling: a generalisation of the nested sampling algorithm which can produce large improvements in computational efficiency compared to standard nested sampling. We have implemented dynamic nested sampling in the $\texttt{dyPolyChord}$ and $\texttt{perfectns}$ software packages. Chapter 6 presents a principled Bayesian framework for signal reconstruction, in which the signal is modelled by basis functions whose number (and form, if required) is determined by the data themselves. This approach is based on a Bayesian interpretation of conventional sparse reconstruction and regularisation techniques, in which sparsity is imposed through priors via Bayesian model selection. We demonstrate our method for noisy 1- and 2-dimensional signals, including examples of processing astronomical images. The numerical implementation uses dynamic nested sampling, and uncertainties are calculated using the methods introduced in Chapters 3 and 4. 
Chapter 7 applies our Bayesian sparse reconstruction framework to artificial neural networks, where it allows the optimum network architecture to be determined by treating the number of nodes and hidden layers as parameters. We conclude by suggesting possible areas of future research in Chapter 8.
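To make the nested sampling algorithm discussed in this abstract concrete, here is a minimal toy implementation (a sketch, not the thesis's dyPolyChord/perfectns code): it maintains a set of live points drawn from a U(0,1) prior, repeatedly replaces the worst one with a prior draw above the current likelihood threshold, and accumulates the evidence Z from the expected prior-mass shrinkage. The likelihood L(theta) = 2*theta is chosen so that the true evidence is exactly Z = 1 (log Z = 0).

```python
import math
import random

def nested_sampling(loglike, n_live=100, n_iter=600, seed=1):
    """Minimal nested sampling for a U(0,1) prior: replace the worst live
    point with a constrained prior draw, accumulating Z = sum L_i * dX_i,
    where X_i = exp(-i/n_live) is the expected remaining prior mass."""
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    logZ_terms = []
    x_prev = 1.0
    for i in range(1, n_iter + 1):
        worst = min(live, key=loglike)
        l_min = loglike(worst)
        x_i = math.exp(-i / n_live)             # deterministic shrinkage
        logZ_terms.append(l_min + math.log(x_prev - x_i))
        x_prev = x_i
        # rejection-sample a replacement from the constrained prior
        while True:
            theta = rng.random()
            if loglike(theta) > l_min:
                live.remove(worst)
                live.append(theta)
                break
    # termination: spread the remaining prior mass over the live points
    for theta in live:
        logZ_terms.append(loglike(theta) + math.log(x_prev / n_live))
    m = max(logZ_terms)                          # log-sum-exp for stability
    return m + math.log(sum(math.exp(t - m) for t in logZ_terms))

# toy problem: L(theta) = 2*theta on a U(0,1) prior, so Z = 1 and log Z = 0
logZ = nested_sampling(lambda t: math.log(2.0 * t) if t > 0 else -math.inf)
```

The sampling error of such a run scales roughly as sqrt(H/n_live) in log Z, which is the kind of uncertainty Chapter 3 of the thesis estimates numerically; dynamic nested sampling (Chapter 5) improves efficiency by reallocating live points where they contribute most.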
APA, Harvard, Vancouver, ISO, and other styles
25

Lin, Yi-Chou, and 林益州. "An Error Estimation Approach of Test Stimulus for ADC Testing." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/24527218069546163911.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate School of Electronics and Information Engineering
96
This work presents a novel method for estimating the error of the test stimulus in analog-to-digital converter (ADC) testing. The error estimation method is based on a statistical successive approximation approach. By applying uniformly distributed random patterns to the ADC, we simplify the piecewise linear relationship between the ADC's input and output and derive equations that describe the error of the test stimulus. The proposed method not only reduces computational complexity but also provides effective and accurate information about the test stimulus for developing an ADC testing scheme or BIST methodology.
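The abstract's use of a uniformly distributed stimulus is closely related to the classic code-density (histogram) test, sketched below as a generic illustration rather than the thesis's exact method: with an ideally uniform stimulus, every code of an ideal ADC collects the same number of hits, so the per-code hit count relative to the average estimates DNL. The toy 4-bit ADC and its deliberately widened code are invented for the example.

```python
import random

def dnl_from_histogram(samples, n_bits):
    """Code-density test: with a uniform stimulus, the relative hit
    count of each code estimates its DNL in LSB."""
    n_codes = 2 ** n_bits
    hist = [0] * n_codes
    for s in samples:
        hist[s] += 1
    avg = len(samples) / n_codes
    return [h / avg - 1.0 for h in hist]

def adc(v, n_bits=4, wide_code=5):
    """Toy 4-bit ADC whose code 5 has a bin twice the ideal width."""
    n = 2 ** n_bits
    widths = [2.0 if c == wide_code else 1.0 for c in range(n)]
    total = sum(widths)
    edges, acc = [], 0.0
    for w in widths:
        acc += w
        edges.append(acc / total)       # cumulative transition levels
    for code, e in enumerate(edges):
        if v < e:
            return code
    return n - 1

rng = random.Random(42)
samples = [adc(rng.random()) for _ in range(200000)]
dnl = dnl_from_histogram(samples, 4)    # dnl[5] stands out near +0.9 LSB
```

In practice the stimulus itself is imperfect, which is exactly why methods like the one in this thesis estimate the stimulus error before trusting histogram-based results.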
APA, Harvard, Vancouver, ISO, and other styles
26

Chia-ChuanLi and 李嘉銓. "Bit Error Rate Estimation Method and Test Technique for SAR ADCs." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9hvys9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sango, Joel. "Sur les tests de type diagnostic dans la validation des hypothèses de bruit blanc et de non corrélation." Thèse, 2016. http://hdl.handle.net/1866/18382.

Full text
Abstract:
Dans la modélisation statistique, nous sommes le plus souvent amené à supposer que le phénomène étudié est généré par une structure pouvant s’ajuster aux données observées. Cette structure fait apparaître une partie principale qui représente le mieux possible le phénomène étudié et qui devrait expliquer les données et une partie supposée négligeable appelée erreur ou innovation. Cette structure complexe est communément appelée un modèle, dont la forme peut être plus ou moins complexe. Afin de simplifier la structure, il est souvent supposé qu’elle repose sur un nombre fini de valeurs, appelées paramètres. Basé sur les données, ces paramètres sont estimés avec ce que l’on appelle des estimateurs. La qualité du modèle pour les données à notre disposition est également fonction des estimateurs et de leurs propriétés, par exemple, est-ce que les estimateurs sont raisonnablement proches des valeurs idéales, c’est-à-dire les vraies valeurs. Des questions d’importance portent sur la qualité de l’ajustement d’un modèle aux données, ce qui se fait par l’étude des propriétés probabilistes et statistiques du terme d’erreur. Aussi, l’étude des relations ou l’absence de ces dernières entre les phénomènes sous des hypothèses complexes sont aussi d’intérêt. Des approches possibles pour cerner ce genre de questions consistent dans l’utilisation des tests portemanteaux, dits également tests de diagnostic. La thèse est présentée sous forme de trois projets. Le premier projet est rédigé en langue anglaise. Il s’agit en fait d’un article actuellement soumis dans une revue avec comité de lecture. Dans ce projet, nous étudions le modèle vectoriel à erreurs multiplicatives (vMEM) pour lequel nous utilisons les propriétés des estimateurs des paramètres du modèle selon la méthode des moments généralisés (GMM) afin d’établir la distribution asymptotique des autocovariances résiduelles. Ceci nous permet de proposer des nouveaux tests diagnostiques pour ce type de modèle. 
Sous l’hypothèse nulle d’adéquation du modèle, nous montrons que la statistique usuelle de Hosking-Ljung-Box converge vers une somme pondérée de lois de khi-carré indépendantes à un degré de liberté. Un test généralisé de Hosking-Ljung-Box est aussi obtenu en comparant la densité spectrale des résidus de l’estimation et celle présumée sous l’hypothèse nulle. Un avantage des tests spectraux est qu’ils nécessitent des estimateurs qui convergent à la vitesse n−1/2 où n est la taille de l’échantillon, et leur utilisation n’est pas restreinte à une technique particulière, comme par exemple la méthode des moments généralisés. Dans le deuxième projet, nous établissons la distribution asymptotique sous l’hypothèse de faible dépendance des covariances croisées de deux processus stationnaires en covariance. La faible dépendance ici est définie en terme de l’effet limité d’une observation donnée sur les observations futures. Nous utilisons la notion de stabilité et le concept de contraction géométrique des moments. Ces conditions sont plus générales que celles de l’invariance des moments conditionnels d’ordre un à quatre utilisée jusque là par plusieurs auteurs. Un test statistique basé sur les covariances croisées et la matrice des variances et covariances de leur distribution asymptotique est alors proposé et sa distribution asymptotique établie. Dans l’implémentation du test, la matrice des variances et covariances des covariances croisées est estimée à l’aide d’une procédure autorégressive vectorielle robuste à l’autocorrélation et à l’hétéroscédasticité. Des simulations sont ensuite effectuées pour étudier les propriétés du test proposé. Dans le troisième projet, nous considérons un modèle périodique multivarié et cointégré. La présence de cointégration entraîne l’existence de combinaisons linéaires périodiquement stationnaires des composantes du processus étudié. Le nombre de ces combinaisons linéaires linéairement indépendantes est appelé rang de cointégration. 
Une méthode d’estimation en deux étapes est considérée. La première méthode est appelée estimation de plein rang. Dans cette approche, le rang de cointégration est ignoré. La seconde méthode est appelée estimation de rang réduit. Elle tient compte du rang de cointégration. Cette dernière est une approche non linéaire basée sur des itérations dont la valeur initiale est l’estimateur de plein rang. Les propriétés asymptotiques de ces estimateurs sont aussi établies. Afin de vérifier l’adéquation du modèle, des statistiques de test de type portemanteau sont considérées et leurs distributions asymptotiques sont étudiées. Des simulations sont par la suite présentées afin d’illustrer le comportement du test proposé.
In statistical modeling, we assume that the phenomenon of interest is generated by a model that can be fitted to the observed data. The model has two parts: a main part that is supposed to explain the observed data, and a part assumed to be negligible, called the error or innovation. In order to simplify the structure, the model is often assumed to rely on a finite set of parameters. The quality of a model also depends on the parameter estimators and their properties: for example, are the estimators reasonably close to the true parameters? Other questions address the goodness of fit of the model to the observed data, which is assessed by studying the statistical and probabilistic properties of the innovations. It is also of interest to evaluate the presence or absence of relationships between the observed phenomena. Portmanteau, or diagnostic-type, tests are useful for addressing such issues. The thesis is presented in the form of three projects. The first project is written in English as a scientific paper and was recently submitted for publication. In that project, we study the class of vector multiplicative error models (vMEM). We use the properties of generalized method of moments estimators to derive the asymptotic distribution of the sample autocovariance function, which allows us to propose new test statistics. Under the null hypothesis of adequacy, the popular Hosking-Ljung-Box (HLB) test statistic is found to converge in distribution to a weighted sum of independent chi-squared random variables. A generalized HLB test statistic is motivated by comparing a vector spectral density estimator of the residuals with the spectral density calculated under the null hypothesis.
In the second project, we derive the asymptotic distribution, under weak dependence, of the cross-covariances of covariance-stationary processes. Weak dependence is defined in terms of the limited effect of a given observation on future observations, recalling the notions of stability and geometric moment contraction. These weak-dependence conditions are more general than the invariance of conditional moments used by many authors. A test statistic based on the cross-covariances is proposed and its asymptotic distribution is established. In the construction of the test statistic, the covariance matrix of the cross-covariances is estimated with a vector autoregressive procedure robust to autocorrelation and heteroskedasticity. Simulations are carried out to study the properties of the proposed test and to compare it with existing tests. In the third project, we consider a cointegrated periodic model. Periodic models arise in meteorology, hydrology, and economics. When modelling several processes, it can happen that they are driven by a common trend; this leads to spurious regressions when the series are integrated but have some stationary linear combinations, a situation called cointegration. The number of linearly independent stationary combinations is called the cointegration rank, and modelling the real relationship between the processes requires taking it into account. With periodic time series this is called periodic cointegration: the series are periodically integrated but have some linear combinations that are periodically stationary. A two-step estimation method is considered. The first step is the full-rank estimation method, which ignores the cointegration rank and provides initial estimators for the second step, the reduced-rank estimation, which is nonlinear and iterative.
The asymptotic properties of the estimators are established. In order to check model adequacy, portmanteau-type test statistics and their asymptotic distributions are derived. Simulation results are presented to illustrate the behaviour of the proposed tests.
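The Hosking-Ljung-Box statistics discussed in this abstract generalize the univariate Ljung-Box portmanteau test, which aggregates residual autocorrelations over several lags. As a generic illustration (not the thesis's vMEM-specific variant), here is a plain Ljung-Box computation applied to white noise and to a strongly autocorrelated AR(1) series; both series are simulated for the example.

```python
import random

def ljung_box(x, max_lag):
    """Ljung-Box statistic Q = n(n+2) * sum_{k=1..m} r_k^2 / (n-k);
    under white noise, Q is approximately chi-squared with m d.f."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n       # lag-0 autocovariance
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / n
        rk = ck / c0                               # lag-k autocorrelation
        q += rk * rk / (n - k)
    return n * (n + 2) * q

rng = random.Random(7)
white = [rng.gauss(0, 1) for _ in range(500)]
ar = [0.0]
for _ in range(499):
    ar.append(0.8 * ar[-1] + rng.gauss(0, 1))      # AR(1), phi = 0.8
q_white = ljung_box(white, 10)    # small: consistent with chi2(10)
q_ar = ljung_box(ar, 10)          # huge: serial correlation is detected
```

The thesis's contribution is precisely that for residuals of fitted models (here, vMEMs estimated by GMM), the chi-squared reference distribution no longer applies directly and must be replaced by a weighted sum of chi-squared variables.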
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Cheng-Chien, and 陳政謙. "The Impact of Anchor Item Parameters with Estimating Errors on Test Equating." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81235892860042402336.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Graduate Institute of Applied Statistics
99
Test equating is a statistical process for adjusting scores on different test forms to the same scale. Under item response theory, test equating requires anchor items that appear on the different forms so that they can serve as a link among them. Previous studies have shown that errors arising during item calibration can affect ability estimation for examinees. This study examines how estimation errors in the anchor items, for different parameters and at different error levels, affect test equating, and how the numbers of examinees, test items, and anchor items influence the results. The results show that the magnitude of the errors in the anchor items is directly reflected in the equating, with estimation errors in the difficulty parameters having the greatest impact; increasing the number of test items reduces equating bias, whereas the number of examinees has little effect; and equating works best when the anchor items make up 20% to 30% of the test items.
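To illustrate how anchor-item parameters link two forms, and how estimation error in them propagates, here is a sketch of the standard mean-sigma linking method (a common IRT linking technique, not necessarily the one used in this thesis); the anchor difficulties and the injected error are invented for the example.

```python
import statistics

def mean_sigma(anchor_b_old, anchor_b_new):
    """Mean-sigma linking: find A, B such that b_old ≈ A*b_new + B
    from the anchor items' difficulty estimates on the two forms."""
    A = statistics.pstdev(anchor_b_old) / statistics.pstdev(anchor_b_new)
    B = statistics.mean(anchor_b_old) - A * statistics.mean(anchor_b_new)
    return A, B

# five anchor difficulties on the old form; the new form's scale is
# shifted and rescaled: theta_new = (theta_old - 0.2) / 0.8
b_old = [-1.0, -0.5, 0.0, 0.5, 1.0]
b_new = [(b - 0.2) / 0.8 for b in b_old]
A, B = mean_sigma(b_old, b_new)              # recovers A = 0.8, B = 0.2

# calibration error in a single anchor difficulty biases the whole link
b_new_err = list(b_new)
b_new_err[0] += 0.5
A_err, B_err = mean_sigma(b_old, b_new_err)  # A, B now off target
```

With error-free anchors the true transformation is recovered exactly; perturbing one difficulty estimate shifts both linking constants, which is the mechanism behind the thesis's finding that anchor-item errors are directly reflected in the equated scores.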
APA, Harvard, Vancouver, ISO, and other styles
29

Toland, Michael D. "Determining the accuracy of item parameter standard error of estimates in BILOG-MG 3." 2008. http://proquest.umi.com/pqdweb?did=1564034791&sid=3&Fmt=2&clientId=14215&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2008.
Title from title screen (site viewed Nov. 25, 2008). PDF text:vii, 125 p. : ill. ; 29 Mb. UMI publication number: AAT 3317288. Includes bibliographical references. Also available in microfilm and microfiche formats.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography