Dissertations / Theses on the topic 'Nonparametrica'

To see the other types of publications on this topic, follow the link: Nonparametrica.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Nonparametrica.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

CORRADIN, RICCARDO. "Contributions to modelling via Bayesian nonparametric mixtures." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241261.

Full text
Abstract:
Bayesian nonparametric mixtures are flexible models for density estimation and clustering, nowadays a standard tool in the toolbox of applied statisticians. The first proposal of such models was the Dirichlet process (DP) (Ferguson, 1973) mixture of Gaussian kernels by Lo (1984), a contribution which paved the way to the definition of a wide variety of nonparametric mixture models. In recent years, increasing interest has been dedicated to the definition of mixture models based on nonparametric mixing measures that go beyond the DP. Among these measures, the Pitman-Yor process (PY) (Perman et al., 1992; Pitman, 1995) and, more generally, the class of Gibbs-type priors (see e.g. De Blasi et al., 2015) stand out for conveniently combining mathematical tractability, interpretability and modelling flexibility. In this thesis we investigate three aspects of nonparametric mixture models, which, in turn, concern their modelling, computational and distributional properties. The thesis is organized as follows. The first chapter proposes a concise review of the area of Bayesian nonparametric statistics, with focus on tools and models that will be considered in the following chapters. We first introduce the notions of exchangeability, exchangeable partitions and discrete random probability measures. We then focus on the DP and the PY case, the main ingredients of the second and third chapters, respectively. Finally, we briefly discuss the rationale behind the definition of more general classes of discrete nonparametric priors. In the second chapter we propose a thorough study of the effect of invertible affine transformations of the data on the posterior distribution of DP mixture models, with particular attention to DP mixtures of Gaussian kernels (DPM-G). First, we provide an explicit result relating model parameters and transformations of the data. Second, we formalize the notion of asymptotic robustness of a model under affine transformations of the data and prove an asymptotic result which, by relying on the asymptotic consistency of DPM-G models, shows that, under mild assumptions on the data-generating distribution, DPM-G models are asymptotically robust. The third chapter presents the Importance Conditional Sampler (ICS), a novel conditional sampling scheme for PY mixture models, based on a useful representation of the posterior distribution of a PY (Pitman, 1996) and on an importance sampling idea, similar in spirit to the augmentation step of the celebrated Algorithm 8 of Neal (2000). The proposed method conveniently combines the best features of state-of-the-art conditional and marginal methods for PY mixture models. Importantly, and unlike its most popular conditional competitors, the numerical efficiency of the ICS is robust to the specification of the parameters of the PY. The steps for implementing the ICS are described in detail and its performance is compared with that of popular competing algorithms. Finally, the ICS is used as a building block for devising a new efficient algorithm for the class of GM-dependent DP mixture models (Lijoi et al., 2014a; Lijoi et al., 2014b), for partially exchangeable data. In the fourth chapter we study some distributional properties of Gibbs-type priors. The main result focuses on an exchangeable sample from a Gibbs-type prior and provides a conveniently simple description of the distribution of the size of the cluster the (m+1)th observation is assigned to, given an unobserved sample of size m.
The study of this distribution provides the tools for a simple, yet useful, strategy for prior elicitation of the parameters of a Gibbs-type prior, in the context of Gibbs-type mixture models. The results in the last three chapters are supported by exhaustive simulation studies and illustrated by analysing astronomical datasets.
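As a generic illustration of the Pitman-Yor predictive mechanism that underlies the mixture models discussed in this abstract (a minimal sketch of the standard urn-type rule, not the thesis's ICS algorithm; function names and parameter values are illustrative assumptions), a partition of m observations can be grown sequentially as follows:

```python
import numpy as np

def py_predictive_probs(cluster_sizes, theta, sigma):
    """Pitman-Yor predictive rule: probabilities that observation m+1
    joins each existing cluster or starts a new one, given the sizes
    n_1, ..., n_k of the clusters formed by the first m observations."""
    m = sum(cluster_sizes)
    k = len(cluster_sizes)
    existing = np.array([(n_j - sigma) / (theta + m) for n_j in cluster_sizes])
    new = (theta + sigma * k) / (theta + m)
    return existing, new

def sample_py_partition(m, theta=1.0, sigma=0.25, rng=None):
    """Grow a random partition of m observations sequentially; sigma = 0
    recovers the Dirichlet-process (Chinese restaurant) special case."""
    rng = np.random.default_rng(rng)
    sizes = [1]                      # the first observation opens the first cluster
    for _ in range(1, m):
        existing, new = py_predictive_probs(sizes, theta, sigma)
        probs = np.append(existing, new)
        j = rng.choice(len(probs), p=probs)
        if j == len(sizes):
            sizes.append(1)          # open a new cluster
        else:
            sizes[j] += 1
    return sizes

print(sample_py_partition(100))      # e.g. a handful of clusters of uneven size
```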
APA, Harvard, Vancouver, ISO, and other styles
2

Campbell, Trevor D. J. (Trevor David Jan). "Truncated Bayesian nonparametrics." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107047.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 167-175).
Many datasets can be thought of as expressing a collection of underlying traits with unknown cardinality. Moreover, these datasets are often persistently growing, and we expect the number of expressed traits to likewise increase over time. Priors from Bayesian nonparametrics are well-suited to this modeling challenge: they generate a countably infinite number of underlying traits, which allows the number of expressed traits to both be random and to grow with the dataset size. We also require corresponding streaming, distributed inference algorithms that handle persistently growing datasets without slowing down over time. However, a key ingredient in streaming, distributed inference (an explicit representation of the latent variables used to statistically decouple the data) is not available for nonparametric priors, as we cannot simulate or store infinitely many random variables in practice. One approach is to approximate the nonparametric prior by developing a sequential representation (such that the traits are generated by a sequence of finite-dimensional distributions) and subsequently truncating it at some finite level, thus allowing explicit representation. However, truncated sequential representations have been developed only for a small number of priors in Bayesian nonparametrics, and the order they impose on the traits creates identifiability issues in the streaming, distributed setting. This thesis provides a comprehensive theoretical treatment of sequential representations and truncation in Bayesian nonparametrics. It details three sequential representations of a large class of nonparametric priors, and analyzes their truncation error and computational complexity. The results generalize and improve upon those existing in the literature. Next, the truncated explicit representations are used to develop the first streaming, distributed, asynchronous inference procedures for models from Bayesian nonparametrics. The combinatorial issues associated with trait identifiability in such models are resolved via a novel matching optimization. The resulting algorithms are fast, learning rate-free, and truncation-free. Taken together, these contributions provide the practitioner with the means to (1) develop multiple finite approximations for a given nonparametric prior; (2) determine which is the best for their application; and (3) use that approximation in the development of efficient streaming, distributed, asynchronous inference algorithms.
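As a concrete example of the truncation idea described above, the following sketch draws the weights of a Dirichlet process from its standard stick-breaking representation truncated at level K (a minimal, generic sketch; the thesis's own sequential representations and error bounds are more general, and the names and parameter values here are illustrative):

```python
import numpy as np

def truncated_stick_breaking(alpha, K, rng=None):
    """Draw the weights of a Dirichlet process approximated by truncating
    its stick-breaking representation at level K: v_k ~ Beta(1, alpha),
    w_k = v_k * prod_{j<k} (1 - v_j), with the last weight absorbing the
    remaining stick so the truncated weights sum to one."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0                                       # absorb the leftover mass at level K
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

weights = truncated_stick_breaking(alpha=2.0, K=25, rng=0)
atoms = np.random.default_rng(0).normal(size=25)      # atoms from a N(0,1) base measure
print(weights.sum())                                  # 1.0 up to floating point error
```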
by Trevor David Jan Campbell.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Jiexiang. "Nonparametric spatial estimation." [Bloomington, Ind.] : Indiana University, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3223036.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Mathematics, 2006.
"Title from dissertation home page (viewed June 28, 2007)." Source: Dissertation Abstracts International, Volume: 67-06, Section: B, page: 3167. Adviser: Lanh Tat Tran.
APA, Harvard, Vancouver, ISO, and other styles
4

Straub, Julian Ph D. Massachusetts Institute of Technology. "Nonparametric directional perception." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112029.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 239-257).
Artificial perception systems, like autonomous cars and augmented reality headsets, rely on dense 3D sensing technology such as RGB-D cameras and LiDAR scanners. Due to the structural simplicity of man-made environments, understanding and leveraging not only the 3D data but also the local orientations of the constituent surfaces has huge potential. From an indoor scene to large-scale urban environments, a large fraction of the surfaces can be described by just a few planes with even fewer different normal directions. This sparsity is evident in the surface normal distributions, which exhibit a small number of concentrated clusters. In this work, I draw a rigorous connection between surface normal distributions and 3D structure, and explore this connection in light of different environmental assumptions to further 3D perception. Specifically, I propose the concepts of the Manhattan Frame and the unconstrained directional segmentation. These capture, in the space of surface normals, scenes composed of multiple Manhattan Worlds and more general Stata Center Worlds, in which the orthogonality assumption of the Manhattan World is not applicable. This exploration is theoretically founded in Bayesian nonparametric models, which capture two key properties of the 3D sensing process of an artificial perception system: (1) the inherent sequential nature of data acquisition and (2) that the required model complexity grows with the amount of observed data. Herein, I derive inference algorithms for directional clustering and segmentation which inherently exploit and respect these properties. The fundamental insights gleaned from the connection between surface normal distributions and 3D structure lead to practical advances in scene segmentation, drift-free rotation estimation, global point cloud registration and real-time direction-aware 3D reconstruction to aid artificial perception systems.
by Julian Straub.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Xu, Tianbing. "Nonparametric evolutionary clustering." Diss., Online access via UMI, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rangel, Ruiz Ricardo. "Nonparametric and semi-nonparametric approaches to the demand for liquid assets." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ64924.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yuan, Lin. "Bayesian nonparametric survival analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22253.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bush, Helen Meyers. "Nonparametric multivariate quality control." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/25571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pedroso, Estevam de Souza Camila. "Switching nonparametric regression models." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/45130.

Full text
Abstract:
In this thesis, we propose a methodology to analyze data arising from a curve that, over its domain, switches among J states. We consider a sequence of response variables, where each response y depends on a covariate x according to an unobserved state z, also called a hidden or latent state. The states form a stochastic process and their possible values are j=1,...,J. If z equals j, the expected response of y is one of J unknown smooth functions evaluated at x. We call this model a switching nonparametric regression model. In a Bayesian switching nonparametric regression model, the uncertainty about the functions is formulated by modeling the functions as realizations of stochastic processes. In a frequentist switching nonparametric regression model, the functions are merely assumed to be smooth. We consider two different data structures: one with N replicates and the other with a single realization. For the hidden states, we consider those that are independent and identically distributed and those that follow a Markov structure. We develop an EM algorithm to estimate the parameters of the latent state process and the functions corresponding to the J states. Standard errors for the parameter estimates of the state process are also obtained. We investigate the frequentist properties of the proposed estimates via simulation studies. Two different applications of the proposed methodology are presented. In the first application we analyze the well-known motorcycle data in an innovative way: treating the data as coming from J>1 simulated accident runs with unobserved run labels. In the second application we analyze daytime power usage on business days in a building, treating each day as a replicate and modeling power usage as arising from two functions, one function giving power usage when the cooling system of the building is off, the other function giving power usage when the cooling system is on.
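The generative side of the model described above can be illustrated with a minimal simulation, assuming two illustrative smooth functions and i.i.d. hidden states (the thesis treats the states as unobserved and estimates everything via EM; all names and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "unknown" smooth functions, one per latent state (illustrative choices)
f = [lambda x: np.sin(2 * np.pi * x), lambda x: 1.5 * x ** 2 - 0.5]

n = 200
x = rng.uniform(0.0, 1.0, size=n)          # covariates
z = rng.choice(2, size=n, p=[0.6, 0.4])    # hidden i.i.d. states (J = 2)
y = np.where(z == 0, f[0](x), f[1](x)) + rng.normal(scale=0.15, size=n)

# With the labels observed, each state curve could be recovered by any
# smoother applied to the corresponding subsample; the thesis instead
# treats z as unobserved and estimates states and curves jointly via EM.
```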
APA, Harvard, Vancouver, ISO, and other styles
10

Gosling, John Paul. "Elicitation : a nonparametric view." Thesis, University of Sheffield, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Joseph, Joshua Mason. "Nonparametric Bayesian behavior modeling." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45263.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008.
Includes bibliographical references (p. 91-94).
As autonomous robots are increasingly used in complex, dynamic environments, it is crucial that the dynamic elements are modeled accurately. However, it is often difficult to generate good models due to either a lack of domain understanding or the domain being intractably large. In many domains, even defining the size of the model can be a challenge. While methods exist to cluster data of dynamic agents into common motion patterns, or "behaviors," assumptions of the number of expected behaviors must be made. This assumption can cause clustering processes to under-fit or over-fit the training data. In a poorly understood domain, knowing the number of expected behaviors a priori is unrealistic and in an extremely large domain, correctly fitting the training data is difficult. To overcome these obstacles, this thesis takes a Bayesian approach and applies a Dirichlet process (DP) prior over behaviors, which uses experience to reduce the likelihood of over-fitting or under-fitting the model complexity. Additionally, the DP maintains a probability mass associated with a novel behavior and can address countably infinite behaviors. This learning technique is applied to modeling agents driving in an urban setting. The learned DP-based driver behavior model is first demonstrated on a simulated city. Building on successful simulation results, the methodology is applied to GPS data of taxis driving around Boston. Accurate prediction of future vehicle behavior from the model is shown in both domains.
by Joshua Mason Joseph.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
12

Lin, Lizhen. "Nonparametric Inference for Bioassay." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222849.

Full text
Abstract:
This thesis proposes some new model independent or nonparametric methods for estimating the dose-response curve and the effective dosage curve in the context of bioassay. The research problem is also of importance in environmental risk assessment and other areas of health sciences. It is shown in the thesis that our new nonparametric methods while bearing optimal asymptotic properties also exhibit strong finite sample performance. Although our specific emphasis is on bioassay and environmental risk assessment, the methodology developed in this dissertation applies broadly to general order restricted inference.
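As a generic illustration of order-restricted estimation in the dose-response setting (the pool-adjacent-violators algorithm for a monotone fit; this is a standard textbook tool, not necessarily the estimator developed in the thesis, and the data values are made up):

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit of a
    non-decreasing sequence to y (a monotone dose-response estimate)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    levels, weights, counts = [], [], []
    for yi, wi in zip(y, w):
        levels.append(yi); weights.append(wi); counts.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(levels) > 1 and levels[-2] > levels[-1]:
            w_new = weights[-2] + weights[-1]
            l_new = (weights[-2] * levels[-2] + weights[-1] * levels[-1]) / w_new
            c_new = counts[-2] + counts[-1]
            levels[-2:] = [l_new]; weights[-2:] = [w_new]; counts[-2:] = [c_new]
    return np.repeat(levels, counts)

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
p_hat = np.array([0.05, 0.20, 0.15, 0.55, 0.80])    # raw response proportions per dose
print(dict(zip(dose, pava(p_hat))))                 # non-decreasing dose-response estimate
```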
APA, Harvard, Vancouver, ISO, and other styles
13

Minello, Giorgia <1983&gt. "Nonparametric Spectral Graph Model." Master's Degree Thesis, Università Ca' Foscari Venezia, 2014. http://hdl.handle.net/10579/5390.

Full text
Abstract:
In many real-world cases a feature-based description of objects is difficult, and for this reason graph-based representations have become popular thanks to their ability to characterize data effectively. Learning models for detecting and classifying object categories is a challenging problem in machine vision, especially when objects are not described in a vectorial manner. Measuring their structural similarity, as well as characterizing a set of graphs via a representative, are only some of the hurdles. This research work presents a novel technique to classify objects, represented in a structured manner, by means of a generative model. The spectral approach allows graphs to be viewed as clouds of points in a multidimensional space and makes it easier to apply statistical tools and concepts, in particular the probability density function. A dual generative model is developed that takes into account both the eigenvector and the eigenvalue parts of the graphs' eigendecomposition. The eigenvector generative model and the related prediction phase take advantage of a nonparametric technique, i.e. the kernel density estimator, whilst the eigenvalue learning phase is based on a classical parametric approach. As eigenvectors are sign-ambiguous, i.e. they are recovered only up to a sign factor +/- 1, a new method to correct their direction is proposed and a further alignment stage by matrix rotation is described. Finally, the two spectral components are merged and used for the ultimate aim, the classification of out-of-sample graphs.
APA, Harvard, Vancouver, ISO, and other styles
14

Scherreik, Matthew D. "Online Clustering with Bayesian Nonparametrics." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1610711743492959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Houtman, Martijn. "Nonparametric consumer and producer analysis." [Maastricht : Maastricht : Rijksuniversiteit Limburg] ; University Library, Maastricht University [Host], 1995. http://arno.unimaas.nl/show.cgi?fid=5770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Lee, Soyeon. "Spatial fixed design nonparametric regression." [Bloomington, Ind.] : Indiana University, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3223074.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Mathematics, 2006.
"Title from dissertation home page (viewed July 2, 2007)." Source: Dissertation Abstracts International, Volume: 67-06, Section: B, page: 3167. Adviser: Lanh Tran.
APA, Harvard, Vancouver, ISO, and other styles
17

Rensfelt, Agnes. "Nonparametric identification of viscoelastic materials." Licentiate thesis, Uppsala University, Department of Information Technology, 2006. http://www.it.uu.se/research/publications/lic/2006-008/2006-008.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Kankey, Roland Doyle. "Nonparametric extrapolative forecasting : an evaluation." Connect to resource, 1988. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1265129995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Godefay, Dawit Zerom. "Nonparametric prediction: some selected topics." [Amsterdam : Amsterdam : Thela Thesis] ; Universiteit van Amsterdam [Host], 2002. http://dare.uva.nl/document/64443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Polsen, Orathai. "Nonparametric regression and mixture models." Thesis, University of Leeds, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.578651.

Full text
Abstract:
Nonparametric regression estimation has become popular in the last 50 years. A commonly used nonparametric method for estimating the regression curve is the kernel estimator, exemplified by the Nadaraya-Watson estimator. The first part of the thesis concentrates on the important issue of how to make a good choice of smoothing parameter for the Nadaraya-Watson estimator. In this study three types of smoothing parameter selectors are investigated: cross-validation, plug-in and bootstrap. In addition, two situations are examined: using the same smoothing parameter versus different smoothing parameters for the estimates of the numerator and the denominator. We study the asymptotic bias and variance of the Nadaraya-Watson estimator when different smoothing parameters are used. We propose various plug-in methods for selecting the smoothing parameter, including a bootstrap smoothing parameter selector. The performances of the proposed selectors are investigated and also compared with cross-validation via a simulation study. Numerical results demonstrate that the proposed plug-in selectors outperform cross-validation when the data are bivariate normally distributed. Numerical results also suggest that the proposed bootstrap selector with asymptotic pilot smoothing parameter compares favourably with cross-validation. We consider a circular-circular parametric regression model proposed by Taylor (2009), including parameter estimation and inference. In addition, we investigate diagnostic tools for circular regression which can be generally applied. A final thread is related to mixture models, in particular a mixture of linear regression models and a mixture of circular-circular regression models where there is unobserved group membership of the observations. We investigate methods for selecting starting values for the EM algorithm which is used to fit mixture models, and also the distributions of these values. Our experiments suggest that the proposed method compares favourably with the common method in mixtures of linear regression models.
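As a small illustration of the objects studied in this abstract, the following sketch computes the Nadaraya-Watson estimator with a Gaussian kernel and selects its bandwidth by leave-one-out cross-validation (a minimal, brute-force sketch on simulated data; the thesis's plug-in and bootstrap selectors are not shown, and all names and values are illustrative):

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate at points x0 with a Gaussian kernel."""
    u = (np.asarray(x0)[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u ** 2)
    return (k @ y) / k.sum(axis=1)

def loo_cv_bandwidth(x, y, grid):
    """Pick the bandwidth minimising leave-one-out squared prediction error."""
    best_h, best_err = None, np.inf
    for h in grid:
        err = 0.0
        for i in range(len(x)):
            mask = np.arange(len(x)) != i
            pred = nw_estimate([x[i]], x[mask], y[mask], h)[0]
            err += (y[i] - pred) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 150)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=150)
h = loo_cv_bandwidth(x, y, grid=np.linspace(0.02, 0.3, 15))
print(h, nw_estimate([0.25, 0.5, 0.75], x, y, h))
```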
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Han. "Nonparametric Learning in High Dimensions." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/16.

Full text
Abstract:
This thesis develops flexible and principled nonparametric learning algorithms to explore, understand, and predict high dimensional and complex datasets. Such data appear frequently in modern scientific domains and lead to numerous important applications. For example, exploring high dimensional functional magnetic resonance imaging data helps us to better understand brain functionalities; inferring large-scale gene regulatory networks is crucial for new drug design and development; detecting anomalies in high dimensional transaction databases is vital for corporate and government security. Our main results include a rigorous theoretical framework and efficient nonparametric learning algorithms that exploit hidden structures to overcome the curse of dimensionality when analyzing massive high dimensional datasets. These algorithms have strong theoretical guarantees and provide high dimensional nonparametric recipes for many important learning tasks, ranging from unsupervised exploratory data analysis to supervised predictive modeling. In this thesis, we address three aspects: (1) understanding the statistical theories of high dimensional nonparametric inference, including risk, estimation, and model selection consistency; (2) designing new methods for different data-analysis tasks, including regression, classification, density estimation, graphical model learning, multi-task learning, and spatial-temporal adaptive learning; and (3) demonstrating the usefulness of these methods in scientific applications, including functional genomics, cognitive neuroscience, and meteorology. In the last part of this thesis, we also present the future vision of high dimensional and large-scale nonparametric inference.
APA, Harvard, Vancouver, ISO, and other styles
22

Van, Gael Jurgen. "Bayesian nonparametric hidden Markov models." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Baiocchi, Giovanni. "Economic applications of nonparametric methods." Thesis, University of York, 2006. http://etheses.whiterose.ac.uk/14117/.

Full text
Abstract:
This thesis deals with the subject of nonparametric methods, focusing on applications to economic issues. Chapter 2 introduces the basic nonparametric methods underlying the applications in the subsequent chapters. In Chapter 3 we propose some basic standards to improve the use and reporting of nonparametric methods in the statistics and economics literature for the purpose of accuracy and reproducibility. We make recommendations on four aspects of the application of nonparametric methods: computational practice, published reporting, numerical accuracy, and visualization. In Chapter 4 we investigate the effect of life-cycle factors and other demographic characteristics on income inequality in the UK. Two conditional inequality measures are derived from estimating the cumulative distribution function of household income, conditional upon a broad set of explanatory variables. Estimation of the distribution is carried out using a semiparametric approach. The proposed inequality estimators are easily interpretable and are shown to be consistent. Our results indicate the importance of interfamily differences in the analysis of income distribution. In addition, our estimation procedure uncovers higher-order properties of the income distribution and non-linearities of its moments that cannot be captured by means of a "standard" parametric approach. Several features of the conditional distribution of income are highlighted. In Chapter 5 we reexamine the relationship between openness to trade and the environment, controlling for economic development, in order to identify the presence of multiple regimes in the cross-country pollution-economic relationship. We first identify the presence of multiple regimes by using specification tests which entertain a single regime model as the null hypothesis. Then we develop an easily interpretable measure, based on an original application of the Blinder-Oaxaca decomposition, of the impact on the environment due to differences in regimes. Finally we apply a nonparametric recursive partitioning algorithm to endogenously identify various regimes. Our conclusions are threefold. First, we reject the null hypothesis that all countries obey a common linear model. Second, we find that regime differences can have a quantitatively significant impact. Third, by using regression tree analysis we find subsets of countries which appear to possess very different environmental/economic relationships. In Chapter 6 we investigate the existence of the so-called environmental Kuznets curve (EKC), the inverted-U shaped relationship between income and pollution, using nonparametric regression and threshold regression methods. We find support for threshold models that lead to different reduced-form relationships between environmental quality and economic activity when early stages of economic growth are contrasted with later stages. There is no evidence of a common inverted U-shaped environment/economy relationship that all countries follow as they grow. We also find that changes that might benefit the environment occur at much higher levels of income than those implied by standard models. Our findings support models in which improvements are a consequence of the deliberate introduction of policies addressing environmental concerns. Moreover, we find evidence that countries with low-income levels have a far greater variability in emissions per capita than high-income countries. 
This has the implication that it may be more difficult to predict emission levels for low-income countries approaching the turning point. A summary of the main findings and further research directions are presented in Chapter 7 and in Chapter 8, respectively.
APA, Harvard, Vancouver, ISO, and other styles
24

Keys, Anthony C. "Nonparametric metamodeling for simulation optimization." Diss., Virginia Tech, 1995. http://hdl.handle.net/10919/38570.

Full text
Abstract:
Optimization of simulation model performance requires finding the values of the model's controllable inputs that optimize a chosen model response. Responses are usually stochastic in nature, and the cost of simulation model runs is high. The literature suggests the use of metamodels to synthesize the response surface using sample data. In particular, nonparametric regression is proposed as a useful tool in the global optimization of a response surface. As the general simulation optimization problem is very difficult and requires expertise from a number of fields, there is a growing consensus in the literature that a knowledge-based approach to solving simulation optimization problems is required. This dissertation examines the relative performance of the principal nonparametric techniques, spline and kernel smoothing, and subsequently addresses the issues involved in implementing the techniques in a knowledge-based simulation optimization system. The dissertation consists of two parts. In the first part, a full factorial experiment is carried out to compare the performance of kernel and spline smoothing on a number of measures when modeling a varied set of surfaces using a range of small sample sizes. In the second part, nonparametric metamodeling techniques are placed in a taxonomy of stochastic search procedures for simulation optimization and a method for their implementation in a knowledge-based system is presented. A sequential design procedure is developed that allows spline smoothing to be used as a search technique. Throughout the dissertation, a two-input, single-response model is considered. Results from the experiment show that spline smoothing is superior to constant-bandwidth kernel smoothing in fitting the response. Kernel smoothing is shown to be more accurate in placing optima in X-space for sample sizes up to 36. Inventory model examples are used to illustrate the results. The taxonomy implies that search procedures can be chosen initially using the parameters of the problem. A process that allows for selection of a search technique and its subsequent evaluation for further use or for substitution of another search technique is given. The success of a sequential design method for spline smooths in finding a global optimum is demonstrated using a bimodal response surface.
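As a rough illustration of the metamodeling idea, the sketch below fits a smoothing spline to noisy simulated "simulation output" and reads off a metamodel-based optimum (a minimal sketch assuming scipy's UnivariateSpline and an illustrative smoothing factor; it is not the dissertation's experimental setup):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

# Pretend simulation runs: one controllable input, noisy response
x = np.linspace(0.0, 10.0, 40)
y = (x - 6.0) ** 2 + rng.normal(scale=4.0, size=x.size)   # stochastic response

# Smoothing spline metamodel; the smoothing factor s plays a role analogous
# to the kernel bandwidth and would normally be chosen by cross-validation.
spline = UnivariateSpline(x, y, k=3, s=len(x) * 16.0)

x_grid = np.linspace(0.0, 10.0, 500)
x_opt = x_grid[np.argmin(spline(x_grid))]   # metamodel-based estimate of the optimum
print(x_opt)
```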
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Dallaire, Patrick. "Bayesian nonparametric latent variable models." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26848.

Full text
Abstract:
One of the important problems in machine learning is determining the complexity of the model to learn. Too much complexity leads to overfitting, which finds structures that do not actually exist in the data, while too little complexity leads to underfitting, which means that the expressiveness of the model is insufficient to capture all the structures present in the data. For some probabilistic models, the complexity depends on the introduction of one or more latent variables whose role is to explain the generative process of the data. There are various approaches to identify the appropriate number of latent variables of a model. This thesis covers various Bayesian nonparametric methods capable of determining the number of latent variables to be used and their dimensionality. The popularization of Bayesian nonparametric statistics in the machine learning community is fairly recent. Their main attraction is the fact that they offer highly flexible models and their complexity scales appropriately with the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms and new applications. This thesis presents our contributions to these three topics of research in the context of learning latent variable models. Firstly, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm to discover the latent components of the model and we evaluate it on two practical robotics applications. Our results demonstrate that the proposed approach outperforms, both in performance and flexibility, the traditional learning approaches. Secondly, we propose the extended cascading Indian buffet process, a Bayesian nonparametric probability distribution on the space of directed acyclic graphs. In the context of Bayesian networks, this prior is used to identify the presence of latent variables and the network structure among them. A Markov Chain Monte Carlo inference algorithm is presented and evaluated on structure identification problems as well as density estimation problems. Lastly, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it accepts connections among observable variables and it takes into account the order of the variables. We also present a reversible jump Markov Chain Monte Carlo inference algorithm which jointly learns graphs and orders. Experiments are conducted on density estimation problems and on testing independence hypotheses. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with completely arbitrary graph structures.
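As background for the Indian-buffet-type priors mentioned above, the following sketch samples a binary feature-allocation matrix from the standard Indian buffet process (the basic prior that the extended cascading variant and the Indian chefs process generalize; names and parameter values are illustrative):

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    """Sample a binary feature-allocation matrix from the standard
    Indian buffet process with concentration parameter alpha."""
    rng = np.random.default_rng(rng)
    rows = []                         # dishes taken by each customer so far
    counts = []                       # how many customers have tried each dish
    for i in range(1, n_customers + 1):
        row = []
        for k in range(len(counts)):
            take = rng.random() < counts[k] / i     # existing dish k, prob m_k / i
            counts[k] += take
            row.append(int(take))
        new = rng.poisson(alpha / i)                # brand-new dishes for customer i
        for _ in range(new):
            counts.append(1)
            row.append(1)
        rows.append(row)
    K = len(counts)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(10, alpha=2.0, rng=4)
print(Z.shape, Z.sum(axis=0))   # number of latent features and their popularity
```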
APA, Harvard, Vancouver, ISO, and other styles
26

Gao, Wenyu. "Advanced Nonparametric Bayesian Functional Modeling." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99913.

Full text
Abstract:
Functional analyses have gained more interest as we have easier access to massive data sets. However, such data sets often contain large heterogeneities, noise, and dimensionalities. When generalizing the analyses from vectors to functions, classical methods might not work directly. This dissertation considers noisy information reduction in functional analyses from two perspectives: functional variable selection to reduce the dimensionality and functional clustering to group similar observations and thus reduce the sample size. The complicated data structures and relations can be easily modeled by a Bayesian hierarchical model, or developed from a more generic one by changing the prior distributions. Hence, this dissertation focuses on the development of Bayesian approaches for functional analyses due to their flexibilities. A nonparametric Bayesian approach, such as the Dirichlet process mixture (DPM) model, has a nonparametric distribution as the prior. This approach provides flexibility and reduces assumptions, especially for functional clustering, because the DPM model has an automatic clustering property, so the number of clusters does not need to be specified in advance. Furthermore, a weighted Dirichlet process mixture (WDPM) model allows for more heterogeneities from the data by assuming more than one unknown prior distribution. It also gathers more information from the data by introducing a weight function that assigns different candidate priors, such that the less similar observations are more separated. Thus, the WDPM model will improve the clustering and model estimation results. In this dissertation, we used an advanced nonparametric Bayesian approach to study functional variable selection and functional clustering methods. We proposed 1) a stochastic search functional selection method with application to 1-M matched case-crossover studies for aseptic meningitis, to examine the time-varying unknown relationship and find out important covariates affecting disease contractions; 2) a functional clustering method via the WDPM model, with application to three pathways related to genetic diabetes data, to identify essential genes distinguishing between normal and disease groups; and 3) a combined functional clustering, with the WDPM model, and variable selection approach with application to high-frequency spectral data, to select wavelengths associated with breast cancer racial disparities.
Doctor of Philosophy
As we gain easier access to massive data sets, functional analyses have attracted more interest for analyzing data that provide information about curves, surfaces, or other quantities varying over a continuum. However, such data sets often contain large heterogeneities and noise. When generalizing the analyses from vectors to functions, classical methods might not work directly. This dissertation considers noisy information reduction in functional analyses from two perspectives: functional variable selection to reduce the dimensionality and functional clustering to group similar observations and thus reduce the sample size. The complicated data structures and relations can be easily modeled by a Bayesian hierarchical model due to its flexibility. Hence, this dissertation focuses on the development of nonparametric Bayesian approaches for functional analyses. Our proposed methods can be applied in various applications: the epidemiological studies on aseptic meningitis with clustered binary data, the genetic diabetes data, and breast cancer racial disparities.
APA, Harvard, Vancouver, ISO, and other styles
27

Millen, Brian A. "Nonparametric tests for umbrella alternatives /." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488205318508038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Jiang, Yong Carleton University Dissertation Management Studies. "Bankruptcy prediction - a nonparametric approach." Ottawa, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Nudurupati, Sai Vamshidhar Abebe Asheber. "Robust nonparametric discriminant analysis procedures." Auburn, Ala, 2009. http://hdl.handle.net/10415/1605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Dong, Lei. "Nonparametric tests for longitudinal data." Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/2295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Centorrino, Samuele. "Causality, endogeneity and nonparametric estimation." Thesis, Toulouse 1, 2013. http://www.theses.fr/2013TOU10020/document.

Full text
Abstract:
This thesis deals with the broad problem of causality and endogeneity in econometrics when the function of interest is estimated nonparametrically. It explores this problem in two separate frameworks. In the cross-sectional, iid setting, it considers the estimation of a nonlinear additively separable model, in which the regression function depends on an endogenous explanatory variable. Endogeneity is, in this case, broadly defined. It can relate to reverse causality (the dependent variable can also affect the independent regressor) or to simultaneity (the error term contains information that can be related to the explanatory variable). Identification and estimation of the regression function is performed using the method of instrumental variables. In the time series context, it studies the implications of the assumption of exogeneity in a regression type model in continuous time. In this model, the state variable depends on its past values, but also on some external covariates, and the researcher is interested in the nonparametric estimation of both the conditional mean and the conditional variance functions. The first chapter deals with the latter topic. In particular, we give sufficient conditions under which the researcher can make meaningful inference in such a model. We show that noncausality is a sufficient condition for exogeneity if the researcher is not willing to make any assumption on the dynamics of the covariate process. However, if the researcher is willing to assume that the covariate process follows a simple stochastic differential equation, then the assumption of noncausality becomes irrelevant. Chapters two to four are instead completely devoted to the simple iid model. The function of interest is known to be the solution of an inverse problem. In the second chapter, this estimation problem is considered when the regularization is achieved using a penalization on the L2-norm of the function of interest (so-called Tikhonov regularization). We derive the properties of a leave-one-out cross validation criterion in order to choose the regularization parameter. In the third chapter, coauthored with Jean-Pierre Florens, we extend this model to the case in which the dependent variable is not directly observed, but only a binary transformation of it. We show that identification can be obtained via the decomposition of the dependent variable on the space spanned by the instruments, when the residuals in this reduced form model are taken to have a known distribution. We finally show that, under these assumptions, the consistency properties of the estimator are preserved. Finally, chapter four, coauthored with Frédérique Fève and Jean-Pierre Florens, performs a numerical study, in which the properties of several regularization techniques are investigated. In particular, we compare data-driven techniques for the sequential choice of the smoothing and regularization parameters and we assess the validity of the wild bootstrap in nonparametric instrumental regressions.
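As a generic illustration of the Tikhonov (L2-penalty) regularization discussed in the second chapter, the sketch below solves a discretized ill-posed linear system for several values of the regularization parameter (a toy example with an illustrative operator, not the nonparametric instrumental-variable estimator itself):

```python
import numpy as np

def tikhonov_solve(K, r, alpha):
    """Regularized solution of the ill-posed linear system K phi = r:
    phi_alpha = (alpha I + K^T K)^{-1} K^T r, i.e. a Tikhonov / ridge
    penalty on the L2 norm of phi."""
    p = K.shape[1]
    return np.linalg.solve(alpha * np.eye(p) + K.T @ K, K.T @ r)

# Toy ill-conditioned operator and noisy right-hand side
rng = np.random.default_rng(5)
K = np.vander(np.linspace(0, 1, 50), 12, increasing=True)   # nearly collinear columns
phi_true = rng.normal(size=12)
r = K @ phi_true + rng.normal(scale=0.01, size=50)

for alpha in [1e-8, 1e-4, 1e-1]:
    phi_hat = tikhonov_solve(K, r, alpha)
    print(alpha, np.linalg.norm(phi_hat - phi_true))   # error varies with alpha
```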
APA, Harvard, Vancouver, ISO, and other styles
32

Cortina, Borja Mario Jose Francisco. "Graph-theoretic multivariate nonparametric procedures." Thesis, University of Bath, 1992. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Yanagi, Takahide. "Essays on Nonparametric Methods in Econometrics." Kyoto University, 2015. http://hdl.handle.net/2433/200427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Calle, M. Luz. "The analysis of interval-censored survival data. From a Nonparametric perspective to a nonparametric Bayesian approach." Doctoral thesis, Universitat Politècnica de Catalunya, 1997. http://hdl.handle.net/10803/6521.

Full text
Abstract:
This work concerns some problems in the area of survival analysis that arise in real clinical or epidemiological studies. In particular, we approach the problem of estimating the survival function based on interval-censored data or doubly-censored data. We will start by defining these concepts and presenting a brief review of different methodologies for dealing with these kinds of censoring patterns.
Survival analysis is the term used to describe the analysis of data that correspond to the time from a well defined origin time until the occurrence of some particular event of interest. This event need not necessarily be death, but could, for example, be the response to a treatment, remission from a disease, or the occurrence of a symptom.
APA, Harvard, Vancouver, ISO, and other styles
35

Sajama. "Nonparametric methods for learning from data." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3205362.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed April 6, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 103-111).
APA, Harvard, Vancouver, ISO, and other styles
36

Guan, Yong Tao. "Nonparametric methods of assessing spatial isotropy." Texas A&M University, 2003. http://hdl.handle.net/1969.1/1158.

Full text
Abstract:
A common requirement for spatial analysis is the modeling of the second-order structure. While the assumption of isotropy is often made for this structure, it is not always appropriate. A conventional practice to check for isotropy is to informally assess plots of direction-specific sample second-order properties, e.g., sample variogram or sample second-order intensity function. While a useful diagnostic, these graphical techniques are difficult to assess and open to interpretation. Formal alternatives to graphical diagnostics are valuable, but have been applied to a limited class of models. In this dissertation, we propose a formal approach testing for isotropy that is both objective and appropriate for a wide class of models. This approach, which is based on the asymptotic joint normality of the sample second-order properties, can be used to compare these properties in multiple directions. An $L_2$-consistent subsampling estimator for the asymptotic covariance matrix of the sample second-order properties is derived and used to construct the test statistic with a limiting $\chi^2$ distribution under the null hypothesis. Our testing approach is purely nonparametric and can be applied to both quantitative spatial processes and spatial point processes. For quantitative processes, the results apply to both regularly spaced and irregularly spaced data when the point locations are generated by a homogeneous point process. In addition, the shape of the random field can be quite irregular. Examples and simulations demonstrate the efficacy of the approach.
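As a rough illustration of the direction-specific sample quantities that such a test compares, the sketch below computes an empirical semivariogram at one lag in two directions for a simulated anisotropic field (the formal test additionally requires the subsampling covariance estimator and the $\chi^2$ statistic described above; all names and values are illustrative):

```python
import numpy as np

def directional_semivariogram(coords, z, lag, tol, angle, ang_tol):
    """Empirical semivariogram at a given lag for pairs whose separation
    vector points (roughly) in the direction `angle` (radians):
    gamma(h) = 0.5 * mean over qualifying pairs of (z_i - z_j)^2."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dist = np.hypot(diffs[..., 0], diffs[..., 1])
    ang = np.arctan2(diffs[..., 1], diffs[..., 0]) % np.pi     # direction mod 180 degrees
    ang_dist = np.minimum(np.abs(ang - angle), np.pi - np.abs(ang - angle))
    sel = (np.abs(dist - lag) < tol) & (ang_dist < ang_tol)
    sel &= np.triu(np.ones_like(dist, dtype=bool), k=1)        # count each pair once
    sq = (z[:, None] - z[None, :]) ** 2
    return 0.5 * sq[sel].mean()

rng = np.random.default_rng(6)
coords = rng.uniform(0, 10, size=(300, 2))
z = np.sin(coords[:, 0]) + rng.normal(scale=0.2, size=300)     # anisotropic field

g_ew = directional_semivariogram(coords, z, lag=1.0, tol=0.25, angle=0.0, ang_tol=np.pi / 8)
g_ns = directional_semivariogram(coords, z, lag=1.0, tol=0.25, angle=np.pi / 2, ang_tol=np.pi / 8)
print(g_ew, g_ns)   # a large gap between directions suggests anisotropy at this lag
```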
APA, Harvard, Vancouver, ISO, and other styles
37

Habli, Nada. "Nonparametric Bayesian Modelling in Machine Learning." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34267.

Full text
Abstract:
Nonparametric Bayesian inference has widespread applications in statistics and machine learning. In this thesis, we examine the most popular priors used in Bayesian nonparametric inference. The Dirichlet process and its extensions are priors on an infinite-dimensional space. Originally introduced by Ferguson (1973), the Dirichlet process has a conjugacy property that allows tractable posterior inference, which has lately given rise to significant developments in applications related to machine learning. Yet another widespread prior used in nonparametric Bayesian inference is the Beta process and its extensions. It was originally introduced by Hjort (1990) for applications in survival analysis. It is a prior on the space of cumulative hazard functions, and it has recently been widely used as a prior on an infinite-dimensional space for latent feature models. Our contribution in this thesis is to collect many diverse groups of nonparametric Bayesian tools and explore algorithms to sample from them. We also explore the machinery behind the theory in order to apply these procedures and expose some of their distinctive features. These tools can be used by practitioners in many applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Aboalkhair, Ahmad M. "Nonparametric predictive inference for system reliability." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/3918/.

Full text
Abstract:
This thesis provides a new method for statistical inference on system reliability on the basis of limited information resulting from component testing. This method is called Nonparametric Predictive Inference (NPI). We present NPI for system reliability, in particular NPI for k-out-of-m systems, and for systems that consist of multiple k_i-out-of-m_i subsystems in a series configuration. An algorithm for optimal redundancy allocation, with additional components added to subsystems one at a time, is presented. We also illustrate redundancy allocation for the same system in case the costs of additional components differ per subsystem. NPI is then presented for system reliability in a similar setting, but with all subsystems consisting of the same single type of component. As a further step in the development of NPI for system reliability, where more general system structures can be considered, nonparametric predictive inference for the reliability of voting systems with multiple component types is presented. We start with a single voting system with multiple component types, then extend to a series configuration of voting subsystems with multiple component types. Throughout this thesis we assume information from tests of n_t components of type t.
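The system structures described here build on the classical k-out-of-m reliability formula. The sketch below shows only that classical structure with a fixed component survival probability p; in NPI, p would be replaced by lower and upper probabilities derived from component test data, which is not shown here.

```python
from math import comb

def k_out_of_m_reliability(k, m, p):
    """Probability that at least k of m independent components,
    each functioning with probability p, are working."""
    return sum(comb(m, j) * p**j * (1 - p)**(m - j) for j in range(k, m + 1))

def series_of_subsystems(specs):
    """Series system of independent k_i-out-of-m_i subsystems:
    specs is a list of (k_i, m_i, p_i) tuples."""
    r = 1.0
    for k, m, p in specs:
        r *= k_out_of_m_reliability(k, m, p)
    return r

print(round(k_out_of_m_reliability(2, 3, 0.9), 4))             # 0.972
print(round(series_of_subsystems([(2, 3, 0.9), (1, 2, 0.8)]), 4))
```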
APA, Harvard, Vancouver, ISO, and other styles
39

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali, and Sandamali dharmasena@rmit edu au. "Sequential Procedures for Nonparametric Kernel Regression." RMIT University. Mathematical and Geospatial Sciences, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090119.134815.

Full text
Abstract:
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; it is, however, assumed to be a smooth function. The main aim of nonparametric regression is to highlight important structure in data without any assumptions about the shape of the underlying regression function. In regression, random and fixed design models should be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel-type estimators are the most popular. Kernel-type estimators provide a flexible class of nonparametric procedures by estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be chosen for any kernel-type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to the class of kernel-type regression estimators called local polynomial kernel estimators. A closely related problem is the determination of an appropriate sample size required to achieve a desired level of accuracy for the nonparametric regression estimators. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, the application of sequential procedures to a nonparametric regression model at a given point or series of points is considered. The motivation for using such procedures is that in many applications the quality of estimating an underlying regression function in a controlled experiment is paramount; thus, it is reasonable to invoke a sequential procedure of estimation that chooses a sample size, based on recorded observations, that guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable. These fixed-width confidence intervals are developed using asymptotic properties of both the Nadaraya-Watson and local linear kernel estimators of nonparametric kernel regression with data-driven bandwidths, and are studied for both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely a two-stage procedure, a modified two-stage procedure and a purely sequential procedure. The proposed methodology is first tested by employing a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing coverage accuracy with the corresponding preset confidence coefficients, by the proximity of the computed sample sizes to the optimal sample sizes, and by contrasting the estimates obtained from the two nonparametric methods with actual values at a given series of design points of interest. We also employed the symmetric bootstrap method, which is considered an alternative method of estimating properties of unknown distributions. Resampling is done from a suitably estimated residual distribution, and the percentiles of the approximate distribution are used to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the extent of oversampling that is normally known to plague Stein's two-stage sequential procedure.
The procedure developed is validated using an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, the proposed sequential nonparametric kernel regression methods are applied to problems in software reliability and finance.
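As a minimal illustration of the kernel estimator underlying these sequential procedures, the following sketch evaluates the Nadaraya-Watson estimator at a few design points. The bandwidth, sample size and data-generating function are illustrative assumptions, and the sequential stopping rules themselves are not implemented.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)

grid = np.linspace(0.1, 0.9, 5)
fit = [nadaraya_watson(g, x, y, h=0.05) for g in grid]
print(np.round(fit, 2))                       # estimated curve at the design points
print(np.round(np.sin(2 * np.pi * grid), 2))  # true curve, for comparison
```

A sequential procedure would wrap a stopping rule around such estimates, continuing to sample until the estimated confidence-interval half-width at the design point falls below a preassigned value.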
APA, Harvard, Vancouver, ISO, and other styles
40

Huang, Fuping. "Nonparametric censored regression by smoothing splines." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ61977.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Douglass, Julian James. "Nonparametric portfolio estimation and asset allocation." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/5414.

Full text
Abstract:
This thesis comprises two essays that apply nonparametric methods to the estimation of portfolio allocations. In the first essay, I test the significance to investor welfare of (i) adding additional assets to the portfolio choice set and (ii) conditioning on predictor variables. I estimate unconditional and conditional optimal allocations of a constant relative risk aversion investor by maximizing a nonparametric approximation of the expected utility integral. Investors can improve their expected utility significantly over that of an equities and cash investor by adding portfolios based on the value or momentum premiums into their asset allocation decision. In contrast, neither a size premium portfolio nor a long-term bond portfolio improves expected utility. The significance of predictability is increased by simultaneously conditioning on the two strongest predictors (of eight) studied: the term spread and the gold industry trend. In the second essay, I formulate a nonparametric estimator that permits combining historical data with a qualitative prior. I investigate the impact of an investor belief, motivated by asset-pricing theory, that optimal allocations are positive. In the estimator construction, I use a Bayesian approach to perturb the probabilities associated with each data point in the empirical distribution to reflect qualitative prior beliefs. In a simulation study and in out-of-sample tests, I find that portfolio estimates conditioned on a belief in the positivity of portfolio weights are significantly more stable than those estimated by an uninformed investor, and that the model performs better in out-of-sample tests than a number of plug-in models. However, the out-of-sample performance lags that of the minimum-variance and 1/N policies.
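The first essay's estimation idea, maximising a sample-based approximation of the expected utility integral, can be sketched as follows for a single risky asset and a CRRA investor. The return distribution, risk aversion and weight grid are illustrative assumptions, not the essay's data or estimator.

```python
import numpy as np

def crra_utility(wealth, gamma):
    """Constant relative risk aversion utility."""
    return np.log(wealth) if gamma == 1 else wealth ** (1 - gamma) / (1 - gamma)

def optimal_weight(excess_returns, rf, gamma, grid):
    """Maximise the sample-average (nonparametric) expected utility of
    end-of-period wealth over a grid of allocations to the risky asset."""
    best_w, best_u = None, -np.inf
    for w in grid:
        wealth = 1.0 + rf + w * excess_returns
        if np.any(wealth <= 0):               # skip allocations that can bankrupt the investor
            continue
        u = np.mean(crra_utility(wealth, gamma))
        if u > best_u:
            best_w, best_u = w, u
    return best_w

rng = np.random.default_rng(3)
excess = 0.05 + 0.15 * rng.standard_normal(600)   # simulated excess returns under P
print(optimal_weight(excess, rf=0.02, gamma=5.0, grid=np.linspace(0, 1.5, 151)))
```

Conditioning on predictors, as in the essay, would amount to weighting the sample points by their relevance to the current value of the predictor variables before taking the average.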
APA, Harvard, Vancouver, ISO, and other styles
42

Samusenko, Pavel. "Nonparametric criteria for sparse contingency tables." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130218_142205-74244.

Full text
Abstract:
In the dissertation, the problem of nonparametric testing for sparse contingency tables is addressed. Statistical inference problems caused by sparsity of contingency tables are widely discussed in the literature. Traditionally, the expected (under the null hypothesis) frequency is required to exceed 5 in almost all cells of the contingency table. If this condition is violated, the χ2 approximations of goodness-of-fit statistics may be inaccurate and the table is said to be sparse. Several techniques have been proposed to tackle the problem: exact tests, alternative approximations, parametric and nonparametric bootstrap, the Bayes approach and other methods. However, they are either not applicable or have limitations in the nonparametric statistical inference of very sparse contingency tables. In the dissertation, it is shown that, for sparse categorical data, the likelihood ratio statistic and Pearson's χ2 statistic may become noninformative: they no longer measure the goodness of fit of null hypotheses to data. Thus, they can be inconsistent even in cases where a simple consistent test does exist. An improvement of the classical criteria for sparse contingency tables is proposed. The improvement is achieved by grouping and smoothing of sparse categorical data, making use of a new sparse asymptotics model relying on an (extended) empirical Bayes approach. Under general conditions, the consistency of the proposed criteria based on grouping is proved. Finite-sample behavior of... [to full text]
The dissertation addresses nonparametric hypothesis testing problems for sparse contingency tables. Problems related to sparse frequency tables are widely discussed in the scientific literature. A whole range of methods has been proposed: exact tests, alternative approximation approaches, parametric and nonparametric bootstrap, Bayesian and other methods. However, they are not applicable, or are inefficient, in the nonparametric analysis of very sparse contingency tables. The dissertation shows that, for very sparse categorical data, the likelihood ratio statistic and Pearson's χ2 statistic may become noninformative: they are no longer suitable for measuring the agreement between the null hypothesis and the data. Hence, criteria based on them may be inconsistent even when a simple consistent criterion exists. The thesis proposes an improvement of the classical criteria for sparse contingency tables. The proposed criteria rely on grouping and smoothing of sparse categorical data using a new sparse asymptotics model based on the (extended) empirical Bayes methodology. Under general conditions, the consistency of the proposed criteria based on grouping is proved. The finite-sample behavior of the criteria is investigated using Monte Carlo simulation. The dissertation consists of an introduction, four chapters, a bibliography, general conclusions and an appendix. The introduction presents the importance of the scientific problem under consideration, the aims and objectives of the work, the research methods, the scientific novelty, and the practical... [to full text]
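For reference, the classical statistics discussed in this abstract are straightforward to compute. The sketch below evaluates Pearson's X^2 and the likelihood-ratio G^2 for a small, deliberately sparse table; the proposed grouping-and-smoothing improvement is not shown, and the example table is illustrative.

```python
import numpy as np
from scipy.stats import chi2

def pearson_and_lr(table):
    """Pearson X^2 and likelihood-ratio G^2 statistics for independence in a
    two-way contingency table, with their asymptotic chi-square p-values."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    x2 = np.sum((table - expected) ** 2 / expected)
    mask = table > 0                               # 0 * log(0) is taken as 0
    g2 = 2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask]))
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return x2, g2, chi2.sf(x2, df), chi2.sf(g2, df)

# A sparse table: most expected counts are far below 5, so the chi-square
# reference distribution for X^2 and G^2 is unreliable here.
sparse = [[3, 0, 1, 0], [0, 2, 0, 1], [1, 0, 0, 2]]
print(pearson_and_lr(sparse))
```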
APA, Harvard, Vancouver, ISO, and other styles
43

Denison, David George Taylor. "Simulation based Bayesian nonparametric regression methods." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Maturi, Tahani. "Nonparametric predictive inference for multiple comparisons." Thesis, Durham University, 2010. http://etheses.dur.ac.uk/230/.

Full text
Abstract:
This thesis presents Nonparametric Predictive Inference (NPI) for several multiple comparisons problems. We introduce NPI for comparison of multiple groups of data including right-censored observations. Different right-censoring schemes discussed are early termination of an experiment, progressive censoring and competing risks. Several selection events of interest are considered including selecting the best group, the subset of best groups, and the subset including the best group. The proposed methods use lower and upper probabilities for some events of interest formulated in terms of the next future observation per group. For each of these problems the required assumptions are Hill's assumption A(n) and the generalized assumption rc-A(n) for right-censored data. Attention is also given to the situation where only a part of the data range is considered relevant for the inference, where in addition the numbers of observations to the left and to the right of this range are known. Throughout this thesis, our methods are illustrated and discussed via examples with data from the literature.
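A rough sketch of the kind of counting argument that Hill's assumption A(n) makes possible is given below: the next observation of each group is assigned probability 1/(n+1) to each interval between its ordered data, and interval pairs are counted according to whether the event "next X exceeds next Y" is certain or merely possible. This is only an illustrative reading of an A(n)-based comparison of two complete samples, not the thesis's formulas, and it ignores right-censoring.

```python
import numpy as np

def comparison_bounds(x, y):
    """Under A_(n), the next X falls in each of the len(x)+1 intervals between the
    ordered x-values (with endpoints -inf and +inf) with equal probability, and
    similarly for Y. Count interval pairs where 'next X > next Y' is certain
    (lower bound) or possible (upper bound)."""
    x, y = np.sort(x), np.sort(y)
    x_int = list(zip(np.concatenate(([-np.inf], x)), np.concatenate((x, [np.inf]))))
    y_int = list(zip(np.concatenate(([-np.inf], y)), np.concatenate((y, [np.inf]))))
    certain = sum(xl >= yu for xl, _ in x_int for _, yu in y_int)
    possible = sum(xu > yl for _, xu in x_int for yl, _ in y_int)
    n_pairs = len(x_int) * len(y_int)
    return certain / n_pairs, possible / n_pairs

rng = np.random.default_rng(4)
x = rng.normal(1.0, 1.0, 8)        # group we hope produces the larger next observation
y = rng.normal(0.0, 1.0, 10)
print(comparison_bounds(x, y))     # (lower, upper) probabilities for 'next X > next Y'
```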
APA, Harvard, Vancouver, ISO, and other styles
45

Elsaeiti, Mohamed. "Nonparametric predictive inference for acceptance decisions." Thesis, Durham University, 2011. http://etheses.dur.ac.uk/3442/.

Full text
Abstract:
This thesis presents new solutions for two acceptance decision problems. First, we present methods for basic acceptance sampling for attributes, based on the nonparametric predictive inferential approach for Bernoulli data, which is extended for this application. We consider acceptance sampling based on destructive tests and on non-destructive tests. Attention is mostly restricted to single-stage sampling, but extension to two-stage sampling is also considered and discussed. Secondly, sequential acceptance decision problems are considered with the aim of selecting one or more candidates from a group, with the candidates observed sequentially, either individually or in subgroups, and with the ordering of an individual relative to previous candidates and those in the same subgroup available. While, for a given total group size, this problem can in principle be solved by dynamic programming, the required computational effort makes this infeasible once the number of candidates to be selected and the total group size are not small. We present a new heuristic approach to such problems, based on the principles of nonparametric predictive inference, and we study its performance via simulations. The approach is very flexible and computationally straightforward, and has advantages over alternative heuristic rules that have been suggested in the literature.
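For the sequential selection problem described here, a simple benchmark is the classical rank-based threshold rule, evaluated by simulation in the spirit of the performance study mentioned in the abstract. The sketch below implements only that generic rule; it is not the NPI-based heuristic proposed in the thesis, and all parameters are illustrative.

```python
import numpy as np

def simulate_threshold_rule(n, skip_frac, n_sims, rng):
    """Monte Carlo performance of a simple rank-based sequential rule: observe the
    first floor(skip_frac * n) candidates, then accept the first one better than
    all seen so far. Returns the proportion of runs selecting the overall best."""
    skip = int(np.floor(skip_frac * n))
    wins = 0
    for _ in range(n_sims):
        scores = rng.permutation(n)             # only relative ranks matter
        best_seen = scores[:skip].max() if skip else -1
        chosen = scores[-1]                     # forced to take the last if never triggered
        for s in scores[skip:]:
            if s > best_seen:
                chosen = s
                break
        wins += (chosen == n - 1)
    return wins / n_sims

rng = np.random.default_rng(5)
print(simulate_threshold_rule(n=50, skip_frac=0.37, n_sims=20000, rng=rng))  # roughly 0.37
```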
APA, Harvard, Vancouver, ISO, and other styles
46

Delatola, Eleni-Ioanna. "Bayesian nonparametric modelling of financial data." Thesis, University of Kent, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589934.

Full text
Abstract:
This thesis presents a class of discrete-time univariate stochastic volatility models using Bayesian nonparametric techniques. In particular, the models introduced are not only the basic stochastic volatility model, but also the heavy-tailed model using a scale mixture of Normals and the leverage model. The aim is to flexibly capture the distribution of the logarithm of the squared return under the aforementioned models using an infinite mixture of Normals. Parameter estimates for these models are obtained using Markov chain Monte Carlo methods and the Kalman filter. Links between the return distribution and the distribution of the logarithm of the squared returns are established. The one-step-ahead predictive ability of the models is measured using log-predictive scores. Asset returns, stock indices and exchange rates are fitted using the developed methods.
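The modelling target described here, the distribution of the logarithm of the squared return, can be made concrete with a small simulation from a basic stochastic volatility model. The parameter values are illustrative assumptions, and no Bayesian nonparametric estimation is performed in this sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
T, mu, phi, sigma_eta = 2000, -1.0, 0.97, 0.15

# Basic stochastic volatility model: r_t = exp(h_t / 2) * eps_t,
# with the log-variance h_t following a stationary AR(1).
h = np.empty(T)
h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
eps = rng.standard_normal(T)
r = np.exp(h / 2) * eps

# Linearised observation equation: log r_t^2 = h_t + log eps_t^2. The log chi-square(1)
# error is skewed and non-Gaussian; flexible mixture models aim to capture this error
# distribution rather than fixing it parametrically.
y = np.log(r**2)
print(np.mean(y - h), np.std(y - h))   # roughly -1.27 and 2.22 for a log chi-square(1) error
```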
APA, Harvard, Vancouver, ISO, and other styles
47

Hall, Benjamin. "NONPARAMETRIC ESTIMATION OF DERIVATIVES WITH APPLICATIONS." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/114.

Full text
Abstract:
We review several nonparametric regression techniques and discuss their various strengths and weaknesses with an emphasis on derivative estimation and confidence band creation. We develop a generalized C(p) criterion for tuning parameter selection when interest lies in estimating one or more derivatives and the estimator is both linear in the observed responses and self-consistent. We propose a method for constructing simultaneous confidence bands for the mean response and one or more derivatives, where simultaneous now refers both to values of the covariate and to all derivatives under consideration. In addition we generalize the simultaneous confidence bands to account for heteroscedastic noise. Finally, we consider the characterization of nanoparticles and propose a method for identifying a proper subset of the covariate space that is most useful for characterization purposes.
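As a minimal example of derivative estimation with a smoother that is linear in the observed responses, the sketch below fits a local quadratic polynomial with kernel weights and reads off the estimated first derivative. The bandwidth and test function are illustrative assumptions; the generalized C(p) tuning and the simultaneous confidence bands of the dissertation are not implemented.

```python
import numpy as np

def local_poly_deriv(x0, x, y, h, degree=2):
    """Local polynomial estimate of f(x0) and f'(x0): weighted least squares fit
    of a degree-`degree` polynomial in (x - x0) with Gaussian kernel weights."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.vander(x - x0, degree + 1, increasing=True)   # columns 1, (x-x0), (x-x0)^2, ...
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0], beta[1]                               # estimates of f(x0) and f'(x0)

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(300)
f_hat, df_hat = local_poly_deriv(0.25, x, y, h=0.08)
print(round(f_hat, 2), round(df_hat, 2))   # true values at x = 0.25: 1.0 and 0.0
```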
APA, Harvard, Vancouver, ISO, and other styles
48

Benhaddou, Rida. "Nonparametric and Empirical Bayes Estimation Methods." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5765.

Full text
Abstract:
In the present dissertation, we investigate two different nonparametric models: the empirical Bayes model and the functional deconvolution model. In the case of nonparametric empirical Bayes estimation, we carry out a complete minimax study. In particular, we derive minimax lower bounds for the risk of the nonparametric empirical Bayes estimator for a general conditional distribution. This result has never been obtained previously. In order to attain optimal convergence rates, we use a wavelet-series-based empirical Bayes estimator constructed in Pensky and Alotaibi (2005). We propose an adaptive version of this estimator using Lepski's method and show that the estimator attains optimal convergence rates. The theory is supplemented by numerous examples. Our study of the functional deconvolution model expands results of Pensky and Sapatinas (2009, 2010, 2011) to the cases of estimating an (r+1)-dimensional function and of dependent errors. In both cases, we derive minimax lower bounds for the integrated square risk over a wide set of Besov balls and construct adaptive wavelet estimators that attain those optimal convergence rates. In particular, in the case of estimating a periodic (r+1)-dimensional function, we show that by choosing Besov balls of mixed smoothness, we can avoid the "curse of dimensionality" and, hence, obtain higher than usual convergence rates when r is large. The study of deconvolution of a multivariate function is motivated by seismic inversion, which can be reduced to the solution of noisy two-dimensional convolution equations that allow one to draw inference on underground layer structures along the chosen profiles. The common practice in seismology is to recover layer structures separately for each profile and then to combine the derived estimates into a two-dimensional function. By studying the two-dimensional version of the model, we demonstrate that this strategy usually leads to estimators which are less accurate than the ones obtained as two-dimensional functional deconvolutions. Finally, we consider a multichannel deconvolution model with long-range dependent Gaussian errors. We do not limit our consideration to a specific type of long-range dependence; rather, we assume that the eigenvalues of the covariance matrix of the errors are bounded above and below. We show that convergence rates of the estimators depend on a balance between the smoothness parameters of the response function, the smoothness of the blurring function, the long memory parameters of the errors, and how the total number of observations is distributed among the channels.
Ph.D., Mathematics, Sciences
APA, Harvard, Vancouver, ISO, and other styles
49

Zychaluk, Kamila. "Application of noise in nonparametric curve." Thesis, University of Birmingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.410854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

PEREIRA, MANOEL FRANCISCO DE SOUZA. "OPTION PRICING VIA NONPARAMETRIC ESSCHER TRANSFORM." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19219@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Option pricing is one of the most important topics in financial economics. This study introduces a nonparametric version of the Esscher transform for risk-neutral pricing of financial options. Traditional parametric methods require the formulation of an explicit risk-neutral model and are operational only for a few probability density functions. In our proposal, under simple assumptions, we avoid the need to formulate a risk-neutral model for the returns. First, we simulate a sample of return paths under the original distribution P. Then, based on the Esscher transform, the sample is reweighted, giving rise to a risk-neutralized sample. Derivative prices are then obtained as a simple average of the payoffs of the option along each path. We compare our proposal with some traditional pricing methods, applying four exercises in different situations to highlight the differences and similarities between the methods. Under the same conditions and in similar situations, the proposed method reproduces the results of pricing methods established in the literature, the Black and Scholes (1973) model and the Duan (1995) method. When the conditions are different, the proposed method indicates that there is more risk than other methods can capture.
Option valuation is one of the most important topics in financial economics. This study introduces a nonparametric version of the Esscher transform for risk-neutral option pricing. Traditional parametric methods require the formulation of an explicit risk-neutral model and are operational only for a few probability density functions. In our proposal, we make only mild assumptions on the price kernel and there is no need to formulate a risk-neutral model for the returns. First, we simulate sample paths for the returns under the historical distribution P. Then, based on the Esscher transform, the sample is reweighted, giving rise to a risk-neutralized sample from which derivative prices can be obtained by a simple average of the pay-offs of the option along each path. We compare our proposal with some traditional pricing methods, applying four exercises in different situations to highlight the differences and similarities between the methods. Under the same conditions and in similar situations, the proposed option pricing method reproduces the results of pricing methods fully established in the literature, the Black and Scholes (1973) model and the Duan (1995) method. When the conditions are different, the proposed method indicates that there is more risk than the other methods can capture.
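The reweighting idea described in both abstracts can be sketched in a few lines: tilt the empirical distribution of simulated log-returns with an Esscher factor exp(theta*x), choose theta so that the reweighted expected gross return equals the risk-free rate, and average discounted payoffs under the tilted weights. The Gaussian returns, the single-period setting and the parameter values below are illustrative assumptions, not the method as implemented in the thesis.

```python
import numpy as np
from scipy.optimize import brentq

def esscher_weights(x, theta):
    """Exponential tilt (Esscher) weights on a sample of log-returns x."""
    w = np.exp(theta * x)
    return w / w.sum()

def risk_neutral_theta(x, r):
    """Choose theta so the reweighted expected gross return equals e^r
    (a sample version of the martingale condition)."""
    def cond(theta):
        w = esscher_weights(x, theta)
        return np.log(np.sum(w * np.exp(x))) - r
    return brentq(cond, -50.0, 50.0)

rng = np.random.default_rng(8)
r, s0, strike, sigma = 0.05, 100.0, 100.0, 0.2
x = 0.08 + sigma * rng.standard_normal(100_000)   # log-returns under the "historical" P
theta = risk_neutral_theta(x, r)
w = esscher_weights(x, theta)
call = np.exp(-r) * np.sum(w * np.maximum(s0 * np.exp(x) - strike, 0.0))
print(round(theta, 3), round(call, 2))            # close to the Black-Scholes value of about 10.45
```

With Gaussian log-returns this tilt recovers the lognormal risk-neutral measure, so the price should be near the Black-Scholes benchmark; for non-Gaussian samples the same code applies unchanged, which is the appeal of the nonparametric formulation.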
APA, Harvard, Vancouver, ISO, and other styles
