Dissertations / Theses on the topic 'Inferenza statistica'
Consult the top 50 dissertations / theses on the topic 'Inferenza statistica' (statistical inference).
AGOSTINELLI, Claudio. "Inferenza statistica robusta basata sulla funzione di verosimiglianza pesata: alcuni sviluppi." Doctoral thesis, Italy, 1998. http://hdl.handle.net/10278/25831.
Capriati, Paola Bianca Martina. "L'utilizzo del metodo Bootstrap nella statistica inferenziale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8715/.
Monda, Anna. "Inferenza non parametrica nel contesto di dati dipendenti: polinomi locali e verosimiglianza empirica." Doctoral thesis, Università degli studi di Salerno, 2013. http://hdl.handle.net/10556/1285.
This work is set in the context of recent research on non-parametric analysis tools and, in particular, examines the use of Local Polynomials and Empirical Likelihood with dependent data. The main forms of dependence treated are those satisfying the definition of alpha-mixing; in this setting, the work attempts to reconcile non-parametric techniques, represented by Local Polynomials, with the Empirical Likelihood approach, aggregating and emphasising the strengths of both methodologies: Local Polynomials provide a more accurate estimate to be placed within the definition of Empirical Likelihood given by Owen (1988). The advantages are easy to appreciate in terms of the immediacy and practical use of this technique. The results are analysed theoretically and then confirmed empirically, also extracting from the data useful information on the effective sensitivity to the most crucial and delicate parameter to be set for Local Polynomial estimators: the bandwidth. Throughout the work we first present the setting in which we operate, specifying the forms of dependence treated; in the second chapter we state the characteristics and properties of local polynomials; in the third chapter we analyse empirical likelihood in detail, again with particular attention to its theoretical properties; in the fourth chapter we present original theoretical results built on the preceding treatment. The concluding chapter proposes a simulation study based on the theoretical properties obtained in the previous chapter. The closing remarks discuss the outcomes of the simulations, which not only confirm the validity of the theoretical results presented in the work, but also provide evidence in favour of a further analysis, for the proposed tests, of their sensitivity to the smoothing parameter employed. [edited by the author]
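For concreteness, the estimator named in this abstract can be sketched in a few lines. The following is a minimal degree-one local polynomial (local linear) kernel estimator; the AR(1) regressor, the Gaussian kernel and the bandwidth h = 0.4 are illustrative choices only, not the thesis's actual setup.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear (degree-one local polynomial) estimate of E[Y | X = x0]."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # intercept + local slope
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                  # intercept = m_hat(x0)

rng = np.random.default_rng(0)
n = 500
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):          # AR(1) regressor: a simple mixing sequence
    x[t] = 0.5 * x[t - 1] + e[t]
y = np.sin(x) + 0.3 * rng.normal(size=n)

grid = np.linspace(-2, 2, 9)
m_hat = [local_linear(g, x, y, h=0.4) for g in grid]  # h is an ad hoc choice
```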
Mancini, Martina. "Teorema di Cochran e applicazioni." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9145/.
HAMMAD, AHMED TAREK. "Tecniche di valutazione degli effetti dei Programmi e delle Politiche Pubbliche. L'approccio di apprendimento automatico causale." Doctoral thesis, Università Cattolica del Sacro Cuore, 2022. http://hdl.handle.net/10280/110705.
The analysis of causal mechanisms has been considered in various disciplines such as sociology, epidemiology, political science, psychology and economics. These approaches allow uncovering causal relations and mechanisms by studying the role of a treatment variable (such as a policy or a program) on a set of outcomes of interest, or of different intermediate variables on the causal path between the treatment and the outcome variables. This thesis first focuses on reviewing and exploring alternative strategies to investigate causal effects and multiple mediation effects using Machine Learning algorithms, which have been shown to be particularly suited for assessing research questions in complex settings with non-linear relations. Second, the thesis provides two empirical examples where two Machine Learning algorithms, namely the Generalized Random Forest and Multiple Additive Regression Trees, are used to account for important control variables in causal inference in a data-driven way. By bridging a fundamental gap between causality and advanced data modelling, this work combines state-of-the-art theories and modelling techniques.
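As a hedged illustration of the general idea of using flexible Machine Learning to adjust for controls in causal inference (not the thesis's Generalized Random Forest code), the sketch below fits a simple T-learner with scikit-learn's gradient-boosted trees on simulated data; the data-generating process and the true effect of 2.0 are invented for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))                          # observed control variables
T = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))  # treatment depends on controls
Y = 2.0 * T + np.sin(X[:, 0]) + X[:, 1] + rng.normal(size=n)

# T-learner: fit one flexible outcome model per treatment arm, then average
# the difference of predictions over the whole sample.
m1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])
m0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])
ate_hat = (m1.predict(X) - m0.predict(X)).mean()
print(f"estimated average treatment effect: {ate_hat:.2f} (truth: 2.0)")
```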
BOLZONI, MATTIA. "Variational inference and semi-parametric methods for time-series probabilistic forecasting." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/313704.
Probabilistic forecasting is a common task. The usual approach assumes a fixed structure for the outcome distribution, often called a model, that depends on unseen quantities called parameters, and uses data to infer a reasonable distribution over these latent values. The inference step is not always straightforward: single-value (point) estimates can lead to poor performance and overfitting, while handling a proper distribution with MCMC can be challenging. Variational Inference (VI) is emerging as a viable optimisation-based alternative that models the target posterior with instrumental variables called variational parameters. However, VI usually imposes a parametric structure on the proposed posterior. The thesis's first contribution is Hierarchical Variational Inference (HVI), a methodology that uses Neural Networks to create semi-parametric posterior approximations with the same minimum requirements as Metropolis-Hastings or Hamiltonian MCMC. The second contribution is a Python package to conduct VI on time-series models for mean-covariance estimation, using HVI and standard VI techniques combined with Neural Networks. Results on econometric and financial data show a consistent improvement of VI over point estimates, yielding lower-variance forecasts.
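A minimal sketch of the reparameterisation-gradient variational inference that such work builds on, for a toy conjugate model whose exact posterior is known in closed form; HVI's neural-network posteriors are beyond a short example, and all numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.5, 1.0, size=50)  # toy data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)
n = y.size

# Variational family q = N(mu, s^2) with s = exp(log_s); maximise the ELBO by
# stochastic gradient ascent using the reparameterisation theta = mu + s * eps.
mu, log_s = 0.0, 0.0
lr = 0.01
for step in range(5000):
    eps = rng.normal(size=64)                            # Monte Carlo draws
    theta = mu + np.exp(log_s) * eps
    g = -theta + np.sum(y) - n * theta                   # d/dtheta log p(y, theta)
    grad_mu = g.mean()
    grad_log_s = (g * eps).mean() * np.exp(log_s) + 1.0  # +1 is the entropy term
    mu += lr * grad_mu
    log_s += lr * grad_log_s

# Exact conjugate posterior for comparison: N(sum(y)/(n+1), 1/(n+1))
print(mu, np.exp(log_s), y.sum() / (n + 1), (1.0 / (n + 1)) ** 0.5)
```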
ROMIO, SILVANA ANTONIETTA. "Modelli marginali strutturali per lo studio dell'effetto causale di fattori di rischio in presenza di confondenti tempo dipendenti." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/8048.
MASPERO, DAVIDE. "Computational strategies to dissect the heterogeneity of multicellular systems via multiscale modelling and omics data analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2022. http://hdl.handle.net/10281/368331.
Heterogeneity pervades biological systems and manifests itself in the structural and functional differences observed both among different individuals of the same group (e.g., organisms or disease systems) and among the constituent elements of a single individual (e.g., cells). The study of the heterogeneity of biological systems and, in particular, of multicellular systems is fundamental for the mechanistic understanding of complex physiological and pathological phenomena (e.g., cancer), as well as for the definition of effective prognostic, diagnostic, and therapeutic strategies. This work focuses on developing and applying computational methods and mathematical models for characterising the heterogeneity of multicellular systems and, especially, cancer cell subpopulations underlying the evolution of neoplastic pathology. Similar methodologies have been developed to characterise viral evolution and heterogeneity effectively. The research is divided into two complementary portions, the first aimed at defining methods for the analysis and integration of omics data generated by sequencing experiments, the second at modelling and multiscale simulation of multicellular systems. Regarding the first strand, next-generation sequencing technologies allow us to generate vast amounts of omics data, for example, related to the genome or transcriptome of a given individual, through bulk or single-cell sequencing experiments. One of the main challenges in computer science is to define computational methods to extract useful information from such data, taking into account the high levels of data-specific errors, mainly due to technological limitations. In particular, in the context of this work, we focused on developing methods for the analysis of gene expression and genomic mutation data. In detail, an exhaustive comparison of machine-learning methods for denoising and imputation of single-cell RNA-sequencing data has been performed. Moreover, methods for mapping expression profiles onto metabolic networks have been developed through an innovative framework that has allowed one to stratify cancer patients according to their metabolism. A subsequent extension of the method allowed us to analyse the distribution of metabolic fluxes within a population of cells via a flux balance analysis approach. Regarding the analysis of mutational profiles, the first method for reconstructing phylogenomic models from longitudinal data at single-cell resolution has been designed and implemented, exploiting a framework that combines a Markov Chain Monte Carlo with a novel weighted likelihood function. Similarly, a framework that exploits low-frequency mutation profiles to reconstruct robust phylogenies and likely chains of infection has been developed by analysing sequencing data from viral samples. The same mutational profiles also allow us to deconvolve the signal into the signatures associated with specific molecular mechanisms that generate such mutations, through an approach based on non-negative matrix factorisation. The research conducted with regard to the computational simulation has led to the development of a multiscale model, in which the simulation of cell population dynamics, represented through a Cellular Potts Model, is coupled to the optimisation of a metabolic model associated with each synthetic cell. Using this model, it is possible to represent assumptions in mathematical terms and observe properties emerging from these assumptions.
Finally, we present a first attempt to combine the two methodological approaches which led to the integration of single-cell RNA-seq data within the multiscale model, allowing data-driven hypotheses to be formulated on the emerging properties of the system.
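The per-cell metabolic optimisation mentioned above (flux balance analysis) is, at its core, a linear program; below is a minimal sketch with an invented three-reaction network, using scipy.optimize.linprog rather than the authors' own framework.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximise the "biomass" flux v3 subject to the
# steady-state constraint S v = 0 and capacity bounds on each reaction.
# Invented reactions: R1: -> A (uptake), R2: A -> B, R3: B -> (biomass)
S = np.array([[1.0, -1.0, 0.0],    # mass balance of metabolite A
              [0.0, 1.0, -1.0]])   # mass balance of metabolite B
c = np.array([0.0, 0.0, -1.0])     # linprog minimises, so minimise -v3
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 flux units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)                       # optimal fluxes: [10, 10, 10]
```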
Zeller, Camila Borelli. "Modelo de Grubbs em grupos." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307093.
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: In this work we present a study of statistical inference in the Grubbs model with subgroups, an extension of the model proposed by Grubbs (1948, 1973) that is frequently used to compare instruments or measurement methods. We consider the parametrization proposed by Bedrick (2001). The study is based on the maximum likelihood method. Hypothesis tests based on the Wald, score and likelihood-ratio statistics are considered. The maximum likelihood estimates of the Grubbs model with subgroups are obtained using the EM algorithm, assuming that the observations follow a normal distribution. We also present a diagnostic analysis for the Grubbs model with subgroups, aimed at evaluating the impact that a given subgroup exerts on the parameter estimates, using the local influence methodology proposed by Cook (1986) under a case-weight perturbation scheme. Finally, we present some simulation studies and illustrate the theoretical results using data found in the literature.
Master's degree in Statistics
Filiasi, Mario. "Applications of Large Deviations Theory and Statistical Inference to Financial Time Series." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/10940.
The correct evaluation of financial risk is one of the most active domains of financial research, and has become even more relevant after the latest financial crisis. The recent developments of econophysics prove that the dynamics of financial markets can be successfully investigated by means of physical models borrowed from statistical physics. The fluctuations of stock prices are continuously recorded at very high frequencies (up to 1 ms), and this generates a huge amount of data which can be statistically analysed in order to validate and calibrate the theoretical models. The present work moves in this direction, and is the result of a close interaction between the Physics Department of the University of Trieste and List S.p.A., in collaboration with the International Centre for Theoretical Physics (ICTP). In this work we analyse the time series over the last two years of the prices of the 20 most traded stocks on the Italian market. We investigate the statistical properties of price returns and verify some stylized facts about stock prices. Price returns are distributed according to a heavy-tailed distribution and therefore, according to Large Deviations Theory, they are frequently subject to extreme events which produce abrupt price jumps. We refer to this phenomenon as the condensation of the large deviations. We investigate condensation phenomena within the framework of statistical physics and show the emergence of a phase transition in heavy-tailed distributions. In addition, we empirically analyse condensation phenomena in stock prices: we show that extreme returns are generated by non-trivial price fluctuations, which reduce the effects of sharp price jumps but amplify the diffusive movements of prices. Moving beyond the statistical analysis of single-stock prices, we investigate the structure of the market as a whole. In the financial literature it is often assumed that price changes are due to exogenous events, e.g. the release of economic and political news. Yet, it is reasonable to suppose that stock prices could also be driven by endogenous events, such as the price changes of related financial instruments. The large amount of available data allows us to test this hypothesis and to investigate the structure of the market by means of statistical inference. In this work we propose a market model based on interacting prices: we study an integrate-and-fire model, inspired by the dynamics of neural networks, where each stock price depends on the other stock prices through a threshold-passing mechanism. Using a maximum likelihood algorithm, we apply the model to the empirical data and try to infer the information network that underlies the financial market.
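The "condensation of the large deviations" described in this abstract is closely related to the single-big-jump behaviour of heavy-tailed sums; the small simulation below (tail index, sample sizes and conditioning threshold all chosen ad hoc) illustrates the effect: when the sum of iid heavy-tailed variables is conditioned to be large, a single summand tends to dominate.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, alpha = 50, 100_000, 1.5            # alpha < 2: very heavy tails
x = rng.pareto(alpha, size=(trials, n)) + 1.0  # iid Pareto(alpha) samples

s = x.sum(axis=1)
big = s > np.quantile(s, 0.999)                # condition on an unusually large sum
share = x[big].max(axis=1) / s[big]            # fraction carried by the largest term

print(f"given a large sum, the single largest term carries "
      f"{share.mean():.0%} of it on average")
```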
Frey, Jesse C. "Inference procedures based on order statistics." Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1122565389.
Title from first page of PDF file. Document formatted into pages; contains xi, 148 p.; also includes graphics. Includes bibliographical references (p. 146-148). Available online via OhioLINK's ETD Center.
Wiberg, Marie H. "Computerized achievement tests : sequential and fixed length tests." Doctoral thesis, Umeå universitet, Statistiska institutionen, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148.
Follestad, Turid. "Stochastic Modelling and Simulation Based Inference of Fish Population Dynamics and Spatial Variation in Disease Risk." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-41.
We present a non-Gaussian and non-linear state-space model for the population dynamics of cod along the Norwegian Skagerak coast, embedded in the framework of a Bayesian hierarchical model. The model takes into account both process error, representing natural variability in the dynamics of a population, and observational error, reflecting the sampling process relating the observed data to true abundances. The data set on which our study is based consists of samples of two juvenile age-groups of cod taken by beach seine hauls at a set of sample stations within several fjords along the coast. The age-structured population dynamics model, constituting the prior of the Bayesian model, is specified in terms of the recruitment process and the processes of survival for these two juvenile age-groups and the mature population, for which we have no data. The population dynamics is specified on abundances at the fjord level, and an explicit down-scaling from the fjord level to the level of the monitored stations is included in the likelihood, modelling the sampling process relating the observed counts to the underlying fjord abundances.
We take a sampling-based approach to parameter estimation using Markov chain Monte Carlo methods. The properties of the model, in terms of mixing and convergence of the MCMC algorithm, are explored empirically on the basis of a simulated data set, and we show how the mixing properties can be improved by re-parameterisation. Estimation of the model parameters, and not the abundances, is the primary aim of the study, and we also propose an alternative approach to the estimation of the model parameters based on the marginal posterior distribution, integrating over the abundances.
Based on the estimated model we illustrate how we can simulate the release of juvenile cod, imitating an experiment conducted in the early 20th century to resolve a controversy between a fisherman and a scientist who could not agree on the effect of releasing cod larvae on the mature abundance of cod. This controversy initiated the monitoring programme generating the data used in our study.
Lee, Yun-Soo. "On some aspects of distribution theory and statistical inference involving order statistics." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/834141.
Department of Mathematical Sciences
Kim, Woosuk. "Statistical Inference on Dual Generalized Order Statistics for Burr Type III Distribution." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1396533232.
Jones, Lee K., and Richard C. Larson. "Efficient Computation of Probabilities of Events Described by Order Statistics and Application to a Problem of Queues." Massachusetts Institute of Technology, Operations Research Center, 1991. http://hdl.handle.net/1721.1/5159.
Ho, Man Wai. "Bayesian inference for models with monotone densities and hazard rates." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ISMT%202002%20HO.
Includes bibliographical references (leaves 110-114). Also available in electronic version. Access restricted to campus users.
Villalobos, Isadora Antoniano. "Bayesian inference for models with infinite-dimensionally generated intractable components." Thesis, University of Kent, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.594106.
Bohlin, Lars. "Inferens på rangordningar - En Monte Carlo-analys." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-46322.
Asif, Muneeb. "Bayesian Inference for the Global Minimum Variance Portfolio." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-68929.
Blomberg, Per. "Informell Statistisk Inferens i modelleringssituationer : En studie om utveckling av ett ramverk för att analysera hur elever uttrycker inferenser." Licentiate thesis, Linnéuniversitetet, Institutionen för matematikdidaktik (MD), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-45572.
The purpose of this study is to improve our knowledge about the teaching and learning of informal statistical inference. A qualitative research strategy is used, focused on the testing and generation of theories inspired by grounded theory. The study aims at characterising statistical processes and concepts, where systems of conceptual frameworks for informal statistical inference and modelling represent an essential part of the research. In order to obtain adequate empirical data, a teaching situation was devised whereby students were involved in planning and implementing an investigation. The study was conducted in a normal classroom situation where the teaching focused on an area of probability and statistics that included the introduction of box plots and the normal distribution with related concepts. The empirical material was collected through video recordings and written reports. The material was analysed using a combined framework of informal statistical inference and modelling. The results of the analysis highlight examples of how students can be expected to express aspects of informal statistical inference within the context of statistical inquiry. A framework was also developed to theoretically depict informal statistical inference in modelling situations. The study suggests that this framework has the potential to be used to analyse how students' informal statistical inferences are expressed, and to identify potential learning opportunities for students to develop their ability to express inferences.
Veraart, Almut Elisabeth Dorothea. "Volatility estimation and inference in the presence of jumps." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.670107.
Lundin, Mathias. "Sensitivity Analysis of Untestable Assumptions in Causal Inference." Doctoral thesis, Umeå universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-43239.
Jinn, Nicole Mee-Hyaang. "Toward Error-Statistical Principles of Evidence in Statistical Inference." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/48420.
Master of Arts
Koskinen, Johan. "Essays on Bayesian Inference for Social Networks." Doctoral thesis, Stockholm : Department of Statistics [Statistiska institutionen], Univ, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-128.
DENTI, FRANCESCO. "Bayesian Mixtures for Large Scale Inference." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2020. http://hdl.handle.net/10281/262923.
Bayesian mixture models are ubiquitous in statistics due to their simplicity and flexibility and can be easily employed in a wide variety of contexts. In this dissertation, we aim at providing a few contributions to current Bayesian data analysis methods, often motivated by research questions from biological applications. In particular, we focus on the development of novel Bayesian mixture models, typically in a nonparametric setting, to improve and extend active research areas that involve large-scale data: the modeling of nested data, multiple hypothesis testing, and dimensionality reduction. Therefore, our goal is twofold: to develop robust statistical methods motivated by a solid theoretical background, and to propose efficient, scalable and tractable algorithms for their applications. The thesis is organized as follows. In the opening chapter we shortly review the methodological background and discuss the necessary concepts that belong to the different areas that we will contribute to with this dissertation. In the second chapter we propose a Common Atoms model (CAM) for nested datasets, which overcomes the limitations of the nested Dirichlet Process discussed by Camerlenghi et al. (2018). We derive its theoretical properties and develop a slice sampler for nested data to obtain an efficient algorithm for posterior simulation. We then embed the model in a Rounded Mixture of Gaussian kernels framework to apply our method to an abundance table from a microbiome study. In the third chapter we develop a BNP version of the two-group model (Efron, 2004), modeling both the null density f0 and the alternative density f1 with Pitman-Yor process mixture models. We propose to fix the two discount parameters σ0 and σ1 so that σ0 > σ1, according to the rationale that the null PY should be closer to its base measure (appropriately chosen to be a standard Gaussian), while the alternative PY should have fewer constraints. To induce separation, we employ a non-local prior (Johnson) on the location parameter of the base measure of the PY placed on f1. We show how the model performs in different scenarios and apply this methodology to a microbiome dataset. The fourth chapter presents a second proposal for the two-group model. Here, we make use of non-local distributions to model the alternative density directly in the likelihood formulation. We propose both a parametric and a nonparametric formulation of the model. We provide a theoretical justification for the adoption of this approach and, after comparing the performance of our model with several competitors, we present three applications on real, publicly available genomic datasets. The fifth chapter focuses on improving the model for intrinsic dimension (ID) estimation discussed by Allegra et al. In particular, the authors estimate the IDs by modeling the ratio of the distances from a point to its first and second nearest neighbors (NNs). First, we propose to include more suitable priors in their parametric, finite mixture model. Then, we extend the existing theoretical methodology by deriving closed-form distributions for the ratios of distances from a point to two NNs of generic order. We propose a simple Dirichlet process mixture model, where we exploit the novel theoretical results to extract more information from the data. The chapter is then concluded with simulation studies and an application to real data. Finally, a closing chapter presents future directions and concludes.
Huh, Ji Young. "Applications of Monte Carlo Methods in Statistical Inference Using Regression Analysis." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1160.
Thabane, Lehana. "Contributions to Bayesian statistical inference." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq31133.pdf.
Yang, Liqiang. "Statistical Inference for Gap Data." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20001110-173900.
This thesis research is motivated by a special type of missing data - Gap Data - which was first encountered in a cardiology study conducted at Duke Medical School. This type of data includes multiple observations of a certain event time (in this medical study the event is the reopening of a certain artery); some of them may have one or more missing periods, called "gaps", before observing the "first" event. Therefore, for those observations, the observed first event may not be the true first event, because the true first event might have happened in one of the missing gaps. Due to this kind of missing information, estimating the survival function of the true first event becomes very difficult. No research nor discussion had been done on this type of data before. In this thesis, the author introduces a new nonparametric estimating method to solve this problem, currently called the Imputed Empirical Estimating (IEE) method. According to the simulation studies, the IEE method provides a very good estimate of the survival function of the true first event; it significantly outperforms all the existing estimating approaches in our simulation studies. Besides the new IEE method, this thesis also explores the maximum likelihood estimate in the gap data case. The gap data is introduced as a special type of interval censored data for the first time. The dependence between the censoring interval (in the gap data case, the observed first event time point) and the event (the true first event) makes the gap data different from the well studied regular interval censored data. This thesis points out the only difference between the gap data and the regular interval censored data, and provides an MLE of the gap data under certain assumptions. The third estimating method discussed in this thesis is the Weighted Estimating Equation (WEE) method. The WEE estimate is a very popular nonparametric approach currently used in many survival analysis studies. In this thesis the consistency and asymptotic properties of the WEE estimate used for gap data are discussed. Finally, in the gap data case, the WEE estimate is shown to be equivalent to the Kaplan-Meier estimate. Numerical examples are provided to illustrate the algorithms of the IEE and the MLE approaches. The author also provides an IEE estimate of the survival function based on the real-life data from Duke Medical School. A series of simulation studies are conducted to assess the goodness-of-fit of the new IEE estimate; plots and tables of the results are presented in the second chapter of this thesis.
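Since the abstract notes that the WEE estimate is equivalent to the Kaplan-Meier estimate in the gap-data case, a minimal product-limit estimator may help fix ideas; this is the textbook Kaplan-Meier formula for right-censored data, not the thesis's IEE algorithm, and the six observations are invented.

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimate of the survival function for right-censored data."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    order = np.lexsort((1 - event, time))  # sort by time; events before censorings
    time, event = time[order], event[order]
    n = len(time)
    s, times, surv = 1.0, [], []
    for i, (t, d) in enumerate(zip(time, event)):
        if d == 1:                         # an observed event
            s *= 1.0 - 1.0 / (n - i)       # n - i subjects still at risk
            times.append(t)
            surv.append(s)
    return np.array(times), np.array(surv)

t, s = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])
# t = [2, 3, 5, 9], s ≈ [0.833, 0.667, 0.444, 0.0]
```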
Sun, Xiaohai. "Causal inference from statistical data." Berlin: Logos-Verlag, 2008. http://d-nb.info/988947331/04.
Full textCzogiel, Irina. "Statistical inference for molecular shapes." Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/12217/.
Fong, Yee-tak Daniel (方以德). "Statistical inference on biomedical models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210788.
Full textLiu, Fei, and 劉飛. "Statistical inference for banding data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41508701.
Full textJunklewitz, Henrik. "Statistical inference in radio astronomy." Diss., Ludwig-Maximilians-Universität München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:19-177457.
Full textBell, Paul W. "Statistical inference for multidimensional scaling." Thesis, University of Newcastle Upon Tyne, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327197.
Full textCovarrubias, Carlos Cuevas. "Statistical inference for ROC curves." Thesis, University of Warwick, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399489.
Full textOe, Bianca Madoka Shimizu. "Statistical inference in complex networks." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28032017-095426/.
Various natural and artificial phenomena composed of interconnected parts have been studied by complex network theory. This representation enables the study of dynamical processes that take place on complex networks, such as the propagation of epidemics and rumors. The evolution of these processes is influenced by the organization of the network's connections. The size of real-world networks makes the analysis of the entire network computationally prohibitive; it therefore becomes necessary to represent it by topological measures or to sample it to reduce its size. Moreover, many networks are samples of larger networks whose structure is hard to capture and must be inferred from samples. In this work, both problems are studied: the influence of network structure on propagation processes, and the effects of sampling on network structure. The results obtained suggest that it is possible to predict the size of an epidemic or rumor using a beta regression model with variable dispersion, with topological measures as regressors. The most influential measure in both dynamics is the mean search information, which quantifies the ease with which a network is navigated. It is also shown that the structure of a sampled network differs from the original, and that the type of change depends on the sampling method used. Finally, four sampling methods were applied to study the behavior of a network's epidemic threshold when sampled at different sampling rates. The results suggest that breadth-first search sampling is the most suitable of the compared methods for estimating the epidemic threshold.
ZHAO, SHUHONG. "STATISTICAL INFERENCE ON BINOMIAL PROPORTIONS." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1115834351.
Liu, Fei. "Statistical inference for banding data." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B41508701.
Fong, Yee-tak Daniel. "Statistical inference on biomedical models." [Hong Kong]: University of Hong Kong, 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13456921.
Peiris, Thelge Buddika. "Constrained Statistical Inference in Regression." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/934.
FANIZZA, MARCO. "Quantum statistical inference and communication." Doctoral thesis, Scuola Normale Superiore, 2021. http://hdl.handle.net/11384/109209.
Borgos, Hilde Grude. "Stochastic Modeling and Statistical Inference of Geological Fault Populations and Patterns." Doctoral thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2000. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-503.
The focus of this work is on faults, and the main issue is statistical analysis and stochastic modeling of faults and fault patterns in petroleum reservoirs. The thesis consists of Parts I-V and Appendices A-C. The units can be read independently. Part III is written for a geophysical audience, and the topic of this part is fault and fracture size-frequency distributions. The remaining parts are written for a statistical audience, but can also be read by people with an interest in quantitative geology. The topic of Parts I and II is statistical model choice for fault size distributions, with a sampling algorithm for estimating Bayes factors. Part IV describes work on spatial modeling of fault geometry, and Part V is a short note on line partitioning. Parts I, II and III constitute the main part of the thesis. The appendices are conference abstracts and papers based on Parts I and IV.
Paper III: reprinted with kind permission of the American Geophysical Union. An edited version of this paper was published by AGU. Copyright [2000] American Geophysical Union
Bruce, Daniel. "Optimal Design and Inference for Correlated Bernoulli Variables using a Simplified Cox Model." Doctoral thesis, Stockholm : Department of Statistics, Stockholm University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7512.
Westerborn, Johan. "On particle-based online smoothing and parameter inference in general state-space models." Doctoral thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215292.
This thesis consists of four papers, presented as Papers A-D, treating particle-based online smoothing and parameter estimation in general hidden Markov models. Paper A presents a new algorithm, PaRIS, whose aim is the efficient computation of particle-based online estimates of smoothed expectations of additive state functionals. Under weak assumptions, the algorithm has a computational complexity that grows only linearly with the number of particles, as well as very limited memory requirements. In addition, a number of convergence results are derived for the algorithm, such as a central limit theorem. The algorithm is tested in a simulation study. Paper B studies the problem of estimating the marginal smoothing distribution in hidden Markov models. This is achieved by executing the PaRIS algorithm in marginal mode. A Markov-chain mixing argument motivates truncating the update after a lag determined by a stopping criterion, which yields an adaptive-lag smoother. Paper C studies the problem of computing derivatives of the filter distribution, which are used to compute the gradient of the log-likelihood function. The algorithm, which contains an update mechanism similar to that of PaRIS, is equipped with a number of convergence results, such as a central limit theorem with uniformly bounded variance. The resulting algorithm is used to construct a recursive parameter estimation algorithm. Paper D focuses on online estimation of model parameters in general hidden Markov models. The presented algorithm can be seen as a combination of the PaRIS algorithm and a recently proposed online implementation of the classical EM algorithm.
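The algorithms summarised above all run on top of a particle filter; for reference, here is a minimal bootstrap particle filter for an invented linear-Gaussian state-space model (PaRIS adds an online backward-sampling update for additive functionals on top of such a filter, which is not shown here).

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented linear-Gaussian model: X_t = 0.9 X_{t-1} + V_t,  Y_t = X_t + W_t
phi, sig_v, sig_w, T = 0.9, 1.0, 0.5, 100
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sig_v * rng.normal()    # latent state
    y[t] = x[t] + sig_w * rng.normal()              # observation

N = 500                                             # number of particles
p = rng.normal(size=N)                              # initial particle cloud
filt_mean = np.zeros(T)                             # E[X_t | y_{1:t}] estimates
for t in range(1, T):
    p = phi * p + sig_v * rng.normal(size=N)        # propagate (bootstrap proposal)
    logw = -0.5 * ((y[t] - p) / sig_w) ** 2         # observation log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filt_mean[t] = np.sum(w * p)                    # self-normalised estimate
    p = p[rng.choice(N, size=N, p=w)]               # multinomial resampling
```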
Shen, Gang. "Bayesian predictive inference under informative sampling and transformation." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-142754/.
Keywords: Ignorable Model; Transformation; Poisson Sampling; PPS Sampling; Gibbs Sampler; Inclusion Probabilities; Selection Bias; Nonignorable Model; Bayesian Inference. Includes bibliographical references (p. 34-35).
Edin, Moa. "Outcome regression methods in causal inference : The difference LASSO and selection of effect modifiers." Thesis, Umeå universitet, Statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149423.
BENAZZO, Andrea. "LE SIMULAZIONI DEL PROCESSO COALESCENTE IN GENETICA DI POPOLAZIONI: INFERENZE DEMOGRAFICHE ED EVOLUTIVE." Doctoral thesis, Università degli studi di Ferrara, 2012. http://hdl.handle.net/11392/2389456.
Ren, Sheng. "New Methods of Variable Selection and Inference on High Dimensional Data." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1511883302569683.
Stattin, Oskar. "Large scale inference under sparse and weak alternatives: non-asymptotic phase diagram for CsCsHM statistics." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209963.
Modern measurement technology makes it possible to generate and store gigantic amounts of data, a large share of which is redundant and of which only a few entries are useful for a given problem. Fields where this is common include genomics, proteomics and astronomy, where large multiple tests often need to be performed with the expectation of only a few significant effects. A number of new testing procedures have been developed for testing these so-called sparse and weak effects in large-scale statistical inference. The most popular of these is probably Higher Criticism, HC (see Donoho and Jin (2004)). A new class of goodness-of-fit test statistics, named CsCsHM, has recently been derived (see Stepanova and Pavlenko (2017)) for the same type of multiple testing scenarios, with provably better asymptotic properties than the traditional HC approach. This thesis explores the empirical behaviour of both test methodologies near the detection boundary, which is the threshold for detection of sparse and weak effects. This theoretical, sharp boundary divides the phase space, spanned by the sparsity and weakness parameters, into two subregions: the detectable and the undetectable region. The test statistics are also applied to variable selection for large-scale binary classification. Besides simulations, the methods are applied to real data. The results indicate that the test statistics are comparable in performance.
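For concreteness, the Higher Criticism statistic of Donoho and Jin (2004) referred to above can be computed from sorted p-values as follows; the search over the smallest alpha0 fraction is conventional, and the demo's sparse-signal parameters are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(pvals, alpha0=0.5):
    """HC statistic of Donoho & Jin (2004), maximised over the smallest
    alpha0 fraction of the sorted p-values."""
    p = np.sort(np.clip(np.asarray(pvals), 1e-12, 1 - 1e-12))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return hc[: max(1, int(alpha0 * n))].max()

rng = np.random.default_rng(5)
n = 10_000
z = rng.normal(size=n)                   # test statistics under the global null
z_alt = z.copy()
z_alt[:30] += 3.0                        # a sparse set of weak signals
p_null = 2 * norm.sf(np.abs(z))          # two-sided p-values
p_alt = 2 * norm.sf(np.abs(z_alt))
print(higher_criticism(p_null), higher_criticism(p_alt))  # larger under the alternative
```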