Theses on the topic "Proportion inference"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 19 theses for your research on the topic "Proportion inference".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Kim, Hyun Seok (John). "Diagnosing examinees' attributes-mastery using the Bayesian inference for binomial proportion: a new method for cognitive diagnostic assessment". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41144.
Li, Qiuju. "Statistical inference for joint modelling of longitudinal and survival data". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/statistical-inference-for-joint-modelling-of-longitudinal-and-survival-data(65e644f3-d26f-47c0-bbe1-a51d01ddc1b9).html.
Zhao, Shuhong. "Statistical Inference on Binomial Proportions". University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1115834351.
Simonnet, Titouan. "Apprentissage et réseaux de neurones en tomographie par diffraction de rayons X. Application à l'identification minéralogique". Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1033.
Understanding the chemical and mechanical behavior of compacted materials (e.g. soil, subsoil, engineered materials) requires a quantitative description of the material's structure, in particular the nature of the various mineralogical phases and their spatial relationships. Natural materials, however, are composed of numerous small-sized minerals, frequently mixed on a small scale. Recent advances in synchrotron-based X-ray diffraction tomography (to be distinguished from phase-contrast tomography) now make it possible to obtain tomographic volumes with nanometer-sized voxels, with an XRD pattern for each of these voxels (where phase contrast only gives a gray level). On the other hand, the sheer volume of data (typically on the order of 100,000 XRD patterns per sample slice), combined with the large number of phases present, makes quantitative processing virtually impossible without appropriate numerical codes. This thesis aims to fill this gap, using neural network approaches to identify and quantify minerals in a material. Training such models requires the construction of large-scale learning bases, which cannot be made up of experimental data alone. Algorithms capable of synthesizing XRD patterns to generate these bases have therefore been developed. The originality of this work also concerned the inference of proportions using neural networks. To meet this new and complex task, adapted loss functions were designed. The potential of neural networks was tested on data of increasing complexity: (i) XRD patterns calculated from crystallographic information, (ii) experimental powder XRD patterns measured in the laboratory, and (iii) data obtained by X-ray tomography. Different neural network architectures were also tested. While a convolutional neural network seemed to provide interesting results, the particular structure of the diffraction signal (which is not translation invariant) led to the use of models such as Transformers. The approach adopted in this thesis has demonstrated its ability to quantify mineral phases in a solid. For more complex data, such as tomography, improvements have been proposed.
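The abstract does not spell out the adapted loss functions. As a purely illustrative sketch (the architecture, the names and the choice of loss below are assumptions, not the author's design), one common way to have a network predict phase proportions is to constrain its output to the simplex with a softmax layer and penalize the Kullback-Leibler divergence to the target proportion vector:

```python
# Hypothetical sketch: a network predicting mineral-phase proportions from a 1D XRD pattern.
# The architecture and loss are illustrative assumptions, not the thesis's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProportionNet(nn.Module):
    def __init__(self, n_channels: int, n_phases: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_channels, 256), nn.ReLU(),
            nn.Linear(256, n_phases),
        )

    def forward(self, x):
        # Softmax keeps the predicted proportions non-negative and summing to one.
        return F.softmax(self.backbone(x), dim=-1)

def proportion_loss(pred, target, eps=1e-8):
    # KL divergence between target and predicted proportion vectors.
    return torch.sum(target * (torch.log(target + eps) - torch.log(pred + eps)), dim=-1).mean()

# Toy usage with random data (2048-channel patterns, 5 phases).
model = ProportionNet(2048, 5)
x = torch.randn(4, 2048)
target = torch.softmax(torch.randn(4, 5), dim=-1)
loss = proportion_loss(model(x), target)
loss.backward()
```

A convolutional or Transformer backbone could replace the dense layers without changing the loss.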
Liu, Guoyuan. "Comparison of prior distributions for bayesian inference for small proportions". Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96917.
Bayesian analyses of epidemiological data often use objective prior distributions. These priors are selected so that the posterior distributions are determined solely by the observed data. Although this approach is effective in many situations, it is not so for the Bayesian estimation of small proportions. This situation can arise, for example, when estimating the prevalence of a rare disease. Several objective prior distributions have been proposed for estimating a proportion, such as, for example, the uniform and Jeffreys distributions. Each of these priors can lead to different posterior distributions when the number of events in the binomial experiment is small, but it is not clear which of them, on average, gives better estimates. We explore this question by examining the frequentist performance of the posterior credible intervals obtained with each of these priors. To evaluate this performance, we consider statistics such as the mean coverage and the mean length of the posterior credible intervals. We also consider more informative priors, such as uniform distributions defined on a subinterval of [0, 1]. The performance of the priors is evaluated using data simulated from situations where the research interest focuses on the estimation of a single proportion or of the difference between two proportions.
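A minimal sketch of the kind of frequentist evaluation described above (the settings and priors are assumed for illustration; this is not the thesis's code): coverage and mean length of equal-tailed 95% credible intervals for a small proportion under the uniform Beta(1, 1) prior and the Jeffreys Beta(0.5, 0.5) prior.

```python
# Illustrative sketch: frequentist coverage and mean length of equal-tailed 95%
# posterior credible intervals for a small binomial proportion under two priors.
import numpy as np
from scipy.stats import beta

def coverage_and_length(p_true, n, a, b, n_sim=20_000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.binomial(n, p_true, size=n_sim)
    lo = beta.ppf((1 - level) / 2, a + x, b + n - x)       # posterior is Beta(a + x, b + n - x)
    hi = beta.ppf(1 - (1 - level) / 2, a + x, b + n - x)
    covered = (lo <= p_true) & (p_true <= hi)
    return covered.mean(), (hi - lo).mean()

for name, (a, b) in {"uniform": (1.0, 1.0), "Jeffreys": (0.5, 0.5)}.items():
    cov, length = coverage_and_length(p_true=0.01, n=50, a=a, b=b)
    print(f"{name:8s}  coverage={cov:.3f}  mean length={length:.3f}")
```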
Leal Alturo, Olivia Lizeth. "Nonnested hypothesis testing inference in regression models for rates and proportions". Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/24573.
There are several different regression models that can be used for rates, proportions and other continuous responses that assume values in the standard unit interval, (0,1). When only one class of models is considered, model selection can be based on standard hypothesis testing inference. In this dissertation, we develop tests that can be used when the practitioner has at his/her disposal more than one plausible model, the competing models are nonnested and possibly belong to different classes of regression models. The competing models can differ in the regressors they use, in the link functions and even in the response distribution. The finite-sample performances of the proposed tests are numerically evaluated: we study both the null and nonnull behavior of the tests using Monte Carlo simulations, and we also assess a model selection procedure. The results show that the tests can be quite useful for selecting the best regression model when the response assumes values in the standard unit interval.
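By way of illustration only (the J-type device below is a classical approach to nonnested comparisons and is not necessarily one of the specific tests developed in this dissertation; the notation is assumed), the basic idea is to augment one model's linear predictor with the rival model's fitted mean and test whether the added term is significant:

```latex
% Illustrative J-type comparison: under H_0 (model 1 is correct), the rival model's
% fitted mean \hat{\mu}_i^{(2)} adds no explanatory power, so one tests \delta = 0 in
g(\mu_i) = x_i^{\top}\beta + \delta\,\hat{\mu}_i^{(2)} .
```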
Ainsworth, Holly Fiona. "Bayesian inference for stochastic kinetic models using data on proportions of cell death". Thesis, University of Newcastle upon Tyne, 2014. http://hdl.handle.net/10443/2499.
Xiao, Yongling. "Bootstrap-based inference for Cox's proportional hazards analyses of clustered censored survival data". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98523.
Texto completoMethods. Both bootstrap-based approaches involve 2 stages of resampling the original data. The two methods share the same procedure at the first stage but employ different procedures at the second stage. At the first stage of both methods, the clusters (e.g. physicians) are resampled with replacement. At the second stage, one method resamples individual patients with replacement for each physician (i.e. units within-cluster) selected at the 1st stage, while another method picks up all the patients for each selected physician, without resampling. For both methods, each of the resulting bootstrap samples is then independently analyzed with standard Cox's PH model, and the standard errors (SE) of the regression parameters are estimated as the empirical standard deviation, of the corresponding estimates. Finally, 95% confidence intervals (CI) for the estimates are estimated using bootstrap-based SE and assuming normality.
Simulation design. I simulated a hypothetical study with N patients clustered within the practices of M physicians. Individual patients' times to events were generated from the exponential distribution with hazard conditional on (i) several patient-level variables, (ii) several cluster-level (physician's) variables, and (iii) the physician's "random effects". Random right censoring was applied. Simulated data were analyzed using four approaches: the two proposed bootstrap methods, the standard Cox PH model, and the "classic" one-step bootstrap with direct resampling of the patients.
Results. The standard Cox model and the "classic" one-step bootstrap underestimated the variance of the regression coefficients, leading to serious inflation of type I error rates and coverage rates of the 95% CI as low as 60-70%. In contrast, the proposed approach that resamples both physicians and patients within physicians, with 100 bootstrap resamples, resulted in slightly conservative estimates of the standard errors, which yielded type I error rates between 2% and 6%, and coverage rates between 94% and 99%.
Conclusions. The proposed bootstrap approach offers an easy-to-implement method to account for the interdependence of times to events when drawing inference about Cox model regression parameters from right-censored clustered data.
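A minimal sketch of the two-stage cluster bootstrap described above (the column names, the lifelines package and the normal-approximation CI are assumptions for illustration, not the software actually used in the thesis):

```python
# Illustrative sketch of the two-stage cluster bootstrap for Cox PH described above.
# Column names ('physician', 'time', 'event', covariates) are assumptions for the example.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cluster_bootstrap_se(df, covariates, n_boot=100, resample_patients=True, seed=0):
    rng = np.random.default_rng(seed)
    clusters = df['physician'].unique()
    estimates = []
    for _ in range(n_boot):
        # Stage 1: resample clusters (physicians) with replacement.
        chosen = rng.choice(clusters, size=len(clusters), replace=True)
        parts = []
        for j, c in enumerate(chosen):
            block = df[df['physician'] == c]
            if resample_patients:
                # Stage 2: resample patients within each selected physician.
                block = block.sample(n=len(block), replace=True,
                                     random_state=int(rng.integers(1 << 31)))
            parts.append(block.assign(physician=j))  # relabel to keep repeated clusters distinct
        boot = pd.concat(parts, ignore_index=True)
        cph = CoxPHFitter().fit(boot[covariates + ['time', 'event']],
                                duration_col='time', event_col='event')
        estimates.append(cph.params_.values)
    estimates = np.array(estimates)
    return estimates.std(axis=0, ddof=1)  # bootstrap SEs; 95% CI = beta_hat +/- 1.96 * SE
```

Setting resample_patients=False gives the variant that keeps all patients of each selected physician.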
Silva, Ana Roberta dos Santos. "Modelos de regressão beta retangular heteroscedásticos aumentados em zeros e uns". [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306787.
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
Abstract: In this work we develop the zero-one augmented rectangular beta distribution, as well as a corresponding zero-one augmented rectangular beta regression model, to analyze limited-augmented data (represented by mixed random variables with bounded support) that present outliers. We develop inference tools under both the Bayesian and frequentist approaches. Regarding Bayesian inference, since the posterior distributions of interest cannot be obtained analytically, we use MCMC algorithms. Concerning frequentist estimation, we use the EM algorithm. We develop residual analysis techniques based on the randomized quantile residual, under both the frequentist and Bayesian approaches. We also develop influence measures, only under the Bayesian approach, using the Kullback-Leibler divergence. In addition, we adapt posterior predictive checking methods available in the literature to our model, using appropriate discrepancy measures. For model comparison, we use the criteria commonly employed in the literature, such as AIC, BIC and DIC. We performed several simulation studies, considering situations of practical interest, in order to compare the Bayesian and frequentist estimates and to evaluate the behavior of the developed diagnostic tools. A real psychometric data set was analyzed to illustrate the performance of the developed tools.
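For context, a zero-one augmented distribution combines point masses at zero and one with a continuous density on (0,1); this is the standard construction, although the exact parameterization used in the thesis may differ:

```latex
% Zero-one augmented density: point masses at 0 and 1 plus a continuous part on (0,1).
% Here f(y; \mu, \phi) stands for the (rectangular) beta density; the notation is illustrative.
g(y) =
\begin{cases}
p_0, & y = 0,\\
p_1, & y = 1,\\
(1 - p_0 - p_1)\, f(y; \mu, \phi), & y \in (0,1),
\end{cases}
\qquad p_0, p_1 \ge 0,\ p_0 + p_1 < 1.
```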
Master's
Statistics
Master in Statistics
Nourmohammadi, Mohammad. "Statistical inference with randomized nomination sampling". Elsevier B.V, 2014. http://hdl.handle.net/1993/30150.
Pavão, André Luis. "Modelos de duração aplicados à sobrevivência das empresas paulistas entre 2003 e 2007". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/12/12140/tde-24072013-154206/.
Texto completoThis thesis presents the main results that determined the bankruptcy of enterprises located in the São Paulo State from 2003 to 2007. The models used in this work were possible due to the partnership with SEBRAE, Small Business Service Supporting, located in the State of São Paulo. This institution provided the data basis for this research and its final version was compound by 662 enterprises and 33 variables, which were collected from a survey done by SEBRAE and the related enterprise. For first time available for research like this The research was supported by econometrics models, more precisely duration models, which identified the most important factors regarding enterprises survival. Two enterprise groups were distinguished: that one that will survive and grow and another will fail. In this work, three models were used: parametric, non-parametric and proportional risk with all of them presenting similar results. The proportional risk approach was applied for economic sectors and enterprises size. For the micro size business, the entrepreneurship\'s age and the resources applied on the employee\'s qualification were important to reduce the risk to fail in the time, whereas for small enterprises, variables like innovation and business plan building were the most important variables. For the commerce and service sectors, the enterprises related to the first one, the enterprises which kept attention on financial results (cash flow) presented lower risk to fail. For service sector, variables such as: entrepreneur\'s age, investment on the employee\'s qualification and enterprise\'s size were the most important variables to explain the difference the risk to fail between the enterprises. Another result presented was the risk to fail, which indicates the likelihood of an enterprise to leave its business activity. In this case, the parametric model using Weibull distribution concluded that the risk grows in the first five years. However, this result must be carefully evaluated since it would be necessary a longer term data to ensure this result.
Fossaluza, Victor. "Testes de hipóteses em eleições majoritárias". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-03092008-151708/.
The problem of inference about a proportion, widely explored in the statistical literature, plays a key role in the development of several theories of statistical inference and is invariably an object of investigation and discussion in comparative studies among different schools of inference. In addition, the estimation of proportions, as well as hypothesis testing for proportions, is very important in many areas of knowledge, as it constitutes a simple and universal quantitative method. In this work, a comparative study between the classical and Bayesian approaches is developed for the problem of testing whether or not a second round will occur in a typical two-round majoritarian (absolute majority) election in Brazil.
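As a toy illustration of the kind of comparison involved (the poll counts and the uniform prior below are assumptions for the example, not data or methods from the dissertation), one can contrast the posterior probability that the leading candidate's support exceeds one half with a classical one-sided test:

```python
# Toy example: does the leading candidate avoid a second round (theta > 0.5)?
# The poll counts below are made up for illustration.
from scipy.stats import beta, binomtest

x, n = 1040, 2000                      # votes for the leader in a hypothetical poll
post = beta(1 + x, 1 + n - x)          # posterior under a uniform Beta(1, 1) prior
print("P(theta > 0.5 | data) =", 1 - post.cdf(0.5))           # Bayesian answer
print("p-value, H0: theta <= 0.5:",
      binomtest(x, n, p=0.5, alternative='greater').pvalue)   # classical answer
```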
Galvis Soto, Diana Milena. "Bayesian analysis of regression models for proportional data in the presence of zeros and ones = Análise bayesiana de modelos de regressão para dados de proporções na presença de zeros e uns". [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306682.
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
Abstract: Continuous data in the unit interval (0,1) generally represent proportions, rates or indices. However, values of zero and/or one can be observed, representing absence or total presence of a characteristic of interest. In that case, regression models that analyze the effect of covariates, such as the beta, rectangular beta or simplex models, are not appropriate. In order to deal with this type of situation, an alternative is to add the zero and/or one values to the support of these models. In this thesis, based on these models, we propose mixed regression models for proportional data augmented by zeros and ones, which allow analyzing the effect of covariates on the probabilities of observing absence or total presence of the characteristic of interest, while also handling correlated responses. Parameter estimation can proceed via maximum likelihood or through MCMC algorithms. We follow the Bayesian approach, which presents some advantages over classical inference: it does not rely on asymptotic theory, it allows estimating the parameters even with small sample sizes, and the implementation is straightforward using software such as openBUGS or winBUGS. Based on the marginal likelihood, it is possible to compute model selection criteria as well as q-divergence measures used to detect outlying observations.
Doctorate
Statistics
Doctor in Statistics
Martin, Victorin. "Modélisation probabiliste et inférence par l'algorithme Belief Propagation". PhD thesis, École Nationale Supérieure des Mines de Paris, 2013. http://tel.archives-ouvertes.fr/tel-00867693.
Shieh, Meng-Shiou. "Correction methods, approximate biases, and inference for misclassified data". 2009. https://scholarworks.umass.edu/dissertations/AAI3359160.
Texto completoHSIAO, CHENG-HUAN y 蕭承桓. "Bayesian Inference of the Proportion of Sensitivity Attributes for Different Groups by using the Randomized Response Technique for Unrelated-Questions". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/97dr5q.
Feng Chia University
Master's Program in Statistics and Actuarial Science, Department of Statistics
106
In general, most people are reluctant to answer sensitive questions. In this situation, the information collected by the investigator is likely to be untruthful and to mislead the findings of the research. Therefore, the randomized response technique (RRT) should be used, since it protects the respondents' privacy and increases the likelihood of collecting honest answers. In this study, we employ the unrelated-question randomized response (RR) design of Greenberg et al. (1969) combined with the RR technique idea developed by Liu et al. (2016). In Liu et al. (2016), the conditional posterior distribution of the proportion of the sensitive attribute used only part of the latent variable information, which can yield inefficient estimates of the proportions. Thus, we improve the conditional posterior distribution and then extend the method to different groups. Gibbs sampling is used throughout. A simulation study is conducted to investigate the performance of the proposed methods under three different prior distributions. Furthermore, real data on social change in Taiwan, collected in 2012 by the Central Research Institute's Human and Social Research Center, are used to estimate the proportions of affairs across age groups and places of residence.
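As a rough, single-group sketch of a data-augmented Gibbs sampler for this design (the notation, priors and the assumption that the unrelated-question prevalence is known are all illustrative choices, not the thesis's actual algorithm):

```python
# Minimal sketch: Gibbs sampler for the prevalence of a sensitive attribute under the
# Greenberg et al. (1969) unrelated-question design. Notation and priors are assumptions.
import numpy as np

def gibbs_unrelated_rrt(y, p, pi_u, a=1.0, b=1.0, n_iter=5000, seed=0):
    """y: 0/1 answers; p: prob. the device assigns the sensitive question;
    pi_u: known prevalence of 'yes' for the unrelated question; Beta(a, b) prior on pi."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    pi = 0.5
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # Step 1: latent indicator s_i = 1 if respondent i answered the sensitive question.
        prob_s1 = np.where(y == 1,
                           p * pi / (p * pi + (1 - p) * pi_u),
                           p * (1 - pi) / (p * (1 - pi) + (1 - p) * (1 - pi_u)))
        s = rng.random(len(y)) < prob_s1
        # Step 2: among those who answered the sensitive question, y_i is the true status.
        k1 = np.sum(y[s])          # sensitive-question respondents who said "yes"
        k0 = np.sum(s) - k1        # sensitive-question respondents who said "no"
        pi = rng.beta(a + k1, b + k0)
        draws[t] = pi
    return draws

# Toy usage: simulate answers with true pi = 0.15, p = 0.7, pi_u = 0.5.
rng = np.random.default_rng(1)
n, p, pi_u, pi_true = 1000, 0.7, 0.5, 0.15
gets_sensitive = rng.random(n) < p
truth = rng.random(n) < pi_true
unrelated = rng.random(n) < pi_u
y = np.where(gets_sensitive, truth, unrelated).astype(int)
print(gibbs_unrelated_rrt(y, p, pi_u)[1000:].mean())  # posterior mean after burn-in
```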
Hsieh, Chia-En (謝佳恩). "Bayesian Inference of the Proportion of Sensitive Attributes for Different Groups by using the Randomized Response Technique for Digitizing-Questions". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yn769f.
Feng Chia University
Master's Program in Statistics and Actuarial Science, Department of Statistics
107
In general, when questionnaires are used to collect data on sensitive issues, most respondents tend to become hostile to the interviewers and are more likely to provide untruthful responses or to refuse to respond. This makes data collection difficult and the analysis results meaningless. Fortunately, the use of a randomized response question can protect the privacy of the respondent and further motivate the respondent to provide truthful responses. In this regard, Xiao (2018) used the unrelated-question randomized response technique (RRT) of Greenberg et al. (1969) to obtain the conditional posterior distribution under the full latent variable information, and used Gibbs sampling to estimate the proportion of the sensitive attribute within different exogenous groups. Hsieh and Perri (2019) gave the Bayesian inference of the proportion of a sensitive attribute by combining Gibbs sampling with the randomized response technique proposed by Christofides (2003). In this thesis, we extend the Bayesian inference of Xiao (2018) to the RRT proposed by Christofides (2003) and derive the Bayesian inference of the proportion of the sensitive attribute within different exogenous groups. Moreover, we use the Bayesian method combined with Gibbs sampling to provide different models for the sensitive attribute under different exogenous groups. Simulation studies are conducted to evaluate the performance of the proposed method. In addition, we use the RRT of Christofides (2003) to collect data from the Sexual EQ questionnaire administered by the Student Affairs Office of Feng Chia University during the second semester of the 2016 academic year. The respondents' basic information and online dating experience were used as external variables to obtain estimates of the proportion of online one-night stands under different group combinations.
Jalaluddin, Muhammad. "Robust inference for the Cox's proportional hazards model with frailties". 1999. http://www.library.wisc.edu/databases/connect/dissertations.html.
Texto completoMartínez, Vargas Danae Mirel. "Régression de Cox avec partitions latentes issues du modèle de Potts". Thèse, 2019. http://hdl.handle.net/1866/22552.