Selection of scholarly literature on the topic "Proportion inference"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Proportion inference".

Next to every work in the list there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read its abstract online, whenever the relevant metadata are available.

Journal articles on the topic "Proportion inference"

1. Xu, XingZhong, and Fang Liu. "Statistical inference on mixing proportion." Science in China Series A: Mathematics 51, no. 9 (June 13, 2008): 1593–608. http://dx.doi.org/10.1007/s11425-008-0016-0.
2. Subbiah, M., and V. Rajeswaran. "proportion: A comprehensive R package for inference on single Binomial proportion and Bayesian computations." SoftwareX 6 (2017): 36–41. http://dx.doi.org/10.1016/j.softx.2017.01.001.
3. Mielke, Paul W., and Kenneth J. Berry. "Nonasymptotic Inferences Based on Cochran's Q Test." Perceptual and Motor Skills 81, no. 1 (August 1995): 319–22. http://dx.doi.org/10.2466/pms.1995.81.1.319.

Abstract:
A nonasymptotic inference procedure for Cochran's Q test for the equality of matched proportions is presented. The nonasymptotic method provides improvement over the asymptotic method when there is a small number of subjects and/or a relatively small proportion of successes for subjects.
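
For readers who want to compute the statistic themselves, here is a minimal Python sketch of the asymptotic version of Cochran's Q (the paper's nonasymptotic procedure is not reproduced here); the example data are invented for illustration:

import numpy as np
from scipy.stats import chi2

def cochrans_q(x):
    # x: (n subjects) x (k matched treatments) matrix of 0/1 outcomes
    x = np.asarray(x)
    n, k = x.shape
    col = x.sum(axis=0)                    # successes per treatment
    row = x.sum(axis=1)                    # successes per subject
    total = x.sum()
    q = k * (k - 1) * np.sum((col - total / k) ** 2) / (k * total - np.sum(row ** 2))
    return q, chi2.sf(q, df=k - 1)         # asymptotic chi-square p-value

data = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1],
                 [0, 0, 0], [1, 1, 0], [1, 0, 1]])
print(cochrans_q(data))                    # Q = 3.5 on 2 df
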
4. Garthwaite, Paul H., and John R. Crawford. "Inference for a binomial proportion in the presence of ties." Journal of Applied Statistics 38, no. 9 (September 2011): 1915–34. http://dx.doi.org/10.1080/02664763.2010.537649.
5. Mohamed, Nuri Eltabit. "Approximate confidence intervals for the Population Proportion based on linear model." Al-Mukhtar Journal of Basic Sciences 21, no. 2 (May 5, 2024): 58–62. http://dx.doi.org/10.54172/rtn9cg93.

Abstract:
When the linear model errors are non-normal, one may still wish to make inferences about a proportion. The goal of this article is to construct approximate confidence intervals for the proportion under the assumed linear model, such that the intervals cover the true proportion at a rate close to the specified nominal significance level.
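
The article's linear-model-based intervals are not reproducible from the abstract alone; as a point of comparison, a minimal sketch of two standard approximate intervals for a binomial proportion (Wald and Wilson) in Python:

from statistics import NormalDist

def wald_ci(x, n, conf=0.95):
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    half = z * (p * (1 - p) / n) ** 0.5
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, conf=0.95):
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    centre = (p + z ** 2 / (2 * n)) / (1 + z ** 2 / n)
    half = (z / (1 + z ** 2 / n)) * (p * (1 - p) / n + z ** 2 / (4 * n ** 2)) ** 0.5
    return centre - half, centre + half

print(wald_ci(7, 50))    # simple but can misbehave for small x or n
print(wilson_ci(7, 50))  # usually closer to nominal coverage
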
6. Frey, Jesse. "Bayesian Inference on a Proportion Believed to be a Simple Fraction." American Statistician 61, no. 3 (August 2007): 201–6. http://dx.doi.org/10.1198/000313007x222866.
7. Nandram, Balgobin, Dilli Bhatta, Dhiman Bhadra, and Gang Shen. "Bayesian predictive inference of a finite population proportion under selection bias." Statistical Methodology 11 (March 2013): 1–21. http://dx.doi.org/10.1016/j.stamet.2012.08.003.
8. Khan, K. Daniel, and J. A. A. Vargas-Guzmán. "Facies Proportions From the Inference of Nonlinear Conditional Beta-Field Parameters." SPE Journal 18, no. 06 (May 6, 2013): 1033–42. http://dx.doi.org/10.2118/163147-pa.

Abstract:
Conditional beta distributions are proposed with examples to evaluate the probability of intercepting specific proportions of target rocks in well planning. Geological facies or rock-type proportions are random variables pk(x) at each location, x. This paper recalls and further demonstrates that facies proportions can be modeled by local beta distributions. However, the highly variable shapes of the conditional probability-density functions (PDFs) for the random variables in the field lead to complex nonstationarity and nonlinearity issues. A practical and robust approach is to transform the proportion random variables to Gaussian variables, thus enabling the use of classical geostatistics. Although a direct relationship between Gaussian and beta random variables appears intractable, a suitable transformation that involves second-order expectations of proportions is proposed. The conditional parameters of the beta variables are recovered from kriging estimates after back transformation to proportions through Riemann sums.
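
The paper's kriging workflow cannot be reconstructed from the abstract, but the key step of recovering local beta parameters from second-order moments can be illustrated with a method-of-moments sketch (the numbers below are invented):

def beta_params_from_moments(mean, var):
    # Beta(a, b) matching a proportion's mean and variance;
    # requires 0 < mean < 1 and var < mean * (1 - mean)
    if not 0 < mean < 1 or var <= 0 or var >= mean * (1 - mean):
        raise ValueError("moments incompatible with a beta distribution")
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

# e.g. a kriged facies proportion with mean 0.3 and variance 0.01
print(beta_params_from_moments(0.3, 0.01))   # (6.0, 14.0)
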
9. Hilbig, Benjamin E., and Rüdiger F. Pohl. "Recognizing Users of the Recognition Heuristic." Experimental Psychology 55, no. 6 (January 2008): 394–401. http://dx.doi.org/10.1027/1618-3169.55.6.394.

Abstract:
The recognition heuristic is hypothesized to be a frugal inference strategy assuming that inferences are based on the recognition cue alone. This assumption, however, has been questioned by existing research. At the same time most studies rely on the proportion of choices consistent with the heuristic as a measure of its use which may not be fully appropriate. In this study, we propose an index to identify true users of the heuristic contrasting them to decision makers who incorporate further knowledge beyond recognition. The properties and the applicability of the proposed index are investigated in the reanalyses of four published experiments and corroborated by a new study drawn up to rectify the shortcomings of the reanalyzed experiments. Applying the proposed index to explore the influence of knowledge we found that participants who were more knowledgeable made use of the information available to them and achieved the highest proportion of correct inferences.
10. Rahardja, Dewi, and Yan D. Zhao. "Bayesian inference of a binomial proportion using one-sample misclassified binary data." Model Assisted Statistics and Applications 7, no. 1 (February 16, 2012): 17–22. http://dx.doi.org/10.3233/mas-2011-0197.

Dissertations on the topic "Proportion inference"

1. Kim, Hyun Seok (John). "Diagnosing examinees' attributes-mastery using the Bayesian inference for binomial proportion: a new method for cognitive diagnostic assessment." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41144.

Abstract:
The purpose of this study was to propose a simple and effective method for cognitive diagnostic assessment (CDA) without heavy computational demand, using Bayesian inference for a binomial proportion (BIBP). In real data studies, BIBP was applied to test data using two different item designs: four and ten attributes. The BIBP method was also compared with DINA and LCDM on the diagnosis results for the same four-attribute data set. There were slight differences in the attribute-mastery probability estimates among the three models (DINA, LCDM, BIBP), which could result in different attribute-mastery patterns. In simulation studies, the general accuracy of the BIBP method in recovering the true parameters was relatively high. The DINA estimation showed a slightly higher overall correct classification rate but larger overall biases and estimation errors than the BIBP estimation. The three simulation variables (attribute correlation, attribute difficulty, and sample size) affected the parameter estimation of both models, but differently: harder attributes yielded higher accuracy of attribute-mastery classification in the BIBP estimation, while easier attributes were associated with higher accuracy in the DINA estimation. In conclusion, BIBP appears to be an effective method for CDA, with the advantages of easy and fast computation and relatively high accuracy of parameter estimation.
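
The conjugate beta-binomial updating at the core of BIBP can be sketched in a few lines; the prior parameters and item counts below are illustrative assumptions, not values from the dissertation:

from scipy.stats import beta

def mastery_posterior(correct, attempted, a=1.0, b=1.0):
    # Beta(a, b) prior on the mastery proportion; the posterior is
    # Beta(a + y, b + n - y) after y correct answers out of n items
    a_post, b_post = a + correct, b + attempted - correct
    mean = a_post / (a_post + b_post)
    lo, hi = beta.ppf([0.025, 0.975], a_post, b_post)
    return mean, (lo, hi)

# An examinee answers 4 of 5 items measuring an attribute correctly
print(mastery_posterior(4, 5))
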
2. Li, Qiuju. "Statistical inference for joint modelling of longitudinal and survival data." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/statistical-inference-for-joint-modelling-of-longitudinal-and-survival-data(65e644f3-d26f-47c0-bbe1-a51d01ddc1b9).html.

Abstract:
In longitudinal studies, data collected within a subject or cluster are inherently correlated, and special care is needed to account for this correlation in the analysis. Within this framework, three topics are discussed in this thesis. In chapter 2, the joint modelling of a multivariate longitudinal process consisting of different types of outcomes is discussed. In the large cohort study of the UK North Staffordshire osteoarthritis project, longitudinal trivariate outcomes of continuous, binary, and ordinal data are observed at baseline, year 3, and year 6. Instead of analysing each process separately, a joint model is proposed for the trivariate outcomes to account for their inherent association by introducing random effects and a covariance matrix G. The influence of the covariance matrix G on statistical inference for the fixed-effects parameters is investigated within the Bayesian framework. The study shows that jointly modelling the multivariate longitudinal process reduces bias and provides more reliable results than modelling each process separately. Alongside longitudinal measurements taken intermittently, a counting process of events in time is often observed during a longitudinal study. It is of interest to investigate the relationship between time to event and the longitudinal process; on the other hand, measurements of the longitudinal process may be truncated by terminating events, such as death. It may therefore be crucial to jointly model the survival and longitudinal data. A popular approach is to specify a linear mixed-effects model for the longitudinal process of continuous outcomes and a Cox regression model for the survival data to characterize their relationship, under some standard assumptions. In chapter 3, we investigate the influence on statistical inference for the survival data when the assumption of mutual independence of the random errors in the linear mixed-effects model is violated. The study uses the conditional score estimation approach, which provides robust estimators and is computationally advantageous. A generalised sufficient statistic for the random effects is proposed to account for the correlation remaining among the random errors, which is characterized by the data-driven method of modified Cholesky decomposition. The simulation study shows that this yields nearly unbiased estimation and efficient statistical inference. Chapter 4 seeks to incorporate both the current and past information of the longitudinal process into the survival component of the joint model. In the last 15 to 20 years, it has been popular, even standard, to assume that the longitudinal process affects the counting process of events only through its current value, which, as recognised in more recent studies, need not always hold. An integral over the trajectory of the longitudinal process, together with a weighting curve, is proposed to account for both current and past information, improving inference and reducing the underestimation of the effects of the longitudinal process on the hazard. A plausible approach to statistical inference for the proposed models is presented, along with a real data analysis and a simulation study.
3. Zhao, Shuhong. "Statistical Inference on Binomial Proportions." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1115834351.
4. Simonnet, Titouan. "Apprentissage et réseaux de neurones en tomographie par diffraction de rayons X. Application à l'identification minéralogique." Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1033.

Abstract:
Understanding the chemical and mechanical behavior of compacted materials (e.g. soil, subsoil, engineered materials) requires a quantitative description of the material's structure, and in particular of the nature of the various mineralogical phases and their spatial relationships. Natural materials, however, are composed of numerous small-sized minerals, frequently mixed on a small scale. Recent advances in synchrotron-based X-ray diffraction tomography (to be distinguished from phase contrast tomography) now make it possible to obtain tomographic volumes with nanometer-sized voxels, with an XRD pattern for each of these voxels (where phase contrast only gives a gray level). On the other hand, the sheer volume of data (typically on the order of 100,000 XRD patterns per sample slice), combined with the large number of phases present, makes quantitative processing virtually impossible without appropriate numerical codes. This thesis aims to fill this gap, using neural network approaches to identify and quantify minerals in a material. Training such models requires the construction of large-scale learning bases, which cannot be made up of experimental data alone. Algorithms capable of synthesizing XRD patterns to generate these bases have therefore been developed. The originality of this work also concerned the inference of proportions using neural networks. To meet this new and complex task, adapted loss functions were designed. The potential of neural networks was tested on data of increasing complexity: (i) XRD patterns calculated from crystallographic information, (ii) experimental powder XRD patterns measured in the laboratory, and (iii) data obtained by X-ray tomography. Different neural network architectures were also tested. While a convolutional neural network seemed to provide interesting results, the particular structure of the diffraction signal (which is not translation invariant) led to the use of models such as Transformers. The approach adopted in this thesis has demonstrated its ability to quantify mineral phases in a solid. For more complex data, notably tomography, avenues for improvement have been proposed.
5. Liu, Guoyuan. "Comparison of prior distributions for Bayesian inference for small proportions." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96917.

Abstract:
Often, Bayesian analyses for epidemiological applications use objective prior distributions. These prior distributions are chosen with the goal of allowing the posterior distribution to be determined by the observed data alone. While this is achieved in most situations, it is not the case for Bayesian estimation of a small proportion. Such a situation might arise, for example, when estimating the prevalence of a rare disease. Several candidate objective prior distributions have been proposed for a binomial proportion, including the uniform distribution and the Jeffreys distribution. Each of these prior distributions may lead to very different posterior inferences when the number of events in the binomial experiment is small, but it is unclear which of these would lead to better estimates on average. We explore this question by examining the frequentist performance of the posterior credible interval in two problems: (i) estimating a single proportion, and (ii) estimating the difference between two proportions. The credible intervals obtained when using standard objective prior distributions, as well as informative prior distributions motivated by real-life examples, are compared. To assess frequentist performance, numerous statistics were considered, including the average coverage and average length of the posterior credible intervals.
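
The thesis's simulation design can be mimicked in miniature; the sketch below (values chosen arbitrarily) estimates the frequentist coverage of the equal-tailed 95% credible interval for a small true proportion under the uniform and Jeffreys priors:

import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

def coverage(p_true, n, a, b, reps=20000, level=0.95):
    # posterior under Beta(a, b) prior is Beta(a + y, b + n - y)
    y = rng.binomial(n, p_true, size=reps)
    lo = beta.ppf((1 - level) / 2, a + y, b + n - y)
    hi = beta.ppf(1 - (1 - level) / 2, a + y, b + n - y)
    return np.mean((lo <= p_true) & (p_true <= hi))

for name, (a, b) in {"uniform": (1, 1), "Jeffreys": (0.5, 0.5)}.items():
    print(name, coverage(p_true=0.02, n=100, a=a, b=b))
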
6. Leal Alturo, Olivia Lizeth. "Nonnested hypothesis testing inference in regression models for rates and proportions." Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/24573.

Abstract:
There are several different regression models that can be used with rates, proportions, and other continuous responses that assume values in the standard unit interval, (0,1). When only one class of models is considered, model selection can be based on standard hypothesis testing inference. In this dissertation, we develop tests that can be used when the practitioner has at his or her disposal more than one plausible model, the competing models are nonnested, and they possibly belong to different classes of models. The competing models can differ in the regressors they use, in the link functions, and even in the response distribution. The finite-sample performances of the proposed tests are numerically evaluated. We evaluate both the null and nonnull behavior of the tests using Monte Carlo simulations. The results show that the tests can be quite useful for selecting the best regression model when the response assumes values in the standard unit interval.
7. Ainsworth, Holly Fiona. "Bayesian inference for stochastic kinetic models using data on proportions of cell death." Thesis, University of Newcastle upon Tyne, 2014. http://hdl.handle.net/10443/2499.

Abstract:
The PolyQ model is a large stochastic kinetic model that describes protein aggregation within human cells as they undergo ageing. The presence of protein aggregates in cells is a known feature in many age-related diseases, such as Huntington's. Experimental data are available consisting of the proportions of cell death over time. This thesis is motivated by the need to make inference for the rate parameters of the PolyQ model. Ideally observations would be obtained on all chemical species, observed continuously in time. More realistically, it would be hoped that partial observations were available on the chemical species observed discretely in time. However, current experimental techniques only allow noisy observations on the proportions of cell death at a few discrete time points. This presents an ambitious inference problem. The model has a large state space and it is not possible to evaluate the data likelihood analytically. However, realisations from the model can be obtained using a stochastic simulator such as the Gillespie algorithm. The time evolution of a cell can be repeatedly simulated, giving an estimate of the proportion of cell death. Various MCMC schemes can be constructed targeting the posterior distribution of the rate parameters. Although evaluating the marginal likelihood is challenging, a pseudo-marginal approach can be used to replace the marginal likelihood with an easy to construct unbiased estimate. Another alternative which allows for the sampling error in the simulated proportions is also considered. Unfortunately, in practice, simulation from the model is too slow to be used in an MCMC inference scheme. A fast Gaussian process emulator is used to approximate the simulator. This emulator produces fully probabilistic predictions of the simulator output and can be embedded into inference schemes for the rate parameters. The methods developed are illustrated in two smaller models; the birth-death model and a medium sized model of mitochondrial DNA. Finally, inference on the large PolyQ model is considered.
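
The PolyQ model itself is far too large to sketch, but the pseudo-marginal idea the abstract describes can be shown on a toy problem: a single rate parameter theta, data y deaths out of n cells, and a log-likelihood estimated by forward simulation and reused between iterations. Everything below, including the stand-in simulator, is a simplifying assumption, not the thesis's model:

import numpy as np

rng = np.random.default_rng(0)

def simulate_death_prob(theta, m=200):
    # stand-in stochastic simulator (NOT the PolyQ model): Monte Carlo
    # estimate of the proportion of cells dead by time 1 under rate theta
    return np.mean(rng.exponential(1.0 / theta, size=m) < 1.0)

def loglik_hat(theta, y, n):
    p_hat = np.clip(simulate_death_prob(theta), 1e-6, 1 - 1e-6)
    return y * np.log(p_hat) + (n - y) * np.log(1 - p_hat)

y, n = 37, 100
theta, ll = 1.0, loglik_hat(1.0, y, n)
draws = []
for _ in range(5000):
    prop = abs(theta + 0.1 * rng.normal())      # reflected random walk
    ll_prop = loglik_hat(prop, y, n)            # fresh noisy estimate
    if np.log(rng.uniform()) < ll_prop - ll:    # flat prior on theta
        theta, ll = prop, ll_prop               # keep the noisy value:
    draws.append(theta)                         # the pseudo-marginal trick
print(np.mean(draws[1000:]))                    # near 0.46 for these data
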
8. Xiao, Yongling. "Bootstrap-based inference for Cox's proportional hazards analyses of clustered censored survival data." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98523.

Abstract:
Background. Clustering of observations occurs frequently in epidemiological and clinical studies of time-to-event outcomes. However, only a few papers addressed the challenge of accounting for clustering while analyzing right-censored survival data. I propose two bootstrap-based approaches to correct standard errors of Cox's proportional hazards (PH) model estimates for clustering, and validate the approaches in simulations.
Methods. Both bootstrap-based approaches involve two stages of resampling the original data. The two methods share the same procedure at the first stage but employ different procedures at the second stage. At the first stage of both methods, the clusters (e.g., physicians) are resampled with replacement. At the second stage, one method resamples individual patients with replacement within each physician (i.e., units within cluster) selected at the first stage, while the other method takes all the patients of each selected physician, without resampling. For both methods, each of the resulting bootstrap samples is then independently analyzed with the standard Cox PH model, and the standard errors (SE) of the regression parameters are estimated as the empirical standard deviation of the corresponding estimates. Finally, 95% confidence intervals (CI) for the estimates are computed using the bootstrap-based SE and assuming normality.
Simulation design. I simulated a hypothetical study with N patients clustered within the practices of M physicians. Individual patients' times to events were generated from the exponential distribution with hazard conditional on (i) several patient-level variables, (ii) several cluster-level (physician) variables, and (iii) physician "random effects". Random right censoring was applied. Simulated data were analyzed using four approaches: the two proposed bootstrap methods, the standard Cox PH model, and the "classic" one-step bootstrap with direct resampling of patients.
Results. The standard Cox model and the "classic" one-step bootstrap underestimated the variance of the regression coefficients, leading to serious inflation of type I error rates and coverage rates of the 95% CI as low as 60-70%. In contrast, the proposed approach that resamples both physicians and patients within physicians, with 100 bootstrap resamples, resulted in slightly conservative estimates of standard errors, which yielded type I error rates between 2% and 6% and coverage rates between 94% and 99%.
Conclusions. The proposed bootstrap approach offers an easy-to-implement method to account for interdependence of times-to-events in the inference about Cox model regression parameters in the context of analyses of right-censored clustered data.
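
A sketch of the two-stage resampling scheme follows, with a plain mean standing in for the Cox model fit (a real analysis would refit Cox's PH model on every bootstrap sample and collect its coefficients):

import numpy as np

rng = np.random.default_rng(42)

def two_stage_bootstrap_se(clusters, estimator, b=100, resample_within=True):
    # clusters: list of per-physician data arrays;
    # estimator: function mapping a pooled sample to a scalar estimate
    stats = []
    for _ in range(b):
        picked = rng.integers(len(clusters), size=len(clusters))
        sample = []
        for i in picked:                       # stage 1: resample physicians
            c = clusters[i]
            if resample_within:                # stage 2: resample patients
                c = c[rng.integers(len(c), size=len(c))]
            sample.append(c)
        stats.append(estimator(np.concatenate(sample)))
    return np.std(stats, ddof=1)               # empirical SD = bootstrap SE

toy = [rng.normal(loc=mu, size=20) for mu in (0.0, 0.5, 1.0, 1.5)]
print(two_stage_bootstrap_se(toy, np.mean))
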
9. Silva, Ana Roberta dos Santos. "Modelos de regressão beta retangular heteroscedásticos aumentados em zeros e uns." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306787.

Advisor: Caio Lucidius Naberezny Azevedo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
Abstract:
In this work we develop the zero-one augmented rectangular beta distribution, as well as a corresponding zero-one augmented rectangular beta regression model, to analyze limited-augmented data (represented by mixed random variables with bounded support) that present outliers. We develop inference tools under both the Bayesian and frequentist approaches. Regarding Bayesian inference, since the posterior distributions of interest cannot be obtained analytically, we use MCMC algorithms. For frequentist estimation, we use the EM algorithm. We develop residual analysis techniques based on randomized quantile residuals, under both the frequentist and Bayesian approaches. We also develop influence measures, under the Bayesian approach only, using the Kullback-Leibler divergence. In addition, we adapt posterior predictive checking methods available in the literature to our model, using appropriate discrepancy measures. For model selection, we use criteria commonly employed in the literature, such as AIC, BIC, and DIC. We performed several simulation studies, considering situations of practical interest, in order to compare the Bayesian and frequentist estimates and to evaluate the behavior of the diagnostic tools developed. A real psychometric data set was analyzed to illustrate the performance of the developed tools.
10. Nourmohammadi, Mohammad. "Statistical inference with randomized nomination sampling." Elsevier B.V, 2014. http://hdl.handle.net/1993/30150.

Abstract:
In this dissertation, we develop several new inference procedures that are based on randomized nomination sampling (RNS). The first problem we consider is that of constructing distribution-free confidence intervals for quantiles for finite populations. The required algorithms for computing coverage probabilities of the proposed confidence intervals are presented. The second problem we address is that of constructing nonparametric confidence intervals for infinite populations. We describe the procedures for constructing confidence intervals and compare the constructed confidence intervals in the RNS setting, under both perfect and imperfect ranking scenarios, with their simple random sampling (SRS) counterparts. Recommendations for choosing the design parameters are made to achieve shorter confidence intervals than their SRS counterparts. The third problem we investigate is the construction of tolerance intervals using the RNS technique. We describe the procedures for constructing one- and two-sided RNS tolerance intervals and investigate the sample sizes required to achieve tolerance intervals that contain the desired proportions of the underlying population. We also investigate the efficiency of RNS-based tolerance intervals compared with their corresponding intervals based on SRS. A new method for estimating ranking error probabilities is proposed. The final problem we consider is that of parametric inference based on RNS. We introduce the different data types associated with the different situations that one might encounter under the RNS design, and provide the maximum likelihood (ML) and method of moments (MM) estimators of the parameters in two classes of distributions: proportional hazard rate (PHR) and proportional reverse hazard rate (PRHR) models.

Books on the topic "Proportion inference"

1. Consortium for Mathematics and Its Applications (U.S.), Chedd-Angier Production Company, American Statistical Association, and Annenberg Media, eds. Against all odds--inside statistics: Disc 3, programs 9-12. S. Burlington, VT: Annenberg Media, 2011.
2. Inference for proportions [videorecording]. Oakville, Ont.: Magic Lantern Communications Ltd, 1989.
3. Gill, Jeff, and Jonathan Homola. Issues in Polling Methodologies. Edited by Lonna Rae Atkeson and R. Michael Alvarez. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780190213299.013.11.

Abstract:
This chapter presents issues and complications in statistical inference and uncertainty assessment using public opinion and polling data. It emphasizes the methodologically appropriate treatment of polling results as binomial and multinomial outcomes, and highlights methodological issues with correctly specifying and explaining the margin of error. The chapter also examines the log-ratio transformation of compositional data such as proportions of candidate support as one possible approach for the difficult analysis of such information. The deeply flawed Null Hypothesis Significance Testing (NHST) is discussed, along with common inferential misinterpretations. The relevance of this discussion is illustrated using specific examples of errors from journalistic sources as well as from academic journals focused on measures of public opinion.
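
Two of the quantities discussed, the outcome-specific margin of error and the additive log-ratio transform of compositional poll shares, are simple enough to sketch (the poll numbers below are invented):

import math

def margin_of_error(p, n, z=1.96):
    # binomial margin of error for one reported share; note it depends on p,
    # so a single poll-wide figure is an approximation at best
    return z * math.sqrt(p * (1 - p) / n)

def additive_log_ratio(shares):
    # maps a composition to unconstrained coordinates, using the last
    # component as the reference
    ref = shares[-1]
    return [math.log(s / ref) for s in shares[:-1]]

support = [0.48, 0.45, 0.07]                 # three-candidate poll
print(margin_of_error(support[0], n=1000))   # ~0.031
print(additive_log_ratio(support))
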
4. Hankin, David, Michael S. Mohr, and Kenneth B. Newman. Sampling Theory. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198815792.001.0001.

Abstract:
We present a rigorous but understandable introduction to the field of sampling theory for ecologists and natural resource scientists. Sampling theory concerns itself with development of procedures for random selection of a subset of units, a sample, from a larger finite population, and with how to best use sample data to make scientifically and statistically sound inferences about the population as a whole. The inferences fall into two broad categories: (a) estimation of simple descriptive population parameters, such as means, totals, or proportions, for variables of interest, and (b) estimation of uncertainty associated with estimated parameter values. Although the targets of estimation are few and simple, estimates of means, totals, or proportions see important and often controversial uses in management of natural resources and in fundamental ecological research, but few ecologists or natural resource scientists have formal training in sampling theory. We emphasize the classical design-based approach to sampling in which variable values associated with units are regarded as fixed and uncertainty of estimation arises via various randomization strategies that may be used to select samples. In addition to covering standard topics such as simple random, systematic, cluster, unequal probability (stressing the generality of Horvitz–Thompson estimation), multi-stage, and multi-phase sampling, we also consider adaptive sampling, spatially balanced sampling, and sampling through time, three areas of special importance for ecologists and natural resource scientists. The text is directed to undergraduate seniors, graduate students, and practicing professionals. Problems emphasize application of the theory and R programming in ecological and natural resource settings.
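
As a taste of the book's design-based viewpoint, here is a minimal sketch of the Horvitz-Thompson estimator of a population total (the values and inclusion probabilities are invented for illustration):

def horvitz_thompson_total(y, pi):
    # each sampled value is weighted by its inverse inclusion probability
    return sum(yi / pii for yi, pii in zip(y, pi))

y = [12.0, 7.0, 30.0]          # observed values for three sampled units
pi = [0.10, 0.25, 0.50]        # their inclusion probabilities
print(horvitz_thompson_total(y, pi))   # 12/0.1 + 7/0.25 + 30/0.5 = 208.0
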
5. Golub, Jonathan. Survival Analysis. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0023.

Abstract:
This article provides a discussion of survival analysis that presents another way to incorporate temporal information into analysis in ways that give advantages similar to those from using time series. It describes the main choices researchers face when conducting survival analysis and offers a set of methodological steps that should become standard practice. After introducing the basic terminology, it shows that there is little to lose and much to gain by employing Cox models instead of parametric models. Cox models are superior to parametric models in three main respects: they provide more reliable treatment of the baseline hazard and superior handling of the proportional hazards assumption, and they are the best for handling tied data. Moreover, the illusory benefits of parametric models are presented. The greater use of Cox models enables researchers to elicit more useful information from their data, and allows for more reliable substantive inferences about important political processes.
6. McCleary, Richard, David McDowall, and Bradley J. Bartos. Statistical Conclusion Validity. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190661557.003.0006.

Abstract:
Chapter 6 addresses the sub-category of internal validity defined by Shadish et al. as statistical conclusion validity, or "validity of inferences about the correlation (covariance) between treatment and outcome." The common threats to statistical conclusion validity can arise, or become plausible, through either model misspecification or hypothesis testing. The risk of a serious model misspecification is inversely proportional to the length of the time series, for example, and so is the risk of misstating the Type I and Type II error rates. Threats to statistical conclusion validity arise from the classical and modern hybrid significance testing structures; the serious threats that weigh heavily in p-value tests are shown to be undefined in Bayesian tests. While the particularly vexing threats raised by modern null hypothesis testing are resolved through the elimination of the modern null hypothesis test, threats to statistical conclusion validity would inevitably persist and new threats would arise.

Book chapters on the topic "Proportion inference"

1. Myers, Chelsea. "Inference About a Population Proportion." In Project-Based R Companion to Introductory Statistics, 119–30. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429292002-ch09.
2. Cowles, Mary Kathryn. "Inference for a Population Proportion." In Springer Texts in Statistics, 49–65. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-5696-4_4.
3. Best, Lisa, and Claire Goggin. "The Science of Seeing Science: Examining the Visuality Hypothesis." In Diagrammatic Representation and Inference, 339–47. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_34.

Abstract:
Fundamental disciplinary differences may be traceable to the use of visual representations, with researchers in the physical and life sciences relying more heavily on visuality. Our goal was to examine how inscriptions are used by scientists in different disciplines. We analyzed 2,467 articles from journals in biology, criminology and criminal justice, gerontology, library and information science, medicine, psychology, and sociology. The proportion of page space dedicated to graphs, tables, and non-graph illustrations was calculated. A Visuality Index was defined as the proportion of page space dedicated to visual depictions of data and non-data information. An ANOVA indicated a statistically significant difference between disciplines and an interaction between inscription type and discipline, with articles published in biology journals dedicating more page space to graphs. The significant overlap in inscription use and visuality indicates imperfect disciplinary demarcation, suggesting similar methodological and data-analytic practices within a discipline and between subdisciplines.
4. Salinas Ruíz, Josafhat, Osval Antonio Montesinos López, Gabriela Hernández Ramírez, and Jose Crossa Hiriart. "Generalized Linear Mixed Models for Proportions and Percentages." In Generalized Linear Mixed Models with Applications in Agriculture and Biology, 209–78. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32800-8_6.

Abstract:
In this chapter, we will review generalized linear mixed models (GLMMs) whose response can be either a proportion or a percentage. By proportion and percentage data we mean data whose expected value lies between 0 and 1, or between 0 and 100. For the remainder of this book, we will refer to this type of data only in terms of proportions, knowing that a percentage scale is obtained simply by multiplying by 100. Proportions can be classified into two types: discrete and continuous. Discrete proportions arise when the unit of observation consists of N distinct entities, of which y individuals have the attribute of interest. N must be a nonnegative integer and y an integer with y ≤ N. Therefore, the observed proportion must be a discrete fraction, which can take the values $$\frac{0}{N}, \frac{1}{N}, \cdots, \frac{N}{N}$$. A binomial distribution is the sum of a series of m independent binary trials (i.e., trials with only two possible outcomes: success or failure), where all trials have the same probability of success. For binary and binomial distributions, the target of inference is the value of the parameter π such that $$0 \le E\left(\frac{y}{N}\right) = \pi \le 1$$. Continuous proportions (ratios) arise when the researcher measures responses such as the fraction of the area of a leaf infested with a fungus, the proportion of damaged cloth in a square meter, the fraction of a contaminated area, and so on. As with the binomial parameter π, continuous rates (fractions) take values between 0 and 1, but, unlike the binomial, continuous proportions do not result from a set of Bernoulli trials. Instead, the beta distribution is most often used when the response variable is a continuous proportion. In the following sections, we will first address modeling issues when we have binary and binomial data. When the response variable is binomial, we have the option of using a linearization method (pseudo-likelihood (PL)) or the Laplace or quadrature integral approximation (Stroup 2012).
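
A minimal fixed-effects-only sketch of the binomial part of this setup, using statsmodels (a GLMM would add the random effects, which this sketch omits; the simulated data and coefficients are illustrative):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Discrete proportions: y successes out of N trials per experimental unit
n_obs = 60
x = rng.uniform(0, 1, n_obs)
trials = rng.integers(20, 40, size=n_obs)
p = 1 / (1 + np.exp(-(-1.0 + 2.5 * x)))     # logit link, true coefficients
y = rng.binomial(trials, p)

# Binomial GLM on (successes, failures) pairs
X = sm.add_constant(x)
fit = sm.GLM(np.column_stack([y, trials - y]), X,
             family=sm.families.Binomial()).fit()
print(fit.params)                            # estimates near (-1.0, 2.5)
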
5. Manalo, Emmanuel, and Laura Ohmes. "The Use of Diagrams in Planning for Report Writing." In Diagrammatic Representation and Inference, 268–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15146-0_23.

Abstract:
In this study, we investigated 32 undergraduate university students' use of diagrams in planning to write two coursework reports. For both reports, the students were asked to submit a diagrammatic plan for what they were going to write. Prior to their first plan, no instruction was provided about how to use diagrams for planning. However, prior to the second plan, the students were given instruction on the use of sketchnoting, which is one method for creating visual notes and organizing ideas. For the first plan, only 31% actually submitted a diagram plan, with the majority submitting a text-based plan. For the second plan, the proportion who submitted a diagram plan increased to 66%, but these students also reported experiencing more difficulty in creating their plans compared to those who submitted text-based plans. The students' plans and reports were scored for various quality features, and the analysis revealed that, for the second report, diagram plans had a better logical structure than text plans. More importantly, second reports created with diagram plans were also found to have a better logical structure than those created with text plans. The findings indicate that many students require instruction to be able to create diagrammatic plans, but that creating such plans can be helpful in structuring their written work.
6. Holmes, William H., and William C. Rinaman. "Inference for Proportions." In Statistical Literacy for Clinical Practitioners, 149–77. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12550-3_6.
7. Pagano, Marcello, Kimberlee Gauvreau, and Heather Mattie. "Inference on Proportions." In Principles of Biostatistics, 323–50. 3rd ed. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9780429340512-14.
8. Hahs-Vaughn, Debbie L., and Richard G. Lomax. "Inferences About Proportions." In Statistical Concepts, 297–343. New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429261268-8.
9. Connor, Jason T., and Peter B. Imrey. "Proportions: Inferences and Comparisons." In Methods and Applications of Statistics in Clinical Trials, 570–94. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118596333.ch34.
10. Seber, George A. F. "Proportions, Inferences, and Comparisons." In International Encyclopedia of Statistical Science, 1135–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-04898-2_463.

Conference papers on the topic "Proportion inference"

1. Center, Julian L., Kevin H. Knuth, Ariel Caticha, Adom Giffin, and Carlos C. Rodríguez. "Regression for Proportion Data." In Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2007. http://dx.doi.org/10.1063/1.2821266.
2. Yoshida, Haruka, and Manabu Kuroki. "Proportion-based Sensitivity Analysis of Uncontrolled Confounding Bias in Causal Inference." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/789.

Abstract:
Uncontrolled confounding bias causes a spurious relationship between an exposure variable and an outcome variable and precludes reliable evaluation of the causal effect from observed data. Thus, it is important to observe a sufficient set of confounders to reliably evaluate the causal effect. However, there is no statistical method for judging whether an available set of covariates is sufficient to derive a reliable estimator of the causal effect. To address this problem, we focus on the fact that the mean squared error (MSE) of the outcome variable with respect to the average causal risk can be described as the sum of "the conditional variance of the outcome variable given the exposure variable" and "the square of the uncontrolled confounding bias". We then propose a novel sensitivity analysis, namely the proportion-based sensitivity analysis of uncontrolled confounding bias in causal effects (PSA), in which the sensitivity parameter is formulated as the proportion of "the square of the uncontrolled confounding bias" to the MSE, and we clarify some of its properties. We also demonstrate the applicability of the PSA through two case studies.
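
The decomposition quoted above makes the sensitivity parameter itself a one-liner; the sketch below simply evaluates it for assumed inputs (the values are invented, and estimating Var(Y | X) and the bias is the real work):

def psa_proportion(conditional_var, bias):
    # MSE = Var(Y | X) + bias^2; the PSA sensitivity parameter is the
    # share of the MSE attributable to the squared confounding bias
    return bias ** 2 / (conditional_var + bias ** 2)

print(psa_proportion(0.21, 0.1))   # 0.01 / 0.22, about 0.045
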
3. Simonnet, Titouan, Mame Diarra Fall, Bruno Galerne, Francis Claret, and Sylvain Grangeon. "Proportion Inference Using Deep Neural Networks. Applications to X-Ray Diffraction and Hyperspectral Imaging." In 2023 31st European Signal Processing Conference (EUSIPCO). IEEE, 2023. http://dx.doi.org/10.23919/eusipco58844.2023.10289954.
4. Ma, Weijing, Fan Wang, Jingyi Zhang, and Qiang Jin. "Overload Risk Evaluation of DNs with High Proportion EVs Based on Adaptive Net-based Fuzzy Inference System." In 2020 IEEE 4th Conference on Energy Internet and Energy System Integration (EI2). IEEE, 2020. http://dx.doi.org/10.1109/ei250167.2020.9346905.
5. Seeaed, Mushtak, and Al-Hakam Hamdan. "BIMification of Stone Walls for Maintenance Management by Utilizing Linked Data." In 4th International Conference on Architectural & Civil Engineering Sciences. Cihan University-Erbil, 2023. http://dx.doi.org/10.24086/icace2022/paper.879.

Abstract:
A large proportion of the data created during the inspection and assessment of stone facades and their damages is recorded in formats that are not machine-readable and thus cannot be further processed or managed digitally. Consequently, this increases the risk of data loss and of incorrect information due to human misinterpretation. Therefore, a Multimodel-based approach has been developed in which stone facades of existing buildings are digitized as an IFC model using proxy entities and linked with web ontologies for semantic enrichment. Additionally, detected anomalies in the stone structure are implemented and linked with geometrical representations. By utilizing additional rules and inference mechanisms, the anomalies can be classified and a knowledge-based damage assessment can be processed.
6. Schockaert, Steven, Yazmin Ibanez-Garcia, and Victor Gutierrez-Basulto. "A Description Logic for Analogical Reasoning." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/281.

Abstract:
Ontologies formalise how the concepts from a given domain are interrelated. Despite their clear potential as a backbone for explainable AI, existing ontologies tend to be highly incomplete, which acts as a significant barrier to their more widespread adoption. To mitigate this issue, we present a mechanism to infer plausible missing knowledge, which relies on reasoning by analogy. To the best of our knowledge, this is the first paper that studies analogical reasoning within the setting of description logic ontologies. After showing that the standard formalisation of analogical proportion has important limitations in this setting, we introduce an alternative semantics based on bijective mappings between sets of features. We then analyse the properties of analogies under the proposed semantics, and show among others how it enables two plausible inference patterns: rule translation and rule extrapolation.
7. Abdul Rabu, Siti Nazleen, Baharuddin Aris, and Zaidatun Tasir. "Level of Students' Critical Thinking Engagement in an Online Instructional Multimedia Development Scenario-Based Discussion Forum." In eLSE 2017. Carol I National Defence University Publishing House, 2017. http://dx.doi.org/10.12753/2066-026x-17-089.

Abstract:
The present study identified students' level of critical thinking engagement (CTE) via a scaffolded asynchronous online discussion forum (AODF). It involved 31 fourth-year students taking an Instructional Multimedia Development course at the faculty of education of a Malaysian public university. Observations were made based on three major problem-solving scenario-based tasks (PSST), which involved adapting instructional design theories and models, project management, and multimedia design fundamentals. These assessments covered a period of 10 weeks and were supported by instructor scaffolding. Using a qualitative content analysis approach, the students' transcripts (consisting of 662 posts with 1025 segments) were coded into four critical thinking engagement processes: clarification, assessment, inference, and strategies. Results showed that the most frequently used critical thinking engagement process was clarification, followed by assessment, strategies, and inference. However, although clarification accounted for the dominant proportion, the total of low-level and high-level critical thinking engagements increased from one task to the next. Low-level engagement related to the clarification process declined as high-level engagement related to the assessment and strategy processes increased. Further interview analyses revealed that, apart from the online scenario-based discussion approach being a new experience for these students, they were also not trained to write higher-order thinking answers, as they were used to only recalling facts. However, the promising trend of a gradual increase in high-level engagement with a downward trend in clarification demonstrated that instructor scaffolding plays an important role in supporting students' critical thinking engagement via an AODF.
8. Pore, M., G. Gilfeather, and L. Levy. "Risk Assessment in Signature Analysis." In ISTFA 1996. ASM International, 1996. http://dx.doi.org/10.31399/asm.cp.istfa1996p0177.

Abstract:
The failure analysis lab has analyzed n devices that all have the same failure signature (a failure mode plus other observable characteristics), and found that they all failed from the same mechanism. We wish to identify this mechanism with the failure signature so that future parts with the same signature may be assigned a mechanism without analysis, but by inference from historical data, thus saving lab time and resources. What is the risk of error? A probability model is presented that allows the analyst to calculate a confidence interval for the proportion of future devices with the same signature failing from the same mechanism. An X/Y criterion is defined: one is X% confident that greater than Y% of future devices with this signature will also have this mechanism. The model is presented for an 'on-going process' application and for a 'finite-population' application. Easy calculation methods are presented, and charts are given to illustrate and shortcut the calculations.
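
The abstract's X/Y statement has a simple exact-binomial special case when all n analyzed devices shared one mechanism; the sketch below uses that textbook bound, not the paper's own model:

def lower_bound_proportion(n, confidence=0.90):
    # With n successes in n trials, the one-sided lower confidence bound
    # on the proportion p solves p**n = 1 - confidence
    return (1 - confidence) ** (1.0 / n)

# After 10 consistent analyses, one is 90% confident that more than ~79%
# of future devices with this signature fail by the same mechanism
print(lower_bound_proportion(10))   # ~0.794
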
9. Zhang, Yuanxing, Yangbin Zhang, Kaigui Bian, and Xiaoming Li. "Towards Reading Comprehension for Long Documents." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/638.

Abstract:
Machine reading comprehension has gained attention from both industry and academia. It is a very challenging task that involves various domains such as language comprehension, knowledge inference, summarization, etc. Previous studies mainly focus on reading comprehension on short paragraphs, and these approaches fail to perform well on long documents. In this paper, we propose a hierarchical match attention model to instruct the machine to extract answers from a specific short span of passages for the long document reading comprehension (LDRC) task. The model takes advantage of a hierarchical LSTM to learn the paragraph-level representation, and implements a match mechanism (i.e., quantifying the relationship between two contexts) to find the most appropriate paragraph that includes the hint of answers. The task can then be decoupled into a reading comprehension task for short paragraphs, such that the answer can be produced. Experiments on the modified SQuAD dataset show that our proposed model outperforms existing reading comprehension models by at least 20% regarding exact match (EM), F1, and the proportion of identified paragraphs that are exactly the short paragraphs where the original answers are located.
10. Prade, Henri, and Gilles Richard. "Analogical Proportions: Why They Are Useful in AI." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/621.

Abstract:
This paper presents a survey of research on analogical reasoning whose building blocks are analogical proportions, statements of the form "a is to b as c is to d". They have been developed over the last twenty years from an Artificial Intelligence perspective. After discussing their formal modeling and the associated inference mechanism, the paper reports the main results obtained in various AI domains ranging from computational linguistics to classification, including image processing, I.Q. tests, case-based reasoning, preference learning, and formal concept analysis. The last section discusses some new theoretical concerns and the potential of analogical proportions in other areas such as argumentation, transfer learning, and XAI.

Organization reports on the topic "Proportion inference"

1. Beland, Anne, and Robert J. Mislevy. Probability-Based Inference in a Domain of Proportional Reasoning Tasks. Fort Belvoir, VA: Defense Technical Information Center, January 1992. http://dx.doi.org/10.21236/ada247304.
2. Rodríguez, Francisco. Cleaning Up the Kitchen Sink: On the Consequences of the Linearity Assumption for Cross-Country Growth Empirics. Inter-American Development Bank, January 2006. http://dx.doi.org/10.18235/0011322.

Abstract:
Existing work in growth empirics either assumes linearity of the growth function or attempts to capture non-linearities by the addition of a small number of quadratic or multiplicative interaction terms. Under a more generalized failure of linearity or if the functional form taken by the non-linearity is not known ex ante, such an approach is inadequate and will lead to biased and inconsistent OLS and instrumental variables estimators. This paper uses non-parametric and semiparametric methods of estimation to evaluate the relevance of strong non-linearities in commonly used growth data sets. Our tests decisively reject the linearity hypothesis. A preponderance of our tests also rejects the hypothesis that growth is a separable function of its regressors. Absent separability, the approximation error of estimators of the growth function grows in proportion to the number of relevant dimensions, substantially increasing the data requirements necessary to make inferences about the growth effects of regressors. We show that appropriate non-parametric tests are commonly inconclusive as to the effects of policies, institutions and economic structure on growth.