Academic literature on the topic 'Bayesian paradigms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayesian paradigms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Bayesian paradigms"

1

Liu, Zhi-Qiang. "Bayesian Paradigms in Image Processing." International Journal of Pattern Recognition and Artificial Intelligence 11, no. 01 (February 1997): 3–33. http://dx.doi.org/10.1142/s0218001497000020.

Full text
Abstract:
A large number of image and spatial information processing problems involve estimating intrinsic image information from observed images, for instance, image restoration, image registration, image partitioning, depth estimation, shape reconstruction and motion estimation. These are inverse problems and are generally ill-posed. Such estimation problems can be readily formulated by Bayesian models, which infer the desired image information from the measured data. Bayesian paradigms have played a very important role in spatial data analysis for over three decades and have found many successful applications. In this paper, we discuss several aspects of Bayesian paradigms: uncertainty present in the observed image, prior distribution modeling, Bayesian estimation techniques in image processing (in particular the maximum a posteriori estimator and Kalman filtering theory), robustness, and Markov random fields and their applications.
APA, Harvard, Vancouver, ISO, and other styles
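The MAP estimation this abstract mentions can be made concrete in a few lines: with a Gaussian noise model and a Gaussian smoothness prior on first differences, the MAP estimate of a 1-D signal reduces to solving a linear system. This is a minimal sketch; the signal, noise level, and prior strength are illustrative, not taken from the paper.

```python
import numpy as np

# Observed signal y = x_true + noise. MAP estimate under
#   likelihood:  y ~ N(x, sigma^2 I)
#   prior:       differences x[i+1] - x[i] ~ N(0, tau^2)   (smoothness)
# Maximizing the posterior minimizes ||y - x||^2 + lam * ||D x||^2
# with lam = sigma^2 / tau^2, whose solution solves (I + lam D^T D) x = y.

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2 * np.pi * t)
y = x_true + 0.3 * rng.standard_normal(n)

lam = 50.0                                  # prior strength (illustrative)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]    # first-difference operator
x_map = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# The regularized MAP estimate should beat the raw observations.
err_raw = np.mean((y - x_true) ** 2)
err_map = np.mean((x_map - x_true) ** 2)
print(err_map < err_raw)
```

The same structure (data-fit term plus prior term, solved as a linear system) underlies Tikhonov-style image restoration; only the operator `D` and the dimensions change.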
2

Oaksford, Mike, and Nick Chater. "New Paradigms in the Psychology of Reasoning." Annual Review of Psychology 71, no. 1 (January 4, 2020): 305–30. http://dx.doi.org/10.1146/annurev-psych-010419-051132.

Full text
Abstract:
The psychology of verbal reasoning initially compared performance with classical logic. In the last 25 years, a new paradigm has arisen, which focuses on knowledge-rich reasoning for communication and persuasion and is typically modeled using Bayesian probability theory rather than logic. This paradigm provides a new perspective on argumentation, explaining the rational persuasiveness of arguments that are logical fallacies. It also helps explain how and why people stray from logic when given deductive reasoning tasks. What appear to be erroneous responses, when compared against logic, often turn out to be rationally justified when seen in the richer rational framework of the new paradigm. Moreover, the same approach extends naturally to inductive reasoning tasks, in which people extrapolate beyond the data they are given and logic does not readily apply. We outline links between social and individual reasoning and set recent developments in the psychology of reasoning in the wider context of Bayesian cognitive science.
APA, Harvard, Vancouver, ISO, and other styles
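A classic "rational fallacy" from this literature is the argument from ignorance ("no toxicity was found, so the drug is safe"). A minimal Bayesian sketch, with purely illustrative numbers, shows why a negative search can legitimately lower the probability of toxicity when the test has some sensitivity:

```python
# Absence of evidence as (weak) evidence of absence, via Bayes' rule.
# All probabilities below are illustrative assumptions.
prior_toxic = 0.5
p_detect_if_toxic = 0.8     # assumed test sensitivity
p_false_alarm = 0.1         # assumed P(positive | not toxic)

# Posterior that the drug is toxic after a negative test:
p_neg = ((1 - p_detect_if_toxic) * prior_toxic
         + (1 - p_false_alarm) * (1 - prior_toxic))
posterior_toxic = (1 - p_detect_if_toxic) * prior_toxic / p_neg
print(round(posterior_toxic, 3))   # prints 0.182
```

The posterior (about 0.18) is below the prior (0.5), so the "fallacious" argument carries genuine, if modest, evidential weight, exactly the kind of effect the new-paradigm account formalizes.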
3

Neupert, Shevaun D., Claire M. Growney, Xianghe Zhu, Julia K. Sorensen, Emily L. Smith, and Jan Hannig. "BFF: Bayesian, Fiducial, and Frequentist Analysis of Cognitive Engagement among Cognitively Impaired Older Adults." Entropy 23, no. 4 (April 6, 2021): 428. http://dx.doi.org/10.3390/e23040428.

Full text
Abstract:
Engagement in cognitively demanding activities is beneficial to preserving cognitive health. Our goal was to demonstrate the utility of frequentist, Bayesian, and fiducial statistical methods for evaluating the robustness of effects in identifying factors that contribute to cognitive engagement for older adults experiencing cognitive decline. We collected a total of 504 observations across two longitudinal waves of data from 28 cognitively impaired older adults. Participants’ systolic blood pressure responsivity, an index of cognitive engagement, was continuously sampled during cognitive testing. Participants reported on physical and mental health challenges and provided hair samples to assess chronic stress at each wave. Using the three statistical paradigms, we compared results from six model testing levels and longitudinal changes in health and stress predicting changes in cognitive engagement. Findings were mostly consistent across the three paradigms, providing additional confidence in determining effects. We extend selective engagement theory to cognitive impairment, noting that health challenges and stress appear to be important moderators. Further, we emphasize the utility of the Bayesian and fiducial paradigms for use with relatively small sample sizes because they are not based on asymptotic distributions. In particular, the fiducial paradigm is a useful tool because it provides more information than p values without the need to specify prior distributions, which may unduly influence the results based on a small sample. We provide the R code used to develop and implement all models.
APA, Harvard, Vancouver, ISO, and other styles
4

Ly, Alexander, Akash Raj, Alexander Etz, Maarten Marsman, Quentin F. Gronau, and Eric-Jan Wagenmakers. "Bayesian Reanalyses From Summary Statistics: A Guide for Academic Consumers." Advances in Methods and Practices in Psychological Science 1, no. 3 (August 13, 2018): 367–74. http://dx.doi.org/10.1177/2515245918779348.

Full text
Abstract:
Across the social sciences, researchers have overwhelmingly used the classical statistical paradigm to draw conclusions from data, often focusing heavily on a single number: p. Recent years, however, have witnessed a surge of interest in an alternative statistical paradigm: Bayesian inference, in which probabilities are attached to parameters and models. We feel it is informative to provide statistical conclusions that go beyond a single number, and—regardless of one’s statistical preference—it can be prudent to report the results from both the classical and the Bayesian paradigms. In order to promote a more inclusive and insightful approach to statistical inference, we show how the Summary Stats module in the open-source software program JASP ( https://jasp-stats.org ) can provide comprehensive Bayesian reanalyses from just a few commonly reported summary statistics, such as t and N. These Bayesian reanalyses allow researchers—and also editors, reviewers, readers, and reporters—to (a) quantify evidence on a continuous scale using Bayes factors, (b) assess the robustness of that evidence to changes in the prior distribution, and (c) gauge which posterior parameter ranges are more credible than others by examining the posterior distribution of the effect size. The procedure is illustrated using Festinger and Carlsmith’s (1959) seminal study on cognitive dissonance.
APA, Harvard, Vancouver, ISO, and other styles
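The summary-statistics reanalysis described here can be approximated outside JASP. The sketch below computes a default (JZS) Bayes factor for a one-sample t test from only t and N, following the Rouder et al. (2009) integral; the function name and defaults are our own, not JASP's API.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """Default (JZS) Bayes factor BF10 for a one-sample t test,
    computed from the t statistic and sample size alone.
    r is the scale of the Cauchy prior on effect size."""
    nu = n - 1
    # Marginal likelihood under H0 (effect size 0), up to a constant
    # shared with H1:
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Under H1, delta ~ Cauchy(0, r), written as a scale mixture:
    # g ~ Inverse-Gamma(1/2, r^2/2).
    def integrand(g):
        dens = ((r**2 / 2) ** 0.5 / np.sqrt(np.pi)
                * g ** (-1.5) * np.exp(-r**2 / (2 * g)))
        return ((1 + n * g) ** (-0.5)
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * dens)

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0

print(jzs_bf10(2.24, 80))
```

As expected of a Bayes factor, t = 0 yields BF10 below 1 (evidence for the null), and large t yields strong evidence for the alternative; changing `r` probes the robustness of the evidence to the prior, as the abstract describes.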
5

Bojinov, Iavor I., Natesh S. Pillai, and Donald B. Rubin. "Diagnosing missing always at random in multivariate data." Biometrika 107, no. 1 (November 23, 2019): 246–53. http://dx.doi.org/10.1093/biomet/asz061.

Full text
Abstract:
Models for analysing multivariate datasets with missing values require strong, often unassessable, assumptions. The most common of these is that the mechanism that created the missing data is ignorable, which is a two-fold assumption dependent on the mode of inference. The first part, which is the focus here, under the Bayesian and direct-likelihood paradigms requires that the missing data be missing at random; in contrast, the frequentist-likelihood paradigm demands that the missing data mechanism always produce missing at random data, a condition known as missing always at random. Under certain regularity conditions, assuming missing always at random leads to a condition that can be tested using the observed data alone, namely that the missing data indicators depend only on fully observed variables. In this note we propose three different diagnostic tests that not only indicate when this assumption is incorrect but also suggest which variables are the most likely culprits. Although missing always at random is not a necessary condition to ensure validity under the Bayesian and direct-likelihood paradigms, it is sufficient, and evidence of its violation should encourage the careful statistician to conduct targeted sensitivity analyses.
APA, Harvard, Vancouver, ISO, and other styles
6

Alotaibi, Refah, Lamya A. Baharith, Ehab M. Almetwally, Mervat Khalifa, Indranil Ghosh, and Hoda Rezk. "Statistical Inference on a Finite Mixture of Exponentiated Kumaraswamy-G Distributions with Progressive Type II Censoring Using Bladder Cancer Data." Mathematics 10, no. 15 (August 7, 2022): 2800. http://dx.doi.org/10.3390/math10152800.

Full text
Abstract:
A new family of distributions called the mixture of the exponentiated Kumaraswamy-G (henceforth, in short, ExpKum-G) class is developed. We take the Weibull distribution as the baseline (G) distribution to propose and study this special sub-model, which we call the exponentiated Kumaraswamy Weibull distribution. Several useful statistical properties of the proposed ExpKum-G distribution are derived. Under the classical paradigm, we consider maximum likelihood estimation under progressive type II censoring to estimate the model parameters. Under the Bayesian paradigm, independent gamma priors are proposed to estimate the model parameters under progressive type II censored samples, assuming several loss functions. A simulation study is carried out to illustrate the efficiency of the proposed estimation strategies under both classical and Bayesian paradigms, based on progressive type II censoring models. For illustrative purposes, a real data set is considered that shows that the proposed model in the new class provides a better fit than other types of finite mixtures of exponentiated Kumaraswamy-type models.
APA, Harvard, Vancouver, ISO, and other styles
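For readers unfamiliar with this family, one common parameterization of the exponentiated Kumaraswamy-G CDF is F(x) = [1 − (1 − G(x)^a)^b]^c, where G is the baseline CDF. The sketch below plugs in a Weibull baseline; the parameterization is our assumption (it is a standard one, but may differ in detail from the paper), and parameter values are illustrative.

```python
import numpy as np

def weibull_cdf(x, shape, scale):
    """Baseline (G) distribution: the two-parameter Weibull CDF."""
    return 1.0 - np.exp(-(x / scale) ** shape)

def expkum_g_cdf(x, a, b, c, base_cdf):
    """Exponentiated Kumaraswamy-G CDF in one common parameterization:
    F(x) = [1 - (1 - G(x)^a)^b]^c."""
    g = base_cdf(x)
    return (1.0 - (1.0 - g ** a) ** b) ** c

x = np.linspace(0.01, 10.0, 500)
F = expkum_g_cdf(x, a=1.5, b=2.0, c=0.8,
                 base_cdf=lambda v: weibull_cdf(v, shape=1.3, scale=2.0))
# F is a valid CDF on this grid: nondecreasing, from near 0 to near 1.
```

The extra shape parameters a, b, c let the family bend the baseline Weibull's hazard and tail behavior, which is what gives the mixture model its flexibility for survival data such as the bladder cancer application.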
7

Neupert, Shevaun D., and Jan Hannig. "BFF: Bayesian, Fiducial, Frequentist Analysis of Age Effects in Daily Diary Data." Journals of Gerontology: Series B 75, no. 1 (August 17, 2019): 67–79. http://dx.doi.org/10.1093/geronb/gbz100.

Full text
Abstract:
Objectives: We apply new statistical models to daily diary data to advance both methodological and conceptual goals. We examine age effects in within-person slopes in daily diary data and introduce Generalized Fiducial Inference (GFI), which provides a compromise between frequentist and Bayesian inference. We use daily stressor exposure data across six domains to generate within-person emotional reactivity slopes with daily negative affect. We test for systematic age differences and similarities in these reactivity slopes, which are inconsistent in previous research. Method: One hundred and eleven older (aged 60–90) and 108 younger (aged 18–36) adults responded to daily stressor and negative affect questions each day for eight consecutive days, resulting in 1,438 total days. Daily stressor domains included arguments, avoided arguments, work/volunteer stressors, home stressors, network stressors, and health-related stressors. Results: Using Bayesian, GFI, and frequentist paradigms, we compared results for the six stressor domains with a focus on interpreting age effects in within-person reactivity. Multilevel models suggested null age effects in emotional reactivity across each of the paradigms within the domains of avoided arguments, work/volunteer stressors, home stressors, and health-related stressors. However, the models diverged with respect to null age effects in emotional reactivity to arguments and network stressors. Discussion: The three paradigms converged on null age effects in reactivity for four of the six stressor domains. GFI is a useful tool that provides additional information when making determinations regarding null age effects in within-person slopes. We provide the code for readers to apply these models to their own data.
APA, Harvard, Vancouver, ISO, and other styles
8

Kabanda, Gabriel. "Bayesian Network Model for a Zimbabwean Cybersecurity System." Oriental journal of computer science and technology 12, no. 4 (January 3, 2020): 147–67. http://dx.doi.org/10.13005/ojcst12.04.02.

Full text
Abstract:
The purpose of this research was to develop a structure for a network intrusion detection and prevention system based on the Bayesian Network for use in Cybersecurity. The phenomenal growth in the use of internet-based technologies has resulted in complexities in cybersecurity, subjecting organizations to cyberattacks. What is required is a network intrusion detection and prevention system based on the Bayesian Network structure for use in Cybersecurity. Bayesian Networks (BNs) are defined as graphical probabilistic models for multivariate analysis and are directed acyclic graphs that have an associated probability distribution function. The research determined the cybersecurity framework appropriate for a developing nation; evaluated network detection and prevention systems that use Artificial Intelligence paradigms such as finite automata, neural networks, genetic algorithms, fuzzy logic, support-vector machines or diverse data-mining-based approaches; analysed Bayesian Networks that can be represented as graphical models and are directional to represent cause-effect relationships; and developed a Bayesian Network model that can handle complexity in cybersecurity. The theoretical framework on Bayesian Networks was largely informed by the NIST Cybersecurity Framework, General deterrence theory, Game theory, Complexity theory and data mining techniques. The Pragmatism paradigm used in this research is, as a philosophy, intricately related to Mixed Methods Research (MMR). A mixed methods approach was used in this research, which is largely quantitative, with the research design being a survey and an experiment, supported by qualitative approaches in which Focus Group discussions were held. The performance of Support Vector Machine, Artificial Neural Network, K-Nearest Neighbour, Naive Bayes and Decision Tree algorithms was discussed.
Alternative improved solutions discussed include the use of machine learning algorithms specifically Artificial Neural Networks (ANN), Decision Tree C4.5, Random Forests and Support Vector Machines (SVM).
APA, Harvard, Vancouver, ISO, and other styles
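The cause-effect reasoning a Bayesian Network supports can be shown with a toy intrusion-detection network (Attack → Alert ← SensorFault), with the posterior computed by brute-force enumeration. All probabilities here are illustrative assumptions, not values from the study.

```python
from itertools import product

# Toy two-parent Bayesian network: Attack -> Alert <- SensorFault.
# Prior (marginal) probabilities for the parent nodes:
p_attack = {1: 0.01, 0: 0.99}
p_fault = {1: 0.05, 0: 0.95}
# Conditional probability table P(Alert=1 | Attack, SensorFault):
p_alert = {
    (1, 1): 0.99, (1, 0): 0.95,
    (0, 1): 0.50, (0, 0): 0.01,
}

def posterior_attack_given_alert():
    """P(Attack=1 | Alert=1) by enumerating the joint distribution."""
    joint = {}
    for a, f in product((0, 1), repeat=2):
        joint[(a, f)] = p_attack[a] * p_fault[f] * p_alert[(a, f)]
    evidence = sum(joint.values())                  # P(Alert=1)
    return (joint[(1, 0)] + joint[(1, 1)]) / evidence

print(round(posterior_attack_given_alert(), 3))    # prints 0.218
```

Even with a rare attack (1% prior), a raised alert lifts the posterior to about 22% here, and the SensorFault node shows how a BN "explains away" alerts caused by faulty sensors rather than intrusions.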
9

Laurens, Jean, Dominik Straumann, and Bernhard J. M. Hess. "Processing of Angular Motion and Gravity Information Through an Internal Model." Journal of Neurophysiology 104, no. 3 (September 2010): 1370–81. http://dx.doi.org/10.1152/jn.00143.2010.

Full text
Abstract:
The vestibular organs in the base of the skull provide important information about head orientation and motion in space. Previous studies have suggested that both angular velocity information from the semicircular canals and information about head orientation and translation from the otolith organs are centrally processed in an internal model of head motion, using the principles of optimal estimation. This concept has been successfully applied to model behavioral responses to classical vestibular motion paradigms. This study measured the dynamics of the vestibulo-ocular reflex (VOR) during postrotatory tilt, tilt during the optokinetic afternystagmus, and off-vertical axis rotation. The influence of the otolith signal on the VOR was systematically varied by using a series of tilt angles. We found that the time constants of responses varied almost identically as a function of gravity in these paradigms. We show that Bayesian modeling could predict the experimental results in an accurate and consistent manner. In contrast to other approaches, the Bayesian model also provides a plausible explanation of why these vestibulo-oculomotor responses occur as a consequence of an internal process of optimal motion estimation.
APA, Harvard, Vancouver, ISO, and other styles
10

Guo, Jeff, Bojana Ranković, and Philippe Schwaller. "Bayesian Optimization for Chemical Reactions." CHIMIA 77, no. 1/2 (February 22, 2023): 31. http://dx.doi.org/10.2533/chimia.2023.31.

Full text
Abstract:
Reaction optimization is challenging and traditionally delegated to domain experts who iteratively propose increasingly optimal experiments. Problematically, the reaction landscape is complex and often requires hundreds of experiments to reach convergence, representing an enormous resource sink. Bayesian optimization (BO) is an optimization algorithm that recommends the next experiment based on previous observations and has recently gained considerable interest in the general chemistry community. The application of BO for chemical reactions has been demonstrated to increase efficiency in optimization campaigns and can recommend favorable reaction conditions amidst many possibilities. Moreover, its ability to jointly optimize desired objectives such as yield and stereoselectivity makes it an attractive alternative or at least complementary to domain expert-guided optimization. With the democratization of BO software, the barrier of entry to applying BO for chemical reactions has drastically lowered. The intersection between the paradigms will see advancements at an ever-rapid pace. In this review, we discuss how chemical reactions can be transformed into machine-readable formats which can be learned by machine learning (ML) models. We present a foundation for BO and how it has already been applied to optimize chemical reaction outcomes. The important message we convey is that realizing the full potential of ML-augmented reaction optimization will require close collaboration between experimentalists and computational scientists.
APA, Harvard, Vancouver, ISO, and other styles
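The BO loop described above (recommend the next experiment from previous observations via a surrogate model and an acquisition function) can be sketched with a Gaussian-process surrogate and expected improvement over a discrete candidate grid. The "reaction yield" objective, grid, and hyperparameters are all illustrative assumptions, not from the review.

```python
import numpy as np
from scipy.stats import norm

def reaction_yield(x):
    # Hidden objective standing in for a real experiment (illustrative).
    return 90.0 * np.exp(-((x - 6.5) ** 2) / 2.0) + 5.0

def rbf(a, b, ls=1.5):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

candidates = np.linspace(0.0, 10.0, 101)   # e.g. a temperature grid
X = [1.0, 5.0, 9.0]                        # three initial experiments
y = [reaction_yield(x) for x in X]

for _ in range(12):
    Xa, ya = np.array(X), np.array(y)
    sig2 = ya.var() + 1.0                  # crude signal-variance estimate
    K = sig2 * rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = sig2 * rbf(candidates, Xa)
    # GP posterior mean and variance at every candidate:
    mu = ya.mean() + Ks @ np.linalg.solve(K, ya - ya.mean())
    var = sig2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    sd = np.sqrt(np.clip(var, 1e-12, None))
    # Expected improvement over the best observed yield:
    best = ya.max()
    z = (mu - best) / sd
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = float(candidates[np.argmax(ei)])
    X.append(x_next)
    y.append(reaction_yield(x_next))

print(max(y))
```

After a dozen model-guided "experiments" the loop closes in on the high-yield region, the efficiency gain over grid or random search that the review highlights; real systems add noise handling and multi-objective acquisition on top of this skeleton.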

Dissertations / Theses on the topic "Bayesian paradigms"

1

Taheri, Sona. "Learning Bayesian networks based on optimization approaches." Thesis, University of Ballarat, 2012. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/36051.

Full text
Abstract:
Learning accurate classifiers from preclassified data is a very active research topic in machine learning and artificial intelligence. There are numerous classifier paradigms, among which Bayesian Networks are very effective and well known in domains with uncertainty. Bayesian Networks are widely used representation frameworks for reasoning with probabilistic information. These models use graphs to capture dependence and independence relationships between feature variables, allowing a concise representation of the knowledge as well as efficient graph based query processing algorithms. This representation is defined by two components: structure learning and parameter learning. The structure of this model represents a directed acyclic graph. The nodes in the graph correspond to the feature variables in the domain, and the arcs (edges) show the causal relationships between feature variables. A directed edge relates the variables so that the variable corresponding to the terminal node (child) will be conditioned on the variable corresponding to the initial node (parent). The parameter learning represents probabilities and conditional probabilities based on prior information or past experience. The set of probabilities are represented in the conditional probability table. Once the network structure is constructed, the probabilistic inferences are readily calculated, and can be performed to predict the outcome of some variables based on the observations of others. However, the problem of structure learning is a complex problem since the number of candidate structures grows exponentially when the number of feature variables increases. This thesis is devoted to the development of learning structures and parameters in Bayesian Networks. Different models based on optimization techniques are introduced to construct an optimal structure of a Bayesian Network.
These models also consider the improvement of the Naive Bayes' structure by developing new algorithms to alleviate the independence assumptions. We present various models to learn parameters of Bayesian Networks; in particular we propose optimization models for the Naive Bayes and the Tree Augmented Naive Bayes by considering different objective functions. To solve corresponding optimization problems in Bayesian Networks, we develop new optimization algorithms. Local optimization methods are introduced based on the combination of the gradient and Newton methods. It is proved that the proposed methods are globally convergent and have superlinear convergence rates. As a global search we use the global optimization method, AGOP, implemented in the open software library GANSO. We apply the proposed local methods in the combination with AGOP. Therefore, the main contributions of this thesis include (a) new algorithms for learning an optimal structure of a Bayesian Network; (b) new models for learning the parameters of Bayesian Networks with the given structures; and finally (c) new optimization algorithms for optimizing the proposed models in (a) and (b). To validate the proposed methods, we conduct experiments across a number of real world problems. Print version is available at: http://library.federation.edu.au/record=b1804607~S4
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
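As a minimal illustration of the parameter-learning side (not the thesis's optimization algorithms), a Naive Bayes classifier estimates its conditional probability tables directly from counts. The sketch below uses Laplace smoothing on toy data; the dataset is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy dataset of (feature tuple, class) pairs. Illustrative only.
data = [
    (("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
    (("rain", "mild"), "yes"), (("rain", "cool"), "yes"),
    (("overcast", "hot"), "yes"), (("sunny", "cool"), "yes"),
]

classes = Counter(c for _, c in data)
counts = defaultdict(Counter)   # counts[(feature_index, class)][value]
for feats, c in data:
    for i, v in enumerate(feats):
        counts[(i, c)][v] += 1

def predict(feats, alpha=1.0):
    """Naive Bayes with Laplace (add-alpha) smoothed CPT estimates."""
    scores = {}
    for c, nc in classes.items():
        p = nc / len(data)                       # class prior
        for i, v in enumerate(feats):
            # All values seen for feature i (across classes):
            vocab = {val for (j, cc) in counts if j == i
                     for val in counts[(j, cc)]}
            p *= (counts[(i, c)][v] + alpha) / (nc + alpha * len(vocab))
        scores[c] = p
    return max(scores, key=scores.get)

print(predict(("rain", "mild")))   # prints "yes" on this toy data
```

The independence assumption that makes these counts sufficient is exactly what the thesis's Tree Augmented Naive Bayes models relax by adding edges between features.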
2

Cordani, Lisbeth Kaiserlian. "O ensino de estatística na universidade e a controvérsia sobre os fundamentos da inferência." Universidade de São Paulo, 2001. http://www.teses.usp.br/teses/disponiveis/48/48134/tde-04072011-084602/.

Full text
Abstract:
Most university programmes include, in their curricula, a compulsory basic course on elements of probability and statistics. Besides descriptive procedures associated with data analysis, the syllabus of such courses includes inferential procedures, generally presented within the classical (or frequentist) Neyman-Pearson theory. It is not customary in these courses either to discuss the epistemological aspects of statistical inference or to present the Bayesian school as a possible alternative. This course is a well-known stumbling block in academic life, for student and teacher alike. For the student, because he or she is often faced with a mechanical presentation of the subject, without applied motivation and without apparent links to the rest of the curriculum. For the teacher, because students generally arrive unprepared with respect to the primary concepts of uncertainty and variability, and with a negative predisposition owing to the taboo attached to the subject. In order to discuss the need to offer the first inferential notions in such a course, and to answer the question 'which inference should be taught in a basic course at university level?', we sought throughout this work to characterize the relations of statistics with: scientific creation in general, and rationalism and empiricism in particular; the existence or not of a scientific method; objectivism and subjectivism; the paradigms of the classical and Bayesian schools; and learning and cognition. The inferential approaches of each school were analysed and compared, and some examples are presented. This work suggests that the programme of a first course include the epistemological aspects connected with inference, as well as the presentation of statistical inference under both approaches: classical and Bayesian.
This would eliminate, at least in the student's first contacts with the area, both the rupture with the classical school advocated by many adherents of the Bayesian school and the resistance (maintenance of the status quo) defended by many members of the classical school. In fact, the proposal advocates the coexistence of the two schools in a basic course, for we believe it is the teacher's duty to show students the state of the art of the field, leaving the choice (if that makes sense) to a later stage, whether academic or professional.
In general, most undergraduate courses in Brazil offer a basic discipline on probability and statistics. Beyond the descriptive procedures associated with data analysis, these courses present to the students some inferential techniques, usually linked to the classical (frequentist) Neyman-Pearson school. It is not common to present the inferential aspects from the Bayesian point of view. Everybody knows that both student and teacher have problems with this basic discipline. The student, because he/she receives, in general, a mechanical course, without motivation and with no links to their other disciplines; and the teacher, because he/she usually teaches very naïve students concerning concepts like uncertainty and variability. Added to that, students seem to have some fear of the discipline (a taboo). In order to discuss the first inferential notions presented in this discipline, and to answer the question 'which inference should we teach in a basic discipline of statistics to undergraduate students?', we have tried, in this work, to characterise the relationship between statistics and the following aspects: scientific creation in general, and empiricism and rationalism in particular; the existence or not of a scientific method; objectivism and subjectivism; the paradigms associated with the classical and the Bayesian schools; and learning and some cognitive aspects. We have compared the inferential approaches, and some examples have been presented. This work suggests that the program of a first basic discipline of probability and statistics should include some epistemological inferential aspects as well as the introduction of inferential statistics by means of both approaches: classical and Bayesian. This will prevent, at least at the first contact, the members of the Bayesian school from proposing a rupture with the classical school, and the members of the classical school from simply maintaining the status quo.
In fact, the proposal is for the coexistence of both schools at a first level, because we think it is a teacher's duty to show the state of the art to his/her students, giving them the possibility of choosing (if necessary) at a later step.
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, T. D. "Implementation of the Bayesian paradigm for highly parameterised linear models." Thesis, University of Nottingham, 1986. http://eprints.nottingham.ac.uk/14421/.

Full text
Abstract:
This thesis re-examines the Bayes hierarchical linear model and the associated issue of variance component estimation in the light of new numerical procedures, and demonstrates that the Bayes linear model is indeed a practical proposition. Technical issues considered include the development of analytical procedures essential for efficient evaluation of the likelihood function, and a partial characterisation of the difficulty of likelihood evaluation. A general non-informative prior distribution for the hierarchical linear model is developed. Extensions to spherically symmetric error distributions are shown to be practicable and useful. The numerical technique enables the sensitivity of the results to the prior structure, error structure and model structure to be investigated. An extended example is considered which illustrates these analytical and numerical techniques in a 15 dimensional problem. A second example provides a critical examination of a British Standards Institute paper, and develops further techniques for handling alternative spherically symmetric error distributions. Recent work on variance component estimation is viewed from the Bayesian perspective, and areas for further work are identified.
APA, Harvard, Vancouver, ISO, and other styles
4

Szymczak, Marcin. "Programming language semantics as a foundation for Bayesian inference." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28993.

Full text
Abstract:
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and to define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs, and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference. The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and to reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics.
We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular. In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of "Trace MCMC" used by the Church language) matches the program semantics in the limit.
APA, Harvard, Vancouver, ISO, and other styles
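The Metropolis-Hastings algorithm whose correctness the second part of the thesis addresses can be sketched in its simplest random-walk form. Here it targets a conjugate Gaussian posterior (mean of Gaussian data, known variance) so the answer can be checked analytically; this generic sampler is only a sketch, not the thesis's Trace MCMC over program traces.

```python
import math
import random

random.seed(42)

# Data and a conjugate Normal prior on the unknown mean.
data = [1.2, 0.8, 1.5, 0.9, 1.1]
prior_mu, prior_var, lik_var = 0.0, 10.0, 1.0

def log_post(theta):
    """Unnormalized log posterior: Gaussian prior * Gaussian likelihood."""
    lp = -(theta - prior_mu) ** 2 / (2 * prior_var)
    return lp - sum((x - theta) ** 2 for x in data) / (2 * lik_var)

theta, samples = 0.0, []
for i in range(20000):
    prop = theta + random.gauss(0.0, 0.5)          # random-walk proposal
    # Accept with probability min(1, post(prop) / post(theta)):
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    if i >= 5000:                                  # discard burn-in
        samples.append(theta)

est = sum(samples) / len(samples)
print(round(est, 2))
```

The exact posterior mean here is (prior precision times prior mean plus data precision times data mean) over their sum, 5.5/5.1 ≈ 1.08, and the chain's estimate lands close to it; the thesis's contribution is proving that this kind of convergence also holds for trace-based samplers over whole probabilistic programs.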
5

Likhari, Amitoj S. "Computational accounts of attentional bias: neural network and Bayesian network models of the dot probe paradigm." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0009481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kliethermes, Stephanie Ann. "A Bayesian nonparametric approach to modeling longitudinal growth curves with non-normal outcomes." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2546.

Full text
Abstract:
Longitudinal growth patterns are routinely seen in medical studies where developments of individuals on one or more outcome variables are followed over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear, quadratic); however, these relationships may not accurately capture growth over time. Functional mixed effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends because they allow the data to determine the shape of the curve. Although FME methods are well-developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. In this thesis, we propose a Bayesian hierarchical FME model to account for growth curves with non-Gaussian outcomes. In particular, we extend traditional FME models which assume normally distributed outcomes by modeling the probabilities associated with the binomially distributed outcomes and adding an additional level to the hierarchical model to correctly specify the outcomes as binomially distributed. We then extend the proposed binomial FME model to the multinomial setting where the outcomes consist of more than two nominal categories. Current modeling approaches include modeling each category of a multinomial outcome separately via linear and nonlinear mixed effects models; yet, these approaches ignore the inherent correlation among the categories of the outcome. Our model captures this correlation through a sequence of conditional binomial FME models which results in one model simultaneously estimating probabilities in all categories. Lastly, we extend our binomial FME model to address a common medical situation where multiple outcomes are measured on subjects over time and investigators are interested in simultaneously assessing the impact of all outcomes. 
We account for the relationship between outcomes by altering the correlation structure in the hierarchical model and simultaneously estimating the outcome curves. Our methods are assessed via simulation studies and real data analyses where we investigate the ability of the models to accurately predict the underlying growth trajectory of individuals and populations. Our applications include analyses of speech development data in adults and children with cochlear implants and analyses on eye-tracking data used to assess word processing in cochlear implant patients.
APA, Harvard, Vancouver, ISO, and other styles
7

Ajili, Moez. "Reliability of voice comparison for forensic applications." Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0223/document.

Full text
Abstract:
In judicial proceedings, voice recordings are increasingly presented as evidence. In general, a scientific expert is called upon to establish whether the voice sample in question was produced by a given suspect (prosecution hypothesis) or not (defence hypothesis). This process is known as Forensic Voice Comparison (FVC). Since the emergence of the DNA typing model, the Bayesian approach has become the new "golden standard" in forensic science. In this approach, the expert expresses the result of the analysis as a likelihood ratio (LR). This ratio does not merely favour one of the hypotheses ("prosecution" or "defence"); it also quantifies the strength of that support. Although the LR is theoretically sufficient to summarize the result, in practice it is subject to certain limitations because of its estimation process. This is particularly true when automatic speaker recognition (ASpR) systems are used. These systems produce a score in every situation without taking into account the conditions specific to the case under study. Several factors are almost always ignored by the estimation process, such as the quality and quantity of information in the two voice recordings, the consistency of information between the two recordings, their phonetic content, or the intrinsic characteristics of the speakers. All these factors call into question the reliability of forensic voice comparison. In this thesis, we address this problem in the context of automatic systems (ASpR) on two main points. The first consists of establishing a hierarchy of the phonetic categories of speech sounds according to the amount of speaker-specific information they carry. 
This study shows the importance of phonetic content: it reveals interesting differences between phonemes and the strong influence of intra-speaker variability. These results were confirmed by a complementary study of oral vowels based on formant parameters, independently of any speaker recognition system. The second point consists of implementing an approach to predict the reliability of the LR from the two recordings of a voice comparison without resorting to an ASpR system. To this end, we defined a homogeneity measure (NHM) able to estimate the amount of information and the homogeneity of that information between the two recordings under consideration. Our hypothesis is that homogeneity is directly correlated with the degree of reliability of the LR. The results confirmed this hypothesis, with the NHM measure strongly correlated with the LR reliability measure. Our work also revealed significant differences in NHM behaviour between target and impostor comparisons. It showed that a "brute force" approach (relying on a large number of comparisons) is not enough to ensure a sound evaluation of reliability in FVC: certain variability factors can induce local system behaviours tied to particular situations. For a better understanding of the FVC approach and/or an ASpR system, it is necessary to explore the behaviour of the system at as detailed a scale as possible (the devil is in the details).
It is common to see voice recordings presented as forensic evidence in court. Generally, a forensic expert is asked to analyse both the suspect's and the criminal's voice samples in order to indicate whether the evidence supports the prosecution (same-speaker) or defence (different-speakers) hypothesis. This process is known as Forensic Voice Comparison (FVC). Since the emergence of the DNA typing model, the likelihood-ratio (LR) framework has become the new "golden standard" in forensic sciences. The LR not only supports one of the hypotheses but also quantifies the strength of its support. However, the LR is subject to practical limitations arising from its estimation process. This is particularly true when Automatic Speaker Recognition (ASpR) systems are considered, since they output a score in all situations regardless of the case-specific conditions. Indeed, several factors are not taken into account by the estimation process, such as the quality and quantity of information in the two voice recordings, their phonological content, and the speakers' intrinsic characteristics. All these factors put into question the validity and reliability of FVC. In this thesis, we address these issues. First, we analyse how the phonetic content of a pair of voice recordings affects FVC accuracy. We show that oral vowels, nasal vowels and nasal consonants carry more speaker-specific information than the averaged phonemic content. In contrast, plosives, liquids and fricatives do not have a significant impact on LR accuracy. This investigation demonstrates the importance of phonemic content and highlights interesting differences between inter-speaker and intra-speaker effects. A further study examines the individual speaker-specific information carried by each vowel, based on formant parameters and without any use of an ASpR system; it revealed interesting differences between vowels in terms of the quantity of speaker information. 
The results clearly show the importance of intra-speaker variability effects in estimating FVC reliability. Second, we investigate an approach to predict LR reliability based only on the pair of voice recordings. We define a homogeneity criterion (NHM) able to measure the amount of relevant information and the homogeneity of this information between the pair of voice recordings, expecting the lowest homogeneity values to go with the lowest LR accuracy and, conversely, the highest with the highest. The results demonstrated the value of the homogeneity measure for FVC reliability. Our studies also revealed large behavioural differences between genuine and impostor FVC trials, and confirmed the importance of intra-speaker variability effects in estimating FVC reliability. The main takeaway of this thesis is that averaging system behaviour over a large number of factors (speaker, duration, content, and so on) potentially hides many important details. For a better understanding of the FVC approach and/or an ASpR system, it is mandatory to explore the behaviour of the system at as detailed a scale as possible (the devil lies in the details).
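The LR framework summarized in this abstract can be sketched numerically. The following is a minimal illustration that models the evidence as a single comparison score under two Gaussian score distributions; the distribution parameters and the score are hypothetical and are not taken from the thesis:

```python
import math

# Sketch of the likelihood-ratio (LR) framework for forensic voice comparison:
# LR = p(evidence | same speaker) / p(evidence | different speakers).
# Here the evidence is one comparison score, and each hypothesis is modeled
# by a Gaussian score distribution with hypothetical parameters.

def gaussian_pdf(x, mean, std):
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def likelihood_ratio(score, same_params, diff_params):
    return gaussian_pdf(score, *same_params) / gaussian_pdf(score, *diff_params)

# LR > 1 supports the prosecution (same-speaker) hypothesis,
# LR < 1 supports the defence (different-speakers) hypothesis.
lr = likelihood_ratio(2.0, same_params=(2.5, 1.0), diff_params=(-1.0, 1.0))
print(lr > 1.0)  # True
```

The thesis's point is precisely that such a score-based LR ignores case-specific conditions (recording quality, phonetic content, speaker characteristics), which is what the NHM homogeneity measure is meant to flag.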
APA, Harvard, Vancouver, ISO, and other styles
8

Rodrigues, Agatha Sacramento. "Estatística em confiabilidade de sistemas: uma abordagem Bayesiana paramétrica." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10102018-232055/.

Full text
Abstract:
The reliability of a system of components depends on the reliability of each component. Thus, estimating the reliability function of each component of the system is of interest. This is not an easy task, however, because when the system fails the failure time of a given component may not be observed, i.e., a censored-data problem. In this work, we propose parametric Bayesian models for estimating the reliability functions of components and systems in four different scenarios. First, a Weibull model is proposed to estimate the lifetime distribution of a component of interest in non-repairable coherent systems, when the system failure time and the state of the component at the moment of system failure are available. The assumption that the components' lifetimes are identically distributed is not imposed, but the assumption of independence among the components' failure times is required, as stated and duly proved in a theorem. In situations with masked cause of failure, the states of the components at the moment of system failure are not observed, and for this scenario a Weibull model with latent variables in the estimation process is proposed. The two models described above estimate the components' reliability functions marginally when the information on the other components is unavailable or unnecessary, and consequently the assumption of independence among the components' lifetimes is required. In order not to impose this assumption, Hougaard's multivariate Weibull model is proposed for estimating the reliability functions of components in non-repairable coherent systems. Finally, a Weibull model is proposed for estimating the reliability function of components of a repairable series system with masked cause of failure. 
For each scenario considered, different simulation studies are carried out to evaluate the proposed models, always comparing them with the best solution previously found in the literature; in general, the proposed models yield better results. To demonstrate the applicability of the models, data analyses are performed on real problems not only from the reliability area but also from the social area.
The reliability of a system of components depends on the reliability of each component. Thus, the initial statistical task should be the estimation of the reliability of each component of the system. This is not an easy task because, when the system fails, the failure time of a given component may not be observed, that is, a problem of censored data. We propose parametric Bayesian models for estimating the reliability functions of systems and components in four scenarios. First, a Weibull model is proposed to estimate a component's failure-time distribution from non-repairable coherent systems when the system failure time and the component's status at the moment of system failure are available. Furthermore, identically distributed failure times are not a required restriction. An important result is proved: without the assumption that the components' lifetimes are mutually independent, a given set of sub-reliability functions does not identify the corresponding marginal reliability function. In masked cause-of-failure situations, it is not possible to identify the statuses of the components at the moment of system failure, and in this second scenario we propose a Bayesian Weibull model by means of latent variables in the estimation process. The two models described above estimate the components' reliability functions marginally when the information on the other components is unavailable or unnecessary and, consequently, the assumption of independence among the components' failure times is needed. In order not to impose this assumption, the Hougaard multivariate Weibull model is proposed for estimating the reliability functions of components in non-repairable coherent systems. Finally, a Weibull model is proposed for estimating the reliability functions of components of a repairable series system with masked cause of failure. 
For each scenario, different simulation studies are carried out to evaluate the proposed models, always comparing them with the best solution previously found in the literature; in general, the proposed models present better results. To demonstrate the applicability of the models, data analyses are performed on real problems not only from the reliability area but also from the social area.
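The quantities estimated in this thesis can be illustrated with a minimal sketch using the standard Weibull reliability function and the independent-series formula; the shape and scale values below are hypothetical and are not taken from the thesis:

```python
import math

# Weibull reliability of one component: R(t) = exp(-(t / scale)**shape).
# For a series system of independent components, the system works only if
# every component works, so R_sys(t) is the product of the component
# reliabilities. (The thesis's point is that this independence assumption
# can be relaxed, e.g. via Hougaard's multivariate Weibull model.)

def weibull_reliability(t, shape, scale):
    return math.exp(-((t / scale) ** shape))

def series_system_reliability(t, components):
    r = 1.0
    for shape, scale in components:
        r *= weibull_reliability(t, shape, scale)
    return r

# Hypothetical two-component series system.
parts = [(1.5, 1000.0), (2.0, 800.0)]
print(round(series_system_reliability(500.0, parts), 3))  # 0.475
```

Censoring enters exactly here: when the system fails at time t, a still-working component contributes only the information that its lifetime exceeds t.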
APA, Harvard, Vancouver, ISO, and other styles
9

Sarcià, Salvatore Alessandro. "An Approach to improving parametric estimation models in the case of violation of assumptions based upon risk analysis." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/1048.

Full text
Abstract:
In this work, we show the mathematical reasons why parametric models fall short of providing correct estimates and define an approach that overcomes the causes of these shortfalls. The approach aims at improving parametric estimation models when any regression assumption is violated for the data being analyzed. Violations include errors that are correlated with the regressors, a model that is not linear, a heteroscedastic sample, or a non-Gaussian error distribution. If the data violate the regression assumptions and we do not deal with the consequences of these violations, we cannot improve the model and the estimates will remain incorrect. The novelty of this work is that we define and use a feed-forward multi-layer neural network for discrimination problems to calculate prediction intervals (i.e., evaluate uncertainty), make estimates, and detect improvement needs. The primary difference from traditional methodologies is that the proposed approach can deal with scope error, model error, and assumption error at the same time. The approach can be applied for prediction, inference, and model improvement in any situation and context without making specific assumptions. An important benefit of the approach is that it can be completely automated as a stand-alone estimation methodology or used to support experts and organizations together with other estimation techniques (e.g., human judgment, parametric models). Unlike other methodologies, the proposed approach focuses on model improvement by integrating the estimation activity into a wider process that we call the Estimation Improvement Process, an instantiation of the Quality Improvement Paradigm. This approach aids mature organizations in learning from their experience and improving their processes over time with respect to managing their estimation activities. 
To illustrate the approach, we use an old NASA COCOMO data set to (1) build an evolvable neural network model and (2) show how a parametric model, e.g., a regression model, can be improved and evolved with new project data.
APA, Harvard, Vancouver, ISO, and other styles
10

Woo, Gloria. "Evolving Paradigms in the Treatment of Hepatitis B." Thesis, 2010. http://hdl.handle.net/1807/32961.

Full text
Abstract:
Hepatitis B is a serious global health problem, with over 2 billion people infected worldwide and 350 million suffering from chronic hepatitis B (CHB) infection. Infection can lead to chronic hepatitis, cirrhosis and hepatocellular carcinoma (HCC), accounting for 320,000 deaths per year. Numerous treatments are available, but with a growing number of therapies, each with considerable trade-offs, the optimal treatment strategy is not obvious. This dissertation investigates the relative efficacy of treatments for CHB and estimates the health-related quality of life (HRQOL) and health utilities of mild to advanced CHB patients. A systematic review of published randomized controlled trials comparing surrogate outcomes for the first year of treatment was performed. Bayesian mixed treatment comparison meta-analysis was used to synthesize odds ratios, including 95% credible intervals and predicted probabilities of each outcome, comparing all currently available treatments in HBeAg-positive and/or HBeAg-negative CHB patients. Among HBeAg-positive patients, tenofovir and entecavir were most effective, while in HBeAg-negative patients, tenofovir was the treatment of choice. Health-state utilities and HRQOL for patients with CHB, stratified by disease stage, were elicited from patients attending tertiary care clinics at the University Health Network in Toronto. Respondents completed the standard gamble, EQ5D, Health Utilities Index Mark 3 (HUI3), Short-Form 36 version 2 and a demographics survey in their preferred language of English, Cantonese or Mandarin. Patient charts were accessed to determine disease stage and co-morbidities. The study included 433 patients, of whom 294 had no cirrhosis, 79 had compensated cirrhosis, 7 had decompensated cirrhosis, 23 had HCC and 30 had received liver transplants. Mean standard gamble utilities were 0.89, 0.87, 0.82, 0.84 and 0.86 for the respective disease stages. HRQOL in CHB patients was impaired only at later stages of disease. 
Neither chronic infection nor antiviral treatment lowered HRQOL. Patients with CHB do not experience the lower HRQOL seen in patients with hepatitis C. The next step in this area of research is to incorporate the estimates synthesized by the current studies into a decision model evaluating the cost-effectiveness of treatment, to provide guidance on the optimal therapy for patients with HBeAg-positive and HBeAg-negative CHB.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Bayesian paradigms"

1

Titelbaum, Michael G. Fundamentals of Bayesian Epistemology 2. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192863140.001.0001.

Full text
Abstract:
This book introduces readers to the fundamentals of Bayesian epistemology. It begins by motivating and explaining the idea of a degree of belief (also known as a "credence"). It then presents Bayesians' five core normative rules governing degrees of belief: Kolmogorov's three probability axioms, the Ratio Formula for conditional credences, and Conditionalization for updating credences over time. After considering a few proposed additions to these norms, it applies the core rules to confirmation and decision theory. The book then details arguments for the Bayesian rules based on representation theorems, Dutch Books, and accuracy measures. Finally, it looks at objections and challenges to Bayesian epistemology. It presents problems concerning memory loss, self-location, old evidence, logical omniscience, and the subjectivity of priors. It considers the rival statistical paradigms of frequentism and likelihoodism. Then it explores alternative Bayesian-style formalisms involving comparative confidence rankings, credence ranges, and Dempster-Shafer functions.
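The Ratio Formula and Conditionalization mentioned in this abstract can be illustrated with a toy numerical example; the credence values below are hypothetical and not drawn from the book:

```python
# Toy illustration of two of the core Bayesian norms.
# Ratio Formula: cr(H | E) = cr(H and E) / cr(E), provided cr(E) > 0.
# Conditionalization: upon learning E (and nothing more), the new
# unconditional credence in H becomes the old conditional credence cr(H | E).

def conditionalize(cr_h_and_e, cr_e):
    if cr_e <= 0:
        raise ValueError("conditional credence undefined when cr(E) == 0")
    return cr_h_and_e / cr_e

# Hypothetical credences: cr(H and E) = 0.2, cr(E) = 0.5.
# After learning E, the agent's credence in H should be 0.2 / 0.5 = 0.4.
print(conditionalize(0.2, 0.5))  # 0.4
```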
APA, Harvard, Vancouver, ISO, and other styles
2

Titelbaum, Michael G. Fundamentals of Bayesian Epistemology 1. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198707608.001.0001.

Full text
Abstract:
This book introduces readers to the fundamentals of Bayesian epistemology. It begins by motivating and explaining the idea of a degree of belief (also known as a "credence"). It then presents Bayesians' five core normative rules governing degrees of belief: Kolmogorov's three probability axioms, the Ratio Formula for conditional credences, and Conditionalization for updating credences over time. After considering a few proposed additions to these norms, it applies the core rules to confirmation and decision theory. The book then details arguments for the Bayesian rules based on representation theorems, Dutch Books, and accuracy measures. Finally, it looks at objections and challenges to Bayesian epistemology. It presents problems concerning memory loss, self-location, old evidence, logical omniscience, and the subjectivity of priors. It considers the rival statistical paradigms of frequentism and likelihoodism. Then it explores alternative Bayesian-style formalisms involving comparative confidence rankings, credence ranges, and Dempster-Shafer functions.
APA, Harvard, Vancouver, ISO, and other styles
3

Data Mining: Foundations and Intelligent Paradigms, Vol. 2: Statistical, Bayesian, Time Series and Other Theoretical Aspects. Springer Berlin / Heidelberg, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jain, Lakhmi C., and Dawn E. Holmes. Data Mining: Foundations and Intelligent Paradigms, Vol. 2: Statistical, Bayesian, Time Series and Other Theoretical Aspects. Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Newen, Albert, Leon De Bruin, and Shaun Gallagher, eds. The Oxford Handbook of 4E Cognition. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198735410.001.0001.

Full text
Abstract:
The Oxford Handbook of 4E Cognition provides a systematic overview of the state of the art in the field of 4E cognition: it includes chapters on hotly debated topics, for example, on the nature of cognition and the relation between cognition, perception and action; it discusses recent trends such as Bayesian inference and predictive coding; it presents new insights and findings regarding social understanding, including the development of false-belief understanding, and introduces new theoretical paradigms for understanding emotions and conceptualizing the interaction between cognition, language and culture. Each thematic section ends with a critical note to foster fruitful discussion. In addition, the final section of the book is dedicated to applications of 4E cognition approaches in disciplines such as psychiatry and robotics. This is a book with high relevance for philosophers, psychologists, psychiatrists, neuroscientists and anyone with an interest in the study of cognition, as well as a wider audience with an interest in 4E cognition approaches.
APA, Harvard, Vancouver, ISO, and other styles
6

Gamerman, Dani, Tufi M. Soares, and Flávio Gonçalves. Bayesian analysis in item response theory applied to a large-scale educational assessment. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.22.

Full text
Abstract:
This article discusses the use of a Bayesian model that incorporates differential item functioning (DIF) in analysing whether cultural differences may affect the performance of students from different countries in the various test items which make up the OECD's Programme for International Student Assessment (PISA) test of mathematics ability. The PISA tests in mathematics and other subjects are used to compare the educational attainment of fifteen-year-old students in different countries. The article first provides a background on PISA, DIF and item response theory (IRT) before describing a hierarchical three-parameter logistic model for the probability of a correct response on an individual item to determine the extent of DIF remaining in the mathematics test of 2003. The results of the Bayesian analysis illustrate the importance of appropriately accounting for all sources of heterogeneity present in educational testing and highlight the advantages of the Bayesian paradigm when applied to large-scale educational assessment.
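The three-parameter logistic (3PL) item response function referenced in this abstract has a standard form, sketched below; the item parameters are hypothetical, and the chapter's hierarchical priors and DIF terms are not shown:

```python
import math

# Sketch of the standard three-parameter logistic (3PL) item response
# function: the probability that a student with ability theta answers an
# item correctly, given discrimination a, difficulty b and guessing c.

def p_correct(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the logistic part equals 0.5, so P = c + (1 - c) / 2.
# With a hypothetical guessing parameter c = 0.2, that gives 0.6.
print(round(p_correct(0.0, a=1.2, b=0.0, c=0.2), 2))  # 0.6
```

DIF analysis asks whether these item parameters differ across groups (here, countries) for students of equal ability.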
APA, Harvard, Vancouver, ISO, and other styles
7

Westheimer, Gerald. The Shifted-Chessboard Pattern as Paradigm of the Exegesis of Geometrical-Optical Illusions. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0036.

Full text
Abstract:
The shifted chessboard or café wall illusion yields to analysis at the two poles of the practice of vision science: bottom-up, pursuing its course from the visual stimulus into the front end of the visual apparatus, and top-down, figuring how the rules governing perception might lead to it. Following the first approach, examination of the effects of light spread in the eye and of nonlinearity and center-surround antagonism in the retina has made some inroads and provided partial explanations; with respect to the second, principles of perspective and of continuity and smoothness of contours can be evoked, and arguments about perception as Bayesian inference can be joined. Insights from these two directions are helping neurophysiologists in their struggle to identify a neural substrate of the phenomenon Münsterberg described in 1897.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Bayesian paradigms"

1

Bandyopadhyay, Prasanta S., Gordon Brittan, and Mark L. Taper. "Bayesian and Evidential Paradigms." In SpringerBriefs in Philosophy, 15–36. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27772-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Biggio, Battista, Giorgio Fumera, and Fabio Roli. "Bayesian Linear Combination of Neural Networks." In Innovations in Neural Information Paradigms and Applications, 201–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04003-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

McDougall, Damon, and Chris K. R. T. Jones. "Decreasing Flow Uncertainty in Bayesian Inverse Problems Through Lagrangian Drifter Control." In Mathematical Paradigms of Climate Science, 215–28. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39092-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hruschka, Estevam R., and Maria do Carmo Nicoletti. "Roles Played by Bayesian Networks in Machine Learning: An Empirical Investigation." In Emerging Paradigms in Machine Learning, 75–116. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-28699-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Winkler, Gerhard. "The Bayesian Paradigm." In Image Analysis, Random Fields and Dynamic Monte Carlo Methods, 13–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-97522-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Longford, Nicholas T. "The Bayesian Paradigm." In Statistical Decision Theory, 49–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40433-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Winkler, Gerhard. "The Bayesian Paradigm." In Image Analysis, Random Fields and Markov Chain Monte Carlo Methods, 9–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55760-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tellaeche, Alberto, Xavier P. Burgos-Artizzu, Gonzalo Pajares, and Angela Ribeiro. "A Vision-Based Hybrid Classifier for Weeds Detection in Precision Agriculture Through the Bayesian and Fuzzy k-Means Paradigms." In Advances in Soft Computing, 72–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74972-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Vidakovic, Brani. "Γ-Minimax: A Paradigm for Conservative Robust Bayesians." In Robust Bayesian Analysis, 241–59. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1306-2_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vallverdú, Jordi. "The Coevolution, Battles, and Fights of Both Paradigms." In Bayesians Versus Frequentists, 61–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48638-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Bayesian paradigms"

1

Ocampo, Shirlee, Rechel Arcilla, Frumencio Co, Ryan Jumangit, and Felipe Diokno. "Enthusing students towards statistical literacy using transformative learning paradigm: implementation and appraisal." In Statistics education for Progress: Youth and Official Statistics. IASE international Association for Statistical Education, 2013. http://dx.doi.org/10.52041/srap.113201.

Full text
Abstract:
and changing requirements of a globalizing society. Hence, there is a need to shift from traditional methods of teaching statistics to new paradigms. This paper presents the improvements implemented, along with their appraisal, in teaching general-education statistics courses using the traditional transmissive pedagogy and then shifting to a transformative learning paradigm. The transmissive pedagogy involves merely lectures and paper-and-pen tests, while the transformative learning paradigm integrates computer-based instruction, Web technologies, authentic assessment, problem-based learning, collaborative inquiry, and the use of real-life data. Results showed a significant improvement in understanding statistics for both learning paradigms. However, the data did not provide evidence of differences in the amount of learning between the two paradigms. Classical and Bayesian factor analyses both obtained seven non-intellective factors. The two paradigms differ significantly on five factors, indicating that students are enthused towards statistical literacy under the transformative learning framework.
APA, Harvard, Vancouver, ISO, and other styles
2

Guijarro, Maria, Gonzalo Pajares, Raquel Abreu, Luis Garmendia, and Matilde Santos. "Design of a Hybrid Classifier for Natural Textures in Images from the Bayesian and Fuzzy Paradigms." In 2007 IEEE International Symposium on Intelligent Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/wisp.2007.4447562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Musharraf, Mashrura, Allison Moyle, Faisal Khan, and Brian Veitch. "Using Simulator Data to Facilitate Human Reliability Analysis in Offshore Emergency Situations." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78420.

Full text
Abstract:
Data scarcity has always been a significant challenge in the domain of human reliability analysis (HRA). The advancement of simulation technologies provides opportunities to collect human performance data that can facilitate both the development and validation paradigms of HRA. The potential of simulator data to improve HRA can be tapped through the use of advanced machine learning tools like Bayesian methods. Except for Bayesian networks, Bayesian methods have not been widely used in the HRA community. This paper uses a Bayesian method to enhance human error probability (HEP) assessment in offshore emergency situations using data generated in a simulator. Assessment begins by using constrained non-informative priors to define the HEPs in emergency situations. An experiment is then conducted in a simulator to collect human performance data in a set of emergency scenarios. The data collected during the experiment are used to update the priors and obtain informed posteriors. Use of the informed posteriors enables a better understanding of performance and a more reliable and objective assessment of human reliability, compared to traditional assessment using expert judgment.
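The prior-to-posterior updating described here can be sketched with a conjugate Beta-Binomial model. This is a simplification: a Beta prior stands in for the paper's constrained non-informative priors, and the error counts are hypothetical, not the paper's experimental data:

```python
# Hedged sketch: Beta-Binomial updating of a human error probability (HEP).
# The Beta(a, b) prior is conjugate to the Binomial likelihood, so the
# posterior after observing k errors in n trials is Beta(a + k, b + n - k).

def update_hep(a_prior, b_prior, errors, trials):
    """Return posterior Beta parameters and the posterior mean HEP."""
    a_post = a_prior + errors
    b_post = b_prior + (trials - errors)
    mean_hep = a_post / (a_post + b_post)
    return a_post, b_post, mean_hep

# Vague Beta(1, 1) prior (uniform on [0, 1]); hypothetical simulator data:
# 3 task failures observed across 30 emergency scenarios.
a, b, hep = update_hep(1.0, 1.0, errors=3, trials=30)
print(a, b, round(hep, 3))  # 4.0 28.0 0.125
```

As more simulator scenarios are run, the posterior concentrates and the HEP estimate becomes progressively less dependent on the initial prior.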
APA, Harvard, Vancouver, ISO, and other styles
4

Lu, Zhichao, Ian Whalen, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh Boddeti. "NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm (Extended Abstract)." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/659.

Full text
Abstract:
Convolutional neural networks (CNNs) are the backbones of deep learning paradigms for numerous vision tasks. Early advancements in CNN architectures were primarily driven by human expertise and elaborate design. Recently, neural architecture search (NAS) was proposed with the aim of automating the network design process and generating task-dependent architectures. This paper introduces NSGA-Net, an evolutionary search algorithm that explores a space of potential neural network architectures in three steps, namely, a population initialization step based on prior knowledge from hand-crafted architectures, an exploration step comprising crossover and mutation of architectures, and finally an exploitation step that utilizes the hidden useful knowledge stored in the entire history of evaluated neural architectures in the form of a Bayesian network. The integration of these components allows an efficient design of architectures that are competitive with, and in many cases outperform, both manually and automatically designed architectures on the CIFAR-10 classification task. The flexibility provided by simultaneously obtaining multiple architecture choices for different compute requirements further differentiates our approach from other methods in the literature.
APA, Harvard, Vancouver, ISO, and other styles
5

Snover, Matthew G., and Michael R. Brent. "A Bayesian model for morpheme and paradigm identification." In the 39th Annual Meeting. Morristown, NJ, USA: Association for Computational Linguistics, 2001. http://dx.doi.org/10.3115/1073012.1073075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Pingfeng, Byeng D. Youn, Zhimin Xi, and Artemis Kloess. "Bayesian Reliability Analysis With Evolving, Insufficient, and Subjective Data Sets." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49663.

Full text
Abstract:
A primary concern in product design is ensuring high system reliability amidst various uncertainties throughout a product life-cycle. To achieve high reliability, uncertainty data for complex product systems must be adequately collected, analyzed, and managed throughout the product life-cycle. However, despite years of research, system reliability assessment is still difficult, mainly due to the challenges of evolving, insufficient, and subjective data sets. Therefore, the objective of this research is to establish a new paradigm of reliability prediction that enables the use of evolving, insufficient, and subjective data sets (from expert knowledge, customer surveys, system inspection and testing, and field data) over the entire product life-cycle. This research will integrate probability encoding methods into a Bayesian updating mechanism, referred to as the Bayesian Information Toolkit (BIT). Likewise, a Bayesian Reliability Toolkit (BRT) will be created by incorporating reliability analysis into the Bayesian updating mechanism. In this research, both BIT and BRT will be integrated to predict reliability even with evolving, insufficient, and subjective data sets. It is shown that the proposed Bayesian reliability analysis can predict the reliability of door closing performance in a vehicle body-door subsystem where the relevant data sets are limited, subjective, and evolving.
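The "evolving data" aspect of such a Bayesian updating mechanism can be illustrated by folding in evidence stage by stage, with each posterior serving as the prior for the next data source. A minimal sketch, assuming a beta-binomial reliability model; the stage counts below are hypothetical, not from the paper.

```python
# Sequential Bayesian updating across life-cycle stages: the posterior
# from one data source becomes the prior for the next, which is how an
# evolving data set can be folded in incrementally. The beta-binomial
# model and all counts are illustrative assumptions.

def sequential_update(alpha, beta, stages):
    """Fold in (failures, trials) evidence one life-cycle stage at a time."""
    for failures, trials in stages:
        alpha += failures
        beta += trials - failures
    reliability = beta / (alpha + beta)  # posterior mean P(no failure)
    return alpha, beta, reliability

# Weak expert-knowledge prior, then hypothetical survey, testing, and
# field data arriving over time as (failures, trials) pairs.
stages = [(1, 20), (2, 50), (0, 30)]
a, b, r = sequential_update(1.0, 9.0, stages)
print(f"Beta({a}, {b}), mean reliability = {r:.3f}")
```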
APA, Harvard, Vancouver, ISO, and other styles
7

Stangl, Dalene. "Design of an internet course for training medical researchers in Bayesian statistical methods." In Training Researchers in the Use if Statistics. International Association for Statistical Education, 2000. http://dx.doi.org/10.52041/srap.00204.

Full text
Abstract:
Access to statistical information is at an all-time high, and the information age is fuelling this access at an extraordinary pace. This access increases the capacity for medical researchers to use statistics to guide decision making, yet few courses teach methods to do so. Rarely does statistics training include methods for incorporating statistical output into decision making. Mass education and educational reform are needed. Technological advances of the past decade make this goal possible, and allow us to dramatically change how we use, teach, and think about statistics. This paper covers the conceptual development of an Internet continuing-education course designed to teach the basics of Bayesian statistics to medical researchers. Two questions are discussed: Why the Internet, and why the Bayesian paradigm?
APA, Harvard, Vancouver, ISO, and other styles
8

Arasaratnam, Ienkaran, Jimi Tjong, and Ryan Ahmed. "Battery management system in the Bayesian paradigm: Part I: SOC estimation." In 2014 IEEE Transportation Electrification Conference and Expo (ITEC). IEEE, 2014. http://dx.doi.org/10.1109/itec.2014.6861863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bhoyar, A., S. Sharma, S. Barve, and R. Kumar Rana. "Intelligent Control of Autonomous Vessels: Bayesian Estimation Instead of Statistical Learning?" In International Conference on Marine Engineering and Technology Oman. London: IMarEST, 2019. http://dx.doi.org/10.24868/icmet.oman.2019.008.

Full text
Abstract:
Marine vessels have recently been considered for redesign with a view towards autonomous operation. This brings forth a number of safety concerns regarding malware attacks on intra-vehicle communication systems as well as on sensor-based communication with their environment. Designing suitable hybrid systems or cyber-physical systems such as the above, which are data driven, involves a challenge by way of difficulty in abstraction. The current modeling paradigm for cyber-physical systems is based upon the abstract idea of a hybrid automaton, which involves discrete as well as continuous mathematical models for the physical device (marine vessels). Incorporating statistical inference techniques to introduce an element of autonomy into this has recently been proposed in the literature. An engineering situation is explored in which a pair of marine vessels is deployed to navigate avoiding collision with the help of deterministic control as well as with a particle filtering state estimator. A security intrusion is considered to occur in the communication channels, and the robustness of the system is studied with the state estimation. Such intrusions can indeed be expected to defeat the collision protection design if sufficiently intense. However, better protection is offered by such Bayesian-estimation-based intelligent control as compared to statistical-learning-based control. Our results suggest that the hybrid automaton modeling paradigm with autonomy incorporated needs to be suitably abstracted in order to better design its defence against cyber-attacks.
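The particle filtering state estimator the abstract relies on can be sketched as a bootstrap filter. This is a minimal 1-D illustration, assuming a random-walk motion model and Gaussian measurement noise; the models, noise levels, and data are illustrative, not the paper's.

```python
import math
import random

# Minimal bootstrap particle filter for 1-D position tracking, a
# sketch of the kind of Bayesian state estimator the abstract
# describes. Motion/measurement models and noise levels are
# illustrative assumptions.

def particle_filter(observations, n_particles=500,
                    process_std=0.5, meas_std=1.0):
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + random.gauss(0.0, process_std) for p in particles]
        # Update: weight particles by the Gaussian measurement likelihood.
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2)
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Estimate: weighted posterior mean of the particle cloud.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: multinomial resampling proportional to weight.
        particles = random.choices(particles, weights=weights,
                                   k=n_particles)
    return estimates

random.seed(0)
est = particle_filter([0.2, 0.5, 1.1, 1.4])
print([round(e, 2) for e in est])
```

Robustness to an intrusion can then be probed by corrupting some observations and checking how far the posterior mean is dragged from the true trajectory.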
APA, Harvard, Vancouver, ISO, and other styles
10

Choudhury, Sanjiban, Siddhartha Srinivasa, and Sebastian Scherer. "Bayesian Active Edge Evaluation on Expensive Graphs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/679.

Full text
Abstract:
We consider the problem of real-time motion planning that requires evaluating a minimal number of edges on a graph to quickly discover collision-free paths. Evaluating edges is expensive, both for robots with complex geometries like robot arms, and for robots sensing the world online like UAVs. Until now, this challenge has been addressed via laziness, i.e. deferring edge evaluation until absolutely necessary, with the hope that edges turn out to be valid. However, all edges are not alike in value - some have a lot of potentially good paths flowing through them, and some others encode the likelihood of neighbouring edges being valid. This leads to our key insight - instead of passive laziness, we can actively choose edges that reduce the uncertainty about the validity of paths. We show that this is equivalent to the Bayesian active learning paradigm of decision region determination (DRD). However, the DRD problem is not only combinatorially hard but also requires explicit enumeration of all possible worlds. We propose a novel framework that combines two DRD algorithms, DIRECT and BISECT, to overcome both issues. We show that our approach outperforms several state-of-the-art algorithms on a spectrum of planning problems for mobile robots, manipulators and autonomous helicopters.
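The key insight of actively choosing informative edges can be illustrated with a toy heuristic: score each unevaluated edge by its own uncertainty, weighted by the validity mass of the candidate paths flowing through it. This is a simple myopic sketch in the spirit of the abstract, not the DIRECT or BISECT algorithms; all names and probabilities are hypothetical.

```python
from math import prod

# Toy active edge selection: prefer edges that are both uncertain
# (Bernoulli variance p*(1-p)) and shared by many promising paths.
# Assumes independent per-edge validity priors; illustrative only.

def path_validity(path, edge_prob):
    """Probability a path is collision-free, assuming independent edges."""
    return prod(edge_prob[e] for e in path)

def select_edge(paths, edge_prob, evaluated):
    best_edge, best_score = None, -1.0
    for e, p in edge_prob.items():
        if e in evaluated:
            continue
        # Weight the edge's uncertainty by the mass of candidate
        # paths it participates in.
        weight = sum(path_validity(pth, edge_prob)
                     for pth in paths if e in pth)
        score = p * (1 - p) * weight
        if score > best_score:
            best_edge, best_score = e, score
    return best_edge

# Two candidate paths sharing edge "b"; "b" is both uncertain and
# shared, so it is the most informative edge to evaluate first.
paths = [{"a", "b"}, {"b", "c"}]
probs = {"a": 0.9, "b": 0.5, "c": 0.9}
print(select_edge(paths, probs, evaluated=set()))
```

Evaluating "b" resolves uncertainty for both paths at once, whereas a lazy planner might test "a" or "c" first and learn about only one path.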
APA, Harvard, Vancouver, ISO, and other styles