
Journal articles on the topic 'Bayesian paradigms'

Consult the top 50 journal articles for your research on the topic 'Bayesian paradigms.'


1

Liu, Zhi-Qiang. "Bayesian Paradigms in Image Processing." International Journal of Pattern Recognition and Artificial Intelligence 11, no. 01 (February 1997): 3–33. http://dx.doi.org/10.1142/s0218001497000020.

Abstract:
A large number of image and spatial information processing problems involve estimating intrinsic image information from observed images: for instance, image restoration, image registration, image partition, depth estimation, shape reconstruction, and motion estimation. These are inverse problems and are generally ill-posed. Such estimation problems can be readily formulated as Bayesian models, which infer the desired image information from the measured data. Bayesian paradigms have played a very important role in spatial data analysis for over three decades and have found many successful applications. In this paper, we discuss several aspects of Bayesian paradigms: uncertainty present in the observed image; prior distribution modeling; Bayesian estimation techniques in image processing, in particular the maximum a posteriori (MAP) estimator and Kalman filtering; robustness; and Markov random fields and their applications.
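The MAP estimation mentioned in this abstract can be illustrated with a minimal sketch (the signal, noise level, and smoothness weight below are all invented for illustration): under a Gaussian noise model and a Gaussian smoothness prior on first differences, the MAP estimate of a 1-D signal reduces to Tikhonov-regularized least squares with a closed-form solution.

```python
import numpy as np

def map_denoise(y, lam=5.0):
    """MAP estimate under y = x + noise (Gaussian likelihood) and a
    Gaussian smoothness prior on the first differences of x.
    Maximizing the posterior is equivalent to minimizing
    ||y - x||^2 + lam * ||D x||^2, with closed form
    x = (I + lam * D^T D)^{-1} y."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # first-difference operator
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

# Toy example: a smooth signal corrupted by Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + rng.normal(0, 0.3, size=t.size)
denoised = map_denoise(noisy, lam=5.0)
```

Raising `lam` trades fidelity to the data against smoothness; in the image-processing setting the same structure appears with 2-D difference operators or Markov random field priors in place of `D`.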
2

Oaksford, Mike, and Nick Chater. "New Paradigms in the Psychology of Reasoning." Annual Review of Psychology 71, no. 1 (January 4, 2020): 305–30. http://dx.doi.org/10.1146/annurev-psych-010419-051132.

Abstract:
The psychology of verbal reasoning initially compared performance with classical logic. In the last 25 years, a new paradigm has arisen, which focuses on knowledge-rich reasoning for communication and persuasion and is typically modeled using Bayesian probability theory rather than logic. This paradigm provides a new perspective on argumentation, explaining the rational persuasiveness of arguments that are logical fallacies. It also helps explain how and why people stray from logic when given deductive reasoning tasks. What appear to be erroneous responses, when compared against logic, often turn out to be rationally justified when seen in the richer rational framework of the new paradigm. Moreover, the same approach extends naturally to inductive reasoning tasks, in which people extrapolate beyond the data they are given and logic does not readily apply. We outline links between social and individual reasoning and set recent developments in the psychology of reasoning in the wider context of Bayesian cognitive science.
3

Neupert, Shevaun D., Claire M. Growney, Xianghe Zhu, Julia K. Sorensen, Emily L. Smith, and Jan Hannig. "BFF: Bayesian, Fiducial, and Frequentist Analysis of Cognitive Engagement among Cognitively Impaired Older Adults." Entropy 23, no. 4 (April 6, 2021): 428. http://dx.doi.org/10.3390/e23040428.

Abstract:
Engagement in cognitively demanding activities is beneficial to preserving cognitive health. Our goal was to demonstrate the utility of frequentist, Bayesian, and fiducial statistical methods for evaluating the robustness of effects in identifying factors that contribute to cognitive engagement for older adults experiencing cognitive decline. We collected a total of 504 observations across two longitudinal waves of data from 28 cognitively impaired older adults. Participants’ systolic blood pressure responsivity, an index of cognitive engagement, was continuously sampled during cognitive testing. Participants reported on physical and mental health challenges and provided hair samples to assess chronic stress at each wave. Using the three statistical paradigms, we compared results from six model testing levels and longitudinal changes in health and stress predicting changes in cognitive engagement. Findings were mostly consistent across the three paradigms, providing additional confidence in determining effects. We extend selective engagement theory to cognitive impairment, noting that health challenges and stress appear to be important moderators. Further, we emphasize the utility of the Bayesian and fiducial paradigms for use with relatively small sample sizes because they are not based on asymptotic distributions. In particular, the fiducial paradigm is a useful tool because it provides more information than p values without the need to specify prior distributions, which may unduly influence the results based on a small sample. We provide the R code used to develop and implement all models.
4

Ly, Alexander, Akash Raj, Alexander Etz, Maarten Marsman, Quentin F. Gronau, and Eric-Jan Wagenmakers. "Bayesian Reanalyses From Summary Statistics: A Guide for Academic Consumers." Advances in Methods and Practices in Psychological Science 1, no. 3 (August 13, 2018): 367–74. http://dx.doi.org/10.1177/2515245918779348.

Abstract:
Across the social sciences, researchers have overwhelmingly used the classical statistical paradigm to draw conclusions from data, often focusing heavily on a single number: p. Recent years, however, have witnessed a surge of interest in an alternative statistical paradigm: Bayesian inference, in which probabilities are attached to parameters and models. We feel it is informative to provide statistical conclusions that go beyond a single number, and—regardless of one’s statistical preference—it can be prudent to report the results from both the classical and the Bayesian paradigms. In order to promote a more inclusive and insightful approach to statistical inference, we show how the Summary Stats module in the open-source software program JASP ( https://jasp-stats.org ) can provide comprehensive Bayesian reanalyses from just a few commonly reported summary statistics, such as t and N. These Bayesian reanalyses allow researchers—and also editors, reviewers, readers, and reporters—to (a) quantify evidence on a continuous scale using Bayes factors, (b) assess the robustness of that evidence to changes in the prior distribution, and (c) gauge which posterior parameter ranges are more credible than others by examining the posterior distribution of the effect size. The procedure is illustrated using Festinger and Carlsmith’s (1959) seminal study on cognitive dissonance.
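The kind of reanalysis described here can be approximated from the same two summary statistics with a short sketch. Note the assumption: JASP's Summary Stats module reports default Jeffreys–Zellner–Siow (JZS) Bayes factors, whereas the stand-in below uses the cruder BIC approximation of Wagenmakers (2007) for a one-sample t-test; the two generally differ in magnitude but usually agree in direction.

```python
import math

def bf01_bic(t, n, df=None):
    """BIC approximation to the Bayes factor BF01 (evidence for the null)
    for a one-sample t-test, following Wagenmakers (2007).
    Uses only the t statistic and the sample size n; this is a rough
    stand-in for the default JZS Bayes factor that JASP reports."""
    if df is None:
        df = n - 1
    r2 = t**2 / (t**2 + df)                 # variance explained by the effect
    delta_bic = n * math.log(1 - r2) + math.log(n)  # BIC(H1) - BIC(H0)
    return math.exp(delta_bic / 2)

# A large t with moderate n yields strong evidence against the null ...
print(bf01_bic(t=5.0, n=50))
# ... while a tiny t yields evidence for the null (BF01 above 1).
print(bf01_bic(t=0.1, n=50))
```

Because BF01 lives on a continuous scale, the same number also quantifies evidence *for* the null, something a nonsignificant p value cannot do.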
5

Bojinov, Iavor I., Natesh S. Pillai, and Donald B. Rubin. "Diagnosing missing always at random in multivariate data." Biometrika 107, no. 1 (November 23, 2019): 246–53. http://dx.doi.org/10.1093/biomet/asz061.

Abstract:
Models for analysing multivariate datasets with missing values require strong, often unassessable, assumptions. The most common of these is that the mechanism that created the missing data is ignorable, which is a two-fold assumption dependent on the mode of inference. The first part, which is the focus here, under the Bayesian and direct-likelihood paradigms requires that the missing data be missing at random; in contrast, the frequentist-likelihood paradigm demands that the missing data mechanism always produce missing at random data, a condition known as missing always at random. Under certain regularity conditions, assuming missing always at random leads to a condition that can be tested using the observed data alone, namely that the missing data indicators depend only on fully observed variables. In this note we propose three different diagnostic tests that not only indicate when this assumption is incorrect but also suggest which variables are the most likely culprits. Although missing always at random is not a necessary condition to ensure validity under the Bayesian and direct-likelihood paradigms, it is sufficient, and evidence of its violation should encourage the careful statistician to conduct targeted sensitivity analyses.
6

Alotaibi, Refah, Lamya A. Baharith, Ehab M. Almetwally, Mervat Khalifa, Indranil Ghosh, and Hoda Rezk. "Statistical Inference on a Finite Mixture of Exponentiated Kumaraswamy-G Distributions with Progressive Type II Censoring Using Bladder Cancer Data." Mathematics 10, no. 15 (August 7, 2022): 2800. http://dx.doi.org/10.3390/math10152800.

Abstract:
A new family of distributions called the mixture of the exponentiated Kumaraswamy-G (henceforth, in short, ExpKum-G) class is developed. We consider the Weibull distribution as the baseline (G) distribution to propose and study this special sub-model, which we call the exponentiated Kumaraswamy Weibull distribution. Several useful statistical properties of the proposed ExpKum-G distribution are derived. Under the classical paradigm, we consider maximum likelihood estimation under progressive type II censoring to estimate the model parameters. Under the Bayesian paradigm, independent gamma priors are proposed to estimate the model parameters under progressive type II censored samples, assuming several loss functions. A simulation study is carried out to illustrate the efficiency of the proposed estimation strategies under both the classical and Bayesian paradigms with progressive type II censoring. For illustrative purposes, a real data set is considered, which shows that the proposed model in the new class provides a better fit than other types of finite mixtures of exponentiated Kumaraswamy-type models.
7

Neupert, Shevaun D., and Jan Hannig. "BFF: Bayesian, Fiducial, Frequentist Analysis of Age Effects in Daily Diary Data." Journals of Gerontology: Series B 75, no. 1 (August 17, 2019): 67–79. http://dx.doi.org/10.1093/geronb/gbz100.

Abstract:
Objectives: We apply new statistical models to daily diary data to advance both methodological and conceptual goals. We examine age effects in within-person slopes in daily diary data and introduce Generalized Fiducial Inference (GFI), which provides a compromise between frequentist and Bayesian inference. We use daily stressor exposure data across six domains to generate within-person emotional reactivity slopes with daily negative affect. We test for systematic age differences and similarities in these reactivity slopes, which are inconsistent in previous research. Method: One hundred and eleven older (aged 60–90) and 108 younger (aged 18–36) adults responded to daily stressor and negative affect questions each day for eight consecutive days, resulting in 1,438 total days. Daily stressor domains included arguments, avoided arguments, work/volunteer stressors, home stressors, network stressors, and health-related stressors. Results: Using Bayesian, GFI, and frequentist paradigms, we compared results for the six stressor domains with a focus on interpreting age effects in within-person reactivity. Multilevel models suggested null age effects in emotional reactivity across each of the paradigms within the domains of avoided arguments, work/volunteer stressors, home stressors, and health-related stressors. However, the models diverged with respect to null age effects in emotional reactivity to arguments and network stressors. Discussion: The three paradigms converged on null age effects in reactivity for four of the six stressor domains. GFI is a useful tool that provides additional information when making determinations regarding null age effects in within-person slopes. We provide the code for readers to apply these models to their own data.
8

Kabanda, Gabriel. "Bayesian Network Model for a Zimbabwean Cybersecurity System." Oriental journal of computer science and technology 12, no. 4 (January 3, 2020): 147–67. http://dx.doi.org/10.13005/ojcst12.04.02.

Abstract:
The purpose of this research was to develop a structure for a network intrusion detection and prevention system based on the Bayesian Network for use in cybersecurity. The phenomenal growth in the use of internet-based technologies has created complexities in cybersecurity and subjected organizations to cyberattacks. Bayesian Networks (BNs) are graphical probabilistic models for multivariate analysis: directed acyclic graphs with an associated probability distribution function. The research determined the cybersecurity framework appropriate for a developing nation; evaluated network detection and prevention systems that use Artificial Intelligence paradigms such as finite automata, neural networks, genetic algorithms, fuzzy logic, support-vector machines, or diverse data-mining-based approaches; analysed Bayesian Networks that can be represented as graphical models and are directional to represent cause-effect relationships; and developed a Bayesian Network model that can handle complexity in cybersecurity. The theoretical framework on Bayesian Networks was largely informed by the NIST Cybersecurity Framework, general deterrence theory, game theory, complexity theory, and data mining techniques. The pragmatism paradigm used in this research is, as a philosophy, intricately related to Mixed Methods Research (MMR). The approach was largely quantitative, with the research design being a survey and an experiment, supported by qualitative Focus Group discussions. The performance of Support Vector Machine, Artificial Neural Network, K-Nearest Neighbour, Naive Bayes, and Decision Tree algorithms was discussed. Alternative improved solutions discussed include the use of machine learning algorithms, specifically Artificial Neural Networks (ANN), Decision Tree C4.5, Random Forests, and Support Vector Machines (SVM).
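The cause-effect inference that a Bayesian Network supports can be sketched on the smallest possible intrusion-detection graph, Attack → Alert; all probabilities below are invented for illustration and are not taken from the study.

```python
# Minimal Bayesian-network inference by enumeration for a two-node
# cause-effect graph: Attack -> Alert. All probabilities are illustrative.
p_attack = 0.01                 # prior P(Attack)
p_alert_given_attack = 0.95     # detector sensitivity
p_alert_given_no_attack = 0.05  # false-positive rate

# P(Alert): marginalize over the parent node.
p_alert = (p_alert_given_attack * p_attack
           + p_alert_given_no_attack * (1 - p_attack))

# Bayes' rule answers the diagnostic (effect -> cause) query P(Attack | Alert).
p_attack_given_alert = p_alert_given_attack * p_attack / p_alert

print(round(p_attack_given_alert, 4))
```

Even with a 95%-sensitive detector, the low prior probability of an attack keeps the posterior modest — the base-rate effect that any BN-based intrusion system has to account for.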
9

Laurens, Jean, Dominik Straumann, and Bernhard J. M. Hess. "Processing of Angular Motion and Gravity Information Through an Internal Model." Journal of Neurophysiology 104, no. 3 (September 2010): 1370–81. http://dx.doi.org/10.1152/jn.00143.2010.

Abstract:
The vestibular organs in the base of the skull provide important information about head orientation and motion in space. Previous studies have suggested that both angular velocity information from the semicircular canals and information about head orientation and translation from the otolith organs are centrally processed in an internal model of head motion, using the principles of optimal estimation. This concept has been successfully applied to model behavioral responses to classical vestibular motion paradigms. This study measured the dynamics of the vestibulo-ocular reflex (VOR) during postrotatory tilt, tilt during optokinetic afternystagmus, and off-vertical axis rotation. The influence of the otolith signals on the VOR was systematically varied by using a series of tilt angles. We found that the time constants of the responses varied almost identically as a function of gravity in these paradigms. We show that Bayesian modeling could predict the experimental results in an accurate and consistent manner. In contrast to other approaches, the Bayesian model also provides a plausible explanation of why these vestibulo-oculomotor responses occur as a consequence of an internal process of optimal motion estimation.
10

Guo, Jeff, Bojana Ranković, and Philippe Schwaller. "Bayesian Optimization for Chemical Reactions." CHIMIA 77, no. 1/2 (February 22, 2023): 31. http://dx.doi.org/10.2533/chimia.2023.31.

Abstract:
Reaction optimization is challenging and traditionally delegated to domain experts who iteratively propose increasingly optimal experiments. Problematically, the reaction landscape is complex and often requires hundreds of experiments to reach convergence, representing an enormous resource sink. Bayesian optimization (BO) is an optimization algorithm that recommends the next experiment based on previous observations and has recently gained considerable interest in the general chemistry community. The application of BO for chemical reactions has been demonstrated to increase efficiency in optimization campaigns and can recommend favorable reaction conditions amidst many possibilities. Moreover, its ability to jointly optimize desired objectives such as yield and stereoselectivity makes it an attractive alternative or at least complementary to domain expert-guided optimization. With the democratization of BO software, the barrier of entry to applying BO for chemical reactions has drastically lowered. The intersection between the paradigms will see advancements at an ever-rapid pace. In this review, we discuss how chemical reactions can be transformed into machine-readable formats which can be learned by machine learning (ML) models. We present a foundation for BO and how it has already been applied to optimize chemical reaction outcomes. The important message we convey is that realizing the full potential of ML-augmented reaction optimization will require close collaboration between experimentalists and computational scientists.
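The fit-acquire-evaluate loop that BO performs can be sketched in a few dozen lines (the objective standing in for a reaction "yield", the kernel, and all settings below are invented for illustration): a Gaussian-process surrogate is refit after each "experiment", and an expected-improvement acquisition picks the next condition from a 1-D grid of candidates.

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel (unit prior variance) between 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(x_obs, y_obs, x_grid):
    """Noise-free GP posterior mean and standard deviation on a grid."""
    K = rbf(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs))
    k_star = rbf(x_grid, x_obs)
    mu = k_star @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, k_star.T)
    var = np.clip(1.0 - np.sum(k_star * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI acquisition for maximization: (mu-best)*Phi(z) + sd*phi(z)."""
    z = (mu - best) / sd
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
    return (mu - best) * Phi + sd * phi

def objective(x):                       # stand-in for a reaction "yield"
    return -(x - 0.7) ** 2

x_grid = np.linspace(0, 1, 201)
x_obs = np.array([0.1, 0.5, 0.9])       # three initial "experiments"
y_obs = objective(x_obs)

for _ in range(10):                     # the BO loop: fit, acquire, evaluate
    mu, sd = gp_posterior(x_obs, y_obs, x_grid)
    x_next = x_grid[np.argmax(expected_improvement(mu, sd, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = x_obs[np.argmax(y_obs)]
print(best_x)                           # should land near the optimum at 0.7
```

Joint objectives such as yield and stereoselectivity are handled the same way, by replacing the scalar objective with a scalarized or multi-objective acquisition.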
11

Ateeq, Kahkashan, Noumana Safdar, and Shakeel Ahmed. "Exploring the Exponentiated Transmuted Inverse Rayleigh Distribution (ETIRD) in Classical and Bayesian Paradigms." STATISTICS, COMPUTING AND INTERDISCIPLINARY RESEARCH 4, no. 2 (December 31, 2022): 17–37. http://dx.doi.org/10.52700/scir.v4i2.114.

Abstract:
We derive a new three-parameter continuous probability distribution called the Exponentiated Transmuted Inverse Rayleigh Distribution (ETIRD). Various mathematical properties of the new distribution, including the mean, rth moments, moment generating function, and quantile function, are derived. In the classical paradigm, the estimators of the distribution's parameters are obtained using the maximum likelihood method. The Bayes estimators are derived under the squared error loss function (SELF) using non-informative and informative priors via the Lindley approximation technique. The Bayes estimators are compared with their corresponding maximum likelihood estimators (MLEs) in a Monte Carlo simulation study under different sample sizes, different values of the true parameters, and informative and non-informative priors. The performance of the Bayes and classical estimators is also judged on four real-life data sets. The results of the simulation study and the real-life examples show that the Bayes estimators provide better results than the MLEs.
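The classical-versus-Bayes comparison described here can be sketched on the plain inverse Rayleigh sub-model, where no Lindley approximation is needed because both estimators are closed-form (the prior hyperparameters and simulation settings below are invented for illustration): with density f(x) = (2θ/x³)e^(−θ/x²), the transform Y = 1/X² is Exponential(rate θ), so the MLE is n/Σy and a Gamma(a, b) prior yields a Gamma(a + n, b + Σy) posterior whose mean is the Bayes estimator under SELF.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 2.0
n = 500

# If X ~ inverse Rayleigh(theta), then Y = 1/X**2 ~ Exponential(rate=theta),
# so X can be sampled by inverting that transform.
y = rng.exponential(scale=1 / theta_true, size=n)
x = 1 / np.sqrt(y)

s = np.sum(1 / x**2)            # sufficient statistic: sum of 1/x_i^2

theta_mle = n / s               # classical (maximum likelihood) estimate

# Bayes estimate under a Gamma(a, b) prior and squared error loss (SELF):
# the posterior is Gamma(a + n, b + s), and the SELF estimator is its mean.
a, b = 2.0, 1.0                 # illustrative informative prior
theta_bayes = (a + n) / (b + s)

print(theta_mle, theta_bayes)   # both should be close to theta_true = 2.0
```

With a large sample the prior washes out and the two estimates nearly coincide; the Bayes advantage reported in the abstract shows up at small n, where the prior stabilizes the estimate.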
12

Jacobsen, Daniel J., Lars Kai Hansen, and Kristoffer Hougaard Madsen. "Bayesian Model Comparison in Nonlinear BOLD fMRI Hemodynamics." Neural Computation 20, no. 3 (March 2008): 738–55. http://dx.doi.org/10.1162/neco.2007.07-06-282.

Abstract:
Nonlinear hemodynamic models express the BOLD (blood oxygenation level dependent) signal as a nonlinear, parametric functional of the temporal sequence of local neural activity. Several models have been proposed for both the neural activity and the hemodynamics. We compare two such combined models: the original balloon model with a square-pulse neural model (Friston, Mechelli, Turner, & Price, 2000) and an extended balloon model with a more sophisticated neural model (Buxton, Uludag, Dubowitz, & Liu, 2004). We learn the parameters of both models using a Bayesian approach, where the distribution of the parameters conditioned on the data is estimated using Markov chain Monte Carlo techniques. Using a split-half resampling procedure (Strother, Anderson, & Hansen, 2002), we compare the generalization abilities of the models as well as their reproducibility, for both synthetic and real data, recorded from two different visual stimulation paradigms. The results show that the simple model is the better one for these data.
13

Levy, Roy. "The Rise of Markov Chain Monte Carlo Estimation for Psychometric Modeling." Journal of Probability and Statistics 2009 (2009): 1–18. http://dx.doi.org/10.1155/2009/537139.

Abstract:
Markov chain Monte Carlo (MCMC) estimation strategies represent a powerful approach to estimation in psychometric models. Popular MCMC samplers and their alignment with Bayesian approaches to modeling are discussed. Key historical and current developments of MCMC are surveyed, emphasizing how MCMC allows the researcher to overcome the limitations of other estimation paradigms, facilitates the estimation of models that might otherwise be intractable, and frees the researcher from certain possible misconceptions about the models.
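The simplest sampler in the MCMC family surveyed here, random-walk Metropolis, can be sketched on a toy conjugate problem where the exact posterior is available, so the chain's output can be checked (the data and tuning settings are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy problem: infer the mean mu of N(mu, 1) data under a N(0, 10^2) prior.
# The posterior has a closed form, so the chain can be validated against it.
data = rng.normal(1.5, 1.0, size=100)

def log_posterior(mu):
    log_prior = -0.5 * mu**2 / 10**2
    log_lik = -0.5 * np.sum((data - mu) ** 2)
    return log_prior + log_lik

# Random-walk Metropolis: propose mu' ~ N(mu, step^2) and accept with
# probability min(1, posterior ratio), computed on the log scale.
mu, step, samples = 0.0, 0.5, []
for _ in range(6000):
    prop = mu + step * rng.normal()
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(mu):
        mu = prop
    samples.append(mu)
samples = np.array(samples[1000:])      # discard burn-in

# Closed-form posterior mean for the conjugate normal-normal model.
n, s2 = len(data), 10.0**2
exact_mean = s2 * np.sum(data) / (n * s2 + 1.0)
print(samples.mean(), exact_mean)       # these should agree closely
```

The same propose-accept loop scales to psychometric models such as IRT, where the posterior has no closed form and MCMC is what makes estimation tractable.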
14

Hashemi, Meysam, Anirudh N. Vattikonda, Viktor Sip, Sandra Diaz-Pier, Alexander Peyser, Huifang Wang, Maxime Guye, Fabrice Bartolomei, Marmaduke M. Woodman, and Viktor K. Jirsa. "On the influence of prior information evaluated by fully Bayesian criteria in a personalized whole-brain model of epilepsy spread." PLOS Computational Biology 17, no. 7 (July 14, 2021): e1009129. http://dx.doi.org/10.1371/journal.pcbi.1009129.

Abstract:
Individualized anatomical information has been used as prior knowledge in Bayesian inference paradigms of whole-brain network models. However, the actual sensitivity to such personalized information in priors is still unknown. In this study, we introduce the use of fully Bayesian information criteria and leave-one-out cross-validation technique on the subject-specific information to assess different epileptogenicity hypotheses regarding the location of pathological brain areas based on a priori knowledge from dynamical system properties. The Bayesian Virtual Epileptic Patient (BVEP) model, which relies on the fusion of structural data of individuals, a generative model of epileptiform discharges, and a self-tuning Monte Carlo sampling algorithm, is used to infer the spatial map of epileptogenicity across different brain areas. Our results indicate that measuring the out-of-sample prediction accuracy of the BVEP model with informative priors enables reliable and efficient evaluation of potential hypotheses regarding the degree of epileptogenicity across different brain regions. In contrast, while using uninformative priors, the information criteria are unable to provide strong evidence about the epileptogenicity of brain areas. We also show that the fully Bayesian criteria correctly assess different hypotheses about both structural and functional components of whole-brain models that differ across individuals. The fully Bayesian information-theory based approach used in this study suggests a patient-specific strategy for epileptogenicity hypothesis testing in generative brain network models of epilepsy to improve surgical outcomes.
15

Zhong, Hongye, and Jitian Xiao. "Enhancing Health Risk Prediction with Deep Learning on Big Data and Revised Fusion Node Paradigm." Scientific Programming 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/1901876.

Abstract:
With recent advances in health systems, the amount of health data is expanding rapidly in various formats. This data originates from many new sources including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and enhancement of health services via innovative approaches. The objective of this research is to develop a framework to enhance health prediction with the revised fusion node and deep learning paradigms. Fusion node is an information fusion model for constructing prediction systems. Deep learning involves the complex application of machine-learning algorithms, such as Bayesian fusion and neural networks, for data extraction and logical inference. Deep learning, combined with information fusion paradigms, can be utilized to provide more comprehensive and reliable predictions from big health data. Based on the proposed framework, an experimental system is developed as an illustration of the framework implementation.
16

Ouyang, Guang, Joseph Dien, and Romy Lorenz. "Handling EEG artifacts and searching individually optimal experimental parameter in real time: a system development and demonstration." Journal of Neural Engineering 19, no. 1 (February 1, 2022): 016016. http://dx.doi.org/10.1088/1741-2552/ac42b6.

Abstract:
Objective. Neuroadaptive paradigms that systematically assess event-related potential (ERP) features across many different experimental parameters have the potential to improve the generalizability of ERP findings and may help to accelerate ERP-based biomarker discovery by identifying the exact experimental conditions for which ERPs differ most for a certain clinical population. Obtaining robust and reliable ERPs online is a prerequisite for ERP-based neuroadaptive research. One of the key steps involved is to correctly isolate electroencephalography artifacts in real time, because they contribute a large amount of variance that, if not removed, will greatly distort the ERP obtained. Another key factor of concern is the computational cost of the online artifact handling method. This work aims to develop and validate a cost-efficient system to support ERP-based neuroadaptive research. Approach. We developed a simple online artifact handling method, single-trial PCA-based artifact removal (SPA), based on variance distribution dichotomies to distinguish between artifacts and neural activity. We then applied this method in an ERP-based neuroadaptive paradigm in which Bayesian optimization was used to search for the individually optimal inter-stimulus interval (ISI) that generates the ERP with the highest signal-to-noise ratio. Main results. SPA was compared to other offline and online algorithms. The results showed that SPA exhibited good performance in both computational efficiency and preservation of the ERP pattern. Based on SPA, the Bayesian optimization procedure was able to quickly find the individually optimal ISI. Significance. The current work presents a simple yet highly cost-efficient method that has been validated in its ability to extract ERPs, preserve ERP effects, and better support ERP-based neuroadaptive paradigms.
17

Chinembiri, Tsikai Solomon, Onisimo Mutanga, and Timothy Dube. "Carbon Stock Prediction in Managed Forest Ecosystems Using Bayesian and Frequentist Geostatistical Techniques and New Generation Remote Sensing Metrics." Remote Sensing 15, no. 6 (March 18, 2023): 1649. http://dx.doi.org/10.3390/rs15061649.

Abstract:
The study compares the performance of a hierarchical Bayesian geostatistical methodology with a frequentist geostatistical approach, specifically Kriging with External Drift (KED), for predicting C stock using prediction aides from the Landsat-8 and Sentinel-2 multispectral remote sensing platforms. The frequentist geostatistical approach's reliance on the long-run frequency of repeated experiments for constructing confidence intervals is not always practical or feasible, as practitioners typically have access to a single dataset due to cost constraints on surveys and sampling. We evaluated the two approaches for C stock prediction using two new generation multispectral remote sensing datasets because of the inherent uncertainty characterizing spatial prediction problems in unsampled locations, as well as differences in how the Bayesian and frequentist geostatistical paradigms handle uncertainty. Information on C stock spectral prediction in the form of NDVI, SAVI, and EVI derived from the multispectral remote sensing platforms Landsat-8 and Sentinel-2 was used to build Bayesian and frequentist-based C stock predictive models in the sampled plantation forest ecosystem. Sentinel-2-based C stock predictive models outperform their Landsat-8 counterparts under both the Bayesian and frequentist inference approaches. However, the Bayesian-based Sentinel-2 C stock predictive model (RMSE = 0.17 MgC ha−1) is more accurate than its frequentist-based Sentinel-2 equivalent (RMSE = 1.19 MgC ha−1). The Sentinel-2 frequentist-based model gave a C stock prediction range of 1–290 MgC ha−1, whilst the Sentinel-2 Bayesian-based model gave a prediction range of 1–285 MgC ha−1. However, both the Bayesian and frequentist C stock predictive models built with the Landsat-8 sensor overpredicted the sampled C stock, because the range of predicted values fell outside the range of the observed C stock values.
As a result, we recommend and conclude that the Bayesian-based C stock prediction method, when it is combined with high-quality remote sensing data such as that of Sentinel-2, is an effective inferential statistical methodology for reporting C stock in managed plantation forest ecosystems.
18

Ateeq, Kahkashan, Saima Altaf, and Muhammad Aslam. "Modeling and Bayesian Analysis of Time between the Breakdown of Electric Feeders." Modelling and Simulation in Engineering 2022 (May 30, 2022): 1–13. http://dx.doi.org/10.1155/2022/5830945.

Abstract:
The failure of electric feeders is a common problem in the summer season in Pakistan. In this article, one of the troubling aspects of the electric power system of Pakistan (Multan city) has been studied. The times between breakdowns of the city's electric feeders have been modeled by proposing an inverse Rayleigh-exponential distribution. The parameters of the distribution are estimated in both the frequentist and Bayesian paradigms. Since the Bayes estimators under informative priors are not attained in closed form, this paper provides a comparative analysis of the Bayes estimators under the Lindley and Tierney–Kadane approximation methods. A simulation study and a real-life data set assess the validity of the model and the superiority of the Bayes estimators over the maximum likelihood estimators.
19

Puttick, Mark N., Joseph E. O'Reilly, Alastair R. Tanner, James F. Fleming, James Clark, Lucy Holloway, Jesus Lozano-Fernandez, et al. "Uncertain-tree: discriminating among competing approaches to the phylogenetic analysis of phenotype data." Proceedings of the Royal Society B: Biological Sciences 284, no. 1846 (January 11, 2017): 20162290. http://dx.doi.org/10.1098/rspb.2016.2290.

Abstract:
Morphological data provide the only means of classifying the majority of life's history, but the choice between competing phylogenetic methods for the analysis of morphology is unclear. Traditionally, parsimony methods have been favoured but recent studies have shown that these approaches are less accurate than the Bayesian implementation of the Mk model. Here we expand on these findings in several ways: we assess the impact of tree shape and maximum-likelihood estimation using the Mk model, as well as analysing data composed of both binary and multistate characters. We find that all methods struggle to correctly resolve deep clades within asymmetric trees, and when analysing small character matrices. The Bayesian Mk model is the most accurate method for estimating topology, but with lower resolution than other methods. Equal weights parsimony is more accurate than implied weights parsimony, and maximum-likelihood estimation using the Mk model is the least accurate method. We conclude that the Bayesian implementation of the Mk model should be the default method for phylogenetic estimation from phenotype datasets, and we explore the implications of our simulations in reanalysing several empirical morphological character matrices. A consequence of our finding is that high levels of resolution or the ability to classify species or groups with much confidence should not be expected when using small datasets. It is now necessary to depart from the traditional parsimony paradigms of constructing character matrices, towards datasets constructed explicitly for Bayesian methods.
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Zhihao, Saksham Chandra, Andrew Kayser, Ming Hsu, and Joshua L. Warren. "A Hierarchical Bayesian Implementation of the Experience-Weighted Attraction Model." Computational Psychiatry 4 (August 2020): 40–60. http://dx.doi.org/10.1162/cpsy_a_00028.

Full text
Abstract:
Social and decision-making deficits are often the first symptoms of neuropsychiatric disorders. In recent years, economic games, together with computational models of strategic learning, have been increasingly applied to the characterization of individual differences in social behavior, as well as their changes across time due to disease progression, treatment, or other factors. At the same time, the high dimensionality of these data poses an important challenge to statistical estimation of these models, potentially limiting the adoption of such approaches in patients and special populations. We introduce a hierarchical Bayesian implementation of a class of strategic learning models, experience-weighted attraction (EWA), that is widely used in behavioral game theory. Importantly, this approach provides a unified framework for capturing between- and within-participant variation, including changes associated with disease progression, comorbidity, and treatment status. We show using simulated data that our hierarchical Bayesian approach outperforms representative agent and individual-level estimation methods that are commonly used in extant literature, with respect to parameter estimation and uncertainty quantification. Furthermore, using an empirical dataset, we demonstrate the value of our approach over competing methods with respect to balancing model fit and complexity. Consistent with the success of hierarchical Bayesian approaches in other areas of behavioral science, our hierarchical Bayesian EWA model represents a powerful and flexible tool to apply to a wide range of behavioral paradigms for studying the interplay between complex human behavior and biological factors.
APA, Harvard, Vancouver, ISO, and other styles
21

van de Wouw, Didrika S., Ryan T. McKay, Bruno B. Averbeck, and Nicholas Furl. "Explaining human sampling rates across different decision domains." Judgment and Decision Making 17, no. 3 (May 2022): 487–512. http://dx.doi.org/10.1017/s1930297500003557.

Full text
Abstract:
Undersampling biases are common in the optimal stopping literature, especially for economic full choice problems. Among these kinds of number-based studies, the moments of the distribution of values that generates the options (i.e., the generating distribution) seem to influence participants’ sampling rate. However, a recent study reported an oversampling bias on a different kind of optimal stopping task: where participants chose potential romantic partners from images of faces (Furl et al., 2019). The authors hypothesised that this oversampling bias might be specific to mate choice. We preregistered this hypothesis and so, here, we test whether sampling rates across different image-based decision-making domains a) reflect different over- or undersampling biases, or b) depend on the moments of the generating distributions (as shown for economic number-based tasks). In two studies (N = 208 and N = 96), we found evidence against the preregistered hypothesis. Participants oversampled to the same degree across domains (compared to a Bayesian ideal observer model), while their sampling rates depended on the generating distribution mean and skewness in a similar way as number-based paradigms. Moreover, optimality model sampling to some extent depended on the skewness of the generating distribution in a similar way to participants. We conclude that oversampling is not instigated by the mate choice domain and that sampling rate in image-based paradigms, like number-based paradigms, depends on the generating distribution.
APA, Harvard, Vancouver, ISO, and other styles
22

Yang, Geunbo, Wongyu Lee, Youjung Seo, Choongseop Lee, Woojoon Seok, Jongkil Park, Donggyu Sim, and Cheolsoo Park. "Unsupervised Spiking Neural Network with Dynamic Learning of Inhibitory Neurons." Sensors 23, no. 16 (August 17, 2023): 7232. http://dx.doi.org/10.3390/s23167232.

Full text
Abstract:
A spiking neural network (SNN) is a type of artificial neural network that operates based on discrete spikes to process timing information, similar to the manner in which the human brain processes real-world problems. In this paper, we propose a new spiking neural network (SNN) based on conventional, biologically plausible paradigms, such as the leaky integrate-and-fire model, spike timing-dependent plasticity, and the adaptive spiking threshold, by suggesting new biological models; that is, dynamic inhibition weight change, a synaptic wiring method, and Bayesian inference. The proposed network is designed for image recognition tasks, which are frequently used to evaluate the performance of conventional deep neural networks. To manifest the bio-realistic neural architecture, the learning is unsupervised, and the inhibition weight is dynamically changed; this, in turn, affects the synaptic wiring method based on Hebbian learning and the neuronal population. In the inference phase, Bayesian inference successfully classifies the input digits by counting the spikes from the responding neurons. The experimental results demonstrate that the proposed biological model ensures a performance improvement compared with other biologically plausible SNN models.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Xuan, Ole-Christoffer Granmo, and B. John Oommen. "On incorporating the paradigms of discretization and Bayesian estimation to create a new family of pursuit learning automata." Applied Intelligence 39, no. 4 (February 2, 2013): 782–92. http://dx.doi.org/10.1007/s10489-013-0424-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Griffiths, Thomas L., Michael L. Kalish, and Stephan Lewandowsky. "Theoretical and empirical evidence for the impact of inductive biases on cultural evolution." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1509 (September 19, 2008): 3503–14. http://dx.doi.org/10.1098/rstb.2008.0146.

Full text
Abstract:
The question of how much the outcomes of cultural evolution are shaped by the cognitive capacities of human learners has been explored in several disciplines, including psychology, anthropology and linguistics. We address this question through a detailed investigation of transmission chains, in which each person passes information to another along a chain. We review mathematical and empirical evidence that shows that under general conditions, and across experimental paradigms, the information passed along transmission chains will be affected by the inductive biases of the people involved—the constraints on learning and memory, which influence conclusions from limited data. The mathematical analysis considers the case where each person is a rational Bayesian agent. The empirical work consists of behavioural experiments in which human participants are shown to operate in the manner predicted by the Bayesian framework. Specifically, in situations in which each person's response is used to determine the data seen by the next person, people converge on concepts consistent with their inductive biases irrespective of the information seen by the first member of the chain. We then relate the Bayesian analysis of transmission chains to models of biological evolution, clarifying how chains of individuals correspond to population-level models and how selective forces can be incorporated into our models. Taken together, these results indicate how laboratory studies of transmission chains can provide information about the dynamics of cultural evolution and illustrate that inductive biases can have a significant impact on these dynamics.
APA, Harvard, Vancouver, ISO, and other styles
25

FEAR, CHRISTOPHER F., and DAVID HEALY. "Probabilistic reasoning in obsessive–compulsive and delusional disorders." Psychological Medicine 27, no. 1 (January 1997): 199–208. http://dx.doi.org/10.1017/s0033291796004175.

Full text
Abstract:
Background: Delusional disorder (DD) and obsessive–compulsive disorder (OCD) have been investigated in previous studies using probabilistic reasoning paradigms, and abnormalities in each group have been reported. No study to date has compared results between these groups. This study compares patients with these disorders with those who have both phenomena. Methods: Thirty subjects with DD, 29 with OCD and 16 with obsessive and delusional features were compared with 30 normal controls in a study of probabilistic reasoning using two different computer-based tasks involving a Bayesian paradigm. Results: Deluded subjects showed a ‘jump to conclusions’ reasoning style, but on a test that added a consequence to their choices did not differ from normals. OCD subjects deviated from Bayesian and control norms to a greater degree than did DD subjects. In subjects with mixed psychopathology, the presence of both phenomena appeared to ‘normalize’ these probability estimates. Conclusions: Our findings extend those of others but require cautious interpretation as to the role of probabilistic reasoning in the genesis of delusions or obsessions. Obsessionals in both the OCD and Mixed groups showed substantial deviation from Bayesian norms, suggesting that obsessionality leads to a reasoning style that is less ‘normal’ than that of delusionals. Further work is required to investigate clinical correlates of these findings, which provide modest support for the proposal that the combination of obsessions and delusions confers greater functional advantages than simply having delusions or obsessions.
APA, Harvard, Vancouver, ISO, and other styles
26

Bouslama, Mehdi, Meredith T. Bowen, Diogo C. Haussen, Seena Dehkharghani, Jonathan A. Grossberg, Letícia C. Rebello, Srikant Rangaraju, Michael R. Frankel, and Raul G. Nogueira. "Selection Paradigms for Large Vessel Occlusion Acute Ischemic Stroke Endovascular Therapy." Cerebrovascular Diseases 44, no. 5-6 (2017): 277–84. http://dx.doi.org/10.1159/000478537.

Full text
Abstract:
Background: Optimal patient selection methods for thrombectomy in large vessel occlusion stroke (LVOS) are yet to be established. We sought to evaluate the ability of different selection paradigms to predict favorable outcomes. Methods: Review of a prospectively collected database of endovascular patients with anterior circulation LVOS, adequate CT perfusion (CTP), National Institutes of Health Stroke Scale (NIHSS) ≥10 from September 2010 to March 2016. Patients were retrospectively assessed for thrombectomy eligibility by 4 mismatch criteria: Perfusion-Imaging Mismatch (PIM): between CTP-derived perfusion defect and ischemic core volumes; Clinical-Core Mismatch (CCM): between age-adjusted NIHSS and CTP core; Clinical-ASPECTS Mismatch (CAM-1): between age-adjusted NIHSS and ASPECTS; Clinical-ASPECTS Mismatch (CAM-2): between NIHSS and ASPECTS. Outcome measures were inclusion rates for each paradigm and their ability to predict good outcomes (90-day modified Rankin Scale 0-2). Results: Three hundred eighty-four patients qualified. CAM-2 and CCM had higher inclusion (89.3 and 82.3%) vs. CAM-1 (67.7%) and PIM (63.3%). Proportions of selected patients were statistically different except for PIM and CAM-1 (p = 0.19), with PIM having the highest disagreement. There were no differences in good outcome rates between PIM(+)/PIM(-) (52.2 vs. 48.5%; p = 0.51) and CAM-2(+)/CAM-2(-) (52.4 vs. 38.5%; p = 0.12). CCM(+) and CAM-1(+) had higher rates compared to nonselected counterparts (53.4 vs. 38.7%, p = 0.03; 56.6 vs. 38.6%; p = 0.002). The abilities of PIM, CCM, CAM-1, and CAM-2 to predict outcomes were similar according to the c-statistic, Akaike and Bayesian information criterion. Conclusions: For patients with NIHSS ≥10, PIM appears to disqualify more patients without improving outcomes. CCM may improve selection, combining a high inclusion rate with optimal outcome discrimination across (+) and (-) patients. Future studies are warranted.
APA, Harvard, Vancouver, ISO, and other styles
27

Li, Man, Feng Li, Jiahui Pan, Dengyong Zhang, Suna Zhao, Jingcong Li, and Fei Wang. "The MindGomoku: An Online P300 BCI Game Based on Bayesian Deep Learning." Sensors 21, no. 5 (February 25, 2021): 1613. http://dx.doi.org/10.3390/s21051613.

Full text
Abstract:
In addition to helping develop products that aid the disabled, brain–computer interface (BCI) technology can also become a modality of entertainment for all people. However, most BCI games cannot be widely promoted due to the poor control performance or because they easily cause fatigue. In this paper, we propose a P300 brain–computer-interface game (MindGomoku) to explore a feasible and natural way to play games by using electroencephalogram (EEG) signals in a practical environment. The novelty of this research is reflected in integrating the characteristics of game rules and the BCI system when designing BCI games and paradigms. Moreover, a simplified Bayesian convolutional neural network (SBCNN) algorithm is introduced to achieve high accuracy on limited training samples. To prove the reliability of the proposed algorithm and system control, 10 subjects were selected to participate in two online control experiments. The experimental results showed that all subjects successfully completed the game control with an average accuracy of 90.7% and played the MindGomoku an average of more than 11 min. These findings fully demonstrate the stability and effectiveness of the proposed system. This BCI system not only provides a form of entertainment for users, particularly the disabled, but also provides more possibilities for games.
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Liwei, Xiong Li, Zhuowen Tu, and Jiaya Jia. "Discriminative Clustering via Generative Feature Mapping." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1162–68. http://dx.doi.org/10.1609/aaai.v26i1.8305.

Full text
Abstract:
Existing clustering methods can be roughly classified into two categories: generative and discriminative approaches. Generative clustering aims to explain the data and thus is adaptive to the underlying data distribution; discriminative clustering, on the other hand, emphasizes finding partition boundaries. In this paper, we take the advantages of both models by coupling the two paradigms through a feature mapping derived from linearizing Bayesian classifiers. This feature mapping strategy maps nonlinear boundaries of generative clustering to linear ones in the feature space, where we explicitly impose the maximum entropy principle. We also propose a unified probabilistic framework, enabling solvers using standard techniques. Experiments on a variety of datasets bear out the notable benefit of our method in terms of adaptiveness and robustness.
APA, Harvard, Vancouver, ISO, and other styles
29

Lopes, Marcos Venícios de Oliveira, Viviane Martins da Silva, and Thelma Leite de Araujo. "A análise de diagnósticos de enfermagem sob uma perspectiva bayesiana." Revista da Escola de Enfermagem da USP 46, no. 4 (August 2012): 994–1000. http://dx.doi.org/10.1590/s0080-62342012000400030.

Full text
Abstract:
The use of Bayesian statistical techniques is an approach that has been well accepted and established in fields outside nursing as a paradigm for reducing the uncertainty present in a given clinical situation. The purpose of this article is to provide guidance on the specific use of the Bayesian paradigm in the analysis of nursing diagnoses. To this end, the steps and interpretations of Bayesian analysis are discussed; one theoretical and one practical example of Bayesian analysis of nursing diagnoses are presented; and a description is given of how the Bayesian approach can be used to summarize the available knowledge and provide point and interval estimates of the true probability of a nursing diagnosis. It is concluded that the application of Bayesian statistical methods is an important tool for the more accurate determination of the probabilities of nursing diagnoses.
APA, Harvard, Vancouver, ISO, and other styles
30

Hernández, Felipe, and Xu Liang. "Hybridizing Bayesian and variational data assimilation for high-resolution hydrologic forecasting." Hydrology and Earth System Sciences 22, no. 11 (November 9, 2018): 5759–79. http://dx.doi.org/10.5194/hess-22-5759-2018.

Full text
Abstract:
The success of real-time estimation and forecasting applications based on geophysical models has been possible thanks to the two main existing frameworks for the determination of the models' initial conditions: Bayesian data assimilation and variational data assimilation. However, while there have been efforts to unify these two paradigms, existing attempts struggle to fully leverage the advantages of both in order to face the challenges posed by modern high-resolution models – mainly related to model indeterminacy and steep computational requirements. In this article we introduce a hybrid algorithm called OPTIMISTS (Optimized PareTo Inverse Modeling through Integrated STochastic Search) which is targeted at non-linear high-resolution problems and that brings together ideas from particle filters (PFs), four-dimensional variational methods (4D-Var), evolutionary Pareto optimization, and kernel density estimation in a unique way. Streamflow forecasting experiments were conducted to test which specific configurations of OPTIMISTS led to higher predictive accuracy. The experiments were conducted on two watersheds: the Blue River (low resolution) using the VIC (Variable Infiltration Capacity) model and the Indiantown Run (high resolution) using the DHSVM (Distributed Hydrology Soil Vegetation Model). By selecting kernel-based non-parametric sampling, non-sequential evaluation of candidate particles, and through the multi-objective minimization of departures from the streamflow observations and from the background states, OPTIMISTS was shown to efficiently produce probabilistic forecasts with comparable accuracy to that obtained from using a particle filter. Moreover, the experiments demonstrated that OPTIMISTS scales well in high-resolution cases without imposing a significant computational overhead. With the combined advantages of allowing for fast, non-Gaussian, non-linear, high-resolution prediction, the algorithm shows the potential to increase the efficiency of operational prediction systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Yu, Tianyuan, Yongxin Yang, Da Li, Timothy Hospedales, and Tao Xiang. "Simple and Effective Stochastic Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3252–60. http://dx.doi.org/10.1609/aaai.v35i4.16436.

Full text
Abstract:
Stochastic neural networks (SNNs) are currently topical, with several paradigms being actively investigated including dropout, Bayesian neural networks, variational information bottleneck (VIB) and noise regularized learning. These neural network variants impact several major considerations, including generalization, network compression, robustness against adversarial attack and label noise, and model calibration. However, many existing networks are complicated and expensive to train, and/or only address one or two of these practical considerations. In this paper we propose a simple and effective stochastic neural network (SE-SNN) architecture for discriminative learning by directly modeling activation uncertainty and encouraging high activation variability. Compared to existing SNNs, our SE-SNN is simpler to implement and faster to train, and produces state of the art results on network compression by pruning, adversarial defense, learning with label noise, and model calibration.
APA, Harvard, Vancouver, ISO, and other styles
32

Abdellah, Ali R., Omar Abdulkareem Mahmood, Ruslan Kirichek, Alexander Paramonov, and Andrey Koucheryavy. "Machine Learning Algorithm for Delay Prediction in IoT and Tactile Internet." Future Internet 13, no. 12 (November 26, 2021): 304. http://dx.doi.org/10.3390/fi13120304.

Full text
Abstract:
The next-generation cellular systems, including fifth-generation cellular systems (5G), are empowered with the recent advances in artificial intelligence (AI) and other recent paradigms. The internet of things (IoT) and the tactile internet are paradigms that can be empowered with AI solutions and integrated with 5G systems to deliver novel services that impact the future. Machine learning technologies (ML) can understand examples of nonlinearity from the environment and are suitable for network traffic prediction. Network traffic prediction is one of the most active research areas that integrates AI with information networks. Traffic prediction is an integral approach to ensure security, reliability, and quality of service (QoS) requirements. Nowadays, it can be used in various applications, such as network monitoring, resource management, congestion control, network bandwidth allocation, network intrusion detection, etc. This paper performs time series prediction for IoT and tactile internet delays, using the k-step-ahead prediction approach with nonlinear autoregressive with external input (NARX)-enabled recurrent neural network (RNN). The ML was trained with four different training functions: Bayesian regularization backpropagation (Trainbr), Levenberg–Marquardt backpropagation (Trainlm), conjugate gradient backpropagation with Fletcher–Reeves updates (Traincgf), and the resilient backpropagation algorithm (Trainrp). The accuracy of the predicted delay was measured using three functions based on ML: mean square error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
APA, Harvard, Vancouver, ISO, and other styles
33

Crawford, Lindsay, Liye Zou, and Paul D. Loprinzi. "Oxygenation of the Prefrontal Cortex during Memory Interference." Journal of Clinical Medicine 8, no. 12 (November 22, 2019): 2055. http://dx.doi.org/10.3390/jcm8122055.

Full text
Abstract:
Background: Memory interference occurs when information (or memory) to be retrieved is interrupted by competing stimuli. Proactive interference (PI) occurs when previously acquired information interferes with newly acquired information, whereas retroactive interference (RI) occurs when newly acquired information interferes with previously acquired information. In animal paradigms, the prefrontal cortex (PFC) has been shown to help facilitate pattern separation, and ultimately, attenuate memory interference. Research evaluating the role of the PFC on memory interference among humans is, however, limited. The present study evaluated the relationship between PFC oxygenation and memory interference among humans, with the null hypothesis being that there is no association between PFC oxygenation and memory interference. Methods: A total of 74 participants (mean age = 20.8 years) completed the study. Participants completed a computerized memory interference task using the AB-DE AC-FG paradigm, with PFC oxyhemoglobin levels measured via functional near-infrared spectroscopy. Results: For PI, the change in oxygenated hemoglobin for encoding list 1 and retrieval of list 1 showed moderate evidence for the null hypothesis (BF01 = 4.05 and 3.28, respectively). For RI, the Bayesian analysis also established moderate evidence for the null hypothesis across all memory task time points. Conclusion: Our study demonstrates evidence for the null hypothesis regarding the relationship between PFC oxygenation and memory interference. Future work should continue to investigate this topic to identify mechanistic correlates of memory interference.
APA, Harvard, Vancouver, ISO, and other styles
34

AVEN, TERJE. "RISK ANALYSIS AND SCIENCE." International Journal of Reliability, Quality and Safety Engineering 11, no. 01 (March 2004): 1–15. http://dx.doi.org/10.1142/s0218539304001300.

Full text
Abstract:
In this paper we discuss the scientific basis of risk analysis when a Bayesian approach is the foundation of the analysis. We argue that the analysis cannot be judged by reference to the traditional science paradigms alone, such as the natural sciences, social sciences, mathematics and probability theory. There is a need for recognition of a risk analysis science which is related to the establishment of principles, methods and models to analyse, describe and communicate risk, in a decision-making context. The "goodness" of these principles, methods and models cannot be evaluated by reference to the accuracy in describing the world, as risk analysis is a tool for expressing and communicating uncertainty about the world. Empirical control is only relevant to some degree. An example of a risk analysis of an offshore installation is used to illustrate important issues.
APA, Harvard, Vancouver, ISO, and other styles
35

Holbrook, Andrew J., Xiang Ji, and Marc A. Suchard. "From viral evolution to spatial contagion: a biologically modulated Hawkes model." Bioinformatics 38, no. 7 (January 18, 2022): 1846–56. http://dx.doi.org/10.1093/bioinformatics/btac027.

Full text
Abstract:
Mutations sometimes increase contagiousness for evolving pathogens. During an epidemic, scientists use viral genome data to infer a shared evolutionary history and connect this history to geographic spread. We propose a model that directly relates a pathogen’s evolution to its spatial contagion dynamics—effectively combining the two epidemiological paradigms of phylogenetic inference and self-exciting process modeling—and apply this phylogenetic Hawkes process to a Bayesian analysis of 23,421 viral cases from the 2014 to 2016 Ebola outbreak in West Africa. The proposed model is able to detect individual viruses with significantly elevated rates of spatiotemporal propagation for a subset of 1610 samples that provide genome data. Finally, to facilitate model application in big data settings, we develop massively parallel implementations for the gradient and Hessian of the log-likelihood and apply our high-performance computing framework within an adaptively pre-conditioned Hamiltonian Monte Carlo routine. Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
36

Lavenir, Gabrielle, and Nicolas Bourgeois. "Old people, video games and french press: A topic model approach on a study about discipline, entertainment and self-improvement." MedieKultur: Journal of media and communication research 33, no. 63 (November 2, 2017): 20. http://dx.doi.org/10.7146/mediekultur.v33i63.24749.

Full text
Abstract:
Over the past few years, the French mainstream press has paid more and more attention to "silver gamers", adults over sixty who play video games. This article investigates the discursive and normative paradigms that underlie the unexpected enthusiasm of the French mainstream press for older adults who play video games. We use mixed methods on a corpus of French, Swiss and Belgian articles that mention both older people and video games. First, we produce topics, that is, sets of words related by their meanings and identified with a Bayesian statistical algorithm. Second, we cross the topic model results with a discursive analysis of selected articles. We preface the topic modeling's conclusions with a discussion of the representations of older people and video games in European French-language mainstream media. Our analysis explores how the press coverage of older people who play video games simultaneously erases moral panic about video games and reinforces the discourse of "successful ageing".
APA, Harvard, Vancouver, ISO, and other styles
37

Kroneisen, Meike, and Daniel W. Heck. "Interindividual Differences in the Sensitivity for Consequences, Moral Norms, and Preferences for Inaction: Relating Basic Personality Traits to the CNI Model." Personality and Social Psychology Bulletin 46, no. 7 (December 31, 2019): 1013–26. http://dx.doi.org/10.1177/0146167219893994.

Full text
Abstract:
Research on moral decision making usually focuses on two ethical principles: the principle of utilitarianism (= morality of an action is determined by its consequences) and the principle of deontology (= morality of an action is valued according to the adherence to moral norms regardless of the consequences). Criticism of traditional moral dilemma research includes the reproach that consequences and norms are confounded in standard paradigms. As a remedy, a multinomial model (the CNI model) was developed to disentangle and measure sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction versus action (I). In two studies, we examined the link of basic personality traits to moral judgments by fitting a hierarchical Bayesian version of the CNI model. As predicted, high Honesty–Humility was selectively associated with sensitivity for norms, whereas high Emotionality was selectively associated with sensitivity for consequences. However, Conscientiousness was not associated with a preference for inaction.
APA, Harvard, Vancouver, ISO, and other styles
38

NAGARAJAN, RADHAKRISHNAN, JANE E. AUBIN, and CHARLOTTE A. PETERSON. "ROBUST DEPENDENCIES AND STRUCTURES IN STEM CELL DIFFERENTIATION." International Journal of Bifurcation and Chaos 15, no. 04 (April 2005): 1503–14. http://dx.doi.org/10.1142/s0218127405012636.

Full text
Abstract:
Cell differentiation is a complex process governed by the timely activation of genes resulting in a specific phenotype or observable physical change. Recent reports have indicated heterogeneity in gene expression even amongst identical colonies (clones). While some genes are always expressed, others are expressed with a finite probability. In this report, a mathematical framework is provided to understand the mechanism of osteoblast (bone forming cell) differentiation. A systematic approach using a combination of entropy, pair-wise dependency and Bayesian approach is used to gain insight into the dependencies and underlying network structure. Pairwise dependencies are estimated using linear correlation and mutual information. An algorithm is proposed to identify statistically significant mutual information estimates. The robustness of the dependencies and the network structure to decreasing number of colonies (colony size) and perturbation is investigated. Perturbation is achieved by generating bootstrap samples. The methods discussed are generic in nature and can be extended to similar experimental paradigms.
APA, Harvard, Vancouver, ISO, and other styles
39

Grzegorczyk, Marco, Dirk Husmeier, and Jörg Rahnenführer. "Modelling Nonstationary Gene Regulatory Processes." Advances in Bioinformatics 2010 (July 20, 2010): 1–17. http://dx.doi.org/10.1155/2010/749848.

Full text
Abstract:
An important objective in systems biology is to infer gene regulatory networks from postgenomic data, and dynamic Bayesian networks have been widely applied as a popular tool to this end. The standard approach for nondiscretised data is restricted to a linear model and a homogeneous Markov chain. Recently, various generalisations based on changepoint processes and free allocation mixture models have been proposed. The former aim to relax the homogeneity assumption, whereas the latter are more flexible and, in principle, more adequate for modelling nonlinear processes. In our paper, we compare both paradigms and discuss theoretical shortcomings of the latter approach. We show that a model based on the changepoint process yields systematically better results than the free allocation model when inferring nonstationary gene regulatory processes from simulated gene expression time series. We further cross-compare the performance of both models on three biological systems: macrophages challenged with viral infection, circadian regulation in Arabidopsis thaliana, and morphogenesis in Drosophila melanogaster.
APA, Harvard, Vancouver, ISO, and other styles
40

Bellucci, Gabriele. "A Model of Trust." Games 13, no. 3 (May 17, 2022): 39. http://dx.doi.org/10.3390/g13030039.

Full text
Abstract:
Trust is central to a large variety of social interactions. Different research fields have empirically and theoretically investigated trust, observing trusting behaviors in different situations and pinpointing their different components and constituents. However, a unifying, computational formalization of those diverse components and constituents of trust is still lacking. Previous work has mainly used computational models borrowed from other fields and developed for other purposes to explain trusting behaviors in empirical paradigms. Here, I computationally formalize verbal models of trust in a simple model (i.e., vulnerability model) that combines current and prospective action values with beliefs and expectancies about a partner’s behavior. By using the classic investment game (IG)—an economic game thought to capture some important features of trusting behaviors in social interactions—I show how variations of a single parameter of the vulnerability model generate behaviors that can be interpreted as different “trust attitudes”. I then show how these behavioral patterns change as a function of an individual’s loss aversion and expectations of the partner’s behavior. I finally show how the vulnerability model can be easily extended in a novel IG paradigm to investigate inferences on different traits of a partner. In particular, I will focus on benevolence and competence—two character traits that have previously been described as determinants of trustworthiness impressions central to trust. The vulnerability model can be employed as is or as a utility function within more complex Bayesian frameworks to fit participants’ behavior in different social environments where actions are associated with subjective values and weighted by individual beliefs about others’ behaviors. Hence, the vulnerability model provides an important building block for future theoretical and empirical work across a variety of research fields.
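A minimal sketch of the expected-value logic behind such a vulnerability-style model, in which trusting exposes the investor to a possible loss weighted by loss aversion, might look as follows. The parameterization (half of the multiplied amount returned, a single loss-aversion coefficient) and all names are illustrative assumptions, not the paper's actual model.

```python
def subjective_value(amount, p_return=0.5, loss_aversion=2.0, multiplier=3):
    """Change in subjective value (relative to keeping the endowment) of
    investing `amount`: with probability p_return the partner sends back
    half of the multiplied amount; otherwise the amount is lost, and the
    loss is weighted by loss_aversion."""
    gain_if_returned = (multiplier / 2 - 1) * amount  # e.g. +0.5 * amount
    loss_if_betrayed = amount
    return (p_return * gain_if_returned
            - (1 - p_return) * loss_aversion * loss_if_betrayed)

def best_investment(endowment=10, **kwargs):
    """Whole-unit investment with the highest subjective value."""
    return max(range(endowment + 1), key=lambda a: subjective_value(a, **kwargs))
```

Raising the expected probability of reciprocation or lowering loss aversion flips the decision from keeping everything to investing everything, i.e., qualitatively different "trust attitudes" emerge from varying a single parameter, which mirrors the abstract's central claim.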
APA, Harvard, Vancouver, ISO, and other styles
41

Brugnetti, Ennio, Guido Coletta, Fabrizio De Caro, Alfredo Vaccaro, and Domenico Villacci. "Enabling Methodologies for Predictive Power System Resilience Analysis in the Presence of Extreme Wind Gusts." Energies 13, no. 13 (July 7, 2020): 3501. http://dx.doi.org/10.3390/en13133501.

Full text
Abstract:
Modern power system operation should comply with strict reliability and security constraints, which aim at guaranteeing correct system operation even in the presence of severe internal and external disturbances. Amongst the possible phenomena perturbing correct system operation, the predictive assessment of the impacts induced by extreme weather events has been considered one of the most critical issues to address, since such events can induce multiple, large-scale system contingencies. In this context, the development of new computing paradigms for resilience analysis has been recognized as a very promising research direction. To address this issue, this paper proposes two methodologies, based on a Time Varying Markov Chain and a Dynamic Bayesian Network, for assessing system resilience against extreme wind gusts. The main difference between the proposed methodologies and traditional solution techniques is the improved capability in modelling the occurrence of multiple component faults and repairs, which cannot be neglected in the presence of extreme events, as experienced worldwide by several Transmission System Operators. Several case studies and benchmark comparisons are presented and discussed in order to demonstrate the effectiveness of the proposed methods in assessing power system resilience in realistic operation scenarios.
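The gist of a time-varying Markov chain for component availability under wind stress can be sketched with a two-state (up/down) toy model whose failure rate depends on the weather. The transition probabilities and names here are assumed for illustration and are not taken from the paper.

```python
def availability_over_time(wind_profile, base_fail=0.01, wind_fail=0.2,
                           repair=0.1, p_up=1.0):
    """Two-state Markov chain for one component: each step, an up
    component fails with probability base_fail (or wind_fail during an
    extreme-wind step) and a down component is repaired with probability
    `repair`. Returns P(up) after each step."""
    history = []
    for windy in wind_profile:
        fail = wind_fail if windy else base_fail
        p_up = p_up * (1 - fail) + (1 - p_up) * repair
        history.append(p_up)
    return history
```

During the gust the per-step failure probability jumps and availability decays toward a lower quasi-steady state; once calm weather returns, the repair process pulls it back up. This fault-and-repair dynamic under time-varying hazard is exactly what the abstract says standard (homogeneous) techniques cannot capture.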
APA, Harvard, Vancouver, ISO, and other styles
42

Liddell, Torrin M., and John K. Kruschke. "Ostracism and fines in a public goods game with accidental contributions: The importance of punishment type." Judgment and Decision Making 9, no. 6 (November 2014): 523–47. http://dx.doi.org/10.1017/s1930297500006409.

Full text
Abstract:
Punishment is an important method for discouraging uncooperative behavior. We use a novel design for a public goods game in which players have explicit intended contributions with accidentally changed actual contributions, and in which players can apply costly fines or ostracism. Moreover, all players except the subject are automated, whereby we control the intended contributions, actual contributions, costly fines, and ostracisms experienced by the subject. We assess the subject’s utilization of other players’ intended and actual contributions when making decisions to fine or ostracize. Hierarchical Bayesian logistic regression provides robust estimates. We find that subjects emphasize actual contribution more than intended contribution when deciding to fine, but emphasize intended contribution more than actual contribution when deciding to ostracize. We also find that the efficacy of past punishment, in terms of changing the contributions of the punished player, influences the type of punishment selected. Finally, we find that the punishment norms of the automated players affect the punishments performed by the subject. These novel paradigms and analyses indicate that punishment is flexible and adaptive, contrary to some evolutionary theories that predict inflexible punishments that emphasize outcomes.
APA, Harvard, Vancouver, ISO, and other styles
43

Safron, Adam. "The Radically Embodied Conscious Cybernetic Bayesian Brain: From Free Energy to Free Will and Back Again." Entropy 23, no. 6 (June 20, 2021): 783. http://dx.doi.org/10.3390/e23060783.

Full text
Abstract:
Drawing from both enactivist and cognitivist perspectives on mind, I propose that explaining teleological phenomena may require reappraising both “Cartesian theaters” and mental homunculi in terms of embodied self-models (ESMs), understood as body maps with agentic properties, functioning as predictive-memory systems and cybernetic controllers. Quasi-homuncular ESMs are suggested to constitute a major organizing principle for neural architectures due to their initial and ongoing significance for solutions to inference problems in cognitive (and affective) development. Embodied experiences provide foundational lessons in learning curriculums in which agents explore increasingly challenging problem spaces, so answering an unresolved question in Bayesian cognitive science: what are biologically plausible mechanisms for equipping learners with sufficiently powerful inductive biases to adequately constrain inference spaces? Drawing on models from neurophysiology, psychology, and developmental robotics, I describe how embodiment provides fundamental sources of empirical priors (as reliably learnable posterior expectations). If ESMs play this kind of foundational role in cognitive development, then bidirectional linkages will be found between all sensory modalities and frontal-parietal control hierarchies, so infusing all senses with somatic-motoric properties, thereby structuring all perception by relevant affordances, so solving frame problems for embodied agents. Drawing upon the Free Energy Principle and Active Inference framework, I describe a particular mechanism for intentional action selection via consciously imagined (and explicitly represented) goal realization, where contrasts between desired and present states influence ongoing policy selection via predictive coding mechanisms and backward-chained imaginings (as self-realizing predictions). 
This embodied developmental legacy suggests a mechanism by which imaginings can be intentionally shaped by (internalized) partially-expressed motor acts, so providing means of agentic control for attention, working memory, imagination, and behavior. I further describe the nature(s) of mental causation and self-control, and also provide an account of readiness potentials in Libet paradigms wherein conscious intentions shape causal streams leading to enaction. Finally, I provide neurophenomenological handlings of prototypical qualia including pleasure, pain, and desire in terms of self-annihilating free energy gradients via quasi-synesthetic interoceptive active inference. In brief, this manuscript is intended to illustrate how radically embodied minds may create foundations for intelligence (as capacity for learning and inference), consciousness (as somatically-grounded self-world modeling), and will (as deployment of predictive models for enacting valued goals).
APA, Harvard, Vancouver, ISO, and other styles
44

Bihl, Trevor, Todd Jenkins, Chadwick Cox, Ashley DeMange, Kerry Hill, and Edmund Zelnio. "From Lab to Internship and Back Again: Learning Autonomous Systems through Creating a Research and Development Ecosystem." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9635–43. http://dx.doi.org/10.1609/aaai.v33i01.33019635.

Full text
Abstract:
As research and development (R&D) in autonomous systems progresses further, more interdisciplinary knowledge is needed from domains as diverse as artificial intelligence (AI), biology, psychology, modeling and simulation (M&S), and robotics. Such R&D efforts are necessarily interdisciplinary in nature and require technical as well as further soft skills of teamwork, communication and integration. In this paper, we introduce a 14-week, summer-long internship for developing these skills in undergraduate science and engineering interns through R&D. The internship was designed to be modular and divided into three parts: training, innovation, and application/integration. The end result of the internship was 1) the development of an M&S ecosystem for autonomy concepts, 2) development and robotics testing of reasoning methods through both Bayesian methods and cognitive models of the basal ganglia, and 3) a process for future internships within the modular construct. Through collaboration with full-time professional staff, who actively learned with the interns, this internship incorporates a feedback loop to educate and perform fundamental R&D. Future iterations of this internship can leverage the M&S ecosystem and adapt the modular internship framework to focus on different innovations, learning paradigms, and/or applications.
APA, Harvard, Vancouver, ISO, and other styles
45

Maxwell, Joshua, Lin Fang, and Joshua Carlson. "Do Carryover Effects Influence Attentional Bias to Threat in the Dot-Probe Task?" Journal of Trial and Error 2, no. 1 (March 3, 2022): 70–76. http://dx.doi.org/10.36850/e9.

Full text
Abstract:
Threatening stimuli are often thought to have sufficient potency to bias attention, relative to neutral stimuli. Researchers and clinicians opt for frequently used paradigms to measure such bias, such as the dot-probe task. Bias to threat in the dot-probe task is indicated by a congruency effect, i.e., faster responses on congruent trials than incongruent trials (also referred to as attention capture). However, recent studies have found that such congruency effects are small and suffer from poor internal reliability. One explanation for the low effect sizes and poor reliability is carryover effects of threat – greater congruency effects on trials following a congruent trial relative to trials following an incongruent trial. In the current study, we investigated carryover effects of threat with two large samples of healthy undergraduate students who completed a typical dot-probe task. We found a small congruency effect for fearful faces (Experiment 1, n = 241, d = 0.15) and a reverse congruency effect for threatening images (Experiment 2, n = 82, d = 0.11), but no carryover effects for threat were observed in either case. Bayesian analyses revealed moderate to strong evidence in favor of the null hypothesis. We conclude that carryover effects for threat do not influence attention bias for threat.
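Evidence in favor of a null hypothesis, as reported above, requires a Bayesian quantity such as a Bayes factor. A rough stdlib sketch using the common BIC approximation BF01 ~= exp((BIC_alt - BIC_null) / 2) for a one-sample mean follows; this is a generic illustration of the technique, not the authors' actual analysis.

```python
import math

def bic_bayes_factor_01(xs):
    """Approximate Bayes factor in favour of the null (mean = 0) over a
    free-mean alternative, via BF01 ~= exp((BIC_alt - BIC_null) / 2).
    Assumes non-degenerate data (not all observations identical)."""
    n = len(xs)
    mean = sum(xs) / n
    ss_null = sum(x * x for x in xs)                       # residuals around 0
    ss_alt = sum((x - mean) ** 2 for x in xs) or 1e-12     # residuals around the mean
    bic_null = n * math.log(ss_null / n)                   # 0 free mean parameters
    bic_alt = n * math.log(ss_alt / n) + math.log(n)       # 1 free parameter: the mean
    return math.exp((bic_alt - bic_null) / 2)
```

By convention, BF01 > 3 is read as moderate evidence for the null and BF01 < 1/3 as moderate evidence against it, which is the kind of quantification behind statements like "moderate to strong evidence in favor of the null hypothesis".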
APA, Harvard, Vancouver, ISO, and other styles
46

McDonald, Kelsey R., John M. Pearson, and Scott A. Huettel. "Dorsolateral and dorsomedial prefrontal cortex track distinct properties of dynamic social behavior." Social Cognitive and Affective Neuroscience 15, no. 4 (April 2020): 383–93. http://dx.doi.org/10.1093/scan/nsaa053.

Full text
Abstract:
Understanding how humans make competitive decisions in complex environments is a key goal of decision neuroscience. Typical experimental paradigms constrain behavioral complexity (e.g. choices in discrete-play games), and thus, the underlying neural mechanisms of dynamic social interactions remain incompletely understood. Here, we collected fMRI data while humans played a competitive real-time video game against both human and computer opponents, and then, we used Bayesian non-parametric methods to link behavior to neural mechanisms. Two key cognitive processes characterized behavior in our task: (i) the coupling of one’s actions to another’s actions (i.e. opponent sensitivity) and (ii) the advantageous timing of a given strategic action. We found that the dorsolateral prefrontal cortex displayed selective activation when the subject’s actions were highly sensitive to the opponent’s actions, whereas activation in the dorsomedial prefrontal cortex increased proportionally to the advantageous timing of actions to defeat one’s opponent. Moreover, the temporoparietal junction tracked both of these behavioral quantities as well as opponent social identity, indicating a more general role in monitoring other social agents. These results suggest that brain regions that are frequently implicated in social cognition and value-based decision-making also contribute to the strategic tracking of the value of social actions in dynamic, multi-agent contexts.
APA, Harvard, Vancouver, ISO, and other styles
47

Stock, Ann-Kathrin, Annett Werner, Paul Kuntke, Miriam-Sophie Petasch, Wiebke Bensmann, Nicolas Zink, Anna Helin Koyun, Boris B. Quednow, and Christian Beste. "Gamma-Aminobutyric Acid and Glutamate Concentrations in the Striatum and Anterior Cingulate Cortex Not Found to Be Associated with Cognitive Flexibility." Brain Sciences 13, no. 8 (August 11, 2023): 1192. http://dx.doi.org/10.3390/brainsci13081192.

Full text
Abstract:
Behavioral flexibility and goal-directed behavior heavily depend on fronto-striatal networks. Within these circuits, gamma-aminobutyric acid (GABA) and glutamate play an important role in (motor) response inhibition, but it has remained largely unclear whether they are also relevant for cognitive inhibition. We hence investigated the functional role of these transmitters for cognitive inhibition during cognitive flexibility. Healthy young adults performed two paradigms assessing different aspects of cognitive flexibility. Magnetic resonance spectroscopy (MRS) was used to quantify GABA+ and total glutamate/glutamine (Glx) levels in the striatum and anterior cingulate cortex (ACC) referenced to N-acetylaspartate (NAA). We observed typical task switching and backward inhibition effects, but striatal and ACC concentrations of GABA+/NAA and Glx/NAA were not associated with cognitive flexibility in a functionally relevant manner. The assumption of null effects was underpinned by Bayesian testing. These findings suggest that behavioral and cognitive inhibition are functionally distinct faculties, that depend on (at least partly) different brain structures and neurotransmitter systems. While previous studies consistently demonstrated that motor response inhibition is modulated by ACC and striatal GABA levels, our results suggest that the functionally distinct cognitive inhibition required for successful switching is not, or at least to a much lesser degree, modulated by these factors.
APA, Harvard, Vancouver, ISO, and other styles
48

Allen, Ronald J., and Michael S. Pardo. "Relative plausibility and its critics." International Journal of Evidence & Proof 23, no. 1-2 (January 3, 2019): 5–59. http://dx.doi.org/10.1177/1365712718813781.

Full text
Abstract:
Within legal scholarship there is a tendency to use (perhaps overuse) “paradigm shift” in ways far removed from the process famously described by Thomas Kuhn. Within the field of evidence, however, a phenomenon very similar to a paradigm shift, in the Kuhnian sense, is occurring. Although not on the scale of the transformation from Newtonian to Einsteinian physics or other tectonic shifts in science, the best understanding of juridical proof is shifting from probabilism to explanationism. For literally hundreds of years, proof at trial was assumed to be probabilistic. This assumption was given sustained scholarly attention and support beginning with the 1968 publication of John Kaplan’s path-breaking article that generated a rich literature explaining virtually all aspects of juridical proof as probabilistic, from the basic nature of relevancy through the processing of information to the final decision about the facts. Although probabilism quickly became the dominant paradigm, some analytical difficulties were detected quite early (“anomalies” or “irritants” in the words of Kuhn), beginning with L. Jonathan Cohen’s demonstration of certain proof paradoxes. These were extended by Ronald Allen, who also demonstrated the incompatibility of Bayesian reasoning with trials and proposed an analytical alternative. Again a complex literature ensued with the defenders of the dominant paradigm attempting to explain away the anomalies or to shield the probabilistic paradigm from their potentially corrosive effects (in what in fact on a very small scale is precisely what Kuhn explained and predicted with respect to paradigm shifts in science). Over the last two decades, these anomalies have become too irritating to ignore, and the strengths of the competing paradigm involving explanatory inferences (referred to as the relative plausibility theory) have become too persuasive to dismiss. Thus the paradigm shift that the field is now experiencing. 
We provide here a summary of the relative plausibility theory and its improvement on the probabilistic paradigm. As Kuhn noted, not everybody gets on board when paradigms shift; there are holdouts, dissenters, and objectors. Three major efforts to demonstrate the inadequacies of relative plausibility have recently been published. We analyze them here to demonstrate that their objections are either misplaced or unavailing, leaving relative plausibility as the best explanation of juridical proof. It is interesting to note that two of the three critiques that we discuss actually agree on the inadequacies of the probabilistic paradigm (they provide alternatives). The third concedes that explanationism may provide a better overall account of juridical proof but tries to resuscitate a probabilistic interpretation of burdens of proof in light of one particular analytical difficulty (i.e., the conjunction problem, which arises from the fact that proof burdens apply to the individual elements of crimes, civil claims, and defenses rather than a party’s case as a whole). In analyzing the alternative positions proposed by our critics, we demonstrate that their accounts each fail to provide a better explanation than relative plausibility.
APA, Harvard, Vancouver, ISO, and other styles
49

Longarzo, Mariachiara, Carlo Cavaliere, Giulia Mele, Stefano Tozza, Liberatore Tramontano, Vincenzo Alfano, Marco Aiello, Marco Salvatore, and Dario Grossi. "Microstructural Changes in Motor Functional Conversion Disorder: Multimodal Imaging Approach on a Case." Brain Sciences 10, no. 6 (June 18, 2020): 385. http://dx.doi.org/10.3390/brainsci10060385.

Full text
Abstract:
Background: Functional motor conversion disorders are characterized by neurological symptoms unrelated to brain structural lesions. The present study was conducted on a woman presenting motor symptoms causing motor dysfunction, using advanced multimodal neuroimaging techniques together with electrophysiological and neuropsychological assessment. Methods. The patient underwent fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET-CT) and functional magnetic resonance imaging (fMRI) with both task and resting-state paradigms and was compared with 11 healthy matched controls. To test differences in structural parameters, Bayesian comparison was performed. To test differences in functional parameters, a first- and second-level analysis was performed in task fMRI, while a seed-to-seed analysis to evaluate the connections between brain regions and identify intersubject variations was performed in resting-state fMRI. Results. FDG-PET showed two patterns of brain metabolism, involving the cortical and subcortical structures. Regarding the diffusion data, microstructural parameters were altered for U-shape fibers for the hand and feet regions. Resting-state analysis showed hypoconnectivity between the parahippocampal and superior temporal gyrus. Neurophysiological assessment showed no alterations. Finally, an initial cognitive impairment was observed, paralleled by an anxiety and mild depressive state. Conclusions. While we confirmed no structural alterations sustaining this functional motor disorder, we report microstructural changes in sensory–motor integration for both the hand and feet regions that could functionally support clinical manifestations.
APA, Harvard, Vancouver, ISO, and other styles
50

Bahnmueller, Julia, Krzysztof Cipora, Silke Melanie Göbel, Hans-Christoph Nuerk, and Mojtaba Soltanlou. "Pick the smaller number: No influence of linguistic markedness on three-digit number processing." Journal of Numerical Cognition 7, no. 3 (November 30, 2021): 295–307. http://dx.doi.org/10.5964/jnc.6057.

Full text
Abstract:
The symbolic number comparison task has been widely used to investigate the cognitive representation and underlying processes of multi-digit number processing. The standard procedure to establish numerical distance and compatibility effects in such number comparison paradigms usually entails asking participants to indicate the larger of two presented multi-digit Arabic numbers rather than to indicate the smaller number. In terms of linguistic markedness, this procedure includes the unmarked/base form in the task instruction (i.e., large). Here we evaluate distance and compatibility effects in a three-digit number comparison task observed in Bahnmueller et al. (2015, https://doi.org/10.3389/fpsyg.2015.01216) using a marked task instruction (i.e., ‘pick the smaller number’). Moreover, we aimed at clarifying whether the markedness of task instruction influences common numerical effects and especially componential processing as indexed by compatibility effects. We instructed German- and English-speaking adults (N = 52) to indicate the smaller number in a three-digit number comparison task as opposed to indicating the larger number in Bahnmueller et al. (2015). We replicated standard effects of distance and compatibility in the new pick the smaller number experiment. Moreover, when comparing our findings to Bahnmueller et al. (2015), numerical effects did not differ significantly between the two studies as indicated by both frequentist and Bayesian analyses. Taken together our data suggest that distance and compatibility effects alongside componential processing of multi-digit numbers are rather robust against variations of linguistic markedness of task instructions.
APA, Harvard, Vancouver, ISO, and other styles
