Academic literature on the topic 'Bayesian intelligence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayesian intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bayesian intelligence"

1

Zelterman, Daniel. "Bayesian Artificial Intelligence." Technometrics 47, no. 1 (February 2005): 101–2. http://dx.doi.org/10.1198/tech.2005.s836.

2

Ramoni, Marco F. "Bayesian Artificial Intelligence." Journal of the American Statistical Association 100, no. 471 (September 2005): 1096–97. http://dx.doi.org/10.1198/jasa.2005.s39.

3

Jensen, Finn V. "Bayesian Artificial Intelligence." Pattern Analysis and Applications 7, no. 2 (May 26, 2004): 221–23. http://dx.doi.org/10.1007/s10044-004-0214-5.

4

Vreeswijk, Gerard A. W. "Book Review: Bayesian Artificial Intelligence." Artificial Intelligence and Law 11, no. 4 (2003): 289–98. http://dx.doi.org/10.1023/b:arti.0000045970.25670.25.

5

Pascual-Garcia, Erica, and Guillermo De la Torre-Gea. "Bayesian Analysis to the experiences of corruption through Artificial Intelligence." International Journal of Trend in Scientific Research and Development 2, no. 2 (February 28, 2018): 103–7. http://dx.doi.org/10.31142/ijtsrd2443.

6

Muhsina, Elvanisa Ayu, and Nurochman Nurochman. "SISTEM PAKAR REKOMENDASI PROFESI BERDASARKAN MULTIPLE INTELLIGENCES MENGGUNAKAN TEOREMA BAYESIAN." JISKA (Jurnal Informatika Sunan Kalijaga) 2, no. 1 (August 29, 2017): 16. http://dx.doi.org/10.14421/jiska.2017.21-03.

Abstract:
Intelligence is perhaps one of the most logical ways to determine how smart a person is. This has long been a problem in hiring, because many attractive jobs require a high GPA, yet an employee with a high GPA does not always fit the skills and role of the position, and may be unable to understand and maintain the expected performance. An expert system is therefore needed to recommend professions based on intelligence. This research uses a Bayesian-theorem calculation to find probability values and job recommendations. The user's MI (Multiple Intelligences) values, the probability of each MI given a job, and the prior probability of each job from previous results, without any other evidence, produce the calculation variables. Test results show that the system's recommendations match expert recommendations in 81.25% of cases, and 100% of users state that the system runs well. An expert-system usability test shows that 80% of users strongly agree, 15.7% agree, and 4.3% are neutral. Keywords: Multiple Intelligences, Profession, Bayesian theorem
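The Bayes-theorem calculation this abstract describes can be sketched in a few lines. The sketch below is a hypothetical illustration, not the authors' system: the professions, MI traits, and every probability are invented.

```python
# Hypothetical sketch of a Bayes-theorem profession recommender:
# posterior P(job | MI evidence) from invented priors and likelihoods.

def bayes_posterior(priors, likelihoods, evidence):
    """P(job | evidence) is proportional to P(job) * product of P(trait | job)."""
    unnorm = {}
    for job, prior in priors.items():
        p = prior
        for trait in evidence:
            p *= likelihoods[job].get(trait, 1e-6)  # small floor for unseen traits
        unnorm[job] = p
    z = sum(unnorm.values())
    return {job: p / z for job, p in unnorm.items()}

priors = {"engineer": 0.5, "teacher": 0.5}            # P(job), e.g. from past placements
likelihoods = {                                        # P(MI trait | job), invented
    "engineer": {"logical": 0.8, "linguistic": 0.3},
    "teacher":  {"logical": 0.4, "linguistic": 0.9},
}
post = bayes_posterior(priors, likelihoods, ["logical"])
# "logical" evidence shifts the recommendation toward "engineer"
```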
7

TERZIYAN, VAGAN. "A BAYESIAN METANETWORK." International Journal on Artificial Intelligence Tools 14, no. 03 (June 2005): 371–84. http://dx.doi.org/10.1142/s0218213005002156.

Abstract:
The Bayesian network (BN) is known to be one of the most solid probabilistic modeling tools. The theory of BNs already provides several useful modifications of the classical network. Among these are context-enabled networks, such as multilevel networks or recursive multinets, which can provide separate BN modeling for different combinations of contextual features' values. The main contribution of this paper is the multilevel probabilistic meta-model (Bayesian Metanetwork), which is an extension of the traditional BN and a modification of recursive multinets. It assumes that interoperability between component networks can be modeled by another BN. A Bayesian Metanetwork is a set of BNs put on each other in such a way that the conditional or unconditional probability distributions associated with the nodes of every previous probabilistic network depend on the probability distributions associated with the nodes of the next network. We treat the parameters (probability distributions) of a BN as random variables and allow conditional dependencies between these probabilities. Several cases of two-level Bayesian Metanetworks are presented, which consist of interrelated predictive and contextual BN models.
8

Pate-Cornell, Elisabeth. "Fusion of Intelligence Information: A Bayesian Approach." Risk Analysis 22, no. 3 (June 2002): 445–54. http://dx.doi.org/10.1111/0272-4332.00056.

9

Angelopoulos, Nicos, and James Cussens. "Bayesian learning of Bayesian networks with informative priors." Annals of Mathematics and Artificial Intelligence 54, no. 1-3 (November 2008): 53–98. http://dx.doi.org/10.1007/s10472-009-9133-x.

10

Sanghai, S., P. Domingos, and D. Weld. "Relational Dynamic Bayesian Networks." Journal of Artificial Intelligence Research 24 (December 2, 2005): 759–97. http://dx.doi.org/10.1613/jair.1625.

Abstract:
Stochastic processes that involve the creation of objects and relations over time are widespread, but relatively poorly studied. For example, accurate fault diagnosis in factory assembly processes requires inferring the probabilities of erroneous assembly operations, but doing this efficiently and accurately is difficult. Modeled as dynamic Bayesian networks, these processes have discrete variables with very large domains and extremely high dimensionality. In this paper, we introduce relational dynamic Bayesian networks (RDBNs), which are an extension of dynamic Bayesian networks (DBNs) to first-order logic. RDBNs are a generalization of dynamic probabilistic relational models (DPRMs), which we had proposed in our previous work to model dynamic uncertain domains. We first extend the Rao-Blackwellised particle filtering described in our earlier work to RDBNs. Next, we lift the assumptions associated with Rao-Blackwellization in RDBNs and propose two new forms of particle filtering. The first one uses abstraction hierarchies over the predicates to smooth the particle filter's estimates. The second employs kernel density estimation with a kernel function specifically designed for relational domains. Experiments show these two methods greatly outperform standard particle filtering on the task of assembly plan execution monitoring.
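For readers new to the underlying machinery: the particle filtering that RDBNs extend can be illustrated with a minimal bootstrap filter on a one-dimensional random-walk model. This is a plain baseline sketch, not the Rao-Blackwellised or relational variants the paper develops; the model parameters are invented.

```python
import math
import random

def particle_filter(observations, n=500, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    rng = random.Random(seed)
    particles = [0.0] * n
    estimates = []
    for y in observations:
        # propagate each particle through the transition model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        z = sum(weights)
        weights = [w / z for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling -- the step behind sample impoverishment
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

est = particle_filter([0.0, 1.0, 2.0, 3.0])
# the posterior-mean estimates track the rising observations
```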

Dissertations / Theses on the topic "Bayesian intelligence"

1

Horsch, Michael C. "Dynamic Bayesian networks." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/28909.

Abstract:
Given the complexity of the domains for which we would like to use computers as reasoning engines, an automated reasoning process will often be required to perform under some state of uncertainty. Probability provides a normative theory with which uncertainty can be modelled. Without assumptions of independence from the domain, naive computations of probability are intractable. If probability theory is to be used effectively in AI applications, the independence assumptions from the domain should be represented explicitly, and used to the greatest possible advantage. One such representation is a class of mathematical structures called Bayesian networks. This thesis presents a framework for dynamically constructing and evaluating Bayesian networks. In particular, this thesis investigates the issue of representing probabilistic knowledge which has been abstracted from particular individuals to which this knowledge may apply, resulting in a simple representation language. This language makes the independence assumptions for a domain explicit. A simple procedure is provided for building networks from knowledge expressed in this language. The mapping between the knowledge base and network created is precisely defined, so that the network always represents a consistent probability distribution. Finally, this thesis investigates the issue of modifying the network after some evaluation has taken place, and several techniques for correcting the state of the resulting model are derived.
Faculty of Science, Department of Computer Science, Graduate.
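The kind of explicit independence assumption the thesis discusses can be illustrated with the smallest possible network: two nodes, queried by enumeration. The probability tables below are invented for illustration.

```python
# Two-node network Rain -> WetGrass, with the independence structure
# explicit in the tables; query P(Rain | Wet) by enumeration.

p_rain = {True: 0.2, False: 0.8}            # P(Rain)
p_wet_given_rain = {True: 0.9, False: 0.1}  # P(Wet | Rain)

def posterior_rain_given_wet():
    """Bayes' rule over the enumerated joint P(Rain, Wet)."""
    joint = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
    return joint[True] / (joint[True] + joint[False])

# P(Rain | Wet) = 0.18 / 0.26, roughly 0.692: wet grass raises belief in rain
```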
2

Edgington, Padraic D. "Modular Bayesian filters." Thesis, University of Louisiana at Lafayette, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3712276.

Abstract:

In this dissertation, I introduce modularization as a means of efficiently solving problems represented by dynamic Bayesian networks, and study the properties and effects of modularization relative to traditional solutions. Modularizing a Bayesian filter allows its results to be calculated faster than with a traditional Bayesian filter. Traditional Bayesian filters can have issues when large problems must be solved within a short period of time. Modularization addresses this issue by dividing the full problem into a set of smaller problems that can then be solved with separate Bayesian filters. Since the time complexity of Bayesian filters is greater than linear, solving several smaller problems is cheaper than solving a single large problem, and the cost of reassembling the results from the smaller problems is comparable to the cost of the smaller problems themselves. This document introduces the concept of both exact and approximate modular Bayesian filters and describes how to design each of the elements of a modular Bayesian filter. These concepts are clarified by a series of examples from the realm of vehicle state estimation, including the results of each stage of the algorithm's creation in a simulated environment. A final section shows the implementation of a modular Bayesian filter in a real-world problem: vehicle state estimation in the face of transitory sensor failure. This section also includes all of the attending algorithms that allow the problem to be solved accurately and in real time.

3

Hanif, A. "Computational intelligence sequential Monte Carlos for recursive Bayesian estimation." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1403732/.

Abstract:
Recursive Bayesian estimation using sequential Monte Carlo methods is a powerful numerical technique for understanding the latent dynamics of non-linear non-Gaussian dynamical systems. Classical sequential Monte Carlo methods suffer from weight degeneracy, in which the number of distinct particles collapses. Traditionally this is addressed by resampling, which effectively replaces high-weight particles with many particles with high inter-particle correlation. Frequent resampling, however, leads to a lack of diversity amongst the particle set, in a problem known as sample impoverishment. Traditional sequential Monte Carlo methods attempt to resolve this correlation problem but introduce further data-processing issues, leading to minimal or merely comparable performance improvements over the sequential Monte Carlo particle filter. A new method, the adaptive path particle filter, is proposed for recursive Bayesian estimation of non-linear non-Gaussian dynamical systems. Our method addresses the weight-degeneracy and sample-impoverishment problem by embedding a computational intelligence step of adaptive path switching between generations, based on maximal likelihood as a fitness function. Preliminary tests on a scalar estimation problem with non-linear non-Gaussian dynamics and a non-stationary observation model, and on the traditional univariate stochastic volatility problem, are presented. Building on these preliminary results, we evaluate our adaptive path particle filter on the stochastic volatility estimation problem. We calibrate the Heston stochastic volatility model, employing a Markov chain Monte Carlo method, on six securities. Finally, we investigate the efficacy of sequential Monte Carlo methods for recursive Bayesian estimation of astrophysical time series. We posit latent dynamics for both regularly and irregularly sampled astrophysical time series, calibrating fifty-five quasar time series using the CAR(1) model.
We find the adaptive path particle filter to statistically significantly outperform the standard sequential importance resampling particle filter, the Markov chain Monte Carlo particle filter and, upon Heston model estimation, the particle learning algorithm particle filter. In addition, from our quasar MCMC calibration we find the characteristic timescale τ to be first-order stable in contradiction to the literature though indicative of a unified underlying structure. We offer detailed analysis throughout, and conclude with a discussion and suggestions for future work.
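The weight degeneracy discussed in this abstract is commonly monitored with the effective sample size (ESS), with resampling triggered when the ESS drops below a threshold. A small illustrative sketch follows; the example weight vectors are invented.

```python
def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalised weights: n when uniform, near 1 when degenerate."""
    z = sum(weights)
    return 1.0 / sum((w / z) ** 2 for w in weights)

uniform = [0.25, 0.25, 0.25, 0.25]      # healthy particle set: ESS = 4
degenerate = [0.97, 0.01, 0.01, 0.01]   # nearly all mass on one particle: ESS near 1
```

A filter would typically resample only when `effective_sample_size(weights)` falls below, say, half the particle count, trading weight degeneracy against sample impoverishment.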
4

Ross, Stéphane. "Model-based Bayesian reinforcement learning in complex domains." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21960.

Abstract:
Reinforcement Learning has emerged as a useful framework for learning to perform a task optimally from experience in unknown systems. A major problem for such learning algorithms is how to balance optimally the exploration of the system, to gather knowledge, and the exploitation of current knowledge, to complete the task. Model-based Bayesian Reinforcement Learning (BRL) methods provide an optimal solution to this problem by formulating it as a planning problem under uncertainty. However, the complexity of these methods has so far limited their applicability to small and simple domains. To improve the applicability of model-based BRL, this thesis presents several extensions to more complex and realistic systems, such as partially observable and continuous domains. To improve learning efficiency in large systems, this thesis includes another extension to automatically learn and exploit the structure of the system. Approximate algorithms are proposed to efficiently solve the resulting inference and planning problems.
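The exploration-exploitation balance this abstract describes has a compact Bayesian illustration in the bandit setting: Thompson sampling with Beta priors. This is a toy sketch of the general idea, not the thesis's model-based POMDP methods; the arm reward probabilities are invented.

```python
import random

def thompson_bandit(true_probs, steps=2000, seed=0):
    """Thompson sampling on a Bernoulli bandit with Beta(1, 1) priors per arm."""
    rng = random.Random(seed)
    alpha = [1.0] * len(true_probs)   # prior successes + 1
    beta = [1.0] * len(true_probs)    # prior failures + 1
    pulls = [0] * len(true_probs)
    for _ in range(steps):
        # sample a plausible reward rate per arm, then act greedily on the sample
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(len(true_probs))]
        arm = samples.index(max(samples))
        reward = rng.random() < true_probs[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bandit([0.3, 0.7])
# the better arm (index 1) ends up pulled far more often
```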
5

Gannon, Michael William. "Cruise missile proliferation: an application of Bayesian analysis to intelligence forecasting." Thesis, Naval Postgraduate School, Monterey, Calif.; available from the National Technical Information Service, Springfield, Va., 1992. http://handle.dtic.mil/100.2/ADA257717.

Abstract:
Thesis (M.S. in National Security Affairs), Naval Postgraduate School, September 1992. Thesis advisor: Edward J. Laurance. ADA257717. Includes bibliographical references (p. 82–84).
6

Luo, Zhiyuan. "A probabilistic reasoning and learning system based on Bayesian belief networks." Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/1490.

7

Pomerantz, Daniel. "Designing a context dependant movie recommender: a hierarchical Bayesian approach." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86751.

Abstract:
In this thesis, we analyze a context-dependent movie recommendation system using a Hierarchical Bayesian Network. Unlike most other recommender systems which either do not consider context or do so using collaborative filtering, our approach is content-based. This allows users to individually interpret contexts or invent their own contexts and continue to get good recommendations. By using a Hierarchical Bayesian Network, we can provide context recommendations when users have only provided a small amount of information about their preferences per context. At the same time, our model has enough degrees of freedom to handle users with different preferences in different contexts. We show on a real data set that using a Bayesian Network to model contexts reduces the error on cross-validation over models that do not link contexts together or ignore context altogether.
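One way to read the hierarchical idea above: per-context preference estimates are shrunk toward a global mean, so a context with few ratings borrows strength from the others. The sketch below is a minimal partial-pooling illustration with invented ratings and a fixed shrinkage strength, not the thesis's model.

```python
def pooled_estimate(context_ratings, global_ratings, strength=5.0):
    """Shrink a context's mean rating toward the global mean.

    Equivalent to a conjugate normal prior centred on the global mean,
    with `strength` playing the role of a prior pseudo-count.
    """
    g = sum(global_ratings) / len(global_ratings)
    n = len(context_ratings)
    if n == 0:
        return g                       # no data for this context: fall back on the prior
    m = sum(context_ratings) / n
    return (n * m + strength * g) / (n + strength)

global_ratings = [3, 4, 4, 5, 4]       # a user's ratings across all contexts (invented)
sparse_context = [5]                   # a single rating in one context
# the pooled estimate sits between the lone rating (5) and the global mean (4)
```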
8

Carr, S. "Investigating the applicability of bayesian networks to the analysis of military intelligence." Thesis, Cranfield University, 2008. http://hdl.handle.net/1826/2826.

Abstract:
Intelligence failures have been attributed to an inability to correlate many small pieces of data into a larger picture. This thesis has sought to investigate how the fusion and analysis of uncertain or incomplete data through the use of Bayesian Belief Networks (BBN) compares with people's intuitive judgements. These flexible, robust, graphical probabilistic networks are able to incorporate values from a wide range of sources, including empirical values, experimental data, and subjective values. Using the latter, elicited from a number of serving military officers, BBNs provide a logical framework to combine each individual's set of one-at-a-time judgements, allowing comparisons with the same individuals' many-at-a-time, direct intuitive judgements. This was achieved through a series of fictitious and historical case studies. Building upon this work, another area of interest was the extent to which different elicitation techniques lead to equivalent or differing judgements. The techniques compared were: direct ranking of the variables' perceived importance for discriminating between given hypotheses, likelihood ratios, and conditional probabilities. The experimental results showed that individuals were unable to correctly manipulate the dependencies between information as evidence accumulated. The results also showed varying beliefs about the importance of information depending upon the elicitation technique used. Little evidence was found of a high correlation between direct normative rankings of variables' importance and those obtained from the BBNs' combination of one-at-a-time judgements. Likelihood values should only be used as an elicitation technique by those who either regularly manipulate uncertain information or use ratios. Overall, conditional probability distributions provided the least troublesome elicitation technique for subjective preferences.
In conclusion, Bayesian Belief Networks developed through the use of subjective probability distributions offer a flexible, robust methodology for the development of a normative model for the basis of a decision support system for the quantitative analysis of intelligence data.
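The likelihood-ratio elicitation compared in this thesis rests on sequential Bayes updating: each independent piece of evidence multiplies the prior odds of a hypothesis by its likelihood ratio. A minimal sketch with invented numbers:

```python
def update_odds(prior_odds, likelihood_ratios):
    """Posterior odds of H1 vs H0 after independent pieces of evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1.0 + odds)

# prior odds 1:1; three reports, each twice as likely under H1 as under H0
posterior = odds_to_prob(update_odds(1.0, [2.0, 2.0, 2.0]))
# posterior P(H1) = 8/9
```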
9

Jaitha, Anant. "An Introduction to the Theory and Applications of Bayesian Networks." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1638.

Abstract:
Bayesian networks are a means of studying data. A Bayesian network gives structure to data by creating a graphical model of it, and then develops probability distributions over its variables. It explores the variables in the problem space, examines the probability distributions related to those variables, and conducts statistical inference over those distributions to draw meaning from them. Bayesian networks are a good means of exploring a large data set efficiently to make inferences. A number of real-world applications already exist and are being actively researched. This paper discusses the theory and applications of Bayesian networks.
10

Saini, Nishrith. "Using Machine Intelligence to Prioritise Code Review Requests." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20140.

Abstract:
Background: Modern Code Review (MCR) is a commonly used practice in software development: the process of reviewing any new code changes that need to be merged with the existing codebase. A developer receives many code review requests daily, and when the requests arrive they are not prioritised; manually prioritising them is a challenging and time-consuming process. Objectives: This thesis aims to address these issues by developing a machine intelligence-based code review prioritisation tool. The goal is to identify the factors that impact the code review prioritisation process, with the help of feedback from experienced developers and the literature; these factors can then be used to develop and implement a solution that prioritises code review requests automatically. The solution is deployed and evaluated through user and reviewer feedback in a real large-scale project. The developed prioritisation tool is named Pineapple. Methods: A case study has been conducted at Ericsson. The factors that impact the code review prioritisation process were identified through a literature review and semi-structured interviews. The feasibility, usability, and usefulness of Pineapple have been evaluated using a static validation method, with the help of responses provided by developers after using the tool. Results: The results indicate that Pineapple can help developers prioritise their code review requests and assist them while performing code reviews. The majority of respondents believed Pineapple can decrease the lead time of the code review process while providing reliable prioritisations. The prioritisations are performed in a production environment in an average time of two seconds.
Conclusions: The implementation and validation of Pineapple suggest the possible usefulness of the tool to help developers prioritise their code review requests. The tool helps to decrease the code review lead-time, along with reducing the workload on a developer while reviewing code changes.

Books on the topic "Bayesian intelligence"

1

Nicholson, Ann E., ed. Bayesian artificial intelligence. 2nd ed. Boca Raton, FL: CRC Press, 2011.

2

Nicholson, Ann E., ed. Bayesian artificial intelligence. Boca Raton, Fla: Chapman & Hall/CRC, 2004.

3

Dowe, David L., ed. Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-44958-1.

4

Neal, Radford M. Bayesian learning for neural networks. Toronto: University of Toronto, Dept. of Computer Science, 1995.

5

Szeliski, Richard. Bayesian Modeling of Uncertainty in Low-Level Vision. Boston, MA: Springer US, 1989.

6

Holmes, Dawn E., L. C. Jain, and SpringerLink (Online service), eds. Innovations in Bayesian Networks: Theory and Applications. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2008.

7

Williamson, Jon. Bayesian nets and causality: Philosophical and computational foundations. New York: Oxford University Press, 2005.

8

Neal, Radford M. Bayesian learning for neural networks. New York: Springer, 1996.

9

Sucar, L. Enrique, Eduardo F. Morales, and Jesse Hoey. Decision theory models for applications in artificial intelligence: Concepts and solutions. Hershey, PA: Information Science Reference, 2011.

10

Bayesian networks and decision graphs. New York: Springer, 2001.


Book chapters on the topic "Bayesian intelligence"

1

Lu, Chenguang. "From Bayesian Inference to Logical Bayesian Inference." In Intelligence Science II, 11–23. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01313-4_2.

2

Lu, Chenguang. "Correction to: From Bayesian Inference to Logical Bayesian Inference." In Intelligence Science II, E1. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01313-4_51.

3

Das, Monidipa, and Soumya K. Ghosh. "Spatial Bayesian Network." In Studies in Computational Intelligence, 53–79. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27749-9_4.

4

Das, Monidipa, and Soumya K. Ghosh. "Semantic Bayesian Network." In Studies in Computational Intelligence, 81–99. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27749-9_5.

5

Maragoudakis, Manolis, and Nikos Fakotakis. "Bayesian Feature Construction." In Advances in Artificial Intelligence, 235–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752912_25.

6

Conati, Cristina. "Bayesian Student Modeling." In Studies in Computational Intelligence, 281–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14363-2_14.

7

Bhattacharjee, Shrutilipi, Soumya Kanti Ghosh, and Jia Chen. "Fuzzy Bayesian Semantic Kriging." In Studies in Computational Intelligence, 73–95. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8664-0_4.

8

Nguyen, Thanh Dai, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh, Kyle J. Deane, and Paul G. Sanders. "Cascade Bayesian Optimization." In AI 2016: Advances in Artificial Intelligence, 268–80. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-50127-7_22.

9

Jøsang, Audun. "Bayesian Reputation Systems." In Artificial Intelligence: Foundations, Theory, and Algorithms, 289–302. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42337-1_16.

10

Hussein, Ahmed, and Eugene Santos. "Exploring Case-Based Bayesian Networks and Bayesian Multi-nets for Classification." In Advances in Artificial Intelligence, 485–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24840-8_42.


Conference papers on the topic "Bayesian intelligence"

1

Takeishi, Naoya, Yoshinobu Kawahara, Yasuo Tabei, and Takehisa Yairi. "Bayesian Dynamic Mode Decomposition." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/392.

Abstract:
Dynamic mode decomposition (DMD) is a data-driven method for calculating a modal representation of a nonlinear dynamical system, and it has been utilized in various fields of science and engineering. In this paper, we propose Bayesian DMD, which provides a principled way to transfer the advantages of the Bayesian formulation into DMD. To this end, we first develop a probabilistic model corresponding to DMD, and then, provide the Gibbs sampler for the posterior inference in Bayesian DMD. Moreover, as a specific example, we discuss the case of using a sparsity-promoting prior for an automatic determination of the number of dynamic modes. We investigate the empirical performance of Bayesian DMD using synthetic and real-world datasets.
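As background for the abstract above, classical (non-Bayesian) exact DMD can be sketched via the SVD: fit a low-rank linear operator mapping each snapshot to the next and read the dynamic modes off its eigenvalues. A minimal sketch, checked on an invented 2-by-2 linear system.

```python
import numpy as np

def dmd_eigenvalues(snapshots, rank=None):
    """DMD eigenvalues of a snapshot matrix whose columns are successive states."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # low-rank representation of A = Y X^+ projected onto the POD modes
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# sanity check on a known linear system x_{t+1} = A x_t (invented matrix)
A = np.array([[0.9, 0.1], [0.0, 0.5]])
states = [np.array([1.0, 1.0])]
for _ in range(9):
    states.append(A @ states[-1])
snapshots = np.stack(states, axis=1)            # shape (2, 10)
eigs = sorted(ev.real for ev in dmd_eigenvalues(snapshots))
# recovers the true eigenvalues 0.5 and 0.9 of A
```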
2

Fortier, Nathan, John Sheppard, and Karthik Ganesan Pillai. "Bayesian abductive inference using overlapping swarm intelligence." In 2013 IEEE Symposium on Swarm Intelligence (SIS). IEEE, 2013. http://dx.doi.org/10.1109/sis.2013.6615188.

3

Fortier, Nathan, John Sheppard, and Shane Strasser. "Learning Bayesian classifiers using overlapping swarm intelligence." In 2014 IEEE Symposium On Swarm Intelligence (SIS). IEEE, 2014. http://dx.doi.org/10.1109/sis.2014.7011796.

4

Shen, Gehui, Xi Chen, and Zhihong Deng. "Variational Learning of Bayesian Neural Networks via Bayesian Dark Knowledge." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/282.

Abstract:
Bayesian neural networks (BNNs) have received more and more attention because they are capable of modeling epistemic uncertainty, which is hard for conventional neural networks. Markov chain Monte Carlo (MCMC) methods and variational inference (VI) are two mainstream methods for Bayesian deep learning. The former is effective, but its storage cost is prohibitive since it has to save many samples of the neural network parameters. The latter is more time- and space-efficient, but the approximate variational posterior limits its performance. In this paper, we aim to combine the advantages of the two methods by distilling MCMC samples into an approximate variational posterior. On the basis of an existing distillation technique, we first propose the variational Bayesian dark knowledge method. Moreover, we propose Bayesian dark prior knowledge, a novel distillation method which treats the MCMC posterior as the prior of a variational BNN. Both proposed methods not only reduce the space overhead of the teacher model, making them scalable, but also maintain a distilled posterior distribution capable of modeling epistemic uncertainty. Experimental results show that our methods outperform the existing distillation method in terms of predictive accuracy and uncertainty modeling.
5

Hutter, Marcus. "Feature Dynamic Bayesian Networks." In 2nd Conference on Artificial General Intelligence 2009. Paris, France: Atlantis Press, 2009. http://dx.doi.org/10.2991/agi.2009.6.

6

Li, Yan, Fang Liu, Lei Yu, and Quan Qi. "Regression Model Based on Sparse Bayesian Learning." In 2010 International Conference on Artificial Intelligence and Computational Intelligence (AICI). IEEE, 2010. http://dx.doi.org/10.1109/aici.2010.119.

7

Nguyen, Vu, Dinh Phung, Trung Le, and Hung Bui. "Discriminative Bayesian Nonparametric Clustering." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/355.

Abstract:
We propose a general framework for discriminative Bayesian nonparametric clustering that promotes inter-discrimination among the learned clusters in a fully Bayesian nonparametric (BNP) manner. Our method combines existing BNP clustering and discriminative models by enforcing latent cluster indices to be consistent with the predicted labels resulting from a probabilistic discriminative model. This formulation yields a well-defined generative process in which either logistic regression or an SVM can be used for discrimination. Using the proposed framework, we develop two novel discriminative BNP variants: the discriminative Dirichlet process mixtures, and the discriminative-state infinite HMMs for sequential data. We develop efficient data-augmentation Gibbs samplers for posterior inference. Extensive experiments in image clustering and dynamic location clustering demonstrate that, by encouraging discrimination between induced clusters, our model enhances the quality of clustering in comparison with traditional generative BNP models.
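The BNP clustering prior underlying both proposed variants is the Dirichlet process. A minimal sketch of its Chinese restaurant process representation — prior draws only, with no likelihood or discriminative term, and with an illustrative `alpha`:

```python
import numpy as np

def sample_crp(n, alpha, rng):
    """Sample cluster assignments for n points from a CRP(alpha) prior."""
    assignments = [0]   # first point always starts cluster 0
    counts = [1]        # counts[k] = current size of cluster k
    for _ in range(1, n):
        # Join cluster k with prob. proportional to its size,
        # or open a new cluster with prob. proportional to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # new cluster
        else:
            counts[k] += 1     # existing cluster
        assignments.append(k)
    return assignments

rng = np.random.default_rng(1)
z = sample_crp(100, alpha=2.0, rng=rng)
print(len(set(z)))  # cluster count grows roughly like alpha * log(n)
```

The paper's Gibbs samplers add, on top of these prior terms, the data likelihood and the discriminative (logistic/SVM) consistency term when resampling each index.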
8

Daxberger, Erik, Anastasia Makarova, Matteo Turchetta, and Andreas Krause. "Mixed-Variable Bayesian Optimization." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/365.

Abstract:
The optimization of expensive-to-evaluate, black-box, mixed-variable functions, i.e. functions with both continuous and discrete inputs, is a difficult yet pervasive problem in science and engineering. In Bayesian optimization (BO), special cases of this problem that consider fully continuous or fully discrete domains have been widely studied. However, few methods exist for mixed-variable domains, and none of them can handle the discrete constraints that arise in many real-world applications. In this paper, we introduce MiVaBo, a novel BO algorithm for the efficient optimization of mixed-variable functions that combines a linear surrogate model based on expressive feature representations with Thompson sampling. We propose an effective method to optimize its acquisition function, a challenging problem for mixed-variable domains, making MiVaBo the first BO method that can handle complex constraints over the discrete variables. Moreover, we provide the first convergence analysis of a mixed-variable BO algorithm. Finally, we show that MiVaBo is significantly more sample-efficient than state-of-the-art mixed-variable BO algorithms on several hyperparameter tuning tasks, including the tuning of deep generative models.
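The surrogate described in the abstract — a Bayesian linear model over feature representations of mixed inputs, driven by Thompson sampling — can be sketched on a toy problem. The objective, feature map, candidate pool, and all constants below are hypothetical, and the acquisition step is a brute-force argmax rather than the paper's constrained optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixed-variable objective: continuous x in [0, 1] plus a
# binary flag d. Its optimum is at x = 0.3, d = 1, with value 0.2.
def objective(x, d):
    return -(x - 0.3) ** 2 + 0.2 * d

def features(x, d):
    # Feature map covering continuous, discrete, and interaction terms.
    return np.array([1.0, x, x * x, d, d * x])

# Finite candidate pool: a discretized x grid crossed with the flag.
cands = [(x, d) for x in np.linspace(0, 1, 21) for d in (0, 1)]
Phi_c = np.array([features(x, d) for x, d in cands])

tau2, sigma2 = 1.0, 1e-4    # prior weight variance, observation noise
Phi, y, queried = [], [], []
for t in range(25):
    if Phi:  # Bayesian linear regression posterior over the weights
        P = np.array(Phi)
        A = P.T @ P / sigma2 + np.eye(5) / tau2
        cov = np.linalg.inv(A)
        cov = (cov + cov.T) / 2.0          # symmetrize for sampling
        mean = cov @ P.T @ np.array(y) / sigma2
    else:    # no data yet: sample from the prior
        mean, cov = np.zeros(5), tau2 * np.eye(5)
    w = rng.multivariate_normal(mean, cov)  # Thompson sample of the surrogate
    x, d = cands[int(np.argmax(Phi_c @ w))]  # maximize the sampled model
    queried.append((x, d))
    Phi.append(features(x, d))
    y.append(objective(x, d) + rng.normal(0.0, np.sqrt(sigma2)))

best = max(objective(x, d) for x, d in queried)
print(best)  # approaches the optimum 0.2
```

Because the toy objective is exactly realizable in the feature map, the posterior concentrates quickly and Thompson sampling settles on near-optimal mixed candidates.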
9

Matsuhisa, Takashi. "Bayesian Communication under Rough Sets Information." In 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology Workshops. IEEE, 2006. http://dx.doi.org/10.1109/wi-iatw.2006.50.

10

Tse, R., G. Seet, and S. K. Sim. "Recognition of Human Intentions Using Bayesian Artificial Intelligence." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-43325.

Abstract:
Controlling a robot to perform a task is more difficult than commanding a human. A robot needs to be preprogrammed for a task by being given a complete set of step-by-step commands from beginning to end. A human, by contrast, recalling an experience in which he was given the same command in a similar situation, can guess the intention behind the command and behave cooperatively. Our objective is to equip the robot with the capability of recognizing simple human intentions, such as moving around a corner, moving parallel to the wall, or moving towards an object. The cues the robot used to make an inference were the odometer and laser sensor readings and the commands given by the human operator. Using Maximum-Likelihood (ML) parameter learning on Dynamic Bayesian Networks, the correlations between these cues and the intentions were modeled and used to infer the human's intentions in controlling the robot. In the experiments, the robot was able to learn and infer the above-mentioned intentions of the human user with a satisfactory success rate.
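With fully observed training data, the Maximum-Likelihood parameter learning the abstract describes reduces to normalized counts of transitions and emissions. A minimal sketch on hypothetical (intention, cue) sequences standing in for the paper's odometry/laser cues and operator commands:

```python
from collections import Counter

# Hypothetical fully observed training sequences of (intention, cue) pairs.
sequences = [
    [("wall", "far"), ("wall", "far"), ("corner", "near"), ("corner", "near")],
    [("wall", "far"), ("corner", "near"), ("corner", "near"), ("corner", "near")],
]

trans = Counter()   # counts of intention -> next intention
emit = Counter()    # counts of intention -> observed cue
for seq in sequences:
    for (s, o) in seq:
        emit[(s, o)] += 1
    for (s, _), (s2, _) in zip(seq, seq[1:]):
        trans[(s, s2)] += 1

states = ["wall", "corner"]

def mle_transition(s, s2):
    # ML estimate: count(s -> s2) / count(s -> any successor).
    total = sum(trans[(s, t)] for t in states)
    return trans[(s, s2)] / total if total else 0.0

p = mle_transition("wall", "corner")
print(p)  # 2 of the 3 observed transitions out of "wall" go to "corner"
```

With these parameters estimated, the DBN infers the operator's intention online by filtering over the intention variable given the incoming cue stream.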