Journal articles on the topic "Bayesian classification"

Click this link to see other types of publications on this topic: Bayesian classification.

Create a correct citation in APA, MLA, Chicago, Harvard, and many other styles

Check out the top 50 scholarly journal articles on the topic "Bayesian classification".

The "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the relevant parameters are available in the metadata.

Browse journal articles from various fields and compile accurate bibliographies.

1

Yazdi, Hadi Sadoghi, Mehri Sadoghi Yazdi, and Abedin Vahedian. "Fuzzy Bayesian Classification of LR Fuzzy Numbers". International Journal of Engineering and Technology 1, no. 5 (2009): 415–23. http://dx.doi.org/10.7763/ijet.2009.v1.78.

2

Wang, ShuangCheng, GuangLin Xu, and RuiJie Du. "Restricted Bayesian classification networks". Science China Information Sciences 56, no. 7 (January 9, 2013): 1–15. http://dx.doi.org/10.1007/s11432-012-4729-x.

3

Berrett, Candace, and Catherine A. Calder. "Bayesian spatial binary classification". Spatial Statistics 16 (May 2016): 72–102. http://dx.doi.org/10.1016/j.spasta.2016.01.004.

4

Dojer, Norbert, Paweł Bednarz, Agnieszka Podsiadło, and Bartek Wilczyński. "BNFinder2: Faster Bayesian network learning and Bayesian classification". Bioinformatics 29, no. 16 (July 1, 2013): 2068–70. http://dx.doi.org/10.1093/bioinformatics/btt323.

5

Reguzzoni, M., F. Sansò, G. Venuti, and P. A. Brivio. "Bayesian classification by data augmentation". International Journal of Remote Sensing 24, no. 20 (January 2003): 3961–81. http://dx.doi.org/10.1080/0143116031000103817.

6

Wang, Xiaohui, Shubhankar Ray, and Bani K. Mallick. "Bayesian Curve Classification Using Wavelets". Journal of the American Statistical Association 102, no. 479 (September 2007): 962–73. http://dx.doi.org/10.1198/016214507000000455.

7

Williams, C. K. I., and D. Barber. "Bayesian classification with Gaussian processes". IEEE Transactions on Pattern Analysis and Machine Intelligence 20, no. 12 (1998): 1342–51. http://dx.doi.org/10.1109/34.735807.

8

Dellaportas, Petros. "Bayesian classification of Neolithic tools". Journal of the Royal Statistical Society: Series C (Applied Statistics) 47, no. 2 (June 28, 2008): 279–97. http://dx.doi.org/10.1111/1467-9876.00112.

9

Miguel Hernández-Lobato, Jose, Daniel Hernández-Lobato, and Alberto Suárez. "Network-based sparse Bayesian classification". Pattern Recognition 44, no. 4 (April 2011): 886–900. http://dx.doi.org/10.1016/j.patcog.2010.10.016.

10

Hunter, L., and D. J. States. "Bayesian classification of protein structure". IEEE Expert 7, no. 4 (August 1992): 67–75. http://dx.doi.org/10.1109/64.153466.

11

Gupal, A. M., S. V. Pashko, and I. V. Sergienko. "Efficiency of Bayesian classification procedure". Cybernetics and Systems Analysis 31, no. 4 (July 1995): 543–54. http://dx.doi.org/10.1007/bf02366409.

12

Martinez, Matthew, Phillip L. De Leon, and David Keeley. "Bayesian classification of falls risk". Gait & Posture 67 (January 2019): 99–103. http://dx.doi.org/10.1016/j.gaitpost.2018.09.028.

13

Baram, Yoram. "Bayesian classification by iterated weighting". Neurocomputing 25, no. 1-3 (April 1999): 73–79. http://dx.doi.org/10.1016/s0925-2312(98)00110-6.

14

Lee, Michael D. "Bayesian outcome-based strategy classification". Behavior Research Methods 48, no. 1 (February 20, 2015): 29–41. http://dx.doi.org/10.3758/s13428-014-0557-9.

15

Davig, Troy, and Aaron Smalter Hall. "Recession forecasting using Bayesian classification". International Journal of Forecasting 35, no. 3 (July 2019): 848–67. http://dx.doi.org/10.1016/j.ijforecast.2018.08.005.

16

Ershadi, Mohammad Mahdi, and Abbas Seifi. "An efficient Bayesian network for differential diagnosis using experts' knowledge". International Journal of Intelligent Computing and Cybernetics 13, no. 1 (March 9, 2020): 103–26. http://dx.doi.org/10.1108/ijicc-10-2019-0112.

Abstract:
Purpose: This study aims at the differential diagnosis of some diseases using classification methods to support effective medical treatment. For this purpose, different classification methods based on data, experts' knowledge, or both are considered in some cases. Besides, feature reduction and some clustering methods are used to improve their performance. Design/methodology/approach: First, the performances of classification methods are evaluated for differential diagnosis of different diseases. Then, experts' knowledge is utilized to modify the Bayesian networks' structures. Analyses of the results show that using experts' knowledge is more effective than other algorithms for increasing the accuracy of Bayesian network classification. A total of ten different diseases are used for testing, taken from the Machine Learning Repository datasets of the University of California at Irvine (UCI). Findings: The proposed method improves both the computation time and accuracy of the classification methods used in this paper. Bayesian networks based on experts' knowledge achieve a maximum average accuracy of 87 percent, with a minimum standard deviation average of 0.04 over the sample datasets among all classification methods. Practical implications: The proposed methodology can be applied to perform disease differential diagnosis analysis. Originality/value: This study presents the usefulness of experts' knowledge in the diagnosis while proposing an adopted improvement method for classifications. Besides, the Bayesian network based on experts' knowledge is useful for different diseases neglected by previous papers.

17

Xu, Shuo. "Bayesian Naïve Bayes classifiers to text classification". Journal of Information Science 44, no. 1 (November 1, 2016): 48–59. http://dx.doi.org/10.1177/0165551516677946.

Abstract:
Text classification is the task of assigning predefined categories to natural language documents, and it can provide conceptual views of document collections. The Naïve Bayes (NB) classifier is a family of simple probabilistic classifiers based on a common assumption that all features are independent of each other, given the category variable, and it is often used as the baseline in text classification. However, classical NB classifiers with multinomial, Bernoulli and Gaussian event models are not fully Bayesian. This study proposes three Bayesian counterparts, and it turns out that the classical NB classifier with the Bernoulli event model is equivalent to its Bayesian counterpart. Finally, experimental results on the 20 newsgroups and WebKB data sets show that the performance of the Bayesian NB classifier with the multinomial event model is similar to that of its classical counterpart, but the Bayesian NB classifier with the Gaussian event model is clearly better than its classical counterpart.

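To make the classical baseline discussed in the abstract above concrete, the following minimal sketch trains a multinomial Naive Bayes text classifier with scikit-learn; the toy documents, labels and query are invented for illustration, and the fully Bayesian counterparts proposed in the paper are not reproduced here.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy corpus: two classes, four short documents.
docs = [
    "the match ended with a late goal",
    "the team won the championship game",
    "parliament passed the new budget bill",
    "the minister announced an election date",
]
labels = ["sport", "sport", "politics", "politics"]

# Bag-of-words counts feed a multinomial event model; the Naive Bayes
# assumption is that word counts are conditionally independent given the class.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["the budget vote in parliament"]))  # expected: ['politics']

Swapping MultinomialNB for BernoulliNB, or for GaussianNB on dense features, gives the other two classical event models the abstract compares.
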
18

Long, Yuqi, and Xingzhong Xu. "Bayesian decision rules to classification problems". Australian & New Zealand Journal of Statistics 63, no. 2 (May 24, 2021): 394–415. http://dx.doi.org/10.1111/anzs.12325.

19

Mittal, Amit Kumar, Shivangi Mittal, and Digendra Singh Rathore. "Bayesian Classification for Social Media Text". International Journal of Computer Sciences and Engineering 6, no. 7 (July 31, 2018): 641–46. http://dx.doi.org/10.26438/ijcse/v6i7.641646.

20

dos Santos, Edimilson B., Estevam R. Hruschka, Eduardo R. Hruschka, and Nelson F. F. Ebecken. "Bayesian network classifiers: Beyond classification accuracy". Intelligent Data Analysis 15, no. 3 (May 4, 2011): 279–98. http://dx.doi.org/10.3233/ida-2010-0468.

21

Ruiz, Pablo, Javier Mateos, Gustavo Camps-Valls, Rafael Molina, and Aggelos K. Katsaggelos. "Bayesian Active Remote Sensing Image Classification". IEEE Transactions on Geoscience and Remote Sensing 52, no. 4 (April 2014): 2186–96. http://dx.doi.org/10.1109/tgrs.2013.2258468.

22

Cruz-Mesía, Rolando De la, Fernando A. Quintana, and Peter Müller. "Semiparametric Bayesian classification with longitudinal markers". Journal of the Royal Statistical Society: Series C (Applied Statistics) 56, no. 2 (March 2007): 119–37. http://dx.doi.org/10.1111/j.1467-9876.2007.00569.x.

23

Lin, Tein-Hsiang, and Kang G. Shin. "A Bayesian approach to fault classification". ACM SIGMETRICS Performance Evaluation Review 18, no. 1 (April 1990): 58–66. http://dx.doi.org/10.1145/98460.98505.

24

Akhtar, Naveed, Faisal Shafait, and Ajmal Mian. "Discriminative Bayesian Dictionary Learning for Classification". IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 12 (December 1, 2016): 2374–88. http://dx.doi.org/10.1109/tpami.2016.2527652.

25

Hurn, M. A., K. V. Mardia, T. J. Hainsworth, J. Kirkbride, and E. Berry. "Bayesian fused classification of medical images". IEEE Transactions on Medical Imaging 15, no. 6 (1996): 850–58. http://dx.doi.org/10.1109/42.544502.

26

Hong, Euy-Seok. "Software Quality Classification using Bayesian Classifier". Journal of the Korea society of IT services 11, no. 1 (March 31, 2012): 211–21. http://dx.doi.org/10.9716/kits.2012.11.1.211.

27

Wan, E. A. "Neural network classification: a Bayesian interpretation". IEEE Transactions on Neural Networks 1, no. 4 (1990): 303–5. http://dx.doi.org/10.1109/72.80269.

28

Dellaportas, Petros. "Corrigendum: Bayesian classification of Neolithic tools". Journal of the Royal Statistical Society: Series C (Applied Statistics) 47, no. 4 (January 6, 2002): 620. http://dx.doi.org/10.1111/1467-9876.00133.

29

Vinay, A., Abhijay Gupta, Aprameya Bharadwaj, Arvind Srinivasan, K. N. Balasubramanya Murthy, and S. Natarajan. "Unconstrained Face Recognition using Bayesian Classification". Procedia Computer Science 143 (2018): 519–27. http://dx.doi.org/10.1016/j.procs.2018.10.425.

30

Kabán, Ata. "On Bayesian classification with Laplace priors". Pattern Recognition Letters 28, no. 10 (July 2007): 1271–82. http://dx.doi.org/10.1016/j.patrec.2007.02.010.

31

Ni, Yang, Peter Müller, Maurice Diesendruck, Sinead Williamson, Yitan Zhu, and Yuan Ji. "Scalable Bayesian Nonparametric Clustering and Classification". Journal of Computational and Graphical Statistics 29, no. 1 (July 19, 2019): 53–65. http://dx.doi.org/10.1080/10618600.2019.1624366.

32

Dadaneh, Siamak Zamani, Edward R. Dougherty, and Xiaoning Qian. "Optimal Bayesian Classification With Missing Values". IEEE Transactions on Signal Processing 66, no. 16 (August 15, 2018): 4182–92. http://dx.doi.org/10.1109/tsp.2018.2847660.

33

Klocker, Johanna, Bettina Wailzer, Gerhard Buchbauer, and Peter Wolschann. "Bayesian Neural Networks for Aroma Classification". Journal of Chemical Information and Computer Sciences 42, no. 6 (November 2002): 1443–49. http://dx.doi.org/10.1021/ci0202640.

34

Zhang, Xunan, Shiji Song, and Cheng Wu. "Robust Bayesian Classification with Incomplete Data". Cognitive Computation 5, no. 2 (September 21, 2012): 170–87. http://dx.doi.org/10.1007/s12559-012-9188-6.

35

Bielza, C., G. Li, and P. Larrañaga. "Multi-dimensional classification with Bayesian networks". International Journal of Approximate Reasoning 52, no. 6 (September 2011): 705–27. http://dx.doi.org/10.1016/j.ijar.2011.01.007.

36

Gutiérrez, Luis, Eduardo Gutiérrez-Peña, and Ramsés H. Mena. "Bayesian nonparametric classification for spectroscopy data". Computational Statistics & Data Analysis 78 (October 2014): 56–68. http://dx.doi.org/10.1016/j.csda.2014.04.010.

37

Krzanowski, Wojtek J., Trevor C. Bailey, Derek Partridge, Jonathan E. Fieldsend, Richard M. Everson, and Vitaly Schetinin. "Confidence in Classification: A Bayesian Approach". Journal of Classification 23, no. 2 (September 2006): 199–220. http://dx.doi.org/10.1007/s00357-006-0013-3.

38

Carhart, Gary W., Bret F. Draayer, and Michael K. Giles. "Optical pattern recognition using Bayesian classification". Pattern Recognition 27, no. 4 (April 1994): 587–606. http://dx.doi.org/10.1016/0031-3203(94)90039-6.

39

Kehagias, A. "Bayesian classification of Hidden Markov Models". Mathematical and Computer Modelling 23, no. 5 (March 1996): 25–43. http://dx.doi.org/10.1016/0895-7177(96)00010-6.

40

Cazorla, M. A., and F. Escolano. "Two Bayesian methods for junction classification". IEEE Transactions on Image Processing 12, no. 3 (March 2003): 317–27. http://dx.doi.org/10.1109/tip.2002.806242.

41

Nykaza, Edward T., Matthew G. Blevins, Carl R. Hart, and Anton Netchaev. "Bayesian classification of environmental noise sources". Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3522. http://dx.doi.org/10.1121/1.4987416.

42

Flach, Peter A., and Nicolas Lachiche. "Naive Bayesian Classification of Structured Data". Machine Learning 57, no. 3 (December 2004): 233–69. http://dx.doi.org/10.1023/b:mach.0000039778.69032.ab.

43

Klein, Ruben, and S. James Press. "Adaptive Bayesian Classification of Spatial Data". Journal of the American Statistical Association 87, no. 419 (September 1992): 844–51. http://dx.doi.org/10.1080/01621459.1992.10475287.

44

Li, Xiuqi, and Subhashis Ghosal. "Bayesian classification of multiclass functional data". Electronic Journal of Statistics 12, no. 2 (2018): 4669–96. http://dx.doi.org/10.1214/18-ejs1522.

45

Konomi, Bledar A., Soma S. Dhavala, Jianhua Z. Huang, Subrata Kundu, David Huitink, Hong Liang, Yu Ding, and Bani K. Mallick. "Bayesian object classification of gold nanoparticles". Annals of Applied Statistics 7, no. 2 (June 2013): 640–68. http://dx.doi.org/10.1214/12-aoas616.

46

Lee, Heeseung, and Sang-Hun Lee. "Bayesian integration in setting classification criterion". IBRO Reports 6 (September 2019): S204. http://dx.doi.org/10.1016/j.ibror.2019.07.636.

47

Del Águila, Isabel María, and José del Sagrado. "Requirement Risk Level Forecast Using Bayesian Networks Classifiers". International Journal of Software Engineering and Knowledge Engineering 21, no. 02 (March 2011): 167–90. http://dx.doi.org/10.1142/s0218194011005219.

Abstract:
Requirement engineering is a key issue in the development of a software project. Like any other development activity, it is not without risks. This work is an empirical study of requirement risks carried out by applying machine learning techniques, specifically Bayesian network classifiers. We have defined several models to predict the risk level for a given requirement, using three datasets that collect metrics taken from the requirement specifications of different projects. The classification accuracy of the Bayesian models obtained is evaluated and compared using several classification performance measures. The results of the experiments show that Bayesian networks yield valid predictors. Specifically, a tree-augmented network structure shows competitive experimental performance on all datasets. Besides, the relations established between the variables collected to determine the level of risk in a requirement match those set by requirement engineers. We show that Bayesian networks are valid tools for the automation of risk assessment in requirement engineering.

48

Afshar, Parnian, Arash Mohammadi, and Konstantinos N. Plataniotis. "BayesCap: A Bayesian Approach to Brain Tumor Classification Using Capsule Networks". IEEE Signal Processing Letters 27 (2020): 2024–28. http://dx.doi.org/10.1109/lsp.2020.3034858.

49

Siagian, Novriadi Antonius, Sutarman Wage, and Sawaluddin. "Dataset Weighting Features Using Gain Ratio To Improve Method Accuracy Naïve Bayesian Classification". IOP Conference Series: Earth and Environmental Science 748, no. 1 (April 1, 2021): 012034. http://dx.doi.org/10.1088/1755-1315/748/1/012034.

Abstract:
The Naïve Bayes method is proven to have high speed when applied to large datasets, but it has a weakness in attribute selection: because Naïve Bayes is a statistical classification method based only on Bayes' theorem, it can only predict class-membership probabilities under the assumption that attributes are independent, and it cannot account for attributes that are highly correlated with one another, which can affect the accuracy value. Weighted Naïve Bayesian has been shown to provide better accuracy than conventional Naïve Bayesian. The highest accuracy value, 88.57%, is obtained on the Water Quality dataset with the Weighted Naïve Bayesian classification model, while the lowest accuracy value, 78.95%, is obtained on the Haberman dataset with the conventional Naïve Bayesian classification model. The increase in accuracy of the Weighted Naïve Bayesian classification model is 2.9% on the Water Quality dataset and 1.8% on the Haberman dataset; averaged over the datasets, the improvement is 2.35%. Based on the testing carried out on all test data, the Weighted Naïve Bayesian classification model provides better accuracy values than the conventional Naïve Bayesian classification model.

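As a rough illustration of the attribute-weighting idea described above, the sketch below computes a gain-ratio weight per attribute and scales each attribute's log-likelihood contribution by that weight when scoring classes; the toy categorical dataset is invented, and the paper's exact weighting scheme and evaluation protocol may differ.

import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gain_ratio(column, labels):
    # Information gain of one attribute over the class labels, divided by
    # the attribute's split information (Quinlan's gain ratio).
    total = len(labels)
    cond_entropy = 0.0
    split_info = 0.0
    for v in set(column):
        idx = [i for i, x in enumerate(column) if x == v]
        w = len(idx) / total
        cond_entropy += w * entropy([labels[i] for i in idx])
        split_info -= w * np.log2(w)
    gain = entropy(labels) - cond_entropy
    return gain / split_info if split_info > 0 else 0.0

def train(X, y):
    # Class priors, Laplace-smoothed conditional tables, gain-ratio weights.
    n, d = len(X), len(X[0])
    classes = sorted(set(y))
    priors = {c: y.count(c) / n for c in classes}
    tables = []
    for j in range(d):
        values = sorted({row[j] for row in X})
        table = {}
        for c in classes:
            col = [row[j] for row, yc in zip(X, y) if yc == c]
            table[c] = {v: (col.count(v) + 1) / (len(col) + len(values)) for v in values}
        tables.append(table)
    weights = [gain_ratio([row[j] for row in X], y) for j in range(d)]
    return classes, priors, tables, weights

def predict(model, x):
    classes, priors, tables, weights = model
    scores = {}
    for c in classes:
        score = np.log(priors[c])
        for j, v in enumerate(x):
            p = tables[j][c].get(v, 1e-6)      # unseen attribute value: small floor
            score += weights[j] * np.log(p)    # weight scales this attribute's contribution
        scores[c] = score
    return max(scores, key=scores.get)

# Invented toy data: attributes (outlook, windy) -> class (play).
X = [["sunny", "no"], ["sunny", "yes"], ["rain", "no"],
     ["rain", "yes"], ["overcast", "no"]]
y = ["yes", "no", "yes", "no", "yes"]

model = train(X, y)
print(predict(model, ["rain", "no"]))

Setting every weight to 1.0 recovers conventional Naïve Bayes, which is the comparison the abstract reports.
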
50

Li, Zhi Qiang, De Quan Yang, Yuan Tan, and Yuan Ping Zou. "An Improved Naive Bayesian Classification Algorithm for Sentiment Classification of Microblogs". Applied Mechanics and Materials 543-547 (March 2014): 3614–20. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.3614.

Abstract:
For attribute-weighted naive Bayesian classification algorithms, the selection of the weights directly affects the classification results. On this basis, the drawbacks of TFIDF feature-selection approaches in sentiment classification for microblogs are analyzed, and an improved algorithm named TF-D(t)-CHI is proposed, which applies statistical calculation to obtain the degree of correlation between the feature words and the classes. It represents the distribution of the feature items across classes by their variance, which addresses the problem that short texts contain few feature words while high-frequency feature words receive too high a weight. Experimental results indicate that TF-D(t)-CHI-based naive Bayesian classification, with the proposed feature selection and weight calculation, achieves better results in sentiment classification for microblogs.

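The paper's TF-D(t)-CHI weighting is not spelled out in the abstract, so it is not reproduced here; as a generic illustration of measuring the correlation between feature words and sentiment classes with a chi-square statistic, the following sketch scores the terms of an invented toy corpus using scikit-learn's chi2.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

# Invented toy microblog posts with sentiment labels.
posts = [
    "great phone love the camera",
    "love this amazing battery life",
    "terrible screen broke after a week",
    "awful service and terrible battery",
]
sentiment = [1, 1, 0, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)

# chi2 returns one chi-square statistic per term, measuring how strongly the
# term's occurrence counts are associated with the sentiment classes.
scores, _ = chi2(X, sentiment)
ranked = sorted(zip(vectorizer.get_feature_names_out(), scores),
                key=lambda t: t[1], reverse=True)
for term, score in ranked[:5]:
    print(f"{term}\t{score:.2f}")

Such scores can then be combined with term frequencies to weight features before naive Bayesian classification, which is the general direction the abstract describes.
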
We offer discounts on all premium plans to authors whose works are included in thematic literature collections. Contact us to get a unique promo code!
