A selection of scientific literature on the topic "Probability learning"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Probability learning".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Probability learning":

1

SAEKI, Daisuke. "Probability learning in golden hamsters". Japanese Journal of Animal Psychology 49, no. 1 (1999): 41–47. http://dx.doi.org/10.2502/janip.49.41.

2

Groth, Randall E., Jennifer A. Bergner, and Jathan W. Austin. "Dimensions of Learning Probability Vocabulary". Journal for Research in Mathematics Education 51, no. 1 (January 2020): 75–104. http://dx.doi.org/10.5951/jresematheduc.2019.0008.

Annotation:
Normative discourse about probability requires shared meanings for disciplinary vocabulary. Previous research indicates that students’ meanings for probability vocabulary often differ from those of mathematicians, creating a need to attend to developing students’ use of language. Current standards documents conflict in their recommendations about how this should occur. In the present study, we conducted microgenetic research to examine the vocabulary use of four students before, during, and after lessons from a cycle of design-based research attending to probability vocabulary. In characterizing students’ normative and nonnormative uses of language, we draw implications for the design of curriculum, standards, and further research. Specifically, we illustrate the importance of attending to incrementality, multidimensionality, polysemy, interrelatedness, and heterogeneity to foster students’ probability vocabulary development.
3

Groth, Randall E., Jennifer A. Bergner, and Jathan W. Austin. "Dimensions of Learning Probability Vocabulary". Journal for Research in Mathematics Education 51, no. 1 (January 2020): 75–104. http://dx.doi.org/10.5951/jresematheduc.51.1.0075.

Annotation:
Normative discourse about probability requires shared meanings for disciplinary vocabulary. Previous research indicates that students’ meanings for probability vocabulary often differ from those of mathematicians, creating a need to attend to developing students’ use of language. Current standards documents conflict in their recommendations about how this should occur. In the present study, we conducted microgenetic research to examine the vocabulary use of four students before, during, and after lessons from a cycle of design-based research attending to probability vocabulary. In characterizing students’ normative and nonnormative uses of language, we draw implications for the design of curriculum, standards, and further research. Specifically, we illustrate the importance of attending to incrementality, multidimensionality, polysemy, interrelatedness, and heterogeneity to foster students’ probability vocabulary development.
4

Rivas, Javier. "Probability matching and reinforcement learning". Journal of Mathematical Economics 49, no. 1 (January 2013): 17–21. http://dx.doi.org/10.1016/j.jmateco.2012.09.004.

5

West, Bruce J. "Fractal Probability Measures of Learning". Methods 24, no. 4 (August 2001): 395–402. http://dx.doi.org/10.1006/meth.2001.1208.

6

Malley, J. D., J. Kruppa, A. Dasgupta, K. G. Malley, and A. Ziegler. "Probability Machines". Methods of Information in Medicine 51, no. 01 (2012): 74–81. http://dx.doi.org/10.3414/me00-01-0052.

Annotation:
Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
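As a rough illustration of the "probability machine" idea summarized above, the sketch below (an assumption-laden Python example with scikit-learn, not the authors' R code) regresses a 0/1 response with a random forest and reads the predictions as individual probability estimates; the data set and all parameter values are placeholders.

    # Sketch: a regression forest on a 0/1 response used as a "probability machine".
    # Assumes scikit-learn; the data set and settings are illustrative only.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import brier_score_loss
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)            # binary response coded 0/1
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Averaged leaf values of a regression forest estimate the conditional mean,
    # i.e. P(y = 1 | x), for each individual case.
    forest = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, random_state=0)
    forest.fit(X_tr, y_tr)
    p_hat = np.clip(forest.predict(X_te), 0.0, 1.0)

    print("Brier score of the estimated probabilities:", round(brier_score_loss(y_te, p_hat), 3))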
7

Dawson, Michael R. W. "Probability Learning by Perceptrons and People". Comparative Cognition & Behavior Reviews 15 (2022): 1–188. http://dx.doi.org/10.3819/ccbr.2019.140011.

8

HIRASAWA, Kotaro, Masaaki HARADA, Masanao OHBAYASHI, Juuichi MURATA, and Jinglu HU. "Probability and Possibility Automaton Learning Network". IEEJ Transactions on Industry Applications 118, no. 3 (1998): 291–99. http://dx.doi.org/10.1541/ieejias.118.291.

9

Groth, Randall E., Jaime Butler, and Delmar Nelson. "Overcoming challenges in learning probability vocabulary". Teaching Statistics 38, no. 3 (May 26, 2016): 102–7. http://dx.doi.org/10.1111/test.12109.

10

Starzyk, J. A., and F. Wang. "Dynamic Probability Estimator for Machine Learning". IEEE Transactions on Neural Networks 15, no. 2 (March 2004): 298–308. http://dx.doi.org/10.1109/tnn.2004.824254.


Dissertations on the topic "Probability learning":

1

Gozenman, Filiz. "Interaction Of Probability Learning And Working Memory". Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614535/index.pdf.

Annotation:
Probability learning is the ability to establish a relationship between stimuli and outcomes based on occurrence probabilities, using repetitive feedback. Participants learn the task according to the cue-outcome relationship and try to gain an in-depth understanding of this relationship throughout the experiment. While learning is at its highest level, people rely on their working memory. In this study, 20 participants were presented with a probability learning task, and their prefrontal cortex activity was measured with functional near-infrared spectroscopy (fNIRS). It was hypothesized that as participants gained more knowledge of the probabilities, they would learn the cue-outcome relationships and therefore rely less on their working memory; as learning proceeds, a drop in the fNIRS signal is therefore expected. We obtained results confirming our hypothesis: a significant negative correlation between dorsolateral prefrontal cortex activity and learning was found. Similarly, response time also decreased through the task, indicating that as learning proceeded, participants made decisions faster. Participants used either the frequency matching or the maximization strategy to solve the task, in which they had to decide whether the blue or the red color was winning. When they used the frequency matching strategy, they chose blue at the rate at which the blue choice won; when they used the maximization strategy, they chose blue almost always. Our task was designed such that the frequency for blue to win was 80%. We had hypothesized that people in the frequency matching and maximization groups would show working memory differences observable in the fNIRS signal; however, we were unable to detect such a difference in the fNIRS signal. Overall, our study showed the relationship between probability learning and working memory as depicted by brain activity in the dorsolateral prefrontal cortex, which is widely known as the central executive component of working memory.
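To make the two strategies mentioned in the abstract concrete, the short simulation below (an illustrative sketch, not the thesis' paradigm; only the 80% winning rate is taken from the abstract) compares the expected accuracy of probability matching with that of maximization in a binary choice task.

    # Sketch: hit rates under probability matching vs. maximization when "blue"
    # wins on 80% of trials (rate taken from the abstract; everything else assumed).
    import numpy as np

    rng = np.random.default_rng(0)
    p_blue, n_trials = 0.8, 10_000
    blue_wins = rng.random(n_trials) < p_blue            # True on trials where blue wins

    matching = rng.random(n_trials) < p_blue             # choose blue at its winning rate
    maximizing = np.ones(n_trials, dtype=bool)           # always choose blue

    print("probability matching accuracy:", (matching == blue_wins).mean())
    print("maximization accuracy:        ", (maximizing == blue_wins).mean())
    # Matching is expected to score about 0.8*0.8 + 0.2*0.2 = 0.68, maximization about 0.80.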
2

RYSZ, TERI. "METACOGNITION IN LEARNING ELEMENTARY PROBABILITY AND STATISTICS". University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1099248340.

3

Bouchacourt, Diane. "Task-oriented learning of structured probability distributions". Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:0665495b-afbb-483b-8bdf-cbc6ae5baeff.

Annotation:
Machine learning models automatically learn from historical data to predict unseen events. Such events are often represented as complex multi-dimensional structures. In many cases there is high uncertainty in the prediction process. Research has developed probabilistic models to capture distributions of complex objects, but their learning objective is often agnostic of the evaluation loss. In this thesis, we address the aforementioned deficiency by designing probabilistic methods for structured object prediction that take into account the task at hand. First, we consider that the task at hand is explicitly known, but there is ambiguity in the prediction due to an unobserved (latent) variable. We develop a framework for latent structured output prediction that unifies existing empirical risk minimisation methods. We empirically demonstrate that for large and ambiguous latent spaces, performing prediction by minimising the uncertainty in the latent variable provides more accurate results. Empirical risk minimisation methods predict only a pointwise estimate of the output; however, there can be uncertainty on the output value itself. To tackle this deficiency, we introduce a novel type of model to perform probabilistic structured output prediction. Our training objective minimises a dissimilarity coefficient between the data distribution and the model's distribution. This coefficient is defined according to a loss of choice, so our objective can be tailored to the task loss. We empirically demonstrate the ability of our model to capture distributions over complex objects. Finally, we tackle a setting where the task loss is implicitly expressed. Specifically, we consider the case of grouped observations. We propose a new model for learning a representation of the data that decomposes according to the semantics behind this grouping, while allowing efficient test-time inference. We experimentally demonstrate that our model learns a disentangled and controllable representation, leverages grouping information when available, and generalises to unseen observations.
4

Li, Chengtao. "Diversity-inducing probability measures for machine learning". Ph. D. thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121724.

Annotation:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 163-176).
Subset selection problems arise in machine learning within kernel approximation, experimental design, and numerous other applications. In such applications, one often seeks to select diverse subsets of items to represent the population. One way to select such diverse subsets is to sample according to Diversity-Inducing Probability Measures (DIPMs) that assign higher probabilities to more diverse subsets. DIPMs underlie several recent breakthroughs in mathematics and theoretical computer science, but their power has not yet been explored for machine learning. In this thesis, we investigate DIPMs, their mathematical properties, sampling algorithms, and applications. Perhaps the best known instance of a DIPM is a Determinantal Point Process (DPP). DPPs originally arose in quantum physics, and are known to have deep relations to linear algebra, combinatorics, and geometry. We explore applications of DPPs to kernel matrix approximation and kernel ridge regression.
In these applications, DPPs deliver strong approximation guarantees and obtain superior performance compared to existing methods. We further develop an MCMC sampling algorithm accelerated by Gauss-type quadratures for DPPs. The algorithm runs several orders of magnitude faster than the existing ones. DPPs lie in a larger class of DIPMs called Strongly Rayleigh (SR) Measures. Instances of SR measures display a strong negative dependence property known as negative association, and as such can be used to model subset diversity. We study mathematical properties of SR measures, and construct the first provably fast-mixing Markov chain that samples from general SR measures. As a special case, we consider an SR measure called Dual Volume Sampling (DVS), for which we present the first poly-time sampling algorithm.
While all considered distributions over subsets are unconstrained, those of interest in the real world usually come with constraints due to prior knowledge, resource limitations or personal preferences. Hence we investigate sampling from constrained versions of DIPMs. Specifically, we consider DIPMs with cardinality constraints and matroid base constraints and construct poly-time approximate sampling algorithms for them. Such sampling algorithms will enable practical uses of constrained DIPMs in real world.
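For readers unfamiliar with determinantal point processes, the toy sketch below (assumed points and kernel bandwidth, not the thesis' algorithms) spells out the defining property of an L-ensemble DPP: a subset's probability is proportional to the determinant of the corresponding kernel submatrix, so diverse subsets receive more mass than redundant ones.

    # Sketch: subset probabilities of a tiny L-ensemble DPP, P(S) proportional to det(L_S).
    # The points and the RBF bandwidth are assumptions for illustration only.
    import itertools
    import numpy as np

    points = np.array([[0.0], [0.1], [2.0]])              # two near-duplicates, one distant point
    L = np.exp(-((points - points.T) ** 2) / 0.5)         # RBF similarity kernel

    subsets = [s for r in range(len(points) + 1)
               for s in itertools.combinations(range(len(points)), r)]
    weights = {s: (np.linalg.det(L[np.ix_(s, s)]) if s else 1.0) for s in subsets}
    Z = sum(weights.values())

    for s, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        print(s, round(w / Z, 4))
    # The diverse pair (0, 2) carries far more probability than the redundant pair (0, 1).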
5

Hunt, Gareth David. "Reinforcement Learning for Low Probability High Impact Risks". Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/77106.

Annotation:
We demonstrate a method of reinforcement learning that uses training in simulation. Our system generates an estimate of the potential reward and danger of each action as well as a measure of the uncertainty present in both. The system generates this by seeking out not only rewarding actions but also dangerous ones in the simulated training. During runtime our system is able to use this knowledge to avoid risks while accomplishing its tasks.
6

Słowiński, Witold. "Autonomous learning of domain models from probability distribution clusters". Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=211059.

Annotation:
Nontrivial domains can be difficult to understand and the task of encoding a model of such a domain can be difficult for a human expert, which is one of the fundamental problems of knowledge acquisition. Model learning provides a way to address this problem by allowing a predictive model of the domain's dynamics to be learnt algorithmically, without human supervision. Such models can provide insight about the domain to a human or aid in automated planning or reinforcement learning. This dissertation addresses the problem of how to learn a model of a continuous, dynamic domain, from sensory observations, through the discretisation of its continuous state space. The learning process is unsupervised in that there are no predefined goals, and it assumes no prior knowledge of the environment. Its outcome is a model consisting of a set of predictive cause-and-effect rules which describe changes in related variables over brief periods of time. We present a novel method for learning such a model, which is centred around the idea of discretising the state space by identifying clusters of uniform density in the probability density function of variables, which correspond to meaningful features of the state space. We show that using this method it is possible to learn models exhibiting predictive power. Secondly, we show that applying this discretisation process to two-dimensional vector variables in addition to scalar variables yields a better model than only applying it to scalar variables and we describe novel algorithms and data structures for discretising one- and two-dimensional spaces from observations. Finally, we demonstrate that this method can be useful for planning or decision making in some domains where the state space exhibits stable regions of high probability and transitional regions of lesser probability. We provide evidence for these claims by evaluating the model learning algorithm in two dynamic, continuous domains involving simulated physics: the OpenArena computer game and a two-dimensional simulation of a bouncing ball falling onto uneven terrain.
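A rough one-dimensional analogue of the density-cluster discretisation described above (a hedged sketch on synthetic data with an arbitrary threshold, not the dissertation's algorithm) estimates the variable's probability density, keeps the regions where the density is high, and uses the resulting intervals as discrete states.

    # Sketch: discretise a scalar variable by locating clusters of high probability density.
    # Sample data, grid resolution, and the 0.25 threshold are illustrative assumptions.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    samples = np.concatenate([rng.normal(-3, 0.4, 500), rng.normal(2, 0.6, 500)])

    grid = np.linspace(samples.min() - 1, samples.max() + 1, 1000)
    density = gaussian_kde(samples)(grid)
    high = density > 0.25 * density.max()                 # grid points inside dense regions

    idx = np.flatnonzero(high)
    runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)   # contiguous runs of dense points
    intervals = [(round(grid[r[0]], 2), round(grid[r[-1]], 2)) for r in runs if len(r)]
    print("density clusters used as discrete states:", intervals)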
7

Benson, Carol Trinko. "Assessing students' thinking in modeling probability contexts". Normal, Ill.: Illinois State University, 2000. http://wwwlib.umi.com/cr/ilstu/fullcit?p9986725.

Annotation:
Thesis (Ph. D.)--Illinois State University, 2000.
Title from title page screen, viewed May 11, 2006. Dissertation Committee: Graham A. Jones (chair), Kenneth N. Berk, Patricia Klass, Cynthia W. Langrall, Edward S. Mooney. Includes bibliographical references (leaves 115-124) and abstract. Also available in print.
8

Rast, Jeanne D. "A Comparison of Learning Subjective and Traditional Probability in Middle Grades". Digital Archive @ GSU, 2005. http://digitalarchive.gsu.edu/msit_diss/4.

Annotation:
The emphasis given to probability and statistics in the K-12 mathematics curriculum has brought attention to the various approaches to probability and statistics concepts, as well as how to teach these concepts. Teachers from fourth, fifth, and sixth grades from a small suburban Catholic school engaged their students (n=87) in a study to compare learning traditional probability concepts to learning traditional and subjective probability concepts. The control group (n=44) received instruction in traditional probability, while the experimental group (n=43) received instruction in traditional and subjective probability. A Multivariate Analysis of Variance and a Bayesian t-test were used to analyze pretest and posttest scores from the Making Decisions about Chance Questionnaire (MDCQ). Researcher observational notes, teacher journal entries, student activity worksheet explanations, pre- and post-test answers, and student interviews were coded for themes. All groups showed significant improvement on the post-MDCQ (p < .01). There was a disordinal interaction between the combined fifth- and sixth-grade experimental group (n=28) and the control group (n=28), however the mean difference in performance on the pre-MDCQ and post-MDCQ was not significant (p=.096). A Bayesian t-test indicated that there is reasonable evidence to believe that the mean of the experimental group exceeded the mean of the control group. Qualitative data showed that while students have beliefs about probabilistic situations based on their past experiences and prior knowledge, and often use this information to make probability judgments, they find traditional probability problems easier than subjective probability. Further research with different grade levels, larger sample sizes or different activities would develop learning theory in this area and may provide insight about probability judgments previously labeled as misconceptions by researchers.
9

Lindsay, David George. "Machine learning techniques for probability forecasting and their practical evaluations". Thesis, Royal Holloway, University of London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445274.

10

Kornfeld, Sarah. "Predicting Default Probability in Credit Risk using Machine Learning Algorithms". Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275656.

Annotation:
This thesis has explored the field of internally developed models for measuring the probability of default (PD) in credit risk. As regulators put restrictions on modelling practices and inhibit the advance of risk measurement, the fields of data science and machine learning are advancing. The tradeoff between stricter regulation on internally developed models and the advancement of data analytics was investigated by comparing model performance of the benchmark method Logistic Regression for estimating PD with the machine learning methods Decision Trees, Random Forest, Gradient Boosting and Artificial Neural Networks (ANN). The data was supplied by SEB and contained 45 variables and 24 635 samples. As the machine learning techniques become increasingly complex to favour enhanced performance, it is often at the expense of the interpretability of the model. An exploratory analysis was therefore made with the objective of measuring variable importance in the machine learning techniques. The findings from the exploratory analysis were compared to the results from benchmark methods that exist for measuring variable importance. The results of this study show that logistic regression outperformed the machine learning techniques based on the model performance measure AUC with a score of 0.906. The findings from the exploratory analysis did increase the interpretability of the machine learning techniques and were validated by the results from the benchmark methods.
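As a schematic illustration of the comparison described in the annotation (on a synthetic, imbalanced data set standing in for the SEB data, which are not public; all model settings are assumptions), a benchmark logistic regression and a gradient boosting model can be compared on AUC as follows.

    # Sketch: benchmark logistic regression vs. a machine learning model on AUC for a
    # default-probability-style binary classification task. Data and settings are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=24_635, n_features=45, weights=[0.95], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = {
        "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "gradient boosting": GradientBoostingClassifier(random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))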

Books on the topic "Probability learning":

1

Batanero, Carmen, Egan J. Chernoff, Joachim Engel, Hollylynne S. Lee, and Ernesto Sánchez. Research on Teaching and Learning Probability. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31625-3.

2

DasGupta, Anirban. Probability for Statistics and Machine Learning. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-9634-3.

3

Peck, Roxy. Statistics: Learning from data. Australia: Brooks/Cole, Cengage Learning, 2014.

4

Unpingco, José. Python for Probability, Statistics, and Machine Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18545-9.

5

Unpingco, José. Python for Probability, Statistics, and Machine Learning. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30717-6.

6

Unpingco, José. Python for Probability, Statistics, and Machine Learning. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04648-3.

7

Powell, Warren B. Optimal learning. Hoboken, New Jersey: Wiley, 2012.

8

Vapnik, Vladimir Naumovich. The Nature of Statistical Learning Theory. New York, NY: Springer New York, 1995.

9

DasGupta, Anirban. Probability for statistics and machine learning: Fundamentals and advanced topics. New York: Springer, 2011.

10

Wan, Shibiao. Machine learning for protein subcellular localization prediction. Boston: De Gruyter, 2015.


Book chapters on the topic "Probability learning":

1

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag et al. "Posterior Probability". In Encyclopedia of Machine Learning, 780. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_648.

2

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag et al. "Prior Probability". In Encyclopedia of Machine Learning, 782. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_658.

3

Kumar Singh, Bikesh, and G. R. Sinha. "Probability Theory". In Machine Learning in Healthcare, 23–33. New York: CRC Press, 2022. http://dx.doi.org/10.1201/9781003097808-2.

4

Unpingco, José. "Probability". In Python for Probability, Statistics, and Machine Learning, 35–100. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30717-6_2.

5

Unpingco, José. "Probability". In Python for Probability, Statistics, and Machine Learning, 39–121. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18545-9_2.

6

Unpingco, José. "Probability". In Python for Probability, Statistics, and Machine Learning, 47–134. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04648-3_2.

7

Faul, A. C. "Probability Theory". In A Concise Introduction to Machine Learning, 7–61. Chapman & Hall/CRC Machine Learning & Pattern Recognition. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781351204750-2.

8

Ghatak, Abhijit. "Probability and Distributions". In Machine Learning with R, 31–56. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6808-9_2.

9

Forsyth, David. "Clustering Using Probability Models". In Applied Machine Learning, 183–202. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18114-7_9.

10

Webb, Geoffrey I. "Posterior Probability". In Encyclopedia of Machine Learning and Data Mining, 989–90. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_648.


Conference papers on the topic "Probability learning":

1

Temlyakov, V. N. "Optimal estimators in learning theory". In Approximation and Probability. Warsaw: Institute of Mathematics, Polish Academy of Sciences, 2006. http://dx.doi.org/10.4064/bc72-0-23.

2

Neville, Jennifer, David Jensen, Lisa Friedland, and Michael Hay. "Learning relational probability trees". In the ninth ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/956750.956830.

3

Arieli, Itai, Yakov Babichenko, and Manuel Mueller-Frank. "Naive Learning Through Probability Matching". In EC '19: ACM Conference on Economics and Computation. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3328526.3329601.

4

Ha, Ming-hu, Zhi-fang Feng, Er-ling Du, and Yun-chao Bai. "Further Discussion on Quasi-Probability". In 2006 International Conference on Machine Learning and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icmlc.2006.258542.

5

Burgos, María, María Del Mar López-Martín, and Nicolás Tizón-Escamilla. "ALGEBRAIC REASONING IN PROBABILITY TASKS". In 14th International Conference on Education and New Learning Technologies. IATED, 2022. http://dx.doi.org/10.21125/edulearn.2022.0777.

6

Eugênio, Robson, Carlos Monteiro, Liliane Carvalho, José Roberto Costa Jr., and Karen François. "MATHEMATICS TEACHERS LEARNING ABOUT PROBABILITY LITERACY". In 14th International Technology, Education and Development Conference. IATED, 2020. http://dx.doi.org/10.21125/inted.2020.0272.

7

Silva, Jorge, and Shrikanth Narayanan. "Minimum Probability of Error Signal Representation". In 2007 IEEE Workshop on Machine Learning for Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/mlsp.2007.4414331.

8

Wang, Jingyi, and Boyang Zhang. "Survival Probability Assessment using Machine Learning Algorithms". In 2022 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE). IEEE, 2022. http://dx.doi.org/10.1109/mlise57402.2022.00097.

9

Scanlon, Eileen, Tim O'Shea, Randall B. Smith, and Yibing Li. "Supporting the distributed synchronous learning of probability". In the 2nd international conference. Morristown, NJ, USA: Association for Computational Linguistics, 1997. http://dx.doi.org/10.3115/1599773.1599801.

10

Shanbhag, Annapurna Anant, Chinmai Shetty, Alaka Ananth, Anjali Shridhar Shetty, K. Kavanashree Nayak, and B. R. Rakshitha. "Heart Attack Probability Analysis Using Machine Learning". In 2021 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER). IEEE, 2021. http://dx.doi.org/10.1109/discover52564.2021.9663631.


Reports of organizations on the topic "Probability learning":

1

Shute, Valerie J., and Lisa A. Gawlick-Grendell. An Experimental Approach to Teaching and Learning Probability: Stat Lady. Fort Belvoir, VA: Defense Technical Information Center, April 1996. http://dx.doi.org/10.21236/ada316969.

2

Ilyin, M. E. The distance learning course "Theory of probability, mathematical statistics and random functions". OFERNIO, December 2018. http://dx.doi.org/10.12731/ofernio.2018.23529.

3

Kriegel, Francesco. Learning description logic axioms from discrete probability distributions over description graphs (Extended Version). Technische Universität Dresden, 2018. http://dx.doi.org/10.25368/2022.247.

Annotation:
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family. Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic EL⊥, and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
4

Kriegel, Francesco. Learning General Concept Inclusions in Probabilistic Description Logics. Technische Universität Dresden, 2015. http://dx.doi.org/10.25368/2022.220.

Annotation:
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilised to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatisation of a base of GCIs is well-known.
5

Gribok, Andrei V., Kevin P. Chen, and Qirui Wang. Machine-Learning Enabled Evaluation of Probability of Piping Degradation in Secondary Systems of Nuclear Power Plants. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1634815.

6

Robson, Jennifer. The Canada Learning Bond, financial capability and tax-filing: Results from an online survey of low and modest income parents. SEED Winnipeg/Carleton University Arthur Kroeger College of Public Affairs, March 2022. http://dx.doi.org/10.22215/clb20220301.

Annotation:
Previous research has identified several likely causes of eligible non-participation in the Canada Learning Bond (CLB), including awareness, financial exclusion, and administrative barriers. This study expands on that research, with a particular focus on the role of tax-filing as an administrative obstacle to accessing the CLB. I present results from an online survey of low and modest income parents (n=466) conducted in 2021. We find that, even among parents reporting they have received the CLB (46%), a majority (51%) report low confidence in their familiarity with the program, and more than one in six (17%) are unaware of the need to file tax returns to maintain eligibility for annual CLB payments. Self-reported regular tax-filing is associated with a 59% increase in the probability of accessing the CLB, even when controlling for a range of parental characteristics. This study confirms previous work by Harding and colleagues (2019) that non-filing may explain some share of eligible non-participation in education savings incentives. Tax-filing services may be an important pathway to improve CLB access. Low and modest income parents show substantial diversity in their preferred filing methods and outreach efforts cannot be concentrated in only one avenue if they are to be successful. The study also tests a small ‘nudge’ to address gaps in awareness and finds that information-only approaches to outreach are likely to have limited success, even with motivated populations.
7

Moreno Pérez, Carlos, and Marco Minozzo. "Making Text Talk": The Minutes of the Central Bank of Brazil and the Real Economy. Madrid: Banco de España, November 2022. http://dx.doi.org/10.53479/23646.

Annotation:
This paper investigates the relationship between the views expressed in the minutes of the meetings of the Central Bank of Brazil’s Monetary Policy Committee (COPOM) and the real economy. It applies various computational linguistic machine learning algorithms to construct measures of the minutes of the COPOM. First, we create measures of the content of the paragraphs of the minutes using Latent Dirichlet Allocation (LDA). Second, we build an uncertainty index for the minutes using Word Embedding and K-Means. Then, we combine these indices to create two topic-uncertainty indices. The first one is constructed from paragraphs with a higher probability of topics related to “general economic conditions”. The second topic-uncertainty index is constructed from paragraphs that have a higher probability of topics related to “inflation” and the “monetary policy discussion”. Finally, we employ a structural VAR model to explore the lasting effects of these uncertainty indices on certain Brazilian macroeconomic variables. Our results show that greater uncertainty leads to a decline in inflation, the exchange rate, industrial production and retail trade in the period from January 2000 to July 2019.
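A compressed sketch of the first stage described above, per-paragraph topic probabilities from LDA (using scikit-learn on placeholder sentences rather than the COPOM minutes; the corpus, topic count, and vocabulary handling are assumptions, and the uncertainty-index and VAR stages are omitted):

    # Sketch: per-paragraph topic probabilities with LDA, the building block of the
    # topic-uncertainty indices described above. Corpus and settings are placeholders.
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    paragraphs = [
        "industrial production and retail trade weakened amid global uncertainty",
        "inflation expectations remain anchored and the policy rate was kept unchanged",
        "the exchange rate depreciated while economic activity recovered gradually",
        "the committee discussed risks to the inflation outlook and monetary policy",
    ]
    counts = CountVectorizer(stop_words="english").fit_transform(paragraphs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    # Rows are paragraphs, columns are topics; each row sums to one, so paragraphs can be
    # grouped by the topic ("general conditions" vs. "inflation/policy") they load on.
    print(lda.transform(counts).round(2))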
8

Clausen, Jay, Vuong Truong, Sophia Bragdon, Susan Frankenstein, Anna Wagner, Rosa Affleck, and Christopher Williams. Buried-object-detection improvements incorporating environmental phenomenology into signature physics. Engineer Research and Development Center (U.S.), September 2022. http://dx.doi.org/10.21079/11681/45625.

Annotation:
The ability to detect buried objects is critical for the Army. Therefore, this report summarizes the fourth year of an ongoing study to assess environmental phenomenological conditions affecting probability of detection and false alarm rates for buried-object detection using thermal infrared sensors. This study used several different approaches to identify the predominant environmental variables affecting object detection: (1) multilevel statistical modeling, (2) direct image analysis, (3) physics-based thermal modeling, and (4) application of machine learning (ML) techniques. In addition, this study developed an approach using a Canny edge methodology to identify regions of interest potentially harboring a target object. Finally, an ML method was developed to improve automatic target detection and recognition performance by accounting for environmental phenomenological conditions, improving performance by 50% over standard automatic target detection and recognition software.
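The Canny-based region-of-interest step mentioned in the annotation might look roughly like the following OpenCV sketch (a generic illustration on a synthetic image, not the report's implementation; all thresholds are assumptions).

    # Sketch: flag candidate regions of interest in a thermal-style image using Canny
    # edges and contour bounding boxes. The image and thresholds are illustrative only.
    import cv2
    import numpy as np

    image = np.full((200, 200), 120, dtype=np.uint8)      # uniform background
    cv2.circle(image, (140, 90), 18, 180, -1)             # warm patch standing in for a buried object
    image = cv2.GaussianBlur(image, (7, 7), 0)

    edges = cv2.Canny(image, threshold1=30, threshold2=90)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    regions = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20.0]
    print("candidate regions of interest (x, y, w, h):", regions)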
9

Yaroshchuk, Svitlana O., Nonna N. Shapovalova, Andrii M. Striuk, Olena H. Rybalchenko, Iryna O. Dotsenko, and Svitlana V. Bilashenko. Credit scoring model for microfinance organizations. [n. p.], February 2020. http://dx.doi.org/10.31812/123456789/3683.

Annotation:
The purpose of the work is the development and application of models for scoring assessment of microfinance institution borrowers; such a model increases the efficiency of work in the credit field. The object of research is lending, and the subject of the study is a scoring model for improving the quality of lending using machine learning methods. The objectives of the study are to determine the criteria for choosing a solvent borrower, to develop a model for early assessment, and to create software based on neural networks to determine the probability of loan default risk. The research methods used include analysis of the literature on banking scoring, artificial intelligence methods for scoring, modeling of a scoring estimation algorithm using neural networks, an empirical method for determining the optimal parameters of the training model, and object-oriented design and programming. The result of the work is a neural network scoring model with high computational accuracy and an implemented system for automatic customer lending.
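A minimal sketch of a neural-network scoring model of the kind the annotation describes (synthetic borrower data and a small scikit-learn network standing in for the authors' system; every name and parameter below is an assumption).

    # Sketch: a small neural-network credit scoring model that outputs an estimated
    # probability of loan default per borrower. Data and network size are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=5000, n_features=12, weights=[0.9], random_state=1)
    scorer = make_pipeline(StandardScaler(),
                           MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=1))
    scorer.fit(X, y)

    new_applicants = X[:3]                                # placeholder for incoming applications
    default_risk = scorer.predict_proba(new_applicants)[:, 1]
    print("estimated default probabilities:", default_risk.round(3))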
10

Tyshchenko, Yelyzaveta Yu, und Andrii M. Striuk. Актуальність розробки моделі адаптивного навчання. [б. в.], Dezember 2018. http://dx.doi.org/10.31812/123456789/2889.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
The learning process can be made most effective by transferring the educational process to an electronic environment. Adaptive testing enhances the accuracy, quality, and reliability of training as well as student interest, which makes students more motivated; it is a new approach that helps the student master most of the information. The introduction of an adaptive testing system improves student learning performance, and the effectiveness of the educational process depends on properly organized knowledge control. Adaptive testing involves changing the sequence of tasks during the testing process itself, taking into account the answers already received. While the test is being taken, a learner model is built for later use in selecting the next testing tasks, depending on the student's level of knowledge and individual characteristics. When calculating the assessment, the adaptive testing system takes into account the probability that the student can guess the answer, the number of attempts to pass the test, and the average result achieved across all attempts. The set of tasks for adaptive testing can be developed with each student's way of perceiving information in mind, so that students are offered tasks they are able to cope with and that interest them, which makes them more confident in their abilities and oriented toward successful completion of the course.
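One simple way to account for guessing when grading, in the spirit of the annotation (a hedged sketch of a standard correction-for-guessing formula, not the system the report describes; the 25% guess rate assumes four-option items), is to discount the observed proportion correct by the chance rate and average over attempts.

    # Sketch: correct observed test scores for guessing and average over attempts.
    # The guess_rate of 0.25 assumes four-option multiple-choice items; values are illustrative.
    def corrected_score(proportion_correct: float, guess_rate: float = 0.25) -> float:
        """Estimate mastery from an observed score, assuming wrong answers reflect guessing."""
        return max(0.0, (proportion_correct - guess_rate) / (1.0 - guess_rate))

    attempts = [0.55, 0.70, 0.80]                         # observed scores across several attempts
    corrected = [corrected_score(p) for p in attempts]
    print("corrected per-attempt scores:", [round(c, 2) for c in corrected])
    print("final grade (mean over attempts):", round(sum(corrected) / len(corrected), 2))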
