A selection of scientific literature on the topic "Probability learning"

Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Probability learning".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if the corresponding details are present in the source's metadata.

Journal articles on the topic "Probability learning"

1

Saeki, Daisuke. "Probability learning in golden hamsters." Japanese Journal of Animal Psychology 49, no. 1 (1999): 41–47. http://dx.doi.org/10.2502/janip.49.41.

2

Groth, Randall E., Jennifer A. Bergner, and Jathan W. Austin. "Dimensions of Learning Probability Vocabulary." Journal for Research in Mathematics Education 51, no. 1 (January 2020): 75–104. http://dx.doi.org/10.5951/jresematheduc.2019.0008.

Abstract:
Normative discourse about probability requires shared meanings for disciplinary vocabulary. Previous research indicates that students’ meanings for probability vocabulary often differ from those of mathematicians, creating a need to attend to developing students’ use of language. Current standards documents conflict in their recommendations about how this should occur. In the present study, we conducted microgenetic research to examine the vocabulary use of four students before, during, and after lessons from a cycle of design-based research attending to probability vocabulary. In characterizing students’ normative and nonnormative uses of language, we draw implications for the design of curriculum, standards, and further research. Specifically, we illustrate the importance of attending to incrementality, multidimensionality, polysemy, interrelatedness, and heterogeneity to foster students’ probability vocabulary development.
3

Groth, Randall E., Jennifer A. Bergner, and Jathan W. Austin. "Dimensions of Learning Probability Vocabulary." Journal for Research in Mathematics Education 51, no. 1 (January 2020): 75–104. http://dx.doi.org/10.5951/jresematheduc.51.1.0075.

4

Rivas, Javier. "Probability matching and reinforcement learning." Journal of Mathematical Economics 49, no. 1 (January 2013): 17–21. http://dx.doi.org/10.1016/j.jmateco.2012.09.004.

5

West, Bruce J. "Fractal Probability Measures of Learning." Methods 24, no. 4 (August 2001): 395–402. http://dx.doi.org/10.1006/meth.2001.1208.

6

Jiang, Xiaolei. "Conditional Probability in Machine Learning." Journal of Education and Educational Research 4, no. 2 (July 20, 2023): 31–33. http://dx.doi.org/10.54097/jeer.v4i2.10647.

Abstract:
To support the teaching of machine learning courses, this paper presents manipulation rules and application examples for conditional probabilities in machine learning. The emphasis is on making a clear distinction between reasonable assumptions and the logical deductions developed from assumptions and axioms. The formula for the conditional probability of a conditional probability is presented with examples in Bayesian coin tossing, Bayesian linear regression, and Gaussian processes for regression and classification. The signal-plus-noise model is formulated in terms of a proposition and exemplified by linear-Gaussian models and linear dynamical systems.
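The Bayesian coin-tossing example mentioned in the abstract can be illustrated with a minimal sketch (my own illustration, not code from the paper): a discrete prior over the coin's bias is updated to a posterior via the conditional-probability (Bayes) rule.

```python
# Bayesian coin tossing: a discrete prior over the coin bias theta,
# updated by Bayes' rule after observing k heads in n tosses.
from math import comb

def posterior(prior, n, k):
    """prior: dict theta -> P(theta). Returns dict theta -> P(theta | data)."""
    likelihood = {t: comb(n, k) * t**k * (1 - t)**(n - k) for t in prior}
    evidence = sum(prior[t] * likelihood[t] for t in prior)  # P(data)
    return {t: prior[t] * likelihood[t] / evidence for t in prior}

# Uniform prior over three candidate biases; observe 8 heads in 10 tosses.
prior = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}
post = posterior(prior, n=10, k=8)  # mass shifts toward theta = 0.7
```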
7

Malley, J. D., J. Kruppa, A. Dasgupta, K. G. Malley, and A. Ziegler. "Probability Machines." Methods of Information in Medicine 51, no. 01 (2012): 74–81. http://dx.doi.org/10.3414/me00-01-0052.

Abstract:
Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
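The paper's sample code is in R; as a rough Python analogue (an assumption on my part, not the authors' implementation), a random forest can serve as a "probability machine" because averaging per-tree class frequencies yields an individual probability estimate rather than only a binary label:

```python
# Random forest as a probability machine: predict_proba averages the
# per-tree class frequencies, giving individual risk estimates P(y=1 | x).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for patient data (features -> binary outcome).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = rf.predict_proba(X[:5])[:, 1]  # estimated probabilities for 5 individuals
```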
8

Dawson, Michael R. W. "Probability Learning by Perceptrons and People." Comparative Cognition & Behavior Reviews 15 (2022): 1–188. http://dx.doi.org/10.3819/ccbr.2019.140011.

9

Hirasawa, Kotaro, Masaaki Harada, Masanao Ohbayashi, Juuichi Murata, and Jinglu Hu. "Probability and Possibility Automaton Learning Network." IEEJ Transactions on Industry Applications 118, no. 3 (1998): 291–99. http://dx.doi.org/10.1541/ieejias.118.291.

10

Groth, Randall E., Jaime Butler, and Delmar Nelson. "Overcoming challenges in learning probability vocabulary." Teaching Statistics 38, no. 3 (May 26, 2016): 102–7. http://dx.doi.org/10.1111/test.12109.


Dissertations on the topic "Probability learning"

1

Gozenman, Filiz. "Interaction of Probability Learning and Working Memory." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614535/index.pdf.

Abstract:
Probability learning is the ability to establish a relationship between stimuli and outcomes based on occurrence probabilities, using repetitive feedback. Participants learn the task according to the cue-outcome relationship and try to gain an in-depth understanding of this relationship throughout the experiment. While learning is at the highest level, people rely on their working memory. In this study, 20 participants were presented with a probability learning task, and their prefrontal cortex activity was measured with functional near-infrared spectroscopy (fNIRS). It was hypothesized that as participants gained more knowledge of the probabilities, they would learn the cue-outcome relationships and therefore rely less on their working memory; a drop in the fNIRS signal was thus expected as learning proceeded. We obtained results confirming our hypothesis: a significant negative correlation between dorsolateral prefrontal cortex activity and learning was found. Similarly, response time also decreased through the task, indicating that as learning proceeded, participants made decisions faster. Participants used either the frequency matching or the maximization strategy to solve the task, in which they had to decide whether the blue or the red color was winning. When they used the frequency matching strategy, they chose blue at the rate at which blue won; when they used the maximization strategy, they chose blue almost always. Our task was designed such that the frequency for blue to win was 80%. We had hypothesized that the people in the frequency matching and maximization groups would show working memory differences observable in the fNIRS signal. However, we were unable to detect this type of behavioral difference in the fNIRS signal.
Overall, our study showed the relationship between probability learning and working memory, as reflected by brain activity in the dorsolateral prefrontal cortex, which is widely known as the central executive component of working memory.
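The two strategies described in the abstract have easily computed expected accuracies: with a cue that wins 80% of the time, frequency matching is correct with probability 0.8·0.8 + 0.2·0.2 = 0.68, while maximization is correct with probability 0.8. A small sketch (illustrative only, not taken from the thesis):

```python
# Expected accuracy of the two strategies for a cue that wins with
# probability p (p = 0.8, as in the thesis task).
import random

def expected_accuracy(p):
    matching = p * p + (1 - p) * (1 - p)  # choose blue with probability p
    maximizing = p                        # always choose blue
    return matching, maximizing

match_acc, max_acc = expected_accuracy(0.8)  # 0.68 vs 0.8

# Monte-Carlo check of the matching strategy: correct when the choice
# (blue with prob 0.8) agrees with the outcome (blue wins with prob 0.8).
random.seed(0)
trials = 100_000
hits = sum((random.random() < 0.8) == (random.random() < 0.8) for _ in range(trials))
```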
2

Rysz, Teri. "Metacognition in Learning Elementary Probability and Statistics." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1099248340.

3

Bouchacourt, Diane. "Task-oriented learning of structured probability distributions." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:0665495b-afbb-483b-8bdf-cbc6ae5baeff.

Abstract:
Machine learning models automatically learn from historical data to predict unseen events. Such events are often represented as complex multi-dimensional structures. In many cases there is high uncertainty in the prediction process. Research has developed probabilistic models to capture distributions of complex objects, but their learning objective is often agnostic of the evaluation loss. In this thesis, we address the aforementioned deficiency by designing probabilistic methods for structured object prediction that take into account the task at hand. First, we consider that the task at hand is explicitly known, but there is ambiguity in the prediction due to an unobserved (latent) variable. We develop a framework for latent structured output prediction that unifies existing empirical risk minimisation methods. We empirically demonstrate that for large and ambiguous latent spaces, performing prediction by minimising the uncertainty in the latent variable provides more accurate results. Empirical risk minimisation methods predict only a pointwise estimate of the output; however, there can be uncertainty about the output value itself. To tackle this deficiency, we introduce a novel type of model to perform probabilistic structured output prediction. Our training objective minimises a dissimilarity coefficient between the data distribution and the model's distribution. This coefficient is defined according to a loss of choice, so our objective can be tailored to the task loss. We empirically demonstrate the ability of our model to capture distributions over complex objects. Finally, we tackle a setting where the task loss is implicitly expressed. Specifically, we consider the case of grouped observations. We propose a new model for learning a representation of the data that decomposes according to the semantics behind this grouping, while allowing efficient test-time inference.
We experimentally demonstrate that our model learns a disentangled and controllable representation, leverages grouping information when available, and generalises to unseen observations.
4

Li, Chengtao. "Diversity-inducing probability measures for machine learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121724.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Subset selection problems arise in machine learning within kernel approximation, experimental design, and numerous other applications. In such applications, one often seeks to select diverse subsets of items to represent the population. One way to select such diverse subsets is to sample according to Diversity-Inducing Probability Measures (DIPMs) that assign higher probabilities to more diverse subsets. DIPMs underlie several recent breakthroughs in mathematics and theoretical computer science, but their power has not yet been explored for machine learning. In this thesis, we investigate DIPMs, their mathematical properties, sampling algorithms, and applications. Perhaps the best known instance of a DIPM is a Determinantal Point Process (DPP). DPPs originally arose in quantum physics, and are known to have deep relations to linear algebra, combinatorics, and geometry. We explore applications of DPPs to kernel matrix approximation and kernel ridge regression.
In these applications, DPPs deliver strong approximation guarantees and obtain superior performance compared to existing methods. We further develop an MCMC sampling algorithm accelerated by Gauss-type quadratures for DPPs. The algorithm runs several orders of magnitude faster than the existing ones. DPPs lie in a larger class of DIPMs called Strongly Rayleigh (SR) Measures. Instances of SR measures display a strong negative dependence property known as negative association, and as such can be used to model subset diversity. We study mathematical properties of SR measures, and construct the first provably fast-mixing Markov chain that samples from general SR measures. As a special case, we consider an SR measure called Dual Volume Sampling (DVS), for which we present the first poly-time sampling algorithm.
While all considered distributions over subsets are unconstrained, those of interest in the real world usually come with constraints due to prior knowledge, resource limitations or personal preferences. Hence we investigate sampling from constrained versions of DIPMs. Specifically, we consider DIPMs with cardinality constraints and matroid base constraints and construct poly-time approximate sampling algorithms for them. Such sampling algorithms will enable practical uses of constrained DIPMs in real world.
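The determinantal point processes discussed in the abstract can be illustrated by brute force on a tiny ground set (an illustrative sketch, not the thesis's accelerated samplers): an L-ensemble assigns each subset S the probability det(L_S) / det(L + I), which automatically favours diverse subsets.

```python
# Brute-force determinantal point process (DPP) on a 3-item ground set:
# P(S) = det(L_S) / det(L + I), so similar items rarely co-occur.
from itertools import combinations
import numpy as np

# L-ensemble kernel: items 0 and 1 are similar, item 2 is distinct.
L = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
Z = np.linalg.det(L + np.eye(3))  # normalizer over all subsets

def prob(subset):
    idx = list(subset)
    return np.linalg.det(L[np.ix_(idx, idx)]) / Z  # det of 0x0 matrix is 1

probs = {S: prob(S) for r in range(4) for S in combinations(range(3), r)}
# Subset probabilities sum to 1, and the diverse pair {0, 2} is more
# likely than the similar pair {0, 1}.
```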
5

Hunt, Gareth David. "Reinforcement Learning for Low Probability High Impact Risks." Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/77106.

Abstract:
We demonstrate a method of reinforcement learning that uses training in simulation. Our system generates an estimate of the potential reward and danger of each action as well as a measure of the uncertainty present in both. The system generates this by seeking out not only rewarding actions but also dangerous ones in the simulated training. During runtime our system is able to use this knowledge to avoid risks while accomplishing its tasks.
6

Słowiński, Witold. "Autonomous learning of domain models from probability distribution clusters." Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=211059.

Abstract:
Nontrivial domains can be difficult to understand and the task of encoding a model of such a domain can be difficult for a human expert, which is one of the fundamental problems of knowledge acquisition. Model learning provides a way to address this problem by allowing a predictive model of the domain's dynamics to be learnt algorithmically, without human supervision. Such models can provide insight about the domain to a human or aid in automated planning or reinforcement learning. This dissertation addresses the problem of how to learn a model of a continuous, dynamic domain, from sensory observations, through the discretisation of its continuous state space. The learning process is unsupervised in that there are no predefined goals, and it assumes no prior knowledge of the environment. Its outcome is a model consisting of a set of predictive cause-and-effect rules which describe changes in related variables over brief periods of time. We present a novel method for learning such a model, which is centred around the idea of discretising the state space by identifying clusters of uniform density in the probability density function of variables, which correspond to meaningful features of the state space. We show that using this method it is possible to learn models exhibiting predictive power. Secondly, we show that applying this discretisation process to two-dimensional vector variables in addition to scalar variables yields a better model than only applying it to scalar variables and we describe novel algorithms and data structures for discretising one- and two-dimensional spaces from observations. Finally, we demonstrate that this method can be useful for planning or decision making in some domains where the state space exhibits stable regions of high probability and transitional regions of lesser probability. 
We provide evidence for these claims by evaluating the model learning algorithm in two dynamic, continuous domains involving simulated physics: the OpenArena computer game and a two-dimensional simulation of a bouncing ball falling onto uneven terrain.
7

Benson, Carol Trinko. "Assessing students' thinking in modeling probability contexts." Normal, Ill.: Illinois State University, 2000. http://wwwlib.umi.com/cr/ilstu/fullcit?p9986725.

Abstract:
Thesis (Ph. D.)--Illinois State University, 2000.
Title from title page screen, viewed May 11, 2006. Dissertation Committee: Graham A. Jones (chair), Kenneth N. Berk, Patricia Klass, Cynthia W. Langrall, Edward S. Mooney. Includes bibliographical references (leaves 115-124) and abstract. Also available in print.
8

Rast, Jeanne D. "A Comparison of Learning Subjective and Traditional Probability in Middle Grades." Digital Archive @ GSU, 2005. http://digitalarchive.gsu.edu/msit_diss/4.

Abstract:
The emphasis given to probability and statistics in the K-12 mathematics curriculum has brought attention to the various approaches to probability and statistics concepts, as well as how to teach these concepts. Teachers from fourth, fifth, and sixth grades from a small suburban Catholic school engaged their students (n=87) in a study to compare learning traditional probability concepts to learning traditional and subjective probability concepts. The control group (n=44) received instruction in traditional probability, while the experimental group (n=43) received instruction in traditional and subjective probability. A Multivariate Analysis of Variance and a Bayesian t-test were used to analyze pretest and posttest scores from the Making Decisions about Chance Questionnaire (MDCQ). Researcher observational notes, teacher journal entries, student activity worksheet explanations, pre- and post-test answers, and student interviews were coded for themes. All groups showed significant improvement on the post-MDCQ (p < .01). There was a disordinal interaction between the combined fifth- and sixth-grade experimental group (n=28) and the control group (n=28); however, the mean difference in performance on the pre-MDCQ and post-MDCQ was not significant (p=.096). A Bayesian t-test indicated that there is reasonable evidence to believe that the mean of the experimental group exceeded the mean of the control group. Qualitative data showed that while students have beliefs about probabilistic situations based on their past experiences and prior knowledge, and often use this information to make probability judgments, they find traditional probability problems easier than subjective probability. Further research with different grade levels, larger sample sizes, or different activities would develop learning theory in this area and may provide insight about probability judgments previously labeled as misconceptions by researchers.
9

Lindsay, David George. "Machine learning techniques for probability forecasting and their practical evaluations." Thesis, Royal Holloway, University of London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445274.

10

Kornfeld, Sarah. "Predicting Default Probability in Credit Risk using Machine Learning Algorithms." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275656.

Abstract:
This thesis has explored the field of internally developed models for measuring the probability of default (PD) in credit risk. As regulators put restrictions on modelling practices and inhibit the advance of risk measurement, the fields of data science and machine learning are advancing. The tradeoff between stricter regulation on internally developed models and the advancement of data analytics was investigated by comparing the model performance of the benchmark method Logistic Regression for estimating PD with the machine learning methods Decision Trees, Random Forest, Gradient Boosting, and Artificial Neural Networks (ANN). The data was supplied by SEB and contained 45 variables and 24 635 samples. As machine learning techniques become increasingly complex to favour enhanced performance, it is often at the expense of the interpretability of the model. An exploratory analysis was therefore made with the objective of measuring variable importance in the machine learning techniques. The findings from the exploratory analysis were compared to the results from benchmark methods that exist for measuring variable importance. The results of this study show that logistic regression outperformed the machine learning techniques based on the model performance measure AUC, with a score of 0.906. The findings from the exploratory analysis did increase the interpretability of the machine learning techniques and were validated by the results from the benchmark methods.
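The abstract's benchmark setup, logistic regression for PD scored by AUC, can be sketched as follows (on synthetic imbalanced data standing in for the confidential SEB data; the 0.906 result is the thesis's, not this sketch's):

```python
# Benchmark-style PD model: logistic regression producing default
# probabilities, evaluated with AUC on a held-out set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic portfolio: ~10% defaults.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pd_hat = model.predict_proba(X_te)[:, 1]  # estimated probability of default
auc = roc_auc_score(y_te, pd_hat)
```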

Books on the topic "Probability learning"

1

Batanero, Carmen, Egan J. Chernoff, Joachim Engel, Hollylynne S. Lee, and Ernesto Sánchez. Research on Teaching and Learning Probability. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31625-3.

2

DasGupta, Anirban. Probability for Statistics and Machine Learning. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-9634-3.

3

Aggarwal, Charu C. Probability and Statistics for Machine Learning. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53282-5.

4

Chernoff, Egan J., Joachim Engel, Hollylynne S. Lee, and Ernesto Sánchez, eds. Research on Teaching and Learning Probability. Cham: Springer, 2016.

5

Unpingco, José. Python for Probability, Statistics, and Machine Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18545-9.

6

Unpingco, José. Python for Probability, Statistics, and Machine Learning. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30717-6.

7

Unpingco, José. Python for Probability, Statistics, and Machine Learning. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04648-3.

8

Powell, Warren B. Optimal learning. Hoboken, New Jersey: Wiley, 2012.

9

Peck, Roxy. Statistics: Learning from data. Australia: Brooks/Cole, Cengage Learning, 2014.

10

Knez, Igor. To know what to know before knowing: Acquisition of functional rules in probabilistic ecologies. Uppsala: Uppsala University, 1992.


Book chapters on the topic "Probability learning"

1

Glenberg, Arthur M., and Matthew E. Andrzejewski. "Probability." In Learning From Data, 105–19. 4th ed. New York: Routledge, 2024. http://dx.doi.org/10.4324/9781003025405-6.

2

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "Posterior Probability." In Encyclopedia of Machine Learning, 780. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_648.

3

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "Prior Probability." In Encyclopedia of Machine Learning, 782. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_658.

4

Kumar Singh, Bikesh, and G. R. Sinha. "Probability Theory." In Machine Learning in Healthcare, 23–33. New York: CRC Press, 2022. http://dx.doi.org/10.1201/9781003097808-2.

5

Unpingco, José. "Probability." In Python for Probability, Statistics, and Machine Learning, 35–100. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30717-6_2.

6

Unpingco, José. "Probability." In Python for Probability, Statistics, and Machine Learning, 39–121. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18545-9_2.

7

Unpingco, José. "Probability." In Python for Probability, Statistics, and Machine Learning, 47–134. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04648-3_2.

8

Faul, A. C. "Probability Theory." In A Concise Introduction to Machine Learning, 7–61. Chapman & Hall/CRC Machine Learning & Pattern Recognition. Boca Raton, FL: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9781351204750-2.

9

Aggarwal, Charu C. "Probability Distributions." In Probability and Statistics for Machine Learning, 127–90. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53282-5_4.

10

Ghatak, Abhijit. "Probability and Distributions." In Machine Learning with R, 31–56. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6808-9_2.


Conference papers on the topic "Probability learning"

1

Temlyakov, V. N. "Optimal estimators in learning theory." In Approximation and Probability. Warsaw: Institute of Mathematics Polish Academy of Sciences, 2006. http://dx.doi.org/10.4064/bc72-0-23.

2

Neville, Jennifer, David Jensen, Lisa Friedland, and Michael Hay. "Learning relational probability trees." In the ninth ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/956750.956830.

3

Arieli, Itai, Yakov Babichenko, and Manuel Mueller-Frank. "Naive Learning Through Probability Matching." In EC '19: ACM Conference on Economics and Computation. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3328526.3329601.

4

Sánchez, Ernesto, Sibel Kazak, and Egan J. Chernoff. "Teaching and Learning of Probability." In The 14th International Congress on Mathematical Education. WORLD SCIENTIFIC, 2024. http://dx.doi.org/10.1142/9789811287152_0035.

5

Ha, Ming-hu, Zhi-fang Feng, Er-ling Du, and Yun-chao Bai. "Further Discussion on Quasi-Probability." In 2006 International Conference on Machine Learning and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icmlc.2006.258542.

6

Burgos, María, María Del Mar López-Martín, and Nicolás Tizón-Escamilla. "ALGEBRAIC REASONING IN PROBABILITY TASKS." In 14th International Conference on Education and New Learning Technologies. IATED, 2022. http://dx.doi.org/10.21125/edulearn.2022.0777.

7

Herlau, Tue. "Active learning of causal probability trees." In 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2022. http://dx.doi.org/10.1109/icmla55696.2022.00193.

8

Eugênio, Robson, Carlos Monteiro, Liliane Carvalho, José Roberto Costa Jr., and Karen François. "MATHEMATICS TEACHERS LEARNING ABOUT PROBABILITY LITERACY." In 14th International Technology, Education and Development Conference. IATED, 2020. http://dx.doi.org/10.21125/inted.2020.0272.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Struski, Łukasz, Adam Pardyl, Jacek Tabor, and Bartosz Zieliński. "ProPML: Probability Partial Multi-label Learning." In 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 2023. http://dx.doi.org/10.1109/dsaa60987.2023.10302620.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Ramishetty, Sravani, and Abolfazl Hashemi. "High Probability Guarantees For Federated Learning." In 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2023. http://dx.doi.org/10.1109/allerton58177.2023.10313468.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Reports by organizations on the topic "Probability learning"

1

Shute, Valerie J., and Lisa A. Gawlick-Grendell. An Experimental Approach to Teaching and Learning Probability: Stat Lady. Fort Belvoir, VA: Defense Technical Information Center, April 1996. http://dx.doi.org/10.21236/ada316969.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Ilyin, M. E. The distance learning course "Theory of probability, mathematical statistics and random functions". OFERNIO, December 2018. http://dx.doi.org/10.12731/ofernio.2018.23529.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Kriegel, Francesco. Learning description logic axioms from discrete probability distributions over description graphs (Extended Version). Technische Universität Dresden, 2018. http://dx.doi.org/10.25368/2022.247.

Full text of the source
Abstract:
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family. Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic EL⊥, and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
APA, Harvard, Vancouver, ISO, and other styles
4

Kriegel, Francesco. Learning General Concept Inclusions in Probabilistic Description Logics. Technische Universität Dresden, 2015. http://dx.doi.org/10.25368/2022.220.

Full text of the source
Abstract:
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilised to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatisation of a base of GCIs is well-known.
APA, Harvard, Vancouver, ISO, and other styles
5

Gribok, Andrei V., Kevin P. Chen, and Qirui Wang. Machine-Learning Enabled Evaluation of Probability of Piping Degradation In Secondary Systems of Nuclear Power Plants. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1634815.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

de Luis, Mercedes, Emilio Rodríguez, and Diego Torres. Machine learning applied to active fixed-income portfolio management: a Lasso logit approach. Madrid: Banco de España, September 2023. http://dx.doi.org/10.53479/33560.

Full text of the source
Abstract:
The use of quantitative methods constitutes a standard component of the institutional investors’ portfolio management toolkit. In the last decade, several empirical studies have employed probabilistic or classification models to predict stock market excess returns, model bond ratings and default probabilities, as well as to forecast yield curves. To the authors’ knowledge, little research exists into their application to active fixed-income management. This paper contributes to filling this gap by comparing a machine learning algorithm, the Lasso logit regression, with a passive (buy-and-hold) investment strategy in the construction of a duration management model for high-grade bond portfolios, specifically focusing on US treasury bonds. Additionally, a two-step procedure is proposed, together with a simple ensemble averaging aimed at minimising the potential overfitting of traditional machine learning algorithms. A method to select thresholds that translate probabilities into signals based on conditional probability distributions is also introduced.
APA, Harvard, Vancouver, ISO, and other styles
7

Dinarte, Lelys, Pablo Egaña del Sol, and Claudia Martínez. When Emotion Regulation Matters: The Efficacy of Socio-Emotional Learning to Address School-Based Violence in Central America. Inter-American Development Bank, March 2024. http://dx.doi.org/10.18235/0012854.

Full text of the source
Abstract:
After-school programs (ASP) that keep youth protected while engaging them in socio-emotional learning might address school-based violent behaviors. This paper experimentally studies the socio-emotional-learning component of an ASP targeted to teenagers in public schools in the most violent neighborhoods of El Salvador, Honduras, and Guatemala. Participant schools were randomly assigned to different ASP variations, some of them including psychology-based interventions. Results indicate that including psychology-based activities as part of the ASP increases by 23 percentage points the probability that students are well-behaved at school. The effect is driven by the most at-risk students. Using data gathered from task-based games and AI-powered emotion-detection algorithms, this paper shows that improvement in emotion regulation is likely driving the effect. When comparing a psychology-based curriculum aiming to strengthen participants' character and another based on mindfulness principles, results show that the latter improves violent behaviors while reducing school dropout.
APA, Harvard, Vancouver, ISO, and other styles
8

Moreno Pérez, Carlos, and Marco Minozzo. “Making Text Talk”: The Minutes of the Central Bank of Brazil and the Real Economy. Madrid: Banco de España, November 2022. http://dx.doi.org/10.53479/23646.

Full text of the source
Abstract:
This paper investigates the relationship between the views expressed in the minutes of the meetings of the Central Bank of Brazil’s Monetary Policy Committee (COPOM) and the real economy. It applies various computational linguistic machine learning algorithms to construct measures of the minutes of the COPOM. First, we create measures of the content of the paragraphs of the minutes using Latent Dirichlet Allocation (LDA). Second, we build an uncertainty index for the minutes using Word Embedding and K-Means. Then, we combine these indices to create two topic-uncertainty indices. The first one is constructed from paragraphs with a higher probability of topics related to “general economic conditions”. The second topic-uncertainty index is constructed from paragraphs that have a higher probability of topics related to “inflation” and the “monetary policy discussion”. Finally, we employ a structural VAR model to explore the lasting effects of these uncertainty indices on certain Brazilian macroeconomic variables. Our results show that greater uncertainty leads to a decline in inflation, the exchange rate, industrial production and retail trade in the period from January 2000 to July 2019.
APA, Harvard, Vancouver, ISO, and other styles
9

Robson, Jennifer. The Canada Learning Bond, financial capability and tax-filing: Results from an online survey of low and modest income parents. SEED Winnipeg/Carleton University Arthur Kroeger College of Public Affairs, March 2022. http://dx.doi.org/10.22215/clb20220301.

Full text of the source
Abstract:
Previous research has identified several likely causes of eligible non-participation in the Canada Learning Bond (CLB), including awareness, financial exclusion, and administrative barriers. This study expands on that research, with a particular focus on the role of tax-filing as an administrative obstacle to accessing the CLB. I present results from an online survey of low and modest income parents (n=466) conducted in 2021. We find that, even among parents reporting they have received the CLB (46%), a majority (51%) report low confidence in their familiarity with the program, and more than one in six (17%) are unaware of the need to file tax returns to maintain eligibility for annual CLB payments. Self-reported regular tax-filing is associated with a 59% increase in the probability of accessing the CLB, even when controlling for a range of parental characteristics. This study confirms previous work by Harding and colleagues (2019) that non-filing may explain some share of eligible non-participation in education savings incentives. Tax-filing services may be an important pathway to improve CLB access. Low and modest income parents show substantial diversity in their preferred filing methods and outreach efforts cannot be concentrated in only one avenue if they are to be successful. The study also tests a small ‘nudge’ to address gaps in awareness and finds that information-only approaches to outreach are likely to have limited success, even with motivated populations.
APA, Harvard, Vancouver, ISO, and other styles
10

Schiefelbein, Ernesto, Paulina Schiefelbein, and Laurence Wolff. Cost-Effectiveness of Education Policies in Latin America: A Survey of Expert Opinion. Inter-American Development Bank, December 1998. http://dx.doi.org/10.18235/0008789.

Full text of the source
Abstract:
This paper provides an alternative approach to measuring the cost-effectiveness of educational interventions. The authors devised a questionnaire and gave it to ten international experts, mainly located in universities and international agencies, all of whom were well acquainted with educational research and with practical attempts at educational reform in the region; as well as to about 30 Latin American planner/practitioners, most of them working in the planning office of their ministry of education. Each respondent was asked to estimate the impact of 40 possible primary school interventions on learning as well as the probability of successful implementation. Using their own estimates of the incremental unit costs of these interventions, the authors created an innovative index ranking the cost-effectiveness of each of the 40 interventions.
APA, Harvard, Vancouver, ISO, and other styles