
Dissertations on the topic "Probability learning"


Consult the top 50 dissertations for your research on the topic "Probability learning".

Next to every work in the list there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication in PDF format and read its online abstract, provided the relevant parameters are present in the metadata.

Browse the dissertations across a wide range of disciplines and compile your bibliography correctly.

1

Gozenman, Filiz. „Interaction Of Probability Learning And Working Memory“. Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614535/index.pdf.

Abstract:
Probability learning is the ability to establish a relationship between stimuli and outcomes, based on their occurrence probabilities, through repeated feedback. Participants learn the task according to the cue-outcome relationship and try to gain an in-depth understanding of this relationship throughout the experiment. While learning demands are at their highest, people rely on their working memory. In this study 20 participants were presented with a probability learning task, and their prefrontal cortex activity was measured with functional near-infrared spectroscopy (fNIRS). It was hypothesized that as participants gained more knowledge of the probabilities they would learn the cue-outcome relationships and therefore rely less on their working memory; hence, as learning proceeds, a drop in the fNIRS signal is expected. We obtained results confirming our hypothesis: a significant negative correlation between dorsolateral prefrontal cortex activity and learning was found. Similarly, response time also decreased through the task, indicating that as learning proceeded participants made decisions faster. Participants used either the frequency matching or the maximization strategy to solve the task, in which they had to decide whether the blue or the red color was winning. With the frequency matching strategy they chose blue at the rate of winning for the blue choice; with the maximization strategy they chose blue almost always. Our task was designed such that the frequency for blue to win was 80%. We had hypothesized that the people in the frequency matching and maximization groups would show working memory differences that could be observed in the fNIRS signal. However, we were unable to detect this type of behavioral difference in the fNIRS signal. Overall, our study showed the relationship between probability learning and working memory, as depicted by brain activity in the dorsolateral prefrontal cortex, which is widely known as the central executive component of working memory.
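The pay-off difference between the two strategies described in this abstract is easy to verify numerically. The following sketch simulates both on an 80%-blue task; it is an illustration of the strategy definitions above, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
p_blue = 0.8                                  # blue wins on 80% of trials
outcomes = rng.random(10_000) < p_blue        # True where blue wins

# Frequency matching: choose blue at (roughly) its rate of winning.
matching = rng.random(10_000) < p_blue
print("frequency matching:", np.mean(matching == outcomes))  # ~0.68

# Maximization: always choose the more frequently winning color.
print("maximization:      ", np.mean(outcomes))              # ~0.80
```

Expected accuracy under matching is p^2 + (1-p)^2 = 0.68, versus p = 0.80 under maximization, which is why the distinction matters for the working-memory hypothesis tested above.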
2

Rysz, Teri. „Metacognition in Learning Elementary Probability and Statistics“. University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1099248340.

3

Bouchacourt, Diane. „Task-oriented learning of structured probability distributions“. Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:0665495b-afbb-483b-8bdf-cbc6ae5baeff.

Abstract:
Machine learning models automatically learn from historical data to predict unseen events. Such events are often represented as complex multi-dimensional structures, and in many cases there is high uncertainty in the prediction process. Research has developed probabilistic models to capture distributions of complex objects, but their learning objective is often agnostic of the evaluation loss. In this thesis, we address the aforementioned deficiency by designing probabilistic methods for structured object prediction that take into account the task at hand. First, we consider that the task at hand is explicitly known, but there is ambiguity in the prediction due to an unobserved (latent) variable. We develop a framework for latent structured output prediction that unifies existing empirical risk minimisation methods. We empirically demonstrate that for large and ambiguous latent spaces, performing prediction by minimising the uncertainty in the latent variable provides more accurate results. Empirical risk minimisation methods predict only a pointwise estimate of the output; however, there can be uncertainty on the output value itself. To tackle this deficiency, we introduce a novel type of model to perform probabilistic structured output prediction. Our training objective minimises a dissimilarity coefficient between the data distribution and the model's distribution. This coefficient is defined according to a loss of choice, so the objective can be tailored to the task loss. We empirically demonstrate the ability of our model to capture distributions over complex objects. Finally, we tackle a setting where the task loss is expressed implicitly. Specifically, we consider the case of grouped observations. We propose a new model for learning a representation of the data that decomposes according to the semantics behind this grouping, while allowing efficient test-time inference. We experimentally demonstrate that our model learns a disentangled and controllable representation, leverages grouping information when available, and generalises to unseen observations.
4

Li, Chengtao Ph D. Massachusetts Institute of Technology. „Diversity-inducing probability measures for machine learning“. Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121724.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 163-176).
Subset selection problems arise in machine learning within kernel approximation, experimental design, and numerous other applications. In such applications, one often seeks to select diverse subsets of items to represent the population. One way to select such diverse subsets is to sample according to Diversity-Inducing Probability Measures (DIPMs) that assign higher probabilities to more diverse subsets. DIPMs underlie several recent breakthroughs in mathematics and theoretical computer science, but their power has not yet been explored for machine learning. In this thesis, we investigate DIPMs, their mathematical properties, sampling algorithms, and applications. Perhaps the best known instance of a DIPM is a Determinantal Point Process (DPP). DPPs originally arose in quantum physics, and are known to have deep relations to linear algebra, combinatorics, and geometry. We explore applications of DPPs to kernel matrix approximation and kernel ridge regression.
In these applications, DPPs deliver strong approximation guarantees and obtain superior performance compared to existing methods. We further develop an MCMC sampling algorithm accelerated by Gauss-type quadratures for DPPs. The algorithm runs several orders of magnitude faster than the existing ones. DPPs lie in a larger class of DIPMs called Strongly Rayleigh (SR) Measures. Instances of SR measures display a strong negative dependence property known as negative association, and as such can be used to model subset diversity. We study mathematical properties of SR measures, and construct the first provably fast-mixing Markov chain that samples from general SR measures. As a special case, we consider an SR measure called Dual Volume Sampling (DVS), for which we present the first poly-time sampling algorithm.
While all the distributions over subsets considered so far are unconstrained, those of interest in the real world usually come with constraints due to prior knowledge, resource limitations or personal preferences. Hence we investigate sampling from constrained versions of DIPMs. Specifically, we consider DIPMs with cardinality constraints and matroid base constraints and construct poly-time approximate sampling algorithms for them. Such sampling algorithms will enable practical uses of constrained DIPMs in the real world.
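As an illustration of the sampling problem treated in this thesis, the sketch below implements a basic swap-move Markov chain for a k-DPP, where P(S) is proportional to det(L_S) for |S| = k. It is a minimal stand-in for, not a reproduction of, the accelerated samplers developed in the thesis; the kernel and parameters are made up for the example:

```python
import numpy as np

def kdpp_swap_chain(L, k, n_steps=2000, seed=0):
    """Sample a size-k subset S with P(S) proportional to det(L_S)
    via Metropolis swap moves (a basic chain, without the thesis's
    quadrature acceleration)."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    S = list(rng.choice(n, size=k, replace=False))

    def logdet(idx):
        sign, val = np.linalg.slogdet(L[np.ix_(idx, idx)])
        return val if sign > 0 else -np.inf

    cur = logdet(S)
    for _ in range(n_steps):
        i = rng.integers(k)                                  # element to swap out
        j = rng.choice([x for x in range(n) if x not in S])  # element to swap in
        prop = S.copy()
        prop[i] = j
        new = logdet(prop)
        if np.log(rng.random()) < new - cur:                 # accept by det ratio
            S, cur = prop, new
    return sorted(S)

# Diversity in action: an RBF kernel on the line favours spread-out subsets.
x = np.linspace(0, 1, 30)
L = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
print(kdpp_swap_chain(L, k=5))
```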
5

Hunt, Gareth David. „Reinforcement Learning for Low Probability High Impact Risks“. Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/77106.

Abstract:
We demonstrate a method of reinforcement learning that uses training in simulation. Our system generates an estimate of the potential reward and danger of each action as well as a measure of the uncertainty present in both. The system generates this by seeking out not only rewarding actions but also dangerous ones in the simulated training. During runtime our system is able to use this knowledge to avoid risks while accomplishing its tasks.
6

Słowiński, Witold. „Autonomous learning of domain models from probability distribution clusters“. Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=211059.

Abstract:
Nontrivial domains can be difficult to understand and the task of encoding a model of such a domain can be difficult for a human expert, which is one of the fundamental problems of knowledge acquisition. Model learning provides a way to address this problem by allowing a predictive model of the domain's dynamics to be learnt algorithmically, without human supervision. Such models can provide insight about the domain to a human or aid in automated planning or reinforcement learning. This dissertation addresses the problem of how to learn a model of a continuous, dynamic domain, from sensory observations, through the discretisation of its continuous state space. The learning process is unsupervised in that there are no predefined goals, and it assumes no prior knowledge of the environment. Its outcome is a model consisting of a set of predictive cause-and-effect rules which describe changes in related variables over brief periods of time. We present a novel method for learning such a model, which is centred around the idea of discretising the state space by identifying clusters of uniform density in the probability density function of variables, which correspond to meaningful features of the state space. We show that using this method it is possible to learn models exhibiting predictive power. Secondly, we show that applying this discretisation process to two-dimensional vector variables in addition to scalar variables yields a better model than only applying it to scalar variables and we describe novel algorithms and data structures for discretising one- and two-dimensional spaces from observations. Finally, we demonstrate that this method can be useful for planning or decision making in some domains where the state space exhibits stable regions of high probability and transitional regions of lesser probability. We provide evidence for these claims by evaluating the model learning algorithm in two dynamic, continuous domains involving simulated physics: the OpenArena computer game and a two-dimensional simulation of a bouncing ball falling onto uneven terrain.
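The core discretisation idea, cutting a variable's range at local minima of its estimated density so that each bin corresponds to one cluster of high density, can be sketched in a few lines for the one-dimensional case. This is an illustrative reconstruction of the idea, not the thesis's algorithm, and it ignores the two-dimensional extension:

```python
import numpy as np
from scipy.signal import argrelmin
from scipy.stats import gaussian_kde

def density_cut_points(samples, grid_size=512):
    """Place bin boundaries at local minima of an estimated density,
    so each resulting bin covers one high-density cluster."""
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    density = gaussian_kde(samples)(grid)
    return grid[argrelmin(density)[0]]       # cut points between clusters

# A bimodal variable is split near the valley between its two modes.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 0.8, 500)])
print(density_cut_points(data))
```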
7

Benson, Carol Trinko. „Assessing students' thinking in modeling probability contexts“. Normal, Ill.: Illinois State University, 2000. http://wwwlib.umi.com/cr/ilstu/fullcit?p9986725.

Abstract:
Thesis (Ph. D.)--Illinois State University, 2000.
Title from title page screen, viewed May 11, 2006. Dissertation Committee: Graham A. Jones (chair), Kenneth N. Berk, Patricia Klass, Cynthia W. Langrall, Edward S. Mooney. Includes bibliographical references (leaves 115-124) and abstract. Also available in print.
8

Rast, Jeanne D. „A Comparison of Learning Subjective and Traditional Probability in Middle Grades“. Digital Archive @ GSU, 2005. http://digitalarchive.gsu.edu/msit_diss/4.

Abstract:
The emphasis given to probability and statistics in the K-12 mathematics curriculum has brought attention to the various approaches to probability and statistics concepts, as well as how to teach these concepts. Teachers from the fourth, fifth, and sixth grades of a small suburban Catholic school engaged their students (n=87) in a study comparing the learning of traditional probability concepts with the learning of traditional and subjective probability concepts. The control group (n=44) received instruction in traditional probability, while the experimental group (n=43) received instruction in traditional and subjective probability. A multivariate analysis of variance and a Bayesian t-test were used to analyze pretest and posttest scores from the Making Decisions about Chance Questionnaire (MDCQ). Researcher observational notes, teacher journal entries, student activity worksheet explanations, pre- and post-test answers, and student interviews were coded for themes. All groups showed significant improvement on the post-MDCQ (p < .01). There was a disordinal interaction between the combined fifth- and sixth-grade experimental group (n=28) and the control group (n=28); however, the mean difference in performance on the pre-MDCQ and post-MDCQ was not significant (p = .096). A Bayesian t-test indicated that there is reasonable evidence to believe that the mean of the experimental group exceeded the mean of the control group. Qualitative data showed that while students have beliefs about probabilistic situations based on their past experiences and prior knowledge, and often use this information to make probability judgments, they find traditional probability problems easier than subjective probability. Further research with different grade levels, larger sample sizes or different activities would develop learning theory in this area and may provide insight into probability judgments previously labeled as misconceptions by researchers.
9

Lindsay, David George. „Machine learning techniques for probability forecasting and their practical evaluations“. Thesis, Royal Holloway, University of London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445274.

10

Kornfeld, Sarah. „Predicting Default Probability in Credit Risk using Machine Learning Algorithms“. Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275656.

Abstract:
This thesis has explored the field of internally developed models for measuring the probability of default (PD) in credit risk. As regulators put restrictions on modelling practices and inhibit the advance of risk measurement, the fields of data science and machine learning are advancing. The trade-off between stricter regulation of internally developed models and the advancement of data analytics was investigated by comparing the performance of the benchmark method, logistic regression, for estimating PD with the machine learning methods decision trees, random forest, gradient boosting and artificial neural networks (ANN). The data was supplied by SEB and contained 45 variables and 24,635 samples. As machine learning techniques become increasingly complex in favour of enhanced performance, it is often at the expense of the interpretability of the model. An exploratory analysis was therefore made with the objective of measuring variable importance in the machine learning techniques, and its findings were compared to the results from established benchmark methods for measuring variable importance. The results of this study show that logistic regression outperformed the machine learning techniques based on the model performance measure AUC, with a score of 0.906. The findings from the exploratory analysis did increase the interpretability of the machine learning techniques and were validated by the results from the benchmark methods.
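The benchmark setup described here (logistic regression versus a boosted ensemble, compared on AUC) is straightforward to reproduce in outline. The sketch below uses synthetic data as a stand-in for the proprietary SEB dataset, so its numbers will not match the thesis's 0.906:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 45 features, roughly 5% default rate, 24,635 samples.
X, y = make_classification(n_samples=24_635, n_features=45,
                           weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    print(name, roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```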
11

Ives, Sarah Elizabeth. „Learning to Teach Probability: Relationships among Preservice Teachers' Beliefs and Orientations, Content Knowledge, and Pedagogical Content Knowledge of Probability“. NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-11042009-144919/.

Abstract:
The purposes of this study were to investigate preservice mathematics teachers' orientations, content knowledge, and pedagogical content knowledge of probability; the relationships among these three aspects; and the usefulness of tasks with respect to examining these aspects of knowledge. The design of the study was a multi-case study of five secondary mathematics education preservice teachers, with a focus on their knowledge as well as on the tasks that were used in this study. Data from individual interviews and test items were collected and analyzed under a conceptual framework based on the work of Hill, Ball, and Schilling (2008); Kvatinsky and Even (2002); and Garuti, Orlandoni, and Ricci (2008). The researcher found that the preservice teachers held multiple orientations towards probability yet tended to be mostly objective (mathematical and statistical), with little evidence of subjective orientations. Relationships existed between the preservice teachers' orientations and their content knowledge, as well as their pedagogical content knowledge. These relationships were found more often in tasks where they were required to make a claim about a probability within some sort of real-world context. The researcher also found that tasks involving pedagogical situations tended to be more effective at eliciting knowledge than tasks involving only questions.
12

Nogales, Chris Lorena. „Robot Autonomous Fire Location using a Weighted Probability Algorithm“. Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73360.

Abstract:
Finding a fire inside a structure without knowing its conditions poses a dangerous threat to the safety of firefighters. As a result, robots are being explored to increase awareness of the conditions inside structures before having firefighters enter. This thesis presents a method that autonomously guides a robot to the location of a fire inside a structure. The method uses classification of fire, smoke, and other fire-environment objects to calculate a weighted probability, a measurement that indicates the probability that a given region in an infrared image will lead to the fire. The method was tested on large-scale fire videos with a robot moving towards a fire, and it was also compared against simply following the highest temperatures in the image. Sending a robot to find a fire has the potential to save the lives of firefighters.
13

Seow, Hsin-Vonn. „Using adaptive learning in credit scoring to estimate acceptance probability distribution“. Thesis, University of Southampton, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430721.

14

Miller, Erik G. (Erik Gundersen). „Learning from one example in machine vision by sharing probability densities“. Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29902.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 125-130).
Human beings exhibit rapid learning when presented with a small number of images of a new object. A person can identify an object under a wide variety of visual conditions after having seen only a single example of that object. This ability can be partly explained by the application of previously learned statistical knowledge to a new setting. This thesis presents an approach to acquiring knowledge in one setting and using it in another. Specifically, we develop probability densities over common image changes. Given a single image of a new object and a model of change learned from a different object, we form a model of the new object that can be used for synthesis, classification, and other visual tasks. We start by modeling spatial changes. We develop a framework for learning statistical knowledge of spatial transformations in one task and using that knowledge in a new task. By sharing a probability density over spatial transformations learned from a sample of handwritten letters, we develop a handwritten digit classifier that achieves 88.6% accuracy using only a single hand-picked training example from each class. The classification scheme includes a new algorithm, congealing, for the joint alignment of a set of images using an entropy minimization criterion. We investigate properties of this algorithm and compare it to other methods of addressing spatial variability in images. We illustrate its application to binary images, gray-scale images, and a set of 3-D neonatal magnetic resonance brain volumes.
Next, we extend the method of change modeling from spatial transformations to color transformations. By measuring statistically common joint color changes of a scene in an office environment, and then applying standard statistical techniques such as principal components analysis, we develop a probabilistic model of color change. We show that these color changes, which we call color flows, can be shared effectively between certain types of scenes. That is, a probability density over color change developed by observing one scene can provide useful information about the variability of another scene. We demonstrate a variety of applications including image synthesis, image matching, and shadow detection.
15

Makar, Maggie S. M. Massachusetts Institute of Technology. „Learning the probability of activation in the presence of latent spreaders“. Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111924.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 71-74).
When an infection spreads among members of a community, an individual's probability of becoming infected depends on both his susceptibility to the infection and exposure to the disease through contact with others. While one often has knowledge regarding an individual's susceptibility, in many cases, whether or not an individual's contacts are contagious and spreading the infection is unknown or latent. We propose a new generative model in which we model the neighbors' spreader states and the individuals' exposure states as latent variables. Combined with an individual's characteristics, we estimate the risk of infection as a function of both exposure and susceptibility. We propose a variational inference algorithm to learn the model parameters. Through a series of experiments on simulated data, we measure the ability of the proposed model to identify latent spreaders, estimate exposure as a function of one's spreading neighbors, and predict the risk of infection. Our work can be helpful in both identifying potential asymptomatic carriers of infections, and in identifying characteristics that are associated with an increased likelihood of being an undiagnosed source of contagion.
16

Hild, Andreas. „Estimating and Evaluating the Probability of Default – A Machine Learning Approach“. Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447385.

Abstract:
In this thesis, we analyse and evaluate classification models for panel credit risk data, estimating the probability of default. Variables are selected based on results from recursive feature elimination as well as on economic reasoning. We employ several machine learning and statistical techniques and assess the performance of each model based on AUC, the Brier score, and the absolute mean difference between the predicted and the actual outcome, using four-fold cross-validation and extensive hyperparameter optimization. The LightGBM model had the best performance, and many machine learning models showed superior performance compared to traditional models like logistic regression. Hence, the results of this thesis show that machine learning models such as gradient boosting models, neural networks and voting models have the capacity to challenge traditional statistical methods such as logistic regression within credit risk modelling.
17

Bonneau, Maxime. „Reinforcement Learning for 5G Handover“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-140816.

Abstract:
The development of the 5G network is in progress, and one part of the process that needs to be optimised is the handover. This operation, which consists of changing the base station (BS) providing data to a user equipment (UE), needs to be efficient enough to be seamless. From the BS point of view, this operation should be as economical as possible while satisfying the UE's needs. In this thesis, the problem of 5G handover is addressed, and the chosen tool to solve it is reinforcement learning. A review of the different methods proposed by reinforcement learning led to the restricted field of model-free, off-policy methods, more specifically the Q-learning algorithm. In its basic form, used with simulated data, this method yields information on which kind of reward and which kinds of action space and state space produce good results. However, despite working on some restricted datasets, the algorithm does not scale well due to lengthy computation times. This means that the trained agent cannot use much data for its learning process, and neither the state space nor the action space can be extended far, restricting the basic Q-learning algorithm to discrete variables. Since the strength of the signal (RSRP), which is of high interest for matching the UE's needs, is a continuous variable, a continuous form of Q-learning must be used. A function approximation method is then investigated, namely artificial neural networks. In addition to the lengthy computation times, the results obtained are not yet convincing. Thus, despite some interesting results obtained from the basic form of the Q-learning algorithm, the extension to the continuous case has not been successful. Moreover, the computation times make reinforcement learning applicable in our domain only on really powerful computers.
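For readers unfamiliar with the method, tabular Q-learning, the discrete algorithm this thesis starts from, fits in a dozen lines. The toy below invents a handover-flavoured reward (better RSRP bucket minus a handover penalty); the states, actions and reward are illustrative assumptions, not the thesis's simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bs, n_buckets = 3, 5                    # base stations x discrete RSRP levels
Q = np.zeros((n_bs * n_buckets, n_bs))    # state -> value of connecting to each BS
alpha, gamma, eps = 0.1, 0.9, 0.1         # learning rate, discount, exploration

def reward(state, action):
    bs, bucket = divmod(state, n_buckets)
    quality = bucket / (n_buckets - 1)               # higher RSRP, better service
    return quality - (0.5 if action != bs else 0.0)  # penalise a handover

for _ in range(50_000):
    s = rng.integers(Q.shape[0])
    a = rng.integers(n_bs) if rng.random() < eps else int(Q[s].argmax())
    s_next = a * n_buckets + rng.integers(n_buckets)  # new BS, random next RSRP
    Q[s, a] += alpha * (reward(s, a) + gamma * Q[s_next].max() - Q[s, a])
```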
18

Saive, Yannick. „DirCNN: Rotation Invariant Geometric Deep Learning“. Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252573.

Abstract:
Recently, geometric deep learning has introduced a new way for machine learning algorithms to tackle point cloud data in its raw form. Pioneers like PointNet, and the many architectures building on its success, recognize the importance of invariance to initial data transformations, which include shifting, scaling and rotating the point cloud in 3D space. Just as we want image-classifying machine learning models to classify an upside-down dog as a dog, we want geometric deep learning models to succeed on transformed data. As such, many models employ an initial data transform, learned as part of the neural network, to map the point cloud into a global canonical space. I see weaknesses in this approach, as such models are not guaranteed to be fully invariant to input data transformations, only approximately so. To combat this, I propose to use local deterministic transformations which do not need to be learned. The novel layer of this project builds upon edge convolutions and is thus dubbed DirEdgeConv, with the directional invariance in mind. This layer is slightly altered to introduce another layer by the name of DirSplineConv. These layers are assembled in a variety of models which are then benchmarked on the same tasks as their predecessors to allow a fair comparison. The results are not quite as good as state-of-the-art results, but are still respectable. I also believe that the results can be improved by tuning the learning rate and its scheduling. Another experiment, in which ablation is performed on the novel layers, shows that the layers' main concept indeed improves the overall results.
19

Sandberg, Martina. „Credit Risk Evaluation using Machine Learning“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138968.

Abstract:
In this thesis, we examine the machine learning models logistic regression, multilayer perceptron and random forests for the purpose of discriminating between good and bad credit applicants. In addition to these models, we address the problem of imbalanced data with the Synthetic Minority Over-sampling Technique (SMOTE). The available data has 273,286 entries and contains information about the applicant's invoice and the credit decision process, as well as information about the applicant. The data was collected during the period 2015-2017. With AUC values of about 73%, some patterns are found that can discriminate between customers who are likely to pay their invoice and customers who are not. However, the more advanced models performed only slightly better than the logistic regression.
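The imbalance handling named here is a standard library call; a minimal sketch with synthetic data (the thesis's invoice data is not public) looks as follows. Note that oversampling is applied to the training split only:

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced stand-in for the credit application data.
X, y = make_classification(n_samples=20_000, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample minority cases in the training data only; the test set
# keeps its natural class balance.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```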
20

Goerg, Georg Matthias. „Learning Spatio-Temporal Dynamics: Nonparametric Methods for Optimal Forecasting and Automated Pattern Discovery“. Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/218.

Abstract:
Many important scientific and data-driven problems involve quantities that vary over space and time. Examples include functional magnetic resonance imaging (fMRI), climate data, or experimental studies in physics, chemistry, and biology. Principal goals of many methods in statistics, machine learning, and signal processing are to use this data and i) extract informative structures and remove noisy, uninformative parts; ii) understand and reconstruct underlying spatio-temporal dynamics that govern these systems; and iii) forecast the data, i.e., describe the system in the future. Being data-driven problems, it is important to have methods and algorithms that work well in practice for a wide range of spatio-temporal processes as well as various data types. In this thesis I present such generally applicable statistical methods that address all three problems in a unifying manner. I introduce two new techniques for optimal nonparametric forecasting of spatiotemporal data: hard and mixed LICORS (Light Cone Reconstruction of States). Hard LICORS is a consistent predictive state estimator and extends previous work from Shalizi (2003); Shalizi, Haslinger, Rouquier, Klinkner, and Moore (2006); Shalizi, Klinkner, and Haslinger (2004) to continuous-valued spatio-temporal fields. Mixed LICORS builds on a new, fully probabilistic model of light cones and predictive states mappings, and is an EM-like version of hard LICORS. Simulations show that it has much better finite sample properties than hard LICORS. I also propose a sparse variant of mixed LICORS, which improves out-of-sample forecasts even further. Both methods can then be used to estimate local statistical complexity (LSC) (Shalizi, 2003), a fully automatic technique for pattern discovery in dynamical systems. Simulations and applications to fMRI data demonstrate that the proposed methods work well and give useful results in very general scientific settings. Lastly, I made most methods publicly available as R (R Development Core Team, 2010) or Python (Van Rossum, 2003) packages, so researchers can use these methods and better understand, forecast, and discover patterns in the data they study.
21

Nilsson, Viktor. „Prediction of Dose Probability Distributions Using Mixture Density Networks“. Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273610.

Abstract:
In recent years, machine learning has come to be utilized in external radiation therapy treatment planning. This involves automatic generation of treatment plans based on CT scans and other spatial information, such as the location of tumors and organs. The utility lies in relieving clinical staff from the labor of manually or semi-manually creating such plans. Rather than predicting a deterministic plan, there is great value in modeling it stochastically, i.e. predicting a probability distribution of dose from CT scans and delineated biological structures. The stochasticity inherent in the RT treatment problem stems from the fact that a range of different plans can be adequate for a patient. The particular distribution can be thought of as the prevalence of preferences among clinicians. Having more information about the range of possible plans represented in one model entails more flexibility in forming a final plan. Additionally, the model will be able to reflect potentially conflicting clinical trade-offs; these will occur as multimodal distributions of dose in areas with high variance. At RaySearch, the current method for doing this uses probabilistic random forests, an augmentation of the classical random forest algorithm. A current direction of research is learning the probability distribution using deep learning. A novel parametric approach is to let a suitable deep neural network approximate the parameters of a Gaussian mixture model in each volume element; such a neural network is known as a mixture density network. This thesis establishes theoretical results on artificial neural networks, mainly the universal approximation theorem applied to the activation functions used in the thesis. It then proceeds to investigate the power of deep learning in predicting dose distributions, both deterministically and stochastically. The primary objective is to investigate the feasibility of mixture density networks for stochastic prediction. The research question is the following: if U-nets and mixture density networks are combined to predict stochastic doses, does there exist such a network powerful enough to detect and model bimodality? The experiments and investigations performed in this thesis demonstrate that there is indeed such a network.
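The mixture density network idea, a network whose outputs are the weights, means and scales of a Gaussian mixture trained by negative log-likelihood, can be sketched compactly. This is a generic scalar-output MDN for illustration only; in the thesis the body is a 3-D U-net predicting per-voxel dose distributions:

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Map features x to the parameters of a K-component Gaussian mixture."""
    def __init__(self, d_in, K=2, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU())
        self.pi, self.mu = nn.Linear(hidden, K), nn.Linear(hidden, K)
        self.log_sigma = nn.Linear(hidden, K)

    def forward(self, x):
        h = self.body(x)
        return self.pi(h).log_softmax(-1), self.mu(h), self.log_sigma(h).exp()

def mdn_nll(log_pi, mu, sigma, y):
    # Negative log-likelihood of y under the predicted mixture; a bimodal
    # target shows up as two components with well-separated means.
    comp = torch.distributions.Normal(mu, sigma).log_prob(y.unsqueeze(-1))
    return -torch.logsumexp(log_pi + comp, dim=-1).mean()

model = MDN(d_in=10)
x, y = torch.randn(32, 10), torch.randn(32)   # toy batch
loss = mdn_nll(*model(x), y)
loss.backward()                               # gradients for one training step
```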
22

Eiter, Brianna M. „Disappearing effects of transitional probability on visual word recognition during reading“. Diss., Online access via UMI, 2005.

23

Vallin, Simon. „Small Cohort Population Forecasting via Bayesian Learning“. Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209274.

Abstract:
A set of distributional assumptions regarding the demographic processes of birth, death, emigration and immigration has been assembled to form a probabilistic model framework of population dynamics. This framework is summarized as a Bayesian network, and Bayesian inference techniques are exploited to infer the posterior distributions of the model parameters from observed data. The birth, death and emigration processes are modelled using a hierarchical beta-binomial model, for which inference of the posterior parameter distribution is analytically tractable. The immigration process is modelled with a Poisson-type regression model, where the posterior distribution of the parameters has to be estimated numerically; this thesis suggests an implementation of the Metropolis-Hastings algorithm for this task. Classification of incomers into subpopulations of age and gender is subsequently made using a hierarchical Dirichlet-multinomial model, for which parameter inference is analytically tractable. This model framework is used to generate forecasts of demographic data, which can be validated against the observed outcomes. A key feature of the Bayesian framework used is that it estimates the full posterior distributions of demographic data, which can take the full amount of uncertainty into account when forecasting population growth.
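The analytical tractability claimed for the beta-binomial part comes from conjugacy: a Beta(a, b) prior on an event probability, after observing k events in n trials, yields a Beta(a + k, b + n - k) posterior. A sketch with made-up cohort numbers (only the immigration part of the framework needs Metropolis-Hastings):

```python
import numpy as np

a, b = 1.0, 1.0          # flat Beta prior on, e.g., the yearly death probability
n, k = 1200, 14          # hypothetical cohort size and observed deaths

post_a, post_b = a + k, b + n - k               # conjugate posterior update

rng = np.random.default_rng(0)
q = rng.beta(post_a, post_b, size=10_000)       # posterior draws of the rate
deaths_next = rng.binomial(n, q)                # posterior predictive forecast
print(np.percentile(deaths_next, [5, 50, 95]))  # forecast with uncertainty
```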
24

McKinnon, Melissa Taylor. „Probability and Statistics for Third through Fifth Grade Classrooms“. Digital Commons @ East Tennessee State University, 2007. https://dc.etsu.edu/etd/2118.

Abstract:
This document contains a variety of lesson plans that can be readily used by a teacher of intermediate students. This thesis contains two units in Probability and one unit in Statistics. Any educator can supplement this document with any curriculum to teach lessons from vocabulary to concept.
25

Maginnis, Michael Abbot. „THE DEVELOPMENT OF A PREDICTIVE PROBABILITY MODEL FOR EFFECTIVE CONTINUOUS LEARNING AND IMPROVEMENT“. UKnowledge, 2012. http://uknowledge.uky.edu/me_etds/2.

Abstract:
It is important for organizations to understand the factors responsible for establishing sustainable continuous improvement (CI) capabilities. This study uses learning curves as the basis to examine the learning obtained by team members doing work with and without the application of fundamental aspects of the Toyota Production System. The results are used to develop an effective model to guide organizational activities towards achieving the ability to continuously improve in a sustainable fashion. This research examines the effect of standardization and waste elimination activities, supported by systematic problem solving, on team member learning at the work interface and on system performance. The results indicate that the application of Standard Work principles and the elimination of formally defined waste using the systematic 8-step problem solving process positively impact team member learning and performance, providing the foundation for continuous improvement. Compared to their untreated counterparts, treated teams exhibited increased, more uniformly distributed, and more sustained learning rates, as well as improved productivity as defined by decreased total throughput time and wait time. This was accompanied by reduced defect rates and a significant decrease in mental and physical team member burden. A major outcome of this research has been the creation of a predictive probability model to guide sustainable CI development, using a simplified assessment tool aimed at identifying essential organizational states required to support sustainable CI development.
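For reference, learning curves of the kind this study builds on are conventionally fitted with a power law, T(x) = a * x^(-b), where x is the repetition number and b the learning exponent. The sketch below fits that standard form to synthetic timings; it is background on the method, not the study's data or model:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * x ** (-b)          # classic learning curve: time per repetition

reps = np.arange(1, 21).astype(float)
rng = np.random.default_rng(0)
times = 60 * reps ** -0.25 + rng.normal(0, 1, reps.size)  # synthetic timings

(a_hat, b_hat), _ = curve_fit(power_law, reps, times, p0=(60.0, 0.2))
print(f"T(x) = {a_hat:.1f} * x^(-{b_hat:.2f})")
```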
26

Callahan, Philip. „Learning and development of probability concepts: Effects of computer-assisted instruction and diagnosis“. Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184873.

Abstract:
This study considered spontaneous versus feedback-induced changes in probability strategies using grouped trials of two-choice problems. Third- and sixth-grade Anglo and Apache children were the focus of computer-assisted instruction and diagnostics designed to maximize performance and measure understanding of probability concepts. Feedback, using indeterminate problems directed at specific strategies, in combination with a large problem set, permitted examination of response latency and hypothesis alternation. Explicit training, in the form of computer-based tutorials, administered feedback as (a) correctness and frequency information, (b) mathematical solutions, or (c) graphical displays, targeted at weaknesses in the prevailing strategy. The tutorials encouraged an optimal proportional strategy and sought to affect the memorial accessibility or availability of information through the vividness of presentation. As the subject's response selection was based on the query to select for the best chance of winning, each bucket of the two-choice bucket problems was coded as containing target or winner (W) balls and distractor or loser (L) balls. Third- and sixth-grade subjects came to the task with position-oriented strategies focusing on the winner or target elements. The strategies' sophistication was related to age, with older children displaying less confusion and using proportional reasoning to a greater extent than the third-grade children. Following the tutorial, the subjects displayed a marked decrease in winner strategies, shifting instead to strategies focusing on both the winners and losers; however, there was a general tendency to return to the simpler strategies over the course of the posttest. These simpler strategies provided the fastest response latencies within this study. Posttest results indicated that both third- and sixth-grade subjects had made comparable gains in the use of strategies addressing both winners and losers. Based on the results of a long-term written test, sixth-grade subjects appeared better able to retain or apply the knowledge that both winners and losers must be considered when addressing the two-choice bucket problems. Yet, for younger children, knowledge of these sophisticated strategies did not necessarily support generalization to other mathematical skills such as fraction understanding.
27

Damasse, Jean-Bernard. „Smooth pursuit eye movements and learning : role of motion probability and reinforcement contingencies“. Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0223/document.

Abstract:
One of the main challenges for living organisms is the ability to constantly adapt their motor behavior. In the first study of this thesis, we investigated the role of statistical regularities and operant conditioning on anticipatory smooth eye movements (aSPEM) in a large set of healthy participants. We provided evidence that aSPEM are generated coherently with the expected probability of motion direction. Furthermore, by manipulating reinforcement contingencies, our findings demonstrated for the first time that aSPEM can be considered an operant behavior. In a second study, we designed a novel two-target choice-tracking task, inspired by the Iowa Gambling Task (IGT), in which a choice-contingent reward was returned. We administered this new paradigm to Parkinson's disease (PD) patients as well as to age-matched and young adult controls. For young participants, choice latency was clearly shortened in the IGT-pursuit task compared to the control task. For PD patients, choice latency was delayed overall, and this difference could not be attributed to pure motor deficits. Overall, choice strategy performance was poor in all groups, suggesting possible differences between the standard IGT and our IGT-pursuit task in probing decision-making. The last contribution of this thesis is an attempt to model the relation between aSPEM velocity and the local direction bias. Two models were tested to account for trial-sequence effects, including either a decaying memory or a Bayesian adaptive estimation of the efficient memory size. Our results suggest that adaptive models could be used in the future to better assess statistical and reinforcement learning.
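The "decaying memory" model mentioned here is a leaky integrator of recent motion directions. A minimal sketch of that idea follows; the leak rate and the mapping to anticipatory velocity are illustrative assumptions, not the thesis's fitted model:

```python
import numpy as np

def direction_bias(directions, leak=0.2):
    """Leaky integrator over signed directions (+1 right, -1 left):
    v_t = (1 - leak) * v_{t-1} + leak * d_t. The running value v_t
    would scale the anticipatory pursuit velocity on the next trial."""
    v, trace = 0.0, []
    for d in directions:
        v = (1 - leak) * v + leak * d
        trace.append(v)
    return np.array(trace)

# A 75%-rightward block: the bias converges towards 2 * 0.75 - 1 = 0.5.
rng = np.random.default_rng(0)
dirs = np.where(rng.random(200) < 0.75, 1, -1)
print(direction_bias(dirs)[-5:])
```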
28

Lundell, Jill F. „Tuning Hyperparameters in Supervised Learning Models and Applications of Statistical Learning in Genome-Wide Association Studies with Emphasis on Heritability“. DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7594.

Abstract:
Machine learning is a buzzword that has inundated popular culture in the last few years. It is a term for a computer method that can automatically learn and improve from data instead of being explicitly programmed at every step. Investigations regarding the best way to create and use these methods are prevalent in research. Machine learning models can be difficult to create because they need to be tuned. This dissertation explores the characteristics of tuning three popular machine learning models and finds a way to automatically select a set of tuning parameters. This information was used to create an R software package called EZtune that can automatically tune three widely used machine learning algorithms: support vector machines, gradient boosting machines, and adaboost. The second portion of this dissertation investigates the implementation of machine learning methods for finding locations along a genome that are associated with a trait. The performance of methods that have been commonly used for these types of studies, and some that have not, is assessed using simulated data. The effect of the strength of the relationship between the genetic code and the trait is of particular interest. It was found that the strength of this relationship was the most important characteristic in the efficacy of each method.
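EZtune itself is an R package; as a language-neutral illustration of what automatically selecting tuning parameters means, the Python sketch below runs a cross-validated grid search over the two main SVM tuning parameters. It shows the underlying idea only and is not EZtune:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Search over the SVM's regularisation strength C and RBF kernel width gamma,
# scoring each pair by 5-fold cross-validated accuracy.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100],
                            "gamma": [1e-4, 1e-3, 1e-2, 1e-1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```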
29

Brookey, Carla M. „Application of Machine Learning and Statistical Learning Methods for Prediction in a Large-Scale Vegetation Map“. DigitalCommons@USU, 2017. https://digitalcommons.usu.edu/etd/6962.

Abstract:
Original analyses of a large vegetation cover dataset from Roosevelt National Forest in northern Colorado were carried out by Blackard (1998) and Blackard and Dean (1998; 2000). They compared the classification accuracies of linear and quadratic discriminant analysis (LDA and QDA) with artificial neural networks (ANN) and obtained an overall classification accuracy of 70.58% for a tuned ANN, compared to 58.38% for LDA and 52.76% for QDA. Because there has been tremendous development of machine learning classification methods over the last 35 years in both computer science and statistics, as well as substantial improvements in the speed of computer hardware, I applied five modern machine learning algorithms to the data to determine whether significant improvements in classification accuracy were possible using one or more of these methods. I found that only a tuned gradient boosting machine had a higher accuracy (71.62%) than the ANN of Blackard and Dean (1998), and the difference in accuracies was only about 1%. Of the other four methods, random forests (RF), support vector machines (SVM), classification trees (CT), and adaboosted trees (ADA), a tuned SVM and a tuned RF had accuracies of 67.17% and 67.57%, respectively. The partition of the data by Blackard and Dean (1998) was unusual in that the training and validation datasets had equal representation of the seven vegetation classes, even though 85% of the data fell into classes 1 and 2. For the second part of my analyses I randomly selected 60% of the data for the training set and 20% each for the validation and test sets. On this partition of the data, a single classification tree achieved an accuracy of 92.63% on the test data, and the accuracy of RF was 83.98%. Unsurprisingly, most of the gains in accuracy were in classes 1 and 2, the largest classes, which also had the highest misclassification rates under the original partition of the data. By decreasing the size of the training data but maintaining the same relative occurrences of the vegetation classes as in the full dataset, I found that even for a training dataset of the same size as that of Blackard and Dean (1998), a single classification tree was more accurate (73.80%) than their ANN (70.58%). The final part of my thesis was to explore the possibility that combining the predictions of several machine learning classifiers could result in higher predictive accuracies. In the analyses I carried out, the answer seems to be that increased accuracies do not occur with a simple voting of five machine learning classifiers.
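The dataset analysed here is distributed with scikit-learn as the covertype dataset (581,012 cells, 54 features, 7 cover classes), so the proportional-split experiment can be approximated directly. Exact accuracies depend on the split and tuning, so this sketch will not reproduce the thesis's figures:

```python
from sklearn.datasets import fetch_covtype
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = fetch_covtype(return_X_y=True)        # Roosevelt National Forest data

# A proportional (stratified) split, unlike the class-balanced partition
# used by Blackard and Dean (1998).
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("single-tree accuracy:", round(tree.score(X_te, y_te), 4))
```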
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Eriksson, Alexander, und Jacob Långström. „Comparison of Machine Learning Techniques when Estimating Probability of Impairment : Estimating Probability of Impairment through Identification of Defaulting Customers one year Ahead of Time“. Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160114.

Der volle Inhalt der Quelle
Annotation:
Probability of Impairment, or Probability of Default, is the ratio of customers within a segment who are expected not to fulfil their debt obligations and instead go into Default. This is a key metric within banking for estimating the level of credit risk, where the current standard is to estimate Probability of Impairment using Linear Regression. In this paper we show how this metric can instead be estimated through a classification approach with machine learning. By using models trained to find which specific customers will go into Default within the upcoming year, based on Neural Networks and Gradient Boosting, the Probability of Impairment is shown to be more accurately estimated than when using Linear Regression. Additionally, these models provide numerous real-life applications internally within the banking sector. The new features of importance we found can be used to strengthen the models currently in use, and the ability to identify customers about to go into Default lets banks take necessary actions ahead of time to cover otherwise unexpected risks.
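The classification approach can be sketched generically: predict each customer's probability of default with a classifier and average it over a segment. The data and the segment variable below are placeholders, not the thesis's models or features:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 8))                           # customer features (placeholder)
    y = (X[:, 0] + rng.normal(size=5000) > 2.3).astype(int)  # 1 = default within a year
    segment = rng.integers(0, 3, size=5000)                  # hypothetical customer segments

    clf = GradientBoostingClassifier(random_state=1).fit(X, y)
    p_default = clf.predict_proba(X)[:, 1]
    for s in range(3):
        # Segment-level Probability of Impairment as the mean predicted PD.
        print(f"segment {s}: estimated PD = {p_default[segment == s].mean():.3f}")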
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Bratières, Sébastien. „Non-parametric Bayesian models for structured output prediction“. Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274973.

Der volle Inhalt der Quelle
Annotation:
Structured output prediction is a machine learning task in which an input object is not just assigned a single class, as in classification, but multiple, interdependent labels. This means that the presence or value of a given label affects the other labels, for instance in text labelling problems, where output labels are applied to each word and their interdependencies must be modelled. Non-parametric Bayesian (NPB) techniques are probabilistic modelling techniques which have the interesting property of allowing model capacity to grow, in a controllable way, with data complexity, while maintaining the advantages of Bayesian modelling. In this thesis, we develop NPB algorithms to solve structured output problems. We first study a map-reduce implementation of a stochastic inference method designed for the infinite hidden Markov model, applied to a computational linguistics task, part-of-speech tagging. We show that mainstream map-reduce frameworks do not easily support highly iterative algorithms. The main contribution of this thesis consists in a conceptually novel discriminative model, GPstruct. It is motivated by labelling tasks, and combines attractive properties of conditional random fields (CRF), structured support vector machines, and Gaussian process (GP) classifiers. In probabilistic terms, GPstruct combines a CRF likelihood with a GP prior on factors; it can also be described as a Bayesian kernelized CRF. To train this model, we develop a Markov chain Monte Carlo algorithm based on elliptical slice sampling and investigate its properties. We then validate it on real data experiments, and explore two topologies: sequence output with text labelling tasks, and grid output with semantic segmentation of images. The latter case poses scalability issues, which are addressed using likelihood approximations and an ensemble method which allows distributed inference and prediction. The experimental validation demonstrates that: (a) the model is flexible and its constituent parts are modular and easy to engineer; (b) predictive performance and, most crucially, the probabilistic calibration of predictions are better than or equal to those of competitor models; and (c) model hyperparameters can be learnt from data.
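Elliptical slice sampling, the building block of the MCMC scheme mentioned above, is compact enough to sketch; this is a generic implementation of the algorithm of Murray, Adams and MacKay (2010), not code from the thesis:

    import numpy as np

    def elliptical_slice(f, nu, log_lik, rng):
        """One transition: f is the current latent vector, nu ~ N(0, Sigma)
        a fresh draw from the Gaussian prior, log_lik the log-likelihood."""
        log_y = log_lik(f) + np.log(rng.uniform())          # slice level
        theta = rng.uniform(0.0, 2.0 * np.pi)
        lo, hi = theta - 2.0 * np.pi, theta                 # initial bracket
        while True:
            f_new = f * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse
            if log_lik(f_new) > log_y:
                return f_new                                # accepted
            if theta < 0.0:                                 # shrink the bracket
                lo = theta
            else:
                hi = theta
            theta = rng.uniform(lo, hi)

    # Toy usage with an N(0, I) prior and a Gaussian likelihood centred at 1.
    rng = np.random.default_rng(0)
    f = np.zeros(3)
    for _ in range(100):
        f = elliptical_slice(f, rng.standard_normal(3),
                             lambda v: -np.sum((v - 1.0) ** 2), rng)
    print(f)

Because proposals stay on an ellipse through the current state and a prior draw, the Gaussian prior is kept invariant and no step-size tuning is needed.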
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Enver, Asad. „Modeling Trouble Ticket ResolutionTime Using Machine Learning“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176779.

Der volle Inhalt der Quelle
Annotation:
This thesis work, conducted at Telenor Sweden, aims to build a model that accurately predicts the resolution time of Priority 4 Trouble Tickets (the tickets that are generated most often, i.e. in the highest volumes per month). It explores and investigates the possibility of applying Machine Learning and Deep Learning techniques to trouble ticket data to find an optimal solution that performs better than the current method in place (which is explained in Section 3.5). The model would be used by Telenor to inform the end-users of when the networks team expects to resolve the issues that are affecting them.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Erdeniz, Burak. „Probability Learning In Normal And Parkinson Subjects: The Effect Of Reward, Context, And Uncertainty“. Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608877/index.pdf.

Der volle Inhalt der Quelle
Annotation:
In this thesis, the learning of probabilistic relationships between stimulus-action pairs is investigated under the probability learning paradigm. The effect of reward is investigated in the first three experiments. Additionally, the effects of context and uncertainty are investigated in the second and third experiments, respectively. The fourth experiment is a replication of the second experiment with a group of Parkinson patients, in which the effect of dopamine medication on probability learning is studied. In Experiment 1, we replicate the classical probability learning task by comparing monetary and non-monetary reward feedback. Probability learning behavior is observed in both the monetary and non-monetary feedback conditions; however, no significant difference between the two conditions is observed. In Experiment 2, a variation of the probability learning task which includes irrelevant contextual information is applied. Probability learning behavior is observed, and a significant effect is found between monetary and non-monetary feedback conditions. In Experiment 3, a probability learning task similar to that in Experiment 2 is applied; in this experiment, however, the stimulus included relevant contextual information. As expected, because the relevant contextual information could be utilized from the start of the experiment, no significant effect is found for probability learning behavior. The effect of uncertainty observed in this experiment replicates reports in the literature. Experiment 4 is identical to Experiment 2, except that the subject population is a group of dopamine-medicated Parkinson patients and a group of age-matched controls. This experiment was introduced to test suggestions in the literature regarding the enhancing effect of dopamine medication on probability learning under positive feedback conditions. In Experiment 4, probability learning behavior is observed in both groups, but the difference in learning performance between Parkinson patients and controls was not significant, probably due to the low number of subjects recruited. In addition to these investigations, learning mechanisms are also examined in Experiments 1 and 4. Our results indicate that subjects initially search for patterns, which leads to probability learning. By the end of Experiments 1 and 4, having learned the winning frequencies, subjects change their behavior and demonstrate maximization, continuously preferring one option over the other.
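The contrast between probability matching and maximization can be made concrete with a small expected-accuracy calculation; the 80% winning frequency below is an illustrative assumption, not a parameter reported for these experiments:

    # If one option wins on a fraction p of trials, frequency matching
    # (choose it with probability p) and maximization (always choose it)
    # give different expected win rates.
    p = 0.8
    matching = p * p + (1 - p) * (1 - p)   # win whenever choice and outcome agree
    maximizing = p
    print(f"frequency matching: {matching:.2f}, maximization: {maximizing:.2f}")
    # frequency matching: 0.68, maximization: 0.80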
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Sjöholm, Johan. „Probability as readability : A new machine learning approach to readability assessment for written Swedish“. Thesis, Linköpings universitet, NLPLAB - Laboratoriet för databehandling av naturligt språk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78107.

Der volle Inhalt der Quelle
Annotation:
This thesis explores the possibility of assessing the degree of readability of written Swedish using machine learning. An application using four levels of linguistic analysis has been implemented and tested with four different established algorithms for machine learning. The new approach has then been compared to established readability metrics for Swedish. The results indicate that the new method works significantly better for readability classification of both sentences and documents. The system has also been tested with so-called soft classification, which returns a probability for the degree of readability of a given text. This probability can then be used to rank texts according to probable degree of readability.
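Soft classification of this kind can be sketched generically: train any probabilistic classifier and rank texts by the predicted class probability. The features and model below are placeholders, not the four-level linguistic analysis of the thesis:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 5))                         # per-text linguistic features
    y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # 1 = easy to read

    clf = LogisticRegression().fit(X, y)
    p_easy = clf.predict_proba(X)[:, 1]                   # soft classification
    ranking = np.argsort(-p_easy)                         # most readable texts first
    print(ranking[:10])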
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Lindberg, Jesper. „Simulation driven reinforcement learning : Improving synthetic enemies in flight simulators“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166593.

Der volle Inhalt der Quelle
Annotation:
This project focuses on how to implement an Artificial Intelligence (AI) agent in a Tactical Simulator (Tacsi). Tacsi is a simulator used by Saab AB, among other things for pilot training. In this work, Tacsi is used to simulate air-to-air combat. The agent uses Reinforcement Learning (RL) to explore and learn how the simulator behaves. This knowledge is then exploited when the agent tries to beat a computer-controlled synthetic enemy. The results of this study may be used to produce better synthetic enemies for pilot training. The RL algorithm used in this work is deep Q-learning, a well-known algorithm in the field. The results show that it is possible to implement an RL agent in Tacsi which can learn from the environment and defeat the enemy in some scenarios. The results produced by the algorithm verify that an RL agent works within Tacsi at Saab AB. Although the performance of the agent in this work is not impressive, there is great opportunity for further development of the agent as well as the working environment.
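A single update step of deep Q-learning, the algorithm named above, can be sketched as follows; this is generic illustrative code (random placeholder transitions, no replay buffer or target network), not the Tacsi implementation:

    import torch
    import torch.nn as nn

    n_state, n_action, gamma = 8, 4, 0.99          # illustrative sizes
    q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_action))
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    # A batch of placeholder transitions (s, a, r, s').
    s = torch.randn(32, n_state)
    a = torch.randint(0, n_action, (32,))
    r = torch.randn(32)
    s2 = torch.randn(32, n_state)

    with torch.no_grad():
        target = r + gamma * q_net(s2).max(dim=1).values   # temporal-difference target
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) for the taken actions
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()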
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Rydén, Otto. „Statistical learning procedures for analysis of residential property price indexes“. Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207946.

Der volle Inhalt der Quelle
Annotation:
Residential Property Price Indexes (RPPIs) are used to study the price development of residential property over time. Modeling and analysing an RPPI is not straightforward, because residential property is a heterogeneous good. This thesis focuses on analysing the properties of the two most conventional hedonic index modeling approaches, the hedonic time dummy method and the hedonic imputation method. These two methods are analysed with statistical learning procedures from a regression perspective, specifically ordinary least squares regression and a number of more advanced regression approaches: Huber regression, lasso regression, ridge regression and principal component regression. The analysis is based on data from 56 000 apartment transactions in Stockholm during the period 2013-2016 and results in several models of an RPPI. These suggested models are then validated using both qualitative and quantitative methods, specifically bootstrap re-sampling to construct empirical confidence intervals for the index values and a mean squared error analysis of the different index periods. The main results of this thesis show that the hedonic time dummy index methodology produces indexes with smaller variances and more robust indexes for smaller datasets. It is further shown that modeling RPPIs with robust regression generally results in a more stable index that is less affected by outliers in the underlying transaction data. This type of robust regression strategy is therefore recommended for a commercial implementation of an RPPI.
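The hedonic time dummy method can be sketched in a few lines: regress log prices on hedonic characteristics plus one dummy per time period, and read the index off the exponentiated dummy coefficients. The data are simulated and only plain OLS is used; the thesis also considers robust and penalized variants:

    import numpy as np

    rng = np.random.default_rng(3)
    n, periods = 2000, 8
    size = rng.uniform(20, 120, n)                     # apartment size in m^2
    t = rng.integers(0, periods, n)                    # transaction period
    log_price = 10 + 0.01 * size + 0.02 * t + rng.normal(0, 0.1, n)

    D = np.zeros((n, periods - 1))                     # dummies for periods 1..T
    D[np.arange(n)[t > 0], t[t > 0] - 1] = 1.0
    X = np.column_stack([np.ones(n), size, D])
    beta = np.linalg.lstsq(X, log_price, rcond=None)[0]
    index = np.concatenate([[1.0], np.exp(beta[2:])])  # base period normalized to 1
    print(np.round(index, 3))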
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Rayburn-Reeves, Rebecca Marie. „AN ANALYSIS OF BEHAVIORAL FLEXIBILITY AND CUE PREFERENCE IN PIGEONS UNDER VARIABLE REVERSAL LEARNING CONDITIONS“. UKnowledge, 2011. http://uknowledge.uky.edu/psychology_etds/1.

Der volle Inhalt der Quelle
Annotation:
Behavioral flexibility, the ability to change behavior in accordance with the changing environment, was studied in pigeons using a series of reversal learning paradigms. All experiments involved a series of 5-trial sequences and I was interested in whether pigeons are sensitive to the reversal by switching to the other alternative after a single error. In Experiments 1 and 2, the overall probability of the two stimuli was equated over sequences, but the probability correct of the two stimuli changed across trials. In both experiments, subjects showed no sensitivity to the differences in sequence type. Instead they used the overall average of the probability of reinforcement on each trial as the basis of choice. In the final two experiments, the likelihood that a reversal would occur on a given trial was equated such that there was an equal overall probability that the two stimuli would be correct on a given trial, but the overall probability of each stimulus being correct across sequences favored the second correct stimulus (S2). In Experiment 3, the overall probability of S2 correct was 80%, and results showed that subjects consistently chose S2 regardless of sequence type or trial number. In Experiment 4, the overall likelihood of S2 correct was 65%, and results showed that subjects began all sequences at chance, and as the sequence progressed, began choosing S2 more often. In all experiments, subjects showed remarkably similar behavior regardless of where (or whether) the reversal occurred in a given sequence. Therefore, subjects appeared to be insensitive to the consequences of responses within a sequence (local information) and instead, seemed to be averaging over the sequences based on the overall probability of reinforcement for S1 or S2 being correct on each trial (aggregate information), thus not maximizing overall reinforcement. Together, the results of this series of experiments suggest that pigeons have a basic disposition for using the overall probability instead of using local feedback cues provided by the outcome of individual trials. The fact that pigeons do not use the more optimal information afforded by recent reinforcement contingencies to maximize reinforcement has implications for their use of flexible response strategies under reversal learning conditions.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

BELLODI, Elena. „Integration of Logic and Probability in Terminological and Inductive Reasoning“. Doctoral thesis, Università degli studi di Ferrara, 2013. http://hdl.handle.net/11392/2388897.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with Statistical Relational Learning (SRL), a research area combining principles and ideas from three important subfields of Artificial Intelligence: machine learning, knowledge representation and reasoning on uncertainty. Machine learning is the study of systems that improve their behavior over time with experience; the learning process typically involves a search through various generalizations of the examples, in order to discover regularities or classification rules. A wide variety of machine learning techniques have been developed in the past fifty years, most of which used propositional logic as a (limited) representation language. Recently, more expressive knowledge representations have been considered, to cope with a variable number of entities as well as the relationships that hold amongst them. These representations are mostly based on logic that, however, has limitations when reasoning on uncertain domains. These limitations have been lifted by a multitude of different formalisms combining probabilistic reasoning with logics, databases or logic programming, where probability theory provides a formal basis for reasoning on uncertainty. In this thesis we consider in particular the proposals for integrating probability in Logic Programming, since the resulting probabilistic logic programming languages present very interesting computational properties. In Probabilistic Logic Programming, the so-called "distribution semantics" has gained wide popularity. This semantics was introduced for the PRISM language (1995) but is shared by many other languages: Independent Choice Logic, Stochastic Logic Programs, CP-logic, ProbLog and Logic Programs with Annotated Disjunctions (LPADs). A program in one of these languages defines a probability distribution over normal logic programs called worlds. This distribution is then extended to queries, and the probability of a query is obtained by marginalizing the joint distribution of the query and the programs. The languages following the distribution semantics differ in the way they define the distribution over logic programs. The first part of this dissertation presents techniques for learning probabilistic logic programs under the distribution semantics. Two problems are considered: parameter learning and structure learning, that is, the problems of inferring values for the parameters, or both the structure and the parameters, of the program from data. This work contributes an algorithm for parameter learning, EMBLEM, and two algorithms for structure learning (SLIPCASE and SLIPCOVER) of probabilistic logic programs (in particular LPADs). EMBLEM is based on the Expectation Maximization approach and computes the expectations directly on the Binary Decision Diagrams that are built for inference. SLIPCASE performs a beam search in the space of LPADs, while SLIPCOVER performs a beam search in the space of probabilistic clauses and a greedy search in the space of LPADs, improving on SLIPCASE's performance. All learning approaches have been evaluated in several relational real-world domains. The second part of the thesis concerns the field of Probabilistic Description Logics, where we consider a logical framework suitable for the Semantic Web. Description Logics (DL) are a family of formalisms for representing knowledge.
Research in the field of knowledge representation and reasoning is usually focused on methods for providing high-level descriptions of the world that can be effectively used to build intelligent applications. Description Logics have been especially effective as the representation language for formal ontologies. Ontologies model a domain with the definition of concepts and their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, etc. They should also allow questions to be asked about the concepts and instances described, through inference procedures. Recently, the issue of representing uncertain information in these domains has led to probabilistic extensions of DLs. The contribution of this dissertation is twofold: (1) a new semantics for the Description Logic SHOIN(D), based on the distribution semantics for probabilistic logic programs, which embeds probability; (2) a probabilistic reasoner for computing the probability of queries from uncertain knowledge bases following this semantics. The explanations of queries are encoded in Binary Decision Diagrams, with the same technique employed in the learning systems developed for LPADs. This approach has been evaluated on a real-world probabilistic ontology.
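The distribution semantics described above can be made concrete with a small worked example (a hypothetical two-fact program, not one from the thesis): given probabilistic facts 0.4::a and 0.3::b and the rule q :- a ; b, each truth assignment to {a, b} is a "world", and P(q) sums the probabilities of the worlds where q is derivable:

    from itertools import product

    probs = {"a": 0.4, "b": 0.3}
    p_q = 0.0
    for a, b in product([True, False], repeat=2):
        # Probability of this world: independent choices for each fact.
        p_world = (probs["a"] if a else 1 - probs["a"]) * \
                  (probs["b"] if b else 1 - probs["b"])
        if a or b:                      # q is derivable in this world
            p_q += p_world
    print(p_q)                          # 0.58 = 1 - 0.6 * 0.7

Real systems avoid this exponential enumeration by compiling the explanations of the query into Binary Decision Diagrams, as the annotation notes.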
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Mirzaikamrani, Sonya. „Predictive modeling and classification for Stroke using the machine learning methods“. Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-81837.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Hedblom, Edvin, und Rasmus Åkerblom. „Debt recovery prediction in securitized non-performing loans using machine learning“. Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252311.

Der volle Inhalt der Quelle
Annotation:
Credit scoring using machine learning has been gaining attention within the research field in recent decades and it is widely used in the financial sector today. Studies covering binary credit scoring of securitized non-performing loans are however very scarce. This paper is using random forest and artificial neural networks to predict debt recovery for such portfolios. As a performance benchmark, logistic regression is used. Due to the nature of high imbalance between the classes, the performance is evaluated mainly on the area under both the receiver operating characteristic curve and the precision-recall curve. This paper shows that random forest, artificial neural networks and logistic regression have similar performance. They all indicate an overall satisfactory ability to predict debt recovery and hold potential to be implemented in day-to-day business related to non-performing loans.
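Evaluating with the area under both curves, as described above, is straightforward with standard tooling; this sketch uses placeholder data with rare positives, not the portfolio data of the thesis:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import average_precision_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    X = rng.normal(size=(4000, 6))
    y = (X[:, 0] + 0.5 * rng.normal(size=4000) > 2.0).astype(int)  # ~4% positives

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=4)
    p = RandomForestClassifier(random_state=4).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print("ROC-AUC:", roc_auc_score(y_te, p))
    print("PR-AUC :", average_precision_score(y_te, p))   # area under precision-recall

With heavy class imbalance the PR curve is usually the more informative of the two, since ROC-AUC can look optimistic when negatives vastly outnumber positives.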
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Truran, J. M. „The development of children's understanding of probability : and the application of research findings to classroom practice /“. Title page, contents and abstract only, 1992. http://web4.library.adelaide.edu.au/theses/09EDM/09edmt872.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Shipitsyn, Aleksey. „Statistical Learning with Imbalanced Data“. Thesis, Linköpings universitet, Filosofiska fakulteten, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139168.

Der volle Inhalt der Quelle
Annotation:
In this thesis several sampling methods for statistical learning with imbalanced data have been implemented and evaluated with a new metric, imbalanced accuracy. Several modifications and new algorithms have been proposed for intelligent sampling: Border Links, Clean Border Undersampling, One-Sided Undersampling Modified, DBSCAN Undersampling, Class Adjusted Jittering, Hierarchical Cluster Based Oversampling, DBSCAN Oversampling, Fitted Distribution Oversampling, Random Linear Combinations Oversampling, and Center Repulsion Oversampling. A set of requirements for a satisfactory performance metric for imbalanced learning has been formulated, and a new metric for evaluating classification performance has been developed accordingly. The new metric is based on a combination of the worst class accuracy and the geometric mean. In the testing framework, the nonparametric Friedman test and the post hoc Nemenyi test have been used to assess the performance of classifiers, sampling algorithms, and combinations of classifiers and sampling algorithms on several data sets. A new approach to detecting algorithms with dominating and dominated performance has been proposed, together with a new way of visualizing the results as a network. From experiments on simulated and several real data sets we conclude that: i) different classifiers are not equally sensitive to sampling algorithms, ii) sampling algorithms have different performance within specific classifiers, iii) oversampling algorithms perform better than undersampling algorithms, iv) Random Oversampling and Random Undersampling outperform many well-known sampling algorithms, v) our proposed algorithms Hierarchical Cluster Based Oversampling, DBSCAN Oversampling with FDO, and Class Adjusted Jittering perform much better than other algorithms, and vi) a few good combinations of a classifier and sampling algorithm may boost classification performance, while a few bad combinations may spoil it, but the majority of combinations do not differ significantly in performance.
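The abstract does not specify exactly how the worst class accuracy and the geometric mean are combined, so the following sketch is one plausible reading of such a metric, not the thesis's definition:

    import numpy as np

    def imbalanced_accuracy(y_true, y_pred):
        # Per-class recall, then combine the worst class with the geometric
        # mean of all classes (one plausible combination; an assumption).
        classes = np.unique(y_true)
        recalls = np.array([np.mean(y_pred[y_true == c] == c) for c in classes])
        return np.sqrt(recalls.min() * recalls.prod() ** (1.0 / len(recalls)))

    y_true = np.array([0] * 90 + [1] * 10)
    y_pred = np.array([0] * 90 + [1] * 5 + [0] * 5)   # majority perfect, minority 50%
    print(imbalanced_accuracy(y_true, y_pred))        # penalized despite 95% raw accuracy

Any metric of this shape is driven down by a single badly classified class, which is exactly the property raw accuracy lacks under imbalance.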
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Stagner, Jessica P. „INVESTIGATION OF THE MONTY HALL DILEMMA IN PIGEONS AND RATS“. UKnowledge, 2013. http://uknowledge.uky.edu/psychology_etds/31.

Der volle Inhalt der Quelle
Annotation:
In the Monty Hall Dilemma (MHD), three doors are presented with a prize behind one, and participants are instructed to choose a door. One of the unchosen doors is then shown not to have the prize, and the participant can choose to stay with their door or switch to the other one. The optimal strategy is to switch. Herbranson and Schroeder (2010) found that humans performed poorly on this task, whereas pigeons learned to switch readily. However, we found that pigeons learned to switch at a level only slightly above that of humans. We also found that pigeons stay nearly exclusively when staying is the optimal strategy and when staying and switching are reinforced equally (Stagner, Rayburn-Reeves, & Zentall, 2013). In Experiment 1, rats were trained under these same conditions to observe whether possible differences in foraging strategy would influence performance on this task. In Experiment 2, pigeons were trained in an analogous procedure to better compare the two species. We found that both species were sensitive to the overall probability of reinforcement, as both switched significantly more often than subjects in a group that was reinforced equally for staying and switching and a group that was reinforced more often for staying. Overall, the two species performed very similarly within the parameters of the current procedure.
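The advantage of switching that makes the MHD a dilemma is easy to verify by simulation; this is a generic sketch of the task, not the experimental procedure used with the animals:

    import random

    def play(switch, rng=random):
        doors = [0, 1, 2]
        prize, choice = rng.choice(doors), rng.choice(doors)
        # The host opens a door that is neither the choice nor the prize.
        opened = rng.choice([d for d in doors if d != choice and d != prize])
        if switch:
            choice = next(d for d in doors if d != choice and d != opened)
        return choice == prize

    n = 100_000
    print("stay  :", sum(play(False) for _ in range(n)) / n)   # ~0.33
    print("switch:", sum(play(True) for _ in range(n)) / n)    # ~0.67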
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Samson, Rachel D., Adam W. Lester, Leroy Duarte, Anu Venkatesh und Carol A. Barnes. „Emergence of β-Band Oscillations in the Aged Rat Amygdala during Discrimination Learning and Decision Making Tasks“. SOC NEUROSCIENCE, 2017. http://hdl.handle.net/10150/626610.

Der volle Inhalt der Quelle
Annotation:
Older adults tend to use strategies that differ from those used by young adults to solve decision-making tasks. MRI experiments suggest that altered strategy use during aging can be accompanied by a change in the extent of activation of a given brain region, inter-hemispheric bilateralization, or the recruitment of additional brain structures. It has been suggested that these changes reflect compensation for less effective networks to enable optimal performance. One way that communication can be influenced within and between brain networks is through oscillatory events that help structure and synchronize incoming and outgoing information. It is unknown how aging impacts local oscillatory activity within the basolateral complex of the amygdala (BLA). The present study recorded local field potentials (LFPs) and single units in old and young rats during the performance of tasks that involve discrimination learning and probabilistic decision making. We found task- and age-specific increases in power selectively within the beta range (15-30 Hz). The increased beta power occurred after lever presses, as old animals reached the goal location. Periods of high-power beta developed over training days in the aged rats and were greatest in early trials of a session. Beta power was also greater after pressing for the large reward option. These data suggest that aging of BLA networks results in strengthened synchrony of beta oscillations when older animals are learning or deciding between rewards of different size. Whether this increased synchrony reflects the neural basis of a compensatory strategy change of old animals in reward-based decision-making tasks remains to be verified.
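As a rough illustration of the kind of spectral measure involved, beta-band power can be estimated from an LFP trace with Welch's method; the signal and parameters below are synthetic assumptions, not the study's recordings or analysis pipeline:

    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.signal import welch

    fs = 1000.0                                   # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(5)
    lfp = np.sin(2 * np.pi * 22 * t) + rng.normal(0, 1, t.size)  # synthetic LFP

    f, psd = welch(lfp, fs=fs, nperseg=1024)      # power spectral density estimate
    band = (f >= 15) & (f <= 30)                  # beta range
    print("beta power:", trapezoid(psd[band], f[band]))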
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Parfionovas, Andrejus. „Enhancement of Random Forests Using Trees with Oblique Splits“. DigitalCommons@USU, 2013. http://digitalcommons.usu.edu/etd/1508.

Der volle Inhalt der Quelle
Annotation:
This work presents an enhancement to the classification tree algorithm which forms the basis for Random Forests. Unlike classical tree-based methods, which focus on one variable at a time to separate the observations, the new algorithm searches for the best split in a two-dimensional space using a linear combination of variables. Besides classification, the method can be used to detect variable interactions and perform feature extraction. Theoretical investigations and numerical simulations were used to analyze the properties and performance of the new approach. Comparison with other popular classification methods was performed using simulated and real data examples. The algorithm was implemented as an extension package for the statistical computing environment R and is available for free download under the GNU General Public License.
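A brute-force sketch of the idea of an oblique split follows, under the assumption that the best direction is searched over a grid of angles (the thesis's actual search procedure may differ):

    import numpy as np

    def gini(y):
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def best_oblique_split(X, y, n_angles=90):
        # Project onto directions w = (cos a, sin a) and pick the
        # angle/threshold pair with the lowest weighted Gini impurity.
        best = (np.inf, None, None)
        for a in np.linspace(0, np.pi, n_angles, endpoint=False):
            z = X @ np.array([np.cos(a), np.sin(a)])   # 1-D projection
            for thr in np.unique(z)[:-1]:
                left, right = y[z <= thr], y[z > thr]
                score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
                if score < best[0]:
                    best = (score, a, thr)
        return best                                     # (impurity, angle, threshold)

    rng = np.random.default_rng(6)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)            # a diagonal class boundary
    print(best_oblique_split(X, y))

On this example an axis-aligned split cannot separate the classes cleanly, while an oblique split along the diagonal can, which is the motivation for the enhancement.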
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Goodie, Adam S. „Base-rate neglect under direct experience /“. Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC IP addresses, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9719865.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Huang, Xin. „A study on the application of machine learning algorithms in stochastic optimal control“. Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252541.

Der volle Inhalt der Quelle
Annotation:
By observing a similarity between the goal of stochastic optimal control, to minimize an expected cost functional, and the aim of machine learning, to minimize an expected loss function, a method of applying machine learning algorithms to approximate the optimal control function is established and implemented via neural approximation. Based on a discretization framework, a recursive formula for the gradient of the approximated cost functional with respect to the parameters of the neural network is derived. For a well-known Linear-Quadratic-Gaussian control problem, the neural network approximation obtained with the stochastic gradient descent algorithm reproduces the shape of the theoretical optimal control function, and different machine learning optimization algorithms yield very similar accuracy in terms of their associated empirical value functions. Furthermore, it is shown that the accuracy and stability of the machine learning approximation can be improved by increasing the size of the minibatch and applying a finer discretization scheme. These results suggest the effectiveness and appropriateness of applying machine learning algorithms to stochastic optimal control.
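A minimal sketch of the approach described above, assuming simple one-dimensional linear dynamics and quadratic cost (the thesis's discretization scheme and problem setup are not reproduced here):

    import torch
    import torch.nn as nn

    # Fit a neural feedback control u = g(x) by stochastic gradient descent
    # on a simulated quadratic cost (illustrative dynamics and weights).
    policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    T, batch = 20, 256
    for step in range(500):
        x = torch.randn(batch, 1)                        # random initial states
        cost = torch.zeros(batch, 1)
        for _ in range(T):
            u = policy(x)
            cost = cost + x ** 2 + 0.1 * u ** 2          # running quadratic cost
            x = 0.9 * x + u + 0.1 * torch.randn(batch, 1)  # noisy linear dynamics
        loss = cost.mean()                               # empirical expected cost
        opt.zero_grad(); loss.backward(); opt.step()
    print(loss.item())

For a problem of this form the learned policy should approach a linear feedback law, which is what makes the LQG case a convenient benchmark.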
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Andersson, Carl. „Deep learning applied to system identification : A probabilistic approach“. Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-397563.

Der volle Inhalt der Quelle
Annotation:
Machine learning has been applied to sequential data for a long time in the field of system identification. As deep learning grew during the late 2000s, machine learning was again applied to sequential data, but from a new angle that did not utilize much of the knowledge from system identification. Likewise, the field of system identification has yet to adopt many of the recent advancements in deep learning. This thesis is a response to that. It introduces the field of deep learning in a probabilistic machine learning setting for problems known from system identification. Our goal for sequential modeling within the scope of this thesis is to obtain a model with good predictive and/or generative capabilities. The motivation behind this is that such a model can then be used in other areas, such as control or reinforcement learning. The model could also be used as a stepping stone for other machine learning problems or for purely recreational purposes. Papers I and II focus on how to apply deep learning to common system identification problems. Paper I introduces a novel way of regularizing the impulse response estimator for a system. In contrast to previous methods using Gaussian processes for this regularization, we propose to parameterize the regularization with a neural network and train it using a large dataset. Paper II introduces deep learning and many of its core concepts for a system identification audience. In the paper we also evaluate several contemporary deep learning models on standard system identification benchmarks. Paper III is the odd fish in the collection in that it focuses on the mathematical formulation and evaluation of calibration in classification, especially for deep neural networks. The paper proposes a new formalized notation for calibration and some novel ideas for the evaluation of calibration. It also provides some experimental results on calibration evaluation.
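For context, a classical ridge-regularized impulse-response estimate, the kind of fixed regularization that Paper I replaces with a learned one, can be sketched as follows (all parameters are illustrative):

    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(7)
    n, m = 500, 50                                  # samples, FIR length
    u = rng.normal(size=n)                          # input signal
    g_true = 0.8 ** np.arange(m)                    # a decaying true impulse response
    Phi = toeplitz(u, np.zeros(m))                  # rows: [u_t, u_{t-1}, ..., u_{t-m+1}]
    y = Phi @ g_true + 0.1 * rng.normal(size=n)     # noisy output

    lam = 1.0                                       # illustrative ridge weight
    g_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
    print(np.round(g_hat[:5], 3))

Gaussian-process-based regularizers replace the identity matrix above with a kernel encoding smoothness and decay; parameterizing that regularizer with a neural network is the step Paper I takes.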
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Bhat, Sooraj. „Syntactic foundations for machine learning“. Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47700.

Der volle Inhalt der Quelle
Annotation:
Machine learning has risen in importance across science, engineering, and business in recent years. Domain experts have begun to understand how their data analysis problems can be solved in a principled and efficient manner using methods from machine learning, with its simultaneous focus on statistical and computational concerns. Moreover, the data in many of these application domains has exploded in availability and scale, further underscoring the need for algorithms which find patterns and trends quickly and correctly. However, most people actually analyzing data today operate far from the expert level. Available statistical libraries and even textbooks contain only a finite sample of the possibilities afforded by the underlying mathematical principles. Ideally, practitioners should be able to do what machine learning experts can do--employ the fundamental principles to experiment with the practically infinite number of possible customized statistical models as well as alternative algorithms for solving them, including advanced techniques for handling massive datasets. This would lead to more accurate models, the ability in some cases to analyze data that was previously intractable, and, if the experimentation can be greatly accelerated, huge gains in human productivity. Fixing this state of affairs involves mechanizing and automating these statistical and algorithmic principles. This task has received little attention because we lack a suitable syntactic representation that is capable of specifying machine learning problems and solutions, so there is no way to encode the principles in question, which are themselves a mapping between problem and solution. This work focuses on providing the foundational layer for enabling this vision, with the thesis that such a representation is possible. We demonstrate the thesis by defining a syntactic representation of machine learning that is expressive, promotes correctness, and enables the mechanization of a wide variety of useful solution principles.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Truran, J. M. „The teaching and learning of probability, with special reference to South Australian schools from 1959-1994“. Title page, contents and abstract only, 2001. http://web4.library.adelaide.edu.au/theses/09PH/09pht872.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen