Dissertations / Theses on the topic 'Brain and learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Brain and learning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Evanshen, Pamela, and L. Phillips. "Brain Compatible Learning Environments." Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etsu-works/4368.
Evanshen, Pamela. "Brain-compatible Learning Environments." Digital Commons @ East Tennessee State University, 2007. https://dc.etsu.edu/etsu-works/4404.
Thurston, Roy J. "Brain injury, memory and learning." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0024/NQ49543.pdf.
Brodnax, Rita M. "Brain compatible teaching for learning." [Bloomington, Ind.] : Indiana University, 2004. http://wwwlib.umi.com/dissertations/fullcit/3173526.
Title from PDF t.p. (viewed Dec. 8, 2008). Source: Dissertation Abstracts International, Volume: 66-04, Section: A, page: 1257. Chair: Ron Barnes.
Parsapoor, Mahboobeh. "Brain Emotional Learning-Inspired Models." Licentiate thesis, Högskolan i Halmstad, Centrum för forskning om inbyggda system (CERES), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-25428.
Nair, Hemanth P. "Brain imaging of developmental learning effects /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004348.
Sperlich, Juntana Ginda. "Designing a brain-based learning environment." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3216.
Oscarsson, Jacob. "Exploring the Brain : Interactivity and Learning." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12329.
Amerineni, Rajesh. "Brain-Inspired Machine Learning Classification Models." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/dissertations/1806.
Olsson, Joakim. "A Critique of the Learning Brain." Thesis, Uppsala universitet, Avdelningen för teoretisk filosofi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-432105.
Morra, Jonathan Harold. "Learning methods for brain MRI segmentation." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1905693471&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.
Babalola, Karolyn Olatubosun. "Brain-computer interfaces for inducing brain plasticity and motor learning: implications for brain-injury rehabilitation." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41164.
Turaga, Srinivas C. "Learning image segmentation and hierarchies by learning ultrametric distances." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54626.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 100-105).
In this thesis I present new contributions to the fields of neuroscience and computer science. The neuroscientific contribution is a new technique for automatically reconstructing complete neural networks from densely stained 3d electron micrographs of brain tissue. The computer science contribution is a new machine learning method for image segmentation and the development of a new theory for supervised hierarchy learning based on ultrametric distance functions. It is well known that the connectivity of neural networks in the brain can have a dramatic influence on their computational function. However, our understanding of the complete connectivity of neural circuits has been quite impoverished due to our inability to image all the connections between all the neurons in a biological network. Connectomics is an emerging field in neuroscience that aims to revolutionize our understanding of the function of neural circuits by imaging and reconstructing entire neural circuits. In this thesis, I present an automated method for reconstructing neural circuitry from 3d electron micrographs of brain tissue. The cortical column, a basic unit of cortical microcircuitry, will produce a single 3d electron micrograph measuring many hundreds of terabytes once imaged, and will contain neurites from well over 100,000 different neurons. It is estimated that tracing the neurites in such a volume by hand would take several thousand human years. Automated circuit tracing methods are thus crucial to the success of connectomics.
In computer vision, the circuit reconstruction problem of tracing neurites is known as image segmentation. Segmentation is a grouping problem where image pixels belonging to the same neurite are clustered together. While many algorithms for image segmentation exist, few have parameters that can be optimized using ground-truth data to extract maximum performance on a specialized dataset. In this thesis, I present the first machine learning method to directly minimize an image segmentation error. It is based on the theory of ultrametric distances and hierarchical clustering. Image segmentation is posed as the problem of learning and classifying ultrametric distances between image pixels. Ultrametric distances on a point set have the special property that they correspond exactly to hierarchical clusterings of the set. This special property implies that hierarchical clustering can be learned by directly learning ultrametric distances. In this thesis, I develop convolutional networks as a machine learning architecture for image processing. I use this powerful pattern recognition architecture, with many tens of thousands of free parameters, for predicting affinity graphs and detecting object boundaries in images. When trained using ultrametric learning, the convolutional network based algorithm yields an extremely efficient linear-time segmentation algorithm. I also develop methods for assessing the quality of image segmentations produced by manual human effort or by automated computer algorithms. These methods are crucial for comparing the performance of different segmentation methods and are used throughout the thesis to demonstrate the quality of the reconstructions generated by the methods in this thesis.
by Srinivas C. Turaga.
Ph.D.
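The equivalence this abstract leans on can be made concrete in a few lines. Below is a minimal sketch (not the thesis's code; the toy points and the 1.0 threshold are invented for illustration) of how an ultrametric on a point set corresponds exactly to a hierarchical clustering, using SciPy's single-linkage tools:

import numpy as np
from scipy.cluster.hierarchy import cophenet, fcluster, linkage
from scipy.spatial.distance import pdist

# Toy "pixels": two tight groups standing in for two neurites.
points = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],
                   [5.0, 5.0], [5.1, 5.2]])
d = pdist(points)                      # pairwise distances
tree = linkage(d, method="single")     # hierarchical clustering (dendrogram)
_, ultra = cophenet(tree, d)           # ultrametric (cophenetic) distances

# An ultrametric satisfies d(x, z) <= max(d(x, y), d(y, z)); cutting the
# tree at any threshold yields a flat segmentation of the points.
labels = fcluster(tree, t=1.0, criterion="distance")
print(labels)                          # e.g. [1 1 1 2 2]

The thesis's contribution is to learn such distances from ground truth with a convolutional network, rather than compute them from fixed affinities as this toy does.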
Herson, Laurie A. "Brain-compatible research: using brain-based techniques to positively impact student learning." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001668.
Bradford-Meyer, Connie. "Follow-up of brain conference attendees and their application of brain research : a questionnaire approach /." ProQuest subscription required, 2003. http://proquest.umi.com/pqdweb?did=990270771&sid=1&Fmt=2&clientId=8813&RQT=309&VName=PQD.
Havaei, Seyed Mohammad. "Machine learning methods for brain tumor segmentation." Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10260.
Abstract: Malignant brain tumors are the second leading cause of death among children under 20. Nearly 700,000 people in the United States are living with a brain tumor, and each year 17,000 people are at risk of losing their lives to a primary malignant tumor of the central nervous system. To identify non-invasively whether a patient has a brain tumor, an MRI image of the brain is acquired and analyzed by hand by an expert to find lesions (i.e., a group of cells that differs from healthy tissue). A tumor and its regions must be detected by segmentation to support treatment. Brain tumor segmentation is mostly done by hand, a time-consuming procedure, and intra- and inter-expert variability on the same case is large. To address these problems, many automatic and semi-automatic methods have been proposed in recent years to help practitioners make decisions. Machine learning based methods have attracted strong interest in the field of brain tumor segmentation. The advent of deep learning methods and their success in many applications, such as image classification, has helped bring deep learning to the forefront of medical image analysis. In this thesis, we explore various machine learning and deep learning methods applied to brain tumor segmentation.
Petersson, Karl Magnus. "Learning and memory in the human brain /." Stockholm, 2005. http://diss.kib.ki.se/2005/91-7140-304-3/.
Munro, M., and M. Coetzee. "Mind the Gap: Beyond Whole-brain learning." South African Theatre Journal, 2008. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1000808.
Full textVon, Aulock Maryna. "Brain compatible learning in the radiation sciences." Thesis, Peninsula Technikon, 2003. http://hdl.handle.net/20.500.11838/1549.
Brain Compatible Learning (BCL), as its name suggests, is a type of learning which is aligned with how the human brain naturally learns and develops. BCL offers many different options and routes to learning as alternatives to conventional 'chalk and talk' methodologies. A BCL curriculum is planned to define the structure and content of a programme of learning, but it also provides opportunities for students to participate in activities which encourage and enhance the development of an active and deep approach to learning. Using BCL approaches in the classroom thus creates both a stimulating and a caring environment for student learning. This project researches a BCL intervention in a Radiation Science course. BCL techniques have tended to be used predominantly in the social sciences; this research fills an important 'gap' in the research literature by examining how BCL might be implemented in a technical and scientific context. The research was conducted using an adapted Participatory Action Research (PAR) methodology in which classroom interventions were planned (within a constructivist framework), implemented, and then reflected on by all participants. The PAR method was supplemented with a series of detailed questionnaires and interviews. The broad findings of this study relate to students' experiences of BCL in Radiation Science in terms of 'process' and 'product' issues. In terms of process, or the methodology of BCL, students' responses were largely positive.
Raina, Kevin. "Machine Learning Methods for Brain Lesion Delineation." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41156.
Zarogianni, Eleni. "Machine learning and brain imaging in psychosis." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/22814.
Pennington, Eva Patrice. "Brain-based learning theory : the incorporation of movement to increase learning /." Lynchburg, Va. : Liberty University, 2010. http://digitalcommons.liberty.edu.
Lee, Hyangsook. "The brain and learning: Examining the connection between brain activity, spatial intelligence, and learning outcomes in online visual instruction." Thesis, Kent State University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3618876.
The purpose of the study was to compare 2D and 3D visual presentation styles, both still frame and animation, on subjects' brain activity measured by the amplitude of the EEG alpha wave and on their recall, to see if alpha power and recall differ significantly by depth and movement of visual presentation style and by spatial intelligence. In addition, the study sought to determine whether there is any significant interaction between spatial intelligence and visual presentation style on alpha power and recall, and to determine whether any relationship exists between alpha power and recall.
The subjects in the present study were one hundred and twenty-three undergraduate students at a university in the Midwest. After taking Vandenberg & Kuse's Mental Rotations Test, subjects were divided into low and high spatial intelligence groups, and subjects in each spatial intelligence group were evenly assigned to four different visual presentation styles (2D still frame, 2D animation, 3D still frame, and 3D animation), receiving instruction on the LASIK eye surgical procedure in the respective visual presentation style. During the one-minute visual instruction, subjects' brain activity was measured and recorded using a wireless EEG headset. Upon completion of the instruction, subjects were given a 10-item multiple-choice test to measure their recall of the material presented during the instruction.
Two 2 (spatial intelligence) × 2 (depth) × 2 (movement) factorial analyses of variance (ANOVAs) were conducted, one with alpha power as the dependent variable and the other with recall as the dependent variable, to determine whether there is a significant difference in alpha power and recall by spatial intelligence and visual presentation style, as well as whether there is an interaction between these variables that affects alpha power and recall. The Pearson correlation coefficient was calculated to examine the relationship between alpha power and recall.
The present study found (a) EEG alpha power did not differ by the difference in depth and movement, (b) 2D and animation were found to be more effective on recall, (c) alpha power did not differ by spatial intelligence, (d) recall did not differ by spatial intelligence, (e) there was a significant interaction between spatial intelligence and movement that affected alpha power; still frame resulted in higher alpha power for low spatial learners, and animation resulted in higher alpha power for high spatial learners, (f) there was a significant interaction between spatial intelligence, depth and movement on recall; for low spatial learners, 2D animation resulted in significantly higher recall than both 2D still frame and 3D animation, and for high spatial learners, 3D animation resulted in significantly higher recall than 3D still frame, and both 2D still frame and 2D animation resulted in close to significantly higher recall than 3D still frame, and (g) there was a mildly inverse relationship between alpha power and recall, brought on by a strong inverse relationship in 2D still frame revealing a 'higher alpha power-lower recall connection' for low spatial learners and a 'lower alpha power-higher recall connection' for high spatial learners.
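For readers who want to see the shape of that analysis, here is a hedged sketch of a 2 × 2 × 2 factorial ANOVA in Python with statsmodels. The data and column names (alpha, spatial, depth, movement) are hypothetical stand-ins, not the study's materials:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 120  # roughly the study's sample size
df = pd.DataFrame({
    "spatial":  rng.choice(["low", "high"], n),
    "depth":    rng.choice(["2D", "3D"], n),
    "movement": rng.choice(["still", "animation"], n),
})
df["alpha"] = rng.normal(10, 2, n)  # placeholder EEG alpha power

# Full factorial model: main effects plus all two- and three-way interactions.
model = smf.ols("alpha ~ C(spatial) * C(depth) * C(movement)", data=df).fit()
print(anova_lm(model, typ=2))       # F-tests for each effect and interaction

Running the same model with recall as the response gives the second ANOVA the abstract describes.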
Lee, Hyangsook. "The Brain and Learning: Examining the Connection between Brain Activity, Spatial Intelligence, and Learning Outcomes in Online Visual Instruction." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1380667253.
Lake, Brenden M. "Towards more human-like concept learning in machines : compositionality, causality, and learning-to-learn." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95856.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 211-220).
People can learn a new concept almost perfectly from just a single example, yet machine learning algorithms typically require hundreds or thousands of examples to perform similarly. People can also use their learned concepts in richer ways than conventional machine learning systems - for action, imagination, and explanation - suggesting that concepts are far more than a set of features, exemplars, or rules, the most popular forms of representation in machine learning and traditional models of concept learning. For those interested in better understanding this human ability, or in closing the gap between humans and machines, the key computational questions are the same: How do people learn new concepts from just one or a few examples? And how do people learn such abstract, rich, and flexible representations? An even greater puzzle arises by putting these two questions together: How do people learn such rich concepts from just one or a few examples? This thesis investigates concept learning as a form of Bayesian program induction, where learning involves selecting a structured procedure that best generates the examples from a category. I introduce a computational framework that utilizes the principles of compositionality, causality, and learning-to-learn to learn good programs from just one or a handful of examples of a new concept. New conceptual representations can be learned compositionally from pieces of related concepts, where the pieces reflect real part structure in the underlying causal process that generates category examples. This approach is evaluated on a number of natural concept learning tasks where humans and machines can be compared side-by-side. Chapter 2 introduces a large-scale data set of novel, simple visual concepts for studying concept learning from sparse data. People were asked to produce new examples of over 1600 novel categories, revealing consistent structure in the generative programs that people used. Initial experiments also show that this structure is useful for one-shot classification. Chapter 3 introduces the computational framework called Hierarchical Bayesian Program Learning, and Chapters 4 and 5 compare humans and machines on six tasks that cover a range of natural conceptual abilities. On a challenging one-shot classification task, the computational model achieves human-level performance while also outperforming several recent deep learning models. Visual "Turing test" experiments were used to compare humans and machines on more creative conceptual abilities, including generating new category examples, predicting latent causal structure, generating new concepts from related concepts, and freely generating new concepts. In each case, fewer than twenty-five percent of judges could reliably distinguish the human behavior from the machine behavior, showing that the model can generalize in ways similar to human performance. A range of comparisons with lesioned models and alternative modeling frameworks reveals that three key ingredients - compositionality, causality, and learning-to-learn - contribute to performance in each of the six tasks. This conclusion is further supported by the results of Chapter 6, where a computational model using only two of these three principles was evaluated on the one-shot learning of new spoken words. Learning programs with these ingredients is a promising route towards more humanlike concept learning in machines.
by Brenden M. Lake.
Ph. D.
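The abstract's core move - treating a concept as a program composed of reusable parts and scoring candidates by a Bayesian trade-off - can be caricatured in a few lines. A toy sketch (the primitives, noise model, and probabilities are invented; this is not the thesis's Hierarchical Bayesian Program Learning model):

import itertools, math

primitives = ["line", "arc", "dot"]            # reusable stroke-like parts

def programs(max_parts=2):
    """Enumerate candidate programs as sequences of primitive parts."""
    for k in range(1, max_parts + 1):
        yield from itertools.product(primitives, repeat=k)

def log_posterior(program, example):
    log_prior = -len(program) * math.log(len(primitives))  # shorter = likelier
    if len(program) != len(example):
        return -math.inf
    # Hypothetical noise model: each part is observed correctly w.p. 0.9.
    matches = sum(p == e for p, e in zip(program, example))
    log_like = matches * math.log(0.9) + (len(program) - matches) * math.log(0.1)
    return log_prior + log_like

example = ("line", "arc")                      # a single observed "character"
best = max(programs(), key=lambda p: log_posterior(p, example))
print(best)                                    # ('line', 'arc')

One-shot classification then amounts to asking which learned program assigns the new example the highest posterior probability.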
Styles, Benjamin John. "Learning and sensory processing in a simple brain." Thesis, University of Sussex, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404208.
Alderman, Nicholas. "Maximising the learning potential of brain injured patients." Thesis, University of Southampton, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296354.
Soltaninejad, Mohammadreza. "Supervised learning-based multimodal MRI brain image analysis." Thesis, University of Lincoln, 2017. http://eprints.lincoln.ac.uk/30883/.
Mahbod, Amirreza. "Structural Brain MRI Segmentation Using Machine Learning Technique." Thesis, KTH, Skolan för teknik och hälsa (STH), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189985.
Karlaftis, Vasileios Misak. "Structural and functional brain plasticity for statistical learning." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278790.
Fernandes, José Joaquim Fonseca Ribas. "Hierarchical Reinforcement Learning in Behavior and the Brain." Doctoral thesis, Universidade Nova de Lisboa. Instituto de Tecnologia química e Biológica, 2013. http://hdl.handle.net/10362/11971.
Reinforcement learning (RL) has provided key insights to the neurobiology of learning and decision making. The pivotal finding is that the phasic activity of dopaminergic cells in the ventral tegmental area during learning conforms to a reward prediction error (RPE), as specified in the temporal-difference learning algorithm (TD). This has provided insights to conditioning, the distinction between habitual and goal-directed behavior, working memory, cognitive control and error monitoring. It has also advanced the understanding of cognitive deficits in Parkinson's disease, depression, ADHD and of personality traits such as impulsivity. (...)
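For reference, the reward prediction error named in this abstract is the standard temporal-difference error; in the usual textbook notation (the symbols below are conventional, not drawn from the thesis):

% TD reward prediction error: the gap between received and predicted value
\delta_t = r_{t+1} + \gamma \, V(s_{t+1}) - V(s_t)
% value estimate updated with learning rate alpha
V(s_t) \leftarrow V(s_t) + \alpha \, \delta_t

Phasic dopaminergic activity is reported to track the sign and size of this delta.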
Evanshen, Pamela. "Rating the Learning Environment for Brain Compatible Elements." Digital Commons @ East Tennessee State University, 2004. https://dc.etsu.edu/etsu-works/4412.
Laflamme, Denise Marie. "The brain-based theory of learning and multimedia." CSUSB ScholarWorks, 1994. https://scholarworks.lib.csusb.edu/etd-project/1002.
Joos, Louis. "Deformable 3D Brain MRI Registration with Deep Learning." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262852.
Leonard, Julia Anne Ph D. Massachusetts Institute of Technology. "Social influences on children's learning." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120622.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 129-170).
Adults greatly impact children's learning: they serve as models of how to behave, and as parents, provide the larger social context in which children grow up. This thesis explores how adults impact children's learning across two time scales. Chapters 2 and 3 ask how a brief exposure to an adult model impacts children's moment-to-moment approach towards learning, and Chapters 4 and 5 look at how children's long-term social context impacts their brain development and capacity to learn. In Chapter 2, I show that preschool-age children integrate information from adults' actions, outcomes, and testimony to decide how hard to try on novel tasks. Children persist the longest when adults practice what they preach: saying they value effort, or giving children a pep talk, in conjunction with demonstrating effortful success on their own task. Chapter 3 demonstrates that social learning about effort is present in the first year of life and generalizes across tasks. In Chapter 4, I find that adolescents' long-term social environments have a selective impact on neural structure and function: socioeconomic status (SES) relates to hippocampal-prefrontal declarative memory, but not striatal-dependent procedural memory. Finally, in Chapter 5 I demonstrate that the neural correlates of fluid reasoning differ by SES, suggesting that positive brain development varies by early life environment. Collectively, this work elucidates both the malleable social factors that positively impact children's learning and the unique neural and cognitive adaptations that children develop in response to adverse environments.
by Julia Anne Leonard.
Ph. D.
Brashers-Krug, Thomas M. (Thomas More). "Consolidation in human motor learning." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11884.
Frank, Michael C. Ph D. Massachusetts Institute of Technology. "Early word learning through communicative inference." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62045.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 109-122).
How do children learn their first words? Do they do it by gradually accumulating information about the co-occurrence of words and their referents over time, or are words learned via quick social inferences linking what speakers are looking at, pointing to, and talking about? Both of these conceptions of early word learning are supported by empirical data. This thesis presents a computational and theoretical framework for unifying these two different ideas by suggesting that early word learning can best be described as a process of joint inferences about speakers' referential intentions and the meanings of words. Chapter 1 describes previous empirical and computational research on "statistical learning"--the ability of learners to use distributional patterns in their language input to learn about the elements and structure of language--and argues that capturing this ability requires models of learning that describe inferences over structured representations, not just simple statistics. Chapter 2 argues that social signals of speakers' intentions, even eye-gaze and pointing, are at best noisy markers of reference and that in order to take advantage of these signals fully, learners must integrate information across time. Chapter 3 describes the kinds of inferences that learners can make by assuming that speakers are informative with respect to their intended meaning, introducing and testing a formalization of how Grice's pragmatic maxims can be used for word learning. Chapter 4 presents a model of cross-situational intentional word learning that both learns words and infers speakers' referential intentions from labeled corpus data.
by Michael C. Frank.
Ph.D.
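A toy sketch of the cross-situational side of this proposal (the scenes below are invented; the thesis's actual model additionally infers speakers' referential intentions rather than merely counting co-occurrences):

from collections import Counter, defaultdict

scenes = [                     # (words uttered, objects present)
    (["look", "dog"],  ["dog", "ball"]),
    (["nice", "dog"],  ["dog", "cup"]),
    (["the", "ball"],  ["ball", "cup"]),
    (["a", "ball"],    ["ball", "dog"]),
]

cooc = defaultdict(Counter)
for words, objects in scenes:
    for w in words:
        cooc[w].update(objects)   # each word "votes" for every object in view

# Read out each word's best-supported referent across ambiguous scenes.
lexicon = {w: c.most_common(1)[0][0] for w, c in cooc.items()}
print(lexicon["dog"], lexicon["ball"])   # dog ball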
Frogner, Charles (Charles Albert). "Learning and inference with Wasserstein metrics." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120619.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 131-143).
This thesis develops new approaches for three problems in machine learning, using tools from the study of optimal transport (or Wasserstein) distances between probability distributions. Optimal transport distances capture an intuitive notion of similarity between distributions, by incorporating the underlying geometry of the domain of the distributions. Despite their intuitive appeal, optimal transport distances are often difficult to apply in practice, as computing them requires solving a costly optimization problem. In each setting studied here, we describe a numerical method that overcomes this computational bottleneck and enables scaling to real data. In the first part, we consider the problem of multi-output learning in the presence of a metric on the output domain. We develop a loss function that measures the Wasserstein distance between the prediction and ground truth, and describe an efficient learning algorithm based on entropic regularization of the optimal transport problem. We additionally propose a novel extension of the Wasserstein distance from probability measures to unnormalized measures, which is applicable in settings where the ground truth is not naturally expressed as a probability distribution. We show statistical learning bounds for both the Wasserstein loss and its unnormalized counterpart. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space. We demonstrate this property on a real-data image tagging problem, outperforming a baseline that doesn't use the metric. In the second part, we consider the probabilistic inference problem for diffusion processes. Such processes model a variety of stochastic phenomena and appear often in continuous-time state space models. Exact inference for diffusion processes is generally intractable. In this work, we describe a novel approximate inference method, which is based on a characterization of the diffusion as following a gradient flow in a space of probability densities endowed with a Wasserstein metric. Existing methods for computing this Wasserstein gradient flow rely on discretizing the underlying domain of the diffusion, prohibiting their application to problems in more than several dimensions. In the current work, we propose a novel algorithm for computing a Wasserstein gradient flow that operates directly in a space of continuous functions, free of any underlying mesh. We apply our approximate gradient flow to the problem of filtering a diffusion, showing superior performance where standard filters struggle. Finally, we study the ecological inference problem, which is that of reasoning from aggregate measurements of a population to inferences about the individual behaviors of its members. This problem arises often when dealing with data from economics and political sciences, such as when attempting to infer the demographic breakdown of votes for each political party, given only the aggregate demographic and vote counts separately. Ecological inference is generally ill-posed, and requires prior information to distinguish a unique solution. We propose a novel, general framework for ecological inference that allows for a variety of priors and enables efficient computation of the most probable solution. Unlike previous methods, which rely on Monte Carlo estimates of the posterior, our inference procedure uses an efficient fixed point iteration that is linearly convergent. Given suitable prior information, our method can achieve more accurate inferences than existing methods. We additionally explore a sampling algorithm for estimating credible regions.
by Charles Frogner.
Ph. D.
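The computational bottleneck and its standard workaround can be illustrated briefly. A small sketch (not the thesis code; the grid, bandwidths, and epsilon are invented) of entropy-regularized optimal transport computed with Sinkhorn iterations:

import numpy as np

def sinkhorn(a, b, C, eps=0.02, iters=500):
    """Entropy-regularized OT between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)              # Gibbs kernel derived from the cost
    u = np.ones_like(a)
    for _ in range(iters):            # alternating diagonal scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sum(P * C)              # regularized transport cost

x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2    # squared-distance ground cost
a = np.exp(-(x - 0.2) ** 2 / 0.01); a /= a.sum()
b = np.exp(-(x - 0.7) ** 2 / 0.01); b /= b.sum()
print(sinkhorn(a, b, C))              # near (0.7 - 0.2)^2, plus entropic bias

Each iteration is just matrix-vector products, which is what makes the Wasserstein loss trainable at scale.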
Tenenbaum, Joshua B. (Joshua Brett) 1972. "A Bayesian framework for concept learning." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/16714.
Includes bibliographical references (p. 297-314).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Human concept learning presents a version of the classic problem of induction, which is made particularly difficult by the combination of two requirements: the need to learn from a rich (i.e. nested and overlapping) vocabulary of possible concepts and the need to be able to generalize concepts reasonably from only a few positive examples. I begin this thesis by considering a simple number concept game as a concrete illustration of this ability. On this task, human learners can with reasonable confidence lock in on one out of a billion billion billion logically possible concepts, after seeing only four positive examples of the concept, and can generalize informatively after seeing just a single example. Neither of the two classic approaches to inductive inference--hypothesis testing in a constrained space of possible rules and computing similarity to the observed examples--can provide a complete picture of how people generalize concepts in even this simple setting. This thesis proposes a new computational framework for understanding how people learn concepts from examples, based on the principles of Bayesian inference. By imposing the constraints of a probabilistic model of the learning situation, the Bayesian learner can draw out much more information about a concept's extension from a given set of observed examples than either rule-based or similarity-based approaches do, and can use this information in a rational way to infer the probability that any new object is also an instance of the concept. There are three components of the Bayesian framework: a prior probability distribution over a hypothesis space of possible concepts; a likelihood function, which scores each hypothesis according to its probability of generating the observed examples; and the principle of hypothesis averaging, under which the learner computes the probability of generalizing a concept to new objects by averaging the predictions of all hypotheses weighted by their posterior probability (proportional to the product of their priors and likelihoods). The likelihood, under the assumption of randomly sampled positive examples, embodies the size principle for scoring hypotheses: smaller consistent hypotheses are more likely than larger hypotheses, and they become exponentially more likely as the number of observed examples increases. The principle of hypothesis averaging allows the Bayesian framework to accommodate both rule-like and similarity-like generalization behavior, depending on how peaked the posterior probability is. Together, the size principle plus hypothesis averaging predict a convergence from similarity-like generalization (due to a broad posterior distribution) after very few examples are observed to rule-like generalization (due to a sharply peaked posterior distribution) after sufficiently many examples have been observed. The main contributions of this thesis are as follows. First and foremost, I show how it is possible for people to learn and generalize concepts from just one or a few positive examples (Chapter 2). Building on that understanding, I then present a series of case studies of simple concept learning situations where the Bayesian framework yields both qualitative and quantitative insights into the real behavior of human learners (Chapters 3-5). These cases each focus on a different learning domain.
Chapter 3 looks at generalization in continuous feature spaces, a typical representation of objects in psychology and machine learning with the virtues of being analytically tractable and empirically accessible, but the downside of being highly abstract and artificial. Chapter 4 moves to the more natural domain of learning words for categories of objects and shows the relevance of the same phenomena and explanatory principles introduced in the more abstract setting of Chapters 1-3 for real-world learning tasks like this one. In each of these domains, both similarity-like and rule-like generalization emerge as special cases of the Bayesian framework in the limits of very few or very many examples, respectively. However, the transition from similarity to rules occurs much faster in the word learning domain than in the continuous feature space domain. I propose a Bayesian explanation of this difference in learning curves that places crucial importance on the density or sparsity of overlapping hypotheses in the learner's hypothesis space. To test this proposal, a third case study (Chapter 5) returns to the domain of number concepts, in which human learners possess a more complex body of prior knowledge that leads to a hypothesis space with both sparse and densely overlapping components. Here, the Bayesian theory predicts and human learners produce either rule-based or similarity-based generalization from a few examples, depending on the precise examples observed. I also discuss how several classic reasoning heuristics may be used to approximate the much more elaborate computations of Bayesian inference that this domain requires. In each of these case studies, I confront some of the classic questions of concept learning and induction: Is the acquisition of concepts driven mainly by pre-existing knowledge or the statistical force of our observations? Is generalization based primarily on abstract rules or similarity to exemplars? I argue that in almost all instances, the only reasonable answer to such questions is, Both. More importantly, I show how the Bayesian framework allows us to answer much more penetrating versions of these questions: How does prior knowledge interact with the observed examples to guide generalization? Why does generalization appear rule-based in some cases and similarity-based in others? Finally, Chapter 6 summarizes the major contributions in more detailed form and discusses how this work fits into the larger picture of contemporary research on human learning, thinking, and reasoning.
by Joshua B. Tenenbaum.
Ph.D.
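The three components named in this abstract - a prior, the size-principle likelihood, and hypothesis averaging - fit in a short script. A toy sketch on a miniature number game (the three hypotheses and their prior weights are invented, a far smaller space than the thesis's):

hypotheses = {
    "even":         ({n for n in range(1, 101) if n % 2 == 0}, 0.5),
    "powers_of_2":  ({2 ** k for k in range(1, 7)}, 0.3),
    "multiples_10": ({n for n in range(10, 101, 10)}, 0.2),
}

def posterior(data):
    """Posterior over hypotheses given examples sampled from the concept."""
    scores = {}
    for name, (ext, prior) in hypotheses.items():
        if all(x in ext for x in data):
            # Size principle: likelihood (1/|ext|)^n favors small hypotheses,
            # exponentially more so as examples accumulate.
            scores[name] = prior * (1 / len(ext)) ** len(data)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def p_generalize(y, data):
    """Hypothesis averaging: probability that y also belongs to the concept."""
    return sum(p for h, p in posterior(data).items() if y in hypotheses[h][0])

print(posterior([16]))             # broad posterior: similarity-like behavior
print(posterior([16, 8, 2, 64]))   # sharply favors powers of 2: rule-like
print(p_generalize(32, [16, 8, 2, 64]))

The shift visible between the two printed posteriors is exactly the similarity-to-rules convergence the abstract predicts.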
Piantadosi, Steven Thomas. "Learning and the language of thought." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68423.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 179-191).
This thesis develops the hypothesis that key aspects of learning and development can be understood as rational statistical inferences over a compositionally structured representation system, a language of thought (LOT) (Fodor, 1975). In this setup, learners have access to a set of primitive functions, and learning consists of composing these functions to create structured representations of complex concepts. We present an inductive statistical model over these representations that formalizes an optimal Bayesian trade-off between representational complexity and fit to the observed data. This approach is first applied to the case of number-word acquisition, for which statistical learning with a LOT can explain key developmental patterns and resolve philosophically troublesome aspects of previous developmental theories. Second, we show how these same formal tools can be applied to children's acquisition of quantifiers. The model explains how children may achieve adult competence with quantifiers' literal meanings and presuppositions, and predicts several of the most-studied errors children make while learning these words. Finally, we model adult patterns of generalization in a massive concept-learning experiment. These results provide evidence for LOT models over other approaches and provide quantitative evaluation of different particular LOTs.
by Steven Thomas Piantadosi.
Ph.D.
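The "optimal Bayesian trade-off" this abstract refers to has the generic form below (the notation is ours, not the thesis's; \lambda is a hypothetical complexity weight):

% Posterior over LOT expressions e: complexity-penalizing prior times data fit
P(e \mid \text{data}) \propto P(e) \, P(\text{data} \mid e),
\qquad P(e) \propto \exp(-\lambda \, |e|)

Here |e| counts the primitive functions composed in the expression, so shorter compositions are preferred unless longer ones fit the observed data substantially better.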
Ellis, Kevin Ph D. (Kevin M.) Massachusetts Institute of Technology. "Algorithms for learning to induce programs." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/130184.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 213-224).
The future of machine learning should have a knowledge representation that supports, at a minimum, several features: expressivity, interpretability, and the potential for reuse by both humans and machines, while also enabling sample-efficient generalization. Here we argue that programs--i.e., source code--are a knowledge representation which can contribute to the project of capturing these elements of intelligence. This research direction however requires new program synthesis algorithms which can induce programs solving a range of AI tasks. This program induction challenge confronts two primary obstacles: the space of all programs is infinite, so we need a strong inductive bias or prior to steer us toward the correct programs; and even if we have that prior, effectively searching through the vast combinatorial space of all programs is generally intractable. We introduce algorithms that learn to induce programs, with the goal of addressing these two primary obstacles. Focusing on case studies in vision, computational linguistics, and learning-to-learn, we develop an algorithmic toolkit for learning inductive biases over programs as well as learning to search for programs, drawing on probabilistic, neural, and symbolic methods. Together this toolkit suggests ways in which program induction can contribute to AI, and how we can use learning to improve program synthesis technologies.
by Kevin Ellis.
Ph. D. in Cognitive Science
Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences
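A minimal sketch of the enumerate-and-check loop behind the abstract's two obstacles - a prior over programs and a search through them. The three-primitive DSL and the length-based bias are invented here; this is not the thesis's system:

import itertools

# Tiny DSL of unary integer functions; "programs" are compositions of them.
PRIMS = {"inc": lambda x: x + 1, "dbl": lambda x: 2 * x, "sq": lambda x: x * x}

def run(prog, x):
    for name in prog:
        x = PRIMS[name](x)
    return x

def induce(examples, max_len=4):
    # Shorter programs first: a crude stand-in for a learned inductive bias.
    for k in range(1, max_len + 1):
        for prog in itertools.product(PRIMS, repeat=k):
            if all(run(prog, x) == y for x, y in examples):
                return prog
    return None

print(induce([(1, 4), (2, 6), (3, 8)]))   # ('inc', 'dbl'): computes 2*(x+1)

What the thesis's algorithms add is the learning: the prior over programs and the search policy are themselves trained, rather than fixed as in this brute-force sketch.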
Blum, Julia Maria. "Coherent brain oscillations during processes of human sensorimotor learning /." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17951.
Gonzalez, Claudia Cristina. "Linking brain and behaviour in motor sequence learning tasks." Thesis, University of Leeds, 2012. http://etheses.whiterose.ac.uk/3603/.
Tompkins, Abreena Walker. "Brain-based learning theory : an online course design model /." Lynchburg, Va. : Liberty University, 2007. http://digitalcommons.liberty.edu.
Frangou, Polytimi. "Inhibitory mechanisms for visual learning in the human brain." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/280767.
Scholz, Jan. "Structural brain plasticity : Individual differences and changes with learning." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.533876.
Nguyen, Dieu My Thanh. "Olfactory Learning and Brain Activity in Novomessor cockerelli Ants." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/613353.
Lau, Kiu Wai. "Representation Learning on Brain MR Images for Tumor Segmentation." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234827.
Rose, Rickie Lou. "The connection of brain compatible learning theory and leadership." [Bloomington, Ind.] : Indiana University, 2005. http://wwwlib.umi.com/dissertations/fullcit/3175993.
Full textTitle from PDF t.p. (viewed Dec. 8, 2008). Source: Dissertation Abstracts International, Volume: 66-05, Section: A, page: 1587. Adviser: L. Burello.
Astolfi, Pietro. "Toward the 'Deep Learning' of Brain White Matter Structures." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/337629.