Dissertations / Theses on the topic 'Application of learning theory'

To see the other types of publications on this topic, follow the link: Application of learning theory.

Consult the top 50 dissertations / theses for your research on the topic 'Application of learning theory.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Cleeton, G. "Development and application of a theory of learning barriers." Thesis, Keele University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hu, Qiao Ph D. Massachusetts Institute of Technology. "Application of statistical learning theory to plankton image analysis." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/39206.

Full text
Abstract:
Thesis (Ph. D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2006.
Includes bibliographical references (leaves 155-173).
A fundamental problem in limnology and oceanography is the inability to quickly identify and map distributions of plankton. This thesis addresses the problem by applying statistical machine learning to video images collected by an optical sampler, the Video Plankton Recorder (VPR). The research is focused on the development of a real-time automatic plankton recognition system to estimate plankton abundance. The system includes four major components: pattern representation/feature measurement, feature extraction/selection, classification, and abundance estimation. After an extensive study of a traditional learning vector quantization (LVQ) neural network (NN) classifier built on shape-based features and of different pattern representation methods, I developed a classification system that combines multi-scale co-occurrence matrix features with a support vector machine (SVM) classifier. This new method outperforms the traditional shape-based NN classifier by 12% in classification accuracy. Subsequent plankton abundance estimates are improved by more than 50% in regions of low relative abundance. Neither the NN nor the SVM classifier has a rejection metric, so two rejection metrics were developed in this thesis. One is based on the Euclidean distance in feature space for the NN classifier; the other uses dual-classifier (NN and SVM) voting as its output. Using the dual-classification method alone yields abundance estimates almost as good as human labeling on a test bed of real-world data. However, the distance rejection metric for the NN classifier may be more useful when the training samples are not "good", i.e. representative of the field data. In summary, this thesis advances the state of the art in plankton recognition by demonstrating that multi-scale texture-based features are better suited to classifying field-collected images. The system was verified on a very large real-world dataset in a systematic way for the first time. The accomplishments include a multi-scale co-occurrence matrix and support vector machine system, a dual-classification system, automatic correction of abundance estimates, and the ability to obtain accurate abundance estimates from real-time automatic classification. The methods developed are generic and are likely to work in a range of other image classification applications.
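The texture-plus-SVM pipeline described above can be illustrated with a small sketch (assuming scikit-image and scikit-learn); synthetic grey-level blocks stand in for the VPR plankton images, and a simple confidence threshold stands in for the thesis's rejection metrics.

```python
# Illustrative sketch: co-occurrence texture features at several offsets feeding
# an SVM, plus a confidence-based rejection rule. Synthetic blocks only; not the
# thesis's multi-scale features or VPR data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(img, distances=(1, 2, 4)):
    """Co-occurrence statistics at several pixel offsets (a crude multi-scale proxy)."""
    glcm = graycomatrix(img, distances=distances, angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
# Two synthetic texture classes with different grey-level ranges.
images = [(rng.integers(0, 64, (32, 32)) if y == 0
           else rng.integers(0, 256, (32, 32))).astype(np.uint8)
          for y in labels]
X = np.array([texture_features(im) for im in images])

clf = SVC(kernel="rbf", probability=True).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))

# Rejection in the spirit of the thesis: abstain when the classifier is unsure.
proba = clf.predict_proba(X[150:])
print("fraction rejected:", float((proba.max(axis=1) < 0.6).mean()))
```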
by Qiao Hu.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
3

Plaza, Cecilia Maria. "The Application of Transformative Learning Theory to Curricular Evaluation." Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/194354.

Full text
Abstract:
Purpose: The purpose of this study was to develop a conceptual framework for curricular evaluation based on transformative learning theory and to demonstrate its use in evaluating a professional curriculum. Transformative learning theory considers the process of constructing knowledge through critical reflection on the content, process, and premise of an experience. Methods: Critical reflection was operationalized by using the College's Outcomes Expected document to provide the overarching curricular framework for a reflective portfolio developed by pharmacy students at the University of Arizona College of Pharmacy (UACOP). Content reflection consisted of curricular mapping based on student and faculty questionnaires as well as comparison to the American Association of Colleges of Pharmacy (AACP) Center for the Advancement of Pharmaceutical Education (CAPE) Educational Outcomes 2004. Process reflection focused on best practices literature-based indicators and student self-efficacy measures. Premise reflection included both content and process reflection to develop global recommendations. Results: The population consisted of 284 Doctor of Pharmacy (PharmD) students at the UACOP during the 2004-2005 academic year. Transformative learning theory provides a potentially valuable tool for curricular evaluation by considering the content, process, and premise of construction of knowledge about the pharmacy curricula at respective schools and colleges of pharmacy. This study also demonstrated how transformative learning theory can be applied to both make sense of and use existing data in curricular evaluation. Content reflection revealed concordance between student and faculty ranking of domain and associated competency coverage in their respective curricular maps. Process reflection revealed areas of needed improvement including student and faculty buy-in and the dual use of the portfolio for learning and assessment. Premise reflection provided several global recommendations that other schools and colleges of pharmacy could use in implementing portfolio assessment.
APA, Harvard, Vancouver, ISO, and other styles
4

Shi, Bin. "A Mathematical Framework on Machine Learning: Theory and Application." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3876.

Full text
Abstract:
The dissertation addresses the research topics in machine learning outlined below. We develop theory for traditional first-order algorithms from convex optimization and provide new insights into the nonconvex objective functions arising in machine learning. Based on this analysis, we design new algorithms to overcome the difficulty of nonconvex objectives and to accelerate convergence to the desired result. In this thesis, we answer two questions: (1) How should the step size for gradient descent with random initialization be chosen? (2) Can we accelerate current convex optimization algorithms and extend them to nonconvex objectives? On the application side, we apply the optimization algorithms to sparse subspace clustering. A new algorithm, CoCoSSC, is proposed that improves the current sample complexity in the presence of noise and missing entries. Gradient-based optimization methods have increasingly been modeled and interpreted by ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods, Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In this work, we derive high-resolution ODEs as more accurate surrogates for these two methods, as well as for Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows a fine-grained analysis of the discrete optimization algorithms by translating properties of the amenable ODEs into properties of their discrete counterparts. As a first application of this framework, we identify the effect of a term, referred to as the gradient correction, that is present in NAG-SC but not in the heavy-ball method, shedding light on why the former achieves acceleration while the latter does not. Moreover, in this high-resolution ODE framework, NAG-C is shown to reduce the squared gradient norm at the inverse cubic rate, which is the sharpest known rate for NAG-C itself. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the same accelerated convergence rates as NAG-C for minimizing convex functions.
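The contrast drawn above between Polyak's heavy-ball method and NAG-SC comes down to where the gradient is evaluated; a minimal sketch on a toy quadratic (illustrative step size and momentum, not the thesis's analysis) makes the extra "gradient correction" visible in code.

```python
# Sketch: Polyak heavy-ball vs. Nesterov's method for strongly convex f (NAG-SC).
# Illustrative parameters only; the point is the two update rules, whose
# high-resolution ODEs differ by a gradient-correction term.
import numpy as np

A = np.diag([1.0, 100.0])          # f(x) = 0.5 * x^T A x, condition number 100
def grad(x):
    return A @ x

s = 1.0 / 100.0                    # step size 1/L
mu = 1.0
beta = (1 - np.sqrt(mu * s)) / (1 + np.sqrt(mu * s))   # same momentum for both

x_hb = x_nag = x_prev_hb = x_prev_nag = np.array([1.0, 1.0])
for _ in range(200):
    # Heavy ball: momentum plus a gradient step at the *current* iterate.
    x_hb, x_prev_hb = x_hb + beta * (x_hb - x_prev_hb) - s * grad(x_hb), x_hb
    # NAG-SC: the gradient is taken at the extrapolated point, which is what
    # produces the extra (gradient-correction) term in its high-resolution ODE.
    y = x_nag + beta * (x_nag - x_prev_nag)
    x_nag, x_prev_nag = y - s * grad(y), x_nag

print("heavy-ball |x|:", np.linalg.norm(x_hb), "  NAG-SC |x|:", np.linalg.norm(x_nag))
```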
APA, Harvard, Vancouver, ISO, and other styles
5

Mouton, Hildegarde Suzanne. "Reinforcement learning : theory, methods and application to decision support systems." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5304.

Full text
Abstract:
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: In this dissertation we study the machine learning subfield of Reinforcement Learning (RL). After developing a coherent background, we apply a Monte Carlo (MC) control algorithm with exploring starts (MCES), as well as an off-policy Temporal-Difference (TD) learning control algorithm, Q-learning, to a simplified version of the Weapon Assignment (WA) problem. For the MCES control algorithm, a discount parameter of τ = 1 is used. This gives very promising results when applied to 7 × 7 grids, as well as 71 × 71 grids. The same discount parameter cannot be applied to the Q-learning algorithm, as it causes the Q-values to diverge. We take a greedy approach, setting ε = 0, and vary the learning rate (α) and the discount parameter (τ). Experimentation shows that the best results are found with α set to 0.1 and τ constrained to the region 0.4 ≤ τ ≤ 0.7. The MC control algorithm with exploring starts gives promising results when applied to the WA problem. It performs significantly better than the off-policy TD algorithm, Q-learning, even though it is almost twice as slow. The modern battlefield is a fast-paced, information-rich environment, where discovery of intent, situation awareness and the rapid evolution of concepts of operation and doctrine are critical success factors. Combining the techniques investigated and tested in this work with other techniques in Artificial Intelligence (AI) and modern computational techniques may hold the key to solving some of the problems we now face in warfare.
AFRIKAANSE OPSOMMING: Die fokus van hierdie verhandeling is die masjienleer-algoritmes in die veld van versterkingsleer. ’n Koherente agtergrond van die veld word gevolg deur die toepassing van ’n Monte Carlo (MC) beheer-algoritme met ondersoekende begintoestande, sowel as ’n afbeleid Temporale-Verskil beheer-algoritme, Q-leer, op ’n vereenvoudigde weergawe van die wapentoekenningsprobleem. Vir die MC beheer-algoritme word ’n afslagparameter van τ = 1 gebruik. Dit lewer belowende resultate wanneer toegepas op 7 × 7 roosters, asook op 71 × 71 roosters. Dieselfde afslagparameter kan nie op die Q-leer algoritme toegepas word nie, aangesien dit veroorsaak dat die Q-waardes divergeer. Ons neem ’n gulsige aanslag deur die gulsigheidsparameter te verstel na ε = 0. Ons varieer dan die leertempo ( α) en die afslagparameter (τ). Die beste eksperimentele resultate is behaal wanneer = 0.1 en as die afslagparameter vasgehou word in die gebied 0.4 ≤ τ ≤ 0.7. Die MC beheer-algoritme lewer belowende resultate wanneer toegepas op die wapentoekenningsprobleem. Dit lewer beduidend beter resultate as die Q-leer algoritme, al neem dit omtrent twee keer so lank om uit te voer. Die moderne slagveld is ’n omgewing ryk aan inligting, waar dit kritiek belangrik is om vinnig die vyand se planne te verstaan, om bedag te wees op die omgewing en die konteks van gebeure, en waar die snelle ontwikkeling van die konsepte van operasie en doktrine lei tot sukses. Die tegniekes wat in die verhandeling ondersoek en getoets is, en ander kunsmatige intelligensie tegnieke en moderne berekeningstegnieke saamgesnoer, mag dalk die sleutel hou tot die oplossing van die probleme wat ons tans in die gesig staar in oorlogvoering.
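For reference, the off-policy TD update described in the abstract, with the reported settings (greedy action selection with ε = 0, learning rate 0.1, discount in the 0.4-0.7 band), looks as follows; the grid environment below is a stand-in, not the weapon assignment formulation.

```python
# Sketch of tabular Q-learning with the settings reported in the abstract:
# greedy action selection (epsilon = 0), alpha = 0.1, discount of 0.5.
# The environment is a toy 7x7 grid, not the weapon assignment problem.
import numpy as np

n, alpha, gamma, epsilon = 7, 0.1, 0.5, 0.0
Q = np.zeros((n * n, 4))                       # 7x7 grid, 4 actions (N, S, E, W)
moves = [(-1, 0), (1, 0), (0, 1), (0, -1)]
goal = n * n - 1
rng = np.random.default_rng(1)

def step(s, a):
    r, c = divmod(s, n)
    r = min(max(r + moves[a][0], 0), n - 1)
    c = min(max(c + moves[a][1], 0), n - 1)
    s2 = r * n + c
    return s2, (1.0 if s2 == goal else -0.01), s2 == goal

for _ in range(2000):                          # episodes with random start states
    s = rng.integers(n * n)
    for _ in range(200):
        a = rng.integers(4) if rng.random() < epsilon else int(Q[s].argmax())
        s2, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # off-policy TD target
        s = s2
        if done:
            break

print("greedy value at start state:", Q[0].max())
```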
APA, Harvard, Vancouver, ISO, and other styles
6

Gianvecchio, Steven. "Application of information theory and statistical learning to anomaly detection." W&M ScholarWorks, 2010. https://scholarworks.wm.edu/etd/1539623563.

Full text
Abstract:
In today's highly networked world, computer intrusions and other attacks are a constant threat. The detection of such attacks, especially attacks that are new or previously unknown, is important to secure networks and computers. A major focus of current research efforts in this area is on anomaly detection. In this dissertation, we explore applications of information theory and statistical learning to anomaly detection. Specifically, we look at two difficult detection problems in network and system security: (1) detecting covert channels, and (2) determining whether a user is a human or a bot. We link both of these problems to entropy, a measure of randomness, information content, or complexity, a concept that is central to information theory. The behavior of bots is low in entropy when tasks are rigidly repeated, or high in entropy when behavior is pseudo-random. In contrast, human behavior is complex and medium in entropy. Similarly, covert channels either create regularity, resulting in low entropy, or encode extra information, resulting in high entropy. Meanwhile, legitimate traffic is characterized by complex interdependencies and moderate entropy. In addition, we utilize statistical learning algorithms, Bayesian learning, neural networks, and maximum likelihood estimation in both modeling and detecting covert channels and bots. Our results using entropy and statistical learning techniques are excellent. By using entropy to detect covert channels, we detected three different covert timing channels that were not detected by previous detection methods. Then, using entropy and Bayesian learning to detect chat bots, we detected 100% of chat bots with a false positive rate of only 0.05% in over 1400 hours of chat traces. Lastly, using neural networks and the idea of human observational proofs to detect game bots, we detected 99.8% of game bots with no false positives in 95 hours of traces. Our work shows that a combination of entropy measures and statistical learning algorithms is a powerful and highly effective tool for anomaly detection.
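The entropy intuition described above, scripted behaviour is low in entropy, random cover traffic is high, and legitimate traffic sits in between, can be sketched in a few lines; the traces, bins and thresholds here are illustrative, not the dissertation's detectors.

```python
# Sketch: estimate the Shannon entropy of binned inter-message times and compare
# human-like, rigidly scripted, and pseudo-random traces. Illustrative data only.
import numpy as np

def shannon_entropy(samples, edges):
    hist, _ = np.histogram(samples, bins=edges)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

edges = np.linspace(0.0, 10.0, 41)            # common 0.25 s bins for all traces
rng = np.random.default_rng(0)
human = rng.lognormal(mean=0.0, sigma=1.0, size=2000)        # irregular but structured
bot_fixed = np.full(2000, 1.0) + rng.normal(0, 1e-3, 2000)   # rigidly repeated task
bot_random = rng.uniform(0, 10, 2000)                        # pseudo-random cover traffic

for name, x in [("human", human), ("scripted bot", bot_fixed), ("random bot", bot_random)]:
    print(f"{name:12s} entropy: {shannon_entropy(x, edges):.2f} bits")
# A detector would alarm when the entropy falls outside the "moderate" band
# estimated from legitimate traffic.
```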
APA, Harvard, Vancouver, ISO, and other styles
7

Collins, Andrew. "Evaluating reinforcement learning for game theory application: learning to price airline seats under competition." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/69751/.

Full text
Abstract:
Applied Game Theory has been criticised for not being able to model real decision-making situations. A game's sensitive nature and the difficulty of determining the utility payoff functions make it hard for a decision maker to rely upon any game-theoretic results. The models therefore tend to be simple because of the complexity of solving them (i.e. finding the equilibrium). In recent years, owing to increases in computing power, different computer modelling techniques have been applied in Game Theory. Major examples are Artificial Intelligence methods, e.g. Genetic Algorithms, Neural Networks and Reinforcement Learning (RL). These techniques allow the modeller to incorporate Game Theory within their models (or simulations) without necessarily knowing the optimal solution. After a warm-up period of repeated episodes, the model learns to play the game well (though not necessarily optimally). This is a form of simulation-optimization. The objective of the research is to investigate the practical usage of RL within a simple sequential stochastic airline seat pricing game. Different forms of RL are considered and compared to the optimal policy, which is found using standard dynamic programming techniques. The airline game and the RL methods display various interesting phenomena, which are also discussed. For completeness, convergence proofs for the RL algorithms are constructed.
APA, Harvard, Vancouver, ISO, and other styles
8

Narasimha, Rajesh. "Application of Information Theory and Learning to Network and Biological Tomography." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19889.

Full text
Abstract:
Studying the internal characteristics of a network using measurements obtained from end hosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because the entire network is inaccessible. As the network becomes larger, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to obtain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, specifically focusing on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states from path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested or faulty network elements and provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to accurately diagnose such network elements. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model and infer congestion using the belief propagation algorithm. In the second part of the work, we focus on the development of methods to automatically analyze the information contained in electron tomograms, which is a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10-1000 nm. A fundamental step in the statistical inference of large amounts of data is to segment relevant 3D features in cellular tomograms. Procedures for segmentation must work robustly and rapidly in spite of the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and in Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope, and of tomograms of Liposomal Doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures. A machine learning approach is formulated that exploits texture features; joint image block-wise classification and segmentation is performed by histogram matching using a nearest neighbor classifier and a chi-squared statistic as the distance measure.
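The block-wise classification step mentioned at the end, histogram matching with a nearest-neighbour classifier under a chi-squared distance, can be illustrated with a small sketch; plain intensity histograms on synthetic blocks stand in for the texture features used on the tomograms.

```python
# Sketch: nearest-neighbour classification of image blocks by histogram matching
# with a chi-squared distance. Synthetic blocks and intensity histograms only.
import numpy as np

def block_histogram(block, bins=32):
    h, _ = np.histogram(block, bins=bins, range=(0, 255))
    return h / h.sum()

def chi2(p, q, eps=1e-12):
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

rng = np.random.default_rng(0)
make_block = {0: lambda: rng.integers(0, 120, (16, 16)),      # "background"-like blocks
              1: lambda: rng.integers(100, 256, (16, 16))}    # "organelle"-like blocks
train = [(block_histogram(make_block[y]()), y) for y in rng.integers(0, 2, 100)]

def classify(block):
    h = block_histogram(block)
    return min(train, key=lambda t: chi2(h, t[0]))[1]

test_labels = rng.integers(0, 2, 50)
pred = [classify(make_block[y]()) for y in test_labels]
print("accuracy:", float(np.mean(np.array(pred) == test_labels)))
```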
APA, Harvard, Vancouver, ISO, and other styles
9

Jalalzai, Hamid. "Learning from multivariate extremes : theory and application to natural language processing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT043.

Full text
Abstract:
Les extrêmes apparaissent dans une grande variété de données. Par exemple,concernant les données hydrologiques, les extrêmes peuvent correspondre à des inondations, des moussons voire des sécheresses. Les données liées à l’activité humaine peuvent également conduire à des situations extrêmes, dans le cas des transactions bancaires, le montant alloué à une vente peut être considérable et dépasser les transactions courantes. Un autre exemple lié à l’activité humaine est la fréquence des mots utilisés : certains mots sont omniprésents alors que d’autres sont très rares. Qu’importe le contexte applicatif, les extrêmes qui sont rares par définition, correspondent à des données particulières. Ces événements sont notamment alarmants au vu de leur potentiel impact désastreux. Cependant, les données extrêmes sont beaucoup moins considérées dans les statistiques modernes ou les pratiques courantes d’apprentissage machine, principalement car elles sont considérablement sous représentées : ces événements se retrouvent noyés - à l’ère du ”big data” - par une vaste majorité de données classiques et non extrêmes. Ainsi, la grande majorité des outils d’apprentissage machine qui se concentrent naturellement sur une distribution dans son ensemble peut être inadaptée sur les queues de distribution où se trouvent les observations extrêmes. Dans cette thèse, les défis liés aux extrêmes sont détaillés et l’accent est mis sur le développement de méthodes dédiées à ces données. La première partie se consacre à l’apprentissage statistique dans les régions extrêmes. Dans le chapitre 4, des garanties non asymptotiques sur l’erreur d’estimation de la mesure angulaire empirique sont étudiées et permettent d’améliorer des méthodes de détection d’anomalies par minimum volume set sur la sphère. En particulier, le problème de la minimisation du risque empirique pour la classification binaire dédiée aux échantillons extrêmes est traitée au chapitre 5. L’analyse non paramétrique et les garanties qui en résultent sont détaillées. L’approche est adaptée pour traiter de nouveaux échantillons se trouvant hors de l’enveloppe convexe formée par les données rencontrées. Cette propriété d’extrapolation est l’élément clé et charnière nous permettant de concevoir de nouvelles représentations conservant un label donné et d’ainsi augmenter la quantité de données. Le chapitre 6 se concentre sur l’apprentissage de cette représentation à queue lourde (pour être précis, à variation régulière) à partir d’une distribution d’entrée. Les illustrations montrent une meilleure classification des extrêmes et conduit à la génération de phrases cohérentes. Enfin, le chapitre 7 propose d’analyser la structure de dépendance des extrêmes multivariés. En constatant que les extrêmes se concentrent au sein de groupes où les variables explicatives ont tendance à prendre –de manière récurrente–de grandes valeurs simultanément ; il en résulte un problème d’optimisation visant à identifier ces sous-groupes grâce à des moyennes pondérées des composantes
Extremes surround us and appear in a large variety of data. Natural data, like those related to the environmental sciences, contain extreme measurements; in hydrology, for instance, extremes may correspond to floods and heavy rainfall or, on the contrary, droughts. Data related to human activity can also lead to extreme situations; in the case of bank transactions, the money allocated to a sale may be considerable and exceed common transactions. The analysis of this phenomenon is one of the bases of fraud detection. Another example related to humans is the frequency of encountered words: some words are ubiquitous while others are rare. No matter the context, extremes, which are rare by definition, correspond to unusual data. These events are of particular concern because of the disastrous impact they may have. Extreme data, however, are less considered in modern statistics and applied machine learning, mainly because they are substantially scarce: these events are outnumbered, in an era of so-called "big data", by the large amount of classical and non-extreme data that corresponds to the bulk of a distribution. Thus, the wide majority of machine learning tools and literature may not be well suited, or even performant, on the distributional tails where extreme observations occur. Through this dissertation, the particular challenges of working with extremes are detailed and methods dedicated to them are proposed. The first part of the thesis is devoted to statistical learning in extreme regions. In Chapter 4, non-asymptotic bounds for the empirical angular measure are studied; here, a pre-established anomaly detection scheme via minimum volume sets on the sphere is further improved. Chapter 5 addresses empirical risk minimization for binary classification of extreme samples. The resulting non-parametric analysis and guarantees are detailed. The approach is particularly well suited to treating new samples falling outside the convex envelope of encountered data. This extrapolation property is key to designing new embeddings achieving label-preserving data augmentation. Chapter 6 focuses on the challenge of learning such a heavy-tailed (and, to be precise, regularly varying) representation from a given input distribution. Empirical results show that the designed representation allows better classification performance on extremes and leads to the generation of coherent sentences. Lastly, Chapter 7 analyses the dependence structure of multivariate extremes. By noticing that extremes tend to concentrate on particular clusters where features tend to be recurrently large simultaneously, we define an optimization problem that identifies the aforementioned subgroups through weighted means of features.
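A minimal sketch of the empirical angular measure studied in Chapter 4: rank-transform each margin to a unit-Pareto scale, keep the observations with the largest norms, and examine their angles on the simplex. The data and the number of retained extremes are illustrative only.

```python
# Sketch: empirical angular measure of bivariate extremes. Rank-transform the
# margins, keep the k most extreme points in norm, and project them onto the
# unit simplex; their spread describes the extremal dependence structure.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 100
# Toy data with asymptotic dependence: a common heavy-tailed factor plus noise.
factor = 1.0 / rng.uniform(size=n)
X = np.column_stack([factor * rng.uniform(0.5, 1.5, n),
                     factor * rng.uniform(0.5, 1.5, n)])

# Unit-Pareto rank transform of each margin.
ranks = X.argsort(axis=0).argsort(axis=0) + 1
V = n / (n + 1 - ranks)

norms = V.sum(axis=1)                          # L1 norm on the transformed scale
extreme = V[norms >= np.sort(norms)[-k]]       # the k most extreme observations
angles = extreme[:, 0] / extreme.sum(axis=1)   # projection onto the unit simplex

print("mean angle:", angles.mean(), " std:", angles.std())
# Mass concentrated near 0.5 indicates the two components tend to be large together.
```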
APA, Harvard, Vancouver, ISO, and other styles
10

Scaggs, Anne Marie. "Student Perspectives on Application of Theory to Practice in Field Practicums." ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/6112.

Full text
Abstract:
The field practicum is designed to offer students the opportunity to integrate knowledge and practice prior to graduation; however, students continue to lack the ability to connect theory to practice within the field practicum. The purpose of this qualitative case study was to explore the beliefs, attitudes, and perspectives of social work students regarding the application of theory to practice within the field practicum. The conceptual framework included concepts of empowerment, empowerment theory, and social constructivism. The research question addressed how social work students at a local university described the issues related to connecting theory to practice within the field practicum. Data collection involved interviews with 6 social work practicum students, observations, and document analysis. Data were coded and analyzed to identify 4 themes: learned theories, concerns, theory to practice, and student beliefs related to theory and practice. Findings confirmed students' inability to connect theory to practice. Findings were used to develop a project incorporating simulated learning environments in social work curricula to increase the connection of theory to practice. Findings may be used to enhance students' ability to integrate theory into practice, which may strengthen the profession of social work through improved service delivery at local, state, national, and global levels.
APA, Harvard, Vancouver, ISO, and other styles
11

Kazemian, Hassan Bajgiran. "Study of MIMO learning fuzzy controllers for dynamic application." Thesis, Queen Mary, University of London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Jurvelin, Olsson Mikael. "MULTI-AGENT REINFORCEMENT LEARNING WITH APPLICATION ON TRAFFIC FLOW CONTROL." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447308.

Full text
Abstract:
Traffic congestion diminishes the driving experience and increases CO2 emissions. With the rise of 5G and machine learning, the possibilities to reduce traffic congestion are numerous. This thesis aims to study whether multi-agent reinforcement learning speed recommendations on the vehicle level can reduce congestion and thus control traffic flow. This is done by simulating a highway with an obstacle on one side of the lanes, forcing all the vehicles to drive in the same lane past the obstacle, resulting in congestion. A game-theory aspect, drivers not obeying the speed recommendations, was implemented to further simulate real traffic. Three Deep Q-network based models were trained on the highway and the best model was tested. The tests showed that multi-agent reinforcement learning speed recommendations can reduce congestion, measured in vehicle hours, by up to 21%, and that if one third of the vehicles use the system, the total congestion can be significantly reduced. In addition, the tests showed that the model achieves a success rate of 80%. Two improvements to the success rate would be more training and implementing a non-reinforcement-learning mechanism for the autonomous driving part.
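A minimal sketch (assuming PyTorch) of the Deep Q-network update underlying such models is given below; the state encoding, the three discrete speed recommendations and the reward are placeholders, not the thesis's highway simulator.

```python
# Minimal DQN-style update (sketch, assuming PyTorch). States, the three discrete
# speed recommendations, and the reward are placeholders for the highway simulator.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 3, 0.99       # e.g. [speed, gap, lane, dist-to-obstacle]
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_step(s, a, r, s_next, done):
    """One temporal-difference update on a single transition."""
    q_sa = q_net(s)[a]
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max() * (1.0 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy transition: pick a speed recommendation epsilon-greedily, use a made-up reward.
s = torch.randn(state_dim)
a = int(q_net(s).argmax()) if torch.rand(1) > 0.1 else int(torch.randint(n_actions, (1,)))
print("loss:", dqn_step(s, a, torch.tensor(0.5), torch.randn(state_dim), 0.0))
```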
APA, Harvard, Vancouver, ISO, and other styles
13

Izquierdo, Luis R. "Advancing learning and evolutionary game theory with an application to social dilemmas." Thesis, Manchester Metropolitan University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

DeGennaro, Alfred Joseph. "Application of Multiple Intelligence Theory to an e-Learning Technology Acceptance Model." Cleveland State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=csu1273053153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Deniz, Juan C. (Deniz Carlos) 1976. "Learning theory applications to product design modeling." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/89269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Li, Xiao. "Regularized adaptation : theory, algorithms, and applications /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/5928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Chu, Chi-keung. "The study of the application of social learning theory in parent management training." [Hong Kong] : University of Hong Kong, 1988. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12505195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Marriot, Shaun. "The application of adaptive resonance theory and reinforcement learning to mapping and control." Thesis, University of Sheffield, 1996. http://etheses.whiterose.ac.uk/5974/.

Full text
Abstract:
In this thesis, the ideas of Adaptive Resonance Theory (ART) and Reinforcement Learning (RL) are applied to the problems of mapping and control. A neural architecture, fuzzy ARTMAP, is considered as an alternative to standard feedforward networks for noisy mapping tasks. It is one of a series of architectures based upon ART. Fuzzy ARTMAP has advantages over feedforward networks, such as increased autonomy, and is especially suited to classification-type problems. Here it is used to estimate a continuous mapping from noisy data. Results show that properties useful for classification problems are not necessarily advantageous for noisy mapping problems. One particular feature is found to cause specialisation to the data. A modified variant is proposed which stores probability information in a sub-unit of the architecture. The proposed fuzzy ARTMAP variant is found to outperform fuzzy ARTMAP in a mapping task. Another novel self-organising architecture, loosely based upon a particular implementation of ART, is proposed here as an alternative to the fixed state-space decoder in a seminal implementation of reinforcement learning. A well-known non-linear control problem is considered. Input/output pattern pairs, desired state-space regions and the network size/topology are not known in advance. Results show that, although learning is not smooth, the novel ART-based RL implementation is successful and develops a meaningful control mapping. The new decoder increases its information capacity as necessary and indicates that such a self-organising approach to control is viable. The self-organising properties of the new decoder allow the neurocontroller to retain previously learned information and to adapt to newly encountered states throughout its operation, on-line. A fuzzy version of the original RL implementation is implemented to investigate the possibility of distributing control information across more than one state-space region. The fuzzy version is found to outperform the original RL implementation in a control task.
APA, Harvard, Vancouver, ISO, and other styles
19

朱志強 and Chi-keung Chu. "The study of the application of social learning theory in parent management training." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B31975318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Stead, Valerie S. "Influences on individuals' application of learning : a grounded theory study and its evaluation." Thesis, Lancaster University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389942.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Mo, Chunhui Fraser Scott E. "Synaptic learning rules for local synaptic interactions : theory and application to direction selectivity /." Diss., Pasadena, Calif. : California Institute of Technology, 2003. http://resolver.caltech.edu/CaltechETD:etd-05222003-170638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Statler, Judy K. "Learning theory and its application to at-risk programs for elementary school children /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zimmermann, Tom. "Inductive Learning and Theory Testing: Applications in Finance." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467320.

Full text
Abstract:
This thesis explores the opportunities for economic research that arise from importing empirical methods from the field of machine learning. Chapter 1 applies inductive learning to cross-sectional asset pricing. Researchers have documented over three hundred variables that can explain differences in cross-sectional stock returns. But which ones contain independent information? Chapter 1 develops a framework, deep conditional portfolio sorts, that can be used to answer this question and that is based on ideas from the machine learning literature, tailored to an asset-pricing application. The method is applied to predicting future stock returns from past stock returns at different horizons, and short-term returns (i.e. the past six months of returns), rather than medium- or long-term returns, are recovered as the variables that convey almost all information about future returns. Chapter 2 argues that machine learning techniques, although focused on prediction, can be used to test theories. In most theory tests, researchers control for known theories. In contrast, Chapter 2 develops a simple model that illustrates how machine learning can be used to conduct an inductive test that allows one to control for some unknown theories, as long as they are covered in some way by the data. The method is applied to the theory that realization utility and nominal loss aversion lead to the disposition effect (the propensity to sell winners rather than losers). An inductive test finds that short-term price trends and other features of the price history are more important for predicting selling decisions than returns relative to the purchase price. Chapter 3 provides another perspective on the disposition effect in the more traditional spirit of behavioral finance. It assesses the implications of different theories for an investor's probability of selling a stock as a function of the stock's return and then tests those implications empirically. Three different approaches that have been used in the literature are shown to lead to the, at first sight, contradictory findings that the probability of selling a stock is either V-shaped or inverted-V-shaped in the stock's return. Since these approaches compute different conditional probabilities, however, they can be reconciled once the conditioning set is taken into account.
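The "deep conditional portfolio sorts" idea, sorting on one signal and then re-sorting within each cell on the next, can be sketched with pandas on made-up data; two levels and terciles are used here for brevity, and the column names are illustrative.

```python
# Sketch of a two-level conditional portfolio sort on made-up data: sort into
# terciles on one past-return signal, re-sort within each tercile on a second
# signal, and compare future returns across the resulting cells.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "ret_6m": rng.normal(size=n),      # short-term past return
    "ret_36m": rng.normal(size=n),     # long-term past return
})
# Future return loads on the short-term signal only (by construction).
df["fut_ret"] = 0.2 * df["ret_6m"] + rng.normal(scale=1.0, size=n)

df["bucket1"] = pd.qcut(df["ret_6m"], 3, labels=False)
df["bucket2"] = (df.groupby("bucket1")["ret_36m"]
                   .transform(lambda x: pd.qcut(x, 3, labels=False)))

table = df.groupby(["bucket1", "bucket2"])["fut_ret"].mean().unstack()
print(table.round(3))
# The spread across bucket1 (rows) is large while the spread across bucket2
# (columns) is flat, mirroring the finding that short-term returns carry most
# of the information in this toy setup.
```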
Economics
APA, Harvard, Vancouver, ISO, and other styles
24

Anastasiou, Maria S. "Beginning Female Therapists' Experiences of Applying Theory into Their Practice." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/31771.

Full text
Abstract:
Although there is an extensive amount of literature on the developmental stages of beginning therapists and the challenges they face, little is known about one of their most difficult challenges: transferring theory learned in class to their practice. This study is a qualitative look at how beginning therapists learn to apply theory to their practice. Ten students who were beginning therapists with at least 75 hours of client contact were interviewed from four different universities with accredited marriage and family therapy programs. The study was conducted using a phenomenological perspective to explore how beginning therapists begin to apply theory to their practice. Using the constant comparison method of analysis, five major themes, as well as a general developmental process, emerged from the interviews that help to describe how beginning therapists apply theory to their practice. The main themes found include before seeing clients, the early process of theory application, what was helpful, the later process of theory application, and a reflection on that process. Implications for beginning therapists and training programs, as well as for future research, are indicated.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Shu-Fen. "Cooperative Learning, Multiple Intelligences and Proficiency: application in college English language teaching and learning." Australian Catholic University. Faculty of Education, 2005. http://dlibrary.acu.edu.au/digitaltheses/public/adt-acuvp120.25102006.

Full text
Abstract:
The purpose of this research is to investigate whether the implementation of Cooperative Learning (CL) activities, incorporating the insights given by Howard Gardner's (1993) theory of Multiple Intelligences (MI) and the notion of the Whole Language Approach (WLA), in college EFL classrooms will have a positive effect on students' language proficiency and attitude. A quasi-experimental study was developed. The site of this study was an EFL classroom in a Taiwanese college. The subjects were from the researcher's three English classes at Chung Hwa Institute of Medical Technology during one semester. Many learning activities based on Gardner's theory of Multiple Intelligences were used while a Cooperative Learning approach was practiced. The data for this study were collected from three sources. One was the subjects' questionnaires on attitudes and on motivation regarding Cooperative Learning and Multiple Intelligences. Another was student interviews. The third was the students' scores on their language proficiency tests. The results of the study showed that the experimental group that was taught using the ideas based on CL and MI outperformed the group based on CL alone, and the control group, on the Simulated English General Proficiency tests for the four language skills. Though there were no significant differences among them within this short study, the motivation to learn English was enhanced a great deal for the experimental group that was taught using the CL and MI ideas. Based upon the insights gained from this study, CL, MI, WLA and a Language Learning Center are recommended to be integrated into the Junior College English curriculum. Pedagogical implications for the application of CL and MI in an EFL classroom were developed. Above all, suggestions for teacher development in CL and MI were proposed. Finally, suggestions for future research have been recommended.
APA, Harvard, Vancouver, ISO, and other styles
26

Gleason, James P. "THE IMPACT OF INTERACTIVE FUNCTIONALITY ON LEARNING OUTCOMES: AN APPLICATION OF OUTCOME INTERACTIVITY THEORY." Lexington, Ky. : [University of Kentucky Libraries], 2009. http://hdl.handle.net/10225/1165.

Full text
Abstract:
Thesis (Ph. D.)--University of Kentucky, 2009.
Title from document title page (viewed on May 24, 2010). Document formatted into pages; contains: xix, 225 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 217-222).
APA, Harvard, Vancouver, ISO, and other styles
27

Lyn, André T. "Training end-users, the application of cognitive theory to learning a database software package." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0035/MQ27039.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Ma, Xiaoxu. "Learning coupled conditional random field for image decomposition : theory and application in object categorization." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44719.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 171-180).
The goal of this thesis is to build a computational system that is able to identify object categories within images. To this end, this thesis proposes a computational model of "recognition-through-decomposition-and-fusion" based on the psychophysical theories of information dissociation and integration in human visual perception. At the lowest level, contour and texture processes are defined and measured. In the mid-level, a novel coupled Conditional Random Field model is proposed to model and decompose the contour and texture processes in natural images. Various matching schemes are introduced to match the decomposed contour and texture channels in a dissociative manner. As a counterpart to the integrative process in the human visual system, adaptive combination is applied to fuse the perception in the decomposed contour and texture channels. The proposed coupled Conditional Random Field model is shown to be an important extension of popular single-layer Random Field models for modeling image processes, by dedicating a separate layer of random field grid to each individual image process and capturing the distinct properties of multiple visual processes. The decomposition enables the system to fully leverage each decomposed visual stimulus to its full potential in discriminating different object classes. Adaptive combination of multiple visual cues well mirrors the fact that different visual cues play different roles in distinguishing various object classes. Experimental results demonstrate that the proposed computational model of "recognition-through-decomposition-and-fusion" achieves better performance than most of the state-of-the-art methods in recognizing the objects in Caltech-101, especially when only a limited number of training samples are available, which conforms with the capability of learning to recognize a class of objects from a few sample images in the human visual system.
by Xiaoxu Ma.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
29

Lyn, Andre T. (Andre Tyrone) Carleton University Dissertation Management Studies. "Training end-users: The application of cognitive theory to learning a database software package." Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lu, Yibiao. "Statistical methods with application to machine learning and artificial intelligence." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44730.

Full text
Abstract:
This thesis consists of four chapters. Chapter 1 focuses on theoretical results for high-order Laplacian-based regularization in function estimation. We study iterated Laplacian regularization in the context of supervised learning in order to achieve both nice theoretical properties (like thin-plate splines) and good performance over complex regions (like the soap-film smoother). In Chapter 2, we propose an innovative static path-planning algorithm called m-A* for environments full of obstacles. Theoretically we show that m-A* reduces the number of vertices. In the simulation study, our approach outperforms A* armed with the standard L1 heuristic and stronger ones such as True-Distance Heuristics (TDH), yielding faster query times, adequate memory usage and reasonable preprocessing time. Chapter 3 proposes the m-LPA* algorithm, which extends the m-A* algorithm to dynamic path-planning and achieves better performance than the benchmark, Lifelong Planning A* (LPA*), in terms of robustness and worst-case computational complexity. Employing the same beamlet graphical structure as m-A*, m-LPA* encodes the information of the environment in a hierarchical, multiscale fashion, and therefore produces a more robust dynamic path-planning algorithm. Chapter 4 focuses on an approach to the prediction of spot electricity spikes via a combination of boosting and wavelet analysis. Extensive numerical experiments show that our approach improves prediction accuracy compared to support vector machines, thanks to the fact that the gradient boosting trees method inherits the good properties of decision trees, such as robustness to irrelevant covariates, fast computation and good interpretability.
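For context, the baseline that m-A* is compared against, grid A* with an L1 (Manhattan) heuristic, can be written compactly; the grid and obstacle below are toy inputs, and this sketch does not use the beamlet structure of the thesis.

```python
# Sketch: standard grid A* with the L1 (Manhattan) heuristic, the baseline
# mentioned in the abstract. Toy grid; '#' marks an obstacle; 4-connected moves.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # L1 heuristic
    open_heap = [(h(start), 0, start)]
    g = {start: 0}
    while open_heap:
        f, cost, node = heapq.heappop(open_heap)
        if node == goal:
            return cost
        if cost > g.get(node, float("inf")):
            continue                                          # stale heap entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                if cost + 1 < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = cost + 1
                    heapq.heappush(open_heap, (cost + 1 + h((nr, nc)), cost + 1, (nr, nc)))
    return None

grid = ["....#....",
        "....#....",
        "....#....",
        "........."]
print("shortest path length:", astar(grid, (0, 0), (0, 8)))
```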
APA, Harvard, Vancouver, ISO, and other styles
31

Ahmadibasir, Mohammad. "The application of language-game theory to the analysis of science learning: developing an interpretive classroom-level learning framework." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1195.

Full text
Abstract:
In this study an interpretive learning framework that aims to measure learning on the classroom level is introduced. In order to develop and evaluate the value of the framework, a theoretical/empirical study is designed. The researcher attempted to illustrate how the proposed framework provides insights on the problem of classroom-level learning. The framework is developed by construction of connections between the current literature on science learning and Wittgenstein's language-game theory. In this framework learning is defined as change of classroom language-game or discourse. In the proposed framework, learning is measured by analysis of classroom discourse. The empirical explanation power of the framework is evaluated by applying the framework in the analysis of learning in a fifth-grade science classroom. The researcher attempted to analyze how students' colloquial discourse changed to a discourse that bears more resemblance to science discourse. The results of the empirical part of the investigation are presented in three parts: first, the gap between what students did and what they were supposed to do was reported. The gap showed that students during the classroom inquiry wanted to do simple comparisons by direct observation, while they were supposed to do tool-assisted observation and procedural manipulation for a complete comparison. Second, it was illustrated that the first attempt to connect the colloquial to science discourse was done by what was immediately intelligible for students and then the teacher negotiated with students in order to help them to connect the old to the new language-game more purposefully. The researcher suggested that these two events in the science classroom are critical in discourse change. Third, it was illustrated that through the academic year, the way that students did the act of comparison was improved and by the end of the year more accurate causal inferences were observable in classroom communication. At the end of the study, the researcher illustrates that the application of the proposed framework resulted in an improved version of the framework. The improved version of the proposed framework is more connected to the topic of science learning, and is able to measure the change of discourse in higher resolution.
APA, Harvard, Vancouver, ISO, and other styles
32

Hutchin, Charles E. "The application of the Theory of Constraints Thinking Process to manufacturing managers in implementing change." Thesis, Cranfield University, 1999. http://dspace.lib.cranfield.ac.uk/handle/1826/4690.

Full text
Abstract:
This research is concerned with the problems faced by managers within manufacturing when they are expected to successfully implement a major change within their organisation. It uses, as the vehicle for the research, the Theory of Constraints Thinking Process (TOC/TP) first developed by Dr Goldratt between 1986 and 1994. The TOC is used by managers to determine what requires to be changed within their organisation and then to develop both the solution and the implementation strategy. The research has used the access obtained by the researcher to examine the approaches adopted by manufacturing managers in implementing improvement projects which involve significant change. The primary focus of the research was to confirm the existence of a significant barrier to change and to determine whether this was a function of the individual. Once the obstacle had been identified in specific situations, the second step was to consider whether the obstacle could be described in a generic form with application to a much wider range of change environments. The final stage was to replicate the exploratory stage in other companies in other countries through the involvement of colleagues of the researcher and then consider what might be included in any change project which would overcome the obstacle so defined. The primary method of data collection was through the application of action research and the development of the data in the form of case studies. The number and types of companies that took part in the study, and the range of countries, were intended to ensure a reasonable spread of data. The results suggest that one of the key obstacles to change is that outlined in the research problem and that the TOC/TP, through the use of the cloud technique, can describe this obstacle and give direction to the way of successfully dealing with it.
APA, Harvard, Vancouver, ISO, and other styles
33

Lorenz, Nicole. "Application of the Duality Theory." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-94108.

Full text
Abstract:
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning. First we give some notations and preliminaries we need within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory and give some generalized interior-point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals, give strong duality results and optimality conditions using some regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above. The generalized deviation measures we introduced in the previous chapter can be used to formulate some portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions, respectively, given in the second and third chapters. Analogous calculations are done for a portfolio optimization problem having single chance constraints using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and derive its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we get problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes Vapnik's well-known ε-insensitive loss and consider the optimization problems that arise by using this. We show how the general theory can be applied to a real data set; in particular, we predict the concrete compressive strength by using a special Support Vector Regression problem.
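For reference, the perturbation construction the abstract starts from can be stated compactly in textbook notation (a sketch, not the thesis's own formulation); the Lagrange and Fenchel duals it mentions are special cases of this scheme.

```latex
% Textbook perturbation scheme behind conjugate duality (a sketch). Embed the
% primal problem into a family parameterized by y via a perturbation function
% \Phi with \Phi(x,0) = f(x):
\[
  (P)\;\; \inf_{x \in X} \Phi(x, 0),
  \qquad\qquad
  (D)\;\; \sup_{y^{*} \in Y^{*}} \bigl\{ -\Phi^{*}(0, y^{*}) \bigr\},
\]
% where \Phi^{*} is the conjugate of \Phi. Weak duality v(D) \le v(P) always
% holds; a (generalized interior-point) regularity condition yields strong
% duality with dual attainment. The choice \Phi(x,y) = f(x) + g(Ax + y)
% recovers the Fenchel-type dual
\[
  \sup_{y^{*}} \bigl\{ -f^{*}(A^{*}y^{*}) - g^{*}(-y^{*}) \bigr\},
\]
% while perturbing explicit constraints recovers the classical Lagrange dual.
```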
APA, Harvard, Vancouver, ISO, and other styles
34

Hill, S. "Applications of statistical learning theory to signal processing problems." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604048.

Full text
Abstract:
The dissertation focuses on the applicability of Support Vector Regression (SVR) in signal processing contexts. SVR is shown to be particularly well suited to filtering in alpha-stable noise environments, and a further slight modification is proposed to this end. The main work in this dissertation on SVR is its application to audio filtering based on perceptual criteria. SVR appears to be an ideal solution to this problem because the loss function typically used by perceptual audio filtering practitioners incorporates a region of zero loss, as does SVR. SVR is extended to the problem of complex-valued regression, for application to the audio filtering problem in the frequency domain. This is done with regions of zero loss that are both square and circular, and the circular case is extended to the problem of vector-valued regression. Three experiments are detailed with a mix of both good and poor results, and further refinements are proposed. Polychotomous, or multi-category, classification is then studied. Many previous attempts are reviewed and compared. A new approach is proposed, based on a geometrical structure. This is shown to overcome many of the problems identified with previous methods, in addition to being very flexible and efficient in its implementation. This architecture is also derived, for just the three-class case, using a complex-valued kernel function. The general architecture is used experimentally in three separate implementations to give a demonstration of the overall approach. The methodology is shown to achieve results comparable to those of many other methods, and to include many of them as special cases. Further possible refinements are proposed which should drastically reduce optimisation times for so-called 'all-together' methods.
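The ε-insensitive loss at the heart of SVR, zero loss inside a tube of width ε around the fit, can be demonstrated as a simple noise filter with scikit-learn; the toy sinusoid and Gaussian noise below are illustrative, not the alpha-stable or perceptual-audio settings of the dissertation.

```python
# Sketch: support vector regression with an epsilon-insensitive loss used as a
# simple noise filter (assuming scikit-learn). Toy signal and Gaussian noise only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Inside the epsilon-tube around the fit the loss is exactly zero, the property
# exploited for perceptual filtering in the dissertation.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=50.0)
filtered = svr.fit(t.reshape(-1, 1), noisy).predict(t.reshape(-1, 1))

print("noisy RMSE:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("filtered RMSE:", np.sqrt(np.mean((filtered - clean) ** 2)))
```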
APA, Harvard, Vancouver, ISO, and other styles
35

Bouvrie, Jacob V. "Hierarchical learning : theory with applications in speech and vision." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54227.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 123-132).
Over the past two decades several hierarchical learning models have been developed and applied to a diverse range of practical tasks with much success. Little is known, however, as to why such models work as well as they do. Indeed, most are difficult to analyze, and cannot be easily characterized using the established tools from statistical learning theory. In this thesis, we study hierarchical learning architectures from two complementary perspectives: one theoretical and the other empirical. The theoretical component of the thesis centers on a mathematical framework describing a general family of hierarchical learning architectures. The primary object of interest is a recursively defined feature map, and its associated kernel. The class of models we consider exploit the fact that data in a wide variety of problems satisfy a decomposability property. Paralleling the primate visual cortex, hierarchies are assembled from alternating filtering and pooling stages that build progressively invariant representations which are simultaneously selective for increasingly complex stimuli. A goal of central importance in the study of hierarchical architectures and the cortex alike is that of understanding quantitatively the tradeoff between invariance and selectivity, and how invariance and selectivity contribute towards providing an improved representation useful for learning from data. A reasonable expectation is that an unsupervised hierarchical representation will positively impact the sample complexity of a corresponding supervised learning task.
(cont.) We therefore analyze invariance and discrimination properties that emerge in particular instances of layered models described within our framework. A group-theoretic analysis leads to a concise set of conditions which must be met to establish invariance, as well as a constructive prescription for meeting those conditions. An information-theoretic analysis is then undertaken and seen as a means by which to characterize a model's discrimination properties. The empirical component of the thesis experimentally evaluates key assumptions built into the mathematical framework. In the case of images, we present simulations which support the hypothesis that layered architectures can reduce the sample complexity of a non-trivial learning problem. In the domain of speech, we describe a localized analysis technique that leads to a noise-robust representation. The resulting biologically-motivated features are found to outperform traditional methods on a standard phonetic classification task in both clean and noisy conditions.
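For illustration, a schematic sketch of the alternating filter/pool hierarchy described above follows. The template banks, patch sizes, and max-pooling rule are assumptions for demonstration only, not the thesis's derived recursive feature map or kernel.

    import numpy as np

    def filter_stage(x, templates):
        # "Filtering": respond to each template over every local patch of the input.
        patches = np.lib.stride_tricks.sliding_window_view(x, templates.shape[1])
        return patches @ templates.T            # shape: (n_patches, n_templates)

    def pool_stage(responses):
        # "Pooling": keep the maximum response per template (builds invariance).
        return responses.max(axis=0)

    def hierarchical_feature_map(x, layers):
        rep = x
        for templates in layers:                # each layer has its own template bank
            rep = pool_stage(filter_stage(rep, templates))
        return rep                              # final representation fed to a kernel/classifier

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(64)
    layers = [rng.standard_normal((6, 9)), rng.standard_normal((4, 6))]
    features = hierarchical_feature_map(signal, layers)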
by Jacob V. Bouvrie.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
36

Huang, Xin. "A study on the application of machine learning algorithms in stochastic optimal control." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252541.

Full text
Abstract:
By observing a similarity between the goal of stochastic optimal control, to minimize an expected cost functional, and the aim of machine learning, to minimize an expected loss function, a method of applying machine learning algorithms to approximate the optimal control function is established and implemented via neural network approximation. Based on a discretization framework, a recursive formula is derived for the gradient of the approximated cost functional with respect to the parameters of the neural network. For a well-known Linear-Quadratic-Gaussian control problem, the approximated neural network function obtained with the stochastic gradient descent algorithm manages to reproduce the shape of the theoretical optimal control function, and applying different types of machine learning optimization algorithms gives quite similar accuracy in terms of their associated empirical value functions. Furthermore, it is shown that the accuracy and stability of the machine learning approximation can be improved by increasing the minibatch size and applying a finer discretization scheme. These results suggest the effectiveness and appropriateness of applying machine learning algorithms to stochastic optimal control.
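As a rough illustration of this recipe, the following sketch trains a small neural network policy by stochastic gradient descent on a time-discretized quadratic cost. The one-dimensional dynamics, network size, and hyperparameters are assumptions and do not reproduce the thesis's exact discretization or gradient formula.

    import torch

    T, dt, sigma = 20, 0.05, 0.1           # horizon steps, step size, noise level
    q, r = 1.0, 0.1                        # state and control cost weights

    policy = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
    opt = torch.optim.SGD(policy.parameters(), lr=1e-2)

    for step in range(2000):
        x = torch.randn(64, 1)             # minibatch of initial states
        cost = torch.zeros(64, 1)
        for t in range(T):
            u = policy(x)
            cost = cost + (q * x**2 + r * u**2) * dt       # accumulate running cost
            noise = sigma * torch.randn_like(x) * dt**0.5
            x = x + (x + u) * dt + noise   # assumed linear dynamics dx = (x + u) dt + sigma dW
        loss = cost.mean()                 # empirical cost functional over the minibatch
        opt.zero_grad()
        loss.backward()
        opt.step()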
APA, Harvard, Vancouver, ISO, and other styles
37

Brown, TeAirra Monique. "Playing to Win: Applying Cognitive Theory and Gamification to Augmented Reality for Enhanced Mathematical Outcomes in Underrepresented Student Populations." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/97340.

Full text
Abstract:
National dialogue and scholarly research illustrate the need for engaging science, technology, engineering, and math (STEM) innovations in K-12 environments, most importantly in low-income communities (President's Council of Advisors on Science and Technology, 2012). According to Educating the Engineer of 2020, "current curricular material does not portray STEM in ways that seem likely to excite the interest of students from a variety of ethnic and cultural backgrounds" (Phase, 2005). The National Educational Technology Plan of 2010 holds that one of the most powerful ways to transform and improve K-12 STEM education is to instill a culture of innovation by leveraging cutting-edge technology (Polly et al., 2010). Augmented reality (AR) is an emerging and promising educational intervention that has the potential to engage students and transform their learning of STEM concepts. AR blends the real and virtual worlds by overlaying computer-generated content such as images, animations, and 3D models directly onto the student's view of the real world. Visual representations of STEM concepts using AR produce new educational learning opportunities, for example, allowing students to visualize abstract concepts and make them concrete (Radu, 2014). Although evidence suggests that learning can be enhanced by implementing AR in the classroom, it is important to take into account how students are processing AR content. Therefore, this research aims to examine the unique benefits and challenges of utilizing augmented reality (AR) as a supplemental learning technique to reinforce mathematical concepts while concurrently responding to students' cognitive demands. To examine and understand how cognitive demands affect students' information processing and creation of new knowledge, Mayer's Cognitive Theory of Multimedia Learning (CTML) is leveraged as a theoretical framework to ground the AR application and supporting research. Also, to enhance students' engagement, gamification was used to incorporate game elements (e.g. rewards and leaderboards) into the AR applications. This research applies gamification and CTML principles to tablet-based gamified learning AR (GLAR) applications as a supplemental tool to address three research objectives: (1) understanding the role of prior knowledge on cognitive performance, (2) examining if adherence to CTML principles applies to GLAR, and (3) investigating the impact of cognitive style on cognitive performance. Each objective investigates how the inclusion of CTML in gamifying an AR experience influences students' perception of cognitive effects and how GLAR affects or enhances their ability to create new knowledge. Significant results from objective one suggest that (1) there were no differences between novice and experienced students' cognitive load, and (2) novice students' content-based learning gains can be improved through interaction with GLAR. Objective two found that high adherence to CTML's principles was effective at (1) lowering students' cognitive load, and (2) improving GLAR performance. The key findings of objective three are that (1) there was no difference in FID students' cognitive load when voice and coherence were manipulated, and (2) both FID and FD students had content-based learning gains after engagement with GLAR. The results of this research add to the existing knowledge base for researchers, designers and practitioners to consider when creating gamified AR applications.
Specifically, this research provides contributions to the field that include empirical evidence to suggest to what degree CTML is effective as an AR-based supplemental pedagogical tool for underrepresented students in southwest Virginia. Moreover, it offers empirical data on the relationship between underrepresented students' perceived benefits of GLAR and its impact on students' cognitive load. This research further offers recommendations as well as design considerations regarding the applicability of CTML when developing GLAR applications.
PHD
APA, Harvard, Vancouver, ISO, and other styles
38

Lai, Kai Hong. "Transformative process in organizational learning, theory, skills and application of a four-phase mediation model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62023.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Clore, Christine W. "Social skills use of adolescents with learning disabilities: An application of Bandura's theory of reciprocal interaction." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5291/.

Full text
Abstract:
This was a mixed methods study designed to investigate the social skills use of adolescents with learning disabilities through an application of Albert Bandura's theory of reciprocal interaction. Data were collected through ranking surveys, observations, interviews, and school records. Three questions were investigated. The first question was to determine whether the language deficits of LD students contributed to their generally decreased social competency. Through data from the Social Skills Rating System, the seventh-grade participants were considered socially competent to some degree by self-report, their teachers, and their parents. Factor analysis revealed students were the best predictors of their own social skills use among all data sources. In ranking participants' social skills use, students and teachers were more strongly correlated than were students and parents, or teachers and parents. No relationship of any strength existed between the participants' cognitive ability and their social competence. Use of Bandura's determinants indicated that a relationship existed between some subtypes of learning disabilities and some types of social skills misuse. The participants diagnosed with reading disability, auditory processing disability, receptive/expressive language disability, or nonverbal learning disability all made the majority of their observed social skills errors in the environmental determinant of Bandura's triad of reciprocal interaction. The participants in the four subtypes experienced their information processing deficits in attending to environmental stimuli, or in attending to inappropriate environmental stimuli. The area of each subtype's information processing deficit aligned with the determinant in which that subtype's participants experienced their social errors. Bandura's triad of cognition, environment, and behavior was not equilateral, because balance did not exist among the three determinants in participants with learning disabilities.
APA, Harvard, Vancouver, ISO, and other styles
40

Parisi, Aaron Thomas. "An Application of Sliding Mode Control to Model-Based Reinforcement Learning." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2054.

Full text
Abstract:
The state-of-the-art model-free reinforcement learning algorithms can generate admissible controls for complicated systems with no prior knowledge of the system dynamics, so long as sufficient samples (oftentimes millions) are available from the environment. On the other hand, model-based reinforcement learning approaches seek to bring known optimal or robust control to reinforcement learning tasks by modelling the system dynamics and applying well-established control algorithms to the system model. Sliding-mode controllers are robust to system disturbance and modelling errors, and have been widely used for high-order nonlinear system control. This thesis studies the application of sliding mode control to model-based reinforcement learning. Computer simulation results demonstrate that sliding-mode control is viable in the setting of reinforcement learning. While the system performance may suffer from problems such as deviations in state estimation, limitations in the capacity of the system model to express the system dynamics, and the need for many samples to converge, this approach still performs comparably to conventional model-free reinforcement learning methods.
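To make the control law concrete, a minimal sliding-mode controller on a double integrator is sketched below; the gains, dynamics, and the stand-in for model error are illustrative assumptions rather than the thesis's model-based reinforcement learning setup.

    import numpy as np

    dt, steps = 0.01, 1000
    c, k = 2.0, 5.0                       # sliding-surface slope and switching gain
    x, v = 1.0, 0.0                       # position and velocity (start away from target)
    target = 0.0

    def unmodelled_disturbance(x, v):
        # Stand-in for model error / disturbance that the sliding mode should reject.
        return 0.2 * np.sin(5.0 * x)

    for _ in range(steps):
        e, e_dot = x - target, v
        s = c * e + e_dot                 # sliding surface
        u = -c * e_dot - k * np.sign(s)   # equivalent control plus switching term
        a = u + unmodelled_disturbance(x, v)   # true acceleration includes the unmodelled term
        x, v = x + v * dt, v + a * dt     # Euler integration of the double integrator

The switching gain k dominates the assumed disturbance bound, which is what drives the sliding variable s to zero despite the model error.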
APA, Harvard, Vancouver, ISO, and other styles
41

Brown, Stephen F. (Stephen Francis). "The Use of Learning Theory in the Application of Artificial Intelligence to Computer-Assisted Instruction of Physics." Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc330775/.

Full text
Abstract:
It was the purpose of this research to develop and test an artificially intelligent, learner-based, computer-assisted physics tutor. The resulting expert system is named ARPHY, an acronym for ARtificially intelligent PHYsics tutor. The research was conducted in two phases. In the first phase of the research, the system was constructed using Ausubel's advance organizer as a guiding learning theory. The content of accelerated motion was encoded into this organizer after sub-classification according to the learning types identified by Gagné. The measurement of the student's level of learning was accomplished through the development of questioning strategies based upon Bloom's taxonomy of educational objectives. The second phase of this research consisted of the testing of ARPHY. Volunteers from four levels of first-semester physics classes at North Texas State University were instructed that their goal was to solve three complex physics problems related to accelerated motion. The only students initially instructed by ARPHY were from the class of physics majors. When the threshold values of the pedagogical parameters stabilized, indicating that ARPHY's instructional technique had adapted to the class's learning style, students from other classes were tutored. Nine of the ten students correctly solved the three problems after being tutored for an average of 116 minutes. ARPHY's pedagogical parameters stabilized after 6.3 students. The remaining students, each from a different class, were tutored, allowing ARPHY to self-improve, resulting in a new tutorial strategy after each session. It is recommended that future research into intelligent tutoring systems for science incorporate the principles and theories of learning upon which this research was based. An authoring system based upon the control structure of ARPHY should be developed, since the modular design of this system will allow any field which can be organized into a net-archy of problems, principles, and concepts to be tutored.
APA, Harvard, Vancouver, ISO, and other styles
42

Sėrikovienė, Silvija. "Research on application of learning objects reusability and quality evaluation methods." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130220_160734-35867.

Full text
Abstract:
High-quality learning material (Learning Objects, or LOs) is one of the main factors in learning quality. Therefore, evaluation of LO quality is one of the most relevant education problems. The problem is relevant for all participants in the educational sector: for educational institutions (e.g. schools) that have to select high-quality learning material for their needs, for education policy makers who need clear quality criteria when implementing LO tenders, for authors of learning material (e.g. publishers) who need to know the quality requirements for creating LOs, etc. The research work aims to propose and pilot an LO reusability and quality evaluation methodology, i.e., a quality model together with simple and effective expert evaluation methods, thus improving the solution of educational tasks using informatics engineering methods. To reach this aim, we have to analyse the notions of LO reusability and expert evaluation of quality, the principles of creating an LO reusability and quality model, and possible simple and effective methods for the expert evaluation of LO quality and reusability. Both the LO quality and reusability model and the evaluation method are presented in the work. The LO quality model created consists of nine quality criteria divided into three groups, i.e. technological, pedagogical, and IPR criteria. This model is comprehensive and matches scientific principles for creating a model. The following methods are selected and consecutively applied in the research while evaluating LOs quality... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
43

Griffiths, Kerryn Eva. "Discovering, applying and integrating self-knowledge : a grounded theory study of learning in life coaching." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/37245/1/Kerryn_Griffiths_Thesis.pdf.

Full text
Abstract:
Professional coaching is a rapidly expanding field with interdisciplinary roots and broad application. However, despite abundant prescriptive literature, research into the process of coaching, and especially life coaching, is minimal. Similarly, although learning is inherently recognised in the process of coaching, and coaching is increasingly being recognised as a means of enhancing teaching and learning, the process of learning in coaching is little understood, and learning theory makes up only a small part of the evidence-based coaching literature. In this grounded theory study of life coaches and their clients, the process of learning in life coaching across a range of coaching models is examined and explained. The findings demonstrate how learning in life coaching emerged as a process of discovering, applying and integrating self-knowledge, which culminated in the development of self. This process occurred through eight key coaching processes shared between coaches and clients and combined a multitude of learning theories.
APA, Harvard, Vancouver, ISO, and other styles
44

Myers, James William. "Stochastic algorithms for learning with incomplete data an application to Bayesian networks /." Full text available online (restricted access), 1999. http://images.lib.monash.edu.au/ts/theses/Myers.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kusuma, Mutiara Tirta Prabandari Lintang. "Strengthening the competence of dietetics students on providing nutrition care for HIV patients: application of attribution theory." Diss., Kansas State University, 2017. http://hdl.handle.net/2097/36227.

Full text
Abstract:
Doctor of Philosophy
Department of Food, Nutrition, Dietetics, and Health
Tandalayo Kidd
HIV and nutrition status are interrelated. Nutrition problems associated with HIV or its treatment occur in nearly all people living with HIV (PLHIV) and can be indicative of the stage and progression of infection. On the other hand, adequate nutrition ensures good nutrition status, immune function, improved treatment outcomes, and quality of life. The growing problems of HIV and AIDS in Indonesia require health professionals, including dietitians, to mobilize for HIV care and control. However, studies have demonstrated that health care workers hold prejudicial attitudes towards PLHIV, which may further jeopardize the quality of care. The objective of this study was to apply attribution theory to improve HIV-related knowledge and attitudes among dietetics students. It was hypothesized that, given the opportunity to revisit the antecedents of their stigma, dietetics students might be able to improve their attitudes and emotional reactions to HIV. Results from the cross-sectional study confirmed attribution theory, showing that stigmatizing attitudes were influenced by both personal values and environmental factors. The study also found that greater knowledge about HIV was associated with a better attitude toward PLHIV. This, and the fact that universities differed in how they educated dietetics students about HIV, raises questions about the current dietetics curriculum in Indonesia and the teaching conduct in each dietetics school. These notions were studied in the second study, using a qualitative approach to interview lecturers and school administrators. Four major themes emerged from the analysis, confirming that HIV discourse in dietetics schools in Indonesia is very limited since it is not mandatory in the curriculum, lecturers are reluctant to talk about HIV, and there is an apparent restriction on working with the key population. The way the lecturers attribute HIV to blameworthy personal responsibility, together with their fear of contagion, heavily influences their teaching conduct. The intervention model with transformative learning supported the hypothesis that, given the opportunity to reflect on and re-question their judgment, students were able to improve their knowledge and reduce their stigmatizing attitudes. Overall, these studies give a warning to policy makers in the health and education sectors, as well as to school administrators, that dietetics students have negative attitudes towards PLHIV and that this stigma is associated with a lack of knowledge about HIV, hence the need to improve the response from both sectors. This study also serves as a strong call to provide more opportunities for students to learn about HIV and to reach out to patients and the key population to instill better understanding and acceptance of HIV.
APA, Harvard, Vancouver, ISO, and other styles
46

Pappone, Francesco. "Graph neural networks: theory and applications." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23893/.

Full text
Abstract:
Artificial neural networks have seen, in recent years, dizzying growth in their applications and in the architectures of the models employed. In this thesis we introduce neural networks on Euclidean domains, in particular showing the importance of translation equivariance in convolutional networks, and we introduce, by analogy, an extension of convolution to structured data such as graphs. We also present the architectures of the main Graph Neural Networks and, for each of the three proposed architectures (Spectral Graph Convolutional Network, Graph Convolutional Network, Graph Attention Network), give an application that shows both how it works and why it matters. We further discuss the implementation of a classification algorithm based on two variants of the Graph Convolutional Network architecture, trained and tested on the PROTEINS dataset, capable of classifying the proteins in the dataset into two categories: enzymes and non-enzymes.
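For context, a minimal NumPy sketch of one Graph Convolutional Network layer (Kipf-Welling style propagation) follows; the toy graph, feature dimensions, and random weights are assumptions for illustration only.

    import numpy as np

    def gcn_layer(A, H, W):
        # One graph convolution: symmetric normalization of the adjacency with
        # self-loops, followed by feature mixing and a ReLU nonlinearity.
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

    A = np.array([[0, 1, 0, 0],                   # 4-node toy graph (adjacency matrix)
                  [1, 0, 1, 1],
                  [0, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    H = np.random.standard_normal((4, 3))         # node features
    W = np.random.standard_normal((3, 2))         # learnable weights
    H_next = gcn_layer(A, H, W)                    # new node embeddings, shape (4, 2)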
APA, Harvard, Vancouver, ISO, and other styles
47

Ostrow, Korinn S. "A Foundation For Educational Research at Scale: Evolution and Application." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/163.

Full text
Abstract:
The complexities of how people learn have plagued researchers for centuries. A range of experimental and non-experimental methodologies have been used to isolate and implement positive interventions for students' cognitive, meta-cognitive, behavioral, and socio-emotional successes in learning. But the face of learning is changing in the digital age. The value of accrued knowledge, popular throughout the industrial age, is being overpowered by the value of curiosity and the ability to ask critical questions. Most students can access the largest free collection of human knowledge (and cat videos) with ease using their phones or laptops and omnipresent cellular and Wi-Fi networks. Viewing this new-age capacity for connection as an opportunity, educational stakeholders have delegated many traditional learning tasks to online environments. With this influx of online learning, student errors can be corrected with immediacy, student data is more prevalent and actionable, and teachers can intervene with efficiency and efficacy. As such, endeavors in educational data mining, learning analytics, and authentic educational research at scale have grown popular in recent years; fields afforded by the luxuries of technology and driven by the age-old goal of understanding how people learn. This dissertation explores the evolution and application of ASSISTments Research, an approach to authentic educational research at scale that leverages ASSISTments, a popular online learning platform, to better understand how people learn. Part I details the evolution and advocacy of two tools that form the research arm of ASSISTments: the ASSISTments TestBed and the Assessment of Learning Infrastructure (ALI). An NSF-funded Data Infrastructure Building Blocks grant (#1724889; $494,644; 2017-2020) outlines goals for the new age of ASSISTments Research as a result of lessons learned in recent years. Part II details a personal application of these research tools with a focus on the framework of Self Determination Theory. The primary facets of this theory, thought to positively affect learning and intrinsic motivation, are investigated in depth through randomized controlled trials targeting Autonomy, Belonging, and Competence. Finally, a synthesis chapter highlights important connections between Parts I & II, offering lessons learned regarding ASSISTments Research and suggesting additional guidance for its future development, while broadly defining contributions to the Learning Sciences community.
APA, Harvard, Vancouver, ISO, and other styles
48

Gardiner, Penelope Ann. "The application of learning organisation theory to the management of change : with reference to the engineering sector." Thesis, University of Plymouth, 1998. http://hdl.handle.net/10026.1/2639.

Full text
Abstract:
Recent contributions to the literature on organisations have emphasised the need for constant adaptation to keep pace with the accelerating rate of environmental change. The learning organisation is proposed as one of the most effective means of achieving successful adaptation through a central focus on learning. This thesis examines the development of the ideas which have led to the concept of the learning organisation and the application of this concept to the management of change. A number of reasons are proposed for the current adoption of learning organisation theory; these include the restructuring and downsizing of organisations, new Human Resource Management practices, improved understanding of learning, and systems thinking. Organisational change is examined in relation to learning and a number of models of change management are considered. Different approaches to the evaluation of change are also discussed and some examples outlined. Some of the elements which comprise a learning organisation are described and the relationships between these indicated. The project aimed to apply learning organisation theory to the management of change by studying firms which were intending to become learning organisations. A generic model was constructed and used to form the basis of a specially designed diagnostic instrument for the measurement of learning organisation characteristics. This took the form of a questionnaire called the Learning Organisation Research Inventory (LORI). Data were collected from two large organisations in the engineering sector via administration of the questionnaire and interviews with employees. Analysis of the quantitative data was based on nine conceptual categories derived from the literature. Factor analysis was carried out on the second data set but this failed to provide a satisfactory classification. It was proposed that further factor analysis be conducted on a larger sample. The results of the study indicated that the generic model was probably inappropriate; there were factors specific to the engineering sector and to these particular companies which probably influenced the success of learning initiatives and indicated the need for a sector-specific model. Neither organisation could be said to be a learning organisation and it did not prove possible to identify the components of such organisations. However, the lack of certain characteristics in these organisations appeared to have acted as barriers to learning. It was proposed that a learning orientation might be a more useful perspective than a learning organisation and may perhaps be easier to achieve. A new model of a learning orientation was developed from the research; it is suggested that, subject to further testing, this might form the basis for future studies of this type.
APA, Harvard, Vancouver, ISO, and other styles
49

Leang, Kam K. "Iterative learning control of hysteresis in piezo-based nano-positioners : theory and application in atomic force microscopes /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/7127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yu, Guoqiang. "Machine Learning to Interrogate High-throughput Genomic Data: Theory and Applications." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/28980.

Full text
Abstract:
The missing heritability in genome-wide association studies (GWAS) is an intriguing open scientific problem which has attracted great recent interest. Interaction effects among risk factors, both genetic and environmental, are hypothesized to be one of the main sources of missing heritability. Moreover, detection of multilocus interaction effects may also have great implications for revealing disease and biological mechanisms, for accurate risk prediction, personalized clinical management, and targeted drug design. However, current analysis of GWAS largely ignores interaction effects, partly due to the lack of tools that meet the statistical and computational challenges posed by taking interaction effects into account. Here, we propose a novel statistically based framework (Significant Conditional Association) for systematically exploring, assessing the significance of, and detecting interaction effects. Further, our SCA work has also revealed new theoretical results and insights on interaction detection, as well as theoretical performance bounds. Using in silico data, we show that the new approach has detection power significantly better than that of peer methods, while controlling the running time within a permissible range. More importantly, we applied our methods to several real data sets, confirming well-validated interactions with more convincing evidence (generating smaller p-values and requiring fewer samples) than those obtained through conventional methods, eliminating inconsistent results in the original reports, and observing novel discoveries that are otherwise undetectable. The proposed methods provide a useful tool to mine new knowledge from existing GWAS and generate new hypotheses for further research. Microarray gene expression studies provide new opportunities for the molecular characterization of heterogeneous diseases. Multiclass gene selection is an imperative task for identifying phenotype-associated mechanistic genes and achieving accurate diagnostic classification. Most existing multiclass gene selection methods heavily rely on the direct extension of two-class gene selection methods. However, simple extensions of binary discriminant analysis to multiclass gene selection are suboptimal and not well matched to the unique characteristics of the multi-category classification problem. We report a simpler and yet more accurate strategy than previous works for multicategory classification of heterogeneous diseases. Our method selects the union of one-versus-everyone phenotypic up-regulated genes (OVEPUGs) and matches this gene selection with a one-versus-rest support vector machine. Our approach provides even-handed gene resources for discriminating both neighboring and well-separated classes, and intends to assure the statistical reproducibility and biological plausibility of the selected genes. We evaluated the fold changes of OVEPUGs and found that only a small number of high-ranked genes were required to achieve superior accuracy for multicategory classification. We tested the proposed OVEPUG method on six real microarray gene expression data sets (five public benchmarks and one in-house data set) and two simulation data sets, observing significantly improved performance with lower error rates, fewer marker genes, and higher performance sustainability, compared to several widely adopted gene selection and classification methods.
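As an illustration of pairing one-versus-everyone up-regulated gene selection with a one-versus-rest SVM, a hedged sketch on synthetic data follows; the ranking rule, the top-k cutoff, and the data are assumptions and may differ from the thesis's exact OVEPUG criterion.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.multiclass import OneVsRestClassifier

    def select_ovep_up_genes(X, y, top_k=20):
        # For each class, rank genes by how far their mean expression exceeds
        # the highest mean among all other classes (one-versus-everyone up-regulation).
        classes = np.unique(y)
        selected = set()
        for c in classes:
            in_c = X[y == c].mean(axis=0)
            others = np.max([X[y == o].mean(axis=0) for o in classes if o != c], axis=0)
            margin = in_c - others
            selected.update(np.argsort(margin)[-top_k:])
        return sorted(selected)

    rng = np.random.default_rng(0)
    X = rng.lognormal(size=(90, 500))               # 90 samples x 500 genes (synthetic)
    y = np.repeat([0, 1, 2], 30)                    # three phenotype classes
    genes = select_ovep_up_genes(X, y, top_k=20)
    clf = OneVsRestClassifier(LinearSVC()).fit(X[:, genes], y)   # one-versus-rest SVM on selected genes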
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
