
Dissertations / Theses on the topic 'Transfer of Learning'

Consult the top 50 dissertations / theses for your research on the topic 'Transfer of Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Shell, Jethro. "Fuzzy transfer learning." Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/8842.

Full text
Abstract:
The use of machine learning to predict output from data, using a model, is a well-studied area. There are, however, a number of real-world applications that require a model to be produced but have little or no data available of the specific environment. These situations are prominent in Intelligent Environments (IEs). The sparsity of the data can be a result of the physical nature of the implementation, such as sensors placed into disaster recovery scenarios, or where the focus of the data acquisition is on very defined user groups, as in the case of disabled individuals. Standard machine learning approaches require training data to come from the same domain. The restrictions of the physical nature of these environments can severely reduce data acquisition, making it extremely costly or, in certain situations, impossible. This impedes the ability of these approaches to model the environments. It is on this problem, in the area of IEs, that this thesis is focused. To address complex and uncertain environments, humans have learnt to use previously acquired information to reason about and understand their surroundings. Knowledge from different but related domains can be used to aid the ability to learn. For example, the ability to ride a road bicycle can help when acquiring the more sophisticated skills of mountain biking. This humanistic approach to learning can be used to tackle real-world problems where a priori labelled training data is either difficult or impossible to obtain. The transferral of knowledge from a related but differing context allows for the reuse and repurposing of known information. In this thesis, a novel composition of methods is brought together, broadly based on a humanistic approach to learning. Two concepts, Transfer Learning (TL) and Fuzzy Logic (FL), are combined in a framework, Fuzzy Transfer Learning (FuzzyTL), to address the problem of learning tasks that have no prior direct contextual knowledge. Through the use of an FL-based learning method, the uncertainty that is evident in dynamic environments is represented. By combining labelled data from a contextually related source task with little or no unlabelled data from a target task, the framework is shown to be able to accomplish predictive tasks using models learned from contextually different data. The framework incorporates an additional novel five-stage online adaptation process. By adapting the underlying fuzzy structure through the use of previous labelled knowledge and new unlabelled information, an increase in predictive performance is shown. The framework outlined is applied to two differing real-world IEs to demonstrate its ability to predict in uncertain and dynamic environments. Through a series of experiments, it is shown that the framework is capable of predicting output using differing contextual data.
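To make the idea of fuzzy transfer concrete, here is a minimal, illustrative sketch (not the FuzzyTL framework itself): triangular fuzzy sets and rule consequents are learned from labelled source data, and the universe of discourse is then rescaled to the range of unlabelled target observations before prediction. All variable names and data below are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Labelled source data: sensor reading -> output (e.g. temperature -> comfort level)
rng = np.random.default_rng(0)
x_src = rng.uniform(10, 30, 200)
y_src = 0.5 * x_src + rng.normal(0, 0.5, 200)

# Three fuzzy sets ("low", "mid", "high") over the SOURCE universe of discourse
lo, hi = x_src.min(), x_src.max()
centres_src = np.linspace(lo, hi, 3)
width = (hi - lo) / 2

# Rule consequents: weighted mean of source outputs where each set fires
member_src = np.stack([tri(x_src, c - width, c, c + width) for c in centres_src])
consequents = np.array([np.average(y_src, weights=m + 1e-9) for m in member_src])

# Unlabelled target data from a shifted context (e.g. a different room)
x_tgt = rng.uniform(15, 40, 50)

# Crude adaptation step: rescale the universe of discourse to the target range,
# while keeping the rule consequents transferred from the source task
t_lo, t_hi = x_tgt.min(), x_tgt.max()
centres_tgt = np.linspace(t_lo, t_hi, 3)
t_width = (t_hi - t_lo) / 2

def predict(x):
    """Weighted average of transferred rule consequents under the adapted fuzzy sets."""
    w = np.stack([tri(x, c - t_width, c, c + t_width) for c in centres_tgt])
    return (w * consequents[:, None]).sum(axis=0) / (w.sum(axis=0) + 1e-9)

print(predict(np.array([18.0, 25.0, 38.0])))
```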
APA, Harvard, Vancouver, ISO, and other styles
2

Lu, Ying. "Transfer Learning for Image Classification." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC045/document.

Full text
Abstract:
When learning a classification model for a new target domain with only a small amount of training samples, brute-force application of machine learning algorithms generally leads to over-fitted classifiers with poor generalization skills. On the other hand, collecting a sufficient number of manually labeled training samples may prove very expensive. Transfer Learning methods aim to solve this kind of problem by transferring knowledge from a related source domain which has much more data to help classification in the target domain. Depending on different assumptions about the target domain and source domain, transfer learning can be further categorized into three categories: Inductive Transfer Learning (ITL), Transductive Transfer Learning (Domain Adaptation) and Unsupervised Transfer Learning. We focus on the first one, which assumes that the target task and source task are different but related. More specifically, we assume that both the target task and source task are classification tasks, while the target categories and source categories are different but related. We propose two different methods to approach this ITL problem. In the first work we propose a new discriminative transfer learning method, namely DTL, combining a series of hypotheses made by both the model learned with target training samples and the additional models learned with source category samples. Specifically, we use the sparse reconstruction residual as a basic discriminant and enhance its discriminative power by comparing two residuals from a positive and a negative dictionary. On this basis, we make use of similarities and dissimilarities by choosing both positively correlated and negatively correlated source categories to form additional dictionaries. A new Wilcoxon-Mann-Whitney statistic based cost function is proposed to choose the additional dictionaries with unbalanced training data. Also, two parallel boosting processes are applied to both the positive and negative data distributions to further improve classifier performance. On two different image classification databases, the proposed DTL consistently outperforms other state-of-the-art transfer learning methods, while at the same time maintaining very efficient runtime. In the second work we combine the power of Optimal Transport and Deep Neural Networks to tackle the ITL problem. Specifically, we propose a novel method to jointly fine-tune a Deep Neural Network with source data and target data. By adding an Optimal Transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn useful knowledge for target classification from source data. Furthermore, by using different kinds of metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relatedness between target categories and source categories. We carried out experiments with JTLN based on AlexNet on image classification datasets, and the results verify the effectiveness of the proposed JTLN in comparison with standard consecutive fine-tuning. To the best of our knowledge, the proposed JTLN is the first work to tackle ITL with Deep Neural Networks while incorporating prior knowledge on the relatedness between target and source categories. This joint transfer learning with an OT loss is general and can also be applied to other kinds of neural networks.
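As a rough illustration of the OT-loss idea described above (not the JTLN implementation, where the loss is a differentiable term inside the network), the sketch below computes an entropic-regularised optimal transport cost between batch-averaged source and target class predictions, with a cost matrix encoding assumed category relatedness. All numbers are invented.

```python
import numpy as np

def sinkhorn_ot(p, q, C, eps=0.1, n_iter=200):
    """Entropic-regularised OT cost between histograms p and q with ground cost C."""
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):                # Sinkhorn scaling iterations
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = u[:, None] * K * v[None, :]        # transport plan
    return float((T * C).sum())

# Hypothetical batch-averaged soft predictions:
# p over 4 source categories, q over 3 target categories.
p = np.array([0.40, 0.30, 0.20, 0.10])
q = np.array([0.50, 0.30, 0.20])

# Cost matrix encoding prior knowledge of relatedness between categories;
# a smaller cost means the categories are assumed to be more related.
C = np.array([[0.1, 0.9, 0.8],
              [0.9, 0.2, 0.7],
              [0.8, 0.7, 0.3],
              [0.5, 0.6, 0.4]])

print("OT loss:", sinkhorn_ot(p, q, C))
```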
APA, Harvard, Vancouver, ISO, and other styles
3

Alexander, John W. "Transfer in reinforcement learning." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908.

Full text
Abstract:
The problem of developing skill repertoires autonomously in robotics and artificial intelligence is becoming ever more pressing. Currently, the issues of how to apply prior knowledge to new situations and which knowledge to apply have not been sufficiently studied. We present a transfer setting where a reinforcement learning agent faces multiple problem-solving tasks drawn from an unknown generative process, where each task has similar dynamics. The task dynamics are changed by varying the transition function between states. The tasks are presented sequentially, with the latest task presented considered as the target for transfer. We describe two approaches to solving this problem. Firstly, we present an algorithm for transfer of the function encoding the state-action value, defined as value function transfer. This algorithm uses the value function of a source policy to initialise the policy of a target task. We varied the type of basis the algorithm used to approximate the value function. Empirical results in several well-known domains showed that the learners benefited from the transfer in the majority of cases. Results also showed that the radial basis performed better in general than the Fourier basis. However, contrary to expectation, the Fourier basis benefited most from the transfer. Secondly, we present an algorithm for learning an informative prior which encodes beliefs about the underlying dynamics shared across all tasks. We call this agent the Informative Prior agent (IP). The prior is learnt through experience and captures the commonalities in the transition dynamics of the domain, allowing for a quantification of the agent's uncertainty about these. By using a sparse distribution of the uncertainty in the dynamics as a prior, the IP agent can successfully learn a model of 1) the set of feasible transitions rather than the set of possible transitions, and 2) the likelihood of each of the feasible transitions. Analysis focusing on the accuracy of the learned model showed that IP had a very good accuracy bound, which is expressible in terms of only the permissible error and the diffusion, a factor that describes the concentration of the prior mass around the truth and which decreases as the number of tasks experienced grows. The empirical evaluation of IP showed that an agent which uses the informative prior outperforms several existing Bayesian reinforcement learning algorithms on tasks with shared structure, in a domain where multiple related tasks were presented only once to the learners. IP is a step towards the autonomous acquisition of behaviours in artificial intelligence. IP also provides a contribution towards the analysis of exploration and exploitation in the transfer paradigm.
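The value-function-transfer idea above can be illustrated with a toy tabular sketch (the thesis works with function approximation using Fourier and radial bases, so this is a simplification, and the MDP below is randomly generated purely for illustration): a target learner is initialised from the Q-values learned on a related source task and compared with learning from scratch.

```python
import numpy as np

def q_learning(P, R, Q_init, episodes=200, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a small MDP; Q_init carries any transferred knowledge."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = R.shape
    Q = Q_init.copy()
    for _ in range(episodes):
        s = 0
        for _ in range(50):                                   # bounded episode length
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next = rng.choice(n_states, p=P[s, a])          # sample next state
            Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

# Two related tasks: same layout and rewards, slightly perturbed transition function.
n_states, n_actions = 5, 2
rng = np.random.default_rng(1)
P_src = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
P_tgt = 0.9 * P_src + 0.1 * rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.random((n_states, n_actions))

Q_source = q_learning(P_src, R, np.zeros((n_states, n_actions)))   # learn the source task
Q_transfer = q_learning(P_tgt, R, Q_source)                        # target initialised from source
Q_scratch = q_learning(P_tgt, R, np.zeros((n_states, n_actions)))  # tabula rasa baseline
```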
APA, Harvard, Vancouver, ISO, and other styles
4

Kiehl, Janet K. "Learning to Change: Organizational Learning and Knowledge Transfer." online version, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=case1080608710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Johnson, C. Dustin. "Set-Switching and Learning Transfer." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/psych_hontheses/7.

Full text
Abstract:
In this experiment I investigated the relationship between set-switching and transfer learning, both of which presumably invoke executive functioning (EF), which may in turn be correlated with intelligence. Set-switching was measured by a computerized version of the Wisconsin Card Sort Task. Another computer task was written to measure learning-transfer ability. The data indicate little correlation between the ability to transfer learning and the capacity for set-switching; that is, these abilities may draw on independent cognitive mechanisms. The major difference may be the requirement to utilize previous learning in a new way in the learning-transfer task.
APA, Harvard, Vancouver, ISO, and other styles
6

Skolidis, Grigorios. "Transfer learning with Gaussian processes." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6271.

Full text
Abstract:
Transfer Learning is an emerging framework for learning from data that aims at intelligently transferring information between tasks. This is achieved by developing algorithms that can perform multiple tasks simultaneously, as well as translating previously acquired knowledge to novel learning problems. In this thesis, we investigate the application of Gaussian Processes to various forms of transfer learning with a focus on classification problems. The thesis begins with a thorough introduction to the framework of transfer learning, providing a clear taxonomy of the areas of research. Following that, we review the recent advances in multi-task learning for regression with Gaussian processes and compare the performance of some of these methods on a real data set. This review gives insights into the strengths and weaknesses of each method, which acts as a point of reference for applying these methods to other forms of transfer learning. The main contributions of this thesis are reported in the three following chapters. The third chapter investigates the application of multi-task Gaussian processes to classification problems. We extend a previously proposed model to the classification scenario, providing three inference methods due to the non-Gaussian likelihood the classification paradigm imposes. The fourth chapter extends the multi-task scenario to the semi-supervised case. Using labeled and unlabeled data, we construct a novel covariance function that is able to capture the geometry of the distribution of each task. This setup allows unlabeled data to be utilised to infer the level of correlation between the tasks. Moreover, we also discuss the potential use of this model in situations where no labeled data are available for certain tasks. The fifth chapter investigates a novel form of transfer learning called meta-generalising. The question at hand is whether, after training on a sufficient number of tasks, it is possible to make predictions on a novel task. In this situation, the predictor is embedded in an environment of multiple tasks but has no information about the origins of the test task. This elevates the concept of generalising from the level of data to the level of tasks. We employ a model based on a hierarchy of Gaussian processes, in a mixture-of-experts sense, to make predictions based on the relation between the distributions of the novel and the training tasks. Each chapter is accompanied by a thorough experimental part giving insights into the potential and the limits of the proposed methods.
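A common building block for multi-task Gaussian processes is the intrinsic coregionalisation model, in which the full covariance is a Kronecker product of a task covariance matrix and a data kernel. The sketch below shows that construction for a two-task regression problem; the thesis addresses classification, which additionally requires approximate inference for the non-Gaussian likelihood, and the task correlation of 0.7 and all data here are assumed for illustration.

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Squared-exponential kernel between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Two related tasks observed on the same inputs; outputs stacked task-by-task.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y1 = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)                 # task 1
y2 = 0.8 * np.sin(X[:, 0] + 0.2) + 0.1 * rng.normal(size=20)     # task 2, correlated with task 1
y = np.concatenate([y1, y2])

# Intrinsic coregionalisation model: K_full = B (task covariance) kron k(X, X)
B = np.array([[1.0, 0.7],
              [0.7, 1.0]])                 # assumed inter-task correlation
Kx = rbf(X, X)
K = np.kron(B, Kx) + 1e-4 * np.eye(40)     # jitter for numerical stability

# GP posterior mean for task 2 at new inputs, borrowing statistical strength from task 1.
Xs = np.linspace(-3, 3, 5)[:, None]
Ks = np.kron(B[:, [1]], rbf(X, Xs))        # cross-covariance: all training points vs. task-2 test points
mean = Ks.T @ np.linalg.solve(K, y)
print(mean)
```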
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Xiaoyi. "Transfer Learning with Kernel Methods." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0005.

Full text
Abstract:
Transfer Learning aims to take advantage of source data to help the learning task on related but different target data. This thesis contributes to homogeneous transductive transfer learning, where no labeled target data are available. We progressively relax the constraint on the conditional probability of labels required by covariate shift, so that aligning the marginal probabilities of the source and target observations renders the two domains similar. Firstly, a maximum-likelihood-based approach is proposed. Secondly, SVM is adapted to transfer learning with an extra MMD-like constraint, where the Maximum Mean Discrepancy (MMD) measures this similarity. Thirdly, KPCA is used to align the data in an RKHS by minimizing the MMD. We further develop the KPCA-based approach so that a linear transformation in the input space is enough for a good and robust alignment in the RKHS. Experimentally, our proposed approaches are very promising.
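Since all three approaches above rely on the Maximum Mean Discrepancy as the similarity measure, here is a minimal sketch of the MMD term itself (the thesis embeds it in an SVM constraint and a KPCA objective, which is not reproduced here): the squared MMD between source and target samples under an RBF kernel, before and after a naive mean/scale alignment. The data are synthetic.

```python
import numpy as np

def mmd2_rbf(Xs, Xt, gamma=1.0):
    """Squared Maximum Mean Discrepancy between two samples with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    m, n = len(Xs), len(Xt)
    return k(Xs, Xs).sum() / m**2 + k(Xt, Xt).sum() / n**2 - 2 * k(Xs, Xt).sum() / (m * n)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 2))        # source observations
Xt = rng.normal(0.5, 1.2, size=(100, 2))        # shifted target observations

print("MMD^2 before alignment:", mmd2_rbf(Xs, Xt))
print("MMD^2 after mean/scale alignment:",
      mmd2_rbf((Xs - Xs.mean(0)) / Xs.std(0), (Xt - Xt.mean(0)) / Xt.std(0)))
```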
APA, Harvard, Vancouver, ISO, and other styles
8

Al Chalati, Abdul Aziz, and Syed Asad Naveed. "Transfer Learning for Machine Diagnostics." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-43185.

Full text
Abstract:
Fault detection and diagnostics are crucial tasks in condition-based maintenance. Industries nowadays need to identify faults in their machines as early as possible in order to save money and take precautionary measures in case of fault occurrence. It is also beneficial for the smooth running of the manufacturing process, as it avoids sudden malfunctioning. Having sufficient training data for industrial machines is also a major challenge, and such data is a prerequisite for deep neural networks to train an accurate prediction model. Transfer learning in such cases is beneficial, as it can help in adapting to different operating conditions and characteristics, which is the case in real-life applications. Our work is focused on a pneumatic system, which utilizes compressed air to perform operations and is used in different types of machines in the industrial field. Our novel contribution is to build upon a Domain Adversarial Neural Network (DANN) with a unique approach, incorporating ensembling techniques for diagnostics of the air leakage problem in the pneumatic system under transfer learning settings. Our approach of using ensemble methods for feature extraction shows up to 5% improvement in performance. We have also performed a comparative analysis of our work with conventional machine learning and deep learning methods, which highlights the importance of transfer learning, and we have demonstrated the generalization ability of our model. Lastly, we make a problem-specific contribution by suggesting a feature engineering approach that could be implemented on almost every pneumatic system and could potentially impact the prediction result positively. We demonstrate that our designed model, with its domain adaptation ability, will be useful and beneficial for the industry by saving time and money and providing promising results for this air leakage problem in the pneumatic system.
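The core mechanism of a DANN is the gradient reversal layer, which pushes the feature extractor to produce features that are uninformative about the domain while remaining informative about the fault label. Below is a minimal PyTorch sketch of that layer and how it might sit between a feature extractor and a domain classifier; the layer sizes, the two-domain setup and the ensembling used in the thesis are simplified or omitted, and every dimension here is an assumption.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass, reversed (and scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # e.g. features from sensor windows
fault_classifier = nn.Linear(32, 2)                              # leakage / no leakage
domain_classifier = nn.Linear(32, 2)                             # source rig vs. target rig

x = torch.randn(8, 16)                                           # placeholder batch
feats = feature_extractor(x)
fault_logits = fault_classifier(feats)                           # trained on labelled source data
domain_logits = domain_classifier(GradReverse.apply(feats, 1.0)) # adversarial domain head
```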
APA, Harvard, Vancouver, ISO, and other styles
9

Arnekvist, Isac. "Transfer Learning using low-dimensional Representations in Reinforcement Learning." Licentiate thesis, KTH, Robotik, perception och lärande, RPL, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279120.

Full text
Abstract:
Behaviors in Reinforcement Learning (RL) are often learned tabula rasa, requiring many observations of and interactions with the environment. Performing this outside of a simulator, in the real world, often becomes infeasible due to the large number of interactions needed. This has motivated the use of Transfer Learning for Reinforcement Learning, where learning is accelerated by using experiences from previous learning in related tasks. In this thesis, I explore how we can transfer from a simple single-object pushing policy to a wide array of non-prehensile rearrangement problems. I then explain how we can model task differences using a low-dimensional latent variable representation to make adaptation to novel tasks efficient. Lastly, the dependence on accurate function approximation is sometimes problematic, especially in RL, where statistics of target variables are not known a priori. I present observations, along with explanations, that small target variances together with momentum optimization of ReLU-activated neural network parameters lead to dying ReLUs.
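The dying-ReLU observation above can be probed with a short script like the one below (a sketch, not the experiments from the thesis): a small ReLU network is trained with momentum SGD on targets with very small variance, and the fraction of hidden units that never activate on the data is then measured. The architecture, learning rate and data are invented for illustration, and the outcome will vary with the random seed.

```python
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 10)
y = 1e-3 * torch.randn(256, 1)           # targets with very small variance

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    pre_act = model[0](X)                                # pre-activations of the hidden layer
    dead = (pre_act <= 0).all(dim=0).float().mean()      # units that never fire on this data
print(f"fraction of dead hidden units: {dead.item():.2f}")
```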

APA, Harvard, Vancouver, ISO, and other styles
10

Mare, Angelique. "Motivators of learning and learning transfer in the workplace." Diss., University of Pretoria, 2015. http://hdl.handle.net/2263/52441.

Full text
Abstract:
Motivating employees to learn and transfer their learning to their jobs is an important activity to ensure that employees - and the organisation - continuously adapt, evolve and survive in this highly turbulent environment. The literature shows that both intrinsic and extrinsic motivators influence learning and learning transfer, and the extent of influence could be different for different people. This research sets out to explore and identify the intrinsic and extrinsic motivational factors that drive learning and learning transfer. A qualitative study in the form of focus groups was conducted. Three focus groups were conducted in which a total of 25 middle managers from two different multinational companies participated. Content and frequency analysis were used to identify the key themes from the focus group discussion. The outcome of the study resulted in the identification of the key intrinsic and extrinsic motivation factors that drive learning and learning transfer. The findings have been used to develop a Motivation-to-learn-and-transfer catalyst framework indicating that individual intrinsic motivators are at the core of driving motivation to learn and transfer learning. It also indicates which training design and work environment factors to focus on in support of intrinsic motivation to learn and transfer learning in the workplace for middle managers. It is hoped that the outcome of this research will contribute to catalysing learning and learning transfer for middle managers to achieve higher organisational effectiveness.
Mini Dissertation (MBA), Gordon Institute of Business Science (GIBS), University of Pretoria, 2015.
APA, Harvard, Vancouver, ISO, and other styles
11

Redko, Ievgen. "Nonnegative matrix factorization for transfer learning." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD059.

Full text
Abstract:
The ability of a human being to extrapolate previously gained knowledge to other domains inspired a new family of methods in machine learning called transfer learning. Transfer learning is often based on the assumption that objects in both target and source domains share some common features and/or data space. If this assumption is false, most transfer learning algorithms are likely to fail. In this thesis we propose to investigate the problem of transfer learning from both theoretical and applicational points of view. First, we present two different methods to solve the problem of unsupervised transfer learning based on non-negative matrix factorization techniques. The first one proceeds using an iterative optimization procedure that aims at aligning the kernel matrices calculated based on the data from the two tasks. The second one represents a linear approach that aims at discovering an embedding for the two tasks that decreases the distance between the corresponding probability distributions while preserving the non-negativity property. We also introduce a theoretical framework based on Hilbert-Schmidt embeddings that allows us to improve the current state-of-the-art theoretical results on transfer learning by introducing a natural and intuitive distance measure with strong computational guarantees for its estimation. The proposed results combine the tightness of data-dependent bounds derived from Rademacher learning theory while ensuring the efficient estimation of its key factors. Both the theoretical contributions and the proposed methods were evaluated on a benchmark computer vision data set with promising results. Finally, we believe that the research direction chosen in this thesis may have fruitful implications in the near future.
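For readers unfamiliar with the underlying machinery, the sketch below shows plain non-negative matrix factorization with Lee-Seung multiplicative updates, plus a naive form of transfer in which the feature dictionary learned on the source task is held fixed while encoding the target task. This only illustrates the building block; the thesis's actual methods (kernel-matrix alignment and distribution-distance minimization) are not reproduced, and all matrices here are random placeholders.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Basic NMF with Lee-Seung multiplicative updates: V is approximated by W @ H, all non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Source and target data matrices (samples x features), non-negative by construction.
rng = np.random.default_rng(1)
V_src = rng.random((100, 20))
V_tgt = rng.random((40, 20))

# Factorise the source task, then reuse its dictionary H as a fixed shared basis for the target,
# updating only the target encodings W so both tasks live in one latent representation.
W_src, H_shared = nmf(V_src, rank=5)
W_tgt = rng.random((40, 5)) + 1e-3
for _ in range(500):
    W_tgt *= (V_tgt @ H_shared.T) / (W_tgt @ H_shared @ H_shared.T + 1e-9)

print("target reconstruction error:", np.linalg.norm(V_tgt - W_tgt @ H_shared))
```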
APA, Harvard, Vancouver, ISO, and other styles
12

Frenger, Tobias, and Johan Häggmark. "Transfer learning between domains : Evaluating the usefulness of transfer learning between object classification and audio classification." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18669.

Full text
Abstract:
Convolutional neural networks have been successfully applied to both object classification and audio classification. The aim of this thesis is to evaluate how well transfer learning of convolutional neural networks trained in the object classification domain on large datasets (such as CIFAR-10 and ImageNet) can be applied to the audio classification domain when only a small dataset is available. In this work, four different convolutional neural networks are tested with three configurations of transfer learning against a configuration without transfer learning. This allows for testing how transfer learning and the architectural complexity of the networks affect performance. Two models developed by Google (Inception-V3 and Inception-ResNet-V2) are used. These models are implemented using the Keras API, where they are pre-trained on the ImageNet dataset. This thesis also introduces two new architectures developed by its authors, Mini-Inception and Mini-Inception-ResNet, which are inspired by Inception-V3 and Inception-ResNet-V2 but have significantly lower complexity. The audio classification dataset consists of audio from RC boats, which is transformed into mel-spectrogram images. For transfer learning to be possible, Mini-Inception and Mini-Inception-ResNet are pre-trained on the CIFAR-10 dataset. The results show that transfer learning is not able to increase the performance. However, transfer learning does in some cases enable models to obtain higher performance in the earlier stages of training.
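One transfer configuration of the kind described above, an ImageNet-pre-trained backbone with a new classification head for mel-spectrogram images, might look roughly like the following Keras sketch. The number of classes, input size, head layers and training call are assumptions rather than the thesis's exact setup, and the spectrogram tensors are left as placeholders.

```python
import tensorflow as tf

# Pre-trained object-classification backbone, reused as a fixed feature extractor
# for mel-spectrogram "images" (3-channel, resized to the backbone's input size).
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                      # transfer learning: freeze the convolutional layers

num_classes = 4                             # hypothetical number of RC-boat classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_spectrograms, train_labels, validation_data=(val_spectrograms, val_labels), epochs=10)
```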
APA, Harvard, Vancouver, ISO, and other styles
13

Mallia, Gorg. "Transfer of learning from literature lessons." Thesis, University of Sheffield, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274972.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Quattoni, Ariadna J. "Transfer learning algorithms for image classification." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53294.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (p. 124-128).
An ideal image classifier should be able to exploit complex high-dimensional feature representations even when only a few labeled examples are available for training. To achieve this goal we develop transfer learning algorithms that 1) leverage unlabeled data annotated with meta-data and 2) exploit labeled data from related categories. In the first part of this thesis we show how to use the structure learning framework (Ando and Zhang, 2005) to learn efficient image representations from unlabeled images annotated with meta-data. In the second part we present a joint sparsity transfer algorithm for image classification. Our algorithm is based on the observation that related categories might be learnable using only a small subset of shared relevant features. To find these features we propose to train classifiers jointly with a shared regularization penalty that minimizes the total number of features involved in the approximation. To solve the joint sparse approximation problem we develop an optimization algorithm whose time and memory complexity is O(n log n), with n being the number of parameters of the joint model. We conduct experiments on news-topic and keyword-prediction image classification tasks. We test our method in two settings, a transfer learning setting and a multitask learning setting, and show that in both cases leveraging knowledge from related categories can improve performance when training data per category is scarce. Furthermore, our results demonstrate that our model can successfully recover jointly sparse solutions.
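The joint-sparsity idea, selecting a small set of features shared across related categories, is typically induced by a group (L2,1-style) penalty. The sketch below shows only the standard row-wise shrinkage step associated with such a penalty, not the thesis's O(n log n) optimization algorithm; the matrix sizes and threshold are invented.

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * sum of row norms of W: shrinks whole feature rows toward zero,
    so the tasks (columns) end up sharing a small set of relevant features (non-zero rows)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1 - t / (norms + 1e-12), 0)
    return W * scale

# W: one column of coefficients per task/category, one row per feature.
rng = np.random.default_rng(0)
W = rng.normal(size=(50, 5))                      # 50 features, 5 related categories
W_sparse = prox_l21(W, t=1.5)

shared_features = np.flatnonzero(np.linalg.norm(W_sparse, axis=1))
print(f"{len(shared_features)} of 50 features kept jointly across the 5 tasks")
```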
APA, Harvard, Vancouver, ISO, and other styles
15

Aytar, Yusuf. "Transfer learning for object category detection." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:c9e18ff9-df43-4f67-b8ac-28c3fdfa584b.

Full text
Abstract:
Object category detection, the task of determining if one or more instances of a category are present in an image with their corresponding locations, is one of the fundamental problems of computer vision. The task is very challenging because of the large variations in imaged object appearance, particularly due to the changes in viewpoint, illumination and intra-class variance. Although successful solutions exist for learning object category detectors, they require massive amounts of training data. Transfer learning builds upon previously acquired knowledge and thus reduces training requirements. The objective of this work is to develop and apply novel transfer learning techniques specific to the object category detection problem. This thesis proposes methods which not only address the challenges of performing transfer learning for object category detection such as finding relevant sources for transfer, handling aspect ratio mismatches and considering the geometric relations between the features; but also enable large scale object category detection by quickly learning from considerably fewer training samples and immediate evaluation of models on web scale data with the help of part-based indexing. Several novel transfer models are introduced such as: (a) rigid transfer for transferring knowledge between similar classes, (b) deformable transfer which tolerates small structural changes by deforming the source detector while performing the transfer, and (c) part level transfer particularly for the cases where full template transfer is not possible due to aspect ratio mismatches or not having adequately similar sources. Building upon the idea of using part-level transfer, instead of performing an exhaustive sliding window search, part-based indexing is proposed for efficient evaluation of templates enabling us to obtain immediate detection results in large scale image collections. Furthermore, easier and more robust optimization methods are developed with the help of feature maps defined between proposed transfer learning formulations and the “classical” SVM formulation.
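As a loose illustration of the rigid-transfer idea above, the sketch below trains a linear SVM by subgradient descent while regularising the weights toward a source detector instead of toward zero. It is not the thesis's formulation (which also covers deformable and part-level transfer and feature-map-based optimization), and the source detector and target samples are synthetic.

```python
import numpy as np

def train_svm_transfer(X, y, w_src, lam=1.0, lr=0.01, epochs=200):
    """Linear SVM via subgradient descent, regularised toward a source detector w_src
    (the penalty ||w - w_src||^2 replaces the usual ||w||^2 term)."""
    w = w_src.copy()
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w)
        mask = margins < 1                                        # samples violating the margin
        grad = lam * (w - w_src) - (y[mask, None] * X[mask]).sum(axis=0) / n
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_src = rng.normal(size=20)                 # hypothetical source detector from a related category
X_tgt = rng.normal(size=(30, 20))           # a handful of labelled target samples
y_tgt = np.sign(X_tgt @ w_src + 0.5 * rng.normal(size=30))

w_tgt = train_svm_transfer(X_tgt, y_tgt, w_src)
print("accuracy on the target samples:", float((np.sign(X_tgt @ w_tgt) == y_tgt).mean()))
```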
APA, Harvard, Vancouver, ISO, and other styles
16

Farajidavar, Nazli. "Transductive transfer learning for computer vision." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/807998/.

Full text
Abstract:
Artificial intelligence and machine learning technologies have already achieved significant success in classification, regression and clustering. However, many machine learning methods work well only under a common assumption: that training and test data are drawn from the same feature space and the same distribution. A real-world example is sports footage, where an intelligent system has been designed and trained to detect score-changing events in a tennis singles match and we are interested in transferring this learning either to tennis doubles or to an even more challenging domain such as badminton. Under such distribution changes, most statistical models need to be rebuilt using newly collected training data. In many real-world applications, it is expensive or even impossible to collect the required training data and rebuild the models. One of the ultimate goals of open-ended learning systems is to take advantage of previous experience and knowledge in dealing with similar future problems. Two levels of learning can be identified in such scenarios. One draws on the data by capturing the patterns and regularities, which enables reliable predictions on new samples. The other starts from an acquired source of knowledge and focuses on how to generalise it to a new target concept; this is also known as transfer learning, which is the main focus of this thesis. This work is devoted to the second level of learning, focusing on how to transfer information from previous learning and exploit it on a new learning problem with no supervisory information available for the new target data. We propose several solutions to such tasks by leveraging prior models or features. In the first part of the thesis we show how to estimate reliable transformations from the source domain to the target domain with the aim of reducing the dissimilarities between the source class-conditional distribution and a new unlabelled target distribution. We then present a fully automated transfer learning framework which approaches the problem by combining four types of adaptation: a projection to a lower-dimensional space that is shared between the two domains, a set of local transformations to further increase the domain similarity, a classifier parameter adaptation method which modifies the learner for the new domain, and a set of class-conditional transformations aiming to increase the similarity between the posterior probabilities of samples in the source and target sets. We conduct experiments on a wide range of image and video classification tasks. We test our proposed methods and show that, in all cases, leveraging knowledge from a related domain can improve performance when there are no labels available for direct training on the new target data.
APA, Harvard, Vancouver, ISO, and other styles
17

Jamil, Ahsan Adnan, and Daniel Landberg. "Detecting COVID-19 Using Transfer Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280352.

Full text
Abstract:
COVID-19 is currently an ongoing pandemic, and the large demand for testing for the disease has led to insufficient resources in hospitals. In order to increase the efficiency of COVID-19 detection, computer vision based systems can be used. However, a large set of training data is required for creating an accurate and reliable model, which is currently not feasible to acquire considering the novelty of the disease. Other models are already used within the healthcare sector for classifying various diseases; one such model identifies pneumonia cases from radiographs and has achieved high enough accuracy to be used on patients [18]. Against this background of limited data for COVID-19 identification, this thesis evaluates the benefit of using transfer learning to improve the performance of a COVID-19 detection model. By using a pneumonia dataset as a base for feature extraction, the goal is to generate a COVID-19 classifier through transfer learning. Using transfer learning, an accuracy of 97% was achieved, compared to the initial accuracy of 32% when transfer learning was not used.
APA, Harvard, Vancouver, ISO, and other styles
18

Mendoza-Schrock, Olga L. "Diffusion Maps and Transfer Subspace Learning." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1503964976467066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Kumar, Sharad. "Localizing Little Landmarks with Transfer Learning." PDXScholar, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/4827.

Full text
Abstract:
Locating a small object in an image -- like a mouse on a computer desk or the door handle of a car -- is an important computer vision problem to solve, because in many real-life situations a small object may be the first thing that gets operated upon in the image scene. While a significant amount of artificial intelligence and machine learning research has focused on localizing prominent objects in an image, the area of small object detection has remained less explored. In my research I explore the possibility of using context information to localize small objects in an image. Using a Convolutional Neural Network (CNN), I create a regression model to detect a small object in an image, where model training is supervised by the coordinates of the small object in the image. Since small objects do not have strong visual characteristics in an image, it is difficult for a neural network to discern their pattern: their feature map exhibits low resolution, rendering a much weaker signal for the network to recognize. The use of context for object detection and localization has been studied for a long time. This idea is explored by Singh et al. for small object localization, using a multi-step regression process in which spatial context is used effectively to localize small objects in several datasets. I extend the idea in this research and demonstrate that the technique of localizing in steps using contextual information, when used with transfer learning, can significantly reduce model training time.
APA, Harvard, Vancouver, ISO, and other styles
20

Daniel, Filippo <1995>. "Transfer learning with generative adversarial networks." Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/16989.

Full text
Abstract:
Generative Adversarial Networks (GANs) have emerged in recent years as the undisputed state of the art for image synthesis. This model leverages the recent successes of convolutional networks in the field of computer vision to learn the probability distribution of image datasets. Following the first proposal of GANs, many developments and uses of the model have been proposed. This thesis aims to review the evolution of the model and to use one of the most recent variations to generate realistic portrait images with a targeted set of features. The model is applied in a transfer learning setting, and its advantages and disadvantages compared with standard approaches are discussed. Furthermore, classical and deep computer vision tools are used to edit and confirm the results obtained from the GAN model.
APA, Harvard, Vancouver, ISO, and other styles
21

Choi, Jin-Woo. "Action Recognition with Knowledge Transfer." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/101780.

Full text
Abstract:
Recent progress on deep neural networks has shown remarkable action recognition performance from videos. The remarkable performance is often achieved by transfer learning: training a model on a large-scale labeled dataset (source) and then fine-tuning the model on small-scale labeled datasets (targets). However, existing action recognition models do not always generalize well on new tasks or datasets because of the following two reasons. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor generalization performance. ii) Directly testing the model trained on the source data on the target data leads to poor performance, as the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. For the first problem, I propose to learn scene-invariant action representations to mitigate the scene bias in action recognition models. Specifically, I augment the standard cross-entropy loss for action classification with 1) an adversarial loss for the scene types and 2) a human mask confusion loss for videos where the human actors are invisible. These two losses encourage learning representations unsuitable for predicting 1) the correct scene types and 2) the correct action types when there is no evidence. I validate the efficacy of the proposed method through transfer learning experiments. I transfer the pre-trained model to three different tasks, including action classification, temporal action localization, and spatio-temporal action detection. The results show consistent improvement over the baselines for every task and dataset. I formulate human action recognition as an unsupervised domain adaptation (UDA) problem to handle the second problem. In the UDA setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on the action, not the background scene, to learn domain-invariant action representations. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I then relax the unsupervised target data setting to a sparsely labeled target data setting and explore semi-supervised video action recognition, where we have a lot of labeled videos as source data and sparsely labeled videos as target data. The semi-supervised setting is practical, as sometimes we can afford a small cost for labeling target data. I propose multiple video data augmentation methods to inject photometric, geometric, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks.
Doctor of Philosophy
Recent progress on deep learning has shown remarkable action recognition performance. The remarkable performance is often achieved by transferring the knowledge learned from existing large-scale data to the small-scale data specific to applications. However, existing action recognition models do not always work well on new tasks and datasets because of the following two problems. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor performance on new datasets and tasks. ii) Directly testing the model trained on the source data on the target data leads to poor performance, as the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. To tackle the first problem, I propose to learn scene-invariant action representations to mitigate background-scene-biased human action recognition models. Specifically, the proposed method learns representations that cannot predict the scene types or the correct actions when there is no evidence. I validate the proposed method's effectiveness by transferring the pre-trained model to multiple action understanding tasks. The results show consistent improvement over the baselines for every task and dataset. To handle the second problem, I formulate human action recognition as an unsupervised learning problem on the target data. In this setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on the action, not the background scene. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I then relax the unsupervised target data setting to a sparsely labeled target data setting. Here, we have many labeled videos as source data and sparsely labeled videos as target data. The setting is practical, as sometimes we can afford a small cost for labeling target data. I propose multiple video data augmentation methods to inject color, spatial, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
22

Lieu, Jenny. "Influences of policy learning, transfer, and post transfer learning in the development of China's wind power policies." Thesis, University of Sussex, 2013. http://sro.sussex.ac.uk/id/eprint/46453/.

Full text
Abstract:
China's renewable energy (RE) sector is developing rapidly, driven by growing energy needs, increased awareness of climate change, and heightened concerns about environmental degradation caused by the country's industrialisation process over the past decades. The Chinese government has been dedicated to the development of its RE industry and has engaged extensively in drawing lessons from abroad and applying these lessons to its own experiences in the post-transfer learning process to develop policies that have contributed to the development of the largest wind power sector in the world. This thesis provides a perspective on how China, a 'socialist market economy', has applied primarily market mechanisms from liberalised market systems found in Western Europe and the United States to develop its domestic wind power sector. Having similar economic, political and cultural value systems is not necessarily a prerequisite for policy learning; rather, policy objective compatibility is a more important criterion when drawing and transferring lessons. The objective of this thesis is to analyse how policy learning from abroad, policy transfer and the post-transfer process have influenced the development of wind power policies in China, through the application of an analytical framework developed specifically for this thesis and based largely on policy learning and policy transfer concepts as well as the general learning literature. Using China's wind power policies as a case study, this thesis identifies elements of policy learning from abroad and examines how transferred policies have been applied in first-level policies, that is, top-level coordinating policies (e.g. mid- to long-term strategies and frameworks), as well as in second-level policies with specific objectives focusing on diffusion and adoption (e.g. renewable energy policy instruments). Overall, studying policy learning from abroad, policy transfer and the post-transfer process contributes to understanding how learning across political borders contributes to the domestic policy formation and implementation process.
APA, Harvard, Vancouver, ISO, and other styles
23

Pettersson, Harald. "Sentiment analysis and transfer learning using recurrent neural networks : an investigation of the power of transfer learning." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161348.

Full text
Abstract:
In the field of data mining, transfer learning is the method of transferring knowledge from one domain into another. Using reviews from prisjakt.se, a Swedish price comparison site, and hotels.com, this work investigates how the similarities between domains affect the results of transfer learning when using recurrent neural networks. We test several different domains with different characteristics, e.g. size and lexical similarity. In this work only relatively similar domains were used, the same target function was sought, and all reviews were in Swedish. Regardless, the results are conclusive: transfer learning is often beneficial, but it is highly dependent on the features of the domains and how they compare with each other.
APA, Harvard, Vancouver, ISO, and other styles
24

Andersen, Linda, and Philip Andersson. "Deep Learning Approach for Diabetic Retinopathy Grading with Transfer Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279981.

Full text
Abstract:
Diabetic retinopathy (DR) is a complication of diabetes that affects the eyes and is one of the leading causes of blindness in the Western world. As the number of people with diabetes grows globally, so does the number of people affected by diabetic retinopathy. This demands better and more effective resources for discovering the disease at an early stage, which is key to preventing progression into more serious stages that could ultimately lead to blindness, and for streamlining further treatment of the disease. However, traditional manual screening is not enough to meet this demand, which is where computer-aided diagnosis comes in. The purpose of this report is to investigate how a convolutional neural network together with transfer learning performs when trained for multiclass grading of diabetic retinopathy. To do this, a pre-built and pre-trained convolutional neural network from Keras was used and further trained and fine-tuned in TensorFlow on a 5-class DR grading dataset. Twenty training sessions were performed, and accuracy, recall and specificity were evaluated in each session. The results show testing accuracies in the range of 35% to 48.5%. The average testing recall for classes 0, 1, 2, 3 and 4 was 59.7%, 0.0%, 51.0%, 38.7% and 0.8%, respectively. Furthermore, the average testing specificity for classes 0, 1, 2, 3 and 4 was 77.8%, 100.0%, 62.4%, 80.2% and 99.7%, respectively. The average recall of 0.0% and average specificity of 100.0% for class 1 (mild DR) were obtained because the CNN model never predicted this class.
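A minimal sketch of this kind of Keras/TensorFlow transfer-learning setup, assuming an ImageNet-pre-trained backbone and a hypothetical 5-class fundus-image directory; the backbone choice, directory name, image size and training schedule are illustrative assumptions, not the thesis's actual configuration:

```python
import tensorflow as tf

# ImageNet-pre-trained backbone without its classification head.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # first stage: train only the new classification head

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Lambda(tf.keras.applications.resnet50.preprocess_input),
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 DR grades (0-4)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical dataset layout: one sub-folder per DR grade.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_images/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)

# Fine-tuning stage: unfreeze the backbone and continue with a low learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```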
APA, Harvard, Vancouver, ISO, and other styles
25

Shermin, Tasfia. "Enhancing deep transfer learning for image classification." Thesis, Federation University Australia, 2021. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/179551.

Full text
Abstract:
Although deep learning models require a large amount of labelled training data to yield high performance, they are applied to many computer vision tasks such as image classification. Current models also do not perform well across different domain settings, such as changes in illumination, camera angle and real-to-synthetic shifts, and they are therefore more likely to misclassify unknown classes as known classes. These issues challenge the supervised learning paradigm and motivate the study of transfer learning approaches. Transfer learning allows us to utilise knowledge acquired from related domains to improve performance on a target domain. Existing transfer learning approaches lack proper analysis of high-level source domain features and are prone to negative transfer because they do not exploit proper discriminative information across domains. Current approaches also fall short in discovering the necessary visual-semantic linkage and are biased towards the source domain. In this thesis, to address these issues and improve image classification performance, we make several contributions to three different deep transfer learning scenarios, in which the target domain has i) labelled data; ii) no labelled data; and iii) no visual data. Firstly, to improve inductive transfer learning in the first scenario, we analyse the importance of high-level deep features, propose utilising them in sequential transfer learning approaches, and investigate the conditions for optimal performance. Secondly, to improve image classification across different domains in an open-set setting by reducing negative transfer (second scenario), we propose two novel architectures: the first has an adaptive weighting module based on underlying domain-distinctive information, and the second has an information-theoretic weighting module to reduce negative transfer. Thirdly, to learn visual classifiers when no visual data is available (third scenario) and to reduce source domain bias, we propose two novel models. One has a new two-step dense attention mechanism to discover semantic attribute-guided local visual features, together with a mutual learning loss. The other utilises bidirectional mapping and adversarial supervision to learn the joint distribution of the source and target domains simultaneously. We propose a new pointwise-mutual-information-dependent loss in the first model and a distance-based loss in the second for handling source domain bias. We perform extensive evaluations on benchmark datasets and demonstrate that the proposed models outperform contemporary works.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
26

Toll, Debora K. "The transfer of learning: Employees' lived experiences." Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29178.

Full text
Abstract:
The employees' ability to continuously and collectively learn, and to apply their learning, is critical to their own and their organization's performance. This study therefore sought to understand employees' perceptions of and experiences with the application, or transfer, of their learning. It also sought to understand the interplay between the three primary transfer sources. The overarching research question that guided this study was: what were employees' lived experiences with transfer? The subquestions were: how do employees transfer their learning, when did transfer enter their learning experiences, and why did they believe that transfer occurred? A hermeneutic phenomenological research design was employed. The participants' lived experiences were examined, described and interpreted. By allowing the participants' voices to resonate throughout the text, the depth, richness and meaning of their experiences were captured. Seven federal government employees, at the administrative, professional and managerial levels, comprised the purposeful sample. The participants engaged in a formal audiotaped interview, an informal interview and a focus group session. Eight main themes emerged from the data analysis. Two themes, related to the individuals' characteristics, were the desire to learn and how transfer occurred. Four themes, related to the training program's design and development features, were discourse, application of the learning to life's situations, learning by doing, and when transfer entered the learners' learning experience. The last two themes, related to the organizational climate characteristics, were an open and supportive culture, and the major challenges to transfer. The transfer research, comprising the individuals' characteristics, training program features and organizational climate characteristics, provided one lens through which the findings were interpreted. Three adult learning theories, self-directed, situated cognition and transformational learning, provided the second lens. The transfer and adult learning literatures were quite complementary; the learning theories, however, brought a broader and more comprehensive understanding to many of the participants' transfer experiences. The theories, by illuminating the interplay between the primary transfer sources, integrated the quantitative transfer research findings into a more coherent body of knowledge. This research also contributed to a fuller understanding of the learning theories and of the difficulties in measuring transfer. Adult education principles and practices appear to be well positioned to enhance employees' transfer efforts, as transfer does indeed appear to be a key concept in adult learning. This study advances our understanding of transfer from the perspective of employees' "lived" experiences, and of the complexities of transfer. The findings are relevant to adult education practices, and to organizations and employees in better understanding and facilitating transfer.
APA, Harvard, Vancouver, ISO, and other styles
27

Masko, David. "Calibration in Eye Tracking Using Transfer Learning." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210815.

Full text
Abstract:
This thesis empirically studies transfer learning as a calibration framework for Convolutional Neural Network (CNN) based appearance-based gaze estimation models. A dataset of approximately 1,900,000 eyestripe images distributed over 1682 subjects is used to train and evaluate several gaze estimation models. Each model is initially trained on the training data, resulting in generic gaze models. The models are subsequently calibrated for each test subject, using the subject's calibration data, by applying transfer learning through fine-tuning of the final layers of the network. Transfer learning is observed to reduce the Euclidean distance error of the generic models by 12-21%, which is in line with the current state-of-the-art. The best performing calibrated model shows a mean error of 29.53mm and a median error of 22.77mm. However, calibrating heatmap output-based gaze estimation models decreases performance relative to the generic models. It is concluded that transfer learning is a viable calibration framework for improving the performance of CNN-based appearance-based gaze estimation models.
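A minimal sketch of per-subject calibration by fine-tuning only the final layers of an already-trained gaze CNN; the tiny stand-in network, layer names, calibration-set size and learning rate are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np
import tensorflow as tf

# Stand-in for a generic gaze model that would normally be trained on all subjects.
generic = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(36, 60, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu", name="penultimate"),
    tf.keras.layers.Dense(2, name="gaze_xy"),   # 2D gaze point regression
])
generic.compile(optimizer="adam", loss="mse")

# Per-subject calibration: freeze everything except the final layers and
# fine-tune briefly on the subject's few calibration samples.
for layer in generic.layers:
    layer.trainable = layer.name in ("penultimate", "gaze_xy")

calib_images = np.random.rand(50, 36, 60, 1).astype("float32")  # stand-in data
calib_gaze = np.random.rand(50, 2).astype("float32")

generic.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
generic.fit(calib_images, calib_gaze, epochs=10, batch_size=16, verbose=0)
```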
APA, Harvard, Vancouver, ISO, and other styles
28

Maehle, Valerie A. "Conceptual models in the transfer of learning." Thesis, University of Aberdeen, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261454.

Full text
Abstract:
In order to attain clinical competence, student physiotherapists apply knowledge from a range of cognitive domains in the assessment and treatment of patients with a variety of conditions. Current research indicates that the ability to transfer knowledge to a wide variety of conditions requires a cognitive structure in which concepts are embedded in a rich network of interconnections (Faletti, 1990; Spiro, 1987). A concept mapping technique was selected as a means of eliciting a representation of the knowledge the student possessed and would access in order to underpin the assessment and treatment of a specific peripheral joint condition. Twenty second- and third-year physiotherapy students on clinical placement in an Out-Patient Department each produced a concept map prior to assessing the patient. A modification of the 'Student Teacher Dialogue' (Hammond et al., 1989) was the methodology selected for identifying the transfer of learning. Analysis of the transcription of this interaction provided evidence of the domain-specific and procedural knowledge transferred to the patient assessment. Weak correlations were found between the degree of complexity of the concept map the student produced and the amount and level of transfer achieved in the clinical setting. There was also evidence to suggest that abstract subject areas, or those which involved practical or clinical applications, facilitated the development of more concentrated conceptual networks. However, contrary to expectation, third-year students failed to produce higher-quality maps than second-year students, despite having greater academic and clinical experience.
APA, Harvard, Vancouver, ISO, and other styles
29

Kodirov, Elyor. "Cross-class transfer learning for visual data." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/31852.

Full text
Abstract:
Automatic analysis of visual data is a key objective of computer vision research, and performing visual recognition of objects from images is one of the most important steps towards understanding and gaining insights into visual data. Most existing approaches to visual recognition in the literature are based on a supervised learning paradigm. Unfortunately, they require a large amount of labelled training data, which severely limits their scalability. On the other hand, recognition is instantaneous and effortless for humans: they can recognise a new object without seeing any visual samples, just by knowing its description and leveraging similarities between the description of the new object and previously learned concepts. Motivated by humans' recognition ability, this thesis proposes novel approaches to tackle the cross-class transfer learning (cross-class recognition) problem, whose goal is to learn a model from seen classes (those with labelled training samples) that can generalise to unseen classes (those with labelled testing samples) without any training data, i.e., seen and unseen classes are disjoint. Specifically, the thesis studies and develops new methods for addressing three variants of cross-class transfer learning. Chapter 3: The first variant is transductive cross-class transfer learning, meaning a labelled training set and an unlabelled test set are available for model learning. Considering the training set as the source domain and the test set as the target domain, a typical cross-class transfer learning approach assumes that the source and target domains share a common semantic space, into which a visual feature vector extracted from an image can be embedded using an embedding function. Existing approaches learn this function from the source domain and apply it without adaptation to the target domain. They are therefore prone to the domain shift problem: the embedding function is only concerned with predicting the seen-class semantic representations during learning, so when applied to the test data it may underperform. In this thesis, a novel cross-class transfer learning (CCTL) method is proposed based on unsupervised domain adaptation. Specifically, a novel regularised dictionary learning framework is formulated in which the target class labels are used to regularise the learned target domain embeddings, thus effectively overcoming the projection domain shift problem. Chapter 4: The second variant is inductive cross-class transfer learning, in which only the training set is assumed to be available during model learning, resulting in a harder challenge than the previous one. Nevertheless, this setting reflects the real-world situation in which test data only becomes available after model learning. The main problem remains the same: domain shift occurs when a model learned only from the training set is applied to the test set without adaptation. In this thesis, a semantic autoencoder (SAE) is proposed, building on an encoder-decoder paradigm. First, a semantic space is defined so that knowledge transfer is possible from the seen classes to the unseen classes. An encoder then embeds/projects a visual feature vector into the semantic space, while the decoder performs a generative task: the projection must be able to reconstruct the original visual features. The generative task forces the encoder to preserve richer information, so the encoder learned from seen classes is able to generalise better to the new unseen classes. Chapter 5: The third variant is unsupervised cross-class transfer learning. In this variant, no supervision is available for model learning, i.e., only unlabelled training data is available, making it the hardest setting of the three. The goal, however, is the same: learning from the training data some knowledge that can be transferred to test data composed of completely different labels. The thesis proposes a novel approach which requires no labelled training data yet is able to capture discriminative information. The proposed model is based on a new graph-regularised dictionary learning algorithm. By introducing an l1-norm graph regularisation term, instead of the conventional squared l2-norm, the model is robust against the outliers and noise typical of visual data. Importantly, the graph and the representation are learned jointly, further alleviating the effects of data outliers. As an application of this variant, person re-identification is considered in this thesis.
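A minimal sketch of a linear semantic autoencoder of the kind described in Chapter 4, assuming the common formulation min_W ||X - W^T S||^2 + lambda ||W X - S||^2, which reduces to a Sylvester equation; the random stand-in features, attribute vectors and lambda value are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def train_sae(X, S, lam=0.2):
    """Learn the projection W for min ||X - W^T S||^2 + lam * ||W X - S||^2.

    X: d x N visual features, S: k x N semantic vectors (e.g. attribute signatures).
    Setting the gradient to zero gives the Sylvester equation
    (S S^T) W + W (lam X X^T) = (1 + lam) S X^T.
    """
    A = S @ S.T
    B = lam * (X @ X.T)
    C = (1 + lam) * (S @ X.T)
    return solve_sylvester(A, B, C)        # W is k x d

# Illustrative stand-in data: 500 training images, 1024-d features, 85-d attributes.
rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 500))
S = rng.standard_normal((85, 500))
W = train_sae(X, S)

# Zero-shot classification: project a test feature into the semantic space and
# match it to the nearest unseen-class attribute signature.
unseen_prototypes = rng.standard_normal((10, 85))   # 10 hypothetical unseen classes
x_test = rng.standard_normal(1024)
s_pred = W @ x_test
pred_class = np.argmin(np.linalg.norm(unseen_prototypes - s_pred, axis=1))
print("predicted unseen class:", pred_class)
```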
APA, Harvard, Vancouver, ISO, and other styles
30

Boyer, Sebastien (Sebastien Arcario). "Transfer learning for predictive models in MOOCs." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104832.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 85-87).
Predictive models are crucial in enabling the personalization of student experiences in Massive Open Online Courses. For successful real-time interventions, these models must be transferable - that is, they must perform well on a new course from a different discipline, a different context, or even a different MOOC platform. In this thesis, we first investigate whether predictive models "transfer" well to new courses. We then create a framework to evaluate the "transferability" of predictive models. We present methods for overcoming the biases introduced by specific courses into the models by leveraging a multi-course ensemble of models. Using 5 courses from edX, we show a predictive model that, when tested on a new course, achieved up to a 6% increase in AUCROC across 90 different prediction problems. We then tested this model on 10 courses from Coursera (a different platform) and demonstrate that this model achieves an AUCROC of 0.8 across these courses for the problem of predicting dropout one week in advance. Thus, the model "transfers" very well.
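A minimal sketch of the multi-course ensemble idea: one model per source course, with averaged predictions on an unseen course; the synthetic features, logistic-regression base learner and simple probability averaging are illustrative assumptions, not the thesis's exact models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
d = 20
w_true = rng.standard_normal(d)          # shared "dropout mechanism" across courses

def make_course(n=1000, shift=0.0):
    """Synthetic stand-in for one course: clickstream features plus a dropout label."""
    X = rng.standard_normal((n, d)) + shift          # each course is slightly shifted
    y = (X @ w_true + rng.standard_normal(n) > 0).astype(int)
    return X, y

source_courses = [make_course(shift=0.2 * i) for i in range(5)]   # 5 source courses
X_target, y_target = make_course(shift=1.0)                        # a new, unseen course

# Train one model per source course, then average their predicted probabilities on
# the target course so that no single course's bias dominates the prediction.
models = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in source_courses]
probs = np.mean([m.predict_proba(X_target)[:, 1] for m in models], axis=0)
print("ensemble AUC on the unseen course:", round(roc_auc_score(y_target, probs), 3))
```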
by Sebastien Boyer.
S.M. in Technology and Policy
S.M.
APA, Harvard, Vancouver, ISO, and other styles
31

Scahill, Victoria Louise. "Perceptual learning and transfer along a continuum." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Grönlund, Lucas. "Transfer learning in Swedish - Twitter sentiment classification." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252536.

Full text
Abstract:
Language models can be applied to a diverse set of tasks with great results, but training a language model can unfortunately be costly, both in time and money. By transferring knowledge from one domain to another, the costly training only has to be performed once, thus opening the door to more applications. Most current research is carried out with English as the language of choice, which limits the number of already trained language models available in other languages. This thesis explores how the amount of data available for training a language model affects performance on a Twitter sentiment classification task, using Swedish as the language of choice. The Swedish Wikipedia was used as a source for pre-training the language models, which were then transferred to a domain consisting of Swedish tweets. Several models were trained using different amounts of data from these two domains in order to compare their performance. The results of the model evaluation show that transferring knowledge from the Swedish Wikipedia to tweets yields little to no improvement, while unsupervised fine-tuning on tweets gives rise to large improvements in performance.
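A minimal sketch of the kind of weight transfer involved: an encoder pre-trained as a language model on Wikipedia text is reused to initialise a sentiment classifier that is then fine-tuned on tweets; the module names, sizes and two-head setup are illustrative assumptions, not the thesis's exact architecture:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared text encoder: embedding plus LSTM."""
    def __init__(self, vocab_size=30000, emb_dim=300, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return out                              # (batch, seq_len, hidden)

class LanguageModel(nn.Module):
    """Pre-training head: predict the next token at every position."""
    def __init__(self, encoder, vocab_size=30000):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(256, vocab_size)

    def forward(self, tokens):
        return self.head(self.encoder(tokens))

class SentimentClassifier(nn.Module):
    """Downstream head: classify a whole tweet from the final hidden state."""
    def __init__(self, encoder, n_classes=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(256, n_classes)

    def forward(self, tokens):
        return self.head(self.encoder(tokens)[:, -1])

# 1) Pre-train the LM on Wikipedia (training loop omitted), 2) copy the encoder
#    weights into the classifier, 3) fine-tune the classifier on labelled tweets.
lm = LanguageModel(Encoder())
clf = SentimentClassifier(Encoder())
clf.encoder.load_state_dict(lm.encoder.state_dict())   # the actual transfer step

tweets = torch.randint(0, 30000, (4, 20))               # stand-in token ids
logits = clf(tweets)                                     # (4, 2) class scores
```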
APA, Harvard, Vancouver, ISO, and other styles
33

Pang, Jinyong. "Human Activity Recognition Based on Transfer Learning." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7558.

Full text
Abstract:
Human activity recognition (HAR) based on time series data is the problem of classifying various patterns of human activity. Its wide application in health care carries huge commercial benefit. With the increasing spread of smart devices, people have a strong desire for services and products customized to their individual characteristics. Deep learning models can handle HAR tasks with satisfactory results; however, training a deep learning model consumes a great deal of time and computational resources. Consequently, developing a HAR system efficiently becomes a challenging task. In this study, we develop a solid HAR system using a Convolutional Neural Network based on transfer learning, which can eliminate those barriers.
APA, Harvard, Vancouver, ISO, and other styles
34

Broqvist, Widham Emil. "Scaling up Maximum Entropy Deep Inverse Reinforcement Learning with Transfer Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281796.

Full text
Abstract:
In this thesis an issue with common inverse reinforcement learning algorithms is identified which causes them to be computationally heavy, and a solution is proposed which addresses this issue and can be built upon in the future. The complexity of inverse reinforcement learning algorithms is increased because at each iteration a so-called reinforcement learning step is performed to evaluate the result of the previous iteration and guide future learning. This step is slow for problems with large state spaces and where many iterations are required. It has been observed that the problem solved in this step is in many cases very similar to that of the previous iteration. The suggested solution is therefore to utilize transfer learning to retain some of the learned information and improve the speed of subsequent steps. In this thesis, different forms of transfer are evaluated for common reinforcement learning algorithms applied to this problem. Experiments are run using value iteration and Q-learning as the algorithms for the reinforcement learning step. The algorithms are applied to two route planning problems, and in both cases a transfer is found to be useful for improving calculation times. For value iteration the transfer is easy to understand and implement and shows large improvements in speed compared to the basic method. For Q-learning the implementation involves more variables, and while it shows an improvement, it is not as dramatic as that for value iteration. The conclusion drawn is that for inverse reinforcement learning implementations using value iteration a transfer is always recommended, while for implementations using other algorithms for the reinforcement learning step a transfer is most likely recommended, but more experimentation needs to be conducted.
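A minimal sketch of the value-iteration transfer described above: the value function from the previous IRL iteration is reused as the starting point of the next reinforcement learning step, so far fewer sweeps are needed when the reward has only changed slightly; the toy random MDP is an illustrative stand-in for the route-planning problems:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6, V_init=None):
    """P: (A, S, S) transition probabilities, R: (S,) state rewards.
    Returns the optimal values and the number of sweeps, optionally warm-started."""
    V = np.zeros(R.shape[0]) if V_init is None else V_init.copy()
    sweeps = 0
    while True:
        Q = R[None, :] + gamma * (P @ V)     # (A, S): value of each action in each state
        V_new = Q.max(axis=0)
        sweeps += 1
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, sweeps
        V = V_new

# Toy random MDP standing in for the route-planning state space.
rng = np.random.default_rng(0)
A, S = 4, 200
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)
R = rng.random(S)

V_prev, _ = value_iteration(P, R)                     # RL step at IRL iteration k
R_next = R + 0.01 * rng.standard_normal(S)            # slightly changed reward at k+1
_, cold = value_iteration(P, R_next)                  # solved from scratch
_, warm = value_iteration(P, R_next, V_init=V_prev)   # warm-started with V_prev
print(f"sweeps from scratch: {cold}, sweeps with transfer: {warm}")
```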
APA, Harvard, Vancouver, ISO, and other styles
35

Juozapaitis, Jeffrey James. "Exploring Supervised Many Layered Learning as a Precursor to Transfer Learning." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/271607.

Full text
Abstract:
In this paper, we learn a simple conceptual card game using David Stracuzzi's Cumulus algorithm. We then posit a (sadly unimplemented) scheme to transfer the neural net it creates to a similar game with small modifications, hopefully cutting down the learning time. We then analyze the flaws in the transfer scheme and posit other schemes that may produce better results.
APA, Harvard, Vancouver, ISO, and other styles
36

Groneman, Kathryn Jane. "The Trouble with Transfer." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2164.

Full text
Abstract:
It is hoped that the scientific reasoning skills taught in our biology courses will carry over to be applied in novel settings: to new concepts, future courses, other disciplines, and non-academic pursuits. This is the educational concept of transfer. Efforts over many years in the Cell Biology course at BYU to design effective assessment questions that measure competence in both deep understanding of conceptual principles and the ability to draw valid conclusions from experimental data have had at least one disquieting result. The transfer performance of many otherwise capable students is not very satisfactory. In order to explain this unsatisfactory performance, we assumed that the prompts for our transfer problems might be at fault. Consequently, we experimented with multiple versions that differed in wording or the biological setting in which the concept was placed. Performance on the various versions did not change significantly. We are led to investigate two potential underlying causes for this problem. First, like any other important scholastic trait, the ability to transfer requires directed practice through multiple iterations, a feature absent from most courses. Second, perhaps there is something innate about an individual's learning style that is contrary to performing well at transfer tasks. Students sometimes see exams as tests of gamesmanship; "Teachers are trying to outsmart me with trick questions." Post-exam conversations can be very litigious: "But it's not clear what you wanted!" We recommend the pedagogical use of transfer problems which place on the learner the responsibility to define the appropriate scope for inquiry and improve one's ability to acquire the kind of precise and comprehensive understanding that makes transfer possible. In this study, we analyze the effects of directed practice and learning style on transfer abilities. Implications for teaching are discussed and include promoting meta-cognitive practices, carefully selecting lecture and textual materials to reduce the "spotlighting effect" (selective focus on only a subset of ideas), and encouraging students to consciously use multiple learning strategies to help them succeed on various tasks. It is important to note that these skills are likely to take a significant amount of time for both students and teachers to master.
APA, Harvard, Vancouver, ISO, and other styles
37

Shalabi, Kholood Matouq. "Motor learning and inter-manual transfer of motor learning after a stroke." Thesis, University of Newcastle upon Tyne, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.768491.

Full text
Abstract:
Aims: 1) To measure automatically, in stroke survivors and neurologically intact adults, learning, inter-manual transfer (ImT) and retention of learning (Ret.) of a task requiring two sequential actions embedded within a video game. 2) To assess the effect of age and side of stroke on learning, ImT and Ret. of a motor task consisting of two sequentially linked actions. Participants: All participants were right hand dominant and included: A) 112 neurologically intact adults comprising 72 younger adults (41 females), mean±SD age 27.06±4.8 years, range 20-36 years, and 40 older adults (26 females), mean±SD age 66.2±8.4 years, range 52-86 years; B) 21 previously right-handed stroke survivors (7 females; 9 left hemiparesis), mean±SD age 66.7±9.3 years, range 54-82 years. Methods: We developed a video game that requires the player to perform two sequential actions to complete a task that mimics natural manipulation tasks. The player must first move a spaceship to a meteor (the Lock-in time phase), using isometric forces applied to game controllers with the hand muscles. The player must then track the trajectory of the meteor (the Hold/Track phase). The Lock-in time phase is assessed as the time from target presentation to achieving the target. The Hold/Track phase is assessed as the accuracy of tracking within the meteor during that phase, measured as the mean cumulative distance of the centre of the spaceship from the outer edge of the target during periods when the spaceship is outside the target. For both phase indicators, shorter distances represent higher performance. The Lock-in time and Hold/Track data were recorded for pre-training performance of the non-trained hand (nTH), pre-training performance of the trained hand (TH), training trials of the TH, reassessment after training of both the TH and the nTH, and a reassessment of both the TH and the nTH seven days after the baseline assessment. Statistical Analysis: Repeated-measures ANOVA was used, with Time as the within-participant factor to examine learning. Two separate analyses were undertaken: to examine initial learning, Time (Pre- and Post-Training), and to examine retention/consolidation, Time (Post-Training and Retention at one week). Age (Young, Older), Training Hand (right or left), and Group (neurologically intact or stroke survivors) were the between-participant factors. The dependent variables were Lock-in Time or Track.
APA, Harvard, Vancouver, ISO, and other styles
38

Xue, Yongjian. "Dynamic Transfer Learning for One-class Classification : a Multi-task Learning Approach." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0006.

Full text
Abstract:
The aim of this thesis is to minimize the performance loss of a one-class detection system when it encounters a data distribution change. The idea is to use a transfer learning approach to transfer learned information from a related old task to the new one. According to the practical applications, we divide this transfer learning problem into two parts: transfer learning in a homogeneous space and transfer learning in a heterogeneous space. A multi-task learning model is proposed to solve the above problem; it uses one parameter to balance the amount of information brought by the old task versus the new task. This model is formalized so that it can be solved by a classical one-class SVM, except with a different kernel matrix. To select the control parameter, a kernel path solution method is proposed: it computes all the solutions along the introduced parameter, and criteria are proposed to choose the corresponding optimal solution for a given number of new samples. Experiments show that this model gives a smooth transition from the old detection system to the new one whenever a data distribution change is encountered. Moreover, as the proposed model can be solved by a classical one-class SVM, online learning algorithms for one-class SVMs are then studied with the purpose of obtaining a constant false alarm rate; they can be applied directly to the online learning of the proposed model.
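A minimal sketch of the idea of solving a classical one-class SVM on a modified kernel matrix: samples from the old and new tasks are pooled and a weighting parameter scales the cross-task kernel entries; the RBF base kernel, weighting scheme and parameter values are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_old = rng.normal(loc=0.0, scale=1.0, size=(200, 5))   # plenty of old-task data
X_new = rng.normal(loc=0.5, scale=1.0, size=(20, 5))    # few samples after the change
X = np.vstack([X_old, X_new])

# Base kernel on the pooled data.
K = rbf_kernel(X, X, gamma=0.2)

# Hypothetical task weighting: down-weight similarities *across* tasks so the new
# task is not swamped by the old one (mu = 1 would reduce to a single pooled task).
mu = 0.5
task = np.array([0] * len(X_old) + [1] * len(X_new))
cross = task[:, None] != task[None, :]
K_weighted = K * np.where(cross, mu, 1.0)

# Classical one-class SVM solved on the modified (precomputed) kernel matrix.
detector = OneClassSVM(kernel="precomputed", nu=0.1).fit(K_weighted)

# Scoring new observations uses the same weighting against the training pool.
X_test = rng.normal(loc=0.5, scale=1.0, size=(5, 5))
K_test = rbf_kernel(X_test, X, gamma=0.2) * np.where(task[None, :] == 1, 1.0, mu)
print(detector.predict(K_test))          # +1 = inlier, -1 = outlier
```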
APA, Harvard, Vancouver, ISO, and other styles
39

Wilde, Heather Jo. "Proportional and non-proportional transfer of movement sequences." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/3082.

Full text
Abstract:
The ability of spatial transfer to occur in movement sequences is discussed in theoretical perspectives, but limited research has verified the extent to which the spatial characteristics of a sequential learning task transfer. Three experiments were designed to determine participants' ability to transfer a learned movement sequence to new spatial locations. A 16-element dynamic arm movement sequence was used in all experiments. The task required participants to move a horizontal lever to sequentially projected targets. Experiment 1 included two groups: one group practiced a pattern in which targets were located at 20, 40, 60, and 80° from the start position; the other group practiced a pattern with targets at 20, 26.67, 60, and 80°. The results indicated that participants could effectively transfer to new target configurations regardless of whether they required proportional or non-proportional spatial changes to the movement pattern. Experiment 2 assessed the effects of extended practice on proportional and non-proportional spatial transfer. The data indicated that while participants can effectively transfer to both proportional and non-proportional spatial transfer conditions after one day of practice, they are only effective at transferring to proportional transfer conditions after four days of practice. The results are discussed in terms of the mechanism by which response sequences become increasingly specific over extended practice in an attempt to optimize movement production. Just as response sequences became more fluent, and thus more specific, with extended practice in Experiment 2, Experiment 3 tested whether this stage of specificity may occur sooner in an easier task than in a more difficult task. The two groups in Experiment 3 practiced a less difficult sequential pattern over either one or four days. The results support the existence of practice improvement limitations based upon the simplicity versus complexity of the task.
APA, Harvard, Vancouver, ISO, and other styles
40

Lundström, Dennis. "Data-efficient Transfer Learning with Pre-trained Networks." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138612.

Full text
Abstract:
Deep learning has dominated the computer vision field since 2012, but a common criticism of deep learning methods is their dependence on large amounts of data. To combat this criticism, research into data-efficient deep learning is growing. The foremost success in data-efficient deep learning is transfer learning with networks pre-trained on the ImageNet dataset; pre-trained networks have achieved state-of-the-art performance on many tasks. We consider the pre-trained network method for a new task where we have to collect the data, and hypothesize that the data efficiency of pre-trained networks can be improved through informed data collection. After exhaustive experiments on CaffeNet and VGG16, we conclude that the data efficiency indeed can be improved. Furthermore, we investigate an alternative approach to data-efficient learning, namely adding domain knowledge in the form of a spatial transformer to the pre-trained networks. We find that spatial transformers are difficult to train and do not seem to improve data efficiency.
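A minimal sketch of the pre-trained-network recipe in a data-efficient setting: ImageNet-trained VGG16 is used as a frozen feature extractor and only a small classifier is fitted on the few collected images; the random stand-in images, class count and logistic-regression head are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Frozen ImageNet feature extractor: VGG16 without its classification head.
backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       pooling="avg", input_shape=(224, 224, 3))
backbone.trainable = False

# Stand-in for a small, newly collected dataset (e.g. 100 images, 3 classes).
images = np.random.rand(100, 224, 224, 3).astype("float32") * 255.0
labels = np.random.randint(0, 3, size=100)

features = backbone.predict(
    tf.keras.applications.vgg16.preprocess_input(images.copy()), verbose=0)

# With so little data, a simple linear classifier on frozen features is often
# far more data-efficient than training a deep network from scratch.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```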
APA, Harvard, Vancouver, ISO, and other styles
41

Wright, Michael A. E. "Supporting the transfer of learning of freehand gestures." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665410.

Full text
Abstract:
Freehand gestural interaction, that is, gestures performed mid-air without holding an input device or wearing markers for tracking, is increasingly being used as an interaction technique for a range of devices and applications. Unlike traditional point-and-click interfaces, gestural interfaces typically provide the user with different freehand gestures for different tasks. For example, whereas opening a music player, selecting a song and moving forward in a playlist are typically accomplished using a series of mouse clicks in a desktop environment, gestural interfaces might provide the user with different freehand gestures for open, play and move forward. One of the challenges for designers, and users, is therefore the need to support the learning of potentially large sets of freehand gestures. However, it is unclear whether a learnt freehand gesture, designed for a particular task on a particular device or application, can be transferred by the user to perform analogous tasks on different, and potentially unknown, devices and applications. In this thesis we address this challenge by answering the research question: “How can we support the transfer of learning of freehand gestures across different devices and applications?” Here, transfer of learning is the application of knowledge learnt in one context to a new context, for example, performing previously learnt freehand gestures to interact with different devices and applications. Drawing on previous work, we develop an understanding of how designers can support the transfer of learning of freehand gestures. In particular, two mechanisms are investigated which, if supported, can facilitate transfer of learning: learning new material to automaticity, and mindful abstraction, i.e. gaining an understanding of the underlying principle, technique, strategy, etc. The literature suggests that supporting both of these mechanisms can improve both the learning and the transfer of learning of freehand gestures. Building on this understanding, a series of related studies are designed and conducted. The results of these studies inform recommendations for designers on (i) how to support both mechanisms of transfer of learning for new users of freehand gestures and (ii) the effects that supporting these mechanisms are likely to have on the transfer of learning of freehand gestures. Additionally, the results of these studies provide metrics which allow designers to predict and evaluate both the ease of learning and the ease of transfer of learning of freehand gestures.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Yuan Ph D. Massachusetts Institute of Technology. "Transfer learning for low-resource natural language analysis." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108847.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 131-142).
Expressive machine learning models such as deep neural networks are highly effective when they can be trained with large amounts of in-domain labeled training data. While such annotations may not be readily available for the target task, it is often possible to find labeled data for another related task. The goal of this thesis is to develop novel transfer learning techniques that can effectively leverage annotations in source tasks to improve performance of the target low-resource task. In particular, we focus on two transfer learning scenarios: (1) transfer across languages and (2) transfer across tasks or domains in the same language. In multilingual transfer, we tackle challenges from two perspectives. First, we show that linguistic prior knowledge can be utilized to guide syntactic parsing with little human intervention, by using a hierarchical low-rank tensor method. In both unsupervised and semi-supervised transfer scenarios, this method consistently outperforms state-of-the-art multilingual transfer parsers and the traditional tensor model across more than ten languages. Second, we study lexical-level multilingual transfer in low-resource settings. We demonstrate that only a few (e.g., ten) word translation pairs suffice for an accurate transfer for part-of-speech (POS) tagging. Averaged across six languages, our approach achieves a 37.5% improvement over the monolingual top-performing method when using a comparable amount of supervision. In the second monolingual transfer scenario, we propose an aspect-augmented adversarial network that allows aspect transfer over the same domain. We use this method to transfer across different aspects in the same pathology reports, where traditional domain adaptation approaches commonly fail. Experimental results demonstrate that our approach outperforms different baselines and model variants, yielding a 24% gain on this pathology dataset.
by Yuan Zhang.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
43

Robotti, Odile Paola. "Transfer of learning in binary decision making problems." Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1445033/.

Full text
Abstract:
Transfer, the use of acquired knowledge, skills and abilities across tasks and contexts, is a key and elusive goal of learning. Most evidence available in the literature is based on a limited number of tasks, predominantly open-ended problems, game-like problems and taught school subjects (e.g. maths, physics, algebra). It is not obvious that findings from this work can be extended to the domain of decision making problems. This thesis, which aims to broaden the understanding of the enhancing and limiting factors of transfer, examines transfer of binary decision making problems (analogs of the Monty Hall problem) under various conditions of semantic distance between learning and target problems, contextual shifts, and delay between learning and target. Our results indicate that not all findings of the classic analogical transfer studies based on open problem solving tasks extend to binary decision making transfer. Specifically, analogical encoding (i.e. learning two analogs by comparing them) did not lead to higher transfer than summarization. Furthermore, in our experiments, transfer rates were never significantly higher for participants learning two analogs by comparison (thus presumably forming a schema) than for those learning one analog by summarization (thus presumably not forming a schema). This leads us to cautiously hypothesize that the role of schema in mediating transfer could be less relevant in binary decision making than it is in open-ended problem solving. Finally, context shifts up to a medium level, even coupled with several days' delay, did not significantly reduce transfer, although high context shifts did. On the other hand, semantic distance, quality of learning and explicit recognition were confirmed to have a significant relationship with transfer.
APA, Harvard, Vancouver, ISO, and other styles
44

Romera, Paredes B. "Multitask and transfer learning for multi-aspect data." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1457869/.

Full text
Abstract:
Supervised learning aims to learn functional relationships between inputs and outputs. Multitask learning tackles supervised learning tasks by performing them simultaneously to exploit commonalities between them. In this thesis, we focus on the problem of eliminating negative transfer in order to achieve better performance in multitask learning. We start by considering a general scenario in which the relationship between tasks is unknown. We then narrow our analysis to the case where data are characterised by a combination of underlying aspects, e.g., a dataset of images of faces, where each face is determined by a person's facial structure, the emotion being expressed, and the lighting conditions. In machine learning there have been numerous efforts based on multilinear models to decouple these aspects but these have primarily used techniques from the field of unsupervised learning. In this thesis we take inspiration from these approaches and hypothesize that supervised learning methods can also benefit from exploiting these aspects. The contributions of this thesis are as follows: 1. A multitask learning and transfer learning method that avoids negative transfer when there is no prescribed information about the relationships between tasks. 2. A multitask learning approach that takes advantage of a lack of overlapping features between known groups of tasks associated with different aspects. 3. A framework which extends multitask learning using multilinear algebra, with the aim of learning tasks associated with a combination of elements from different aspects. 4. A novel convex relaxation approach that can be applied both to the suggested framework and more generally to any tensor recovery problem. Through theoretical validation and experiments on both synthetic and real-world datasets, we show that the proposed approaches allow fast and reliable inferences. Furthermore, when performing learning tasks on an aspect of interest, accounting for secondary aspects leads to significantly more accurate results than using traditional approaches.
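As a simplified illustration of the convex-relaxation idea (here in the matrix rather than the tensor case), the sketch below fits many related tasks jointly with a trace-norm penalty, the standard convex surrogate for low rank; the synthetic data, penalty weight and proximal-gradient solver are assumptions for illustration, not the thesis's actual framework:

```python
import numpy as np

def prox_trace_norm(W, tau):
    """Proximal operator of tau * ||W||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def multitask_trace_norm(X, Y, lam=5.0, iters=2000):
    """Minimise ||Y - X W||_F^2 + lam * ||W||_* by proximal gradient descent.
    X: (n, d) shared inputs, Y: (n, T) outputs of T related tasks, W: (d, T).
    The trace norm is the classic convex relaxation of a low-rank constraint."""
    step = 1.0 / (2.0 * np.linalg.norm(X, ord=2) ** 2)   # 1 / Lipschitz constant
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ W - Y)
        W = prox_trace_norm(W - step * grad, step * lam)
    return W

# Toy data: 30 related tasks whose true weight vectors lie in a 3-dimensional subspace.
rng = np.random.default_rng(0)
n, d, T, r = 200, 50, 30, 3
W_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, T))
X = rng.standard_normal((n, d))
Y = X @ W_true + 0.1 * rng.standard_normal((n, T))

W_hat = multitask_trace_norm(X, Y)
print("leading singular values of the estimate:",
      np.round(np.linalg.svd(W_hat, compute_uv=False)[:5], 3))
```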
APA, Harvard, Vancouver, ISO, and other styles
45

Praboda, Chathurangani Rajapaksha Rajapaksha Waththe Vidanelage. "Clickbait detection using multimodel fusion and transfer learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAS025.

Full text
Abstract:
Internet users are likely to fall victim to clickbait, assuming it to be legitimate news. The notoriety of clickbait can be partially attributed to misinformation, as clickbait uses attractive headlines that are deceptive, misleading or sensationalized. A major type of clickbait takes the form of spam and advertisements that redirect users to websites selling products or services (often of dubious quality). Another common type is designed to appear as news headlines and redirect readers to online venues intending to make revenue from page views, but this news can be deceptive, sensationalized and misleading. News media often use clickbait to propagate news using a headline which lacks the greater context needed to represent the article. Since news media exchange information by acting as both content providers and content consumers, misinformation that is deliberately created to mislead requires serious attention. Hence, an automated mechanism is required to assess the likelihood of a news item being clickbait. Predicting how clickbaity a given news item is can be difficult, as clickbait are very short messages written in an obscured way. The main feature that can identify clickbait is the gap between what is promised in the social media post or news headline and what is delivered by the article linked from it. Recent enhancements to Natural Language Processing (NLP) can be adapted to distinguish linguistic patterns and syntax among the social media post, the news headline and the news article. In my thesis, I propose two innovative approaches to explore clickbait generated by news media in social media. The contributions of my thesis are two-fold: 1) a multimodel fusion-based approach incorporating deep learning and text mining techniques, and 2) an adaptation of Transfer Learning (TL) models to investigate the efficacy of transformers for predicting clickbait content. In the first contribution, the fusion model is built on three main features, namely the similarity between post and headline, the sentiment of the post and headline, and the topical similarity between the news article and the post. The fusion model uses three different algorithms to generate an output for each feature mentioned above and fuses them at the output to generate the final classifier. In addition to implementing the fusion classifier, we conducted four extended experiments mainly focusing on news media in social media. The first experiment explores the content originality of a social media post by amalgamating features extracted from the author's writing style and online circadian rhythm. This originality detection approach is used to identify news dissemination patterns among the news media community on Facebook and Twitter by observing news originators and news consumers. For this experiment, the dataset was collected with our crawlers from the Facebook and Twitter streaming APIs. The next experiment explores flaming events in the news media on Twitter using an improved sentiment classification model. The final experiment focuses on detecting the topics discussed in a meeting in real time, aiming to generate a brief summary at the end. The second contribution is to adapt TL models for clickbait detection. We evaluate the performance of three TL models (BERT, XLNet and RoBERTa) and deliver a set of architectural changes to optimize these models. We believe that these models are representative of most other TL models in terms of their architectural properties (autoregressive vs autoencoding models) and training datasets. The experiments are conducted by introducing advanced fine-tuning approaches to each model, such as layer pruning, attention pruning, weight pruning, model expansion and generalization. To the best of the authors' knowledge, there have been few attempts to use TL models on clickbait detection tasks and no comparative analysis of multiple TL models focused on this task.
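A minimal sketch of fine-tuning one of the TL models mentioned above (BERT) for binary clickbait classification with the Hugging Face transformers library; the toy headlines, label convention and hyper-parameters are illustrative assumptions, not the thesis's setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)

# Toy examples: 1 = clickbait, 0 = legitimate headline.
headlines = ["You won't believe what happened next!",
             "Central bank raises interest rate by 0.25 percentage points"]
labels = torch.tensor([1, 0])

batch = tokenizer(headlines, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                       # a few illustrative fine-tuning steps
    out = model(**batch, labels=labels)  # passing labels makes the model return the loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)
print(probs[:, 1])                       # predicted probability of being clickbait
```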
APA, Harvard, Vancouver, ISO, and other styles
46

Olsson, Anton, and Felix Rosberg. "Domain Transfer for End-to-end Reinforcement Learning." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-43042.

Full text
Abstract:
In this master thesis project, a LiDAR-based, a depth image-based and a semantic segmentation image-based reinforcement learning agent are investigated and compared for learning in simulation and performing in real time. The project utilizes the Deep Deterministic Policy Gradient architecture for learning continuous actions and was designed to control an RC car; it is one of the first projects to deploy an agent in a real scenario after training in a similar simulation. The project demonstrated that, with a proper reward function and by tuning driving parameters such as restricting steering, maximum velocity and minimum velocity and performing input data scaling, a LiDAR-based agent could drive indefinitely on a simple but completely unseen track in real time.
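A minimal sketch of the driving-parameter restrictions mentioned above: the actor's tanh outputs are rescaled into restricted steering and velocity ranges before being sent to the car; the ranges, stand-in actor network and input scaling are illustrative assumptions, not the thesis's actual values:

```python
import numpy as np
import torch
import torch.nn as nn

# Restricted action ranges (illustrative): steering in radians, velocity in m/s.
STEER_MIN, STEER_MAX = -0.30, 0.30
V_MIN, V_MAX = 0.5, 2.0

actor = nn.Sequential(               # stand-in DDPG actor: LiDAR scan -> 2 actions
    nn.Linear(360, 256), nn.ReLU(),
    nn.Linear(256, 2), nn.Tanh(),    # outputs in [-1, 1]
)

def scale_action(a):
    """Map tanh outputs in [-1, 1] to the restricted steering/velocity ranges."""
    steer = STEER_MIN + (float(a[0]) + 1.0) * 0.5 * (STEER_MAX - STEER_MIN)
    velocity = V_MIN + (float(a[1]) + 1.0) * 0.5 * (V_MAX - V_MIN)
    return steer, velocity

lidar_scan = np.random.uniform(0.1, 10.0, size=360)             # stand-in 360-beam scan
obs = torch.as_tensor(lidar_scan / 10.0, dtype=torch.float32)   # input data scaling
with torch.no_grad():
    steer, velocity = scale_action(actor(obs))
print(f"steer={steer:.3f} rad, velocity={velocity:.2f} m/s")
```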
APA, Harvard, Vancouver, ISO, and other styles
47

Qiu, David. "Representation and transfer learning using information-theoretic approximations." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127008.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 119-127).
Learning informative and transferable feature representations is a key aspect of machine learning systems. Mutual information and Kullback-Leibler divergence are principled and very popular metrics to measure feature relevance and perform distribution matching, respectively. However, clean formulations of machine learning algorithms based on these information-theoretic quantities typically require density estimation, which could be difficult for high dimensional problems. A central theme of this thesis is to translate these formulations into simpler forms that are more amenable to limited data. In particular, we modify local approximations and variational approximations of information-theoretic quantities to propose algorithms for unsupervised and transfer learning. Experiments show that the representations learned by our algorithms perform competitively compared to popular methods that require higher complexity.
by David Qiu.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
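As background to the abstract, the snippet below illustrates one widely used variational lower bound on mutual information (the InfoNCE contrastive estimator). It is an example of the general idea of avoiding explicit density estimation, not the specific local or variational approximations developed in the thesis.

# Illustrative variational lower bound on mutual information (InfoNCE-style).
import torch

def infonce_lower_bound(x_feats, y_feats):
    """I(X;Y) >= log(N) - cross-entropy of identifying the paired sample.

    x_feats, y_feats: (N, d) batches where row i of each is a paired sample.
    A learned critic is usually used; here the critic is a plain dot product.
    """
    scores = x_feats @ y_feats.t()               # (N, N) critic scores
    labels = torch.arange(x_feats.size(0))       # positives lie on the diagonal
    ce = torch.nn.functional.cross_entropy(scores, labels)
    return torch.log(torch.tensor(float(x_feats.size(0)))) - ce

# Toy check: correlated pairs should yield a higher bound than independent ones.
x = torch.randn(128, 16)
print(infonce_lower_bound(x, x + 0.1 * torch.randn_like(x)))
print(infonce_lower_bound(x, torch.randn(128, 16)))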
APA, Harvard, Vancouver, ISO, and other styles
48

Holst, Gustav. "Route Planning of Transfer Buses Using Reinforcement Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281286.

Full text
Abstract:
In route planning the goal is to obtain the best route between a set of locations, which becomes a very complex task as the number of locations increases. This study considers the problem of transfer bus route planning and examines the feasibility of applying a reinforcement learning method in this specific real-world context. In recent research, reinforcement learning methods have emerged as a promising alternative to classical optimization algorithms for solving similar problems, owing to their favorable scalability and generalization properties. However, the majority of that research has been performed on strictly theoretical problems, not using real-world data. This study implements an existing reinforcement learning model and adapts it to the realm of transfer bus route planning. The model is trained to generate routes optimized for time and cost consumption. Routes generated by the trained model are then evaluated by comparing them to corresponding manually planned routes. The reinforcement learning model produces routes that outperform the manually planned routes on both examined metrics. However, due to delimitations and assumptions made during the implementation, the explicit differences in consumption are considered promising but cannot be taken as definitive results. The main finding is the overarching behavior of the model, implying a proof of concept: reinforcement learning models are usable tools in the context of real-world transfer bus route planning.
In route planning, the goal is to obtain the best route between a set of locations, which becomes a very complicated task as the number of locations increases. This study addresses the problem of route planning for transfer buses and examines the feasibility of applying a reinforcement learning method to this real-world problem. In recent research, reinforcement learning methods have emerged as a promising alternative to classical optimization algorithms for solving similar problems, owing to their positive properties regarding scalability and generalization. However, the majority of that research has been carried out on strictly theoretical problems. This study implements an existing reinforcement learning model and adapts it to the problem of route planning for transfer buses. The model is trained to generate routes optimized with respect to time and cost consumption. The routes generated by the trained model are then evaluated against corresponding manually planned routes. The reinforcement learning model produces routes that outperform the manually planned routes with respect to both examined metrics. However, due to the delimitations and assumptions made during the implementation, the explicit differences in consumption are considered promising but cannot be regarded as definitive results. The main finding is the overall behavior of the model, which implies a proof of concept: reinforcement learning models are usable tools in the context of real-world route planning for transfer buses.
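The evaluation step described in the abstract amounts to comparing the total consumption of a generated route with that of a manually planned route over the same stops. The sketch below shows that comparison for travel time only; the stop names and travel-time matrix are made-up placeholders.

# Illustrative comparison of a model-generated route against a manually
# planned one over the same stops; travel times are made-up placeholders.
travel_minutes = {                     # symmetric travel times between stops
    ("depot", "A"): 12, ("depot", "B"): 9, ("depot", "C"): 15,
    ("A", "B"): 7, ("A", "C"): 11, ("B", "C"): 6,
}

def leg_time(a, b):
    return travel_minutes.get((a, b)) or travel_minutes[(b, a)]

def route_time(route):
    """Total travel time of a closed route that starts and ends at the depot."""
    stops = ["depot", *route, "depot"]
    return sum(leg_time(a, b) for a, b in zip(stops, stops[1:]))

manual_route = ["A", "B", "C"]         # e.g. the planner's ordering (40 min)
generated_route = ["B", "C", "A"]      # e.g. the RL model's ordering (38 min)
print(route_time(manual_route), route_time(generated_route))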
APA, Harvard, Vancouver, ISO, and other styles
49

Jin, Di Ph D. Massachusetts Institute of Technology. "Transfer learning and robustness for natural language processing." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129004.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 189-217).
Teaching machines to understand human language is one of the most elusive and long-standing challenges in Natural Language Processing (NLP). Driven by the fast development of deep learning, state-of-the-art NLP models have already achieved human-level performance on various large benchmark datasets, such as SQuAD, SNLI, and RACE. However, when these strong models are deployed in real-world applications, they often show poor generalization capability in two situations: 1. There is only a limited amount of data available for model training; 2. Deployed models may degrade significantly in performance on noisy test data or under natural/artificial adversaries. In short, performance degradation on low-resource tasks/datasets and on unseen data with distribution shifts imposes great challenges to the reliability of NLP models and prevents them from being widely applied in the wild. This dissertation aims to address these two issues.
Towards the first one, we resort to transfer learning to leverage knowledge acquired from related data in order to improve performance on a target low-resource task/dataset. Specifically, we propose different transfer learning methods for three natural language understanding tasks: multi-choice question answering, dialogue state tracking, and sequence labeling, and one natural language generation task: machine translation. These methods are based on four basic transfer learning modalities: multi-task learning, sequential transfer learning, domain adaptation, and cross-lingual transfer. We show experimental results to validate that transferring knowledge from related domains, tasks, and languages can significantly improve performance on the target task/dataset. For the second issue, we propose methods to evaluate the robustness of NLP models on text classification and entailment tasks.
On one hand, we reveal that although these models can achieve accuracies of over 90%, their predictions are easily broken by paraphrases of the original samples obtained by changing only around 10% of the words to synonyms. On the other hand, by creating a new challenge set using four adversarial strategies, we find that even the best models for aspect-based sentiment analysis cannot reliably identify the target aspect and recognize its sentiment; instead, they are easily confused by distractor aspects. Overall, these findings raise serious concerns about the robustness of NLP models, which should be enhanced to ensure stable long-run service.
by Di Jin.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Mechanical Engineering
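The robustness results rest on synonym-substitution attacks that flip a model's prediction while changing only a small fraction of the words. The loop below is a bare-bones sketch of that idea with a made-up synonym table and a placeholder classifier; it is not the attack implemented in the thesis.

# Bare-bones synonym-substitution attack sketch: greedily swap words for
# synonyms until the (placeholder) classifier's prediction flips.
SYNONYMS = {"great": ["fantastic", "terrific"], "awful": ["dreadful", "terrible"]}

def toy_classifier(text):
    """Stand-in sentiment model: predicts 1 if it spots a hard-coded positive word."""
    return 1 if "great" in text.split() else 0

def synonym_attack(text, classifier, max_fraction=0.1):
    words = text.split()
    original = classifier(text)
    budget = max(1, int(max_fraction * len(words)))   # ~10% of words, as in the abstract
    changed = 0
    for i, word in enumerate(words):
        if changed >= budget:
            break
        for candidate in SYNONYMS.get(word, []):
            trial = words.copy()
            trial[i] = candidate
            if classifier(" ".join(trial)) != original:  # prediction flipped
                return " ".join(trial)
        if word in SYNONYMS:
            changed += 1
    return None  # no adversarial paraphrase found within the budget

print(synonym_attack("the movie was great fun", toy_classifier))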
APA, Harvard, Vancouver, ISO, and other styles
50

Bäck, Jesper. "Domain similarity metrics for predicting transfer learning performance." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153747.

Full text
Abstract:
The lack of training data is a common problem in machine learning. One solution to this problem is to use transfer learning to remove or reduce the requirement for training data. Selecting datasets for transfer learning can be difficult, however. As a possible solution, this study proposes the domain similarity metrics document vector distance (DVD) and term frequency-inverse document frequency (TF-IDF) distance. DVD and TF-IDF could aid in selecting datasets for good transfer learning when there is no data from the target domain. The simple metric shared vocabulary is used as a baseline to check whether DVD or TF-IDF can indicate a better choice of fine-tuning dataset. SQuAD is a popular question answering dataset which has proven useful for pre-training models for transfer learning. The results were therefore measured by pre-training a model on the SQuAD dataset and fine-tuning it on a selection of different datasets. The proposed metrics were used to measure the similarity between the datasets to see whether there was a correlation between transfer learning effect and similarity. The results show a clear relation between a small distance according to the DVD metric and good transfer learning. This could prove useful for a target domain without training data: a model could be trained on a big dataset and fine-tuned on a small dataset that is very similar to the target domain. It was also found that even a small amount of training data from the target domain can be used to fine-tune a model pre-trained on another domain, achieving better performance compared to only training on data from the target domain.
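Read from the abstract, both metrics reduce to a distance between vector representations of two corpora. The sketch below computes a document-vector distance (with count vectors standing in for real word embeddings) and a TF-IDF distance using scikit-learn; the exact definitions in the thesis may differ, and the corpora are toy placeholders.

# Rough sketch of two corpus-similarity metrics in the spirit of the abstract:
# (1) cosine distance between averaged document vectors ("DVD"), with count
#     vectors as a stand-in for real word embeddings, and
# (2) cosine distance between corpus-level TF-IDF vectors.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

source_corpus = ["the quick brown fox jumps", "a fox fled the hounds"]
target_corpus = ["stock markets fell sharply today", "investors sold the shares"]

def mean_doc_vector(corpus, vectorizer):
    return np.asarray(vectorizer.transform(corpus).todense()).mean(axis=0, keepdims=True)

# Fit a shared vocabulary over both corpora so the vectors are comparable.
count_vec = CountVectorizer().fit(source_corpus + target_corpus)
dvd = cosine_distances(mean_doc_vector(source_corpus, count_vec),
                       mean_doc_vector(target_corpus, count_vec))[0, 0]

tfidf_vec = TfidfVectorizer().fit(source_corpus + target_corpus)
tfidf_dist = cosine_distances(tfidf_vec.transform([" ".join(source_corpus)]),
                              tfidf_vec.transform([" ".join(target_corpus)]))[0, 0]

print(f"DVD (stand-in embeddings): {dvd:.3f}, TF-IDF distance: {tfidf_dist:.3f}")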
APA, Harvard, Vancouver, ISO, and other styles
