Academic literature on the topic 'Transfer of Learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Transfer of Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Transfer of Learning"

1

Alla, Sri Sai Meghana, and Kavitha Athota. "Brain Tumor Detection Using Transfer Learning in Deep Learning." Indian Journal of Science and Technology 15, no. 40 (October 27, 2022): 2093–102. http://dx.doi.org/10.17485/ijst/v15i40.1307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Mingle, Sook Yoon, Jaesu Lee, and Dong Sun Park. "Unsupervised Transfer Learning for Plant Anomaly Recognition." Korean Institute of Smart Media 11, no. 4 (May 31, 2022): 30–37. http://dx.doi.org/10.30693/smj.2022.11.4.30.

Abstract:
Disease threatens plant growth, and recognizing the type of disease is essential to devising a remedy. In recent years, deep learning has brought significant improvement to this task; however, a large volume of labeled images is required for decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Designing an efficient and effective strategy with few labeled data is therefore one of the challenges in this area. Transfer learning, which takes knowledge from a source domain to a target domain, has been borrowed to address this issue and has shown comparable results. However, current transfer learning strategies can be regarded as supervised methods, as they assume that many labeled images are available in the source domain. In contrast, unsupervised transfer learning, using only unlabeled images from the source domain, is more convenient, since collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning to perform plant disease recognition, achieving better performance than supervised transfer learning in many cases. In addition, a vision transformer, with a bigger model capacity than convolutional networks, is utilized to obtain a better pretrained feature space. With vision-transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets. In particular, we obtain 97.3% accuracy with only 30 training images per class on the Plant Village dataset. We hope that our work encourages the community to pay attention to vision-transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
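One common concrete form of the few-shot transfer the abstract describes is a "linear probe": freeze a pretrained feature extractor and train only a small classifier on the few labeled target images. A minimal sketch of that idea, where a fixed random projection stands in for a real pretrained vision transformer and all data are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (a real setup would use a
# pretrained vision transformer); here just a fixed random projection.
W_backbone = rng.normal(size=(64, 32)) * 0.1

def extract_features(x):
    """Frozen feature extractor -- never updated during target training."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Tiny labeled target set: 30 images per class, as in the abstract's setting.
n_per_class = 30
x0 = rng.normal(loc=-0.5, size=(n_per_class, 64))
x1 = rng.normal(loc=+0.5, size=(n_per_class, 64))
x = np.vstack([x0, x1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Train only a linear probe (logistic regression) on the frozen features.
feats = extract_features(x)
w = np.zeros(feats.shape[1])
b = 0.0
lr = 0.1
for _ in range(300):
    z = np.clip(feats @ w + b, -30, 30)      # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))             # predicted P(y = 1)
    w -= lr * feats.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0).astype(int) == y)
print(f"linear-probe training accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: all representational knowledge sits in the frozen backbone, so only a handful of linear parameters must be fitted from the 30 images per class.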
3

Würschinger, Hubert, Matthias Mühlbauer, and Nico Hanenkamp. "Transfer Learning für visuelle Kontrollaufgaben/Potentials of Transfer Learning." wt Werkstattstechnik online 110, no. 04 (2020): 264–69. http://dx.doi.org/10.37544/1436-4980-2020-04-98.

Abstract:
In industrial practice, a large number of process and quality control tasks are performed visually by employees or with the aid of camera systems. By using artificial intelligence (AI), the programming effort, and thus the implementation of camera systems, can be made more efficient. Pre-trained neural networks can be used for image analysis; the application of these networks to new tasks is called transfer learning.
4

Vaishnavi, J., and V. Narmatha. "Novel Transfer Learning Attitude for Automatic Video Captioning Using Deep Learning Models." Indian Journal of Science and Technology 15, no. 43 (November 20, 2022): 2325–35. http://dx.doi.org/10.17485/ijst/v15i43.1846.

5

Gardie, Birhanu, Smegnew Asemie, Kasahun Azezew, and Zemedkun Solomon. "Potato Plant Leaf Diseases Identification Using Transfer Learning." Indian Journal of Science and Technology 15, no. 4 (January 25, 2022): 158–65. http://dx.doi.org/10.17485/ijst/v15i4.1235.

6

Cao, Bin, Sinno Jialin Pan, Yu Zhang, Dit-Yan Yeung, and Qiang Yang. "Adaptive Transfer Learning." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 407–12. http://dx.doi.org/10.1609/aaai.v24i1.7682.

Abstract:
Transfer learning aims at reusing the knowledge in some source tasks to improve the learning of a target task. Many transfer learning methods assume that the source tasks and the target task are related, even though many tasks are not related in reality. When two tasks are unrelated, the knowledge extracted from a source task may not help, and may even hurt, the performance of the target task. Thus, how to avoid negative transfer and ensure a "safe transfer" of knowledge is crucial in transfer learning. In this paper, we propose an Adaptive Transfer learning algorithm based on Gaussian Processes (AT-GP), which can be used to adapt the transfer learning schemes by automatically estimating the similarity between a source and a target task. The main contribution of our work is a new semi-parametric transfer kernel for transfer learning from a Bayesian perspective, together with a proposal to learn the model with respect to the target task, rather than all tasks as in multi-task learning. We formulate the transfer learning problem as a unified Gaussian Process (GP) model. The adaptive transfer ability of our approach is verified on both synthetic and real-world datasets.
7

Yu, Zhengxu, Dong Shen, Zhongming Jin, Jianqiang Huang, Deng Cai, and Xian-Sheng Hua. "Progressive Transfer Learning." IEEE Transactions on Image Processing 31 (2022): 1340–48. http://dx.doi.org/10.1109/tip.2022.3141258.

8

Renta-Davids, Ana-Inés, José-Miguel Jiménez-González, Manel Fandos-Garrido, and Ángel-Pío González-Soto. "Transfer of learning." European Journal of Training and Development 38, no. 8 (August 27, 2014): 728–44. http://dx.doi.org/10.1108/ejtd-03-2014-0026.

Abstract:
Purpose – This paper aims to analyse the transfer of learning to the workplace with regard to job-related training courses. The training courses analysed in this study are offered under the professional training for employment framework in Spain. Design/methodology/approach – During the training courses, trainees completed a self-reported survey of reasons for participation (time 1 data collection, N = 447). Two months after training, a second survey was sent to the trainees by email (time 2 data collection, N = 158). Factor analysis, correlations and multiple hierarchical regressions were performed. Findings – The results of this study demonstrate the importance of training relevance and training effectiveness in transfer of training. Relevance, the extent to which training courses were related to participants' workplace activities and professional development, positively influences transfer of training. Effectiveness, the training features that helped participants acquire knowledge and skills, also has a significantly positive influence on transfer of training, as do motivation to participate and learning-conducive workplace features. Originality/value – This study contributes to the understanding of transfer of learning in work-related training programmes by analysing the factors that influence transfer of learning back to the workplace. The study has practical implications for training designers and education providers seeking to enhance work-related training in the context of the Professional Training for Employment Subsystem in Spain.
9

Tetzlaff, Linda. "Transfer of learning." ACM SIGCHI Bulletin 17, SI (May 1986): 205–10. http://dx.doi.org/10.1145/30851.275631.

10

Koçer, Barış, and Ahmet Arslan. "Genetic transfer learning." Expert Systems with Applications 37, no. 10 (October 2010): 6997–7002. http://dx.doi.org/10.1016/j.eswa.2010.03.019.


Dissertations / Theses on the topic "Transfer of Learning"

1

Shell, Jethro. "Fuzzy transfer learning." Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/8842.

Abstract:
The use of machine learning to predict output from data, using a model, is a well-studied area. There are, however, a number of real-world applications that require a model to be produced but have little or no data available for the specific environment. These situations are prominent in Intelligent Environments (IEs). The sparsity of the data can be a result of the physical nature of the implementation, such as sensors placed into disaster-recovery scenarios, or where the focus of the data acquisition is on very defined user groups, as in the case of disabled individuals. Standard machine learning approaches require training data to come from the same domain. The restrictions of the physical nature of these environments can severely reduce data acquisition, making it extremely costly or, in certain situations, impossible. This impedes the ability of these approaches to model the environments. It is on this problem, in the area of IEs, that this thesis focuses. To address complex and uncertain environments, humans have learnt to use previously acquired information to reason about and understand their surroundings. Knowledge from different but related domains can be used to aid the ability to learn. For example, the ability to ride a road bicycle can help when acquiring the more sophisticated skills of mountain biking. This humanistic approach to learning can be used to tackle real-world problems where a priori labelled training data is either difficult or impossible to obtain. The transferral of knowledge from a related but differing context allows for the reuse and repurposing of known information. In this thesis, a novel composition of methods is brought together that is broadly based on a humanistic approach to learning. Two concepts, Transfer Learning (TL) and Fuzzy Logic (FL), are combined in a framework, Fuzzy Transfer Learning (FuzzyTL), to address the problem of learning tasks that have no prior direct contextual knowledge.
Through the use of a FL based learning method, uncertainty that is evident in dynamic environments is represented. By combining labelled data from a contextually related source task, and little or no unlabelled data from a target task, the framework is shown to be able to accomplish predictive tasks using models learned from contextually different data. The framework incorporates an additional novel five stage online adaptation process. By adapting the underlying fuzzy structure through the use of previous labelled knowledge and new unlabelled information, an increase in predictive performance is shown. The framework outlined is applied to two differing real-world IEs to demonstrate its ability to predict in uncertain and dynamic environments. Through a series of experiments, it is shown that the framework is capable of predicting output using differing contextual data.
2

Lu, Ying. "Transfer Learning for Image Classification." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC045/document.

Abstract:
When learning a classification model for a new target domain with only a small amount of training samples, brute-force application of machine learning algorithms generally leads to over-fitted classifiers with poor generalization ability. On the other hand, collecting a sufficient number of manually labeled training samples may prove very expensive. Transfer Learning methods aim to solve this kind of problem by transferring knowledge from a related source domain, which has much more data, to help classification in the target domain. Depending on the assumptions about the target and source domains, transfer learning can be further divided into three categories: Inductive Transfer Learning (ITL), Transductive Transfer Learning (Domain Adaptation) and Unsupervised Transfer Learning. We focus on the first, which assumes that the target task and source task are different but related. More specifically, we assume that both the target task and the source task are classification tasks, while the target categories and source categories are different but related. We propose two different methods to approach this ITL problem. In the first work we propose a new discriminative transfer learning method, namely DTL, combining a series of hypotheses made by both the model learned with target training samples and the additional models learned with source category samples. Specifically, we use the sparse reconstruction residual as a basic discriminant and enhance its discriminative power by comparing two residuals from a positive and a negative dictionary. On this basis, we make use of similarities and dissimilarities by choosing both positively correlated and negatively correlated source categories to form additional dictionaries. A new Wilcoxon-Mann-Whitney-statistic-based cost function is proposed to choose the additional dictionaries with unbalanced training data.
Also, two parallel boosting processes are applied to both the positive and negative data distributions to further improve classifier performance. On two different image classification databases, the proposed DTL consistently outperforms other state-of-the-art transfer learning methods, while at the same time maintaining a very efficient runtime. In the second work we combine the power of Optimal Transport and Deep Neural Networks to tackle the ITL problem. Specifically, we propose a novel method to jointly fine-tune a Deep Neural Network with source data and target data. By adding an Optimal Transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn useful knowledge for target classification from source data. Furthermore, by using different kinds of metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relatedness between target categories and source categories. We carried out experiments with JTLN based on Alexnet on image classification datasets, and the results verify the effectiveness of the proposed JTLN in comparison with standard consecutive fine-tuning. To the best of our knowledge, the proposed JTLN is the first work to tackle ITL with Deep Neural Networks while incorporating prior knowledge on the relatedness between target and source categories. This Joint Transfer Learning with OT loss is general and can also be applied to other kinds of Neural Networks.
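The OT loss at the heart of such a method can be illustrated with entropy-regularized optimal transport between two discrete class distributions, computed by Sinkhorn iterations. This is a generic sketch, not the thesis's implementation; the distributions and the cost matrix (here encoding how "far apart" classes are) are invented for illustration:

```python
import numpy as np

def sinkhorn(p, q, C, eps=0.05, n_iter=200):
    """Entropy-regularized optimal transport cost between discrete
    distributions p and q under ground-cost matrix C (Sinkhorn iterations)."""
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)           # alternate scaling updates
        u = p / (K @ v)
    plan = u[:, None] * K * v[None, :]  # approximate transport plan
    return np.sum(plan * C)             # transport cost under the plan

# Two predicted class distributions and a cost matrix over three classes.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.2, 0.7])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)

cost_far = sinkhorn(p, q, C)        # mass must move from class 0 to class 2
cost_near = sinkhorn(p, p.copy(), C)  # identical distributions: near-zero cost
print(cost_far, cost_near)
```

Because the cost matrix is a free choice, prior knowledge about class relatedness can be encoded directly in `C`, which is exactly the lever the abstract describes.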
3

Alexander, John W. "Transfer in reinforcement learning." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908.

Abstract:
The problem of developing skill repertoires autonomously in robotics and artificial intelligence is becoming ever more pressing. Currently, the issues of how to apply prior knowledge to new situations, and which knowledge to apply, have not been sufficiently studied. We present a transfer setting where a reinforcement learning agent faces multiple problem-solving tasks drawn from an unknown generative process, where each task has similar dynamics. The task dynamics are changed by varying the transition function between states. The tasks are presented sequentially, with the latest task presented considered as the target for transfer. We describe two approaches to solving this problem. First, we present an algorithm for transfer of the function encoding the state-action value, defined as value function transfer. This algorithm uses the value function of a source policy to initialise the policy of a target task. We varied the type of basis the algorithm used to approximate the value function. Empirical results in several well-known domains showed that the learners benefited from the transfer in the majority of cases. Results also showed that the Radial basis performed better in general than the Fourier. However, contrary to expectation, the Fourier basis benefited most from the transfer. Second, we present an algorithm for learning an informative prior which encodes beliefs about the underlying dynamics shared across all tasks. We call this agent the Informative Prior agent (IP). The prior is learnt through experience and captures the commonalities in the transition dynamics of the domain, allowing for a quantification of the agent's uncertainty about these. By using a sparse distribution of the uncertainty in the dynamics as a prior, the IP agent can successfully learn a model of 1) the set of feasible transitions rather than the set of possible transitions, and 2) the likelihood of each of the feasible transitions.
Analysis focusing on the accuracy of the learned model showed that IP had a very good accuracy bound, which is expressible in terms of only the permissible error and the diffusion, a factor that describes the concentration of the prior mass around the truth, and which decreases as the number of tasks experienced grows. The empirical evaluation of IP showed that an agent which uses the informative prior outperforms several existing Bayesian reinforcement learning algorithms on tasks with shared structure in a domain where multiple related tasks were presented only once to the learners. IP is a step towards the autonomous acquisition of behaviours in artificial intelligence. IP also provides a contribution towards the analysis of exploration and exploitation in the transfer paradigm.
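Value function transfer, as described above, can be sketched in its simplest tabular form: initialize the target learner's Q-table from the source task's learned Q-table instead of zeros. A toy illustration only (a chain world, a pure-random behavior policy so source and target see the same experience stream, and identical tasks purely for demonstration; the thesis itself works with basis-function approximation):

```python
import numpy as np

def q_learning(n, episodes, q_init, alpha=0.5, gamma=0.9, seed=0):
    """Off-policy tabular Q-learning on a chain: states 0..n-1, actions
    {left, right}, reward 1 on entering the rightmost state. The behavior
    policy is a pure random walk, so Q moves toward the optimal values."""
    rng = np.random.default_rng(seed)
    q = q_init.copy()
    for _ in range(episodes):
        s = 0
        for _ in range(30):                          # step cap per episode
            a = rng.integers(2)                      # random exploration
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r = 1.0 if s2 == n - 1 else 0.0
            q[s, a] += alpha * (r + gamma * np.max(q[s2]) - q[s, a])
            s = s2
            if s == n - 1:
                break
    return q

n = 5
q_source = q_learning(n, episodes=500, q_init=np.zeros((n, 2)))  # source task

# Value function transfer: initialize the target learner from the source
# Q-table, versus learning the (here identical) target task from scratch.
q_transfer = q_learning(n, episodes=5, q_init=q_source, seed=1)
q_scratch = q_learning(n, episodes=5, q_init=np.zeros((n, 2)), seed=1)

print(np.max(q_transfer[0]), np.max(q_scratch[0]))
```

After only five target episodes the transferred learner already holds an accurate start-state value (near the optimal 0.9³ ≈ 0.73 here), while the scratch learner has barely propagated the reward back along the chain; this head start is the mechanism the thesis evaluates.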
4

Kiehl, Janet K. "Learning to Change: Organizational Learning and Knowledge Transfer." online version, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=case1080608710.

5

Johnson, C. Dustin. "Set-Switching and Learning Transfer." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/psych_hontheses/7.

Abstract:
In this experiment I investigated the relationship between set-switching and transfer learning, both of which presumably invoke executive functioning (EF), which may in turn be correlated with intelligence. Set-switching was measured by a computerized version of the Wisconsin Card Sort Task. Another computer task was written to measure learning-transfer ability. The data indicate little correlation between the ability to transfer learning and the capacity for set-switching; that is, these abilities may draw from independent cognitive mechanisms. The major difference may be the requirement to utilize previous learning in a new way in the learning-transfer task.
6

Skolidis, Grigorios. "Transfer learning with Gaussian processes." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6271.

Abstract:
Transfer Learning is an emerging framework for learning from data that aims at intelligently transferring information between tasks. This is achieved by developing algorithms that can perform multiple tasks simultaneously, as well as translating previously acquired knowledge to novel learning problems. In this thesis, we investigate the application of Gaussian Processes to various forms of transfer learning with a focus on classification problems. The thesis begins with a thorough introduction to the framework of transfer learning, providing a clear taxonomy of the areas of research. Following that, we review the recent advances in multi-task learning for regression with Gaussian processes, and compare the performance of some of these methods on a real data set. This review gives insights into the strengths and weaknesses of each method, which acts as a point of reference when applying these methods to other forms of transfer learning. The main contributions of this thesis are reported in the three following chapters. The third chapter investigates the application of multi-task Gaussian processes to classification problems. We extend a previously proposed model to the classification scenario, providing three inference methods due to the non-Gaussian likelihood the classification paradigm imposes. The fourth chapter extends the multi-task scenario to the semi-supervised case. Using labeled and unlabeled data, we construct a novel covariance function that is able to capture the geometry of the distribution of each task. This setup allows unlabeled data to be utilised to infer the level of correlation between the tasks. Moreover, we also discuss the potential use of this model in situations where no labeled data are available for certain tasks. The fifth chapter investigates a novel form of transfer learning called meta-generalising.
The question at hand is whether, after training on a sufficient number of tasks, it is possible to make predictions on a novel task. In this situation, the predictor is embedded in an environment of multiple tasks but has no information about the origins of the test task. This elevates the concept of generalising from the level of data to the level of tasks. We employ a model based on a hierarchy of Gaussian processes, in a mixture-of-experts sense, to make predictions based on the relation between the distributions of the novel and the training tasks. Each chapter is accompanied by a thorough experimental part giving insights into the potential and the limits of the proposed methods.
7

Chen, Xiaoyi. "Transfer Learning with Kernel Methods." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0005.

Abstract:
Transfer Learning aims to take advantage of source data to help the learning task on related but different target data. This thesis contributes to homogeneous transductive transfer learning, where no labeled target data are available. We relax the constraint on the conditional probability of labels required by covariate shift, making the setting more and more general; the alignment of the marginal probabilities of source and target observations then renders source and target similar. First, a maximum-likelihood-based approach is proposed. Second, SVM is adapted to transfer learning with an extra MMD-like constraint, where the Maximum Mean Discrepancy (MMD) measures this similarity. Third, KPCA is used to align data in an RKHS by minimizing MMD. We further develop the KPCA-based approach so that a linear transformation in the input space is enough for a good and robust alignment in the RKHS. Experimentally, our proposed approaches are very promising.
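The MMD used here as a similarity measure between source and target distributions is straightforward to estimate from samples with an RBF kernel. A minimal sketch (biased V-statistic estimator, synthetic data; the thesis's constrained SVM and KPCA variants build on this quantity rather than computing it in isolation):

```python
import numpy as np

def mmd2_rbf(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between samples x and y
    with an RBF kernel (biased V-statistic estimate)."""
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 2))
target_shifted = rng.normal(1.5, 1.0, size=(200, 2))  # distribution shift
target_same = rng.normal(0.0, 1.0, size=(200, 2))     # same distribution

print(mmd2_rbf(source, target_shifted), mmd2_rbf(source, target_same))
```

A large value flags a source-target mismatch; minimizing it over a family of transformations, as in the KPCA-based approach, drives the two marginal distributions together.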
8

Al, Chalati Abdul Aziz, and Syed Asad Naveed. "Transfer Learning for Machine Diagnostics." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-43185.

Abstract:
Fault detection and diagnostics are crucial tasks in condition-based maintenance. Industries nowadays need to identify faults in their machines as early as possible to save money and take precautionary measures in case of fault occurrence. Early identification also supports smooth operation of the manufacturing process by avoiding sudden malfunctions. Obtaining sufficient training data for industrial machines, a prerequisite for deep neural networks to train an accurate prediction model, is also a major challenge. Transfer learning in such cases is beneficial, as it can help in adapting to the different operating conditions and characteristics that arise in real-life applications. Our work is focused on a pneumatic system, which utilizes compressed air to perform operations and is used in different types of machines in the industrial field. Our novel contribution is to build upon a Domain Adversarial Neural Network (DANN) with a unique approach, incorporating ensembling techniques for the diagnostics of the air leakage problem in the pneumatic system under transfer learning settings. Our approach of using ensemble methods for feature extraction shows up to 5% improvement in performance. We have also performed a comparative analysis of our work with conventional machine learning and deep learning methods, which demonstrates the importance of transfer learning, and we have demonstrated the generalization ability of our model. Lastly, we make a problem-specific contribution by suggesting a feature-engineering approach that could be implemented on almost every pneumatic system and could potentially impact the prediction result positively. We demonstrate that our designed model, with its domain adaptation ability, will be quite useful and beneficial for the industry by saving time and money and providing promising results for this air leakage problem in the pneumatic system.
9

Arnekvist, Isac. "Transfer Learning using low-dimensional Representations in Reinforcement Learning." Licentiate thesis, KTH, Robotik, perception och lärande, RPL, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279120.

Abstract:
Behaviors in Reinforcement Learning (RL) are often learned tabula rasa, requiring many observations of and interactions with the environment. Performing this outside of a simulator, in the real world, often becomes infeasible due to the large number of interactions needed. This has motivated the use of Transfer Learning for Reinforcement Learning, where learning is accelerated by using experiences from previous learning in related tasks. In this thesis, I explore how we can transfer from a simple single-object pushing policy to a wide array of non-prehensile rearrangement problems. I then explain how we can model task differences using a low-dimensional latent variable representation to make adaptation to novel tasks efficient. Lastly, the dependence on accurate function approximation is sometimes problematic, especially in RL, where statistics of target variables are not known a priori. I present observations, along with explanations, that small target variances together with momentum optimization of ReLU-activated neural network parameters lead to dying ReLUs.
(Swedish abstract:) Successful learning of behaviors in Reinforcement Learning (RL) often happens tabula rasa and requires large amounts of observations and interactions. Using RL algorithms outside of simulation, in the real world, is therefore often not practically feasible. This has motivated studies in Transfer Learning for RL, where learning is accelerated by experiences from previous learning of similar tasks. In this licentiate thesis, I explore how we can achieve transfer from a simpler manipulation policy to a larger collection of rearrangement problems. I then describe how we can model the differences between learning problems using a low-dimensional parameterization, and in this way make the learning of new problems more efficient. The dependence on good function approximation is sometimes problematic, especially in RL, where statistics of the target variables are not known in advance. I therefore finally present observations, and explanations, that small variances of the target variables together with momentum optimization lead to dying ReLUs.
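The "dying ReLU" phenomenon the thesis describes is easy to see numerically: once a unit's pre-activations are all negative, both its output and its gradient are zero, so gradient descent can never push it back into the active region. A small NumPy illustration (toy values, not from the thesis):

```python
import numpy as np

def relu(z):
    # ReLU activation: max(0, z) elementwise.
    return np.maximum(0.0, z)

def relu_grad(z):
    # Derivative of ReLU: 1 where z > 0, else 0.
    return (z > 0).astype(float)

# A "dead" unit: every pre-activation is negative, so the output is zero
# AND the gradient is zero; no weight update can flow through this unit.
z_dead = np.array([-0.5, -2.0, -0.1])
print(relu(z_dead))       # all zeros
print(relu_grad(z_dead))  # all zeros
```

With momentum optimization and small target variances, weights can be carried past the point where all pre-activations turn negative, which is one route into this state.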

QC 20200819

APA, Harvard, Vancouver, ISO, and other styles
10

Mare, Angelique. "Motivators of learning and learning transfer in the workplace." Diss., University of Pretoria, 2015. http://hdl.handle.net/2263/52441.

Full text
Abstract:
Motivating employees to learn and transfer their learning to their jobs is an important activity to ensure that employees - and the organisation - continuously adapt, evolve and survive in a highly turbulent environment. The literature shows that both intrinsic and extrinsic motivators influence learning and learning transfer, and that the extent of this influence can differ between people. This research sets out to explore and identify the intrinsic and extrinsic motivational factors that drive learning and learning transfer. A qualitative study was conducted in the form of three focus groups, in which a total of 25 middle managers from two multinational companies participated. Content and frequency analysis were used to identify the key themes from the focus group discussions. The outcome of the study is the identification of the key intrinsic and extrinsic motivational factors that drive learning and learning transfer. The findings were used to develop a Motivation-to-learn-and-transfer catalyst framework, indicating that individual intrinsic motivators are at the core of driving motivation to learn and transfer learning. It also indicates which training design and work environment factors to focus on in support of middle managers' intrinsic motivation to learn and transfer learning in the workplace. It is hoped that the outcome of this research will contribute to catalysing learning and learning transfer for middle managers to achieve higher organisational effectiveness.
Mini Dissertation (MBA)--University of Pretoria, 2015.
pa2016
Gordon Institute of Business Science (GIBS)
MBA
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Transfer of Learning"

1

Hohensee, Charles, and Joanne Lobato, eds. Transfer of Learning. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-65632-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hall, D. D. The transfer of learning. Norwich: University of East Anglia, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Razavi-Far, Roozbeh, Boyu Wang, Matthew E. Taylor, and Qiang Yang, eds. Federated and Transfer Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-11748-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Jindong. Introduction to transfer learning. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1109-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wehby, North Mary, ed. Successful transfer of learning. Malabar, Fla: Krieger Pub. Co., 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Daffron, Sandra Ratcliff. Successful transfer of learning. Malabar, Fla: Krieger Pub. Co., 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gass, Susan M., and Larry Selinker, eds. Language Transfer in Language Learning. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Taylor, Matthew E. Transfer in Reinforcement Learning Domains. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01882-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Schneider, Käthe, ed. Transfer of Learning in Organizations. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02093-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Analoui, Farhad. Training and transfer of learning. Aldershot: Avebury, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Transfer of Learning"

1

Sarang, Poornachandra. "Transfer Learning." In Artificial Neural Networks with TensorFlow 2, 133–88. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6150-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chin, Ting-Wu, and Cha Zhang. "Transfer Learning." In Computer Vision, 1–4. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_837-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Amaratunga, Thimira. "Transfer Learning." In Deep Learning on Windows, 131–79. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6431-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chin, Ting-Wu, and Cha Zhang. "Transfer Learning." In Computer Vision, 1269–73. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rostami, Mohammad, Hangfeng He, Muhao Chen, and Dan Roth. "Transfer Learning via Representation Learning." In Federated and Transfer Learning, 233–57. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11748-0_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lyman, Frank T. "Cooperative Learning." In 100 Teaching Ideas that Transfer and Transform Learning, 101–2. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003230281-63.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Inductive Transfer." In Encyclopedia of Machine Learning, 545–48. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Weiss, Karl, Taghi M. Khoshgoftaar, and DingDing Wang. "Transfer Learning Techniques." In Big Data Technologies and Applications, 53–99. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44550-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Thomas, Richard C. "Learning and Transfer." In Long Term Human-Computer Interaction, 59–78. London: Springer London, 1998. http://dx.doi.org/10.1007/978-1-4471-1548-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Seel, Norbert M. "Transfer of Learning." In Encyclopedia of the Sciences of Learning, 3337–41. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_166.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Transfer of Learning"

1

Arifuzzaman, Md, and Engin Arslan. "Learning Transfers via Transfer Learning." In 2021 IEEE Workshop on Innovating the Network for Data-Intensive Science (INDIS). IEEE, 2021. http://dx.doi.org/10.1109/indis54524.2021.00009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Guanliang, Dan Davis, Claudia Hauff, and Geert-Jan Houben. "Learning Transfer." In L@S 2016: Third (2016) ACM Conference on Learning @ Scale. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2876034.2876035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Muller, Brandon, Harith Al-Sahaf, Bing Xue, and Mengjie Zhang. "Transfer learning." In GECCO '19: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3319619.3322072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Tongliang, Qiang Yang, and Dacheng Tao. "Understanding How Feature Structure Transfers in Transfer Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/329.

Full text
Abstract:
Transfer learning transfers knowledge across domains to improve the learning performance. Since feature structures generally represent the common knowledge across different domains, they can be transferred successfully even though the labeling functions across domains differ arbitrarily. However, theoretical justification for this success has remained elusive. In this paper, motivated by self-taught learning, we regard a set of bases as a feature structure of a domain if the bases can (approximately) reconstruct any observation in this domain. We propose a general analysis scheme to theoretically justify that if the source and target domains share similar feature structures, the source domain feature structure is transferable to the target domain, regardless of the change of the labeling functions across domains. The transferred structure is interpreted to function as a regularization matrix which benefits the learning process of the target domain task. We prove that such transfer enables the corresponding learning algorithms to be uniformly stable. Specifically, we illustrate the existence of feature structure transfer in two well-known transfer learning settings: domain adaptation and learning to learn.
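The paper's central claim - that a set of bases which (approximately) reconstructs source-domain observations transfers to a target domain sharing the same feature structure - can be sketched numerically. In this toy setup (all names and dimensions are illustrative, not from the paper), both domains are generated from a shared latent basis; a basis learned from the source alone then reconstructs target observations almost perfectly, regardless of any labeling functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared feature structure: 3 basis vectors in a 10-D space.
true_basis = rng.normal(size=(3, 10))
src = rng.normal(size=(100, 3)) @ true_basis  # source observations
tgt = rng.normal(size=(50, 3)) @ true_basis   # target observations

# Learn a basis from the SOURCE domain only (top-3 right singular vectors).
_, _, Vt = np.linalg.svd(src, full_matrices=False)
B = Vt[:3]

# Project target observations onto the source basis and reconstruct.
recon = (tgt @ B.T) @ B
err = np.linalg.norm(tgt - recon) / np.linalg.norm(tgt)
print(err)  # ~0: the source feature structure reconstructs the target domain
```

In the paper's framing, the transferred basis then acts as a regularizer for the target task; this sketch only shows the reconstruction half of that argument.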
APA, Harvard, Vancouver, ISO, and other styles
5

Cai, Guanyu, Yuqin Wang, Lianghua He, and Mengchu Zhou. "Adversarial Transform Networks for Unsupervised Transfer Learning." In 2020 IEEE International Conference on Networking, Sensing and Control (ICNSC). IEEE, 2020. http://dx.doi.org/10.1109/icnsc48988.2020.9238125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Y., N. Liu, Y. Yang, Z. Wang, J. Gao, and X. Jiang. "Sparse Time-Frequency Transform Via Deep Learning and Transfer Learning: Part Ii-Transfer Learning and Field Data Application." In 83rd EAGE Annual Conference & Exhibition. European Association of Geoscientists & Engineers, 2022. http://dx.doi.org/10.3997/2214-4609.202210126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tetzlaff, Linda. "Transfer of learning." In the SIGCHI/GI conference. New York, New York, USA: ACM Press, 1987. http://dx.doi.org/10.1145/29933.275631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhuang, Fuzhen, Ping Luo, Changying Du, Qing He, and Zhongzhi Shi. "Triplex transfer learning." In the sixth ACM international conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2433396.2433449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhu, Zhenfeng, Xingquan Zhu, Yangdong Ye, Yue-Fei Guo, and Xiangyang Xue. "Transfer active learning." In the 20th ACM international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2063576.2063918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Long, Mingsheng, Jianmin Wang, Guiguang Ding, Wei Cheng, Xiang Zhang, and Wei Wang. "Dual Transfer Learning." In Proceedings of the 2012 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2012. http://dx.doi.org/10.1137/1.9781611972825.47.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Transfer of Learning"

1

Lozano-Perez, Tomas, and Leslie Kaelbling. Effective Bayesian Transfer Learning. Fort Belvoir, VA: Defense Technical Information Center, March 2010. http://dx.doi.org/10.21236/ada516458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kumar, Sharad. Localizing Little Landmarks with Transfer Learning. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Klenk, Matthew, and Kenneth D. Forbus. Learning Domain Theories via Analogical Transfer. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ada470404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cohen, Paul, and Carole Beal. LGIST: Learning Generalized Image Schemas for Transfer. Fort Belvoir, VA: Defense Technical Information Center, February 2008. http://dx.doi.org/10.21236/ada491488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kong, Q., A. Price, and S. Myers. Preliminary Transfer Learning Results on Israel Data. Office of Scientific and Technical Information (OSTI), April 2022. http://dx.doi.org/10.2172/1860678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gorski, Nicholas A., and John E. Laird. Investigating Transfer Learning in the Urban Combat Testbed. Fort Belvoir, VA: Defense Technical Information Center, September 2007. http://dx.doi.org/10.21236/ada478847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Roschelle, Jeremy, Britte Haugan Cheng, Nicola Hodkowski, Lina Haldar, and Julie Neisler. Transfer for Future Learning of Fractions within Cignition’s Microtutoring Approach. Digital Promise, April 2020. http://dx.doi.org/10.51388/20.500.12265/95.

Full text
Abstract:
In this exploratory research project, our team’s goal was to design and begin validation of a measurement approach that could provide indication of a student’s ability to transfer their mathematics understanding to future, more advanced mathematical topics. Assessing transfer of learning in mathematics and other topics is an enduring challenge. We sought to invent and validate an approach to transfer that would be relevant to improving Cignition’s product, would leverage Cignition’s use of online 1:1 tutoring, and would pioneer an approach that would contribute more broadly to assessment research.
APA, Harvard, Vancouver, ISO, and other styles
8

Gernsbacher, Morton A. Learning to Suppress Competing Information: Do the Skills Transfer? Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada396312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Munoz-Avila, Hector. Transfer Learning and Hierarchical Task Network Representations and Planning. Fort Belvoir, VA: Defense Technical Information Center, February 2008. http://dx.doi.org/10.21236/ada500020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fu, Yun. Modeling Spatiotemporal Contextual Dynamics with Sparse-Coded Transfer Learning. Fort Belvoir, VA: Defense Technical Information Center, August 2012. http://dx.doi.org/10.21236/ada587078.

Full text
APA, Harvard, Vancouver, ISO, and other styles