Academic literature on the topic 'Transfer learning (TL)'

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Transfer learning (TL).'

Journal articles on the topic "Transfer learning (TL)"

1

Nishida, Satoshi, Yusuke Nakano, Antoine Blanc, Naoya Maeda, Masataka Kado, and Shinji Nishimoto. "Brain-Mediated Transfer Learning of Convolutional Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5281–88. http://dx.doi.org/10.1609/aaai.v34i04.5974.

Full text
Abstract:
The human brain can effectively learn a new task from a small number of samples, which indicates that the brain can transfer its prior knowledge to solve tasks in different domains. This function is analogous to transfer learning (TL) in the field of machine learning. TL uses a well-trained feature space in a specific task domain to improve performance in new tasks with insufficient training data. TL with rich feature representations, such as features of convolutional neural networks (CNNs), shows high generalization ability across different task domains. However, such TL is still insufficient in making machine learning attain generalization ability comparable to that of the human brain. To examine if the internal representation of the brain could be used to achieve more efficient TL, we introduce a method for TL mediated by human brains. Our method transforms feature representations of audiovisual inputs in CNNs into those in activation patterns of individual brains via their association learned ahead using measured brain responses. Then, to estimate labels reflecting human cognition and behavior induced by the audiovisual inputs, the transformed representations are used for TL. We demonstrate that our brain-mediated TL (BTL) shows higher performance in the label estimation than the standard TL. In addition, we illustrate that the estimations mediated by different brains vary from brain to brain, and the variability reflects the individual variability in perception. Thus, our BTL provides a framework to improve the generalization ability of machine-learning feature representations and enable machine learning to estimate human-like cognition and behavior, including individual variability.
APA, Harvard, Vancouver, ISO, and other styles
2

Yu, Fuchao, Xianchao Xiu, and Yunhui Li. "A Survey on Deep Transfer Learning and Beyond." Mathematics 10, no. 19 (October 3, 2022): 3619. http://dx.doi.org/10.3390/math10193619.

Full text
Abstract:
Deep transfer learning (DTL), which incorporates new ideas from deep neural networks into transfer learning (TL), has achieved excellent success in computer vision, text classification, behavior recognition, and natural language processing. As a branch of machine learning, DTL applies end-to-end learning to overcome the drawback of traditional machine learning that regards each dataset individually. Although some valuable and impressive general surveys exist on TL, special attention and recent advances in DTL are lacking. In this survey, we first review more than 50 representative approaches of DTL in the last decade and systematically summarize them into four categories. In particular, we further divide each category into subcategories according to models, functions, and operation objects. In addition, we discuss recent advances in TL in other fields and unsupervised TL. Finally, we provide some possible and exciting future research directions.
3

Cho, Seong Hee, Seokgoo Kim, and Joo-Ho Choi. "Transfer Learning-Based Fault Diagnosis under Data Deficiency." Applied Sciences 10, no. 21 (November 3, 2020): 7768. http://dx.doi.org/10.3390/app10217768.

Full text
Abstract:
In fault diagnosis studies, data deficiency, meaning that the fault data for training are scarce, is often encountered, and it may greatly deteriorate the performance of the fault diagnosis. To solve this issue, the transfer learning (TL) approach is employed to exploit a neural network (NN) trained in another (source) domain, where enough fault data are available, in order to improve the NN performance in the real (target) domain. While there have been similar attempts of TL in the literature to solve the imbalance issue, they were about the sample imbalance between the source and target domain, whereas the present study considers the imbalance between the normal and fault data. To illustrate this, normal and fault datasets are acquired from a linear motion guide, in which the data at high and low speeds represent the real operation (target) and maintenance inspection (source), respectively. The effect of data deficiency is studied by reducing the number of fault data in the target domain, and comparing the performance of TL, which exploits the knowledge of the source domain, with that of the ordinary machine learning (ML) approach without it. By examining the accuracy of the fault diagnosis as a function of the imbalance ratio, it is found that the lower bound and interquartile range (IQR) of the accuracy are improved greatly by employing the TL approach. Therefore, it can be concluded that TL is truly more effective than ordinary ML when there is a large imbalance between the fault and normal data, such as an imbalance ratio smaller than 0.1.
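The source-to-target knowledge transfer this abstract describes can be illustrated with a minimal sketch: pretrain a model where data are plentiful, then fine-tune it on the scarce target data instead of training from scratch. The toy linear model and the synthetic source/target data below are illustrative assumptions, not the paper's actual network or datasets.

```python
import random

def train_linear(xs, ys, w=0.0, b=0.0, lr=0.1, epochs=500):
    """Fit y ~ w*x + b by batch gradient descent, starting from (w, b)."""
    n = len(xs)
    for _ in range(epochs):
        gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

random.seed(0)
# Source domain (e.g., inspection-speed data): plentiful samples of y = 2x + 1.
src_x = [random.uniform(-1, 1) for _ in range(200)]
src_y = [2 * x + 1 for x in src_x]
# Target domain (e.g., operation-speed data): only 5 samples of a shifted
# but related process, y = 2x + 1.5.
tgt_x = [random.uniform(-1, 1) for _ in range(5)]
tgt_y = [2 * x + 1.5 for x in tgt_x]

w_src, b_src = train_linear(src_x, src_y)                          # pretrain on source
w_tl, b_tl = train_linear(tgt_x, tgt_y, w_src, b_src, epochs=50)   # TL: fine-tune
w_ml, b_ml = train_linear(tgt_x, tgt_y, epochs=50)                 # ML: from scratch
```

With the same small training budget on the target data, the fine-tuned model starts near the true parameters and ends closer to them than the model trained from scratch, mirroring the accuracy gain the paper attributes to TL under data deficiency.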
4

Li, Yuyang, Bolin Fu, Xidong Sun, Donglin Fan, Yeqiao Wang, Hongchang He, Ertao Gao, Wen He, and Yuefeng Yao. "Comparison of Different Transfer Learning Methods for Classification of Mangrove Communities Using MCCUNet and UAV Multispectral Images." Remote Sensing 14, no. 21 (November 2, 2022): 5533. http://dx.doi.org/10.3390/rs14215533.

Full text
Abstract:
Mangrove-forest classification by using deep learning algorithms has attracted increasing attention but remains challenging. In particular, the transfer classification of mangrove communities between different regions and different sensors remains underexplored. To fill this research gap, this study developed a new deep-learning algorithm (encoder–decoder with mixed depth-wise convolution and cascade upsampling, MCCUNet) by modifying the encoder and decoder sections of the DeepLabV3+ algorithm and presented three transfer-learning strategies, namely frozen transfer learning (F-TL), fine-tuned transfer learning (Ft-TL), and sensor-and-phase transfer learning (SaP-TL), to classify mangrove communities by using the MCCUNet algorithm and high-resolution UAV multispectral images. This study combined the deep-learning algorithms with recursive feature elimination and principal component analysis (RFE–PCA), using a high-dimensional dataset to map and classify mangrove communities, and evaluated their classification performance. The results of this study showed the following: (1) The MCCUNet algorithm outperformed the original DeepLabV3+ algorithm for classifying mangrove communities, achieving the highest overall classification accuracy (OA), i.e., 97.24%, in all scenarios. (2) The RFE–PCA dimension reduction improved the classification performance of the deep-learning algorithms. The OA of mangrove species using the MCCUNet algorithm was improved by 7.27% after adding dimension-reduced texture features and vegetation indices. (3) The Ft-TL strategy enabled the algorithm to achieve better classification accuracy and stability than the F-TL strategy. The highest improvement in the F1–score of Spartina alterniflora was 19.56%, using the MCCUNet algorithm with the Ft-TL strategy. (4) The SaP-TL strategy produced better transfer-learning classifications of mangrove communities between images of different phases and sensors. The highest improvement in the F1–score of Aegiceras corniculatum was 19.85%, using the MCCUNet algorithm with the SaP-TL strategy. (5) All three transfer-learning strategies achieved high accuracy in classifying mangrove communities, with mean F1–scores of 84.37% to 95.25%.
5

Chen, Muzi. "Analysis on Transfer Learning Models and Applications in Natural Language Processing." Highlights in Science, Engineering and Technology 16 (November 10, 2022): 446–52. http://dx.doi.org/10.54097/hset.v16i.2609.

Full text
Abstract:
Many machine learning algorithms assume that the training data and the testing data share the same feature space and distribution. Transfer learning (TL) relaxes this assumption, tolerating differences in feature spaces and data distributions, and thereby improves performance when knowledge is carried from task to task. This paper covers the basic knowledge of transfer learning and summarizes relevant experimental results of popular applications of transfer learning in the natural language processing (NLP) field. The mathematical definition of TL is briefly mentioned. After that, basic knowledge is introduced, including the different categories of TL and the comparison between TL and traditional machine learning models. Then, applications focusing mainly on question answering, cyberbullying detection, and sentiment analysis are presented. Other applications, such as Named Entity Recognition (NER), intent classification, and cross-lingual learning, are also briefly introduced. For each application, this study provides a reference on transfer learning models for related research.
6

Li, Zhichao, and Jinwei Dong. "A Framework Integrating DeeplabV3+, Transfer Learning, Active Learning, and Incremental Learning for Mapping Building Footprints." Remote Sensing 14, no. 19 (September 22, 2022): 4738. http://dx.doi.org/10.3390/rs14194738.

Full text
Abstract:
Convolutional neural network (CNN)-based remote sensing (RS) image segmentation has become a widely used method for building footprint mapping. Recently, DeeplabV3+, an advanced CNN architecture, has shown satisfactory performance for building extraction in different urban landscapes. However, it faces challenges due to the large amount of labeled data required for model training and the extremely high costs associated with the annotation of unlabelled data. These challenges encouraged us to design a framework for building footprint mapping with fewer labeled data. In this context, the published studies on RS image segmentation are reviewed first, with a particular emphasis on the use of active learning (AL), incremental learning (IL), transfer learning (TL), and their integration for reducing the cost of data annotation. Based on the literature review, we defined three candidate frameworks by integrating AL strategies (i.e., margin sampling, entropy, and vote entropy), IL, TL, and DeeplabV3+. They examine the efficacy of AL, the efficacy of IL in accelerating AL performance, and the efficacy of both IL and TL in accelerating AL performance, respectively. Additionally, these frameworks enable the iterative selection of image tiles to be annotated, training and evaluation of DeeplabV3+, and quantification of the landscape features of selected image tiles. Then, all candidate frameworks were examined using the WHU aerial building dataset, as it has sufficient (i.e., 8188) labeled image tiles with representative buildings (i.e., various densities, areas, roof colors, and shapes).
The results support our theoretical analysis: (1) all three AL strategies reduced the number of image tiles by selecting the most informative image tiles, and no significant differences were observed in their performance; (2) image tiles with more buildings and larger building area were proven to be informative for the three AL strategies, which were prioritized during the data selection process; (3) IL can expedite model training by accumulating knowledge from chosen labeled tiles; (4) TL provides a better initial learner by incorporating knowledge from a pre-trained model; (5) DeeplabV3+ incorporated with IL, TL, and AL has the best performance in reducing the cost of data annotation. It achieved good performance (i.e., mIoU of 0.90) using only 10–15% of the sample dataset; DeeplabV3+ needs 50% of the sample dataset to realize the equivalent performance. The proposed frameworks concerning DeeplabV3+ and the results imply that integrating TL, AL, and IL in human-in-the-loop building extraction could be considered in real-world applications, especially for building footprint mapping.
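Of the AL strategies this abstract compares, margin sampling is the simplest to state: annotate the tiles where the gap between the model's two highest class probabilities is smallest, i.e., where it is least decided. A minimal sketch, with made-up probability values for illustration:

```python
def margin_sampling(probs_per_tile, k):
    """Pick the k most informative tiles: smallest gap between the
    two highest class probabilities (the model is least decided)."""
    def margin(probs):
        top1, top2 = sorted(probs, reverse=True)[:2]
        return top1 - top2
    ranked = sorted(range(len(probs_per_tile)),
                    key=lambda i: margin(probs_per_tile[i]))
    return ranked[:k]

# Predicted class probabilities for four unlabeled tiles:
probs = [
    [0.98, 0.01, 0.01],  # confident -> uninformative
    [0.40, 0.35, 0.25],  # near tie  -> informative
    [0.70, 0.20, 0.10],
    [0.51, 0.48, 0.01],  # near tie  -> informative
]
selected = margin_sampling(probs, 2)
```

Here tiles 3 and 1 are selected for annotation, since their top-two probabilities nearly tie; entropy and vote entropy differ only in the scoring function applied per tile.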
7

Jeon, Ho-Kun, Seungryong Kim, Jonathan Edwin, and Chan-Su Yang. "Sea Fog Identification from GOCI Images Using CNN Transfer Learning Models." Electronics 9, no. 2 (February 11, 2020): 311. http://dx.doi.org/10.3390/electronics9020311.

Full text
Abstract:
This study proposes an approach to identifying sea fog in Geostationary Ocean Color Imager (GOCI) data by applying a Convolutional Neural Network Transfer Learning (CNN-TL) model. In this study, VGG19 and ResNet50, pre-trained CNN models, are used for their high identification performance. The training and testing datasets were extracted from GOCI images of coastal regions of the Korean Peninsula for six days in March 2015. Identification experiments were executed with varying band combinations, both with and without Transfer Learning (TL). TL enhanced the performance of the two models: with TL, both VGG19 and ResNet50 reached up to 96.3% accuracy on the training data. Thus, it is revealed that CNN-TL is effective for the detection of sea fog from GOCI imagery.
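A common way to apply pre-trained models such as VGG19 or ResNet50 is to freeze the convolutional backbone and train only a new classifier head on its features. The sketch below substitutes a toy fixed feature map for the frozen CNN body and toy one-dimensional "fog vs. clear" labels; all names and data here are illustrative assumptions, not the paper's setup.

```python
import math

def backbone(x):
    """Stand-in for a frozen, pre-trained feature extractor (the CNN body)."""
    return [x, x * x, math.sin(3 * x)]

def train_head(data, lr=0.5, epochs=300):
    """Train only a logistic-regression head on the frozen features."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = backbone(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - y                          # d(log-loss)/dz
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

# Toy "sea fog vs. clear" data: inputs whose square exceeds a threshold are "fog".
data = [(i / 10, 1 if (i / 10) ** 2 > 0.25 else 0) for i in range(-10, 11)]
w, b = train_head(data)

def predict(x):
    z = sum(wi * fi for wi, fi in zip(w, backbone(x))) + b
    return 1 if z > 0 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

Because only the small head is trained, this variant needs far fewer labeled samples than training the whole network, which is the practical appeal of TL in studies like this one.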
8

Xin, Baogui, and Wei Peng. "Prediction for Chaotic Time Series-Based AE-CNN and Transfer Learning." Complexity 2020 (September 16, 2020): 1–9. http://dx.doi.org/10.1155/2020/2680480.

Full text
Abstract:
Medium-to-long-term prediction of chaotic time series is a hot and challenging topic. We combine autoencoders and convolutional neural networks (AE-CNN) to capture the intrinsic certainty of chaotic time series, and we utilize transfer learning (TL) theory to improve prediction performance in the medium-to-long term. Thus, we develop a prediction scheme for chaotic time series based on AE-CNN and TL, named AE-CNN-TL. Our experimental results show that the proposed AE-CNN-TL has much better prediction performance than any one of the following: AE-CNN, ARMA, and LSTM.
9

Wang, Peng (Edward), and Matthew Russell. "Domain Adversarial Transfer Learning for Generalized Tool Wear Prediction." Annual Conference of the PHM Society 12, no. 1 (November 3, 2020): 8. http://dx.doi.org/10.36001/phmconf.2020.v12i1.1137.

Full text
Abstract:
Given its demonstrated ability in analyzing and revealing patterns underlying data, Deep Learning (DL) has been increasingly investigated to complement physics-based models in various aspects of smart manufacturing, such as machine condition monitoring and fault diagnosis, complex manufacturing process modeling, and quality inspection. However, successful implementation of DL techniques relies greatly on the amount, variety, and veracity of data for robust network training. Also, the distributions of data used for network training and application should be identical to avoid the internal covariance shift problem that reduces the network performance applicability. As a promising solution to address these challenges, Transfer Learning (TL) enables DL networks trained on a source domain and task to be applied to a separate target domain and task. This paper presents a domain adversarial TL approach, based upon the concepts of generative adversarial networks. In this method, the optimizer seeks to minimize the loss (i.e., regression or classification accuracy) across the labeled training examples from the source domain while maximizing the loss of the domain classifier across the source and target data sets (i.e., maximizing the similarity of source and target features). The developed domain adversarial TL method has been implemented on a 1-D CNN backbone network and evaluated for prediction of tool wear propagation, using NASA's milling dataset. Performance has been compared to other TL techniques, and the results indicate that domain adversarial TL can successfully allow DL models trained on certain scenarios to be applied to new target tasks.
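The optimizer behavior described here (minimize the task loss while maximizing the domain-classifier loss for the feature extractor) reduces to one asymmetric update rule. The sketch below shows that single step; the trade-off weight `LAMBDA`, the list-of-floats parameterization, and the function name are illustrative simplifications, not the paper's 1-D CNN implementation.

```python
LAMBDA = 0.1  # trade-off: task accuracy vs. domain confusion (illustrative value)

def adversarial_update(feat_params, task_params, dom_params,
                       g_task_feat, g_task_head, g_dom_feat, g_dom_head,
                       lr=0.01):
    """One optimizer step of domain-adversarial training.

    The task head and the domain head both descend their own losses, but
    the feature extractor descends the task loss while ASCENDING the
    domain loss (gradient reversal), pushing source and target features
    to become indistinguishable."""
    feat = [p - lr * (gt - LAMBDA * gd)        # note the reversed domain gradient
            for p, gt, gd in zip(feat_params, g_task_feat, g_dom_feat)]
    task = [p - lr * g for p, g in zip(task_params, g_task_head)]
    dom = [p - lr * g for p, g in zip(dom_params, g_dom_head)]
    return feat, task, dom

# One step with unit gradients: the feature extractor moves AGAINST the
# domain gradient, while both heads move along their own gradients.
feat, task, dom = adversarial_update([0.0], [0.0], [0.0],
                                     [0.0], [1.0], [1.0], [1.0])
```

In a full implementation this reversal is usually realized with a gradient-reversal layer between the backbone and the domain classifier, so a standard optimizer can be used unchanged.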
10

Minami, Shunya, Song Liu, Stephen Wu, Kenji Fukumizu, and Ryo Yoshida. "A General Class of Transfer Learning Regression without Implementation Cost." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8992–99. http://dx.doi.org/10.1609/aaai.v35i10.17087.

Full text
Abstract:
We propose a novel framework that unifies and extends existing methods of transfer learning (TL) for regression. To bridge a pretrained source model to the model on a target task, we introduce a density-ratio reweighting function, which is estimated through the Bayesian framework with a specific prior distribution. By changing two intrinsic hyperparameters and the choice of the density-ratio model, the proposed method can integrate three popular methods of TL: TL based on cross-domain similarity regularization, a probabilistic TL using the density-ratio estimation, and fine-tuning of pretrained neural networks. Moreover, the proposed method can benefit from its simple implementation without any additional cost; the regression model can be fully trained using off-the-shelf libraries for supervised learning in which the original output variable is simply transformed to a new output variable. We demonstrate its simplicity, generality, and applicability using various real data applications.
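The "transformed output variable" idea can be illustrated with one simple member of this family of regression-TL methods: fit an off-the-shelf learner to the residuals the pretrained source model leaves on the target data, then add the source prediction back at predict time. The source model and toy data below are assumptions for illustration, not the paper's Bayesian density-ratio machinery.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ w*x + b (an 'off-the-shelf' learner)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - w * mx
    return lambda x, w=w, b=b: w * x + b

def f_src(x):
    """Pretrained source model, assumed given (here just a fixed line)."""
    return 2 * x + 1

# Scarce data from a related target task: y = 2x + 1.8.
tgt_x = [-1.0, -0.5, 0.0, 0.5, 1.0]
tgt_y = [2 * x + 1.8 for x in tgt_x]

# "Transform the output variable": fit only the residual the source model
# leaves on the target data, then add the source prediction back.
resid = [y - f_src(x) for x, y in zip(tgt_x, tgt_y)]
g = fit_line(tgt_x, resid)

def predict(x):
    return f_src(x) + g(x)
```

No custom training code is needed: the regressor is trained on `y - f_src(x)` exactly as it would be on raw labels, which is the "no implementation cost" point of the abstract.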

Book chapters on the topic "Transfer learning (TL)"

1

Peters, Florin Tim, and Robin Hirt. "A Transfer Machine Learning Matching Algorithm for Source and Target (TL-MAST)." In Machine Learning, Optimization, and Data Science, 541–58. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_45.

Full text
2

Lu, Kai-liang, Guo-rong Luo, Ming Zhang, Jin-feng Qi, and Chun-ying Huang. "Comparison of Deep Learning Methods and a Transfer-Learning Semi-Supervised GAN Combined Framework for Pavement Crack Image Identification." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220571.

Full text
Abstract:
The pavement crack identification performance of typical transfer learning (TL), encoder-decoder (ED), and generative adversarial network (GAN) models and algorithms was evaluated and compared on SDNET2018 and CFD. TL mainly takes advantage of fine-tuning architecture-optimized backbones pre-trained on large-scale datasets to achieve good classification accuracy. ED-based algorithms can take into account the fact that crack edges, patterns, or texture features contribute differently to the identification. Both TL and ED rely on accurate crack ground truth (GT) annotation. GAN is compatible with other neural network architectures and thus can integrate various frameworks (e.g., TL, ED) and algorithms, but the training time is longer. In patch classification, the fine-tuned TL models can be equivalent to or even slightly better than the ED-based algorithms, and the prediction time is faster; in accurate crack localization, both ED- and GAN-based algorithms can achieve pixel-level segmentation. It is expected that real-time automatic crack identification can be realized on a low-computational-power platform. Furthermore, a weakly supervised learning framework (namely, TL-SSGAN) is proposed, combining TL and a semi-supervised GAN. Via fine-tuned backbones and extra unlabeled samples, it needs only approximately 10%–20% of the total samples to be labeled to achieve crack classification performance comparable to, or even better than, supervised learning methods.
3

Tanaka-Ellis, Nobue, and Sachiyo Sekiguchi. "Not a language course (!): teaching global leadership skills through a foreign language in a flipped, blended, and ubiquitous learning environment." In CALL and complexity – short papers from EUROCALL 2019, 350–55. Research-publishing.net, 2019. http://dx.doi.org/10.14705/rpnet.2019.38.1035.

Full text
Abstract:
This paper reports on the evidence of learning found in a flipped, blended, ubiquitous-learning Content and Language Integrated Learning (CLIL) course that teaches global leadership skills to Japanese undergraduates through English, using a Massive Open Online Course (MOOC). The purposes of the current study are to see whether (1) there was any evidence of learning in the students' oral outputs, and (2) there were any changes in student perceptions of the course and their Target Language (TL) fluency over a 10-week period. The data were collected through two interview sessions conducted in Weeks 4 and 14. A similar set of questions was asked in both interviews to gauge student understanding of the course content, perceptual changes, and oral output skills. Three semesters' worth of interview data were transcribed and sorted into four categories: (1) transfer of words, (2) transfer of phrases, (3) transfer of concepts, and (4) application of concepts. The results indicated that the students' perceptions of the course shifted from an English-as-a-foreign-language course to a leadership course, and they produced more course-relevant answers.

Conference papers on the topic "Transfer learning (TL)"

1

Tang, Yifan, M. Rahmani Dehaghani, and G. Gary Wang. "Review of Transfer Learning in Additive Manufacturing Modeling." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-89300.

Full text
Abstract:
Abstract The process-structure-property modeling of additive manufacturing (AM) products plays an important role in process and quality control. In practice however, only limited data are available for each product due to its expensive material and time-consuming fabricating process, which becomes an obstacle to achieve high quality models. Transfer learning (TL) is a new and promising approach that the model of one product (source) may be reused for another product (target) with limited new data on the target. This paper focuses on reviewing applications of TL in AM modeling in order to help further research in this area. To clarify the specific topic, the problem definition is presented, as well as the differences between TL, multi-fidelity modeling, and multi-task learning. Then current applications of TL in AM modeling are summarized according to different TL approaches. To better understand the performances of different TL approaches, several representative TL-assisted AM modeling methods are reproduced and tested on an open-source dataset. Based on the test results, their effectiveness and limitations are discussed in detail. Finally, future research directions about TL in AM modeling are discussed in hope to explore more potential of TL in boosting the AM model performance.
2

Krithara, Anastasia, and Georgios Paliouras. "TL-PLSA: Transfer Learning between Domains with Different Classes." In 2013 IEEE International Conference on Data Mining (ICDM). IEEE, 2013. http://dx.doi.org/10.1109/icdm.2013.113.

Full text
3

Zhang, Xiaokai, Yonggui Mei, Hao Jin, and Dong Liang. "TL-FCMA: Indoor Localization by Integrating Fuzzy Clustering with Transfer Learning." In 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC). IEEE, 2018. http://dx.doi.org/10.1109/icnidc.2018.8525754.

Full text
4

Wu, Xianyu, Shihao Feng, Xiaojie Li, Jing Yin, Jiancheng Lv, and Canghong Shi. "TL-GAN: Generative Adversarial Networks with Transfer Learning for Mode Collapse (S)." In The 31st International Conference on Software Engineering and Knowledge Engineering. KSI Research Inc. and Knowledge Systems Institute Graduate School, 2019. http://dx.doi.org/10.18293/seke2019-160.

Full text
5

Masum, Mohammad, and Hossain Shahriar. "TL-NID: Deep Neural Network with Transfer Learning for Network Intrusion Detection." In 2020 15th International Conference for Internet Technology and Secured Transactions (ICITST). IEEE, 2020. http://dx.doi.org/10.23919/icitst51030.2020.9351317.

Full text
6

Liu, Che-Yu, Xiaoliang Chen, Roberto Proietti, and S. J. Ben Yoo. "Evol-TL: Evolutionary Transfer Learning for QoT Estimation in Multi-Domain Networks." In Optical Fiber Communication Conference. Washington, D.C.: OSA, 2020. http://dx.doi.org/10.1364/ofc.2020.th3d.1.

Full text
7

Zhu, Xulyu, Zheng Yan, Jianfei Ruan, Qinghua Zheng, and Bo Dong. "IRTED-TL: An Inter-Region Tax Evasion Detection Method Based on Transfer Learning." In 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). IEEE, 2018. http://dx.doi.org/10.1109/trustcom/bigdatase.2018.00169.

Full text
8

Sadreazami, Hamidreza, Miodrag Bolic, and Sreeraman Rajan. "TL-FALL: Contactless Indoor Fall Detection Using Transfer Learning from a Pretrained Model." In 2019 IEEE International Symposium on Medical Measurements and Applications (MeMeA). IEEE, 2019. http://dx.doi.org/10.1109/memea.2019.8802154.

Full text
9

Cheng, Jiahui, Bin Guo, Jiaqi Liu, Sicong Liu, Guangzhi Wu, Yueqi Sun, and Zhiwen Yu. "TL-SDD: A Transfer Learning-Based Method for Surface Defect Detection with Few Samples." In 2021 7th International Conference on Big Data Computing and Communications (BigCom). IEEE, 2021. http://dx.doi.org/10.1109/bigcom53800.2021.00023.

Full text
10

Wei, Xiao, Fabian Jochmann, Anna Lena Demmerling, and Dirk Söffker. "Application of Transfer Learning in Metalworking Fluid Distinction." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-89452.

Full text
Abstract:
Abstract This contribution introduces a Transfer Learning (TL) approach for the diagnostic task to distinguish the ingredients of a typical production machine element: metalworking fluid (MWF). Metalworking fluids are oil or water-based fluids used during machining and shaping of metals to provide lubrication and cooling. Additives in MWF affect their performance in different metalworking processes. Performance evaluation of MWF is of relevance for product development as well as for condition monitoring. In this contribution, for the first time, Transfer Learning is adapted for MWF distinction. Firstly, two experiments are designed to get Acoustic Emission (AE) signals from thread forming processes using variant MWF. In the first experiment, eleven kinds of water-based MWF are applied and AE signals are saved into dataset A, while in the second experiment, other five MWF are used in the process of thread forming and AE signals are stored in dataset B. A convolutional neural network (CNN)-based data mining approach including data segmentation, Short-Time Fourier Transform (STFT) and data normalization algorithms is developed from dataset A. Performance of the proposed approach in dataset A is good. Afterwards, parameters in data processing and hyperparameters in CNN of the approach are transferred into dataset B. Results of dataset B show that Transfer Learning allows suitable MWF distinction.