Journal articles on the topic 'Multitask learning'

Consult the top 50 journal articles for your research on the topic 'Multitask learning.'

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Qiuhua, Xuejun Liao, Hui Li, J. R. Stack, and L. Carin. "Semisupervised Multitask Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 31, no. 6 (June 2009): 1074–86. http://dx.doi.org/10.1109/tpami.2008.296.

2

Yang, Peng, Peilin Zhao, Jiayu Zhou, and Xin Gao. "Confidence Weighted Multitask Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5636–43. http://dx.doi.org/10.1609/aaai.v33i01.33015636.

Abstract:
Traditional online multitask learning only utilizes the first-order information of the data stream. To remedy this issue, we propose a confidence-weighted multitask learning algorithm, which maintains a Gaussian distribution over each task model to guide the online learning process. The mean (covariance) of the Gaussian distribution is a sum of a local component and a global component that is shared among all the tasks. In addition, this paper also addresses the challenge of active learning in the online multitask setting. Instead of requiring labels for all the instances, the proposed algorithm determines whether the learner should acquire a label by considering the confidence of its related tasks in the label prediction. Theoretical results show that the regret bounds can be significantly reduced. Empirical results demonstrate that the proposed algorithm achieves promising learning efficacy while simultaneously minimizing the labeling cost.
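The mean decomposition this abstract describes can be made concrete with a minimal sketch. This is illustrative only, not the authors' algorithm: a margin-based surrogate stands in for the full confidence-weighted update, with each task's weight mean split into a shared global part and a task-local part, and a diagonal variance acting as per-feature confidence.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): each task keeps a Gaussian
# over its weights; the mean splits into a shared global component and a
# task-local component, and a diagonal variance shrinks as features are seen.
d, n_tasks = 4, 2
g = np.zeros(d)                       # global mean component, shared by all tasks
local = np.zeros((n_tasks, d))        # task-local mean components
var = np.ones((n_tasks, d))           # diagonal variance (inverse confidence)

def update(t, x, y, eta=0.1):
    """Margin-based surrogate for the confidence-weighted update."""
    mu = g + local[t]
    if y * (mu @ x) < 1.0:            # margin violated: move the mean
        step = eta * y * var[t] * x   # high-variance coordinates move more
        g[:] += 0.5 * step            # share half of the correction globally
        local[t] += 0.5 * step        # keep half task-specific
        var[t] = 1.0 / (1.0 / var[t] + eta * x * x)   # confidence grows

update(0, np.array([1.0, -1.0, 0.5, 0.0]), 1.0)
```

Splitting the correction between `g` and `local[t]` is what lets an observation for one task inform predictions for the others.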
3

Li, Guangxia, Steven C. H. Hoi, Kuiyu Chang, Wenting Liu, and Ramesh Jain. "Collaborative Online Multitask Learning." IEEE Transactions on Knowledge and Data Engineering 26, no. 8 (August 2014): 1866–76. http://dx.doi.org/10.1109/tkde.2013.139.

4

Li, Zhen Xing, and Wei Hua Li. "Multitask Similarity Cluster." Advanced Materials Research 765-767 (September 2013): 1662–66. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.1662.

Abstract:
Single-task learning is the standard way to train an artificial neural network, and other tasks on the same learning machine were traditionally treated as noise. Multitask learning, proposed by Rich Caruana, holds instead that simultaneously training several correlated tasks helps improve the performance of each single task. In this paper, we propose a new neural network method, the multitask similarity cluster. Combined with the Hellinger distance, the multitask similarity cluster can estimate distances among clusters more accurately. Experimental results show that multitask learning helps improve the performance of a single task and that the multitask similarity cluster obtains satisfactory results.
5

Li, Zhen Xing, and Wei Hua Li. "Multitask Fuzzy Learning with Rule Weight." Advanced Materials Research 774-776 (September 2013): 1883–86. http://dx.doi.org/10.4028/www.scientific.net/amr.774-776.1883.

Abstract:
In a fuzzy learning system based on rule weights, the certainty grade, denoted by the membership function of a fuzzy set, defines how close a rule is to a classification. In such a system, several rules can correspond to the same classification, but it cannot reflect the changes that occur while several tasks are trained simultaneously. In this paper, we propose multitask fuzzy learning based on error correction and define a belonging grade that expresses how much a sample belongs to a rule. Experimental results demonstrate the efficiency of multitask fuzzy learning and show that multitask learning can help improve a learning machine's predictions.
6

Menghi, Nicholas, Kemal Kacar, and Will Penny. "Multitask learning over shared subspaces." PLOS Computational Biology 17, no. 7 (July 6, 2021): e1009092. http://dx.doi.org/10.1371/journal.pcbi.1009092.

Abstract:
This paper uses constructs from machine learning to define pairs of learning tasks that either shared or did not share a common subspace. Human subjects then learnt these tasks using a feedback-based approach and we hypothesised that learning would be boosted for shared subspaces. Our findings broadly supported this hypothesis with either better performance on the second task if it shared the same subspace as the first, or positive correlations over task performance for shared subspaces. These empirical findings were compared to the behaviour of a Neural Network model trained using sequential Bayesian learning and human performance was found to be consistent with a minimal capacity variant of this model. Networks with an increased representational capacity, and networks without Bayesian learning, did not show these transfer effects. We propose that the concept of shared subspaces provides a useful framework for the experimental study of human multitask and transfer learning.
7

Kato, Tsuyoshi, Hisashi Kashima, Masashi Sugiyama, and Kiyoshi Asai. "Conic Programming for Multitask Learning." IEEE Transactions on Knowledge and Data Engineering 22, no. 7 (July 2010): 957–68. http://dx.doi.org/10.1109/tkde.2009.142.

8

Kong, Yu, Ming Shao, Kang Li, and Yun Fu. "Probabilistic Low-Rank Multitask Learning." IEEE Transactions on Neural Networks and Learning Systems 29, no. 3 (March 2018): 670–80. http://dx.doi.org/10.1109/tnnls.2016.2641160.

9

Yin, Jichong, Fang Wu, Yue Qiu, Anping Li, Chengyi Liu, and Xianyong Gong. "A Multiscale and Multitask Deep Learning Framework for Automatic Building Extraction." Remote Sensing 14, no. 19 (September 22, 2022): 4744. http://dx.doi.org/10.3390/rs14194744.

Abstract:
Detecting buildings, segmenting building footprints, and extracting building edges from high-resolution remote sensing images are vital in applications such as urban planning, change detection, smart cities, and map-making and updating. The tasks of building detection, footprint segmentation, and edge extraction affect each other to a certain extent. However, most previous works have focused on one of these three tasks and have lacked a multitask learning framework that can simultaneously solve the tasks of building detection, footprint segmentation and edge extraction, making it difficult to obtain smooth and complete buildings. This study proposes a novel multiscale and multitask deep learning framework to consider the dependencies among building detection, footprint segmentation, and edge extraction while completing all three tasks. In addition, a multitask feature fusion module is introduced into the deep learning framework to increase the robustness of feature extraction. A multitask loss function is also introduced to balance the training losses among the various tasks to obtain the best training results. Finally, the proposed method is applied to open-source building datasets and large-scale high-resolution remote sensing images and compared with other advanced building extraction methods. To verify the effectiveness of multitask learning, the performance of multitask learning and single-task training is compared in ablation experiments. The experimental results show that the proposed method has certain advantages over other methods and that multitask learning can effectively improve single-task performance.
10

Szyszkowska, Joanna, Anna Kinga Zduńczyk-Kłos, Antonina Doroszewska, Barbara Banaszczak, Milena Michalska, and Katarzyna Potocka. "Zdolność do skupienia uwagi i wielozadaniowości u studentów uczelni wyższych w okresie pandemicznej nauki na odległość [The ability to focus and multitask among university students during pandemic distance learning]." Kwartalnik Pedagogiczny 68, no. 3 (2023): 71–90. http://dx.doi.org/10.31338/2657-6007.kp.2023-3.4.

Abstract:
The study aimed to investigate the impact of the changes in higher education during the COVID-19 pandemic on Polish university students’ ability to focus and multitask, and the presumed disproportions in these skills between medical students and other students. We also analysed the differences in the evaluation of the organisation of classes during the pandemic in medicine and in other programmes. The study consisted of a survey on distance learning during the COVID-19 pandemic, an assessment of cognitive and motivational functions based on the PDQ-20 questionnaire and the authors’ original questions, and a test examining the ability to multitask on the Psytoolkit platform. 201 students participated in the study – 111 medical students and 90 other students. The respondents’ answers indicate their greater exposure to distracting stimuli and their increased tendency to multitask during distance learning. The results of the experimental test show that multitasking leads to longer task-processing times and higher error rates. Medical students were less satisfied with the quality of distance classes. The level of subjective cognitive deficits and multitasking intensity was similar in both respondent groups. These results suggest that methods engaging students in distance learning may support learning by enhancing focus. This is the first study investigating university students’ ability to focus and multitask during pandemic distance learning.
11

Saylam, Berrenur, and Özlem Durmaz İncel. "Multitask Learning for Mental Health: Depression, Anxiety, Stress (DAS) Using Wearables." Diagnostics 14, no. 5 (February 26, 2024): 501. http://dx.doi.org/10.3390/diagnostics14050501.

Abstract:
This study investigates the prediction of mental well-being factors—depression, stress, and anxiety—using the NetHealth dataset from college students. The research addresses four key questions, exploring the impact of digital biomarkers on these factors, their alignment with conventional psychology literature, the time-based performance of applied methods, and potential enhancements through multitask learning. The findings reveal modality rankings aligned with psychology literature, validated against paper-based studies. Improved predictions are noted with temporal considerations, and further enhanced by multitasking. Mental health multitask prediction results show aligned baseline and multitask performances, with notable enhancements using temporal aspects, particularly with the random forest (RF) classifier. Multitask learning improves outcomes for depression and stress but not anxiety using RF and XGBoost.
12

Yu, Qingtian, Haopeng Wang, Fedwa Laamarti, and Abdulmotaleb El Saddik. "Deep Learning-Enabled Multitask System for Exercise Recognition and Counting." Multimodal Technologies and Interaction 5, no. 9 (September 8, 2021): 55. http://dx.doi.org/10.3390/mti5090055.

Abstract:
Exercise is a prevailing topic in modern society as more people are pursuing a healthy lifestyle. Physical activities provide significant benefits to human well-being from the inside out. Human pose estimation, action recognition and repetitive counting fields developed rapidly in the past several years. However, few works combined them together to assist people in exercise. In this paper, we propose a multitask system covering the three domains. Different from existing methods, heatmaps, which are the byproducts of 2D human pose estimation models, are adopted for exercise recognition and counting. Recent heatmap processing methods have been proven effective in extracting dynamic body pose information. Inspired by this, we propose a deep-learning multitask model of exercise recognition and repetition counting. To the best of our knowledge, this approach is attempted for the first time. To meet the needs of the multitask model, we create a new dataset Rep-Penn with action, counting and speed labels. Our multitask system can estimate human pose, identify physical activities and count repeated motions. We achieved 95.69% accuracy in exercise recognition on the Rep-Penn dataset. The multitask model also performed well in repetitive counting with 0.004 Mean Average Error (MAE) and 0.997 Off-By-One (OBO) accuracy on the Rep-Penn dataset. Compared with existing frameworks, our method obtained state-of-the-art results.
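The repetition-counting task above can be illustrated with a toy sketch. This is hypothetical, not the paper's heatmap pipeline: a plain 1D periodic motion signal and a simple peak counter stand in for the pose-heatmap sequence.

```python
import math

# Illustrative only (the paper counts repetitions from pose-heatmap sequences):
# count local maxima above a relative height threshold as one repetition each.
def count_reps(signal, rel_height=0.5):
    thresh = rel_height * max(signal)
    return sum(1 for i in range(1, len(signal) - 1)
               if signal[i - 1] < signal[i] >= signal[i + 1] and signal[i] > thresh)

# Three exercise cycles sampled over 100 frames:
signal = [math.sin(2 * math.pi * 3 * t / 100) for t in range(100)]
reps = count_reps(signal)  # one peak per cycle -> 3 repetitions
```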
13

Sun, Kai, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. "Progressive Multi-task Learning with Controlled Information Flow for Joint Entity and Relation Extraction." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13851–59. http://dx.doi.org/10.1609/aaai.v35i15.17632.

Abstract:
Multitask learning has shown promising performance in learning multiple related tasks simultaneously, and variants of model architectures have been proposed, especially for supervised classification problems. One goal of multitask learning is to extract a good representation that sufficiently captures the relevant part of the input about the output for each learning task. To achieve this objective, in this paper we design a multitask learning architecture based on the observation that correlations exist between outputs of some related tasks (e.g. entity recognition and relation extraction tasks), and they reflect the relevant features that need to be extracted from the input. As outputs are unobserved, our proposed model exploits task predictions in lower layers of the neural model, also referred to as early predictions in this work. But we control the injection of early predictions to ensure that we extract good task-specific representations for classification. We refer to this model as a Progressive Multitask learning model with Explicit Interactions (PMEI). Extensive experiments on multiple benchmark datasets produce state-of-the-art results on the joint entity and relation extraction task.
14

Su, Fang, Hai-Yang Shang, and Jing-Yan Wang. "Low-Rank Deep Convolutional Neural Network for Multitask Learning." Computational Intelligence and Neuroscience 2019 (May 20, 2019): 1–10. http://dx.doi.org/10.1155/2019/7410701.

Abstract:
In this paper, we propose a novel multitask learning method based on a deep convolutional network. The proposed deep network has four convolutional layers, three max-pooling layers, and two parallel fully connected layers. To adapt the deep network to the multitask learning problem, we propose to learn a low-rank deep network so that the relations among different tasks can be explored. We propose to minimize the number of independent parameter rows of one fully connected layer, measured by the nuclear norm of that layer's parameter matrix, to explore the relations among different tasks and seek a low-rank parameter matrix. Meanwhile, we also regularize another fully connected layer with a sparsity penalty so that the useful features learned by the lower layers can be selected. The learning problem is solved by an iterative algorithm based on gradient descent and back-propagation. The proposed algorithm is evaluated on benchmark datasets for multiple face attribute prediction, multitask natural language processing, and joint economic index prediction. The evaluation results show the advantage of the low-rank deep CNN model on multitask problems.
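The two regularizers described in this abstract can be sketched as follows. This is illustrative, not the paper's implementation; `lam_rank` and `lam_sparse` are assumed hyperparameter names.

```python
import numpy as np

# Illustrative sketch of the two penalties: a nuclear-norm term on one fully
# connected layer's weight matrix couples the tasks through low rank, and an
# L1 term on another layer selects shared features.
def nuclear_norm(W):
    return np.linalg.svd(W, compute_uv=False).sum()  # sum of singular values

def multitask_objective(task_losses, W_tasks, W_select,
                        lam_rank=0.1, lam_sparse=0.01):
    return (sum(task_losses)
            + lam_rank * nuclear_norm(W_tasks)       # low-rank coupling
            + lam_sparse * np.abs(W_select).sum())   # sparse feature selection

# A rank-1 task matrix has nuclear norm equal to its single singular value:
W_rank1 = np.outer([1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0])
```

Minimizing the nuclear norm pushes the rows of `W_tasks` toward a shared low-dimensional subspace, which is the mechanism the abstract uses to tie the tasks together.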
15

Kim, Hyuncheol, and Joonki Paik. "Low-Rank Representation-Based Object Tracking Using Multitask Feature Learning with Joint Sparsity." Abstract and Applied Analysis 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/147353.

Abstract:
We address the object tracking problem as a multitask feature learning process based on a low-rank representation of features with joint sparsity. We first select features with low-rank representation within a number of initial frames to obtain a subspace basis. Next, the features represented by the low-rank and sparse property are learned using a modified joint-sparsity-based multitask feature learning framework. Both the features and sparse errors are then optimally updated using a novel incremental alternating direction method. The low-rank minimization problem for learning multitask features can be solved by a few sequences of efficient closed-form updates. Since the proposed method performs feature learning in both a multitask and a low-rank manner, it can not only reduce the dimension but also improve tracking performance without drift. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art tracking methods on challenging image sequences.
16

Jaśkowski, Wojciech, Krzysztof Krawiec, and Bartosz Wieloch. "Multitask Visual Learning Using Genetic Programming." Evolutionary Computation 16, no. 4 (December 2008): 439–59. http://dx.doi.org/10.1162/evco.2008.16.4.439.

Abstract:
We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.
17

Skolidis, Grigorios, and Guido Sanguinetti. "Semisupervised Multitask Learning With Gaussian Processes." IEEE Transactions on Neural Networks and Learning Systems 24, no. 12 (December 2013): 2101–12. http://dx.doi.org/10.1109/tnnls.2013.2272403.

18

Li, Cong, Michael Georgiopoulos, and Georgios C. Anagnostopoulos. "Pareto-Path Multitask Multiple Kernel Learning." IEEE Transactions on Neural Networks and Learning Systems 26, no. 1 (January 2015): 51–61. http://dx.doi.org/10.1109/tnnls.2014.2309939.

19

Lee, Jeong Yoon, Youngmin Oh, Sung Shin Kim, Robert A. Scheidt, and Nicolas Schweighofer. "Optimal Schedules in Multitask Motor Learning." Neural Computation 28, no. 4 (April 2016): 667–85. http://dx.doi.org/10.1162/neco_a_00823.

Abstract:
Although scheduling multiple tasks in motor learning to maximize long-term retention of performance is of great practical importance in sports training and motor rehabilitation after brain injury, it is unclear how to do so. We propose here a novel theoretical approach that uses optimal control theory and computational models of motor adaptation to determine schedules that maximize long-term retention predictively. Using Pontryagin’s maximum principle, we derived a control law that determines the trial-by-trial task choice that maximizes overall delayed retention for all tasks, as predicted by the state-space model. Simulations of a single session of adaptation with two tasks show that when task interference is high, there exists a threshold in relative task difficulty below which the alternating schedule is optimal. Only for large differences in task difficulties do optimal schedules assign more trials to the harder task. However, over the parameter range tested, alternating schedules yield long-term retention performance that is only slightly inferior to performance given by the true optimal schedules. Our results thus predict that in a large number of learning situations wherein tasks interfere, intermixing tasks with an equal number of trials is an effective strategy in enhancing long-term retention.
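The trial-by-trial state-space dynamics discussed above can be sketched with a toy two-task simulation. The retention factor `A`, learning rate `B`, and schedules are illustrative values, not the paper's fitted parameters.

```python
# Illustrative state-space sketch: on each trial, both task memories decay by
# the retention factor A, and the practised task moves toward the target by a
# fraction B of its error.
def simulate(schedule, A=0.98, B=0.2, target=1.0):
    x = [0.0, 0.0]                    # memory states for tasks 0 and 1
    for t in schedule:
        error = target - x[t]
        x = [A * xi for xi in x]      # retention loss on every trial
        x[t] += B * error             # learning update for the practised task
    return x

alternating = [0, 1] * 20             # intermix the two tasks equally
blocked = [0] * 20 + [1] * 20         # all of task 0, then all of task 1
x_alt, x_blk = simulate(alternating), simulate(blocked)
```

Even this toy version shows why schedule matters: under the blocked schedule, task 0's memory decays throughout the second half while task 1 is practised.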
20

Dahan, Elay, and Israel Cohen. "Deep-Learning-Based Multitask Ultrasound Beamforming." Information 14, no. 10 (October 23, 2023): 582. http://dx.doi.org/10.3390/info14100582.

Abstract:
In this paper, we present a new method for multitask learning applied to ultrasound beamforming. Beamforming is a critical component in the ultrasound image formation pipeline. Ultrasound images are constructed using sensor readings from multiple transducer elements, with each element typically capturing multiple acquisitions per frame. Hence, the beamformer is crucial for framerate performance and overall image quality. Furthermore, post-processing, such as image denoising, is usually applied to the beamformed image to achieve high clarity for diagnosis. This work shows a fully convolutional neural network that can learn different tasks by applying a new weight normalization scheme. We adapt our model to both high frame rate requirements by fitting weight normalization parameters for the sub-sampling task and image denoising by optimizing the normalization parameters for the speckle reduction task. Our model outperforms single-angle delay and sum on pixel-level measures for speckle noise reduction, subsampling, and single-angle reconstruction.
21

Pan, Haixia, Yanan Li, Hongqiang Wang, and Xiaomeng Tian. "Railway Obstacle Intrusion Detection Based on Convolution Neural Network Multitask Learning." Electronics 11, no. 17 (August 28, 2022): 2697. http://dx.doi.org/10.3390/electronics11172697.

Abstract:
The detection of train obstacle intrusion is very important for the safe running of trains. In this paper, we design a multitask intrusion detection model to warn of the intrusion of detected target obstacles in railway scenes. In addition, we design a multiobjective optimization algorithm that handles tasks of different complexity. Through a shared structure-reparameterized backbone network, our multitask learning model utilizes resources effectively. Our work achieves competitive results on both object detection and line detection, and achieves excellent inference-time performance (50 FPS). Our work is the first to introduce a multitask approach to realize an assisted-driving function in a railway scene.
22

Zhang, Wenzheng, Chenyan Xiong, Karl Stratos, and Arnold Overwijk. "Improving Multitask Retrieval by Promoting Task Specialization." Transactions of the Association for Computational Linguistics 11 (2023): 1201–12. http://dx.doi.org/10.1162/tacl_a_00597.

Abstract:
In multitask retrieval, a single retriever is trained to retrieve relevant contexts for multiple tasks. Despite its practical appeal, naive multitask retrieval lags behind task-specific retrieval, in which a separate retriever is trained for each task. We show that it is possible to train a multitask retriever that outperforms task-specific retrievers by promoting task specialization. The main ingredients are: (1) a better choice of pretrained model—one that is explicitly optimized for multitasking—along with compatible prompting, and (2) a novel adaptive learning method that encourages each parameter to specialize in a particular task. The resulting multitask retriever is highly performant on the KILT benchmark. Upon analysis, we find that the model indeed learns parameters that are more task-specialized compared to naive multitasking without prompting or adaptive learning.
23

Li, Lu, Yongjiu Dai, Zhongwang Wei, Wei Shangguan, Yonggen Zhang, Nan Wei, and Qingliang Li. "Enforcing Water Balance in Multitask Deep Learning Models for Hydrological Forecasting." Journal of Hydrometeorology 25, no. 1 (January 2024): 89–103. http://dx.doi.org/10.1175/jhm-d-23-0073.1.

Abstract:
Accurate prediction of hydrological variables (HVs) is critical for understanding hydrological processes. Deep learning (DL) models have shown excellent forecasting abilities for different HVs. However, most DL models typically predicted HVs independently, without satisfying the principle of water balance. This missed the interactions between different HVs in the hydrological system and the underlying physical rules. In this study, we developed a DL model based on multitask learning and hybrid physically constrained schemes to simultaneously forecast soil moisture, evapotranspiration, and runoff. The models were trained using ERA5-Land data, which have water budget closure. We thoroughly assessed the advantages of the multitask framework and the proposed constrained schemes. Results showed that multitask models with different loss-weighted strategies produced comparable or better performance compared to the single-task model. The multitask model with a scaling factor of 5 achieved the best among all multitask models and performed better than the single-task model over 70.5% of grids. In addition, the hybrid constrained scheme took advantage of both soft and hard constrained models, providing physically consistent predictions with better model performance. The hybrid constrained models performed the best among different constrained models in terms of both general and extreme performance. Moreover, the hybrid model was affected the least as the training data were artificially reduced, and provided better spatiotemporal extrapolation ability under different artificial prediction challenges. These findings suggest that the hybrid model provides better performance compared to previously reported constrained models when facing limited training data and extrapolation challenges.
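A soft water-balance constraint of the kind described above can be sketched as a penalty term added to the multitask loss. Variable names and the penalty weight here are illustrative, not the paper's model.

```python
# Illustrative soft-constraint sketch: the multitask loss sums squared errors
# over the predicted hydrological variables and penalizes violations of the
# water balance P = ET + R + dS (precipitation = evapotranspiration + runoff
# + storage change).
def water_balance_loss(pred, obs, precip, lam=1.0):
    mse = sum((pred[k] - obs[k]) ** 2 for k in pred)
    closure = precip - (pred['et'] + pred['runoff'] + pred['dstorage'])
    return mse + lam * closure ** 2

pred = {'et': 0.4, 'runoff': 0.35, 'dstorage': 0.25}   # closes the balance for P = 1.0
obs  = {'et': 0.4, 'runoff': 0.35, 'dstorage': 0.25}
loss_closed = water_balance_loss(pred, obs, precip=1.0)
leaky = dict(pred, runoff=0.6)                          # violates the balance
loss_leaky = water_balance_loss(leaky, obs, precip=1.0)
```

A hard-constrained variant would instead solve for one variable from the other predictions so the balance holds exactly; the paper's hybrid scheme combines both ideas.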
24

Wang, Xiaoqi, Yingjie Cheng, Yaning Yang, Yue Yu, Fei Li, and Shaoliang Peng. "Multitask joint strategies of self-supervised representation learning on biomedical networks for drug discovery." Nature Machine Intelligence 5, no. 4 (April 24, 2023): 445–56. http://dx.doi.org/10.1038/s42256-023-00640-6.

Abstract:
Self-supervised representation learning (SSL) on biomedical networks provides new opportunities for drug discovery; however, effectively combining multiple SSL models is still challenging and has been rarely explored. We therefore propose multitask joint strategies of SSL on biomedical networks for drug discovery, named MSSL2drug. We design six basic SSL tasks that are inspired by the knowledge of various modalities, including structures, semantics and attributes in heterogeneous biomedical networks. Importantly, fifteen combinations of multiple tasks are evaluated using a graph-attention-based multitask adversarial learning framework in two drug discovery scenarios. The results suggest two important findings: (1) combinations of multimodal tasks achieve better performance than other multitask joint models; (2) the local–global combination models yield higher performance than random two-task combinations when there are the same number of modalities. We thus conjecture that the multimodal and local–global combination strategies can be treated as the guideline of multitask SSL for drug discovery.
25

Forouzannezhad, Parisa, Dominic Maes, Daniel S. Hippe, Phawis Thammasorn, Reza Iranzad, Jie Han, Chunyan Duan, et al. "Multitask Learning Radiomics on Longitudinal Imaging to Predict Survival Outcomes following Risk-Adaptive Chemoradiation for Non-Small Cell Lung Cancer." Cancers 14, no. 5 (February 26, 2022): 1228. http://dx.doi.org/10.3390/cancers14051228.

Abstract:
Medical imaging provides quantitative and spatial information to evaluate treatment response in the management of patients with non-small cell lung cancer (NSCLC). High throughput extraction of radiomic features on these images can potentially phenotype tumors non-invasively and support risk stratification based on survival outcome prediction. The prognostic value of radiomics from different imaging modalities and time points prior to and during chemoradiation therapy of NSCLC, relative to conventional imaging biomarker or delta radiomics models, remains uncharacterized. We investigated the utility of multitask learning of multi-time point radiomic features, as opposed to single-task learning, for improving survival outcome prediction relative to conventional clinical imaging feature model benchmarks. Survival outcomes were prospectively collected for 45 patients with unresectable NSCLC enrolled on the FLARE-RT phase II trial of risk-adaptive chemoradiation and optional consolidation PD-L1 checkpoint blockade (NCT02773238). FDG-PET, CT, and perfusion SPECT imaging pretreatment and week 3 mid-treatment was performed and 110 IBSI-compliant pyradiomics shape-/intensity-/texture-based features from the metabolic tumor volume were extracted. Outcome modeling consisted of a fused Laplacian sparse group LASSO with component-wise gradient boosting survival regression in a multitask learning framework. Testing performance under stratified 10-fold cross-validation was evaluated for multitask learning radiomics of different imaging modalities and time points. Multitask learning models were benchmarked against conventional clinical imaging and delta radiomics models and evaluated with the concordance index (c-index) and index of prediction accuracy (IPA). FDG-PET radiomics had higher prognostic value for overall survival in test folds (c-index 0.71 [0.67, 0.75]) than CT radiomics (c-index 0.64 [0.60, 0.71]) or perfusion SPECT radiomics (c-index 0.60 [0.57, 0.63]). 
Multitask learning of pre-/mid-treatment FDG-PET radiomics (c-index 0.71 [0.67, 0.75]) outperformed benchmark clinical imaging (c-index 0.65 [0.59, 0.71]) and FDG-PET delta radiomics (c-index 0.52 [0.48, 0.58]) models. Similarly, the IPA for multitask learning FDG-PET radiomics (30%) was higher than clinical imaging (26%) and delta radiomics (15%) models. Radiomics models performed consistently under different voxel resampling conditions. Multitask learning radiomics for outcome modeling provides a clinical decision support platform that leverages longitudinal imaging information. This framework can reveal the relative importance of different imaging modalities and time points when designing risk-adaptive cancer treatment strategies.
26

Wang, Yan, Lei Zhang, Lituan Wang, and Zizhou Wang. "Multitask Learning for Object Localization With Deep Reinforcement Learning." IEEE Transactions on Cognitive and Developmental Systems 11, no. 4 (December 2019): 573–80. http://dx.doi.org/10.1109/tcds.2018.2885813.

27

Tseng, Shao-Yen, Brian Baucom, and Panayiotis Georgiou. "Unsupervised online multitask learning of behavioral sentence embeddings." PeerJ Computer Science 5 (June 10, 2019): e200. http://dx.doi.org/10.7717/peerj-cs.200.

Abstract:
Appropriate embedding transformation of sentences can aid in downstream tasks such as NLP and emotion and behavior analysis. Such efforts evolved from word vectors which were trained in an unsupervised manner using large-scale corpora. Recent research, however, has shown that sentence embeddings trained using in-domain data or supervised techniques, often through multitask learning, perform better than unsupervised ones. Representations have also been shown to be applicable in multiple tasks, especially when training incorporates multiple information sources. In this work we aspire to combine the simplicity of using abundant unsupervised data with transfer learning by introducing an online multitask objective. We present a multitask paradigm for unsupervised learning of sentence embeddings which simultaneously addresses domain adaptation. We show that embeddings generated through this process increase performance in subsequent domain-relevant tasks. We evaluate on the affective tasks of emotion recognition and behavior analysis and compare our results with state-of-the-art general-purpose supervised sentence embeddings. Our unsupervised sentence embeddings outperform the alternative universal embeddings in both identifying behaviors within couples therapy and in emotion recognition.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhang, Linjuan, Jiaqi Shi, Lili Wang, and Changqing Xu. "Electricity, Heat, and Gas Load Forecasting Based on Deep Multitask Learning in Industrial-Park Integrated Energy System." Entropy 22, no. 12 (November 30, 2020): 1355. http://dx.doi.org/10.3390/e22121355.

Full text
Abstract:
Different energy systems are closely connected with each other in an industrial-park integrated energy system (IES). Energy demand forecasting has an important impact on IES dispatching and planning. This paper proposes an approach to short-term energy forecasting for electricity, heat, and gas by employing deep multitask learning, whose structure is constructed from a deep belief network (DBN) and a multitask regression layer. The DBN can extract abstract and effective characteristics in an unsupervised fashion, and the multitask regression layer above the DBN is used for supervised prediction. Then, subject to the conditions of practical demand and model integrity, the whole energy forecasting model is introduced, including preprocessing, normalization, input properties, the training stage, and evaluation indicators. Finally, the validity of the algorithm and the accuracy of the energy forecasts for an industrial-park IES are verified through simulations using actual operating data from the load system. The positive results show that deep multitask learning has great prospects for load forecasting.
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Jiafei, Qingsong Wang, Jianda Cheng, Deliang Xiang, and Wenbo Jing. "Multitask Learning-Based for SAR Image Superpixel Generation." Remote Sensing 14, no. 4 (February 14, 2022): 899. http://dx.doi.org/10.3390/rs14040899.

Full text
Abstract:
Most of the existing synthetic aperture radar (SAR) image superpixel generation methods are designed based on raw SAR images or artificially designed features. However, such methods have the following limitations: (1) SAR images are severely affected by speckle noise, resulting in unstable pixel distance estimation. (2) Artificially designed features cannot adapt well to complex SAR image scenes, such as building regions. Aiming to overcome these shortcomings, we propose a multitask learning-based superpixel generation network (ML-SGN) for SAR images. ML-SGN first utilizes a multitask feature extractor to extract deep features and constructs a high-dimensional feature space containing intensity information, deep semantic information, and spatial information. Then, we define an effective pixel distance measure based on the high-dimensional feature space. In addition, we design a differentiable soft assignment operation instead of the non-differentiable nearest neighbor operation, so that the differentiable Simple Linear Iterative Clustering (SLIC) and the multitask feature extractor can be combined into an end-to-end superpixel generation network. Comprehensive evaluations are performed on two real SAR images with different bands, which demonstrate that our proposed method outperforms other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
30

Zheng, Weiping, Zhenyao Mo, and Gansen Zhao. "Clustering by Errors: A Self-Organized Multitask Learning Method for Acoustic Scene Classification." Sensors 22, no. 1 (December 22, 2021): 36. http://dx.doi.org/10.3390/s22010036.

Full text
Abstract:
Acoustic scene classification (ASC) tries to infer information about the environment from audio segments. Inter-class similarity is a significant issue in ASC, as acoustic scenes with different labels may sound quite similar. In this paper, the similarity relations among scenes are correlated with the classification error. A class hierarchy construction method using classification error is then proposed and integrated into a multitask learning framework. Experiments show that the proposed multitask learning method improves the performance of ASC. On the TUT Acoustic Scene 2017 dataset, we obtain an ensemble fine-grained accuracy of 81.4%, which is better than the state of the art. By using multitask learning, the basic Convolutional Neural Network (CNN) model can be improved by about 2.0 to 3.5 percent depending on the spectrogram. The coarse category accuracies (for two to six super-classes) range from 77.0% to 96.2% with single models. On the revised version of the LITIS Rouen dataset, we achieve an ensemble fine-grained accuracy of 83.9%. The multitask learning models obtain an improvement of 1.6% to 1.8% compared to their basic models. The coarse category accuracies range from 94.9% to 97.9% for two to six super-classes with single models.
APA, Harvard, Vancouver, ISO, and other styles
31

Nimbal, Pratik, and Gopal Krishna Shyam. "Multitask sparse Learning based Facial Expression Classification." International Journal of Computer Sciences and Engineering 7, no. 6 (June 30, 2019): 197–202. http://dx.doi.org/10.26438/ijcse/v7i6.197202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yao, Chunhua, Xinyu Song, Xuelei Zhang, Weicheng Zhao, and Ao Feng. "Multitask Learning for Aspect-Based Sentiment Classification." Scientific Programming 2021 (November 29, 2021): 1–9. http://dx.doi.org/10.1155/2021/2055555.

Full text
Abstract:
Aspect-level sentiment analysis identifies the sentiment polarity of aspect terms in complex sentences, which is useful in a wide range of applications. It is a highly challenging task and attracts the attention of many researchers in the natural language processing field. In order to obtain a better aspect representation, a wide range of existing methods design complex attention mechanisms to establish the connection between entity words and their context. With the limited size of data collections in aspect-level sentiment analysis, mainly because of the high annotation workload, the risk of overfitting is greatly increased. In this paper, we propose a Shared Multitask Learning Network (SMLN), which jointly trains auxiliary tasks that are highly related to aspect-level sentiment analysis. Specifically, we use opinion term extraction due to its high correlation with the main task. Through a custom-designed Cross Interaction Unit (CIU), effective information of the opinion term extraction task is passed to the main task, with performance improvement in both directions. Experimental results on SemEval-2014 and SemEval-2015 datasets demonstrate the competitive performance of SMLN in comparison to baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
33

Jin, Ran, Tengda Hou, Tongrui Yu, Min Luo, and Haoliang Hu. "A Multitask Deep Learning Framework for DNER." Computational Intelligence and Neuroscience 2022 (April 16, 2022): 1–10. http://dx.doi.org/10.1155/2022/3321296.

Full text
Abstract:
Over the years, the explosive growth of drug-related text has resulted in heavy manual data-processing workloads. The domain knowledge hidden in this text, however, is believed to be crucial to biomedical research and applications. In this article, we propose the multi-DTR model, which can accurately recognize drug-specific names by jointly modeling DNER and DNEN. Character features are extracted from the input text by a CNN, and context-sensitive word vectors are obtained using ELMo. Next, the pretrained biomedical word embeddings are fed into a BiLSTM-CRF, and the output labels of the two tasks interact to update the task parameters so that DNER and DNEN support each other. The proposed method shows better performance on the DDI2011 and DDI2013 datasets.
APA, Harvard, Vancouver, ISO, and other styles
34

Xiong, Fangzhou, Biao Sun, Xu Yang, Hong Qiao, Kaizhu Huang, Amir Hussain, and Zhiyong Liu. "Guided Policy Search for Sequential Multitask Learning." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 1 (January 2019): 216–26. http://dx.doi.org/10.1109/tsmc.2018.2800040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Pillonetto, G., F. Dinuzzo, and G. De Nicolao. "Bayesian Online Multitask Learning of Gaussian Processes." IEEE Transactions on Pattern Analysis and Machine Intelligence 32, no. 2 (February 2010): 193–205. http://dx.doi.org/10.1109/tpami.2008.297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Singh, Loitongbam Gyanendro, Akash Anil, and Sanasam Ranbir Singh. "SHE: Sentiment Hashtag Embedding Through Multitask Learning." IEEE Transactions on Computational Social Systems 7, no. 2 (April 2020): 417–24. http://dx.doi.org/10.1109/tcss.2019.2962718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Qian Xu, Sinno Jialin Pan, Hannah Hong Xue, and Qiang Yang. "Multitask Learning for Protein Subcellular Location Prediction." IEEE/ACM Transactions on Computational Biology and Bioinformatics 8, no. 3 (May 2011): 748–59. http://dx.doi.org/10.1109/tcbb.2010.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Gibert, Xavier, Vishal M. Patel, and Rama Chellappa. "Deep Multitask Learning for Railway Track Inspection." IEEE Transactions on Intelligent Transportation Systems 18, no. 1 (January 2017): 153–64. http://dx.doi.org/10.1109/tits.2016.2568758.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Habic, Vuk, Alexander Semenov, and Eduardo L. Pasiliao. "Multitask deep learning for native language identification." Knowledge-Based Systems 209 (December 2020): 106440. http://dx.doi.org/10.1016/j.knosys.2020.106440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ramsundar, Bharath, Bowen Liu, Zhenqin Wu, Andreas Verras, Matthew Tudor, Robert P. Sheridan, and Vijay Pande. "Is Multitask Deep Learning Practical for Pharma?" Journal of Chemical Information and Modeling 57, no. 8 (August 2017): 2068–76. http://dx.doi.org/10.1021/acs.jcim.7b00146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Haiqin, Michael R. Lyu, and Irwin King. "Efficient online learning for multitask feature selection." ACM Transactions on Knowledge Discovery from Data 7, no. 2 (July 2013): 1–27. http://dx.doi.org/10.1145/2499907.2499909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Fujii, Keisuke, and Yoshinobu Kawahara. "Supervised dynamic mode decomposition via multitask learning." Pattern Recognition Letters 122 (May 2019): 7–13. http://dx.doi.org/10.1016/j.patrec.2019.02.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Huaping, Fuchun Sun, and Yuanlong Yu. "Multitask Extreme Learning Machine for Visual Tracking." Cognitive Computation 6, no. 3 (January 8, 2014): 391–404. http://dx.doi.org/10.1007/s12559-013-9242-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Xu, Yong-Li, Di-Rong Chen, and Han-Xiong Li. "Least Square Regularized Regression for Multitask Learning." Abstract and Applied Analysis 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/715275.

Full text
Abstract:
The study of multitask learning algorithms is an important issue. This paper proposes a least-square regularized regression algorithm for multitask learning with a hypothesis space that is the union of a sequence of Hilbert spaces. The algorithm consists of two steps: selecting the optimal Hilbert space and searching for the optimal function. We assume that the distributions of different tasks are related by a set of transformations under which any Hilbert space in the hypothesis space is norm invariant. We prove that under this assumption the optimal prediction function of every task lies in the same Hilbert space. Based on this result, a pivotal error decomposition is established, which can use samples of related tasks to bound the excess error of the target task. We obtain an upper bound for the sample error of related tasks, and from this bound, potentially faster learning rates are obtained compared to single-task learning algorithms.
APA, Harvard, Vancouver, ISO, and other styles
45

Xu, Luhui, Jingying Chen, and Yanling Gan. "Head pose estimation using deep multitask learning." Journal of Electronic Imaging 28, no. 01 (February 7, 2019): 1. http://dx.doi.org/10.1117/1.jei.28.1.013029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Dinuzzo, F., G. Pillonetto, and G. De Nicolao. "Client–Server Multitask Learning From Distributed Datasets." IEEE Transactions on Neural Networks 22, no. 2 (February 2011): 290–303. http://dx.doi.org/10.1109/tnn.2010.2095882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Min, Wei Zhao, Wei Xu, Yabing Feng, Zhou Zhao, Xiaojun Chen, and Kai Lei. "Multitask Learning for Cross-Domain Image Captioning." IEEE Transactions on Multimedia 21, no. 4 (April 2019): 1047–61. http://dx.doi.org/10.1109/tmm.2018.2869276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Luo, Yong, Yonggang Wen, and Dacheng Tao. "Heterogeneous Multitask Metric Learning Across Multiple Domains." IEEE Transactions on Neural Networks and Learning Systems 29, no. 9 (September 2018): 4051–64. http://dx.doi.org/10.1109/tnnls.2017.2750321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Qian, Xiangyu Rui, Zhi Han, and Deyu Meng. "Multilinear Multitask Learning by Rank-Product Regularization." IEEE Transactions on Neural Networks and Learning Systems 31, no. 4 (April 2020): 1336–50. http://dx.doi.org/10.1109/tnnls.2019.2919774.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Su, Jing, Yafei Yuan, Chunmin Liu, and Jing Li. "Multitask Learning by Multiwave Optical Diffractive Network." Mathematical Problems in Engineering 2020 (July 10, 2020): 1–7. http://dx.doi.org/10.1155/2020/9748380.

Full text
Abstract:
Recently, there has been tremendous research on optical neural networks, which can complete comparatively complex computation through optical characteristics with far less dissipation than electrical networks. Existing neural networks based on optical circuits are structured as an optical grating platform with different diffractive phases at different diffractive points (Chen and Zhu, 2019; Mo et al., 2018). In this study, we propose a multiwave deep diffractive network with approximately 10^6 synapses that lends itself to hardware implementation of neuromorphic networks. In this optical architecture, different wavelengths exploit the optical diffractive characteristics to perform different tasks. Different wavelengths and different task inputs are independent of each other, so the network can perform inference on several tasks simultaneously. Experiments demonstrate that the network achieves performance comparable to single-wavelength, single-task networks. Compared to using multiple networks, a single network saves the cost of fabrication with lithography. We train the network on MNIST and MNIST-FASHION, two different datasets, to perform classification of 32×32 inputs with 10 classes. Our method achieves competitive results on both. In particular, on the more complex MNIST-FASHION task, our framework obtains an accuracy improvement of 3.2%; MNIST also shows an improvement of 1.15%.
APA, Harvard, Vancouver, ISO, and other styles