Title
Multi-Task Learning With Localized Generalization Error Model
Abstract
In many cases, the same or a similar network architecture is used to deal with related but different tasks, where the tasks come from different statistical distributions in the input sample space but share some common features. Multi-Task Learning (MTL) trains multiple related tasks together so that a shared feature representation can be learned across them. However, when the statistical distributions of these related tasks differ greatly, it is difficult to improve every task, because an effective and generalizable feature representation is hard to extract from the multiple tasks; this also slows down the convergence of MTL. Therefore, we propose an MTL method based on the Localized Generalization Error Model (L-GEM). The L-GEM improves the generalization capability of the trained model by minimizing an upper bound of its generalization error with respect to unseen samples that are similar to the training samples. It also helps to narrow the gap between tasks with different statistical distributions in MTL. Experimental results show that the L-GEM speeds up the training process while significantly improving the final convergence results.
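The abstract gives no implementation details, but the general idea of penalizing output sensitivity around the training samples within a shared multi-task network can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' method: the two-task network SharedMTLNet, the Q-neighborhood width q, the Monte-Carlo estimate in stochastic_sensitivity, the toy random data, and the 0.5 weight on the sensitivity term are all assumptions made for the example.

# Hypothetical sketch: L-GEM-style sensitivity regularization for a
# two-task hard-parameter-sharing network (not the paper's implementation).
import torch
import torch.nn as nn

class SharedMTLNet(nn.Module):
    def __init__(self, in_dim=16, hidden=64, out_dims=(1, 1)):
        super().__init__()
        # Shared trunk plus one linear head per task.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in out_dims])

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]

def stochastic_sensitivity(model, x, q=0.1, n_samples=8):
    # Monte-Carlo estimate of E_{S_Q}[(delta y)^2]: the expected squared change
    # in the outputs when inputs are perturbed uniformly within a Q-neighborhood.
    y0 = model(x)
    total = 0.0
    for _ in range(n_samples):
        dx = (torch.rand_like(x) * 2 - 1) * q          # uniform in [-q, q]
        y1 = model(x + dx)
        total = total + sum(((a - b) ** 2).mean() for a, b in zip(y1, y0))
    return total / n_samples

model = SharedMTLNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Toy batches for two related regression tasks (random data, illustration only).
x1, y1 = torch.randn(32, 16), torch.randn(32, 1)
x2, y2 = torch.randn(32, 16), torch.randn(32, 1)

for step in range(100):
    opt.zero_grad()
    out1 = model(x1)[0]                                # task-1 head
    out2 = model(x2)[1]                                # task-2 head
    r_emp = mse(out1, y1) + mse(out2, y2)              # empirical multi-task risk
    sens = stochastic_sensitivity(model, torch.cat([x1, x2]))
    # Surrogate objective: empirical risk plus a sensitivity penalty.
    loss = r_emp + 0.5 * sens
    loss.backward()
    opt.step()

Note that in the original L-GEM literature the quantity being minimized is an upper bound of roughly the form (sqrt(R_emp) + sqrt(E_{S_Q}[(delta y)^2]) + A)^2 + epsilon rather than the simple weighted sum used above; the sketch only conveys the idea of jointly reducing training error and output sensitivity in the neighborhood of the training samples.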
Year
2019
DOI
10.1109/ICMLC48188.2019.8949255
Venue
2019 International Conference on Machine Learning and Cybernetics (ICMLC)
Keywords
Multi-Task learning, Multi-layer perceptron neural network, Convolutional neural network, Localized generalization error model
Field
Convergence (routing), Multi-task learning, Pattern recognition, Convolutional neural network, Upper and lower bounds, Computer science, Network architecture, Probability distribution, Rate of convergence, Artificial intelligence, Generalization error
DocType
Conference
ISSN
2160-133X
ISBN
978-1-7281-2817-7
Citations
0
PageRank
0.34
References
11
Authors
4
Name, Order, Citations, PageRank
Wendi Li, 1, 0, 1.35
Yi Zhu, 2, 0, 0.34
Ting Wang, 3, 725, 120.28
Wing W. Y. Ng, 4, 528, 56.12