Title
Multi-Task Learning By Pareto Optimality
Abstract
Deep Neural Networks (DNNs) are often criticized for their inability to learn more than one task at a time; Multitask Learning is an emerging research area that aims to overcome this limitation. In this work, we introduce the Pareto Multitask Learning framework as a tool that shows how effectively a DNN learns a shared representation common to a set of tasks. We also show experimentally that the optimization process can be extended so that a single DNN simultaneously learns to master two or more Atari games: with a single weight parameter vector, our network obtains sub-optimal results on up to four games.
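As context for the abstract's Pareto framing (and the "Hypervolume" keyword below), here is a minimal illustrative sketch, not the paper's implementation: given per-task score vectors for candidate networks, it extracts the Pareto-optimal set and computes the 2-D hypervolume indicator, assuming higher scores are better and a reference point of the caller's choosing.

```python
# Illustrative sketch only (not the authors' code): Pareto dominance and
# 2-D hypervolume over per-task score vectors, higher-is-better convention.

def dominates(a, b):
    """True if score vector a Pareto-dominates b (>= everywhere, > somewhere)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of score vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def hypervolume_2d(front, ref=(0.0, 0.0)):
    """Area dominated by a 2-D front relative to the reference point ref."""
    pts = sorted(front, key=lambda p: p[0], reverse=True)  # descending in objective 1
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:  # each point adds a horizontal strip above the previous one
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv
```

For example, with scores on two games, `pareto_front([(3, 1), (2, 2), (1, 3), (1, 1)])` keeps the first three vectors, and `hypervolume_2d([(3, 1), (2, 2), (1, 3)])` returns 6.0; a larger hypervolume indicates a set of networks that trades off the tasks more effectively.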
Year
2019
DOI
10.1007/978-3-030-37599-7_50
Venue
MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE
Keywords
Multitask learning, Neural and evolutionary computing, Deep neuroevolution, Hypervolume, Kullback-Leibler Divergence, Evolution Strategy, Deep artificial neural networks, Atari 2600 Games
Field
Mathematical optimization, Multi-task learning, Computer science, Pareto principle
DocType
Conference
Volume
11943
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
4
Name                      Order  Citations  PageRank
Deyan Dyankov             1      0          0.34
Salvatore Danilo Riccio   2      0          0.34
Giuseppe Di Fatta         3      529        39.23
Giuseppe Nicosia          4      0          1.69