Title
High level high performance computing for multitask learning of time-varying models
Abstract
We propose an approach suitable for learning multiple time-varying models jointly and discuss an application in data-driven weather forecasting. The methodology relies on spectral regularization and encodes the typical multi-task learning assumption that the models lie near a common low-dimensional subspace. The arising optimization problem amounts to estimating a matrix from noisy linear measurements within a trace-norm ball. Depending on the problem, both the matrix dimensions and the number of measurements can be large. We discuss an algorithm that can handle large-scale problems and is amenable to parallelization. We then compare high-level high-performance implementation strategies that rely on Just-in-Time (JIT) decorators. In particular, the approach makes it possible to offload computations to a GPU without hard-coding computationally intensive operations in a low-level language. As such, it allows for fast prototyping and is therefore of general interest for developing and testing novel computational models.
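The abstract does not spell out the solver or the software stack. As one hedged illustration only, the Python sketch below assumes a Frank-Wolfe-style solver for the trace-norm ball constraint (a common choice, since each step needs only the leading singular pair of the gradient) and a Numba-style @njit decorator as the JIT mechanism; all names (residuals, frank_wolfe, tau, the toy data) are hypothetical and not the paper's actual implementation:

    import numpy as np
    from numba import njit
    from scipy.sparse.linalg import svds

    # Constrained problem sketched in the abstract (notation assumed here):
    #   minimize   0.5 * sum_t || X_t w_t - y_t ||^2   over W = [w_1 ... w_T]
    #   subject to ||W||_* <= tau                      (trace-norm ball)

    @njit  # just-in-time compiled plain-Python loops; no low-level language needed
    def residuals(X, W, Y):
        # X: (T, n, d) per-task inputs, W: (d, T) stacked models, Y: (T, n) targets
        T, n, d = X.shape
        R = np.empty((T, n))
        for t in range(T):
            for i in range(n):
                acc = 0.0
                for j in range(d):
                    acc += X[t, i, j] * W[j, t]
                R[t, i] = acc - Y[t, i]
        return R

    def gradient(X, W, Y):
        # Gradient of the stacked least-squares loss with respect to W.
        R = residuals(X, W, Y)
        G = np.empty_like(W)
        for t in range(X.shape[0]):
            G[:, t] = X[t].T @ R[t]
        return G

    def frank_wolfe(X, Y, tau, iters=200):
        # Projection-free updates inside the trace-norm ball: each iteration
        # adds a rank-one matrix built from the gradient's top singular pair.
        T, n, d = X.shape
        W = np.zeros((d, T))
        for k in range(iters):
            G = gradient(X, W, Y)
            u, s, vt = svds(G, k=1)               # leading singular vectors of the gradient
            S = -tau * np.outer(u[:, 0], vt[0])   # linear minimization oracle over the ball
            gamma = 2.0 / (k + 2.0)               # standard Frank-Wolfe step size
            W = (1.0 - gamma) * W + gamma * S
        return W

    # Toy usage: T tasks whose true models share a rank-2 subspace.
    rng = np.random.default_rng(0)
    T, n, d = 8, 50, 20
    W_true = rng.standard_normal((d, 2)) @ rng.standard_normal((2, T))
    X = rng.standard_normal((T, n, d))
    Y = np.stack([X[t] @ W_true[:, t] + 0.1 * rng.standard_normal(n) for t in range(T)])
    W_hat = frank_wolfe(X, Y, tau=np.linalg.norm(W_true, ord='nuc'))

The inner measurement loop is the kind of routine that the JIT decorator compiles to machine code; swapping @njit for a GPU-targeted decorator would move that kernel to the GPU without rewriting it in a low-level language, which is the prototyping workflow the abstract alludes to.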
Year
2014
DOI
10.1109/CIBD.2014.7011522
Venue
Computational Intelligence in Big Data
Keywords
geophysics computing, learning (artificial intelligence), optimisation, parallel processing, time series, weather forecasting, data-driven weather forecasting, high level high performance computing, just-in-time decorators, matrix dimensions, multiple time-varying models, multitask learning, optimization problem, spectral regularization, time series analysis
Field
Kernel (linear algebra), Multi-task learning, Supercomputer, Subspace topology, CUDA, Computer science, Theoretical computer science, Computational model, Regularization (mathematics), Computer engineering, Optimization problem
DocType
Conference
Citations
4
PageRank
0.49
References
7
Authors
4
Name                 Order  Citations  PageRank
Marco Signoretto     1      4          0.49
Emanuele Frandi      2      4          0.49
Zahra Karevan        3      4          0.49
Johan A. K. Suykens  4      6355       3.51