Title: A Transformer-based Framework for Multivariate Time Series Representation Learning
Abstract: We present a novel framework for multivariate time series representation learning based on the transformer encoder architecture. The framework includes an unsupervised pre-training scheme, which can offer substantial performance benefits over fully supervised learning on downstream tasks, both with and even without leveraging additional unlabeled data, i.e., by reusing the existing data samples. Evaluating our framework on several public multivariate time series datasets from various domains and with diverse characteristics, we demonstrate that it performs significantly better than the best currently available methods for regression and classification, even for datasets which consist of only a few hundred training samples. Given the pronounced interest in unsupervised learning for nearly all domains in the sciences and in industry, these findings represent an important landmark, presenting the first unsupervised method shown to push the limits of state-of-the-art performance for multivariate time series regression and classification.
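The abstract describes an unsupervised pre-training scheme in which the model reconstructs hidden portions of the input series. Below is a minimal, illustrative sketch of such a masked-reconstruction objective: contiguous spans of each variable are hidden, the model predicts the full series from the masked input, and the loss is computed only on the masked positions. All names are hypothetical, and a single linear map stands in for the transformer encoder; this is a sketch of the general idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def span_mask(length, mask_ratio=0.15, mean_span=3, rng=rng):
    """Boolean mask over one variable's time axis (True = hidden).
    Contiguous spans of geometric length are hidden until roughly
    mask_ratio of the positions are covered. Illustrative stand-in
    for a span-masking scheme."""
    mask = np.zeros(length, dtype=bool)
    while mask.mean() < mask_ratio:
        start = rng.integers(0, length)
        span = rng.geometric(1.0 / mean_span)  # span length >= 1
        mask[start:start + span] = True
    return mask

def masked_reconstruction_loss(x, w):
    """One pre-training step on a sample x of shape (T, D):
    hide masked positions, map the masked input through w (a toy
    stand-in for the transformer encoder plus output layer), and
    score MSE only on the masked positions."""
    T, D = x.shape
    mask = np.stack([span_mask(T) for _ in range(D)], axis=1)  # (T, D)
    x_in = np.where(mask, 0.0, x)   # masked input fed to the model
    x_hat = x_in @ w                # toy "model" prediction
    err = (x_hat - x)[mask]         # evaluate only hidden positions
    return float(np.mean(err ** 2))

x = rng.normal(size=(100, 4))       # one sample: 100 time steps, 4 variables
w = np.eye(4)                       # identity weights for the toy model
loss = masked_reconstruction_loss(x, w)
```

Because no unlabeled data beyond the training samples themselves is needed for this objective, pre-training can reuse the existing dataset, which matches the abstract's claim of gains "even without leveraging additional unlabeled data".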
Year: 2021
DOI: 10.1145/3447548.3467401
Venue: KDD
Keywords: transformer, deep learning, multivariate time series, unsupervised learning, self-supervised learning, framework, regression, classification, imputation
DocType: Conference
Citations: 2
PageRank: 0.64
References: 0
Authors: 5
Name                  Order  Citations  PageRank
George Zerveas        1      2          1.31
Srideepika Jayaraman  2      2          1.65
Dhaval Patel          3      2          1.65
Anuradha Bhamidipaty  4      2          1.65
Carsten Eickhoff      5      365        39.21