Title
Universal Context Tree Least Squares Prediction
Abstract
In this paper we investigate the problem of sequential prediction of individual sequences using a competitive algorithm approach. In previous work, we developed prediction algorithms that are universal with respect to the class of all linear predictors, such that our prediction algorithm competes against a continuous class of prediction algorithms under the square error loss. In this paper, we introduce the use of a "context tree" to compete against a doubly exponential number of piecewise linear models. We use the context tree to achieve the performance of the best piecewise linear model that can choose both its partition of the real line and its real-valued prediction parameters based on observing the entire sequence in advance, under the square error loss, uniformly, for any individual sequence. This performance is achieved with a prediction algorithm whose complexity is only linear in the depth of the context tree. We first present results for the piecewise linear regression problem when the boundaries of each region are fixed and known, and demonstrate an algorithm that achieves the performance of the best piecewise linear regressor for a given partition of the real line. We then extend these results to the case in which the boundaries of each region are also design parameters. In this case, we try to achieve the performance of the best piecewise linear regressor when the regressor has the additional freedom of choosing the boundaries of each region from a large class of possible partitions. These partitions are compactly represented using a "context tree" (1). Here, we have neither a priori knowledge of the selected partition nor of the best model parameters given that partition. We initially investigate scalar piecewise linear regression, such that each prediction algorithm in the competition class is a function of only the latest observation, i.e., x(t − 1). We then show how these results can be extended to higher-order regression models.
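To illustrate the "doubly exponential number of piecewise linear models" claim, the sketch below counts the distinct partitions of (−A, A) represented by a full binary context tree of a given depth. It assumes the standard context-tree counting recursion (each node is either kept as a single region or replaced by the partitions of its two children), which gives N(d) = N(d−1)² + 1 with N(0) = 1; the function name is illustrative, not from the paper.

```python
def num_partitions(depth):
    """Count the distinct partitions of (-A, A) represented by a full
    binary context tree of the given depth.

    Each node either remains a single region (1 choice) or is refined
    into any partition of its left child combined with any partition of
    its right child, giving N(d) = N(d-1)**2 + 1 with N(0) = 1.
    """
    n = 1
    for _ in range(depth):
        n = n * n + 1
    return n
```

For example, a depth-3 tree already encodes 26 partitions, and the count grows roughly like c^(2^d), i.e., doubly exponentially in the depth, while the algorithm's per-step complexity stays linear in the depth.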
We start our discussion with the case in which the boundaries of each region are fixed and known. Given such a partition ∪_{j=1}^{J} R_j = (−A, A), real-valued sequences x^n = {x(t)}_{t=1}^{n} and y^n = {y(t)}_{t=1}^{n} are assumed to be bounded but are otherwise arbitrary, in that |x(t)| < A_x for some A_x < ∞ and |y(t)| < A_y for some A_y < ∞, respectively. Given past values of the desired signal x(t), t = 1, . . . , n − 1, and a sequence of observations
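The fixed-partition setting above can be sketched as follows: for each region R_j of a known partition, fit a separate affine least-squares model sequentially, and predict x(t) with the model of whichever region contains x(t − 1). This is a minimal sketch, not the paper's exact algorithm; the regularization constant and function name are illustrative assumptions.

```python
import numpy as np

def piecewise_ls_predict(x, boundaries):
    """Sequentially predict x(t) from x(t-1) with one affine
    least-squares model per region of a fixed partition of (-A, A).

    x          : 1-D array of observations
    boundaries : sorted interior region boundaries; region j covers
                 values between boundaries[j-1] and boundaries[j]
    Returns the array of predictions for t = 1, ..., n-1.
    """
    J = len(boundaries) + 1
    # Per-region sufficient statistics for the 2x2 normal equations,
    # with a small regularizer so the system is always invertible.
    R = [np.eye(2) * 1e-3 for _ in range(J)]  # autocorrelation
    p = [np.zeros(2) for _ in range(J)]       # cross-correlation
    preds = []
    for t in range(1, len(x)):
        u = np.array([x[t - 1], 1.0])         # affine regressor
        j = np.searchsorted(boundaries, x[t - 1])
        w = np.linalg.solve(R[j], p[j])       # current LS weights
        preds.append(w @ u)
        # Update region j's statistics with the revealed pair.
        R[j] += np.outer(u, u)
        p[j] += u * x[t]
    return np.array(preds)
```

On a sequence generated by a single affine recursion, the per-region models recover the underlying coefficients and the prediction error vanishes as t grows.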
Year: 2006
DOI: 10.1109/ISIT.2006.261704
Venue: International Symposium on Information Theory
Keywords: competitive algorithms, least squares approximations, piecewise linear techniques, sequences, trees (mathematics), competitive algorithm approach, individual sequences, linear predictors, piecewise linear model, sequential prediction, square error loss, universal context tree least squares prediction
Field: Sequence prediction, Least squares, Mathematical optimization, Exponential function, Real line, Computer science, Square error, Prediction algorithms, Partition (number theory), Piecewise linear model
DocType: Conference
Citations: 2
PageRank: 0.50
References: 10
Authors: 2
Name | Order | Citations | PageRank
Andrew C. Singer | 1 | 1224 | 104.92
Suleyman S. Kozat | 2 | 212 | 15.94