Title
Limitations of Autoregressive Models and Their Alternatives
Abstract
Standard autoregressive language models perform only polynomial-time computation to compute the probability of the next symbol. While this is attractive, it means they cannot model distributions whose next-symbol probability is hard to compute. Indeed, they cannot even model them well enough to solve associated easy decision problems for which an engineer might want to consult a language model (easy because the whole string is visible rather than only a prefix, which is the difference between checking a given assignment against a formula and asking whether any satisfying assignment exists). These limitations apply no matter how much computation and data are used to train the model, unless the model is given access to oracle parameters that grow superpolynomially in sequence length. Thus, simply training larger autoregressive language models is not a panacea for NLP. Alternatives include energy-based models (which give up efficient sampling) and latent-variable autoregressive models (which give up efficient scoring of a given string). Both are powerful enough to escape the above limitations.
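The tradeoff the abstract describes can be made concrete with the textbook definitions of the three model families it compares; the LaTeX sketch below states those standard formulas as an illustration only (the symbols x, z, E, Z, and theta are generic notation, not taken from this record or the paper).

% Standard definitions behind the abstract's comparison.
% Autoregressive: each next-symbol factor is cheap to compute, so both
% scoring a full string and sampling it left to right take time
% polynomial in the length T.
\begin{align*}
  p_{\mathrm{AR}}(x_1 \dots x_T) &= \prod_{t=1}^{T} p(x_t \mid x_1 \dots x_{t-1}) \\
% Energy-based: the unnormalized score exp(-E(x)) is easy to evaluate,
% but the partition function Z (and hence exact sampling) is in general
% intractable -- "gives up efficient sampling."
  p_{\mathrm{EBM}}(x) &= \frac{\exp(-E_\theta(x))}{Z_\theta},
  \qquad Z_\theta = \sum_{x'} \exp\bigl(-E_\theta(x')\bigr) \\
% Latent-variable autoregressive: sampling (draw z, then x given z) is
% easy, but scoring a given string requires summing over all latent z
% -- "gives up efficient scoring."
  p_{\mathrm{LV}}(x) &= \sum_{z} p_\theta(z)\, p_\theta(x \mid z)
\end{align*}

In this notation, the abstract's claims correspond to the cost of computing Z for the energy-based model and the cost of the sum over z for the latent-variable model, while the autoregressive product keeps both operations polynomial.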
Year
Venue
DocType
2021
NAACL-HLT
Conference
Citations 0
PageRank 0.34
References 0
Authors
5
Name             Order  Citations  PageRank
Chu-Cheng Lin    1      0          0.34
Aaron Jaech      2      2          1.38
Xin Li           3      4          1.74
Matthew Gormley  4      84         10.25
Jason Eisner     5      2          2.07