Title
Perceiver IO: A General Architecture for Structured Inputs & Outputs
Abstract
The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves strong results on tasks with highly structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation.
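The decoding mechanism the abstract describes (producing outputs by querying a fixed-size latent array, so that output cost scales linearly with the number of outputs) can be sketched roughly as follows. The array sizes and single-head, unprojected attention here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_decode(queries, latents):
    """Attend from output queries to the latent array.

    queries: (num_outputs, d) -- one query per desired output element
    latents: (num_latents, d) -- fixed-size latent representation
    returns: (num_outputs, d)
    """
    d = queries.shape[-1]
    scores = queries @ latents.T / np.sqrt(d)   # (num_outputs, num_latents)
    attn = softmax(scores, axis=-1)
    return attn @ latents                        # (num_outputs, d)

rng = np.random.default_rng(0)
num_latents, d = 256, 64
latents = rng.normal(size=(num_latents, d))      # latent size is fixed ...

# ... while the output size is set entirely by how many queries we pass,
# so decode cost grows linearly with the number of outputs.
for num_outputs in (10, 1000):
    out = cross_attention_decode(rng.normal(size=(num_outputs, d)), latents)
    assert out.shape == (num_outputs, d)
```

The key point the sketch illustrates is that the same latent array can be queried with differently shaped query sets, which is how one model can serve tasks with very different output structures.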
Year
2022
Venue
International Conference on Learning Representations (ICLR)
Keywords
Perceiver, BERT, natural language processing, optical flow, computer vision, multimodal, GLUE, ImageNet, StarCraft
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
14