Title
What Makes Good In-Context Examples for GPT-3?
Abstract
GPT-3 has attracted considerable attention due to its superior performance across a wide range of NLP tasks, especially its in-context learning abilities. Despite its success, we found that the empirical results of GPT-3 depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3's in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically similar to a test query sample to formulate its corresponding prompt. Intuitively, the examples selected with such a strategy may serve as more informative inputs to unleash GPT-3's power of text generation. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random selection baseline. Moreover, we observe that sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (44.3% on the ToTTo dataset) and open-domain question answering (45.5% on the NQ dataset).
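To make the retrieval-based selection strategy described in the abstract concrete, the following is a minimal sketch: embed the candidate training examples and the test query with a sentence encoder, then take the k nearest neighbors (by cosine similarity) as the in-context examples for the prompt. The encoder checkpoint, the toy QA examples, and the prompt template here are illustrative assumptions, not the paper's exact setup (the paper additionally fine-tunes encoders on task-related data).

import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed off-the-shelf encoder; the paper also uses task-fine-tuned encoders.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical training pool of (question, answer) pairs.
train_examples = [
    ("who wrote hamlet", "William Shakespeare"),
    ("capital of france", "Paris"),
    ("who painted the mona lisa", "Leonardo da Vinci"),
]

# Pre-compute L2-normalized embeddings of the training questions once.
train_vecs = encoder.encode([q for q, _ in train_examples])
train_vecs = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)

def build_prompt(query: str, k: int = 2) -> str:
    """Retrieve the k most similar training examples and format a prompt."""
    q_vec = encoder.encode([query])[0]
    q_vec = q_vec / np.linalg.norm(q_vec)
    scores = train_vecs @ q_vec            # cosine similarity via dot product
    top_k = np.argsort(-scores)[:k]        # indices of the k nearest neighbors
    blocks = [f"Q: {train_examples[i][0]}\nA: {train_examples[i][1]}"
              for i in top_k]
    # In-context examples first, then the test query for GPT-3 to complete.
    return "\n\n".join(blocks) + f"\n\nQ: {query}\nA:"

print(build_prompt("who is the author of macbeth"))

The prompt produced this way is then sent to GPT-3; only the example-selection step changes relative to the random-sampling baseline.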
Year
2022
DOI
10.18653/v1/2022.deelio-1.10
Venue
Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name | Order | Citations | PageRank
Jiachang Liu | 1 | 0 | 0.34
Dinghan Shen | 2 | 108 | 10.37
Yizhe Zhang | 3 | 0 | 0.34
Bill Dolan | 4 | 2137 | 132.21
L. Carin | 5 | 4603 | 339.36
Weizhu Chen | 6 | 597 | 38.77