| Title |
|---|
| Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models |
| Abstract |
|---|
| Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning. In this work, we show that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering. In fact, one can use null prompts, prompts that contain neither task-specific templates nor training examples, and achieve competitive accuracy to manually-tuned prompts across a wide range of tasks. While finetuning LMs does introduce new parameters for each downstream task, we show that this memory overhead can be substantially reduced: finetuning only the bias terms can achieve comparable or better accuracy than standard finetuning while only updating 0.1% of the parameters. All in all, we recommend finetuning LMs for few-shot learning as it is more accurate, has relatively stable performance across different prompts, and can be made nearly as efficient as using frozen LMs. |
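The abstract's two key ingredients, null prompts and bias-only finetuning, can be illustrated with a short sketch. The snippet below is a hedged illustration rather than the authors' code: it assumes a Hugging Face masked LM (`roberta-base`), a made-up sentiment example, and an assumed verbalizer token, and shows how freezing everything except the bias terms leaves roughly 0.1% of the parameters trainable.

```python
# Sketch of bias-only finetuning with a "null prompt" (input + [MASK], no template).
# Model name, example sentence, and verbalizer token are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "roberta-base"  # assumption: any pretrained masked LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Freeze every parameter except those whose name ends in "bias".
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")  # roughly 0.1%

# Null prompt: the raw input followed by a mask token, with no task-specific template.
text = "A visually stunning film ."                     # hypothetical sentiment example
inputs = tokenizer(text + " " + tokenizer.mask_token, return_tensors="pt")
label_id = tokenizer.convert_tokens_to_ids("\u0120great")  # assumed verbalizer for "positive"

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
outputs = model(**inputs)
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
loss = torch.nn.functional.cross_entropy(
    outputs.logits[0, mask_pos].unsqueeze(0), torch.tensor([label_id])
)
loss.backward()       # gradients flow only to the bias terms
optimizer.step()
```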
| Year | DOI | Venue |
|---|---|---|
| 2022 | 10.18653/v1/2022.findings-acl.222 | Findings of the Association for Computational Linguistics (ACL 2022) |

| DocType | Volume | Citations |
|---|---|---|
| Conference | Findings of the Association for Computational Linguistics: ACL 2022 | 0 |

| PageRank | References | Authors |
|---|---|---|
| 0.34 | 0 | 6 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Robert L. Logan IV | 1 | 12 | 3.50 |
| Ivana Balazevic | 2 | 0 | 1.35 |
| Eric Wallace | 3 | 18 | 7.45 |
| Fabio Petroni | 4 | 0 | 0.34 |
| Sameer Singh | 5 | 1060 | 71.63 |
| Sebastian Riedel | 6 | 0 | 0.34 |