Title
PIQA: Reasoning about Physical Commonsense in Natural Language
Abstract
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains such as news articles and encyclopedia entries, where text is plentiful, text in more physical domains is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical commonsense questions without experiencing the physical world? In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset, Physical Interaction: Question Answering, or PIQA. Though humans find the dataset easy (95% accuracy), large pretrained models struggle (~75%). We provide analysis of the dimensions of knowledge that existing models lack, which offers significant opportunities for future research.
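For illustration, here is a minimal sketch of the task format the abstract describes: each instance pairs a physical goal with two candidate solutions, and models are scored by accuracy against a labeled answer. The field names (goal, sol1, sol2, label) and the random-choice baseline are assumptions made for this sketch, not details taken from the record above.

import random

# A PIQA-style instance (field names assumed for illustration): a physical
# goal, two candidate solutions, and `label` marking the correct one (0 or 1).
example = {
    "goal": "To apply eyeshadow without a brush,",
    "sol1": "use a cotton swab.",
    "sol2": "use a toothpick.",
    "label": 0,
}

def accuracy(examples, predict):
    # Fraction of instances where the predicted solution index matches the label.
    correct = sum(predict(ex) == ex["label"] for ex in examples)
    return correct / len(examples)

# Chance-level baseline: pick one of the two solutions at random (~50%),
# well below the human (95%) and pretrained-model (~75%) accuracies cited above.
random.seed(0)
print(accuracy([example] * 1000, lambda ex: random.randint(0, 1)))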
Year
2020
Venue
THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE
DocType
Conference
Volume
34
ISSN
2159-5399
Citations
3
PageRank
0.38
References
0
Authors
5
Name              Order  Citations  PageRank
Yonatan Bisk      1      196        17.54
Rowan G. Zellers  2      110        7.55
Ronan Le Bras     3      411        5.74
Jianfeng Gao      4      5729       296.43
Yejin Choi        5      2239       153.18