Title
How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis
Abstract
Recently, there has been a trend toward investigating the factual knowledge captured by Pre-trained Language Models (PLMs). Many works show PLMs' ability to fill in the missing factual words in cloze-style prompts such as "Dante was born in [MASK]." However, it remains a mystery how PLMs generate the correct results: do they rely on effective clues or on shortcut patterns? We try to answer this question with a causal-inspired analysis that quantitatively measures and evaluates the word-level patterns that PLMs depend on to generate the missing words. We check the words that have three typical associations with the missing words: knowledge-dependent, positionally close, and highly co-occurring. Our analysis shows that (1) PLMs generate the missing factual words more by relying on the positionally close and highly co-occurring words than on the knowledge-dependent words, and (2) the dependence on the knowledge-dependent words is more effective than that on the positionally close and highly co-occurring words. Accordingly, we conclude that PLMs capture factual knowledge ineffectively because they depend on inadequate associations.
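The cloze-style probing described in the abstract can be reproduced with an off-the-shelf masked language model. Below is a minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; these are illustrative choices, not the authors' exact setup.

```python
# Minimal sketch of cloze-style factual probing with a masked LM.
# Assumes the Hugging Face transformers library and bert-base-uncased
# as an illustrative checkpoint; the paper's actual setup may differ.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The abstract's example prompt; BERT's mask token is "[MASK]".
for pred in fill_mask("Dante was born in [MASK]."):
    print(f"{pred['token_str']}\t{pred['score']:.4f}")
```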
Year
2022
DOI
10.18653/v1/2022.findings-acl.136
Venue
Findings of the Association for Computational Linguistics (ACL 2022)
DocType
Conference
Volume
Findings of the Association for Computational Linguistics: ACL 2022
Citations
0
PageRank
0.34
References
0
Authors
9
Name           Order  Citations  PageRank
Shaobo Li      1      0          0.68
Xiaoguang Li   2      141        19.54
Lifeng Shang   3      485        30.96
Zhenhua Dong   4      0          0.68
Chengjie Sun   5      198        26.21
Bingquan Liu   6      170        27.47
Zhenzhou Ji    7      107        20.11
Xin Jiang      8      150        32.43
Qun Liu        9      2149       203.11