Do Neural Language Representations Learn Physical Commonsense?
- Maxwell Forbes, University of Washington, Seattle, Washington, United States
- Ari Holtzman, University of Washington, Seattle, Washington, United States
- Yejin Choi, Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States
Abstract
Humans understand language based on rich background knowledge about how the physical world works, which in turn allows us to reason about the physical world through language. In addition to the properties of objects (e.g., boats require fuel) and their affordances, i.e., the actions that are applicable to them (e.g., boats can be driven), we can also make if–then inferences about which properties of objects imply which kinds of actions are applicable to them (e.g., if we can drive something, then it likely requires fuel). In this paper, we investigate the extent to which state-of-the-art neural language representations, trained on a vast amount of natural language text, demonstrate physical commonsense reasoning. While recent advances in neural language models have yielded strong performance on various types of natural language inference tasks, our study, based on a dataset of over 200k newly collected annotations, suggests that neural language representations still only learn associations that are explicitly written down.