How people detect incomplete explanations
- Joanna Korman, Navy Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, Washington, DC, United States
- Sangeet Khemlani, Navy Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, Washington, DC, United States
Abstract: In theory, there is no bound on a causal explanation – every explanation can be elaborated further. Yet reasoners rate some explanations as more complete than others. To account for this behavior, we developed a novel theory of how reasoners detect explanatory incompleteness. The theory is based on the idea that reasoners construct mental models of causal explanations. By default, each causal relation refers to a single mental model. Reasoners should consider an explanation complete when they can construct a single mental model, but incomplete when they must consider multiple models. Reasoners should thus rate causal chains (e.g., A causes B and B causes C) as more complete than "common cause" explanations (e.g., A causes B and A causes C) or "common effect" explanations (e.g., A causes C and B causes C). Two experiments validate the theory's prediction. The data suggest that reasoners construct mental models when generating explanations.