In daily conversations, what information do people use to assess their conversational partner’s explanations? We explore how a metacognitive cue, in particular the partner’s expressed confidence or uncertainty, can modulate the credibility of an explanation. Two experiments showed that explanations are accepted more often when delivered by an uncertain conversational partner. Experiment 1 demonstrated the general effect: participants interacted with a pseudo-autonomous robotic confederate and accepted its explanations more readily when it expressed uncertainty. Experiment 2 used the same methodology to show that the effect was specific to explanatory reasoning and did not extend to other sorts of inferences. Results are consistent with an account in which reasoners use relative confidence as a metacognitive cue to infer their conversational partner’s depth of processing.