Verbal overshadowing refers to the phenomenon whereby verbalizing a non-verbal stimulus (e.g., "he had slanted eyes") impairs subsequent non-verbal recognition accuracy. To investigate the mechanism underlying this phenomenon, we constructed a computational model trained to generate an individual-face-specific representation from a noise-filtered retinotopic face input (i.e., to perform face recognition). When the model verbalized the facial features before receiving the retinotopic input, it incorrectly recognized a new face input as one of the different, yet visually similar, trained items (that is, a false alarm occurred). In contrast, this recognition error did not occur without prior verbalization. Close inspection of the model revealed that verbalization changed the internal representation such that it lacked the fine-grained information necessary to discriminate visually similar faces. This supports the view that verbalization renders fine-grained non-verbal representations unavailable or degraded, thereby impairing recognition accuracy.
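The degradation mechanism described above can be illustrated with a minimal toy sketch (not the paper's actual model): faces are vectors of fine-grained features, "verbalization" is modeled as coarse quantization of those features, and recognition is a nearest-neighbor match. All names, feature values, and thresholds here are illustrative assumptions.

```python
import math

# Hypothetical toy setup: each face is a vector of fine-grained features in
# [0, 1]. Values and thresholds are illustrative, not taken from the paper.

def verbalize(face, levels=2):
    """Coarsen a face into a verbal-like code by quantizing each feature
    to a small number of discrete levels, discarding fine-grained detail."""
    return [round(f * (levels - 1)) / (levels - 1) for f in face]

def recognize(probe, gallery, threshold=0.15):
    """Nearest-neighbor recognition: report a match ("old") if any stored
    face lies closer to the probe than the threshold."""
    return min(math.dist(probe, g) for g in gallery) < threshold

# Two trained ("stored") faces.
gallery = [[0.20, 0.80, 0.30],
           [0.90, 0.10, 0.70]]

# A new face that is visually similar to gallery[0] but differs in fine detail.
new_face = [0.30, 0.70, 0.40]

# With fine-grained representations, the small feature differences keep the
# new face distinguishable from the stored faces.
print(recognize(new_face, gallery))  # False (correct rejection)

# After "verbalization", the coarse codes of the new face and gallery[0]
# collapse onto the same point, so the new face falsely matches a stored item.
print(recognize(verbalize(new_face),
                [verbalize(g) for g in gallery]))  # True (false alarm)
```

In this sketch the quantization plays the role of the degraded internal representation: the information that separated the new face from its visually similar stored neighbor is exactly what the coarse verbal code throws away.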