Other people are the most important source of information in a child’s life, and faces are a key channel for social information: they can convey affective, linguistic, and referential information through expressions, speech, and eye gaze. But for children to apprehend this information, it must be accessible. How much of the time can children actually see the faces of the people around them? We use data from a head-mounted camera, in combination with face-detection methods from computer vision, to address this question in a scalable, automatic fashion. We develop a detection system from off-the-shelf methods and show that it produces robust results. Data from a single child’s visual experience suggest systematic changes in the visibility of faces across the first year, possibly driven by postural shifts.