Storytelling performers often rely on their audience's smiles, laughter, body movements, and other qualitative cues to gauge whether a performance is being appreciated. The current study aims to capture the temporal patterns of interaction between a performer and his or her audience. A professional rakugo storyteller performed live in a laboratory before 20 audience members aged 16 to 67 (M = 40.6, SD = 16.4). The videotaped performance was categorized with a computer-aided coding system, and the audience's reactions were quantified with a computer program that combined face tracking and background subtraction. Results demonstrated performer-audience correlations only within a particular frequency band. While the audience often smiled in response to incongruent lines and interpreted gestures, the performer sometimes delivered the points only after audience-initiated smiles and movements. This dynamic co-creation may offer insights for a variety of orators who regularly speak in front of audiences.