Like language, the human capacity to create music is one of the most salient markers that differentiate humans from other species (Cross, 2005). In the present study, the authors show that people's ability to perceive emotion in infants' vocalizations (e.g., cooing and babbling) is linked to their ability to perceive the timbre of musical instruments. In one experiment, 180 "synthetic baby sounds" were created by rearranging the spectral frequencies of cooing, babbling, crying, and laughing produced by 6- to 9-month-old infants. Undergraduate participants (N = 145) listened to each sound one at a time and rated its emotional quality. The results showed that five acoustic components of musical timbre (e.g., spectral roll-off, mel-frequency cepstral coefficients, attack time, and attack slope) accounted for nearly 50% of the variance in the participants' emotion ratings. These results suggest that the same mental processes are likely applied to the perception of musical timbre and to that of infants' prelinguistic vocalizations.
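The core analysis described above is a multiple regression of emotion ratings on a small set of timbre features, with the variance explained (R²) as the headline figure. The sketch below illustrates that kind of analysis on synthetic data; the feature names, weights, and noise level are illustrative assumptions, not the study's actual data or code.

```python
import numpy as np

# Illustrative sketch only: regress per-sound emotion ratings on five timbre
# features and report R^2. All values here are simulated, not the study's data.
rng = np.random.default_rng(0)

n_sounds = 180  # the study used 180 "synthetic baby sounds"
# Five illustrative timbre features per sound (standing in for measures such
# as spectral roll-off, MFCC summaries, attack time, and attack slope).
X = rng.normal(size=(n_sounds, 5))

# Simulate ratings in which the features carry roughly half the variance,
# loosely mirroring the ~50% figure reported in the study.
true_weights = np.array([0.6, -0.4, 0.5, 0.3, -0.2])
ratings = X @ true_weights + rng.normal(scale=1.0, size=n_sounds)

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n_sounds), X])
coef, *_ = np.linalg.lstsq(X1, ratings, rcond=None)

# R^2 = 1 - residual sum of squares / total sum of squares.
predicted = X1 @ coef
ss_res = np.sum((ratings - predicted) ** 2)
ss_tot = np.sum((ratings - ratings.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

With these simulated weights and noise, the fitted model explains roughly half the rating variance, which is the shape of the result the study reports for its five timbre components.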