Motherese by Eye and Ear: Infants Perceive Visual Prosody in Point-Line Displays of Talking Heads

Abstract: Infant-directed (ID) speech provides exaggerated auditory and visual prosodic cues. Here we investigated whether infants are sensitive to the match between the auditory and visual correlates of ID speech prosody. We presented 8-month-old infants with two silent line-joined point-light displays of faces speaking different ID sentences, together with a single vocal-only sentence matched to one of the displays. Infants looked longer at the matched than the mismatched visual signal when full-spectrum speech was presented, and also when the vocal signals contained speech low-pass filtered at 400 Hz. When the visual display was separated into rigid (head only) and non-rigid (face only) motion, infants looked longer at the visual match in the rigid condition and at the visual mismatch in the non-rigid condition. Overall, the results suggest that 8-month-olds can extract information about the prosodic structure of speech from voice and head kinematics and are sensitive to their match, and that they are less sensitive to the match between lip and voice information in connected speech.
Document type: Journal articles

https://hal-univ-paris10.archives-ouvertes.fr/hal-01478469
Submitted on: Tuesday, February 28, 2017 - 10:46:45 AM
Last modification on: Tuesday, November 19, 2019 - 9:34:28 AM

Citation

Christine Kitamura, Bahia Guellaï, Jeesun Kim. Motherese by Eye and Ear: Infants Perceive Visual Prosody in Point-Line Displays of Talking Heads. PLoS ONE, Public Library of Science, 2014, 9 (10), pp.e111467. ⟨10.1371/journal.pone.0111467⟩. ⟨hal-01478469⟩
