
Examination of Phoneme and Viseme Synchronization On Listening Task Performance
PROCEEDINGS

R. Isaacson, Defense Language Institute, Monterey, CA, United States

E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Orlando, Florida, USA. ISBN 978-1-880094-83-9. Publisher: Association for the Advancement of Computing in Education (AACE), San Diego, CA.

Abstract

This paper describes a work-in-progress research study that will examine the effect of phoneme and viseme synchronization on listening task performance. The influence of audiovisual integration on speech perception and second language acquisition (SLA) in the video telephony and videoconferencing systems used for foreign language distance learning is not fully understood. McGurk and MacDonald (1976) demonstrated that speech perception is a multimodal process that relies on both auditory and visual information. The implication for SLA in videoconferencing environments is that if network traffic or bandwidth constraints delay the audio channel so that it is no longer synchronized with the lip movements in the visual channel, a phoneme or utterance may not be perceived accurately.
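To make the synchronization problem concrete, the sketch below (a minimal illustration, not the study's materials or protocol) shows how a network-induced audio delay could be modeled against stimulus video: the audio track is padded with silence so it lags the video by a fixed offset, and that offset is converted into the number of video frames by which the visible lip movements lead the sound. The function names, sample rate, frame rate, and the 200 ms lag are all illustrative assumptions.

```python
# Illustrative sketch only: models a fixed audio-visual asynchrony such as
# one caused by network latency. Parameter values are assumptions, not
# values taken from the study.

def delay_audio(samples, sample_rate_hz, lag_ms):
    """Prepend silence so the audio lags the video by lag_ms milliseconds."""
    pad = [0.0] * int(sample_rate_hz * lag_ms / 1000)
    return pad + list(samples)

def frames_out_of_sync(lag_ms, frame_rate_fps):
    """Whole video frames by which lip movement leads the delayed audio."""
    return int(lag_ms * frame_rate_fps / 1000)

if __name__ == "__main__":
    sample_rate_hz = 44_100    # assumed audio sample rate
    frame_rate_fps = 30        # assumed video frame rate
    lag_ms = 200               # assumed network-induced audio delay

    audio = [0.1, -0.2, 0.05]  # stand-in for a decoded audio signal
    delayed = delay_audio(audio, sample_rate_hz, lag_ms)

    print(f"Padded with {len(delayed) - len(audio)} silent samples")
    print(f"Lips lead audio by {frames_out_of_sync(lag_ms, frame_rate_fps)} frames")
```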

Citation

Isaacson, R. (2010). Examination of Phoneme and Viseme Synchronization On Listening Task Performance. In J. Sanchez & K. Zhang (Eds.), Proceedings of E-Learn 2010--World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1834-1837). Orlando, Florida, USA: Association for the Advancement of Computing in Education (AACE).

References

McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746-748.