Linguistics Colloquium: Matt Huenerfauth (GC/Queens College)

MAY 09, 2013 | 4:15 PM TO 6:15 PM

The Graduate Center
365 Fifth Avenue

Learning to Generate Understandable Animations of American Sign Language

A majority of deaf high school graduates in the U.S. read English at or below a fourth-grade level, so computer-generated animations of American Sign Language (ASL) could make more information and services accessible to these individuals. Instead of presenting English text on websites or in computer software, information could be conveyed as animations of virtual human characters performing ASL, produced either by automatic translation software or by an ASL-knowledgeable human scripting the animation. Unfortunately, producing animations whose linguistic details are accurate enough to be clear and understandable is difficult, and methods are needed for automating the creation of high-quality ASL animations.

This talk will discuss my lab's research, which lies at the intersection of assistive technology for people with disabilities, computational linguistics, and the linguistics of ASL. Our methodology includes experimental evaluation studies with native ASL signers, motion-capture collection of an ASL corpus, linguistic analysis of that corpus, statistical modeling techniques, and animation synthesis technologies. In this way, we investigate new models underlying the accurate and natural movements of virtual human characters performing ASL; our current work focuses on modeling how signers use 3D points in space and how this affects the hand movements required for ASL verb signs.