Communicate with text, facial expressions, speech, body language, and now American Sign Language.

Reason: Sign language is a foreign language that can benefit most from having an avatar, and I personally enjoy the challenge of calibrating fine movements using 3D editing tools such as Blender and Maya.

What it does

A mannequin signs "I love you" in ASL while an Amazon Sumerian host voices it.

How I built it

I experimented with Amazon Sumerian, Blender, and Maya.

Challenges I ran into

I couldn't edit the .fbx file in Blender, which cost me time; I ended up using Maya to import the gesture animation and edit it there.

Accomplishments that I'm proud of

Hands-on experience with the basic features of Sumerian, Blender, and Maya.

What I learned

Animation is hard. I appreciate animated films much more now.

What's next for SignWithMe

Features (from must-haves to nice-to-haves):

  • Open scene: a mannequin that says and signs “I love you.”
  • Additional inputs: text and/or voice.
  • Dress up the mannequin; add billboard that displays the words being signed.
  • Zooming in the camera when the mannequin is finger-spelling.
  • Add the gesture to the gesture marks list.

Future plan:

  • Open source.
  • Depending on feedback, "translate" more ASL vocabulary.

Built With

  • Amazon Sumerian
  • Blender
  • Maya