Inspiration

We wanted to use the Deepgram API as a force for social good. It was apparent to us that in the domain of speech recognition for accessibility, there are many applications for practical matters like word processing and filling out forms, but far fewer for expressing yourself and creating art. We believe that this is an unfortunate missed opportunity.

Since the start of the pandemic, Pictionary-esque games and applications focused on creative expression have taken off as a way of connecting with people. Even during this very Hackathon we took part in MS Paint Bob Ross tutorials. But these experiences typically assume able-bodied control of a mouse or touchpad, at a level that can be difficult or impossible for many people with motor impairments. And even those who wouldn't normally think of themselves as struggling can be affected: we know first-hand how drawing on a laptop can, over a relatively short period, cause pain in the form of RSI.

What it does

Our project introduces a hands-free painting experience. Voice commands like "bold", "down", and "go" control the stroke. Going beyond merely mimicking mouse-controlled paint apps, advanced features include voice-controlled colour mixing, "shortcuts" that jump between bounded regions of the painting, and a velocity-acceleration mode for the brush.
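
To give a flavour of the command layer, here is a minimal sketch in which recognised words are looked up in a table and applied as updates to the brush state. The command names and step sizes are illustrative, not our exact implementation:

```typescript
// Minimal sketch of a voice-command layer: recognised words are
// looked up in a table and applied as updates to the brush state.
// Command names and step sizes here are illustrative.
type BrushState = {
  x: number;
  y: number;
  dx: number;      // current direction of travel
  dy: number;
  size: number;    // stroke weight
  painting: boolean;
};

const COMMANDS: Record<string, (b: BrushState) => void> = {
  go:    (b) => { b.painting = true; },
  stop:  (b) => { b.painting = false; },
  up:    (b) => { b.dx = 0;  b.dy = -1; },
  down:  (b) => { b.dx = 0;  b.dy = 1; },
  left:  (b) => { b.dx = -1; b.dy = 0; },
  right: (b) => { b.dx = 1;  b.dy = 0; },
  bold:  (b) => { b.size += 4; },
  fine:  (b) => { b.size = Math.max(1, b.size - 4); },
};

// Called with each transcript the speech API sends back.
function applyTranscript(transcript: string, brush: BrushState): void {
  for (const word of transcript.toLowerCase().split(/\s+/)) {
    COMMANDS[word]?.(brush); // silently skip non-command words
  }
}
```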

Try it out yourself at art-iculate.tech (and discover a new meaning of "a picture is worth a thousand words")!

How we built it

We used the Deepgram API for speech recognition, React as our web framework, and p5.js as the canvas layer on top of which we built the core drawing algorithms. Git for version control helped us distribute and then merge packages of work amongst team members.
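
In outline, the browser streams microphone audio to Deepgram and feeds the transcripts it sends back into the drawing logic. A rough sketch of that wiring, assuming Deepgram's live WebSocket API (the callback and chunk interval are placeholders, and error handling is omitted):

```typescript
// Rough sketch of the browser-side wiring, assuming Deepgram's
// live WebSocket API; error handling and reconnection omitted.
async function startListening(
  apiKey: string,
  onTranscript: (text: string) => void,
): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const socket = new WebSocket('wss://api.deepgram.com/v1/listen', [
    'token',
    apiKey,
  ]);

  socket.onopen = () => {
    // Ship mic audio to Deepgram in small chunks as it is recorded.
    const recorder = new MediaRecorder(stream);
    recorder.addEventListener('dataavailable', (event) => {
      if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
        socket.send(event.data);
      }
    });
    recorder.start(250); // emit a chunk every 250 ms
  };

  socket.onmessage = (message) => {
    const result = JSON.parse(message.data);
    const transcript = result.channel?.alternatives?.[0]?.transcript;
    if (transcript && result.is_final) {
      onTranscript(transcript); // hand finalised text to the command layer
    }
  };
}
```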

Challenges we ran into

  • Turning often-inexact speech transcriptions into exact commands (see the sketch after this list).
  • Designing a UI and UX that are intuitive to use.
  • The algorithmic challenges behind the drawing tools.
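
One simple way to attack the first of these is to snap each transcribed word to the nearest known command by edit distance, so that a misheard "bowled" still triggers "bold". A sketch, with an assumed threshold of at most two edits:

```typescript
// Snap a (possibly misheard) word to the nearest known command by
// Levenshtein edit distance. The threshold is an assumption.
function editDistance(a: string, b: string): number {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                  // deletion
        d[i][j - 1] + 1,                                  // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
  return d[a.length][b.length];
}

function nearestCommand(word: string, commands: string[]): string | null {
  let best: string | null = null;
  let bestDist = 3; // tolerate at most two edits
  for (const c of commands) {
    const dist = editDistance(word, c);
    if (dist < bestDist) { bestDist = dist; best = c; }
  }
  return best;
}

// nearestCommand('bowled', ['bold', 'go', 'stop']) → 'bold'
```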

Accomplishments that we're proud of

Integrating p5 into a React environment was something none of us had tried before, though we each had some experience with one or the other. Being able to communicate and share expertise to integrate the two platforms so deeply and effectively was very rewarding.
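
For anyone curious about the general shape of such an integration, here is a minimal sketch of the standard pattern: an instance-mode p5 sketch mounted from a React ref, so its lifecycle follows the component's. (A real drawing component would do far more.)

```tsx
import { useEffect, useRef } from 'react';
import p5 from 'p5';

// Host an instance-mode p5 sketch inside a React component.
export function Canvas() {
  const host = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const sketch = (p: p5) => {
      p.setup = () => p.createCanvas(800, 600);
      p.draw = () => {
        // advance the brush and render the stroke here
      };
    };
    const instance = new p5(sketch, host.current!);
    return () => instance.remove(); // tear down with the component
  }, []);

  return <div ref={host} />;
}
```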

In researching our project we came across a similar concept in a 2007 paper from the University of Washington ("VoiceDraw", Susumu Harada et al.). We were pleased to see that the more kinetic nature of our approach meant that results (i.e. cool paintings) could be achieved far more quickly than those the paper describes.

And most of all, we're pleased that using the program turned out to be so fun. :)

What we learned

  • How to recover your files after you accidentally git reset --hard all your changes of the last few hours (see below)...
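
For the record, the trick (assuming the lost work had been committed at some point) is git's reflog, which remembers where HEAD used to point:

```
git reflog                  # list where HEAD has pointed recently
git reset --hard HEAD@{1}   # jump back to the entry before the bad reset
```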

What's next for ARTiculate

We have plenty of extensions planned, including the ability to fluidly pull images from the web and insert them into the canvas, as well as additional accessibility options - e.g. custom voice commands and colour-blindness options to assist with colour mixing.

Built With

Deepgram, React, p5.js, Git