A lot of us have an Alexa at home, and many of us are familiar with its voice service. But something we all noticed is that Alexa can get mundane. When people interact with each other, they can use body language and visuals to make their speech more engaging; Alexa cannot. So our mission for this project was to give Alexa some personality and add interactive components, making interaction with the device just as fun as it is useful.

What it does

The Alexa has the following features:

  • "What is / who is ____?": renders a hologram of what you asked for, e.g. "What is a llama?", "What is a banana?", "Who is Barack Obama?"
  • "Make me laugh": plays a funny GIF or meme.
  • "Open world map": opens an interactive world map that detects your hand gestures so that you can interact with the globe.
  • "Draw something": opens an interactive drawing interface that detects hand gestures and allows you to draw on the canvas.

How I built it

  • Acrylic sheets
  • Node.js
  • JavaScript
  • AWS
  • Alexa

Challenges I ran into

  • Getting the client to load what we wanted was a huge headache. The app is built on a Node.js server, and we were trying to do things like res.render, but we had no res (since the code listening for Alexa was essentially a WebSocket). After trying a ton of hacky approaches that didn't work, we ended up using ____, which worked beautifully!
  • For the features that used the Leap Motion, we needed two different versions of the JavaScript file. We tried appending the script, using different versions, etc., but nothing worked. In the end, setTimeout saved the day: we load the correct script file first, then load our custom code that uses the Leap Motion file afterwards.
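A minimal sketch of the res.render workaround described above, with hypothetical names (renderCommand, handleMessage, swapView are illustrative, not the actual project code): since there is no Express res object on a socket connection, the server can push a small JSON "render" command over the already-open socket and let the client swap its own view.

```javascript
// Server side: build the command to push over the socket
// (e.g. socket.send(renderCommand('hologram', { query: 'llama' }))).
function renderCommand(view, data) {
  return JSON.stringify({ type: 'render', view, data });
}

// Client side: on each socket message, parse it and, if it is a
// render command, hand the view name and payload to the UI layer.
function handleMessage(raw, swapView) {
  const msg = JSON.parse(raw);
  if (msg.type === 'render') swapView(msg.view, msg.data);
}
```

The point of the pattern is that the server never "renders" anything itself; it only names a view and supplies data, so the same channel that carries Alexa events can also drive the screen.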
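The script-ordering fix can be sketched like this. loadScript here is a hypothetical stand-in: in the browser it would append a script tag and fire its callback from script.onload (or, as the cruder fallback the team describes, after a fixed setTimeout delay). The key idea is simply that the dependent code is not loaded until the Leap Motion library is in place.

```javascript
// Hypothetical stand-in for appending a <script> tag to the page.
// `registry` records load order; `done` stands in for script.onload.
function loadScript(src, registry, done) {
  registry.push(src); // in the browser: document.head.appendChild(script)
  done();
}

const loaded = [];
loadScript('leap.min.js', loaded, () => {
  // Only now is it safe to load code that calls into the Leap Motion API.
  loadScript('leap-app.js', loaded, () => {});
});
```

Chaining the second load off the first one's completion (whether via onload or a timeout) is what guarantees the ordering that plain sequential script tags failed to provide here.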

Accomplishments that I'm proud of

  • Rendering the actual hologram, which looks awesome.
  • Making the interactive interfaces work, specifically the globe and the canvas.

What I learned

  • ____ is not as hard as it seems!
  • MQTT
  • Lots of Javascript frameworks!

What's next for Alexagram

  • Daily news: when you ask for a flash briefing, the hologram will use NLP to grab nouns from the news and display them, making the conversation interactive.
  • Being able to zoom in on the globe and interact with it in a precise way, so that one can look up addresses from the interface, or even request a Lyft from it.
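As a rough illustration of the planned flash-briefing feature, here is a naive stand-in for the NLP step, using a simple capitalized-word heuristic (candidateNouns is a hypothetical name). A real version would use a proper part-of-speech tagger rather than this heuristic, but the shape is the same: pull display-worthy nouns out of a headline so the hologram knows what to show.

```javascript
// Naive noun extraction: strip punctuation, then keep capitalized
// words that are not the sentence-initial word. This is only a
// placeholder for a real POS tagger.
function candidateNouns(headline) {
  return headline
    .split(/\s+/)
    .map((w) => w.replace(/[^A-Za-z]/g, ''))
    .filter((w, i) => i > 0 && /^[A-Z]/.test(w));
}

candidateNouns('President Barack Obama visits Kenya');
```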