Inspiration

We built Magic Mirror after noticing how dependent we have become on information we usually get from our phones. The mirror offers an accessible alternative: it displays that information on an object that is already an inherent part of our daily routine.

What it does

As the user approaches the Magic Mirror, the current time, stock data, and current weather are displayed. If the user opens their palm or their mouth, graphic overlays appear on top of their reflection, mimicking the filters seen on Snapchat.
Graphic Filters:

  • User opens their mouth -> the Nyan Cat rainbow cascades out of their mouth.
  • User opens their palm -> a flame appears and follows the user's hand around. When the user is in full frame, the Kinect measures their heart rate and displays it as well.
    Various voice controls are also available:
  • "Time"/"Weather"/"Stock"/"Text" -> emphasizes the information associated with that specific feature by making it larger.
  • "Mirror, Mirror on the Wall, Who's the Fairest of Them All" -> pulls up a series of entertaining media and audio ending with an image of Lord Voldemort.
  • "Start Music"-> Starts a music player.
  • "Stop Music"-> Stops a music player.
    All audio is visualized on an on-screen synthesizer. When the user receives a text, the message appears on the mirror and is read out loud.
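To illustrate the voice controls above, here is a minimal sketch of one way the web app could map a recognized keyword to the widget it emphasizes. The element IDs, the `emphasized` CSS class, and the `onCommand` hook are hypothetical stand-ins rather than our exact markup.

```javascript
// Hypothetical mapping from a recognized keyword to the widget it emphasizes.
const widgets = {
  time:    document.getElementById('time-widget'),
  weather: document.getElementById('weather-widget'),
  stock:   document.getElementById('stock-widget'),
  text:    document.getElementById('text-widget'),
};

// Called with the lower-cased transcript returned by the speech recognizer.
function onCommand(transcript) {
  // Reset everything to normal size, then enlarge the requested widget.
  Object.values(widgets).forEach(el => el.classList.remove('emphasized'));
  const match = Object.keys(widgets).find(key => transcript.includes(key));
  if (match) {
    widgets[match].classList.add('emphasized'); // e.g. scales the widget up via CSS
  }
}
```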

How we built it

Magic Mirror is a two-way mirror mounted on a computer monitor; two-way mirrors are both transparent and reflective. A web app running on the monitor renders all of the features. The web app is written in JavaScript and pulls its data from a Node.js server. We use the Yahoo Finance API for stock data, Forecast.io for weather data, and the user's iMessage database for text messages. The Kinect captures the user's image and sends the coordinates of the user's hand and mouth (and whether they are open) to the server, which overlays the corresponding graphics. The Kinect is also used to detect the user's heart rate, using an algorithm that samples the RGB values of a set of pixels in the user's face. The Voice Recognition library of Microsoft's Project Oxford API handles speech-to-text for the voice controls and text-to-speech for reading incoming text messages aloud.
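On the server side, the weather feature boils down to an HTTP request to Forecast.io. Below is a minimal sketch of that request, assuming an API key and fixed coordinates; the `currently.temperature` and `currently.summary` fields follow Forecast.io's response format, and the constant names are placeholders rather than our exact configuration.

```javascript
const https = require('https');

const LAT = 42.36;   // placeholder coordinates
const LON = -71.09;

// Minimal sketch: fetch current conditions from Forecast.io for a fixed location.
function getWeather(callback) {
  const url = `https://api.forecast.io/forecast/${process.env.FORECAST_KEY}/${LAT},${LON}`;
  https.get(url, res => {
    let body = '';
    res.on('data', chunk => { body += chunk; });
    res.on('end', () => {
      const forecast = JSON.parse(body);
      // The "currently" block holds the conditions shown on the mirror.
      callback(null, {
        temperature: forecast.currently.temperature,
        summary: forecast.currently.summary,
      });
    });
  }).on('error', err => callback(err));
}
```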
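The heart-rate detection relies on tiny frame-to-frame color changes in the face (remote photoplethysmography). The sketch below shows the general idea under simplified assumptions: average the green channel over the face pixels each frame, then count peaks in that signal over a window to estimate beats per minute. Our actual filtering and peak detection were more involved, and `facePixels` is a hypothetical flat RGBA array taken from the Kinect color frame.

```javascript
// One sample per frame: the mean green value over the face region.
function meanGreen(facePixels) {
  let sum = 0;
  for (let i = 0; i < facePixels.length; i += 4) {
    sum += facePixels[i + 1]; // green channel of an RGBA pixel
  }
  return sum / (facePixels.length / 4);
}

// Estimate BPM from a window of per-frame samples captured at `fps` frames/sec.
// Counts local maxima above the window mean; real code would band-pass filter first.
function estimateBpm(samples, fps) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  let peaks = 0;
  for (let i = 1; i < samples.length - 1; i++) {
    if (samples[i] > mean &&
        samples[i] > samples[i - 1] &&
        samples[i] > samples[i + 1]) {
      peaks++;
    }
  }
  const seconds = samples.length / fps;
  return (peaks / seconds) * 60;
}
```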

What's next for Magic Mirror

We plan to add more features that would aid our users as part of their morning routine. These include traffic conditions along the user's morning route to work and the day's main breaking news highlights.
