Summary

MITE is about changing the way we consume media. Instead of passively watching TV, MITE allows the viewer to interact with the program. The toy contains sensors and actuators and connects to the cloud via WiFi. The actuators respond to the content being played to bring the show to life, while the sensors let the viewer control the flow of the program by reacting at certain points.

Concept

The toy contains a microcontroller with a WiFi radio, which connects to a cloud service we are creating that can send commands to the toy and receive sensor data. On the server side, we are creating a video player with the ability to send commands over the cloud service at predetermined points in the video. The player can also dynamically change content based on input returned from remote functions.
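
We haven't locked down a wire format, but conceptually each command and sensor reply is just a small structured message passed through the cloud service. A rough sketch in JavaScript (every field name here is hypothetical, not something the project has defined):

```javascript
// Hypothetical shapes for the messages the player and toy exchange through
// the cloud service. None of these field names come from the actual project.
const command = {
  toyId: "mite-01",        // which toy to address
  action: "servo",         // actuator to drive (servo, LED, vibration, ...)
  args: { angle: 90 },     // actuator-specific parameters
  expectReply: true        // ask the toy to send a sensor reading back
};

const reply = {
  toyId: "mite-01",
  sensor: "button",        // which sensor produced the reading
  value: 1,                // value the video player uses to pick the next clip
  timestamp: Date.now()
};
```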

Goals

Baseline
Hack an existing toy with sensors, actuators and a microcontroller.
Create a tree of video content that branches at decision points, so that n branch points yield 2^n unique narratives.
Bind the toy's actions to the decision tree.

Reach
Design a custom toy with actuators and sensors.
Move all data processing to the cloud.
Connect the toy to the cloud.

System Diagram

Timeline

Week beginning 10/9:
Find and buy an appropriately sized toy with a large amount of related TV content
Week beginning 10/16:
Work out controlling sensors and actuators with the toy’s microcontroller
Embed sensors and actuators in toy
Week beginning 10/23:
Develop a communication link from Raspi to toy
Start developing cloud video player
Week beginning 10/30:
Set up Raspi to control local content while sending/receiving commands
Week beginning 11/6:
Develop video content tree
Start designing custom toy
Week beginning 11/13:
Final testing and debugging, demo-1 prep
_Demo day 1 - 11/17_
Week beginning 11/20:
Print custom toy and embed sensors and actuators
Finish cloud video player
Week beginning 11/27:
Develop cloud service communication with toy
Final testing and debugging, demo-2 prep

Project update: 10/20/16

Our first update cycle encompasses the start of the project through the hackathon. We were able to accomplish the following:

  • Choose microcontroller platform (Photon)
  • Dismantle and modify our toy
  • Execute some basic code on the Photon remotely using the Spark cloud service

We chose the Photon because of its abundant I/O, useful IDE, and robust WiFi framework. By default, the Photon works with the particle.io cloud service. We were able to execute functions on the Photon and receive return data on our laptops through a simple HTML page, and we also tested Python code on a Raspberry Pi with similarly good results. However, the particle.io service generally had very high latency and wasn't very reliable, so from this point on we will focus on creating our own cloud service rather than relying on theirs.
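
For reference, calling a cloud function on the Photon from a browser page boils down to one POST against the Particle Cloud REST API. This is only a sketch: the device ID, access token, and "wave" function are placeholders, and the parameter names should be checked against the current Particle docs.

```javascript
// Calling a function the firmware registered with Particle.function(),
// via the Particle Cloud REST API. DEVICE_ID, ACCESS_TOKEN, and "wave"
// are placeholders; verify parameter names against the Particle docs.
const DEVICE_ID = "your-device-id";
const ACCESS_TOKEN = "your-access-token";

async function callPhotonFunction(name, args) {
  const res = await fetch(`https://api.particle.io/v1/devices/${DEVICE_ID}/${name}`, {
    method: "POST",
    body: new URLSearchParams({ access_token: ACCESS_TOKEN, args })
  });
  const data = await res.json();
  return data.return_value;  // the integer returned by the Photon-side function
}

// callPhotonFunction("wave", "90").then(console.log);
```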

Once we dismantled the toy (a Power Rangers figure), we made some modifications to create room for electronics and servos. We were able to install a servo to replace the shoulder joint, and will eventually 3D print an adapter to connect the new shoulder joint to the existing arm. We created room in the chest for the IMU and other hardware.
Inside the legs, we built a regulated +5V supply into one foot and installed the Photon in the other, wiring it to the servo.
Altogether, we finished the night with our original toy now having a microcontroller brain, power, and a robotic shoulder joint that we were able to control over the cloud.

Project update: 10/31/16

While waiting for parts to arrive, we shifted our focus to software. Owing to the issues we discovered with particle.io's cloud service, we wrote our own communication code that uses the Photon's TCP client. Work is ongoing, but we've been able to send commands with very low latency and will soon be able to receive data as well.
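
The firmware side isn't shown here, but the server end of that TCP link can be as simple as a Node.js `net` server; the newline-delimited text protocol below is just an assumption for illustration, not our actual format.

```javascript
// Sketch of the server end of the Photon TCP link, using Node's "net" module.
// The newline-delimited text protocol is an assumption made for illustration.
const net = require("net");

const server = net.createServer((socket) => {
  console.log("Photon connected from", socket.remoteAddress);

  let buffer = "";
  socket.on("data", (chunk) => {
    buffer += chunk.toString();
    let newline;
    while ((newline = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, newline).trim();  // one sensor report per line
      buffer = buffer.slice(newline + 1);
      if (line) console.log("sensor data:", line);
    }
  });

  socket.on("error", (err) => console.error("link error:", err.message));

  // Pushing a command down to the toy is just a write on the same socket.
  socket.write("servo:90\n");
});

server.listen(8266, () => console.log("waiting for the Photon on port 8266"));
```
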
Work was also done on the video player. We used HTML5/JavaScript to create a player that can seamlessly switch content when directed to by remote function calls, and that can make function calls of its own at predetermined points in playback to send commands to the toy. We are working on integrating our cloud service with the player and should soon be able to play video while controlling the Photon. Next, we will develop our video tree methods to select video clips based on input from the Photon.
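
A stripped-down sketch of the cue-point idea (not our production code; the endpoint behind sendToToy() is hypothetical):

```javascript
// Fire toy commands at predetermined timestamps, then load the next clip
// when the current one ends.
const video = document.querySelector("video");

// Placeholder for the call into our cloud service (endpoint is hypothetical).
function sendToToy(cmd) {
  return fetch(`/api/toy/${encodeURIComponent(cmd)}`, { method: "POST" });
}

const cues = [
  { at: 12.0, fired: false, action: () => sendToToy("vibrate") },
  { at: 30.5, fired: false, action: () => sendToToy("servo:90") }
];

video.addEventListener("timeupdate", () => {
  for (const cue of cues) {
    if (!cue.fired && video.currentTime >= cue.at) {
      cue.fired = true;
      cue.action();
    }
  }
});

video.addEventListener("ended", () => {
  // In the real player the next clip comes from the decision tree;
  // here a follow-up file is simply hard-coded.
  video.src = "clips/next.mp4";
  video.play();
});
```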


Updates

Ryan Spicer posted an update

Demo day 1 went very well for team MITE! We had a short video tree built up, servers hosted on a laptop, and video playing on the big screen TV in xLab. Our core technology is mature enough at this point, and we're working on adding flashy new features for reach day.

Text to speech: We'll be using a text-to-speech module and an audio amplifier to let the toy talk. Speech will be activated on demand by the video player, and by the toy itself when needed.

Scoring system: Doing the "right" actions to the toy when asked will add to your score. We plan on changing the toy's appearance (lighting, possibly motion) to correspond with your score. Once a "high" score is achieved, a flashy, fun reward is yours!

Idle watchdog: To keep the fun going, we're adding a watchdog timer that is reset when the toy is moved or interacted with. If the timer goes off, the toy will beg for your attention again.

3D printing: We were advised not to print a full toy just for its own sake, but to focus on creating the parts we need. We're likely going to make a replacement head and possibly legs to accommodate the new features and add some room for all our new parts.


Ryan Spicer posted an update

API: We made changes to the way the API handles waiting for the Photon's response before sending to the video player, and it's now streamlined and non-blocking. No other major changes have been necessary except for adding new Photon functions as they've been developed.
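
The API code itself isn't reproduced here, but one common way to make the wait non-blocking, sketched below with made-up names, is to key each outstanding request by an id and resolve a promise when the Photon's reply shows up:

```javascript
// A non-blocking request/response bridge between the video player and the
// Photon. All names here are made up; the real API code may look different.
const pending = new Map();  // requestId -> { resolve, timer }
let nextId = 0;

// Called by the player: returns a promise instead of blocking while we wait.
function requestFromPhoton(sendToPhoton, command, timeoutMs = 3000) {
  const id = ++nextId;
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pending.delete(id);
      reject(new Error(`Photon did not answer request ${id}`));
    }, timeoutMs);
    pending.set(id, { resolve, timer });
    sendToPhoton(JSON.stringify({ id, command }));
  });
}

// Called whenever a reply arrives over the link to the toy.
function handlePhotonReply(raw) {
  const { id, value } = JSON.parse(raw);
  const entry = pending.get(id);
  if (!entry) return;          // late or unknown reply; drop it
  clearTimeout(entry.timer);
  pending.delete(id);
  entry.resolve(value);
}
```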

Photon: The IMU and a servo were integrated. Right now, the IMU lets us detect whether the toy has been shaken or left idle, and we also take advantage of the magnetometer to detect whether an accessory part is on the toy. The servo replaces the shoulder joint of the left arm, which required us to 3D print an adapter; the arm can now move on command. Other hardware integrated: a photoresistor and a vibration motor.

Video player: The original player was a linear script. It was overhauled to be extensible, and now runs a loop that works through an infinitely expandable list of video objects. Our video class contains all the necessary parameters to tell the player when to do things with the toy, and what to do with the data returned from it. We built out a 3-layer demo tree, with 2 decision points and 4 possible paths.
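
To give a flavor of the structure (clip names, sensors, and fields here are illustrative, not the real demo tree), each video object carries its source, an optional sensor to poll, and a map from the sensor result to the next object:

```javascript
// Illustrative shape of the video objects the player loops over.
const tree = {
  intro:     { src: "clips/intro.mp4", poll: "button", next: { 0: "calm", 1: "fight" } },
  calm:      { src: "clips/calm.mp4",  poll: "shake",  next: { 0: "calm_end", 1: "fight_end" } },
  fight:     { src: "clips/fight.mp4", poll: "shield", next: { 0: "calm_end", 1: "fight_end" } },
  calm_end:  { src: "clips/calm_end.mp4" },
  fight_end: { src: "clips/fight_end.mp4" }
};

// playClip resolves when a clip finishes; readSensor asks the toy and
// resolves to 0 or 1. Both are supplied by the player/API layer.
async function playTree(startId, playClip, readSensor) {
  let node = tree[startId];
  while (node) {
    await playClip(node.src);
    if (!node.poll) break;                      // leaf clip: nothing left to decide
    const value = await readSensor(node.poll);  // decision point
    node = tree[node.next[value]];
  }
}
```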

Toy: Hardware has been integrated into the toy. The vibration motor connects to the torso piece and is strong enough to be felt even through the table the toy stands on. We put the photocell on the toy's forehead (we have a branch of our tree where you need to shield the hero's face from a blast). The IMU is secured in the waist area. We built a regulated 5V supply into one leg and put the Photon in the other.


Ryan Spicer posted an update

11/12/16 Update

The Photon API got to a good working state and is now able to call Photon functions and return their responses to the client when they are ready. We have small improvements to make, but overall this portion is working.

In parallel, we wrote more Photon code to perform actuation and sense inputs, which can be called by the player/API. The player can now actuate in real time and poll for inputs at designated timestamps in the video. We successfully demoed a sequence where a character is hit and an LED temporarily lights upon the hit, and then the system looks for a button input. The button input determines the next video in the sequence, and there was no noticeable gap between clips.
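
Sketched roughly (the Photon function names are placeholders, and callPhotonFunction() is the helper sketched in an earlier update, standing in for our API), the flow of that sequence is:

```javascript
// Rough reconstruction of the demoed sequence. Photon function names are
// placeholders, and callPhotonFunction() stands in for our API layer.
async function onHitCue(video) {
  await callPhotonFunction("led", "flash");       // light the LED when the character is hit

  // Poll for a button press for up to 5 seconds.
  const deadline = Date.now() + 5000;
  let pressed = 0;
  while (Date.now() < deadline && !pressed) {
    pressed = await callPhotonFunction("readButton", "");
    await new Promise((resolve) => setTimeout(resolve, 100));  // don't hammer the link
  }

  // Choose the next clip and start it immediately so there is no visible gap.
  video.src = pressed ? "clips/blocked.mp4" : "clips/hit.mp4";
  video.play();
}
```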

Work is ongoing with hardware, with a focus currently on IMU integration.
