Summary
MITE is about changing the way we consume media. Instead of passively watching TV, MITE lets the viewer interact with the program. The toy contains sensors and actuators and connects to the cloud via WiFi. The actuators respond to the content being played to bring the show to life, while the sensors let the viewer control the flow of the program by reacting at certain points.
Concept
The toy contains a microcontroller with a WiFi radio. This connects to a cloud service we are creating that can send commands to the toy and receive sensor data. On the server side, we are building a video player that can send commands over the cloud service at predetermined points in the video. The player can also dynamically change content based on input from remote functions.
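As a sketch of how such commands and sensor readings might be framed, assuming a simple newline-delimited JSON encoding (the field names here are hypothetical, not the service's actual schema):

```python
import json

def encode_command(actuator: str, value: int) -> bytes:
    """Frame a command for the toy as one JSON line (hypothetical schema)."""
    msg = {"type": "cmd", "actuator": actuator, "value": value}
    return (json.dumps(msg) + "\n").encode()

def decode_sensor(raw: bytes) -> dict:
    """Decode one sensor reading sent back by the toy."""
    return json.loads(raw.decode())

# Round trip: the player emits a command, the toy replies with sensor data.
cmd = encode_command("shoulder_servo", 90)
reading = decode_sensor(b'{"type": "sensor", "name": "imu", "value": 1}')
```

Newline framing keeps the parser on the microcontroller trivial: read until `\n`, then decode.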
Goals
Baseline
Hack an existing toy by adding sensors, actuators, and a microcontroller.
Create a tree of video content that branches at n decision points to yield 2^n unique narratives.
Bind the toy's actions to the decision tree.
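The branching content in the baseline goals above can be sketched as a binary tree keyed by the viewer's choices; the clip naming here is made up for illustration:

```python
import itertools

def clip_for_choices(choices):
    """Map a sequence of binary viewer choices to a unique clip ID.

    With n branch points there are 2**n leaf narratives; each path is
    labelled by the bits of the choices taken (clip names are hypothetical).
    """
    path = "".join("1" if c else "0" for c in choices)
    return f"clip_{path or 'root'}"

# Three branch points -> 2**3 = 8 distinct endings.
endings = {clip_for_choices(bits) for bits in itertools.product([0, 1], repeat=3)}
print(len(endings))  # 8
```

Binding the toy's actions to this tree then reduces to turning each sensor reaction into one bit of the path.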
Reach
Design a custom toy with actuators and sensors.
Move all data processing to the cloud.
Connect toy to the cloud.
System Diagram
Timeline
Week beginning 10/9:
Find and buy an appropriately sized toy with a large amount of related TV content
Week beginning 10/16:
Work out controlling sensors and actuators with the toy's microcontroller
Embed sensors and actuators in toy
Week beginning 10/23:
Develop a communication link from Raspi to toy
Start developing cloud video player
Week beginning 10/30:
Set up Raspi to control local content while sending/receiving commands
Week beginning 11/6:
Develop video content tree
Start designing custom toy
Week beginning 11/13:
Final testing and debugging, demo-1 prep
_Demo day 1 - 11/17_
Week beginning 11/20:
Print custom toy and embed sensors and actuators
Finish cloud video player
Week beginning 11/27:
Develop cloud service communication with toy
Final testing and debugging, demo-2 prep
Project update: 10/20/16
Our first update cycle encompasses the start of the project through the hackathon. We were able to accomplish the following:
- Choose microcontroller platform (Photon)
- Dismantle and modify our toy
- Execute some basic code on the Photon remotely using the Spark cloud service
We chose the Photon because of its abundant I/O, useful IDE, and robust WiFi framework. The Photon works with the particle.io cloud service by default. We were able to execute functions on the Photon and receive return data on our laptops from a simple HTML page, and Python code on a Raspberry Pi gave similarly good results. However, the particle.io service had very high latency and was unreliable, so from this point on we will focus on creating our own cloud service rather than relying on theirs.
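For reference, each Particle.function() registered on the Photon is exposed by the particle.io service as a REST endpoint. A minimal Python sketch of building such a call (the device ID, function name, and access token are placeholders):

```python
import urllib.parse
import urllib.request

API_BASE = "https://api.particle.io/v1"

def build_function_call(device_id: str, func: str, arg: str, token: str):
    """Build the HTTP request that invokes a Particle cloud function.

    particle.io exposes each registered function as
    POST /v1/devices/{device_id}/{func}; the values passed in here
    are placeholders, not our real credentials.
    """
    url = f"{API_BASE}/devices/{device_id}/{func}"
    data = urllib.parse.urlencode({"arg": arg, "access_token": token}).encode()
    return urllib.request.Request(url, data=data, method="POST")

req = build_function_call("0123456789abcdef", "moveArm", "90", "ACCESS_TOKEN")
# urllib.request.urlopen(req) would dispatch the call (network required).
```

The round trip through particle.io's servers is what adds the latency we measured, which motivates running our own service.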
Once we dismantled the toy (a Power Rangers figure), we made some modifications to create room for electronics and servos. We were able to install a servo to replace the shoulder joint, and will eventually 3D print an adapter to connect the new shoulder joint to the existing arm. We created room in the chest for the IMU and other hardware.
Inside the legs, we were able to build a +5V regulated supply into one foot. In the other leg, we installed the Photon and made its connections to the servo.
Altogether, we finished the night with our original toy now having a microcontroller brain, power, and a robotic shoulder joint that we were able to control over the cloud.
Project update: 10/31/16
While waiting for parts to arrive, we shifted our focus to software. Owing to the issues we discovered with particle.io's cloud service, we wrote our own communication code that uses the Photon's TCP client. Work is ongoing, but we've been able to send commands with very low latency and will soon be able to receive data as well.
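The server side of that link can be as simple as a listening TCP socket that the Photon's TCPClient connects to. A minimal Python sketch, assuming a newline-delimited command protocol (the port and command strings are placeholders, not our final protocol):

```python
import socket

def run_command_server(host, port, commands):
    """Accept one connection from the toy, push commands, read one reply.

    Sketch of our own cloud link: the Photon's TCPClient connects here,
    we push newline-delimited commands with low latency, then read back
    a line of sensor data. Port and command syntax are illustrative.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _addr = srv.accept()      # blocks until the toy connects
    with conn:
        for cmd in commands:
            conn.sendall(cmd)       # push each command to the toy
        reply = conn.recv(1024)     # sensor data coming back
    srv.close()
    return reply
```

Keeping a single persistent TCP connection avoids the per-request overhead that made the hosted service feel sluggish.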
We also made progress on the video player. Using HTML5/Javascript, we built a player that can seamlessly switch content when triggered by remote function calls. The player can also make function calls at predetermined points in playback to send commands to the toy. We are now integrating the cloud service with the player and should soon be able to play video while controlling the Photon. Next, we will develop the video tree methods that select clips based on input from the Photon.
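The cue-point logic is language-agnostic even though our player is HTML5/Javascript; a Python sketch of firing commands at predetermined playback times (the times and command names are illustrative):

```python
def due_cues(cue_points, last_t, now_t):
    """Return the commands for cue points crossed between two playback times.

    cue_points: list of (timestamp_seconds, command) pairs sorted by time.
    Called on each playback tick with the previous and current position,
    mirroring the player's remote function calls at predetermined points.
    """
    return [cmd for t, cmd in cue_points if last_t < t <= now_t]

# Illustrative cue sheet for one clip.
cues = [(5.0, "wave_arm"), (12.5, "blink_leds"), (30.0, "ask_choice")]
print(due_cues(cues, 4.9, 13.0))  # ['wave_arm', 'blink_leds']
```

Comparing against the previous position, rather than testing for exact timestamps, means no cue is missed when playback ticks arrive at irregular intervals.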