Inspiration

Content-reactive toys give kids a far more immersive experience while watching their favourite action anime, cartoons, and movies. Content-reactive humanoid toys add a new dimension to toys, one commonly christened "Smart Toys": the toy reacts to and acts out the behaviour of its virtual counterpart on screen. Our goal is to design such a toy so that it performs a sequence of actions in sync with a media program at very low latency.

Project Components

Hardware :

  1. Raspberry Pi
  2. Stepper/servo/DC motors, speaker
  3. Vibration motor, LED lights, capacitors, resistors, inductors, voltage regulator, etc.

Software :

  1. Python-based TCP/IP communication protocol
  2. Data file containing the required content-reaction data
  3. Timing program to sync the media file with the content-reaction data, with media content delivered over the network via Amazon S3
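
As a rough illustration only (not the actual file format or protocol), the content-reaction data and timing program could look like the Python sketch below; the JSON layout, command names, host/port, and the choice of newline-delimited commands over a raw TCP socket are assumptions made for this sketch.

```python
import json
import socket
import time

# Hypothetical reaction data for one clip: a list of timestamped commands.
REACTIONS = json.loads("""
{
  "clip": "episode_01.mp4",
  "actions": [
    {"t": 2.5,  "command": "ARM_RAISE"},
    {"t": 7.0,  "command": "LED_BLINK"},
    {"t": 12.3, "command": "ARM_WAVE"}
  ]
}
""")

def play_reactions(host="192.168.1.50", port=5005):
    """Send each command to the toy over TCP when its timestamp comes up."""
    start = time.time()
    with socket.create_connection((host, port)) as sock:
        for action in REACTIONS["actions"]:
            delay = action["t"] - (time.time() - start)
            if delay > 0:
                time.sleep(delay)  # wait until the cue point in the media
            sock.sendall((action["command"] + "\n").encode())

if __name__ == "__main__":
    play_reactions()
```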

Weekly Updates

Week 1:

  1. Literature survey of the domain and of current progress in the field of smart toys. After carefully exploring the existing toys available, we decided to go ahead with our own design in order to get the degrees of freedom our project goals require.
  2. Designed the complete blueprint of the system on paper. While designing, we realized that various sensors, motors, and other components would have to fit inside the body of our toy, so deciding on its optimal dimensions was critical at this point. We also didn't want the figure to be too big or heavy.

Week 2:

We identified the key components required: power management, IMU, speaker, motors, web interface, network communication, database, and mechanical structure. We divided the system into hardware and software components, identified the features to be implemented, and distributed the work among ourselves.

Week 3:

Our first major target was to integrate the various components so that we had a complete system with the major pieces talking to each other at a rudimentary level. The ESE 519 hackathon was the perfect night to quickly implement a rough communication layer with our major modules up and running. We set up a communication channel between the hardware and the software, writing our own communication code on top of a Python TCP client/server setup. By the end of the night we were able to send commands with very low latency and to play a media file (albeit only from a local machine, using VLC Media Player) with the toy's arm movements matching the content.
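
For reference, a minimal sketch of the toy-side half of such a TCP client/server setup is shown below; the port number, command names, and print-based handlers are placeholders (the real handlers would drive the servos through the Pi's GPIO).

```python
import socket

HOST, PORT = "0.0.0.0", 5005  # placeholder bind address and port

def handle_command(cmd: str) -> None:
    """Dispatch one command; real handlers would drive servos/LEDs via GPIO."""
    if cmd == "ARM_RAISE":
        print("raising arm")
    elif cmd == "ARM_WAVE":
        print("waving arm")
    else:
        print("unknown command:", cmd)

def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:  # one newline-delimited command per line
                handle_command(line.strip())

if __name__ == "__main__":
    serve()
```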

Week 4:

Next we focused on getting the body enclosure done. We initially went with a laser-cut enclosure, which let us incorporate the basic features and test them. The housing was crudely made, just enough to demonstrate functionality. The main considerations were a modular layout that allowed access to all components for testing, and enough structural strength to support the major components (servos and boards). Lastly, we created a proto-board containing all of our circuitry in a small package, which made integrating the components much easier. This was a crucial step: by now the prototype was working with all components integrated and communicating. Our biggest take-away from building this prototype was a power-management issue. Up until now we had been powering everything (the Raspberry Pi plus the other actuators and sensors) from the same supply, but once all of the sensors and actuators started working and communicating, the circuit drew a large amount of current through the Pi, causing voltage fluctuations. This led us to our next week's goal: a more robust power-management circuit, possibly with the Pi and the other components powered separately.

Week 5:

After receiving feedback on the base goal, we decided to incorporate some of the suggested changes and add features to give our toy a finished look; after all, we have to impress the kids! We divided the work among ourselves so we could focus equally on the various aspects.

We focused on getting the video player working using HTML5/JavaScript/jQuery. We first got the player working with local content and were able to seamlessly switch content when triggered by remote function calls. However, we wanted two-way communication so that whenever a media file finishes we receive an instant notification over our communication network, letting us either prompt the child for the next reaction or have the toy switch the content automatically if the child doesn't respond.

After digging further into jQuery/HTML5 we successfully established two-way communication between the player and the toy and can now make function calls at predetermined points. By this point we are also able to pull the media content from Amazon S3.
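
A minimal sketch of what the toy-side handling of these two pieces might look like follows; the bucket name, clip keys, event string, and the use of boto3 are assumptions for illustration, not the project's actual code.

```python
import boto3  # AWS SDK for Python

BUCKET = "toy-media-content"  # placeholder bucket name

def fetch_clip(key: str, dest: str = "/tmp/next_clip.mp4") -> str:
    """Download the next media clip from Amazon S3 for the player."""
    boto3.client("s3").download_file(BUCKET, key, dest)
    return dest

def on_player_event(event: str) -> None:
    """Handle a notification received from the player over the TCP link."""
    if event == "MEDIA_ENDED":
        # Prompt the child for the next reaction, or switch content
        # automatically if there is no response within a timeout.
        path = fetch_clip("clips/positive_01.mp4")  # placeholder choice
        print("next clip ready:", path)
```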

Simultaneously, we continued designing our final 3D CAD model. The earlier laser-cut versions of the figure gave us a good sense of the design's shortcomings and areas for improvement. We are now using smaller motors to keep the toy compact and light. The toy is designed so that the Raspberry Pi and the sound circuit sit on the back of the figure and the rest of the sensors toward the front; this leaves proper room for the main circuitry in the body, and the battery system goes in the legs.

In order to make the toy more interactive and responsive, we are working on getting the sound system running using a speaker and an amplifier. Our plan is to have the toy speak some welcoming lines like "Hey, come play with me, pick me up", so that a child is drawn to it and likes the toy even more.
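
A minimal sketch of the greeting playback on the Pi, assuming a pre-recorded WAV file and ALSA's aplay utility (both placeholders for whatever the final sound pipeline ends up using):

```python
import subprocess

# The WAV path is a placeholder; we assume ALSA's `aplay` utility is
# available, as it is on a stock Raspberry Pi OS install.
GREETING_WAV = "/home/pi/sounds/greeting.wav"

def play_greeting() -> None:
    """Play a pre-recorded greeting through the speaker and amplifier."""
    subprocess.run(["aplay", GREETING_WAV], check=True)

if __name__ == "__main__":
    play_greeting()
```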

Next, we will develop our video-tree methods for selecting video clips and work on getting the entire system to run efficiently.

Week 6:

We finally have everything we need and are all geared up to make everything work seamlessly. We have assembled the entire circuit inside our 3D-printed toy figure (we modelled it after 'Baymax' from the movie Big Hero 6). We are continuing to work on making the communication between the media files and the toy reliable and interactive.

Challenges we ran into

  1. Our biggest challenge came from creating a reliable power supply from a reasonably sized Li-Po battery. We did not anticipate the amount of power required to keep a stable 5 V supply to the Raspberry Pi while it was connected to a WiFi network. Going forward, now that the concept is demonstrated, we will use a microcontroller that requires less power, so we can power the device from our Li-Po batteries and a boost circuit.

  2. Deciding which media content the system plays next was challenging. We first tried pulling content at random, but soon realized that it didn't create a "wow" child-toy interaction. So we decided to base a predetermined series of clips on the child's interaction with the toy, implemented as a binary decision tree over the video content. For example, if the content asks the child to pick up and shake the toy, we have a binary decision point: if the child responds, we display a video (the positive branch) that continues our story-line; otherwise we show a video (the negative branch) that signifies a failure and give the child another chance to perform the same action. This keeps the kid more engaged and gives them a friendly, interactive experience with the toy (a minimal sketch of such a tree follows this list).

  3. It was a challenge to connect every hardware peripheral to the Raspberry Pi and establish communication without a printed circuit board. If this system were laid out onto a chip, the number of manual connections required would decrease ten-fold.
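
As an illustration of the decision-tree selection described in point 2, here is a minimal Python sketch; the node structure, clip names, and the child_responded() check are placeholders rather than the actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoNode:
    """One decision point: the clip to play, plus where to go next depending
    on whether the child performs the requested action."""
    clip: str
    on_response: Optional["VideoNode"] = None   # positive branch
    on_timeout: Optional["VideoNode"] = None    # negative branch

# Placeholder tree: ask the child to shake the toy; continue the story if
# they respond, otherwise give them one more chance before saying goodbye.
story_continues = VideoNode("story_continues.mp4")
retry = VideoNode("try_again.mp4",
                  on_response=story_continues,
                  on_timeout=VideoNode("goodbye.mp4"))
root = VideoNode("shake_prompt.mp4",
                 on_response=story_continues,
                 on_timeout=retry)

def child_responded() -> bool:
    """Placeholder for the real check (e.g. the IMU detects a shake in time)."""
    return True

def run(node: Optional[VideoNode]) -> None:
    while node is not None:
        print("playing", node.clip)   # the real system would cue the player
        node = node.on_response if child_responded() else node.on_timeout

if __name__ == "__main__":
    run(root)
```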

Accomplishments that we're proud of

  1. We are most proud that our product gives children a platform for an amazing real-time experience with the toy and the corresponding media files.

  2. We designed and built our toy from scratch; being able to see it imitate the content and interact with the whole system is our biggest accomplishment.

  3. Our lightweight yet efficient algorithm for switching media content, along with our network communication layer, is the software achievement we are proudest of.

What I learned

What's next for Content Reactive Humanoid Toy Figures

  1. Improve design and features (completed): make the toy figure compact and more appealing and welcoming for the kid by using miniature components and adding features like sound synthesis, sensor inputs, etc.

  2. Optimize our switching algorithm: make sure switching is triggered at the right time without any delay, and let kids react multiple times so that their interactions with the toy change the content (completed). Provide full-fledged customized video content in the form of stories, games, movies, etc.

  3. Replicate the same system with different toy designs and also add features for some kind of parental control.
