Education: the great equalizer.

Though every child enters the world with a hand already dealt, education is the single most powerful mechanism by which one can transcend the hardships of circumstance and achieve something greater.

For instance, research from the University of Edinburgh has demonstrated a positive relationship between reading skills at age seven and later socio-economic status [1]. Moreover, research from the University of Maryland suggests that engaged readers from disadvantaged backgrounds routinely outperform unengaged readers from even the most advantaged backgrounds [2]. Together, these findings support the claim that reading is of paramount importance to the intellectual development of young children.

However, it is often difficult for parents to keep up with their child's reading habits. Many parents are too busy to read with their children every night, and with the advent of distracting technology, it is far easier to hand a child an electronic device or turn on the television. Even where e-books are available, children are tempted to watch television on their tablet instead of reading. In this project, we set out to achieve two goals at once: make reading an engaging experience for the child, and provide meaningful feedback for the parents. The result of our efforts is Beary Good, the fuzzy reading companion for your child.

What it does

There are three components to Beary Good: the parent portal, the child portal, and the bear itself.

Parent Portal:

The web application is a centralized environment where parents can manage and track their child's reading. After signing in with their credentials, parents land on a statistics page summarizing their child's overall reading activity. The application lists the books their child is reading; for each book, the parent can view further details, per-book statistics, and potential problem areas. (The mobile application compares the child's reading to the actual text to determine these problem areas.) By analyzing which words and books their child struggles with, parents can help their child more effectively. Parents can also search for books of interest with the book search feature and add them to their library. Lastly, a parent can prerecord themselves reading a book to their child, rather than opting for a mechanical and unloving text-to-speech reader.

Child Portal:

The phone app is mounted in the hands of the bear. The child sets the mode of the application: the bear can read to them, or read alongside them. In the first mode, the application uses computer vision to read the text aloud to the child. In the second, the application analyzes the child's speech and provides meaningful feedback to the parents via the web application.


The Bear:

The bear serves the dual purpose of providing positive feedback to the child and holding the phone. The phone is mounted in the arms of the bear so that it stays steady while the child reads. The bear also provides an element of positive reinforcement, with mini rewards for completing a book: every time the child finishes one, the bear does a dance and displays the child's avatar on its LCD screen. As the child continues to progress in their reading, the avatar may be upgraded to provide a sense of progress and accomplishment.

How we built it

Web Application

The web application was built with React, with authentication state managed through Redux. We used an Express server to handle communication between the website and the bear, and MongoDB to store and manage documents such as books, media files, and users.
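To illustrate the Redux side of this, here is a minimal sketch of an authentication-state reducer. The action names, state shape, and the tiny `createStore` stand-in are all illustrative assumptions, not our exact code (the stand-in just lets the sketch run without the `redux` package):

```javascript
// Illustrative auth-state reducer in the Redux pattern.
const initialState = { user: null, loggedIn: false };

function authReducer(state = initialState, action) {
  switch (action.type) {
    case "auth/login":
      // Store the signed-in parent and mark the session active.
      return { ...state, user: action.payload, loggedIn: true };
    case "auth/logout":
      return initialState;
    default:
      return state;
  }
}

// Tiny createStore stand-in so this sketch is self-contained.
function createStore(reducer) {
  let state = reducer(undefined, { type: "@@INIT" });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

const store = createStore(authReducer);
store.dispatch({ type: "auth/login", payload: { email: "parent@example.com" } });
console.log(store.getState().loggedIn); // true
```

Keeping auth as plain state like this is what lets every page of the parent portal gate its content on `loggedIn` without re-checking credentials.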

We used the OCR feature of the Google Cloud Vision API to read the books. Given an image of the page, the API returns the text; we then used Chrome's text-to-speech (Web Speech) API to generate speech audio, allowing the bear to “read” to the child.
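One practical wrinkle with browser text-to-speech is that very long utterances can be cut off, so OCR'd page text is best split into sentence-sized chunks before being queued. The helper below is an illustrative sketch of that chunking step (the function name and the 200-character limit are assumptions, not our exact values):

```javascript
// Split OCR'd page text into sentence-sized chunks suitable for
// queuing as individual SpeechSynthesisUtterance objects.
function chunkForSpeech(text, maxLen = 200) {
  // Greedy split on sentence boundaries, packing sentences into
  // chunks no longer than maxLen characters.
  const sentences = text.match(/[^.!?]+[.!?]*/g) || [];
  const chunks = [];
  let current = "";
  for (const raw of sentences) {
    const sentence = raw.trim();
    if (!sentence) continue;
    if (current && (current + " " + sentence).length > maxLen) {
      chunks.push(current);
      current = sentence;
    } else {
      current = current ? current + " " + sentence : sentence;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

// In the browser, each chunk would then be spoken in order:
// for (const c of chunkForSpeech(pageText)) {
//   speechSynthesis.speak(new SpeechSynthesisUtterance(c));
// }
```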

We used Chrome's speech recognition (Web Speech) API to analyze the child's speech. The transcript was then compared with the OCR-generated text to identify the areas the child struggled with most, providing meaningful feedback to the parent.
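The core of that comparison is classic dynamic-programming edit distance. The sketch below shows the idea; the simple word-by-word pairing in `flagProblemWords` (and its threshold of 2) is a simplification for illustration, since a real pipeline also has to handle skipped or inserted words:

```javascript
// Standard Levenshtein distance between two strings via DP.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Flag expected words whose spoken counterpart is too far off.
function flagProblemWords(expected, spoken, threshold = 2) {
  return expected.filter((word, i) =>
    editDistance(word.toLowerCase(), (spoken[i] || "").toLowerCase()) >= threshold
  );
}

console.log(flagProblemWords(
  ["the", "enormous", "elephant"],
  ["the", "enorms", "elephant"]
));
// → ["enormous"]
```

Aggregating these flagged words across sessions is what feeds the "problem areas" view in the parent portal.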

We used the Google Books API to let parents query and browse books. Selected books were then stored in MongoDB to track what the child is currently reading.
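A search against the Google Books volumes endpoint needs no API key for basic queries. The sketch below builds the request URL; the `fetch` call is left as a comment so the URL construction stands on its own (the `printType` filter and `maxResults` default are illustrative choices, not necessarily ours):

```javascript
// Build a Google Books volume-search URL for a given query string.
function buildBooksSearchUrl(query, maxResults = 10) {
  const params = new URLSearchParams({
    q: query,
    maxResults: String(maxResults),
    printType: "books", // exclude magazines from results
  });
  return `https://www.googleapis.com/books/v1/volumes?${params}`;
}

const url = buildBooksSearchUrl("the very hungry caterpillar");
console.log(url);

// In the app, the results would be fetched and shown to the parent:
// const res = await fetch(url);
// const { items } = await res.json();
// items?.forEach((v) => console.log(v.volumeInfo.title));
```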

Our application runs on the always-free compute instances on Oracle Cloud Infrastructure.


The Bear

The skeleton of the bear was CAD-ified in Autodesk Fusion 360 and 3D printed on an Ender 3 Pro. It consisted of two movable arms, a movable head joint, and a magnetic phone mount. Four MG996R servo motors gave each arm two degrees of freedom to pivot around the shoulder. To keep wires tidy, we used a custom protoboard with hand-soldered traces.

For control, we used a Raspberry Pi Zero W driving an Arduino Mega 2560 to enable all the on-bear functionality. The Arduino interfaced with the low-level devices, while the Raspberry Pi opened a connection to communicate with the phone's browser.

Challenges we ran into

  • Trying to deal with MP3 files in-memory
  • Performing surgery on a teddy bear
  • Deciding whether to show off our DP skills by solving edit distance, or to use an existing library instead
  • Playing 'guess the encoding' while shuffling images across the network
  • When someone kept pushing to master instead of making feature branches
  • NPM-related issues, like always
  • Getting Raspberry Pi Python to play nicely with browser Javascript
  • Magnetic breakaways breakawaying (they're doing what they're supposed to be doing, just a little too well)

Accomplishments that we're proud of

  • Wrote a full-featured FSM for robot control in Arduino C++
  • Successfully pulled off a hardware hack while following COVID rules
  • Beat all 24 levels of Flexbox Froggy
  • We're moving up in the world of hardware hacks - goodbye cardboard, hello PLA!
  • Built a hack with evidence grounded in FACTS and SCIENCE and RESEARCH
  • Actually planned things out before coding (like you're 'supposed to')

What we learned

  • How to upload files to Node
  • Multiply any 3D print time estimate by 2 to account for reprints :/
  • Google Books API doesn't require any API keys - made our lives a lot easier!
  • Integrating the Google Chrome Speech to Text API!
  • How to use Figma's Export as CSS functionality to save time
  • OCI compute instances are always free :0

What's next?

There are many more layers of analysis we can apply to our recorded data to make this project a more useful tool for parents. At the same time, we want to expand the range of interactivity by adding new moves to Beary's repertoire and upgrading the display for eye-catching animations.

For a hackathon project, the business model is actually surprisingly solid. Children's ebooks have historically underperformed, largely because reading a book on a screen is often the last thing a child wants to do with their parent's phone. However, integrated audiobooks with professional narration could offer a reliable recurring revenue stream to support continued development of the project.


  • [1] Ritchie, S. J., & Bates, T. C. (2013). Enduring links from childhood mathematics and reading achievement to adult socioeconomic status. Psychological Science, 24(7), 1301–1308.
  • [2] Guthrie, J. T., Wigfield, A., Barbosa, P., Perencevich, K. C., Taboada, A., Davis, M. H., Scafiddi, N. T., & Tonks, S. (2004). Increasing reading comprehension and engagement through Concept-Oriented Reading Instruction. Journal of Educational Psychology, 96(3), 403–423.
