Team members: Arya Vohra, Taichi Kato, Khush Jammu
After we used iPhones for the first time, we found ourselves tapping everything—looking for the magical feedback that the iPhone gave.
The Kindle brought the paper book into the digital age, giving readers access to a huge library of books on the go. Many of us, however, still prefer physical books: they are simply a more natural, and therefore more pleasant, experience.
So, we took the informative power of the Kindle and brought it into the real world by naturally digitising the reading experience.
What it does
It's simple: tap any word, sentence, or image in a real book to instantly get more information. Tap a word to get its definition, or select a sentence to highlight it, all by touching the page. An iPad, iPhone, or computer beside the book displays the result.
How we built it
A camera feed is piped into openFrameworks via Syphon, where OpenCV processes each frame to track finger movements and detect taps. Our Python backend then uses the frames and tap coordinates to run OCR and recognize the selected word. The result is pushed in real time to a companion client app built with React Native, which displays the relevant information.
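The backend's tap-to-word step can be sketched roughly as follows: OCR (e.g. pytesseract's `image_to_data`) yields a bounding box per word, and the tap coordinate is matched against those boxes. The helper name and box format here are illustrative, not our exact code.

```python
# Sketch: map a tap coordinate to the OCR'd word under it.
# `boxes` mimics rows from pytesseract.image_to_data (keys: text,
# left, top, width, height); `tap` is (x, y) in image coordinates.

def word_at_tap(boxes, tap):
    """Return the word whose bounding box contains the tap, or None."""
    x, y = tap
    for b in boxes:
        if (b["left"] <= x < b["left"] + b["width"]
                and b["top"] <= y < b["top"] + b["height"]):
            return b["text"]
    return None

boxes = [
    {"text": "serendipity", "left": 40,  "top": 100, "width": 120, "height": 18},
    {"text": "ubiquitous",  "left": 170, "top": 100, "width": 110, "height": 18},
]
print(word_at_tap(boxes, (200, 110)))  # -> ubiquitous
```

The matched word is what gets pushed to the client app, which then fetches and shows the definition.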
Challenges we ran into
Tracking fingers reliably was a pain, and setting up a high-quality, responsive image stream from the camera was also quite difficult.
Also, minimizing the number of OCR runs for efficiency was a fun challenge.
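One way to minimize OCR runs, sketched below under assumed names: since OCR is expensive but the page rarely changes between taps, re-run it only when a cheap frame difference exceeds a threshold, and otherwise serve the cached result.

```python
# Sketch: cache OCR results and only re-run when the page visibly changes.
# `run_ocr` stands in for the expensive OCR call; frames are flat lists
# of grayscale pixel values. Names and threshold are illustrative.

class OcrCache:
    def __init__(self, run_ocr, threshold=10.0):
        self.run_ocr = run_ocr      # expensive OCR function
        self.threshold = threshold  # mean absolute pixel diff that triggers a re-run
        self.last_frame = None
        self.result = None
        self.runs = 0               # counts actual OCR invocations

    def get(self, frame):
        if self.last_frame is not None:
            diff = sum(abs(a - b) for a, b in zip(frame, self.last_frame)) / len(frame)
            if diff < self.threshold:
                return self.result  # page unchanged: reuse cached OCR
        self.last_frame = frame
        self.result = self.run_ocr(frame)
        self.runs += 1
        return self.result

cache = OcrCache(run_ocr=lambda f: "page text", threshold=5.0)
cache.get([100] * 64)  # first frame: OCR runs
cache.get([101] * 64)  # tiny diff: cached result reused
print(cache.runs)      # -> 1
```

With this scheme, repeated taps on the same page cost only the frame comparison, and OCR fires again only on a page turn.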
Accomplishments that we're proud of
A whole working pipeline! A really nice UI and snappy experience.
What we learned
While we learned a lot about image processing and networking, we were most excited to get a glimpse of the future of computing, in which computers become ubiquitous and our interactions with them become more natural. Working on this project was a lot of fun and ignited our interest in human-computer interaction. We also learned the value of crafting a magical experience.
What's next for BookWorm
Porting this to something more portable, like Magic Leap, HoloLens, or Facebook's AR glasses.