The feel of a physical book has always competed with the convenience of a digital one. Research and user feedback repeatedly showed us that the physical book matters just as much as, if not more than, its digital counterpart, especially for avid readers. The tactile nature of a book, the smell of the paper and the comfort of slightly bending a page to flip it have been lost behind the functionality and convenience of technology, which offers features like research, accessible learning and interactive note-taking. As a generation of dreamers who get to build experiences that make technology part of our reality rather than separate from it, we set out to bridge the gap between physical and digital reading, while keeping the experience learnable and enjoyable for all users.
What it does
Realize is a space that surrounds you with an immersive experience, aligning your focus while you read a physical book by engaging three senses at once. As you read your paperback in an environment customized to your book's genre, you can also use the app's features to find definitions for words in your book, research them further, and explore 3D elements to learn about them in depth. You can place favorite quotes, 3D elements and books into your personal space, the Mind Palace, and view or refer to them as conveniently as a note on your desk. Zone Out is a dedicated space that offers greater calm and focus to people with concentration difficulties, ADHD or autism.
How we built it
We began with brainstorming activities to solidify our concept, then gathered research to identify a problem in the journey of reading a physical book and learning alongside it, as well as the relevance of digital interaction within that journey. Once we found an opportunity, we mapped our user flow, interactions and task lists, and began the build. To avoid anything harsh that would detract from the calming environment, we used a small collection of synthesizers all based on the same altered sine wave, creating sounds that felt like they belonged together. From that point forward, we collected resources, built 3D models for our AR environments, learned new techniques to realize our visions, and set up interactions using Unity and MRTK. We scanned through every library we could find for an OCR solution compatible with both HoloLens and Unity. We ran into multiple challenges building this capability; all that remains is to wire the OCR results into Unity interactions.
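Our sound-design idea, every patch derived from the same altered sine wave so everything feels like one family, can be sketched roughly like this. This is an illustrative Python sketch, not our production audio code; the tanh "alteration" and the layered-fifths pad are stand-ins for the actual waveshaping in our synth patches.

```python
import math

SAMPLE_RATE = 44100

def altered_sine(freq, duration, shape=0.8):
    """One note: a sine wave softly saturated with tanh, so every
    patch shares the same slightly warm base timbre."""
    n = int(SAMPLE_RATE * duration)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = math.sin(2 * math.pi * freq * t)
        # tanh gently rounds the peaks; dividing normalizes the peak back to 1.0
        samples.append(math.tanh(shape * s) / math.tanh(shape))
    return samples

def pad_chord(root_freq, duration=2.0):
    """Derive a calm pad by layering the altered sine at the root,
    a fifth above (x1.5) and an octave above (x2.0)."""
    layers = [altered_sine(root_freq * r, duration) for r in (1.0, 1.5, 2.0)]
    return [sum(vals) / len(layers) for vals in zip(*layers)]
```

Because every variant is built from the same base waveform, the resulting sounds naturally belong together, which is the effect we wanted for the calming environments.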
Challenges we ran into
Connecting to the HoloLens service through the REST API and formatting the request correctly gave us trouble and took multiple attempts. Our biggest technical challenge was OCR: we tried several services and hit major compatibility problems. Finding an OCR library that worked with both HoloLens and Unity took many hours and countless attempts before we finally landed on Azure Computer Vision. One challenge remains with the HoloLens: its camera captures at high resolution, and to work with Azure we have to downscale the image before we can add interactions and make it visible in Unity.

For interactions with the user interface and the environment, we had to take input from the designers and implement it as custom actions within HoloLens, manage the placement of objects in the scene, and implement the affordances. It was also fairly challenging to set up all the SDKs for the Visual Studio builds and to get holographic remoting working reliably.

When creating our 3D models and art assets, the translation from Blender into Unity was buggy, and once imported, the models lost significant quality. Animations posed a similar problem: animation paths set in Blender disappeared on import into Unity, causing constant errors that broke the project.

Creating unique sounds that were simple and pleasing to the ear was essential to the Realize scenes and the MVP environment.

Between designers and developers, organization and communication were vital to keeping the team on the same page. Constant changes on both ends made that a hurdle, but everyone was hardworking and passionate, and we collaborated seamlessly.
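For anyone fighting the same battles, here is a rough sketch of the two fixes that unblocked us: formatting the Azure Computer Vision OCR request correctly (subscription key in a header, raw image bytes in the body) and downscaling camera frames before upload. Our real client runs in Unity/C#; this Python sketch is illustrative, and the endpoint path and 4200 px cap are our assumptions, so check Azure's current documentation.

```python
def build_ocr_request(endpoint, key):
    """Format an Azure Computer Vision OCR request: the subscription key
    goes in a header, and the body is raw image bytes, not JSON."""
    url = f"{endpoint}/vision/v3.2/ocr"
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/octet-stream",  # raw image bytes
    }
    params = {"language": "en", "detectOrientation": "true"}
    return url, headers, params

def downscale_dims(width, height, max_side=4200):
    """Scale HoloLens camera frames so the longer side fits under the
    service's size limit, preserving aspect ratio. 4200 px is an assumed
    cap here -- verify against Azure's current limits."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return int(width * scale), int(height * scale)
```

Computing the target dimensions first lets the capture pipeline resize once, before the frame ever leaves the device, which also keeps upload latency down.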
Accomplishments that we're proud of
Envisioning a platform that fuses the process of reading a physical book with the interactive, learnable convenience of a digital one felt like an overly ambitious goal from the very beginning. But nothing beats the feeling of accomplishing what you didn't realize you could. Looking back on the process, we're proud to have a working environment, working interactions, and image-to-text recognition. Toward the end we managed to find an OCR library that works with both HoloLens and Unity. Compatibility errors made this a challenge: we went through multiple attempts and library implementations, including Windows.Media.Ocr, Tesseract and IronOCR, before we finally found Azure Computer Vision.
What we learned
We learned how to keep our spirits up and push through each problem that came our way. By assessing problems rationally, we never let an issue stop us in our tracks; clarity of mind helped us battle every hurdle, and without a positive mentality our problems would have been far harder to overcome. With a diverse group of people holding vastly different skill sets, our team learned not only the power of communication but also the importance of time management and task division. We put each member's specialty to use by dividing tasks accordingly, and by working in parallel we made progress much faster than if we had all tackled the same issues at once.
What's next for Realize
What we will add to Realize next: we currently have a proof of concept with text recognition and word lookup working in Unity. The next step is to implement the interactions that surface them in HoloLens: we will connect the text data returned by Azure Computer Vision back into the mixed-reality display, attaching each word's pixel coordinates and string to an interaction.

Additional features to be added:
- Explore more themes and immersive environments.
- Share your Mind Palace with your friends.
- Greater spatial audio awareness.
- Create your own spaces for book readings.
- Add books to a queue, with environments based on their themes or genres.
- Customize elements in an environment.
- Add music, or connect your streaming platform, in an environment.
- Select multiple words to look up or research.
- Borrow books and spaces from your friends.
- Offer a range of audiobooks and translated books for greater accessibility.
- Build in translation features for greater accessibility.
- Add speech-to-text to help users read words and understand pronunciations.
- Polish visual design and user interaction.
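The word-to-interaction step described above can be sketched as follows: flatten the OCR response into (text, bounding box) pairs that an interaction handler could consume. This is a Python sketch assuming the shape of the Azure v3.2 /ocr response (regions, then lines, then words, with boundingBox as a "left,top,width,height" string); our actual implementation will be C# in Unity.

```python
def words_with_pixels(ocr_json):
    """Flatten an Azure OCR response into (word, (x, y, w, h)) pairs so
    each recognized word can be bound to a gaze/tap interaction.
    Assumes the v3.2 /ocr response shape: regions -> lines -> words,
    each word carrying a "left,top,width,height" boundingBox string."""
    hits = []
    for region in ocr_json.get("regions", []):
        for line in region.get("lines", []):
            for word in line.get("words", []):
                x, y, w, h = (int(v) for v in word["boundingBox"].split(","))
                hits.append((word["text"], (x, y, w, h)))
    return hits
```

Each pair gives both the string to look up and the pixel region to anchor a hologram or tap target to, which is exactly the data an interaction component needs.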