Inspiration

In an age where movies and media envelop us in rich sensory experiences, why should books remain silent? With vast libraries awaiting discovery, we envisioned a world where reading is not just visual, but auditory, evoking deeper emotions and engagement. TIM doesn't just make books exciting; it turns every page into a vivid soundscape, making literature truly come alive for everyone.

What it does

TIM connects the world of literature with the immersion of audio. Using sentiment analysis, its recommendation engine maps the text being read on screen to a particular Spotify track and plays it live as you read.
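One way to picture the idea is as a nearest-neighbor match between a passage's sentiment and a track's audio features. The sketch below is a minimal illustration, not our actual model: the lexicon values, feature numbers, and track names are all hypothetical placeholders standing in for the real sentiment model and Spotify audio-feature data.

```python
import math

# Hypothetical sentiment lexicon: word -> (valence, energy), both in [0, 1].
LEXICON = {
    "storm": (0.2, 0.9), "dark": (0.2, 0.4), "joy": (0.9, 0.7),
    "calm": (0.7, 0.2), "battle": (0.3, 0.95), "love": (0.9, 0.5),
}

# Hypothetical candidate tracks with Spotify-style audio features.
TRACKS = [
    {"name": "Tempest",   "valence": 0.15, "energy": 0.90},
    {"name": "Sunrise",   "valence": 0.85, "energy": 0.60},
    {"name": "Stillness", "valence": 0.70, "energy": 0.15},
]

def score_passage(text):
    """Average the lexicon scores of known words; neutral if none match."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return (0.5, 0.5)
    return (sum(v for v, _ in hits) / len(hits),
            sum(e for _, e in hits) / len(hits))

def recommend(text):
    """Return the track whose (valence, energy) is nearest the passage score."""
    v, e = score_passage(text)
    return min(TRACKS,
               key=lambda t: math.dist((v, e), (t["valence"], t["energy"])))
```

A stormy battle scene would score low valence and high energy, landing on the intense track; the real engine replaces the toy lexicon with a trained sentiment model and the toy track list with Spotify's catalog.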

How we built it

Like many projects, TIM has two major sides. The front end, which handles signing in with Spotify, selecting and reading books, and listening to music, was built with ReactJS and JavaScript. The other side is our API, which runs our artificial-intelligence model to select suitable Spotify music from text paragraphs of books.

Challenges we ran into

Building this was not without its challenges. First, we found that obtaining training data is hard: there is not a lot of labeled music data on the internet, so we combined multiple resources (Spotify's audio features, the MuSe sentiment dataset, and GPT's knowledge of translating emotions to musical features). We also encountered issues with pagination: we wanted the flexibility to capture a user's reading speed and pattern, which existing ePub plug-ins don't allow us to do, so we created our own pagination to obtain more information on reading progress.
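To illustrate why custom pagination matters: once you control when each page is shown, estimating reading speed is just a matter of timestamping page turns. The class below is a hypothetical sketch of that bookkeeping (our actual front end is JavaScript; the names and numbers here are illustrative only).

```python
class ReadingTracker:
    """Hypothetical sketch: estimate reading speed from page-turn timestamps."""

    def __init__(self):
        self.events = []  # list of (timestamp_seconds, words_on_page)

    def page_shown(self, timestamp, word_count):
        """Record that a page with `word_count` words appeared at `timestamp`."""
        self.events.append((timestamp, word_count))

    def words_per_minute(self):
        """Average reading speed over completed pages (None if < 2 events)."""
        if len(self.events) < 2:
            return None
        total_words = sum(w for _, w in self.events[:-1])  # current page unfinished
        elapsed_seconds = self.events[-1][0] - self.events[0][0]
        return total_words / (elapsed_seconds / 60)
```

With an estimate like this, the app can predict which paragraph the reader is in and queue the next track before they get there.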

Accomplishments that we're proud of

After all that, we're super proud to have built a working prototype of a project we were all passionate about in just 24 hours, working through almost the entire hackathon, and we learned a lot!

What we learned

For starters, building an artificial-intelligence model is hard! Even with the help of the MuSe dataset, which has tens of thousands of songs and their emotional features, finding soundtracks and music that genuinely fit the vibe of what the user wants to listen to is quite the challenge, especially in 24 hours. We also noticed that sleep helps you write better code (haha).

What's next for TIM - Text Into Music

So what's next for TIM - Text Into Music? We're planning to expand the project in several directions. We want to build a better model that reflects detailed elements of the text, and to add eye tracking or progress prediction so we know exactly where users are in the text, making moments such as suspenseful or intense scenes even more immersive. We want users to be able to bring their own books. We want to provide personalized recommendations based on users' music preferences and reading patterns, and to improve our music model continuously through user feedback. Creating a storefront for tailored, precision-made audio-literature experiences would be amazing. We'd also love to experiment with generating our own music, rather than only selecting existing tracks, through platforms like Stable Audio as they continue to mature. We believe reading can become the next high-tier immersive media experience, and it all starts with TIM.
