Inspiration

We were inspired by the learning vertical and wanted to tackle a problem in education. The idea for subAI came from our own experiences struggling to understand lectures and having difficulty finding help online. We wanted to fix this by building a way to get custom answers to questions about lecture content.

What it does

subAI lets you upload an audio file, which we transcribe to text and store in a vector database. The user can then ask questions about the lecture, and the program fetches the most similar sentences from the transcript and performs retrieval-augmented generation (RAG) to produce a custom response based on the user's question and the lecture material.
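The "fetch the most similar sentences" step can be illustrated with a toy similarity search. This is a minimal sketch, not the real Weaviate query: it uses bag-of-words counts and cosine similarity in place of learned embeddings, and all function names and lecture sentences here are made up for illustration.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real app stores model vectors in Weaviate.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_k(question: str, sentences: list[str], k: int = 2) -> list[str]:
    # Rank lecture sentences by similarity to the question; the top hits
    # become the context handed to the generator model for RAG.
    q = embed(question)
    return sorted(sentences, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]

lecture = [
    "Photosynthesis converts light energy into chemical energy.",
    "The mitochondria is the powerhouse of the cell.",
    "Chlorophyll absorbs light in the red and blue wavelengths.",
]
context = top_k("What wavelengths of light does chlorophyll absorb?", lecture)
print(context[0])  # the chlorophyll sentence ranks first
```

In the real app, the retrieved sentences are then passed alongside the question to the language model, which generates an answer grounded in the lecture rather than from general knowledge.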

How we built it

We built it with a React front end connected to a FastAPI back end and a Weaviate vector database. We used OpenAI's Whisper for transcription alongside a Cohere model for RAG.
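At a high level, the back end pipeline is: transcribe the upload, split and index the transcript, then retrieve and generate on each question. Here is a sketch of that wiring with every external call stubbed out; the function names, canned transcript, and keyword matching are hypothetical stand-ins, since the real back end calls Whisper, Weaviate, and Cohere:

```python
def transcribe(audio_path: str) -> str:
    # Stub: the real back end runs OpenAI's Whisper on the uploaded audio file.
    return "Photosynthesis converts light energy. Chlorophyll absorbs light."

def index_transcript(transcript: str) -> list[str]:
    # Stub: the real back end embeds each sentence and upserts it into Weaviate.
    return [s.strip() + "." for s in transcript.split(".") if s.strip()]

def answer(question: str, index: list[str]) -> str:
    # Stub: the real back end vector-searches Weaviate for similar sentences,
    # then has a Cohere model generate an answer grounded in them (RAG).
    hits = [s for s in index if any(w in s.lower() for w in question.lower().split())]
    return " ".join(hits) or "No relevant lecture content found."

index = index_transcript(transcribe("lecture.mp3"))
print(answer("What absorbs light?", index))
```

The React front end only talks to the FastAPI endpoints that wrap these steps, which is what kept the transcription, storage, and generation concerns separate.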

Challenges we ran into

We tried to use Llama to generate responses alongside custom quiz questions, but we ran out of time to finish it. We also had trouble connecting our front end to the back end to display the transcribed text, which we managed to fix near the end. Lastly, we hit a few minor Git merge conflicts, but we resolved them quickly.

Accomplishments that we're proud of

We are proud that we connected and integrated all parts of our project, even though we didn't have time to add more user-app interaction to the final product.

What we learned

I did not have much experience connecting React to a back end and a database, so this was the first time I was able to do that.

What's next for subAI

Next, we want subAI to provide quiz questions and suggested prompts for the user, building toward stronger user-app interaction. After that, we want to support more types of content, such as graphics, equations, and even code.

Built With

react, fastapi, weaviate, whisper, cohere