Inspiration
Making sense of all the information presented during online lectures can be difficult. We often find ourselves wanting to ask our teachers questions, but instead have to ask our peers to fill in the gaps or look things up on the internet. These sources are often unavailable or unreliable, and frequently add to the noise rather than clarify confusion.
What it does
Enter the Virtual TA. Our application provides an overlay that lets students record audio from any source, whether a Zoom lecture or a YouTube video essay, and live-transcribes that audio into dynamically updated notes and a knowledge base the student can query through an AI chat interface. It is simple and easy to use, with the chatbot and notes readily accessible through an on-screen overlay.
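The "live notes" half of that pipeline can be sketched as folding a stream of timestamped transcript chunks into note segments. This is an illustrative sketch, not our actual implementation; the function and field names (`segmentTranscript`, `startSec`, `pauseThresholdSec`) are hypothetical, and it assumes the transcriber emits small timestamped text chunks:

```javascript
// Sketch: fold a stream of timestamped transcript chunks into note
// segments, starting a new segment whenever there is a long pause.
// All names here are illustrative, not the app's real API.
function segmentTranscript(chunks, pauseThresholdSec = 10) {
  const segments = [];
  let current = null;
  for (const { text, startSec } of chunks) {
    const isNewSegment =
      current === null || startSec - current.endSec > pauseThresholdSec;
    if (isNewSegment) {
      current = { startSec, endSec: startSec, lines: [] };
      segments.push(current);
    }
    current.lines.push(text);
    current.endSec = startSec;
  }
  // Flatten each segment's lines into one note entry.
  return segments.map((s) => ({
    startSec: s.startSec,
    text: s.lines.join(" "),
  }));
}
```

Each resulting segment can then be rendered as a note entry in the overlay and embedded into the knowledge base for the chatbot.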
How we built it
We developed the application using Electron, React, and Node.js. We used Pinecone as our vector database to store embeddings of the transcribed material, which form the chatbot's knowledge base, and LangChain to build LLM agents. MongoDB stores the application's remaining data.
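The retrieval step behind the chatbot can be illustrated without the external services. In the real app the embeddings come from an LLM embedding model and are queried through Pinecone; the sketch below shows only the underlying idea, nearest-neighbour search by cosine similarity over chunk embeddings, using toy 3-d vectors (all names here are hypothetical):

```javascript
// Cosine similarity between two equal-length numeric vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top-k transcript chunks most similar to the query
// embedding; in production this lookup is what Pinecone performs.
function retrieve(queryEmbedding, chunks, k = 2) {
  return chunks
    .map((c) => ({ ...c, score: cosineSimilarity(queryEmbedding, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The retrieved chunks are then passed to the LLM as context so it can answer the student's question from the lecture material rather than from guesswork.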
Challenges we ran into
This was our first time using most of these technologies, and integrating them all posed significant compatibility challenges.
Accomplishments that we're proud of
We all learned how to render components in Electron, having only had experience with React for the web beforehand. We were also able to use LLMs to solve two interesting tasks: generating live notes and answering questions over the transcript.
What we learned
We learned a lot about good project design choices and how to work together as a team to integrate many moving parts into a successful application.
What's next for Virtual TA
We're not sure yet; we're going to sleep on it.
Built With
- javascript
- llm
- node.js
- python
- react
- sound