Inspiration
As many ML scientists have proclaimed, 2023 was the year of the RAG (Retrieval-Augmented Generation) architecture. Despite this, it's still very inaccessible to everyday users. With RAG-U, I bring the power of RAG closer to everyone.
What it does
RAG-U lets you upload PDFs or paste links to any kind of public material and, with a single button click, builds a RAG pipeline over them so the LLM can answer questions based specifically on what you provided. It's like having a ChatGPT tuned just for your documents!
How we built it
We used LangChain to orchestrate the LLM, FAISS (Facebook AI Similarity Search) as the vector store, and Taipy for the user interface.
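As a rough sketch of the retrieval step at the heart of that stack (a hypothetical toy embedding stands in for a real embedding model, and a NumPy dot product stands in for FAISS, which does the same nearest-neighbor search at scale):

```python
import numpy as np

# Hypothetical toy "embedding": character-frequency vectors. The real app
# uses an embedding model via LangChain; this just makes the sketch runnable.
def embed(text: str) -> np.ndarray:
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Document chunks (in RAG-U these come from the uploaded PDFs and links).
chunks = [
    "Taipy builds the user interface.",
    "FAISS indexes the embedded chunks.",
    "LangChain orchestrates the LLM calls.",
]
index = np.stack([embed(c) for c in chunks])  # FAISS plays this role at scale

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the question."""
    scores = index @ embed(question)  # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]
```

The retrieved chunks are then inserted into the LLM prompt as context, which is what lets the model answer from your documents rather than from its training data.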
Challenges we ran into
RAG is a fairly new pattern, and it had not been done with Taipy before, so a lot of testing was involved to find the right way to connect the back end to the front end.
Accomplishments that we're proud of
It works, and it's deployed! It can do pretty much everything it needs to do. It also has a moderation layer, which prevents malicious prompts from being passed down into the app.
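A minimal sketch of what such a prompt gate can look like (a hypothetical regex blocklist; the deployed app's actual moderation rules are its own and may well be model-based rather than pattern-based):

```python
import re

# Hypothetical patterns a moderation gate might reject.
BLOCKED_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"system prompt",
    r"jailbreak",
]

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any known-malicious pattern."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

def guarded_ask(prompt: str) -> str:
    """Only pass moderated prompts on to the RAG chain."""
    if not is_allowed(prompt):
        return "Sorry, that prompt was blocked by moderation."
    # ...otherwise hand the prompt to the retrieval + LLM pipeline...
    return f"(answering) {prompt}"
```

The key design point is that the check runs before the prompt ever reaches the retrieval or LLM stage, so blocked inputs never touch the rest of the app.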
What we learned
We learned a lot about LangChain and the RAG architecture.
What's next for RAG-U
Next, we'll expand the capabilities of RAG-U: make it even more universal, more powerful, and more useful for everyday folks like me!
Built With
- faiss
- langchain
- python
- taipy

