✨ Inspiration
When starting a new project, we often struggle to dig through large pages of documentation about diverse technologies when all we really want is the part relevant to X or Y. We decided to build something that makes this easier: a way to interact with the article directly.
🚀 What it does
Linky combines a RAG (retrieval-augmented generation) AI model with a Pinecone vector database. The user submits a URL, whose contents are embedded and stored in the vector database. From there, the user can select a stored URL and ask Linky a question. Linky retrieves the passages most relevant to the query and generates an accurate response, citing the sources its answer was built from.
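The retrieval step described above can be sketched in a few lines. This is a minimal, self-contained illustration only: the toy vectors and the `retrieve` helper are invented for the example, while the real app uses OpenAI Ada-2 embeddings and a Pinecone index in place of the in-memory array.

```typescript
// A stored chunk of article text, tagged with the URL it came from
// so the final answer can cite its sources.
interface Chunk {
  source: string;
  text: string;
  vector: number[];
}

// Cosine similarity: the ranking metric a vector database like
// Pinecone applies between a query embedding and stored embeddings.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top-k chunks most similar to the query vector; these
// become the context the language model answers from.
function retrieve(query: number[], chunks: Chunk[], k = 2): Chunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector))
    .slice(0, k);
}
```

In production the similarity search runs inside Pinecone rather than in application code; the shape of the flow (embed the question, rank stored chunks, pass the winners plus their source URLs to the generator) is the same.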
🛠️ How we built it
- TypeScript
- Mantine
- Vercel
- Pinecone Vector Database
- OpenAI Ada-2 embeddings
🤯 Challenges we ran into
- Text wrapping issues with retrieved data
- Continuous deployment (the model is not configured on our localhosts, so any change to the generation pipeline had to be deployed before we could test it)
🏆 Accomplishments that we're proud of
- Mobile functionality
- Overall end result
- The amount of new information we learned
- Staying awake for over 24hrs
📖 What we learned
- More about each technology in our tech stack
🔮 What's next for Linky
- Better mobile responsiveness
- Better display of retrieved data
Built With
- figma
- mantine
- pinecone
- typescript
- vercel
- visme