Inspiration

The "conquer the world" prompt.

What it does

Doof Chat is a homework assistance tool with a simple twist. While humorous, Doof Chat can also be a powerful aid when it comes to accomplishing your homework goals.

How we built it

Doof Chat is composed of three main components:

1. Web interface:

Doof Chat uses a Next.js user interface that lets the user interact with our DigitalOcean HTTP server.

2. DigitalOcean Server:

We designed a Dockerized HTTP server to field requests from the user interface. Once a request is received, the server passes it over a socket connection to a desktop running an RTX 4070, where the user's prompt is processed and the model's response is returned.
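The relay step above can be sketched roughly as follows. This is a minimal illustration of the proxy pattern, not our actual server code; the host, port, and JSON message shape are hypothetical, and messages are length-prefixed so the receiver knows where each one ends:

```python
import json
import socket

GPU_HOST = "gpu-desktop.example.com"  # hypothetical address of the 4070 machine
GPU_PORT = 5000                        # hypothetical port

def frame(message: dict) -> bytes:
    """Length-prefix a JSON message with a 4-byte big-endian size header."""
    payload = json.dumps(message).encode("utf-8")
    return len(payload).to_bytes(4, "big") + payload

def relay_prompt(prompt: str) -> str:
    """Forward the user's prompt to the GPU machine and return the model reply."""
    with socket.create_connection((GPU_HOST, GPU_PORT)) as conn:
        conn.sendall(frame({"prompt": prompt}))
        size = int.from_bytes(conn.recv(4), "big")
        reply = b""
        while len(reply) < size:  # recv may return partial data; loop until done
            reply += conn.recv(size - len(reply))
    return json.loads(reply.decode("utf-8"))["response"]
```

The HTTP handler on the DigitalOcean box would call `relay_prompt` and return the result to the browser.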

3. LLM:

The LLM for this project is LLaMA 3.1 8B quantized to Q8, run with Ollama. We built a fully functional model with RAG capabilities, allowing you to upload documents for more contextually grounded results. Unfortunately, due to Netlify's 10-second Lambda function timeout, we were unable to keep this feature enabled.
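Talking to the model through Ollama boils down to one POST against its local REST API. The sketch below uses only the standard library; the model tag (`llama3.1:8b`) and the plain-text way of prepending RAG context to the prompt are assumptions, not our exact implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def build_request(prompt: str, context_docs=None) -> dict:
    """Assemble a non-streaming generate request; uploaded RAG documents
    (if any) are prepended to the prompt as plain text."""
    if context_docs:
        prompt = "Context:\n" + "\n".join(context_docs) + "\n\nQuestion: " + prompt
    return {"model": "llama3.1:8b", "prompt": prompt, "stream": False}

def ask_doof(prompt: str, context_docs=None) -> str:
    """Send the request to Ollama and return the model's text response."""
    body = json.dumps(build_request(prompt, context_docs)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```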

Challenges we ran into

Netlify's 10-second function timeout, which forced us to drop the RAG feature from the deployed version.

Accomplishments that we're proud of

Our team is very proud of our teamwork and what we've been able to achieve together. We're very happy with the performance of our model, and we're also proud of the networking solution we implemented to make this all possible.

What we learned

We learned that when you need to access a private machine to leverage its computational power, you can simply set up a server that acts as a proxy between the user and the GPU machine. We also learned how to implement RAG with Ollama while also utilizing character-based output from our LLM.
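The retrieval step of RAG reduces to a similarity search over document embeddings. A toy sketch of that step, assuming the embedding vectors have already been produced (in practice by an embedding model served through Ollama); only the ranking math is shown:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_relevant(question_vec, doc_vecs):
    """Return the index of the document embedding closest to the question."""
    return max(range(len(doc_vecs)), key=lambda i: cosine(question_vec, doc_vecs[i]))
```

The winning document's text is then prepended to the prompt before it is sent to the model.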

What's next for Doof Chat

Re-implementation of RAG functionality, LLM optimization, persistent chats, the whole nine yards.
