Inspiration

Many online storefronts have integrated chatbots to help shoppers, but Toyota notably lacks one. As these tools become more capable, such agents will only grow in importance, so we thought it would be a good project to attempt a first pass.

What it does

The car dealership of the future enables better online car shopping by replacing the traditional storefront with an LLM-powered chat interface. This interface is more responsive and better at surfacing the niche facts a shopper might care about, advantages amplified by the LLM architecture we've implemented. By conversing with a client, identifying the car they need, and then handing off to human staff for tasks an LLM can't perform, our product offers the same freedom to evaluate cars that a dealership does.

How we built it

The core of the backend is a Python program that uses GPT-4o as the primary LLM, with a retrieval-augmented generation (RAG) pipeline built on LlamaIndex to feed the LLM Toyota-specific information. LlamaIndex lets us focus the model on the data we care about, and its generalizability to both unstructured and structured data allowed us to use large data sets acquired through simple web scraping. We evaluated the pipeline with Arize AI, which showed it is effective at avoiding hallucinations and staying on topic.
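The RAG flow above can be sketched in plain Python: retrieve the Toyota snippets most relevant to a question, then assemble the grounded prompt that would be sent to the LLM. This is an illustrative sketch only; the actual build uses LlamaIndex and GPT-4o, and the toy corpus and word-overlap scoring here are stand-ins for real indexing and embedding retrieval.

```python
def score(question: str, doc: str) -> int:
    """Toy relevance score: count words the document shares with the question."""
    q_words = set(question.lower().split())
    return sum(1 for w in doc.lower().split() if w in q_words)

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(corpus, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Stand-in corpus; the real system scrapes Toyota-specific pages.
corpus = [
    "The Toyota RAV4 is a compact SUV with available all-wheel drive.",
    "The Toyota Corolla is a fuel-efficient compact sedan.",
    "The Toyota Tundra is a full-size pickup truck.",
]
prompt = build_prompt("Which Toyota SUV has all-wheel drive?", corpus)
```

The resulting prompt string is what a real pipeline would pass to the chat model, constraining it to the retrieved facts and reducing off-topic answers.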

We built the frontend using HTML, CSS, JavaScript, React, and Vite, ensuring smooth integration with the backend. These technologies were chosen for their flexibility, performance, and ease of use in creating a responsive and dynamic user interface. We started by designing an introduction message to welcome users, followed by dynamically displaying a Toyota image. The chatbot, placed next to the car options, engages users with personalized questions to help them identify their ideal Toyota model. This interactive and user-friendly experience was a key focus during development.

Challenges we ran into

Nearly every part of this platform posed major challenges, given the time crunch, our unfamiliarity with the systems, and their complexity. The LLM components in particular often didn't integrate smoothly with our front end or with the other LLM tools in the back end.

Accomplishments that we're proud of & what we learned

We've produced a front end that looks polished and a back-end LLM pipeline that very often produces on-topic, effective responses, built with a set of tools we hadn't used before this hackathon. We all learned a lot about generative AI and web development in general, and we look forward to applying these skills in our future projects and careers.

Built With

- python
- gpt-4o
- llamaindex
- arize-ai
- react
- vite
- javascript
- html
- css
