## Inspiration

## What it does
The chatbot tries to infer the client's values, and from those their needs, through conversation. For that purpose, it applies marketing models such as the means-end chain together with established psychological sales tactics. Each electric car model is scored in 7 categories; the chatbot assigns a weight to each category based on the client's inferred values and needs, computes a weighted sum (rating) for each EV model, and gives recommendations based on that rating.
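The rating step is a plain weighted sum. A minimal sketch follows; the category names, weights, and scores are illustrative placeholders, not the project's actual data:

```python
def rate_ev(category_scores, weights):
    """Weighted-sum rating: each EV has a 1-10 score per category,
    and the weights reflect the client's inferred values and needs."""
    return sum(weights[c] * s for c, s in category_scores.items())

# Hypothetical weights for a range- and safety-focused client (sum to 1.0).
weights = {"range": 0.3, "safety": 0.3, "comfort": 0.1, "price": 0.1,
           "performance": 0.1, "design": 0.05, "charging": 0.05}

# Hypothetical 1-10 scores for one EV model across the 7 categories.
model_a = {"range": 8, "safety": 9, "comfort": 7, "price": 5,
           "performance": 6, "design": 7, "charging": 8}

print(rate_ev(model_a, weights))  # higher rating -> stronger recommendation
```

Computing the rating in ordinary code like this sidesteps the LLM arithmetic issues described under Challenges.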
## How we built it
We provide a pre-trained LLM with a system prompt; after that, the client can start chatting with it.
At the beginning of the prompt, we ask the chatbot to mirror the client's style of communication. It is also told to follow a general "yes" strategy, avoiding as far as possible questions that might result in a negative answer.
We also specify the goal and general strategy for the conversation, along with the scores we defined for each category of every EV model, and include a test-drive link for each car.
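Assembled as text, such a prompt might look roughly like the following sketch. The wording, category names, model names, and URLs are all placeholders, not the actual prompt:

```python
# Hypothetical EV data: 1-10 category scores plus a test-drive link per car.
ev_models = {
    "Model X1": {"scores": {"range": 8, "safety": 9},
                 "test_drive": "https://example.com/test-drive/x1"},
    "Model Y2": {"scores": {"range": 6, "safety": 8},
                 "test_drive": "https://example.com/test-drive/y2"},
}

def build_system_prompt(models):
    """Assemble the behavioral instructions and per-model data into one prompt."""
    lines = [
        "Mirror the client's style of speech.",
        "Use a yes-strategy: avoid questions that invite a negative answer.",
        "Goal: infer the client's values and needs (means-end chain), "
        "then recommend an EV.",
        "Use ONLY the 1-10 category scores below; do not quote real-world specs.",
    ]
    for name, info in models.items():
        scores = ", ".join(f"{c}: {s}" for c, s in info["scores"].items())
        lines.append(f"{name} - {scores} - test drive: {info['test_drive']}")
    return "\n".join(lines)

print(build_system_prompt(ev_models))
```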
## Challenges we ran into
We found that llama-3-70b-instruct, gpt-4-turbo-2024-04-09, and gemini-1.5-pro-api-0409-preview work quite well with our prompt. However, the first two models include specific specs/metrics of EV models in their responses despite these not being in our prompt (we used a 1-10 rating for each category instead of concrete metrics such as top speed in km/h). We suspect this is due to similar but outdated information in the training data.
Another problem is that the G-Class is not regarded as an EV by the models, probably due to the same issue; we had to state explicitly in the prompt that it is one to ensure correct behavior.
Calculating the rating for the final recommendation requires arithmetic (a weighted sum of the values from the 7 categories), but many models are not reliable at this task; larger models such as gpt-4-turbo-2024-04-09 handle it better.
We ran into issues building our own chatbot with the Hugging Face API, specifically with feeding user input into the chat for-loop. We are still working on resolving them.
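A minimal chat loop of the kind we were building can be sketched as below. The function names are hypothetical, and the model call is isolated behind a `generate` callable (in the real version this would call a hosted model via the Hugging Face API) so the loop logic runs offline:

```python
def run_turn(history, user_message, generate):
    """One chat turn: append the user message, query the model via `generate`,
    append the assistant reply to the history, and return the reply."""
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

def fake_generate(history):
    # Stand-in for the LLM call so the loop can run without network access.
    return f"(model saw {len(history)} messages)"

history = [{"role": "system", "content": "You are an EV sales assistant."}]
print(run_turn(history, "Hi, I'm looking for a family car.", fake_generate))
```

Keeping the full message history and re-sending it each turn is what gives the chatbot conversational memory.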
## Accomplishments that we're proud of
Our prompt induced the desired behavior in at least the following models: llama-3-70b-instruct, gpt-4-turbo-2024-04-09, and gemini-1.5-pro-api-0409-preview.
llama-3-70b-instruct worked especially well, as our demo video shows. The responses generated by gemini-1.5-pro-api-0409-preview are also concise and to the point, and do not suffer from the problems described under Challenges we ran into.