Inspiration
When signing up for the Databricks LLM Cup 2023, we knew we wanted to tackle a real-life use case. From the beginning, we reached out to our various departments to see whether any processes could be made more efficient and less time-consuming, or whether we could help the departments make more data-driven decisions.
We found that Hafnia's Bunker Department (the department purchasing fuel for our vessels) had a high-impact use case: an application that would help them enter negotiations better informed.
What it does
The Marvis Bunker AI Assistant serves multiple functions in our Bunker Department's negotiation process.
It consists of four pages:
- The Home page, where the Bunker Trader gets a quick summary of the latest news relevant to them and can filter the news by topics of interest
- The Marvis Chat, where the Bunker Trader can chat with Marvis about the dataset that forms the foundation of the application
- The Price Simulation, where the Bunker Trader can simulate quotes from a supplier and get predictions from our machine learning model on the final price they might achieve with each supplier. Based on this price, Marvis can also help the Bunker Trader compose communication to their suppliers, as an email or a WhatsApp message, to continue negotiations or to accept or decline offers
- The Supplier Comparison, where the Bunker Trader can compare suppliers across ports around the world to decide which supplier to approach first, based on who is likely to offer the best price or the best discount. They also get an individual recommendation for each supplier describing how it performs in that port compared to the others
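To illustrate the Price Simulation flow, here is a minimal sketch of how a model prediction could be handed to the LLM to draft a supplier message. The function, field names, and prompt wording are hypothetical, not the actual implementation:

```python
def draft_negotiation_prompt(supplier, port, quoted_price, predicted_price,
                             channel="email"):
    """Build an LLM prompt asking Marvis to draft a negotiation message.

    `predicted_price` is the final price the ML model expects to be
    achievable with this supplier; the prompt asks the LLM to use it
    as the negotiation target.
    """
    return (
        f"You are Marvis, a bunker procurement assistant.\n"
        f"Draft a polite {channel} to {supplier} in {port}.\n"
        f"They quoted {quoted_price:.2f} USD/mt; our model predicts a "
        f"final price of {predicted_price:.2f} USD/mt is achievable.\n"
        f"Ask them to improve their offer toward that target."
    )
```

The returned string would then be sent to the chosen LLM (GPT-4 or Llama-2) as the user prompt.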
How we built it
We used what we learned about Large Language Models (LLMs) from an earlier internal app (a "simple" GPT-3.5 chatbot) and built this new LLM app in just three weeks.
We started on November 6th. Our team worked closely with the Bunker team, trying different approaches to see how the app could best support their work.
Once we knew what we wanted to build, we put the plan on our Kanban board and organized it as we usually do in a sprint.
Challenges we ran into
- Finding a relevant topic
- Discovering, integrating, processing, and cleaning data from a new data source
- Learning about data science (we had not used MLflow or any ML model before this)
- Understanding the LLM options (serving models, inference, GPU clusters...) and when to apply each
- Learning and using LangChain
- Working with Llama-2, which is not as easy to use as Azure OpenAI's GPT-3.5 or GPT-4 models (token limitations, and starting a serving endpoint takes around one hour)
- Optimizing prompts
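The token limitation mentioned above forced us to keep prompts within the model's context window. A crude sketch of the kind of workaround involved, using the rough rule of thumb of about four characters per English token (a real tokenizer would be exact; all names and defaults here are illustrative):

```python
def fit_token_budget(context, max_tokens=4096, reserved_for_answer=512,
                     chars_per_token=4):
    """Crudely truncate `context` so that prompt plus answer fit in the
    model's context window.

    Uses an approximate characters-per-token ratio rather than a real
    tokenizer, so the result is a conservative estimate only.
    """
    budget_chars = (max_tokens - reserved_for_answer) * chars_per_token
    if len(context) <= budget_chars:
        return context
    # Keep the most recent part of the context, which is usually
    # the most relevant in a chat setting.
    return context[-budget_chars:]
```

In practice one would use the model's own tokenizer for an exact count; this sketch only conveys the shape of the problem.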
Accomplishments that we're proud of
We are proud to deliver an application with features that can be genuinely useful to our colleagues from the Bunker department, helping them speed up their negotiation processes and get the best deals possible. We successfully harnessed the capabilities of LLMs to perform sentiment analysis on textual and structured data and to craft coherent, contextually relevant text for our application.
Furthermore, our team takes pride in the successful integration of predictive models from the Databricks Model Serving Endpoint to generate tailored recommendations, expanding the functionality of our application beyond LLMs.
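For context, querying a Databricks Model Serving endpoint is a plain REST call: a JSON body in the `dataframe_split` format is POSTed to the endpoint's invocations URL with a bearer token. A hedged sketch of that pattern (the endpoint URL, token, and column names are placeholders, not our actual setup):

```python
import json


def build_invocation_payload(columns, rows):
    """JSON body in the `dataframe_split` format accepted by a
    Databricks Model Serving endpoint."""
    return {"dataframe_split": {"columns": columns, "data": rows}}


def score(endpoint_url, token, columns, rows, timeout=60):
    """POST the payload to a live serving endpoint.

    Requires a real workspace endpoint URL and a valid personal
    access token, so this function is not runnable as-is.
    """
    import requests  # third-party; only needed for the live call
    resp = requests.post(
        endpoint_url,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        data=json.dumps(build_invocation_payload(columns, rows)),
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["predictions"]
```

The `predictions` field of the response carries the model output, which our application then feeds into the recommendation views.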
Feedback from our Head of Bunker Procurement: "This could save a company millions of dollars on a yearly basis and improve the industry as a whole"
What we learnt
Throughout the experience, we have gained valuable insights into the capabilities and potential applications of the LLMs.
We have learnt how to harness the power of LLMs for tasks such as summarization and sentiment analysis, enabling us to generate personalized recommendations and explore data in our Unity Catalog with questions asked in plain English.
This experience has shown us the versatility of LLMs in deciphering sentiment within a mix of textual and structured data. We also discovered the value of combining our prediction models and LLMs through the Databricks Model Serving Endpoint to achieve performance tailored to our specific use cases.
Our overall experience underscored the importance of a holistic approach, combining the usage of LLMs with other methodologies such as predictive models to unlock new dimensions for data analytics, innovation, and efficiency.
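The plain-English exploration mentioned above boils down to prompting the LLM with a table's schema and asking it to produce SQL. A minimal sketch of that idea, with a hypothetical table and columns (not our actual catalog):

```python
def question_to_sql_prompt(question, table, schema):
    """Ask the LLM to translate a plain-English question into SQL
    against a Unity Catalog table whose schema we supply inline."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in schema)
    return (
        f"You are a SQL assistant. The table `{table}` has columns: {cols}.\n"
        f"Write one Spark SQL query answering: {question}\n"
        f"Return only the SQL, no explanation."
    )
```

The generated SQL would then be executed against the catalog and the result shown to the Bunker Trader in the chat.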
What's next for HAFNIA Databricks LLM Cup 2023
In our quest to be viewed not as a cost center but as an internal partner, the Data and Analytics team will use this LLM Cup video and app (which we will deploy in a test environment) to promote the use of LLMs and ML internally, and hopefully to start 2024 with workshops that educate and empower the business to make even more data-driven decisions.
The Bunker department has reviewed the video, sparking new ideas. We are committed to evolving this demo application into a fully functional real-world application.
Built With
- azure
- bing-search-api
- databricks
- gpt-4
- langchain
- llama2
- ml
- openai
- python
