Inspiration

Inspired by Lucid's troubled car software and the contrasting robustness of competitors' products, this project aims to build a relatively simple but robust system for managing a car's most common functions. With newer LLMs such as Gemini 3 becoming more reliable, there is now room for more dependable and usable natural language support for everyday features.

What it does

By providing tool calling support for frequently used car features, this system lets users communicate with an assistant through voice or text. The assistant then sets car configurations to best address driver comfort and to support passengers throughout the drive.
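The tool calling layer can be pictured roughly as follows. This is a minimal sketch for illustration only: the tool names, parameter ranges, and the `dispatch` helper are assumptions, not the project's actual API.

```python
# Hypothetical tool definitions exposed to the assistant; names and
# parameter ranges are illustrative, not the project's real schema.
TOOLS = [
    {
        "name": "set_cabin_temperature",
        "description": "Set the target cabin temperature in degrees Celsius.",
        "parameters": {"target_c": "number (16-30)"},
    },
    {
        "name": "set_fan_speed",
        "description": "Set HVAC fan speed from 0 (off) to 5 (max).",
        "parameters": {"level": "integer (0-5)"},
    },
]

def dispatch(car_state: dict, name: str, args: dict) -> dict:
    """Apply a tool call chosen by the assistant to the simulated car state,
    clamping values to safe ranges."""
    if name == "set_cabin_temperature":
        car_state["cabin_temp_c"] = max(16, min(30, args["target_c"]))
    elif name == "set_fan_speed":
        car_state["fan_speed"] = max(0, min(5, args["level"]))
    else:
        raise ValueError(f"unknown tool: {name}")
    return car_state

state = {"cabin_temp_c": 22, "fan_speed": 1}
dispatch(state, "set_cabin_temperature", {"target_c": 19})
```

Clamping inside the dispatcher keeps a mis-parsed or over-eager model request from pushing the simulated car into an invalid state.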

How we built it

This project started with a simulated dashboard that models the system status of a real-world car. Integrations that use standardized internal calls for car configuration were then added, allowing user input to directly and indirectly affect vehicle system settings.
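The simulated dashboard state and the "standardized internal call" idea can be sketched like this. The field names and the `set_config` entry point are hypothetical, stand-ins for whatever the project actually exposes.

```python
# Minimal sketch of the simulated dashboard; field names are assumptions
# chosen for illustration, not the real project's state schema.
from dataclasses import dataclass, field

@dataclass
class CarState:
    speed_kph: float = 0.0
    cabin_temp_c: float = 22.0
    headlights_on: bool = False
    windows: dict = field(default_factory=lambda: {"FL": 0, "FR": 0, "RL": 0, "RR": 0})

    def set_config(self, key: str, value) -> None:
        """Standardized internal call: every integration (UI, assistant,
        voice) routes configuration changes through this one entry point."""
        if not hasattr(self, key):
            raise KeyError(f"unknown config: {key}")
        setattr(self, key, value)

dash = CarState()
dash.set_config("headlights_on", True)
```

Funneling every change through one call makes it easy to let the assistant's tool calls and the simulated UI share the same code path.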

System prompt tuning was then applied to improve model behavior: for example, anticipating user needs from indirect conversation topics, and using the current vehicle status and broader context to adjust onboard configurations for passenger comfort and safety.
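A system prompt along these lines can encode that anticipation behavior. The wording below is illustrative, not the exact prompt used, and `build_messages` is a hypothetical helper showing how status context might be injected per request.

```python
# Illustrative system prompt; not the project's actual prompt text.
SYSTEM_PROMPT = """You are an in-car assistant.
- Use the provided tools to change vehicle settings; never claim a change you did not make.
- Anticipate needs from indirect topics: if a passenger mentions being cold, offer to raise the cabin temperature.
- Consider the current vehicle status (speed, weather, time of day) before adjusting comfort or safety settings.
- Keep spoken replies short so the driver stays focused on the road."""

def build_messages(status: dict, user_text: str) -> list:
    """Prepend the system prompt plus the live car status to each request,
    so the model grounds its adjustments in current conditions."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT + f"\n\nCurrent status: {status}"},
        {"role": "user", "content": user_text},
    ]
```

Injecting the status snapshot into the system message on every turn is one simple way to keep the model's suggestions consistent with what the simulated car is actually doing.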

Challenges we ran into

Building a fully functional simulated dashboard with decent coverage and basic working logic of a car user interface is harder than it looks. Modern cars are complicated, and the simulator covers only a core subset of system controls. Expanded functionality is certainly possible, but would take far more work than this simplified prototype.

Speech to text is also a challenge when a high-performance system is out of reach: the current system is flawed and often not very accurate. This should be less of an issue in a production environment where API costs matter less. For now, this prototype uses a lower-performance speech-to-text model as a proof of concept, and text input does a good job of demonstrating system stability.
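The voice-with-text-fallback path described above can be sketched as follows. Here `transcribe` is a placeholder for whatever speech-to-text backend is available; it is an assumption for illustration, not a real API.

```python
# Sketch of the input path: prefer voice, fall back to typed text when the
# (cheap) speech-to-text backend fails or returns nothing.
def get_user_input(audio, typed, transcribe=lambda a: None) -> str:
    """Return the user's request as text, from voice if possible.

    `transcribe` stands in for the speech-to-text backend and is expected
    to return a string on success or None/"" on failure.
    """
    if audio is not None:
        text = transcribe(audio)
        if text:  # transcription succeeded and is non-empty
            return text
    if typed:
        return typed
    raise ValueError("no usable input")
```

Keeping text input as a first-class fallback is what lets the demo stay stable even when the low-cost speech-to-text model misfires.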

Accomplishments that we're proud of

The natural language interface is surprisingly accurate, and just a few hours spent on tuning turned the assistant into a fairly capable system that, in my opinion, reduces the friction of human-to-car interaction. Its humanized, logical responses offer a more intuitive experience for the driver and passengers, and I believe systems like this will become more widely used in modern cars.

What we learned

A complete and polished system is difficult: the effort and time one person could put into this project is not enough to deliver even a small feature set that covers all user scenarios. There are edge cases that behave slightly unexpectedly and are hard to cover fully, and introducing an LLM creates many more of them, which can be a hassle for end users even with high-performance models. I have come to further understand that a good model is not a single end-all solution for every problem; real development effort is still needed to make highly capable, production-ready systems.

What's next for Car infotainment

I believe future developments in car infotainment systems have a lot of potential. Current development trends focus on cost cutting, trading away physical components for a single screen. But that means complicated software, and often buggy, unreliable behavior in life-threatening situations in a highway-speed vehicle. Future systems of this kind should be highly automated, with controls that better tend to drivers and passengers, and LLM technology can help here; I fully anticipate its presence. As HCI technology advances, more controls should become effortless, and car infotainment development will benefit from those advances, starting with a more intuitive assistant that reduces distracted driving and makes the journey more relaxing.
