💡 Inspiration

Traffic accidents are a global crisis. Distracted driving, poor visibility, and lack of real-time information are major causes. We asked ourselves: "What if every car, even an old one, could have a Jarvis-like co-pilot?"

That's how BOLT was born. We wanted to democratize advanced driver-assistance systems (ADAS) by turning a simple Raspberry Pi into a powerful AI Co-pilot.

🧠 What it does

BOLT is an intelligent Head-Up Display (HUD) system that:

  1. Sees: Uses Computer Vision (YOLOv8) to detect hazards like trucks, pedestrians, and traffic lights in real time.
  2. Speaks: Interacts with the driver using a hyper-realistic AI persona named Anthony (powered by ElevenLabs).
  3. Thinks: Uses Google Gemini 2.0 Flash to understand context (e.g., "Is it safe to drive fast in this weather?").
  4. Guides: Analyzes traffic data via Google Maps API to suggest fuel-efficient routes.

⚙️ How we built it

  • Hardware: Raspberry Pi 5, Webcam (Thronemax), Mini Speaker, OBD-II Adapter.
  • AI Core: Google Vertex AI (Gemini 2.0 Flash) for reasoning and context awareness.
  • Voice Engine: ElevenLabs API for natural, low-latency speech synthesis (the voice of Anthony).
  • Vision: YOLOv8n running on OpenCV for object detection.
  • Data: Google Maps Directions API (Traffic) + OpenWeatherMap API.
  • Software: Python 3.12, Pygame (HUD Interface), Vosk (Offline Speech-to-Text).
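In outline, the software ties these pieces together in a simple listen → think → speak loop. The functions below are hypothetical stubs standing in for the real Vosk, Gemini, and ElevenLabs calls, just to show the shape of one conversational turn:

```python
def listen():
    # Stub for Vosk offline speech-to-text: returns the driver's words.
    return "is it safe to drive fast in this weather"

def think(utterance, context):
    # Stub for Gemini 2.0 Flash via Vertex AI: reasons over live context.
    return f"Given {context['weather']}, keep it under 60 km/h."

def speak(reply):
    # Stub for ElevenLabs TTS in Anthony's voice.
    return f"[Anthony] {reply}"

def copilot_turn(context):
    # One turn of the HUD's voice loop: hear the driver, reason with
    # live context (weather, detected hazards), respond as Anthony.
    utterance = listen()
    reply = think(utterance, context)
    return speak(reply)

print(copilot_turn({"weather": "heavy rain", "hazards": ["truck"]}))
```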

🚧 Challenges we ran into

  • Latency: Combining cloud AI (Gemini) with cloud TTS (ElevenLabs) created noticeable lag. We solved this by using Vosk for offline listening and running the API calls in parallel threads.
  • Hardware Constraints: Running YOLO on the Pi's CPU was slow. We optimized the inference loop to prioritize trucks and skip frames intelligently.
  • ElevenLabs Integration: We had to handle API quota limits and network timeouts gracefully to ensure the driver always gets a response.
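The parallel-threads fix can be sketched with Python's standard `concurrent.futures`. The two functions below are simulated stand-ins (a `time.sleep` for the network round-trip) rather than the real Vertex AI and weather calls, but the pattern is the same: both requests fire at once, so total latency is roughly the slowest call, not the sum of both:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def ask_gemini(prompt):
    # Simulated cloud round-trip (real version would call Vertex AI).
    time.sleep(0.2)
    return f"answer to: {prompt}"

def fetch_weather(city):
    # Simulated cloud round-trip (real version would call OpenWeatherMap).
    time.sleep(0.2)
    return f"clear skies in {city}"

def gather_context(prompt, city):
    # Submit both requests at once; total wait ~= max(calls), not sum.
    with ThreadPoolExecutor(max_workers=2) as pool:
        answer = pool.submit(ask_gemini, prompt)
        weather = pool.submit(fetch_weather, city)
        return answer.result(), weather.result()

start = time.perf_counter()
reply, weather = gather_context("Is it safe to drive fast?", "Hanoi")
elapsed = time.perf_counter() - start  # ~0.2 s, not ~0.4 s
```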
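One way to sketch the "prioritize trucks, skip frames" idea: only run inference on every Nth frame, but once a priority class is detected, suspend the skipping so the hazard is tracked at full frame rate. The class names and skip factor here are illustrative, not BOLT's exact values:

```python
PRIORITY = {"truck", "person"}  # classes that force full-rate inference

def frames_to_process(detections_per_frame, skip=3):
    """Return indices of frames we actually run YOLO on.

    Normally every `skip`-th frame is processed; after a frame whose
    detections include a priority class, skipping is suspended so the
    hazard stays tracked at full frame rate for a few frames.
    """
    hot = 0  # remaining frames at full rate
    processed = []
    for i, detections in enumerate(detections_per_frame):
        if hot > 0 or i % skip == 0:
            processed.append(i)
            if PRIORITY & set(detections):
                hot = skip  # hazard seen: keep full rate
            else:
                hot = max(hot - 1, 0)
    return processed
```

With a truck appearing on frame 3, the skipper runs full-rate for a few frames afterward, then falls back to sampling.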
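The graceful-degradation pattern for quota limits and timeouts looks roughly like this. `tts_call` and `fallback` are hypothetical hooks: in BOLT, `tts_call` would wrap the ElevenLabs request and `fallback` an offline voice path, so the driver always gets a response:

```python
import time

class QuotaError(Exception):
    """Raised when the TTS provider reports an exhausted quota."""

def speak_with_fallback(text, tts_call, retries=2, backoff=0.1, fallback=print):
    # Retry the cloud TTS with exponential backoff; if it keeps failing,
    # fall back to an offline path instead of leaving the driver silent.
    for attempt in range(retries + 1):
        try:
            return tts_call(text)
        except (QuotaError, TimeoutError):
            if attempt < retries:
                time.sleep(backoff * (2 ** attempt))
    return fallback(text)
```

The key design choice is that failure never propagates to the driver: the worst case is a less natural voice, not silence.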

🏅 Accomplishments that we're proud of

  • Created a fully functional Voice-Activated HUD that runs on low-cost hardware.
  • Successfully integrated three major AI technologies (Vision, LLM, TTS) into a single cohesive Python application.
  • "Anthony" feels like a real passenger, not a robot.

🚀 What's next for BOLT

  • V2X Communication: Connecting BOLT to smart city infrastructure.
  • Driver Monitoring: Using an internal camera to detect drowsiness.
  • Commercialization: Partnering with ride-hailing services (Be, Grab) to deploy BOLT for fleet safety.
