Inspiration

The idea for this project came from observing the lack of accessible tools for detecting car damage in real time, especially with 3D models. I wanted to bridge that gap by building a system that could not only detect damaged parts from images but also visualize the damage on an interactive 3D model. The motivation was to create something both practical and innovative: blending machine learning with immersive 3D interaction.

What It Does

This project detects and highlights damaged parts of a car from both real-world images and 3D model screenshots. Users can interact with a 3D car model, simulate damage by pressing E to scratch it, take a screenshot, and upload it to the system. The AI model then analyzes the image and identifies the areas affected. It’s a hands-on way to visualize and test car damage detection, whether from real photos or synthetic scenarios.
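The "analyze and identify" step can be sketched as a simple post-processing pass over the detector's output. This is a minimal illustration, assuming the model returns one (label, confidence, box) tuple per finding; the labels and the confidence threshold below are hypothetical, not our exact values.

```python
# Minimal sketch of the post-detection step: keep only confident
# damage findings and report which parts are affected.
# Labels, threshold, and the detection format are illustrative.

CONF_THRESHOLD = 0.5  # hypothetical cutoff; tune against validation data

def damaged_parts(detections, threshold=CONF_THRESHOLD):
    """detections: list of (label, confidence, (x1, y1, x2, y2)) tuples."""
    hits = [d for d in detections if d[1] >= threshold]
    # Deduplicate labels while preserving detection order.
    seen, parts = set(), []
    for label, _, _ in hits:
        if label not in seen:
            seen.add(label)
            parts.append(label)
    return parts

# Example: two confident findings plus one low-confidence false positive.
dets = [
    ("scratched_door", 0.91, (120, 80, 340, 260)),
    ("dented_bumper", 0.64, (10, 300, 200, 420)),
    ("scratched_door", 0.31, (400, 90, 480, 150)),
]
print(damaged_parts(dets))  # -> ['scratched_door', 'dented_bumper']
```

In the app, the same filtered boxes are what get drawn over the uploaded photo or 3D screenshot.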

How We Built It

We trained a YOLOv8 model on a custom dataset of damaged car images, which included a variety of damage types and severity levels. For the visualization, we used Unity to simulate 3D cars and allow real-time interaction. Integrating the screenshot mechanism into Unity enabled users to export synthetic images for detection testing—blending real and synthetic data workflows.
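The training setup can be sketched as building a small dataset config for YOLOv8 and handing it to the trainer. The directory layout, class names, and file name below are assumptions for illustration; only the config-building helper runs here, with the actual Ultralytics training call left as a comment.

```python
# Sketch of preparing a YOLOv8 dataset config. Paths and class
# names are hypothetical; adapt them to your own dataset layout.

def build_dataset_config(root, class_names):
    """Return a dict in the shape of an Ultralytics data config."""
    return {
        "path": root,             # dataset root directory
        "train": "images/train",  # training images, relative to root
        "val": "images/val",      # validation images, relative to root
        "names": dict(enumerate(class_names)),
    }

cfg = build_dataset_config(
    "datasets/car_damage",
    ["scratch", "dent", "broken_glass", "crushed_panel"],
)
print(cfg["names"][0])  # -> scratch

# With the config written out to e.g. car_damage.yaml, training would be
# roughly (not executed here):
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")
#   model.train(data="car_damage.yaml", epochs=100, imgsz=640)
```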

Challenges We Faced

One of the biggest challenges was achieving a high level of accuracy. Despite experimenting with different training strategies and data augmentations, we couldn’t quite push the accuracy past 80%. Additionally, working with 3D model screenshots introduced new variables like lighting, texture variance, and camera angles, which made detection more complex.
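One augmentation we leaned on, brightness jitter, targets exactly the lighting variance that made 3D screenshots harder to handle. The pixel representation and jitter range here are illustrative, not our exact training pipeline.

```python
import random

def jitter_brightness(pixels, factor):
    """Scale 0-255 pixel values by `factor`, clamping to the valid range.
    `pixels` is a flat list of ints; real pipelines operate on image arrays."""
    return [max(0, min(255, round(p * factor))) for p in pixels]

def random_brightness(pixels, low=0.7, high=1.3, rng=random):
    # Sample one brightness factor per image, as a typical augmentation does.
    return jitter_brightness(pixels, rng.uniform(low, high))

print(jitter_brightness([0, 100, 200, 255], 1.5))  # -> [0, 150, 255, 255]
```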

Accomplishments We’re Proud Of

We’re proud to have built a fully functional prototype that:

- Detects damaged parts on both real and synthetic images
- Integrates seamlessly with a 3D Unity simulation
- Allows real-time interaction and experimentation

What We Learned

- How to train and fine-tune YOLOv8 on a specialized dataset
- Real-time interaction development using the Unity game engine
- Building a pipeline between synthetic image generation and model inference
- The importance of clean, well-annotated data for object detection tasks

What's Next for Car Damage Detection

The next step is to integrate Augmented Reality (AR) for real-time car damage detection. Imagine pointing your phone at a vehicle and instantly seeing the highlighted damaged areas. We're also exploring:

- Severity classification (minor, moderate, major damage)
- Cost estimation based on detected damage
- Expanding the dataset with more edge cases and real-world examples
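Before any learned classifier exists, severity classification could start as a simple heuristic over how much of the image a detected box covers. The thresholds below are placeholders, not tuned values.

```python
def severity_from_area(box, image_w, image_h, minor=0.02, major=0.15):
    """Classify damage by the fraction of the image its box covers.
    Thresholds are illustrative placeholders."""
    x1, y1, x2, y2 = box
    frac = ((x2 - x1) * (y2 - y1)) / (image_w * image_h)
    if frac < minor:
        return "minor"
    if frac < major:
        return "moderate"
    return "major"

# A 100x100 box in a 640x480 frame covers about 3.3% of the image.
print(severity_from_area((0, 0, 100, 100), 640, 480))  # -> moderate
```

A real severity model would also need damage type and depth cues, but area share is a cheap first signal to prototype the UI around.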
