About the Project — Witness.AI: Automated Video-Based Accident Reconstruction
PPT: https://witnessai-automated-acci-6gmfgzi.gamma.site/
Github: https://github.com/harthikss9/Witness.ai
Inspiration
This project was developed for the AWS × INRIX Transportation Hackathon, which challenged teams to create solutions that improve transportation safety and efficiency through AI and cloud technologies. While INRIX already provides large-scale mobility and congestion data, our team focused on the post-incident layer of transportation intelligence — understanding what caused an accident, who was at fault, and how severe it was. This inspired the creation of Witness.AI, a fully automated crash reconstruction pipeline that analyzes driving videos and produces detailed, explainable accident reports using AWS AI and ML infrastructure.
System Overview
Witness.AI transforms ordinary video footage into professional-grade crash analysis reports. The system uses a completely serverless architecture built on AWS services, where each stage of the workflow is triggered automatically by S3 events. The pipeline combines deep learning for perception, rule-based AI for reasoning, and large-language-model generation for narrative reporting.
The workflow follows these stages:
Frame Extraction: When a new video is uploaded to S3, an AWS MediaConvert job (orchestrated by a Lambda trigger) extracts video frames at 5 frames per second.
Object Detection: A SageMaker endpoint running YOLOv8 and DETR models identifies vehicles and lanes, producing bounding boxes for each frame in JSON format.
Object Tracking and Metrics: A tracking Lambda assigns IDs to vehicles and calculates their motion, average speed, and time-to-collision using frame-to-frame distance changes.
Fault Reasoning: Another Lambda function evaluates behavioral flags such as sudden cut-ins, hard approaches, weaving, and stationary obstacles, classifying each vehicle as low, medium, or high risk.
Report Generation: The final Lambda sends the structured JSON output to Anthropic’s Claude Haiku 4.5 model hosted on AWS Bedrock. The model generates a formal Markdown report describing what happened, why it happened, who was at fault, and what preventive actions could reduce similar incidents in the future.
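The report-generation stage described above can be sketched as a small Python helper. This is an illustrative outline, not the project's actual Lambda: the prompt template, `model_id` value, and function names are assumptions, and only the Bedrock Converse API call shape is taken from AWS's public SDK.

```python
import json

# Hypothetical prompt template -- the real template used by the
# report-generation Lambda is not shown in this write-up.
REPORT_PROMPT = """You are a traffic-accident analyst. Using only the
structured evidence below, write a formal Markdown crash report covering:
what happened, why it happened, which vehicle was at fault, and what
preventive actions could reduce similar incidents.

Evidence (JSON):
{evidence}
"""

def build_prompt(evidence: dict) -> str:
    """Fill the template with the pipeline's structured JSON output."""
    return REPORT_PROMPT.format(evidence=json.dumps(evidence, indent=2))

def generate_report(evidence: dict, model_id: str) -> str:
    """Invoke a Claude model on Bedrock (requires AWS credentials)."""
    import boto3  # imported here so the pure helper above stays testable offline
    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId=model_id,
        messages=[{"role": "user",
                   "content": [{"text": build_prompt(evidence)}]}],
        # Deterministic generation parameters, per the Challenges section.
        inferenceConfig={"temperature": 0.0, "maxTokens": 2048},
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Keeping temperature at 0 matches the pipeline's goal of consistent, reproducible report output across runs.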
AWS Tools and Technologies
The system is powered entirely by AWS services:
AWS Lambda for event-driven orchestration and computation.
Amazon S3 for centralized data storage and inter-function triggers.
AWS MediaConvert for extracting frames efficiently.
Amazon SageMaker for deploying YOLOv8 and DETR deep-learning models.
AWS Bedrock for large-language-model inference using Claude Haiku 4.5.
Amazon CloudWatch for monitoring and logging.
IAM and EventBridge for access control and event routing.
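The event-driven wiring between these services can be sketched as a minimal Lambda handler reacting to an S3 upload notification. This is a generic sketch of the pattern, not the project's code; the handler body is a placeholder.

```python
from urllib.parse import unquote_plus

def parse_s3_event(event: dict):
    """Extract (bucket, key) pairs from an S3 notification event.

    Object keys arrive URL-encoded in the event payload, so they are
    decoded before use (e.g. "my+video.mp4" -> "my video.mp4").
    """
    pairs = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        pairs.append((s3["bucket"]["name"],
                      unquote_plus(s3["object"]["key"])))
    return pairs

def lambda_handler(event, context):
    """Entry point: route each uploaded object to the next pipeline stage."""
    for bucket, key in parse_s3_event(event):
        # Placeholder for real work (e.g. start MediaConvert, invoke SageMaker).
        print(f"processing s3://{bucket}/{key}")
    return {"statusCode": 200}
```

Each stage of the pipeline writes its output back to S3, which fires the next stage's trigger in the same way, with no queue or scheduler to manage.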
Technical Approach
Witness.AI unifies three layers of intelligence. The first layer uses deep-learning-based perception to detect and track vehicles. The second layer applies rule-based reasoning to estimate relative speeds, distances, and potential risks through time-to-collision metrics. The third layer leverages generative AI on AWS Bedrock to convert raw structured data into human-readable, evidence-driven narratives. This combination of numerical reasoning and natural-language synthesis ensures that every report is both precise and interpretable.
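The sensor-free time-to-collision estimate mentioned above can be sketched from bounding-box geometry alone. Assuming (as the Challenges section notes) that no calibrated depth is available, a lead vehicle's box height grows roughly in inverse proportion to its distance, so the box's relative expansion rate approximates the inverse of the time to collision. The function below is an illustrative proxy, not the project's exact formula.

```python
def ttc_proxy(height_prev: float, height_curr: float, dt: float) -> float:
    """Approximate time-to-collision from bounding-box growth.

    TTC ~ h / (dh/dt): the current box height divided by its growth rate.
    Returns float('inf') when the box is shrinking or unchanged, i.e.
    the tracked vehicle is holding or increasing its gap.
    """
    dh_dt = (height_curr - height_prev) / dt
    if dh_dt <= 0:
        return float("inf")
    return height_curr / dh_dt
```

For example, a box growing from 100 px to 110 px between two frames sampled at 5 fps (dt = 0.2 s) yields a proxy TTC of 2.2 seconds, which the fault-reasoning stage can threshold into a "hard approach" flag.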
Challenges and Solutions
Building a multi-stage, real-time analytics pipeline entirely on serverless infrastructure required addressing several engineering constraints. Frame extraction was optimized using AWS MediaConvert to avoid Lambda runtime limits. Maintaining object identity across frames was achieved through an intersection-over-union–based tracker. Time-to-collision estimation, typically dependent on calibrated sensors, was approximated using relative bounding-box changes, yielding consistent proxy measurements. To ensure consistent language-model output, we used structured prompt templates and deterministic generation parameters. Cold-start delays in model inference were reduced by pre-warming SageMaker endpoints through scheduled triggers.
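The intersection-over-union tracker mentioned above can be sketched as an IoU function plus a greedy frame-to-frame matcher. This is a minimal illustration of the technique, assuming boxes are given as `(x1, y1, x2, y2)` corners; the real tracking Lambda's data layout and threshold may differ.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_tracks(tracks, detections, threshold=0.3):
    """Greedily match previous-frame tracks to current detections by IoU.

    tracks: {track_id: box}; detections: list of boxes.
    Returns {track_id: detection_index}. Unmatched detections would start
    new tracks in the caller; unmatched tracks age out.
    """
    pairs = sorted(
        ((iou(box, det), tid, di)
         for tid, box in tracks.items()
         for di, det in enumerate(detections)),
        reverse=True,  # best overlaps claimed first
    )
    used_t, used_d, matches = set(), set(), {}
    for score, tid, di in pairs:
        if score < threshold:
            break  # remaining pairs overlap too little to be the same car
        if tid in used_t or di in used_d:
            continue
        matches[tid] = di
        used_t.add(tid)
        used_d.add(di)
    return matches
```

Because consecutive frames are only 0.2 s apart at 5 fps, a vehicle's box moves little between frames, which is what makes this simple greedy IoU matching adequate without a motion model.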
Key Learnings
The project demonstrated that combining classical computer-vision logic with LLM-based reasoning can create powerful and interpretable AI systems for transportation safety. AWS Bedrock provided a controlled and secure interface for integrating generative models into automated pipelines. Using an event-driven, fully serverless architecture drastically reduced operational complexity while maintaining scalability and cost efficiency.
Outcome
Witness.AI proves that cloud-native AI pipelines can go beyond traffic analytics to deliver detailed post-incident reasoning. It automates the entire process—from raw video ingestion to a professional-style crash report—making it applicable to road-safety research, insurance analysis, and autonomous-vehicle testing. The system offers a scalable blueprint for next-generation INRIX-inspired safety solutions that combine perception, reasoning, and generative intelligence on the AWS cloud.
Built With
- amazon-cloudwatch
- amazon-web-services
- api
- bedrock
- boto3
- eventbridge
- json
- lambda
- media-convert
- python
- pytorch
- s3
- sagemaker
- yolo