Inspiration
In 2026, greenwashing is a trillion-dollar blind spot. Companies like Drax Power Station claim carbon neutrality while satellite imagery shows them clear-cutting primary forests in British Columbia. The problem isn't a lack of data; it's information asymmetry: investors, journalists, and consumers currently rely on corporate ESG reports that are months old and self-audited.
We thought: what if the market had a lie detector for ESG claims?
We built VANTAGE to be that system: an agentic AI system that audits environmental reality in real time, assigning each company a dynamic "Ethical Score." We provide the hard data journalists need to expose fraud and investors need to de-risk their portfolios.
What it does
VANTAGE is a transparency engine that compares corporate claims against satellite reality. It scans high-resolution satellite imagery with a custom ResNet-18 model to detect specific land-use violations (e.g., "Illegal Logging" in a "Protected Zone").
It cross-references the GPS coordinates of the violation with a Cadastral Database to identify the specific license holder (e.g., "Timber Mark EM2960: Drax Biomass").
The system updates the company's public Ethical Score. A verified violation lowers their score, while verified sustainability raises it. This allows the public to see a real-time "Trust Index" for major corporations.
How we built it
We engineered a microservice architecture to integrate high-frequency data streams with predictive AI and a verifiable audit trail.
The Core (Python/Reflex): We utilized Reflex to build a unified full-stack environment where the backend logic and frontend UI share a synchronized state. This acted as the central nervous system, processing the "Claim vs. Reality" logic across 1,000+ simulated logs.
Computer Vision (PyTorch): We leveraged Transfer Learning on a pre-trained ResNet-18 model, fine-tuning it on the EuroSAT multispectral dataset. This allowed the system to autonomously distinguish between "Forest," "Industrial," and "River" signatures to detect illegal land-use changes.
Data Warehouse (Snowflake API): We integrated Snowflake to manage high-volume satellite metadata. By utilizing SQL-based querying via the Snowflake API, we ensured sub-200ms latency for retrieving historical evidence and audit trails for 5+ mock global enterprises.
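The audit-trail retrievals were simple filtered lookups. A query of roughly this shape illustrates the idea; the table and column names are placeholders, not our actual schema:

```sql
-- Illustrative audit-trail lookup (table/column names are hypothetical).
SELECT capture_time, tile_id, detected_class, confidence
FROM satellite_metadata
WHERE company_id = 'DRAX'
  AND detected_class = 'Industrial'
  AND capture_time >= DATEADD(year, -2, CURRENT_TIMESTAMP())
ORDER BY capture_time DESC
LIMIT 20;
```

Keeping these queries as indexed point lookups on `company_id` is what made the sub-200ms retrieval target realistic.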
The Interface (Tailwind CSS & Responsive Reflex): I engineered a high-performance dashboard with a glassmorphism aesthetic. Using rx.tablet_and_desktop and rx.mobile_only, I made the interface fully responsive, with custom CSS keyframe animations for real-time "Danger" alerts when hypocrisy scores exceeded 60%.
Challenges we ran into
A major turning point occurred when one of our four teammates had to withdraw due to an urgent midterm conflict. This left the remaining three of us to absorb a significant workload. We had to quickly re-evaluate our roadmap, shifting from a linear development process to a highly parallelized one.
We initially planned to pull live ownership data from government cadastral registries and satellite data from Google Earth. However, those APIs sit behind days to weeks of enterprise verification, which was not feasible in a hackathon window, and real-time, sub-meter satellite imagery (clear enough to see logging trucks) is prohibitively expensive or restricted for civilian use. Instead, we used historical, well-documented high-resolution imagery from the Drax Biomass scandal to simulate the pipeline. This proved the architecture works and demonstrates what becomes possible if these data silos are opened.