🗺️ Project Story: Location-Aware Intelligent Advertising with Machine Learning
🚀 About the Project
Our project transforms public digital screens into intelligent, location-aware advertising platforms using machine learning, markerless motion capture, and Google Maps Platform APIs. Inspired by the underutilization of physical ad spaces across urban Africa, we set out to create a solution that responds to the real-time environment, audience dynamics, and location context—delivering smarter ads and higher engagement.
🔍 What Inspired Us
Billboards and public screens are everywhere, yet most play content without context—regardless of the time of day, local events, or who’s watching. We asked: What if outdoor screens could think like online ads? That question led to SimbaGPT, a system that brings hyper-local intelligence to physical media using maps, computer vision, and ML.
⚙️ How We Built It
We used the following components:
- Google Maps Platform for location data, geofencing, and contextual triggers (e.g., traffic, places, weather overlays).
- Luxonis OAK-D PoE Cameras and Moverse markerless motion capture for real-world visual sensing.
- ML models trained on behavioral data to choose the most relevant ad based on audience density, time, and location.
- A backend system that combines Maps API outputs with camera data to make real-time ad decisions.
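As a sketch of the fusion step above: the backend reduces each screen's Maps-derived signals and camera counts to a feature vector before ad selection. The schema and thresholds below are illustrative only, not our production code; `nearby_places` stands in for place types returned by a Places API Nearby Search.

```python
from dataclasses import dataclass

@dataclass
class ScreenContext:
    """Context snapshot for one screen at decision time (illustrative schema)."""
    hour: int             # local hour of day, 0-23
    audience_count: int   # people detected by the camera pipeline
    nearby_places: list   # place types from a Places API Nearby Search

def build_context_vector(ctx: ScreenContext) -> list:
    """Flatten a ScreenContext into the feature vector x_t used for ad selection."""
    # Hypothetical peak-hour windows (morning and evening commute).
    is_peak = 1.0 if 7 <= ctx.hour <= 9 or 16 <= ctx.hour <= 19 else 0.0
    has_transit = 1.0 if "transit_station" in ctx.nearby_places else 0.0
    has_retail = 1.0 if "shopping_mall" in ctx.nearby_places else 0.0
    return [float(ctx.audience_count), is_peak, has_transit, has_retail]
```

In the live system this vector is what the ad selector conditions on as $x_t$.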
Mathematically, we approached ad optimisation as a contextual multi-armed bandit problem:
$$ \text{Ad}_{t} = \arg\max_a \left( \mathbb{E}[r_t(a) \mid x_t] + \beta \cdot \sqrt{\frac{\ln t}{n_t(a)}} \right) $$
Here $r_t(a)$ is the reward for showing ad $a$ at time $t$, $x_t$ is the combined location and visual context, $n_t(a)$ is the number of times ad $a$ has been shown so far, and $\beta$ controls how aggressively the system explores under-shown ads.
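The selection rule can be sketched in a few lines. For clarity this version drops the context term and uses empirical mean rewards in place of the model prediction $\mathbb{E}[r_t(a) \mid x_t]$; the class and method names are illustrative, not our production API.

```python
import math
from collections import defaultdict

class UCBAdSelector:
    """UCB1-style ad selector: estimated reward plus an exploration bonus
    (a simplified, context-free sketch of the formula above)."""

    def __init__(self, ads, beta=1.0):
        self.ads = list(ads)
        self.beta = beta
        self.t = 0                             # total decisions so far
        self.counts = defaultdict(int)         # n_t(a): times each ad was shown
        self.reward_sums = defaultdict(float)  # running reward totals per ad

    def select(self):
        self.t += 1
        # Show every ad at least once before applying the UCB rule.
        for ad in self.ads:
            if self.counts[ad] == 0:
                return ad
        def ucb(ad):
            mean = self.reward_sums[ad] / self.counts[ad]
            bonus = self.beta * math.sqrt(math.log(self.t) / self.counts[ad])
            return mean + bonus
        return max(self.ads, key=ucb)

    def update(self, ad, reward):
        self.counts[ad] += 1
        self.reward_sums[ad] += reward
```

A contextual version replaces the per-ad mean with a learned model scoring $(x_t, a)$ pairs, which is what the deployed system does.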
💡 What We Learned
- Location APIs can unlock new dimensions of personalisation in offline spaces.
- ML inference must happen at the edge to ensure low latency.
- Map-based insights (e.g., peak hours, popular venues) are critical for predicting audience intent.
- Creatives perform better when adapted not just to the user, but to the place.
🧗 Challenges We Faced
- Balancing privacy with perception: we ensured all sensing is anonymised and GDPR-compliant.
- Delays in procuring specialised hardware impacted live deployments.
- Training models to adapt across different regions, languages, and cultures took time and iteration.
🌍 Why It Matters
Our platform enables real-time, intelligent storytelling in physical spaces, combining the power of maps, AI, and motion. It can serve municipalities, retail networks, transportation hubs, and event venues — anywhere dynamic, location-relevant messaging matters.
We’re currently in early-stage rollout across Nairobi, Cape Town, and Kigali, with cloud backends hosted on Paperspace by DigitalOcean, integrated with Google Maps APIs for live campaigns and Google AdWords for analytics.
Built With
The stack below reflects the evolution from v1.0 to v2.0:

v1.0 – Initial Deployment
- Video processing: NVIDIA DeepStream SDK
- Hardware: Hikvision PTZ cameras with 25x optical zoom
- Languages: Python, C++
- Frameworks: TensorFlow
- APIs: Google Maps Platform (Maps JavaScript API, Geocoding API)
- Cloud services: DigitalOcean (compute, storage)
- Browser interface: rendered through Google Chrome

v2.0 – Current System
- Edge inference: NVIDIA Jetson
- Vision AI: markerless human pose estimation via Moverse Studio, Luxonis OAK-D cameras, DepthAI SDK
- Languages: Python, JavaScript (Node.js, React UI)
- Frameworks: PyTorch
- Backend: Flask, Firebase, Google Cloud Functions, custom ad-serving API
- APIs: Google Maps Platform (Places API, Roads API)
- Cloud services: Google Cloud Platform (BigQuery, Vertex AI)
- Orchestration: Docker, Kubernetes
- Monitoring: Grafana, Prometheus