Inspiration
Navigating downtown Chicago as a university student or commuter is a fragmented experience: you check Ventra for train delays, Citizen or Twitter for safety alerts, and Yelp or Google Maps for a decent spot to study or grab food. We realized that what people actually need isn't another dashboard; it's a knowledgeable local friend. We built Chicago Atlas to be that friend: a real-time, context-aware city intelligence platform that understands the heartbeat of Chicago.
What it does
Chicago Atlas is a mobile application that aggregates live city data and feeds it into an ultra-fast conversational AI named "Harold."
Live Signals: The app provides a high-fidelity "Blueprint" dashboard showing live CTA train arrivals, real-time Air Quality Index (AQI) readings, and official Chicago Police Department (CPD) dispatch data cleanly separated from crowdsourced community reports.
Harold, the Local AI: Users can ask Harold anything via push-to-talk voice or text. Harold dynamically ingests the live city data (weather, transit, safety) into his system prompt, letting him give highly contextual, low-latency answers with deep-linked, Perplexity-style Google Maps cards (a sketch of this context injection follows below).
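As a rough sketch of that context injection (the helper names here are illustrative stand-ins for our real fetchers, not our production code), assembling Harold's prompt looks something like this:

```typescript
// Hypothetical sketch of how live signals get folded into Harold's system
// prompt before each request. The helpers are stubs standing in for our
// CTA, weather/AQI, and safety fetchers.
async function fetchTransit(): Promise<string> {
  return "Red Line to 95th: 3 min (Jackson)"; // stub; real app calls the CTA API
}
async function fetchWeather(): Promise<string> {
  return "41°F, overcast, AQI 52 (moderate)"; // stub
}
async function fetchSafety(): Promise<string> {
  return "1 active CPD dispatch within 0.5 mi"; // stub
}

export async function buildHaroldPrompt(): Promise<string> {
  // Refresh every signal in parallel so the prompt reflects the city right now.
  const [transit, weather, safety] = await Promise.all([
    fetchTransit(),
    fetchWeather(),
    fetchSafety(),
  ]);
  return [
    "You are Harold, a friendly, streetwise Chicago local.",
    `Live transit: ${transit}`,
    `Live weather: ${weather}`,
    `Live safety: ${safety}`,
    "When you recommend a place, wrap its name in a ||MAP:<place>|| tag",
    "so the app can render a deep-linked Google Maps card.",
  ].join("\n");
}
```

Because the prompt is rebuilt per request, Harold never answers from stale data; he always "knows" what the city looks like at that moment.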
How we built it
We architected the platform to prioritize low latency and a native feel:
Frontend: Built with React Native and Expo (TypeScript) to leverage native device APIs like expo-av for audio and expo-haptics for tactile feedback. We used the Animated API to build custom radar pulses instead of relying on heavy third-party map SDKs (see the radar sketch after this list).
Backend: A Next.js application deployed on Vercel, using Edge Functions to handle API orchestration without cold-start lag.
The AI Engine: We used Groq (running Llama 3 / Mixtral) for lightning-fast inference, ElevenLabs for hyper-realistic Text-to-Speech (TTS), and Groq STT for near-instant voice transcription (the full voice pipeline is sketched after this list).
Live Data: We wired up the official City of Chicago Data Portal (Socrata API) for live CPD incidents and integrated CTA APIs for transit status (a minimal Socrata query is also sketched below).
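For the radar pulses, a minimal version of the Animated loop looks roughly like this (sizes and colors are placeholders; our real component layers several offset rings):

```tsx
// Minimal radar-pulse sketch using only the core Animated API: one looping
// 0 -> 1 progress value drives both the scale-up and the fade-out.
import React, { useEffect, useRef } from "react";
import { Animated, Easing, StyleSheet } from "react-native";

export function RadarPulse() {
  const pulse = useRef(new Animated.Value(0)).current;

  useEffect(() => {
    Animated.loop(
      Animated.timing(pulse, {
        toValue: 1,
        duration: 2000,
        easing: Easing.out(Easing.ease),
        useNativeDriver: true, // animates on the UI thread, immune to JS stalls
      })
    ).start();
  }, [pulse]);

  const scale = pulse.interpolate({ inputRange: [0, 1], outputRange: [1, 3] });
  const opacity = pulse.interpolate({ inputRange: [0, 1], outputRange: [0.6, 0] });

  return <Animated.View style={[styles.ring, { opacity, transform: [{ scale }] }]} />;
}

const styles = StyleSheet.create({
  ring: {
    width: 40,
    height: 40,
    borderRadius: 20,
    borderWidth: 2,
    borderColor: "#2f81f7",
  },
});
```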
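The voice pipeline runs inside one Edge route: Groq Whisper for STT, a Groq chat completion, then ElevenLabs TTS. The endpoint paths below are the providers' public ones, but the route path, payload shape, voice id, and model ids are illustrative assumptions:

```typescript
// app/api/voice/route.ts (sketch): STT -> LLM -> TTS on the Vercel Edge runtime.
export const runtime = "edge"; // web-standard APIs only, no Node cold starts

export async function POST(req: Request): Promise<Response> {
  const form = await req.formData(); // expects an "audio" file + "context" string
  const audio = form.get("audio") as File;
  const context = String(form.get("context") ?? "");

  // 1. Transcribe with Groq's OpenAI-compatible Whisper endpoint.
  const sttBody = new FormData();
  sttBody.append("file", audio);
  sttBody.append("model", "whisper-large-v3");
  const stt = await fetch("https://api.groq.com/openai/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.GROQ_API_KEY}` },
    body: sttBody,
  }).then((r) => r.json());

  // 2. Run inference with the live city context in the system prompt.
  const chat = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "llama3-70b-8192",
      messages: [
        { role: "system", content: `You are Harold. Live context:\n${context}` },
        { role: "user", content: stt.text },
      ],
    }),
  }).then((r) => r.json());
  const reply = chat.choices[0]?.message?.content ?? "";

  // 3. Synthesize speech with ElevenLabs and hand the MP3 back to the client.
  // (The real route also returns `reply` so the client can render map cards.)
  const tts = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${process.env.ELEVENLABS_VOICE_ID}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVENLABS_API_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text: reply, model_id: "eleven_turbo_v2" }),
    }
  );
  return new Response(tts.body, { headers: { "Content-Type": "audio/mpeg" } });
}
```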
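And the live data layer is plain SoQL over HTTPS. The dataset id below is the Chicago Data Portal's public "Crimes - 2001 to Present" dataset; treat it and the field names as illustrative, since the exact feed we query may differ:

```typescript
// Sketch of a Socrata (SoQL) query against the Chicago Data Portal.
interface CpdIncident {
  date: string;
  primary_type: string; // e.g. "THEFT"
  block: string;        // anonymized street block
  latitude?: string;
  longitude?: string;
}

export async function fetchRecentIncidents(limit = 25): Promise<CpdIncident[]> {
  const params = new URLSearchParams({
    $order: "date DESC",
    $limit: String(limit),
    $where: "latitude IS NOT NULL", // keep only mappable rows
  });
  const res = await fetch(
    `https://data.cityofchicago.org/resource/ijzp-q8t2.json?${params}`
  );
  if (!res.ok) throw new Error(`Socrata error: ${res.status}`);
  return (await res.json()) as CpdIncident[];
}
```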
Challenges we ran into
Building a production-ready voice AI app in a hackathon setting tested our limits.
The iOS Audio Trap: We encountered severe -1008 AVPlayer errors when trying to stream raw HTTP audio chunks over hackathon Wi-Fi. We had to completely pivot our audio architecture to a "pre-download and cache" strategy using expo-file-system to ensure stable playback (see the playback sketch after this list).
API Rate Limits: Pushing our LLMs to the limit during aggressive testing completely drained our Groq and ElevenLabs token quotas mid-sprint. We had to quickly hot-swap models and implement fallback UI states to keep the frontend from crashing when the backend threw HTTP 500 errors (the fallback chain is sketched below).
UI/UX Data Rendering: Passing raw, messy government JSON into an LLM and getting it to output clean, parseable ||MAP:|| tags required strict prompt engineering and custom frontend parsing logic (the parser is sketched below).
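A minimal version of the download-then-play flow, assuming the backend returns a URL to a finished MP3 clip:

```typescript
// Sketch of the "pre-download and cache" pivot: fetch the whole clip to
// disk with expo-file-system, then hand a local file:// URI to expo-av.
import * as FileSystem from "expo-file-system";
import { Audio } from "expo-av";

export async function playHaroldClip(remoteUrl: string): Promise<void> {
  // Streaming raw HTTP chunks threw -1008 in AVPlayer; a local file never does.
  const localUri = `${FileSystem.cacheDirectory}harold-${Date.now()}.mp3`;
  const { uri } = await FileSystem.downloadAsync(remoteUrl, localUri);

  const { sound } = await Audio.Sound.createAsync({ uri }, { shouldPlay: true });
  sound.setOnPlaybackStatusUpdate((status) => {
    // Release the native player once playback finishes.
    if (status.isLoaded && status.didJustFinish) void sound.unloadAsync();
  });
}
```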
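The model hot-swap amounted to walking an ordered fallback list and degrading gracefully (the model ids and error copy here are illustrative):

```typescript
// Sketch of the fallback chain: on 429/5xx, try the next model instead of
// letting one exhausted quota take down the whole chat UI.
const FALLBACK_MODELS = ["llama3-70b-8192", "mixtral-8x7b-32768", "llama3-8b-8192"];

export async function completeWithFallback(
  messages: { role: string; content: string }[]
): Promise<string> {
  for (const model of FALLBACK_MODELS) {
    const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model, messages }),
    });
    if (res.status === 429 || res.status >= 500) continue; // quota/outage: next model
    if (!res.ok) break; // a 4xx we caused; retrying won't help
    const data = await res.json();
    return data.choices[0]?.message?.content ?? "";
  }
  // Every model failed: return copy the UI can render instead of crashing.
  return "Harold needs a quick breather. Try again in a minute.";
}
```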
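And the frontend parsing boils down to splitting the reply on our own ||MAP:...|| convention and deep-linking each hit through the documented Google Maps URLs API:

```typescript
// Sketch of the parser that splits Harold's reply into plain-text and
// map-card segments. The ||MAP:...|| tag format is our own convention.
type Segment =
  | { kind: "text"; value: string }
  | { kind: "map"; query: string };

export function parseReply(reply: string): Segment[] {
  const segments: Segment[] = [];
  const tag = /\|\|MAP:(.+?)\|\|/g;
  let cursor = 0;
  for (const match of reply.matchAll(tag)) {
    const at = match.index ?? 0;
    if (at > cursor) segments.push({ kind: "text", value: reply.slice(cursor, at) });
    segments.push({ kind: "map", query: match[1].trim() });
    cursor = at + match[0].length;
  }
  if (cursor < reply.length) segments.push({ kind: "text", value: reply.slice(cursor) });
  return segments;
}

// Each map segment becomes a tappable card that deep-links into Google Maps.
export const mapsUrl = (query: string) =>
  `https://www.google.com/maps/search/?api=1&query=${encodeURIComponent(query)}`;
```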
Accomplishments that we're proud of
We are incredibly proud of achieving true "Siri-style" interaction speeds. The pipeline of recording voice, transcribing it, running inference with live contextual data, and playing back high-fidelity audio feels seamless. Additionally, directly integrating the official CPD API into our Safety drawer—and visually differentiating it from user-submitted reports—adds a layer of civic trust that most city apps lack.
What we learned
We learned hard lessons about the realities of building AI wrappers. It’s not just about the prompt; it’s about managing state, handling race conditions (like iOS mic permissions freezing the UI), and building robust error handling for when upstream APIs inevitably fail or rate-limit you.
What's next for Chicago Atlas
Our immediate next step is expanding the real-time data ingestion beyond the Loop to all Chicago neighborhoods. Long term, we want to implement multi-agent reinforcement learning to analyze historical CPD and CTA data, allowing the app to proactively suggest predictive, safe routing for commuters before incidents even hit the dashboard.