Inspiration
I'm someone who is not afraid of driving long, unfamiliar routes. In fact, I did exactly that for this very hackathon, driving almost 4 hours just to be at Conhacks. So I speak from experience when I say that sometimes, GPS driver-assistance tools like Google Maps just fall short. Sure, they'll get you to your destination, but there are countless little edge cases you'll run into on the road, which is exactly the wrong place to be facing a problem.
Just one example: for a brief time I had an hour-long commute (2 hours round trip). On the route back, there was a weird fork in the road with 3 lanes. Google Maps correctly tells you to get into one of the three lanes, but you actually want to be in either of the two leftmost lanes, because the third lane then splits off again in the wrong direction. It processed what the road sign said correctly, but failed to account for the on-the-ground reality of the situation.
Sometimes, too, when you're driving at 70 mph/110 kph, it's more than a little difficult to figure out where an exit actually is, especially when there are multiple exits right next to each other in a busy location. On the way to Conhacks there were, if I recall, 148A and 148B exits off the highway, and because of the sudden increase in complexity on the road and my own fatigue, I couldn't tell the two apart and mistakenly took A. They were close enough that even with Google Maps for reference, I couldn't tell which was which at the time in that state.
This is where RRR comes in. If you know you're going to be taking a road trip to an unfamiliar area, RRR lets you practice that route from the comfort of your own home. You can gain the knowledge of the roads you need for a smooth, safe journey without having to actually brave them first.
What it does
RRR uses several APIs to provide 2D, Street View, and 3D location data to the user, and highlights potential hazards along the user's route.
How we built it
AI coding tools were used to create the project and assist with documentation. An initial brainstorming blueprint was manually created and then iterated upon with AI feedback. The tools used were: Codex (GPT 5.5, GPT 5.4, GPT 5.4 Mini, 5.3-Codex), Gemini-CLI (Gemini 3.1 Pro, Gemini 3 Flash), Google Antigravity (Sonnet 4.6, Opus 4.6, Gemini 3 Flash, Gemini 3.1 Pro), Cursor (Auto), Kiro CLI (Sonnet 4.5), Kilo Code (using an NVIDIA NIM backend with Deepseek V4 Pro; attempted, but it ended up being too slow to make any changes), and Windsurf (Kimi K 2.6, SWE 1.6). Google AI Studio also assisted with some questions, likewise using the Gemini 3.1 Pro backend, as did a few Google AI Overviews that clarified certain details. NotebookLM helped with the presentation video and with preparing for the final demonstration.
As for the tech stack itself (also iteratively written with AI tools): Road Route Rehearsal is a lightweight, zero-build-step vanilla web application built on native HTML5, CSS3, and ES Modules. It orchestrates a single-page driver rehearsal experience by combining Leaflet's performant 2D mapping and ArcGIS satellite imagery with CesiumJS's immersive Google Photorealistic 3D Tiles. The frontend relies heavily on modern CSS Grid and Flexbox for responsive layouts and uses the Device Orientation API to turn the user's phone into a responsive, tilt-to-steer driving controller.
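To make the tilt-to-steer idea concrete, here is a minimal sketch of how a phone's orientation reading might be mapped to a steering value. The function name, thresholds, and the `sendToLaptop` hook are illustrative assumptions, not the actual RRR source; the browser's Device Orientation API does, however, really report `gamma` as left/right tilt in degrees.

```javascript
// Hypothetical tilt-to-steer mapping (names and thresholds are assumptions,
// not taken from the RRR codebase). `gamma` from a `deviceorientation` event
// is the phone's left/right tilt in degrees; we clamp it and normalize to a
// steering value in [-1, 1], with a small dead zone so the car tracks
// straight when the phone is held roughly level.

const MAX_TILT_DEG = 45;   // tilt beyond this counts as full steering lock
const DEAD_ZONE_DEG = 3;   // ignore tiny wobbles around level

function tiltToSteering(gammaDeg) {
  if (Math.abs(gammaDeg) < DEAD_ZONE_DEG) return 0;
  const clamped = Math.max(-MAX_TILT_DEG, Math.min(MAX_TILT_DEG, gammaDeg));
  return clamped / MAX_TILT_DEG; // -1 = full left, +1 = full right
}

// In the browser, this would be wired up roughly like so (sketch only):
// window.addEventListener('deviceorientation', (e) => {
//   sendToLaptop({ steering: tiltToSteering(e.gamma ?? 0) });
// });
```

The dead zone and clamp are the design choices that matter here: without them, sensor noise makes the car weave constantly, and an accidental 90-degree tilt would otherwise produce an out-of-range input.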
On the backend, a minimal Node.js server paired with the ws library and self-signed SSL certificates maintains real-time, low-latency WebSocket communication over HTTPS/WSS between the laptop simulation and the phone controller. The server binds to 0.0.0.0:8080 for same-network pairing, with automatic LAN IP detection and QR code generation for easy phone connection. (This does require the user to disable some Chrome security features to permit the connection; longer term, a more secure solution would be used.) The platform's hazard detection pipeline fuses precise geometric turn data from OSRM, real-world infrastructure constraints from the Overpass API, and soft-hazard coaching insights from Google Gemini 2.5 Flash via the Gemini API (with the OpenRouter API as an emergency fallback using the Gemma 4 26B A4B model in case of quota exhaustion).
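The fusion step of the pipeline can be sketched as a pure function, assuming simplified shapes for the OSRM and Overpass data (the write-up does not show the real field names or scoring rules, so everything below is a hypothetical illustration): OSRM contributes the turn geometry, Overpass contributes nearby infrastructure, and a turn is surfaced as a hazard only when the combined score is high enough.

```javascript
// Hypothetical hazard-fusion sketch. Data shapes, the 50 m radius, and the
// severity weights are all assumptions for illustration, not RRR's actual
// logic. OSRM-style turns carry a geometric angle; Overpass-style features
// carry a kind tag and a location.

function fuseHazards(osrmTurns, overpassFeatures) {
  return osrmTurns
    .map((turn) => {
      // Infrastructure features within roughly 50 m of the turn point.
      const nearby = overpassFeatures.filter(
        (f) => haversineMeters(turn.lat, turn.lon, f.lat, f.lon) < 50
      );
      const sharp = Math.abs(turn.angleDeg) > 60;
      const hasSignal = nearby.some((f) => f.kind === 'traffic_signal');
      const isMerge = nearby.some((f) => f.kind === 'motorway_link');
      let severity = 0;
      if (sharp) severity += 2;     // geometry alone can flag a hazard
      if (isMerge) severity += 2;   // merges are high-stakes regardless of angle
      if (hasSignal) severity += 1; // signals add complexity to a turn
      return { ...turn, severity };
    })
    .filter((t) => t.severity >= 2); // only surface notable hazards
}

// Great-circle distance in meters between two lat/lon points (haversine).
function haversineMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

Keeping this step pure (plain data in, plain data out) is what makes it practical to test without live API calls, and it leaves the LLM coaching layer free to annotate the resulting hazard list separately.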
Challenges we ran into
The typical challenges of vibe coding were present: AI coding quotas are unfortunately being reduced writ large, which forced me to switch between different models, given my decision to rely on them for creating the submission. Testing the application was also difficult because of the phone integration, which serves via its gyroscope as a steering-wheel replacement. The main nightmare was the 3D driving mode; it was a bit of a stretch, but I think it was a nice addition.
Accomplishments that we're proud of
I feel this product will have some real practical value! It helps solve a real problem that people face on a day-to-day basis: unfamiliarity with roads in high-stakes situations. I am also proud of minimizing costs and avoiding having to directly pay for any of the APIs I am utilizing: everything is either free or on a free trial tier.
What we learned
Having the additional wiggle room that a 36-hour hackathon provides allows applications to be developed with far greater complexity and usefulness, without significantly dulling the time-crunch pressure that helps developers motivate themselves to finish their MVPs in time for demonstration.
What's next for Road Route Rehearsal
Assorted bug fixes. While the core functionality is present, some minor flourishes and details did not work as expected. Restoring the features that were removed or sidelined for demo functionality is crucial.
A top wish is VR integration, plus steering-wheel and brake hardware for full immersion, as this would make the product more useful. Additionally, higher-order evaluation of hazards using publicly available weather sources and the like is being considered, to refine the complexity and usefulness of hazard detection. ElevenLabs integration to create responsive "hazards" from simulated passengers, to help model cognitive load (and thus one's ability to respond to or ignore these situations), is also being considered.
Built With
- 5.3-codex
- arcgis-world-imagery
- carto-dark-tiles
- cesiumjs
- codex
- css3
- cursor
- deepseek-v4-pro
- device-orientation-api
- elevenlabs-tts-api
- gemini-3-flash
- gemini-3.1-pro
- gemini-cli
- google-ai-studio
- google-antigravity
- google-gemini-2.5-flash-api
- google-photorealistic-3d-tiles
- gpt-5.4
- gpt-5.4-mini
- gpt-5.5
- html5
- https
- javascript-es6-modules
- kilo-code
- kimi-k-2.6
- kiro-cli
- leaflet.js
- node.js
- nominatim
- nvidia-nim
- openrouter-api
- opus-4.6
- osrm
- overpass-api
- qr-code-api
- self-signed-ssl-certificates
- sonnet-4.5
- sonnet-4.6
- swe-1.6
- websocket-(ws)
- windsurf