Gemini 3 Features
Our app leverages the power of the Gemini 3 API in two ways. First, we use it to analyze a video of a room uploaded by the user and extract the relative coordinates and dimensions of its objects as JSON. This output is critical for Rapier, the physics engine, to construct the scene and simulate earthquakes of different magnitudes. Second, after running the simulation, we send the room data back to Gemini 3 and prompt it to scrutinize the room layout and offer its top safety suggestions.
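As a concrete illustration, the sketch below shows the kind of JSON layout a prompt like ours asks Gemini to return, plus a sanity check before handing it to the physics engine. The field names and example values here are our own illustrative stand-ins, not the app's exact schema:

```python
import json

# Hypothetical example of the layout JSON Gemini is prompted to return:
# each object carries a name plus position and dimensions (meters).
sample = json.loads("""
[
  {"name": "bookshelf", "x": 1.2, "y": 0.0,
   "width": 0.8, "height": 1.8, "depth": 0.3},
  {"name": "desk", "x": 2.5, "y": 1.0,
   "width": 1.4, "height": 0.75, "depth": 0.6}
]
""")

def validate(objects):
    """Reject responses missing any field the physics step needs."""
    required = {"name", "x", "y", "width", "height", "depth"}
    return all(required <= obj.keys() for obj in objects)

print(validate(sample))  # True
```

Validating the model's JSON before simulation matters because a single missing dimension would otherwise crash or silently distort the physics step.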
Inspiration
Living along the San Andreas Fault, earthquakes hit close to home -- quite literally for one of our group members, who lives one street away from a prominent epicenter. Since November 2025, more than 80 earthquakes above magnitude 2.0 have hit the Bay Area, stoking stress and unease with their sheer unpredictability. Seismologists estimate a 70+% chance that an earthquake of magnitude 6.7 or greater will strike California by 2032. That gives us less than seven years to prepare, and for most people, that’s not enough time--or money--to retrofit their house.
Yet when we searched for resources, we found that earthquakes are among the most unpredictable natural catastrophes, and there is quite literally nothing beyond "Drop, Cover, and Hold On" to help people, especially children, prepare. This led us to create QuakeProof, a personalized earthquake simulator that lets you understand your personal risk, visualize real impacts, and take meaningful action before disaster strikes.
What it does
The app extracts the main objects in the room from an input video, displays an interactive 3D rendition, lets the user view the impact at varying magnitudes, and suggests fixes based on the layout. First, the user takes a video of their surroundings and our app renders a 3D version using the Gemini 3.0 model. The user can then drag a slider to see exactly how different magnitudes would affect the room and get personalized suggestions for safety features to add. They can also enter Live Mode, where the phone camera color-codes objects on screen by risk as they pan around the room and adjust the magnitude.
How we built it
3D rendition: The user uploads a video of their room to the app. We then send the video to the Gemini API with a prompt asking it to identify the objects in the scene based on predefined categories. The identified objects are passed to Rapier, a physics engine that uses the location and object information from Gemini to render 3D block versions of the objects at their appropriate coordinates. Rapier also accounts for the mass, height, width, and configuration of each object and simulates how the horizontal acceleration applied during an earthquake will affect it. Finally, we send the furniture data back to the Gemini API to generate fixes tailored to the room.
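To give a feel for the earthquake forcing, one common approach is to apply a sinusoidal horizontal acceleration to the scene each physics step. The magnitude-to-amplitude scaling and the 2 Hz shaking frequency below are illustrative assumptions for this sketch, not the values our Rapier simulation uses:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ground_acceleration(t, magnitude, freq_hz=2.0):
    """Horizontal ground acceleration (m/s^2) applied at time t seconds.

    The peak amplitude here grows roughly 3x per magnitude unit; both
    the scaling constant and the fixed 2 Hz frequency are illustrative
    assumptions, not a published ground-motion model.
    """
    peak = 0.1 * 10 ** (0.5 * (magnitude - 5.0)) * G  # assumed scaling
    return peak * math.sin(2 * math.pi * freq_hz * t)

# Stronger quakes shake the room harder at the same instant:
print(ground_acceleration(0.125, 7.0) > ground_acceleration(0.125, 5.0))  # True
```

In the real pipeline the equivalent forcing is applied inside the physics engine's fixed-timestep loop, so every object feels the same ground motion simultaneously.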
Live Mode: We trained a YOLOv8 model on a dataset of common furniture items and used OpenCV for the camera feed, with Tailscale connecting the phone and the web app so the phone can detect and classify objects in real time and stream the data back. We then run a Monte Carlo physics simulation, randomizing several parameters and averaging the results to estimate the likelihood of an object tipping at a given magnitude.
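The Monte Carlo step can be sketched as follows. The static tipping criterion (a free-standing block tips once peak horizontal acceleration exceeds g times its half-width over half-height) is standard rigid-body reasoning, but the lognormal magnitude-to-acceleration distribution below is an illustrative stand-in, not our app's actual parameterization:

```python
import random

G = 9.81  # gravitational acceleration, m/s^2

def tip_probability(width, height, magnitude, trials=10_000):
    """Monte Carlo estimate of the chance a rigid box tips over.

    Static criterion: the block tips when peak horizontal acceleration
    exceeds g * (width/2) / (height/2).  The lognormal mapping from
    magnitude to acceleration is an illustrative assumption.
    """
    threshold = G * (width / 2) / (height / 2)
    tips = 0
    for _ in range(trials):
        # Randomize peak ground acceleration around a magnitude-dependent
        # median to capture site and source variability.
        pga = random.lognormvariate(mu=magnitude - 7.0, sigma=0.5) * G
        if pga > threshold:
            tips += 1
    return tips / trials

# A tall, narrow bookshelf should tip far more often than a low dresser:
tall = tip_probability(width=0.3, height=1.8, magnitude=6.5)
low = tip_probability(width=1.2, height=0.8, magnitude=6.5)
print(tall > low)  # True
```

Averaging many randomized trials like this is what turns a binary "tips / doesn't tip" check into the smooth percentage we overlay on each detected object.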
Challenges we ran into
We had trouble connecting the computer to the phone’s camera while constantly sending data back and forth between the two devices. We first used ngrok to create a public HTTPS link the phone could open, but this was extremely slow and would freeze after overlaying simulation results for more than 10 seconds. Our solution was Tailscale, which was much faster and had no rate limits: the phone keeps running the simulation and overlaying the tipping percentage for each piece of furniture while streaming the data to the backend on the computer.
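The phone-to-computer streaming pattern reduces to POSTing camera frames to a lightweight HTTP endpoint reachable over the tailnet. The endpoint path and handler below are hypothetical simplifications (localhost stands in for the Tailscale address, and a list stands in for YOLO inference):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # stand-in for the queue feeding YOLO inference

class FrameHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read one JPEG frame posted by the phone and hand it off.
        length = int(self.headers["Content-Length"])
        received.append(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FrameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

frame = b"\xff\xd8fake-jpeg-bytes"  # placeholder for a real camera frame
url = f"http://127.0.0.1:{server.server_port}/frame"  # hypothetical endpoint
resp = urlopen(Request(url, data=frame, method="POST"))
print(resp.read() == b"ok" and received[0] == frame)  # True
server.shutdown()
```

Because Tailscale gives both devices stable private addresses on the same virtual network, this loop runs without the tunneling latency or rate limits we hit with ngrok.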
In addition, we worked on three separate but related components and had to merge our code and dependencies while keeping all the external connections (phone to computer, Gemini API) working. One component used Vite, another used React, and the last used Next.js, so we had to restructure our apps to make them compatible, which involved a lot of high-impact merging and structural changes. Deploying the app was a challenge in itself, due to the high memory requirements of the YOLO model, the unavoidable size of the input videos, and the communication mismatch between Render, where our backend is hosted, and the frontend, which lives on Vercel.
Accomplishments that we're proud of
Our app converts a video into an interactive 3D model, identifying each object's type and material and using its density while running the simulation. It also suggests fixes tailored to the specific room layout. Live Mode runs the simulation in real time and overlays bounding boxes at 5 fps on the phone camera rather than the laptop, for added convenience.
What we learned
We learned how to connect multiple devices so they pass data seamlessly and rapidly between one another. The phone streams frames to a YOLO model stored on the computer, which analyzes them and overlays the results back on the phone's camera screen.
We also learned how to render 3D object reconstructions from a video, combining the YOLO model, the Gemini API, and the Rapier physics engine. Designing this pipeline was highly enriching, as we explored ideas we had thought were beyond the limits of publicly available AI models. Along the way, to ensure our physics simulations were grounded in science, we also learned a great deal about the structure and impact of earthquakes.
What's next for QuakeProof
Our next steps are to build a more interactive UI, where the user can click on objects to “secure” them, and rerun the simulation to see how securing heavy furniture items improves safety. We would also like to add an overall “safety score” using room data, steps taken, and location data. Lastly, we would map the exit and create personalized recommendations to ensure the exit path remains clear.
Built With
- geminiapi
- javascript
- opencv
- python
- rapier
- react
- render
- tailscale
- three.js
- vercel
- yolo