Home interior design is tedious. Really, really tedious. As college students moving into apartments and houses for the first time in our lives, we’ve experienced firsthand the guesswork and stress that comes with buying pieces of furniture that we only think would look nice together. After all, it’s not like there’s any way to visualize the entire design and adjust the style and layout before we actually start buying, right?

SceneItAll

SceneItAll is an iOS app that lets the user create a variety of stylized furniture designs for any room they wish:

  1. LiDAR Room Decomposition - the user can scan any room to obtain its dimensions and semantic labels, generating an interactive 3D mesh in which they can test out furniture.
  2. Huge Furniture Catalog - 7500+ furniture options (all fully modeled in 3D!) are available for the user to try out, with drag-and-drop placement of additional furniture into the 3D scene.
  3. Agentic Assistant - if the user ever gets stuck, they can ask the Gemini agent for exactly what they want (style, color, price range, etc.), and it will perform a semantic search through the entire furniture catalog to pinpoint the perfect pieces. If requested, it will add a preview of the suggestion, embedding it directly in the room blueprint (a rough sketch of what such a suggestion could contain follows below). 🔎
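To make the last point a bit more concrete, here is a rough sketch of the kind of "inline suggestion" payload the agent could hand back to the frontend. This is illustrative only; every field name here is an assumption, not our actual schema.

```python
# Hypothetical sketch of an inline-suggestion payload; field names are assumptions.
from pydantic import BaseModel


class FurniturePlacement(BaseModel):
    catalog_id: str                         # id of the catalog item in MongoDB
    model_url: str                          # .glb model the frontend loads into the scene
    position: tuple[float, float, float]    # meters, in room coordinates
    rotation_y: float                       # yaw in degrees


class InlineSuggestion(BaseModel):
    reasoning: str                          # why the agent picked these pieces
    placements: list[FurniturePlacement]


example = InlineSuggestion(
    reasoning="A low-profile couch and a warm rug suit a cozy reading corner.",
    placements=[
        FurniturePlacement(
            catalog_id="example-sofa-001",
            model_url="https://example.com/models/sofa.glb",
            position=(1.2, 0.0, 0.8),
            rotation_y=90.0,
        )
    ],
)
```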

How we built it

The frontend is built in Swift, while the backend is built with FastAPI, with data hosted on MongoDB and Gemini as the underlying LLM for the agent. We leveraged Apple's RoomPlan SDK (which uses the device's built-in LiDAR) to enable rich visualizations during scans, and rendered any additional furniture add-ons using a hybrid mix of RealityKit and SceneKit. Big thanks to Claude Design for styling a large portion of the user interface.

The backend mainly uses Gemma4 Effective and Gemini-3.1-Pro-preview-custom-tools as an agent (via the Pydantic AI framework) to generate and implement recommendations as "inline suggestions" for the user, drawing from a large IKEA furniture collection, which conveniently had 3D models (.glb) on many of its pages. Separately, we wrote a web scraper to download the models and extract crucial metadata (category, material, description, color, style, etc.), storing it all in MongoDB Atlas. By additionally generating vector embeddings (using OpenAI's CLIP ViT-B/32), our agent can semantically search the database for the specific kinds of furniture the user desires.
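To give a sense of how the semantic search fits together, here is a minimal sketch of embedding a text query with a CLIP model and running it through MongoDB Atlas vector search. The connection string, database/collection names, index name, and projected fields are placeholders, and we load CLIP through sentence-transformers here purely for brevity.

```python
# Minimal sketch: embed a text query with CLIP and run an Atlas vector search.
# Connection string, database/collection names, and the index name are placeholders.
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
furniture = client["sceneitall"]["furniture"]

# CLIP ViT-B/32 via sentence-transformers; it embeds text and images into the same space.
clip = SentenceTransformer("clip-ViT-B-32")


def semantic_search(query: str, k: int = 5) -> list[dict]:
    """Return the top-k catalog items whose embeddings are closest to the query."""
    query_vector = clip.encode(query).tolist()
    pipeline = [
        {
            "$vectorSearch": {
                "index": "furniture_vector_index",  # Atlas Search index on the embedding field
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"name": 1, "category": 1, "style": 1, "price": 1, "model_url": 1}},
    ]
    return list(furniture.aggregate(pipeline))


# e.g. semantic_search("mid-century walnut coffee table under $200")
```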

Challenges we ran into

Integration was the biggest challenge. At the start, two of us worked on the backend, focusing on MongoDB, FastAPI, and eventually most of the agent tooling workflow, while another teammate focused solely on the frontend in SwiftUI. This was a mistake: by the time we put everything together at T-8hrs, we realized there were multiple discrepancies in how we wanted to store the data and where it should be retrieved from. We also spent a long time tuning how strictly invalid furniture placements should be rejected; validating placements inside the agentic loops and tinkering with the system prompt until the agent adhered to those rules proved surprisingly difficult. Finally, we had to figure out the most minimal JSON structure that still supported good spatial reasoning, so a good amount of time went into sanitizing and simplifying the raw JSON file while keeping enough data to reconstruct the spatial environment (see the sketch below).
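To illustrate the kind of sanitization pass we mean, here is a sketch that boils a verbose scan export down to just the fields an agent needs for spatial reasoning. The shape of the raw export and the field names are simplified assumptions, not the exact RoomPlan output.

```python
# Sketch of sanitizing a verbose room-scan export into a minimal, agent-friendly JSON.
# The raw structure assumed here is simplified, not the exact RoomPlan export format.
import json


def sanitize_room(raw: dict) -> dict:
    """Keep only the dimensions and poses needed to reconstruct the spatial layout."""
    minimal = {"walls": [], "openings": [], "furniture": []}

    for wall in raw.get("walls", []):
        minimal["walls"].append({
            "width": round(wall["dimensions"][0], 2),    # meters
            "height": round(wall["dimensions"][1], 2),
            "position": [round(v, 2) for v in wall["position"]],
            "rotation_y": round(wall["rotation_y"], 1),
        })

    for opening in raw.get("doors", []) + raw.get("windows", []):
        minimal["openings"].append({
            "kind": opening["category"],                 # "door" or "window"
            "position": [round(v, 2) for v in opening["position"]],
        })

    for item in raw.get("objects", []):
        minimal["furniture"].append({
            "category": item["category"],                # e.g. "sofa", "table"
            "position": [round(v, 2) for v in item["position"]],
            "rotation_y": round(item["rotation_y"], 1),
        })

    return minimal


# json.dumps(sanitize_room(raw_scan), indent=2) is roughly what gets handed to the agent.
```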

Accomplishments that we're proud of

We’re really proud of the huge scope we managed to take on in this project. When discussing the idea and outlining the technologies we’d be implementing, we weren’t exactly sure how much of it was actually possible in 36 hours, yet we decided to take it on anyway. We worked surprisingly efficiently (if not flawlessly) throughout the hackathon: by the halfway point we had collected all the furniture data in MongoDB, gotten the 3D room display working on the frontend, and written the basic interfaces for interacting with the database, leaving the rest of the time to build the agentic integration (arguably the most crucial feature) and polish the application. (Plus, we even had time to sleep 😴.) The agentic workflow ended up very robust!

You can launch a scan, which generates the room's spatial data. Given your room, you can query: "Make me a cozy study book corner with a couch, rug, and a chair." In subsequent interactions, you can ask a follow-up such as "Can you move the rug from behind the couch to in front of it?" or "Apply Feng Shui principles to the setup, and ensure the sofa is 2 feet away from any entrance." With each step, the agent generates a new configuration that satisfies your constraints. A stripped-down sketch of this loop follows below.
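Here is roughly how such a loop can be wired up with Pydantic AI. The model string, tool names, and signatures are placeholders rather than our exact implementation, and the catalog search reuses the vector-search sketch from earlier.

```python
# Stripped-down sketch of the agent loop with Pydantic AI.
# Model id, tool names, and signatures are placeholders, not our exact setup.
from pydantic_ai import Agent, RunContext

agent = Agent(
    "google-gla:gemini-2.0-flash",   # placeholder model id
    deps_type=dict,                  # the sanitized room JSON travels in as deps
    system_prompt=(
        "You are an interior-design assistant. Use the tools to search the catalog "
        "and place furniture inside the user's scanned room, respecting its dimensions."
    ),
)


@agent.tool
def search_catalog(ctx: RunContext[dict], query: str, limit: int = 5) -> list[dict]:
    """Semantic search over the furniture collection (see the vector-search sketch above)."""
    return semantic_search(query, k=limit)   # defined in the earlier sketch


@agent.tool
def place_furniture(ctx: RunContext[dict], catalog_id: str,
                    x: float, z: float, rotation_y: float) -> str:
    """Record a placement that the frontend later renders in the 3D scene."""
    ctx.deps.setdefault("placements", []).append(
        {"catalog_id": catalog_id, "position": [x, 0.0, z], "rotation_y": rotation_y}
    )
    return f"placed {catalog_id} at ({x:.2f}, {z:.2f})"


result = agent.run_sync(
    "Make me a cozy study book corner with a couch, rug, and a chair.",
    deps={"walls": [], "openings": [], "furniture": []},  # a sanitized room scan would go here
)
# Depending on the pydantic-ai version, the final text is on result.output or result.data.
```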

What we learned

We integrated many technologies from a wide range of domains, from RealityKit for displaying the 3D room to MongoDB Atlas for storing and retrieving the vast number of furniture items. Admittedly, since we relied on LLM agents to write a lot of the code, we found ourselves spending much of our time thinking about a sensible way to transfer the most important information from domain to domain, especially from the user's visual design choices on the frontend to the precise object data on the database end. We had to redesign our API schema multiple times, but we came out of it with a greater understanding of how to efficiently store the crucial information needed for each piece of the puzzle (or piece of furniture in the room); a simplified sketch of one resulting endpoint is below. We learned how to integrate vastly different technologies from a higher-level perspective, instead of inefficiently cramming unnecessary information into our database. Many of us also weren't fluent in Swift, so our more experienced teammate hovered around to help with Xcode bottlenecks and to ensure adherence to iOS architecture and UI design language.
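As one example of the kind of schema we converged on, here is a simplified sketch of a FastAPI endpoint the Swift frontend could hit to fetch a single furniture item. The route, connection string, and field names are illustrative, not our final API.

```python
# Simplified sketch of one backend endpoint; route and field names are illustrative only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from pymongo import MongoClient

app = FastAPI()
furniture = MongoClient(
    "mongodb+srv://<user>:<password>@cluster.example.mongodb.net"
)["sceneitall"]["furniture"]


class FurnitureItem(BaseModel):
    catalog_id: str
    name: str
    category: str        # e.g. "sofa", "desk"
    style: str           # e.g. "scandinavian"
    color: str
    price_usd: float
    model_url: str       # .glb model the frontend loads into the scene


@app.get("/furniture/{catalog_id}", response_model=FurnitureItem)
def get_furniture(catalog_id: str) -> FurnitureItem:
    """Return just the metadata the frontend needs to render and label a piece."""
    doc = furniture.find_one({"catalog_id": catalog_id}, {"_id": 0})
    if doc is None:
        raise HTTPException(status_code=404, detail="unknown furniture item")
    return FurnitureItem(**doc)
```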

What's next for SceneItAll

Since our project scope was so huge, we still have a lot of ideas that were simply scrapped for the sake of time:

  1. Texture mapping - Unfortunately, the rooms (walls, windows, doors) always have blank textures, which is not true of most real rooms and makes it harder for the user to visualize their "ideal" room. Future development could accurately capture the textures of each room, generating an even more realistic view.
  2. More Diverse Furniture Catalog - We gathered all our furniture data from IKEA because its website had the most convenient format to scrape. However, IKEA furniture has quite a distinctive “style”, meaning certain styles were underrepresented in the application. With more time, we would devise a way to translate furniture items from other furniture websites into 3D object files for our database.
