Inspiration
Designing a room usually starts with imagination, but the tools often start with friction: measurements, asset hunting, flat moodboards, and constant switching between apps. We wanted to build a faster bridge between “I can picture it” and “I can actually see it.” Inter was inspired by interior designers who need a quicker way to explore concepts, as well as by people who are not designers but still want to plan, decorate, or reimagine their own spaces.
What it does
Inter is an AI-assisted 3D interior design studio. Users can define a room, add doors, windows, wall segments, cameras, and custom shapes, then generate furniture from text prompts like “round walnut coffee table.” Generated or uploaded GLB/GLTF models can be placed, rotated, scaled, saved to a local library, and viewed in both a 3D blockout and a 2D blueprint.
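The place/rotate/scale workflow can be pictured as pure updates to a small transform record. This is an illustrative sketch only: the `PlacedAsset` type and helper names are our assumptions, not the app's actual internals.

```typescript
// Sketch of tracking a placed GLB asset's transform in editor state.
// All names (PlacedAsset, rotateAsset, scaleAsset) are illustrative.
interface PlacedAsset {
  id: string;
  url: string;                        // GLB/GLTF source (generated or uploaded)
  position: [number, number, number]; // metres in room space
  rotationY: number;                  // radians around the vertical axis
  scale: number;                      // uniform scale factor
}

function rotateAsset(asset: PlacedAsset, deltaRadians: number): PlacedAsset {
  // Keep rotation in [0, 2π) so the blockout and blueprint agree on orientation.
  const twoPi = Math.PI * 2;
  const rotationY = ((asset.rotationY + deltaRadians) % twoPi + twoPi) % twoPi;
  return { ...asset, rotationY };
}

function scaleAsset(asset: PlacedAsset, factor: number): PlacedAsset {
  // Clamp to a sane range so one bad generated bounding box can't blow up the scene.
  const scale = Math.min(Math.max(asset.scale * factor, 0.01), 100);
  return { ...asset, scale };
}
```

Returning new objects instead of mutating keeps undo/redo and view synchronization straightforward.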
Inter also supports a room-to-world workflow: users can take a panoramic photo of a space, send it through GPT-2 image generation to create a stylized interior concept, and then turn that result into a Gaussian splat for an immersive room visualization. This helps interior designers quickly prototype layouts and also helps everyday users visualize ideas before buying furniture or changing a room.
How we built it
We built Inter with Next.js, React, TypeScript, Three.js, React Three Fiber, Drei, and Tailwind CSS with design tokens. The 3D editor is backed by a shared editor state covering rooms, furniture, wall segments, doors, windows, cameras, and generated assets.
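The shared state can be sketched as a single typed store that both the 3D viewport and the blueprint read from. The field names below are our assumptions for illustration, not the actual implementation:

```typescript
// Hypothetical shape of the shared editor state; the real store may differ.
interface EditorState {
  rooms: { id: string; width: number; depth: number; height: number }[];
  walls: { id: string; from: [number, number]; to: [number, number] }[];
  doors: { id: string; wallId: string; offset: number; width: number }[];
  windows: { id: string; wallId: string; offset: number; width: number; sillHeight: number }[];
  furniture: { id: string; assetUrl: string; position: [number, number, number] }[];
}

// A pure update helper keeps every view reading the same data without mutation.
function addFurniture(
  state: EditorState,
  item: EditorState["furniture"][number]
): EditorState {
  return { ...state, furniture: [...state.furniture, item] };
}
```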
For the AI pipeline, we used Meshy for text-to-3D furniture generation, Gemma 4 hosted on Vultr as our self-hosted design reasoning layer, and the Google Cloud Vision API for visual understanding. Google Cloud Vision helped us analyze room imagery before generating the final scene:
- object detection, so Inter could identify major furniture and room elements
- depth map information, so the system could better understand spatial layout
- normal map cues, so surfaces like walls, floors, and large objects could be interpreted with more accurate orientation and structure
This grounded the panoramic-to-3D workflow in the real room instead of leaving it purely image-based.
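The object-detection step can be post-processed with a small pure helper. The annotation shape below mirrors Cloud Vision's localized object annotations (normalized 0–1 vertices), but the furniture whitelist and function names are illustrative assumptions:

```typescript
// Shape mirrors a Cloud Vision LocalizedObjectAnnotation (normalized vertices).
interface DetectedObject {
  name: string;
  score: number;
  boundingPoly: { normalizedVertices: { x: number; y: number }[] };
}

// Illustrative whitelist; the real pipeline's category handling may differ.
const FURNITURE = new Set(["Couch", "Table", "Chair", "Bed", "Cabinetry", "Shelf"]);

function majorFurniture(objects: DetectedObject[], minScore = 0.6): DetectedObject[] {
  // Normalized bounding-box area, used to rank detections by prominence.
  const area = (o: DetectedObject) => {
    const xs = o.boundingPoly.normalizedVertices.map(v => v.x);
    const ys = o.boundingPoly.normalizedVertices.map(v => v.y);
    return (Math.max(...xs) - Math.min(...xs)) * (Math.max(...ys) - Math.min(...ys));
  };
  // Keep confident furniture-like detections, largest first, so downstream
  // layout logic sees the dominant room elements.
  return objects
    .filter(o => o.score >= minScore && FURNITURE.has(o.name))
    .sort((a, b) => area(b) - area(a));
}
```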
Gemma 4 helped us interpret user intent, clean up rough room/style descriptions, and produce more structured guidance for the rest of the pipeline. Hosting it on Vultr gave us more control over latency, cost, and deployment, while also giving us a path toward future fine-tuning with our own interior design data.
We also built a panoramic image pipeline where a captured room photo is passed into GPT-2 image generation, then converted into a Gaussian splat for an immersive 3D-style room view. Server routes handle API calls and asset persistence, while the blueprint renderer stays synchronized with the 3D scene. The interface was designed as a compact creative tool with a full-screen viewport, floating mode controls, object/furniture/world panels, and a landing experience that transitions directly into the workspace.
Challenges we ran into
The hardest parts were keeping the 2D blueprint and 3D scene consistent, handling real-world scale for generated models, and making imported/generated assets feel usable inside a precise room editor. We also had to manage async generation states, progress updates, local library persistence, and interactive 3D controls without making the UI feel cluttered.
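Keeping the blueprint consistent largely comes down to deriving 2D footprints from the same 3D transforms the scene uses. A minimal sketch, assuming a y-up world where blueprint coordinates are (x, z) — names and conventions here are ours, not necessarily the app's:

```typescript
// Sketch: project a furniture item's rotated footprint onto the blueprint plane.
interface Footprint { corners: [number, number][] } // blueprint-space polygon

function footprintFor(
  position: [number, number, number], // world position (y-up)
  size: [number, number],             // width, depth in metres
  rotationY: number                   // radians around the vertical axis
): Footprint {
  const [cx, , cz] = position;
  const [w, d] = size;
  const cos = Math.cos(rotationY);
  const sin = Math.sin(rotationY);
  // Rotate the four half-extent corners, then translate to the item's centre.
  const corners = ([[-w / 2, -d / 2], [w / 2, -d / 2], [w / 2, d / 2], [-w / 2, d / 2]] as
    [number, number][])
    .map(([x, z]) => [cx + x * cos - z * sin, cz + x * sin + z * cos] as [number, number]);
  return { corners };
}
```

Because the footprint is computed from the same position/rotation the 3D scene renders, the two views cannot drift apart.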
Another big challenge was building the panoramic-to-Gaussian-splat workflow. We had to think through how to preserve the feel of the original room while still letting AI reinterpret the design, then convert that output into something immersive enough to explore spatially.
Accomplishments that we're proud of
We’re proud that Inter feels like an actual design workspace rather than just a demo. Users can generate furniture, place it in a real editable room, switch between blockout and blueprint views, upload their own models, save generated assets, and build a scene with architectural details like windows, doors, and wall segments.
We’re also proud of the panoramic room pipeline: taking a photo, generating a redesigned interior concept, and turning it into a Gaussian splat makes the experience feel much closer to standing inside a future version of your room. Most importantly, Inter can support both professionals exploring client concepts and non-designers trying to make confident decisions about their own spaces.
What we learned
We learned how much complexity lives between a cool generated 3D object and a useful design tool. Model scale, placement, footprints, selection, camera behavior, and blueprint accuracy all matter. We also learned that AI tools work best when they are paired with direct manipulation: prompt-based generation is powerful, but users still need to move, edit, compare, and refine the results themselves.
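The scale lesson can be made concrete: normalize a generated model so its largest bounding-box dimension matches a plausible real-world size. In the app the bounding box would come from Three.js (e.g. `Box3`); here it is passed in directly so the maths stands alone, and the function name is illustrative:

```typescript
// Sketch: uniform scale factor so a generated model matches a real-world size.
function normalizeScale(
  bboxSize: [number, number, number], // model's raw bounding-box size
  targetLargestDimension: number      // desired size in metres, e.g. 0.9 for a coffee table
): number {
  const largest = Math.max(...bboxSize);
  if (largest <= 0) return 1; // degenerate model: leave it untouched
  return targetLargestDimension / largest;
}
```

A generated mesh 100 units wide targeting a 0.9 m coffee table would get a scale factor of 0.009, keeping footprints and blueprint measurements honest.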
We also learned that image generation and spatial visualization solve different parts of the design problem. Image generation is great for style and atmosphere, while Gaussian splats help make the result feel spatial and immersive. Using Gemma 4 also taught us the value of having a smaller, self-hosted reasoning layer that we could control and adapt around our product instead of treating every AI step as a black box.
What's next for Inter
Next, we want to make Inter more collaborative and more grounded in real spaces. That means room scanning, better measurement tools, richer material/style controls, multiplayer design sessions, exportable floor plans, and more realistic final renders.
We also want to improve the AI pipeline so users can generate full cohesive room concepts from a layout, not just individual furniture pieces, and make the panoramic-to-splat workflow faster, more accurate, and easier to share.
Built With
- c#
- gaussian-splat
- gemma
- google-vision
- gpt-2-image
- next.js
- spark.js
- tailwind
- typescript
- unity
- vultr
