Inspiration

Every game and animation begins with a world. However, building that world takes hours of manual asset placement, lighting tweaks, and GPU-heavy rendering. We wanted to remove that barrier.

Scenergy was born from a simple question: How might we empower creators to focus on storytelling, and let the world build itself, intelligently, beautifully, and fast?

By leveraging AMD’s open, high-performance compute ecosystem, Scenergy helps animators, students, and indie developers generate entire 3D scenes and artifacts in real time, directly in the browser.

What it does

Scenergy turns a single prompt (or a reference image) into a playable, editable 3D scene—live in your browser.

  1. Text-to-World: Describe a scene (“a sunlit forest clearing with a wooden cabin and a river”) and Scenergy generates the terrain, sky, lighting, camera, and matching props automatically.
  2. Smart Artifact Generation: Need a specific object? Type it. Scenergy creates the 3D artifact (mesh + PBR textures) and drops it into your scene with sensible scale, pivot, and collisions.
  3. Real-Time Composition: Drag, rotate, and arrange assets in a responsive Three.js viewport with live GI/IBL lighting, shadow toggles, and depth-of-field camera presets.
  4. Auto-Layout & Lighting: A placement engine proposes good-looking arrangements (avoiding overlaps, aligning to terrain) and adapts lighting to time-of-day vibes (“golden hour”, “noir”, “neon dusk”).
  5. Quick Animation: Apply canned motion (idle, walk, looped props) or attach prompt-driven motion clips to characters and cameras for instant previz.
  6. One-Click Variations: Generate stylistic alternates (low-poly, photoreal, toon), or ask for “more trees / fewer props / wider river” and Scenergy rebalances the scene non-destructively.
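Under the hood, a scene like this can be thought of as structured data that the editor rebalances without destroying the original. As an illustration only (the schema and `rebalance` helper below are hypothetical, not Scenergy's actual format):

```python
# Hypothetical scene spec -- illustrative only, not Scenergy's real schema.
scene = {
    "prompt": "a sunlit forest clearing with a wooden cabin and a river",
    "terrain": {"type": "clearing", "size_m": 120},
    "lighting": {"preset": "golden hour", "shadows": True},
    "camera": {"fov": 50, "depth_of_field": True},
    "props": [
        {"name": "wooden_cabin", "position": [10, 0, -5], "scale": 1.0},
        {"name": "pine_tree", "position": [-8, 0, 3], "scale": 1.2},
    ],
}

def rebalance(scene, more_of):
    """Non-destructively add copies of props matching a request like 'more trees'.

    Returns a new scene dict; the input scene is left untouched.
    """
    out = {**scene, "props": list(scene["props"])}
    for p in scene["props"]:
        if more_of in p["name"]:
            # Offset the clone so it doesn't overlap the original prop.
            clone = {**p, "position": [p["position"][0] + 4, 0, p["position"][2] + 4]}
            out["props"].append(clone)
    return out

wider = rebalance(scene, "tree")
```

Because `rebalance` returns a fresh dict, a “more trees” request never mutates the original scene, which is what makes one-click variations safe to undo.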

How we built it

Scenergy was built from the ground up to merge creativity, intelligence, and performance, all powered by AMD.

Frontend (React + Three.js): A live 3D composer where users drag, drop, and arrange animated assets with real-time lighting and camera control.

Backend (FastAPI): Handles asset orchestration, caching, and project state, with every scene represented as structured, editable data.
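The caching side of that backend can be sketched with a tiny content-addressed cache: identical prompts reuse an already-generated scene instead of re-running generation. This is a stdlib-only sketch with made-up names, not the actual backend code:

```python
import hashlib
import json

class SceneCache:
    """Toy content-addressed cache: the same (prompt, params) pair
    maps to the same key, so repeated requests skip regeneration."""

    def __init__(self):
        self._store = {}
        self.misses = 0  # how many times we actually had to generate

    def _key(self, prompt, params):
        # Canonical JSON (sorted keys) so equivalent requests hash identically.
        payload = json.dumps({"prompt": prompt, "params": params}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_generate(self, prompt, params, generate):
        k = self._key(prompt, params)
        if k not in self._store:
            self.misses += 1
            self._store[k] = generate(prompt, params)
        return self._store[k]

cache = SceneCache()
make = lambda prompt, params: {"prompt": prompt, **params}
a = cache.get_or_generate("forest", {"style": "toon"}, make)
b = cache.get_or_generate("forest", {"style": "toon"}, make)
```

The second call returns the exact cached object, so only one generation ever runs for a repeated prompt.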

Artifact Engine (Blender + PyTorch): Procedurally generates terrain, props, and textures using AI diffusion models and geometry scripts.
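The "geometry scripts" half of this can be illustrated with a dependency-free toy: value noise on a grid, the classic starting point for procedural terrain. This is a simplified sketch, not the actual Blender/PyTorch pipeline:

```python
import random

def value_noise_heightmap(size, cell=4, seed=42):
    """Tiny value-noise terrain: random heights on a coarse lattice,
    smoothly interpolated onto a size x size grid of floats in [0, 1]."""
    rng = random.Random(seed)
    n = size // cell + 2  # lattice slightly larger than the grid
    lattice = [[rng.random() for _ in range(n)] for _ in range(n)]

    def smooth(t):
        # Smoothstep easing so heights blend without visible grid seams.
        return t * t * (3 - 2 * t)

    grid = []
    for y in range(size):
        row = []
        gy, fy = divmod(y, cell)
        ty = smooth(fy / cell)
        for x in range(size):
            gx, fx = divmod(x, cell)
            tx = smooth(fx / cell)
            # Bilinear interpolation of the four surrounding lattice heights.
            h = (lattice[gy][gx] * (1 - tx) * (1 - ty)
                 + lattice[gy][gx + 1] * tx * (1 - ty)
                 + lattice[gy + 1][gx] * (1 - tx) * ty
                 + lattice[gy + 1][gx + 1] * tx * ty)
            row.append(h)
        grid.append(row)
    return grid

terrain = value_noise_heightmap(16)
```

In the real engine this role is played by diffusion models and Blender geometry scripts; the point here is just the shape of the idea: coarse random structure, smoothly interpolated into a displaceable heightfield.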

AMD ROCm Acceleration: Optimized mesh rendering and texture blending for smooth, energy-efficient performance on AMD GPUs.
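One practical detail worth noting: PyTorch's ROCm builds expose AMD GPUs through the same `torch.cuda` API as NVIDIA builds, so device selection code can be identical on both. A minimal sketch (it degrades to CPU if PyTorch isn't installed at all):

```python
def pick_device():
    """Select an accelerator if one is available.

    PyTorch's ROCm builds reuse the torch.cuda namespace for AMD GPUs,
    so this same check works on both NVIDIA and AMD machines.
    """
    try:
        import torch  # works with either the CUDA or the ROCm build
    except ImportError:
        return "cpu"  # no PyTorch installed: fall back to CPU
    return "cuda" if torch.cuda.is_available() else "cpu"

device = pick_device()
```

Tensors and models are then moved with `.to(device)` as usual, with no AMD-specific branches in the hot path.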

Browser Deployment (Vite + Render): Fully online: no installs, no setup, just world-building at the speed of imagination. A Vultr cloud node hosts the website.

Domain: Registered through GoDaddy.

Challenges we ran into

Like every ambitious idea, Scenergy began as chaos.

We faced moments where:

  • We questioned if our idea of “auto-building worlds” was even possible in a weekend.
  • We fought the clock while balancing creativity, code, and caffeine.
  • We struggled to find harmony between automation and artistic control: when should the AI lead, and when should the creator take over?
  • We hit recurring compatibility issues on AMD GPUs, since most models and frameworks target NVIDIA/CUDA; we routinely patched and rewrote code to keep everything running on AMD’s ROCm platform.

But each of those hurdles became a lesson that pushed our boundaries. Every glitch, mismatch, and crash reminded us that innovation is messy, and that’s okay. By the end, we weren’t just optimizing scenes; we were learning to trust the process, powered not just by AMD compute but also by chaos and caffeine.

Accomplishments that we're proud of

  • Turning an idea like “What if worlds built themselves?” into a working prototype in just a few days pushed every skill we had.
  • We proved that creativity scales with accessibility: using AMD’s open compute stack, we showed that high-performance 3D tools can run anywhere, not just in big studios.
  • We shipped a working final product on time, on very little sleep this weekend!

What we learned

  • Performance is freedom. Optimizing with AMD taught us that speed isn’t just about benchmarks; it’s about giving people more time and access to create, iterate, and dream.
  • Collaboration fuels innovation. Every late-night debugging session or lighting fix taught us something new about teamwork, patience, and creative resilience.

In the end, we didn’t just learn to build better scenes; we learned how to build better experiences for the people behind them.

What's next for Scenergy

  • Add multiplayer creation. Let teams co-build worlds together in real time, like a collaborative Google Docs for 3D.
  • Expand AMD integration. Push ROCm even further with cloud rendering and real-time scene streaming.
  • Launch a creator hub. A shared library where users can publish, remix, and trade their Scenergy-made environments and artifacts.
  • Open-source our artifact engine. We want other developers to build on top of Scenergy — turning it into a community-driven world-generation framework.

Built With
