Architectural renderings play a crucial role in the design process, helping to ensure that a project is functional, aesthetically pleasing, and meets the needs of all stakeholders. They are the most important communication tool for conveying design intent to people who may not have a background in architecture, making it easier for them to understand and evaluate a proposal. Yet with only a handful of cut-out asset libraries available, architects struggle to find appropriate people to incorporate into their designs. On top of the time-consuming search for the right asset, these libraries also fall short on representation, showing mainly white demographics from North America or Northern Europe.

What it does

Our project uses AI to generate diverse images of people for architectural renderings. The goal is to enable architects and designers to easily incorporate a wider range of people, cultures, and environments into their designs, promoting diversity and inclusion in the built environment.

How we built it

After brainstorming on different ideas, we saw the potential of an AI-powered asset generator for designers. We designed the first iteration of the user interface and went on to build the prototype using:

  • React client hosted on GitHub Pages
  • Node.js & Express.js for the backend
  • Replicate API for inference
  • AWS for hosting the backend
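
To make the flow concrete, here is a minimal sketch of how the Express backend could forward a prompt to Replicate's predictions endpoint. The function name and the model version hash are illustrative placeholders, not the project's actual code:

```javascript
// Sketch: build the request for Replicate's "create a prediction" endpoint.
// STABLE_DIFFUSION_VERSION_HASH is a placeholder for a real model version.
function buildPredictionRequest(prompt, apiToken) {
  return {
    url: "https://api.replicate.com/v1/predictions",
    method: "POST",
    headers: {
      Authorization: `Token ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      version: "STABLE_DIFFUSION_VERSION_HASH",
      input: { prompt },
    }),
  };
}

// A hypothetical Express route could then use it like this:
// app.post("/generate", async (req, res) => {
//   const { url, ...options } =
//     buildPredictionRequest(req.body.prompt, process.env.REPLICATE_API_TOKEN);
//   const prediction = await fetch(url, options).then((r) => r.json());
//   res.json(prediction);
// });
```

Keeping the API token on the server side like this also means the React client never handles credentials directly.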

Challenges we ran into

When building the prototype, we found ourselves running into some challenges around:

  • finding the best way to set up inference: we first deployed our own model, which produced lower-quality results, and later switched to the Replicate API as we were running out of time.
  • prompt engineering and understanding how Stable Diffusion works. By experimenting with different prompts, we were able to overcome the poor results we were initially getting.
  • designing an intuitive, user-friendly experience for non-technical users. By abstracting away the complexity of Stable Diffusion, we lower the barrier to entry for this novel AI tool.
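
To give a flavor of the prompt engineering involved, a prompt builder for cut-out people assets might look like the sketch below. The template and attribute names are hypothetical illustrations, not our production prompts:

```javascript
// Hypothetical prompt builder: turns simple UI choices into a
// Stable Diffusion prompt so users never write prompts by hand.
function buildPrompt({ subject, ethnicity, age, activity }) {
  // Collect only the attributes the user actually selected.
  const details = [ethnicity, age && `${age} years old`, activity]
    .filter(Boolean)
    .join(", ");
  // Style keywords nudge the model toward clean cut-out assets.
  return (
    `full body photo of a ${subject}${details ? `, ${details}` : ""}, ` +
    "isolated on white background, architectural rendering asset, high detail"
  );
}
```

For example, `buildPrompt({ subject: "woman", ethnicity: "South Asian", age: 30, activity: "riding a bicycle" })` yields a complete prompt, while omitted attributes are simply skipped rather than left as empty slots.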

Accomplishments that we're proud of

We have designed a new user experience that allows non-technical designers to make the most of Stable Diffusion for their design projects. Aside from increasing representation in the architecture world, our functioning prototype also tackles one of the biggest problems with Stable Diffusion: the prompt-engineering and technical-literacy barrier that keeps generative AI out of reach for many people.

What we learned

Through these short, intense few days, we learned a lot about the technical side of Stable Diffusion: deploying our own models, running SD models in production, prompt engineering, and abstracting away the technical parts of SD for a better UX.

What's next for

Our next iteration will focus on meeting the evolving needs of designers by providing different types of asset generation as well as inpainting capabilities.

Built With

react · node.js · express.js · replicate · aws
