Inspiration

Every day, more and more people are generating videos with AI. On top of that, the models powering this process are becoming increasingly capable, producing compelling videos that adhere to real-world physics and look photorealistic. Looking at this trend, I believe that in the not-too-distant future, anyone will be able to create compelling shorts, films, and movies. That's even where the name came from: a backlot is the outdoor area of a studio where films are shot.

The problem is that right now, the workflow for making good videos is fractured: generate an image in ChatGPT, copy the image into Veo to create a video, download the video onto your device, then upload it into your editor of choice, e.g. Adobe Premiere or DaVinci Resolve, before finally publishing it to your platform of choice. This fractured process is where I see an opportunity for a tool that consolidates all of these steps. That was the inspiration for Backlot AI.

What it does

Backlot AI enables creators to easily edit their videos on the web. The platform provides a library of built-in audio and video files to help creators give their videos a cinematic quality. It also puts the power of AI in creators' hands through built-in text-to-video generation. Creators can upload, generate, edit, and publish their videos all in one space. No more workflows that require migrating from one tool to another: Backlot AI combines them into a single workspace.

How we built it

The app was designed in Figma, then imported into Bolt, where it was further customized through AI prompting. Bolt was incredibly helpful in giving the video editing interface a custom, refreshed look. It was also very helpful for troubleshooting errors in the code, providing clear explanations of what was happening along the way. Just by going through the process of building the app, I learned to read code much better and to grasp how everything fits together.

Challenges we ran into

As a fairly new coder, I think the biggest challenge was that I didn't know what I didn't know. For example, I didn't know that programmatic video editing frameworks like Revideo or Remotion existed that could have helped with the video editing functionality and/or the UI for an editing timeline. As a result, there was a lot of manual customization that could have been avoided, saving both time and tokens.
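To give a flavor of the manual bookkeeping involved: even a minimal custom timeline needs a data model mapping the playhead position to clips and their trim points. Here is a small sketch in TypeScript (the `Clip` shape and function names are hypothetical illustrations, not Backlot's actual code) of roughly the kind of logic a framework like Remotion would otherwise provide:

```typescript
// A clip placed on the timeline: `start` is its position on the
// timeline, `inPoint` is the trim offset into the source file.
interface Clip {
  id: string;
  src: string;
  start: number;     // timeline position, seconds
  duration: number;  // seconds
  inPoint: number;   // offset into the source file, seconds
}

// Find the clip under the playhead, if any.
function activeClipAt(clips: Clip[], t: number): Clip | undefined {
  return clips.find((c) => t >= c.start && t < c.start + c.duration);
}

// Map a timeline time to the corresponding time inside the source
// file, accounting for the clip's trim-in point.
function sourceTimeAt(clips: Clip[], t: number): number | undefined {
  const clip = activeClipAt(clips, t);
  return clip === undefined ? undefined : clip.inPoint + (t - clip.start);
}

const timeline: Clip[] = [
  { id: "intro", src: "intro.mp4", start: 0, duration: 4, inPoint: 0 },
  { id: "main", src: "main.mp4", start: 4, duration: 6, inPoint: 2 },
];

console.log(activeClipAt(timeline, 5)?.id); // "main"
console.log(sourceTimeAt(timeline, 5));     // 3 (1s into "main", trimmed 2s)
```

Multiply this by tracks, transitions, audio sync, and rendering, and it becomes clear how much time a purpose-built framework could have saved.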

Accomplishments that we're proud of

There are plenty!
Making it to the finish line would be the biggest. I got started on the hackathon fairly late, but used every spare minute I could to build. Hitting deploy was a great feeling.

I’m also proud of the tenacity this project inspired in me. There are a ton of complexities involved in building a custom video editing tool, and it was especially challenging when some solutions spawned new problems. Being able to solve each problem as it arose is something I am proud of. By trying to understand each issue so I could better frame a solution or a request to Bolt, I was able to chip away at them until I had a working product.

What I learned

There is so much power in people's hands now thanks to AI and tools like Bolt. I’ve always enjoyed learning new things, and this project taught me to be a better “prompt engineer” so I could achieve the outcome I was looking for, and to read and write React (I'm still a beginner for the most part, but now a seasoned beginner). I've learned what libraries are, what state management is, how to deploy an app to Netlify, how to debug using the console, how to run commands in a terminal, and so much more.

What's next for Backlot AI

I know this idea has a lot of potential, for the reasons I've highlighted above. But as with any beta product, there are some minor bugs left to work out, and I intend to keep chipping away at those. I'd also like to refine the UI to better align with my Figma concept; prompting the customization of the UI proved difficult, as the AI seemed inclined to refactor the design I shared.

I would also like to fully explore AI video generation in the tool and the different ways it could be applied. What if a user could pause playback, sketch an idea on the screen, tell the AI to change the video according to their sketch, and watch the video transform in real time? I believe models will reach the point where real-time changes like this are possible, and when they do, Backlot should not be far behind. There’s an opportunity for an all-in-one video editing tool with AI video generation to be valuable to many people, and Backlot AI could be that tool.

Built With

  • bolt
  • figma
  • netlify
  • react
  • wavespeed.ai