Inspiration

WE ALL (yes even YOU, the person reading this) love Fortnite. But getting better at it? That’s a grind.

When I was younger, I was inspired by streamers like SypherPK, who break down their gameplay to improve and entertain. What if every Fortnite player had their own SypherPK, giving personalized, meme-worthy, and brutally honest feedback on every clip they upload?

And what if you could explore your progress on an interactive 3D island, opening loot chests filled with random clips of you in the past?

That’s where my idea began.

What it does

Fortnite, We Need to Talk is your AI Fortnite coach… wearing the SypherPK icon skin.

Upload your clip → Get back:

🔍 A detailed breakdown of your performance (what you did well, what you fumbled, and how to improve).

🧠 Advice in the voice of SypherPK, mimicking his signature phrases and tone (yes, he talks about Chun Li as much as the real SypherPK does).

🖼️ AI-generated thumbnails for each clip (I used Gemini for this!).

🗺️ A 3D interactive Fortnite island. Click on randomized chests to teleport into different moments and feedback sessions.

It’s not just coaching. It’s content + fun + glow-up.

How I built it

Frontend:

  • React + Tailwind for the UI

  • Three.js + @react-three/fiber + @react-three/cannon to build a physics-based 3D map

  • Dynamic 3D loot chest spawning, clickable to jump into random video breakdowns (see the sketch after this list)

  • Custom collision boxes to keep everything grounded (literally)
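
Under the hood, the chest logic only takes a few hooks. Here's a minimal sketch of how the island could be wired up with @react-three/fiber and @react-three/cannon; the component names (Ground, Chest, Island) and the clipUrls prop are placeholders, and the real project uses actual chest models rather than gold boxes:

```tsx
// Minimal sketch: physics-based island with clickable, randomly spawned loot chests.
import { useMemo } from "react";
import { Canvas } from "@react-three/fiber";
import { Physics, usePlane, useBox } from "@react-three/cannon";
import type { Mesh } from "three";

function Ground() {
  // usePlane creates a static body by default; rotate it flat to act as the island floor.
  const [ref] = usePlane<Mesh>(() => ({ rotation: [-Math.PI / 2, 0, 0] }));
  return (
    <mesh ref={ref} receiveShadow>
      <planeGeometry args={[100, 100]} />
      <meshStandardMaterial color="#3fa34d" />
    </mesh>
  );
}

function Chest({ position, onOpen }: { position: [number, number, number]; onOpen: () => void }) {
  // A dynamic box body, so chests drop in and settle with real collisions.
  const [ref] = useBox<Mesh>(() => ({ mass: 1, position, args: [1, 0.8, 0.7] }));
  return (
    <mesh ref={ref} castShadow onClick={onOpen}>
      <boxGeometry args={[1, 0.8, 0.7]} />
      <meshStandardMaterial color="#c8902a" />
    </mesh>
  );
}

export function Island({ clipUrls }: { clipUrls: string[] }) {
  // Random spawn points, memoized so chests don't respawn on every render.
  const spots = useMemo(
    () =>
      clipUrls.map((): [number, number, number] => [
        (Math.random() - 0.5) * 40,
        5 + Math.random() * 3, // drop from the sky, battle-bus style
        (Math.random() - 0.5) * 40,
      ]),
    [clipUrls]
  );
  return (
    <Canvas shadows camera={{ position: [0, 15, 25], fov: 50 }}>
      <ambientLight intensity={0.6} />
      <directionalLight position={[10, 20, 5]} castShadow />
      <Physics>
        <Ground />
        {spots.map((p, i) => (
          <Chest key={i} position={p} onOpen={() => window.open(clipUrls[i])} />
        ))}
      </Physics>
    </Canvas>
  );
}
```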

Backend:

  • Twelve Labs API to analyze Fortnite gameplay (extract key moments, summarize performance)

  • Gemini API to generate thumbnails and SypherPK-style advice

  • Flask server to orchestrate the AI pipelines and serve video data
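
Since the heavy lifting happens server-side, the frontend only needs one round trip. Here's a TypeScript sketch of that hand-off; the /api/analyze endpoint path and the response shape are assumptions for illustration, not the actual Flask routes:

```ts
// Sketch of the upload → breakdown round trip from the React side.
// Endpoint path and response shape are hypothetical.
type Breakdown = {
  summary: string; // Twelve Labs performance analysis
  advice: string; // SypherPK-style feedback from Gemini
  thumbnailUrl: string; // Gemini-generated thumbnail
};

export async function analyzeClip(file: File): Promise<Breakdown> {
  const form = new FormData();
  form.append("clip", file);

  // Flask orchestrates Twelve Labs + Gemini and returns everything as one JSON payload.
  const res = await fetch("/api/analyze", { method: "POST", body: form });
  if (!res.ok) throw new Error(`Analysis failed with status ${res.status}`);
  return res.json();
}
```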

Challenges I ran into

Gemini API quirks: I faced unexpected bugs and inconsistent outputs when generating AI thumbnails and SypherPK-style feedback, and it took lots of trial and error to stabilize them.
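
One pattern that helps stabilize this kind of pipeline is output validation plus retry-with-backoff at a lower temperature. A sketch of the idea using the @google/generative-ai JS SDK; the project's Gemini calls actually live in the Flask backend, so treat this as the shape of the fix rather than the literal code:

```ts
// Retry wrapper for flaky generations: reject empty output, back off, try again.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  generationConfig: { temperature: 0.4 }, // lower temperature = fewer off-script replies
});

export async function generateWithRetry(prompt: string, attempts = 3): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    try {
      const result = await model.generateContent(prompt);
      const text = result.response.text().trim();
      if (text) return text; // empty responses fall through and retry
    } catch (err) {
      if (i === attempts - 1) throw err;
    }
    await new Promise((r) => setTimeout(r, 500 * 2 ** i)); // backoff: 0.5s, 1s, 2s…
  }
  throw new Error("Gemini returned no usable output");
}
```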

3D physics & collisions: Getting the loot chests to behave naturally on the island with proper collision detection was tricky and took extensive tuning using Cannon.js.

Using Twelve Labs for the first time: There was a steep learning curve to understand how to analyze Fortnite clips effectively and integrate the API into my backend and frontend.

Nailing the Fortnite vibe: Designing the UI to match Fortnite’s iconic color scheme and font (like the “Luckiest Guy” font) took several iterations to get just right.
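
For reference, wiring the font and palette into Tailwind is a one-time config change. A sketch of what that could look like; the color names and hex values here are eyeballed approximations, not official Fortnite colors:

```ts
// tailwind.config.ts — sketch of the Fortnite theming.
import type { Config } from "tailwindcss";

export default {
  content: ["./src/**/*.{ts,tsx,js,jsx}"],
  theme: {
    extend: {
      fontFamily: {
        // "Luckiest Guy" loaded via Google Fonts in index.html
        fortnite: ['"Luckiest Guy"', "cursive"],
      },
      colors: {
        "storm-purple": "#8a2be2",
        "victory-gold": "#fdd835",
      },
    },
  },
  plugins: [],
} satisfies Config;
```

From there, classes like `font-fortnite` and `text-storm-purple` carry the vibe across the whole UI.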

Accomplishments that I'm proud of

Built a functioning AI coach that talks like a streamer, not a robot.

Created a 3D interactive island experience with physics-based mechanics (in a weekend!).

Seamlessly integrated multiple advanced APIs (Twelve Labs, Gemini) and a Flask server into one coherent UX.

Managed to make watching your own Ls actually fun.

What I learned

How to work with AI video understanding: parsing real gameplay footage for useful insights.

Prompt engineering to generate content with personality, not generic advice (see the prompt sketch at the end of this section).

Using Three.js and physics libraries to create interactive 3D environments.

How to balance fun + utility in an app designed for gamers.
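
On the prompt-engineering point above: the trick is to give the model concrete voice rules plus the clip's actual events, instead of a generic "give Fortnite tips" ask. A hypothetical sketch of that kind of persona prompt; the phrasing is illustrative, not the project's real prompt:

```ts
// Persona prompt builder: voice rules + clip summary → in-character feedback request.
export function sypherPrompt(clipSummary: string): string {
  return `
You are SypherPK, the Fortnite streamer, reviewing a viewer's clip.
Voice rules:
- High energy, playful roasting, but genuinely helpful.
- Reference your love of the Chun Li skin when it fits.
- Always end with one concrete drill to practice.

Here is what happened in the clip:
${clipSummary}

Give your breakdown: what they did well, what they fumbled, and how to improve.
`.trim();
}
```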

What's next for Fortnite, We Need to Talk

Add timeline scrubbing and clip annotation directly in the app

Train a custom SypherPK-style model using fine-tuning or retrieval-based (vector) prompting for even more realism

Enable clip sharing and leaderboard systems — rank your plays, see friends' feedback

Expand to other games like Valorant or Apex Legends with themed AI coaches (imagine “Shroud, We Need to Talk” 👀)

Mobile-friendly version so you can get roasted on the go

Built With

@react-three/cannon · @react-three/fiber · flask · gemini · react · tailwind · three.js · twelve-labs
