Inspiration

Every frontend developer has experienced this moment: you scroll through a website, notice a beautiful animation, and immediately wonder how it was built. Sometimes it's a subtle hover effect, sometimes a flowing background or a smooth page transition. Recreating that motion usually means digging through DevTools, experimenting with easing curves, and guessing timing values, a process that can take hours.

I wanted to remove that friction. AnimReverse AI started as a simple idea: what if a tool could observe an animation and translate what it sees into usable source code? Instead of manually reverse-engineering interactions, developers could focus on creativity and iteration. The goal was not to copy anyone's work, but to learn from visual behavior and recreate it in an original way.

How I Built It

The application is built on Google's Gemini 3 multimodal model through AI Studio. Users can either provide a website URL or upload a screenshot of an animation. When a live page cannot be accessed due to browser security restrictions, the screenshot path ensures the system still works reliably.

Once the input is provided, the model analyzes motion patterns, layout structure, color distribution, and timing behavior. Instead of treating the animation as static pixels, the system reasons about how elements move over time and what kind of mechanism could produce that effect. Based on this understanding, the application generates a clean, standalone implementation in HTML, CSS, and JavaScript.

To make the experience practical, the generated code is rendered immediately inside a live preview panel, allowing users to see the animation running, tweak values, and export the files.

Challenges

One of the main challenges was dealing with website embedding restrictions. Many modern sites block iframe loading, which prevents direct inspection of the page. I solved this by adding a visual fallback workflow where users upload a screenshot instead.
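The fallback decision can be sketched as a small header check. The names below (`isEmbeddable`, `chooseInputMode`) are hypothetical, and the sketch assumes a server-side proxy supplies the target page's response headers, since browsers hide them from cross-origin scripts:

```typescript
// Hypothetical helper: decide whether a page can be shown in an iframe
// by inspecting its response headers. Assumes a server-side proxy fetched
// the headers on the app's behalf.
function isEmbeddable(headers: Record<string, string>): boolean {
  const get = (name: string): string =>
    Object.entries(headers)
      .find(([k]) => k.toLowerCase() === name)?.[1]
      ?.toLowerCase() ?? "";

  // X-Frame-Options: DENY or SAMEORIGIN both block third-party embedding.
  const xfo = get("x-frame-options");
  if (xfo.includes("deny") || xfo.includes("sameorigin")) return false;

  // CSP frame-ancestors is the modern equivalent; any value other than *
  // restricts who may embed the page.
  const csp = get("content-security-policy");
  const match = csp.match(/frame-ancestors\s+([^;]+)/);
  if (match && match[1].trim() !== "*") return false;

  return true;
}

// Pick the input path for the UI: live iframe when embedding is allowed,
// otherwise prompt the user to upload a screenshot.
function chooseInputMode(
  headers: Record<string, string>
): "iframe" | "screenshot" {
  return isEmbeddable(headers) ? "iframe" : "screenshot";
}
```

This keeps the fallback a pure decision over headers, so the UI can switch modes before ever rendering an iframe.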
The AI then reconstructs the animation purely from visual reasoning.

Another challenge was maintaining consistency across generated files. Since the system produces HTML, CSS, and JavaScript separately, mismatched class names or IDs can easily break the animation. This was addressed by enforcing a strict structured output format so identifiers remain synchronized automatically.

What I Learned

This project reinforced how sensitive animation is to timing and motion. Very small changes in delay or easing can drastically change how polished an interaction feels. I also learned that modern AI models are surprisingly good at identifying visual patterns that humans intuitively recognize but struggle to quantify.

More importantly, building AnimReverse AI showed me that AI can be a creative partner rather than just a code generator. Instead of replacing developers, it accelerates experimentation and helps translate visual ideas into working systems faster.
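The identifier-synchronization challenge mentioned above can be illustrated with a small validation pass over the model's structured output. The `GeneratedAnimation` shape and `findUnmatchedClasses` helper here are hypothetical sketches, not the app's actual schema:

```typescript
// Hypothetical shape of the model's structured output; the real schema
// is whatever the app's prompt enforces.
interface GeneratedAnimation {
  html: string;
  css: string;
  js: string;
}

// Collect class names referenced by CSS selectors (".foo") and those
// declared in HTML class attributes, then report any CSS classes the
// markup never uses -- a quick guard against drifting identifiers.
function findUnmatchedClasses(out: GeneratedAnimation): string[] {
  const cssClasses = new Set(
    [...out.css.matchAll(/\.([A-Za-z_][\w-]*)/g)].map((m) => m[1])
  );
  const htmlClasses = new Set(
    [...out.html.matchAll(/class="([^"]*)"/g)].flatMap((m) =>
      m[1].split(/\s+/).filter(Boolean)
    )
  );
  return [...cssClasses].filter((c) => !htmlClasses.has(c));
}
```

Running a check like this after every generation makes mismatches visible immediately instead of surfacing as a silently broken animation in the preview.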

Built With

  • built-with-react-19
  • esm.sh
  • google-gemini-3-pro-api-(via-ai-studio/antigravity)
  • sandboxed-iframes-for-live-animation-rendering
  • tailwind-css
  • typescript