Inspiration

We wanted to reimagine Photoshop for the AI-first, mobile-first era—bringing pro-level editing, 3D effects, and intelligent automation to phones using natural language and voice.

What it does

Aurora performs AI-powered photo editing with 3D parallax, weather-aware day-to-night transforms, voice commands, fast/slow AI modes, and a full toolkit for detection, segmentation, depth estimation, super-resolution, filters, and inpainting.
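The 3D parallax effect boils down to shifting depth layers by different amounts as the virtual camera pans. A minimal sketch of that weighting, assuming a normalized depth map (as DepthAnythingV2 can provide) where 0 is near and 1 is far — the function name and the inverse-depth formula are illustrative, not Aurora's exact code:

```python
def layer_offsets(depths: list[float], camera_offset_px: float) -> list[float]:
    """Inverse-depth weighting: near layers (depth ~0) move most, far layers barely move."""
    return [camera_offset_px * (1.0 - d) for d in depths]

# Foreground (0.1), midground (0.5), background (0.9) for a 10 px camera pan —
# the foreground slides the most, which is what sells the 3D illusion.
offsets = layer_offsets([0.1, 0.5, 0.9], 10.0)
```

Compositing the shifted layers back-to-front then yields the parallax frame.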

How we built it

Built with FastAPI + Python for the AI pipeline, React and React Native for the UI, Ollama for on-device LLMs (Gemma3 1B, Llama3.2 3B), Deepgram for voice, and optimized vision models (BiRefNet, DepthAnythingV2, MobileSAM, MI-GAN, SwinIR), all tied together by a custom multi-step orchestrator.
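The core idea of a multi-step orchestrator is that the LLM emits a plan (an ordered list of tool names) and the pipeline threads the image state through each tool in turn. A minimal sketch under those assumptions — the tool names and dict-based image state are hypothetical stand-ins for Aurora's real segmentation/inpainting models:

```python
from typing import Callable

# Hypothetical tool registry; the real vision models (MobileSAM, MI-GAN, ...) would plug in here.
TOOLS: dict[str, Callable[[dict], dict]] = {
    "segment": lambda img: {**img, "mask": "subject"},
    "day_to_night": lambda img: {**img, "lighting": "night"},
    "inpaint": lambda img: {**img, "filled": True},
}

def run_plan(image: dict, plan: list[str]) -> dict:
    """Execute each planned step in order, passing the updated image state along."""
    for step in plan:
        if step not in TOOLS:
            raise ValueError(f"unknown tool: {step}")
        image = TOOLS[step](image)
    return image

# A plan an LLM might emit for "make it night and remove the sign":
result = run_plan({"path": "photo.jpg"}, ["day_to_night", "segment", "inpaint"])
```

Keeping the plan as plain data also makes it easy to show the user each step for clarification before anything runs.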

Challenges we ran into

We had to optimize heavy models for mobile, design a stable multi-step orchestrator, keep clarification flows smooth, manage 1.4GB of model weights, reduce VRAM usage, and keep latency low across web and mobile.
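One common way to fit 1.4GB of models into a tight VRAM budget is to load them on demand and evict the least-recently-used ones. A sketch of that pattern — the cache size and string "weights" are placeholders, not Aurora's actual loader:

```python
from collections import OrderedDict

class ModelCache:
    """Load models lazily and evict the least-recently-used one past a budget."""
    def __init__(self, max_loaded: int, loader):
        self.max_loaded = max_loaded
        self.loader = loader                 # would load real weights in practice
        self.loaded: OrderedDict[str, object] = OrderedDict()

    def get(self, name: str):
        if name in self.loaded:
            self.loaded.move_to_end(name)        # mark as recently used
        else:
            if len(self.loaded) >= self.max_loaded:
                self.loaded.popitem(last=False)  # evict the coldest model
            self.loaded[name] = self.loader(name)
        return self.loaded[name]

cache = ModelCache(max_loaded=2, loader=lambda n: f"<{n} weights>")
cache.get("MobileSAM"); cache.get("SwinIR"); cache.get("MI-GAN")  # MobileSAM evicted
```

The trade-off is reload latency on a cache miss, so hot-path models (e.g. the segmenter) deserve priority.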

Accomplishments that we're proud of

Real-time 3D parallax, fast day/night transformations, dual-mode LLM editing, on-device reasoning, a clean unified UI, and fully local processing with no cloud LLM calls.

What we learned

Efficient model orchestration, INT4/ONNX optimization, mobile-friendly pipelines, resolving ambiguity in AI editing, and building a balanced system combining speed, reasoning, and creativity.
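INT4 quantization shrinks weights roughly 4-8x by mapping floats onto 16 integer levels. A toy sketch of the symmetric per-tensor scheme to show the idea — this is illustrative arithmetic, not the ONNX Runtime implementation:

```python
def quantize_int4(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor INT4: map floats to integers in [-8, 7] via one scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.31, -0.7, 0.02, 0.49]
q, s = quantize_int4(w)       # 4-bit codes plus one float scale
approx = dequantize(q, s)     # close to w, at a fraction of the storage
```

The rounding error here is why quantized models need the kind of accuracy checks we ran before shipping them to mobile.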

What's next for Aurora

On-device vision models, full offline mode, AI presets, auto-stylization, video editing, live filters, marketplace for community styles, and deeper personalization.

Built With

python, fastapi, react, react-native, ollama, deepgram, onnx
