💡 Inspiration
The current AI photo enhancement market is dominated by tools that over-optimize appearance at the cost of identity.
Over-smoothed skin, altered facial geometry, and “plastic” results have become the norm — especially in professional and dating contexts where authenticity actually matters most.
I wanted to challenge that direction.
The core idea behind IdentityGuard is simple:
> You should look like the best version of yourself — not like a stranger who just looks similar.
Instead of building another beautification filter, I focused on creating an identity-preserving refinement engine governed by strict internal validation and reasoning.
🧠 What this project does
IdentityGuard is an AI-powered portrait refinement pipeline that improves presence, confidence, and professionalism while strictly preserving human identity.
Unlike traditional AI photo apps, this system:
- Separates analysis, validation, refinement, and UI feedback
- Explicitly identifies Identity Anchors (biometric traits that must never change)
- Allows only constrained, explainable refinements
- Uses self-correcting AI loops to avoid both over-editing and under-editing
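The "allow only what is permitted" idea can be sketched in TypeScript. The field names and allow-list entries below are illustrative assumptions, not the project's actual schema; the point is that a plan coming back from the model is validated against an explicit allow-list of modifiable nuances rather than a deny-list of forbidden edits.

```typescript
// Identity Anchors: biometric traits that must never change.
// (Field names are hypothetical examples.)
interface IdentityAnchors {
  faceGeometry: string; // e.g. a facial-landmark signature
  eyeShape: string;
  noseShape: string;
  baseSkinTone: string;
}

// An untrusted plan as it might arrive from the model: plain strings.
interface RawPlan {
  edits: string[];
}

// Allow-list of modifiable nuances (assumed names).
const MODIFIABLE: ReadonlySet<string> = new Set([
  "lighting",
  "skinTextureCleanup",
  "postureEmphasis",
]);

// A plan is safe only if every edit targets an allow-listed nuance —
// the model is told what it MAY change, never asked to avoid anchors.
function isPlanSafe(plan: RawPlan): boolean {
  return plan.edits.every((edit) => MODIFIABLE.has(edit));
}
```

Anything naming an anchor trait (say, `faceGeometry`) simply fails the check and is sent back for re-planning.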
The result is a photo that feels better — without obvious edits and without identity drift.
🏗️ How I built it
The system is structured as a multi-stage AI pipeline governed by Gemini:
Validation & Analysis
- Ensures the image contains exactly one human face
- Rejects low-quality, non-human, or multi-face images
- Extracts identity anchors and modifiable nuances before any refinement occurs
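A minimal sketch of the pre-flight gate, assuming the detection result arrives as plain data (in the project itself, detection is delegated to Gemini; the threshold and field names here are assumptions):

```typescript
// Hypothetical detector output.
interface DetectionResult {
  faces: number;
  isHuman: boolean;
  shorterSidePx: number;
}

type GateOutcome =
  | { ok: true }
  | { ok: false; reason: string };

// Reject anything that isn't exactly one adequately sized human face.
function validatePortrait(d: DetectionResult, minSide = 512): GateOutcome {
  if (!d.isHuman) return { ok: false, reason: "no human subject detected" };
  if (d.faces === 0) return { ok: false, reason: "no face found" };
  if (d.faces > 1) return { ok: false, reason: "multiple faces not supported" };
  if (d.shorterSidePx < minSide)
    return { ok: false, reason: `image below ${minSide}px minimum` };
  return { ok: true };
}
```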
Intent Abstraction Layer
- Users select a context (e.g., LinkedIn or dating)
- Context is translated into constrained goals like confidence, professionalism, or presence
- No direct aesthetic sliders that could cause unsafe edits
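The intent abstraction can be sketched as a fixed lookup: a user-selected context expands into a constrained, audited goal set, so there is no free-form aesthetic input for the model to over-interpret. The exact goal lists per context are assumptions.

```typescript
// Contexts come from the UI; goal vocabulary is fixed in advance.
type Context = "linkedin" | "dating";
type Goal = "confidence" | "professionalism" | "presence";

// Assumed mapping — the writeup names the goals but not the pairing.
const INTENT_MAP: Record<Context, Goal[]> = {
  linkedin: ["professionalism", "confidence"],
  dating: ["presence", "confidence"],
};

// The only "knob" users get is the context; everything downstream
// receives a closed goal set, never raw aesthetic sliders.
function goalsFor(context: Context): Goal[] {
  return [...INTENT_MAP[context]];
}
```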
Refinement Planning
- Gemini generates a retouch plan based on allowed transformations only
- Plans are internally validated against identity constraints
- If a plan is too aggressive or too subtle, the system self-corrects
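The self-correction step can be sketched as a propose–judge–revise loop. In the project this judgment is made by Gemini against the identity constraints; the numeric "aggressiveness" score, thresholds, and round budget below are placeholders for illustration.

```typescript
interface Plan {
  aggressiveness: number; // 0 = no-op, 1 = heavy retouch (illustrative)
}

type Verdict = "too_subtle" | "ok" | "too_aggressive";

// Placeholder judge: thresholds are assumptions standing in for the
// model's constraint-based validation of a plan.
function judge(p: Plan): Verdict {
  if (p.aggressiveness < 0.15) return "too_subtle";
  if (p.aggressiveness > 0.45) return "too_aggressive";
  return "ok";
}

// Re-prompt the planner with the verdict until a plan passes or the
// round budget is spent; then apply a conservative default.
function selfCorrect(
  propose: (feedback?: Verdict) => Plan,
  rounds = 3,
): Plan {
  let feedback: Verdict | undefined;
  for (let i = 0; i < rounds; i++) {
    const plan = propose(feedback);
    feedback = judge(plan);
    if (feedback === "ok") return plan;
  }
  return { aggressiveness: 0.2 }; // safe fallback plan
}
```

The same loop handles both failure modes: "too_subtle" pushes the next proposal to do more, "too_aggressive" pushes it to do less.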
Post-Refinement Identity Verification
- The refined image is compared against the original
- If recognizability is compromised, a safe fallback refinement is applied
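The final gate reduces to a single decision, sketched here with an abstract similarity score in [0, 1]. In the project the comparison is multimodal via Gemini; the threshold value and function names are assumptions.

```typescript
interface PortraitImage {
  id: string;
}

// Assumed acceptance bar for "still recognizably the same person".
const IDENTITY_THRESHOLD = 0.92;

// Accept the refinement only if the subject stays recognizable;
// otherwise re-run a much more conservative fallback pass.
function verifyOrFallback(
  similarity: (a: PortraitImage, b: PortraitImage) => number,
  original: PortraitImage,
  refined: PortraitImage,
  safeFallback: (orig: PortraitImage) => PortraitImage,
): PortraitImage {
  return similarity(original, refined) >= IDENTITY_THRESHOLD
    ? refined
    : safeFallback(original);
}
```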
This architecture turns Gemini into a constrained decision system, not a creative free-for-all.
⚙️ Technologies used
- Google Gemini (Gemini 3 Flash Preview & Gemini 2.5 Flash Image)
- TypeScript
- Structured JSON schemas for AI self-governance
- Multimodal image analysis and generation
🚧 Challenges I faced
The biggest challenge was balance.
Early versions preserved identity perfectly — but changed almost nothing.
Later versions improved visual quality — but risked identity drift.
The solution was not more rules, but better permissions: instead of telling the AI what not to do, I taught it exactly what it was allowed to change.
Another challenge was making the system explainable and safe without killing creativity — which led to the self-correcting refinement loops.
📚 What I learned
- Identity preservation cannot be enforced by fear-based constraints alone
- AI systems perform better when given explicit transformation boundaries
- Context-based intent is safer and more usable than direct aesthetic controls
- Gemini excels when used as a reasoning orchestrator, not just a generator
🚀 What’s next
Future iterations could include:
- Temporal consistency for video portraits
- Industry-specific profiles (journalism, legal, healthcare)
- Auditable refinement logs for compliance-heavy use cases
🏁 Final note
IdentityGuard is not about making people look different.
It’s about restoring how they look on their best day — authentically, safely, and confidently.
Built With
- aistudio
- gemini-2.5-flash-image
- gemini-3-flash-preview
- gemini3
- html
- javascript
- react
- typescript