Inspiration

The challenging part of this project was not coming up with an idea but solving a problem that will arrive in ten years' time. Whilst brainstorming, identity and privacy came up several times, but on their own these were just buzzwords, and no single piece of software can solve both. So we looked for a higher-order consequence: something that may not seem like a big deal to us right now, but that will compound and have a drastic effect on us, and particularly on our children.

Think of it like this. Posting updates and photos of your children might seem like nothing to be concerned about - this is the first-order consequence. But over the years, enough information accumulates online to build a digital twin from the hidden profile quietly being assembled - this is the second-order consequence. And as AI becomes more integrated into the systems around us and gains access to that profile, it is no exaggeration to say it could subtly decide your child's future, what university they attend, and more - this is the third-order consequence.

Statistics suggest that the average child has around 1,500 photos posted online before they are five years old. Not by strangers or hackers, but by their parents, simply out of love. This made us ask: what happens to this longitudinal data, and in an age where AI is being integrated into almost everything for efficiency, could an AI build a hidden shadow profile, or even a digital twin, that we will never know about? In ten years' time the world will look very different.

In 2036, AI systems will be sophisticated enough to cross-reference every post, every caption, every location tag, and every life event shared publicly over a child's entire childhood. They will build digital twins using these hidden profiles and those profiles will be used, invisibly and silently, to make decisions about insurance premiums, university admissions, credit scores, and employment. Not only this, but as AI agents become more deeply embedded into the infrastructure of daily life - making autonomous decisions on behalf of institutions without human oversight - your child's digital twin will not sit passively in a database. It will be queried, updated, and acted upon in real time, in rooms your child will never enter, by systems they will never see.

The child never consented. They were never asked. They were documented before they could speak.

We built Unherit because nobody was talking about this, and nobody had built a tool to do anything about it.


What it does

Unherit is a three-agent AI system built around a single mission: reveal the shadow profile that already exists for your child, then destroy it.

Agent 1 analyses a parent's social media profile exactly as a 2036 data broker would, extracting every caption, location tag, life event, and photo description into a longitudinal dataset.

Agent 2 passes this through a single batched Gemini 2.0 Flash multimodal prompt, building a clinical shadow profile across six categories - Health, Location, Financial, Educational, Behavioural, and Family - synthesised into a bureaucratic dossier read aloud by an ElevenLabs voice in real time, with a cumulative Risk Score out of 100.
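To make the batching concrete, here is a minimal sketch of how the posts from Agent 1 might be collapsed into one prompt for Agent 2. The `ExtractedPost` shape, the function name, and the prompt wording are all illustrative assumptions, not the production schema.

```typescript
// Sketch: collapse all of Agent 1's extracted posts into a single batched
// prompt string. One API call replaces one call per post.
interface ExtractedPost {
  date: string;
  caption: string;
  locationTag?: string;
  photoDescription?: string;
}

function buildBatchedDossierPrompt(posts: ExtractedPost[]): string {
  const header =
    "You are a 2036 data broker. From the posts below, build a shadow " +
    "profile across six categories (Health, Location, Financial, " +
    "Educational, Behavioural, Family) and return strict JSON with a " +
    "riskScore (0-100).";
  const body = posts
    .map(
      (p, i) =>
        `POST ${i + 1} [${p.date}]` +
        (p.locationTag ? ` @ ${p.locationTag}` : "") +
        `\nCaption: ${p.caption}` +
        (p.photoDescription ? `\nPhoto: ${p.photoDescription}` : "")
    )
    .join("\n\n");
  return `${header}\n\n${body}`;
}
```

At fourteen posts, one call instead of fourteen is where the roughly 93% reduction in API usage comes from, and the model sees the whole timeline in a single context window.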

Agent 3 applies adversarial perturbation derived from the Fast Gradient Sign Method (FGSM), making pixel-level modifications that are imperceptible to humans but render every photo unreadable to AI vision systems, dropping the Risk Score from 84 to 8 in real time.
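The core FGSM update is x' = x + ε · sign(∇x L). As a hedged sketch: true FGSM needs gradients from a surrogate vision model (which in the browser would come from a WebGL/WASM inference runtime); here the gradient map is simply an input, and all names are illustrative assumptions rather than Unherit's actual implementation.

```typescript
// Sketch of the FGSM step applied to raw RGBA pixel data from a <canvas>.
// `grad` stands in for a surrogate model's gradient w.r.t. the input image;
// computing it is out of scope for this sketch.
function perturbPixels(
  pixels: Uint8ClampedArray, // ImageData.data (RGBA bytes)
  grad: Float32Array,        // same length as pixels
  epsilon = 2                // small step: imperceptible to humans
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    // Leave the alpha channel untouched (every 4th byte in RGBA).
    if (i % 4 === 3) {
      out[i] = pixels[i];
      continue;
    }
    // x' = x + epsilon * sign(grad); Uint8ClampedArray clamps to [0, 255].
    out[i] = pixels[i] + epsilon * Math.sign(grad[i]);
  }
  return out;
}
```

Because ε is tiny relative to the 0-255 channel range, the change is invisible to a human viewer, yet it is crafted to push the image across a model's decision boundary.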

Beyond the core pipeline, a free caption analyser on the landing page lets any parent paste a caption or upload a photo and receive an instant multimodal risk assessment, no account required. The full reclamation flow concludes with pre-built GDPR Article 17 deletion requests for 214 data brokers, platform opt-out links, a digital property deed, and a data dividend dashboard.
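The caption analyser's single multimodal request could be assembled along these lines. This is a sketch: the field names follow the Gemini API's content format (`parts`, `inlineData`, `mimeType`), while the function name and prompt wording are illustrative assumptions.

```typescript
// Sketch: caption text and an optional base64 photo travel together in one
// parts array, so a single Gemini call can reason over both modalities.
interface Part {
  text?: string;
  inlineData?: { mimeType: string; data: string };
}

function buildCaptionRequest(caption: string, photoBase64?: string) {
  const parts: Part[] = [
    {
      text:
        "Assess the privacy risk of this parent's post about their child. " +
        `Return JSON findings with a risk level.\n\nCaption: ${caption}`,
    },
  ];
  if (photoBase64) {
    parts.push({ inlineData: { mimeType: "image/jpeg", data: photoBase64 } });
  }
  return { contents: [{ role: "user", parts }] };
}
```

With no photo attached the request degrades gracefully to text-only analysis, which is what makes the no-account, paste-a-caption flow possible.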

How we built it

The stack is intentionally lean. Next.js 15 with the App Router handles the full stack, Framer Motion drives all animations, and Tailwind CSS enforces the brutalist monospace design system throughout.

The three-agent pipeline runs across three API routes. Agent 2 uses a single batched Gemini 2.0 Flash prompt across all posts simultaneously rather than one call per post, reducing API usage by 93% and improving inference quality, since the model reasons across the full longitudinal dataset at once. Agent 3 applies adversarial perturbation client side, keeping all photo processing on device with no data ever leaving the browser.

ElevenLabs Turbo v2.5 handles all narration, with the audio pipeline running independently from the UI animation pipeline so neither blocks the other. The caption analyser uses Gemini's multimodal capabilities to accept both text and base64-encoded images in a single prompt, returning structured JSON findings instantly.

The entire experience is stateless: no database, no authentication, no data stored. Everything runs in memory for the duration of the session and is gone when the tab closes, keeping users' data secure.

Accomplishments that we're proud of

What we are most proud of is not the technology but the thinking behind it. Anyone can build a privacy tool; the creative leap here was identifying a third-order consequence that nobody is talking about yet. The villain in this story is not a corporation or a hacker but a loving parent, and that reframing changes everything. We built something that makes an abstract future threat feel immediate and personal, and we did it in a single hackathon session. The fact that this problem does not feel urgent yet is exactly why it matters, because by the time it does, it will be too late.

What we learned

Building Unherit taught us that the hardest part of designing a software solution is not the engineering but the ethics. We had to constantly ask whether revealing a shadow profile was empowering or exploitative, and whether an adversarial poisoning tool in the wrong hands could cause harm rather than prevent it. Navigating that shaped every design decision we made.

We also learned a great deal about how AI agents interact within a pipeline. The output quality of Agent 2 is entirely dependent on how Agent 1 structures its data, and a poorly formatted prompt between agents can collapse the entire chain. Getting the batched multimodal prompt to reason coherently across fourteen posts simultaneously and return consistently structured JSON that Agent 3 could act on required far more iteration than we anticipated.

Beyond the technical side, we learned that framing matters as much as functionality. A tool that shows you a risk score lands differently from a tool that reads your child's medical history aloud in a cold, clinical voice. The emotional architecture of the product is itself a design decision, and getting that right was as challenging as anything we wrote in code. We came in thinking we were building a privacy tool and left having built something that tries to make people feel the weight of a problem before it arrives.

What's next for Unherit

The immediate next step is expanding platform coverage beyond Facebook to Instagram, TikTok, and X, where the volume of child data is arguably even greater and the visual nature of the content makes AI inference significantly more powerful. Alongside this we plan to introduce a subscription model with tiered plans giving families access to continuous monitoring, unlimited broker deletion requests, real time alerts when their child's data is accessed, and legal evidence packs for GDPR enforcement. Long term the vision is bigger than a product. As AI agents become the primary interface between people and institutions, Unherit becomes the layer that sits between your child's identity and every system that wants to query it. A vault that a child inherits at eighteen, containing the full record of what was taken, what was disputed, and what was reclaimed on their behalf before they were old enough to ask.

Built With

  • elevenlabs-turbo-v2.5-api
  • framer-motion
  • gemini-2.0-flash-api
  • next.js-15
  • tailwind-css
  • typescript
  • vercel