Inspiration

I have been building software for 20 years. I have built products, led teams, designed systems. I know who I am and what I bring. And yet, when I started thinking about my next career move, and about what AI was doing to the job market, I felt something I didn't expect.

Fear.

Not of being unqualified. Of being unreadable. Of a world moving so fast, with AI reshaping every hiring decision, every workflow, every definition of what a skilled person looks like — and not knowing whether any of that experience, that nuance, that specific way I think and build would survive the filter.

The systems that evaluate people — CVs, ATS filters, keyword matching — were not built to understand anyone. They were built to sort. And the rise of AI in hiring has made that worse, not better. Recruiters paste CVs into their AI assistant and ask "is this person a fit?" The AI reads a list of skills and years of experience. It has no idea who the person actually is.

That frustration was personal. But the idea that crystallized wasn't about fixing the CV — it was about something deeper.

I had been thinking about AI and identity for a while. Most AI tools have some form of memory now — they remember things you've said, they recall past conversations. But that memory is passive. It accumulates in the background without you shaping it. You can't edit it, you can't audit it, you can't make it precisely represent who you are. And you certainly can't hand it to a different tool, or to someone else's AI agent.

There had to be a better way to give AI a real understanding of who you are — something intentional, something you controlled, something captured in your own voice.

Those two threads came together into a single question: what if you could give AI a real understanding of who you are — captured in your own voice, in a format any agent can load?

That question became Eigenself.

I built my own protocol first. I sat down with the interview, answered the questions honestly — about how I think, what drives me, where I draw the line, what kind of work energizes me and what drains me. And when I read what came out the other side, I felt something I didn't expect.

I felt at ease. I felt ready for the new era that is coming with AI. I wasn't afraid anymore.

That feeling is what this product is trying to give people.


What It Does

Eigenself conducts a 20-minute voice interview using Amazon Nova 2 Sonic, then synthesises everything you said into a structured .md identity protocol — a machine-readable document that captures not just what you have done, but how you think, what drives you, how you communicate, and what makes you specifically you.

The protocol is designed to be loaded into any AI agent as a system prompt. Once loaded, that agent can represent you accurately — not approximately. What you do with it from there is up to you. Some examples of what's already built in:

  • Evaluate job fit — paste a job description, get a scored fit report with what aligns, what doesn't, and any deal-breakers flagged
  • Write cover letters — generated in your actual voice, grounded in your real values and experience
  • Publish a web profile — a human-readable page hosted on S3 via CloudFront
  • Power interview practice — feed the protocol to any AI and get questions calibrated to who you actually are
  • Serve as your AI context layer — load it into Claude Projects, Custom GPTs, Cursor, or any AI tool you use daily

But these are just a starting point. The protocol is a portable identity layer — its uses extend as far as AI itself does.


How I Built It

At a high level: the user speaks, Nova 2 Sonic listens and responds in real time — handling the entire voice conversation on its own. When the interview ends, Nova 2 Lite reads the full transcript and writes the protocol. For users who prefer to type, a Bedrock Agent guides the conversation instead, bringing session memory and adaptive questioning across eight identity areas. Either way, the result is the same: a markdown file the user owns. Everything runs on AWS.

For those who want the details:

Nova 2 Sonic conducts the interview via a bidirectional WebSocket stream managed by Socket.IO. The Angular frontend captures microphone input as 16kHz PCM via Web Audio API AudioWorklet, streams it to the Node.js backend, which feeds it into the Sonic session. Audio output returns as 24kHz PCM chunks, reassembled and played in the browser.
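The capture side of that pipeline comes down to converting the Float32 samples the Web Audio API produces into the 16-bit PCM that gets streamed to the backend. A minimal sketch of that conversion — the function name and clamping details are illustrative, not the project's actual code:

```typescript
// Convert Float32 samples from an AudioWorklet (nominal range [-1, 1])
// into 16-bit PCM suitable for streaming over Socket.IO.
// Illustrative sketch, not the project's actual implementation.
function floatTo16BitPCM(input: Float32Array): Int16Array {
  const out = new Int16Array(input.length);
  for (let i = 0; i < input.length; i++) {
    // Clamp first: worklet output can overshoot the nominal range.
    const s = Math.max(-1, Math.min(1, input[i]));
    // Scale: negatives map onto [-32768, 0), positives onto [0, 32767].
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}
```

The playback path is the same idea in reverse: 24kHz PCM chunks arrive over the socket, get converted back to Float32, and are queued into the browser's audio output.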

Bedrock Agents orchestrate the interview intelligence when in text mode — tracking which of eight identity sections have been covered, adapting questions based on prior answers, deciding when depth has been reached. The agent adds some latency compared to a direct model call, but brings session memory that makes the conversation genuinely stateful. Voice mode (Nova 2 Sonic) handles its own conversational state directly through the bidirectional stream, without routing through the agent.
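The agent maintains that coverage state itself; as a rough sketch of the kind of bookkeeping involved, imagine tracking which areas still need a question. The section names here are placeholders — the post doesn't list the actual eight:

```typescript
// Illustrative sketch of interview-coverage tracking. Section names are
// hypothetical placeholders, not the project's actual eight identity areas.
type SectionId =
  | "thinking" | "values" | "communication" | "motivation"
  | "boundaries" | "work-style" | "experience" | "aspirations";

const ALL_SECTIONS: SectionId[] = [
  "thinking", "values", "communication", "motivation",
  "boundaries", "work-style", "experience", "aspirations",
];

interface InterviewState {
  covered: Set<SectionId>;
}

// Mark a section as having reached sufficient depth, then report which
// areas the next question should target.
function markCovered(state: InterviewState, section: SectionId): SectionId[] {
  state.covered.add(section);
  return ALL_SECTIONS.filter((s) => !state.covered.has(s));
}
```

In the real system this lives inside the Bedrock Agent's session memory rather than application code, which is exactly what makes the text-mode conversation stateful across turns.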

Nova 2 Lite synthesises the protocol after the conversation ends — reading the full transcript and producing a structured .md identity document in second person. The same model handles fit evaluation, cover letter generation, HTML profile generation, and fallback transcription.
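Conceptually, the synthesis step wraps the transcript in a prompt before the Bedrock call. A sketch of what that prompt assembly might look like — the faithfulness instruction is quoted from this project, but the surrounding structure and wording are my own assumptions:

```typescript
// Sketch of assembling the synthesis prompt sent to Nova 2 Lite.
// Only the "faithfulness over completeness" line is taken from the
// project; the rest of the structure is illustrative.
function buildSynthesisPrompt(transcript: string): string {
  return [
    "You are synthesising an identity protocol from an interview transcript.",
    "Write a structured Markdown document, addressed in second person.",
    "Faithfulness over completeness — never invent, omit sections with insufficient data.",
    "",
    "Transcript:",
    transcript,
  ].join("\n");
}
```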

S3 + CloudFront host published profiles. DynamoDB stores profile slug records — used when a user publishes their web profile, to check whether a slug already exists and determine whether to invalidate the CloudFront cache on update.
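The publish decision described above is a small branch: a brand-new slug just uploads, while republishing an existing slug also has to invalidate the CloudFront cache so the stale page isn't served. A sketch of that branching logic only — the real service wraps it around DynamoDB and the CloudFront API:

```typescript
// Decide what a profile publish should do, given whether the slug record
// already exists in DynamoDB. Illustrative sketch of the branching only;
// the actual S3 upload and CloudFront invalidation calls are omitted.
type PublishAction =
  | { kind: "create" }                        // new slug: upload, nothing cached yet
  | { kind: "update"; invalidate: boolean };  // existing slug: refresh the cached copy

function planPublish(slugExists: boolean): PublishAction {
  return slugExists
    ? { kind: "update", invalidate: true } // CloudFront would serve a stale page otherwise
    : { kind: "create" };
}
```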

The frontend is Angular 21 with standalone components, signals throughout, and OnPush change detection. The backend is Node.js + Express + TypeScript.


Challenges I Ran Into

The hardest challenge had nothing to do with AI.

It was CORS, nginx, and deployment infrastructure. Getting the Angular frontend on CloudFront, the Express + Socket.IO backend on EC2, and the WebSocket connections to survive nginx proxying — with the right headers, the right allowed origins, the right proxy pass configuration — took longer than any of the AI integration work.
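The part of the nginx configuration that typically causes this pain is the WebSocket upgrade. A configuration sketch — the backend port and the default Socket.IO path are assumptions, not the project's actual values:

```nginx
# Proxy Socket.IO (HTTP long-polling + WebSocket upgrade) to the Node backend.
# Port and path are assumptions, not the project's actual configuration.
location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;                  # HTTP/1.1 is required for the upgrade
    proxy_set_header Upgrade $http_upgrade;  # pass the client's upgrade request through
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 600s;                 # keep long-lived audio streams alive
}
```

Without `proxy_http_version 1.1` and the `Upgrade`/`Connection` headers, nginx silently downgrades the connection and Socket.IO falls back to polling — or drops the connection entirely.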

It is always the plumbing.

The second challenge was prompt engineering for the identity protocol synthesis. The temptation for a model doing synthesis is to pad — to fill sections with reasonable-sounding content when the interview didn't actually cover them. The instruction "faithfulness over completeness — never invent, omit sections with insufficient data" had to be enforced explicitly and tested repeatedly. A protocol that invents things about you is worse than no protocol at all.


Accomplishments That I'm Proud Of

Two things, honestly.

The first is the protocol format itself. When you run the interview and read what comes out, it is genuinely useful — not a summary, not a list of bullet points, but something that actually captures how a person thinks and communicates. That quality didn't come automatically. It came from careful prompt engineering, from running the interview on myself, from iterating on the faithfulness principle until the output earned trust. Getting that right matters more than any technical achievement.

The second is shipping a complete, full-stack product solo in under two weeks. Nova 2 Sonic integration, Bedrock Agents, Angular frontend with Web Audio API PCM capture, EC2 deployment with nginx, CloudFront, S3, DynamoDB — all of it working together, end to end. Not a demo. A product.


What I Learned

The thing that surprised me most was how easy and how powerful it all was at the same time — and that combination is what made it slightly unsettling in the best way.

Nova 2 Sonic is extraordinary. A single model that listens, understands, reasons, and responds with natural voice — no Transcribe → LLM → Polly chain, no mechanical lag, no seams. The interview feels like a real conversation because the model is doing all of it at once. I had expected to spend weeks fighting the voice pipeline. The power was there immediately.

The other thing I learned — from actually running the interview on myself — is that the protocol format has a strange effect. When you read a well-structured identity document about yourself, written in second person, faithful to what you actually said, it does something. You slow down. You re-read sections. You think: "I've never put that into words before." The technology is the vehicle. The real product is the clarity.


What's Next for Eigenself

Right now, the goal is simple: win this hackathon.

That's not deflection — it's honest prioritisation. I built Eigenself in two weeks, solo. The idea is proven. The core experience is live. The product works.

But I also want to be clear about where it stands: Eigenself is not yet ready for public use. The protocol needs more polishing, more edge case handling, more iterations before it earns the trust of people who aren't early testers. The foundation is solid — but there's real work still to do before thinking about what comes after.

The immediate next step is to make it genuinely useful. Not just working — useful. That's the goal worth setting.
