Human — DevPost Submission
GenAI Genesis Hackathon
About the Project
The Problem: We Forgot How to Meet
There's a quiet epidemic nobody talks about. Loneliness is at an all-time high, yet we are more surrounded by people than at any point in history. We sit on trains, in coffee shops, in parks — inches from strangers who might change our lives — and we stare at our phones. We've optimized ourselves out of serendipity.
The third places are gone. The corner bar, the record shop, the neighborhood bookstore — the informal gathering grounds where humans bumped into each other without agenda. Replaced by algorithmic feeds that show us the same 200 people we already know, or dating apps that reduce human beings to a swipe.
Chance encounters — the stranger who becomes your best friend, the person you lock eyes with across a room, the accidental conversation that reshapes your worldview — these are the fabric of the human condition. Random, beautiful, terrifying, and increasingly rare.
We built Human to give fate a little help.
What It Does
Human is a proximity-based AI matchmaking app for iOS. Not for dating. Not for networking. Just for meeting the person you were probably supposed to meet.
Here's the loop:
You talk to an AI. A voice conversation. Claude conducts a natural, adaptive personality interview — asking about your values, how you see the world, what gets you out of bed. It listens. It asks follow-up questions. It builds a multi-dimensional model of who you are.
We build your fingerprint. When the interview concludes, your personality is encoded into 8 separate 512-dimensional embedding vectors — Big Five traits, values, interests, energy pattern, communication style, relationship style, compatibility signals, and semantic keywords. This is you, in math.
You go about your life. The app runs quietly in the background, updating your GPS location. When you walk within 100 meters of someone whose fingerprint matches yours — weighted cosine similarity across all 8 dimensions — the match trigger fires.
Fate knocks. You get a push notification. There's someone nearby. You can see their name, a few photos, and the AI-generated reason it thinks you should meet. You accept. They accept (turn-based, both must consent). The radar screen opens.
You find each other. A live radar guides you to within 15 meters using GPS broadcast over Supabase Realtime channels, then Bluetooth proximity detection closes the gap. Confetti. Haptics. Go say hi.
That's it. That's the whole app. AI-assisted fate.
How We Built It
The Interview — Voice-Native AI Conversation
The onboarding experience is entirely voice-driven. We use expo-speech-recognition (backed by iOS's SFSpeechRecognizer) to capture user speech in real-time with interim transcripts. Each user utterance is sent to a Claude Haiku 4.5 edge function that streams the response back via Server-Sent Events. The response is then forwarded to OpenAI's tts-1-hd model (nova voice, 1.05x speed) which returns a base64-encoded MP3, written to the device cache via expo-file-system, and played through expo-av.
The result is a seamless voice loop: you speak, Claude listens and responds, you hear it back as natural speech. The animated orb in the UI breathes and glows in response to each state — idle, listening, speaking.
When Claude determines it has enough signal (typically 6–8 exchanges), it embeds a <profile> JSON block in its response containing structured personality data: Big Five traits, values, interests, energy pattern, communication style, relationship style, compatibility notes, and semantic keywords.
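Extracting that block is simple string work on the client. A minimal sketch, assuming the `<profile>` tag from our prompt; the interface fields and function name here are illustrative, not the shipped code:

```typescript
// Shape of the structured data Claude emits; fields shown are a subset.
interface PersonalityProfile {
  big_five?: Record<string, number>;
  values?: string[];
  interests?: string[];
  [key: string]: unknown;
}

// Pull the <profile> JSON block out of Claude's reply, if present.
// Returning null means the interview isn't finished yet.
function extractProfile(reply: string): PersonalityProfile | null {
  const match = reply.match(/<profile>([\s\S]*?)<\/profile>/);
  if (!match) return null; // Claude is still interviewing
  try {
    return JSON.parse(match[1]) as PersonalityProfile;
  } catch {
    return null; // malformed JSON: keep the conversation going
  }
}
```

Treating a parse failure as "not done yet" keeps the conversation resilient: a garbled block just means one more exchange.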
The Matching Engine — 8-Vector Personality Fingerprint
We didn't want a single "compatibility score." Human personality is multi-dimensional, and flattening it to one number loses information. Instead, each profile dimension is embedded separately using text-embedding-3-small at 512 dimensions, stored in PostgreSQL with the pgvector extension, and indexed with HNSW indexes for fast approximate nearest-neighbor search.
When a location update fires, a PostgreSQL trigger (trigger_match_check) runs automatically:
- Queries everyone within a 100m radius using PostGIS geography indexing
- Computes a weighted cosine similarity score across all 8 vectors:
$$\text{score} = 0.35 \cdot \text{sim}(v_{\text{values}}) + 0.25 \cdot \text{sim}(v_{\text{big5}}) + 0.15 \cdot \text{sim}(v_{\text{interests}}) + 0.05 \cdot \sum_{k \in \text{rest}} \text{sim}(v_k)$$
  where $\text{sim}(v_k)$ is the cosine similarity between the two users' vectors for dimension $k$
- Filters by gender preference (array overlap), interaction history (no re-matching), and active match cooldown (one match at a time)
- Inserts the highest-scoring compatible match and fires a webhook to notify the user via Expo Push
The database is the matching engine.
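For illustration, here is the weighted scoring as plain TypeScript. The real computation runs inside the SQL trigger; the dimension names and the assignment of the 0.05 weight to each of the five remaining dimensions are our assumptions for this sketch:

```typescript
// Eight personality dimensions, each a 512-dim embedding vector.
type Fingerprint = Record<string, number[]>;

// Standard cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Weights mirror the formula above; keys are illustrative names.
const WEIGHTS: Record<string, number> = {
  values: 0.35, big5: 0.25, interests: 0.15,
  energy: 0.05, communication: 0.05, relationship: 0.05,
  compatibility: 0.05, keywords: 0.05,
};

// Weighted sum of per-dimension cosine similarities; weights total 1.0,
// so identical fingerprints score 1.
function matchScore(a: Fingerprint, b: Fingerprint): number {
  return Object.entries(WEIGHTS).reduce(
    (sum, [dim, w]) => sum + w * cosine(a[dim], b[dim]), 0);
}
```

Keeping each dimension's similarity separate until the final weighted sum is what lets the weights express "shared values matter more than shared hobbies."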
The Radar — Finding Each Other in the Real World
Once both users confirm the match, the radar screen opens. Both phones join a Supabase Broadcast channel keyed to the match ID and broadcast their GPS coordinates every second. Haversine distance and bearing are computed client-side, and the displayed distance is smoothed with an exponential moving average ($\alpha = 0.15$) to eliminate the jitter that makes raw GPS unusable at close range.
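The client-side math can be sketched in a few lines of TypeScript (function names and structure are ours, not the shipped code):

```typescript
const R = 6371000; // Earth radius in metres
const rad = (d: number) => (d * Math.PI) / 180;

// Great-circle distance between two lat/lng points, in metres.
function haversine(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLat = rad(lat2 - lat1);
  const dLon = rad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Initial bearing from point 1 toward point 2, degrees clockwise from north.
function bearing(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLon = rad(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(rad(lat2));
  const x = Math.cos(rad(lat1)) * Math.sin(rad(lat2)) -
    Math.sin(rad(lat1)) * Math.cos(rad(lat2)) * Math.cos(dLon);
  return (Math.atan2(y, x) * 180 / Math.PI + 360) % 360;
}

// Exponential moving average with alpha = 0.15, as described above.
function smooth(prev: number | null, next: number, alpha = 0.15): number {
  return prev === null ? next : prev + alpha * (next - prev);
}
```

A small $\alpha$ trades responsiveness for stability: the displayed distance drifts toward each new reading instead of jumping with every GPS fix.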
The directional UI deliberately avoids a sharp arrow — instead a soft two-cone sector (65° outer glow, 22° inner beam) communicates roughly this way rather than exactly here, which is honest about what GPS can actually tell you at sub-30m.
At close range, we hand off to expo-nearby-connections (Bluetooth/WiFi peer discovery) to trigger the final celebration. When the other phone is found via BLE/WiFi, it fires. No GPS required.
Challenges
GPS accuracy at close range is a lie. Consumer GPS has 10–20m horizontal error in open sky, and 30–50m in cities. When two people are 12 meters apart, the computed distance is pure noise. We addressed this with three layers: accuracy-filtered GPS reads (>20m uncertainty is discarded), EMA smoothing on the displayed distance, and — most importantly — leaning on Bluetooth proximity detection for the final meetup trigger. The UI was redesigned to stop showing precise distances below 30m and instead show zone-based language ("Very close!", "Look around!").
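The zone-based display reduces to a mapping from smoothed distance to copy. A sketch, where the thresholds follow the 15 m handoff and 30 m cutoff described above but the exact strings and rounding are illustrative:

```typescript
// Map a smoothed distance to display copy. Below 30 m, precise numbers
// are noise, so we switch to zone language; far away, round to 10 m.
function zoneLabel(metres: number): string {
  if (metres < 15) return "Look around!";
  if (metres < 30) return "Very close!";
  if (metres < 100) return `${Math.round(metres)} m`;
  return `${Math.round(metres / 10) * 10} m`;
}
```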
Streaming TTS in React Native. React Native's fetch doesn't support ReadableStream, so the standard SSE/streaming approach requires a workaround. We read the full response body as text, manually parse the data: SSE lines, accumulate the text into a sentence buffer, and flush completed sentences to the TTS queue as they arrive. This gives a streaming feel (TTS starts playing before Claude has finished responding) without requiring native streaming support.
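A minimal sketch of that workaround, with payload handling simplified (real Claude SSE events carry JSON deltas) and names of our own choosing:

```typescript
// Parse "data:" lines out of a fully-read SSE response body.
function parseSSE(body: string): string[] {
  const chunks: string[] = [];
  for (const line of body.split("\n")) {
    if (!line.startsWith("data:")) continue;
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") break;
    chunks.push(payload);
  }
  return chunks;
}

// Accumulates streamed text and emits whole sentences as they complete,
// so TTS can start on sentence one while the model is still generating.
class SentenceBuffer {
  private buf = "";
  constructor(private onSentence: (s: string) => void) {}

  push(text: string) {
    this.buf += text;
    let m: RegExpMatchArray | null;
    // Flush each run ending in sentence punctuation followed by whitespace.
    while ((m = this.buf.match(/^([\s\S]*?[.!?])\s+/))) {
      this.onSentence(m[1].trim());
      this.buf = this.buf.slice(m[0].length);
    }
  }

  // Emit whatever trails after the last sentence boundary.
  flush() {
    if (this.buf.trim()) this.onSentence(this.buf.trim());
    this.buf = "";
  }
}
```

The sentence boundary is a deliberate flush unit: it is long enough for the TTS voice to sound natural, short enough that audio starts well before the full response lands.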
The matching trigger firing at the right time. The database trigger runs on every location update — including updates from users who haven't completed their interview yet. We added a guard: if v_values IS NULL (no profile), the trigger exits immediately. This avoids expensive vector searches for incomplete users.
Making the conversation feel human. Early versions of the interview felt like a survey. We iterated on Claude's system prompt extensively to make it more curious, more reactive, and willing to push back or ask for elaboration. The final prompt instructs Claude to never ask multiple questions at once, to reflect what it's hearing back to the user, and to explicitly wait until it has rich signal on all 8 dimensions before concluding.
What We Learned
That the hard part of this wasn't the AI or the matching algorithm — it was the product question: what does a human being need to feel comfortable walking up to a stranger? The radar screen went through five design iterations. The copy on the match card was rewritten a dozen times. The answer we landed on: just enough signal that it doesn't feel random, framed as a suggestion rather than a command. The AI doesn't tell you to go meet someone. It just says: hey, there's someone nearby you might like. The rest is up to you.
Built With
| Layer | Technology |
|---|---|
| Mobile Framework | React Native 0.83, Expo SDK 55, Expo Router |
| Language | TypeScript |
| Backend Runtime | Deno 2 (Supabase Edge Functions) |
| Database | PostgreSQL 17 (Supabase) |
| Geo Queries | PostGIS (GEOGRAPHY points, GiST index) |
| Vector Search | pgvector — 8× VECTOR(512), HNSW indexes |
| Auth | Supabase Auth |
| Storage | Supabase Storage (profile photos) |
| AI — Interview | Anthropic Claude Haiku 4.5 (streaming SSE) |
| AI — Embeddings | OpenAI text-embedding-3-small (512-dim) |
| AI — Voice | OpenAI tts-1-hd, nova voice |
| Speech-to-Text | expo-speech-recognition (iOS SFSpeechRecognizer) |
| Background Location | expo-location + expo-task-manager |
| Bluetooth Proximity | expo-nearby-connections (BLE/WiFi peer discovery) |
| Push Notifications | Expo Push Service + expo-notifications |
| CI/CD | GitHub Actions → Supabase CLI |