Inspiration

We've all been there. The alarm goes off at 7am, you swipe dismiss, and suddenly it's noon and you've done nothing. The problem isn't that people don't want to study; it's that there's zero accountability when you're alone. We looked at what Strava did for running. It made a solitary activity social and grew to 100 million users because of one insight: people perform better when others are watching. We wanted to build that for studying. The market is every student who has ever said "I'll start tomorrow." At Purdue alone, that's 50,000 people. The average college student loses 3 hours a day to distraction; that's 150,000 hours lost every single day on one campus. We wanted to fix that.

What it does

LockIn is a social accountability app that makes studying competitive, visible, and verified.

  • AI alarm verification: your alarm won't stop until you photograph the right location. A custom on-device ML model verifies the photo in real time. No cheating with an online photo.
  • Buddy mode: a friend watches your session live. The moment you leave the app to doomscroll, they get a push notification within 3 seconds.
  • Together sessions: study with friends on a shared live timer. If someone picks up their phone, everyone knows.
  • Social feed: sessions, streaks, and milestones auto-post to your circle.
  • Live leaderboard: weekly rankings updated after every session.
  • Calendar heatmap: Google Calendar syncs automatically to surface mutual free time across your friend group. One tap to invite to a meeting.
  • Streak tracking: daily accountability built into your profile and feed.
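The exact streak rules aren't spelled out above, so here is a minimal sketch of how daily streak tracking could work, assuming a streak increments on consecutive study days and resets after a missed day. The names and rules here are illustrative, not LockIn's actual implementation:

```typescript
// Hypothetical daily streak update: given the previous streak state and the
// day of a newly completed session, return the new streak state.
// Days are "YYYY-MM-DD" strings in the user's local timezone.

interface StreakState {
  lastStudyDay: string | null; // last day with a completed session
  count: number;               // current consecutive-day streak
}

function dayDiff(a: string, b: string): number {
  // Whole days between two date strings (date-only strings parse as UTC).
  return Math.round((Date.parse(b) - Date.parse(a)) / 86_400_000);
}

function updateStreak(state: StreakState, sessionDay: string): StreakState {
  if (state.lastStudyDay === null) {
    return { lastStudyDay: sessionDay, count: 1 }; // first ever session
  }
  const gap = dayDiff(state.lastStudyDay, sessionDay);
  if (gap === 0) return state; // another session the same day: no change
  if (gap === 1) return { lastStudyDay: sessionDay, count: state.count + 1 }; // consecutive day
  return { lastStudyDay: sessionDay, count: 1 }; // missed at least one day: reset
}
```

A session on the day after a 3-day streak would bump it to 4; a session after a gap resets it to 1.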

How we built it

  • Frontend: React Native with Expo and TypeScript, using expo-router for navigation. Designed for iOS.
  • Backend: Node.js + Express API hosted on Railway. Handles buddy requests, session management, push notifications via the Expo Push API, leaderboard scoring, and Google Calendar free/busy queries.
  • Database: Supabase for all data storage, with Supabase Realtime for live updates across all connected devices with zero polling.
  • Machine learning: we fine-tuned MobileNetV3Small, a lightweight convolutional neural network pretrained on ImageNet, to classify scene categories like desk and fridge. The model was trained on a hybrid dataset combining Open Images V7 with real photos we took ourselves in our dorm rooms, on an NVIDIA H100 80GB GPU on Purdue RCAC's Gautschi cluster.
  • Identity: World ID via MiniKit for proof of personhood, integrated directly into the leaderboard so only verified humans rank.
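The class-weighted loss we used against imbalance follows the standard inverse-frequency recipe; here is the weighting formula as a sketch (the actual training applied it through the ML framework's class-weight support, not this exact code):

```typescript
// Inverse-frequency class weights for a weighted loss:
//   weight_c = N / (K * n_c)
// where N is the total sample count, K the number of classes, and n_c the
// number of samples in class c. Rare classes get weights > 1, so the loss
// penalizes misclassifying them more heavily.

function classWeights(countsByClass: Record<string, number>): Record<string, number> {
  const classes = Object.keys(countsByClass);
  const total = classes.reduce((sum, c) => sum + countsByClass[c], 0);
  const k = classes.length;
  const weights: Record<string, number> = {};
  for (const c of classes) {
    weights[c] = total / (k * countsByClass[c]);
  }
  return weights;
}
```

For example, with 900 "desk" photos and 100 "fridge" photos, "fridge" gets weight 5.0 and "desk" about 0.56, which is what stops the model from collapsing onto the majority class.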

Challenges we ran into

  • ML accuracy was our biggest technical hurdle. Early models collapsed to ~22% validation accuracy, predicting the same class for every input. We diagnosed it as a combination of class imbalance and poor data quality from online datasets. The fix was a hybrid approach: augmenting our real photos with random flips, rotations, brightness shifts, and zoom, combined with a class-weighted loss.
  • iOS AppState limitations: Apple doesn't allow apps to see which other apps are open. We worked around this by detecting when the user leaves LockIn using React Native's AppState API, firing buddy notifications within 3 seconds of them backgrounding the app.
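The core of that workaround is a small decision rule. This is a sketch of the trigger logic, assuming it would be wired to React Native's `AppState.addEventListener("change", ...)` and an Expo push call; `shouldNotifyBuddy` and `sessionActive` are our illustrative names, not the app's actual code:

```typescript
// iOS won't tell us which app the user switched to, but AppState does tell
// us when LockIn itself leaves the foreground. This pure function captures
// the rule: notify the buddy only when a live session is running and the
// app fully backgrounds ("inactive" covers transient states like the app
// switcher or an incoming call banner, so we don't fire on those).

type AppStateStatus = "active" | "background" | "inactive";

function shouldNotifyBuddy(
  prev: AppStateStatus,
  next: AppStateStatus,
  sessionActive: boolean
): boolean {
  return sessionActive && prev === "active" && next === "background";
}
```

Keeping the rule pure makes it trivial to unit-test, while the listener itself just forwards transitions and, on `true`, posts the push within the 3-second window.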

Accomplishments that we're proud of

  • An on-device ML classifier trained on our own photos, deployed inside a real app
  • End-to-end realtime social layer: feed, leaderboard, and buddy notifications all update live across devices
  • Google Calendar mutual availability detection that surfaces a heatmap and sends a study invite in one tap
  • The whole thing actually works. We demo'd it phone-to-phone, multiple times.
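The mutual availability detection above boils down to an interval computation. Here is a minimal sketch, assuming the backend merges everyone's busy intervals from the Calendar API's free/busy response and takes the complement inside the query window; the function and types are ours, not the actual backend code:

```typescript
// Mutual free time: pool all friends' busy intervals, sweep them in start
// order, and emit the gaps inside the query window. Intervals are
// [start, end) in epoch minutes for simplicity.

type Interval = [number, number];

function mutualFree(busyLists: Interval[][], window: Interval): Interval[] {
  // Pool every person's busy intervals and sort by start time.
  const busy = busyLists.flat().sort((a, b) => a[0] - b[0]);
  const free: Interval[] = [];
  let cursor = window[0]; // earliest time not yet covered by a busy block
  for (const [start, end] of busy) {
    if (start > cursor) free.push([cursor, Math.min(start, window[1])]);
    cursor = Math.max(cursor, end);
    if (cursor >= window[1]) break;
  }
  if (cursor < window[1]) free.push([cursor, window[1]]);
  return free.filter(([s, e]) => e > s);
}
```

With a 9:00-17:00 window (minutes 540-1020), one friend busy 10:00-11:00 and another 10:30-12:00, the mutual free slots come out as 9:00-10:00 and 12:00-17:00; bucketing those slots by day and hour is what feeds the heatmap.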

What's next for LockIn

  • Expand alarm missions to more object categories with a larger, more diverse training dataset.
  • Pattern analysis and anomaly detection on app usage: flag when someone's distraction pattern deviates from their baseline.
  • Android support
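A first cut at the planned anomaly detection could be a plain z-score against each user's own history. This is a sketch of the idea, not the eventual model; the function name and threshold are ours:

```typescript
// Flag today's distraction count as anomalous when it sits more than
// `threshold` standard deviations away from the user's own baseline
// (mean and std over their recent daily counts).

function isAnomalous(history: number[], today: number, threshold = 2): boolean {
  const n = history.length;
  if (n < 2) return false; // not enough baseline data yet
  const mean = history.reduce((a, b) => a + b, 0) / n;
  const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const std = Math.sqrt(variance);
  if (std === 0) return today !== mean; // flat baseline: any deviation flags
  return Math.abs(today - mean) / std > threshold;
}
```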
