Inspiration
My inspiration comes from a few different sources, which I've melded into my app.
Some background & context: I'm a Team GB athlete and I've always been into sports. I love going to the gym and feeling like the best and strongest version of myself. I believe that what gets measured gets managed, and I seek to manage progress: continuous progress, and uncovering new insights about my performance. It's motivating and a signal that I'm heading in the right direction.
I loved playing games like Pokémon when I was younger. I'd catch a Pokémon and battle other trainers with it. My Pokémon would gain XP after each battle and level up. At a certain point it would evolve, and I'd have a cool-looking Pokémon. Levelling up and gaining XP is a gaming mechanic, and I'm curious to see whether it can be brought into the real world. What would happen if we awarded XP for things we did in real life? In particular, healthy habits?
I have a thesis for the UX/UI of the future, and fundamentally I believe that the future of UX/UI is no UI. The ultimate interface is one that doesn't exist, yet contains everything a user could ever need to experience. I think the way humans interface with machines and devices will completely change: no keyboards, no mouse, no touch; just voice and visual commands (much like how humans communicate with each other now). I believe this future isn't far away. We already have LLMs as a foundational layer, and tools like Cursor and Windsurf have demonstrated how agents can be empowered by LLMs to take autonomous actions and perform CRUD operations.

Right now, the focus is on backend tooling, i.e. making sure the agent has access to the methods required for its functionality. The next step is the frontend, where an agent has access to both frontend and backend tooling, so it can generate and render the UI the user needs on demand and perform the necessary backend operations to fill that UI with data. I want to test what this future could look like, especially in the domain of mobile apps. Some questions I have (ordered by degree of difficulty):
1) Cursor and Windsurf are great examples of agentic frameworks in the domain of coding, but would this work for a mobile fitness app?
2) If it does work on a mobile app, could it accurately perform CRUD operations?
3) Could it perform UI updates alongside CRUD operations?
4) Could the agent be activated and managed by voice, so the user never touches a keyboard?
An itch for the idea: I subscribe to the idea of Misogi, a Japanese term for setting one hard, year-defining challenge for yourself every year. Last year I set the challenge of running 100 km every month; I achieved it and, in the process, lost 17 kg. This year I've set the challenge of moving 1,000,000 kg of weight in the gym in six months. I started last month and I've been tracking all of my workouts, including the lifts, reps and weights I've been doing, but it's been a painful process:
- Tracking exercises, weights and reps in my notes app
- Planning workouts in Excel
- Using AI to calculate analytics and progress
The ideal user experience I seek:
- An app I can talk to, so I don't have to do any entries/admin
- An app that can track all my workouts and provide me with analytics on my workouts
- An app that motivates me to continue progressing
- An app that can adapt based on my performance
What it does
Lifta is a fully autonomous AI voice agent that combines advanced lifting insights with real-time awareness of your training sessions to help you be the strongest version of yourself.
Imagine if Cursor, ElevenLabs, Arnold Schwarzenegger and a data scientist had a baby together inside a mobile app. That is what Lifta is.
With Lifta you can:
- Create personalised workout plans designed for your body and goals
- Track workouts and progress
- Get deep analytics on your lifts
All done without you having to touch your screen.
How we built it
Part A - the minimum viable flow

In total it took around 25 hours to build the minimum viable flow. This involved building the UI using Bolt and building the API routes for ElevenLabs and OpenAI. The hardest part was executing the flow correctly:
Start: user taps the record button
1) The user's voice recording is transcribed into text
2) The text is sent to OpenAI
3) OpenAI decides the right tool to call and sends its response back
4) The response is sent to ElevenLabs + the tool is executed
5) ElevenLabs sends the response back as AI voice
End: AI voice response + tool execution completed (+ updates to storage) + UI state updated
In other words: STT > OpenAI > TTS + agent execution
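To make the middle of that flow concrete, here's a minimal sketch of the tool-dispatch step (steps 3 and 4 above) in TypeScript. The tool names (`log_set`, `create_plan`) and their handlers are hypothetical examples for illustration, not the app's actual schema; in the real flow the returned string would be forwarded to ElevenLabs for TTS.

```typescript
// Shape of a tool call as it comes back from the model:
// a tool name plus JSON-encoded arguments.
type ToolCall = { name: string; arguments: string };

// Hypothetical tool handlers that perform the CRUD side effects
// and return a short confirmation for the voice response.
const handlers: Record<string, (args: any) => string> = {
  log_set: ({ exercise, weightKg, reps }) =>
    `Logged ${reps} reps of ${exercise} at ${weightKg} kg`,
  create_plan: ({ goal }) => `Created a plan for goal: ${goal}`,
};

// Execute whichever tool the model chose; the returned string is
// what gets sent on to the TTS step.
function dispatchToolCall(call: ToolCall): string {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`Unknown tool: ${call.name}`);
  return handler(JSON.parse(call.arguments));
}
```

The key design point is that the side effect (storage update) and the spoken confirmation come from the same dispatch, so the voice response and the UI state can't drift apart.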
Tech stack: React Native (Expo), Supabase, ElevenLabs, OpenAI. Landing page deployed on Netlify.
Part B - the minimum viable app

Once the minimum viable flow was completed, I built out some of the other screens:
1) History (to track workout history)
2) Analytics (to understand performance)
3) Profile (for user management)
4) Made it accessible beyond my local server (so users could play around with it)
Then linked it to Supabase for auth and user data storage.
Challenges we ran into
For context, I've built production web apps with Next.js, Vite and Supabase, but this was my first time building:
- A fully Agentic model with tool calling
- A voice interface using ElevenLabs
- A mobile app
As I was doing this solo and only had a weekend to work on it, time management was critical. I broke the project down into two parts:
1) Part A - the minimum viable flow
2) Part B - the minimum viable app
I set the expectation that even if I only achieved Part A, I would be over the moon, because I was playing with new technologies and trying to create something at the frontier of mobile apps. Part B was a nice-to-have.
My goal was to focus all of my energy on Part A (the minimum viable flow), as it was the most significant and novel part of my app. The flow of data was quite complicated, and I didn't realise how APIs work in a mobile app context, so I spent a lot of time calling the wrong endpoints. After a lot of console logging, and a lot of time testing each individual part of the flow before integrating it all together, the flow was working.
The other issue I ran into at this point was inconsistent tool calling and inconsistent updates to the UI. I had to spend a lot of time researching to find a novel solution to this.
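One common way to tame inconsistent tool calling (a sketch of the general technique, not necessarily the exact solution I landed on) is to constrain the model so it must return a tool call matching a strict JSON schema, then drive the UI update from the parsed arguments rather than from free-form text. The tool name and fields below are illustrative:

```typescript
// Build an OpenAI Chat Completions request body that forces a tool
// call. `strict: true` makes the model's arguments conform exactly
// to the schema, and `tool_choice: "required"` forbids plain-text
// replies. Tool name and fields are hypothetical examples.
function buildChatRequest(userText: string) {
  return {
    model: "gpt-4o", // assumed model
    messages: [{ role: "user", content: userText }],
    tools: [
      {
        type: "function",
        function: {
          name: "log_set",
          description: "Record one set of an exercise",
          strict: true, // reject outputs that don't match the schema
          parameters: {
            type: "object",
            properties: {
              exercise: { type: "string" },
              weightKg: { type: "number" },
              reps: { type: "integer" },
            },
            required: ["exercise", "weightKg", "reps"],
            additionalProperties: false,
          },
        },
      },
    ],
    tool_choice: "required", // never answer without calling a tool
  };
}
```

With the tool call guaranteed and its arguments guaranteed to parse, the UI state update can key off the tool result alone, which removes one major source of inconsistency.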
Once part A was done, I celebrated by grabbing a pint and then started on part B.
Part B was pretty standard stuff.
Accomplishments that we're proud of
- Building a mobile app
- Building a mobile app with autonomous agent tool calling
- Building a mobile app with autonomous voice agent tool calling
- Building a mobile app with autonomous voice agent tool calling for weightlifting
- Building a full-stack mobile app with autonomous voice agent tool calling for weightlifting
- Being able to define it in a few words
- Building continuously and not sleeping for 32 hours (weird flex, I know)
- Doing it all solo
- Doing it all on a weekend
What we learned
- If you don't sleep for 32 hours, you feel jet-lagged the day after
- Building a mobile app is fun
- Confirmed my hypothesis: I did a gym session with my app, interfacing with the AI using just my voice, and it's a whole new experience. It's a new paradigm, and it makes you think: huh, why didn't we do this before?
What's next for Lifta
- Getting it into the App Store
- Reaching out to the people in my gym who are eager to try it
- Reaching out to some creators to market my app
- Showcasing it on my YouTube channel (I'm doing a vlog of 'road to 1,000,000 kg')