Inspiration
Honestly, this lens project has been a strong vision of mine since 2018–19. As a fellow fitness enthusiast, I deeply understand how the start of the fitness journey leaves you feeling completely clueless: you're surrounded by gym equipment with no idea how it works or what to do with it to achieve your goals. With personal trainers being extremely costly, apps/websites/videos being too distracting, and asking strangers for help feeling overwhelming, most people are left with an immense sense of discouragement and a high risk of injury.
From my 8+ years of experience, I know that the ideal fitness app has to adapt to the user's environment rather than take them away from it. One example of this is the heavy use of mobile phone cameras at the gym to record exercises instead of note-taking: the process of recording yourself is seamless and requires little to no manual input. This project follows a similar format, using a lens to adapt easily to the user's camera, AR to project a more affordable alternative to a human trainer, and ML to understand the user's environment and suggest appropriate exercises with little to no manual input. With Snap being the most used camera in the world, and with its Scan and ML capabilities, it is the perfect platform, and the perfect time, to build this visual gym assistant.
What it does
This lens uses an ML classification model to detect fitness equipment (e.g. barbells, dumbbells, kettlebells, medicine balls, exercise mats). Once the equipment has been identified, a range of exercises is presented to the user, each focusing on a particular target area of their choice (chest, back, legs, arms, shoulders). After an exercise is selected, a 3D character performing it is augmented into the user's camera view. From this 3D character, they can learn how to perform the exercise, its optimal range of motion, and how it helps them achieve their goals.
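To illustrate the selection step described above, here is a minimal sketch in plain JavaScript. The lookup table and the `suggestExercises` function are hypothetical stand-ins for the lens's actual exercise library and logic, which are not shown here:

```javascript
// Hypothetical lookup table: equipment class -> target area -> exercises.
// The real lens's library is larger; these entries are illustrative only.
const EXERCISES = {
  dumbbell: {
    chest: ["Dumbbell bench press", "Dumbbell fly"],
    back: ["Single-arm dumbbell row"],
    legs: ["Goblet squat"],
  },
  kettlebell: {
    legs: ["Kettlebell swing"],
    shoulders: ["Kettlebell overhead press"],
  },
};

// Given the classifier's top label and the user's chosen target area,
// return the exercises to present (empty array if nothing matches).
function suggestExercises(equipment, targetArea) {
  const byArea = EXERCISES[equipment];
  if (!byArea) return [];
  return byArea[targetArea] || [];
}
```

In the lens itself, the equipment label would come from the classification model's output rather than being passed in directly.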
How we built it
The custom models were built using Teachable Machine and TensorFlow Lite, and the 3D models were made and animated in Blender. The UI elements (Discrete Picker etc.) were provided by Snap's UI templates.
Challenges we ran into
Most of the challenges we faced were due to the technical limitations around building a lens. The biggest was finding the balance between recognising a wide enough range of fitness equipment and providing a large enough range of exercises without exceeding the total lens size limit of 10 MB.
Compression via Draco became our best friend at this point, but it wasn't enough to handle the original target of 50 3D models and 10 different classes of fitness equipment. We used this constraint as an opportunity to focus on free weights, since they're found both at home and in commercial gyms, and on common exercises with a proven track record of results.
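The budgeting trade-off above can be sketched with some simple arithmetic. The per-asset sizes here are illustrative assumptions, not our actual measurements; only the 10 MB total limit comes from the writeup:

```javascript
// Total lens size limit (in KB) that all assets must fit within.
const LENS_SIZE_LIMIT_KB = 10 * 1024; // 10 MB

// Check whether a set of asset sizes, plus space reserved for the
// ML model and scripts, stays under the limit.
function fitsBudget(assetSizesKb, reservedKb) {
  const total = assetSizesKb.reduce((sum, kb) => sum + kb, 0) + reservedKb;
  return total <= LENS_SIZE_LIMIT_KB;
}
```

For example, 50 animated models at a hypothetical ~250 KB each (after Draco compression) plus ~2 MB reserved for the classifier overshoots the budget, while a trimmed free-weights library of 15 models fits comfortably.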
Accomplishments that we're proud of
One accomplishment would definitely have to be finally seeing this vision come to life. Not only can this lens inspire people who are starting out or are already on their fitness journey, it can also positively change their perspective of the gym. It's something the version of me starting out almost a decade ago would have been sincerely grateful for.
Another accomplishment I'm really proud of is bringing two different technologies (ML and AR) together to create something that has never existed in the area of fitness, be it as an app or a lens. One of my biggest goals was to create a next-generation product, and I truly believe that by building this lens with Snap, we're on the right track.
What we learned
The most important thing we learned was how to use Lens Studio for several ML tasks, from image classification to object detection, voice classification, and body tracking. The templates and docs were easy to understand and get up and running with.
The SnapML workshop also provided the opportunity to ask questions about using SnapML. From it, I got an in-depth explanation of how MLComponents work and how they can be customised to support different use cases.
What's next for Beam.ai
Our main priority will be to keep exploring ways to use Snap to help fitness enthusiasts. One way we are doing this is by looking to expand and creatively enrich our exercise library, e.g. adding highlighted target areas (chest, back etc.) to our 3D models, or building a voice-controlled lens that uses VoiceML to pick out keywords (chest, back, legs, arms etc.) and suggest appropriate exercises based on the detected voice command ("Show me a chest workout", "Give me a pull workout for today", "Show me some stretches for my lower back" etc.). We believe this lens is suitable not only for mobile, but also for the new generation of Spectacles, where users can easily call up a virtual gym assistant to perform exercises right in front of them.
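The planned keyword step can be sketched as a simple transcript scan. This is a hypothetical matcher, assuming VoiceML hands us a transcript string; the function name and keyword list are illustrative, not part of any Snap API:

```javascript
// Target areas the voice-controlled lens would listen for.
const TARGET_KEYWORDS = ["chest", "back", "legs", "arms", "shoulders"];

// Scan a (hypothetical) VoiceML transcript for the first matching
// target-area keyword; return null if none is found.
function detectTargetArea(transcript) {
  const words = transcript.toLowerCase();
  return TARGET_KEYWORDS.find((kw) => words.includes(kw)) || null;
}
```

The detected area could then feed the same exercise-selection logic the equipment classifier already drives, so voice and camera input share one library.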
Built With
- blender
- javascript
- lens
- lensstudio
- machine-learning
- snap
- snapchat
- snapml
- tensorflow