Inspiration

Habits are the foundation of who we are and of who we wish to become. It's therefore important to find ways of improving our daily habits so we can become the best possible version of ourselves.

There is a huge number of resources describing methods for building good habits. Paradoxically, this abundance can make it difficult to actually apply those methods, or even to become more aware of our habits, in real life.

Fortunately, some smartphone apps already apply these techniques to help people build better habits: drinking more water, practicing guided meditation, tracking fitness, and so on. Yet the current experience isn't seamless. It requires people to unlock their phone, open an app, input information, and work through other steps. This process is too complex and not user friendly; people want to get started right away.

We believe it's possible to apply research-backed habit-forming methods to create a seamless experience for the user. Delivered through AR glasses, this experience will be accessible to the general public, helping people build the habits they've always dreamt of but never knew how to build.

What it does

Our app applies a technique called "pointing and calling", reported to reduce human error in decisions by up to 85% by increasing an individual's awareness of their actions.

Example: a user is aiming to develop healthier eating habits. When the user is about to make a food-purchase decision, such as buying an apple, they point at the apple and the app registers the gesture as an action. The AI model then automatically detects the object and correlates it with a positive or negative habit. In this case, the apple is a healthy choice, so the AR glasses display a positive-habit animation.
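The point-detect-feedback loop described above can be sketched roughly as follows. This is a minimal illustration, not our actual Lens Studio code; the names (`HABIT_MAP`, `on_point_gesture`, the animation identifiers) are hypothetical.

```python
# Illustrative sketch of the pointing-and-calling feedback loop.
# HABIT_MAP and all function/animation names are hypothetical placeholders.

HABIT_MAP = {
    "apple": "positive",
    "broccoli": "positive",
    "soda": "negative",
    "candy": "negative",
}

def classify_habit(label: str) -> str:
    """Map a detected object label to a habit category."""
    return HABIT_MAP.get(label, "neutral")

def on_point_gesture(detected_label: str) -> str:
    """Called when the user points at an object; returns which animation to play."""
    habit = classify_habit(detected_label)
    if habit == "positive":
        return "play_positive_animation"
    if habit == "negative":
        return "play_negative_animation"
    return "no_feedback"
```

In the apple example, the pointing gesture triggers the detection model, the label "apple" maps to a positive habit, and the glasses play the positive animation.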

How we built it

Tech Used:

  • Snap Spectacles: Uses the Spectacles' camera to capture objects and send frames to the multi-object detection model.
  • Multi-object Detection Model: Recognizes objects in images, triggering the cues that give real-time feedback to our users.
  • Blender: Created 3D animated objects to provide instant feedback to our users regarding their actions.
  • Lens Studio: Integrated a pointing motion, Multi-object Detection Model, and 3D animated characters.
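To connect the pointing gesture with the detection model's output, we need to decide which detected object the user is actually pointing at. A plausible sketch, assuming the model returns labeled bounding boxes with confidence scores (the detection format, field names, and threshold here are assumptions, not our production pipeline):

```python
# Hypothetical glue between model detections and the pointing gesture.
# Each detection is assumed to look like:
#   {"label": str, "score": float, "box": (x1, y1, x2, y2)}

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for trusting a detection

def pick_pointed_object(detections, pointer_xy):
    """Return the label of the highest-scoring detection whose bounding box
    contains the pointing location, or None if nothing qualifies."""
    px, py = pointer_xy
    candidates = [
        d for d in detections
        if d["score"] >= CONFIDENCE_THRESHOLD
        and d["box"][0] <= px <= d["box"][2]
        and d["box"][1] <= py <= d["box"][3]
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda d: d["score"])["label"]
```

The chosen label can then be fed to the habit classifier, and the matching Blender animation is played in the lens.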
