Modern luxury cars often provide gesture control systems for their infotainment hardware and sound, transforming driving from a mere necessity into an enjoyable experience. Our aim was to build a platform that expands these features into a universally compatible standard. DepthSense is designed to work seamlessly with any car, and we believe it will change the way we drive. Because it is so intuitive to use, it has significant advantages over voice control.
What it does
DepthSense is a mobile application that uses AI and computer vision to control basic smartphone functions, such as playing music or making calls, with hand gestures while driving. It uses neural network models (TensorFlow.js running inside the mobile app) to estimate the hand's pose in 3D space and detect the corresponding gestures. It provides the following features:
- Answering incoming calls
- Making calls to predefined priority phone numbers
- Playing, pausing, and skipping music on Spotify
- Listing phone and Slack notifications
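As a hypothetical sketch of how a gesture can be derived from estimated hand pose (the exact thresholds and gesture set in DepthSense are not specified here), the snippet below detects a "pinch" from 21 3D keypoints in the layout used by common hand pose models, where index 4 is the thumb tip and index 8 is the index fingertip:

```javascript
// Sketch: detect a "pinch" gesture from 3D hand landmarks
// (21 keypoints as [x, y, z] arrays, as common hand pose models return them).
// Indices 4 and 8 are the thumb tip and index fingertip in that layout.
function dist3d(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// The 0.05 threshold is an illustrative value, not the app's tuned constant.
function isPinch(landmarks, threshold = 0.05) {
  return dist3d(landmarks[4], landmarks[8]) < threshold;
}

// Example: thumb and index tips touching vs. an open hand.
const open = Array.from({ length: 21 }, (_, i) => [i * 0.1, 0, 0]);
const pinched = open.map((p, i) => (i === 8 ? [...open[4]] : p));
console.log(isPinch(pinched)); // true
console.log(isPinch(open));    // false
```

A real pipeline would normalize landmark coordinates (e.g. by palm size) before thresholding, so the gesture is robust to how far the hand is from the camera.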
How we built it
- React Native with Expo for the mobile app
- TensorFlow.js with React Native for the hand pose estimation models
- Spotify and Slack APIs with OAuth2 authentication
- Figma for UI design
- HTML + vanilla JavaScript for developing and testing the gesture algorithms
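One practical detail when wiring per-frame model predictions to actions: raw detections are noisy, so a gesture usually has to persist for several frames before it should trigger anything. The debouncer below is an illustrative sketch (not DepthSense's actual implementation) of the kind of logic one would prototype in vanilla JS:

```javascript
// Sketch: debounce per-frame gesture predictions before they trigger actions
// like play/pause — a gesture must be held for `holdFrames` consecutive
// frames, and it fires only once per hold.
class GestureDebouncer {
  constructor(holdFrames = 5) {
    this.holdFrames = holdFrames;
    this.current = null; // gesture seen on the previous frame
    this.count = 0;      // consecutive frames of the current gesture
    this.fired = false;  // whether this hold already triggered
  }

  // Feed one frame's prediction (a string or null);
  // returns the gesture name when it fires, else null.
  update(gesture) {
    if (gesture !== this.current) {
      this.current = gesture;
      this.count = 0;
      this.fired = false;
    }
    this.count += 1;
    if (gesture !== null && !this.fired && this.count >= this.holdFrames) {
      this.fired = true;
      return gesture;
    }
    return null;
  }
}

const debouncer = new GestureDebouncer(3);
const frames = ['swipe', null, 'swipe', 'swipe', 'swipe', 'swipe'];
const events = frames.map((g) => debouncer.update(g)).filter(Boolean);
console.log(events); // ['swipe'] — the single-frame blip at the start is ignored
```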
Challenges we ran into
- Getting TensorFlow.js to run our models on React Native, which we solved with a custom WebGL driver configuration
- Getting the Spotify OAuth2 sign-in to work seamlessly
- Doing a lot of debugging in a short period of time
Accomplishments that we're proud of
- Getting sophisticated, accurate machine learning models to run in a mobile app
- Creating a clean, responsive UI that works on both Android and iOS
What we learned
- Pose estimation models with TensorFlow.js
- Advanced React Native functionality
- The OAuth2 authentication flow
What's next for DepthSense
- Integrating with FaceTime and Google Duo
- Running as a background app, since many drivers keep Google Maps open
- Publishing it on the App Store