Inspiration

What it does

How we built it

We used a computer vision devkit from Qualcomm to run two AI models concurrently. Using YOLO, we identify the objects placed on our platform.


Running parallel AI inference with MiDaS v2 for depth estimation, we fused the outputs of the two models to recover real-world coordinates for each detected object.
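A minimal sketch of what that fusion step can look like: take the center of a YOLO-style bounding box, read the depth there, and back-project through a pinhole camera model. The intrinsics (`FX`, `FY`, `CX`, `CY`) and the metric depth map are assumptions for illustration; MiDaS v2 actually outputs relative depth, so a scale-calibration step is assumed to have happened already.

```python
import numpy as np

# Hypothetical camera intrinsics; the real values come from calibrating
# the devkit camera.
FX, FY = 600.0, 600.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed, 640x480 frame)

def bbox_to_world(bbox, depth_map):
    """Back-project the center of a YOLO-style (x1, y1, x2, y2) box to
    camera-frame (X, Y, Z) using a depth map already scaled to metres."""
    x1, y1, x2, y2 = bbox
    u = int((x1 + x2) / 2)              # pixel column of the box center
    v = int((y1 + y2) / 2)              # pixel row of the box center
    z = float(depth_map[v, u])          # depth sampled at the center
    x = (u - CX) * z / FX               # pinhole back-projection
    y = (v - CY) * z / FY
    return x, y, z

# Dummy depth map: every pixel 1.5 m away, box centered on the image.
depth = np.full((480, 640), 1.5)
print(bbox_to_world((300, 220, 340, 260), depth))  # → (0.0, 0.0, 1.5)
```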

(Image: pipeline diagram)

Our setup is simple: a camera faces a platform where objects are detected and can then be interacted with in the XR environment.

(Image: hardware setup)

Challenges we ran into

Learning Qualcomm's project platform from scratch and working with a complicated GStreamer pipeline.
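For a sense of the kind of pipeline involved, here is an illustrative GStreamer sketch: capture camera frames, normalize the format, and hand them to the application for inference. The element choices and caps are assumptions, not our exact pipeline on the devkit.

```shell
# Illustrative only: camera -> fixed caps -> colorspace conversion ->
# appsink, where the application pulls frames for the AI models.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,width=640,height=480 ! \
  videoconvert ! \
  appsink name=inference-sink
```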

Accomplishments that we're proud of

Getting our model to work, and collaborating across the different languages spoken within our team. We're proud of the range of skills and experiences each member brought to the project, whether as a developer or a pitcher, and of how we incorporated them all.

What we learned

We learned how to navigate an array of Qualcomm tools, and how to work through team differences while still shipping a working project.

What's next for ZenFriend XR

Training the model to recognize more items that could be found in one's house or room, and doing more user testing.

Built With
