Having to open multiple tabs and books when we study, and getting our laptops dirty when we cook.

What it does

Gives the user the ability to play back video and place videos anywhere in their surroundings. They can also place text labels on items, controlling the labels through automatic object recognition or speech. The app can also recognize tagged objects such as doors and send HTTP requests to a server, which can be integrated with smart-home lights and doors, allowing you to control a light just by pointing at it.
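As a rough illustration of the smart-home bridge described above, the sketch below maps a recognized tag to a command payload and fires it at a hub over HTTP. The tag names, payload fields, server address, and `/device` endpoint are all hypothetical, assumed for illustration, not the project's actual API.

```python
import json
import urllib.request

SERVER = "http://192.168.1.10:8080"  # placeholder address for the smart-home hub


def command_for(tag: str, gazing: bool) -> dict:
    """Map a recognized tag to a hypothetical smart-home command payload."""
    actions = {"light": "toggle_light", "door": "toggle_lock"}
    if tag not in actions:
        raise ValueError(f"untagged object: {tag}")
    return {"action": actions[tag], "triggered_by": "gaze" if gazing else "voice"}


def send_command(tag: str, gazing: bool = True) -> None:
    """POST the command to the hub (assumed JSON endpoint)."""
    payload = json.dumps(command_for(tag, gazing)).encode()
    req = urllib.request.Request(
        f"{SERVER}/device",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # would trigger the light/door on a real hub
```

In the actual app, `send_command` would be invoked when the headset's gaze ray hits a tagged object, so "pointing at" a light becomes a single HTTP request.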

How we built it

We built it on the Unity platform for the Microsoft HoloLens.

Challenges we ran into

Debugging on an emerging technology, and trying to interface with smart-home devices.

Accomplishments that we're proud of

The fact that we have a product that not only handles video playback and placing text, but also demonstrates the capability to integrate with smart-home systems.

What's next for Eye-Sistant

Working on better UI and controls. We have shown that you can switch on a light by looking at it; next comes integrating it with a centralized smart-home system.
