Empowering people in need with technology, drawing on examples from our own lives.
What it does
A camera mounted on glasses recognizes scenes with a neural network, helping visually impaired users understand their surroundings.
How we built it
We used PyTorch to train our neural network using an MIT database of over 2.5 million images to recognize scenes. We then offloaded the processing to a Jetson TX2 for portability.
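The inference side of this pipeline can be sketched in PyTorch. This is a minimal illustration, not the project's actual model: the tiny CNN and the scene-label list below are hypothetical stand-ins for the network trained on the MIT scene database.

```python
import torch
import torch.nn as nn

# Hypothetical subset of scene labels; the real model covers far more classes.
SCENE_LABELS = ["street", "kitchen", "park", "office"]

class TinySceneNet(nn.Module):
    """Toy stand-in for the scene-classification network."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.classifier = nn.Linear(8, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)  # (N, 8)
        return self.classifier(x)        # (N, num_classes)

def classify_frame(model, frame):
    """Return the most likely scene label for a single camera frame."""
    model.eval()
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))  # add batch dimension
        return SCENE_LABELS[logits.argmax(dim=1).item()]

model = TinySceneNet(num_classes=len(SCENE_LABELS))
frame = torch.rand(3, 224, 224)  # stand-in for one glasses-camera frame
label = classify_frame(model, frame)
```

On the Jetson, the same loop would run on frames pulled from the glasses camera instead of a random tensor.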
Challenges we ran into
Dependency issues, mainly getting the trained network to run on an ARM processor instead of the x86 architecture.
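A first sanity check when chasing these dependency issues is confirming which architecture you are actually building for, since prebuilt x86 wheels will not work on the Jetson:

```shell
# Print the machine architecture; on the Jetson TX2 this reports
# "aarch64", while a typical desktop reports "x86_64".
uname -m
```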
Accomplishments that we're proud of
Getting the network to classify images.
What we learned
The ARM processor architecture is very different from x86.
What's next for SceneNet
Add localization and extend the use cases to classify more dynamic scenes and objects.