Inspiration

inSight was inspired by the goal of assisting the blind and the elderly. Our device gives control back to them entirely through their voice, letting them stay independent while a caretaker looks after them remotely using the power of the Internet of Things.

What it does

inSight combines a Raspberry Pi wearable body cam with a microphone and headset for voice input. We've turned the Pi into an Alexa-enabled device so the user has access to everything Alexa can offer. On top of that, we've built our own Alexa skill that takes input from the Raspberry Pi camera and lets the user ask about the environment they're in, which can be quite useful for the blind. A rough sketch of the skill's request handler is shown below.
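As an illustration (not our exact code), the skill's Lambda handler might look something like this; the intent name `DescribeEnvironmentIntent` and the `get_latest_description` lookup are hypothetical placeholders:

```python
# Hypothetical sketch of the Alexa skill's Lambda handler.
# The intent name and the scene-description lookup are illustrative only.

def get_latest_description(device_id):
    # Placeholder: in the real project this would fetch the most recent
    # scene caption produced by the Pi's camera pipeline.
    return "You appear to be in a kitchen with a table and two chairs."

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
            request["intent"]["name"] == "DescribeEnvironmentIntent":
        device_id = event["context"]["System"]["device"]["deviceId"]
        speech = get_latest_description(device_id)
    else:
        speech = "Ask me what's around you."

    # Standard Alexa response envelope: speak the text and end the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```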

The Pi also carries a sensor hat that gives us a variety of sensors. We read these to determine the user's location and orientation for fall detection. If a fall is detected, the caretaker is immediately notified and given the user's location and phone number so they can address the situation. A simplified version of the fall check is sketched below.
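As a simplified illustration, assuming a Sense HAT-style accelerometer (the threshold and the notify hook are placeholders, not our tuned values):

```python
import time
from sense_hat import SenseHat  # assumes a Sense HAT; other hats expose similar APIs

sense = SenseHat()
FALL_THRESHOLD_G = 2.5  # illustrative threshold, not our tuned value

def notify_caretaker():
    # Placeholder: in the real system this sends an alert (with location
    # and phone number) to the dashboard so the caretaker is paged.
    print("Fall detected - notifying caretaker")

while True:
    accel = sense.get_accelerometer_raw()  # g-forces on x, y, z
    magnitude = (accel["x"] ** 2 + accel["y"] ** 2 + accel["z"] ** 2) ** 0.5
    # A sharp spike in total acceleration is a crude indicator of a fall.
    if magnitude > FALL_THRESHOLD_G:
        notify_caretaker()
        time.sleep(10)  # debounce so one fall doesn't fire repeated alerts
    time.sleep(0.1)
```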

How we built it

The Alexa skill was built with the Alexa Skills Kit, alongside a Python script that sends the environment data to Alexa through an AWS Lambda function. The environment itself is identified by the Microsoft Computer Vision API; the Pi-side call looks roughly like the sketch below.
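For reference, a call to the Computer Vision describe endpoint can be made like this; the region, subscription key, and image path are placeholders:

```python
import requests

# Placeholders - substitute your own Azure region, key, and captured frame.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/describe"
SUBSCRIPTION_KEY = "<your-computer-vision-key>"

def describe_image(image_path):
    """Send a camera frame to the Computer Vision API and return a caption."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    resp = requests.post(ENDPOINT, headers=headers, data=image_bytes)
    resp.raise_for_status()
    captions = resp.json()["description"]["captions"]
    # Captions come back ranked by confidence; take the best one.
    return captions[0]["text"] if captions else "I couldn't tell what was there."

if __name__ == "__main__":
    print(describe_image("frame.jpg"))
```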

For device tracking and sensing we took advantage of MongoDB Stitch. Stitch handles all user authentication for the dashboard, along with the functions for storing and retrieving data from the database. The dashboard UI was built with Bootstrap 4, and custom Python/Node.js scripts run on the Raspberry Pi; a sketch of how the Pi reports readings is below.
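As an illustration of the Pi-to-database path (the webhook URL and payload fields here are hypothetical, not our actual Stitch configuration), the Pi can post each reading to a Stitch HTTP service webhook, which then writes it to the collection:

```python
import time
import requests

# Hypothetical Stitch incoming-webhook URL; the real one comes from the
# Stitch console for your app and HTTP service.
STITCH_WEBHOOK_URL = (
    "https://webhooks.mongodb-stitch.com/api/client/v2.0/app/"
    "<app-id>/service/sensors/incoming_webhook/report"
)

def report_reading(device_id, lat, lon, fell):
    """Post one sensor reading to the Stitch webhook for the dashboard."""
    payload = {
        "deviceId": device_id,
        "location": {"lat": lat, "lon": lon},
        "fallDetected": fell,
        "timestamp": int(time.time()),
    }
    resp = requests.post(STITCH_WEBHOOK_URL, json=payload, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    report_reading("insight-pi-01", 42.35, -71.06, False)
```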

Challenges we ran into

Our first challenge was location, since we didn't have a GPS hat; we solved this with GeoIP-based location through our inSight cloud service (a sketch of the lookup is below). Connecting to Microsoft Azure also caused problems that we had to debug. The Alexa Skills Kit gave us issues with voice output from Alexa: while learning AWS Lambda we were able to store our response, but the retrieved output was invalid. We solved this by re-evaluating our development process and isolating the problem, which turned out to be Amazon not picking up the changes we had made to the skill. The final challenge was the learning curve of MongoDB Stitch, which took some time but let us build a powerful implementation on top of the new MongoDB service.
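For context, a GeoIP lookup of this kind can be made against a public service; this sketch uses ipinfo.io purely as an example provider, not necessarily the one behind our cloud service:

```python
import requests

def geoip_location():
    """Approximate the device's location from its public IP address."""
    # ipinfo.io is used here only as an example GeoIP provider.
    resp = requests.get("https://ipinfo.io/json", timeout=5)
    resp.raise_for_status()
    data = resp.json()
    lat, lon = map(float, data["loc"].split(","))  # "loc" is "lat,lon"
    return lat, lon, data.get("city")

if __name__ == "__main__":
    print(geoip_location())
```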

Accomplishments that we're proud of

Our biggest accomplishment was debugging our issue with the Alexa Skills Kit. After our skill kept returning errors, we spent hours debugging and writing test cases for the system to finally isolate the problem we had with associating the camera input with a specific Alexa device.

What we learned

We learned a lot. One of the newest skills the team acquired was MongoDB Stitch; once we got the hang of it, the service made deploying new database functions a breeze. We also had a first-time hacker on our team, and he got a taste of every aspect of a hackathon, learning everything from Alexa skills to using Raspberry Pi sensors.

What's next for inSight

Now that we have the hardware and basic software ready, we want to build an enclosure and strap so the device can be worn, and run tests with potential users to get feedback. We also want to add voice-based navigation, backed by a GPS chip, so the user can be guided around the world by voice alongside the classifier system.

We're Open Source!

Check out our GitHub @ https://github.com/weinzach/inSight-Codebase
