Taking Aim at Gun Control

This project is intended to aid in screening people who request to purchase a firearm. Gun retailers can use our tools to survey prospective buyers and analyze whether they are mentally and emotionally fit to own one.

Project: Taking Aim at Gun Control with Microsoft Azure and the Muse Headband

Team members: Natasha Thakur, Rupali Bahl, Michelle Huntley

Tools and Environments

Microsoft Azure, Muse SDK, Node.js, socket.io, Open Sound Control (OSC) library, Android SDK

Goal

Our goal for this project was to provide society with a more comprehensive screening process for prospective gun owners. Many disastrous events in recent history have been caused by guns getting into the hands of emotionally and mentally unstable people because of inadequate screening. Our project draws on several kinds of data, including written responses, EEG brain-activity readings, and vocal responses, to analyze a subject's true intentions and mental and emotional state.

Concept and Functionalities

Our project uses the Language API in Microsoft Azure to create a chatbot that asks the user preliminary questions about their age and gun license ownership, and asks them to provide short written responses about their mental state. We then run sentiment analysis from the API on those responses to reach a preliminary judgement of whether the user is fit to own a gun. If the analysis determines that they are not, they are barred from further screening.

If they are judged potentially fit to own a gun, they would then (in theory) come into a testing space where a moderator has them take a survey of yes/no questions on a mobile app. The app records their vocal responses into a database so that sentiment analysis can later be performed on the recordings. While taking the survey, the subject also wears the Muse headband, which records their brain activity, including raw EEG, jaw-clench events, mellowness, and concentration, and streams it in real time to an online graph.

Technical Specifications

  • Chatbot: We used the Language API in Microsoft Azure to create a chatbot that asks a subject some "yes/no" questions and has them provide some short responses. We then used the API to perform sentiment analysis on the responses, using a scoring system that we constructed ourselves (a minimal sketch appears after this list). This was done in JavaScript and Node.js.

  • Brain-activity analysis: We used the OSC library to access the data that muse-io emits from the headband and pulled out specific elements, in particular the EEG, mellowness, concentration, and jaw_clench channels. We then used socket.io to stream that data to a browser on the local machine, where it is plotted on a custom graph in real time (see the streaming sketch after this list). This was all done in JavaScript and Node.js.

  • Yes/No mobile survey: We used the Android SDK in Android Studio to create a mobile app with a simple interface that would (theoretically) record the subject's voice as they answer a series of "yes/no" questions.
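
Below is a minimal sketch of the chatbot's scoring step in Node.js. It assumes the Azure Text Analytics v3.1 sentiment REST endpoint and Node 18+ (for the built-in fetch); the resource URL, the key variable, and the 0.5 pass threshold are illustrative placeholders, not the actual values from our scoring system.

    // Minimal sketch of the chatbot's sentiment-scoring step.
    // Assumptions: Azure Text Analytics v3.1 REST endpoint, Node 18+
    // (built-in fetch); the resource name, key, and 0.5 threshold are
    // placeholders, not the values from our actual scoring system.
    const ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com";
    const KEY = process.env.AZURE_LANGUAGE_KEY;

    async function scoreResponses(answers) {
      const res = await fetch(`${ENDPOINT}/text/analytics/v3.1/sentiment`, {
        method: "POST",
        headers: {
          "Ocp-Apim-Subscription-Key": KEY,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          documents: answers.map((text, i) => ({ id: String(i), language: "en", text })),
        }),
      });
      const { documents } = await res.json();

      // Average the negative-confidence score across all answers; a strongly
      // negative average fails the preliminary screen.
      const avgNegative =
        documents.reduce((sum, d) => sum + d.confidenceScores.negative, 0) /
        documents.length;
      return { avgNegative, passed: avgNegative < 0.5 };
    }

    // Example: screen two short written responses.
    scoreResponses([
      "I feel calm and in control of my life.",
      "Lately everything makes me angry.",
    ]).then(console.log);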
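
And here is a sketch of the brain-activity bridge: it listens for the OSC messages that muse-io sends over UDP (port 5000 by default) and rebroadcasts selected channels to the browser with socket.io. It assumes the `osc` and `socket.io` npm packages; the OSC paths shown match muse-io's documented output, but they can vary between Muse SDK versions.

    // Sketch of the headband-to-browser streaming bridge.
    // Assumptions: the `osc` and `socket.io` npm packages, and muse-io
    // sending OSC to UDP port 5000; OSC paths may differ by SDK version.
    const http = require("http");
    const osc = require("osc");

    const server = http.createServer().listen(3000);
    const io = require("socket.io")(server);

    // Only forward the channels the graph actually plots.
    const CHANNELS = {
      "/muse/eeg": "eeg",
      "/muse/elements/jaw_clench": "jaw_clench",
      "/muse/elements/experimental/mellow": "mellow",
      "/muse/elements/experimental/concentration": "concentration",
    };

    const udpPort = new osc.UDPPort({ localAddress: "0.0.0.0", localPort: 5000 });

    udpPort.on("message", (msg) => {
      const channel = CHANNELS[msg.address];
      if (channel) {
        // Push each sample to every connected browser for real-time plotting.
        io.emit(channel, { t: Date.now(), values: msg.args });
      }
    });

    udpPort.open();

On the browser side, the graph page simply subscribes with socket.on("eeg", ...) and appends each incoming sample to the plot.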

Challenges We Ran Into

It was difficult to figure out how to use the data from the Muse headband, since there is very little user-friendly developer documentation available for it online. There were virtually no tutorials showing someone actually implementing something with it, so we instead chose to find a way to access the raw data emitted by the headband and use that.

In addition, our idea was not solid for most of the first day of the hackathon, since it seemed there was no way to integrate the chatbot with the Muse headband, which led to the decision to implement them separately. One team member also kept running into technical difficulties: Android Studio would freeze and glitch on her laptop, and the headband would not pair with her computer's Bluetooth, forcing her to restart multiple times.

Accomplishments We're Proud Of

We're proud of how we creatively combined a wide variety of data processing to implement this unique idea. We were also amazed at how the idea grew over the course of the hackathon into something we had not expected to build at all.

What's Next

  • Program the survey app to actually record the subject's voice
  • Find a way to use sentiment analysis to analyze those voice responses
  • Find ways to use neuroscience mathematics and concepts to better analyze the EEG data
  • Design better user interfaces
  • Add more functionality to the online graph and perhaps host it on a better platform
  • Design better survey and preliminary screening questions