Inspiration
We wanted to create a new way for people to interact with their devices: hands-free, using only their brainwaves and voice.
What it does
We use a Melon EEG headband to monitor your brainwaves and a Microsoft Kinect to recognize your voice commands. Based on these two inputs, we control actions on your computer, such as keystrokes, mouse clicks, and window changes.
How we built it
We used a deconstructed Android sample application to obtain the Melon SDK, as it is not publicly available. Once we had the SDK, we were able to tap into the data sent from the Melon headset to our Android mobile device, and we transmitted that data to a Firebase server for temporary storage. In parallel, we used the Kinect SDK to perform speech recognition in C#. We stream the Melon EEG data back down from the web server and combine it with the recognized voice commands to trigger events on the computer, such as keystrokes, mouse clicks, and window changes. An interface to low-level OS functions lets us inject keystrokes directly in software.
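The pipeline above can be sketched as a small decision rule: gate the Kinect voice command on the streamed EEG attention score, and map the combination to a computer action. This is an illustrative sketch only; the class name, command phrases, and the 70.0 attention threshold are assumptions, not BrainTap's actual code.

```java
// Hypothetical sketch of BrainTap's control loop. The attention
// threshold (70.0) and the command-to-action mapping are assumptions
// made for illustration, not the project's real implementation.
public class BrainTapController {
    private final double attentionThreshold;

    public BrainTapController(double attentionThreshold) {
        this.attentionThreshold = attentionThreshold;
    }

    /**
     * Decide which action to fire given the latest EEG attention score
     * (0-100, streamed from the Firebase server) and the phrase the
     * Kinect speech recognizer returned.
     */
    public String decideAction(double attention, String voiceCommand) {
        if (attention < attentionThreshold) {
            return "NONE"; // user is not focused; ignore the voice input
        }
        switch (voiceCommand) {
            case "click":       return "MOUSE_CLICK";
            case "next window": return "ALT_TAB";
            case "type enter":  return "KEY_ENTER";
            default:            return "NONE";
        }
    }

    public static void main(String[] args) {
        BrainTapController controller = new BrainTapController(70.0);
        System.out.println(controller.decideAction(85.0, "click"));       // MOUSE_CLICK
        System.out.println(controller.decideAction(40.0, "click"));       // NONE
        System.out.println(controller.decideAction(90.0, "next window")); // ALT_TAB
    }
}
```

In the real system, the returned action would be handed to the low-level keystroke interface; on the JVM, `java.awt.Robot`'s `keyPress`/`keyRelease` methods could play that role.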
Challenges we ran into
The Melon EEG SDK is not publicly available and was very hard to find online. Calibrating the Melon and training with it were also very challenging.
Accomplishments that we're proud of
Getting the entire system to work together was very rewarding. Controlling aspects of the computer with just your mind, assisted by your voice, was a unique experience.
What we learned
We learned how to make an Android application that interfaces with the Melon. Additionally, we learned how to use the Kinect for various applications, including voice recognition.
What's next for BrainTap
We plan to expand the number of distinct signals we can recognize from the Melon for BrainTap. To do this, we will use machine learning to better train the software during calibration. We also want to expand the role the Kinect plays in BrainTap.
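Before reaching for machine learning, calibration can start from simple statistics: record a short resting baseline for each user and set the trigger threshold a little above it. The sketch below shows one such rule (mean plus half a standard deviation); the formula and class name are illustrative assumptions, not a committed design.

```java
// Illustrative calibration sketch: derive a per-user attention
// threshold from resting-state samples. The rule (mean + 0.5 * stddev)
// is an assumption for illustration, not BrainTap's actual method.
public class MelonCalibrator {
    /** Returns a trigger threshold from resting-state attention samples. */
    public static double calibrate(double[] baselineSamples) {
        double mean = 0.0;
        for (double s : baselineSamples) {
            mean += s;
        }
        mean /= baselineSamples.length;

        double variance = 0.0;
        for (double s : baselineSamples) {
            variance += (s - mean) * (s - mean);
        }
        variance /= baselineSamples.length;

        // Require the user to exceed their resting mean by half a
        // standard deviation before an action can fire.
        return mean + 0.5 * Math.sqrt(variance);
    }

    public static void main(String[] args) {
        double[] resting = {40, 45, 50, 55, 60};
        System.out.println(MelonCalibrator.calibrate(resting)); // slightly above the resting mean of 50
    }
}
```

A learned model could later replace this rule while keeping the same interface: baseline samples in, trigger threshold out.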