We were inspired by a demo on Myo's website showing the device recognizing various hand gestures and translating them into computer input. Computer technology changes every day, and the way we interact with computers needs to change with it. Imagining wearables that could understand us and send our commands to computers is what got this project started.

What it does

Multiple users wearing Myo armbands can collaborate on the same remote whiteboard, and the combined result is displayed simultaneously in different locations.

How we built it

At first, we built a simple web application that printed the frames of data captured from the Myo using the MyoJS framework. Later, we implemented a backend server whose job is to manage, delegate, and synchronize the Myo data streams coming from our Myo web clients. We hosted the server on Microsoft Azure, and it worked really well: we achieved a maximum throughput of 600 eps (events per second). Because we broke the barrier of one Myo controlling one computer, we can use multiple Myo data streams to control the same canvas on a remote computer running our blackboard demo software.

We invested the rest of our time in debugging the application, improving its stability, and creating tools to help us manage the three Myos we had borrowed. We built a simple iOS/Android application with the Ionic framework that shows the data stream and EMG graph of every Myo on the server, and we made a connection counter on a BeagleBone whose 7-segment display shows how many Myos are currently connected.
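The core of the backend described above is a hub that accepts drawing events from several Myo clients and merges them into one shared canvas state that every connected blackboard can render. A minimal sketch of that merge/sync logic is below; the class and method names (`StrokeHub`, `push`, `canvas_state`) are illustrative assumptions, and the real server's networking and MyoJS integration are omitted.

```python
import time
from collections import defaultdict, deque

class StrokeHub:
    """Sketch of the server's job: merge pointer samples from many
    Myo clients into one shared canvas state (names are hypothetical,
    not the actual Wasabi Myo implementation)."""

    def __init__(self):
        self.clients = set()              # connected Myo client ids
        self.strokes = defaultdict(list)  # client id -> list of (x, y) points
        self.events = deque()             # merged, time-ordered event log

    def connect(self, client_id):
        self.clients.add(client_id)

    def disconnect(self, client_id):
        self.clients.discard(client_id)

    def push(self, client_id, x, y):
        """Record one pointer sample from a client and append it to the
        shared log that remote canvases would replay."""
        if client_id not in self.clients:
            raise ValueError(f"unknown client {client_id!r}")
        point = (x, y)
        self.strokes[client_id].append(point)
        self.events.append((time.time(), client_id, point))

    def canvas_state(self):
        """The combined result: every client's stroke, keyed by client id."""
        return {cid: list(pts) for cid, pts in self.strokes.items()}
```

With two simulated armbands pushing samples, `canvas_state()` returns both strokes merged, which is the "combined result displayed in different locations" behavior, minus the transport layer.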

Challenges we ran into

  • It's not easy to get meaningful data from the Myo Armband SDK.
  • Data is sent to a centralized server, so performance can be limited by network conditions

Accomplishments that we're proud of

  • A Myo can only connect to one computer via Bluetooth, but we broke that barrier: our setup lets a Myo send commands across the globe
  • We successfully built a demo showing that it's possible to interact with others in the same digital workspace without traditional computer input (keyboard/mouse)

What we learned

  • There is still work to do to optimize our data collection
  • The Myo SDK needs to be more developer-friendly

What's next for Wasabi Myo

The current Wasabi Myo is a proof of concept that we spent a lot of time creating, and there are still many features planned. If we had more time, we would love to take this project to the next level by adding gesture controls and simple collaboration software.
