Inspiration

The team recognised the problem of aphasia through shared experience in stroke medicine and neuroscience. The Emotiv wearable EEG gave us a way to engage patients who have a speech impairment. There are 100,000 new strokes in the UK every year. Speech impairment is a common post-stroke deficit; it can manifest as difficulty understanding speech, difficulty articulating words, or a mixture of the two. In most instances the speech impairment is significant and has a huge impact on the patient’s quality of life.

What it does

A patient wearing the Emotiv device is spoken to by a robot companion that interprets the patient’s feelings and responds appropriately. This provides cognitive stimulation and could help improve speech in stroke patients who have difficulty articulating words.
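
To give a feel for the interaction loop, here’s a tiny sketch of mapping an inferred emotion to a spoken response; the labels and phrases are placeholders made up for illustration, not real Emotiv output or the exact logic we ran:

```python
# Illustrative only: map an EEG-derived emotion label to something the robot says.
RESPONSES = {
    "stressed": "Let's take a slow breath together.",
    "engaged": "That's great! Shall we try another word?",
    "frustrated": "No rush. We can come back to that one later.",
    "neutral": "How are you feeling right now?",
}

def choose_response(emotion: str) -> str:
    # Fall back to a gentle check-in if the reading is unrecognised.
    return RESPONSES.get(emotion, RESPONSES["neutral"])

print(choose_response("stressed"))
```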

How we built it

  • Figured out how to pull data from the brain-computer interface
  • Used React to build the user interface and Flask to run the web server (a minimal sketch follows this list)
  • Used the Animus Python library to interface with the hardware (Pepper the robot)
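
As a rough sketch of how the pieces could fit together: a side script pushes the latest EEG-derived reading to the Flask server, and the React UI (or the robot logic) polls it. The route name and payload shape below are illustrative placeholders, not the Emotiv or Animus API:

```python
# Minimal Flask server sketch: holds the most recent reading in memory.
from flask import Flask, jsonify, request

app = Flask(__name__)
latest_reading = {"emotion": "neutral", "score": 0.0}

@app.route("/reading", methods=["POST"])
def update_reading():
    # The BCI-side script POSTs JSON like {"emotion": "stressed", "score": 0.7}
    global latest_reading
    latest_reading = request.get_json(force=True)
    return jsonify(status="ok")

@app.route("/reading", methods=["GET"])
def get_reading():
    # The React UI polls this endpoint for the latest state.
    return jsonify(latest_reading)

if __name__ == "__main__":
    app.run(port=5000)
```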

Challenges we ran into

Emotiv is the Apple of BCIs: very pretentious and difficult to work with. Circumventing the ridiculous licensing process was an issue. There was no developer documentation for Animus, so we had to learn how to use the data by trial and error. We had no idea what WebSockets were - still not sure we do... Flask was new to all of us except for Kishan, whose passion for web development knows no bounds. Raj can’t code and couldn’t understand anything anyone was saying; he’s the use case for Verbal Seasoning.

Accomplishments that we’re proud of

Integrating hardware and software was really tough, especially in only 24 hours. Each of us worked with languages or tools that we hadn’t previously used. The logo is a masterpiece. And Hayley is really proud that she somehow managed to understand the undocumented Animus SDK, make the robot talk and change its eye colour. Live Demo: https://twitter.com/hayleykwok0_0/status/1239200369776963585?ref_src=twsrc%5Etfw

What we learned

Brain-computer interfaces are difficult to work with; they don’t have APIs, and we (Kate) had to get very creative in feeding data to the web server. We developed our UI skills and adopted a user-centred approach, referring back to our target user throughout the hackathon. If we did this again, we’d probably attempt something simpler; we tried to integrate a lot of different components, which didn’t end well...
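
One way to feed readings across, roughly in the spirit of what we did, is a small side script that POSTs each reading to the server. Here’s a faked-up sketch (random readings instead of the real headset stream, same placeholder endpoint as the Flask sketch above):

```python
# Hypothetical feeder: in the real project the readings came from the Emotiv
# headset; here they're randomised just to show how data reaches the server.
import random
import time

import requests

SERVER = "http://localhost:5000/reading"  # placeholder Flask endpoint from earlier
EMOTIONS = ["stressed", "engaged", "frustrated", "neutral"]

while True:
    reading = {"emotion": random.choice(EMOTIONS), "score": round(random.random(), 2)}
    try:
        requests.post(SERVER, json=reading, timeout=2)
    except requests.RequestException:
        pass  # keep going if the server is briefly unreachable
    time.sleep(1)
```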

What’s next for Verbal Seasoning

We didn’t use Pepper’s proprietary software, which has sophisticated AI and would have been able to engage the patient better. Naturally, we would like to try Verbal Seasoning with patients and see whether we’ve solved the problem we identified. Pepper’s capabilities would also allow us to extend the function beyond speech: the EEG could identify moments of distress and call for help.
