A member of our team shared a touching story about a family member with a visual disability who struggles to use the internet and computers every day. Accessibility is something most of us take for granted, so we wanted to create a whole new way to interact with the web. This led us to build a voice-based extension, which we later improved by adding AI that learns from browsing behavior to better predict what the user wants to see.

What it does

Once activated in the browser, the extension waits for a voice command and executes it. A neural network then analyzes these interactions and trains a model on the user's online behavior.
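The team's model was written in C/C++, so the following is only an illustrative JavaScript sketch of the "learn from behavior" idea: a single sigmoid neuron trained by gradient descent to predict whether the user will pick a given page element. The feature names and training data are hypothetical.

```javascript
// Minimal sketch (an assumption, not the team's actual C/C++ network):
// one sigmoid neuron trained with stochastic gradient descent on log-loss.

const sigmoid = (z) => 1 / (1 + Math.exp(-z));

function train(samples, epochs = 1000, lr = 0.5) {
  // samples: [{ x: [feature, ...], y: 0 | 1 }]
  const n = samples[0].x.length;
  const w = new Array(n).fill(0);
  let b = 0;
  for (let e = 0; e < epochs; e++) {
    for (const { x, y } of samples) {
      const p = sigmoid(x.reduce((s, xi, i) => s + xi * w[i], b));
      const err = p - y; // gradient of log-loss w.r.t. the pre-activation
      for (let i = 0; i < n; i++) w[i] -= lr * err * x[i];
      b -= lr * err;
    }
  }
  return { w, b };
}

function predict(model, x) {
  return sigmoid(x.reduce((s, xi, i) => s + xi * model.w[i], model.b));
}

// Hypothetical features: [visited before, labeled "photos", near top of page]
const history = [
  { x: [1, 1, 1], y: 1 },
  { x: [1, 0, 0], y: 1 },
  { x: [0, 1, 0], y: 0 },
  { x: [0, 0, 1], y: 0 },
];
const model = train(history);
// Elements resembling past clicks now score higher than ones that don't.
```

In a real extension, the features would come from the DOM and the user's command history, and the score would be used to rank which element a vague command most likely refers to.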

How we built it

We built the extension with JavaScript and Node.js, using Annyang, a small JavaScript library, for the voice-control functions, and C/C++ to create and train a neural network. No other external libraries were used.
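Annyang's core pattern is a map from spoken phrases to handler functions, registered from the extension's content script. A minimal sketch (the specific phrases and actions here are our own illustrative assumptions, not the project's actual command set):

```javascript
// Sketch of registering voice commands with Annyang.
// Assumes annyang is loaded on the page by the extension; the phrase→handler
// map below is hypothetical. '*term' captures the rest of the spoken phrase.
const commands = {
  'scroll down': () => window.scrollBy(0, 500),
  'scroll up': () => window.scrollBy(0, -500),
  'search for *term': (term) => {
    window.location.href =
      'https://www.google.com/search?q=' + encodeURIComponent(term);
  },
};

if (typeof annyang !== 'undefined' && annyang) {
  annyang.addCommands(commands); // register the phrase→handler map
  annyang.start();               // begin listening for speech
}
```

Annyang wraps the browser's SpeechRecognition API, so the extension only has to declare phrases and handlers rather than manage the recognition lifecycle itself.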

Challenges we ran into

  1. Image processing
  2. Communication between different programming languages and various APIs
  3. Reverse engineering Facebook's front end (DOM)
  4. Difficulties integrating the Facebook API

Accomplishments that we're proud of

We're proud that in 36 hours we managed to create a reinforcement learning neural network from scratch and built a voice interface that changes the way one can interact with the web.

What we learned

Coming from different backgrounds, we learned to use our collective strengths to create a fully functioning, voice-controlled, AI-powered Chrome extension. In making it, we learned about AI, Node.js, and the Annyang JavaScript library.

What's next for Hi-Oid!

While our prototype only handles navigation within Facebook, the extension could potentially control any website by voice. Our vision is to create an open-source wrapper that lets other developers code their own voice interactions, not just for the web but also for different hardware.

Built With

javascript, node.js, annyang, c/c++