What inspired us
We were inspired by the APIs developed by Nexmo and by J.P.Morgan's ideas around empowerment through technology. HearMe is our website, aimed at people with hearing impairments or other conditions that make telephone conversations difficult or impossible.
What does HearMe do
HearMe has a simple, clean interface that takes two inputs, a phone number and a message, and returns one output: the transcribed reply. It calls the given number using the Nexmo Voice API, reads the message aloud, and after the call converts the other party's speech to text. It is 'a voice for those who need it'.
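As a rough illustration of the call step (a sketch, not HearMe's actual code), a Nexmo voice call is driven by an NCCO, a JSON array of actions. A single `talk` action is enough to read the user's message aloud; the `buildNcco` helper and the example message below are ours, not the project's:

```javascript
// Sketch: build the NCCO (Nexmo Call Control Object) that reads a
// user-supplied message to the callee via text-to-speech.
// The "talk" action is part of Nexmo's documented NCCO format;
// buildNcco itself is a hypothetical helper.
function buildNcco(message) {
  return [
    {
      action: "talk",
      text: message, // text-to-speech reads this to the person called
    },
  ];
}

const ncco = buildNcco("Hello, this is a message sent via HearMe.");
console.log(JSON.stringify(ncco));
```

The server would return this JSON from its answer webhook when Nexmo connects the call.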
How we built it
We began with pair programming to divide the labour: two of our team members were experienced with Node.js, and pairing let the other two catch up rapidly while contributing their own skills. We used an Angular front end and a Node.js back end, which let the two halves be developed asynchronously without interfering with one another. Git was our version control system.
What we're proud of
We're proud of HearMe, and feel that with further development it could become a genuinely assistive tool for those who require it.
Challenges we ran into
Some of our team members struggled with inefficient code merges; we should make better use of branches next time.
We also hit issues with the Nexmo Voice API, which forced us to implement a one-shot message/response system rather than the dynamic conversations we originally planned.
What we learned
We learned a lot about Nexmo's Voice API, which was a really interesting learning opportunity. With all of us favouring Java and Python, we stepped a little out of our comfort zones using Angular, Node.js, and plenty of HTML and CSS.
Future plans for HearMe
We would fix the response text not being displayed to the user: the speech is correctly converted to text server-side, but the client-side WebSocket never receives it.
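A minimal sketch of the client-side half of that fix, assuming the server pushes transcripts as JSON over the WebSocket. The `{type, text}` message shape here is an assumption for illustration, not HearMe's actual protocol:

```javascript
// Hypothetical client-side handler for transcript messages.
// The {type: "transcript", text: "..."} shape is assumed, not
// taken from the project's real wire format.
function handleSocketMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type === "transcript") {
    return msg.text; // the UI would render this for the user
  }
  return null; // ignore other message types
}
```

In the browser this would be wired up as `socket.onmessage = (event) => display(handleSocketMessage(event.data));`, with `display` being whatever updates the Angular view.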
We would like to make conversations dynamic, with the hearing-impaired user able to respond by typing text to be read over the phone as it is entered, and the audio response piped back as text displayed to the user. This would require tweaking the NCCO call object to stream the call audio into a WebSocket endpoint on the server, and converting the text processing to consume an audio stream and emit a text stream.
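The NCCO tweak described above might look like the following sketch. Nexmo's `connect` action with a `websocket` endpoint is documented behaviour for streaming call audio; the `wss://` URI and the helper function are placeholders of ours:

```javascript
// Sketch: an NCCO that first reads the typed message, then streams the
// call's audio to a server-side WebSocket for live transcription.
// The URI is a placeholder; "connect" with a websocket endpoint and the
// audio/l16 content type are part of Nexmo's documented NCCO format.
function buildStreamingNcco(message, wsUri) {
  return [
    { action: "talk", text: message },
    {
      action: "connect",
      endpoint: [
        {
          type: "websocket",
          uri: wsUri, // e.g. a wss:// endpoint on our Node.js server
          "content-type": "audio/l16;rate=16000", // raw 16 kHz linear PCM
        },
      ],
    },
  ];
}

const streamingNcco = buildStreamingNcco(
  "Hello from HearMe",
  "wss://hearme.example.com/audio"
);
console.log(JSON.stringify(streamingNcco));
```

The server's WebSocket handler would then feed the incoming PCM frames to the speech-to-text step and push the resulting text back to the hearing-impaired user's browser.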