Inspiration

In our increasingly technical world, knowing how to code has become more a necessity than a luxury. Everyone from the local barber to giant corporations makes use of the wide array of programming languages that exist to further their ambitions. However, every existing programming language is limited in the same two ways: it is visually based and strictly syntactical. Everything takes place on a screen as text, and the language we use to communicate with the computer is rigid and robotic. This leaves two large groups behind as the rest of us take advantage of the wonders that learning a programming language can provide: those who cannot interact with a screen or a keyboard (the visually impaired, amputees) and those who find it difficult to learn the strict, robotic syntax of existing languages. As a result, we took it upon ourselves to truly democratize coding. We are proud to say that we have made coding more accessible for both of these groups:

- Physically disabled coders: a language based on speech and hands-free coding instead of traditional typing.
- Amateur coders: a conversational programming language free of the strict syntactical limitations of existing ones.

What it does

Verbal Coding is a fully functional web-based programming language (built with JS and HTML) that takes a different approach to programming. Users can write, edit, and export their programs with minimal to no use of the keyboard; instead, most of the coding is done verbally, with the user talking to the computer and the computer reporting back. Our interface prioritizes the semantics of the code over the syntax of the language, and it supports error reporting, the creation of projects, and predictive text.
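For the curious, the listen-and-respond loop at the heart of the interface needs nothing more than the browser's built-in Web Speech API. The sketch below shows the general shape of that loop; `handleCommand` is a hypothetical stand-in for our actual parser:

```js
// Minimal sketch of the listen/respond loop the interface is built on.
// Chrome exposes recognition as webkitSpeechRecognition; handleCommand is
// a hypothetical stand-in for the real verbal-code parser.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.continuous = true;       // keep listening across statements
recognition.interimResults = false;  // only act on finalized phrases

recognition.onresult = (event) => {
  const phrase = event.results[event.results.length - 1][0].transcript.trim();
  const response = handleCommand(phrase); // e.g. "Added line: print hello"
  speak(response);
};

// Report back to the user out loud instead of on screen.
function speak(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  window.speechSynthesis.speak(utterance);
}

recognition.start();
```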

How we built it

We crafted the verbal coding language from scratch by analyzing the hindrances and burdens of writing code, especially for the visually impaired. Next, we integrated Google's Web Speech APIs into an HTML view that provides a verbal interface rather than a visual one, and we added specific features to make the coding experience more accommodating to those who are visually impaired. We used fuzzy logic to create smart error reporting, along with a framework that parses the verbal code into a Python script. Lastly, we created an interface for users to save their verbal code as projects in the browser's localStorage, and we integrated interactive tutorials and predictive text.
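To give a flavor of the fuzzy error reporting, here is a simplified sketch of snapping a misheard token to the nearest known keyword with an edit-distance check. The keyword list and distance threshold are illustrative, not our exact values:

```js
// Sketch of fuzzy keyword matching for "smart" error reporting: if a spoken
// token isn't an exact keyword, correct it to (or suggest) the closest one.
// KEYWORDS and the threshold below are illustrative placeholders.
const KEYWORDS = ["print", "set", "repeat", "if", "otherwise", "end", "compile", "quit"];

function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function nearestKeyword(token) {
  let best = null, bestDist = Infinity;
  for (const kw of KEYWORDS) {
    const d = levenshtein(token.toLowerCase(), kw);
    if (d < bestDist) { best = kw; bestDist = d; }
  }
  // Accept only close matches; otherwise report the word as unrecognized.
  return bestDist <= 2 ? best : null;
}
```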

Challenges we ran into

Over the span of the project we ran into countless challenges, but we persevered. For instance, we found that the speech-to-text API supplied by Google would often confuse similar-sounding words, which threw off our parser. One prime example: when we said "end", it would report "and". To overcome this, we set bounds on the words the API can return so that we never receive an input of "and", and we set similar bounds for many other words, including "to", "compile", and "quit". Another problem we encountered was hosting our service on a domain. We ran into a number of DNS issues, which we resolved by redirecting our website domain through GitHub. This fixed our web hosting issues and made our application publicly and easily accessible.
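As a rough illustration, the word-bounding fix can be thought of as a substitution pass over the transcript before it reaches the parser. The table below is a simplified sketch of the idea, not our full rule set:

```js
// Sketch of the word-bounding fix: remap words the recognizer reliably
// confuses before the transcript reaches the parser. These entries are a
// small, illustrative subset of the substitutions described above.
const HOMOPHONE_FIXES = {
  "and": "end",   // "end" is a keyword; "and" is not valid in our grammar
  "two": "to",
  "too": "to",
};

function normalizeTranscript(transcript) {
  return transcript
    .split(/\s+/)
    .map((word) => HOMOPHONE_FIXES[word.toLowerCase()] ?? word)
    .join(" ");
}

// e.g. normalizeTranscript("repeat five times and") -> "repeat five times end"
```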

Accomplishments that we're proud of

Our biggest accomplishment was creating lessons that are simple and easy to follow. While creating these tutorials, we kept in mind the difficulties that a visually impaired person may face while coding, and we simplified the process for them. We focused on semantics over syntax, so the user can develop a strong conceptual understanding with ease. This makes our target demographic very inclusive, spanning audiences from young beginners to the disabled elderly. Another important accomplishment was our conversion from pseudocode to Python: the user's code can be turned into a Python file and later executed on other systems, easing their transition into syntax-based programming.
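As a sketch of how the export step can work in the browser, the generated source can be wrapped in a Blob and offered as a .py download; `compileToPython` here is a hypothetical stand-in for our verbal-to-Python parser:

```js
// Sketch of the export step: the verbal program (parsed into Python source
// elsewhere) is written out as a downloadable .py file.
// compileToPython is a hypothetical stand-in for the actual parser.
function exportToPython(verbalProgram, filename = "program.py") {
  const pythonSource = compileToPython(verbalProgram);
  const blob = new Blob([pythonSource], { type: "text/x-python" });
  const url = URL.createObjectURL(blob);

  const link = document.createElement("a");
  link.href = url;
  link.download = filename; // triggers a file download in the browser
  link.click();
  URL.revokeObjectURL(url);
}
```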

What we learned

Throughout our project, we came to understand the broad reach of computer science and the possibilities it has to offer. We found a common but overlooked problem and brainstormed an idea that could change the way computer science is taught. Along the way, we learned about the perspective of disabled people, how important computer science is to the world, and how people with disabilities fall behind in the coding world because of the numerous difficulties they face. On the technical side, we learned how speech recognition and speech synthesis work to understand and produce speech, and we were introduced to a breadth of predictive and machine learning APIs and how they can be used in projects. Lastly, we gained insight into how our programming language would be perceived by its user base through interactions with fellow hackers and esteemed mentors.

What's next for Verbal Coding

In the future, we want to make text-to-speech and speech-to-text faster and more accurate by extending the API to learn the user's accent and intonation. We want to allow users to upload Python files and to accommodate more complex control flow, algorithms, and data structures, making the language feasible for use in the workplace. We also want to integrate our program into the curricula of schools that cater to the disabled. Lastly, we want to add more tutorials for the verbal language and use machine learning algorithms to understand what the user is trying to code overall, so that we can offer more holistic suggestions on the code.
