Our inspiration was Saqib Shaikh, a software engineer at Microsoft who is also visually impaired. To write code, he uses a text-to-speech engine that reads back what he wrote. That got us thinking: how do other visually impaired individuals learn to code and develop software? What if there were not only text-to-speech plugins, but also speech-to-text plugins? That became the basis of our project, Shaikh.
What it does
Shaikh takes speech input, either through your device's built-in microphone or an Amazon Echo. Sample inputs include: "declare a variable named x and initialize it to 0," or "declare a for loop that iterates from 1 to 50." Alexa then translates these requests into the appropriate syntax (in this case, Python) and sends it to our server. Our VS Code extension makes a GET request to the server and inserts the returned code snippet into the text editor.
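The core of the pipeline is mapping a spoken request to a Python snippet. Here is a minimal sketch of that intent-to-code step; the patterns and the `translate` function are illustrative assumptions, not the actual Alexa skill grammar:

```python
import re

def translate(command: str) -> str:
    """Map a spoken request to a Python snippet (illustrative sketch)."""
    # "declare a variable named x and initialize it to 0" -> "x = 0"
    m = re.match(r"declare a variable named (\w+) and initialize it to (\S+)", command)
    if m:
        return f"{m.group(1)} = {m.group(2)}"
    # "declare a for loop that iterates from 1 to 50" -> "for i in range(1, 51):"
    m = re.match(r"declare a for loop that iterates from (\d+) to (\d+)", command)
    if m:
        return f"for i in range({m.group(1)}, {int(m.group(2)) + 1}):"
    raise ValueError(f"unrecognized command: {command}")

print(translate("declare a variable named x and initialize it to 0"))  # x = 0
```

In the real flow, the translated snippet is posted to the server, and the VS Code extension fetches it with a GET request and inserts it at the cursor.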
How I built it
Challenges I ran into
Because our project had so many components, we had trouble integrating the backend pieces cohesively. We also ran into difficulties with the Amazon Echo, since it was our first time using one.
Accomplishments that I'm proud of
We built something super cool! This was our first time working with Alexa and we're excited to integrate her technology into future hacks.
What I learned
How to use Alexa; the difference between a JAR and a runnable JAR file; language models; how to integrate an entire backend; Kern's juice; and how long we could stay awake without sleeping.
What's next for Shaikh
Using Stack Overflow's API to catch errors and suggest/implement changes, and expanding to other code editors.