Inspiration for this project is personally motivated. I was diagnosed with Retinitis Pigmentosa (RP) when I was 11 years old. RP is a degenerative disease that could very well leave me totally blind.
Growing up, I was often told that I wouldn't be able to do something, or succeed at it, because of my vision. When I was in undergrad, I wanted to pursue computer science. I was met not only with the "But you are a girl" arguments, but was also told that I wouldn't be successful in the program due to my limited vision. That never stopped my passion. I continued to learn on my own, including creative shortcuts that made learning easier for me, and eventually gained the skills to pursue software development as a career.
Screenreader is a tool I wish I had had access to when I was learning. With it, I feel I could have learned faster and been more successful in my early endeavours.
What it does
Screenreader is a Visual Studio Code extension that uses Microsoft's Text-to-Speech API. The extension reads selected text aloud via a key command (Cmd/Ctrl + Shift + S) or the keyword "Speak" in the Command Palette.
The extension requires the user to have an API key, which can be entered in the Extension Settings.
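In a VS Code extension, the command, keybinding, and settings entry are all declared through contribution points in package.json. A minimal sketch of how such wiring could look (the command ID and setting name here are illustrative, not necessarily the extension's actual identifiers):

```json
{
  "contributes": {
    "commands": [
      { "command": "screenreader.speak", "title": "Speak" }
    ],
    "keybindings": [
      {
        "command": "screenreader.speak",
        "key": "ctrl+shift+s",
        "mac": "cmd+shift+s",
        "when": "editorTextFocus"
      }
    ],
    "configuration": {
      "title": "Screenreader",
      "properties": {
        "screenreader.apiKey": {
          "type": "string",
          "default": "",
          "description": "Azure Text-to-Speech API key."
        }
      }
    }
  }
}
```

At runtime, the extension code can then read the key with `vscode.workspace.getConfiguration("screenreader").get("apiKey")`.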
How we built it
We capture the Speak event and perform some pre-processing to translate symbols that the API does not recognize. Next, we pass the values off to Azure services, wait a tiny bit, and we have our audio output.
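The pre-processing step can be kept as a pure function that swaps problem symbols for spoken words before the text goes to the API. A sketch of the idea (the specific symbol-to-word substitutions here are illustrative assumptions, not the extension's actual map):

```typescript
// Illustrative symbol map: tokens the TTS API may mispronounce,
// paired with spoken replacements. The real extension's map may differ.
const SYMBOL_WORDS: Record<string, string> = {
  "===": " strict equals ",
  "=>": " arrow ",
  "{": " open brace ",
  "}": " close brace ",
};

// Replace each mapped symbol with its spoken word.
// Longer tokens are matched first so "===" wins over "=>".
function expandSymbols(text: string): string {
  const tokens = Object.keys(SYMBOL_WORDS).sort((a, b) => b.length - a.length);
  let result = text;
  for (const token of tokens) {
    result = result.split(token).join(SYMBOL_WORDS[token]);
  }
  // Collapse the extra padding introduced by the replacements.
  return result.replace(/\s+/g, " ").trim();
}
```

The expanded string is then what gets handed to the Azure speech synthesizer for playback.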
Challenges we ran into
Challenges that I ran into included:
- Designing how the feature should work
- Using contribution points to get user input (the API key) for use within the code
- Parsing the document to get the line number
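The line-number challenge reduces to mapping a character offset in the document text to a line index. The VS Code API exposes this as `document.positionAt(offset).line`, but the underlying computation can be sketched as a small pure function (the function name is illustrative):

```typescript
// Count newline characters before `offset` to find the zero-based
// line number of that position, mirroring what
// vscode.TextDocument.positionAt(offset).line computes.
function lineNumberAt(text: string, offset: number): number {
  let line = 0;
  const end = Math.min(offset, text.length);
  for (let i = 0; i < end; i++) {
    if (text[i] === "\n") line++;
  }
  return line;
}
```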
Accomplishments that we're proud of
I am proud that I was able to launch the MVP on the VS Code Marketplace, and that it has two downloads aside from my own.
What we learned
- How to work with contribution points in VS Code
What's next for Screenreader
Some goals for future development that I have include:
- Allowing the user to adjust the key commands
- Dynamically grabbing the line number in the editor to be read aloud
- Adjusting the talkback speed
- Implementing a toggle to read the document from the cursor location without running a command