Our full case study, including our user research and ideation process, can be read here: https://medium.com/@annakambhampaty/pocket-democracy-empowering-voters-using-the-google-cloud-vision-api-ibm-watson-and-revspeech-61268791fcd3
Inspiration
"If I don't know either candidate in the race, I'd go by picking the Democrat over the Republican, then women over men, then names that sound like they come from like some kind of racial minority or something, then from there we're just straight up guessing."
Over 30% of voters fail to complete their ballots every year. Political scientists attribute this to an absence of information, which produces the SAT effect: if you don't know the answer, skip the question. Researchers have also found that candidates listed first on the ballot can receive up to 5% more votes. When voters don't have the information they need, candidates' names, perceived ethnicity, and gender can sway their decisions. The quote above, from a voter we interviewed, vividly illustrates this.
There are several issues surrounding voter engagement, voter registration, and disenfranchisement policy, but for the scope of this project we focus on one specific interaction: the registered voter filling out their ballot. We ask: how might we help a voter make a more informed, more personal decision at the booth?
Through user interviews on the day of the hackathon, combined with past observations of this issue, we ideated and arrived at the following solution, which draws on a wide range of technologies.
What it does
Our solution is an augmented reality experience that lets a user scan their ballot with their smartphone. Our app, Pocket Democracy, picks up the names on the ballot and allows the user to tap them to reveal relevant information, popular news links, and a sentiment analysis of articles about each candidate. Pocket Democracy also supports speech-to-text and text-to-speech processing.
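As a rough illustration of the voice interaction, here is a minimal sketch that submits a short recording to Rev's speech-to-text service and turns the transcript into a candidate query. It assumes Rev's `rev_ai` Python client, a placeholder access token, and a hypothetical audio file name; our actual app wires this into the web front end.

```python
import time

from rev_ai import apiclient
from rev_ai.models import JobStatus

# Placeholder token; a real app would load this from configuration.
client = apiclient.RevAiAPIClient("YOUR_REV_ACCESS_TOKEN")

# Submit a recording of the user saying a candidate's name
# ("spoken_candidate_name.wav" is a hypothetical file for this sketch).
job = client.submit_job_local_file("spoken_candidate_name.wav")

# Poll until Rev has finished transcribing the clip.
while client.get_job_details(job.id).status == JobStatus.IN_PROGRESS:
    time.sleep(2)

# Join the recognized words into a plain-text candidate query.
transcript = client.get_transcript_json(job.id)
candidate_name = " ".join(
    element["value"]
    for monologue in transcript["monologues"]
    for element in monologue["elements"]
    if element["type"] == "text"
)
print(candidate_name)
```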
How we built it
We developed a web app that first processes an image of the ballot using Google Cloud Vision's Optical Character Recognition (OCR) API to detect and extract the text. We grab the candidate names in text form and pass them as queries to IBM Watson's Discovery News API, which we use to search recent news and gather relevant information on each candidate: stances on prominent policy issues, relevant news links, and a sentiment analysis of news articles. For accessibility, we also utilize RevSpeech's API to implement a speech-to-text feature: a user can say a name into the app, and it will pull up the same relevant information on the candidate. Thanks to Google Cloud's text-to-speech, the app can also read the gathered information back to the user. Beyond accessibility, this means the user does not need to be in front of a ballot and can get informed ahead of time as well.
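For the curious, below is a minimal sketch of that pipeline in Python, assuming recent versions of the google-cloud-vision, ibm-watson, and google-cloud-texttospeech client libraries. The credentials, file paths, and candidate name are placeholders, and the step that parses candidate names out of the raw OCR text is omitted; our actual app layers the AR overlay and web front end on top of this.

```python
from google.cloud import texttospeech, vision
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import DiscoveryV1


def ocr_ballot(image_path):
    """Extract the full text of a ballot photo with Google Cloud Vision OCR."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    annotations = response.text_annotations
    # The first annotation holds the entire detected text block.
    return annotations[0].description if annotations else ""


def lookup_candidate(name):
    """Query Watson's pre-enriched Discovery News collection for a candidate."""
    authenticator = IAMAuthenticator("YOUR_WATSON_API_KEY")  # placeholder
    discovery = DiscoveryV1(version="2019-04-30", authenticator=authenticator)
    discovery.set_service_url("YOUR_WATSON_SERVICE_URL")  # placeholder
    result = discovery.query(
        environment_id="system",   # Watson's hosted Discovery News environment
        collection_id="news-en",   # English-language news collection
        natural_language_query=name,
        count=5,
    ).get_result()
    # Each hit carries a URL plus a document-level sentiment label.
    return [
        (doc.get("url"),
         doc.get("enriched_text", {}).get("sentiment", {})
            .get("document", {}).get("label"))
        for doc in result.get("results", [])
    ]


def speak(text, out_path="summary.mp3"):
    """Read a text summary back to the user with Google Cloud Text-to-Speech."""
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)


if __name__ == "__main__":
    ballot_text = ocr_ballot("ballot.jpg")          # placeholder image path
    for url, sentiment in lookup_candidate("Jane Doe"):  # hypothetical name
        print(url, sentiment)
    speak("Here is what we found about Jane Doe.")
```

Chaining the three managed services this way keeps the app itself thin: OCR, news enrichment, and audio output each stay behind a single function call.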
What's next for Pocket Democracy
Before moving forward with our project, extensive research in information ethics and user testing for accessibility and usability will be required. We can then iterate on our design in an informed manner to make it as accessible and equitable as possible. Algorithmic and news-source bias must also be addressed. We'd like to implement a personalization feature as well as a simple text input, and we need to connect the varying components of our project more smoothly. A reminder of our original mission: to help voters easily make informed decisions for themselves!
Built With
- google-cloud
- google-cloud-text-to-speech
- google-cloud-vision
- ibm-watson
- ibm-watson-news-discovery
- rev
- revspeech