Twitter is an important part of public discourse. As it becomes increasingly image-heavy, people who are blind are left out of the conversation. That's where Alt-Bot comes in: it fills the gaps in image content by using an image recognition API to add text descriptions.
The inspiration for the reply format is a tweet by @stevefaulkner, in which he adds alt text to a retweet.
How it works
Mention @alt_text_bot in a message or retweet that has an image attached, and you'll get a reply with a text description.
Alt-Bot uses the Twitter and CloudSight APIs to retrieve images from tweets and generate text descriptions of them.
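The reply step can be sketched roughly as follows. This is a minimal illustration, not the bot's actual source: the helper names are hypothetical, and the real CloudSight and Twitter calls (which need API credentials) are stubbed out. The one concrete constraint it shows is fitting the mention plus the description into Twitter's character limit.

```python
MAX_TWEET_LEN = 140  # Twitter's tweet length limit at the time


def compose_reply(author_handle: str, description: str) -> str:
    """Build the reply tweet: mention the original author, then the
    image description, truncated to fit the tweet length limit."""
    prefix = f"@{author_handle} "
    budget = MAX_TWEET_LEN - len(prefix)
    if len(description) > budget:
        # Trim long descriptions, leaving room for an ellipsis.
        description = description[: budget - 1] + "\u2026"
    return prefix + description


def describe_image(image_url: str) -> str:
    """Placeholder for the CloudSight step. The real bot would submit the
    image to CloudSight's recognition API and wait for the description."""
    raise NotImplementedError("requires CloudSight API credentials")
```

In use, the bot would pull the image URL from the mentioning tweet via the Twitter API, pass it through `describe_image`, and post the result of `compose_reply` as a reply to that tweet.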
Challenges I ran into
Some people asked why anyone cares about meme photos or what I ate for dinner. My response was, "We don't get to decide who cares." We need to make sure everyone can take part in the entire conversation, even the parts that seem trivial. We each decide for ourselves what's important.
Accomplishments that I'm proud of
The application has captured the attention and imagination of people on Twitter. It had produced almost 100 descriptions by 2:30pm on Sunday, along with a handful of notable retweets. I'm very happy with the quality of the descriptions too.
What I learned
I learned not to expect people to do extra work for the same content. The people I talked to wanted image descriptions inline in their feed rather than having to post or retweet to request them. I've already started work on a Twitter client to make this possible.
What's next for @alt_text_bot
Alt-Bot needs to be "push" rather than "pull": instead of users requesting descriptions, descriptions should come to them. That means I'll be building a Twitter client that adds descriptions inline as part of the feed.