In the Turing Test, the 'interrogator' who makes the final decision as to whether A or B is human is themselves human. But what if we scramble things up a bit: what happens if the 'interrogator' is instead an 'observer' who does not talk to A or B, but only watches A and B speak to each other? And what if this 'observer' is not a human, but an AI trained to predict whether A is human or a chatbot based on what it says? In that case, what counts as 'human', and what counts as a 'chatbot'?

How it works

You have a conversation with a chatbot via the Pandorabots API. At the end of your conversation, you hit submit, and a string of your messages is sent to a Python program, which runs it through a decision tree and returns a prediction as to whether you're the human or the chatbot. A string of your messages and a string of the chatbot's messages are then both sent, labeled, to Firebase, where they serve as training data for the next round of conversations.
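The classification step might be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes scikit-learn with bag-of-words features, and the training transcripts, labels, and function names are all hypothetical.

```python
# Hypothetical sketch: classify a speaker's concatenated messages as
# "human" or "chatbot" with a decision tree (assumes scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# Labeled transcripts from earlier rounds (made-up examples; in the
# project these would be pulled from Firebase).
transcripts = [
    "hey how was your weekend? mine was pretty hectic honestly",
    "I am sorry, I do not understand. Can you rephrase that?",
]
labels = ["human", "chatbot"]

# Turn each transcript into word-count features for the tree.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(transcripts)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, labels)

def predict_speaker(messages):
    """Join one speaker's messages into a single string and predict
    whether the speaker looks human or bot."""
    joined = " ".join(messages)
    return clf.predict(vectorizer.transform([joined]))[0]
```

After each round, the newly labeled message strings would be appended to the training set, so the observer's notion of 'human' keeps shifting with the data.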

Challenges we ran into

In the beginning, we had trouble finding a chatbot API that worked.
