Inspiration

We've been alarmed by the rise of high-quality deepfakes, and found it remarkably easy to generate one of our own using open-source AI tools. Here's a sample: https://twitter.com/cz_binance/status/1189391243920297985

As a recent immigrant to the United States, I have experienced the benefits of freedom of speech, but also the harms that come with filter bubbles and online echo chambers. If social media is already broken today, then deepfakes, once democratized and accessible to all, will truly make the internet a "hall of mirrors" where reality is so badly obfuscated that our collective sense-making capacities are significantly damaged.

I brought a team together to tackle this problem, and we agreed on building a gamified HOT or NOT for deepfakes that makes it easy, fun, and accessible for the average person to begin questioning what they're seeing. The data collected on these deepfakes is then aggregated to understand how people perceive and interact with these fakes.

Because deepfake technology is already widely available, we arrived at a key product insight: "inoculation is the best defense" against deepfakes, as opposed to outright banning, regulating, or censoring them, which is technically impossible and runs up against American free-speech law. Safely exposing people to fake videos, while training the AI models behind our APIs, allows us to determine whether the content is being accurately perceived and what remedial actions governments need to take.

What it does

Some taglines we are playing around with: a) We are the HOT or NOT for deepfakes. b) We are the Mechanical Turk for deepfake detection. c) We sharpen your senses to detect deepfakes. d) We are the Lumosity for deepfake detection.

V1: The skill-based mobile game pushes deepfake videos to users in several categories (Entertainment, Politics, and more coming soon!) and engages them with a countdown timer to label each video correctly, sharpening their senses in the process! If they label the video correctly, they earn points (integrations down the line could mean distributing STX tokens, satoshis, or DAI instead of points). Users then receive a score at the end of the game and are incentivized to share their deepfake detection scores.
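A rough sketch of the point logic is below; the base value and time-based bonus are placeholder assumptions for illustration, not the app's actual numbers:

```python
# Hypothetical sketch of the round-scoring logic described above.
# BASE_POINTS and the time bonus are illustrative assumptions,
# not the values used in the shipping app.

BASE_POINTS = 100

def score_round(user_says_fake: bool, video_is_fake: bool,
                seconds_left: float, countdown: float = 15.0) -> int:
    """Award points only for a correct label; faster answers earn a bonus."""
    if user_says_fake != video_is_fake:
        return 0
    time_bonus = int(BASE_POINTS * (seconds_left / countdown))
    return BASE_POINTS + time_bonus

# Example: correctly flagging a fake with 9 of 15 seconds remaining.
print(score_round(True, True, seconds_left=9))  # -> 160
```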

How we built it

We started by sketching the app design and flow on paper, going through a number of iterations to simplify the UI and UX. We then built our prototype in Figma and developed the user flow. Once there was alignment on design and UI, development started on both the iOS and Android versions, using Swift and Java respectively.

Database management: Both versions use Firebase for the backend, as it allows us to scale effortlessly, and with Blockstack authentication we move closer to decentralizing one aspect of our ecosystem.
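As a rough illustration (not our actual schema), here is how a labeling event could be stored in Firebase keyed by a player's Blockstack ID; in the real apps this write happens from the Swift/Java clients after Blockstack sign-in:

```python
# Hypothetical server-side sketch using the firebase-admin Python SDK.
# The "labels" path, field names, and credentials are placeholders;
# the Blockstack username is assumed to come from client-side sign-in.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # placeholder key file
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://example-project.firebaseio.com"  # placeholder URL
})

def record_label(blockstack_id: str, video_id: str,
                 labeled_fake: bool, points: int) -> None:
    """Append one labeling event under the player's Blockstack ID."""
    # Firebase keys may not contain '.', so the Blockstack ID is sanitized.
    key = blockstack_id.replace(".", "_")
    ref = db.reference(f"labels/{key}")
    ref.push({
        "video_id": video_id,
        "labeled_fake": labeled_fake,
        "points": points,
    })

record_label("player.id.blockstack", "vid_001", labeled_fake=True, points=160)
```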

Other tools used include: a) DigitalOcean for hosting our Blockstack webapp and website (Alethea.ai), b) AWS and Python-based generative models for deepfake creation (we are creating our own deepfakes to test the app as well).

Challenges I ran into

Technical challenges: The Blockstack app generation was not working due to a Node issue, so we cloned the repo to work around it. There were other technical challenges within our app, including optimizing data consumption, managing the flow of data between Firebase and Blockstack, and ensuring the app's point logic was consistent.

Non-technical challenges: Managing both Android and iOS releases was biting off more than we could chew for this hackathon; we should have committed to one app first instead of trying to launch on both platforms. We're launching on Android first because of its potential in emerging markets, where crypto incentives can make a significant impact on user acquisition as per Dani's

Accomplishments that I'm proud of

1) Blockstack integration successfully completed for both the iOS and Android apps.
2) Two apps ready for launch and App Mining in January (as advised by Xan, we will submit them for App Mining in January).
3) Positive feedback on the app from the Blockstack team, and getting to know them in person in NYC. I was nervous about attending the hackathon in person, but in the end I enjoyed meeting everyone there!

What I learned

Three learnings: 1) User feedback: Customers love the game and its engagement, but they need accessibility tools and clearer language to indicate whether something is fake or real. We need to design future iterations with color-blind users in mind (a red/green scheme doesn't work) and put our users first.

2) Tech: Exposure to the "Can't be Evil" suite of software has allowed my team to think through how we want to build future software, and the philosophy behind why it's important to do so.

3) Ecosystem: The Blockstack ecosystem is something we would like to be a part of. Xan and the team were extremely helpful in engaging us and answering our numerous questions.

What's next for Alethea AI

In the next version of the app, we plan on building (i) a reconciliation mechanism and (ii) a reputation registry for deepfake videos. The reconciliation mechanism will allow us to scrape deepfakes from the web daily and push them into the app to be tagged, labelled, and categorized in as short a time span as possible. This creates immediate reconciliation and provides enterprises with the "mechanical turk" equivalent of deepfake detection. The value of this API can be tremendous, as companies can discover:

1) When a deepfake is detected by users (immediate detection, or within a certain timeframe, e.g. 24 hours)
2) Whether a real video is being falsely identified as a deepfake (a false positive, creating a "liar's dividend")
3) Edge cases for inoculation campaigns (prior to an election, or deliberate fakes put out to test the public's responsiveness and ability to detect them)
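As a very rough sketch of the aggregation behind points 1) and 2) above (the field names are assumptions, not our actual schema):

```python
# Illustrative aggregation over crowdsourced labels for a single video.
# Field names ("labeled_fake", "seconds_to_answer") are assumptions.
from statistics import median

def summarize(labels: list, video_is_fake: bool) -> dict:
    """labels: one dict per player, e.g. {"labeled_fake": True, "seconds_to_answer": 4.2}."""
    said_fake = [l for l in labels if l["labeled_fake"]]
    fake_rate = len(said_fake) / len(labels) if labels else 0.0
    if video_is_fake:
        # 1) How quickly does the crowd spot a genuine deepfake?
        latency = median(l["seconds_to_answer"] for l in said_fake) if said_fake else None
        return {"crowd_fake_rate": fake_rate, "median_seconds_to_detect": latency}
    # 2) How often is a real video wrongly flagged (feeding the "liar's dividend")?
    return {"false_positive_rate": fake_rate}

crowd = [
    {"labeled_fake": True, "seconds_to_answer": 4.2},
    {"labeled_fake": True, "seconds_to_answer": 7.9},
    {"labeled_fake": False, "seconds_to_answer": 11.0},
]
print(summarize(crowd, video_is_fake=True))
# -> {'crowd_fake_rate': 0.66..., 'median_seconds_to_detect': 6.05}
```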

The reputation registry will look at the crowdsourced data and leverage AI tools to further enhance the confidence scoring of a particular deepfake. For example, if 90% of people who viewed a video determine it to be a fake, can our AI algorithms confirm or deny that rating, boosting the percentage or negating it further? Having an AI layer that further validates the ranking of humans and learns from crowdsourced input will be valuable in finally timestamping and labeling deepfakes as fakes.
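A minimal sketch of that idea, assuming a simple weighted blend between the crowd's verdict and the model's score (the weights and inputs are placeholders, not a finalized algorithm):

```python
# Hypothetical confidence scoring: blend the crowd's verdict with an AI
# model's score. The 0.6/0.4 weights are illustrative assumptions.

def blended_confidence(crowd_fake_rate: float, model_fake_score: float,
                       crowd_weight: float = 0.6) -> float:
    """Return a 0-1 confidence that the video is fake.

    crowd_fake_rate: fraction of viewers who labeled the video fake (e.g. 0.9).
    model_fake_score: the AI model's probability that the video is fake.
    """
    return crowd_weight * crowd_fake_rate + (1 - crowd_weight) * model_fake_score

# Crowd says 90% fake; the model agrees -> confidence is boosted.
print(blended_confidence(0.90, 0.97))  # -> 0.928
# Crowd says 90% fake; the model disagrees -> confidence is pulled down.
print(blended_confidence(0.90, 0.20))  # -> 0.62
```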

Selling to enterprises/pilots: Our next step is that an organization like CNN, Twitter, Facebook, or the NYTimes could call our API to know whether a video is being perceived as fake by people. There are considerations here around abuse and gaming of the system, and we will progressively iterate to build our game and AI engine as robustly as possible.
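For illustration only, an enterprise query might look like the following; the endpoint, auth scheme, and response fields are placeholders, not a live API:

```python
# Hypothetical client-side query; the URL, headers, and fields are placeholders.
import requests

resp = requests.get(
    "https://api.alethea.ai/v1/videos/vid_001/perception",  # placeholder endpoint
    headers={"Authorization": "Bearer <API_KEY>"},
    timeout=10,
)
data = resp.json()
# e.g. {"crowd_fake_rate": 0.9, "blended_confidence": 0.93, "label_count": 412}
if data.get("blended_confidence", 0) > 0.8:
    print("Crowd + AI consensus: likely a deepfake")
```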

We intend to use any money won or raised to fund our growth and downloads of our mobile app, and to contribute to the Blockstack ecosystem. We are also raising a seed round, which will help bootstrap our growth in the coming year.
