Inspiration

Last year I was reading a very long article published on the internet by someone I thought knew what they were talking about. Unfortunately, I realized the article was filled with logical fallacies. I spent an entire day rereading it and picking out every one of them.

Then I decided to write a long, careful prompt to find logical and factual errors for me. In a minute or two, it found not only every fallacy I had identified but also one I had missed. I was very impressed.

We live in a busy world, and sometimes it's nice to quickly separate disinformation from useful content. That's what Diogenes is for. Diogenes was famous for "looking for an honest man." That appeals to me.

What it does

Visit any web page, select the Diogenes plugin, and click "Analyze." It will attempt to break the article down into its factual and logical content. It reports:

  1. A summary of the argument
  2. A breakdown of logical statements and an assessment of their accuracy
  3. A breakdown of logical fallacies, if any
  4. An assessment of the argument's strength (good, average, or weak)
  5. A list of counter-arguments, if any
  6. Suggestions for improving the argument (great for students!)
  7. Links to reliable sources, where available
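To make the report concrete, here is a hedged sketch of what one analysis might look like as a plain JavaScript object. Every field name and value below is an illustrative assumption, not the plugin's actual schema:

```javascript
// Illustrative shape of a Diogenes analysis report. All field names
// and values are assumptions for the sake of example, not the
// plugin's real output format.
const exampleReport = {
  summary: "The author argues that X causes Y.",
  statements: [
    { text: "X rose 40% over the last decade", accuracy: "supported" },
  ],
  fallacies: [
    { name: "ad hominem", excerpt: "Only a fool would claim..." },
  ],
  strength: "average", // "good" | "average" | "weak"
  counterArguments: ["Y may instead be driven by Z."],
  suggestions: ["Cite a primary source for the 40% figure."],
  sources: ["https://example.org/primary-data"],
};

console.log(exampleReport.summary);
```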

How we built it

It's a standard Chrome plugin. Though I've written them before, I used generative AI to write all of the code for this one in just a few minutes (I had already written the prompt, so that part was easy).

Challenges we ran into

Nothing serious. I needed to have the AI create a more modern interface, and at one point it suggested I load a Markdown-parsing JavaScript library from a CDN, but that violates Chrome's extension security rules (extensions can't load remote code). I had to copy the library locally instead (it has an MIT license).
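Bundling the library locally looks roughly like this in the extension's manifest. This is only a sketch: the file names (and `marked.min.js` standing in for whatever Markdown parser was actually used) are assumptions.

```json
{
  "manifest_version": 3,
  "name": "Diogenes",
  "version": "1.0",
  "action": { "default_popup": "popup.html" },
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["lib/marked.min.js", "content.js"]
    }
  ]
}
```

Because the parser ships inside the extension package under `lib/`, no remote code is fetched at runtime, which keeps the extension within Manifest V3's content-security rules.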

I have noticed, in using variations of this prompt with other LLMs, that the LLM would often do a good job with opinion pieces but would get confused when someone was writing an opinion piece about another opinion piece. It often could not tell which opinions were newly offered and which were the old ones being dissected. Gemini seems to handle this layer of indirection better.

For added context, here are some comments in the (current) version of the code regarding issues I'm trying to work around.

// We ignore the "appeal to authority" fallacy unless the authority is
// obviously not an expert in the field. This is because the LLM would
// often cite this as a logical flaw whenever the appeal wasn't also
// backed up by citations. It's not a great way to handle this, and
// we're still getting strong arguments listed as "weak" due to it.
//
// We also ignore numeric information if the numbers are close to the actual
// numbers. This is because we were getting "factual errors" where "almost two
// degrees" did not match "1.8 degrees" in the source.

Accomplishments that we're proud of

So far, it gives good results. I've also tried it on some of my essays on the web and found, to my dismay, that one of my favorite essays is a "weak argument." Upon reflection, I couldn't argue with it. Darn it.

What we learned

Using the Google Gemini API is dead simple!
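For a sense of how simple, here is a hedged sketch of building a Gemini REST request from an extension script. The model name and prompt text are placeholders (the real Diogenes prompt is much longer), and `buildGeminiRequest` is a hypothetical helper, not the plugin's actual code:

```javascript
// Sketch of a Gemini generateContent request. The model name and the
// short prompt are placeholder assumptions; the real analysis prompt
// is far more detailed.
function buildGeminiRequest(articleText, apiKey, model = "gemini-1.5-flash") {
  return {
    url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{ parts: [{ text: "Analyze this argument:\n" + articleText }] }],
      }),
    },
  };
}

// Usage (requires a real API key):
// const { url, options } = buildGeminiRequest(pageText, KEY);
// const data = await (await fetch(url, options)).json();
// const analysis = data.candidates[0].content.parts[0].text;
```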

What's next for Diogenes

Current work is focused on prompt refinement. This is a genuinely difficult task, but the goal is to give readers enough context to judge whether the information in front of them is good or bad.
