I wanted to read and write about the two user biases (automation bias and authority bias) that I consider most relevant to current and future use of language models as question-answering tools.


Automation bias and authority bias are particularly relevant to the public's use of AI language models, since users may place too much trust in generated answers without doing their own due diligence. Given that language models cannot currently produce correct answers 100% of the time, this is a real problem. There may, however, be ways to mitigate it.

Built With

  • words