Reading AI-generated communications is enslavement by the robots

Ben Zotto
4 min read · Aug 5, 2024

Humans have finite time on this earth. It’s an insult and ethical crime to cause someone to contemplate AI content under false pretenses.

Google’s Gemini AI ad for the Olympics—in which a father “helps” his small daughter produce a fan letter to her favorite Olympian by simply asking the LLM to write one—has come in for well-deserved derision, most delightfully from the Washington Post’s Alexandra Petri. Her angle on the problem—that AI does not know your thoughts and feelings, and thus its transmissions are vacuous and generic—is the right way to think about what writing means for humans. This point has been made elsewhere as well, and it remains a philosophical challenge for AI-driven writing.

The flip side of this gets less attention, but in my view it is the bigger problem: What about the reader of an AI-generated letter? The Olympian hurdler in this case—but it’s also true of your boss who gets a cooked-up status letter; your podcast listener who sits through an auto-generated script; the aspiring romantic sifting through a computer-drafted dating profile; or your teacher who marks up a blast of LLM homework with a red pen.

If the audience of your work knew that the material they were spending their time reading, or hearing, or watching, came not from you but from…
