Marcus on AI

LLMs aren’t very bright. Why are so many people fooled?

How Generative AI preys on human cognitive vulnerability

Gary Marcus
Apr 14, 2024

For a long time I have been baffled about why so many people take LLMs so seriously, when it is obvious, from the boneheaded errors, hallucinations, and utter inability to fact-check their own work, that the cognitive capacities of LLMs are severely limited.

Overnight, two friends, Magda Borowik and Kobi Leins, sent me a fantastic analysis (from last year, as it happens) of this very phenomenon, by a software engineer/website designer/lucid author named Baldur Bjarnason, called “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con.”

The basic structure of Bjarnason’s essay is to compare the methodology of mentalists (a.k.a. mind readers: entertainers and con artists who pretend to read minds) with what happens in LLMs. The essence is distilled in a pair of summary tables.

The first, as he puts it, lays out “the psychic’s con”.

The second summary, deeply parallel with the first, outlines the process by which an unthinking LLM (without meaning to do so) cons many of its users. He calls it the LLMentalist Effect.

Both damning and correct. People who believe in LLMs have played themselves.

The whole thing is worth reading, not just for the elaboration of the above but also for the further parallels with the tech industry as a whole. Check it out.

Gary Marcus thinks the whole world would be different if so many people weren’t so easily taken by the parlor tricks of generative AI.

