Fool me not, generative AI
Renowned psychologists and NYT bestselling authors Daniel Simons and Christopher Chabris contemplate the intersection of AI and gullibility
A few months ago, my mother was the target of a sophisticated scheme that’s becoming all too common. Scammers replicated her grandson’s voice and led her to believe he was in jail and urgently needed financial assistance.
“Please, don’t tell my dad,” he implored her.
Thankfully, her moral compass prevailed, stopping her from simply forking over the money. She called my brother, who was able to quickly reach my nephew. He was in class, safe and sound.
Presumably, a form of generative AI—voice cloning—enabled the work of these fraudsters. With the rapid advancement and availability of such generative AI capabilities, concerns about their potential for facilitating deception and scams have become a significant point of discussion.
Much like the way AI can enhance what our brains already do well—for example, performing mathematical calculations we could do ourselves, only much faster—it can also exploit our cognitive biases. A new book by renowned psychologists and New York Times best-selling authors Daniel Simons and Christopher Chabris, “Nobody’s Fool: Why We Get Taken and What We Can Do About It,” sheds light on our brains’ vulnerabilities when they confront mechanisms of deception.
I had the privilege of interviewing these authors about the interplay between cognitive biases and AI as well as about the ways they’re experimenting with using generative AI in their work. An edited version follows.
The concept of truth bias is one that you talk a lot about in your book. It seems to have a clear link to some of the issues being discussed around generative AI. Can you explain this concept and how it both helps and hurts us?
Christopher: Truth bias is the idea that we initially accept what others are telling us as true and only with effort can we determine that it’s false or remain uncertain. This is a default tendency rather than an absolute rule—if you go into an interaction assuming that you’re dealing with a liar, you might start with a smaller truth bias, or be ready to question whatever they say.
But in most interactions in our daily lives, we default to accepting what we’re told as true. And for the most part, that’s a good thing. If we instantly second-guessed every comment or statement, we’d be unable to communicate or interact with anyone.
Daniel: That default hurts us, though, when someone is deliberately trying to deceive us. By providing false information, especially under time pressure or when we’re distracted, we might accept what they’re saying as true without having a chance to countermand it in our own minds. This necessary tendency to trust others is a precursor for most acts of deception.
I imagine this is what makes the “hallucinations” (incorrect info presented as definitively correct) of ChatGPT and other generative AI apps particularly problematic. What is our brain doing when we read AI outputs and how can we counteract our tendencies?
Daniel: When we encounter a claim made with certainty and clarity, especially one that tells a good story, we tend not to question it enough. That’s true whether the story comes from an AI or from another person.
ChatGPT and other generative AIs make it easier to create plausible-sounding but wrong claims—“often wrong but never in doubt” could be their motto.
Christopher: It takes effort to think about each claim and question whether it’s really true, especially when it sounds plausible, and the fact that something comes from “artificial intelligence” may lead people to question it less than they would if the same words came from a human being.
Unfortunately, there is no easy fix for this. And although it is unlikely that any one of us will be victimized by a grand con, we all can be fooled by compelling, false claims, which language models can create at scale.
Are there other ways that AI could exploit our truth bias?
Daniel: Voice fakes are likely to be a big problem, both because of truth bias and because they will allow scammers to tap into other habits and hooks that fool us. For example, a scam call from someone who sounds like our friend could make the scam that much more believable and will lower our defenses further (because it uses the hook of familiarity).
What are some ways you think we could use AI or other technology to help us counteract this tendency?
Christopher: AI might actually help counter scams that work by emailing or calling huge numbers of people with the expectation that only a few will respond. Those scams are profitable because the people who are most likely to reply are the ones most likely to hand over money. But they become cost-ineffective if the scammers have to deal with large numbers of people who prolong the interactions but will never pay up. Some people apply this logic to “bait” the scammers, keeping them occupied interacting with skeptics rather than marks. The idea is to make it unprofitable for them. AI might help automate that process by deploying a chatbot to interact with the scammers for as long as possible.
Of course, the scammers might use chatbots of their own to help reel in the fish. I look forward to the day when a scam chatbot spends its time interacting with a scam-baiter chatbot.
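To make the scam-baiting idea concrete, here is a minimal sketch of what such an automated baiter loop might look like, assuming the OpenAI Python SDK; the model name is illustrative, and get_next_scam_message() and send_reply() are hypothetical stand-ins for however messages actually arrive and go out.

```python
# A minimal sketch of an automated scam-baiter: an LLM plays a polite, confused,
# never-quite-committing "mark" so the scammer wastes time on a bot instead of a
# real victim. Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are role-playing a friendly, easily confused person replying to a "
    "suspected scammer. Stay interested and keep asking clarifying questions, "
    "but never share personal or financial details and never agree to pay."
)

def get_next_scam_message():
    """Hypothetical stand-in for wherever scam messages arrive (email, SMS, etc.)."""
    return input("Scammer says (blank to stop): ").strip() or None

def send_reply(text):
    """Hypothetical stand-in for the outbound channel."""
    print("Baiter replies:", text)

def bait_reply(conversation):
    """Generate the next time-wasting reply given the message history so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "system", "content": PERSONA}] + conversation,
    )
    return response.choices[0].message.content

conversation = []
while True:
    incoming = get_next_scam_message()
    if incoming is None:
        break  # scammer gave up: the time-wasting worked
    conversation.append({"role": "user", "content": incoming})
    reply = bait_reply(conversation)
    conversation.append({"role": "assistant", "content": reply})
    send_reply(reply)
```

The persona prompt does the real work here: the bot stays engaged without ever handing over the details a scammer needs, which is exactly what makes the interaction unprofitable.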
Are there other thought pathways AI can exploit to make us easier to scam, or other ways it could be used to “team up” with a thinking mode that makes us less likely to be scammed?
Christopher: Actually, AI has been used for years to enable cheating in games. You can think of computer chess software as a primitive form of generative AI: it generates very strong chess moves in any chess position, allowing its human users to masquerade as much better players than they really are. If a player can do this without detection, they can earn prize money, tournament victories, and higher rankings.
There are a lot of subtleties to how cheaters use computers to get unfair advantages, and the specifics vary from game to game, but they typically exploit truth bias.
They also rely on our habit of focusing narrowly on what’s in front of us (my opponent is playing good moves!) rather than what’s harder to find (the suspicious timing pattern of my opponent’s play or the consistent correlation of my opponent’s moves with computer-suggested moves). And they count on our discomfort with asking people tough questions like, “Did you really play those moves by yourself, or did you have help?”
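As a rough illustration of that correlation signal, here is a minimal sketch of how one might measure how often a player’s moves match an engine’s first choice over a single game. It assumes the python-chess library and a local Stockfish binary; the file path, search depth, and usage are illustrative, and a real anti-cheating system would combine this with timing, rating, and many-game statistics.

```python
# A rough sketch of the "engine correlation" signal: how often do one player's
# moves match a chess engine's first choice? Assumes the python-chess library
# and a local Stockfish binary; paths, depth, and usage are illustrative only.
import chess
import chess.engine
import chess.pgn

def engine_match_rate(pgn_path, player_is_white, engine_path="stockfish", depth=12):
    """Fraction of the given player's moves that equal the engine's top choice."""
    with open(pgn_path) as handle:
        game = chess.pgn.read_game(handle)

    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    board = game.board()
    matches = total = 0

    for move in game.mainline_moves():
        player_to_move = chess.WHITE if player_is_white else chess.BLACK
        if board.turn == player_to_move:
            suggestion = engine.play(board, chess.engine.Limit(depth=depth)).move
            matches += int(move == suggestion)
            total += 1
        board.push(move)

    engine.quit()
    return matches / total if total else 0.0

# Example: engine_match_rate("suspect_game.pgn", player_is_white=True)
# A consistently high rate across many games is suspicious, though real
# detection systems also weigh move timing, rating history, and difficulty.
```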
Have you used ChatGPT or other generative AI apps yet? If so, for what purposes, and what are your initial impressions of the technology?
Daniel: I have tested ChatGPT on some of my teaching materials and exams. It performed remarkably well on conceptual statistics questions I use in my introductory statistics classes. Because of that, flexible take-home exams will no longer be an option. In other ways, these generative AI apps will become useful tools. Just as calculators changed math instruction, we will need to teach students how to use tools like ChatGPT accurately and effectively.
We’ve also played with using ChatGPT for programming tasks, and it does make some tasks easier and faster. The time savings come from eliminating the need to look up coding syntax, and it sometimes instantly suggests solutions that would have taken a while to figure out.
That said, it takes a lot of effort, revised prompts, and additional searching to debug the code. And there’s a danger that people will accept AI-generated code and instructions that appear to solve a problem but don’t really do it.
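That danger is easy to illustrate. Below is a hypothetical Python example of the kind of plausible-looking suggestion an AI might produce, along with the sort of quick check against a trusted reference that catches the problem before the code gets accepted; the function and test data are invented for illustration.

```python
import statistics

# Hypothetical AI-suggested median function: it reads plausibly and passes a
# casual spot check, which is exactly why it might be accepted without scrutiny.
def ai_median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]   # subtly wrong for even-length inputs

# A few comparisons against a trusted reference expose the bug immediately.
for sample in ([3, 1, 2], [1, 2, 3, 4], [10, 2]):
    ours = ai_median(sample)
    reference = statistics.median(sample)
    status = "OK" if ours == reference else "MISMATCH"
    print(f"{sample}: ai_median={ours}, statistics.median={reference} -> {status}")
```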
Overall are you more excited or concerned about recent AI advances and their potential effects on society?
Christopher: I’m excited by any new technology like this—the advances over the past year have been remarkable, and these advances might someday be refined enough to automate a lot of otherwise onerous tasks. But I am concerned that they will amplify misinformation and disinformation in ways and at a scale that are hard to anticipate and might be impossible to overcome.
Daniel: I agree. It’s fair to say we’re also both concerned that they are already having a dramatic impact on our education system, one that might have a lasting impact on the skills that students acquire by the time they finish their education.
“Nobody’s Fool: Why We Get Taken and What We Can Do About It,” by Daniel Simons and Christopher Chabris, is a book about how deception works, from life-changing frauds like Ponzi schemes to everyday tricks like false advertising and fake news. It’s available at most major online retailers.