How AI Mistakes Teach Us About Ourselves
Computers and humans alike rely on biases and training data
We usually praise artificial intelligence for its wins: beating humans at chess, translating languages, recognizing faces. But what about its often bizarre mistakes? A self-driving car mistakes the moon for a traffic light. A machine translation turns “The spirit is willing but the flesh is weak” into “The vodka is good but the meat is rotten.” These failures are more revealing than the successes.
Why does this happen? AI works by spotting patterns, and its errors expose which patterns it actually learned. A model might label a dog in a photograph a wolf because every wolf picture in its training set had a snowy background; it learned to detect snow, not wolves. This is shortcut logic: usually reliable, sometimes ridiculous.
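To make the wolf-and-snow shortcut concrete, here is a minimal sketch in Python. The data, the feature names, and the use of scikit-learn are all invented for illustration, not drawn from any real system: a classifier is trained on examples where snow perfectly co-occurs with wolves, and it learns to lean on the background rather than the animal.

```python
# Illustrative sketch of "shortcut learning" with invented data.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# 1 = wolf, 0 = dog.
is_wolf = rng.integers(0, 2, n)

# A genuine but noisy animal cue (say, build and posture).
animal_cue = is_wolf + rng.normal(0.0, 1.0, n)

# The spurious cue: in this training set, every wolf photo
# has snow and every dog photo does not.
snow = is_wolf.astype(float)

X_train = np.column_stack([animal_cue, snow])
model = LogisticRegression().fit(X_train, is_wolf)
print("learned weights [animal_cue, snow]:", model.coef_[0])

# A dog photographed in snow: weak animal cue, snowy background.
dog_in_snow = np.array([[-0.5, 1.0]])
print("prediction (1 = wolf):", model.predict(dog_in_snow)[0])
```

Because snow separates the two classes perfectly in training, the classifier weights it far more heavily than the animal cue, and the dog photographed in snow comes back labeled “wolf.” The error is a direct readout of the pattern the model learned.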
Humans do the same. We rely on mental shortcuts, or heuristics, to navigate the world. Most of the time they work, but sometimes they mislead.
These shortcuts include confirmation bias, availability bias, and anchoring bias, and our minds draw on their own “training data”: past experience, culture, upbringing, and education.
Visual illusions arise when these shortcuts and background influences misfire: the perception they produce is confident but false.
For AI and humans alike, errors are not random. They reveal the structure of the thinking that produced them.
Perfect reasoning does not exist. Both humans and machines trade some accuracy for efficiency. The same shortcuts that trip us up also let us decide quickly and make sense of complexity without freezing in indecision.
Studying AI failures sheds light on our own. A self-driving car misreading a shadow mirrors our out-of-proportion fear of rare dangers. A language model confidently hallucinating facts echoes our tendency to fill gaps in memory or knowledge with plausible inventions.
These sometimes comical and seemingly irrational mistakes are windows into how minds, artificial and human, work.