AI Snake Oil

What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

Arvind Narayanan, Sayash Kapoor


Brief summary

AI Snake Oil argues that the most important question about AI is not whether it seems intelligent, but what specific task a system performs and how much trust it deserves. It explains how to distinguish genuinely useful tools from systems that make dangerously unreliable claims about human behavior.

Who it's for

This book is for anyone who wants to understand the real-world capabilities and limitations of AI beyond the marketing hype.


What AI Can and Cannot Do

Artificial intelligence has become a catchall term for very different kinds of systems. Treating all of them as if they were the same creates confusion from the start. A tool that writes text, a system that recommends a route, and software that claims to judge a person’s future are not doing the same kind of work, and they should not be trusted in the same way.

One useful distinction separates generative AI from predictive AI. Generative systems create text, images, audio, or code by learning patterns from huge amounts of data. Predictive systems are sold as tools for forecasting human behavior, such as who will be a good employee, who might commit a crime, or which student will struggle in school. Real progress has been strongest in generative AI, while many of the boldest claims about predictive AI fall apart under scrutiny.

Generative AI can be genuinely useful. It can help draft documents, summarize material, suggest code, and support many kinds of knowledge work. But it does not understand the world in a human sense. It predicts plausible continuations from patterns in its training data, which is why it can sound confident while fabricating facts, citations, or accusations.
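To make the mechanism concrete, here is a minimal, hypothetical sketch (not from the book): a toy next-word predictor that only counts which word follows which in a tiny made-up corpus, then samples continuations. Real systems are vastly larger and use far more context, but the principle is the same, and it shows why fluent output carries no notion of truth.

```python
import random
from collections import defaultdict

# Tiny made-up corpus; purely illustrative.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count word-to-next-word transitions observed in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a plausible-sounding sequence, one word at a time."""
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        # Duplicates in the list weight the choice by observed frequency.
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the"))
# The output looks fluent, but the model only knows which words
# co-occurred in training text; it has no concept of what is true.
```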

Predictive AI creates deeper problems because it is often asked to answer questions that are much harder than they appear. Human lives are shaped by luck, changing circumstances, hidden information, and social systems that do not sit still long enough to be modeled cleanly. When a company claims it can measure honesty from a face, infer personality from a short video, or rank people by future worth, it is often wrapping guesswork in technical language.

The consequences are serious because these systems are often deployed in settings where people cannot easily challenge them. Insurance companies, courts, schools, employers, and hospitals may rely on automated scores that appear objective but rest on weak assumptions. Even tools that perform well on narrow technical tasks can become dangerous when they are plugged into institutions that affect liberty, health, income, or reputation.
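As a toy illustration of that point (an assumed setup, not an example from the book): when an outcome is driven mostly by luck and factors a model never observes, the model can still assign each person a precise-looking score, even though its predictions are no better than chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))    # ten "behavioral" features, all noise
y = rng.integers(0, 2, size=2000)  # outcome unrelated to the features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Each person receives a score quoted to three decimal places...
scores = model.predict_proba(X_test)[:, 1]
print(f"Sample risk score: {scores[0]:.3f}")

# ...but held-out accuracy hovers near 0.5, i.e. a coin flip.
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```

The point of the sketch is that the precision of a score says nothing about its validity; an institution reading only the number would never see the coin flip underneath.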

A clearer public conversation starts with plain language. Some AI systems work well for limited tasks like spam filtering, navigation, transcription, or generating drafts. Skepticism becomes essential when software claims it can peer into the future of a human being and make life-altering decisions on that basis.


About the author

Arvind Narayanan

Arvind Narayanan is a computer science professor at Princeton University, where he directs the Center for Information Technology Policy. His research focuses on the societal impact of digital technology, including AI, privacy, and cryptocurrencies. His notable contributions include demonstrating the fundamental limits of data de-identification, leading the Princeton Web Transparency and Accountability Project, and showing how machine learning can reflect cultural stereotypes.
