AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference
Book Authors: Arvind Narayanan & Sayash Kapoor
The disconnect between what AI actually does and what people think it does has never been wider. Startups pitch revolutionary AI that will solve every problem, investors amplify those claims, and media coverage often misunderstands the underlying technology. The result is a cycle of hype followed by inevitable disappointment, even when the technology itself is genuinely useful.
AI Snake Oil by Arvind Narayanan and Sayash Kapoor is a clear-eyed guide to separating AI hype from AI reality. Written by two Princeton computer scientists, the book doesn't dismiss AI—the authors use it themselves and acknowledge its genuine capabilities. But they argue that unrealistic expectations distract us from focusing on AI's real strengths and addressing its real problems.
The book divides AI into three categories: tasks where AI excels (like language translation and image recognition), tasks where AI fails but is sold as working (like predicting criminal recidivism or hiring outcomes), and tasks where AI works all too well but causes harm (like deepfakes and automated misinformation). Narayanan and Kapoor argue that we've become obsessed with science fiction scenarios—killer robots, sentient AI, the Terminator—while ignoring the immediate harms happening today. AI systems are already making consequential decisions about who gets hired, who gets arrested, and who receives medical treatment. These systems often don't work as advertised and frequently encode existing biases, but the media focuses on whether AI will become conscious rather than whether it's fair.
The authors make a compelling case that Artificial General Intelligence (AGI) is much further away than the hype suggests. Large Language Models (LLMs), despite their impressive capabilities, are fundamentally limited by their architecture. They're prediction engines trained on patterns in text, not reasoning machines that understand the world. The book walks through the mathematical and architectural reasons why scaling up LLMs won't lead to AGI—at least not without fundamental breakthroughs we don't currently have. This matters because the "AGI is coming soon" narrative shapes how we regulate AI, how we invest in AI companies, and how we think about AI risk.
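To make the "prediction engine" point concrete, here is a deliberately tiny sketch (my illustration, not an example from the book): a bigram model that "predicts" the next word purely from co-occurrence counts in its training text. Scaled-up LLMs are vastly more sophisticated, but the core objective is the same kind of pattern completion, with no built-in model of the world.

```python
from collections import Counter, defaultdict

# Toy training text (hypothetical); a real model trains on trillions of tokens.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" -- the only word that ever followed "sat"
print(predict_next("on"))   # "the" -- likewise learned purely from counts
```

The model outputs fluent-looking continuations without "knowing" what a cat or a mat is, which is the authors' architectural point about why scale alone may not yield AGI.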
The GPT-5 release is a perfect example of the hype problem. When it launched, the media narrative was overwhelmingly negative: commentators called it a "disappointment" and questioned whether AI progress had stalled. But the technology itself is remarkable, a genuinely useful tool that helps millions of people work more effectively every day. The problem wasn't GPT-5; it was that people expected HAL 9000 or the Star Trek computer. When a technology is overhyped as magical, the inevitable result is a letdown, even when the actual product is excellent. This cycle of hype and disappointment makes it harder to have honest conversations about what AI can and should do.
The book also tackles practical problems that get less attention than they deserve. Narayanan and Kapoor dissect AI systems used in criminal justice, showing how "predictive policing" tools often just encode historical biases in arrest data. They examine hiring algorithms that claim to identify top talent but mostly filter out qualified candidates based on spurious correlations. They explore medical AI that performs well in research papers but fails in real clinical settings because the training data doesn't match real-world conditions. These aren't hypothetical future problems—they're happening now, and the hype around AGI distracts from fixing them.
Why I Recommend This Book
The book doesn't pick sides in the AI wars. It's not Team Yudkowsky (the AI safety researcher who believes AI poses existential risk) or Team Andreessen (the VC who advocates for accelerating AI development without restraint). Instead, it charts a practical middle path: AI is a genuinely transformative technology when applied correctly, but it requires clear thinking, honest evaluation, and appropriate safeguards. This balanced perspective is refreshing in a debate that often feels like choosing between "AI will kill us all" and "AI will save us all."
For anyone working with AI—whether you're building products, investing in companies, or just trying to understand the technology shaping our world—this book provides essential AI literacy. It teaches you to ask the right questions: What is this AI system actually doing? What data was it trained on? What happens when it fails? Is this a problem AI is good at solving, or are we forcing a square peg into a round hole? These questions matter more than speculation about when we'll achieve artificial general intelligence.
The future of AI is genuinely exciting, but only if we're honest about what we're building. This book helps cut through the snake oil and focus on reality. In a world where everyone has an opinion about AI but few understand how it actually works, Narayanan and Kapoor provide the clarity we desperately need.