How Harmful Algorithms Take Control
Cathy O'Neil began with a deep faith in mathematics. As a child she loved patterns and logic, and later she became a math professor before moving into high finance. The 2008 financial crisis shattered her confidence that math, on its own, made systems fairer. She saw formulas being used not to reveal truth, but to scale up reckless decisions that damaged millions of lives.
After the crash, the same style of data-driven decision-making spread far beyond Wall Street. Algorithms started grading teachers, screening job applicants, setting insurance prices, sorting students, and guiding police patrols. These systems were sold as objective because they relied on numbers rather than personal judgment. Yet they often absorbed the assumptions, blind spots, and incentives of the people who built them.
O'Neil calls the worst of these systems Weapons of Math Destruction, or WMDs. They share three features: they are opaque, they operate at scale, and they damage the people they score while offering little chance for appeal or correction. Their authority rests on complexity, which makes it easy for institutions to hide behind the claim that the math is too advanced to question.
A striking example appeared in Washington, D.C., where a teacher evaluation system called IMPACT helped determine who would be fired. Sarah Wysocki, a well-regarded fifth-grade teacher with strong support from parents and her principal, lost her job after the algorithm gave her a low score. The system leaned heavily on student test results, even though children's scores are influenced by many forces outside a teacher's control, including poverty, stress, and instability at home.
The danger grows when a flawed model has no meaningful feedback loop. If good teachers are fired, the system rarely checks whether it made a mistake. It simply treats the outcome as proof that the decision was correct. The same pattern appears when employers reject people because of low credit scores or personality tests. Once shut out, those people often fall further behind, and their worsening situation is then treated as confirmation that the model had judged them properly in the first place.
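The self-reinforcing dynamic described above can be made concrete with a toy simulation. This is a hedged sketch, not anything from O'Neil's book: the threshold, the starting score, and the per-rejection penalty are all invented for illustration. It shows how a screening model with no corrective feedback can drive the very number it judges downward, so that each new rejection looks like confirmation of the last.

```python
# Toy model of a pernicious feedback loop. All numbers are
# illustrative assumptions, not data from the book.

def screen(score, threshold=600):
    """Employer's model: reject any applicant below the threshold."""
    return score >= threshold

def simulate(initial_score, rounds=5):
    """Each rejection worsens the applicant's finances, which in
    turn lowers the score the model judges them by next time.
    Returns a list of (score, hired) pairs, one per round."""
    score = initial_score
    history = []
    for _ in range(rounds):
        hired = screen(score)
        history.append((score, hired))
        if not hired:
            # Missed income leads to missed payments, which
            # drags the score further down. The model never
            # learns whether its original judgment was wrong.
            score -= 40
    return history

print(simulate(590))
```

An applicant who starts just below the cutoff is rejected, slips further with each round, and by the end appears far less qualified than when the process began. The missing ingredient is any check against ground truth: the model's outputs feed its own inputs, so its errors compound instead of being corrected.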