How Social Media Prioritizes Profit Over Safety
When journalist Max Fisher visited Facebook’s headquarters in 2018, he found a sprawling, high-tech campus filled with free restaurants and original art. Behind this facade of success, however, lay a growing global crisis. Fisher carried with him more than 1,400 pages of leaked internal documents revealing the secret rules by which the company governed the political speech of billions of people. While the public often blamed bad actors for social media’s harms, the evidence suggested that the technology itself was the primary driver of extremism and real-world violence.
The internal documents came from an employee named Jacob, who worked for an outsourcing firm reviewing content for Facebook and Instagram. Jacob and his team noticed a disturbing pattern: the more hateful and conspiratorial a post was, the more the platform seemed to promote it. He raised alarms about how these invisible rules were failing to stop incitement—even contributing to genocide in Myanmar—but headquarters ignored his warnings. He eventually leaked the files to Fisher, hoping to bypass the corporate bureaucracy and reach CEO Mark Zuckerberg directly.
During his investigation, Fisher interviewed several high-ranking executives who were experts in human rights and counterterrorism. These professionals could answer complex policy questions with nuance, yet they seemed to have a blind spot regarding the platform’s fundamental design. They viewed social media as a neutral tool that merely reduced the friction of communication. They compared online harms to traditional societal problems, failing to acknowledge that their own algorithms were actively shaping user behavior by rewarding divisiveness to keep people staring at their screens.
This disconnect became clear when Fisher alerted Facebook employees to a recurring, violent rumor about child kidnappings that was sparking lynch mobs in Indonesia. The employees showed little curiosity, suggesting that perhaps an independent researcher might look into it someday. Meanwhile, the company’s recommendation engines were actively promoting fringe conspiracy movements because they generated high engagement. Even when Facebook’s own internal researchers warned that the algorithms exploited the human brain’s attraction to conflict, executives shelved the findings to protect the business model. Independent audits later confirmed what whistleblowers had long claimed: the platforms were driving people into echo chambers of extremism and undermining democracy.