The Chaos Machine by Max Fisher is a journalistic tour de force, comprehensive in its coverage of the perils of social media. And there are many. Even if you are aware of “the algorithm” or have watched The Social Dilemma, this book is required reading. And if you still use social media, you may find yourself planning an exit from the platforms whose algorithms have sponsored misinformation campaigns across the world, violence against refugees in Europe, and ethnic cleansing in Southeast Asia. I deleted most of my social media profiles only recently, and this book has led me to question how I use the internet writ large.
This book presents eye-opening revelations about the delusional depths of naive tech-bros in the face of algorithmic radicalization and platform-supported genocide. Journalistically, Fisher extensively covers scenes of violence, from Europe to Southeast Asia, working to determine social media’s culpability and to pull apart correlation and causation as best as possible. The chapters on Facebook’s hand in the ethnic cleansing campaigns in Myanmar (per the United Nations investigation) and in Sri Lanka are incredibly difficult to read, in part due to Meta’s complete lack of response to government officials pleading for the company to censor the calls for violence on its platform.
What’s more curious and maddening and damning, however, is how recipients of Facebook-spread violence continue to use the platform even after their experiences. In seeking to understand how these platforms hook us, Fisher applies his journalistic style to a tremendous body of interdisciplinary research. A distorted picture emerges of an infinitely iterative process of platform-user interaction guided by a single goal: expand forever. That means raw user counts, but it also means capturing as much attention from each user as possible. From this profit-driven center, Fisher presents research from fields such as evolutionary psychology and moral philosophy to explain how algorithms achieve this goal. The answers, surely unsurprising in 2025, revolve around manipulating human outrage, fear, and groupthink. The problem with social media is social media, full stop.
Well, maybe.
When interviewed on The Rich Roll Podcast, Fisher suggests near the end of the conversation that taming the algorithm isn’t necessarily the same as removing social media from our lives. Across his work he discusses the early days of social media, platforms like Myspace and even early Facebook, which were not fueled by algorithmic recommendation engines. And given Meta’s stubborn refusal to release its internal research about this chaos machine, one potential solution is to simply turn the damn thing off. Any other robust solution requires corporate transparency, yet Fisher’s book demonstrates that these companies will resort to deception to avoid public accountability.
This book has sent me toward other work on understanding “The Algorithm,” especially regarding corporate-bro complicity. Frances Haugen, known as the “Facebook Whistleblower,” gave a fantastic interview with 60 Minutes about her decision to release internal Facebook research. She expands on this research in her book, on The Rich Roll Podcast,¹ and in her testimony before the Senate Commerce Committee. Also, a new memoir, Careless People, by former Facebook executive Sarah Wynn-Williams reportedly exposes how the bad behavior reaches all the way to the top, and Meta is doing all it can to stop the book’s promotion.
As a final note: I am simply stunned by the risks of social media. I think many people sense that something about these platforms is not good; I thought I was educated on these harms before reading The Chaos Machine. But I was incredibly wrong. While it’s true that regulating these issues is technically complex, it is also true that the corporate deception and the dangers at play are comparable to the cigarette industry of the 20th century: internal research documenting the harm, executives aware of the risk, and coordinated efforts to deceive the public about their products’ harms. These companies know they are spreading misinformation, spawning political extremism, and tearing apart the fabric of our social lives.
In such a muddy environment, there is one thing we have control over. It is limited and does not feel like enough, and yet it is uniquely ours: choice. How will you, the individual, the user, the product, respond?
¹ I had only listened to this podcast once before researching these authors.