
Brian Christian

The Alignment Problem: Machine Learning and Human Values

Nonfiction | Book | Adult | Published in 2020


Summary and Study Guide


In The Alignment Problem: Machine Learning and Human Values, Brian Christian tackles the increasingly urgent issues that artificial intelligence (AI) poses to human ethics, public policy, and society at large. Published in 2020, the book is a work of nonfiction focusing on technology, ethics, and computer science. Christian’s educational background combines degrees in computer science, philosophy, and poetry from Brown University and the University of Washington. This interdisciplinary background informs his exploration of the complex topics related to AI and allows him to deliver a comprehensive account of the main discussions in the field to a wider public.

The Alignment Problem presents a nuanced discussion of how machine learning algorithms and neural network models can sometimes diverge from human ethical standards, and what this means for future developments in AI. As the author notes, the book is the result of “nearly a hundred formal interviews and many hundreds of informal conversations” with researchers and specialists in the fields related to the book’s themes (13). Christian’s narrative also provides a critical examination of the principles guiding AI development and the potential consequences of misalignment between machine-generated decisions and human values. The main themes the book develops are the Ethical Implications of AI Use, the Intersection of Human and Machine Learning, and Interdisciplinary Approaches to AI Development and Implementation. The Alignment Problem was well received by critics, with positive reviews published in The New York Times and The Wall Street Journal. It also received the National Academies Communication Award for Science Communication in 2022.

This guide references the paperback Atlantic Books 2021 edition.


The Prologue opens with a scene in 1935 in which 12-year-old Walter Pitts escapes bullies in a Detroit library. There, he discovers and corrects errors in a logic book, leading to a correspondence with its author, Bertrand Russell. After declining an early doctoral invitation because of his age, Pitts later meets Jerry Lettvin and neurologist Warren McCulloch. Their collaboration results in a seminal paper on neural networks that lays the groundwork for neural network theory, despite its initially limited impact.

The Introduction discusses the 2013 Google project word2vec, an unsupervised learning system that converted words into numerical vectors in order to detect linguistic patterns. This technology was applied across Google’s services, including translation and search. However, in 2015, Microsoft researchers discovered that word2vec encoded biases, such as gender stereotypes in professional contexts. Meanwhile, the US criminal justice system increasingly used algorithms like Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) for bail and parole decisions. COMPAS faced scrutiny for its lack of transparency and potential racial bias, as revealed by a 2016 ProPublica investigation in Florida.
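
The core idea behind word2vec can be illustrated with a minimal sketch. The vocabulary and 3-dimensional vectors below are invented for demonstration only (real embeddings have hundreds of dimensions learned from text), but the vector arithmetic mirrors the famous king − man + woman ≈ queen analogies the book describes:

```python
import math

# Toy, hand-made 3-dimensional "embeddings" for illustration only;
# real word2vec vectors are learned from large text corpora.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.0, 0.5, 0.0],
}

def cosine(u, v):
    # Cosine similarity: how closely two vectors point in the same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def analogy(a, b, c):
    """Return the word whose vector is closest to vec(a) - vec(b) + vec(c)."""
    target = [x - y + z for x, y, z in
              zip(embeddings[a], embeddings[b], embeddings[c])]
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("king", "man", "woman"))  # "queen" in this toy space
```

The same geometry is what allowed researchers to surface bias: stereotyped analogies fall out of the learned vectors just as readily as legitimate ones.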

Chapter 1 of Christian’s book discusses the historical development and ethical implications of AI models like the perceptron and AlexNet. Introduced in 1958 by Frank Rosenblatt, the perceptron was a pioneering neural network that learned from errors and contributed foundational concepts to machine learning, despite facing criticism for its limitations in complex pattern recognition. This criticism led to a temporary stall in neural network research until 2012, when Alex Krizhevsky and his team at the University of Toronto made significant advances in image recognition with AlexNet, highlighting the potential of deep neural networks. However, the success of these technologies also exposed underlying biases in AI systems, as illustrated by incidents like Google Photos’s mislabeling of Black individuals.
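
Rosenblatt’s learning rule is simple enough to sketch in a few lines: the weights are nudged whenever the prediction is wrong. This toy example (not from the book) trains on logical OR, which is linearly separable and therefore learnable; XOR is not, which is exactly the limitation critics seized on:

```python
# Minimal sketch of Rosenblatt's perceptron learning rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = label - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * error * x[0]     # learn from the error
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Logical OR: linearly separable, so the perceptron converges.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # [0, 1, 1, 1]
```

Deep networks like AlexNet stack many such error-driven units with nonlinearities, which is what lets them capture the complex patterns a single perceptron cannot.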

Chapter 2 begins with historical background, highlighting the work of Ernest Burgess, who in 1927 introduced predictive models to make parole decisions more objective and less reliant on human judgment. By 1951, Illinois advocated the use of statistical models for consistent and fair parole decisions. Broader adoption was slow, however, with significant use by 2000 and COMPAS becoming standard by 2011. Ethical concerns, especially regarding racial bias, surfaced with these models, and investigations by journalists like Julia Angwin revealed racial disparities in their predictions. Prompted by these discussions, researchers like Cynthia Dwork and Jon Kleinberg analyzed and formally defined “fairness,” uncovering mathematical obstacles to achieving equitable outcomes across different racial groups.
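
The mathematical tension Kleinberg and colleagues identified can be shown with a small numeric sketch. The figures below are hypothetical, but the bookkeeping identity is standard: if a risk score has the same precision (PPV) and true positive rate for two groups whose underlying base rates differ, their false positive rates must differ:

```python
# Hypothetical numbers illustrating the fairness impossibility result:
# equal PPV and TPR across groups + unequal base rates => unequal FPR.
def false_positive_rate(base_rate, ppv, tpr):
    # Derived from confusion-matrix bookkeeping:
    # FPR = base_rate/(1-base_rate) * (1-PPV)/PPV * TPR
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

ppv, tpr = 0.7, 0.6   # identical accuracy properties for both groups
group_a = false_positive_rate(0.3, ppv, tpr)  # base rate 30%
group_b = false_positive_rate(0.5, ppv, tpr)  # base rate 50%
print(group_a, group_b)  # unequal despite equal PPV and TPR
```

This is the heart of the COMPAS dispute: Northpointe pointed to equal calibration across races, while ProPublica pointed to unequal false positive rates, and the arithmetic above shows both cannot hold at once when base rates differ.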

Chapter 3 focuses on a project from Rich Caruana’s graduate studies at Carnegie Mellon that aimed to predict pneumonia patients’ outcomes using neural networks. Although Caruana’s model outperformed others, its deployment was halted after simpler, easier-to-interpret models revealed misleading correlations, such as mistakenly identifying asthma patients as low-risk because they historically received better in-hospital care. This incident sparked a wider debate on the transparency and safety of neural networks in healthcare, emphasizing the risks of their opaque decision-making processes.

Chapter 4 discusses Alan Turing’s idea of mimicking a child’s mind to develop AI and Arthur Samuel’s checkers-playing computer to illustrate early connections between psychology and computer science. Harry Klopf’s theory positioned organisms as hedonists, always seeking maximum pleasure—a concept that influenced both neuroscience and machine learning. Key experiments by James Olds and Peter Milner identified the brain’s reward centers and dopamine’s role in reward-seeking behavior, although later research showed the complexities of dopamine’s functions. Researchers further refined reinforcement learning models, tying biological processes to computational algorithms.
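The computational counterpart of the dopamine findings is the temporal-difference (TD) prediction error. This minimal sketch (a generic illustration, not code from the book) shows the key quantity: the gap between what was predicted and what actually happened, which shrinks toward zero as the prediction improves:

```python
# Minimal temporal-difference (TD) sketch of the reward-prediction-error
# idea linked to dopamine.
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    prediction_error = reward + gamma * next_value - value  # the "surprise"
    return value + alpha * prediction_error

# Repeatedly experiencing a reward of 1 after a cue drives the cue's
# predicted value toward 1, and the prediction error toward zero.
v_cue = 0.0
for _ in range(100):
    v_cue = td_update(v_cue, reward=1.0, next_value=0.0)
print(round(v_cue, 2))  # approaches 1.0
```

This mirrors the experimental result Christian describes: dopamine neurons fire for unexpected rewards, but fall silent once a reward is fully predicted.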

Chapter 5 investigates B.F. Skinner’s WWII experiments with pigeons trained to guide bombs, which led to his discovery of shaping—a method of instilling complex behaviors through successive rewards for simpler actions. This principle is crucial in both animal behavior studies and computational fields, helping agents learn from outcomes. However, sparse rewards pose a challenge: learning may require an enormous number of trials, a difficulty known as the problem of sparsity. To address it, researchers developed incremental techniques that reward simpler behaviors first, building progressively toward complex goals.
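The difference between a sparse reward and a shaped one can be made concrete with a toy corridor environment (an invented example, not one from the book): the sparse signal pays only at the goal, while the shaped signal also pays a small bonus for each step of progress, echoing Skinner’s successive approximations:

```python
# Toy 10-cell corridor: positions 0..9, goal at cell 9.
GOAL = 9

def sparse_reward(new_pos):
    # Pays only when the goal itself is reached.
    return 1.0 if new_pos == GOAL else 0.0

def shaped_reward(pos, new_pos):
    # Adds a small bonus for getting closer to the goal,
    # giving the learner feedback on every step.
    progress = (GOAL - pos) - (GOAL - new_pos)
    return sparse_reward(new_pos) + 0.1 * progress

print(sparse_reward(5))       # 0.0: no guidance mid-corridor
print(shaped_reward(4, 5))    # 0.1: progress is rewarded immediately
```

An agent with only the sparse signal must stumble onto the goal by chance before it can learn anything, whereas the shaped signal guides it there step by step.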

Chapter 6 explores the development of the Arcade Learning Environment (ALE) by Marc Bellemare in 2008, a platform that used Atari games to challenge algorithms to learn from pixel input alone. DeepMind’s deep Q-network model (DQN) showed notable success on this platform, although it struggled with games offering sparse rewards, highlighting the importance of intrinsic motivation (such as curiosity) for learning.

The chapter also examines the psychology of curiosity, showing how intrinsic motivations like curiosity enhance machine learning, particularly in complex or sparsely rewarding environments.

Chapter 7 explores mimicry and learning in humans and AI. Christian starts by discussing the historical view of primates as adept imitators, a notion that recent studies have challenged, showing that spontaneous imitation among non-human primates is rare without human training, positioning humans as the primary imitators. However, imitation poses limitations, especially when learners do not fully grasp the underlying principles of their actions. This dilemma is paralleled in ethical discussions about AI development, where the challenge is to surpass mere imitation to achieve genuine innovation and understanding—a theme illustrated by advancements in machine learning models like AlphaGo and its successors.

Chapter 8 discusses inverse reinforcement learning (IRL), a concept Stuart Russell proposed in the 1990s that infers motivations from observed behaviors, aiming to align AI actions with human intentions more effectively. IRL has been applied to tasks such as driving and piloting, where AI learns from expert behavior without explicit instructions. Recent advances in IRL allow learning from indirect feedback, further improving machine alignment with human values.

Chapter 9 recounts the story of Stanislav Petrov who, in 1983, faced false missile attack warnings and chose not to retaliate, recognizing the early-warning satellite system’s error. This event illustrates the reliability issues in predictive systems, which can misinterpret random data with high confidence—a problem in AI known as the “open category problem.” Contemporary AI systems struggle with this, failing to recognize objects outside their predefined categories and often misclassifying them with high confidence. The chapter concludes by likening AI’s ethical debates to theological discussions—the ongoing struggle to define and manage moral actions within AI frameworks.
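
One structural reason classifiers fail on the open category problem is the softmax output layer: it always distributes 100% of probability mass over the known classes, so an input that belongs to none of them still receives a confident label. A small sketch with hypothetical numbers:

```python
import math

def softmax(logits):
    # Converts raw scores into probabilities that sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores over classes ["cat", "dog", "car"] for an
# input (say, a jellyfish) that matches none of the known classes.
logits = [2.0, 0.1, -1.0]
probs = softmax(logits)
print(max(probs))  # the model still reports a dominant class probability
```

Because there is no built-in way to answer “none of the above,” the system confidently labels the jellyfish a cat, much as Petrov’s warning system confidently reported missiles that were not there.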

In the Conclusion, Christian provides an overview of the book, highlighting key themes across chapters. He emphasizes the need for AI systems to accurately model human behavior, which is critical for ethical interaction and decision-making. Christian concludes with a discussion of Alan Turing’s ideas, illustrating the mutual learning process between humans and machines.