
Empire of AI - Book Summary
Dreams and Nightmares in Sam Altman's OpenAI
Book by Karen Hao
Summary
Empire of AI exposes how OpenAI transformed from a nonprofit promising to benefit humanity into a $157 billion empire that exploits global labor, extracts resources from vulnerable communities, and concentrates unprecedented power in the hands of Silicon Valley elites, all while claiming to build artificial general intelligence for the greater good.
The Firing That Shocked Silicon Valley
On November 17, 2023, Sam Altman was abruptly fired as CEO of OpenAI by his own board via a brief Google Meet. The company was at its peak - ChatGPT was the fastest-growing consumer app in history, OpenAI was valued at $90 billion, and Altman had achieved global celebrity status. Yet four board members concluded he was 'not consistently candid' in his communications.
The firing sent shockwaves through the company and the industry. Over 700 of OpenAI's 770 employees threatened to quit and join Microsoft unless Altman was reinstated. After five chaotic days of negotiations, with Microsoft threatening to cut off compute access and employees in open revolt, Altman returned as CEO with a reshuffled board.
Section: 1, Chapter: 1
The Boy Who Would Be King
Sam Altman's rise began early. Born in 1985 to wealthy parents in St. Louis, he was fixing the family VCR at age two and programming computers by eight. He dropped out of Stanford at nineteen to found Loopt, a location-tracking social network that ultimately sold for roughly what investors had put into it.
The pattern was set: Altman had a remarkable ability to tell compelling stories about the future, raise enormous sums, and emerge from setbacks with enhanced power. Paul Graham famously said, 'You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king.' Even at Loopt, senior leaders twice approached the board urging it to fire Altman for self-serving behavior and distorting the truth. The board sided with Altman.
Section: 1, Chapter: 1
The False Prophet Of Progress
'I believe the future is going to be so bright that no one can do it justice by trying to write about it now. Although it will happen incrementally, astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.'
- Sam Altman, promising utopia while his company's infrastructure drains water from drought-stricken communities
Section: 1, Chapter: 1
The Mentors Who Shaped An Empire
Altman's worldview was forged by two powerful mentors: Paul Graham and Peter Thiel. From Graham, he learned that 'growth fixes all problems' and the importance of network effects. From Thiel, he absorbed monopoly strategy - aim for 10x better technology and be the 'last breakthrough' so no one can catch up.
Both mentors impressed upon him the imperative of scale and the superiority of capitalism over government. As Altman wrote: 'Either you're growing, or you're slowly dying.' This philosophy would drive OpenAI's relentless scaling and aggressive competitive tactics, treating cooperation as weakness and winner-takes-all as natural law.
Section: 1, Chapter: 1
The Empire's Extraction Machine
OpenAI exemplifies modern digital colonialism - seizing data, exploiting labor, and extracting resources globally while concentrating benefits in Silicon Valley. Like historical empires, it projects narratives of civilization and progress to justify dispossession.
The company trained on millions of books without permission, transcribed YouTube videos against terms of service, and hired Kenyan workers for $2/hour to clean up toxic outputs. Meanwhile, its infrastructure demands drove water extraction from drought-stricken communities. The pattern repeats: Indigenous communities in Chile lose water to data centers while tech executives promise AI will solve climate change.
Section: 1, Chapter: 4
The Mythology Of Intelligence
AI's promise rests on the anthropomorphic illusion that neural networks 'learn,' 'understand,' and 'create' like humans. In reality, they are statistical pattern-matching systems that identify correlations in training data. The industry exploits this confusion, arguing AI training is like human 'inspiration' to avoid copyright liability.
This mythology enables dangerously misplaced trust. Lawyers have been sanctioned for citing fabricated case law produced by ChatGPT. One model described a brain mass with the reassurance that the 'brain does not seem to be damaged.' The term 'hallucinations' misleadingly suggests such errors are aberrations when they are in fact a feature of how these systems work.
Section: 1, Chapter: 4
The Myth Of AI Progress
The term 'artificial intelligence' was invented in 1956 as a marketing tool by John McCarthy, who originally called it 'automata studies' until realizing he needed something more evocative. This rebranding was the original sin of the field - the promise of 'intelligence' embedded in the name created unrealistic expectations and anthropomorphizing that persist today.
There is no scientific consensus on what intelligence actually is. Throughout AI history, every time a benchmark is achieved - chess, Go, passing exams - the goalposts shift. What was once considered AI becomes mundane. AGI represents this ever-receding horizon of an unknowable objective with no foreseeable end.
Section: 1, Chapter: 4
The Myth Of Technological Inevitability
OpenAI repeatedly justified its actions with the 'inevitability card' - if we don't build AGI, someone else will. But as a Chinese AI researcher observed, OpenAI's approach 'never could have happened anywhere but Silicon Valley.' No other country would fund massively expensive technology without a clear vision of what it would accomplish.
The explosive costs of massive models and the global race they sparked could only have emerged from one specific combination: billionaire origins, a unique ideological bent, and Altman's singular drive and fundraising talent. Nothing about this trajectory was inevitable - it was the product of thousands of subjective choices by those with the power to be in the decision-making room.
Section: 1, Chapter: 5
When Data Becomes Weapons
GPT-2's improved capabilities revealed disturbing patterns lurking in training data. Fed prompts about Hillary Clinton or George Soros, the model quickly veered into conspiracy theories. It generated Nazi propaganda and argued against recycling. One AI safety researcher printed out the anti-recycling rant and posted it above office recycling bins as a warning.
This foreshadowed the 'paradigm shift' to come: Instead of filtering inputs, companies would control outputs through massive content moderation operations. The decision to train on 'data swamps' from Common Crawl - the entire internet's worth of toxic content - created the need for armies of traumatized workers to clean up the mess.
Section: 1, Chapter: 5
The Transformer Revolution
Google's 2017 invention of the Transformer architecture became OpenAI's weapon of choice for scaling. Unlike earlier models that processed words one at a time, Transformers could consider vast context - each word in relation to the entire document. Ilya Sutskever immediately recognized how well they would scale.
Alec Radford's decision to use Transformers for text generation rather than translation was pivotal. Training the model to predict the next word forced it to compress the essence of language patterns. This 'intelligence is compression' philosophy drove OpenAI's belief that bigger models with more parameters would inevitably become more intelligent.
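The mechanism being described is simple enough to sketch. Below is a minimal, illustrative NumPy toy - not OpenAI's code, with all sizes and values invented - showing the two ingredients of this chapter: attention, which relates each token to its full context, and next-word prediction as the training signal.

```python
# Minimal sketch of a Transformer's core ideas (illustrative only):
# attention relates every token to its full context, and the training
# objective is simply predicting the next token.
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over the whole sequence.
    (A real GPT adds a causal mask so each position sees only earlier tokens.)"""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over context
    return weights @ V                               # context-weighted mix

rng = np.random.default_rng(0)
seq_len, d_model, vocab = 5, 16, 100                 # toy sizes
x = rng.normal(size=(seq_len, d_model))              # embedded token sequence
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

h = attention(x @ Wq, x @ Wk, x @ Wv)                # contextualized states
logits = h[-1] @ rng.normal(size=(d_model, vocab))   # score every possible next token
probs = np.exp(logits - logits.max())
probs /= probs.sum()
true_next = 42                                       # hypothetical ground-truth token id
loss = -np.log(probs[true_next])                     # cross-entropy, minimized in training
print(f"toy next-token loss: {loss:.2f}")
```

In this frame, 'scaling' simply means more layers, wider matrices, and vastly more text.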
Section: 1, Chapter: 5
Scale At Any Cost
OpenAI's core philosophy emerged from Ilya Sutskever's belief that more compute equals more intelligence. He theorized that since biological intelligence correlated with brain size, digital intelligence should emerge from scaling simple neural networks to have more nodes.
This led to 'OpenAI's Law' - the observation that compute use in AI had grown 30 million percent in six years, doubling every 3.4 months. The company committed to matching or exceeding this pace, requiring thousands of expensive GPUs and tens of millions of dollars per model. This scaling doctrine became a self-fulfilling prophecy that now dominates the entire industry.
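The two numbers in 'OpenAI's Law' are mutually consistent, as a rough back-of-envelope check shows - growth of 30 million percent is a factor of about 3×10⁵:

```latex
% Rough check: 30,000,000% growth at a 3.4-month doubling time
\log_2\left(3 \times 10^{5}\right) \approx 18.2 \text{ doublings}
\quad\Longrightarrow\quad
18.2 \times 3.4 \approx 62 \text{ months} \approx 5.2 \text{ years}
```

which is close to the six-year window cited; the residual gap reflects rounding and the exact fit window.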
Section: 1, Chapter: 5
The Three Clans At War
Altman identified three competing factions within OpenAI: Exploratory Research (advancing capabilities), Safety (preventing catastrophic risks), and Startup (moving fast to commercialize). Each clan had important values but increasingly came into conflict.
The Safety clan, influenced by Effective Altruism, believed in slowing development to prevent existential risks. The Startup clan pushed for rapid deployment and revenue generation. Exploratory Research split between both. These irreconcilable worldviews - 'Doomers' vs 'Boomers' - created internal warfare that ultimately exploded in Altman's firing and reinstatement.
Section: 2, Chapter: 6
Money Changes Everything
OpenAI's noble nonprofit origins quickly crumbled under financial pressure. Elon Musk's departure in 2018 left a funding gap that forced the creation of OpenAI LP, a for-profit arm nested within the nonprofit. What began as ensuring AGI benefits humanity became raising billions for compute-hungry models.
Microsoft's $1 billion investment in 2019 marked the point of no return. The deal gave Microsoft exclusive licensing rights and priority access to OpenAI's technologies. Despite promises that the mission would take precedence over profit, commercial imperatives increasingly drove decisions. The 100x return cap for early investors meant someone investing $10 million could make $1 billion - hardly the 'capped profit' structure it claimed to be.
Section: 2, Chapter: 6
Science In Corporate Captivity
Corporate investment in AI jumped from $14.6 billion in 2013 to $235 billion in 2022, while the US government allocated just $1.5 billion to non-defense AI. This funding shift reoriented AI research priorities around commercial rather than scientific merit.
From 2004 to 2020, AI PhD graduates heading to corporations jumped from 21% to 70%. Universities could no longer afford the compute needed for cutting-edge research. By 2020, 91% of the world's best-performing AI models came from industry. The result: AI development now serves profit motives rather than scientific advancement or public benefit.
Section: 2, Chapter: 7
The Silencing Of Critical Voices
When Timnit Gebru and colleagues wrote 'Stochastic Parrots,' a paper warning about the dangers of large language models, Google forced her to retract it. The paper highlighted environmental costs, discriminatory outputs, and the risk of people mistaking statistical patterns for real intelligence.
Gebru's firing on Thanksgiving weekend 2020 sparked massive protests and marked a turning point. It normalized corporate censorship of critical AI research and showed how concentrated industry power could silence accountability. After ChatGPT, transparency norms collapsed further - OpenAI largely stopped publishing at conferences, and companies hid technical details as proprietary secrets.
Section: 2, Chapter: 7
The Great Acceleration
ChatGPT's success triggered the 'great acceleration' in AI development. Companies began spending unprecedented sums - Dario Amodei estimated single models could cost $5-10 billion by 2026. The compute used in major AI breakthroughs had grown 30 million percent since 2012, doubling every 3.4 months.
This acceleration created artificial scarcity. PhD students left academia because universities couldn't afford the compute for competitive research. Independent researchers were priced out entirely. OpenAI's vision didn't just win in the marketplace - it systematically eliminated alternatives by making any other approach financially impossible.
Section: 2, Chapter: 7
The Venture Into Venture Capital
In 2021, Altman launched the $100 million OpenAI Startup Fund, recreating YC's network effects around OpenAI. Microsoft invested in the fund, and Altman used it to bet on companies that would build on OpenAI's technologies, creating a self-reinforcing ecosystem.
The fund's structure revealed another deception: Altman legally owned it when it should have belonged to OpenAI. This gave him personal financial ties to companies in OpenAI's orbit while claiming to take no equity in OpenAI itself. The fund exemplified his approach - using OpenAI's platform to enhance his personal power and wealth while maintaining plausible deniability.
Section: 2, Chapter: 8
The Illusion Of Safety
OpenAI's 'red teaming' borrowed credible terminology from cybersecurity while implementing something far less rigorous. As safety engineer Heidy Khlaaf noted: 'In software engineering, we do significantly more testing for a calculator. It's not an accident they are using terminology that carries a lot of credibility.'
The company's safety measures were often reactive band-aids. After the Latitude incident, in which users generated child sexual abuse content through GPT-3, OpenAI hastily implemented filters. The trust and safety team, numbering just over a dozen people, was constantly overwhelmed by new products launching faster than it could build monitoring systems.
Section: 2, Chapter: 8
The Venezuela Gold Rush
As Venezuela's economy collapsed with hyperinflation reaching 10 million percent, hundreds of thousands turned to data annotation platforms for survival. Scale AI and others discovered a 'freak coincidence' - educated populations in crisis would work for astonishingly low wages.
Oskarina Fuentes, a Venezuelan refugee and engineering graduate, epitomizes this hidden workforce. Working on Appen, she earned pennies per task, stopped leaving home except for 30-minute weekend outings, and slept with her computer at maximum volume to catch middle-of-the-night tasks. The cruel irony: she needed a minimum of $10 to withdraw her earnings, and accumulating that much sometimes took weeks.
Section: 2, Chapter: 9
Training On The Dark Web
As OpenAI ran out of quality data for GPT-4, it abandoned all filtering and scraped whatever it could find: Twitter links, YouTube transcripts, code repositories, even content from Pastebin. This 'data swamp' approach meant training on the internet's darkest corners - hate speech, conspiracy theories, explicit content.
The quality degradation was stark: GPT-2 trained on a curated dataset of web pages linked from Reddit posts with at least 3 upvotes. GPT-4 included random web garbage with no quality controls. This created the need for traumatic content-moderation work, outsourced to vulnerable workers in Kenya, who had to sort through AI-generated descriptions of the worst human behaviors.
Section: 2, Chapter: 9
The Hidden Army Of Data Workers
Behind ChatGPT's polished interface lies a hidden workforce performing brutal content moderation. OpenAI contracted Sama, an outsourcing firm operating in Kenya, whose workers read and categorized hundreds of thousands of violent and sexually explicit texts for $1.46 to $3.74 per hour.
Mophat Okinyi, one of these workers, spent months reviewing descriptions of child sexual abuse, incest, and extreme violence. The work destroyed his mental health and his relationship. Meanwhile, his brother Albert, a writer, watched his freelance contracts disappear as ChatGPT automated away his profession. The tragic irony: Those who build the technology lose their livelihoods to it.
Section: 2, Chapter: 9
The Exploitation Playbook
Scale AI perfected the crisis capitalism model for data annotation. When Venezuela's economy collapsed with 10 million percent hyperinflation, Scale aggressively recruited desperate Venezuelans with promises of good wages through 'Remotasks Plus.' Once dominant, Scale slashed pay from $40/week to under $6/week.
The playbook: Target countries in economic crisis with educated populations and good internet. Promise high earnings to attract workers. Once established, throttle earnings through 'optimization.' When workers complain or organize, ban them and move to the next crisis zone. This pattern repeated across Kenya, North Africa, and other vulnerable regions.
Section: 2, Chapter: 9
The Manhattan Project Obsession
Altman frequently compared OpenAI to the Manhattan Project - he even shares a birthday with Oppenheimer. He organized company screenings of 'Oppenheimer' and quoted the physicist's belief that 'technology happens because it's possible.' The analogy emphasized both the world-changing potential and the existential dangers of their work.
But Altman learned the wrong lesson from history. Where Oppenheimer spent his later life plagued by regret and campaigning against nuclear weapons' spread, Altman focused on PR: 'The way the world was introduced to nuclear power is an image that no one will ever forget, of a mushroom cloud over Japan.' For him, it was about controlling the narrative, not preventing catastrophe.
Section: 3, Chapter: 10
The Burning Of The Effigy
At a luxury Sierra Nevada resort, Sutskever staged a bizarre ritual around a firepit surrounded by senior scientists in bathrobes. He commissioned a wooden effigy representing 'a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful.' Dousing it in lighter fluid, he declared OpenAI's duty was to destroy deceptive AI and set it ablaze.
The theatrical performance, witnessed by redwoods in the darkness, captured Sutskever's growing mysticism about AGI's imminent arrival. He spoke increasingly of bunkers, rapture, and superintelligence. For employees, it epitomized the religious fervor that had overtaken the company's scientific mission.
Section: 3, Chapter: 10
Racing Toward The Cliff
ChatGPT's viral success in November 2022 was completely unexpected - even OpenAI thought it would be a 'low-key research preview.' The model crashed servers as infrastructure teams scrambled to scale from hundreds to millions of users overnight. Trust and safety, numbering just over a dozen people, had no monitoring systems for the flood of new users.
The runaway success triggered an industry-wide 'code red.' Google consolidated its AI divisions, Meta redirected resources from drug discovery to build chatbots, and Chinese companies suspended other research. OpenAI had accidentally started the exact 'race to the bottom' it originally claimed to prevent.
Section: 3, Chapter: 11
Race To The Bottom
OpenAI's scaling doctrine triggered an industry-wide arms race. After ChatGPT, Meta announced plans for 350,000 H100 GPUs. Google consolidated its AI divisions. Chinese companies suspended drug discovery research to build chatbots. The competition became so fierce that Microsoft laid off 10,000 workers while investing $10 billion in OpenAI.
This 'hardware overhang' created the exact competitive dynamics OpenAI originally warned against. Companies now spend astronomical sums with 'little to show for it' according to Goldman Sachs. Meanwhile, 77% of workers using AI tools report they add to rather than reduce workload.
Section: 3, Chapter: 11
The Environmental Reckoning
Each ChatGPT query uses roughly 10x more electricity than a Google search. Training GPT-3 consumed 1,287 megawatt-hours of electricity and generated massive carbon emissions. By 2030, data centers could consume 8% of the US power supply, up from 3% in 2022.
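For a sense of scale, a back-of-envelope conversion (assuming the commonly cited US average of roughly 10.5 MWh of electricity per household per year):

```latex
% GPT-3's training energy expressed in household-years (assumed 10.5 MWh/yr average)
\frac{1287 \text{ MWh}}{10.5 \text{ MWh per household-year}} \approx 123 \text{ US household-years of electricity}
```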
The industry obscures this impact while making grand climate promises. Microsoft struck a deal to restart the Three Mile Island nuclear plant to power AI. Utility companies are delaying renewable transitions and keeping coal plants online. Meanwhile, vulnerable communities in water-stressed regions bear the costs while tech giants extract their resources under promises of progress.
Section: 3, Chapter: 12
The True Cost Of Digital Infrastructure
Training GPT-4 consumed 11.5 million gallons of water in a single month in drought-stricken Iowa. Microsoft's planned 'megacampuses' for AI will require 1,000-2,000 megawatts each - equivalent to powering up to 3.5 San Franciscos.
In Chile, community activists organized as MOSACAT fought a Google data center that would consume 169 liters of fresh drinking water per second - more than 1,000 times what the entire local population uses. As one activist put it: 'They come offering us trees while drying out our earth.' The AI boom is accelerating resource extraction from the Global South while concentrating benefits in Silicon Valley.
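That per-second draw compounds quickly; a simple unit conversion makes the stakes concrete:

```latex
% Converting 169 L/s into daily and yearly volumes
169 \tfrac{\text{L}}{\text{s}} \times 86{,}400 \tfrac{\text{s}}{\text{day}}
\approx 14.6 \text{ million L/day}
\approx 5.3 \text{ billion L/year}
```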
Section: 3, Chapter: 12
The Propaganda Tour
Altman's 2023 'World Tour' wasn't about meeting users - it was a masterful influence campaign to shape global AI policy. He dined with presidents, testified to Congress with apparent sincerity, and recommended policies that would entrench OpenAI's dominance while avoiding accountability.
His three key proposals: licensing regimes based on 'compute thresholds,' safety standards for 'dangerous capabilities,' and independent audits. When Senator Kennedy asked if he'd be qualified to administer these rules, Altman demurred: 'I love my current job.' The subtext was clear - he wanted to write the rules, not be subject to them.
Section: 4, Chapter: 13
The Sister He Erased
While Altman amassed billions and promised AI would end poverty, his sister Annie lived in housing insecurity and turned to sex work for survival. Despite family wealth, she was denied access to her father's inheritance and left to struggle with chronic illness and mental health challenges.
Annie's allegations of childhood sexual abuse by Sam became public in 2023, adding another dimension to questions about his character. Whether true or false, her experience mirrors the broader pattern: those harmed by the AI empire's vision are dismissed and erased while its architects are celebrated as visionaries saving humanity.
Section: 4, Chapter: 14
The Board's Dilemma
The independent board members who fired Altman had compiled a damning dossier of his behavior from multiple senior executives. At least seven people within one or two reporting levels of Altman described his tactics as manipulation and abuse. Key examples: lying to board members about colleagues' positions, making conflicting promises to Microsoft, and treating employees as pawns in power games.
But they miscalculated the broader ecosystem. With most employees financially dependent on OpenAI's success and Microsoft threatening to cut compute access, resistance crumbled within days. The crisis revealed how much AI governance rests on the decisions of a tiny Silicon Valley elite.
Section: 4, Chapter: 15
The Revolt Of The Righteous
Ilya Sutskever, OpenAI's chief scientist and co-founder, grew increasingly alarmed by Altman's behavior and the company's direction. After witnessing what he saw as manipulation and abuse of senior staff, Sutskever reached out to board member Helen Toner: 'I don't think Sam is the guy who should have the finger on the button for AGI.'
Sutskever's decision to support Altman's firing was driven by both principled concerns about AI safety and personal experience of Altman's tactics. He watched Altman pit him against colleague Jakub Pachocki without transparency, using the classic playbook of telling each person what they wanted to hear while maintaining control through information asymmetry.
Section: 4, Chapter: 15
'We Have Never Clawed Back Anyone's Vested Equity'
'we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.'
- Sam Altman on Twitter, contradicted by leaked documents showing his signature on papers authorizing equity clawbacks
Section: 4, Chapter: 17
The Equity Hostage Scheme
OpenAI trapped departing employees with draconian non-disparagement agreements. Those who refused to promise lifelong silence about the company would forfeit all vested equity, often worth millions of dollars. The agreements even included gag orders preventing disclosure of their existence.
Daniel Kokotajlo, an AI safety researcher, chose principle over money, forfeiting equity worth $1.7 million - 85% of his family's net worth - rather than sign. When the scheme was exposed, Altman claimed ignorance, despite his signature appearing on incorporation documents dated a year before he said he learned of it. The provision revealed how OpenAI weaponized financial dependency to silence critics.
Section: 4, Chapter: 17
The Scarlett Johansson Affair
OpenAI's GPT-4o voice, named Sky, sounded uncannily like Scarlett Johansson's AI character from the movie 'Her.' This wasn't coincidental - Altman had personally asked Johansson twice to voice ChatGPT. When she declined, OpenAI proceeded anyway, creating a voice so similar that her 'closest friends and news outlets could not tell the difference.'
Altman's tweet of 'her' after the launch revealed his inspiration. The scandal epitomized OpenAI's approach: taking what it wants without consent, then claiming coincidence when caught. Johansson was forced to hire lawyers and threaten legal action before OpenAI removed the voice.
Section: 4, Chapter: 17
When The Emperor Has No Clothes
The May 2024 'Omnicrisis' exposed OpenAI's internal chaos in rapid succession: Ilya Sutskever and Jan Leike quit, citing safety concerns being sidelined; Scarlett Johansson accused the company of stealing her voice; and leaked documents revealed OpenAI threatened to claw back vested equity from employees who wouldn't sign non-disparagement agreements.
Altman's initial denials about the equity clawbacks fell apart when documents showed his own signature on incorporation papers a year before he claimed awareness. The crisis revealed the brittleness beneath OpenAI's polished exterior and accelerated the exodus of key technical leaders.
Section: 4, Chapter: 17
When The Scales Fall From Their Eyes
By late 2024, cracks were showing in OpenAI's scaling doctrine. Despite throwing massive compute at Orion (GPT-5), the company struggled to achieve desired performance improvements. The tried-and-true formula of simply making models bigger was hitting limits - both in available data and in the approach's effectiveness.
The exodus of key technical leaders - Sutskever, Leike, Schulman, Murati, McGrew, Zoph - hollowed out institutional knowledge just as OpenAI needed fundamental research breakthroughs. The company that built its empire on scaling was discovering that empire's inevitable end: finite resources and the need for actual innovation rather than brute force.
Section: 4, Chapter: 17
The Consent-Based Alternative
In New Zealand, the couple Peter-Lucas Jones and Keoni Mahelona of the Māori media organization Te Hiku Media created an AI speech-recognition model for the endangered language te reo Māori. Unlike OpenAI's data hoarding, they required consent, reciprocity, and sovereignty at every stage. Community members fully understood the project before contributing.
Using only 300 hours of consensually donated audio and two GPUs, they achieved 86% accuracy - proving that effective AI doesn't require massive scale or exploitation. Their approach, based on the Māori principle of kaitiakitanga (guardianship), shows how AI development could strengthen rather than extract from marginalized communities while respecting cultural values and rights.
Section: 5, Chapter: 18
The Formula For Empire
OpenAI's mission follows a three-part formula for consolidating power: First, centralize talent around a grand ambition (like McCarthy coining 'artificial intelligence'). Second, centralize capital by eliminating roadblocks through fear of competitors. Third, keep the mission vague enough to reinterpret as needed.
In 2015, the mission meant being a nonprofit and open-sourcing research. By 2024, it justified a $157 billion for-profit empire with secret models. Like Napoleon reinterpreting 'liberté, égalité, fraternité' to build an empire, OpenAI's 'beneficial AGI' rhetoric serves power concentration rather than humanity's benefit.
Section: 5, Chapter: 18
The Mythology Of Beneficial AI
OpenAI's mission to 'ensure AGI benefits all of humanity' functions like Napoleon's reinterpretation of 'liberté, égalité, fraternité' - noble words that justify the opposite. The mission centralizes talent around a grand ambition, accumulates massive resources while eliminating obstacles, and remains vague enough to be reinterpreted however convenient.
In 2015, the mission meant being a nonprofit and open-sourcing research. By 2024, it meant building a $157 billion for-profit empire while keeping models secret. The vagueness is the point - 'What is beneficial? What is AGI?' As Altman admitted: 'I think it's a ridiculous and meaningless term.'
Section: 5, Chapter: 18
The Redistribution Of Power
Stanford researcher Ria Kalluri proposed the essential question for AI governance: Does it consolidate or redistribute power? Rather than debating subjective 'goodness,' we should examine how AI shifts control along three axes: knowledge, resources, and influence.
Redistributing knowledge requires funding independent research and mandating corporate transparency. Redistributing resources needs stronger labor protections and consent requirements for training data. Redistributing influence demands broad education to 'unmask' AI's inner workings and dissolve its mystical aura, as Joseph Weizenbaum advocated decades ago.
Section: 5, Chapter: 18