Karen Hao on the Empire of AI, AGI evangelists, and the cost of belief


Last Updated: September 14, 2025

At the heart of every empire is an ideology, a belief system that propels the system forward and justifies expansion — even when the cost of that expansion directly defies the ideology's stated mission.

For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today's AI empire, it's artificial general intelligence to "benefit all of humanity." And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built.

"I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI," Karen Hao, journalist and bestselling author of "Empire of AI," told TechCrunch on a recent episode of Equity.

In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire.

"The only way to really understand the scope and scale of OpenAI's behavior … is actually to recognize that they've already grown more powerful than pretty much any nation state in the world, and they've consolidated an extraordinary amount of not just economic power, but also political power," Hao said. "They're terraforming the Earth. They're rewiring our geopolitics, all of our lives. And so you can only describe it as an empire."

OpenAI has described AGI as "a highly autonomous system that outperforms humans at most economically valuable work," one that will somehow "elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility."

These nebulous promises have fueled the industry's exponential growth — its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.


Hao says this path wasn't inevitable, and that scaling isn't the only way to get more advances in AI.

"You can also develop new techniques in algorithms," she said. "You can improve the existing algorithms to reduce the amount of data and compute that they need to use."

But that tactic would have meant sacrificing speed.

"When you define the quest to build beneficial AGI as one where the victor takes all — which is what OpenAI did — then the most important thing is speed over anything else," Hao said. "Speed over efficiency, speed over safety, speed over exploratory research."

Image Credits: Kim Jae-Hwan/SOPA Images/LightRocket / Getty Images

For OpenAI, she said, the best way to guarantee speed was to take existing techniques and "just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques."

OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line.

"And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, then you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration," Hao said.

The spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it will spend up to $72 billion on building AI infrastructure this year. Google expects to hit up to $85 billion in capital expenditures for 2025, most of which will go toward expanding AI and cloud infrastructure.

Meanwhile, the goal posts keep shifting, and the loftiest "benefits to humanity" haven't yet materialized, even as the harms mount — harms like job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid very low wages — around $1 to $2 an hour — in roles like content moderation and data labeling.

Hao said it's a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits.

She pointed to Google DeepMind's Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein-folding structures, and can now accurately predict the 3D structure of proteins from their amino acids — profoundly useful for drug discovery and understanding disease.

"These are the kinds of AI systems that we need," Hao said. "AlphaFold doesn't create mental health crises in people. AlphaFold doesn't lead to colossal environmental harms … because it's trained on significantly less infrastructure. It doesn't create content moderation harms because [the datasets don't have] all the toxic crap that you hoovered up when you were scraping the internet."

Alongside the quasi-religious commitment to AGI has been a narrative about the importance of racing to beat China in AI, so that Silicon Valley can have a liberalizing effect on the world.

"In fact, the opposite has happened," Hao said. "The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself."

Of course, many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise big gains in productivity by automating tasks like coding, writing, research, customer support, and other knowledge work.

But the way OpenAI is structured — part non-profit, part for-profit — complicates how it defines and measures its impact on humanity. And that's further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.

Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to conflate its for-profit and non-profit missions — that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.

Hao echoed these concerns, describing the dangers of being so consumed by the mission that reality is ignored.

"Even as the evidence accumulates that what they're building is actually harming significant amounts of people, the mission continues to paper all of that over," Hao said. "There's something really dangerous and dark about that, of [being] so wrapped up in a belief system you constructed that you lose touch with reality."
