Silicon Valley executives are enthusiastically promising that artificial intelligence will radically improve everyone’s life, starting very soon. AI is heralded as ‘the new electricity,’ and some even claim it’s ‘bigger than fire.’ Whispers suggest we might not even need to save for retirement, as AI will make ‘everyone rich rich rich.’
This kind of rhetoric isn’t new. Your grandparents heard similar grand pronouncements. The creators of novel technologies have historically framed their inventions as ushering in fundamental transformations of human existence. Radio was once touted as bringing ‘perpetual peace on earth.’ Television was supposed to foster such empathy across cultures that it would end war. Cable television, in its turn, promised to educate the masses and lead to widespread enlightenment.
However, this time, the public isn’t quite convinced.
A YouGov survey last year revealed that more than a third of respondents feared AI could lead to the end of human life. Even those with more optimistic views largely stated in another poll that they wouldn’t pay extra for AI features on their devices. Furthermore, the most recent comprehensive survey by the National Bureau of Economic Research found that 80% of firms reported AI was having no impact on their productivity or employment.
Sam Altman, CEO of OpenAI and a leading figure in the AI boom, admitted he’s surprised by the public’s resistance to ‘the diffusion, the absorption’ of AI into culture and the economy.
‘Looking at what’s possible, it does feel sort of surprisingly slow,’ Mr. Altman remarked at an AI conference this month.
Jensen Huang, chief executive of chipmaker Nvidia, is also concerned. Despite the omnipresent tech industry hype, Mr. Huang believes ‘the battle of narratives’ is being lost to critics.
‘It’s extremely hurtful, frankly,’ Mr. Huang stated in a podcast interview last month. He claimed ‘a lot of damage’ has been inflicted by ‘very well-respected people who have painted a doomer narrative, end-of-the-world narrative, science fiction narrative.’
In Mr. Huang’s view, critics are pushing for regulations that would hinder the AI industry’s development. Meanwhile, skeptics are ‘scaring people from making the investments in A.I.’ that are crucial for its improvement.

Nvidia, which supplies the chips for AI data centers, certainly isn’t lacking investors. It has become the most highly valued company globally, boasting a market capitalization of $4.5 trillion. Tech giants like Google, Microsoft, Amazon, and Meta have also seen their values soar. Some AI start-ups, beginning with OpenAI, have achieved astonishing valuations practically overnight.
Nonetheless, Mr. Altman’s observation about slow adoption holds true. In the fourth quarter of 2025, Gallup reported that 38% of employees had integrated AI technology into their workplace, a figure essentially unchanged from the previous quarter.
Clearly, AI isn’t a technology universally embraced as inevitable. Corporations frequently report that, so far, it ‘does not seem to do much.’ Yet fears are pervasive. The S&P North American software index plummeted 15% in January, its largest monthly drop in 17 years, due to concerns that AI would eventually replace existing software.
William Quinn, co-author of ‘Boom and Bust: A Global History of Financial Bubbles,’ notes the unusual ‘active hostility’ surrounding this boom. ‘People usually find new technology exciting,’ he said. ‘It happened with electricity, bicycles, motorcars. There were fears but also hopes. A.I. is notable, perhaps unique, for the lack of enthusiasm.’
Even though more than half of Americans have experimented with large language models (and nearly everyone online has inadvertently used AI), studies indicate that concern far outweighs excitement. According to Pew Research, 61% of respondents in a 2025 survey wished for more control over how AI is used in their lives.
(The New York Times has sued OpenAI and Microsoft, alleging copyright infringement of news content related to AI systems. Both companies deny the claims.)
The public’s indifference and outright hostility to AI were likely unavoidable. AI’s proponents often paint a disquieting future where humans who leverage the technology will replace those who don’t.
Perhaps this explains why AI regulation is one of the few issues that unites a divided America. A Gallup survey last spring found that 80% of Americans want rules for AI, even if it means slowing down technological development.
This wariness isn’t confined to low-income workers. Edelman, the global communications firm, surveys trust in society annually. Its latest report, released in January, revealed that two-thirds of low-income U.S. respondents believed ‘people like me will be left behind rather than realize any real advantages from generative A.I.’ Perhaps more remarkably, nearly half of high-income workers expressed the same sentiment.
A Boom in Booms

Era-defining booms once occurred infrequently: the South Sea mania of 1720, the British railway boom in the 1840s, the Roaring Twenties, Tokyo in the 1980s.
These booms followed a predictable pattern. A few investors would assert that new technological developments had changed everything. Early believers would profit, drawing in more investors. Critics would be silenced. Speculators would take over. The boom would inflate into a bubble, then pop. Everyone would express regret and vow to be more prudent in the future.
Eventually, a new technology would emerge. Utopia would beckon, and the cycle would begin anew.
‘One generation after another has renewed the belief that, whatever was said about earlier technologies, the latest one will fulfill a radical and revolutionary promise,’ wrote Vincent Mosco, a technology historian, in ‘The Digital Sublime: Myth, Power, and Cyberspace.’
However, today, booms are arriving at an accelerated pace.
‘There was the Japanese stock bubble, then the Thai and Taiwanese bubbles, then dot-com, housing, the Chinese stock market, crypto and now A.I.,’ said Mr. Quinn, a senior lecturer in finance at Queen’s University Belfast. He attributes this rapid succession to ‘very mobile financing, a deregulated financial system and rapid technological change, all of which make it easy for ordinary people to speculate.’
A few months ago, analysts and investors, uneasy about a stock market that seemed to lack a solid foundation for its continued ascent, began to ponder how the AI boom might conclude, recalling how the Roaring Twenties’ boom led to the Great Depression.
The debate yielded little consensus. Andrew Odlyzko, a scholar of investment manias, believes the discussion stalled because it relied too heavily on outdated concepts of booms and bubbles.
Consider cryptocurrency, he suggested. It has been around for 15 years. Few still pretend that crypto holds value beyond speculation. Yet, despite recent price fluctuations, its value has risen over the long term.
Its high price, he argues, is a reflection of one thing: the unwavering faith of its investors, which—at least so far—has overshadowed any doubts.
‘As our society is getting more complicated and wealthier, it is losing contact with reality,’ said Dr. Odlyzko, a former head of the University of Minnesota’s Digital Technology Center. ‘Mass psychology is now far more important than technology or economics.’
Ultimately, AI doesn’t need to transform humanity. Tech companies simply need to convince people that it is succeeding.
The ‘Doomer’ Paradox
For all the familiarity of its promises to change the world, some aspects of the AI boom are genuinely different—and are preventing it from gaining widespread public acceptance.
Past booms, such as the California gold rush in 1849 or the dot-com era, offered at least the illusion of public participation. Anyone, it seemed, could go to California and try their luck.
Starting an AI company, conversely, demands specialized expertise and substantial funding. For most individuals, AI feels less like an opportunity and more like something being imposed upon them, starting with their email and web browser.
Another challenge arises from AI’s most vocal proponents, who occasionally make unsettling remarks, sometimes without seeming to realize the implications. For example, talk-show host Jimmy Fallon once plaintively asked Microsoft co-founder Bill Gates, ‘Will we still need humans?’

Mr. Gates replied: ‘Not for most things.’
Just last week, Mrinank Sharma, the leader of Anthropic’s safeguards research team, cryptically posted on social media that he felt pressured ‘to set aside what matters most.’ He announced his resignation to write poetry, stating the world is ‘in peril’ and that while the problems were ‘not just from AI,’ his decision was influenced by his concerns.
These ‘doomer narratives,’ as Nvidia’s Mr. Huang calls them, are emerging from within the AI community itself.
The tech executives who have staked their companies’ futures on AI’s triumph possess vast resources to ensure its success. They can invest even more heavily in building new data centers. However, data centers across the country are increasingly encountering local opposition from residents who complain about the noise, disruption, secrecy, and lack of tangible community benefits like jobs.
Satya Nadella, Microsoft’s chief executive, acknowledges these risks. He believes ‘the real question’ is when AI will be perceived as genuinely helping people.
People admire AI as a sophisticated tech tool capable of impressive feats, he suggested at the Davos economics forum last month. Yet, they don’t necessarily see a positive impact in their own daily lives.

‘I think we, even as a global community, have to get to a point where we’re using this to do something useful that changes the outcomes of people and communities and countries,’ Mr. Nadella stated.
Without demonstrating such benefits, he warned, the technology risks losing ‘social permission’ to, for example, consume as much energy as it does—a thirst that is already driving up electricity prices for U.S. households.
AI companies appear increasingly aware of this perception problem. This year’s Super Bowl featured AI-themed ads that were either defensive or simply bizarre. Amazon’s ad, for instance, showed AI suggesting ways to eliminate Chris Hemsworth, only to reveal the AI’s true intention was to offer him a massage.
Caught in Technology’s Grip
Anxiety about technology has been simmering for a long time. What’s new with AI is that these fears, vague yet comprehensive, are now breaching the inner sanctums of Silicon Valley itself.
A 2024 survey of registered voters in the Bay Area found that three-quarters believed leading tech companies wielded too much power and influence.
Robert Thomas, a retired psychologist from the Bay Area, is among the disaffected. ‘At first we greatly anticipated the electronic conveniences created by cybergeniuses — the personal computer, the internet, the iPad, the iPod, etc.,’ he recounted. ‘However, since then things feel out of control. More and more, my life is partly controlled by some electronic device.’
Even worse, he laments, ‘I barely relate to people anymore.’
Ironically, from the perspective of AI promoters, an inability to relate to human beings isn’t a flaw but a feature. ‘I suspect that in a couple of years on almost any topic, the most interesting, maybe the most empathetic conversation that you could have will be with an A.I.,’ Mr. Altman, the OpenAI chief executive, said on a podcast.
It’s no wonder Mr. Thomas, 78, often feels frustrated. He admits to fantasizing about punching a young tech worker in the face. And yet, he recently had ChatGPT compose a speech for his wife’s birthday. It was beautiful and eloquent.
All of which means the future of AI could indeed go either way.