OpenAI isn't too big to fail - and Big Tech would be fine with that


Dow Jones News, Nov 18, 1:05 PM UTC

By Gary Smith

 

No tech company owed money by OpenAI would be bankrupted by the AI giant's demise

 

Sam Altman, CEO of OpenAI.

 


 

OpenAI had losses of $5.3 billion on revenue of $3.5 billion in 2024 and losses of $7.8 billion on revenue of $4.3 billion in the first half of 2025. The artificial-intelligence giant forecasts $115 billion in cumulative losses by 2029, which is most likely an underestimate.

 

OpenAI has nonetheless raised hundreds of billions of dollars for AI research and infrastructure - with no realistic way to generate a reasonable return on this massive investment. As I have argued elsewhere, scaling is not going to make large language models (LLMs) intelligent, and expert training leaves LLMs unreliable in cases where they have not been given explicit instructions. LLMs' disappointing business performance confirms that they cannot be trusted where the costs of mistakes are substantial - and this limits the potential revenue from business applications.

 

OpenAI's most promising revenue sources are from three harmful addictions: (1) students using GPT to cheat their way through school; (2) people carrying around AI buddies that give them companionship and life advice; and (3) "verified adults" interacting with a (forthcoming) erotic GPT. Even these addictions are unlikely to justify OpenAI's intended massive expenditures.

 

So far, OpenAI has stayed afloat with hype, supported by the kind of circular-financing sleight-of-hand that helped inflate the dot-com bubble. Bloomberg's recent "AI Money Machine" chart is a striking depiction of this tangled knot.

 

A (likely intentional) consequence of this interlocking web of dependencies is to create something akin to the links among financial institutions - the implication being that the failure of a relatively small tech company like OpenAI might topple the tech sector, the same way that the failure of a relatively small financial institution like Lehman Brothers threatened to topple the financial sector in 2008.

 

OpenAI CEO Sam Altman recently argued that, "When something gets sufficiently huge...the federal government is kind of the insurer of last resort, as we've seen in various financial crises...given the magnitude of what I expect AI's economic impact to look like, sort of, I do expect the government ends up as like the insurer of last resort."

 

But the comment sparked backlash and Altman backtracked. In a lengthy post on the social-media platform X, Altman stated that in fact OpenAI does not want a government bailout should it fail.

 

Read: OpenAI walks back comments about government support for its AI spending

 


 

It's hard to envision a too-big-to-fail argument for tech companies that mirrors the too-big-to-fail argument for banks - with OpenAI as the lynchpin. When Lehman Brothers failed in 2008, fears that it would precipitate a meltdown of financial firms and markets persuaded Congress to authorize a $700 billion bailout fund, of which hundreds of billions were used to prop up AIG and the nine largest U.S. banks.

 

Perhaps Congress should have acted earlier to save Lehman. Perhaps Congress should similarly step in to save OpenAI should it ever be threatened with bankruptcy, in order to avoid having to bail out the big tech companies.

 

Read: Here's one question about the AI bubble that even ChatGPT can't answer

 

There is a crucial difference, however, in that financial institutions are highly leveraged. At the time of the 2008 crisis, leverage ratios were 10+ for commercial banks, 20 for AIG, and 30 to 40 for investment banks. A firm with a leverage ratio of 10-to-1 (assets equal to 10 times equity) would be bankrupted by a 10% drop in the value of its assets - say, from a default on 10% of its loans. An investment bank with 40-to-1 leverage would be bankrupted by a mere 2.5% drop in asset values.
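The leverage arithmetic above can be sketched as a quick check. This is a minimal illustration, not from the article itself; the `wipeout_drop` helper and the percentage formatting are ours. With leverage defined as assets divided by equity, equity is erased once asset values fall by 1 divided by the leverage ratio:

```python
# Illustrative sketch of the leverage arithmetic cited in the text.
# Assumption: leverage ratio = total assets / equity, so equity equals
# (1 / leverage) of assets and is wiped out by that fractional decline.

def wipeout_drop(leverage_ratio: float) -> float:
    """Return the fractional fall in asset values that erases all equity."""
    return 1.0 / leverage_ratio

for name, ratio in [("commercial bank", 10),
                    ("AIG", 20),
                    ("investment bank", 40)]:
    print(f"A {name} at {ratio}-to-1 leverage fails on a "
          f"{wipeout_drop(ratio):.1%} asset decline")
```

Running this reproduces the thresholds in the paragraph: a 10% decline at 10-to-1 leverage and a 2.5% decline at 40-to-1.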

 

Unlike banks, most Big Tech firms have little or no leverage. Those that are owed money by OpenAI would not be bankrupted by its failure. Notice, too, that Amazon.com (AMZN), Apple (AAPL), Alphabet (GOOG) (GOOGL) and Meta Platforms (META) aren't in Bloomberg's AI Money Machine graphic - because they finance their AI spending out of their own profits. If anything, they would benefit from the disappearance of OpenAI as a competitor.

 

Another crucial difference is that in the Lehman crisis, Goldman Sachs (GS), Citigroup (C), JPMorgan Chase (JPM) and others were fundamentally profitable firms. Within a few years, they repaid the federal bailout funds they received, plus interest. OpenAI is not fundamentally profitable and there is no clear path to profitability that will allow it to repay the government.

 

If a too-big-to-fail argument is a nonstarter for OpenAI, what other options would it have in a financial crisis? One would be that the U.S. is in an existential race with China to create artificial general intelligence (AGI), which will be a strategic asset that allows world domination. In recent congressional testimony, Altman said that the future of AGI "can be almost unimaginably bright, but only if we take concrete steps to ensure that an American-led version of AI, built on democratic values like freedom and transparency, prevails over an authoritarian one."

 

There are two problems with this argument. First, LLMs are a detour away from AGI. Second, several profitable, well-capitalized Big Tech companies can already continue the AGI race (if they want to) were OpenAI to collapse.

 

Another possibility is that a struggling OpenAI could rely on a political-industrial complex bailout by currying favor with government officials who will reward its sycophancy with government contracts and favorable regulations. In June 2016, Altman compared Trump to Hitler and wrote that "Trump's casual racism, misogyny and conspiracy theories are without precedent among major presidential nominees."

 

That was then. Now, Altman has donated $1 million to Trump's 2025 inauguration fund and become as fawning as GPT. In a January 2025 post on X, Altman wrote: "Watching [Trump] more carefully recently has really changed my perspective on him...I'm not going to agree with him on everything, but I think he will be incredible for the country in many ways!"

 

Any potential rescue if OpenAI should face collapse is reminiscent of government subsidies to U.S. tobacco farmers - a politically motivated support of a harmful addiction. Subsidies of OpenAI would be even worse because GPT's addictions are even worse. The government should just say no.

 

Gary Smith is the author of more than 100 academic papers and 20 books, including "Standard Deviations: The Truth About Flawed Statistics, AI and Big Data" (Duckworth, 2024) and (co-authored with Margaret Smith) "The Power of Modern Value Investing: Beyond Indexing, Algos and Alpha" (Palgrave Macmillan, 2024).

 

Also read: Sam Altman was finally asked how OpenAI can target trillions in spending on very little revenue

 

More: AI has real problems. The smart money is investing in the companies solving them.

 


 

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

(END) Dow Jones Newswires

 

11-18-25 0805ET

 
Copyright © 2025 Dow Jones & Company, Inc.
