OpenAI: Less Like the Next Google, More Like the Next WeWork

A data-driven analysis of $157B valuation, $16B burn rate, and $1 trillion in commitments

October 21, 2025 | Data Analysis

OpenAI is widely viewed as the next Google—a transformative technology company that will define the AI era and justify its $157 billion valuation through sustained dominance and profitability. But the financial data tells a different story, one that more closely resembles WeWork's trajectory than Google's.

The parallels are striking: a company burning through capital at an extraordinary rate ($16 billion annually), revenue growth that masks deteriorating unit economics, a CEO dismissing profitability as "not in my top-10 concerns," and an explicit strategy of "leveraging other people's balance sheets" to fund expansion. Most concerning, OpenAI has committed to over $1 trillion in infrastructure spending over the next decade while generating just $13 billion in annual revenue and losing money on every dollar earned.

This isn't speculation or hot takes. It's what the numbers reveal when examined closely.

In 2019, WeWork was valued at $47 billion despite losing $2 billion annually on $1.8 billion in revenue.[1] The company had grandiose visions of "elevating the world's consciousness," a charismatic founder who believed his own hype, a governance structure that made Enron look transparent, and investors who suspended disbelief because everyone else was doing it. By September 2019, the company imploded so spectacularly that its IPO was pulled, its valuation collapsed by 90%, and its CEO was forced out.

Today, OpenAI is valued at $157 billion.[2] By October 2025, the company had reached approximately $13 billion in annual recurring revenue while burning through an estimated $16 billion annually—$8 billion in just the first half of 2025.[11] That's a 123% burn rate that makes WeWork's financial metrics look prudent. Like WeWork, OpenAI has a visionary narrative ("artificial general intelligence will transform humanity"), a governance crisis that exploded in November 2023,[8] a corporate structure designed to prioritize mission over profit that's rapidly being unwound, and a strategy of "leveraging other people's balance sheets" to fund growth.

The comparison isn't perfect—AI has more transformative potential than office subletting. But the patterns are eerily similar: unsustainable unit economics masked by growth, massive future commitments, and a valuation that requires near-perfect execution across multiple uncertain dimensions.

I. The WeWork Parallels: When Burn Rate Meets Reality

WeWork at its peak in early 2019 was valued at $47 billion on $1.8 billion in revenue while losing approximately $2 billion. By mid-2019, as the company prepared for its ill-fated IPO, investors finally read the S-1 filing and the valuation collapsed to $8 billion within months.

OpenAI in October 2024 raised $6.6 billion at a $157 billion valuation.[2] By October 2025, the financial picture had become even more stark: the company reported approximately $13 billion in annual recurring revenue while burning through an estimated $8 billion in just the first half of 2025—projecting to roughly $16 billion in annual losses.[11] That means OpenAI loses approximately $1.23 for every dollar of revenue it generates, and the absolute losses are accelerating even as revenue grows.

WeWork (2018): $2B loss on $1.8B revenue = 111% burn rate
OpenAI (2025 est.): $16B loss on $13B revenue = 123% burn rate

Put differently: OpenAI's financial profile remains comparable to WeWork's at its peak dysfunction, despite operating in a sector with supposedly higher margins and greater scalability. And the absolute dollar losses are now 8x larger than WeWork's ever were.
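The burn-rate comparison is simple arithmetic. A minimal sketch of the calculation, using only the estimates cited in this article, makes the parallel explicit:

```python
# Burn rate = annual loss / annual revenue, using the figures cited above.
# All inputs are third-party estimates, not audited financials.

companies = {
    "WeWork (2018)":      {"revenue_b": 1.8,  "loss_b": 2.0,  "valuation_b": 47},
    "OpenAI (2025 est.)": {"revenue_b": 13.0, "loss_b": 16.0, "valuation_b": 157},
}

for name, f in companies.items():
    burn_rate = f["loss_b"] / f["revenue_b"]          # loss per dollar of revenue
    rev_multiple = f["valuation_b"] / f["revenue_b"]  # valuation / revenue
    print(f"{name}: burn rate {burn_rate:.0%}, revenue multiple {rev_multiple:.0f}x")

# WeWork (2018): burn rate 111%, revenue multiple 26x
# OpenAI (2025 est.): burn rate 123%, revenue multiple 12x
```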

The Revenue Growth Mirage and "Leveraging Other People's Balance Sheets"

WeWork's defenders pointed to rapid revenue growth—from $886 million in 2017 to $1.8 billion in 2018, a 103% increase. OpenAI's defenders make similar arguments. The company's annualized revenue run rate reached $13 billion in 2025, up from approximately $1.6 billion in late 2023—more than 8x growth in 18 months.

But both growth rates obscure deteriorating unit economics and, more importantly, unsustainable liability structures. WeWork's revenue growth came from signing long-term leases and offering short-term memberships—a structural mismatch that meant each new customer increased long-term liabilities faster than short-term revenue.

OpenAI's strategy is remarkably similar, just with different balance sheets being leveraged. According to October 2025 reporting, CEO Sam Altman has committed to more than $1 trillion in infrastructure spending over the next decade, including 26+ gigawatts of computing capacity from Oracle, Nvidia, AMD, and Broadcom.[11] The company's explicit strategy, per a senior OpenAI executive, is to "leverage other people's balance sheets" to give OpenAI "time to build the business."

This is WeWork's playbook in different clothing. WeWork signed long-term lease obligations and hoped short-term revenue would eventually cover them. OpenAI is signing long-term infrastructure commitments (26GW requires power equivalent to 20 nuclear reactors) and hoping revenue growth will eventually justify them. In both cases, the company is taking on massive future obligations that dwarf current cash flows, betting that growth will solve the structural mismatch.

Metric | WeWork (2018-2019) | OpenAI (2025)
Peak Valuation | $47B | $157B
Annual Revenue | $1.8B | $13B (ARR)
Annual Loss | ~$2B | ~$16B (projected)
Burn Rate (Loss/Revenue) | 111% | 123%
Revenue Multiple | 26x revenue | 12x revenue
Future Commitments | ~$47B in lease obligations | $1+ trillion infrastructure

The Path to Profitability Problem: "Not in My Top-10 Concerns"

WeWork's pitch to investors was that profitability was just around the corner—once they reached sufficient scale, unit economics would flip positive. The S-1 filing revealed this was fantasy. Each new location required massive upfront capital, long-term lease commitments, and buildout costs, while revenue per member was actually declining as competition increased and the company offered more aggressive discounts to hit growth targets.

OpenAI faces a structurally similar challenge. The company's costs are heavily front-loaded (training runs reportedly cost tens to hundreds of millions of dollars each), while revenue is uncertain and potentially declining on a per-user basis. When asked about profitability in October 2025, Sam Altman stated it was "not in my top-10 concerns"—a remarkable statement from a CEO burning $16 billion annually with over $1 trillion in future infrastructure commitments.[11]

This echoes Adam Neumann's dismissal of profitability concerns. OpenAI's October 2024 funding round included provisions requiring the company to complete its for-profit restructure within two years or face significant consequences[5]—similar to WeWork's desperation financing rounds when SoftBank's commitment came with increasingly onerous terms.

II. The Revenue Mirage: When Growth Hides Deterioration

OpenAI's revenue growth looks impressive on the surface: from approximately $1.6 billion annualized in late 2023 to $13 billion in 2025,[3][11] representing more than 8x growth in 18 months. But underneath that headline number, several concerning trends are emerging.

The API Pricing Collapse

Consumer subscriptions (ChatGPT) account for roughly 70% of the $13 billion total, leaving API and enterprise revenue at approximately 30%, or roughly $4 billion.[11] And that slice is shrinking in value: API pricing has collapsed under competitive pressure.

GPT-4's initial pricing in March 2023 was $30 per million input tokens and $60 per million output tokens.[12] By May 2024, GPT-4o launched at $5/$15, an 83% cut to input pricing (and 75% to output pricing) in just 14 months.[13] GPT-4 Turbo pricing dropped from $10/$30 to match GPT-4o's levels.

This wasn't strategic pricing to gain market share; it was defensive pricing to maintain market share as competitors undercut OpenAI on price.

For OpenAI, this pricing pressure creates a vicious cycle: lower prices mean lower revenue per API call, which means needing more volume to hit revenue targets, which means more compute costs, which means worse margins, which means more pressure to cut prices further.

Margin compression evidence: GPT-4 incurred $2.3 billion in inference costs between March 2023 and end of 2024, accounting for 49% of its total revenue—far higher than the 5% server costs typical of traditional SaaS businesses.[17] At initial $30 pricing, margins were healthy. But as prices collapsed to $5 (and as low as $3-4 in blended rates[13]), inference costs consume an unsustainably large share of revenue. If competitors force prices to $2-3, margins become razor-thin or negative.
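To see why this matters, here is a rough, back-of-the-envelope sketch of margin compression. The inference cost per million tokens below is a hypothetical placeholder, not a disclosed figure; the point is only that a cost that is a small share of revenue at $30 pricing becomes a dominant share at $3-5 pricing.

```python
# Hypothetical illustration of margin compression under falling API prices.
# ASSUMPTION: inference cost per million tokens is fixed at $2.50.
# That number is a placeholder, not a disclosed figure; only the shape matters.

inference_cost_per_m_tokens = 2.50  # USD, hypothetical

for price_per_m_tokens in [30.0, 10.0, 5.0, 3.0]:
    gross_margin = (price_per_m_tokens - inference_cost_per_m_tokens) / price_per_m_tokens
    print(f"price ${price_per_m_tokens:>5.2f}/M tokens -> gross margin {gross_margin:.0%}")

# price $30.00/M tokens -> gross margin 92%
# price $10.00/M tokens -> gross margin 75%
# price $ 5.00/M tokens -> gross margin 50%
# price $ 3.00/M tokens -> gross margin 17%
```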

The Commodification Risk

Every tech platform business fears commodification: when competitors offer equivalent products at lower prices, eroding margins and differentiation. OpenAI is experiencing this in real-time.

GPT-4's initial advantage was significant—noticeably better than GPT-3.5 on complex reasoning tasks. But by October 2025, the competitive landscape has fundamentally shifted.

The moat has effectively disappeared. GPT-5's August 2025 release delivered measurable improvements but received mixed user reception, and competitors matched or exceeded its capabilities within weeks on specific benchmarks. OpenAI retains advantages in API reliability and ecosystem maturity, but the technical differentiation that justified premium pricing has evaporated.

This is particularly problematic for OpenAI's valuation, which assumes sustained pricing power and margin expansion. If LLMs commodify into low-margin infrastructure (like cloud storage or compute), the $157 billion valuation becomes indefensible.

The Switching Cost Illusion

Tech defenders often point to switching costs as a protective moat. In theory, once enterprise customers integrate OpenAI's APIs into their infrastructure, the costs of switching to competitors—rewriting code, retraining employees, migrating data—create stickiness that justifies premium pricing.

This argument works for cloud providers. Migrating from AWS to Google Cloud or Azure is genuinely painful: you're moving compute instances, databases, storage buckets, networking configurations, identity management systems, and potentially thousands of internal tools that assume AWS-specific services. The switching costs are measured in millions of dollars and months of engineering time. This creates the vendor lock-in that allows AWS to maintain 30%+ operating margins despite competition.

But LLM switching costs are fundamentally different—and much lower.
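A minimal sketch makes the point concrete (an illustration using hypothetical vendor interfaces, not any particular company's integration): the application-facing surface of most LLM features is a single text-in, text-out call, so routing it to a different provider is typically a configuration change rather than a migration project.

```python
# Sketch: the integration surface for most LLM features is a single call site.
# Vendor-specific details (SDKs, model names, auth) live behind one interface,
# so swapping providers is a configuration change, not a re-architecture.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class VendorA:
    """Placeholder for one provider's SDK call (e.g. a chat-completions request)."""
    def complete(self, prompt: str) -> str:
        # vendor_a_sdk.chat(prompt) would go here
        return f"[vendor A reply to: {prompt!r}]"


class VendorB:
    """Placeholder for a competing provider with an equivalent text-in/text-out API."""
    def complete(self, prompt: str) -> str:
        # vendor_b_sdk.generate(prompt) would go here
        return f"[vendor B reply to: {prompt!r}]"


def summarize(ticket_text: str, model: ChatModel) -> str:
    # Application code depends only on the ChatModel interface.
    return model.complete(f"Summarize this support ticket:\n{ticket_text}")


# Switching providers is one line at the composition root:
model: ChatModel = VendorA()   # or VendorB()
print(summarize("Customer cannot reset password.", model))
```

Contrast this with a cloud migration, where the equivalent interface spans hundreds of vendor-specific services. That difference is the structural reason LLM switching costs stay low.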

OpenAI's enterprise business is also relatively immature. The company doesn't disclose exact numbers, but industry estimates suggest enterprise revenue accounts for 10-20% of total revenue—meaning the vast majority comes from API developers and consumer subscriptions, both of which have minimal switching costs.

Compare this to Salesforce (85% of revenue from multi-year enterprise contracts with high switching costs due to deep CRM integration), or Oracle databases (switching costs so high that customers pay maintenance fees they despise rather than migrate), or even Microsoft Office (institutional inertia and file format lock-in create massive friction).

OpenAI's position is closer to Dropbox in 2015: useful, popular, but facing commodification from Google Drive, Microsoft OneDrive, and Box, all offering similar functionality at similar or lower prices. Dropbox's stock price has languished because switching costs weren't high enough to prevent customer churn when competitors offered equivalent products cheaper.

The stickiness problem: Without high switching costs or a large enterprise base, OpenAI is exceptionally vulnerable to competitor price cuts. If Anthropic drops Claude pricing by 30%, developers can switch in a weekend. If Google bundles Gemini into Workspace for free, enterprises can migrate in a quarter. The company's $157B valuation assumes it can maintain pricing power, but the structural conditions for that power don't exist.

This is why the API pricing collapse (83% in 14 months) is so concerning—it's not a strategic choice, it's what happens when you lack a moat and competitors force your hand. And it's why OpenAI's burn rate matters: the company is spending billions to acquire customers who could leave for a 20% discount.

The Consumer Subscription Question: The 5% Conversion Problem

ChatGPT has 800 million regular users as of October 2025,[11] but only 5% are paying subscribers—approximately 40 million paying users generating about $9 billion of OpenAI's $13 billion annual revenue.[11] That's a 95% free rider problem.

Consumer SaaS products typically plateau at 2-5% free-to-paid conversion. Even doubling to 80 million paid users would add $9.6 billion in revenue—not enough to cover a $16 billion annual burn rate. Meanwhile, Google offers Gemini Advanced at $19.99/month, Anthropic offers Claude Pro at $20/month, and OpenAI is rolling out cheaper pricing in emerging markets—further compressing per-user revenue.
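A quick sanity check on those numbers (a sketch using the figures cited above; the per-user revenue is derived, not disclosed):

```python
# Derived from the figures cited above: 800M users, ~5% paying, ~$9B consumer revenue.
users_total = 800_000_000
paying_share = 0.05
consumer_revenue = 9_000_000_000        # USD/year, reported estimate
annual_burn = 16_000_000_000            # USD/year, reported estimate

paying_users = users_total * paying_share   # ~40M
arpu = consumer_revenue / paying_users      # ~$225/year, roughly $18.75/month

# If conversion doubled to 10% at roughly $20/month:
added_revenue = paying_users * 20 * 12      # another ~40M subscribers
print(f"paying users: {paying_users/1e6:.0f}M, ARPU ~${arpu:.0f}/yr")
print(f"revenue added by doubling conversion: ${added_revenue/1e9:.1f}B "
      f"vs. annual burn ${annual_burn/1e9:.0f}B")
# paying users: 40M, ARPU ~$225/yr
# revenue added by doubling conversion: $9.6B vs. annual burn $16B
```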

III. The Hype Cycle Context: $600 Billion Questions

In June 2024, Sequoia Capital published a widely-cited analysis titled "AI's $600 Billion Question."[7] The math was straightforward and damning:

The AI industry (led by Nvidia, cloud providers, and AI companies) was spending approximately $150 billion annually on AI infrastructure—data centers, GPUs, networking, power. For this to make economic sense, the industry needs to generate roughly $600 billion in revenue (assuming 25% profit margins). But actual AI revenue was estimated at $100-150 billion, creating a $450-500 billion gap.
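Restated as arithmetic (a sketch of the reasoning as summarized here, not Sequoia's exact model):

```python
# Revenue required to justify AI infrastructure spend, per the framing above.
annual_infra_spend = 150e9      # USD/year, industry-wide estimate
assumed_profit_margin = 0.25    # margin needed for the spend to pay off

required_revenue = annual_infra_spend / assumed_profit_margin   # $600B
actual_revenue_low, actual_revenue_high = 100e9, 150e9          # estimated actual AI revenue

gap_low = required_revenue - actual_revenue_high    # $450B
gap_high = required_revenue - actual_revenue_low    # $500B
print(f"required: ${required_revenue/1e9:.0f}B, "
      f"gap: ${gap_low/1e9:.0f}B-${gap_high/1e9:.0f}B")
# required: $600B, gap: $450B-$500B
```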

As Sequoia partner David Cahn wrote: "The model is simultaneously too hot and too cold. Too hot because the infrastructure investment is premature for the revenue being generated. Too cold because current AI capabilities aren't yet good enough to drive the revenue needed to justify the investment."

OpenAI sits at the center of this paradox. The company has raised $13 billion from Microsoft and $6.6 billion in its October 2024 round, giving it a $157 billion valuation. That valuation assumes OpenAI will capture a substantial portion of the eventual $600 billion AI market. But several indicators suggest the bet is shakier than it looks.

Bubble Indicator #1: Valuation Multiples

OpenAI is valued at 12x its 2025 annual recurring revenue—down from 42x based on 2024 figures, but still elevated given the 123% burn rate. For comparison:

Company | Revenue Multiple at Peak | Outcome
WeWork (2019) | 26x | Collapsed 90%, CEO ousted
Pets.com (2000) | 73x | Bankrupt within 9 months of IPO
Webvan (2000) | 55x | Bankrupt 18 months post-IPO
Google (2004 IPO) | 10x | Success
Amazon (1997 IPO) | 8x | Success
Microsoft (1986 IPO) | 7x | Success
OpenAI (2025) | 12x | TBD

High revenue multiples aren't inherently disqualifying—Amazon and Netflix traded at stratospheric multiples for years while delivering extraordinary returns. But 12x revenue with a 123% burn rate, $1 trillion in future infrastructure commitments, and a 5% user conversion rate is in the danger zone where liabilities exceed revenue generation capacity.

Bubble Indicator #2: Capital Efficiency Declining

OpenAI raised $19.6 billion ($13B from Microsoft, $6.6B in October 2024) and generated approximately $4-5 billion in cumulative revenue while burning $10-15 billion—roughly $0.25-0.50 of revenue for every dollar of capital burned. For comparison, Google showed 10x capital efficiency at IPO, Amazon showed 16x.
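Spelled out (all inputs are the reported estimates cited above, not audited figures):

```python
# Revenue generated per dollar of capital, using reported estimates.
capital_raised = 13e9 + 6.6e9          # Microsoft + October 2024 round = $19.6B
cumulative_revenue = (4e9, 5e9)        # estimated range
cumulative_burn = (10e9, 15e9)         # estimated range

per_dollar_raised = (cumulative_revenue[0] / capital_raised,
                     cumulative_revenue[1] / capital_raised)       # ~$0.20-0.26
per_dollar_burned = (cumulative_revenue[0] / cumulative_burn[1],
                     cumulative_revenue[1] / cumulative_burn[0])   # ~$0.27-0.50
print(f"revenue per $ raised: {per_dollar_raised[0]:.2f}-{per_dollar_raised[1]:.2f}")
print(f"revenue per $ burned: {per_dollar_burned[0]:.2f}-{per_dollar_burned[1]:.2f}")
```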

Bubble Indicator #3: The "Greater Fool" Investor Base

OpenAI's October 2024 funding round included unusual terms: mandatory for-profit restructure within two years, valuation conditioned on 100%+ annual revenue growth, and secondary sales restrictions. These terms suggest investor nervousness masked by FOMO. SoftBank's participation is particularly telling—they led WeWork's funding at $47B, then rescued it at $8B six months later.

Bubble Indicator #4: Circular Financing and "Creative" Deal Structures

Perhaps the most telling sign of unsustainable financing is the circular nature of OpenAI's recent deals. In October 2025, the company announced infrastructure partnerships with Nvidia, AMD, and Broadcom worth billions in combined commitments.[11] But these aren't traditional vendor relationships—they're structured as investments where the hardware companies fund OpenAI, which then uses that funding to buy those same companies' chips.

As one senior OpenAI executive explained, the strategy is to "come up with creative financing strategies" to signal "we're good for the debt."[11] The circularity mirrors dot-com era telecom companies investing in startups that would then buy their equipment—inflating both companies' figures until the music stopped.

The scale is concerning: over $1 trillion in infrastructure spending over the next decade—26+ gigawatts requiring power equivalent to 20 nuclear reactors.[11] These are long-term obligations backed by the assumption that revenue will eventually materialize. This is precisely the liability structure that doomed WeWork.

The Sequoia Question Revisited

For OpenAI's $157 billion valuation to make sense, the company needs to eventually generate $20-30 billion in annual revenue with 30%+ profit margins (implying $6-9 billion in annual profit, which at a 20-25x multiple justifies the valuation). That requires revenue roughly doubling from today's level, gross margins expanding sharply even as API prices keep falling, and pricing power holding in a market that is rapidly commodifying.

Each of these individually is challenging. Achieving all simultaneously is possible but requires near-perfect execution—which the product confusion, governance chaos, and talent exodus suggest is unlikely.
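The implied math behind that target (a sketch; the margin and multiple are the assumptions stated above, not forecasts):

```python
# What it takes for a $157B valuation to be justified on earnings, per the
# assumptions above: $20-30B revenue, 30% margins, a 20-25x earnings multiple.
target_valuation = 157e9

for revenue, margin, multiple in [(20e9, 0.30, 25), (30e9, 0.30, 20)]:
    profit = revenue * margin
    implied_valuation = profit * multiple
    print(f"revenue ${revenue/1e9:.0f}B x margin {margin:.0%} "
          f"-> profit ${profit/1e9:.0f}B x {multiple}x = ${implied_valuation/1e9:.0f}B")

# revenue $20B x margin 30% -> profit $6B x 25x = $150B
# revenue $30B x margin 30% -> profit $9B x 20x = $180B
# Both bracket the $157B target, but only if every assumption holds at once.
```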

IV. Steel-Manning the Counter-Arguments

Fairness requires examining the strongest arguments against this analysis. Here are the best cases for why OpenAI might actually be the next Google:

Counter-Argument #1: "AI is Different—This Time It Really Is"

LLMs represent a genuine platform shift. ChatGPT reached 100+ million users faster than any previous consumer product. If AI's impact matches the internet's, the relevant precedent is Google, whose market cap grew from $23B at IPO to more than $1 trillion; by that yardstick, OpenAI could even be undervalued.

Response: VCs made similar arguments about WeWork ("real estate is the next platform"), Pets.com ("e-commerce is the future"), and Webvan. They were right about trends, wrong about companies. The question isn't whether AI is transformative—it is. The question is whether OpenAI will capture the value, or whether it accrues to infrastructure providers (Nvidia), application developers, or more focused competitors.

Counter-Argument #2: "They'll Figure It Out—Amazon and Netflix Burned Cash Too"

Amazon and Netflix lost money for years while they prioritized market position over profitability, and both were ultimately vindicated. ChatGPT has become synonymous with AI in the way "Google" became synonymous with search.

Response: Amazon's losses funded warehouses and logistics—tangible moats. Netflix's burn funded content libraries with lasting value. OpenAI's burn funds model training with uncertain differentiation and customer acquisition with low retention. Critically: Amazon and Netflix had positive unit economics that improved over time. OpenAI's unit economics are deteriorating (pricing collapse, margin compression), not improving.

Counter-Argument #3: "The Microsoft Partnership Changes Everything"

Microsoft invested $13 billion and integrated ChatGPT into Windows, Office, and Azure. Even if OpenAI struggles, Microsoft could acquire the company and fold it into Azure AI.

Response: The terms reportedly give Microsoft 75% of OpenAI's profits until its investment is recouped, plus exclusive cloud rights. If Microsoft believes the technology is replicable, it has strong incentives to develop its own models (as it is already doing with Phi-3) rather than pay OpenAI's premium. SoftBank's relationship with WeWork started supportive and became predatory; Microsoft could follow a similar path.

V. Conclusion: The Data Tells a Story

Let's return to the numbers, because they're what matter: a $157 billion valuation, roughly $16 billion in annual losses against $13 billion in revenue, an 83% collapse in flagship API pricing in 14 months, more than $1 trillion in future infrastructure commitments, a proliferating product line, a steady exodus of senior safety and technical leaders, and a restructuring away from the nonprofit governance the company was founded on.

Individually, any of these could be explained away. A high valuation might reflect transformative potential. Losses might be strategic investment. Pricing pressure might be temporary. Product proliferation might be rapid iteration. Executive departures might be natural in a fast-growing company. Corporate restructuring might be necessary for scale.

But collectively, they paint a coherent picture: a company burning capital at an unsustainable rate, facing intensifying competition that's eroding pricing power, launching products faster than it can articulate their value, hemorrhaging safety-focused technical talent, and abandoning its founding governance structure to satisfy investor demands.

This is not the profile of the next Google. Google at IPO in 2004 had clear product focus (search and search advertising), improving unit economics (margins expanding as revenue scaled), technical moats (PageRank and infrastructure advantages), and a path to profitability that was visible and credible. Its $23 billion IPO valuation was 10x revenue on growing profits—aggressive but defensible.

OpenAI's profile more closely resembles WeWork: extraordinary valuation on revenue that doesn't cover costs, governance chaos that drove out skeptical voices, and a business model that requires sustained pricing power in a market that's rapidly commodifying. The addition of Snapchat-style product confusion makes the comparison even more apt.

What Would Prove This Analysis Wrong?

Predictions should be falsifiable. Here's what evidence would demonstrate this analysis is incorrect:

  1. Sustained pricing power: If OpenAI maintains or increases API pricing over the next 12 months while growing volume, that would suggest durable competitive moats rather than commodification.
  2. Margin expansion: If the company demonstrates improving gross margins and a credible path to profitability (not just "we'll figure it out later" but actual improving unit economics), that would suggest sustainable business model.
  3. Model breakthrough: If future model versions deliver capability gains comparable to GPT-3→GPT-4 that significantly widen the competitive moat, that would justify continued technical leadership premium. (Note: GPT-5, released August 2025, delivered benchmark improvements—94.6% on AIME math vs GPT-4's lower scores, 45% fewer hallucinations—but received mixed user reception with many calling it "sterile" and "lifeless," suggesting incremental rather than transformative progress.)
  4. Product consolidation: If OpenAI simplifies its product line into coherent tiers with clear differentiation (instead of continuing to proliferate), that would suggest strategic clarity emerging.
  5. Talent retention: If departures from safety/technical leadership stop and the company attracts top-tier researchers from competitors, that would suggest cultural/strategic concerns are being addressed.

Conversely, what would confirm this analysis:

  1. Continued pricing pressure: Further API price cuts to match competitors, especially if accompanied by margin compression.
  2. Down round: If OpenAI raises capital at a lower valuation than $157 billion, that would signal investor confidence declining.
  3. More departures: Additional exits of technical/safety leadership, particularly to competitors like Anthropic or Google DeepMind.
  4. Product proliferation continues: Further model launches (o4, GPT-6, additional variants) without consolidating existing offerings or clarifying product strategy.
  5. Microsoft develops competitive models: If Microsoft's internal models (Phi series, others) reach parity with OpenAI's offerings, reducing Microsoft's incentive to fund OpenAI.

The Broader Implications

OpenAI's trajectory matters beyond the company itself. The firm has become the public face of AI progress, the benchmark against which other companies are measured, and the template for "AI startup" strategies. If OpenAI succeeds despite warning signs, it validates "growth at any cost" approaches in AI. If it stumbles, it could trigger broader AI investment retrenchment.

The $600 billion question that Sequoia posed remains unanswered: the AI industry is investing far more capital than it's generating in revenue. Some companies will successfully bridge that gap. Others will become cautionary tales. The evidence suggests OpenAI is more likely to be in the latter category—not because AI isn't transformative, but because the company's execution, governance, and business model show more parallels to failed hype cycles than to successful platform businesses.

WeWork convinced investors that it was a "technology company" transforming real estate. It was actually a real estate company with unsustainable unit economics and a charismatic founder who believed his own narrative. OpenAI has convinced investors that it's building AGI and capturing the AI revolution. It might be. Or it might be burning billions to train models that competitors will match while offering consumer/enterprise products that lack durable differentiation and margins.

The difference between those scenarios is worth $157 billion. The data suggests betting on the latter is the higher-probability outcome.

Only time will tell whether this analysis ages like "Google will never make money" or "Pets.com is the future of retail." But in investing and business analysis, you make decisions based on available evidence, not unknown futures. And the evidence, as of October 2025, flashes more warnings than promises. GPT-5's August 2025 release—with benchmark improvements but mixed user reception and no fundamental shift in competitive dynamics—suggests the pattern of incremental progress rather than breakthrough innovation continues.

[Chart: OpenAI Financial Trajectory: Revenue vs. Burn Rate]
[Chart: API Pricing Decline: GPT-4 Model Family (Mar 2023 - Dec 2024)]
[Chart: OpenAI Product Launch Timeline (2022-2024)]

Sources & References

[1] WeWork S-1 Filing (August 2019). Available at: SEC.gov
[2] Metz, C. & Griffith, E. "OpenAI Closes Funding at $157 Billion Valuation, as a For-Profit Company" New York Times (October 2024).
[3] Woo, A. & Nix, N. "OpenAI Projects $11.6 Billion Revenue Next Year" The Information (September 2024).
[4] Lunden, I. "OpenAI reportedly burning through $8.5B per year" TechCrunch (September 2024).
[5] Woo, A. "OpenAI Funding Deal Pressures Company to Restructure" The Information (October 2024).
[6] Metz, C. & Griffith, E. "How Microsoft's Satya Nadella Became Tech's Steely-Eyed A.I. Gambler" New York Times (June 2023).
[7] Cahn, D. et al. "AI's $600B Question" Sequoia Capital (June 2024).
[8] Knight, W. "The ChatGPT Boom Made Them Fortune 500 CEOs. Now They Want to Be Careful" Wired (November 2023). Coverage of November 2023 board crisis.
[9] Leike, J. "Why I'm leaving OpenAI" Personal blog post (May 2024).
[10] Patel, D. "GPT-4 Architecture, Datasets, Costs and More Leaked" SemiAnalysis (July 2023). Industry analysis of compute costs.
[11] Hammond, G. & Criddle, C. "OpenAI makes 5-year business plan to meet $1tn spending pledges" Financial Times (October 15, 2025).
[12] "OpenAI GPT-4 API Pricing" Nebuly (2023). Historical pricing documentation for GPT-4 launch.
[13] Ng, A. "After a recent price reduction by OpenAI, GPT-4o tokens now cost $4 per million tokens" X/Twitter (August 2024). Analysis of GPT-4o pricing trends.
[14] "Anthropic's Claude Opus 4.1 Improves Refactoring and Safety, Scores 74.5% SWE-bench Verified" InfoQ (August 2025).
[15] "Gemini 2.5 Pro" Google DeepMind (March 2025). Official model documentation and benchmarks.
[16] "Grok 4 Launches With Benchmark Records and Idiosyncratic Behavior" The Batch / DeepLearning.AI (July 2025).
[17] "All You Need to Know about Inference Cost" Primitiva (2024). Analysis of GPT-4 inference costs and margin compression.

Additional data sources: OpenAI official blog and announcements, Microsoft earnings calls (Q3-Q4 2024), Anthropic pricing documentation, Google Cloud AI pricing, Meta AI Research announcements, Bloomberg Technology coverage, Financial Times tech reporting, Reuters business news, TechCrunch funding coverage.

Note on estimates: OpenAI is a private company and does not publicly disclose financial statements. Revenue figures, burn rates, and operational costs cited in this article are based on reporting from The Information, Financial Times, New York Times, and other business publications citing company documents, investor presentations, and sources familiar with the company's finances. Where figures are estimates or projections, this is explicitly noted in the text.