2025-08-09 01:02:20
Yesterday, OpenAI launched GPT-5, a new “flagship” model of some sort that’s allegedly better at coding and writing, but upon closer inspection it feels like the same old shit it’s been shoveling for the last year or two.
Sure, I’m being dismissive, but three years and multiple half-billion-dollar training runs later, OpenAI has delivered a model that is some indeterminate level of “better,” one that “scared” Sam Altman, and a launch that immediately committed what some Twitter users called “chart crimes” in its supposed coding benchmark charts.
This also raises the question: what is GPT-5? WIRED calls it a “flagship language model,” but OpenAI itself calls it a “unified system with a smart, efficient model that answers most questions, a deeper reasoning model, and a real-time router that quickly decides which [model] to use based on conversation type, complexity, tool needs, and your explicit intent.” That sure sounds like two models to me, and not necessarily new ones! Altman, back in February, said that GPT-5 was “a system that integrates a lot of our technology, including o3.”
It is a little unclear what GPT-5 — or at least the one accessed through ChatGPT — actually is. According to Simon Willison, there are three sub-models — regular, mini and nano — “which can each be run at one of four reasoning levels” if you configure them using the API.
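If you’re curious what that looks like in practice, here’s a minimal sketch using OpenAI’s Python SDK, assuming the sub-model names Willison describes and the Responses API’s reasoning-effort setting (exact parameter names may vary by SDK version):

```python
# A minimal sketch, assuming OpenAI's Python SDK and the sub-model
# names Simon Willison describes (gpt-5, gpt-5-mini, gpt-5-nano).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each sub-model can purportedly run at one of four reasoning levels.
response = client.responses.create(
    model="gpt-5-mini",
    reasoning={"effort": "minimal"},  # or "low", "medium", "high"
    input="Explain the difference between the GPT-5 sub-models.",
)
print(response.output_text)
```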
When it comes to what you access on ChatGPT, however, you’ve got two options — GPT-5 and GPT-5-Thinking, with the entire previous generation of GPT models no longer available for most users to access.
I believe GPT-5 is part of a larger process happening in generative AI — enshittification, Cory Doctorow’s term for when platforms start out burning money to offer an unlimited, unguarded experience that attracts users, then degrade the experience and move features to higher tiers as a means of draining the blood from those users.
With the launch of GPT-5, OpenAI has fully committed to enshittifying its consumer and business subscription products, arbitrarily moving free users to a cheaper model and limiting their ability to generate images, and removing the ability to choose which model you use in its $20, $35 and “enterprise” subscriptions, moving any and all choice to its $200-a-month “pro” subscription.
OpenAI’s justification is an exercise in faux-altruism, framing “taking away all choice” as a “real-time router that quickly decides which [model] to use.” ChatGPT Plus and Team members now mostly have access to two models — GPT-5 and GPT-5-Thinking — down from the six they had before.
This distinction is quite significant. Where users once could get hundreds of messages a day on OpenAI’s o4-mini-high and o4-mini reasoning models, GPT-5 for ChatGPT Plus subscribers offers 200 reasoning (GPT-5-Thinking) messages a week, along with 80 GPT-5 messages every three hours in which you can ask it to “think” about its answer, shoving you over to an undisclosed reasoning model. This may seem like a good deal, but OpenAI is likely putting you on the cheapest model whenever it can, all in the name of “the best choice.”
While Team accounts have “unlimited” access to GPT-5, they still face the same 200-reasoning-messages-a-week limit, and while yes, you could ask it to “think” more, do you think that OpenAI is going to give you its best reasoning models? Or will it, as it said, “bring together the best of their previous models” and “choose the right one for the job”?
Furthermore, OpenAI is permanently sunsetting ChatGPT access to every model that doesn’t start with GPT-5 on August 14th, except for customers of its most expensive subscription tier. OpenAI will reduce your model options to two or three choices (Chat, Thinking and Pro) and will choose whatever sub-model it sees fit in the most opaque way possible (and it appears this applies to the $200-a-month “Pro” plan too, I’m told by reporter Joanna Stern). GPT-5 is, by definition, a “trust me bro” product.
OpenAI is trying to reduce the burden of any particular user on the system under the guise of providing the “smartest, fastest model,” with “smartest” defined internally in a way that benefits the company, marketed as “choosing the best model for the job.”
Let’s see how users feel! An intrepid Better Offline listener pulled together some snippets from r/ChatGPT, where users are mourning the loss of GPT-4o, furious at the loss of other models, and calling GPT-5, in one case, “the biggest peice (sic) of garbage even as a paid user,” adding that “projects are absolutely brain-dead now.” One user said that GPT-5 is “the biggest bait-and-switch in AI history,” another said that OpenAI “deleted a workfow (sic) of 8 models overnight, with no prior warning,” and another said that “ChatGPT 5 is the worst model ever.” In fact, there are so many of these posts that I could find one to link for every word of this paragraph in under five minutes.
Yet OpenAI isn’t just screwing over consumers. Developers that want to integrate OpenAI’s models now have access to “priority processing,” previously an enterprise-only feature (see this archive from July 21st 2025) that guarantees low latency and uptime. While this sounds altruistic, or like a beneficial new feature, I’m not convinced. I believe there’s only one reason to do this: OpenAI intends to, or will be forced to due to capacity constraints, start degrading access to its API.
As with every model developer, we have no real understanding of what may or may not lead to needing “reliable, high-speed performance” from API access, but the suggestion here is that failing to pay OpenAI’s troll toll will put your API access in the hole. That toll is harsh, too, nearly doubling the API price on each model, and while the priority processing page lists pricing for all manner of models, OpenAI’s main pricing page reduces the options down to two (GPT-5 and GPT-5-mini), suggesting it may not intend to provide priority access to the rest in perpetuity.
OpenAI is far from alone in turning the screws on its customers. As I’ll explain, effectively every consumer generative AI company has started some sort of $200-a-month “pro” plan — Perplexity Max, Gemini ($249.99 a month before discounts), Cursor Ultra, Grok Heavy (which is $300 a month!), and, of course, Anthropic, whose $100-a-month and $200-a-month plans allowed Claude Code users to spend anywhere from 100% to 10,000% of their monthly subscription in API calls. This led to rate limits starting August 28 2025 — a conveniently-placed date to allow Anthropic to close as much as $5 billion in funding before its users churn.
Worse still, Anthropic burned all of that cash to get Claude Code to $400 million in annualized revenue according to The Information — around $33 million in monthly revenue that will almost certainly evaporate as its customers hit week-long rate limits on a product that’s billed monthly.
These are not plans created for “power users.” They are closer to the actual price points these things need to hit to be remotely sustainable, and even then, Sam Altman said earlier in the year that ChatGPT Pro’s $200-a-month subscription was losing OpenAI money. And with GPT-5, meaningful functionality — the ability to choose the specific model you want for a task — is being completely removed for ChatGPT Plus and Team subscribers.
This is part of an industry-wide enshittification of generative AI, where the abominable burn rates behind these products are forcing these companies to take measures ranging from minor to drastic.
The problem, however, is that these businesses have yet to establish truly essential products, and even when they create something popular — like Claude Code — that popularity depends on burning horrendous amounts of cash. The same goes for Cursor, and, I believe, just about every other major product built on top of Large Language Models. And I believe that when they try to adjust pricing to reflect their actual costs, that popularity will begin to wane. We may already be seeing that with Claude Code, based on the sentiment on the tool’s Reddit page, though I’m wary of making sweeping statements right now, as it’s just too early to say.
The great enshittification of AI has begun.
2025-08-07 04:02:39
In the last week, we’ve had no fewer than three different pieces asking whether the massive proliferation of data centers is a bubble, and though they at times seem to take the default position that AI’s value is inevitable, they’ve begun to sour on the idea that it’s going to arrive anytime soon.
Meanwhile, quirked-up threehundricorn OpenAI has either raised or is about to raise another $8.3 billion in cash, less than two months after it raised $10 billion from SoftBank and a selection of venture capital firms.
I hate to be too crude, but where the fuck is this money going? Is OpenAI just incinerating capital? Is it compute? Is it salaries? Is it compute? Is it to build data centers, because SoftBank isn’t actually building anything for Stargate?
The Information suggested OpenAI is using the money to build data centers — possibly the only investment worse than generative AI itself — and it’s one OpenAI can’t avoid, because it is also somehow running out of compute. And now it’s in “early-stage discussions” about an employee share sale that would value the company at $500 billion, a ludicrous number that shows we’re leaving the realm of reality. To give you some context, Shopify’s market cap is $197 billion, Salesforce’s is $248 billion, and Netflix’s is $499 billion. Do you really think that OpenAI is worth more than these companies? Do you think it’s worth more than AMD, at a $264 billion market cap? Do you?
AHhhhhhhh-
Amongst this already-ridiculous situation sits the issue of OpenAI and Anthropic’s actual revenues, which I wrote about last week, and which I’ve roughly estimated at $5.26 billion and $1.5 billion respectively (as of July). In any case, these estimates were made based on both companies’ predilection for leaking their “annualized revenues”: a month’s revenue multiplied by twelve.
This extremely annoying term is one that I keep bringing up because it’s become the de-facto way for generative AI companies to express their revenue, and both OpenAI and Anthropic are leaking these numbers intentionally, and doing so in a way that suggests they’re not even using the traditional ways of calculating them. OpenAI leaked on July 30 2025 that it was at $12 billion annualized revenue — so around $1 billion in a 30-day period — yet two days later, on August 1 2025, the New York Times reported it was at $13 billion annualized revenue, or $1.08 billion of monthly revenue.
It’s very clear OpenAI is not talking in actual calendar months, at which point we can assume something like a trailing 30-day window (as in, the “month” is just the last 30 days rather than a calendar month). We can, however, declaratively say that it’s not reporting “the month of June” or “the month of July,” because if it was, OpenAI wouldn’t have given two vastly different god damn numbers in the same two-day period. That doesn’t make any sense. There are standard ways to handle annualized revenue, and it’s clear OpenAI isn’t following them.
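To make the arithmetic concrete, here’s a toy illustration, with entirely hypothetical daily figures, of how a trailing-30-day window lets an “annualized” number jump by a billion dollars in two days:

```python
# Toy illustration with hypothetical daily revenue figures, showing how
# a trailing-30-day "annualized" number can jump ~$1 billion in two days.

def annualized(trailing_30_days):
    # "Annualized revenue" as the industry uses it: a month's worth of
    # revenue (here, a trailing 30-day window) multiplied by twelve.
    return sum(trailing_30_days) * 12

# Hypothetical: a steady ~$33M/day for the whole window...
window_july_30 = [33_000_000] * 30
# ...then two ~$75M days (say, big enterprise deals landing) roll in,
# pushing two ordinary days out of the window.
window_august_1 = [33_000_000] * 28 + [75_000_000] * 2

print(f"July 30: ${annualized(window_july_30) / 1e9:.1f}B annualized")   # $11.9B
print(f"August 1: ${annualized(window_august_1) / 1e9:.1f}B annualized") # $12.9B
```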
And to be even clearer, while I can’t say for certain, I believe these leaks are deliberate. The timing of OpenAI’s leaks lines up exactly with its fundraising.
On Anthropic’s side, these revenues are beginning to get really weird. Anthropic went from making $72 million ($875 million annualized) in January to $433 million in July — or at least, it leaked to The Information on July 1, 2025 that it was at $4 billion annualized ($333 million a month), and claimed to Bloomberg on July 29 2025 that it had reached $5 billion annualized revenue ($416 million a month).
How’d it get there? I’m guessing it was from cranking up prices on Cursor, and The Information has since confirmed as much, reporting that $1.4 billion of Anthropic’s annualized revenue comes from its top two customers (so around $116 million a month), the biggest of which is Cursor. Confusingly, The Information also says that Anthropic’s Claude Code is “generating nearly $400 million in annualized revenue, roughly doubling from just a few weeks ago,” meaning about $33 million of monthly revenue.
In any case, I think Cursor is a huge indicator of the current fragility of the bubble — and the fact that for most AI startups, there’s simply no way out, because being acquired or going public does not appear to be a viable route.
I know it sounds a little insane, but I believe that Cursor is the weak point of the entire AI bubble, and I’ll explain why, and how this could go. This is by no means inevitable, but I cannot work out what else Cursor can do.
Cursor, at this point, faces two options: die, or get acquired. This is not an attack on anyone who works at the company, nor anything personal. The unit economics of this business do not make sense and yet, on some level, its existence is deeply important to the valley’s future.
OpenAI? OpenAI couldn’t acquire Windsurf because it was too worried Microsoft would get the somehow-essential IP of one of what feels like a hundred different AI-powered coding environments. It also already tried and failed to buy Cursor, and if I’m honest, I bet Cursor would sell now. Honestly, Cursor fucked up bad not selling then. It could have got $10 billion and Sam Altman would’ve had to accelerate the funding clause. It would’ve been so god-damn sick, but now the only “sick” thing here is Cursor’s fragile, plagued business model.
How about Anthropic? Eh! It already has its own extremely-expensive coding environment, Claude Code, which, as I estimated a few weeks ago, loses the company anywhere from 100% to 10,000% of a subscription per customer, and now Anthropic is adding weekly limits on accounts, which will, I believe, create some of the most gnarly churn in SaaS history. Also, does Anthropic really want to acquire its largest customer? And with what money? It’s not raising $5 billion to bail out Cursor. Anthropic needs that money to feed directly into Andy Jassy’s pocket to keep offering increasingly-more-complex models that never quite seem to be good enough.
Google? It just sort-of-bought Windsurf! It can’t do that again. It already handed out a participation trophy of multiple billions of dollars to investors and founders so nobody had to get embarrassed, then allowed Cognition to pick up the scraps of a business that made $6.83 million a month after burning $143 million of investor capital (TechCrunch reports Windsurf was left with $100 million in cash post-acquisition). TechCrunch also reports that Cognition paid $250 million for what remained, and that this deal didn’t actually pay out the majority of Windsurf’s employees.
Meta? If I’m Cursor’s CEO, I am calling Mark Zuckerberg and pretending that I think the only person in the world who can usher in the era of Superintelligence is the guy who burned more than $45 billion on the metaverse and believes that not wearing AI glasses in the future will be a disadvantage. I would be saying all manner of shit about the future, and that the only way to do this was to buy my AI-powered coding startup that literally can’t afford to exist.
And that really is the problem. These companies are all going through the same motions that every company before them did — raise as much money as possible, get as big as possible, and eventually scale to the point you’re fat with enterprise cash.
Except the real problem is that, just like the glut of physical real estate big tech has taken on, generative AI companies are burdened with a constant and aggressive form of cloud debt — the endless punishment of the costs of accessing the APIs for generative AI models that always seem to get a little better, but never in such a way that anything really changes, other than how much Anthropic and OpenAI are going to need at the end of the month before they break your startup’s legs.
I’m not even trying to be funny! Anthropic raised its prices on Cursor so severely it broke its already-unprofitable business model. These products — which, for the most part, don’t produce that much revenue — would need to be sold with users aware of (and sensitive to) the cost of providing them. Instead, Cursor’s original product was $20-a-month for 500 “fast requests” across different models, in the same way that accessing Claude Code on any subscription is either $20, $100, or $200 a month rather than paying per API call: these companies all sell products that shield the customer from the actual costs of running the services.
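To see how quickly a flat fee falls apart, here’s a toy back-of-the-envelope calculation; every number in it is a hypothetical assumption, chosen only to illustrate the shape of the problem:

```python
# Toy illustration of flat-fee pricing vs. per-token API costs.
# All numbers below are hypothetical assumptions, not Cursor's actuals.
subscription_price = 20.00       # $/month, flat
fast_requests = 500              # requests included per month
tokens_per_request = 20_000      # assumed prompt + completion tokens
blended_token_price = 10.00      # assumed $ per million tokens

api_cost = fast_requests * tokens_per_request * blended_token_price / 1_000_000
margin = subscription_price - api_cost

print(f"API cost: ${api_cost:.2f}/month")  # $100.00
print(f"Margin:   ${margin:.2f}/month")    # -$80.00 per user, before payroll
```

Under these assumptions, every subscriber is a loss; the only levers are raising prices, throttling usage, or hoping the model provider cuts its rates, which is exactly the squeeze described above.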
The irony is that, despite being willing to kill these companies by fundamentally changing the terms upon which they access these models, Anthropic is also, in some way, dependent on Cursor, Replit, and other similar firms continuing to buy tokens at the same rate as before, as that consumption is baked into its ARR figures, as well as the forward-looking revenue projections.
It is, in some sense, a Kobayashi Maru. Anthropic has an existential need to screw over its customers by hiking rates and imposing long-term commitments, but its existence is also, in some way, predicated on those companies continuing to exist. If Cursor and Replit both die, that’s a significant chunk of Anthropic’s API business gone in a flash — an API business that, may I remind you, significantly overshadows its subscription business (making Anthropic almost an inverse of OpenAI, where subscriptions drive the bulk of revenue).
Anthropic’s future is wedded to Cursor, and I just don’t see how Cursor survives, let alone exits, or gets subsumed by another company in a way that mirrors how acquisitions have worked since…ever.
If Cursor does not sell for a healthy amount — I’m talking $10 billion plus, and I mean actually sell, not “the founders are hired in a strange contractual agreement that pays out investors and its assets are sold to Rick from Pawn Stars” — it will prove that no generative AI company, to this date, has actually been successful. In reality, I expect a Chumlee-esque deal that helps CEO Michael Truell buy a Porsche while his staff makes nothing.
Is Cursor worth $10 billion? Nope! No matter how good its product may or may not be, it is not good enough to be sold at a price that doesn’t require Cursor to incinerate hundreds of millions of dollars with no end in sight.
And this ultimately gives us the real conundrum — why aren’t generative AI startups selling?
Before we go any further, there have been some acquisitions, but they are sparse, and seem almost entirely centered around bizarre acqui-hires and confusing fire sales.
AMD bought Silo AI, “the largest private AI lab in Europe,” in August 2024 for $665 million, which appears to be the only real acquisition in generative AI history, one seemingly motivated in part by Silo’s use of AMD’s GPUs.
Elsewhere, NVIDIA bought OctoAI for an estimated $250 million in September 2024, after buying Brev.dev in July 2024 for an undisclosed sum, and then Gretel in March 2025. Yet in all three cases these are products to deploy generative AI, and not products built on top of generative AI or AI models. Canva bought “generative AI content and research company” Leonardo.AI in July 2024 for an undisclosed sum.
Really, the only significant one I’ve seen was on July 29 2025, when publicly-traded customer service platform NICE bought AI-powered customer service company Cognigy in a $955 million deal. According to CX Today, Cognigy expects about $85 million in revenue this year, though nobody appears to be talking about costs. Cognigy, according to some sources, charges tens or hundreds of thousands of dollars per contract for its “AI voice agents” that can “understand and respond to user input in a natural way.”
Great! We’ve got one real-deal “company built on models” acquisition, and it’s a company that most people haven’t heard of making around $7 million a month.
Let’s take a look at the others.
Outside of one very industry-specific acquisition, there just doesn’t seem to be the investor hunger to buy a company valued at $9.9 billion.
And you have to ask why. If AI is, as promised, the thing that’ll radically change our economy, and these companies are building the tools that’ll bring about that change, why does nobody want to buy them?
And, in broader terms, what does it mean when these companies — those with $10bn, or in the case of OpenAI, $300bn valuations — can’t be bought, and can’t go public? Where does this go? What happens next? What’s the gameplan here? How will the venture firms that ploughed billions of capital into these businesses bring a return for their LPs if there are no IPOs or buyouts?
The economic implications of these questions are, quite frankly, terrifying — especially when you consider the importance that VC has historically held in building the US tech ecosystem, and they raise further questions about the impact of an AI bubble on companies that are promising, and do have a viable business model, and a product with actual fit, but won’t be able to actually raise any cash.
Great! I would believe profitability was possible if it had ever, ever happened, which it has not.
I’m not even being sarcastic or rude. It has just not happened. No company that stakes its entire product on generative AI appears to be able to make money. Glean, a company that makes at best $8.3 million a month ($100 million annualized revenue), said it had $550 million in cash in December of last year, and then had to raise $150 million in June of this year. Where did that money go? Why does a generative search engine product, with revenues less than a third of the Cincinnati Reds baseball team’s, need half a billion dollars to make $8.3 million a month?
I’m not saying these companies are unnecessary, so much as they may very well be impossible to run as real businesses. This isn’t even a qualitative judgment of any one generative AI company. I’m just saying: if any of these were good businesses, they would either be profitable or getting acquired in actual deals, and there would be proof of that by now.
The amount of cash they are burning does not suggest they’re rapidly approaching any kind of sane burn rate, or we would have heard. Putting aside any skepticism I have, and anything you may hold against me for what I say or the way I say it: where are the profitable companies? Why isn’t there one, outside of the companies creating data to train the AI models, or Nvidia? We’re three years in, and we haven’t had one.
We also have had no exits and no IPOs. There has been no cause for celebration, no validation of a business model through another company deciding that it was necessary to continue its dominance by raising funds on the public market, or allowing actual investors — flawed though they may be — to act as the determiner of their value.
It is unclear what the addition of Windsurf’s intellectual property adds to Cognition, much like it’s a little unclear what differentiates Cognition’s so-called AI-powered software engineer “Devin” from anything else on the market. I hear Goldman Sachs is paying for it, and its CIO Marco Argenti said some of the stupidest shit I’ve ever heard to CNBC, a quote that nevertheless shows how little it’s actually paying for:
“We’re going to start augmenting our workforce with Devin, which is going to be like our new employee who’s going to start doing stuff on the behalf of our developers,” Argenti told CNBC. “Initially, we will have hundreds of Devins [and] that might go into the thousands, depending on the use cases.”
Hundreds of Devins = hundreds of seats. At a very optimistic 500 users at the highest-end pricing of $500-a-month (if it’s $20-a-month, Cognition is making, at most, a whole $10,000 a month) — and let’s assume that it does a discount at enterprise scale, because that always happens — that’s $250,000 a month! Wow! $3 million in annualized revenue? On a trial basis? Amazing!
Sidenote: I'm so impressed! To be clear, it’s probably far fewer seats and far fewer dollars a month.
In fact, I can’t find a shred of evidence that Cognition otherwise makes much money. Despite currently raising $300 million at a $10 billion valuation, I can find no information about Cognition’s revenues beyond one comment from The Information from July 2024, when Cognition raised at a $2 billion valuation:
“Cognition’s fundraise is the latest example of AI startups raising capital at sky-high valuations despite having little or no revenue.”
In a further move that is both a pale horse and a deeply scummy thing to do, Cognition has now, per The Information, laid off 30 people from the Windsurf team, and is offering the remaining 200 buyouts equal to 9 months of salary and, I assume, the end of any chance to accrue further stock in Cognition. CEO Scott Wu said the following in the email telling Windsurf employees about the layoffs and buyouts:
“We don’t believe in work-life balance—building the future of software engineering is a mission we all care so deeply about that we couldn’t possibly separate the two,” he said. “We know that not everyone who joined Windsurf had signed up to join Cognition where we spend 6 days at the office and clock 80+ hour weeks.”
All that piss, vinegar, and burning of the midnight oil does not appear to have created a product that actually matters. I realize this is a little cold, but if you’re braying and smacking your chest about your hard-charging, 6-days-a-week office culture, you should be able to do better than “we have one publicly-known customer and nobody knows our revenue.” Maybe it’s a little simpler: Cognition paid $250 million to acquire Windsurf so that it could, after the transaction, say it has $82 million in annualized revenue.
If that’s the case, this is one of the dodgiest, weirdest acquisitions I’ve seen in my life — two founders getting a few hundred million dollars between them and their investors, and a few of their colleagues moving with them to Google, leaving the rest of the staff effectively jobless or in Hell with little payoff for their time working at Windsurf.
I can only imagine how it must have felt to go from being supposedly acquired by OpenAI to this farcical “rich get richer” bullshit. It also suggests that the actual underlying value of Windsurf’s IP was $250 million.
So, I ask: why, exactly, is Cognition worth $10 billion? And why did it have to raise $300 million after raising, according to Bloomberg, “hundreds of millions” in March? Where is the money going? It doesn’t seem to have great revenue, Carl Brown of the Internet of Bugs revealed that it faked the demo of “Devin the AI-powered software developer” last year, and Devin doesn’t even rank on SWE-bench, the industry-standard benchmark for model efficacy at coding tasks.
At best, it has now acquired its own unprofitable coding environment and the smidgen of revenue associated with it. How would Cognition go public? What is the actual exit path for Cognition, or any other generative AI startup?
And that, right there, is Silicon Valley’s own housing crisis, except instead of condos and houses they can’t afford, bought with sub-prime adjustable-rate mortgages, venture capitalists have invested in unprofitable, low-revenue startups at valuations they can never sell at. And, like homeowners in the dismal years of 2008 and 2009, they’re almost certainly underwater — they just haven’t realized it yet.
Where consumers were unable to refinance their mortgages to bring their monthly payments down, generative AI startups face pressure to continually raise at higher and higher valuations to keep up with their costs, with each raise making it less likely the company will survive.
The other difference is that, in the case of the housing crisis, those who were able to hold onto their properties eventually saw their equity recover to pre-crash levels, in part because housing is essential, and because its price is influenced just as much by supply and demand as by people’s ability to finance the purchase of properties; when the population increases, so too does the demand for housing. None of that is true with AI. There’s a finite number of investors, a finite number of companies, and a finite amount of capital — and those companies are only as valuable as investors’ expectations for them, and as the broader sentiment towards AI.
Who is going to buy Cognition? Because the only other opportunity for the investors who put the money into this company to make money here — let alone to recoup their initial investment — is for Cognition to go public. Do you think Cognition will go public? How about Cursor? It’s worth $9.9 billion, and there was a rumour that it was raising at a valuation of $18 billion to $20 billion back in June.
Do you see Perplexity, at a valuation of $18 billion, selling to another company? The alternative, as discussed, is for Perplexity to go public: a company with 15 million users that, at $150 million in annualized revenue, is still making less than half the revenue of the Cincinnati Reds baseball team ($325 million in annual revenue, and that’s real money, not “annualized revenue”). Perplexity has, at this point, raised over a billion dollars to lose $68 million in 2024 on $34 million of revenue.
By comparison, the Cincinnati Reds is a great business, with a net income of $29 million a year, all to provide a service that upsets and humiliates millions of people from Ohio every year for the pleasure of America.
Putting aside the Reds, what exactly is it that Perplexity could offer to the public markets as a stock, or to an acquirer? Apple considered acquiring it in June, but Apple tends to acquire the companies it wants to integrate into the core business (as was the case with Siri), which makes me think that Perplexity leaked information about a deal that was never really serious. Hell, Meta talked about acquiring it too. Isn’t it weird that two different companies talked about buying Perplexity but neither of them did it? CEO Aravind Srinivas said in July that he wanted to “remain independent,” which is a weird thing to say after talking to two giant multi-trillion-dollar market cap tech firms about selling to them.
It’s almost as if nobody actually wants to buy Perplexity, or any of these sham companies, which I know sounds mean, but if you are worth billions or tens of billions of dollars and you can’t make more than a bottom-tier baseball team in fucking Ohio, you are neither innovative nor deserving of said valuation.
But really, my pissiness and baseball comparisons aside, what exactly is the plan for these companies? They don’t make enough money to survive without a continuous flow of venture capital, and they don’t seem to make impressive sums of money even when allowed to burn as much as they’d like. These companies are not being forced to live frugally, or at least have yet to be made to, perhaps because they’re all actively engaged in spending as much money as possible in pursuit of finding an idea that makes more money than it loses. This is not a rational or reasonable way to proceed.
Yes, there are startups that can justify burning capital. Yes, there are companies that have burned hundreds of millions of dollars to find their business models, or billions in the case of Uber, but none of these companies are like those companies in the generative AI space. GenAI businesses don’t have the same economics, nor do they have the same total addressable markets. If you’re going to say “Amazon Web Services,” I already explained why you’re wrong a few weeks ago.
These startups are their VC firms’ subprime mortgages, overstuffed valuations with no exit route, and no clear example of how to sell them or who to sell them to.
The closest they’ve got is using generative AI startups as beauty pageants for guys wearing Patagonia, finding ways to pretend that the guy who runs an AI startup — sorry, AI lab — is some sort of mysterious genius versus just another founder in just another bubble with just another overstuffed valuation.
The literal only liquidity mechanism (outside of Cognigy) that generative AI has had so far is “selling AI talent to big tech at a premium.” Nobody has gone or is going public, and if they are not going public, the only route for these companies is to either become profitable — which they haven’t — or sell to somebody, which they cannot.
But I’ve been dancing around the real reason they won’t sell: because, fundamentally, generative AI does not let companies build something new. Anyone that builds a generative AI product is ultimately just prompting the model, albeit in increasingly complex ways at the scale of something like Claude Code — though Anthropic has the advantage of being one of the main veins of infrastructure. This means that a generative AI company owns very few unique things beyond its talent, and will forever be at the mercy of any and all decisions that its model provider makes, such as increasing prices or creating competing products.
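To underline the point, here’s a minimal sketch of what “building on a model” so often boils down to; the product name, prompt, and model choice are all made up, but the shape is the point: a system prompt wrapped around someone else’s API.

```python
# A minimal sketch of the point above: many "AI products" reduce to a
# system prompt plus someone else's model behind an API. The product
# name, prompt, and model choice here are all hypothetical.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are CodeBuddy, an AI pair programmer. "
    "Respond to every request with a unified diff."
)

def code_buddy(user_request: str) -> str:
    # The "product" is this function. The model, its pricing, and its
    # rate limits all belong to the model provider, which can change
    # any of them (or ship a competing product) at will.
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content
```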
I know it sounds ludicrous, but this is the reality of these companies. While there are some companies that have some unique training and models, none of them seem to be building interesting or unique products as a result.
If your argument is that these things take some time — how long do they have?
No, really! So many of you have said that “this is what happens, they burn a bunch of money, they grow, and then…” and then you stop short, because the next thing you’d say is “turn profitable by getting enterprise customers.” Nobody has managed the first part (turning profitable), and few can even do the second (landing enterprise customers) in anything approaching a consistent fashion.
But really, how long should we give them? Three years?
Perplexity’s had three years and a billion dollars, it doesn’t seem to be close to profitable. How long does Perplexity deserve, exactly? An eternity?
Every single example of a company that “burned a lot of money and then stopped burning it in the end” has been a company with a physical thing or connections to the real world, with the exception of Facebook, which was never the kind of cash-burning monstrosity that generative AI is.
There has never been a software company that has just chewed through hundreds of millions — or billions — of dollars and then suddenly become profitable, mostly because the magical valuations of software have been in its ability to transcend infrastructure. The unit economics of selling software like Microsoft Office, or of providing access to Instagram, do not require the most powerful graphics processing units running at full tilt at all times, and those are products that people like and want to use every day.
I get people saying “they’re in the growth stage!” about a few companies, but when all of them are unprofitable, and the ones outside of OpenAI and Anthropic aren’t really making impressive amounts of money anyway? C’mon! This isn’t anything like any boom that leads to something, and that’s because the economics do not make sense.
And that’s before we get to OpenAI and Anthropic!
So, as a reminder: OpenAI appears to have burned at least ten billion dollars in the last two months. It has just raised another $8.3 billion (after raising $10 billion in June, according to the New York Times), and intends to receive around $22.5 billion by the end of the year from SoftBank, and that is assuming it becomes a for-profit entity by the end of the year. If that doesn’t happen, the round gets cut to $20 billion total, meaning that SoftBank would only be on the hook for a further $1.7 billion.
I am repeating myself, but I need you to really get this: OpenAI got $10 billion in June 2025, and had to raise another $8.3 billion in August 2025. That is an unbelievable cash burn, one dwarfing any startup in history, rivalled only by xAI, makers of “Grok, the racist LLM,” which loses over $1 billion a month.
I should be clear that if OpenAI does not convert to a for-profit, there is no path forward. To continue raising capital, OpenAI must have the promise of an IPO. It must go public, because at a valuation of $300 billion, OpenAI can no longer be acquired, because nobody has that much money and, let’s be real, nobody actually believes OpenAI is worth that much. The only way to prove that anybody does is to take OpenAI public, and that will be impossible if it cannot convert.
And, ironically, SoftBank’s large and late-stage participation makes any exit harder, as early investors will see their holdings diluted as a percentage of total equity — or whatever the hell we’re calling it. While a normal company could just issue equity and deal with the dilution that way, OpenAI’s structure necessitates a negotiation in which companies can obstruct the entire process if they see fit.
Speaking of companies that might obstruct that transition, let’s talk about Microsoft. As I asked in my premium newsletter a few weeks ago, what if Microsoft doesn’t want OpenAI to convert? It owns all the IP, it has access to all of OpenAI’s research, and it already runs most of OpenAI’s infrastructure. While, in a best-case scenario, Microsoft would end up owning a massive chunk of the biggest tech startup of all time (I’m talking about equity, not OpenAI’s current profit-sharing units), it might also believe that it stands to gain more by letting OpenAI die and assuming its role in the AI ecosystem.
But let’s assume it converts, and OpenAI now…has to continue raising money at a rate that will require it, allegedly, to only need to raise $17 billion in 2027.
That number doesn’t make sense, considering it already had to bring forward its $8.3 billion fundraise by at least three months, but let’s stick with that idea. OpenAI believes it will be profitable, somehow, by 2030, and even if we assume that, that means it intends to burn over a hundred billion dollars to get there.
Is the plan to take OpenAI public, dumping a toxic asset onto the public markets, only to let it flounder and convulse and die for all to see? Can you imagine OpenAI’s S-1? How well do you think this company would handle a true financial audit from a major accounting firm?
If you want to know what that looks like, google “WeWork,” which went from tech industry darling to joke in a matter of days, in part because it was forced to disclose how bad things actually were on its S-1. No, really, read this article.
With that in mind, I feel similarly about Anthropic. Nobody is buying this company at $170 billion, and thus the only way to access liquidity would be to take it public, and show the world how a company that made $72 million in January 2025 and then more than $400 million in July 2025 also loses $3 billion or more after revenue, and then let the market decide on its fair price.
The arguments against my work always come down to “costs will go down” and “these products will become essential.” Outside of ChatGPT, there’s really no proof that these products are anything remotely essential, and I argue there’s very little about ChatGPT that Microsoft couldn’t provide with rate limits via Copilot.
I’d also argue that “essential” is a very subjective term. Essential — in the sense that some people use it as search — doesn’t mean that it’s useful for enterprises, or the majority of people.
And, I guess, ChatGPT somehow makes $1 billion a month in revenue selling access to premium versions of ChatGPT — though I’m not 100% sure how. Assuming it has 20 million customers paying $20 a month, that’s $400 million a month; assume its 5 million business customers pay an average of $100 each, and that’s another $500 million, for $900 million total…and is that average really plausible? Are that many people paying $35 a month, or $50, or $200? OpenAI doesn’t break out the actual revenues behind these numbers for a reason, and I believe that reason is “they don’t look as good.”
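If you want to poke at the assumptions yourself, the back-of-the-envelope math above is trivial to sketch out (again, every subscriber count and price here is a guess, not a disclosed figure):

```python
# Back-of-the-envelope sketch of the subscription math above.
# Every count and price is an assumption, not a disclosed figure.
consumer_subscribers = 20_000_000   # assumed ChatGPT Plus subscribers
consumer_price = 20                 # $/month

business_seats = 5_000_000          # assumed business customers
business_avg_price = 100            # assumed blended $/seat/month

monthly_revenue = (consumer_subscribers * consumer_price
                   + business_seats * business_avg_price)
print(f"${monthly_revenue / 1e6:,.0f}M/month")  # $900M/month, short of $1B
```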
What’s OpenAI’s churn like? And does it really, as I wrote last week, end the year making more than Spotify at $1.5 billion a month?
We don’t know, and OpenAI (much like Anthropic) has never shared actual revenues, choosing instead to leak to the media and hope to obfuscate the actual amounts of money being spent on its services.
Anyway, long story short: these companies are unprofitable with no end in sight, don’t even make that much money in most cases, are valued at more than anybody would ever pay for them, do not have much in the way of valuable intellectual property, and the two biggest players burn billions of dollars more than they make.
Even if a government bailout were going to happen — it will not! — who would they give the money to, and for how long? Would they give it to all the startups? Is every startup going to get a Paycheck Protection Program, but for generative AI? How would that play out in rural red districts (where big tech has never been popular), which are being hit with both massive cuts to welfare and the shockwaves of a trade war that has made American agricultural exports (like feedstocks, which previously went to China by the shipload) less appealing worldwide?
So they bail out OpenAI, then stuff it full of government contracts to the tune of $15 billion a year, right? Sorry, just to be clear, that’s the low end of what this would take to do, and they’ll have to keep doing it forever, until Sam Altman can build enough data centers to…keep burning billions, because there’s no actual plan to make this profitable.
Say this happens. Now what? America has a bullshit generative AI company attached to the state that doesn’t really innovate and doesn’t really matter in any meaningful way, except that it owns a bunch of data centers?
I don’t think this happens! I think this is a silly idea, and the most likely situation would be that Microsoft would unhinge its jaw and swallow OpenAI and its customers whole. Hey, did you know that Microsoft’s data center construction is down year-over-year, and it’s basically signed no new data center leases? I wonder why it isn’t building these new data centers for OpenAI? Who knows.
Stargate isn’t saving it, either. As I wrote previously, Stargate doesn’t actually exist beyond the media hype it generated.
And yes, OpenAI is offering ChatGPT at $1 for a year to US government workers — and I cannot express how little this means, other than that OpenAI is horribly desperate. This product doesn’t do enough to make it essential, and this fire sale doesn’t change anything.
Anyway, does the government do this for everybody? Because everyone else is gonna need it, as none of these companies can go public while they all suffer from the burden of generative AI. And if the government does it, will it also subsidize the compute of for-profit companies like Cursor? To what end? Where is the limit?
I think this is a question that we have to seriously consider at this point, because its ramifications are significant.
If I’m honest, I think the future of LLMs will be client-side, on egregiously-expensive personal setups for enthusiasts, and in a handful of niche enterprise roles. Large Language Models do not scale profitably, and their functionality is not significant enough to justify the costs of running them. By immediately applying old economics — the idea that you would pay a monthly fee for relatively-unlimited access — companies like OpenAI and Anthropic trained users to use their products in a way that was antithetical to their costs.
Then again, had these models been served in a way that was mindful of their costs, there would likely have been no way to even get this far. If OpenAI is making a billion dollars a month, it is possibly losing that much (or more) after revenue, and that’s the money it can get selling the product in a form that can never turn profitable. If OpenAI charged in line with its actual costs, would it even be able to justify a freely-available version of ChatGPT, outside of a few free requests?
The revenue you see today is what people are willing to pay for a product that loses money, and I cannot imagine they would pay as much if the companies in question charged their costs. If I’m wrong, Cursor will be just fine, and that’s assuming that Cursor’s current hobbled form is even profitable, which it has not said it is.
So, you’ve got an entire industry of companies that struggle to do anything other than lose a lot of money. Great.
And now we have a massive data center buildout, the likes of which we’ve never seen, all to capture demand for a product that nobody makes much money selling.
This, naturally, leads to an important question: how do these people building data centers actually make money?
Last week, the Wall Street Journal published one of the more worrying facts I’ve seen in the last two years:
Investor and tech pundit Paul Kedrosky says that, as a percentage of gross domestic product, spending on AI infrastructure has already exceeded spending on telecom and internet infrastructure from the dot-com boom—and it’s still growing. He also argues that one explanation for the U.S. economy’s ongoing strength, despite tariffs, is that spending on IT infrastructure is so big that it’s acting as a sort of private-sector stimulus program.…Capex spending for AI contributed more to growth in the U.S. economy in the past two quarters than all of consumer spending, says Neil Dutta, head of economic research at Renaissance Macro Research, citing data from the Bureau of Economic Analysis.
A global accounting of this infrastructure spending would be even bigger, as it would include capex from these companies’ most important partners. Foxconn has recently spent big building out factories for Apple in India, which just supplanted China as the source of the majority of U.S.-destined iPhones, according to Canalys. And the world’s largest chip manufacturer, TSMC, spent about $10 billion on capex in its most recent quarter.
The massive buildout of data centers — and the associated physical gear like chips, servers, and raw materials for building them — has become a massive, dominant economic force…building capacity for an industry that is yet to prove it can make real revenues.
And no, Microsoft talking about its Azure revenue in its last quarterly earnings for the first time is not the same thing, as it stopped explicitly stating its AI revenue in January (when it was $13 billion annualized).
Anyway, AI capex allegedly — though I have some questions about this figure! — accounts for 1.2% of the US GDP in the first half of the year, and accounted for more than half of the (to quote the Wall Street Journal) “already-sluggish” 1.2% growth rate of the US economy.
Another Wall Street Journal piece published a few days later discussed how data center development is souring the free cash flow for big tech, turning them from the kind of “asset-light” businesses that the markets love into entities burdened by physical real estate and their associated costs:
For years, investors loved those companies because they were “asset-light.” They earned their profits on intangible assets such as intellectual property, software, and digital platforms with “network effects.” Users flocked to Facebook, Google, the iPhone, and Windows because other users did. Adding revenue required little in the way of more buildings and equipment, making them cash-generating machines.
This can be seen in a metric called free cash flow, roughly defined as cash flow from operations minus capital expenditures. It excludes things such as noncash impairment charges that can distort net income. This is arguably the purest measure of a business’s underlying cash-generating potential. Amazon, for example, tells investors: “Our financial focus is on long-term, sustainable growth in free cash flow.”
…
From 2016 through 2023, free cash flow and net earnings of Alphabet, Amazon, Meta and Microsoft grew roughly in tandem. But since 2023, the two have diverged. The four companies’ combined net income is up 73%, to $91 billion, in the second quarter from two years earlier, while free cash flow is down 30% to $40 billion, according to FactSet data. Apple, a relative piker on capital spending, has also seen free cash flow lag behind.
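The metric itself is simple to compute. A sketch with hypothetical figures shows how net income and free cash flow diverge once capex balloons:

```python
# Free cash flow = operating cash flow minus capital expenditures.
# Figures are hypothetical ($ billions), for illustration only.

def free_cash_flow(operating_cash_flow: float, capex: float) -> float:
    return operating_cash_flow - capex

# A hypothetical hyperscaler, two years apart:
print(free_cash_flow(operating_cash_flow=100, capex=30))   # 2023: 70
print(free_cash_flow(operating_cash_flow=130, capex=110))  # 2025: 20

# Net income can keep climbing while free cash flow collapses, because
# capex doesn't hit the income statement up front: it's capitalized and
# depreciated over years, while the cash leaves the building today.
```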
These numbers are all very scary, and I mean that sincerely, but they also fail to answer a basic question: how much was actually spent on AI capex in the US? One would think two different articles on this subject would include that number, rather than a single quarter’s worth, but from my estimates, I expect capital expenditures from the Magnificent Seven alone to crest $200 billion in the first half of 2025, with Axios estimating they’d spend around $400 billion this year.
Most articles are drafting off of a blog from Paul Kedrosky, who estimates total AI capex will be somewhere in the region of $520 billion for the year, which felt conservative to me, so I did the smart thing and asked him. Kedrosky noted that his numbers focus entirely on the four big spenders — Microsoft, Google, Meta and Amazon — and his own estimate of $312 billion in capex, and that the 1.2% figure came from the assumption that US GDP in 2025 will be around $28 trillion (which, I’d add, is significantly lower than other forecasts, which put it closer to $30 trillion).
Kedrosky, in his own words, was trying to be conservative, using public data and then building his analysis from there. I personally believe his estimate is too conservative, because it doesn’t factor in the capital expenditures of Oracle, which (along with Crusoe) is building the vast Abilene, Texas data center for OpenAI, or of any private data center developers sinking cash into AI capex.
When I asked him to elaborate, he estimated that “...AI spend, all-in, was around half of 3.0% Q2 real GDP growth, so 2-3x the lower bound, given multipliers, debt, etc. it could be half of US GDP full-year GDP growth.”
That’s so cool! Half of the US economy’s growth came from building data centers for generative AI, an industry whose combined revenue is a little more than that of the fucking smartwatch industry in 2024.
Another troubling point is that big tech doesn’t just buy data centers and then use them, but in many cases pays a construction company to build them, fills them with GPUs and then leases them from a company that runs them, meaning that they don’t have to personally staff up and maintain them. This creates an economic boom for construction companies in the short term, as well as lucrative contracts for ongoing support…as long as the company in question still wants them. While Microsoft or Amazon might use a data center and, indeed, act as if it owns it, ultimately somebody else is holding the bag and the ultimate responsibility for the data centers.
One such company is QTS, a data center developer that leases to both Amazon and Meta according to the New York Times, and which was acquired by Blackstone in 2021 for $10 billion. Since then, Blackstone has used commercial mortgage-backed securities — I know! — to raise over $8.7 billion to sink into QTS’ expansion, and as of mid-July said it’d be investing $25 billion in AI data centers and energy.
Blackstone, according to the New York Times, sees “strong demand from tech companies,” who are apparently “willing to sign what they describe as airtight leases for 15 to 20 years to rent out data center space.”
Yet the Times also names another problem — the “unanswered question” of how these private equity firms actually exit these situations. Blackstone, KKR and other asset management firms do not buy companies with the intention of syphoning off revenue, but to pump them up and sell them to another company. Much like AI startups, it isn’t obvious who would buy QTS at what I imagine would be a $25 billion or $30 billion valuation, meaning that Blackstone would have to take it public. Similarly, KKR’s supposed $50 billion partnership with investment firm Energy Capital Partners to build data centers and their associated utilities does not appear to have much of an exit plan either.
And let’s not forget Oracle, OpenAI, and Crusoe’s abominable mess in Abilene, Texas, where Oracle is paying for the $40 billion of GPUs and Crusoe is spending $15 billion raised from Blue Owl Capital and Primary Digital Infrastructure to build data centers for OpenAI, a company that loses billions of dollars a year. Why? So that OpenAI can, allegedly starting in 2028, pay Oracle $30 billion a year for compute, and yes, I am being fully serious.
To be clear, OpenAI, by my estimates, has only made around $5.26 billion this year (and will have trouble hitting its $12.7 billion revenue projection for 2025), and will likely lose more than $10 billion to do so.
Oracle will, according to The Information, owe Crusoe $1 billion in payments across the 15-year span of its lease. How does Crusoe afford to pay back its $15 billion in loans? Beats me! The Information says it’s raising $1 billion to “take on cloud giants” by “earning construction management fees and rent, and it can sell its stake in the project upon reaching certain completion milestones,” while also building its own AI compute, on the assumption that the demand is there outside of hyperscalers.
Then there’s CoreWeave, my least-favourite company in the world. As I discussed a few months ago, CoreWeave is burdened by obscene debt and a horrifying cash burn, and has seen its stock fall from a high of $183 on June 20, 2025 to around $111 as of writing this sentence, a slide that has led its all-stock attempt to acquire developer Core Scientific for $9 billion to start falling apart, as shareholders balk at the worrisome drop in CoreWeave’s stock price. CoreWeave has, since going public, had to borrow billions of dollars to fund its obscene capital expenditures ahead of the upcoming October 2025 start date for OpenAI’s $11.9 billion, 5-year-long deal for compute, which is also when CoreWeave must start paying off its largest loan. CoreWeave lost $314 million in its most recent quarterly earnings, and I see no path to profitability or, honestly, to its ability to keep doing business if the market sours.
CoreWeave, I add, is pretty much reliant on Microsoft as its primary customer. While this relationship has been fairly smooth (so far, and as far as we know), this dependence also presents an existential threat to CoreWeave, and is part of the reason why I’m so pessimistic about its survival. Microsoft has its own infrastructure, and has every incentive to cut out middlemen once it can meet demand with supply it owns (or leases, rather than subcontracts out), simply because middlemen add costs and shrink margins. If Microsoft walks, what’s left? How does CoreWeave service its ongoing obligations, and its mountain of debt?
In all of these cases, data center developers seem to have very few options for making actual money. We have companies spending billions of dollars to vastly expand their data center footprint, but very little evidence that doing so results in revenue, let alone some sort of payoff. And, similarly, the actual capital expenditures they’re making are…much smaller than those of big tech.
Digital Realty Trust — one of the largest developers with over 300 data centers worldwide and $5.55 billion in revenue in 2024 — only spent $3.5 billion in capex last quarter, and Equinix ($8.7 billion revenue in 2024), which has 270 of them, put capex at $3.5 billion too. NTT Global Data Centers, which has over 160 data centers, has dedicated $10 billion in capital expenditures “through 2027” to build out data centers.
Yet in many of these cases, it’s because these companies are — to quote a source of mine — “functionally obsolete for this cycle,” because legacy data centers are not plug-and-play ready for GPUs to slot into. Any investment in capex by these companies would have to cover both GPUs and either building new data centers or retrofitting (basically ripping the insides out of) old ones.
This means that the money flowing into AI data centers is predominantly going to neoclouds like CoreWeave and Crusoe, and it all seems to flow back to private equity firms that never thought about where the cashout might be. Blackstone led CoreWeave’s $7.5 billion loan with Magnetar Capital, and Crusoe signed a deal a week ago with Blackstone-owned infrastructure firm Tallgrass to build a data center in Wyoming, all of which seems very good for Blackstone until you ask “how does it actually make money here?”, as private equity firms do not generally like to hold assets longer than five years.
Even if Blackstone were willing to hold on, these companies’ capital expenditures are a drop in the bucket in the grand scheme of things. Assume Crusoe burns, as The Information suggests it will, as much as $4 billion in 2025, CoreWeave spends as much as $20 billion, Digital Realty Trust spends $14 billion, NTT Global Data Centers spends $3.33 billion (that’s $10bn over three years), and Equinix spends $14 billion. That’s $55.33 billion in AI capex spent in 2025 from the largest developers of data centers in the world.
For some context, as discussed above, $102 billion was spent by Meta, Alphabet, Microsoft and Amazon in the last quarter.
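To make the gap concrete, here's that arithmetic in a few lines of Python. This is a back-of-the-envelope sketch using only the estimates cited above, not any official disclosures:

```python
# Back-of-the-envelope check on the figures above, in billions of dollars.
developer_capex_2025 = {
    "Crusoe": 4.0,                        # The Information's 2025 estimate
    "CoreWeave": 20.0,
    "Digital Realty Trust": 14.0,
    "NTT Global Data Centers": 10.0 / 3,  # $10bn "through 2027", spread over ~3 years
    "Equinix": 14.0,
}

developers_total = sum(developer_capex_2025.values())
big_tech_one_quarter = 102.0  # Meta, Alphabet, Microsoft and Amazon, last quarter

print(f"Largest developers, all of 2025: ${developers_total:.2f}bn")  # ~$55.33bn
print(f"Big tech, a single quarter: ${big_tech_one_quarter:.0f}bn")
print(f"Ratio: {big_tech_one_quarter / developers_total:.2f}x")       # ~1.84x
```

In other words, four companies outspend the entire dedicated data center industry's year nearly twice over in a single quarter.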
Private equity may ultimately face the same problem as many AI startups: there is no clear exit strategy for these investments. In the absence of real liquidity, firms will likely resort to all manner of financial engineering (read: bullshit) — marking up portfolio companies using internally generated valuations, charging fees on those inflated marks, and using those marks to entice new commitments from limited partners.
Compounding this is their ability to lend increasing amounts of capital to their own portfolio companies via affiliated private credit vehicles — effectively recycling capital and pushing valuation risk further down the line. This kind of self-reinforcing leverage loop is particularly opaque in private credit, which now underpins much of the AI infrastructure buildout. The complexity of these arrangements makes it hard to anticipate the full economic fallout if the cycle breaks down, but the systemic risk is building.
In any case, the supposed “AI capex boom” that is driving the US economy is not, as reported, driven by the massive interest in building out AI infrastructure for a variety of customers.
The reality is simple: the majority of all AI capex is from big tech, which is a massive systemic weakness in our economy.
While some might say that “AI capex” has swallowed the US economy, I think it’s more appropriate to say that Big Tech Capex Has Swallowed The US Economy.
I also want to be clear that the economy — which is the overall state of the country’s production and consumption of stuff, and the flow of money between participants in said economy — and the markets (as in the stock market) are very different things, but the calculations from Kedrosky and others have now allowed us to see where one might hit the other.
You see, the markets do not actually represent reality. While Microsoft, Amazon, Google, and Meta might want you to think there’s a ton of money in AI, their growth is mostly from selling further iterations and contracts for their existing stuff, or, in the case of Meta, further increasing its ad revenue. The economy is where things are actually bought and sold, representing the economic effects of both the build-out of AI and the sale of access to services and the AI models themselves. I recognize this is simplistic, but I am laying it out for a reason.
As I discussed at length in the Hater’s Guide to the AI Bubble, NVIDIA is the weak point in the stock market, representing roughly 19% of the value of the Magnificent 7, which in turn makes up about 35% of the value of the US stock market. The Magnificent Seven stocks have seen a huge boom through their own growth, which has been mistakenly attributed to revenue from AI, revenue which, as I laid out previously, amounts to about $35 to $40 billion over the last two years. Nevertheless, the markets can continue to be irrational because all they care about is “number going up,” as the “value” of a stock is oftentimes disconnected from the value of the company itself, instead associated with its propensity for growth.
GDP and other measurements of the economy aren’t really something you can fudge quite as easily (at least, in transparent, democratic societies), nor can you say a bunch of fancy words to make people feel better in the event that growth stalls or declines.
This leads me to my principal worry: that “AI capex” is actually a term for the expenditures of four companies, namely Microsoft, Amazon, Google and Meta, with NVIDIA’s GPU sales being part of that capex too.
While we can include others like Oracle, Musk’s xAI, and various neoclouds like CoreWeave and Crusoe — which, according to D.A. Davidson’s Gil Luria, will account for about 10% of NVIDIA’s GPU sales in 2025 — the reality is that whatever economic force is being driven by “AI investment” is really just four companies building and leasing data centers to burn on generative AI, a product that makes a relatively small amount of money before losing a great deal more.
42% of NVIDIA’s revenue comes from the Magnificent Seven (per Laura Bratton at Yahoo Finance), which naturally means that big tech is the lynchpin of investment in data centers.
I’ll put it far more simply: if AI capex represents such a large part of our GDP and economic growth, our economy does, on some level, rest on the back of Microsoft, Google, Meta and Amazon and their continued investment in AI. What should worry everybody is that Microsoft — which makes up 18.9% of NVIDIA’s revenue — has signed basically no leases in the last 12 months, and its committed datacenter construction and land purchases are down year-over-year.
While its capex may not have dipped yet (in part because the chip-heavy nature of generative AI means that capex isn’t exclusively dominated by property), it’s now obvious that if it does there will be direct effects on both the US economy and stock market, as Microsoft is part of what amounts to a stimulus package propping up America’s economic growth.
And not to repeat the point too much, but big tech has yet to actually turn anything resembling a profit on these data centers, and isn’t making much revenue at all out of generative AI.
How, exactly, does this end? What is the plan here? Is big tech going to spend hundreds of billions a year in capital expenditures on generative AI in perpetuity? Will they continue to buy more and more NVIDIA chips as they do so?
At some point, surely these companies have built enough data centers? Surely, at some point, they’ll run out of space to put these GPUs in? Is the plan to, by then, make so much money from AI that it won’t matter? What does NVIDIA do at that point? And how does the US economy rebound from the loss of activity that follows?
As I’ve said again and again, the generative AI bubble is, and always has been, fundamentally irrational, and inherently gothic, playing in the ruins, patterns and pathways of previous tech booms despite this one having little or no resemblance to them. Though the tech industry loves to talk about building a glorious future, its present is one steeped in rituals of decay and death, where the virtues of value creation and productivity take a backseat to burning billions and lying to the public again and again and again. The way in which the media has participated in these lies is disgusting.
Venture capital, still drunk off the fumes of 2021, keeps running the old playbook: shove as much money into a company as possible in the hopes you can dump it onto an acquirer or the public markets, only to get high on its own supply, pushing valuations to the point that there is no possible liquidity event for the majority of big private AI companies, thanks to their overstuffed valuations, burdensome business models and lack of any real intellectual property.
And, like the rest of the AI bubble, Silicon Valley’s only liquidity path out of the bubble is big tech itself. Without Google, Character.ai and Windsurf’s founders would likely have been left for dead, and the same goes for Inflection’s with Microsoft. I’d even argue the same of Scale AI, whose $14.3 billion “investment” from Meta effectively decapitated the company, removing CEO Alexandr Wang and leaving the rest of the business to die, laying off 14% of its staff and 500 contractors mere weeks after Wang and its investors cashed in.
In fact, generative AI is turning out to be a fever dream entirely made up by big tech. OpenAI would be dead if it wasn’t for the massive infrastructure provided by Microsoft at-cost in return for rights to its IP, research, and the ability to sell its models on top of the tens of billions of dollars of venture capital thrown into its billion-dollar cash incinerator. Anthropic would be dead if both Google and Amazon — the latter of which provides much of its infrastructure — hadn’t invested billions in keeping it alive so that it can burn $3 billion or more in 2025 while fucking over its enterprise customers and rate limiting the rest.
The generative AI industry is, at its core, unnatural. It does not make significant revenue compared to its unbelievable costs, nor does it have much revenue potential. It requires, unlike just about every software revolution, an unbelievable amount of physical infrastructure to run, and because nobody but big tech can afford to build that infrastructure, it creates very little opportunity for competition or efficiency. As the markets are in the throes of the growth-at-all-costs Rot Economy, they have failed to keep big tech in line, conflating big tech’s overall growth with growth driven by its capital expenditures. Sensible, reasonable markets would notice the decay of free cash flow or the ridiculousness of big tech’s capex bonanza, but instead they clap and squeal every time Satya Nadella jingles his keys.
What is missing is any real value generation. Again, I tell you, put aside any feelings you may have about generative AI itself, and focus on the actual economic results of this bubble. How much revenue is there? Why is there no profit? Why are there no exits? Why does big tech, which has sunk hundreds of billions of dollars into generative AI, not talk about the revenue it’s making? Why, for three years straight, have we been asked to “just wait and see,” and for how long are we going to have to wait to see it?
What’s incredible is that the inherently compute-intensive nature of generative AI basically requires the construction of these facilities, regardless of whether they actually contribute to the revenues of the companies that operate the models (like Anthropic or OpenAI, or any other business that builds upon them). As the models get more complex and hungry, more data centers get built — which hyperscalers book as long-term revenue, even though that spend is either subsidised by said hyperscalers or funded by VC money. This, in turn, stimulates even more capex spending, all without anyone having to answer basic questions about longevity or market fit.
Yet the worst part of this financial farce is that we’ve now got a built-in economic breaking point in the capex from AI. At some point capex has to slow — if not because of the lack of revenues or the massive associated costs, then because we live in a world with finite space. And when that slowdown happens, so will purchases of NVIDIA GPUs, which will in turn, as Kedrosky and others have shown, slow America’s economic growth.
And that growth is pretty much based on the whims of four companies, which is an incredibly risky and scary proposition. I haven’t even dug into the wealth of private credit deals that underpin buildouts for private AI “neoclouds” like CoreWeave, Crusoe, Nebius, and Lambda, in part because their economic significance is so much smaller than big tech’s ugly, meaningless sprawl.
We are in a historically anomalous moment. Regardless of what one thinks about the merits of AI or explosive datacenter expansion, the scale and pace of capital deployment into a rapidly depreciating technology is remarkable. These are not railroads — we aren’t building century-long infrastructure. AI datacenters are short-lived, asset-intensive facilities riding declining-cost technology curves, requiring frequent hardware replacement to preserve margins.
You can’t bail this out, because there is nothing to bail out. Microsoft, Meta, Amazon and Google have plenty of money and have proven they can spend it. NVIDIA is already doing everything it can to justify people spending more on its GPUs. There’s little more it can do here other than soak up the growth before the party ends.
That capex reduction will bring with it a reduction in expenditures on NVIDIA GPUs, which will take a chunk out of the US stock market. Although the stock market isn’t the economy, the two things are inherently linked, and the popping of the AI bubble will have downstream ramifications for the wider economy, just as the dot-com bubble did.
Expect to see an acceleration in layoffs and offshoring, in part driven by a need for tech companies to show — for the first time in living memory — fiscal restraint. For cities where tech is a major sector of the economy — think Seattle and San Francisco — there’ll be knock-on effects to those companies and individuals that support the tech sector (like restaurants, construction companies building apartments, Uber drivers, and so on). We’ll see a drying-up of VC funding. Pension funds will take a hit — which will affect how much people have to spend in retirement. It’ll be grim.
Worse than that is the fact that these data centers will be, by definition, non-performing assets, ones that inflict an opportunity cost that’ll be almost impossible to calculate. While a house, once built and sold, technically falls into that category (it doesn’t add to any economic productivity), people at least need somewhere to live. Shelter is an essential component of life. You can live without a data center the size of Manhattan.
What would have happened if companies like Microsoft and Meta instead spent the money on things that actually drove productivity, or created a valuable competitive business that drove economic activity? Hell, even if they just gave everyone a 10% raise, it would have likely been better for the economy than this, if we’re factoring in things like consumer spending.
It’s just waste. Profligate, pointless waste.
In summary, we’re already facing the prospect of a recession, and though I am not an economist, I can imagine it being a particularly nasty one given that the Magnificent Seven accounted for 47.87% of the Russell 1000 Index’s returns in 2024. Even if big tech somehow makes this crap profitable, it’s hard to imagine that they’ll counterbalance any capex reduction with revenue, because there doesn’t seem to be that much revenue in generative AI to begin with.
This is what happens when you allow the Rot Economy to run wild, building the stock market and tech industry on growth over everything else. This is what happens when the tech media repeatedly fails to hold the powerful to account, catering to their narratives and making excuses for their abominable, billion-dollar losses and mediocre, questionably-useful products.
Waffle on all you want about the so-called “agentic era” or “annualized revenues” that make you hot under the collar — I see no reason for celebration about an industry with no exit plans and needless capital expenditures that appear to be one of the few things keeping the American economy growing.
I have been writing about the tech industry’s obsession with generative AI for two years, and never have I felt more grim. Before, this was an economic uncertainty — a way that our markets might contract, that big tech might take a big haircut, that a bunch of money might be wasted, but otherwise the world would keep turning.
It feels as if everything is aligning for disaster, and I fear there’s nothing that can be done to avert it.
2025-08-02 00:20:01
Hello and welcome to the latest premium edition of Where's Your Ed At, I appreciate any and all of your subscriptions. I work very hard on these, and they help pay for the costs of running Ghost and, well, my time investigating different things. If you're on the fence, subscribe! I promise it's worth it.
I also want to give a HUGE thank you to Westin Lee, a writer who has written about business and the use of AI, who was the originator of the whole "what if we used ARR to work out what these people make?" idea. He's been a tremendous help, and I recommend you check out his work.
If you're an avid reader of the business and tech media, you'd be forgiven for thinking that OpenAI has made (or will make) in excess of $10 billion this year, and Anthropic in excess of $4 billion.
Why? Because both companies have intentionally reported or leaked their "annualized recurring revenue" – a month's revenue multiplied by 12. OpenAI leaked yesterday to The Information that it hit $12 billion in "annual recurring revenue" – suggesting that its July 2025 revenues were around $1 billion. The Information reported on July 1 2025 that Anthropic's annual run rate was $4 billion – meaning that its revenue for the month of June 2025 was around $333 million. Then, yesterday, it reported that the run rate was up to $5 billion.
As a reminder, both of these companies burn billions of dollars – more than $5 billion each in 2024.
These figures do not, however, mean that their previous months were this high, nor do they mean that they've "made" anything close to these numbers. Annualized recurring revenue is one of the most regularly-abused statistics in the startup world, and can mean anything from "[actual month]x12" to "[30-day period of revenue]x12," and in most cases it's a number that doesn't factor in churn. Some companies even move around the start dates of contracts as a means of gaming this number.
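To show how little is going on under the hood here, this is the entire calculation, a minimal sketch using the reported figures from above:

```python
# ARR is (roughly) one month's revenue multiplied by 12, so dividing a
# reported ARR figure by 12 gives you the implied monthly revenue.
def implied_monthly_revenue(arr_billions: float) -> float:
    return arr_billions / 12

print(f"OpenAI at $12bn ARR: ~${implied_monthly_revenue(12.0):.2f}bn a month")        # ~$1bn
print(f"Anthropic at $4bn ARR: ~${implied_monthly_revenue(4.0) * 1000:.0f}m a month")  # ~$333m
print(f"Anthropic at $5bn ARR: ~${implied_monthly_revenue(5.0) * 1000:.0f}m a month")  # ~$417m
```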
ARR, also, doesn’t factor seasonality of revenue into the calculations. For example, you’d expect ChatGPT to have peaks and troughs that correspond with the academic year, with students cancelling their subscriptions during the summer break. If you use ARR, you’re essentially taking one month and treating it as representative of the entire calendar year, when it isn’t.
Sidenote: I want to make one thing especially obvious. When I described ARR as “one of the most regularly-abused statistics in the startup world,” I meant it. ARR is only really used by startups (and other non-public companies). It’s not a GAAP-standard accounting measure, and public companies (those traded on the stock market) generally don’t use it because they have to report actual figures, and so there’s no point. You can’t really use crafty trickery to obfuscate something that you have to, by law, state publicly and explicitly for all to see.
These companies are sharing (or leaking) their annualized revenues for a few reasons:
In any case, I want to be clear that this is a standard metric in non-public Software-as-a-Service (SaaS) businesses. Nothing is inherently wrong with the metric itself, save for how it's used and what's being interpreted from it.
Nevertheless, there has been a lot of reporting on both OpenAI's and Anthropic's revenues that has created incredible confusion in the market, confusion that benefits both companies, making them seem far more successful than they really are and giving them credit for revenue they have yet to book.
Before I dive into this — and yes, before the premium break — I want to establish some facts.
OpenAI:
Anthropic:
The intention of either reporting or leaking their annualized revenue numbers was to make you think that OpenAI would hit its projected $12.7 billion revenue number, and Anthropic would hit its "optimistic" $4 billion number, because those "annualized revenue" figures sure seem to have the word "annual" in them.
A sidenote about ARR, and a potential way my analysis is actually too kind: In this analysis I have assumed that OpenAI and Anthropic's revenues have always gone up.
Annualized revenue is a one-month snapshot of a business. Though I have no way of proving it — which is why I don't try! — there is always a chance that one or more of the months I discuss here was lower than the one that followed. If I had to speculate, I’d wager that the summer months — those outside the normal academic calendar — see lower subscription revenue for these companies. Nevertheless, we do not have that information, and thus I will not factor it into the analysis. But even one "off" month would be bad for either Anthropic or OpenAI.
I will add that we've never had real reporting about OpenAI or Anthropic's actual total year revenues before, which is why I am doing my best to work it out.
Yet through a historical analysis of reported annual recurring revenue numbers over the past three years, I've found things to be a little less certain. You see, when a company reports its "annual recurring revenue," what it's actually telling you is how much it made in a month, and I've sat down and found every single god damn bit of reporting about these numbers, calculating (based on the compound growth necessary between the months of reported monthly revenue) how much these companies are actually making in cash.
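For those curious, the method looks something like this: a sketch of my approach, assuming revenue compounds smoothly between two reported ARR snapshots (the most generous assumption you can make for these companies):

```python
# Given two ARR snapshots n months apart, find the compound monthly growth
# rate that connects them, then estimate each unreported month in between.
def monthly_growth_rate(arr_start: float, arr_end: float, months: int) -> float:
    return (arr_end / arr_start) ** (1 / months) - 1

def interpolated_monthly_revenue(arr_start: float, arr_end: float, months: int) -> list[float]:
    g = monthly_growth_rate(arr_start, arr_end, months)
    return [(arr_start / 12) * (1 + g) ** k for k in range(months + 1)]

# December 2024 ($5.5bn ARR) to May 2025 ($10bn ARR) is five months of growth:
print(f"{monthly_growth_rate(5.5, 10.0, 5):.1%} a month")  # ~12.7%
for revenue in interpolated_monthly_revenue(5.5, 10.0, 5):
    print(f"  ~${revenue * 1000:.0f}m")                    # Dec ~$458m ... May ~$833m
```

Sum the interpolated months and you get the cash these companies actually took in, rather than the hypothetical "annual" number they'd like you to imagine.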
My analysis, while imperfect (as we lack the data for certain months), aligns closely enough with projections that I am avoiding any egregious statements. OpenAI and Anthropic's previous projections were fairly accurate, though as I'll explain in this piece, I believe their new ones are egregious and ridiculous.
More importantly, in all of these stories, there was only one time that these companies openly shared their revenues — when OpenAI shared its $10 billion run rate in May, though the July $12 billion ARR leak was likely intentional too. In fact, I believe both were intentional attempts to mislead the general public into believing the company is more successful than it is.
Based on my analysis, OpenAI made around $3.616 billion in revenue in 2024, and so far in 2025 has made, by my calculations, around $5.266 billion in revenue as of the end of July.
This is also a slower growth rate than it’s experienced so far this year. Going from $5.5 billion in annualized revenue in December 2024 to $10 billion annualized in May 2025 implies a compound monthly growth rate of around 12.7%. The "jump" from $10 billion ARR to $12 billion ARR works out to around 9.54% a month. While I realize this may not seem like a big drop, every single penny counts, and percentage-point shifts are worth hundreds of millions (if not billions) of dollars.

OpenAI has been projected to make $12.7 billion in revenue in 2025. Making this number will be challenging, requiring OpenAI to grow by roughly 14%, every single month, without fail. For OpenAI to hit this number will require it to make nearly $2 billion a month in revenue by the end of the year, to make up for the earlier months of the year when it made far, far less.
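You can check that 14% figure yourself. Here's a sketch under my stated assumptions (roughly $5.27 billion made through July, July revenue of about $1 billion, and a $12.7 billion full-year target), solving for the growth rate OpenAI needs from August onward:

```python
# What compound monthly growth rate does OpenAI need from August onward to
# hit $12.7bn for the full year? Solve for it with simple bisection.
made_through_july = 5.266  # billions, my estimate
full_year_target = 12.7    # billions, OpenAI's reported projection
july_revenue = 1.0         # billions, implied by $12bn ARR

def august_to_december_total(g: float) -> float:
    return sum(july_revenue * (1 + g) ** k for k in range(1, 6))

low, high = 0.0, 0.5
for _ in range(60):
    mid = (low + high) / 2
    if august_to_december_total(mid) < full_year_target - made_through_july:
        low = mid
    else:
        high = mid

print(f"Required growth: {low:.1%} a month")                        # ~13.5%, call it 14%
print(f"December revenue: ~${july_revenue * (1 + low) ** 5:.2f}bn")  # ~$1.9bn, nearly $2bn
```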
I also have serious suspicions about how much OpenAI actually made in May, June and July 2025.
OpenAI roughly doubled its revenue in the first seven months of the year, reaching $12 billion in annualized revenue, according to a person who spoke to OpenAI executives.
Yet the New York Times, mere days later, reported $13 billion annualized revenue:
OpenAI's business continues to surge. DealBook hears that the company's annual recurring revenue has soared to $13 billion, up from $10 billion in June — and is projected to surpass $20 billion by the end of the year.
First and foremost, it’s incredibly fucking suspicious that two very different numbers were reported here so closely, and even more so that the June 9 2025 announcement of OpenAI hitting $10 billion in annualized revenue was not, as I had originally believed, discussing the month of May 2025.
This likely means that OpenAI is not using the standard annualized revenue metric — which would traditionally mean “the last month’s revenue multiplied by 12” — and is instead choosing “if all the monthly subscribers and contracts currently paying us on this day, June 9 2025, were multiplied by 12, we’d have $X annualized revenue.”
This is astronomically fucking dodgy. For the sake of this analysis, I am assuming any announcement of annualized revenue refers to the month previous. So, for example, when OpenAI announced they hit $10 billion in annualized revenue, I am going to assume this is for the month of May 2025.
This analysis is going to favour the companies in question. If OpenAI “hit $10 billion annualized” in or around June 9 2025, it likely means that its May revenues were lower than that. Similarly, OpenAI “hitting” $12 billion in annualized revenue (announced at the end of July 2025) — which I have factored into my analysis — is treated as the revenue it hit in July 2025.
In reality, this is likely to credit them more revenue than they deserve. If June’s annualized revenue was $10 billion, that means they made $833 million, rather than the $939 million I credit them with for the month.
One cannot hit $12 billion AND $13 billion annualized in the same month unless you are playing extremely silly games with the numbers in question, such as moving around when you start a 30-day period to artificially inflate things. In any case, my analysis puts OpenAI’s annualized revenue for August at around $13.145 billion — so, in line with a “$13 billion annualized” figure.
In any case, I am sticking with my analysis as it stands. However, the timing of these annualized revenue leaks now makes me doubt the veracity of their previous leaks, in the sense that there’s every chance that they too are either inflated or used in a deceptive manner.
Based on these numbers, OpenAI's current growth rate is around 9.54% a month — and at that pace, it will finish the year with around $11.89 billion in revenue. This is an impressive number, meaning it’d be making over $1.5 billion a month in revenue by December 2025 — but even that will be difficult to reach, and would mean something in the region of $18 billion in annualized revenue by the end of the year.
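And here's the same sketch run at the current pace rather than the required one, which is where the $11.89 billion figure comes from:

```python
# Project July's ~$1bn forward at ~9.54% a month through December.
july_revenue, growth = 1.0, 0.0954
august_to_december = [july_revenue * (1 + growth) ** k for k in range(1, 6)]

december = august_to_december[-1]
print(f"December: ~${december:.2f}bn a month")                       # ~$1.58bn
print(f"Full-year 2025: ~${5.266 + sum(august_to_december):.2f}bn")  # ~$11.89bn
print(f"Year-end annualized: ~${december * 12:.1f}bn")               # ~$18.9bn
```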
I also question whether it can make it, and even if it does, how it could possibly afford to serve that revenue long-term.
In Anthropic's case, I am extremely confident, based on its well-reported annualized revenues, that Anthropic has, through July 2025, made around $1.5 billion in revenue. This is, of course, assuming that their annualized revenue leaks are for calendar months, and if they're not, this number could actually be lower.
This is not a question of opinion. Other than April, we have ARR for every single month of the year. Bloomberg is now reporting that Anthropic sees its revenue rate "maybe [going] to $9 billion annualized by year-end," which, to use a technical term, is total bullshit, especially as this number was leaked as Anthropic is fundraising.
In any case, I believe Anthropic can beat its base case estimates. It will almost certainly cross $2 billion in revenue, but I also believe that revenue growth is slowing for these companies, and the amount of cash we credit them as actually making is decidedly more average than "annualized revenue" would have you believe.
2025-07-25 01:19:51
Earlier in the week, the Wall Street Journal reported that SoftBank and OpenAI's "$500 billion" "AI Project" was now setting a "more modest goal of building a small data center by year-end."
To quote:
A $500 billion effort unveiled at the White House to supercharge the U.S.’s artificial-intelligence ambitions has struggled to get off the ground and has sharply scaled back its near-term plans.
Six months after Japanese billionaire Masayoshi Son stood shoulder to shoulder with Sam Altman and President Trump to announce the Stargate project, the newly formed company charged with making it happen has yet to complete a single deal for a data center.
One might be forgiven for being a little confused here, as there is, apparently, a Stargate Data Center being built in Abilene, Texas. Yet the Journal added another detail:
Altman has used the Stargate name, shared with a 1994 Kurt Russell film about aliens who teleport to ancient Egypt, on projects that aren’t being financed by the partnership between OpenAI and SoftBank. The trademark to Stargate is held by SoftBank, according to public filings.
For instance, OpenAI refers to a data center in Abilene, Texas, and another it agreed in March to use in Denton, Texas, as part of Stargate even though they are being done without SoftBank, some of the people familiar with the matter said.
Confusing, right? One might also be confused by the Bloomberg story called "Inside The First Stargate AI Data Center," which had the subheadline "OpenAI, Oracle and SoftBank hope that the site in Texas is the first of many across the US." More confusingly, the piece talked about Stargate LLC, which OpenAI, Oracle and SoftBank were (allegedly) shareholders of.
Yet I have confirmed that SoftBank never, ever had any involvement with the site in Abilene, Texas. It didn't fund it, it didn't build it, it didn't choose the site and, in fact, does not appear to have anything to do with any data center that OpenAI uses. The data center many, many reporters have referred to as "Stargate" has nothing to do with the "Stargate data center project." Any reports suggesting otherwise are wrong, and I believe that this is a conscious attempt by OpenAI and SoftBank to mislead the public.
I confirmed the following with a PR representative from Crusoe, one of the developers of the site in Abilene, Texas:
Funding for construction of [the] Abilene data center is a JV between Crusoe, Blue Owl and Primary Digital Infrastructure. Confirming that Softbank is not and has not been involved in the funding for its construction.
And, as a reminder, Stargate as an entity was never formed — my source being Oracle CEO Safra Catz on the company’s earnings call.
This is an astonishing — and egregious — act of misinformation on the part of Sam Altman and OpenAI. By my count, at least 15 different stories attribute the Abilene, Texas data center to the Stargate project, despite the fact that SoftBank was never and has never been involved. One would forgive anyone who got this wrong, because OpenAI itself engaged in the deliberate deception in its own announcement of the Stargate Project [emphasis mine]:
The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.
Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.
You can weasel-word all you want about how nobody has directly reported that SoftBank was or was not part of Abilene. This is a deliberate, intentional deception, perpetrated by OpenAI and SoftBank, who deliberately misled both the public and the press as a means of keeping up the appearance that SoftBank was deeply involved in (and financially obligated to) the Abilene site.
Based on reporting that existed at the time but was never drawn together, it appears that Abilene was earmarked by Microsoft for OpenAI's use as early as July 2024, and never involved SoftBank in any way, shape or form. The "Stargate" Project, as reported, was over six months old when it was announced in January 2025, and there have been no additional sites added other than Abilene.
In simpler terms, Stargate does not exist other than as a name that Sam Altman gives things to make them feel more special than they are, and SoftBank was never involved. Stargate does not exist as reported.
The use of the term "Stargate" is an intentional act of deceit, but beneath the surface lies, in my mind, a much bigger story. Furthermore, I believe this deceit means that we should review any and all promises made by OpenAI and SoftBank, and reconsider any and all statements they've made, or that have been made about them.
Let's review.
According to reporting:
Yet based on my research, it appears that SoftBank may not be able to — or want to — proceed with any of these initiatives other than funding OpenAI's current round, and evidence suggests that even if it intends to, SoftBank may not be able to afford investing in OpenAI further.
I believe that SoftBank and OpenAI's relationship is an elaborate ruse, one created to give SoftBank the appearance of innovation, and OpenAI the appearance of a long-term partnership with a major financial institution that, from my research, is incapable of meeting the commitments it has made.
In simpler terms, OpenAI and SoftBank are bullshitting everyone.
I can find no tangible proof that SoftBank ever intended to seriously invest money in Stargate, and I have evidence from its earnings calls suggesting that SoftBank has no idea about — nor any real strategy behind — its supposed $3-billion-a-year deployment of OpenAI software.
In fact, other than the $7.5 billion that SoftBank invested earlier in the year, I don't see a single dollar actually earmarked for anything to do with OpenAI at all.
SoftBank is allegedly going to send upwards of $20 billion to OpenAI by December 31, 2025, and doesn't appear to have started any of the processes necessary to do so, or shown any signs it will. This is not a good situation for anybody involved.
2025-07-22 00:07:38
Hey! Before we go any further — if you want to support my work, please sign up for the premium version of Where’s Your Ed At, it’s a $7-a-month (or $70-a-year) paid product where every week you get a premium newsletter, all while supporting my free work too.
Also, subscribe to my podcast Better Offline, which is free. Go and subscribe then download every single episode. Here's parts 1, 2 and 3 of the audio version of the Hater's Guide.
One last thing: This newsletter is nearly 14,500 words. It’s long. Perhaps consider making a pot of coffee before you start reading.
Good journalism is making sure that history is actively captured and appropriately described and assessed, and it's accurate to describe things as they currently are as alarming.
And I am alarmed.
Alarm is not a state of weakness, or belligerence, or myopia. My concern does not dull my vision, even though it's convenient to frame it as somehow alarmist, like I have some hidden agenda or bias toward doom. I profoundly dislike the financial waste, the environmental destruction, and, fundamentally, I dislike the attempt to gaslight people into swearing fealty to a sickly and frail pseudo-industry where everybody but NVIDIA and the consultancies loses money.
I also dislike the fact that I, and others like me, are held to a remarkably different standard to those who paint themselves as "optimists," which typically means "people that agree with what the market wishes were true." Critics are continually badgered, prodded, poked, mocked, and jeered at for not automatically aligning with the idea that generative AI will be this massive industry, constantly having to prove themselves, as if somehow there's something malevolent or craven about criticism, that critics "do this for clicks" or "to be a contrarian."
I don't do anything for clicks. I don't have any stocks or short positions. My agenda is simple: I like writing, it comes to me naturally, I have a podcast, and it is, on some level, my job to try and understand what the tech industry is doing on a day-to-day basis. It is easy to try and dismiss what I say as going against the grain because "AI is big," but I've been railing against bullshit bubbles since 2021 — the anti-remote work push (and the people behind it), the Clubhouse and audio social networks bubble, the NFT bubble, the made-up quiet quitting panic, and I even, though not as clearly as I wished, called that something was up with FTX several months before it imploded.
This isn't "contrarianism." It's the kind of skepticism of power and capital that's necessary to meet these moments, and if it's necessary to dismiss my work because it makes you feel icky inside, get a therapist or see a priest.
Nevertheless, I am alarmed, and while I have said some of these things separately, based on recent developments, I think it's necessary to say why.
In short, I believe the AI bubble is deeply unstable, built on vibes and blind faith, and when I say "the AI bubble," I mean the entirety of the AI trade.
And it's alarmingly simple, too.
But this isn’t going to be saccharine, or whiny, or simply worrisome. I think at this point it’s become a little ridiculous to not see that we’re in a bubble. We’re in a god damn bubble, it is so obvious we’re in a bubble, it’s been so obvious we’re in a bubble, a bubble that seems strong but is actually very weak, with a central point of failure.
I may not be a contrarian, but I am a hater. I hate the waste, the loss, the destruction, the theft, the damage to our planet and the sheer excitement that some executives and writers have that workers may be replaced by AI — and the bald-faced fucking lie that it’s happening, and that generative AI is capable of doing so.
And so I present to you — the Hater’s Guide to the AI bubble, a comprehensive rundown of arguments I have against the current AI boom’s existence. Send it to your friends, your loved ones, or print it out and eat it.
No, this isn’t gonna be a traditional guide, but something you can look at and say “oh that’s why the AI bubble is so bad.” And at this point, I know I’m tired of being gaslit by guys in gingham shirts who desperately want to curry favour with other guys in gingham shirts but who also have PhDs. I’m tired of reading people talk about how we’re “in the era of agents” that don’t fucking work and will never fucking work. I’m tired of hearing about “powerful AI” that is actually crap, and I’m tired of being told the future is here while having the world’s least-useful, most-expensive cloud software shoved down my throat.
Look, the generative AI boom is a mirage, it hasn’t got the revenue or the returns or the product efficacy for it to matter, everything you’re seeing is ridiculous and wasteful, and when it all goes tits up I want you to remember that I wrote this and tried to say something.
As I write this, NVIDIA is currently sitting at $170 a share — a dramatic reversal of fate after the pummelling it took from the DeepSeek situation in January, which sent it tumbling to a brief late-April trip below $100 before things turned around.
The Magnificent 7 stocks — NVIDIA, Microsoft, Alphabet (Google), Apple, Meta, Tesla and Amazon — make up around 35% of the value of the US stock market, and of that, NVIDIA's market value makes up about 19% of the Magnificent 7. This dominance is also why ordinary people ought to be deeply concerned about the AI bubble. The Magnificent 7 is almost certainly a big part of their retirement plans, even if they’re not directly invested.
Back in May, Yahoo Finance's Laura Bratton reported that Microsoft (18.9%), Amazon (7.5%), Meta (9.3%), Alphabet (5.6%), and Tesla (0.9%) alone make up 42.4% of NVIDIA's revenue. The breakdown makes things worse. Meta spends 25% — and Microsoft an alarming 47% — of its capital expenditures on NVIDIA chips, and as Bratton notes, Microsoft also spends money renting servers from CoreWeave, which analyst Gil Luria of D.A. Davidson estimates accounted for $8 billion (more than 6%) of NVIDIA's revenue in 2024. Luria also estimates that neocloud companies like CoreWeave and Crusoe — which exist only to provide AI compute services — account for as much as 10% of NVIDIA's revenue.
NVIDIA's climbing stock value comes from its continued revenue growth. In the last four quarters, NVIDIA has seen year-over-year growth of 101%, 94%, 78% and 69%, and, in the last quarter, a little statistic was carefully brushed under the rug: NVIDIA missed, though narrowly, on data center revenue. This is exactly what it sounds like — GPUs used in servers, rather than in gaming consoles and PCs. Analysts estimated it would make $39.4 billion from this category, and NVIDIA only (lol) brought in $39.1 billion. Then again, the shortfall could be attributed to its problems in China, especially as the H20 ban has only just been lifted. In any case, it was a miss!
NVIDIA's quarter-over-quarter growth has also become aggressively normal — from 69%, to 59%, to 12%, to 12% again each quarter, which, again, isn't bad (it's pretty great!), but when 88% of your revenue is based on one particular line in your earnings, it's a pretty big concern, at least for me. Look, I'm not a stock analyst, nor am I pretending to be one, so I am keeping this simple:
In simpler terms, 35% of the US stock market is held up by five or six companies buying GPUs. If NVIDIA's growth story stumbles, it will reverberate through the rest of the Magnificent 7, making them rely on their own AI trade stories.
And, as you will shortly find out, there is no AI trade, because generative AI is not making anybody any money.
I'm so tired of people telling me that companies are "making tons of money on AI." Nobody is making a profit on generative AI other than NVIDIA. No, really, I’m serious.
If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion.
This is egregiously fucking stupid.
As of January 2025, Microsoft's "annualized" — meaning [best month]x12 — revenue from artificial intelligence was around $13 billion, a number it chose not to update in its last earnings, likely because it's either flat or not growing, though it could do so in its upcoming late-July earnings. The problem with this revenue is that $10 billion of it, according to The Information, comes from OpenAI's spend on Microsoft's Azure cloud, and Microsoft offers OpenAI preferential pricing — "a heavily discounted rental rate that essentially only covers Microsoft's costs for operating the servers," as the publication put it.
In simpler terms, 76.9% of Microsoft's AI revenue comes from OpenAI, and is sold at or just above cost, making Microsoft's "real" AI revenue about $3 billion, or around 3.75% of this year's roughly $80 billion in capital expenditures (16.25% if you count OpenAI's revenue, which costs Microsoft more money than it earns).
The Information reports that Microsoft made $4.7 billion in "AI revenue" in 2024, of which OpenAI accounted for $2 billion, meaning that for the $135.7 billion that Microsoft has spent in the last two years on AI infrastructure, it has made $17.7 billion, of which OpenAI accounted for $12.7 billion.
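If you want to see how I got those percentages, here's the arithmetic. One assumption worth flagging: the roughly $80 billion capex figure isn't something Microsoft breaks out line by line, it's the widely reported AI infrastructure spend for its fiscal year, and it's also what the 3.75% figure implies:

```python
# Microsoft's AI revenue versus its AI spend, using the figures as reported.
ai_revenue = 13.0       # billions, annualized as of January 2025
openai_portion = 10.0   # billions, OpenAI's at-cost Azure spend (The Information)
capex_this_year = 80.0  # billions, reported AI infrastructure spend (assumption)

print(f"OpenAI's share of AI revenue: {openai_portion / ai_revenue:.1%}")  # ~76.9%
print(f"'Real' AI revenue vs capex: {(ai_revenue - openai_portion) / capex_this_year:.2%}")  # 3.75%
print(f"All AI revenue vs capex: {ai_revenue / capex_this_year:.2%}")      # 16.25%

# The two-year picture: $17.7bn of AI revenue against $135.7bn of spend.
print(f"Two-year revenue-to-capex: {17.7 / 135.7:.1%}")                    # ~13%
```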
Things do not improve elsewhere. An analyst estimates that Amazon, which plans to spend $105 billion in capital expenditures this year, will make $5 billion on AI in 2025, rising, and I quote, "as much as 80%," suggesting that Amazon may have made a measly $2.77 billion in 2024 on AI in a year when it spent $83 billion in capital expenditures. [editor's note: this piece originally said "$1 billion" instead of "$2.77 billion" due to a math error, sorry!]
Last year, Amazon CEO Andy Jassy said that “AI represents for sure the biggest opportunity since cloud and probably the biggest technology shift and opportunity in business since the internet." I think he's full of shit.
Bank of America analyst Justin Post estimated a few weeks ago that Google's AI revenue would be in the region of $7.7 billion, though his math is, if I'm honest, a little generous:
Google’s artificial intelligence model is set to drive $4.2 billion in subscription revenue within its Google Cloud segment in 2025, according to an analysis from Bank of America last week.
That includes $3.1 billion in revenue from subscribers to Google’s AI plans with its Google One service, Bank of America’s Justin Post estimates.
Post also expects that the integration of Google’s Gemini AI features within its Workspace service will drive $1.1 billion of the $7.7 billion in revenue he projects for that segment in 2025.
Google's "One" subscription includes increased cloud storage across Google Drive, Gmail and Google Photos, and added a $20-a-month "premium" plan in February 2024 that included access to Google's various AI models. Google has claimed that the "premium AI tier accounts for millions" of the 150 million subscribers to the service, though how many millions is impossible to estimate — but that won't stop me trying!
Assuming that $3.1 billion in 2025 revenue would work out to $258 million a month, that would mean there were 12.9 million Google One subscribers also paying for the premium AI tier. This isn't out of the realm of possibility — after all, OpenAI has 15.5 million paying subscribers — but Post is making a generous assumption here. Nevertheless, we'll accept the numbers as they are.
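Spelled out, the estimate looks like this, assuming every premium subscriber pays the full $20 a month:

```python
# If $3.1bn of 2025 revenue comes from a $20-a-month plan, how many subscribers?
annual_revenue = 3.1e9   # Post's estimate for Google One AI revenue
price_per_month = 20.0   # the $20-a-month premium AI plan

monthly_revenue = annual_revenue / 12
subscribers = monthly_revenue / price_per_month
print(f"~${monthly_revenue / 1e6:.0f}m a month")  # ~$258m
print(f"~{subscribers / 1e6:.1f}m subscribers")   # ~12.9m
```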
And the numbers fuckin' stink! Google's $1.1 billion in workspace service revenue came from a forced price-hike on those who use Google services to run their businesses, meaning that this is likely not a number that can significantly increase without punishing them further.
$7.7 billion of revenue — not profit! — on $75 billion of capital expenditures. Nasty!
Someone's gonna get mad at me for saying this, but I believe that Meta is simply burning cash on generative AI. There is no product that Meta sells that monetizes Large Language Models, yet every Meta product now has them shoved into it, such as your Instagram DMs oinking at you to generate artwork based on your conversation.
Nevertheless, we do have some sort of knowledge of what Meta is saying due to the copyright infringement case Kadrey v. Meta. Unsealed judgment briefs revealed in April that Meta is claiming that "GenAI-driven revenue will be more than $2 billion," with estimates as high as $3 billion. The same document also claims that Meta expects to make $460 billion to $1.4 trillion in total revenue through 2035, the kind of thing that should get you fired in an iron ball into the sun.
Meta makes 99% of its revenue from advertising, and the unsealed documents state that it "[generates] revenue from [its] Llama models and will continue earning revenue from each iteration," and "share a percentage of the revenue that [it generates] from users of the Llama models...hosted by those companies," with the companies in question redacted. Max Zeff of TechCrunch adds that Meta lists host partners like AWS, NVIDIA, Databricks, Groq, Dell, Microsoft Azure, Google Cloud, and Snowflake, so it's possible that Meta makes money from licensing to those companies. Sadly, the exhibits further discussing these numbers are filed under seal.
Either way, we are now at $332 billion of capital expenditures in 2025 for $28.7 billion of revenue, of which $10 billion is OpenAI's "at-cost or just above cost" revenue. Not great.
Despite its prominence in the Magnificent 7, Tesla is one of the group's least-exposed members to the AI trade, as Elon Musk has turned it into a meme stock company. That doesn't mean, of course, that Musk isn't touching AI. xAI, the company that develops racist Large Language Model "Grok" and owns what remains of Twitter, apparently burns $1 billion a month, and The Information reports that it makes a whopping $100 million in annualized revenue — so, about $8.33 million a month. There is a shareholder vote for Tesla to potentially invest in xAI, which will probably happen, allowing Musk to continue to pull leverage from his Tesla stock until the company's decaying sales and brand eventually swallow him whole.
But we're not talking about Elon Musk today.
Apple Intelligence radicalized millions of people against AI, mostly because it fucking stank. Apple clearly got into AI reluctantly, and now faces stories about how they "fell behind in the AI race," which mostly means that Apple aggressively introduced people to the features of generative AI by force, and it turns out that people don't really want to summarize documents, write emails, or make "custom emoji," and anyone who thinks they would is a fucking alien.
In any case, Apple hasn't bet the farm on AI, insomuch as it hasn't spent two hundred billion dollars on infrastructure for a product with a limited market that only loses money.
To be clear, I am not saying that any of the Magnificent 7 are going to die — just that five companies' spend on NVIDIA GPUs largely dictate how stable the US stock market will be. If any of these companies (but especially NVIDIA) sneeze, your 401k or your kid’s college fund will catch a cold.
I realize this sounds a little simplistic, but by my calculations, NVIDIA's value underpins around 8% of the value of the US stock market. At the time of writing, it accounts for roughly 7.5% of the S&P 500 — an index of the 500 largest US publicly-traded companies. A disturbing 88% of NVIDIA's revenue comes from enterprise-scale GPUs primarily used for generative AI, and five companies' spend makes up 42% of its overall revenue. In the event that any one of these companies makes significant changes to its investments in NVIDIA chips, it will eventually have a direct and meaningful negative impact on the wider market.
NVIDIA's earnings are, effectively, the US stock market's confidence, and everything rides on five companies — and if we're honest, really four companies — buying GPUs for generative AI services or to train generative AI models. Worse still, these services, while losing these companies massive amounts of money, don't produce much revenue, meaning that the AI trade is not driving any real, meaningful revenue growth.
Silence!
Any of these companies talking about "growth from AI" or "the jobs that AI will replace" or "how AI has changed their organization" are hand-waving to avoid telling you how much money these services are actually making them. If they were making good money and experiencing real growth as a result of these services, they wouldn't shut the fuck up about it! They'd be in your ear and up your ass hooting about how much cash they were rolling in!
And they're not, because they aren't rolling in cash, and are in fact blowing nearly a hundred billion dollars each to build massive, power-hungry, costly data centers for no real reason.
Don’t watch the mouth — watch the hands. These companies are going to say they’re seeing growth from AI, but unless they actually show you the growth and enumerate it, they are hand-waving.
So, one of the most annoying and consistent responses to my work is to say that either Amazon or Amazon Web Services “ran at a loss,” and that Amazon Web Services — the invention of modern cloud computing infrastructure — “lost money and then didn’t.”
The thing is, this statement is one of those things that people say because it sounds rational. Amazon did lose money, and Amazon Web Services was expensive, that’s obvious, right?
The thing is, I’ve never really had anyone explain this point to me, so I am finally going to sit down and deal with this criticism, because every single person who mentions it thinks they just pulled Excalibur from the stone and can now decapitate me. They claim that because people in the past doubted Amazon — whether over the burn rate of Amazon Web Services as the company built out its infrastructure, or for other reasons — I too must be wrong, because those people were wrong back then.
This isn't Camelot, you rube! You are not King Arthur!
I will address both the argument itself and the "they" part of it too — because if the argument is that the people that got AWS wrong should not be trusted, then we should no longer trust them, the people actively propagandizing our supposed generative AI future.
Right?
So, I'm honestly not sure where this argument came from, because there is, to my knowledge, no story about Amazon Web Services where somebody suggested its burn rate would kill Amazon.
But let’s go back in time to the May 31, 1999 piece that some might be thinking of, called "Amazon.bomb," and how writer Jacqueline Doherty was mocked soundly for "being wrong" about Amazon, which has now become quite profitable.
I also want to be clear that Amazon Web Services didn't launch until 2006, and Amazon itself would become reliably profitable in 2003. Technically Amazon had opened up Amazon.com's web services for developers to incorporate its content into their applications in 2002, but what we consider AWS today — cloud storage and compute — launched in 2006.
But okay, what did she actually say?
Unfortunately for Bezos, Amazon is now entering a stage in which investors will be less willing to rely on his charisma and more demanding of answers to tough questions like, when will this company actually turn a profit? And how will Amazon triumph over a slew of new competitors who have deep pockets and new technologies?
We tried to ask Bezos, but he declined to make himself or any other executives of the company available. He can ignore Barron's, but he can't ignore the questions.
Amazon last year posted a loss of $125 million [$242.6 million in today's money] on revenues of $610 million [$1.183 billion in today's money]. And in this year's first quarter it got even worse, as the company posted a loss of $61.7 million [$119.75 million in today's money] on revenues of $293.6 million [$569.82 million in today's money].
Her argument, for the most part, is that Amazon was burning cash, had a ton of competition from other people doing similar things, and that analysts backed her up:
"The first mover does not always win. The importance of being first is a mantra in the Internet world, but it's wrong. The ones that are the most efficient will be successful," says one retail analyst. "In retailing, anyone can build a great-looking store. The hard part is building a great-looking store that makes money."
Fair arguments for the time, though perhaps a little narrow-minded. The assumption wasn't that what Amazon was building was a bad idea, but that Amazon wouldn't be the ones to build it, with one saying:
"Once Wal-Mart decides to go after Amazon, there's no contest," declares Kurt Barnard, president of Barnard's Retail Trend Report. "Wal-Mart has resources Amazon can't even dream about."
In simpler terms: Amazon's business model wasn't in question. People were buying shit online. In fact, this was just before the dot com bubble burst, and when optimism about the web was at a high point. Yet the comparison stops there — people obviously liked buying shit online, it was the business models of many of these companies — like WebVan — that sucked.
Amazon Web Services was an outgrowth of Amazon's own infrastructure, which had to expand rapidly to deal with the influx of web traffic for Amazon.com, which had become one of the world's most popular websites and was becoming increasingly complex as it sold things other than books. Other companies had their own infrastructure, but if a smaller company wanted to scale, it would basically need to build its own thing.
It's actually pretty cool what Amazon did! Remember, this was the early 2000s, before Facebook, Twitter, and a lot of the modern internet we know that runs on services like Amazon Web Services, Microsoft Azure and Google Cloud. It invented the modern concept of compute!
But we're here to talk about Amazon Web Services being dangerous for Amazon and people hating on it.
A November 2006 story from Bloomberg talked about Jeff Bezos' Risky Bet to "run your business with the technology behind his web site," saying that "Wall Street [wanted] him to mind the store." Bezos was referred to as a "one-time internet poster boy" that became "a post-dot-com piñata." Nevertheless, this article has what my haters crave:
But if techies are wowed by Bezos' grand plan, it's not likely to win many converts on Wall Street. To many observers, it conjures up the ghost of Amazon past. During the dot-com boom, Bezos spent hundreds of millions of dollars to build distribution centers and computer systems in the promise that they eventually would pay off with outsize returns. That helped set the stage for the world's biggest Web retail operation, with expected sales of $10.5 billion this year.
...
All that has investors restless and many analysts throwing up their hands wondering if Bezos is merely flailing around for an alternative to his retail operation. Eleven of 27 analysts who follow the company have underperform or sell ratings on the stock--a stunning vote of no confidence. That number of sell recommendations is matched among large companies only by Qwest Communications International Inc. (Q ), according to investment consultant StarMine Corp. It's more than even the eight sell opinions on struggling Ford Motor Co. (F )
Pretty bad, right? My goose is cooked? All those analysts seem pretty mad!
Except it's not, my goose is raw! Yours, however, has been in the oven for over a year!
Emphasis mine:
By all accounts, Amazon's new businesses bring in a minuscule amount of revenue. Although its direct cost of providing them appears relatively low because the hardware and software are in place, Stifel Nicolaus & Co. (SF ) analyst Scott W. Devitt notes: "There's not going to be any economic return from any of these projects for the foreseeable future." Bezos himself admits as much. But with several years of heavy spending already, he's making this a priority for the long haul. "We think it's going to be a very meaningful business for us one day," he says. "What we've historically seen is that the seeds we plant can take anywhere from three, five, seven years."
That's right — the ongoing costs aren't the problem.
Hey wait a second, that's a name! I can look up a name! Scott W. Devitt now works at Wedbush as its managing director of equity research, and has said AI companies would enter a new stage in 2025...god, just read this:
The second stage is "the application phase of the cycle, which should benefit software companies as well as the cloud providers. And then, phase three of this will ultimately be the consumer-facing companies figuring out how to use the technology in ways that actually can drive increased interactions with consumers."
The analyst says the market will enter phase two in 2025, with software companies and cloud provider stocks expected to see gains. He adds that cybersecurity companies could also benefit as the technology evolves.
Devitt specifically calls out Palantir, Snowflake, and Salesforce as those who would "gain." In none of these cases am I able to see the actual revenue from AI, but Salesforce itself said that it will see no revenue growth from AI this year. Palantir also, as discovered by the Autonomy Institute's recent study, recently added the following to its public disclosures:
There are significant risks involved in deploying AI and there can be no assurance that using AI in our platforms and products will enhance or be beneficial to our business, including our profitability.
What I'm saying is that analysts can be wrong! And they can be wrong at scale! There is no analyst consensus that agrees with me. In fact, most analysts appear to be bullish on AI, despite the significantly worse costs and total lack of growth!
Yet even in this Hater's Parade, the unnamed journalist makes a case for Amazon Web Services:
Sooner than that, those initiatives may provide a boost for Amazon's retail side. For one, they potentially make a profit center out of idle computing capacity needed for that retail operation. Like most computer networks, Amazon's uses as little as 10% of its capacity at any one time just to leave room for occasional spikes. It's the same story in the company's distribution centers. Keeping them humming at higher capacity means they operate more efficiently, besides giving customers a much broader selection of products. And the more stuff Amazon ships, both its own inventory or others', the better deals it can cut with shippers.
Nice try, chuckles!
In 2015, the year that Amazon Web Services became profitable, Morgan Stanley analyst Katy Huberty believed that it was running at a "material loss," suggesting that $5.5 billion of Amazon's "technology and content expenses" was actually AWS expenses, with a "negative contribution of $1.3 billion."
Here is Katy Huberty, the analyst in question, declaring six months ago that "2025 [will] be the year of Agentic AI, robust enterprise adoption, and broadening AI winners."
So, yes, analysts really got AWS wrong. But putting that aside, there might actually be a comparison here! Amazon Web Services absolutely created a capital expenditures drain on Amazon. From Forbes’s Chuck Jones:
In 2014 Amazon had $4.9 billion in capital expenditures, up 42% from 2013’s $3.4 billion. The company has a wide range of items that it buys to support and grow its business ranging from warehouses, robots and computer systems for its core retail business and AWS. While I don’t expect Amazon to detail how much goes to AWS I suspect it is a decent percentage, which means AWS needs to generate appropriate returns on the capital deployed.
In today's money, Amazon's 2014 capital expenditures work out to $6.76 billion. Generously assuming all of it went to AWS, every year — it didn't, but I want to make an example of every person claiming that this is a gotcha — it took $67.6 billion and ten years (though one could argue it was nine) of pure capital expenditures to turn Amazon Web Services into a business that now makes billions of dollars a quarter in profit.
That $67.6 billion is $15.4 billion less than Amazon's capital expenditures for 2024 alone, and the $6.76 billion annual figure is less than one-fifteenth of its projected capex spend for 2025. And to be clear, the actual AWS capital expenditure numbers are likely much lower, but I want to make it clear that even when factoring in inflation, Amazon Web Services was A) a bargain and B) a fraction of the cost of what Amazon has spent in 2024 or 2025.
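If you want to check my arithmetic, here it is as a few lines of Python, using the same numbers (and the same deliberately generous assumptions) as above:

```python
# My back-of-the-envelope, reproduced with the same numbers.
capex_2014 = 4.9e9            # Amazon's total 2014 capital expenditures
inflation_adjusted = 6.76e9   # that figure in today's money
years = 10                    # 2006 through 2015, being generous

total = inflation_adjusted * years
print(f"${total / 1e9:.1f}B")              # $67.6B over a decade
print(f"${(total + 15.4e9) / 1e9:.1f}B")   # ~$83.0B: Amazon's 2024 capex alone
```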
A fun aside: On March 30, 2015, Kevin Roose wrote a piece for New York Magazine about the cloud compute wars, in which he claimed, and I quote, that "there's no reason to suspect that Amazon would ever need to raise prices on AWS, or turn the fabled 'profit switch' that pundits have been speculating about for years." Less than a month later, Amazon revealed that Amazon Web Services was profitable. They don't call him "the most right man in tech journalism" for nothing!
Some people compare Large Language Models and their associated services to Amazon Web Services, or services like Microsoft Azure or Google Cloud, and they are wrong to do so.
Amazon Web Services, when it launched, consisted of things like (and forgive how much I'm diluting this) Amazon's Elastic Compute Cloud (EC2), where you rent space on Amazon's servers to run applications in the cloud, and Amazon Simple Storage Service (S3), which is enterprise-level storage for applications. In simpler terms, if you were providing a cloud-based service, you used Amazon both to store the stuff that the service needed and to do the actual cloud-based processing (compute: the way your computer loads and runs applications, but delivered to thousands or millions of people).
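If you've never touched this stuff, here's a minimal sketch of what "renting Amazon's computers" looks like in practice, using boto3, AWS's Python SDK, with made-up bucket and image names. The interface here is today's, far more polished than 2006's, but the idea is identical: storage and compute as an API call.

```python
import boto3  # AWS's standard Python SDK

# Storage: drop an object into S3 and Amazon worries about durability,
# replication and delivery. The bucket and key names are made up.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-app-assets", Key="hello.txt", Body=b"hello, world")

# Compute: rent a server on EC2 with an API call instead of buying hardware.
# The image ID is a placeholder; real ones are account- and region-specific.
ec2 = boto3.client("ec2")
ec2.run_instances(ImageId="ami-0123456789abcdef0",
                  InstanceType="t3.micro", MinCount=1, MaxCount=1)
```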
This is a huge industry. Amazon Web Services alone brought in revenues of over $100 billion in 2024, and while Microsoft doesn't break out Azure's exact revenue, cloud is a similarly large part of both Microsoft's and Google's businesses, and Microsoft has used Azure in the past to patch over shoddy growth.
These services are also selling infrastructure. You aren't just paying for the compute, but for the ability to access storage and deliver services with low latency — so users have a snappy experience — wherever they are in the world. The subtle magic of the internet is that it works at all, and a large part of that is the cloud compute infrastructure and the oligopoly of the main providers with their vast data centers. This is much cheaper than doing it yourself, until a certain point, which is why Dropbox moved away from Amazon Web Services as it scaled. Using the cloud also lets someone else take care of maintaining the hardware and making sure your service actually reaches your customers. You also don't have to worry about spikes in usage, because these things are usage-based, and you can always add more compute to meet demand.
There is, of course, nuance — security-specific features, content-specific delivery services, database services — behind these clouds. You are buying into the infrastructure of the infrastructure provider, and the reason these products are so profitable is, in part, because you are handing off the problems and responsibility to somebody else. And based on that idea, there are multiple product categories you can build on top of it, because ultimately cloud services are about Amazon, Microsoft and Google running your infrastructure for you.
Large Language Models and their associated services are completely different, despite these companies attempting to prove otherwise, and it starts with a very simple problem: why did any of these companies build these giant data centers and fill them full of GPUs?
Amazon Web Services was created out of necessity — Amazon's infrastructure needs were so great that it effectively had to build both the software and hardware necessary to deliver a store that sold theoretically everything to theoretically anywhere: handling the traffic from customers, delivering the software that runs Amazon.com quickly and reliably, and, well, making sure things ran in a stable way. It didn't need to come up with a reason for people to run web applications — they were already doing so themselves, but in ways that cost a lot, were inflexible, and required specialist skills. AWS took something that people already did, something for which there was proven demand, and made it better. Eventually, Google and Microsoft would join the fray.
And that appears to be the only similarity with generative AI — that due to the ridiculous costs of both the data centers and GPUs necessary to provide these services, it's largely impossible for others to even enter the market.
Yet after that, generative AI feels more like a feature of cloud infrastructure rather than infrastructure itself. AWS and similar megaclouds are versatile, flexible and multifaceted. Generative AI does what generative AI does, and that's about it.
You can run lots of different things on AWS. What are the different things you can run using Large Language Models? What are the different use cases, and, indeed, user requirements that make this the supposed "next big thing"?
Perhaps the argument is that generative AI is the next AWS or similar cloud service because you can build the next great companies on the infrastructure of others — the models of, say, OpenAI and Anthropic, and the servers of Microsoft.
So, okay, let's humour this point too. You can build the next great AI startup, and you have to build it on one of the megaclouds, because they're the only ones that can afford to build the infrastructure.
One small problem.
Let's start by establishing a few facts:
None of this is to say that one hundred million dollars isn't a lot of money to you and me, but in the world of Software-as-a-Service or enterprise software, this is chump change. HubSpot had revenues of $2.63 billion in its 2024 financial year.
We're three years in, and generative AI's highest-grossing companies — outside of OpenAI ($10 billion annualized as of early June) and Anthropic ($4 billion annualized as of July), both of which lose billions a year despite that revenue — have three major problems:
But let's start with Anysphere and Cursor, its AI-powered coding app, and its $500 million of annualized revenue. Pretty great, right? It hit $200 million in annualized revenue in March, then hit $500 million annualized revenue in June after raising $900 million. That's amazing!
Sadly, it's a mirage. Cursor's growth was the result of an unsustainable business model that it's now had to replace with opaque terms of service, dramatically restricted access to models, and rate limits that effectively stop its users from using the product at the price point they were used to.
It’s also horribly unprofitable, and a sign of things to come for generative AI.
A couple of weeks ago, I wrote up the dramatic changes that Cursor made to its service in the middle of June on my premium newsletter, and discovered that they timed precisely with Anthropic (and, to a lesser extent, OpenAI) adding "service tiers" and "priority processing," which is tech language for "pay us extra if you have a lot of customers, or face rate limits and service delays." These price shifts have also led to companies like Replit having to make significant changes to their pricing models that disfavor users.
I will now plagiarise myself:
In simpler terms, Cursor raised $900 million and very likely had to hand large amounts of that money over to OpenAI and Anthropic to keep doing business with them, and then immediately changed its terms of service to make them worse. As I said at the time:
While some may believe that both OpenAI and Anthropic hitting "annualized revenue" milestones is good news, you have to consider how these milestones were hit. Based on my reporting, I believe that both companies are effectively doing steroids, forcing massive infrastructural costs onto big customers as a means of covering the increasing costs of their own models.
There is simply no other way to read this situation. By making these changes, Anthropic is intentionally making it harder for its largest customer to do business, creating extra revenue by making Cursor's product worse by proxy. What's sickening about this particular situation is that it doesn't really matter if Cursor's customers are happy or sad — they, like OpenAI's enterprise Priority Access API, require a long-term commitment which involves a minimum throughput of tokens per second as part of their Tiered Access program.
If Cursor's customers drop off, both OpenAI and Anthropic still get their cut, and if Cursor's customers somehow outspend even those commitments, they'll either still get rate limited or Anysphere will incur more costs.
Cursor is the largest and most-successful generative AI company, and these aggressive and desperate changes to its product suggest A) that its product is deeply unprofitable and B) that its current growth was a result of offering a product that was not the one it would sell in the long term. Cursor misled its customers, and its current revenue is, as a result, highly unlikely to stay at this level.
Worse still, the two Anthropic engineers who left to join Cursor two weeks ago just returned to Anthropic. This heavily suggests that whatever they saw at Cursor wasn’t compelling enough to make them stay.
As I also said:
While Cursor may have raised $900 million, it was really OpenAI, Anthropic, xAI and Google that got that money.
At this point, there are no profitable enterprise AI startups, and it is highly unlikely that the new pricing models by both Cursor and Replit are going to help.
These are now the new terms of doing business with these companies — a shakedown, where you pay up for priority access or "tiers" or face indeterminate delays or rate limits. Any startup scaling into an "enterprise" integration of generative AI (which means, in this case, anything that requires a certain level of service uptime) has to commit to both a minimum number of months and a throughput of tokens, which means that the price of starting an AI startup that gets any kind of real market traction just dramatically increased.
While one could say "oh perhaps you don't need priority access," the "need" here is something that will be entirely judged by Anthropic and OpenAI in an utterly opaque manner. They can — and will! — throttle companies that are too demanding on their system, as proven by the fact that they've done this to Cursor multiple times.
I realize it's likely a little boring hearing about software as a service, but this is the only place where generative AI can really make money. Companies buying hundreds or thousands of seats are how industries that rely upon compute grow, and without that growth, they're going nowhere.
To give you some context, Netflix makes about $39 billion a year in subscription revenue, and Spotify about $18 billion. These are the single-most-popular consumer software subscriptions in the world — and OpenAI's 15.5 million subscribers suggest that it can't rely on them for the kind of growth that would actually make the company worth $300 billion (or more).
Cursor is, as it stands, the one example of a company thriving using generative AI, and it appears its rapid growth was the result of selling a product at a massive loss. Today, its product is significantly worse, and its subreddit is full of people furious at the company for the changes.
In simpler terms, Cursor was the company that people mentioned to prove that startups could make money by building products on top of OpenAI and Anthropic's models, yet the truth is that the only way to do so and grow is to burn tons of money. While the tempting argument is to say that Cursor’s "customers are addicted," this is clearly not the case, nor is it a real business model.
This story also showed that Anthropic and OpenAI are the biggest threats to their customers, and will actively rent-seek and punish their success stories, looking to loot as much as they can from them.
To put it bluntly: Cursor's growth story was a lie. It reached $500 million in annualized revenue selling a product it can no longer afford to sell, suggesting material weakness in its own business and any and all coding startups.
It is also remarkable — and a shocking failure of journalism — that this isn’t in every single article about Anysphere.
I'm serious! Perplexity? Perplexity only has $150 million in annualized revenue! In 2024 it spent 167% of its revenue on compute services from Anthropic, OpenAI, and Amazon ($57 million against $34 million in revenue)! It lost $68 million!
And worse still, it has no path to profitability, and it’s not even anything new! It’s a search engine! Professional gasbag Alex Heath just did a flummoxing interview with Perplexity CEO Aravind Srinivas, who, when asked how it’d become profitable, appeared to experience a stroke:
Maybe let me give you another example. You want to put an ad on Meta, Instagram, and you want to look at ads done by similar brands, pull that, study that, or look at the AdWords pricing of a hundred different keywords and figure out how to price your thing competitively. These are tasks that could definitely save you hours and hours and maybe even give you an arbitrage over what you could do yourself, because AI is able to do a lot more. And at scale, if it helps you to make a few million bucks, does it not make sense to spend $2,000 for that prompt? It does, right? So I think we’re going to be able to monetize in many more interesting ways than chatbots for the browser.
Aravind, do you smell toast?
And don’t talk to me about “AI browsers,” I’m sorry, it’s not a business model. How are people going to make revenue on this, hm? What do these products actually do? Oh they can poorly automate accepting LinkedIn invites? It’s like God himself has personally blessed my computer. Big deal!
In any case, it doesn't seem like you can really build a consumer AI startup that makes anything approaching a real company. Other than ChatGPT, I guess?
Arguably the biggest sign that things are troubling in the generative AI space is that we use "annualized revenue" at all, which, as I've mentioned repeatedly, means taking one month's revenue, multiplying it by 12, and saying "that's our annualized!"
The problem with this number is that, well, people cancel things. While your June might be great, if 10% of your subscribers churn in a bad month (due to a change in your terms of service), that's a chunk of your annualized revenue gone.
But the worst sign is that nobody is saying the monthly figures, mostly because the monthly figures kinda suck! $100 million of annualized revenue is $8.33 million a month. To give you some scale, Amazon Web Services hit $189 million ($15.75 million a month) in revenue in 2008, two years after launch, and while it took until 2015 to hit profitability, it actually hit break-even in 2009, then invested its cash in growth for a few years after.
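To make the fragility obvious, here's the math as a toy calculation, with numbers I've made up for illustration:

```python
# "Annualized revenue," illustrated with made-up numbers: take one good
# month, multiply by 12, then watch what a single bad month does to it.
monthly = 8.33e6                 # $8.33M: the month you brag about
annualized = monthly * 12        # ~$100M of "annualized revenue"

churned = monthly * 0.90         # 10% of subscribers cancel
new_annualized = churned * 12    # a tenth of the headline, gone

print(f"${annualized:,.0f} -> ${new_annualized:,.0f}")
# $99,960,000 -> $89,964,000
```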
Right now, not a single generative AI software company is profitable, and none of them are showing signs of the kind of hypergrowth that previous "big" software companies had. While Cursor is technically "the fastest growing SaaS of all time," it got there using what amounts to fake pricing. You can dress this up as "growth stage" or "enshittification" (it isn't, by the way; price changes generally make things profitable, which this did not), but Cursor lied. It lied to the public about what its product would do long-term. It isn't even obvious whether its current pricing is sustainable.
Outside of Cursor, what other software startups are there?
Glean?
Everyone loves to talk about enterprise search company Glean — a company that uses AI to search and generate answers from your company's files and documents.
In December 2024, Glean raised $260 million, proudly stating that it had over $550 million of cash in hand with "best-in-class ARR growth." A few months later in February 2025, Glean announced it’d "achieved $100 million in annual recurring revenue in fourth quarter FY25, cementing its position as one of the fastest-growing SaaS startups and reflecting a surging demand for AI-powered workplace intelligence." In this case, ARR could literally mean anything, as it appears to be based on quarters — meaning it could be an average of the last three months of the year, I guess?
Anywho, in June 2025, Glean announced it had raised another funding round, this time raising $150 million, and, troublingly, added that since its last round, it had "...surpassed $100M in ARR."
Five months into the fucking year and your monthly revenue is the same? That isn't good! That isn't good at all!
Also, what happened to that $550 million in cash? Why did Glean need more? Hey wait a second, Glean announced its raise on June 18 2025, two days after Cursor's pricing increase and the same day that Replit announced a similar hike!
It's almost as if the pricing Glean pays dramatically increased due to the introduction of Anthropic's Service Tiers and OpenAI's Priority Processing.
I'm guessing, but isn't it kind of weird that all of these companies raised money about the same time?
Hey, that reminds me.
If you look at what any given generative AI company does (note that the following is not a quality barometer), it's probably one of the following things:
Every single generative AI company that isn't OpenAI or Anthropic does one or a few of these things, and I mean every one of them, and it's because every single generative AI company uses Large Language Models, which have inherent limits on what they can do. LLMs can generate, they can search, they can edit (kind of!), they can transcribe (sometimes accurately!) and they can translate (often less accurately).
As a result, it's very, very difficult for a company to build something unique. Though Cursor is successful, it is ultimately a series of system prompts, a custom model that its users hate, a user interface, and connections to models by OpenAI and Anthropic, both of whom have competing products and make money from Cursor and its competitors. Within weeks of Cursor's changes to its services, Amazon and ByteDance released competitors that, for the most part, do the same thing. Sure, there's a few differences in how they're designed, but design is not a moat, especially in a high-cost, negative-profit business, where your only way of growing is to offer a product you can't afford to sustain.
The only other moat you can build is the services you provide, which, when your services are dependent on a Large Language Model, are dependent on the model developer, who, in the case of OpenAI and Anthropic, could simply clone your startup, because the only valuable intellectual property is theirs.
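To illustrate how thin that is, here's a deliberately crude sketch of the "wrapper" pattern that most of these products reduce to: a system prompt plus somebody else's model behind an API. The prompt and model name are mine, for illustration only, not any particular company's actual setup:

```python
# A deliberately crude sketch of the "wrapper" pattern: a system prompt,
# somebody else's model, and a thin function around both.
from openai import OpenAI

client = OpenAI()  # the valuable intellectual property lives on their servers

SYSTEM_PROMPT = "You are a coding assistant. Edit the user's code as asked."

def edit_code(code: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"{instruction}\n\n{code}"},
        ],
    )
    return response.choices[0].message.content
```

Everything of value in that sketch sits on the model developer's side of the API.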
You may say "well, nobody else has any ideas either," to which I'll say that I fully agree. My Rot-Com Bubble thesis suggests we're out of hypergrowth ideas, and yeah, I think we're out of ideas related to Large Language Models too.
At this point, I think it's fair to ask — are there any good companies you can build on top of Large Language Models? I don't mean companies adding AI-related features to an existing product, I mean an AI company that actually sells a product that people buy at scale that isn't called ChatGPT.
In previous tech booms, companies would make their own “models” — their own infrastructure, or the things that make them distinct from other companies — but the generative AI boom effectively changes that by making everybody build stuff on top of somebody else’s models, because training your own models is both extremely expensive and requires vast amounts of infrastructure.
As a result, much of this “boom” is about a few companies — really two, if we’re honest — getting other companies to try and build functional software for them.
I wanted to add one note: ultimately, OpenAI and Anthropic are bad for their customers. Their models are popular (by which I mean their customers' customers will expect access to them), meaning that OpenAI and Anthropic can (as they did with Cursor) arbitrarily change pricing, service availability or functionality based on how they feel that day. Don't believe me? Anthropic cut off AI coding platform Windsurf's access to its models because it looked like Windsurf might get acquired by OpenAI.
Even by big tech standards this fucking sucks. And these companies will do it again!
Because all Large Language Models require more data than anyone has ever needed before, they all basically have to use the same data, either taken from the internet or bought from one of a few companies (Scale, Surge, Turing, Together, etc.). While they can get customized data or do customized training and reinforcement learning, these models are all transformer-based, and they all function similarly; the only way to make them different is by training them, which doesn't make them much different, just better at the things they already do.
I already mentioned OpenAI and Anthropic's costs, as well as Perplexity's $50-million-plus bill to Anthropic, Amazon and OpenAI off of a measly $34 million in revenue. These companies cost too much to run, and their functionality doesn't make enough money for them to make sense.
The problem isn't just the pricing, but how unpredictable it is. As Matt Ashare wrote for CIO Dive last year, generative AI makes a lot of companies' lives difficult through the massive spikes in costs that come from power users, with few ways to mitigate those costs. One of the ways that a company manages its cloud bills is by having some degree of predictability — which is difficult with the constant slew of new models and demands for new products to go with them, especially when said models can (and do) cost more with subsequent iterations.
As a result, it's hard for AI companies to actually budget.
"But Ed!" you cry, "What about AGENTS?"
The term "agent" is one of the most egregious acts of fraud I've seen in my entire career writing about this crap, and that includes the metaverse.
When you hear the word "agent," you are meant to think of an autonomous AI that can go and do stuff without oversight, replacing somebody's job in the process, and companies have been pushing the boundaries of good taste and financial crimes in pursuit of them.
Most egregious of them is Salesforce's "Agentforce," which lets you "deploy AI agents at scale" and "brings digital labor to every employee, department and business process." This is a blatant fucking lie. Agentforce is a god damn chatbot platform, it's for launching chatbots, they can sometimes plug into APIs that allow them to access other information, but they are neither autonomous nor "agents" by any reasonable definition.
Not only does Salesforce not actually sell "agents," its own research shows that agents only achieve around a 58% success rate on single-step tasks, meaning, to quote The Register, "tasks that can be completed in a single step without needing follow-up actions or more information." On multi-step tasks — so, you know, most tasks — they succeed a depressing 35% of the time.
Last week, OpenAI announced its own "ChatGPT agent" that can allegedly go "do tasks" on a "virtual computer." In its own demo, the agent took 21 or so minutes to spit out a plan for a wedding with destinations, a vague calendar and some suit options, and then showed a pre-prepared demo of the "agent" putting together an itinerary for visiting every major league ballpark. In the latter example, the "agent" took 23 minutes, and produced arguably the most confusing-looking map I've seen in my life.
It also missed out every single major league ballpark on the East Coast — including Yankee Stadium and Fenway Park — and added a random stadium in the middle of the Gulf of Mexico. What team is that, eh Sam? The Deepwater Horizon Devils? Is there a baseball team in North Dakota?
I should also be clear this was the pre-prepared example. As with every Large Language Model-based product — and yes, that's what this is, even if OpenAI won't talk about what model — results are extremely variable.
Agents are difficult, because tasks are difficult, even the ones that can be completed by a human being that a CEO thinks is stupid. What OpenAI appears to be doing is using a virtual machine to run scripts that its models trigger. Regardless of how well it works (it works very, very poorly and inconsistently), it's also likely very expensive.
In any case, every single company you see using the word agent is trying to mislead you. Glean's "AI agents" are chatbots with if-this-then-that functions that trigger events using APIs (the connectors between different software services), not taking actual actions, because that is not what LLMs can do.
ServiceNow's AI agents that allegedly "act autonomously and proactively on your behalf" are, despite claiming they "go beyond ‘better chatbots,’" still ultimately chatbots that use APIs to trigger different events using if-this-then-that functions. Sometimes these chatbots can also answer questions that people might have, or trigger an event somewhere. Oh, right, that's the same thing.
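To show how un-magical this is, here's a minimal sketch of the pattern (every intent and endpoint name here is hypothetical): a language model classifies what the user wants, and a lookup table decides whether anybody pre-wired an API call for it.

```python
# A sketch of what most commercial "agents" boil down to: a chatbot that
# maps a classified intent onto a pre-wired API call. Every intent and
# endpoint name here is hypothetical.
import requests

INTENT_HANDLERS = {
    # if-this-then-that: when the model detects this intent, fire that API
    "reset_password": lambda user: requests.post(
        "https://internal.example.com/api/password-reset", json={"user": user}),
    "open_ticket": lambda user: requests.post(
        "https://internal.example.com/api/tickets", json={"reporter": user}),
}

def handle(intent: str, user: str):
    """No autonomy here: an intent nobody pre-wired hits the fallback."""
    handler = INTENT_HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I can't help with that."  # the eternal chatbot shrug
    return handler(user)
```

That's the "digital labor." A dictionary.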
The closest thing we have to an "agent" of any kind is a coding agent, which can make a list of things you might do on a software project, then go and generate the code and push stuff to GitHub when you ask it to, and it can do so "autonomously," in the sense that you can let it run whatever task seems right. When I say "ask it to" or "go and," I mean that these agents are not remotely intelligent, and when left to run rampant, they fuck up everything and create a bunch of extra work. Also, a study found that AI coding tools made engineers 19% slower.
Nevertheless, none of these products are autonomous agents, and anybody using the term agent likely means "chatbot."
And it's working because the media keeps repeating everything these companies say.
I realize we've taken kind of a scenic route, but I needed to lay the groundwork, because I am well and truly alarmed.
According to a UBS report from the 26th of June, the public companies running AI services are making absolutely pathetic amounts of money from AI:
ServiceNow's use of "$250 million ACV" — so annual contract value — may be one of the more honest explanations of revenue I've seen, putting it in the upper echelons of AI revenue, unless, of course, you think for two seconds and ask whether these are AI-specific contracts or merely contracts that include AI. Eh, who cares. These are also year-long agreements that could churn, and according to Gartner, over 40% of "agentic AI" projects will be canceled by the end of 2027.
And really, ya gotta laugh at Adobe and Salesforce, both of whom have talked so god damn much about generative AI and yet have only made around $100 million in annualized revenue from it. Pathetic! These aren't futuristic numbers! They're barely product categories! And none of this seems to include costs.
Oh well.
I haven't really spent time on my favourite subject — OpenAI being a systemic risk to the tech industry.
To recap:
Anthropic is in a similar, but slightly better, position — it is set to lose $3 billion this year on $4 billion of revenue. It also has no path to profitability, recently jacked up prices on Cursor, its largest customer, and had to put restraints on Claude Code after allowing users to burn compute worth anywhere from 100% to 10,000% of what they paid. These are the actions of a desperate company.
Nevertheless, OpenAI and Anthropic's revenues amount to, by my estimates, more than half of the entire revenue of the generative AI industry, including the hyperscalers.
To be abundantly clear: the two companies that amount to around half of all generative artificial intelligence revenue are ONLY LOSING MONEY.
I've said a lot of this before, which is why I'm not harping on about it, but the most important company in the entire AI industry needs to convert to a for-profit by the end of the year or it's effectively dead, and even if it does, it burns billions and billions of dollars a year and will die without continual funding. It has no path to profitability, and anyone telling you otherwise is a liar or a fantasist.
Worse still, outside of OpenAI...what is there, really?
As I wrote earlier in the year, there is really no significant adoption of generative AI services or products. ChatGPT has 500 million weekly users, and otherwise, it seems that other services struggle to get 15 million of them. And while the 500 million weekly users sounds — and, in fairness, is — impressive, there’s a world of difference between someone using a product as part of their job, and someone dicking around with an image generator, or a college student trying to cheat on their homework.
Sidebar: Google cheated by combining Google Gemini with Google Assistant to claim that it has 350 million users. Don't care, sorry.
This is worrying on so many levels, chief of which is that everybody has been talking about AI for three god damn years, everybody has said "AI" in every earnings and media appearance and exhausting blog post, and we still can't scrape together the bits needed to make a functional industry.
I know some of you will probably read this and point to ChatGPT's users, and I quote myself here:
It has, allegedly, 500 million weekly active users — and, by the last count, only 15.5 million paying subscribers, an absolutely putrid conversion rate, even before you realize that a real conversion rate would be measured against monthly active users. That's how any real software company actually defines its metrics, by the fucking way.
Why is this impressive? Because it grew fast? It literally had more PR and more marketing and more attention and more opportunities to sell to more people than any company has ever had in the history of anything. Every single industry has been told to think about AI for three years, and they’ve been told to do so because of a company called OpenAI. There isn’t a single god damn product since Google or Facebook that has had this level of media pressure, and both of those companies launched without the massive amount of media (and social media) that we have today.
ChatGPT is a very successful growth product and an absolutely horrifying business. OpenAI is a banana republic that cannot function on its own; it does not resemble Uber, Amazon Web Services, or any other business of the past, other than WeWork, the other company that SoftBank spent way too much money on.
And outside of ChatGPT, there really isn't anything else.
Before I wrap up — I'm tired, and I imagine you are too — I want to address something.
Yes, generative AI has functionality. There are coding products and search products that people like and pay for. But as I have discussed above, none of these companies are profitable, and until one of them is, generative AI-based companies are not real businesses.
In any case, the problem isn't so much that LLMs "don't do anything," but that people talk about them doing things they can't do.
I believe that the generative AI market is a $50 billion revenue industry masquerading as a $1 trillion one, and the media is helping.
As I've explained at length, the AI trade is not one based on revenue, user growth, the efficacy of tools or significance of any technological breakthrough. Stocks are not moving based on whether they are making money on AI, because if they were, they'd be moving downward. However, due to the vibes-based nature of the AI trade, companies are benefiting from the press inexplicably crediting growth to AI with no proof that that's the case.
OpenAI is a terrible business, and the only businesses worse than OpenAI are the companies built on top of it. Large Language Models are too expensive to run, and have limited abilities beyond the ones I've named previously, and because everybody is running models that all, on some level, do the same thing, it's very hard for people to build really innovative products on top of them.
And, ultimately, this entire trade hinges on GPUs.
CoreWeave was initially funded by NVIDIA, its IPO funded partially by NVIDIA, NVIDIA is one of its customers, and CoreWeave raises debt on the GPUs it buys from NVIDIA to build more data centers, while also using the money to buy GPUs from NVIDIA. This isn’t me being polemic or hysterical — this is quite literally what is happening, and how CoreWeave operates. If you aren’t alarmed by that, I’m not sure what to tell you.
Elsewhere, Oracle is buying $40 billion in GPUs for the still-unformed Stargate data center project, and Meta is building a Manhattan-sized data center to fill with NVIDIA GPUs.
OpenAI is Microsoft's largest Azure client — an insanely risky proposition on multiple levels, not simply because Microsoft serves that compute at cost, but because Microsoft executives believed OpenAI would fail in the long term when they invested in 2023 — and Microsoft is NVIDIA's largest client for GPUs, meaning that any change to Microsoft's future interest in OpenAI, such as reducing its data center expansion, would eventually hit NVIDIA's revenue.
Why do you think DeepSeek shocked the market? It wasn't because of any clunky story around training techniques. It was because it said to the market that NVIDIA might not sell more GPUs every single quarter in perpetuity.
Microsoft, Meta, Google, Apple, Amazon and Tesla aren't making much money from AI — in fact, they're losing billions of dollars on whatever revenues they do make from it. Their stock growth is not coming from actual revenue, but the vibes around "being an AI company," which means absolutely jack shit when you don't have the users, finances, or products to back them up.
So, really, everything comes down to NVIDIA's ability to sell GPUs, and this industry, if we're really honest, at this point only exists to do so. Generative AI products do not provide significant revenue growth, they are not useful in the way that unlocks significant business value, and the products that have some adoption run at a grotesque loss.
I realize I've thrown a lot at you, and, for the second time this year, written the longest thing I've ever written.
But I needed to write this, because I'm really worried.
We're in a bubble. If you do not think we're in a bubble, you are not looking outside. Apollo Global Chief Economist Torsten Slok said it last week. Well, okay, what he said was much worse:
“The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” Slok wrote in a recent research note that was widely shared across social media and financial circles.
We are in a bubble. Generative AI does not do the things that it's being sold as doing, and the things it can actually do aren't the kind that create business returns, automate labor, or amount to much more than an extension of a cloud software platform. The money isn't there, the users aren't there, every company seems to lose money, and some companies lose so much money that it's impossible to tell how they'll survive.
Worse still, this bubble is entirely symbolic. The bailouts of the Great Financial Crisis were focused on banks and funds that had failed because they ran out of money, and the TARP initiative existed to plug the holes with low-interest loans.
There are few holes to plug here, because even if OpenAI and Anthropic somehow found the money to burn forever, the AI trade is based on the continued and continually-increasing sale and use of GPUs. There are limited amounts of capital, and limited numbers of data centers to actually put GPUs in, and on top of that, at some point growth will slow at one of the Magnificent 7, at which point costs will have to come down, starting with the things that lose them tons of money, such as generative AI.
And before you tell me that "the cost of inference is coming down": you do not have proof for that statement! The cost of tokens going down is not the same thing as the cost of inference going down! Everyone saying this is saying it because a guy once said it to them! You don't have proof! I have more proof for what I am saying!
While inference might theoretically get cheaper, all evidence points to larger models costing more money, especially reasoning-heavy ones like Claude Opus 4. Inference is not the only thing happening, and if this is your one response, you are a big bozo and doofus and should go back to making squeaky noises when you see tech executives or hear my name.
Okay, so one argument is that these companies will use ASICs — customized chips for specific operations — to reduce the amount they're spending.
A few thoughts:
I am worried because despite all of these obvious, brutal and near-unfixable problems, everybody is walking around acting like things are going great with AI. The New York Times claims everybody is using AI for everything — a blatant lie, one that exists to prop up an industry that has categorically failed to deliver the innovations or returns that it promised, yet still receives glowing press from a tech and business media that refuses to look outside and see that the sky is red and frogs are landing everywhere.
Other than the frog thing, I'm not even being dramatic. Everywhere you look in the AI trade, things get worse — no revenue, billions being burned, no moat, no infrastructure play, no comparables in history other than the dot com bubble and WeWork, and a series of flagrant lies spouted by the powerful and members of the press that are afraid of moving against market consensus.
Worse still, despite NVIDIA's strength, NVIDIA is the market's weakness, through no fault of its own, really. Jensen Huang sells GPUs, people want to buy GPUs, and now the rest of the market is leaning aggressively on one company, feeding it billions of dollars in the hopes that the things they're buying start making them a profit.
And that really is the most ridiculous thing. At the center of the AI trade sits GPUs that, on installation, immediately start losing the company in question money. Large Language Models burn cash for negative returns to build products that all kind of work the same way.
If you're going to say I'm wrong, sit and think carefully about why. Is it because you don't want me to be right? Is it because you think "these companies will work it out"? This isn't anything like Uber, AWS, or any other situation. It is its own monstrosity, a creature of hubris and ignorance caused by a tech industry that's run out of ideas, built on top of one company.
You can plead with me all you want about how there are actual people using AI. You've probably read the "My AI Skeptic Friends Are All Nuts" blog, and if you're gonna send it to me, read the response from Nik Suresh first. If you're going to say that I "don't speak to people who actually use these products," you are categorically wrong and in denial.
I am only writing with this aggressive tone because, for the best part of two years, I have been made to repeatedly explain myself in a way that no AI "optimist" is made to, and I admit I resent it. I have written hundreds of thousands of words with hundreds of citations, and still, to this day, there are people who claim I am somehow flawed in my analysis, that I'm missing something, that I am somehow failing to make my case.
The only people failing to make their case are the AI optimists still claiming that these companies are making "powerful AI." And once this bubble pops, I will be asking for an apology.
I love ending pieces with personal thoughts about stuff because I am an emotional and overly honest person, and I enjoy writing a lot.
I do not, however, enjoy telling you at length how brittle everything is. An ideal tech industry would be one built on innovation, revenue, and real growth based on actual business returns, one that helped humans be better rather than outright lying about replacing them. All that generative AI has done is show how much lust there is in both the markets and the media for replacing human labor — and yes, it is in the media too. I truly believe there are multiple reporters who feel genuine excitement when they write scary stories about how Dario Amodei says white collar workers will be fired in the next few years in favour of "agents" that will never exist.
Everything I’m discussing is the result of the Rot Economy thesis I wrote back in 2023 — the growth-at-all-costs mindset that has driven every tech company to focus on increasing quarterly revenue numbers, even if the products suck, or are deeply unprofitable, or, in the case of generative AI, both.
Nowhere has there been a more pungent version of the Rot Economy than in Large Language Models, or more specifically GPUs. By making everything about growth, you inevitably reach a point where the only thing you know how to do is spend money, and both LLMs and GPUs allowed big tech to do the thing that worked before — building a bunch of data centers and buying a bunch of chips — without making sure they’d done the crucial work of “making sure this would create products people like.” As a result, we’re now sitting on top of one of the most brittle situations in economic history — our markets held up by whether four or five companies will continue to buy chips that start losing them money the second they’re installed.
I am disgusted by how many people are unwilling or unable to engage with the truth, favouring instead a scornful, contemptuous tone toward anybody who doesn't believe that generative AI is the future. If you are a writer that writes about AI smarmily insulting people who "don't understand AI," you are a shitty fucking writer, because either AI isn't that good or you're not good at explaining why it's good. Perhaps it's both.
If you want to know my true agenda, it's that I see in generative AI and its boosters something I truly dislike. Large Language Models authoritatively state things that are incorrect because they have no concept of right or wrong. I believe that the writers, managers and executives who find that exciting do so because it gives them the ability to pretend to be intelligent without actually learning anything, and to do everything they can to avoid actual work or responsibility for themselves or others.
There is an overwhelming condescension that comes from fans of generative AI — the sense that they know something you don't, something they double down on. We are being forced to use it by bosses, or services we like that now insist it's part of our documents or our search engines, not because it does something, but because those pushing it need us to use it to prove that they know what's going on.
To quote my editor Matt Hughes: "...generative AI...is an expression of contempt towards people, one that considers them to be a commodity at best, and a rapidly-depreciating asset at worst."
I haven't quite cracked why, but generative AI also brings out the worst in some people. By giving the illusion of labor, it excites those who are desperate to replace or commoditize it. By giving the illusion of education, it excites those who are too idle to actually learn things by convincing them that in a few minutes they can learn quantum physics. By giving the illusion of activity, it allows the gluttony of Business Idiots that control everything to pretend that they do something. By giving the illusion of futurity, it gives reporters that have long-since disconnected from actual software and hardware the ability to pretend that they know what's happening in the tech industry.
And, fundamentally, its biggest illusion is economic activity: despite being questionably useful and burning billions of dollars, generative AI creates a justification for spending billions more on GPUs and data center sprawl, which allows big tech to sink money into something and give the illusion of growth.
I love writing, but I don't love writing this. I think I'm right, and it’s not something I’m necessarily happy about. If I'm wrong, I'll explain how I'm wrong in great detail, and not shy away from taking accountability, but I really do not think I am, and that's why I'm so alarmed.
What I am describing is a bubble, and one with an obvious weakness: one company's ability to sell hardware to four or five other companies, all to run services that lose billions of dollars.
At some point the momentum behind NVIDIA slows. Maybe it won't even be sales slowing — maybe it'll just be the suggestion that one of its largest customers won't be buying as many GPUs. Perception matters just as much as actual numbers, and sometimes more, and a shift in sentiment could start a chain of events that knocks down the entire house of cards.
I don't know when, I don't know how, but I really, really don't know how I'm wrong.
I hate that so many people will see their retirements wrecked, and that so many people intentionally or accidentally helped steer the market in this reckless, needless and wasteful direction, all because big tech didn’t have a new way to show quarterly growth. I hate that so many people have lost their jobs because companies are spending the equivalent of the entire GDP of some European countries on data centers and GPUs that won’t actually deliver any value.
But my purpose here is to explain to you, no matter your background or interests or creed or whatever way you found my work, why it happened. As you watch this collapse, I want you to tell your friends about why — the people responsible and the decisions they made — and make sure it’s clear that there are people responsible.
Sam Altman, Dario Amodei, Satya Nadella, Sundar Pichai, Tim Cook, Elon Musk, Mark Zuckerberg and Andy Jassy have overseen a needless, wasteful and destructive economic force that will harm our markets (and by a larger extension our economy) and the tech industry writ large, and when this is over, they must be held accountable.
And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or question the usefulness of these tools. You are smarter than they reckon and stronger than they know, and a better future is one where you recognize this, and realize that power and money doesn’t make a man righteous, right, or smart.
I started writing this newsletter with 300 subscribers, and I now have 67,000 and a growing premium subscriber base. I am grateful for the time you’ve given me, and really hope that I continue to help you see the tech industry for what it currently is — captured almost entirely by people that have no interest in building the future.
2025-07-18 22:51:41
Hello premium subscribers! Today I have the first guest post I've ever commissioned (read: paid) on Where's Your Ed At - Nik Suresh, one of the greatest living business and tech writers, best-known for his piece I Will Fucking Piledrive You If You Mention AI Again, probably my favourite piece of the AI era.
I want to be clear that I take any guest writing on here very seriously, and do not intend to do this regularly. The quality bar is very high, which is why I started with Nik. I cannot express enough how much I love his work. Brainwash An Executive Today is amazing, as is his "Contra Ptacek" teardown of "My AI Skeptic Friends Are All Nuts." Nik is a software engineer, the executive director of an IT consultancy, and in general someone who actually understands software and the industries built around selling it.
You can check out his work here and check out his team here.
Ed asked me to write about why leaders around the world are constantly buying software they don’t need. He probably had a few high-profile companies in mind, like Snowflake. Put aside whether Snowflake is a good product – most people don’t know what a database is, so why on earth does a specialized and very expensive database have a market cap of $71B?
That’s a fair question – and being both a software engineer and the managing director at a tech consultancy, I can talk about what’s happening on the ground. And yes, people are buying software that they don’t need.
I wish that was the extent of our problems.
Pointless software purchases are a comparatively minor symptom of the seething rot and stunning incompetence at the core of most companies’ technical operations. Things are bad to a degree that sounds unbelievable to people that don’t have the background to witness or understand it firsthand.
Here is my thesis:
Most enterprise SaaS purchases are simply a distraction – total wishful thinking – for leaders that hope waving a credit card is going to absolve them of the need to understand and manage the true crisis in software engineering. Buying software has many desirable characteristics – everyone else is doing it, it can stall having to deliver results for years, and allows leaders to adopt a thin veneer of innovation. In reality, they’re settling for totally conservative failure. The real crisis, the one they’re ignoring, is only resolved by deep systems thinking, emotional awareness, and an actual understanding of the domain they operate in.
And that crisis, succinctly stated, is thus: our institutions are filled to burst with incompetents cosplaying as software engineers, forked-tongue vermin-consultants hawking lies to the desperate, and leaders who think reading Malcolm Gladwell makes you a profound intellectual (if you don’t understand why this is a problem, please report to my office for immediate disciplinary action).
I’m going to try and explain what things are actually like at normal companies. Welcome to my Hell, and hold your screams until the end.
The typical team at a large organization – in a truly random office, of the sort that buys products like Salesforce but will otherwise never be in the news – might literally deliver nothing of value for years at a time. I know, I know, how can people be doing nothing for years? A day? Sure, everyone has an off-day. Weeks? Maybe. But years? Someone’s going to notice eventually, right?
Most industries have long-since been seized by a variety of tedious managerialism that’s utterly divorced from actually accomplishing any work, but the abstract nature of programming allows for software teams to do nothing to a degree that stretches credulity. Code can be reported as 90% done in meetings for years; there’s no physical artifact that non-programmers can use to verify it. There’s no wall with half the bricks laid, just lines of incomprehensible text which someone assures you constitutes Value.
This is a real, well-known phenomenon amongst software engineers, but no one believes us when we bring it up, because surely there’s no way profit-obsessed capitalists are spending millions of dollars on teams with no visible output.
I know it sounds wild. I feel like I’ve been taking crazy pills for years. But enjoy some anecdotes:
My first tech job was “data scientist,” a software engineering subspecialty focused on advanced statistical methods (or “AI” if you are inclined towards grifting). When a data scientist successfully applies statistical methods to solve a business problem, it’s called producing a “model.” My team produced no models in two years, but nonetheless received an innovation award from leadership, and they kept paying me six figures. I know small squads of data scientists with salaries totaling millions that haven’t deployed working models for twice that long.
During my next job, at an entirely unrelated organization, I was tasked with finishing a website that had been “almost done” for a few years, whose main purpose was to let a team do some data entry. This is something that takes a competent team about two weeks – my current team regularly does more complicated things in that time. I finished in good time and handed it to the IT department to host, a task that should take a day if done very efficiently, or perhaps three months if you were dealing with extreme bureaucracy and a history of bad technical decisions. It’s been five years and the organization has yet to deploy the finished product. I later discovered that the company had spent four years trying before I joined. It’s just a website! When the internet was coming up, people famously hired teenagers to do this!
I’m not even going to get into my third and fourth jobs, except to say they involved some truly spectacular displays of technical brilliance, such as discovering a team burning hundreds of thousands of dollars on Snowflake because they didn’t take thirty seconds to double-check any settings. I suspect that Snowflake’s annual revenue would drop by more than 20% if every team in the world spent five minutes (actually five minutes, it was that easy) to make the change I did – editing a single number in the settings that has no negative side-effects for the typical business – but they’re also staffed by people that don’t read or study, so there’s no way to reach them.
I warn every single friend who enters the software industry that unless they land a role with the top 1% of software engineering organizations, they are about to witness true madness. Without fail, they report back in six months with something along the lines of “I thought you were exaggerating.” In a private conversation about a year ago, an employee that left a well-known unicorn start-up confided:
“After leaving that company, I couldn’t believe that the rest of the world works this way.”
There are places where this doesn’t happen, but this madness is overwhelmingly the experience at companies that purchase huge enterprise products like Salesforce – the relationship between astonishing inefficiency and buying these products is so strong that it’s a core part of how my current team handles sales. We don’t waste time trying to sell to companies that use this stuff – it’s usually too late to save them – and I spend a lot of time tracking down companies in the process of being pitched this stuff by competing vendors.
In 2023, software engineer Emmanuel Maggiore wrote:
“When Twitter fired half of its employees in 2022, and most tech giants followed suit, I wasn’t surprised. In fact, I think little will change for those companies. After being employed in the tech sector for years, I have come to the conclusion that most people in tech don’t work. I don’t mean we don’t work hard; I mean we almost don’t work at all. Nada. Zilch. And when we do get to do some work, it often brings low added value to the company and its customers. All of this while being paid an amount of money some people wouldn’t even dream of.”
This will be totally unrecognizable to about half the software people in the world – those working at companies like Netflix which are famous for their software engineering cultures, or some of those working at startups where there isn’t enough slack to obfuscate. For everyone else, what I’ve described is Tuesday.
Also as a note to my lovely fans who email me about the RSS feed "including the whole premium newsletter" - I give generous previews! The rest of the (premium) article follows.