2026-04-01 00:18:11
Hi! If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To The SaaSpocalypse, as well as last week’s deep dive into how the majority of data centers aren’t getting built and the overall AI industry is depressingly small. Supporting my premium supports my free newsletter, and premium subscribers don't get this ad.
Soundtrack: Metallica — …And Justice For All
Bear with me, readers. I need to do a little historical foreshadowing to fully explain what’s going on.
In the run-up to the great financial crisis, unscrupulous lenders issued around 1.9 million subprime loans, many of them adjustable-rate mortgages (ARMs) whose variable rates, after a two-or-three-year introductory period, would adjust every twelve months, per CBS News in July 2006:
On a $200,000 ARM that began a few years ago, the initial rate was around 4.5 percent. When the ARM adjusts to 6.5 percent, the monthly payment will increase from $1,013 to $1,254, or a rise of almost 24 percent.
Although interest rates have increased more than 4 percentage points since 2004, most ARMs typically cap the amount of the annual rate increase to 2.5 percentage points per year. Therefore, these increases are only just the beginning and it's very likely that the people experiencing an increased ARM payment this year will see a similar rise again in 12 months.
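For those who want to check the math, here’s a quick sketch of the CBS numbers above using the standard fixed-payment amortization formula. The 30-year term and the two-year reset point are my assumptions; CBS only quotes the rates and payments.

```python
# Reproducing the CBS arithmetic with the standard annuity formula.
# Assumptions (mine, not CBS's): a 30-year loan that resets after 24 payments.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Fixed monthly payment that fully amortizes `principal` over `months`."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def balance_after(principal: float, annual_rate: float, months: int, paid: int) -> float:
    """Remaining balance after `paid` payments on an amortizing loan."""
    r = annual_rate / 12
    pmt = monthly_payment(principal, annual_rate, months)
    return principal * (1 + r) ** paid - pmt * ((1 + r) ** paid - 1) / r

teaser = monthly_payment(200_000, 0.045, 360)        # roughly $1,013 a month
remaining = balance_after(200_000, 0.045, 360, 24)   # roughly $193,400 still owed
reset = monthly_payment(remaining, 0.065, 360 - 24)  # roughly $1,251 a month
```

Run it and you land within a few dollars of CBS’s $1,013 and $1,254 figures, with the payment jumping almost 24 percent at the reset.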
At the time, 18% of homeowners had adjustable-rate mortgages, which also made up more than 25% of new mortgages in the first quarter of 2006, with (at the time) over $330 billion of mortgages expected to adjust upwards. Things were grimmer beneath the surface. A question on JustAnswer from 2009 showed a homeowner who was about to lose their house after being conned into a negative amortization loan — a mortgage where payments didn’t actually cover the interest, meaning that each month the balance increased. Dodgy lenders were given bonuses for selling more mortgages, whether or not the person on the other end was capable of paying, and by November 2007, around two million homeowners held $600 billion of ARMs.
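To make the negative amortization mechanics concrete, here’s a toy sketch — the balance, rate, and minimum payment are invented numbers, purely illustrative of how paying less than the accruing interest makes the debt grow:

```python
# Illustrative only: a negative-amortization loan where the minimum payment
# doesn't cover the monthly interest, so the shortfall gets added to the balance.
# The $200,000 balance, 7% rate, and $900 minimum payment are made-up numbers.

def run_neg_am(balance: float, annual_rate: float, payment: float, months: int) -> float:
    r = annual_rate / 12
    for _ in range(months):
        interest = balance * r
        balance += interest - payment  # shortfall compounds into the balance
    return balance

start = 200_000.0
# Interest accrues at ~$1,167/month while only $900 is paid, so after a year
# of making every payment on time, the borrower owes *more* than they started with.
after_year = run_neg_am(start, 0.07, 900.0, 12)
```

A borrower in this position owes thousands more after a year of dutiful payments, which is exactly the trap described above.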
Yet the myth of the subprime mortgage crisis was that it was caused entirely by low income borrowers. Per Duke’s Manuel Adelino:
We found there was no explosion of credit offered to lower-income borrowers. In fact, home ownership rates among the poorest 20 percent of Americans fell during the boom because those buyers were being priced out of the market. Instead, we found credit was expanded across the board. Everybody was playing the same game. But credit expanded most drastically in areas where house prices were rising the most, and these were markets that were beyond the reach of lower-income borrowers.
The overwhelming majority of mortgages were going to middle income and relatively high income households during the boom, just as they have always done.
Though The Big Short’s dramatic “stripper with six properties” scene made for a vivid demonstration of the subprime problem, the reality was that everybody got taken in by teaser-rate mortgages, driving up the value of properties based on a housing market that was only made possible by mortgages expressly built to hide the real costs as interest rates and borrower payments rose every six to 36 months. I’ll add that near-prime mortgages — for borrowers with just-below-prime credit scores — were also growing, with over 1.1 million of them in 2005, when they represented nearly 32% of all loans.
Many people who bought houses that they couldn’t afford did so based on a poor understanding of the terms of their mortgage, thinking that the value of housing would continue to climb as it had for over a hundred years, and/or believing that they’d easily be able to refinance the loans. Even as things deteriorated toward the middle of the 2000s, people came up with rationalizations as to why things would work out, such as Anthony Downs of The Brookings Institution, who in October 2007 said the following in a piece called “Credit Crisis: The Sky is not Falling”:
U.S. stock markets are gyrating on news of an apparent credit crunch generated by defaults among subprime home mortgage loans. Such frenzy has spurred Wall Street to cry capital crisis. However, there is no shortage of capital – only a shortage of confidence in some of the instruments Wall Street has invented. Much financial capital is still out there looking for a home.
As this brief describes, the facts hardly indicate a credit crisis. As of mid-2007, data show that prices of existing homes are not collapsing. Despite large declines in new home production and existing home sales, home prices are only slightly falling overall but are still rising in many markets. Default rates are rising on subprime mortgages, but these mortgages—which offer loans to borrowers with poor credit at higher interest rates—form a relatively small part of all mortgage originations. About 87 percent of residential mortgages are not subprime loans, according to the Mortgage Bankers Association’s delinquency studies.
Brookings also added that “...the vast majority of subprime mortgages are likely to remain fully paid up as long as unemployment remains as low as it is now in the U.S. economy.” At the time, US unemployment was 4.7%, but a year later it was at 6.5%, and would peak at 10% in October 2009.
In an article from the December 2004 issue of Economic Policy Review, Jonathan McCarthy and Richard W. Peach argued that there was “little basis” for concerns about housing prices, with “home prices essentially moving in line with increases in family income and declines in nominal mortgage interest rates,” and hand-waved any concerns based on vague statements about “demand”:
Our main conclusion is that the most widely cited evidence of a bubble is not persuasive because it fails to account for developments in the housing market over the past decade. In particular, significant declines in nominal mortgage interest rates and demographic forces have supported housing demand, home construction, and home values during this period. Taking these factors into account, we argue that market fundamentals are sufficiently strong to explain the recent path of home prices and support our view that a bubble does not exist.
As for the likelihood of a severe drop in home prices, our examination of historical national home prices finds no basis for concern. Even during periods of recession and high nominal interest rates, aggregate real home prices declined only moderately.
From the outside, this made it appear that the value of housing was growing exponentially, and that the “pent-up demand” for homes necessitated a massive boom in construction, one that peaked in January 2006 with 2.27 million new homes built. A year later, this number collapsed to 1.084 million, and in January 2009, only 490,000 new homes had been built in America, the lowest figure on record.
Denial rates for mortgages declined drastically (alongside the rise of 40-year and 50-year mortgages), which meant that suddenly almost anybody could get a house, making it seem only logical to build more housing. Low interest rates before 2006 also let consumers take on mountains of new credit card debt, which rose to as much as 20% of household incomes in 2007. By the 2000s, credit card companies were making more money from lending than from the fees people paid to use their cards, with $65 billion of the industry’s $95 billion in revenue coming from interest on debt, and lending-related penalty fees and cash advance fees contributing another $12.4 billion, per Philadelphia Fed economist Lukasz Drozd.
While the precise order of events is a little more complex, the general gist of the subprime mortgage crisis was straightforward: easily-available money allowed massive amounts of people — many of whom couldn’t afford to buy these houses outside of the easy money that funded the bubble — to enter the housing market, which in turn made it much easier to sell a house for a much higher price, which inflated the value of housing.
People made decisions based on fundamentally-flawed information. In January 2004, the Bush administration declared that America’s economy was on the path to recovery, with small businesses creating the majority of new jobs and the stock market booming. Debt was readily-available across the board, with commercial and industrial loans spiking along with consumer debt (including a worrying growth in subprime auto loans). The good times were rolling, as long as you didn’t think about it too hard.
But, as I said, the chain of events was simple: it was easy to borrow money to buy a house, which meant lots of people were buying houses, which meant that the value of a house seemed higher than it was outside of the easy money era. Easily-available money put lots of cash into the economy, which led to higher prices, which led to inflation, which forced the Federal Reserve to raise interest rates 17 times in the space of two years, which made it harder to get any kind of loan, which made it harder to get a mortgage, which made it harder to sell a house, which made people sell houses for cheaper, which lowered the value of houses, which made it harder to refinance the bad loans, which meant people foreclosed on their homes, which in turn lowered the value of housing, all as demand for housing dropped because nobody was able to buy housing.
The underlying problems were, ultimately, the illusion of value and mobility. Those borrowing at the time believed they had invested in something with a consistent (and consistently-growing) value — a house — and would always have easy access to credit (via credit cards and loans), as before-tax family income had never been higher. In the beginning of 2007, delinquencies on consumer and business loans climbed, abandoned housing developments grew, and a US economy dependent on the housing bubble (per Paul Krugman’s “That Hissing Sound” from August 2005) began to stumble. By November 2009, 23% of US consumer mortgages were underwater (meaning the homes were worth less than the loans against them).
The housing bubble was created through easily-available debt, insane valuations based on debt-fueled speculation, do-nothing regulators (like eventual Fed Chair Ben Bernanke, who said in October 2005 that there was no housing bubble) and consumers being sold an impossible, unsustainable dream by people financially incentivized to make them rationalize the irrational, and believe that nothing bad will ever happen.
In February 2005, 40% ($19 billion) of IndyMac Bancorp’s mortgage originations in a single quarter came from a “Pay-Option ARM,” which started with a 1% teaser rate which jumped in a few short months to 4% or more, with frequent adjustments. Washington Mutual CEO Kerry Killinger said in 2003 that he wanted WaMu to be “the Wal-Mart of banking,” and did so by using (to quote the New York Times) “relaxed standards,” including issuing a mortgage to a mariachi singer who claimed a six-figure income and verified it using a single photo of himself.
By the time it collapsed in September 2008, WaMu had over $52.9 billion in ARMs and $16.05 billion in subprime mortgage loans.
Had Washington Mutual and the many banks making dodgy ARM and subprime loans underwritten them based on the actual creditworthiness of their applicants, there wouldn’t have been a housing bubble: many of these borrowers would’ve been unable to pay their mortgages, thus wouldn’t have been deemed creditworthy, and thus the apparent demand for housing would never have grown.
In very simple terms, the “demand” for housing was inflated by a deceitfully-priced product that undersold its actual costs, and through that deceit millions of people were misled into believing said product was viable.
Did you work out where this is going yet?
In September 2024, I raised my first concerns about a Subprime AI Crisis:
I hypothesize a kind of subprime AI crisis is brewing, where almost the entire tech industry has bought in on a technology sold at a vastly-discounted rate, heavily-centralized and subsidized by big tech. At some point, the incredible, toxic burn-rate of generative AI is going to catch up with them, which in turn will lead to price increases, or companies releasing new products and features with wildly onerous rates — like the egregious $2-a-conversation rate for Salesforce’s “Agentforce” product — that will make even stalwart enterprise customers with budget to burn unable to justify the expense.
This theory is important, and thus I’m going to give it a lot of time and love to break it down.
That starts with the parties involved, and how the economics involved get worse over time, returning to my theory of “AI’s chain of pain” and the hierarchy of how the actual AI economy works.
The AI industry has done a great job of obfuscating exactly how brittle its economics really are, and as a result, I need to explain how money is raised, how it’s deployed, and where the economics begin to break down.
Generally, AI is funded from only a few places:
Some things to keep note of:
This is a crucial point, so stay with me.
AI models work by charging a per-million token rate for inputs (things you feed in) and outputs, which are either the things that the model outputs (such as an image, text or code), or the “chain of thought reasoning” many models rely upon now, where they take an input, generate a plan (which is an “output”) and then do stuff based on said plan.
AI startups, for the most part, do not have their own models, and thus must pay OpenAI or Anthropic (or other providers to a much lesser extent) to build services using them.
When you pay for access to an AI startup’s service — which, of course, includes OpenAI and Anthropic — you do so for a monthly fee, such as $20, $100 or $200-a-month in the case of Anthropic’s Claude, Perplexity’s $20 or $200-a-month plan, or OpenAI’s $8, $20, or $200-a-month subscriptions. In some enterprise use cases, you’re given “credits” for certain units of work, such as how Lovable allows users “100 monthly credits” in its $25-a-month subscription, as well as $25 (until the end of Q1 2026) of cloud hosting, with rollovers of credits between months.
When you use these services, the company in question then pays for access to the AI models in question, either at a per-million-token rate to an AI lab, or (in the case of Anthropic and OpenAI) whatever cloud provider is renting them the GPUs to run the models. A token is basically ¾ of a word.
As a user, you do not experience token burn, just the process of inputs and outputs. AI labs obfuscate the cost of services by using “tokens” or “messages” or 5-hour rate limits with percentage gauges, and you, as the user, do not really know how much any of it costs. On the back end, AI startups are annihilating cash: until recently, Anthropic allowed you to burn upwards of $8 in compute for every dollar of your subscription. OpenAI allows you to do the same, though it’s hard to gauge by how much.
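If you want to see how this works mechanically, here’s a rough sketch of the math — every rate and token count below is hypothetical, not any lab’s actual price list, but the shape of the calculation is how per-million-token billing works:

```python
# Hypothetical numbers throughout: the per-million-token rates and usage here
# are illustrative, not any lab's actual pricing.

def api_cost(input_tokens: int, output_tokens: int,
             input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost of a month's usage billed at per-million-token rates."""
    return (input_tokens / 1e6) * input_price_per_m \
         + (output_tokens / 1e6) * output_price_per_m

# A heavy user's month: 20M input tokens and 8M output tokens, at a
# hypothetical $3 per million input and $15 per million output.
monthly_compute = api_cost(20_000_000, 8_000_000, 3.00, 15.00)  # $180 of compute

subscription = 20.00
burn_ratio = monthly_compute / subscription  # $9 of compute per $1 of revenue
```

A subscriber like this burns several dollars of compute for every dollar of subscription revenue — the same ballpark as the multi-dollar subsidies described above — and never sees any of it, because the bill is abstracted into rate-limit gauges.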
This is where the economic problem has begun. When the AI bubble started, venture capitalists flooded AI startups with cash, encouraging them to create hypergrowth businesses using, for the most part, monthly subscription costs that didn’t come close to covering the costs.
As a result, many AI companies have experienced rapid growth selling a product that can only exist with infinite resources.
The problem is fairly simple: providing AI services is very expensive, and costs can vary wildly depending on the customer, input and output, the latter of which can change dramatically depending on the prompt and the model itself. A coding model relies heavily on chain-of-thought reasoning, which means that despite the cost of tokens coming down (which does not mean the price of providing them has decreased; it’s a marketing move), models are using far, far more tokens, increasing costs across the board.
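To put numbers on that mechanic (and these are made-up numbers, purely to illustrate it): a model whose per-token price drops can still cost more per task once chain-of-thought reasoning multiplies the tokens it burns.

```python
# Illustrative only: a "cheaper" reasoning model can still cost more per task
# if chain-of-thought inflates the token count. All numbers are invented.

old_price_per_m = 15.00   # $/million output tokens, older model
new_price_per_m = 10.00   # $/million output tokens, newer "cheaper" model

old_tokens_per_task = 1_000_000   # direct answer
new_tokens_per_task = 3_000_000   # plan + reasoning traces + answer

old_cost = old_tokens_per_task / 1e6 * old_price_per_m   # $15 per task
new_cost = new_tokens_per_task / 1e6 * new_price_per_m   # $30 per task
```

A 33% price cut paired with a 3x token count is a 100% cost increase per task, which is why falling token prices haven’t translated into falling bills.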
And consumers crave new models. They demand them. A service that doesn’t provide access to a new model cannot compete with those that do, and because the costs of models have been mostly hidden from users, the expectation is always the newest models provided at the same price.
As a result, there really isn’t any way that these services make sense at a monthly rate, and every single AI company loses incredible amounts of money, all while failing to make that much revenue in the first place.
For example, Harvey is an AI tool for lawyers that just raised $200 million at an $11 billion valuation, all while having an astonishingly small $190 million in ARR, or $15.8 million a month. It raised another $160 million in December 2025, after raising $300 million in June 2025, after raising $300 million in February 2025.
Cursor is an AI coding tool that raised $160 million in 2024 (as of December 2024, it had $48 million in ARR, or around $4 million of monthly revenue), $900 million ($500 million ARR / $41.6 million monthly) in June 2025, and $2.3 billion in November 2025 ($1 billion ARR / $83 million monthly). As of March 2, 2026, Cursor was at $2 billion in annualized revenue, or $166 million in monthly revenue.
I’ll get to Cursor in a little bit, because it’s crucial to the Subprime AI Crisis.
The Subprime AI Crisis is what happens when somebody actually needs to start making money, or, put another way, stop losing quite so much, revealing how every link in the chain was funded based on questionable assumptions and deadly short-term thinking.
Here’s the order of events as I see them.
The entire generative AI industry is based on unprofitable, unsustainable economics, rationalized and funded by venture capitalists and bankers speculating on the theoretical value of Large Language Model-based services. This naturally incentivized developers to price their subscriptions at rates that attracted users rather than reflecting the actual economics of the services.
Sidenote: This is what worked in the past, if you squint hard enough. In reality, there are no historical comparisons to the AI bubble’s economics in the entire history of tech — no business has been this bad, no software has ever cost this much, and no solution exists other than charging prices that are 10x higher or reducing rate limits to the point that users want to kill you.
Venture capitalists are also part of the subprime AI crisis, sitting on “billions of dollars” of AI companies that lose hundreds of millions of dollars, their companies built on top of AI models owned by OpenAI and Anthropic with little differentiation and no path to profitability. Nobody is going public! Nobody is getting acquired! As I discussed back in AI Is A Money Trap, there really is no liquidity mechanism for the billions of dollars sunk into most AI companies. Going public also reveals the ugly financial condition of these startups. MiniMax, for example, made a pathetic $79 million in revenue in 2025, and somehow lost $250.9 million in the process.
Much like the houses in the great financial crisis, AI startups only retain their value as long as there is a market, or at least the perception that these companies could theoretically go public or be acquired. It only takes one failed exit or firesale to break the illusion.
At least you can live in a house. Every AI company will be a problem child that burns money on inference, bereft of intellectual property thanks to their dependence on OpenAI and Anthropic. What use is Perplexity without an eternal subsidy? The value of having Aravind Srinivas sitting around your office all day? I’d rather start my car in the garage.
“Fast-growing” AI companies only grew because they were allowed to burn as much money as they wanted selling services that are entirely unsustainable, raising more venture capital with every burst of user growth, which they then used to aggressively market to new users and grow further to raise another bump of venture capital.
As a result, AI labs and AI startups have created negative habits with their users in two ways:
To grow their user bases as fast as possible, AI startups (and AI labs) allowed their users to burn incredible amounts of tokens, I assume because they believed at some point things would become profitable or they’d always have access to easy venture capital. This created an entire industry of AI startups that disconnected their users from the raw economics of the product, creating a race to the bottom where every single AI startup must have every AI model and every AI feature and do every AI thing, all at an incredible cost that only ever seems to increase.
Another fun feature is that just about every product gives some sort of “free” access period for new (and expensive!) models, like when Cursor had a free access period for GPT 5’s launch. It’s unclear who shoulders the burden here, but somebody is paying those costs.
In any case, nowhere are the subsidies higher than those of Anthropic and OpenAI, who use their tens of billions of dollars of funding to allow users to burn anywhere from $3 to $13 per every dollar of subscription revenue to outpace their competition.
The Subprime AI Crisis is when the largest parties are finally forced to reckon with their rotten economics, and the downstream consequences that follow.
As I reported in July 2025, starting in June last year, both OpenAI and Anthropic launched “priority service tiers,” jacking up the price on their enterprise customers (who pay for model access via their API to provide models in their software) for guaranteed uptime and less throttling of their services while also requiring an up-front (3-12 month) guarantee of token throughput.
Anthropic’s changes immediately increased the costs on AI startups like Lovable, Replit, Augment Code, and Anthropic’s largest customer, Cursor, which was forced to dramatically change its pricing from a per-request model to a bizarre pricing model where you pay model pricing with a 20% fee, but also receive A) at least as much as you pay in your subscription fee in tokens and B) “generous included usage” of Cursor’s Composer model:

What’s crazy is that even with this pricing, Cursor still gives away 16 cents for every dollar on its $60-a-month plan and $1 for every dollar on its $200-a-month plan, and that’s before “generous usage” of other models.
I’ll also add that Anthropic has already turned the screws on its subscription customers too, adding weekly limits to Claude subscribers on July 28, 2025, a few weeks after quietly tightening other limits.
Over the next few months, just about every AI startup had to institute some form of austerity. Replit shifted to something called “effort-based” pricing in June 2025, and then launched something called “Agent 3” in September 2025 that burned through users’ limits even faster — and, to be clear, Replit’s pricing gives you your subscription price in credits every single month on top of the cloud hosting necessary to get them online, meaning that a $20-a-month subscriber likely burns at least $25 a month, and Replit remains unprofitable.
Coding platform Augment Code was forced to change its pricing in October 2025 to a per-message basis, meaning any message you sent cost the same amount no matter how complex the required response. In one case, a user spent $15,000 in tokens on a $250-a-month plan. Since then, Augment Code has moved to a confusing “credit”-based model where it claims you use about 293 credits per Claude Sonnet 4.5 task, and users absolutely hate it, because Augment Code was too cowardly to charge users based on actual model pricing, knowing that doing so would scare them away.
Now Augment Code is planning to remove its auto-complete and next edit features, claiming that their global usage was in decline and saying that developers “...are no longer working primarily at the level of individual lines of code; instead, they are orchestrating fleets of agents across tasks.”
Elsewhere, Notion bumped its Business Plan from $15 to $20-a-month per user thanks to its new “AI features,” which I imagine sucked for previous business subscribers who didn’t want “AI agents” or any of that crap but did want things like Single Sign On and Premium Integrations. The result? Profit margins dropped by 10%. Great job everybody!
In February 2026, Perplexity users noticed that rate limits had been aggressively trimmed from even its January 2026 limits, with $20-a-month subscribers now limited to arbitrary “average use weekly limits” on searches, and “monthly limits” on research queries (that one user worked out dropped them from 600 deep research queries a month to 20), down from 300+ searches a day and generous deep research limits.
Price hikes and product changes are likely to accelerate in the next few months as things get desperate. But now for a quick intermission…
I have been training with Nik Suresh, author of I Will Fucking Piledrive You If You Mention AI Again, and while I’m kidding, I want to be clear that if you don’t stop bringing up Uber and AWS as examples of why AI will work out, I may react poorly, as I’m fucking tired of this point because it’s stupid and wrong. I will put you in the embrace of God, I swear.
The AI bubble and its representative companies do not and have never represented the buildout of Amazon Web Services or the growth and burn rate of Uber. If you are still saying this you are wrong, ignorant, and potentially a big fucking liar.
As I discussed about a month ago, Amazon Web Services cost around $52 billion (adjusted for inflation!) between 2003 (when it was first used internally) through two years after it hit profitability (2017). OpenAI raised $42 billion last year. Anthropic raised $30 billion in February. You are full of shit if you keep saying this.
As I discussed a few weeks ago, Uber’s economics are absolutely nothing like generative AI. Uber did not have capex, and burned those billions on R&D and marketing (making it more similar to Groupon in the end):
“But Ed, What About Uber?”
What about Uber? Uber is a completely different business from Anthropic, OpenAI, or any other AI company. It lost about $30 billion in the last decade or so, and turned a weird kind of profitable through a combination of cutting multiple markets and business lines (e.g., autonomous cars), all while gouging customers and paying drivers less.
The economics are also completely different. Uber does not pay for its drivers’ gas, nor their cars, nor does it own any vehicles. Its PP&E has been between $1.5 billion and $2.1 billion since it was founded. Uber’s revenue does not increase with acquisitions of PP&E, nor does its business become significantly more expensive based on how far a driver drives, how many passengers they might have in a day, or how many meals they might deliver. Uber is, effectively, a digital marketplace for getting stuff or people moved from one place to another, and its losses are attributed to the constant need to market itself to customers for fear that other rideshare (Lyft) or delivery companies (DoorDash, Seamless) might take its cash.
Also: Uber’s primary business model was on a ride-by-ride basis, not a monthly subscription. Users may have been paying less, but they were still thinking about each transaction with Uber in terms that made sense when prices were raised (though it briefly tried an unlimited ride pass option in 2016).
Here’re some other myths I’m tired of hearing about:
Yet the most obvious one that I hear is the funniest: that Anthropic and OpenAI can just raise their prices!
As both OpenAI and Anthropic aggressively stumble toward their respective attempts to take their awful businesses public, both are making moves to try and become “respectable businesses,” by which I mean “businesses that still lose billions of dollars but in less-annoying ways.” Last week, OpenAI killed Sora — both the app and the model — along with a $1 billion investment from Disney, with the Wall Street Journal reporting it was burning a million dollars a day, but Forbes estimating the number was closer to $15 million.
OpenAI will frame this as part of its "refocus" on a “Superapp” (per the WSJ) that combines ChatGPT, its Codex coding app, and its dangerously shit browser into one rat king of LLM toys that nobody can work out a real business model for. All of this is part of a supposed internal effort to “prioritize coding and business customers” that we’ve heard some version of for months. Meanwhile, OpenAI’s attempts to bring advertising to its users have been a little embarrassing, with a two-month-long trial involving “less than 20%” of ChatGPT users resulting in “$100 million in annualized revenue,” better known as about $8.3 million in a month from what was meant to be a business line that brought in “low billions” in 2026 according to the Financial Times.
Confusingly timed with this “refocus” is OpenAI’s plan to nearly double its workforce from 4,500 to 8,000 people by the end of 2026. In fact, writing all this down makes it feel like OpenAI doesn’t really have much of a focus beyond “buy more stuff” and saying “superapp!” every six months. Hey, whatever happened to OpenAI’s plan to be “the interface to the internet” that Alex Heath reported would happen by the first half of 2025? Did that happen? Did I miss it?
In any case, OpenAI’s other strategy is to absolutely jam the gas pedal on its Codex coding product — for example, one user I found was able to burn $2,192 in tokens on a $200-a-month ChatGPT plan, and another was able to burn $1,461 in three days on the same subscription.
Meanwhile, Anthropic has been in the midst of a months-long rugpull following an all-out media campaign through December and January, pushing Claude Code on tech and business reporters who don’t bother to think too hard about things, per my Hater’s Guide to Anthropic:
On December 3 2025, the Financial Times would report that an Anthropic IPO would be happening as soon as 2026, while also revealing that the company was already working on another funding round valuing it at $300 billion.
Around this point, something strange started happening. Posts started appearing claiming that Claude Code was the best thing ever. Software development was “now boring” because of how good it was. Even Dario Amodei, a person directly incentivized to lie about it, said that an indeterminate number of coders at Anthropic no longer wrote any code. Even the creator of Claude Code said it did all his coding. One blogger said it was getting “too good.” Twitter flooded with obtuse stories about how Claude Code was doing all the work and they were scared about how good it was making them, all without really explaining what that meant.
In the last week of December, Anthropic would push a promotion doubling the rate limits on all of its monthly plans from December 25 to December 31, 2025.
By January 5, 2026, users were complaining about punishing new rate limits, with one user claiming that there had been a 60% reduction in token usage. Anthropic claimed that this was simply the expiration of holiday rate limits, but in reality, this is all part of Anthropic’s continual manipulation of rate limits to con customers into buying Claude subscriptions that decay in value.
In the end, Anthropic got what it wanted. The Verge would claim that Claude Code was “having a moment,” with word-of-mouth exposure spiking by 13 percentage points compared to the prior 30-day period between December 29th and January 26th, likely because of all the fucking media coverage and astroturfing. Despite there not really being a thing that anybody could point at, Claude Code was apparently the biggest thing ever, terrifying competitors and changing lives in some indeterminate way that was very cool, possibly.
The media campaign worked, and Anthropic was able to close a $30 billion round on February 12, 2026.
On February 18, 2026, Anthropic started banning anybody who used multiple Claude Max accounts, something that had never been an issue before it needed everybody to talk about Claude Code non-stop. The same day, Anthropic “cleared up” its Claude Code policies, saying that you can’t connect your Claude account to external services, meaning that all of those people who have been spinning up OpenClaw instances and buying $10,000 worth of Mac Minis are going to find that they’re suddenly having to pay for their API calls.
Around a month later, Anthropic would start a two-week-long 2x-rate limit promotion for off-peak usage that ended on March 27, 2026.
A day before, on March 26, 2026, Anthropic would announce that it was starting “peak hours,” with Claude users maxing out their sessions faster between the hours of 5AM and 11PM Pacific time, Monday to Friday, with a spokesperson limply adding that “efficiency wins” will “offset this” and only “7% of users will hit the limits.” All of this was sold as a result of “managing the growing demand for Claude.”
If I’m honest, this might be Anthropic’s most egregious swindle yet. By pumping off-peak usage and then immediately cutting it just before introducing peak hours, Anthropic further muddies the waters of how much actual access you get to its products. Peak hours appear to have become aggressively restricted, and I imagine off-peak feels…something like the regular peak hours used to.
Users almost immediately started hitting limits regardless of what time or day they were using it.
One user on the $100-a-month Max plan complained about hitting 61% of his session limit after four prompts (which cost $10.26 in tokens). Another said that they hit 63% of their rate limit on their $200-a-month plan in the space of a day, and another hit 95% after 20 minutes of using their Max plan (I’m gonna guess $100-a-month). This person hit their Max limit after “two or three things.” This one vowed to cancel their $200-a-month subscription after hitting their weekly limit in the space of a day, saying that they (and I’m going off of a translation, so forgive me) “expected a premium experience for $200, and what they got was constant limit stress.” This guy is scared to use Claude Code because of the limits. This guy blew 28% of his limits in less than an hour. This guy “can’t even do basic work on a 20x Max plan.” This guy hit his limits “in a few prompts” on Anthropic’s $20-a-month Pro plan, and the same prompts would have (apparently) consumed 5% of the limits “normally” (I assume last week), and while Thariq from Anthropic assured him that this was abnormal, he didn’t bother to respond to this guy in the thread who said he ran out of usage on the Max plan in 15 minutes.
While Anthropic Technical Staff Member Lydia Hallie posted that Anthropic was “aware people are hitting usage limits in Claude Code way faster than expected” and that some investigation was taking place, it’s hard to imagine that Anthropic had no idea that these limits were so severe or that any of this was a surprise.
Naturally, OpenAI had already reset limits on its Codex coding model the second that these reports began, claiming that it “wanted people to experiment with the magnificent plugins they launched” rather than saying something more truthful like “we’re lowering limits so that the hogs braying with anger at Anthropic start paying OpenAI instead.”
While an eager Redditor claimed that these rate limits were the result of a cache bug in Claude Code, Anthropic quickly said that this wasn’t the reason, without ever offering an actual reason or conceding that anything was wrong.
Meanwhile, users are complaining about the reduced quality of outputs from its Claude Opus 4.6 model, with some saying it acts like cheaper models, and another noting that it might be because of Anthropic’s upcoming Mythos model, which was leaked when Fortune mysteriously somehow discovered an openly-accessible “data cache” that included 3000 assets but somehow no actual information about the model other than it would be a “step change” and its cybersecurity powers were too much to release at once, the tech equivalent of deliberately dropping a magnum condom out of your wallet in front of a woman, or Dril’s “I was just buying ear medication for my sick uncle…who’s a model by the way” post.
I’m gonna be honest: I just don’t give a shit about Mythos or Capybara or any blatant leaks intended to spook cybersecurity stocks, especially as these models are also meant to be much more compute-intensive, and thus vastly more expensive to run.
How will that work with these rate limits, exactly?
I think there are a few ways this could go.
I wager that this is just the first of a few major belt-tightening operations from both Anthropic and OpenAI as they desperately shoulder-barge each other to file the world’s worst S-1. Both companies lose billions of dollars, both companies have no path to profitability, and both companies sell products — both to consumers and businesses — that simply do not work when users are forced to pay something approaching a sustainable cost.
Even with these egregious limits, a user I previously linked to was allowed to burn $10 in tokens in four prompts on a $100-a-month plan. Even in the world of Amodei’s Stylized Facts, that would still be $5 of prompts every 5 hours, which over the course of a month will absolutely be over $100.
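As a back-of-envelope sketch (the $5-per-window burn rate is my assumption, extrapolated from the user report above, not a number Anthropic publishes):

```python
# Rough math on what a throttled Max subscriber could still cost Anthropic.
# Assumes ~$5 of compute per 5-hour rate-limit window, per the user report
# above; Anthropic publishes none of these numbers, so treat this as a sketch.
HOURS_PER_MONTH = 30 * 24        # ~720 hours in a month
SESSION_HOURS = 5                # Claude's rolling session window
SPEND_PER_SESSION = 5.0          # dollars of compute per window (assumed)
SUBSCRIPTION = 100.0             # monthly plan price

max_sessions = HOURS_PER_MONTH / SESSION_HOURS      # 144 windows a month
worst_case_burn = max_sessions * SPEND_PER_SESSION  # $720 of compute

print(f"Possible monthly compute burn: ${worst_case_burn:,.0f}")
print(f"Multiple of the subscription price: {worst_case_burn / SUBSCRIPTION:.1f}x")
```

Even a user who touches only a fifth of those windows has already blown past the $100 sticker price.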
Yet the sheer fury of Anthropic’s customers only proves the fundamental weakness of Anthropic’s business model, and the impossibility of ever finding any kind of profitability.
And the AI industry has nobody to blame but itself.
While it’s really easy to make fun of people obsessed with LLMs, I want to be clear that Anthropic and OpenAI are inherently abusive companies that have built businesses on theft, deception and exploitation.
Anybody who’s spent more than a few minutes in one of the many AI Subreddits has read story after story of models mysteriously “becoming dumb,” or rate limits that seem to expand and contract at random. Even the concept of “rate limits” only serves to further deceive the customer. Outside of intentionally asking the model, users are entirely unaware of their “token burn,” or at the very least have built habits around rate limits that, as of right now, are entirely different to even a month ago.
A user who bought a $200-a-month Claude Max subscription in December 2025 very likely cannot, a mere three months later, do the same things they did on Claude Code when they decided to subscribe, and those who use these subscriptions for their day jobs are now having to sit on their hands waiting for the rate limits to pass, with no clarity into whether they’ll be able to work at the same rate they did even a month ago, let alone when they subscribed.
All of this is a direct result of Anthropic, OpenAI, and other AI startups intentionally deceiving customers through obtuse pricing so that people would subscribe believing that the product would continue providing the same value, and I’d argue that annual subscriptions to these services amount to, if not fraud, a level of consumer deception that deserves legal action and regulatory involvement.
To be clear, no AI company should have ever sold a monthly subscription, as there was never a point at which the economics made sense. Yet had these companies actually charged their real costs, nobody would have bothered with AI, because even with these highly-subsidized subscriptions, AI still hasn’t delivered meaningful productivity benefits, other than a legion of people who email me saying “it’s changed my life as a programmer!” without explaining to me what that means or why it matters or what the actual result is at the end.
Isn’t it kind of weird that we have these LLM subscriptions to products that arbitrarily become less-accessible or less-performant in a way that’s impossible to really measure, and labs never seem to address? We don’t know the actual rate limits on Claude (other than via CCusage or Shellac’s research), or ChatGPT, or any of these products by design, because if we did, it would be blatantly obvious how unsustainable and ridiculous these products were.
And the magical part about Large Language Models is that your most engaged customers are also your most-expensive, and the more-intensive the work, the more expensive the outputs become.
If you’re about to say “well they’ll just raise the prices,” perhaps you should check Twitter or Reddit, and notice that Anthropic’s customers are screaming like they’re being stung to death by bees because of new rate limits that only let them burn $10 of compute in five hours. Do you think these people would be comfortable with a $130-a-month, $1,300-a-month or $2,500-a-month subscription? One that performs the same way (if not worse) as their $20, $100 or $200-a-month subscription did?
Or do you think they’ll do Aaron Sorkin speeches about Anthropic’s greed and immediately jump to ChatGPT in the hopes that the exact same thing doesn’t happen a few months later?
Much as homeowners were assured that they’d simply be able to refinance their homes before the adjustable rates hit, AI fans repeatedly switch subscriptions to whichever provider is currently offering the best deal, in some cases paying for multiple subscriptions with the explicit knowledge that rate limits existed and would become increasingly punishing.
Based on the reactions of their users, I don’t really see how the AI labs — or AI startups, for that matter — fix this problem.
On one hand, AI subscribers are acting like babies, crying that their product won’t let them use $2,500 of tokens for $200. This was an obvious con, a blatant subsidy, and a party that wouldn’t last forever.
On the other, AI labs and AI startups have never, ever acted with any degree of honesty or clarity with regards to their costs, instead choosing to add “exciting” new features that often burn more tokens without charging the end user more, which sounds nice until you remember that things cost money and money is not unlimited.
The very foundation of every AI startup is economically broken. The majority of them sell some sort of “deep research” report feature that costs several dollars to generate at a time, and many sell some form of expensive coding or “computer use” product, tool-based web search features, and many other products that exist to keep a user engaged while burning tokens, all without explaining to the user “yeah, we’re spending way more than we make off of you, this is an introductory rate.”
This intentional, blatant and industry-wide deception set the terms for the Subprime AI Crisis. By selling AI services at $20 or $50 or even $200-a-month, AI startups and labs created the terms for their own destruction, with users trained for years to expect relatively unlimited access sold at a flat rate for a service powered by Large Language Models that burn tokens at arbitrary rates based on their inference of the user’s prompt, making costs near-impossible to moderate.
And when these companies make changes to slightly bring costs under control, their users act with revulsion, because rate limits aren’t price increases, but direct changes to the functionality of the product. Imagine if a subscription to a car service was $200-a-month, and let you go 50 miles, or 25 miles, or 100 miles, or 4 miles, or 12 miles depending on the day, and never at any point told you how many miles you had left beyond a percentage-based rate limit. To make matters worse, sometimes the car would arbitrarily take a different route, driving you five miles in the opposite direction, or park at the side of the road, charging you for every mile.
This is the reality of using an AI product in the year of our lord 2026. A Claude Code or OpenAI Codex user cannot with any clarity say that in three months their current workload or workflow will be possible based on their current subscription. Somebody buying an annual subscription to any AI product is immediately sacrificing themselves to the whims of startup CEOs that intentionally decided to deceive users for years as a means of juicing growth.
And when these limits decay, does it eventually make the ways in which some of these users work with Claude Code impossible? At what point do these rate limit shifts start changing how reliable the experience is and how much one can get done in a day? What use is a tool that gets more unreliable to access and expensive over time? Even if this week’s rate limits are an overcorrection, one has to imagine they resemble the future of Anthropic’s products, and are indicative of a larger pattern of decay in the value of its subscriptions.
I’m going to be as blunt as possible: every bit of AI demand that exists — and barely $65 billion of it existed in 2025 — exists only because of subsidies, and if these companies were to charge a sustainable rate, said demand would evaporate.
There is no righting this ship. There is no pricing that makes sense that customers will pay at scale, nor is there a magical technological breakthrough waiting in the wings that will reduce costs. Vera Rubin will not save AI, nor will some sort of “too big to fail” scenario, because “too big to fail” was based on the fact that banks would have stopped providing dollars to people and insurance companies would have stopped issuing insurance.
Despite NVIDIA’s load-bearing valuation and the constant discussion of companies like OpenAI and Anthropic, their actual economic footprint is quite small in comparison to the trillions of dollars of CDOs and trillion plus dollars of mortgages involved in the great financial crisis. The death of the AI industry would be cataclysmic to venture capitalists, bring about the end of the hypergrowth era for the Magnificent Seven, and may very well kill Oracle, but — seriously — that is nothing in comparison to the scale of the Great Financial Crisis. This isn’t me minimizing the chaos to follow, but trying to express how thoroughly fucked everything was in 2008.
On Friday I’m going to get into this more in the premium. This wasn’t meant as an ad; I just realized, as I wrote that sentence, that that’s what I have to do.
Anyway, I’ll close with a grim thought.
What’s funny about the comparison to the subprime mortgage crisis is that there are, in all honesty, multiple different versions of the Stripper With Five Houses from The Big Short:
All of these entities are acting based on a misplaced belief that the world will cater to them, and that nothing will ever change. While there might be different levels of cynicism — people that know there’re subsidies but assume they’ll be fine once they arrive, or people like Sam Altman that are already rich and don’t give a shit — I think everybody in the AI industry has deluded themselves into believing they have the mandate of Heaven.
Back in August 2024, I named several pale horses of the AIpocalypse, and after absolutely fucking nailing the call two years early on OpenAI’s “big, stupid magic trick” of launching Sora to the public, I think it’s time to update them:
Anyway, thanks for reading this piece.
2026-03-28 01:21:30
I’m turning 40 in a month or so, and at 40 years young, I’m old enough to remember as far back as December 11, 2025, when Disney and OpenAI “reached an agreement” to “bring beloved characters from across Disney’s brands to Sora.” As part of the deal, Disney would “become a major customer of OpenAI,” use its API “to build new products, tools and experiences (as well as showing Sora videos in Disney+),” and “deploy ChatGPT for its employees,” as well as making a $1 billion equity investment in OpenAI.
Just one small detail: none of this appears to have actually happened.
Despite an alleged $1 billion equity investment, neither Disney’s FY2025 annual report nor its February 2, 2026 Q1 FY2026 report mention OpenAI or any kind of equity investment. Disney+ does not show any Sora videos, and searching for “Sora” brings up “So Random,” a musical comedy sketch show from 2011 with a remarkably long Wikipedia page that spun off from another show called “Sonny With A Chance” after Demi Lovato went into rehab.
It doesn’t appear that investment ever happened, likely because — as was reported earlier this week by The Information and the Wall Street Journal — OpenAI is killing Sora. Shortly after the news was reported, The Hollywood Reporter confirmed that the deal with Disney was also dead.
Per The Journal, emphasis mine:
CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
Oh, okay! The app that CNBC said was “challenging Hollywood” and “freaking out the movie industry,” that The Hollywood Reporter suggested could somehow challenge Pixar and showed Sam Altman successfully “playing Hollywood,” that The Ankler said was OpenAI “going to war with Hollywood” as it “shook the industry,” that Deadline said made Hollywood “sore,” that Boardroom said was in a standoff with Hollywood, that the LA Times said was “deepening a battle between Hollywood and OpenAI” and “igniting a firestorm in Hollywood,” that Puck said had “Hollywood panicking,” that TechnoLlama said was “the end of copyright as we know it,” and that Slate said was a case of AI “crushing Hollywood as we’ve known it” is completely dead a little more than five months after everybody claimed it was changing everything.
It’s almost as if everybody making these proclamations was instinctually printing whatever marketing copy had been imagined by the AI labs to promote compute-intensive vaporware, and absolutely nobody is going to apologize to the people working in the entertainment industry for scaring the fuck out of them with ghost stories! Every single person who blindly repeated that Sora existed and was changing everything should be forced to apologize to their readers!
I cannot express the sheer amount of panic that spread through every single part of the entertainment industry as a result of these specious, poorly-founded mythologies spread by people that didn’t give enough of a shit to understand what was actually going on. Sora 2 was always an act of desperation — an attempt to create a marketing cycle to prop up a tool that burned as much as $15 million a day that most of the mainstream media bought into because they believe everything OpenAI says and are willing to extrapolate the destruction of an entire industry from a fucking facade.
Thanks to everyone who participated in this grotesque scare-campaign, everybody I know in the film industry has been freaking out because every third headline about Sora 2 said that it would quickly replace actors and directors. The majority of coverage of Sora 2 acted as if we were mere minutes from it replacing all entertainment and all video-based social media, even though the videos themselves were only a few seconds long and looked like shit!
Sora 2 was never “challenging Hollywood” or “a threat to actors and directors.” It was a way to barf out videos that looked very much like Sora 2’s training data, and the reason you could only generate a few seconds at a time was that these models start hallucinating stuff very quickly, because that’s what Large Language Models do.
Yet this is what the AI bubble is — poorly-substantiated media-driven hype cycles that exploit a total lack of awareness or willingness to scrutinize the powerful. Sora 2 was always a dog, it always looked like shit, it never challenged Hollywood, it never actually threatened the livelihoods of actors or directors or DPs or screenwriters outside of the tiny brains of studio executives that don’t watch or care about movies. Anybody that published a scary story about the power of Sora 2 helped needlessly spread panic through the performing arts, and should feel deep, unbridled shame.
You have genuinely harmed people I know and love, and need to wise up and do your fucking job.
I know, I know, you’re going to say you were “just reporting what was happening,” and that “OpenAI seemed unstoppable,” but none of that was ever true other than in your mind and the minds of venture capitalists and AI boosters. No, Sora 2 was never actually replacing anyone, that’s just not true, you made it up or had it made up for you.
But that, my friends, is the AI bubble. Five months can pass and an app can go from The End of Hollywood that apparently raised $1 billion to “discontinued via Twitter post that reads exactly like the collapse of a failed social network from 2013” and “didn’t actually raise anything.” It doesn’t matter if stuff actually exists, because it’ll be reported as if it does as long as a company says it’ll happen.
Perhaps I sound a little deranged, but isn’t anybody more concerned that a billion dollars that was meant to move from one company to another simply didn’t happen? Or, for that matter, that this keeps happening, again and again and again?
I’m serious! As I discussed in last year’s Enshittifinancial Crisis, OpenAI has had multiple deals that seem to be entirely fictional:
That’s just the AI bubble, baby! We don’t need actual stuff to happen! Just announce it and we’ll write it up! No problem, man! It doesn’t matter that one of the largest entertainment companies in the world simply didn’t give the most-notable startup in the world one billion dollars, much as it’s not a big deal that the entire media flew like Yogi Bear lured with a delicious pie toward every single talking point about OpenAI destroying Hollywood, much like it’s not a problem that Broadcom, AMD, SK Hynix, and Samsung all have misled their investors and the media about deals that range from threadbare to theoretical.
Except it is a problem, man! As I covered in this week’s free newsletter, I estimate that only around 3GW of actual IT load (so around 3.9GW of power) came online last year, and as Sightline reported, only 5GW of data center construction is actually in progress globally at this time, despite somewhere between 190GW and 240GW supposedly being in progress. In reality, data centers take forever to build (and obtaining the power even longer than that), but nobody needs to harsh their flow by looking into what’s actually happening.
In reality, the AI industry is pumped full of theoretical deals, obfuscations of revenues, promises that never lead anywhere, and mysterious hundreds of millions or billions of dollars that never seem to appear.
Beneath the surface, very little actual economic value is being created by AI, other than the single-most-annoying conversations in history pushed by people who will believe and repeat literally anything they are told by a startup or public company.
No, really. The two largest consumers of AI compute have made — at most, and I have serious questions about OpenAI — a combined $25 billion since the beginning of the AI bubble, and beneath them lies a labyrinth of different companies trying to use annualized revenues to obfuscate their meager cashflow and brutal burn-rate.
To make matters worse, almost every single data center announcement you’ve read for the last four years is effectively theoretical, their nigh-on-conceptual “AI buildouts” laundered through major media outlets to give the appearance of activity where little actually exists.
The AI industry is grifting the finance and media industry, exploiting a global intelligence crisis where the people with some of the largest audiences and pocketbooks have fundamentally disconnected themselves from reality.
I don’t like being misled, and I don’t like seeing others get rich doing so.
It’s time to get to the bottom of this.
2026-03-25 01:25:52
Hi! If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To The SaaSpocalypse, as well as the Hater’s Guide to Adobe. It helps support free newsletters like these!
The entire AI bubble is built on a vague sense of inevitability: the belief that if everybody just believes hard enough that none of this can ever, ever go wrong, then at some point all of the very obvious problems will just go away.
Sadly, one cannot beat physics.
Last week, economist Paul Kedrosky put out an excellent piece centered around a chart that showed new data center capacity additions (as in additions to the pipeline, not brought online) halved in the fourth quarter of 2025 (per data from Wood Mackenzie):

Wood Mackenzie’s report framed it in harsh terms:
US data-centre capacity additions halved from Q3 to Q4 2025 as load-queue challenges persisted. The decline underscores the difficulties of the current development environment and signals a resulting focus on existing pipeline projects. While Texas extended its pipeline capacity lead in Q4 2025, New Mexico, Indiana and Wyoming saw greater relative growth. Planned capacity continues to be weighted by new developers with a small number of massive, speculative projects, targeting in particular the South and Southwest. New Mexico owes its growth to a single, massive, speculative project by New Era Energy & Digital in Lea County.
As I said above, this refers only to capacity that’s been announced rather than stuff that’s actually been brought online, and Kedrosky missed arguably the craziest chart — that of the 241GW of disclosed data center capacity, only 33% of it is actually under active development:

The report also adds that the majority of committed power (58%) is for “wires-only utilities,” which means the utility provider is only responsible for getting power to the facility, not generating the power itself, which is a big problem when you’re building entire campuses made up of power-hungry AI servers.
WoodMac also adds that PJM, one of the largest grid operators in America, “...remains in trouble, with utility large load commitments three times as large as the accredited capacity in PJM’s risked generation queue,” which is a complex way of saying “it doesn’t have enough power.”
This means that fifty eight god damn percent of data centers need to work out their own power somehow. WoodMac also adds there is around $948 billion in capex being spent in totality on US-based data centers, but capex growth decelerated for the first time since 2023. Kedrosky adds:
The total announced pipeline looks huge at 241 GW — about twice US peak electricity demand — but most of it is not real. Only a third is under construction, with the rest a mix of hopeful permits, speculative land deals, and projects that assume power sources nobody has actually built yet. In particular, much of it assumes on-site gas plants, a fraught assumption given current geopolitics.
The most serious problem is in the mid-Atlantic. Regional grid operator PJM has made power commitments to data centers at roughly three times the rate that new generation is actually coming online. Someone is going to be waiting a very long time, or paying a lot more than they expected, or both.
Let’s simplify:
The term you’re looking for there is data center absorption, which is (to quote Data Center Dynamics) “...the net growth in occupied, revenue-producing IT load,” which grew in America’s primary markets from 1.8GW in new capacity in 2024 to 2.5GW of new capacity in 2025 according to CBRE.
Definition sidenote! “Colocation” space refers to data center space built that is then rented out to somebody else, versus data centers explicitly built for a company (such as Microsoft’s Fairwater data centers). What’s interesting is that it appears that some — such as Avison Young — count Crusoe’s developments (such as Stargate Abilene) as colocation construction, which makes the colocation numbers I’ll get to shortly much more indicative of the greater picture.
The problem is, this number doesn’t actually express newly-turned-on data centers. Somebody expanding a project to take on another 50MW still counts as “new absorption.”
Things get more confusing when you add in other reports. Avison Young’s reports about data center absorption found 700MW of new capacity in Q1 2025, 1.173GW in Q2, a little over 1.5GW in Q3 and 2.033GW in Q4 (I cannot find its Q3 report anywhere), for a total of 5.44GW, entirely in “colocation,” meaning buildings built to be leased to others.
Yet there’s another problem with that methodology: these are facilities that have been “delivered” or have a “committed tenant.” “Delivered” could mean “the facility has been turned over to the client, but it’s literally a powered shell (a warehouse) waiting for installation,” or it could mean “the client is up and running.” A “committed tenant” could mean anything from “we’ve signed a contract and we’re raising funds” (such as is the case with Nebius raising money off of a Meta contract to build data centers at some point in the future) to an actual tenant paying for operational capacity.
We can get a little closer by using the definitions from DataCenterHawk (from which Avison Young gets its data), which defines absorption as follows:
To measure demand, we want to know how much capacity was leased up by customers over a specific period of time. At datacenterHawk we calculate this quarterly. The resulting number is what’s called absorption.
Let’s say DC#1 has 10 MW commissioned. 9 MW are currently leased and 1 MW is available. Over the course of a quarter, DC#1 leases up that last MW to a few tenants. Their absorption for the quarter would be 1 MW. It can get a little more complicated but that’s the basic concept.
That’s great! Except Avison Young has chosen to define absorption in an entirely different way — that a data center (in whatever state of construction it’s in) has been leased, or “delivered,” which means “a fully ready-to-go data center” or “an empty warehouse with power in it.”
CBRE, on the other hand, defines absorption as “net growth in occupied, revenue-producing IT load,” and is inclusive of hyperscaler data centers. Its report also includes smaller markets like Charlotte, Seattle and Minneapolis, adding a further 216MW in absorption of actual new, existing, revenue-generating capacity.
So that’s about 2.716GW of actual, new data centers brought online. It doesn’t include areas like Southern Virginia or Columbus, Ohio — two massive hotspots from Avison Young’s report — and I cannot find a single bit of actual evidence of significant revenue-generating, turned-on, real data center capacity being stood up at scale. DataCenterMap shows 134 data centers in Columbus, but as of August 2025, the Columbus area had around 506MW in total according to the Columbus Dispatch, though Cushman and Wakefield claimed in February 2026 that it had 1.8GW.
Things get even more confusing when you read that Cushman and Wakefield estimates that around 4GW of new colocation supply was “delivered” in 2025, a term it does not define in its actual report, and for whatever reason lacks absorption numbers. Its H1 2025 report, however, includes absorption numbers that add up to around 1.95GW of capacity…without defining absorption, leaving us in exactly the same problem we have with Avison Young.
Nevertheless, based on these data points, I’m comfortable estimating that North American data center absorption — as the IT load of data centers actually turned on and in operation — was at around 3GW for 2025, which would work out to about 3.9GW of total power.
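The arithmetic behind that estimate, as a sketch (the 1.3 overhead multiplier is the PUE-style factor implied by the 3GW-to-3.9GW conversion in the text, not a figure from any of the reports):

```python
# Reconciling the absorption figures above (all values in gigawatts).
cbre_primary_markets = 2.5    # CBRE: new revenue-producing IT load, primary markets, 2025
cbre_smaller_markets = 0.216  # CBRE: Charlotte, Seattle, Minneapolis and similar
cbre_total = cbre_primary_markets + cbre_smaller_markets  # ~2.716 GW

estimated_it_load = 3.0   # rounding up for markets CBRE's report doesn't cover
overhead_factor = 1.3     # cooling, power conversion etc. (assumed PUE-style factor)
total_power = estimated_it_load * overhead_factor

print(f"CBRE-derived absorption: {cbre_total:.3f} GW of IT load")
print(f"Estimated total power:   {total_power:.1f} GW")
```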
And that number is a fucking disaster.
Earlier in the year, TD Cowen’s Jerome Darling told me that GPUs and their associated hardware cost about $30 million a megawatt. 3GW of IT load (as in the GPUs and their associated gear’s power draw) works out to around $90 billion of NVIDIA GPUs and the associated hardware, which would be covered under NVIDIA’s “data center” revenue segment:

America makes up about 69.2% of NVIDIA’s revenue, or around $149.6 billion in FY2026 (which runs, annoyingly, from February 2025 to January 2026). NVIDIA’s overall data center segment revenue was $195.7 billion, which puts America’s data center purchases at around $135 billion, leaving around $44 billion of GPUs and associated technology uninstalled.
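For clarity, here’s the arithmetic above as a quick sanity check. All inputs are the estimates already quoted (TD Cowen’s ~$30 million per megawatt, NVIDIA’s FY2026 segment figures), and the small gap against the ~$44 billion figure comes down to rounding:

```python
# Back-of-envelope check of the figures above. All inputs are the article's
# estimates, not official figures: ~$30M of GPUs and associated hardware per
# MW of IT load (per TD Cowen), ~3GW of 2025 absorption, NVIDIA's FY2026 data
# center segment revenue, and America's ~69.2% share of NVIDIA revenue.
COST_PER_MW = 30e6            # dollars of GPUs + gear per MW of IT load
it_load_mw = 3_000            # estimated 2025 North American absorption (MW)

installed_gpu_spend = it_load_mw * COST_PER_MW       # hardware actually stood up

nvidia_dc_revenue = 195.7e9   # NVIDIA FY2026 data center segment revenue
us_share = 0.692              # America's share of NVIDIA revenue
us_dc_purchases = nvidia_dc_revenue * us_share       # GPUs bought in the US

uninstalled = us_dc_purchases - installed_gpu_spend  # bought but not installed

print(f"installed: ${installed_gpu_spend / 1e9:.0f}B")      # $90B
print(f"US purchases: ${us_dc_purchases / 1e9:.0f}B")       # $135B
print(f"uninstalled: ${uninstalled / 1e9:.0f}B")            # ~$45B
```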
With the acceleration of NVIDIA’s GPU sales, it now takes about six months to install and operationalize a single quarter’s worth of sales. Because these are Blackwell (and, I imagine, some of the next-generation Vera Rubin) GPUs, they’re more than likely going to new builds, thanks to their greater power and cooling requirements. While some could, in theory, go to old builds retrofitted to fit them, NVIDIA’s increasingly-centralized revenue (as in, focused on a few very large customers) heavily suggests the presence of large resellers like Dell or Supermicro (which I’ll get to in a bit), or of Taiwanese ODMs like Foxconn and Quanta, which manufacture massive amounts of servers for hyperscaler buildouts.
I should also add that it’s commonplace for hyperscalers to buy the GPUs for their colocation partners to install, which is why Nebius and Nscale and other partners never raise more than a few billion dollars to cover construction costs.
It’s becoming very obvious that data center construction is dramatically slower than NVIDIA’s GPU sales, which continue to accelerate dramatically every single quarter.
Even if you think AI is the biggest most hugest and most special boy: what’s the fucking point of buying these things two to four years in advance? Jensen Huang is announcing a new GPU every year!
By the time they actually get all the Blackwells in, Vera Rubin will be two years old! And by the time we install those Vera Rubins, some other new GPU will be beating them!
Before we go any further, I want to be clear how difficult it is to answer the question “how long does a data center take to build?”. You can’t really say “[time] per megawatt” because things become ever-more complicated with every 100MW or so. As I’ll get into, it’s taken Stargate Abilene two years to hit 200MW of power.
Not IT load. Power.
Anyway, the question of “how much data center capacity came online?” is pretty annoying too.
Sightline’s research — which estimated that “almost 6GW of [global data center power] capacity came online last year” — found that while 16GW of capacity was slated to come online in 2026 across 140 projects, only 5GW is currently under construction, and somehow doesn’t say that “maybe everybody is lying about timelines.”
Sightline believes that half of 2026’s supposed data center pipeline may never materialize, with 11GW of capacity in the “announced” stage with “...no visible construction progress despite typical build timelines of 12-18 months.” “Under construction” also can mean anything from “a single steel beam” to “nearly finished.”
These numbers are also based on 5GW of capacity, meaning that only about 3.84GW of IT load — about $111.5 billion in GPUs and associated gear, or roughly 57.5% of NVIDIA’s FY2026 data center revenue — is actually getting built.
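The conversion chain here can be sketched as a quick back-of-envelope helper. The ~1.3 ratio between total facility power and IT load (a PUE-style overhead factor implied by the 5GW-to-3.84GW conversion above) and the $30 million-per-MW hardware cost are the estimates quoted in this piece; small differences from the quoted dollar figures come down to rounding:

```python
# Back-of-envelope: total facility power -> IT load -> GPU/hardware spend.
# Both constants are estimates from the text, not official figures.
POWER_TO_IT_RATIO = 5 / 3.84   # ~1.30, implied by the 5GW -> 3.84GW conversion
COST_PER_MW = 30e6             # ~$30M of GPUs + gear per MW of IT load

def gpu_spend_for_capacity(total_power_gw: float) -> float:
    """Dollars of GPUs/hardware implied by a given total power capacity."""
    it_load_mw = total_power_gw * 1000 / POWER_TO_IT_RATIO
    return it_load_mw * COST_PER_MW

# The 5GW actually under construction works out to roughly $115B of hardware,
# in the same ballpark as the ~$111.5B quoted above.
spend = gpu_spend_for_capacity(5.0)
print(f"${spend / 1e9:.1f}B")
```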
Sightline (and basically everyone else) argues that there’s a power bottleneck holding back data center development, and Camus explains that the biggest problem is a lack of transmission capacity (the amount of power that can be moved) and power generation (creating the power itself):
The biggest driver of delay is simple: our power system doesn’t have enough extra transmission capacity and generation to serve dozens of gigawatts of new, high-utilization demand 100% of the time. Data centers require round-the-clock power at levels that rival or exceed the needs of small cities, and building new transmission infrastructure and generation requires years of permitting, land acquisition, supply chain management, and construction.
Camus adds that America also isn’t really prepared to add this much power at once:
Inside utilities, planners and engineers are working diligently to connect new loads. But the tools available to planners were built for extending power lines to new neighborhoods or upgrading equipment as communities grow. They weren’t designed to analyze 50 new service requests of 100 MW each, all while new generation applications pile up.
As a result, planners and engineers are overwhelmed; they’re stuck working to review new applications while simultaneously configuring new tools that are better equipped for the scale of this challenge. And unlike generation interconnection, which has well-defined steps across most ISOs and utilities, the process for evaluating large loads is often much more ad hoc. This makes adopting the right tools much more difficult too. In fact, the majority of utilities and ISO/RTOs are still developing formal study procedures.
Nevertheless, I also think there’s another more-obvious reason: it takes way longer to build a data center than anybody is letting on, as evidenced by the fact that we only added 3GW or so of actual capacity in America in 2025. NVIDIA is selling GPUs years into the future, and its ability to grow, or even just maintain its current revenues, depends wholly on its ability to convince people that this is somehow rational.
Let me give you an example. OpenAI and Oracle’s Stargate Abilene data center project was first announced in July 2024 as a 200MW data center. In October 2024, the joint venture between Crusoe, Blue Owl and Primary Digital Infrastructure raised $3.4 billion, with the 200MW of capacity due to be delivered “in 2025.” A mid-2025 presentation from land developer Lancium said it would have “1.2GW online by YE2025.” In a May 2025 announcement, Crusoe, Blue Owl, and Primary Digital Infrastructure announced the creation of a $15 billion joint vehicle, and said that Abilene would now be 8 buildings, with the first two buildings being energized by the “first half of 2025,” and that the rest would be “energized by mid-2026.” Each building would have 50,000 GPUs, and the total IT load is meant to be 880MW or so, with a total power draw of 1.2GW.
I’m not interested in discussing OpenAI not taking the supposedly-planned extensions to Abilene, because they never existed and were never going to happen.
In December 2025, Oracle stated that it had “delivered” 96,000 GPUs, and in February, Oracle was still only referring to two buildings, likely because that’s all that’s been finished. My sources in Abilene tell me that Building Three is nearly done, but…this thing is meant to be turned on in mid-2026. Developer Mortensen claims the entire project will be completed by October 2026, which it obviously, blatantly won’t.
I hate to speak in conspiratorial terms, but this feels like a blatant coverup with the active participation of the press. CNBC reported in September 2025 that “the first data center in $500 billion Stargate project is open in Texas,” referring to a data center with an eighth of its IT load operational as “online” and “up and running,” with Crusoe adding two weeks later that it was “live,” “up and running” and “continuing to progress rapidly,” all so that readers and viewers would think “wow, Stargate Abilene is up and running” despite it being months if not years behind schedule.
At its current rate of construction, Stargate Abilene will be fully built sometime in late 2027. Oracle’s Port Washington Data Center, as of March 6 2026, consisted of a single steel beam. Stargate Shackelford Texas broke ground on December 15 2025, and as of December 2025, construction barely appears to have begun in Stargate New Mexico. Meta’s 1GW data center campus in Indiana only started construction in February 2026.
And, despite Microsoft trying to mislead everybody that its Wisconsin data center had “arrived” and “been built,” looking even an inch deeper suggests very little has actually come online — and, considering the first data center was $3.3 billion (remember: $14 million a megawatt just for construction), I imagine Microsoft has successfully brought online about 235MW of power for Fairwater.
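That ~235MW figure falls straight out of the construction-cost arithmetic (the $14 million-per-MW number is the construction-only estimate mentioned above, and both inputs are estimates rather than Microsoft’s own figures):

```python
# Rough estimate: how many MW of capacity does $3.3B buy at ~$14M per MW of
# construction-only cost? Both inputs are the article's estimates.
first_fairwater_cost = 3.3e9       # reported cost of the first Wisconsin build
construction_cost_per_mw = 14e6    # construction-only cost per MW

mw_online = first_fairwater_cost / construction_cost_per_mw
print(f"~{mw_online:.0f} MW")      # ~236 MW, i.e. the ~235MW estimate above
```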
What Microsoft wants you to think is that it brought online gigawatts of power (capacity that’s always referred to in the future tense), because Microsoft, like everybody else, is building data centers at a glacial pace, because construction takes forever, even if you have the power, which nobody does!
The concept of a hundred-megawatt data center is barely a few years old, and I cannot actually find a built, in-service gigawatt data center of any kind, just vague promises about theoretical Stargate campuses built for OpenAI, a company that cannot afford to pay its bills.
Everybody keeps yammering on about “what if data centers don’t have power” when they should be thinking about whether data centers are actually getting built. Microsoft proudly boasted in September 2025 about its intent to build “the UK’s largest supercomputer” in Loughton, England with Nscale, and as of March 2026, it’s literally a scaffolding yard full of pylons and scrap metal. Stargate Abilene has been stuck at two buildings for upwards of six months.
Here’s what’s actually happening: data center deals are being funded by eager private credit gargoyles that don’t know shit about fuck. These deals are announced, usually by overly-eager reporters that don’t bother to check whether the previous data centers ever got built, as massive “multi-gigawatt deals,” and then nobody follows up to check whether anything actually happened.
All that anybody needs to fund one of these projects is an eager-enough financier and a connection to NVIDIA. All Nebius had to do to raise $3.75 billion in debt was to sign a deal with Meta for data center capacity that doesn’t exist and will likely take three to four years to build (it’s never happening). Nebius has yet to finish its Vineland, New Jersey data center for Microsoft, which was meant to be “at 100MW” by the end of 2025, but appears to have only had 50MW (the first phase) available as of February 2026.
I’m just gonna come out and say it: I think a lot of these data center deals are trash, will never get built, and thus will never get paid. The tech industry has taken advantage of an understandable lack of knowledge about construction or power timelines in the media to pump out endless stories about “data center capacity in progress” as a means of obfuscating an ever-growing scandal: that hundreds of billions of dollars’ worth of NVIDIA GPUs got sold to go in projects that may never be built.
These things aren’t getting built, or if they’re getting built, it’s taking way, way longer than expected, which means that interest on that debt is piling up. The longer it takes, the less rational it becomes to buy further NVIDIA GPUs — after all, if data centers are taking anywhere from 18 months to three years to build, why would you be buying more of them? Where are you going to put them, Jensen?
This also seriously brings into question the appetite that private credit and other financiers have for funding these projects, because much of the economic potential comes from the idea that these projects get built and have stable tenants. Furthermore, if the supply of AI compute is a bottleneck, this suggests that when (or if) that bottleneck is ever cleared, there will suddenly be a massive supply glut, lowering the overall value of the data centers in progress…which are, by the way, all filled with Blackwell GPUs, which will be two or three-years-old by the time the data centers are finally turned on.
That’s before you get to the fact that the ruinous debt behind AI data centers makes them all remarkably unprofitable, or that their customers are AI startups that lose hundreds of millions or billions of dollars a year, or that NVIDIA is the largest company on the stock market, and that said valuation is the result of a data center construction boom that appears to be decelerating and, even if it wasn’t, is operating at a glacial pace compared to NVIDIA’s sales.
Not to sound unprofessional or nothing, but what the fuck is going on? We have 241GW of “planned” capacity in America, of which only 79.5GW is “under active development,” but when you dig deeper, only 5GW of capacity is actually under construction?
The entire AI bubble is a god damn mirage. Every single “multi-gigawatt” data center you hear about is a pipedream, little more than a few contracts and some guys with their hands on their hips saying “brother we’re gonna be so fuckin’ rich!” as they siphon money from private credit — and, by extension, you, because where does private credit get its capital from? That’s right. A lot comes from pension funds and insurance companies.
Here’s the reality: data centers take forever. Every hyperscaler and neocloud talking about “contracted compute” or “planned capacity” may as well be telling you about their planned dinners with The Grinch and Godot. The insanity of the AI buildout will be seen as one of the largest wastes of capital of all time (to paraphrase JustDario), and I anticipate that the majority of the data center deals you’re reading about simply never get built.
The fact that there’s so much data about data center construction and so little data about completed construction suggests that those preparing the reports are in on the con. I give credit to CBRE, Sightline and Wood Mackenzie for having the courage to even lightly push back on the narrative, even if they do so by obfuscating terms like “capacity” or “power” in ways that reporters and other analysts are sure to misinterpret.
Hundreds of billions of dollars have been sunk into buying GPUs, in some cases years in advance, to put into data centers that are being built at a rate that means that NVIDIA’s 2025 and 2026 revenues will take until 2028 to 2029 to actually operationalize, and that’s making the big assumption that any of it actually gets built.
I think it’s also fair to ask where the money is actually going. 2025’s $178.5 billion in US-based data center deals doesn’t appear to be resulting in any immediate (or even future) benefit to anybody involved.
I also wonder whether the demand actually exists to make any of this worthwhile, or what people are actually paying for this compute.
If we assume 3GW of IT load capacity was brought online in America, that should (theoretically) mean tens of billions of dollars of revenue thanks to the “insatiable demand for AI” — except nobody appears to be showing massive amounts of revenue from these data centers.
Applied Digital only had $144 million in revenue in FY2025 (and lost $231 million making it). CoreWeave, which claimed to have 850MW of “active power” (or around 653MW of IT load) at the end of 2025 (up from 420MW in Q1 FY2025, or 323MW of IT load), made $5.13 billion of revenue (and lost $1.2 billion before tax) in FY2025.
Nebius? $228 million, for a loss of $122.9 million on 170MW of active power (or around 130MW of IT load). Iren lost $155.4 million on $184.7 million of revenue last quarter, and that’s with a release of deferred tax liabilities of $182.5 million. Equinix made about $9.2 billion in revenue in its last fiscal year, and while it made a profit, it’s unclear how much of that came from its large, already-existing data center portfolio, though it’s likely a lot, considering Equinix is boasting about its “multi-megawatt” data center plans with no discussion of its actual capacity.
And, of course, Google, Amazon, and Microsoft refuse to break out their AI revenues. Based on my reporting from last year, OpenAI spent about $8.67 billion on Azure through September 2025, and Anthropic around $2.66 billion in the same period on Amazon Web Services. As these are the two largest consumers of AI compute, their spending heavily suggests that the actual demand for AI services is pretty weak, and mostly taken up by a few companies (or hyperscalers running their own services).
At some point reality will set in and spending on NVIDIA GPUs will have to decline. It’s truly insane how much has been invested so many years in the future, and it’s remarkable that nobody else seems this concerned.
Simple questions like “where are the GPUs going?” and “how many actual GPUs have been installed?” are left unanswered as article after article gets written about massive, multi-billion dollar compute deals for data centers that won’t be built before, at this rate, 2030.
And I’d argue it’s convenient to blame this solely on power issues, when the reality is clearly based on construction timelines that never made any sense to begin with. If it was just a power issue, more data centers would be near or at the finish line, waiting for power to be turned on. Instead, well-known projects like Stargate Abilene are built at a glacial pace as eager reporters claim that a quarter of the buildings being functional nearly a year after they were meant to be turned on is some sort of achievement.
Then there’s the very, very obvious scandal that NVIDIA, the largest company on the stock market, is making hundreds of billions of dollars of revenue on chips that aren’t being installed. It’s fucking strange, and I simply do not understand how it keeps beating and raising expectations every quarter, given that the majority of its customers likely won’t be able to actually use their current purchases until the next decade.
Assuming that Vera Rubin actually ships in 2026, it’s reasonable to believe that people will be installing these things well into 2028, if not further, and that’s assuming everything doesn’t collapse by then. Why would you bother? What’s the point, especially if you’re sitting on a pile of Blackwell GPUs?
Why are we doing any of this?
Last week also featured a truly bonkers story about Supermicro, a reseller of GPUs used by CoreWeave and Crusoe, where co-founder Wally Liaw and several alleged co-conspirators were arrested for allegedly selling hundreds of millions of dollars of NVIDIA GPUs to China, with the intent to sell billions more.
Liaw, one of Supermicro’s co-founders, previously resigned in a 2018 accounting scandal where Supermicro couldn’t file its annual reports, only to be (per Hindenburg Research’s excellent report) rehired in 2021 as a consultant, and restored to the board in 2023, per a filed 8K.
Mere days before his arrest, Liaw was parading around NVIDIA’s GTC conference, pouring unnamed liquids in ice luges and standing two people away from NVIDIA CEO Jensen Huang. Liaw was also seen congratulating the CEO of Lambda on its new CFO appointment on LinkedIn, as well as shaking hands (along with Supermicro CEO Charles Liang, who has not been arrested or indicted) with Crusoe (the company building OpenAI’s Abilene data center) CEO Chase Lochmiller.
Supermicro isn’t named in the indictment for reasons I imagine are perfectly normal and not related to keeping the AI party going. Nevertheless, Liaw and his co-conspirators are accused of shipping hundreds of millions of dollars’ worth of NVIDIA GPUs to China through a web of counterparties and brokers, with over $510 million of them shipped between April and mid-May 2025. While the indictment isn’t specific as to the breakdown, it confirms that some Blackwell GPUs made it to China, and I’d wager quite a few.
The mainstream media has already stopped thinking about this story, despite Supermicro being a huge reseller of NVIDIA gear, contributing billions of dollars of revenue, with at least $500 million of that apparently going to China. The fact that Supermicro wasn’t specifically named in the case is enough to erase the entire tale from their minds, along with any wonder about how NVIDIA, and specifically Jensen Huang, didn’t know.
This also isn’t even close to the only time this has happened. Late last year, Bloomberg reported on Singapore-based Megaspeed — a (to quote Bloomberg) “once-obscure spinoff of a Chinese gaming enterprise [that] evolved into the single largest Southeast Asian buyer of NVIDIA chips” — and highlighted odd signs that suggest it might be operating as a front for China.
As a neocloud, Megaspeed rents out AI compute capacity like CoreWeave, and while NVIDIA (and Megaspeed) both deny any of their GPUs are going to China, Megaspeed, to quote Bloomberg, has “something of a Chinese corporate twin”:
This firm used similar presentation materials to Megaspeed’s, had a nearly identical website to a Megaspeed sub-brand and claimed Megaspeed’s Southeast Asia employees as its own. It’s also posted job ads at and near the Shanghai data center whose rendering was used in Megaspeed’s investor deck — including for engineering work on restricted Nvidia GPUs.
Bloomberg reported that Megaspeed imported goods “worth more than a thousand times its cash balance in 2023,” with two-thirds of its imports being NVIDIA products. The investigation got weirder when Bloomberg tried to track down specific circuit boards that NVIDIA had told the US government were in specific sites:
Data centers aren’t the only Megaspeed facilities Nvidia visited. The vast majority of Megaspeed’s $2.4 billion worth of Bianca boards, the circuit boards that house Nvidia’s top-end GB200 and GB300 semiconductors, were unaccounted for at the sites Nvidia described to Washington. After Bloomberg asked about those products, the chipmaker went to separate Megaspeed warehouses, an Nvidia official said, and confirmed the Bianca boards are there.
This person declined to specify the number observed in storage, nor where and when the chips — imported more than half a year ago — would be put to use. “Building data centers is a complex process that takes many months and involves many suppliers, contractors and approvals,” an Nvidia spokesperson said.
Things get weirder throughout the article, with a Chinese company called “Shanghai Shuoyao” having a near-identical website and investor deck (as mentioned) to Megaspeed, with several of the “computing clusters under construction” actually being in China.

Things get a lot weirder as Bloomberg digs in, including a woman called “Huang” who may or may not be both the CEO of Megaspeed and of an associated company called “Shanghai Hexi,” which is also owned by the Yangtze River Delta project… and who was also photographed sitting next to Jensen Huang at an event in Taipei in 2024.

While all of this is extremely weird and suspicious, I must be clear that there is no definitive answer as to what’s going on, other than that NVIDIA GPUs are absolutely making it to China, somehow. I also think that it would be really tough for Jensen Huang not to know about it, or for billions of dollars of GPUs to be somewhere without NVIDIA’s knowledge.
Anyway, Supermicro CEO Charles Liang has yet to comment on Wally Liaw or his alleged co-conspirators, other than a statement from the company that says that their acts were “a contravention of the Company’s policies and compliance controls.”
Jensen Huang does not appear to have been asked if he knew anything about this — not Megaspeed, not Supermicro, or really any challenging question of any kind for the last few years of his life.
Huang did, however, say back in May 2025 that there was “no evidence of any AI chip diversion,” and that the countries in question “monitor themselves very carefully.”
For legal reasons I am going to speak very carefully: I cannot say that Jensen is wrong, or lying, but I think it’s incredible, remarkable even, that he had no idea that any of this was going on. Really? Hundreds of millions if not billions of dollars of GPUs are making it to China — as reported by The Information in December 2025 — and Jensen Huang had no idea? I find that highly unlikely, though I obviously can’t say for sure.
In the event that NVIDIA had knowledge — which I am not saying it did, of course — this is a huge scandal that, for the most part, nobody has bothered to keep an eye on outside of a few brave souls at The Information and Bloomberg who give a shit about the truth. Has anybody bothered to ask Jensen about this? People talk to him on camera all the time.
Sidenote: Earlier today, US Senators Jim Banks and Elizabeth Warren issued a letter to Howard Lutnick, Trump’s Commerce Secretary, demanding the Department of Commerce take “all necessary and appropriate actions” to stop the flow of NVIDIA chips to China, including potentially blocking exports to countries believed to be intermediaries, like Malaysia, Thailand, Vietnam, and Singapore.
The arrest of Liaw has, it seems, ruffled some feathers in Washington, and I would not be shocked to see Huang sat before a congressional inquiry at some point.
I’ll also add that I am shocked that so many people are just shrugging and moving on from Supermicro, which is a major supplier of two of the major neoclouds (Crusoe and CoreWeave) and one of the minors (Lambda, to which it also rents cloud capacity). The idea that a company had no idea that several percentage points of its revenue were flowing directly to China via one of its co-founders is an utter joke.
I hope we eventually find out the truth. Nevertheless, this kind of underhanded bullshit is a sign of desperation on the part of just about everybody involved.
So, I want to explain something very clearly for you, because it’s important you understand how fucked up shit has become: hyperscalers are forcing everybody in their companies to use AI tools as much as possible, tying compensation and performance reviews to token burn, and actively encouraging non-technical people to vibe-code features that actually reach production.
In practice, this means that everybody is being expected to dick around with AI tools all day, with the expectation that you burn massive amounts of tokens and, in the case of designers working in some companies, actively code features without ever knowing a line of code.
How do I know the last part? Because a trusted source told me — and I’ll leave it at that.
One might be forgiven for thinking this means that AI has taken a leap in efficacy, but the actual outcomes are a labyrinth of half-functional internal dashboards that measure random user data or convert files, spending hours to save minutes of time at some theoretical point. While non-technical workers aren’t necessarily allowed to ship directly to production, their horrifying pseudo-software, coded without any real understanding of anything, is expected to be “fixed” by actual software engineers who are also expected to do their jobs.
These tools also allow near-incompetent Business Idiot software engineers to do far more damage than they might have in the past. LLM use is relatively-unrestrained (and actively incentivized) in at least one hyperscaler, with just about anybody allowed to spin up their own OpenClaw “AI agent” (read: series of LLMs that allegedly can do stuff with your inbox or Slack for no clear benefit, other than their ability to delete all of your emails). In Meta’s case, this ended up causing a severe security breach:
According to internal Meta communications and an incident report seen by The Information, a major security alert occurred last week after a Meta software engineer used an in-house agent tool, similar to OpenClaw, to analyze a technical question that another Meta employee had posted on an internal discussion forum. After doing the analysis, the AI agent posted a response in the discussion forum to the original question, offering advice on the technical issue, according to internal communications. The agent did so without approval from the employee.
According to The Information, Meta systems storing large amounts of company and user-related data were accessible to engineers who didn’t have permission to see them. The breach was marked a “sec-1” incident, the second-highest level of severity on an internal scale that Meta uses to rank security incidents.
The incident follows multiple problems caused at Amazon by its Kiro and Q LLMs. I quote Business Insider’s Eugene Kim:
On March 2, customers across Amazon marketplaces saw incorrect delivery times when adding items to their carts. The incident led to nearly 120,000 lost orders and roughly 1.6 million website errors. Amazon's AI tool Q was one of the primary contributors that triggered the event, according to an internal review.
On March 5, another outage caused a 99% drop in orders across Amazon's North American marketplaces, resulting in 6.3 million lost orders, one of the internal documents stated. One key factor was a production change that was deployed without using a formal documentation and approval process called Modeled Change Management.
Despite the furious (and exhausting) marketing campaign around “the power of AI code,” I believe that these events are just the beginning of the true consequences of AI coding tools: the slow destruction of the tech industry’s software stack.
LLMs allow even the most incompetent dullard to do an impression of a software engineer, by which I mean you can tell it “make me software that does this” or “look at this code and fix it” and said LLM will spend the entire time saying “you got this” and “that’s a great solution.”
The problem is that while LLMs can write “all” code, that doesn’t mean the code is good, or that somebody can read the code and understand its intention (as these models do not think), or that having a lot of code is a good thing both in the present and in the future of any company built using generative code.
LLM-based code is often verbose, and rarely aligns with in-house coding guidelines and standards, guaranteeing that it’ll take far longer to chew through, which naturally means that those burdened with reviewing it will either skim-read it or feed it into another LLM to work out what the hell to do.
Worse still, LLM use is also entirely directionless. Why is anybody at Meta using an OpenClaw? What is the actual thing that OpenClaw does, other than burn an absolute fuck-ton of tokens?
Think about this very, very simply for a second: you have given every engineer in the company the explicit remit to write all their code using LLMs, and incentivized them to do so by making sure their LLM use is tracked. You have now massively increased both the operating costs of the company (through token burn costs) and the volume of code being created.
To be explicit, allowing an LLM to write all of your code means that you are no longer developing code, nor are you learning how to develop code, nor are you going to become a better software engineer as a result. This means that, across almost every major tech company, software engineers are being incentivized to stop learning how to write software or solve software architecture issues.
If you are just a person looking at code, you are only as good as the code the model makes, and as Mo Bitar recently discussed, these models are built to galvanize you, glaze you, and tell you that you’re remarkable as you barely glance at globs of overwritten code that, even if it functions, eventually grows into a whole built with no intention or purpose other than what the model generated from your prompt.
Things only get worse when you add in the fact that hyperscalers like Meta and Amazon love to lay off thousands of people at a time, which makes it even harder to work out why something was built in the way it was built, which is even harder when an LLM that lacks any thoughts or intentions builds it. Entire chunks of multi-trillion dollar market cap companies are being written with these things, prompted by engineers (and non-engineers!) who may or may not be at the company in a month or a year to explain what prompts they used.
We’re already seeing the consequences! Amazon lost hundreds of thousands of orders! Meta had a major security breach! The foundations of these companies are being rotted away through millions of lines of slop-code that, at best, occasionally gets the nod from somebody who has “software engineer” on their resume, and these people keep being fired too, lowering the likelihood that somebody who knows what’s going on, or why something is built a certain way, will be able to stop something bad from happening.
Remember: Google, Amazon, Microsoft, and Meta all hold vast troves of personal information, intimate conversations, serious legal documents, financial information, in some cases even social security numbers, and all four of them along with a worrying chunk of the tech industry are actively encouraging their software engineers to stop giving a fuck about software.
Oh, you’re so much faster with AI code? What does that actually mean? What have you built? Do you understand how it works? Did you look at the code before it shipped, or did you assume that it was fine because it didn’t break?
This is creating a kind of biblical plague within software engineering — an entire tech industry built on reams of unmanageable and unintentional code pushed by executives and managers that don’t do any real work. LLMs allow the incompetent to feign competence and the unproductive to produce work-adjacent materials borne of a loathing for labor and craftsmanship, and lean into the worst habits of the dullards that rule Silicon Valley.
All the Valley knows is growth, and “more” is regularly conflated with “valuable.” The New York Times’ Kevin Roose — in a shocking attempt at journalism — recently wrote a piece celebrating the competition within Silicon Valley to burn more and more tokens using AI models:
An engineer at OpenAI processed 210 billion “tokens” — enough text to fill Wikipedia 33 times — through the company’s artificial intelligence models over the last week, the most of any employee. At Anthropic, a single user of the company’s A.I. coding system, Claude Code, racked up a bill of more than $150,000 in a month.
And at tech companies like Meta and Shopify, managers have started to factor A.I. use into performance reviews, rewarding workers who make heavy use of A.I. tools and chastening those who don’t.
This is the new reality for coders, some of the first white-collar workers to feel the effects of A.I. as it sweeps through the economy. A.I. was supposed to help tech companies boost productivity and cut costs. But it has also created an expensive new status game, known as “tokenmaxxing,” among A.I.-obsessed workers who are desperate to prove how productive they are.
Roose explains that both Meta and OpenAI have internal leaderboards that show how many tokens you’ve used, with one software engineer in Stockholm spending “more than his salary in tokens,” though Roose adds that his company pays for them.
Roose describes a truly sick culture, one where OpenAI gives awards to those who spend a lot of money on their tokens, adding that he spoke with several tech workers who were spending thousands of dollars a day on tokens “for what amount to bragging rights.” Roose also added one more insane detail: that one person found a loophole in Claude’s $20-a-month plan using a piece of software made by Figma that allowed them to burn $70,000 in tokens.
Despite all of this burn, Roose struggled to find anybody who was able to explain what they were doing beyond “maintaining large, complex pieces of software using coding agents running in parallel,” but managed to actually find one particularly useful bit of information — that all of this might be performative:
They said, by and large, that A.I. coding tools were making them more productive. But some also framed their use of A.I. as a strategic move — a way to signal, to their colleagues and bosses, that they’re keeping up with the times, as the era of human coding appears to be coming to an end.
I do give Roose one point for wondering if “...any of these tokenmaxxers [were] producing anything good, or whether they [were] merely spinning their wheels churning out useless code in an attempt to look busy.” Good job, Kevin.
That being said, I find this story horrifying, and veering dangerously close to the actions of drug addicts and cult followers. Throughout this story in one of the world’s largest newspapers, Roose fails to find a single “tokenmaxxer” making something that they can actually describe, which has largely been my experience of evaluating anyone who talks nonstop about the power of “agentic coding.”
These people are sick, and are participating in a vile, poisonous culture based on needless expenses and endless consumption.
Companies that incentivize the number of tokens their employees burn are actively creating a culture that mistakes excess for productivity, one built around constantly having to find stuff to do rather than doing things with intention. They are guaranteeing that their software will be poorly written and poorly maintained, all in the pursuit of “doing more AI” for no reason other than that everybody else appears to be doing so.
Anybody who actually works knows that the most productive-seeming people are often also the most-useless, as they’re doing things to seem productive rather than producing anything of note. A great example of this is a recent Business Insider interview with a person who got laid off from Amazon after learning “AI” and “vibe coding,” and how surprised they were that these supposed skills didn’t make them safer from layoffs:
At the time of the October layoffs, there was debate around whether AI was the reason.
The company was encouraging us to use AI at the time, but I don't think it took my job. I wrote descriptions for internal products at Amazon, and when I used AI to help, I'd need to ask it to rewrite its output without fluff words. It didn't sound like how people talk. Despite my ethical qualms, I used AI, but, in my opinion, it was nowhere close to replacing my role. Before I was laid off, I helped build an internal site for Amazon using AI. I hadn't really coded before, but with a colleague's help, I learned how to vibe code with a lot of trial and error.
I thought using AI for this project and showcasing different skills would make me more valuable to the company, but in the end, it didn't keep me from being laid off.
To be clear, this person is a victim. She was pressured by Amazon to take up useless skills and build useless things in an expensive and inefficient way, and ended up losing her job despite taking up tools she didn’t like under duress.
Sidenote: If you read that sentence and suggest that she should’ve used AI better, you are a mark. You are being conned into an unpaid marketing job for AI companies that actively hate you.
This person was, at one point, actively part of building an internal Amazon site using AI, and had to “learn to vibe code with a lot of trial and error” and the help of a colleague. Was this a good use of her time? Was this a good use of her colleague’s time?
No! In fact, across all of these goddamn AI coding hype-beast Twitter accounts and endless proclamations about the incredible power of AI agents, I can find very few accounts of something happening other than someone saying “yeah I’m more productive I guess.”
I am certain that at some point in the near future a major big tech service is going to break in a way that isn’t immediately fixable as a result of thousands of people building software with AI coding tools, a problem compounded by the dual brain drain forces of layoffs and a culture that actively empowers people to look busy rather than actually produce useful things.
What else would you expect? You’re giving people a number that they can increase to seem better at their job, what do you think they’re going to do, try and be efficient? Or use these things as much as humanly possible, even if there really isn’t a reason to?
I haven’t even gotten to how expensive all of this must be, in part because it’s hard to fully comprehend.
But what I do know is that big tech is setting itself up for crisis after crisis, especially when Anthropic and OpenAI stop subsidizing their models to the tune of allowing people to spend $2,500 or more on a $200-a-month subscription.
What happens to the people who are dependent on these models? What happens to the people who forgot how to do their jobs because they decided to let AI write all of their code? Will they even be able to do their jobs anymore?
Large Language Models are creating Silicon Valley Habsburgs — workers that are intellectually trapped at whatever point they started leaning on these models, models so heavily subsidized that their bosses encouraged them to use them as much as humanly possible. While they might be able to claw their way back into the workforce, a software engineer that’s only really used LLMs for anything longer than a few months will have to relearn the basic habits of their job, and find that their skills extend no further than the last training run of whatever model they leaned on.
I’m sure there are software engineers using these models ethically, who read all the code, who retain complete mastery over it, and who use it as a means of handling very specific units of work.
I’m also sure that there are some that are just asking it to do stuff, glancing at the code and shipping it. It’s impossible to measure how many of each camp there are, but hearing Spotify’s CEO say that its top developers are basically not writing code anymore makes me deeply worried, because this shit isn’t replacing software engineering at all — it’s mindlessly removing friction and putting the burden of “good” or “right” on a user that it’s intentionally gassing up.
Ultimately, this entire era is a test of a person’s ability to understand and appreciate friction.
Friction can be a very good thing. When I don’t understand something, I make an effort to do so, and the moment it clicks is magical. In the last three years I’ve had to teach myself a great deal about finance, accountancy, and the greater technology industry, and there have been so many moments where I’ve walked away from the page frustrated, stewed in self-doubt that I’d never understand something.
I also have the luxury of time, and sadly, many software engineers face increasingly-deranged deadlines set by bosses that don’t understand a single fucking thing, let alone what LLMs are capable of or what responsible software engineering is. The push from above to use these models because they can “write code faster than a human” is a disastrous conflation of “fast” and “good,” all because of flimsy myths peddled by venture capitalists and the media about “LLMs being able to write all code.”
Generative code is a digital ecological disaster, one that will take years to repair thanks to company remits to write as much code as fast as possible.
Every single person responsible must be held accountable, especially for the calamities to come as lazily-managed software companies see the consequences of building their software on sand.
In the end, everything about AI is built on lies.
Hundreds of gigawatts of data centers in development equate to 5GW of actual data centers in construction.
Hundreds of billions of dollars of GPU sales are mostly sitting waiting for somewhere to go.
Anthropic’s constant flow of “annualized” revenues ended up equating to literally $5 billion in revenue in four years, on $25 billion or more in salaries and compute.
Despite all of those data centers supposedly being built, nobody appears to be making a profit on renting out AI compute.
AI’s supposed ability to “write all code” really means that every major software company is filling their codebases with slop while massively increasing their operating expenses. Software engineers aren’t being replaced — they’re being laid off because the software that’s meant to replace them is too expensive, while in practice not replacing anybody at all.
Looking even an inch beneath the surface of this industry makes it blatantly obvious that we’re witnessing one of the greatest corporate failures in history. The smug, condescending army of AI boosters exists to make you look away from the harsh truth — AI makes very little revenue, lacks tangible productivity benefits, and seems to, at scale, actively harm the productivity and efficacy of the workers that are being forced to use it.
Every executive forcing their workers to use AI is a ghoul and a dullard, one that doesn’t understand what actual work looks like, likely because they’re a lazy, self-involved prick.
Every person I talk to at a big tech firm is depressed, nagged endlessly to “get on board with AI,” to ship more, to do more, all without any real definition of what “more” means or what it contributes to the greater whole, all while constantly worrying about being laid off thanks to the truly noxious cultures that are growing around these services.
AI is actively poisonous to the future of the tech industry. It’s expensive, unproductive, actively damaging to the learning and efficacy of its users, depriving them of the opportunities to learn and grow, stunting them to the point that they know less and do less because all they do is prompt. Those that celebrate it are ignorant or craven, captured or crooked, or desperate to be the person to herald the next era, even if that era sucks, even if that era is inherently illogical, even if that era is fucking impossible when you think about it for more than two seconds.
And in the end, AI is a test of your introspection. Can you tell when you truly understand something? Can you tell why you believe in something, other than that somebody told you you should, or made you feel bad for believing otherwise? Do you actually want to know stuff, or just have the ability to call up information when necessary?
How much joy do you get out of becoming a better person? If you can’t answer that question with certainty, maybe you should just use an LLM, as you don’t really give a shit about anything.
And in the end, you’re exactly the mark built for an AI industry that can’t sell itself without spinning lies about what it can (or theoretically could) do.
2026-03-21 00:25:42
I hear from a lot of people that are filled with bilious fury about the tech industry, but few companies have pissed off the world more than Adobe.
As the foremost monopolist in software, web and graphic design, Adobe has created one of the single-most abusive, usurious freakshows in capitalist history, trapping users in endless, punishing subscriptions to software they need that only ever seems to get worse.
In the Department of Justice’s recently-settled case against Adobe, it was revealed that early termination fees for its annual subscriptions amounted to 50% of the remaining balance on the customer’s subscription, with one unnamed Adobe executive referring to these fees as “a bit like heroin for Adobe,” adding that there [was] “...absolutely no way to kill off ETF or talk about it more obviously [without] taking a big business hit.”
Let me explain how loathsome Adobe’s business model truly is.
The below is a screenshot from Adobe’s website from Wednesday March 18 2026.

One might read this and think “wow, $34.99 a month, what a deal!” and immediately sign up without clicking on “view terms,” which reveals that after three months the subscription cost becomes $69.99 a month, and that this “monthly” subscription is a year-long contract.
Adobe deliberately hid (and I’d argue still hides!) its early termination fees behind “inconspicuous hyperlinks and fine print.” Want to cancel? Adobe charges you 50% of the remaining balance on your contract — so, in this case, over $300, and it justifies this by saying (and I quote) “...your purchase of a yearly subscription comes with a significant discount. Therefore, a cancellation fee applies if you cancel before the year ends.”
The DOJ did a great job in its complaint explaining how much Adobe sucks, just before doing nothing to impede them from doing so:
Adobe utilizes other onerous cancellation procedures to trap consumers in subscriptions they no longer want. Consumers attempting to cancel online are forced to navigate numerous hurdles, including hidden cancellation buttons and multiple, unnecessary steps such as pages devoted to password reentry, retention offers, surveys, and warnings. Consumers attempting to cancel via phone or chat experience dropped calls and chats, significant wait times, and repeated transfers. Adobe uses a dedicated “Retention” team to discourage subscribers who try to cancel. Adobe relies on such obstacles to thwart cancellations and retain subscription revenues, depriving consumers of a simple mechanism to cancel as required by law.
An exhibit from the DOJ’s lawsuit shows the MC Escher painting of canceling an Adobe subscription and the six different screens that it takes to do so. The DOJ also added that Adobe’s subscription revenue had nearly doubled between 2019 ($7.71 billion) and 2023 ($14.22 billion), and since then, Adobe’s subscription revenue hit $20.5 billion in 2024 and $22.9 billion in 2025.
To be clear, Adobe is utilizing many very, very common tricks that the software industry has used to keep people from quitting, and basically every software service I use makes you jump through three to five different screens (fuck you, Canva!) to cancel. These tricks are commonly referred to as “dark patterns.”
Adobe’s Early Termination Fees are, however, uniquely awful, both in that they employ the evil sorcery of enterprise software contracts and deploy them against creatives that are, in many cases, barely keeping their heads above water in an era defined by people trying to destroy them.
I will say, however, that I’ve never seen anyone else bill monthly for an annual contract outside of the grotesque SaaS monstrosities I wrote about last week. These are egregious, deceptive and manipulative techniques that shouldn’t be deployed against anyone, let alone creatives and consumers.
And because this is the tech industry under a regulatory environment that fails to hold them accountable, the $150 million settlement with the DOJ doesn’t appear to have changed a damn thing about how this company does business, other than offering “$75 million worth of services for free to customers that qualify.” The judgment does not appear to require any changes to how Adobe does business, and $150 million amounts to roughly 0.345% of the $43.4 billion that Adobe made in 2024 and 2025.
Adobe is a business that runs on rent-seeking, deception, and a monopoly over modern design software mostly built by people that no longer work there, such as John and Thomas Knoll, who won an Oscar in 2019 for scientific and engineering achievements for creating Photoshop along with Mark Hamburg, who left Adobe the same year.
Adobe does not create things but extracts from those that do, exhibiting the most egregious and horrifying elements of the Rot Economy’s growth-at-all-costs avarice. While you may or may not like Photoshop, or Lightroom, or any other Adobe property, that’s mostly irrelevant to the glorified holding corporation that shoves different bits around every few months in the hopes that it can scrape another dollar from its captured audience.
Much of this comes from Adobe’s abominable subscription products, most notably (and I’ll get into it in more detail after the premium break) its Creative Cloud subscription, a rat king of apps like Photoshop and InDesign and services like “Adobe Creative Community” and “generative credits” for AI features, all used to justify constant price increases and confusing product suite tweaks in the service of revenue growth.
All the while, Adobe’s net income has, for the most part, flattened out for the best part of two years at a seasonal range from $1.5 billion to $1.8 billion a quarter, all as the company debases its products, customers and brand in the filth of generative AI features that range from kind of useful to actively harmful to the creative process and have generated, at best, a couple hundred million dollars of revenue in the last two years.
I should also be clear that Adobe has an indeterminately-large enterprise division that includes marketing automation software like Marketo, which it acquired in 2018 for $4.75 billion along with Magento, a different company that develops a software platform to run corporate eCommerce pages, all so it can do battle with Salesforce. CNBC’s Jim Cramer once called Salesforce and Adobe’s competition “one of the great rivalries in tech,” and he’s correct, in the sense that both companies love to buy other companies to prop up their revenues. Adobe has bought 61 of them since the 90s, but Salesforce has it beat at 75.
They’re also both devious, underhanded SaaSholes that make their money through rent-seeking and micro-monopolies. The business known as “Adobe” is a design platform, a photo editor, a PDF creation platform, an eCommerce platform, a marketing automation platform, a content management system, a marketing project management system, an analytics platform, and a content collaboration platform.
You do business with Adobe not because you want to, but because doing business at some point requires you to do so. Use PDFs regularly? You’re gonna use Acrobat. Need to edit an image? Photoshop. Run a design studio? You’re gonna pay for Creative Suite, and you’re gonna get a price increase at some point because you don’t really have any other options. Doing a lot of email marketing campaigns? You’re gonna use Marketo, whether you like it or not. Adobe’s “Digital Experience” vertical is effectively a holding corporation for Adobe’s acquisitions to help boost revenue, an ungainly enterprise limb that grabs companies and puts them in a big bag that says “money me money now” every year or two.
Put another way, one does not do business with Adobe. One has business done to them.
There’s also the “publishing and advertising” division that has made somewhere between $146 million and $300 million a year since 2019, most of which comes from abandoned products and, ironically, the product that originally made Adobe famous — PostScript, the language that underpins most of modern printing, whether directly or by inspiring the various other alternatives that emerged in the following decades.
Adobe is a company that bathes in the scent of mediocrity, constantly doing an impression of an ever-growing business through a combination of acquisitions and price increases that are only possible in a global regulatory torpor and a market that doesn’t know when it’s being conned.
It’s also emblematic of how the modern software company grows — not through an honest exchange of value built on a bedrock of innovation and customer happiness, but the eternal death march of enshittification of its products and monopolization of whatever fields it can barge its way into.
In many ways, Adobe is one of the greater tragedies of the Rot Economy. Beneath the endless layers of subscriptions and weird upsells and horrible Business Idiots lay beloved products like Photoshop, Illustrator and InDesign that are slowly decaying as Adobe searches to boost engagement and revenue.
A great example is a story from Digital Camera World from 2025, where writer Adam Juniper talked about features he loved that were disappearing for no reason:
…in this case it was the Shape tool which, I admit, isn't an essential for most photographers. I wanted to put a speech bubble onto an image which is something I've done in the past (and like I have done for this article) to illustrate stories and it was dead simple because one of the built-in shapes in Photoshop's shape tool was a speech bubble.
Sure, it's not super elegant but, hey, we live in a world where an entire generation or two communicates using crudely drawn faces and representing emoji and that's apparently fine, so why can't I make a two word joke in a bubble like I used to be able to? All I wanted is for a robot with a camera to be saying "Smile, Human!" to illustrate a piece I'd commissioned for this very site about, well, how A.I. might not be the best at getting pictures of people. That makes sense, right? As an editor, it's more fun, I think, than the plain image of the robot with the camera.
But when I went to add the speech bubble to a layer, with the aim to put the text atop that, I found that the speech bubble that was there two decades ago had gone. There was still a shape tool and there were shapes to be found. As well as the predictable geometric ones, there were Folders of Wild Animals, Leaf Trees, Boats, and Flowers (and, oddly, not actually the explosion bubble depicted) by default.
I searched for a solution and, of course, paid for stock bubbles was one of the solutions. There is always a spend-more solution.
Juniper found that Adobe had intentionally moved the speech bubble to an optional “legacy shapes and more” feature, all with the intent of pushing users to pay for (per Juniper) the add-on Adobe Stock subscription.
In fact, a simple web search brings up user after user after user after user after user after user after user saying the same thing: that Adobe only ever seems to make its products worse, with the solution often being “find a way to revert to how things were done before the update” or “find another company to work with,” except Adobe’s scale and market presence make it near-impossible to compete.
Sidenote: I’ll also add that Adobe once told users of older versions of Lightroom Classic, Photoshop, Premiere, and Animate in 2019 that they were “no longer licensed to use [these apps],” and that “continued use may [put them] at risk of potential claims of infringement from third parties.”
Adobe even has the temerity to bug you with ads within its own products, nagging you with annoying pop-ups about new features or attempting to con you into a two-month-long trial of another piece of software using “in-product messaging” that’s turned on by default.
These are all the actions of a desperate, greedy company run by people that don’t give a shit about their customers or the things they sell.
A few weeks ago, CEO Shantanu Narayen said that he was stepping down after 18 years in which he turned Adobe from a company that built things people loved into a sleazy sales operation built on rent-seeking and other people’s innovation.
Those who don’t bother to read or know anything about software will tell you that the “threat of AI” or “the SaaSpocalypse” is killing Adobe — a convenient (and incorrect!) way to ignore that Adobe is only able to grow through acquisitions or price-hikes.
The sickly irony is that acquisitions were always in Adobe’s blood from the very early days of Photoshop. It just used to be run by people who gave a fuck about whether software was good and customers were happy.
In fact, I’m going to have a little rant about this.
I’m sick and tired of journalists from reputable outlets talking about “the threat of AI” to software companies without ever explaining what they mean or any of the economic effects involved. Adobe isn’t being killed by “AI.” We’re at the end of the hypergrowth era of software, and the only thing that grows forever is cancer. It also gives executives like Narayen cover for running operations built on deceit, exploitation, extraction and capital deployment. Years of evaluating these companies entirely based on their revenues and imagined things like “the threat of AI” without any connection to actual fucking software have made the majority of the analysis of software entirely useless.
Nothing even really has to change about reporting. Just use the product! Use it and tell me how you feel. Talk to some customers. Spend more than 20 minutes on Facebook. Use Photoshop and tell me how many popups you get, or whether it inexplicably slows down or starts eating up RAM. You’ll quickly see that we’re in a crisis that’s less about AI and more about creating a tech industry powered by creating mediocre software and putting far more effort into making a business impossible to avoid.
Decades of this pseudo-journalism mean that a great many business reporters are simply unprepared to discuss what’s actually happening, evaluating software companies based on 10-Ks and shadows on the wall of a fucking cave.
The tech industry has done a great job of scaring reporters into thinking that having a negative opinion is somehow “not supporting innovation,” and I want to be clear that refusing to criticize the tech industry is what’s actually stopping innovation. Letting these companies get away with ruining either the products they build or the products they buy is creating a climate in which the most-successful companies are the ones that crowd out the competition and raise prices.
Adobe’s growth has come from being a fucking asshole. Its decline has come from the limits of buying other companies, claiming their revenues as your own, and constantly increasing the price of your services. If there were a “threat from AI,” you’d actually be able to name it and point to it rather than referring to it like the Baba Fucking Yaga.
I’m going to put it very, very bluntly: the last 15 years or so of tech earnings have been earned predominantly by fucking over the customer through either reducing the value of the product or increasing its price. The tech and business media’s lack of attention to the actual state of technology is partially to blame, because Number Has Always Gone Up, and thus the assumption was that the underlying product quality was raising that number versus screwing over the customer.
Wake up! Look at every tech product you’ve used and tell me if it’s improved in the last decade! Facebook’s worse, email’s worse, browsers are either the same or worse, Google Search is worse, Adobe Creative Suite is worse, iPhones might seem better but the software is bloated with endless options and dropdowns and ads and nags, pretty much the only thing that’s improved is physical hardware because shipping bullshit, useless hardware is much, much harder.
This total lack of awareness of the actual state of the world is why these companies have gotten away with so much shit over the years, and why so many of you are incapable of actually capturing this moment. You are not actually looking for what’s happening, just for what might comfortably fit your analysis of the world.
Vaguely blaming things on “the threat of AI” allows you to continue pretending everything will grow forever, and rationalize bad behavior by framing every problem through the lens of disruption and innovation. A company that’s on the decline “being disrupted by AI” allows you to believe that another company will grow and take its place. Saying that a company is growing revenue “because their AI bets are paying off” allows you to ignore price increases and deteriorating software, and think the world is a better place, even if you can only do so by living in a fantasy.
Gun to your head, what is the threat to software from AI? How is it manifesting, and who is the threat? Is it OpenAI? Anthropic? Are their products actually replacing anything? Can you prove that, or is this just something you heard enough people say that you’re now comfortable believing it?
The actual threat to software companies is their hatred of innovation and their customers, and what's happening to Adobe will eventually happen to them all.
Products that provide value are enshittified, and the products they acquire have been (or came pre-) enshittified. The prices have gone up. The nags to consumers have increased. Revenues have gone up because these companies have been allowed to buy effectively anyone they want — though Adobe was, thankfully, stopped from acquiring Figma — and increase prices whenever they want, and when it’s come time to evaluate the health or strength or actual value of these companies, all that anybody ever looks at is revenues.
Perhaps your argument might be that the markets don’t care about how good something is, except the markets are influenced by journalism and financial analysts. The markets celebrate dogshit companies like Meta that make broken, harmful products because their disgusting monopolies allow them to brutalize businesses and consumers alike.
What we’re seeing in the software industry are the limits of how much one can abuse a customer, a business model that SaaS enabled and both the tech media and analysts celebrated because it worked, in the sense that it worked at making the software companies rich. And because the people at the top have chased out anybody who knows what “good” looks like and empowered vacuous growth-perverts at every level, these companies have no idea what to do to stop the tide from coming in.
Your argument might be that these companies couldn’t grow so fast without fucking customers over or making their products worse — and at that point you should ask yourself what you want the world to look like, and how willingly you’ve participated in making it look how it does today.
The decline has yet to fully begin, but a CEO doesn’t suddenly decide to quit their company after 18 years during record results because the future looks bright.
The real SaaSpocalypse is the comeuppance for decades of focusing businesses on growth by any means possible, and the hysterical non-analysis of blaming it on AI is a sign that those responsible can’t be bothered to live in anything other than the dreamworld of venture capital and Ivy League business schools.
Adobe’s story is a tragedy — the tale of the great things that can be done with software for the betterment of humanity, and how usurious Business Idiots can hijack it as a means of expressing eternal growth to the markets.
This is The Hater’s Guide To Adobe, or The Adobe Enshittification Suite.
2026-03-18 00:36:40
Small Editor's Note: the original email said "Matthew Hughes" because he uploads it to Ghost and formats it for me. Sorry!
Hey everyone! I know everybody is super excited about the supposed power of AI, but I think it’s time we set some fair ground rules going forward so we stop acting so crazy.
Let’s start with a simple one: AI boosters are no longer allowed to explain what’s good about AI using the future tense. You can no longer say “it will,” “could,” “might,” “likely,” “possible,” “estimated,” “promise,” or any other term that dresses up today’s capabilities in the language of the future.
I am constantly asked to explain my opinions (not that anybody who disagrees with me actually reads them) in the terms of the present, I am constantly harangued for proof of what I believe, and every time I hand it over there’s some sort of ham-fisted response of “it’s getting better” and “it will get even more better from here!”
That’s no longer permissible! I am no longer accepting any arguments that tell me something will happen, or that “things are trending” in a certain way. For an industry so thoroughly steeped in cold, hard rationality, AI boosters are so quick to jump to flights of fancy — to speak of the mythical “AGI” and the supposed moment when everything gets cheaper and also powerful enough to be reliable or effective.
I hear all this crap about AI changing everything, but where’s the proof?
Wow. Anthropic managed to turn $30 billion into $5 billion and start one of the single most annoying debates in internet history. No, really: its CFO Krishna Rao stated in a legal filing on March 9, 2026 that it had made revenue “exceeding” $5 billion and spent “over” $10 billion on inference and training. None of these numbers line up with previous statements about annualized revenue, by the way — I went into this last week — and no amount of contorting around the meaning of “exceeding” takes away from the fact that adding up all the annualized revenues gives over $6 billion, which I believe means that Anthropic defines “annualized” in a new and innovative way.
In any case, Anthropic turned $30 billion into $5 billion. That’s…bad. That’s just bad business. And I hear no compelling argument as to how this might improve, other than “these companies need more compute, and then something will happen.”
In fact, let’s talk about that for a second. At the end of January, OpenAI CFO Sarah Friar said that “our ability to serve customers—as measured by revenue—directly tracks available compute,” messily suggesting that the more compute you have the more revenue you have.
This is, of course, a big bucket of bollocks. Did OpenAI scale its compute dramatically between hitting $20 billion in annualized revenue (to be clear, I have deep suspicions about these numbers and how OpenAI measures “annualized” revenue) in January 2026 and $25 billion in March 2026? I think that’s highly unlikely.
I also have to ask — where is the unmet demand, exactly? If revenue scales with compute, wouldn’t that mean that each increase in compute availability would allow somebody to pay OpenAI or Anthropic who couldn’t do so before? I don’t see any reports of customers who can’t pay either company due to a lack of available compute. Are there training runs that can’t be done right now? That doesn’t really make sense either, because training doesn’t automatically lead to more revenue, other than by enabling the release of a new model, I guess?
It’s almost as if every talking point in the generative AI industry is the executives in question saying stuff in the hopes that people will just blindly repeat it!
But really folks, we’ve gotta start asking: where’s the money?
Anthropic made $5 billion in revenue across its entire existence and spent $10 billion just on compute. OpenAI claims it made $13.1 billion in revenue in 2025 and “only” lost $8 billion — but those numbers seem unlikely considering my report from November of last year that had OpenAI at $4.3 billion in revenue on $8.67 billion of inference costs through September 2025, and this is accrual accounting, which means these figures are from the quarters in question. How likely do you think it is that OpenAI booked $8.8 billion in a single quarter (Q4 CY2025), and lost only $8 billion across the whole year, when it lost $12 billion (per the Wall Street Journal) in the quarter before?
Look, I get it! This isn’t a situation where thinking critically is rewarded. Even articles explicitly criticizing the economics of these companies are still filled with weasel wording about “expects to grow” and “anticipates hitting,” or the dreaded phrase “if their bet pays off.” Saying obvious stuff like “every AI company is unprofitable” or “there is no path to profitability” or “nobody is talking about AI revenues” is considered unfair or cynical or contrarian, even though these are very reasonable and logical statements grounded in reality.
“But Ed! What about Uber!”
What about Uber? Uber is a completely different business to Anthropic and OpenAI or any other AI company. It lost about $30 billion in the last decade or so, and turned a weird kind of profitable through a combination of exiting multiple markets and business lines (e.g., autonomous cars), all while gouging customers and paying drivers less.
The economics are also completely different. Uber does not pay for its drivers’ gas, nor their cars, nor does it own any vehicles. Its PP&E has been between $1.5 billion and $2.1 billion since it was founded. Uber’s revenue does not increase with acquisitions of PP&E, nor does its business become significantly more expensive based on how far a driver drives, how many passengers they might have in a day, or how many meals they might deliver. Uber is, effectively, a digital marketplace for getting stuff or people moved from one place to another, and its losses are attributed to the constant need to market itself to customers for fear that other rideshare (Lyft) or delivery companies (DoorDash, Seamless) might take its cash.
Also: Uber’s primary business model was on a ride-by-ride basis, not a monthly subscription. Users may have been paying less, but they were still thinking about each transaction with Uber in terms that made sense when prices were raised (though it briefly tried an unlimited ride pass option in 2016).
Charging on a ride-by-ride basis was the smartest move that Uber made, as it meant that when prices went up, users didn’t have to change their habits.
AI companies make money either through selling subscriptions (or some sort of token-based access to a model) or by renting their models out via their APIs. One of their biggest mistakes was offering any kind of monthly subscription to their services, because the compute cost of a user is almost impossible to reconcile with any amount they’d pay a month, as the exponential complexity of a task is impossible to predict, both based on user habits and the unreliability of an AI model in how it might try and produce an output.
Let’s give an example. Somebody spending $20 a month on a Claude subscription can spend as much as $163 in compute.
There are two reasons this might be happening:
1. The user’s habits: they’re simply heavy, running large or frequent tasks that burn far more compute than $20 covers.
2. The model’s unreliability: it burns extra tokens as it retries, loops, or reasons at length about how to produce an output.
In both cases, Anthropic (and OpenAI, for that matter) is screwed. If we assume Anthropic’s gross margin is 38% (per The Information — though, to be clear, I no longer trust any leak from Anthropic, and no, Dario did not say Anthropic had 50% gross margins, that was a hypothetical), that would mean that $163 of compute costs it $101. Now, not every user is spending that much, but given the aggressive (and deceptive) media campaign around Claude Code, I imagine a great many are, at the very least, testing the limits of the product. Those on the $100 and $200-a-month Max plans are specifically paying for looser rate limits, meaning that they are explicitly paying to burn more tokens.
The obvious argument you could make is that Anthropic could simply increase the price of the subscription product, but for any of this to make sense, it would have to do so by at least 300%, and even then that might not do the job. An $80-a-month subscription would immediately price out just about every consumer, and turn this from a “kind of like the cost of Netflix” purchase into something that has to have obvious, defined results. A $400-a-month or $800-a-month subscription would make a Claude or ChatGPT Pro subscription the size of a car payment, and for a company with 100 engineers, Claude Max 5x at those inflated prices would run at around $480,000 a year.
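The subscription arithmetic here is simple enough to sketch. A minimal toy model, using only the figures already cited in this piece (the $163/month heavy user, the 38% gross margin via The Information, and a 300% price increase taking $20 to $80):

```python
# Toy model of subscription economics for a heavy Claude user.
# The 38% gross margin and $163/month compute figure are the article's
# numbers; everything else is plain arithmetic.

GROSS_MARGIN = 0.38
HEAVY_USER_COMPUTE_RETAIL = 163.0  # $/month of compute at API prices

# What that compute actually costs the provider at a 38% gross margin.
cost_to_provider = HEAVY_USER_COMPUTE_RETAIL * (1 - GROSS_MARGIN)
print(f"Cost to serve heavy user: ${cost_to_provider:.2f}/month")  # ~$101

# Today's price, and the price after a 300% increase.
for price in (20, 80):
    gap = price - cost_to_provider
    verdict = "profit" if gap > 0 else "loss"
    print(f"At ${price}/month: {verdict} of ${abs(gap):.2f} on this user")
```

Even at the quadrupled price, the heavy user is still served at a loss, which is exactly why a 300% increase "might not do the job."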
And this is assuming that rate limits stay the same, which I doubt they would.
In any case, there is no future for any AI company that uses a subscription-based approach, at least not one where they don’t directly pass on the cost of compute.
This is a huge problem for both Anthropic and OpenAI, as their scurrilous growth-lust means that they’ve done everything they can to get customers used to paying a single monthly cost that directly obfuscates the cost of doing business.
I need to be very direct about what this means, because it’s very important and rarely if ever discussed.
A user of ChatGPT or Claude Code is only thinking of “tokens” or “compute” in the most indirect sense — a vague awareness of the model using something to do something else, totally unmoored from the customer’s use of the product. All they see is the monthly subscription cost ($20, $100, or $200-a-month) and rate limits that vaguely say you have X% of your five-hour allowance left. Users are not educated in (nor are they thinking about) their “token burn” or burden on the company, because software has basically never made them do so in the past.
This means it will be very, very difficult to increase subscription costs on users, and near-impossible to convince them to pay the cost of the API. It’s like if Uber, which had charged $20-a-month for unlimited rides, suddenly started charging users their drivers’ gas costs, and gas was at around $250 a gallon.
That might not even do the price disparity justice. The theoretical example still involves users being in the back of a car, being driven a distance, and that driving costing gas. Token burn is an obtuse, irregular process billed per million input and output tokens, with output tokens costing more, and ballooning when you use reasoning models, which emit output tokens to break down how they might handle a task.
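For readers who have never touched an API, per-token billing has this shape. A sketch with purely illustrative rates — these are not any provider's real prices:

```python
# Illustrative per-million-token billing. Rates here are invented for the
# example; real providers charge more for output than input, and reasoning
# models emit extra (billed) output tokens while "thinking".
INPUT_RATE = 3.00    # $ per million input tokens (hypothetical)
OUTPUT_RATE = 15.00  # $ per million output tokens (hypothetical)

def prompt_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single prompt/response exchange."""
    return (input_tokens / 1e6) * INPUT_RATE + (output_tokens / 1e6) * OUTPUT_RATE

# The same question costs several times more when the model emits a long
# reasoning trace before answering.
print(prompt_cost(10_000, 2_000))    # short answer: $0.06
print(prompt_cost(10_000, 20_000))   # long "thinking" trace: $0.33
```

The point of the sketch: the user controls the input, but the output — the expensive part — is decided by the model, which is why the cost of any given prompt is so hard to predict.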
The majority of AI users do not think in these terms, and even technical users that do so have likely been using a monthly subscription which doesn’t make them think about the costs. Think about it — you log onto Claude Code every day and do all your work on it, sometimes bumping into rate limits, then coming back five hours (or however long) later and doing the same thing. Perhaps you’re thinking that a particular task might burn more tokens, or that you should use a model like Claude Sonnet over Claude Opus so that you don’t hit your limits earlier, but you do not, in most cases, even if you know the costs of a model, think about them in a way that’s useful.
Let’s say that Anthropic and OpenAI immediately decide to switch everybody to the API. How would anybody actually budget? Is somebody that pays $200 a month for Claude Max going to be comfortable paying $1000 or $1500 or $2500 a month in costs, and have, at that point, really no firm understanding of the cost of a particular action?
First, there’s no way to anticipate how many tokens a prompt will actually burn, which makes any kind of budgeting a non-starter. It’s like going to the supermarket and committing to buy a gallon of milk, not knowing if it’ll cost you $5 or $50.
But also, suppose a prompt doesn’t quite return the result you need, and thus, you’re forced to run it again — perhaps with slightly altered phrasing, or with more exposition to ensure the model has every detail it needs. And again, you have no idea how many tokens the model will burn. How does a person budget for that kind of thing?
This is a problem both based on user habits and the unreliability of Large Language Models — such as spending several minutes “thinking” when they get stuck in loops trying to evaluate code or come up with a way to execute a task.
User habits are also antithetical to switching from a paid subscription to metered access to models. A user might forgive Claude for chasing its own tail for several minutes when not burdened by the cost of it doing so, but if that act cost $2 or $3 or $10, they may hesitate to use the model at all.
I’ll give you another example. You, a relative novice, decide to use Claude Code to build a dinky little personal website. During the process, Claude Code gets lost, messes up a few little things, taking a few minutes in aggregate, and you calmly tell it to fix things and do what you’d like, and after a little back-and-forth you get something you’re happy with. As you try and upload it to Amazon Web Services, you get stuck, and spend ten minutes getting it to explain how you get the website online.
At $20 a month, you might find this process delightful, empowering even. You just coded a website (even if it was a clone of one of thousands of different online templates), and you did so using natural language. Wow! What a magical world we live in. You realize as you look at the website that you forgot to add a section. Doing so takes another half an hour. You bump into your rate limits, take a break for five hours, then come back and finish it at the end of the day. The model has told you the entire time that you’re a genius for making this, that the website rocks, and that you built it, even though you didn’t.
If you were paying via the API, this excursion could’ve cost you anywhere from $5 to $15. Every single little back-and-forth begins to add up. Every little change. Every little addition. Every attempt that Claude makes to fix something but makes it worse. Every “I don’t get it” you feed it about AWS.
It’s difficult to actually say what it was that made it expensive or not, and doing so adds a level of cognitive burden on top of the constant vigilance you need to make sure the model doesn’t do something unproductive. Even explicit, direct and well-manicured prompts can lead these models on expensive little expeditions.
Token burn isn’t something that neatly maps to another way that we pay for things outside of cloud storage, and even then, there are very few services that rival the chaotic costs of Large Language Models. Even if people can conceptualize that there are inputs and outputs, the latter of which costs more money, mapping a task to a reliable amount of tokens is actually pretty difficult.
Even if these companies were profitable on inference (I do not believe they are), they are dramatically, horrendously unprofitable on subscriptions, and there isn’t a chance in Hell that the majority of those subscriptions convert into token-based API users.
When Uber — a completely different business, to be clear — jacked up prices, it did so gradually, and also didn’t ask users to dramatically shift how they think about using the app.
Anthropic and OpenAI have no clean way to jack up prices or cut costs. They can increase subscription fees, but doing so would lead to users paying two to five times what they’re paying today, which would undoubtedly lead to massive churn.
They could also reduce rate limits with the intention of pushing people toward the API, but as I’ve discussed, subscription-based customers are neither educated about nor prepared for a confusing, metered service that directly counters habits built on abundant, effectively unmetered token burn. Users are not taught to be considerate of their burn or mindful of their costs when using a subscription-based LLM.
The other problem is that these companies don’t really appear to have a way to cut costs, because inference remains very expensive and training costs are never going away:
Training is, for an AI lab like OpenAI and Anthropic, as common (and necessary) a cost as those associated with creating outputs (inference), yet it’s kept entirely out of gross margins. To quote The Information: “Anthropic has previously projected gross margins above 70% by 2027, and OpenAI has projected gross margins of at least 70% by 2029, which would put them closer to the gross margins of publicly traded software and cloud firms. But both AI developers also spend a tremendous amount on renting servers to develop new models—training costs, which don’t factor into gross margins—making it more difficult to turn a net profit than it is for traditional software firms.”
This is inherently deceptive [on the part of Anthropic]. One could argue that training is R&D, and that R&D isn’t counted in gross margins — yet gross margins generally include the raw materials necessary to build something, and training is absolutely part of the raw cost of running an AI model. Direct labor and parts are part of the calculation of gross margin, and spending on training — both the data and the process of training itself — is absolutely meaningful. To leave it out is an act of deception.
I hear a lot of wank about “ASICs” and “TPUs” that will magically bring down costs. When? How? Oh, NVIDIA’s latest chip is 10x more efficient or some bullshit? Show me the fucking evidence! Because every time the revenues and costs get reported the revenues seem lower and the costs seem higher.
And it’s completely fucking insane that we don’t have an answer beyond “things will get cheaper” or “prices will go up.” Despite everybody talking about it endlessly for three god damn years, LLMs lack the kind of obvious, replicable, industrially-necessary outcomes that make a 3x, 4x or 10x price increase tenable.
I also think that Anthropic and OpenAI have deliberately used their subscriptions as a means of conning the media into conceptualizing AI as far more affordable than it actually is. Most users do not have any real idea of how much it costs to use these services, let alone how much it costs to run them.
All of that glowing, effusive press around Claude Code was based on outcomes that were both subsidized and obfuscated by Anthropic. I think that these articles would’ve been much less positive if the reporters were even aware of the actual costs.
So, let’s do some maths, shall we?
Assume a business has 100 engineers, and currently pays $200 a month for each engineer to use Claude Max, at a cost of $20,000 a month, or $240,000 a year. Let’s assume on average you pay your engineers $125,000, meaning that your salaries are $12.5 million a year, not considering other costs (this is a toy example).
Now imagine that Claude switches to a metered billing system.
Let’s assume that, in actuality, these engineers are burning a mere $10 a day in tokens, which brings costs to $365,000 a year, or an increase of $125,000…and remember, this is a team of engineers that were previously used to a subscription that allowed them to spend upwards of $2700 a month in tokens, or nearly 10 times the $300 a month they’re now spending.
Let’s be a little more realistic, and bump that number up to $25. Now you’re spending $912,500 a year in tokens. $30 a day puts you over a million bucks. Oops, busy month, you’re now spending $40 a day. Now you’re spending more than 10% of your salaries on compute costs.
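The scenario above is just multiplication, and it's worth seeing it laid out. A sketch using the article's toy numbers (100 engineers, $125,000 average salary, $200/seat Claude Max):

```python
# Annual cost of metered Claude usage for a 100-engineer team, versus
# the flat $200/seat/month Claude Max subscription. All inputs are the
# article's toy figures.
ENGINEERS = 100
SALARIES = ENGINEERS * 125_000       # $12.5M/year in engineering salaries

subscription = ENGINEERS * 200 * 12  # $240,000/year on Claude Max

def metered_annual(dollars_per_day: float) -> float:
    """Annual token spend if every engineer burns this much daily."""
    return ENGINEERS * dollars_per_day * 365

print(f"Subscription: ${subscription:,}/year")
for per_day in (10, 25, 30, 40):
    total = metered_annual(per_day)
    print(f"${per_day}/day -> ${total:,.0f}/year "
          f"({total / SALARIES:.1%} of salaries)")
```

At $10 a day the team already costs $125,000 more than the subscription did; at $40 a day, token spend crosses 11% of the entire salary bill.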
Anthropic’s own Claude Code documentation says that the average cost is $6 per-developer-per-day, with “daily costs remaining below $12 for 90% of users.” Good news! If you, as an engineer, can limit your usage to $6 a day, you’re actually saving the company money!
But you’re not spending $6 a day. That’s a silly number for anybody coding. One user on Reddit said that they had been spending $200-to-$300 a day on API costs and decided instead to spend $40 to $50 a day on a GPU cluster on Lambda to run the open-source model Qwen 3.5 to handle their code — which still works out to around $14,600 a year, versus the $73,000-plus they’d have spent on the API.
Another user found that their parallel Claude Code sessions using Claude’s $200-a-month plan (I assume using multiple accounts) worked out to around $12,000 a month in API costs. Another that hit their limits on their Max subscription “only needed another hour or two to finish a project,” and that hour or two resulted in almost $600 in API costs.
Even the boosters are beginning to worry. Last week, Chamath Palihapitiya made a shockingly reasonable point:
Not a single meaningful company has yet to say that they are making at least 2x+ from all of this incremental money they are spending since even last Fall! When ROI?
When ROI indeed, Chamath. The fact that one of the most-prominent voices (for better or worse) in the tech industry is unable to get a straight answer to “where is the return on investment” — somebody directly incentivized to keep the party going — should have everybody a little worried.
Really though, where is the ROI?
Who is actually getting a profit out of this? NVIDIA? The companies that make RAM? Because it doesn’t seem to be the companies who are buying the GPUs. It doesn’t seem to be the AI companies. I don’t think it’s true, but if you believe it, you believe code is truly being automated away — to what end? What are the actual documented economic effects we can point at and what are the actual meaningful changes to the world?
Real data. Something from today, please. You are legally banned from saying the words “soon” or “in the future.” No more future-tense. It’s not allowed. All of my stuff has to be in the present — so yours should too.
Let’s do a quick-fire round:
Boosters, I am begging you — point to one thing TODAY, from TODAY’s models, that even remotely justifies burning nearly a trillion dollars and filling our internet full of slop and creating the moral distance from an action that might have blown up a school and empowering the theft of millions of people’s work and having to hear every fucking day about Sam Altman and Dario Amodei, two terrifyingly boring and annoying oafs with no culture and no whimsy in their wretched little hearts.
Even if you are impressed by what LLMs can do, remember that what you’re impressed by is the result of burning more money than anybody has ever burned on anything, including the Great Financial Crisis’ Troubled Asset Relief Program (a little over $400 billion) and the COVID Paycheck Protection Program (somewhere between $800 billion and $900 billion). Anthropic and OpenAI have raised (assuming OpenAI gets all the money) over $200 billion in funding, on top of nearly $700 billion in capex in 2026 alone across Google, Amazon, Meta, and Microsoft, on top of the $800 billion or so they’ve already spent. I haven’t even included the tens of billions spent by CoreWeave, or the $178.5 billion in US-based data center debt deals from 2025, or the hundreds of billions of venture dollars that went to AI companies worldwide.
Yet when you look even an inch below the surface, everything seems kind of shit.
Per my Hater’s Guide To The SaaSpocalypse:
Much is said of the rocketship growth of Cursor, which crossed $2 billion in annualized revenue in March, which seems good if you don’t realize that $2 billion annualized is $166 million a month, and that Cursor raised three billion fucking dollars in 2025 alone. Similarly, Harvey — which raised $200 million at an $11 billion valuation in February — can only cobble together $190 million in ARR, or $15.8 million a month, after hitting $100 million ARR ($8.3 million a month) in August 2025, raising $300 million in June 2025, and $300 million in February 2025.
Pretty much every AI startup is in SaaS and raises hundreds of millions of dollars so that it can make single or double-digit millions of dollars a month. This sounds sarcastic — petty even! — but it’s the truth, and nobody’s margins appear to be improving.
Lovable hit $400 million in annualized revenue a few weeks ago — better known as $33.3 million a month — and all it took was $15 million in February 2025, $200 million in July 2025 and $330 million in December 2025.
Every single AI startup without exception does the same thing: turn hundreds of millions of dollars into tens of millions of dollars, or a few billion dollars into a few hundred million dollars. None of them are improving their margins. None of them have a solution.
Every single problem I’ve discussed above about the costs of running Anthropic or OpenAI applies directly to every AI startup, except they have far less venture capital backing and are subject, as Cursor was back in June 2025, to whatever price increases Anthropic or OpenAI decide, such as adding “priority processing” that’s effectively mandatory to have consistent access to frontier models.
Absolutely none of these companies have a plan. The only reason anyone is still humouring them is that the media and venture capital continue to promote the idea that — without explaining how — they will magically find a way of becoming margin positive.
When? How? Those are problems for rubes who don’t know we’re living in the future! Let’s hope that venture capital can afford to fund them in perpetuity! They can’t, of course, because venture capital has had dogshit returns since 2018, and AI startups do not have much intellectual property, as most of them are just wrappers for frontier AI labs who also don’t have any path to profitability.
As I covered last week, the story is similar for public companies.
Adobe’s “AI-first” revenue ($375 million ARR) works out to about $94 million a quarter at most for a company that makes $6 billion a quarter. ServiceNow has “$600 million in annual contract value,” an extrapolation of a non-specific period’s revenue that does not actually mean $600 million for a company that makes over $10 billion a year. Salesforce’s Agentforce revenue is $800 million, or roughly $66 million a month for a company that makes over $11 billion a year. Shopify, the company that mandates you prove that AI can’t do a job before asking for resources, does not break out AI revenue. Workday, a company that makes about $2.5 billion a quarter in revenue, said it “generated over $100 million in new ACV from emerging AI products, [and that] overall ARR from these solutions was over $400 million.” $400 million ARR is $33 million a month.
To be clear, ARR is not a consistent figure, and churn happens all the time, especially for products like LLMs that have questionable outcomes and high prices. Four fucking years of this and we’re still talking about this stuff in riddles, mostly because it’s a terrible business.
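Every run-rate conversion in this piece is the same operation: an annualized (or ARR) headline figure divided down to a month. A quick sketch, using figures already quoted in the text:

```python
# "Annualized revenue" and ARR are exit run-rates: roughly the latest
# month (or contract value) multiplied by 12. Dividing back down makes
# the headline numbers comparable to actual monthly business.
def monthly_millions(arr_dollars: float) -> float:
    return arr_dollars / 12 / 1e6

# Figures as quoted in the text above.
for name, arr in [("Cursor", 2_000_000_000),
                  ("Harvey", 190_000_000),
                  ("Lovable", 400_000_000),
                  ("Workday AI", 400_000_000)]:
    print(f"{name}: ${monthly_millions(arr):.1f}M/month")
```

This is why a "$2 billion annualized" Cursor is really a $166-million-a-month business, and a "$400 million ARR" Lovable a $33-million-a-month one.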
Then there’s the infrastructure issue.
One of the more-recent (and egregious) failures of journalism is the reporting of data center deals.
Before we go any further, one very important detail: when you read “active power,” that does not mean actual available compute capacity, which is called “IT load.” Per my premium data center model from a few months ago, you should take any “active power” and divide it by 1.3 to represent “PUE” — the standard for power usage effectiveness that calculates for everything that gets the power to the IT gear, and all the infrastructure that’s necessary to keep things running, like cooling systems.
Anywho, Bloomberg just reported that Meta had signed a “$27 billion” compute capacity deal with AI compute company Nebius, with “$12 billion of capacity available in 2027.” Based on discussions with numerous experts in AI infrastructure, it works out to about $12.5 million per megawatt of compute, meaning that “$12 billion of dedicated capacity” would be around 960MW of IT Load.
And, of course, Nebius just raised $3.75bn in debt on the back of that compute deal.
This is on top of Microsoft’s $17.4 billion deal, and, of course, Meta’s $3 billion deal from last year.
One little problem: as of its February 12, 2026 Letter to Shareholders, Nebius has around 170MW of active power.
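The two conversions in play here — dollars-per-megawatt and the PUE rule of thumb — are simple enough to sketch, using the figures already given above ($12.5M per MW from my sourcing, PUE of 1.3 as the stated rule of thumb):

```python
# Converting "active power" press-release numbers into usable compute.
PUE = 1.3                    # power usage effectiveness, per the rule above
DOLLARS_PER_MW = 12_500_000  # ~$12.5M per MW of compute, per the text

def it_load_mw(active_power_mw: float) -> float:
    """Usable IT load implied by a quoted 'active power' figure."""
    return active_power_mw / PUE

# Meta/Nebius: "$12 billion of dedicated capacity" implies ~960MW of IT load.
print(12_000_000_000 / DOLLARS_PER_MW)  # 960.0

# Nebius's entire current fleet: 170MW of active power is ~130MW of IT load.
print(it_load_mw(170))                  # ~130.8
```

Which is the whole problem in two lines: Nebius has sold roughly a gigawatt of IT load against a global fleet of about 130MW.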
How the fuck is it going to have that capacity ready, exactly?
For some context, CoreWeave — an AI compute company backed by (and backstopped by) NVIDIA with an entirely separate company building its capacity (Core Scientific) with backing from Blackstone and seemingly every major financier in the world — managed to go from 420MW of active power (NOT IT LOAD) in Q1 2025 to 850MW in active power in Q4 2025, with much of that already under construction in Q1 2025.
Nebius only started building its 300MW of New Jersey-based compute in March 2025, and based on its letter to shareholders, things aren’t going very well at all.
Then there’s Nscale, a company that raised $2 billion from NVIDIA, Lenovo and a bunch of other investors, and this week signed a “1.35GW deal” with Microsoft to fill a data center full of the latest generation of Vera Rubin GPUs.
In September 2025, NVIDIA CEO Jensen Huang said that the UK was going to be an “AI superpower” as he plunged hundreds of millions of dollars into Nscale as part of an “historic commitment to the UK AI sector” between NVIDIA, OpenAI, and Microsoft.
When The Guardian visited the supposed site of Nscale’s UK-based data center in February 2026 — which is meant to be built by the end of the year — it found “...a depot stacked with pylons and scrap metal under a corrugated roof, while flatbed lorries drove in and out stacked with poles.” As part of the investigation, The Guardian found that the supposed billions of dollars in data center commitments made by Nscale and CoreWeave were never checked by the government, and that no mechanism existed to audit them.
The response from both CoreWeave and Nscale was that these billions of dollars of investments would mostly be in NVIDIA GPUs, which is where we get to the “why” of these massive compute contracts.
You see, when Nebius, or Nscale, or CoreWeave signs a giant deal that it doesn’t have the capacity to provide, it does so specifically to raise debt on the contract to buy NVIDIA GPUs.
See the below diagram from CoreWeave’s Q1 2025 earnings presentation:

If people were actually paying attention, they’d see the immediate problem: a data center takes an incredible amount of time to build, and the more capacity it needs, the longer it takes.
It’s a deeply cynical con. Hyperscalers like Microsoft and Meta are paying for these contracts because they don’t reflect as assets on the balance sheet, all while moving the risk onto the AI compute company — and if the AI company misses a deadline, the hyperscaler can walk away.
For example, Nebius’ deal with Microsoft from last year has a clause stating that if Nebius “...fails to meet agreed delivery dates for a GPU Service and the Company cannot provide alternative capacity, Microsoft has the right to terminate that GPU Service.”
Based on discussions with people with direct knowledge of its infrastructure, Microsoft has already set Nebius up to fail. The expectation is that Nebius will have over 50MW of IT load — specifically made up of NVIDIA’s GB200 and GB300 GPUs — available by the end of April, with at least another 150MW of IT load (or more) by the end of the year. That’s for a company that only has about 130MW of IT load in its entire global infrastructure, most of which isn’t in Vineland, New Jersey.
Hyperscalers are helping no-name companies with little or no history or experience in building data centers borrow billions of dollars in debt — debt that is increasingly funded by people’s retirement and insurance funds, lured in by the idea of “consistent yields” from companies that cannot afford to do business without convincing everybody to believe the illogical.
Data centers take forever to build. The first two buildings of the “1.2GW” (so 880MW of IT load) Stargate Abilene were meant to be fully energized by the middle of 2025. Only the first two buildings’ worth of 96,000 GPUs were “delivered” by the middle of December 2025, and while the entire project was meant to be energized by mid-2026, it appears that only two buildings are actually ready to go.
Every report on these deals should include a timeline. In the end, I bet Stargate Abilene never gets built, but if it does, I’d be shocked if it’s done before the middle of 2027, which would mean it takes about three years per gigawatt of power, or about a year per 293MW of IT load.
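To make the arithmetic explicit — these are my own back-of-envelope numbers, assuming the “1.2GW” headline implies roughly 880MW of IT load and a (generous) three-year construction timeline — the maths looks like this:

```python
# Back-of-envelope check on the Stargate Abilene build rate.
# Assumptions (mine, not official figures): "1.2GW" of gross power,
# roughly 880MW of it usable as IT load, built over about three years.

it_load_mw = 880   # IT load implied by the "1.2GW" headline figure
build_years = 3    # a (generous) three-year construction timeline

it_load_per_year = it_load_mw / build_years
print(f"~{it_load_per_year:.0f}MW of IT load per year of construction")
```

Whatever the exact figures turn out to be, the point stands: capacity arrives in hundreds of megawatts per year, not gigawatts per quarter.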
I have read absolutely zero fucking stories about data center development that take this into account.
The flippancy with which the media reports on these data centers — both in the structure of the deals and the realities of the construction (I go into detail about this in a premium issue from late last year, but making data centers is hard) — is allowing con artists to get rich and creating the conditions for yet another great financial crisis.
Pension funds and state investment boards are reading about these deals, seeing “Microsoft,” and assuming that everything will be fine, per my Hater’s Guide To Private Equity:
Insurance (and, for that matter, retirement) companies need those regular payouts to actually do business. They crave yield — regular payments for their investments — and private credit bonds (as in the bonds used to raise the money for private credit in some cases) tend to pay better than others, and nobody bothers to think about why that might be, such as “the investments are riskier, because they’re often issued by ‘trustworthy’ companies or in assets that people believe are going to be a big deal.”
All that the pension fund sees is an article on CNBC or Bloomberg and the name of a company like Microsoft or Meta. In turn, they (or the private credit firm managing their money) buy bonds or fund these debt deals because they see them as stable, straightforward, reliable investment yields, because the media and private credit firms are selling them as such.
In reality, data center debt deals are incredibly dangerous, as each one is effectively a bet on both the existence of AI demand (so that the debt can be repaid with revenue) and the continued existence of the company in question as a going concern. Nscale, Nebius and CoreWeave are only a few years old, and the concept of a 1GW data center is not much older.
During the great financial crisis, massive amounts — billions and billions of dollars’ worth — of pension and insurance funds went into Collateralized Debt Obligations (CDOs) that were rated as AAA despite being a rat king of low-grade (and in many cases delinquent) debt.
This time around, data center debt deals are often given junk ratings — such as the B+ rating given to one of CoreWeave’s 2025 debt deals — which might make you think that there’s nothing to worry about, and that investors would naturally steer clear of these investments.
The problem is that the markets have AI psychosis, and thus believe anything to do with data centers is a natural winner. Blackstone funded part of its $38 billion investment in Oracle’s data centers — you know, the ones explicitly built for OpenAI, which cannot afford to pay for the compute — using its insurance funds. Per The Information:
“These are long-term contracts with committed revenue streams, so they should operate very similarly to the bonds that some of these hyperscalers are issuing directly,” said Julie Brewer, head of finance at EdgeCore Digital Infrastructure, a Denver-based data center developer.
This is the standard line from anybody in finance about data centers, and is based on little more than wish-casting and fantasy. These are brand new kinds of debt for some of the largest infrastructure projects in history, and as I’ve discussed repeatedly, outside of hyperscalers moving compute off of their balance sheets, there’s only a billion dollars of compute demand.
77% of CoreWeave’s 2025 revenue — and keep in mind that CoreWeave is the largest independent AI compute provider — was from Microsoft and NVIDIA, the latter of which plans to spend $26 billion in the next five years on renting back its GPUs…which suggests that little organic demand exists.
2026 or 2027’s great financial crisis will replace “homes” with “data centers,” and I worry it’ll be calamitous for the pensions and insurance funds that have tied their futures to AI.
—
Even putting aside my own personal feelings about LLMs…I’m just not sure why we’re doing this anymore.
Okay, okay, I know why we’re doing it — the software industry is out of hypergrowth ideas and has been in a years-long decline since 2018, though it briefly had a burst of excitement in 2021 when money was cheap and everybody was insane after the lockdowns ended.
Nevertheless, AI has become one of the largest cons in history, bought and sold based on stuff it can’t do (but might do, one day, at a non-specific time), constantly ignoring the blatant swindles and acidic economics that are only made possible when regulators, the media and the markets are piloted by people that don’t know — or don’t want to know — what’s actually happening.
If you are an AI fan, I need to genuinely ask you to consider whether what you’re impressed by is what the LLMs can do today rather than what they might be able to do tomorrow. If you’re excited based on the potential, you’re not excited about technology, you’re excited about marketing.
And I get it. The tech industry hasn’t had anything really exciting in a while. It’s easy to get swept away by hype, especially when everybody is being swept away in exactly the same way. It’s hard to push back when Microsoft, Google, Meta and Amazon are all participating in a financial death cult, and their revenues keep growing — having to understand anything more than the headlines is tough and you’ve got all this shit to do and it’s so much easier to just nod and agree with everybody else.
But know that this is an industry that sells itself on fear and lies. Know that LLMs cannot do many of the things that people talk about — they do not blackmail people, no, GPT-4 did not trick a TaskRabbit worker, and every single time an AI CEO says AI “will” do something, you should spit in their fucking face for making shit up, not print it without a second’s thought.
It’s time to get specific. What will AI do, and when will it do it? What will the actual software be? How will it work? How much will it cost? How will it make money? How will it become profitable?
Because right now we’re being sold a lie and I’m sick of it, almost as sick as I am of seeing critics framed as outlier factions spreading conspiracy theories. I’ve proven my point again and again and again. Where is the same effort from the AI boosters? All I see is the occasional desperate attempt to claim that LLMs doing what they’ve always done is somehow remarkable.
Oh wow, so you can code a clone of an open source software project, all set up with an LLM that may or may not get the code right. Oh, someone was able to vibe code something that may or may not work and looks exactly the same as every other vibe code project. Congratulations on making a website that’s purple for some reason — you’re puking out a facsimile of an era of websites defined by the colour scheme chosen by Tailwind CSS.
I also want to be clear that I am extremely nervous about how many people appear to be fine with not reading code. I am currently (very slowly) learning Python, and every new thing I learn reinforces my overwhelming anxiety that there is a lot of software being written today by people who don’t read the output from LLMs and, in some cases, may not have understood it if they did. While I’m not saying all or even many software engineers might do this, I am alarmed by the idea that it’s becoming more commonplace — and even more alarmed that the reaction appears to be “ah it’s fine who gives a shit, it works.”
Guess what! It doesn’t always work. Amazon Web Services had multiple recent outages caused by use of its Kiro AI coding tool, and while it insists that AI isn’t to blame, it also convened an internal meeting to discuss this specific issue, and The Financial Times reported that Amazon now requires junior and mid-level engineers to get sign-off on AI-assisted changes to code. However you may feel about Amazon as a service, its engineers are likely indicative of corporate engineering on some level, which makes me wonder whether we’re going to have some real problems in software development in the next few years as a result.
What does the software industry look like if nobody is actually reading their code? How many software engineers are comfortable doing this? I’m sure somebody will read this and get terribly offended, but to be clear, I’m not accusing you of copy-pasting code you can’t understand and being happy if it works unless that’s exactly what you’re doing.
To be explicit, allowing an LLM to write all of your code means that you are no longer developing code, nor are you learning how to develop code, nor are you going to become a better software engineer as a result.
This isn’t even an insult or hyperbole. If you are just a person looking at code, you are only as good as the code the model makes, and as Mo Bitar recently discussed, these models are built to galvanize you, glaze you, and tell you that you’re remarkable as you barely glance at globs of overwritten code that, even if it functions, eventually grows into a whole built with no intention or purpose other than what the model generated from your prompt.
I’m sure there are software engineers using these models ethically, who read all the code, who have complete command over it and use it like a glorified autocomplete. I’m also sure that there are some that are just asking it to do stuff, glancing at the code and shipping it. It’s impossible to measure how many of each camp there are, but hearing Spotify’s CEO say that its top developers are basically not writing code anymore makes me deeply worried, because this shit isn’t replacing software engineering at all — it’s mindlessly removing friction and putting the burden of “good” or “right” on a user that it’s intentionally gassing up.
Ultimately, this entire era is a test of a person’s ability to understand and appreciate friction.
Friction can be a very good thing. When I don’t understand something, I make an effort to do so, and the moment it clicks is magical. In the last three years I’ve had to teach myself a great deal about finance, accountancy, and the greater technology industry, and there have been so many moments where I’ve walked away from the page frustrated, stewed in self-doubt that I’d never understand something.
I also have the luxury of time, and sadly, many software engineers face increasingly deranged deadlines set by bosses that don’t understand a single fucking thing, let alone what LLMs are capable of or what responsible software engineering is. The push from above to use these models because they can “write code faster than a human” is a disastrous conflation of “fast” and “good,” all because of flimsy myths peddled by venture capitalists and the media about “LLMs being able to write all code.”
The problem is that LLMs can write all code, but that doesn’t mean the code is good, or that somebody can read the code and understand its intention, or that having a lot of code is a good thing both in the present and in the future of any company built using generative code.
And in the end, where are the signs that this is working? Where are the vibe coded software products destabilizing incumbents? Where are the actual software engineers being replaced — not that I want this to happen, to be clear — by LLMs, outside of AI-washing stories that have got so egregious that even Sam Altman called it out? Where is the revenue? Where are the returns? Where are the outcomes?
Why are we still doing this?
2026-03-14 00:53:23
Soundtrack: The Dillinger Escape Plan — Black Bubblegum
To understand the AI bubble, you need to understand the context in which it sits, and that larger context is the end of the hyper-growth era in software that I call the Rot-Com Bubble.
Generative AI, at first, appeared to be the panacea — a way to create new products for software companies to sell (by connecting their software to model APIs), a way to sell the infrastructure to run it, and a way to create a new crop of startups that could be bought or sold or taken public.
Venture capital hit a wall in 2018 — vintages after that year are, for the most part, stuck at a TVPI (total value to paid-in, basically the money you make for each dollar you invested) of 0.8x to 1.2x, meaning you’re making somewhere between 80 cents and $1.20 for every dollar.
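For anyone unfamiliar with the metric, TVPI is just the fund’s total value — cash already returned to investors plus what the remaining holdings are marked at — divided by the capital investors paid in. A minimal sketch with made-up numbers:

```python
def tvpi(distributions: float, residual_value: float, paid_in: float) -> float:
    """Total Value to Paid-In: (cash returned + remaining holdings) / capital invested."""
    return (distributions + residual_value) / paid_in

# A hypothetical post-2018 vintage fund: $100M paid in, $30M already
# distributed, remaining holdings marked at $70M -> a TVPI of 1.0x,
# i.e. a dollar back (on paper) for every dollar invested.
print(tvpi(distributions=30, residual_value=70, paid_in=100))  # 1.0
```

Note that the residual value is a mark, not cash — which is exactly why a 1.0x TVPI on paper can still end badly.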
Before 2018, Software As A Service (SaaS) companies had had an incredible run of growth, and it appeared that basically any industry could, at least in theory, have a massive hypergrowth SaaS company. As a result, venture capital and private equity have spent years piling into SaaS companies, because they all had very straightforward growth stories and replicable, reliable, recurring revenue streams.
Between 2018 and 2022, 30% to 40% of private equity deals (as I’ll talk about later) were in software companies, with firms taking on debt to buy them and then lending them money in the hopes that they’d all become the next Salesforce, even if none of them will. Even VC remains SaaS-obsessed — for example, about 33% of venture funding went into SaaS in Q3 2025, per Carta.
The Zero Interest Rate Policy (ZIRP) era drove private equity into fits of SaaS madness, with SaaS PE acquisitions hitting $250bn in 2021. Too much easy access to debt and too many Business Idiots believing that every single software company would grow in perpetuity led to the accumulation of some of the most-overvalued software companies in history.
As the years have gone by, things have slowed down, and now private equity is stuck with tens of billions of dollars of zombie SaaS companies that it can’t take public or sell to anybody else, their values decaying far below what the firms paid — a very big problem when most of these deals were funded with debt.
To make matters worse, 9fin estimates that IT and communications sector companies (mostly software) accounted for 20% to 25% of private credit deals tracked, with 20% of loans issued by public BDCs (like Blue Owl) going to software firms.
Things look grim. Per Bain, the software industry’s growth has been on the decline for years, with declining growth and declining Net Revenue Retention — how much you’re making from existing customers expanding their spend, minus what you’re losing from customers leaving (or cutting spend):

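For reference, Net Revenue Retention is typically computed something like this (a back-of-envelope definition with made-up numbers, not Bain’s exact methodology):

```python
def net_revenue_retention(starting_arr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """NRR: what last year's customers pay today as a percentage of what
    they paid last year. Over 100% means existing customers are spending
    more; under 100% means the base is shrinking even before new sales."""
    return (starting_arr + expansion - contraction - churn) / starting_arr * 100

# A hypothetical SaaS company: $10M ARR base, $1.5M in upsells,
# $0.5M in downgrades, $1M lost to churned customers -> flat at 100%.
print(net_revenue_retention(10.0, 1.5, 0.5, 1.0))
```

When NRR dips below 100%, a SaaS company has to sign new customers just to stand still — which is why the decline Bain describes matters so much.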
It’s easy to try and blame any of this on AI, because doing so is a far more comfortable story. If you can say “AI is causing the SaaSpocalypse,” you can keep pretending that the software industry’s growth isn’t slowing.
That isn’t what’s happening.
No, AI is not replacing all software. That is not what is happening. Anybody telling you this is either ignorant or actively incentivized to lie to you.
The lie starts simple: that the barrier to developing software is “lower” now, either “because anybody can write code” or “anybody can write code faster.” As I covered a few weeks ago…
Building software is not writing code and then hitting enter and a website appears, requiring all manner of infrastructural things (such as "how does a customer access it in a consistent and reliable way," "how do I make sure that this can handle a lot of people at once," and "is it quick to access," with the more-complex database systems requiring entirely separate subscriptions just to keep them connecting).
Software is a tremendous pain in the ass. You write code, then you have to make sure the code actually runs, and that code needs to run in some cases on specific hardware, and that hardware needs to be set up right, and some things are written in different languages, and those languages sometimes use more memory or less memory and if you give them the wrong amounts or forget to close the door in your code on something everything breaks, sometimes costing you money or introducing security vulnerabilities.
…
And yet, the myth that LLMs are an existential threat to existing software companies has taken root in the market, sending the share prices of the legacy incumbents tumbling.
From what I can gather, the other idea is that AI can “simply automate” the functions of a traditional software company, and “agents” can replace the entire user experience, with users simply saying “go and do this” and something happening. Neither of these things is true, of course — nobody bothers to check, and nobody writing about this stuff gives enough of a fuck to talk to anybody other than venture capitalists or CEOs of software companies that are desperate to appeal to investors.
To be more specific, the CEOs that you hear desperately saying that they’re “modernizing their software stack for AI” are doing so because investors, who also do not know what they are talking about, are freaking out that they’ll get “left behind” because, as I’ve discussed many times, we’re ruled by Business Idiots that don’t use software or do any real work.
There are also no real signs that this is actually happening. While I’ll get to the decline of the SaaS industry’s growth cycle, if software were actually replacing anything we’d see direct proof — massive contracts being canceled, giant declines in revenue, and in the case of any public SaaS company, 8K filings that would say that major customers had shifted away business from traditional software.
Midwits with rebar chunks in their gray matter might say that “it’s too early to tell and that the contract cycle has yet to shift,” but, again, we’d already have signs, and you’d know this if you knew anything about software. Go back to drinking Sherwin Williams and leave the analysis to the people who actually know stuff!
We do have one sign though: nobody appears to be able to make much money selling AI, other than Anthropic (which made $5 billion in its entire existence through March 2026 on $60 billion of funding) and OpenAI (which I believe made far less than $13 billion, based on my own reporting).
In fact, it’s time to round up the latest and greatest in AI revenues. Hold onto your hats folks!
Riddle me this, Batman: if AI was so disruptive to all of these software companies, would it not be helping them disrupt themselves? If it were possible to simply magic up your own software replacement with a few prompts to Claude, why aren’t we seeing any of these companies do so? In fact, why do none of them seem to be able to do very much with generative AI at all?
Sidenote: Also…where are the competitors? Where are the stories of companies building their own SaaS replacements? Software CEOs never, ever stop talking. Wouldn’t this be all they’d talk about? Klarna claimed it replaced its Salesforce contract with AI, and then had to hastily explain that the reason was that the company created its own internal CRM using graph database Neo4j, but of course couldn’t possibly share what it looked like. Lovable claims one of its customers replaced Salesforce with its CRM, running it entirely on Lovable’s services at a cost of $1200 a year, claiming “no ongoing maintenance complexity.” Reassuringly, the company’s CEO said that his “head of finance is more or less running it in his spare time.”
Curiously, said CRM looks very, very similar to open source CRM Twenty.
The point I’m making is fairly simple: the whole “AI SaaSpocalypse” story is a cover-up for a much, much larger problem. Reporters and investors who do not seem to be able to read or use software are conflating the slowing growth of SaaS companies with the growth of AI tools, when what they’re actually seeing is the collapse of the tech industry’s favourite business model, one that’s become the favourite chew-toy of the Venture Capital, Private Equity and Private Credit Industries.
You see, there are tens of thousands of SaaS companies in everything from car washes to vets to law firms to gyms to gardening companies to architectural firms. Per my Hater’s Guide To Private Equity:
For the best part of 20 years, software startups have been seen as eternal growth-engines. All you had to do was find a product-market fit, get a few hundred customers locked in, up-sell them on new features and grow in perpetuity as you conquered a market. The idea was that you could just keep pumping them with cash, hire as many pre-sales (technical person who makes the sale), sales and customer experience (read: helpful person who also loves to tell you more stuff) people as you need to both retain customers and sell them as much stuff as you need.
You’d eventually either take that company public or, in reality, sell it to a private equity firm. Per Jason Lemkin of SaaStr:
For years, PE was the reliable exit. Hit $20M in ARR, get to 40% growth or Rule of 40, and you’d see term sheets. From 2012 through 2023, nearly every company in the SaaStr Fund portfolio that crossed $20M ARR with solid fundamentals received multiple PE offers. It was the gift of exits that kept on giving. The multiples weren’t always great, but the offers came. Again and again and again.
That’s not happening anymore.
The problem is that SaaS valuations were always made with the implicit belief that growth was eternal, just like the rest of the Rot Economy, except SaaS, at least for a while, had mechanisms to juice revenues, and easy access to debt. After all, annual recurring revenues are stable and reliable, and these companies were never gonna stop growing, leading to the creation of recurring revenue lending:
Financing for private equity buyouts of these businesses comes mostly from private credit investors. It’s usually via annual recurring-revenue (ARR) loans, with loan amounts set at a multiple of annualized recurring revenue. For example, a software company with recurring revenues of $100 million seeking a loan of twice that amount would borrow $200 million.
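The sizing logic in that quote is purely mechanical — a multiple applied to annualized recurring revenue, with no reference to profitability. A minimal sketch, using the 2x multiple from the quoted example (real deals vary in both multiple and covenants):

```python
def arr_loan_amount(annualized_recurring_revenue: float, multiple: float) -> float:
    """Recurring-revenue lending: loan principal set as a multiple of ARR,
    regardless of whether the borrower actually makes a profit."""
    return annualized_recurring_revenue * multiple

# The example from the quote: $100M of recurring revenue at a 2x multiple.
print(arr_loan_amount(100_000_000, 2))  # 200000000.0, i.e. a $200M loan
```

The catch, of course, is the denominator: the loan only makes sense if that recurring revenue actually recurs.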
To be clear, this isn’t just for leveraged buyout situations, but I’ll get into that later. The point I’m making is that the setup is simple.
You see, nobody wants to talk about the actual SaaSpocalypse — the one that’s caused by the misplaced belief that any software company will grow forever.
Generative AI isn’t destroying SaaS. Hubris is.
Alright, let’s do this one more time.
SaaS — Software As A Service — is both the driving force and seedy underbelly of the tech industry. It’s a business model that sells itself on a seemingly good deal. Instead of paying upfront for an expensive software license and then again when future updates happen, you pay a “low” monthly fee that allows you to get (in theory) the most up-to-date (in theory) and well-maintained (in theory) version of whatever it is you’re using. It also (in theory) means that companies need to stay competitive to keep your business, because you’re committing a much smaller amount of money than a company might make from a single license.
Over here in the real world, we know the opposite is true. Per The Other Bubble, a piece I wrote in September 2024:
Core to the business models of multiple tech companies is the humble "SaaS" (software as a service) model, where you're charged a monthly amount on a per-user basis for some sort of cloud-based software that you neither own nor control. To be clear, there's nothing inherently wrong with SaaS. For businesses, it reduces costs by removing the need to run their own infrastructure and employ people to maintain it. It can run theoretically from anywhere and costs are measurable, predictable, and adaptable to the organization's size. And, crucially, businesses don’t have to pay upfront for a license — which can cost thousands, or tens of thousands — and spread the cost across the life of the application.
You can see the appeal. SaaS is one of the most dominant business models in tech, because it fits both the customer profile of "not wanting to run a bunch of infrastructure" and the tech industry's love of trapping people in distinct ecosystems that are hard to escape. While SaaS is generally a good deal for small-to-mid-sized companies, the inevitable sprawl of letting SaaS into your organization means that you're stuck with them.
While managing 100 accounts might be something that your organization can do alone, how are you going to manage 1000? Or 10,000? Managing SaaS applications is a time-consuming and tedious process for large businesses, and now there are even — you guessed it — SaaS applications that can do it for you. What happens if your organization is in Europe and needs to be GDPR-compliant? What happens if you need to make sure your data is held on a server entirely separate to the rest of the company's business? While some SaaS companies offer private cloud (where the application exists on its own dedicated AWS or Azure instance), giving companies the flexibility to choose where and how to store their data, many don’t.
This is the devil's deal of the Software-As-A-Service market (and SaaS spend is expected to crest over $230 billion in 2024). While the convenience of not having to build your own distinct software run on its own distinct hardware is great, or having to pay ungodly sums upfront for software licenses, you are also effectively outsourcing your entire organization's functionality to another company. With every new integration, every new seat, every new add-on their sales team makes you pay for and every new product they graciously train your staff to use, your organization becomes more burdened by the beast of SaaS.
The bigger you are — or the longer you stay — the more powerful the parasite becomes, eventually burdening your organization to the point that you are effectively only as innovative as the SaaS provider you're anchored to.
It’s hard to say exactly how large SaaS has become, because SaaS is in basically everything, from whatever repugnant productivity software your boss has insisted you need, to every consumer app now having some sort of “Plus” package that paywalls features that used to be free. Nevertheless, “SaaS” in most cases refers to business software, with the occasional conflation with the nebulous form of “the enterprise,” which really means “any company larger than 500 people.”
McKinsey says it was worth “$3 trillion” in 2022 “after a decade of rapid growth,” while Jason Lemkin and IT planning software company Vena say it has revenues somewhere between $300 billion and $400 billion a year. Grand View Research has the global business software and services market at around $584 billion, and the reason I bring that up is that basically all business software is now SaaS, and these companies make an absolute shit ton charging service fees. “Perpetual licenses” — as in something you pay for once and use forever — are effectively dead, with a few exceptions such as Microsoft Windows, Microsoft Office, and some of its server and database systems. Adobe killed them in 2014 (and a few more in 2022), Oracle killed them in 2020, and Broadcom killed them in 2023, the same year that Citrix stopped supporting those unfortunate enough to have bought them before they went the way of the dodo in 2019.
To quote myself again: in 2011, Marc Andreessen said that “software is eating the world.” And he was right, but not in a good way. Andreessen’s argument was that software should eat every business model:
…have you ever used a piece of software at a company you work for that sucks? Was it sold by Microsoft, Salesforce, Google, Atlassian or another big SaaS company? Well, it was probably bought by somebody who doesn't use the software, and it'll cost far more to remove than your annoyance matters. The burdensome presence of software like Microsoft Teams or Salesforce Platform in your life is a result of these organizations using brand recognition to sell into your organization, and once they're in there, their sales teams exist to continually find ways to increase the revenue of each user. The people making the decisions about the software you use — usually C-level executives — are doing so based on a sales pitch tailored to them and their preconceptions of what your job is rather than any firm experience, and thus they will sign year(s) long contracts based on a great sales pitch and the financials that "make sense."
Every single company you work with that has any kind of software now demands you subscribe to it, and the ramifications of them doing so are more significant than you’ve ever considered.
That’s because SaaS is — or, at least, was — a far more stable business model than selling people something once. Customers are so annoying. When they buy something, they tend to use it until it stops working, and if you made the product well, that might mean they only pay you once.
SaaS fixes this problem by giving them only one option — to pay you a nasty little toll every single month, or ideally once a year, on a contractual basis, in a way that’s difficult to cancel.
Sadly, the success of the business software industry turned everything into SaaS.
Recently, I tried to cancel my membership to Canva, a design platform that sort of works well when you want it to but sometimes makes your browser crash. Doing so required me to go through no less than four different screens, all of which required me to click “cancel” — offers to give me a discount, repeated requests to email support, then a final screen where the cancel button moved to a different place.
This is nakedly evil. If you are somebody high up at Canva, I cannot tell you to go fuck yourself hard enough! This is a scummy way to do business and I would rather carve a meme on my ass than pay you another dollar! It’s also, sadly, one of the tech industry’s most common (and evil!) tricks.
Everybody got into SaaS because, for a while, SaaS was synonymous with growth. Venture capitalists invested in businesses with software subscriptions because it was an easy way to say “we’re gonna grow so much,” with massive sales teams that existed to badger potential customers, or “customer success managers” that operate as internal sales teams to try and get you to start paying for extra features, some of which might actually be useful rather than just helping somebody hit their sales targets.
The other problem is how software is sold. In the excellent Brainwash An Executive Today, Nik Suresh broke down the truth behind a lot of SaaS sales — that the target customer is the purchaser at a company, who is often not the end user, meaning that software is often sold in a way that’s entirely divorced from its functionality. This means that growth, especially as things have gotten desperate, has come from conning somebody with money out of it rather than studiously winning a customer’s heart.
And, as I’ve hinted at previously, the only thing that grows forever is cancer.
In today’s newsletter I am going to walk you through the contraction — and in many cases collapse — of tech’s favourite business model, caused not by any threat from Large Language Models but the brutality of reality, gravity and entropy. Despite the world being anything but predictable or reliable, the entire SaaS industry has been built on the idea that the good times would never, ever stop rolling.
I guess you’re probably wondering why that’s a problem! Well, it’s quite simple (emphasis mine):
The Apollo co-president pointed to the period from 2018 to 2022, when software accounted for 30% to 40% of the private market. During that era, investors assumed nearly 100% retention rates and minimal disruption risk, leading many small to medium-sized software businesses to be taken private based on assumptions that now appear questionable.
That’s right folks, 40% of PE deals between 2018 and 2022 were for software companies — the very same period when venture capital fund returns got worse. Venture and private equity have piled into an industry they believed was taking off just as it started to slow down.
The AI bubble is just part of the wider collapse of the software industry’s growth cycle.
This is The Hater’s Guide To The SaaSpocalypse, or “Software As An Albatross.”