Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

🔮 Breaking down the job market shake-up & the new career playbook

2025-11-15 17:02:01

Something important is happening with the labor market.

US employers announced over 153,000 job cuts in October, the highest monthly total in more than two decades. Amazon announced about 14,000 corporate job cuts as it pivots towards AI-driven operations. At the same time, research shows that entry-level opportunities are shrinking and new entrants to the job market have it harder than most.

To further make sense of what’s happening, I spoke with Ben Zweig, economist and CEO of Revelio Labs. His team analyzes millions of worker profiles to track hiring and job flows – so he sees data most people don’t.

Skip to the best part

  • (08:39) “The Canary in the Coal Mine”

  • (13:01) How AI anticipation is harming the job market

  • (27:33) Why large companies struggle to reorganize

  • (39:11) What entry-level workers need to do

Watch on YouTube

A new career playbook

Ben and I cross‑checked Revelio’s data with what I’m hearing on the ground. We don’t normally do career advice – and our audience isn’t entry‑level. But many of you have kids and mentees stepping into this market, and the data is too relevant to skip. If you’re advising a new grad, here’s the concise playbook.

First, understand what’s going on:

  1. Entry-level roles in AI-adjacent fields are contracting.

  2. Managers are risk-averse because they expect workflows to change again next quarter. Firms don’t want to hire someone into a process they know will be redesigned.

How to break into an AI‑shaped job market

  1. Ship end‑to‑end projects: choose or create multi‑step projects with real stakeholders; practice owning the plan and delivering it to the finish line. If AI takes on more of the execution work, the value humans add shifts to coordinating those tasks – the orchestration: deciding what needs to be done, in what order and with which tools, and then keeping the project moving.

Read more

The split reality of AI: rising productivity, flat growth and where’s the beef?

2025-11-15 07:09:21

In today’s live, I explored why AI feels transformative for individuals but frustratingly slow at the organisational level. It’s the exponential gap that I dissect in my book: organisations struggle to update old processes in the face of rapidly improving technology.

Enjoy!

Azeem

🤔 Unpicking OpenAI’s real revenues

2025-11-14 21:37:45

The blogger Ed Zitron has published some detailed extracts of OpenAI and Microsoft’s commercial arrangements. It is a brilliant piece of investigative journalism, also picked up by the Financial Times. It includes details of how much OpenAI paid to Microsoft for Azure hosting and as part of its revenue share with the firm, as well as the inference bill OpenAI faces.

Since so much of the US stock market hangs on what OpenAI’s revenues really are and how fast they are growing, this is a great insight. The market is jittery, with the Nasdaq dropping 2% in a couple of days. Over the past month, infrastructure players Oracle and CoreWeave have seen their stock prices drop 27% and 42% respectively. These numbers matter. Some read them as a sign that the boom is rolling over. I’m not convinced. Interpreted properly, the leaks point to a very different picture.

In this short note, I’ll dig into what the leaks might tell us about OpenAI’s sales.

Subscribe now

The deal

We already know some details of the deal between the two firms:

  1. OpenAI pays Microsoft a 20% share of much of its revenue. What’s not crystal clear is whether the 20% applies to literally all OpenAI revenue or only to specific product lines (ChatGPT and the API); I assume it covers all of it.

  2. When Microsoft’s Azure sells certain OpenAI services, the startup receives a cut of that revenue. In absolute terms this is much smaller than the 20% slice Microsoft takes.

  3. There is also a profit-sharing agreement which isn’t relevant here.

The leaked data Zitron received should allow us to estimate OpenAI’s revenues and check whether they are consistent with previously reported figures.

I’ve not seen the documents, but Zitron points out that they show the “revenue share payments” Microsoft received in various quarters. That is, it’s a cash-basis number.

The simplest reading would be to multiply those revenue share payments by five (the inverse of the 20% revenue share) and conclude that that is OpenAI’s revenue for the quarter. In reality, this would understate revenues.

It is more likely that OpenAI has to reconcile a quarter’s revenues, agree the reconciliation with Microsoft, and then make a payment according to standard payment terms. We don’t know the exact details, but in large OEM / platform deals it is common to have quarterly reconciliations and 30-90-day payment terms. The upshot is that reconciliation plausibly means a one- to two-quarter cash lag.

In other words, if Microsoft received a payment in the second quarter of 2025, it most likely reflects OpenAI’s revenues in the first quarter of 2025 or the fourth quarter of 2024, or some mix of both. In addition, the cross-payments from Microsoft selling OpenAI services via Azure, while small, will plausibly arrive with a similar, if not greater, delay.

The objective here is to reconcile the cash payments that the Zitron leaks describe with the revenue that OpenAI can recognise in each month or quarter. Cash will follow revenue recognition with a delay.

The five scenarios

The simple model below clarifies this. The sums Microsoft received from OpenAI are disclosed in the leak. We then impute OpenAI’s revenue based on five different scenarios. These provide potential floors for revenues in that period:

  • The “Naive” scenario, which assumes OpenAI pays Microsoft everything it owes for a calendar quarter within that same quarter, and that all the payments represent OpenAI’s own derived revenues, with nothing netted off against what Microsoft owes OpenAI from Azure sales.1

  • Paid within three months, which allows for a reconciliation period at the end of the quarter and short payment terms. Once again, we assume no Azure sales owing to OpenAI. (In other words, this shows a one-quarter lag between OpenAI generating revenue and Microsoft receiving its cut.)

  • Paid within six months, which allows for a longer reconciliation period and payment terms. Once again, we assume no Azure sales owing to OpenAI.

  • Low Azure sales: In this model, we assume a six-month settlement period and that Microsoft Azure-based sales comprise some 2% of OpenAI’s overall revenue and need to be netted off.

  • High Azure sales: the same settlement period, but the Azure sales assumption rises to 10% of OpenAI’s overall revenue.

OpenAI’s revenues have been leaked at roughly $3.7 billion for 2024.

In most scenarios, once you adjust for settlement lags, the implied full-year revenues land within a reasonable range of that number. Of course, the naive model leaves an enormous gap, but it seems unrealistic to assume instant settlement.

Our own revenue model for OpenAI sits somewhere toward the lower end of these scenarios.
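
To make the arithmetic concrete, here is a minimal sketch of the imputation. The payment figures are placeholders rather than the leaked numbers, and the treatment of the Azure netting is an assumption on my part; the point is only to show how a cash payment, a settlement lag and an Azure-share assumption combine into an implied revenue figure.

```python
# Minimal sketch of the five scenarios above. All cash figures are
# illustrative placeholders, not the leaked numbers.

REV_SHARE = 0.20  # Microsoft's cut of OpenAI's revenue

def implied_revenue(payment_bn, azure_share=0.0):
    """Impute the revenue behind a single revenue-share cash payment.

    Assumes, simplistically, that the cheque equals 20% of OpenAI's revenue
    minus a netted-off amount equal to `azure_share` of that revenue.
    (How the netting actually works is not public.)
    """
    return payment_bn / (REV_SHARE - azure_share)

def attribute(payments_by_quarter, lag_quarters=0, azure_share=0.0):
    """Shift each payment back by the settlement lag and impute revenue.

    lag_quarters: 0 = naive (paid in-quarter), 1 = paid within three months
    of the quarter end, 2 = paid within six months.
    """
    return {
        quarter - lag_quarters: implied_revenue(paid, azure_share)
        for quarter, paid in payments_by_quarter.items()
    }

# Hypothetical cash received by Microsoft, in $bn, keyed by quarter index.
payments = {1: 0.4, 2: 0.5, 3: 0.7, 4: 0.9}

naive      = attribute(payments)                                    # scenario 1
three_mo   = attribute(payments, lag_quarters=1)                    # scenario 2
six_mo     = attribute(payments, lag_quarters=2)                    # scenario 3
low_azure  = attribute(payments, lag_quarters=2, azure_share=0.02)  # scenario 4
high_azure = attribute(payments, lag_quarters=2, azure_share=0.10)  # scenario 5
```

Summing the imputed quarters that fall within a calendar year gives the full-year figure to set against the reported ~$3.7 billion for 2024.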

Read more

📈 Data to start your week

2025-11-10 22:02:53

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

Subscribe now


  1. Hiring tilts to AI ↑ So far this year, overall job postings have dropped ~8% YoY, but AI roles are on the rise. Machine learning engineer postings grew almost 40%.

  2. Hiring signals ↓ Relative to the pre‑LLM period, workers in the highest ability quintile are hired 19% less often. Traditional CV and cover letter screening is no longer fit for purpose.

  3. Neoclouds ease compute scarcity ↑ Microsoft has committed over $60 billion to neoclouds, including gaining access to over 200,000 Nvidia GB300s. This is on par with its total capex spend in 2024.

Read more

🔮 Exponential View #549: Volatile markets; infinite compute; Kimi K2’s frontier leap; orbital computing++

2025-11-09 09:28:35

Good morning from New York City!

First, a couple of podcasts for your weekend. In my latest episode, I zoom out to explain why we’re not just in an “AI moment”, but at the start of an economy built on effectively infinite compute:

You can also catch me in conversation with a16z’s discussing the AI bubble question:

Subscribe now


All-in

Big Tech’s bond binge has topped $200 billion; the market is all-in. We have data center debt, private credit warehousing and AI-linked ETFs. OpenAI projects $100 billion in revenue by 2027 – when we broke down the numbers, we were surprised to find that this is not out of the question.

The market remains on tenterhooks. When OpenAI’s Sarah Friar cack-handedly talked about government loan guarantees, it freaked the market out. Red was everywhere, and some $800 billion was knocked off tech stocks. It was the worst week since, wait for it, April 2025. That remains in line with the typical volatility of growth stocks, or “move along”, as Obi-Wan Kenobi instructed.

Of course, how you read this depends on what you think AI is. If you believe that we are in the midst of a productivity boom, it’s a no-brainer. But if you think AI (specifically the LLM modality) is too unreliable to be useful, the risks simply shift to society and compound.

The Financial Times’s Alphaville, generally a sceptical commentator on AI, unwittingly points out that about a sixth of American firms are already seeing “material contribution to earnings from [AI] deployments.” Reaching that proportion just a couple of years into the cycle is a remarkable achievement. If that 16% is still below 25% this time next year, AI is falling short of expectations. If it is much above 33%, it is more than proving its mettle.1

Despite this, household equity exposure is at a record high and The Economist estimates that if AI tech valuations were to crack, there would be a 2.9% hit to GDP via consumption from wealth effects alone. The US economy leans heavily on AI.
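
For intuition on the wealth-effect channel (this is not The Economist’s model, whose assumptions I have not seen), the consumption hit is roughly the marginal propensity to consume out of wealth times the equity losses, expressed as a share of GDP. A toy sketch with purely hypothetical inputs:

```python
# Toy wealth-effect arithmetic. These inputs are hypothetical, chosen only to
# illustrate the channel; they are not The Economist's assumptions.
mpc_out_of_wealth = 0.03     # dollars of spending lost per dollar of wealth lost
equity_drawdown_tn = 20.0    # hypothetical fall in household equity wealth, $tn
us_gdp_tn = 30.0             # rough US nominal GDP, $tn

consumption_hit = mpc_out_of_wealth * equity_drawdown_tn / us_gdp_tn
print(f"GDP hit via consumption ≈ {consumption_hit:.1%}")  # ≈ 2.0%
```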

AI bubble watch – weekly update

📈 Quick reading: A boom with some bubble flavoring. One of the five indicators is red, well within our boom range.

What moved this week:

  • Reported revenue at leading labs is ahead of the public benchmarks. OpenAI expects to make more than $13 billion in sales this year and $100 billion by 2027. Its plan to spend $1.15 trillion on compute implies the company would need roughly $577 billion in 2029 revenue to self-fund operations, suggests Tom Tunguz, an investor. Anthropic projects $70 billion in revenue and $17 billion in free cash flow by 2028, with margins approaching 77%.

  • Hyperscaler spending is rising fast: Morgan Stanley now expects data center capex to go from $245 billion (2024) to ~$700 billion by 2027. This represents a substantial upward revision of the firm’s estimates from February this year.

    Yet that report underplays Meta’s intentions: late on Friday the company confirmed that it would invest up to $600 billion in AI and other infrastructure by 2028. By our quick reckoning, this would nudge the 2027 estimates for capex spend up a further $50 billion or so, towards $800 billion. As most of this would be in the US, it would drive our Economic Strain gauge well past 2%, deep into red warning territory, within two years (see the back-of-envelope sketch after this list).

  • This banger from Michael Mauboussin, also at Morgan Stanley, points out that US company debt-to-total-capital ratios are at their lowest since 1970.

  • Credit default swaps on CoreWeave – the price of insuring its corporate debt – jumped to 500 basis points. That’s high, but once again well within the bounds of companies with single-B ratings, like CoreWeave. For comparison, Swiss bank Credit Suisse saw its CDS pricing jump to 3,500 bps before its forced rescue. Lehman’s CDS was as high as 790 bps when it collapsed. Still a way to go.
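
The back-of-envelope behind “well past 2%”, referenced above. It assumes the Economic Strain gauge is essentially annual AI data-centre capex as a share of US GDP, which is my reading rather than the dashboard’s published definition, and it uses round-number inputs:

```python
# Back-of-envelope for the Economic Strain gauge. Assumes the gauge is roughly
# annual AI data-centre capex as a share of US GDP (my reading, not the
# dashboard's published definition); inputs are round numbers.
capex_2027_bn = 800.0       # ~$700bn Morgan Stanley path plus Meta's extra spend
us_share = 0.9              # assume most of the spend lands in the US
us_gdp_2027_bn = 31_000.0   # rough US nominal GDP by 2027, $bn

strain = capex_2027_bn * us_share / us_gdp_2027_bn
print(f"Economic Strain ≈ {strain:.1%}")  # ≈ 2.3%, past the 2% threshold
```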

Open the dashboard

If you would like to receive our Boom or Bubble update emails separately, apply for early access here. If you prefer to sell on the news, don’t apply. Nothing in this or any other email I send should be considered financial or market advice.


Kimi K2 is thinking

Back in July, we praised the release of the open-source Kimi K2 model from China as a milestone:

In today’s AI, DeepSeek plays the Sputnik role (we called it in December 2024) as an unexpectedly capable Chinese open‑source model that demonstrated a serious technical breakthrough.

Now AI has its Vostok 1 moment. Chinese startup Moonshot’s Kimi K2 model is cheap, high-performing and open-source. For American AI companies, the frontier is no longer theirs alone.

This week, Moonshot launched an open-source reasoning model, Kimi K2 Thinking, which quickly proved to be one of the best in the industry (with training costs estimated at around $4.6 million, a fraction of its rivals’). Kimi K2 Thinking scored higher than GPT-5 and Claude Sonnet 4.5 on the Humanity’s Last Exam benchmark. Its agentic search outperforms OpenAI’s and Anthropic’s best.

Read more

🔮 The infinite-compute economy has already started

2025-11-08 01:40:57

The cloud boom, the chip frenzy, the race to build ever-larger data centres – surely we are simply riding the hype cycle of GPTs and chatbots?

That view is far too small.

What is happening is a structural re-engineering of the economy: from an economy that uses computation to one that is built on computation. Global computing power has grown by roughly 11 orders of magnitude since 1972 – a 62% compound annual increase across five decades. Every technological leap, from mainframes and microprocessors to PCs and smartphones, produced more demand for compute, not less. Now we are entering the next curve with AI. Agentic systems will run continuously, not just when a human types a prompt; we foresee a billion-agent future.
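
Those two figures describe the same curve; a quick check of the arithmetic (assuming the window runs from 1972 to 2025):

```python
# Check that "11 orders of magnitude since 1972" and "~62% compound annual
# growth" are consistent, assuming a 1972-2025 window.
years = 2025 - 1972          # 53 years
growth_factor = 10 ** 11     # 11 orders of magnitude
cagr = growth_factor ** (1 / years) - 1
print(f"CAGR ≈ {cagr:.0%}")  # ≈ 61%, in line with the ~62% quoted
```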

This may be a new economic fabric. And that, quietly, is the real story that I dissect in today’s episode.