2025-06-23 21:51:02
Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.
Today’s edition is brought to you by Attio.
Attio is the CRM for the AI era. Sync your emails and Attio instantly builds your CRM with all contacts, companies, and interactions enriched with actionable data.
American Shenzhen ↑ SoftBank founder Masayoshi Son is pitching a $1 trillion “Project Crystal Land” in Arizona to create a US-based AI-and-robotics manufacturing hub—double the price tag of the planned “Stargate” data center.
Startup goals ≠ worker goals. 41% of the tasks Y Combinator startups target are ones workers do not want automated. What do workers want? AI that automates low-value, repetitive chores.
Will AI be the productivity boost we need? Labor productivity growth in advanced economies has slowed from 2% annually in the 1990s to just 0.8% today.
AI trust gap. More than twice as many Chinese citizens (83%) as Americans (37.5%) trust that AI systems serve the best interests of society.
ChatGPT app surpasses X. In just 2.5 years, ChatGPT’s mobile app has overtaken X (formerly Twitter) in daily active users.
China’s auto market shifts home. Foreign carmakers held a dominant 76% share of China’s market in 2010; by 2030, their slice is projected to collapse to just 24%.
YouTube Shorts dominates attention. YouTube Shorts, boasting 200 billion daily views, now commands a staggering 1% of all waking human hours globally.
SAT goes snack-able. The SAT slashes its reading passages by up to 90% (from 500-750 words to 25-150 words), trading depth for convenience to appeal to shorter attention spans.
US depends on immigration. New census data shows the US-born workforce is shrinking, meaning future labor-force growth depends on up-skilling and immigration.
Thanks for reading!
2025-06-23 10:29:16
Hendrik Bessembinder is an economist who has conducted fascinating research on the determinants of stock market returns in the United States over the past century. His conclusions are pretty simple. Most companies (and by extension most managers) destroy value, and stock market returns are overwhelmingly concentrated: just 2% of public companies drove 90% of wealth creation.
I was reminded of the work after a venture investor retweeted it with the observation that “long-term public market investing is venture capital investing, whether you like it or not.”
The outcomes are more stark than that. Last year, while helping some asset allocators think through the AI cycle, I analysed Bessembinder’s results through my frameworks.
I’ll share a brief excerpt here, as it’s super relevant. Just two caveats: this analysis is over a year old, and Bessembinder’s data only ran to the end of 2022. Nvidia alone has reached a $3 trillion market cap since then, which only strengthens the case I made.
Regarding concentration, 2% of firms (approximately 600) accounted for 90% of all wealth creation. If your portfolio missed that 2% and you had backed the rest of the market, you’d have lost money. And the concentration continues: 23 firms, less than 0.1% of the sample, accounted for 30% of returns.
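A quick back-of-the-envelope check of these proportions, using only the figures above (the overall sample size is inferred from “2% of firms, approximately 600”):

```python
# Sanity-check the concentration figures quoted from Bessembinder's results.
# Sample size is inferred from the text: 600 firms represent 2% of the sample.
sample_size = 600 / 0.02          # implies roughly 30,000 listed firms

elite_firms = 23                  # the hyper-concentrated cohort
elite_fraction = elite_firms / sample_size

print(f"Implied sample: {sample_size:,.0f} firms")
print(f"Elite cohort: {elite_fraction:.3%} of the sample, yet 30% of returns")
```

As the output confirms, 23 firms out of an implied ~30,000 is under one-tenth of one percent of the sample, consistent with the claim above.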
The majority of top performers share a common trait: they are built upon the breakthrough general-purpose technology of the era. Consider the general-purpose technologies of the past 120 years or so: the internal combustion engine, telephony, electricity, and computing. As the table below shows, returns skew heavily towards those firms based on GPTs.
2025-06-22 11:53:15
Hi, it’s Azeem.
Could large language models form the basis of a new operating system? Meanwhile, Mark Zuckerberg isn’t guessing; he is in beast mode, willing to spend whatever it takes to secure Meta’s future. The battle to dominate this new computing paradigm isn’t about incremental improvements—it’s about survival. Here’s what you need to know.
We’re entering a new era of computing where LLMs could become the operating systems, argues Andrej Karpathy. His latest keynote frames this clearly: software has moved from hand-coded logic (1.0), through learned neural weights (2.0), and now into prompt-driven LLMs (3.0). His provocation is simple: treat the model as an operating system, a 1960s mainframe with superpowers and cognitive blind spots.
The open question is how you best interact with this operating system. For a mainframe in the 1960s, it was a punch card. Then came the command line. Neither particularly fun nor intuitive. For LLMs, we have already seen interesting, thought-provoking experiments – NotebookLM, for instance, led by Steven Johnson, is in his words “a conduit between my own creativity and the knowledge buried in my sources – stress-testing ideas, spotting patterns, nudging my memory.”1 Another example, noted by Karpathy himself, uses Gemini 2.5 Flash to write the code for a UI and its contents, based solely on what appears on the previous screen. These are still experimental products, and many more will follow. But expect a UX reset as abrupt as DOS giving way to Windows.
If today’s large-language-model clusters resemble the 1960s mainframe—powerful but forbidding—the next strategic prize is the equivalent of the original Macintosh: a human-sized device that makes an AI operating system feel intimate. The economics now make that leap plausible. A cluster of eight M-series Macs – about the cost of a family car – can now run frontier models that would have cost about $300,000 on Nvidia H100s just a year ago. Chinese upstart MiniMax even boasts that its new M1 model bested DeepSeek while using only one-third the compute, a data point that hints at personal AI slipping into everyday reach.
Apple smells this inflection – as I discussed in my Live last week. Its “Liquid” interface—dismissed by analysts as animated eye‑candy—looks more like a prototype for ambient computing: an assistant that listens all day, whispers answers, and leaves the screen dark.
Early experiments, however, have struggled to find their footing. Humane’s “AI pin” and Rabbit R1 evoke memories of ambitious yet failed early computing form factors, like Xerox’s groundbreaking but commercially unsuccessful Alto. Perhaps Sam Altman’s and Jony Ive’s upcoming product will chart a different course. What we do know is that these personal AI devices are trending toward local execution.
Just as Apple understood in the 1980s that a GUI demanded the form factor of the Mac, the form factor must again evolve to match this new operating system.
Intel co-founder Andy Grove popularised the phrase “only the paranoid survive”. It is the strategic gait you need during a “strategic inflection point”, when the fundamentals of a market change so much that adaptation or obsolescence are the only paths.
So, what to make of Zuckerberg’s $14bn deal to acquire less than half of Scale AI, placing its 28-year-old founder, Alexandr Wang, in charge of his firm's ambitious new “super-intelligence” lab? Is it Grovian paranoia? Or a desperate last gasp, the final lurch of a singleton to pair up before time runs out?
It’s a bit of both—one-third strategic boldness mixed with two-thirds desperation. Zuck is bold: his move towards the metaverse four years ago was just that. It was just wrong.
AI is the real deal, and Facebook (as it was then) had made strides in pursuing the technology. But I’ve heard for more than a year that Meta’s Llama team was unhappy and underperforming, and just a couple of months ago, Joelle Pineau, its boss, left.
Cue Meta falling behind the other major firms in offerings and mind share. And Zuck pursuing Perplexity, Ilya Sutskever’s Safe Superintelligence and Mira Murati’s Thinking Machines in an attempt to catch up. Combined with astonishing pay packets for AI researchers, Meta looks desperate.
AI is a “strategic inflexion point”, per Grove. It doesn’t matter whether the optics are desperate or brave; what matters is being able to play the new game or face the lingering irrelevancy that technology bestows upon former titans that don’t grok the shift. For Zuckerberg, there is almost no price he won’t pay to play in the next innings.
The cost of routine cognitive tasks is collapsing toward zero. That single shift dissolves scarcity-based business models and severs the old link between hiring talent and corporate growth.
OpenAI’s ex-research chief Bob McGrew argues the scarcity era for knowledge work is ending: as routine cognition prices out at raw compute cost, “agents” will gut the hire-more-brains-to-grow playbook and push value toward proprietary data, deep context and durable networks. Amazon CEO Andy Jassy is already bracing—generative agents, he says, will shrink the corporate back office. The near-term story isn’t robo-layoffs so much as a violent repricing of skills: expense reports and log scraping vanish first, while synthesis, judgment and relationship-building command a rising premium. For firms, moats now depend less on owning intelligence than on integrating it uniquely and securely; advantage will flow to those who harden their cyber-posture, accelerate agent pilots and turn abundant cognition into defensible leverage.
See also:
For readers wondering how to stay ahead, 80,000 Hours offers a detailed checklist.
The UK government will train 7.5 million workers in AI by 2030 – half of Britain’s knowledge workforce.
Honda’s 6.3-meter reusable rocket leapt nearly 300m before landing this week. The Japanese government aims to build an ¥8 trillion ($55 billion) space sector – relative pennies compared with SpaceX, which already carries a $350 billion valuation.
Iran is self-sabotaging its internet infrastructure to slow Israeli attacks. The nationwide shutdown during heightened Israel tensions shows that kinetic conflict now triggers pre-emptive digital blackouts.
Korean researchers say a single high-end AI chip could draw 15,000 watts by 2035 – up from about 800 W today. At that scale, electricity and cooling – not chip fabrication – become the main growth bottlenecks. Missing from their analysis, though, are novel approaches to energy-efficient computing, such as reversible architectures.
Waymo shows that motion-forecasting accuracy in its autonomous vehicles follows a power law with training compute – scaling data and parameters lets the cars cope with trickier road chaos.
Solar-plus-storage can now undercut grid prices for heavy industry – provided the plants can be run intermittently. Terraform plans 1 MW micro-plants on exactly that model at $200 per kW, claiming a fivefold green dividend for steel, ammonia and related sectors.
China’s next five‑year plan (2026–30) makes building its own chip‑making machines a top goal. That includes the EUV lithography tools that print the chip patterns, carve the circuits and check they meet specs—equipment usually produced by Dutch giant ASML. By pouring government money into this gear, Beijing aims to cut its reliance on foreign suppliers and weaken US export‑control pressure in just one product cycle. It will be hard; EUV tools are probably the most advanced machines you can buy today.
See here for my conversation with Steven where we discuss this topic.
2025-06-21 11:32:19
In 1954, Dwight Eisenhower articulated a truth that most of us live viscerally: “What is important is seldom urgent and what is urgent is seldom important.” Long before the first email was sent, professionals struggled with this paradox—the ringing telephone, the knock on the door, the crisis meeting that devoured afternoons earmarked for deep thinking. Other people’s priorities have always had a peculiar talent for masquerading as emergencies.
Today, that masquerade has become a 24-hour, 7-day carnival. You unlock your phone to check the weather; 205 taps later, it’s lunchtime and the clouds have rolled in anyway. The digital age didn’t create the urgency trap—it has simply mechanized, digitized and exponentialized it. What was once an occasional ambush is now a constant barrage. The average worker now processes 117 emails daily while smartphones deliver 146 push notifications (181 for Gen Z). Microsoft’s telemetry reveals an interruption every two minutes during work hours. We’ve become Sisyphus, but instead of pushing one boulder uphill, we’re juggling dozens while climbing.
The productivity-industrial complex has responded with libraries of solutions: Getting Things Done, Atomic Habits, The 4-Hour Workweek. We’ve downloaded the apps, bought the planners, attended the seminars. Eat that pomodoro frog.
Yet the to-do lists get longer, the search for clarity grows more frantic, and we remain owned by our inboxes.
Why? Because these systems demand heroic willpower precisely when our cognitive resources are most depleted. Each interruption costs 23 minutes of refocusing time—a tax we pay dozens of times daily. Email and Slack haven’t invented the phenomenon of other people’s priorities invading our day; they’ve simply made it frictionless. Without a mediator between us and the demands of others, we remain the bottleneck in our own lives.
2025-06-20 02:17:19
We are only six months into the year, yet AI has already outpaced two decades of ordinary tech cycles. In January, DeepSeek shook the world. Google, OpenAI and Anthropic quickly followed with next-generation models, ones which could command software tools on the internet.
The first wave of agents then appeared: ManusAI—until recently an unknown start-up—unveiled an agent that can autonomously tackle complex tasks, while Anthropic launched Claude Code, a multi-agent system many developers call a dream. The lab race is heating up, but two years after ChatGPT’s debut, the macro-productivity numbers remain stubbornly flat.
This is the capability-absorption gap: frontier labs are racing ahead faster than the traditional economy can keep pace. The current generation of AI is already powerful enough to remake how we work, yet firms are absorbing these capabilities far more slowly than labs are enhancing them. I don’t blame them. Prices halve every six months; models remain stochastic, which complicates reliability; and managerial know-how is scarce.
In today’s post, we will explore this gap and what businesses can do about it.
The scoreboard is unambiguous: nearly every public benchmark has moved upward since last year, making previous standards of excellence look decidedly ordinary.
Personal experience underscores this vividly. Occasionally, I run models locally on my laptop, particularly when trapped on flights with poor Wi-Fi. These local models approximate the capabilities of GPT-4 roughly a year ago—lacking the reasoning and tool-use features we have witnessed since. Using these models offline now feels painfully limited compared with the current state of the art.
Practical gains are clear, especially in real-world workflows. In software engineering, AI-powered coding tools have swiftly evolved from basic code hints to managing entire processes—planning, writing, testing and submitting finished work for human review. Systems such as Claude Code and OpenAI’s Codex now automate these tasks end-to-end, reducing humans to supervisory roles. Though not flawless, their rapid improvement streamlines workflows by converting tedious coding into manageable reviews.
These advancements are possible because models can now handle increasingly complex tasks. Consider METR’s latest agent-endurance benchmark, which measures how long AI models can sustain intricate multistep workflows. On this test, top-performing models last three to five times longer than they did only six months ago. This dramatic improvement signals deeper planning capabilities and more reliable tool use.
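The growth rate implied by those figures can be made concrete. Assuming smooth exponential improvement (an assumption, not something METR states), a three- to fivefold gain over six months corresponds to a doubling time of roughly three to four months:

```python
import math

# If capability multiplies by g over 6 months and growth is exponential,
# the doubling time is 6 / log2(g) months.
for growth in (3, 5):
    doubling_months = 6 / math.log2(growth)
    print(f"{growth}x in 6 months -> doubling every {doubling_months:.1f} months")
```

Even at the conservative end of the range, capability on this benchmark would double more than three times a year.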
In short, the capability curve remains steep—and it continues to climb rapidly across multiple dimensions.
Perhaps even more crucially, the unit costs of AI are plunging exponentially. This is central to my definition of exponential technology—not solely about improving performance but about rapidly collapsing costs for a given capability.
Consider ChatGPT’s inference prices, which have roughly halved every six months—outpacing even the historical cost declines in DRAM and solar power. This steep drop stems from relentless algorithmic improvements and fierce competition among providers. Lower prices in turn drive wider adoption: the cheaper an AI agent becomes, the more extensively it can be deployed. (Though clearly there is still room for improvement—I recently burned through $80 of Replit credits in a single evening.)
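The compounding effect of that halving schedule is easy to understate. A minimal sketch, using a hypothetical starting price purely for illustration:

```python
# If inference prices halve every six months, the cost after t years is
# cost0 * 0.5 ** (2 * t). The $1.00 starting price is hypothetical.
cost0 = 1.00  # $ per million tokens today (illustrative only)
for years in (1, 2, 3):
    cost = cost0 * 0.5 ** (2 * years)
    print(f"after {years} yr: ${cost:.4f} per million tokens "
          f"({cost0 / cost:.0f}x cheaper)")
```

Three years of six-month halvings cuts costs 64-fold; a task worth automating only at scale today becomes trivially cheap on that trajectory.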
Importantly, even if AI labs suddenly stopped frontier research and model scaling overnight, these cost improvements would continue accumulating. In my back-room conversations, experts are roughly split 50/50 on whether scaling laws alone can carry us all the way to AGI—whatever that ultimately entails. If scaling alone is not sufficient, most believe we might still be only one or two significant conceptual breakthroughs away.
Yet debates about scaling do not fundamentally change the core argument. Future capability enhancements, while valuable, are additive—not prerequisites—for substantial economic transformation. Existing models already surpass what most enterprises can effectively absorb or leverage. At Exponential View, we are still figuring out how to redefine our workflows around o3; I expect most organizations are still navigating how to integrate GPT-4o fully.
McKinsey reports that nearly every company is investing in AI, yet only 1% claim they have fully integrated it into workflows and achieved meaningful business outcomes. And honestly, even that 1% is probably just PR.
This sluggish absorption, rather than frontier innovation, is the main reason we can see the AI boom everywhere except in the economic statistics. That remains true even as AI startups rack up millions to tens of billions in revenue at record speed. But the global economy is huge—about $100 trillion a year—so that is a lot of OpenAIs. It will take several years, not a few quarters, before even the fastest-growing AI startups contribute one percent to global income. (Incidentally, I have little doubt they will, and that AI-native newcomers will replace many incumbents across industries over the next two decades, just not in the next two years.)
The rest of the economy is dominated by incumbents. Those incumbents are laced with friction. They need to tackle it. Three distinct institutional frictions underpin this capability-absorption gap, and each one is structural rather than technological.
2025-06-16 23:35:59
Hi all,
Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.
AI accelerator boom ↑ AMD CEO Lisa Su projects the AI accelerator market will surge beyond $500 billion by 2028, about four times today’s size.
OpenAI’s rapid ascent ↑ ChatGPT’s explosive growth has propelled OpenAI to $10 billion in annual recurring revenue in under three years.
Efficient chat ↑ The average ChatGPT query uses only 0.0003 kWh of energy, the same as a Google search in 2009.
UK AI investment ↑ Prime Minister Starmer has pledged £1 billion for AI infrastructure alongside £187 million for skills programmes—modest compared with the multi-billion-dollar AI spending by Gulf states.
AI-generated revenue ↑ Brazil’s largest ed-tech company, Qconcursos, earned $3 million in 48 hours after launching a premium app built with the AI low-code platform Lovable.
YC’s AI bet ↑ AI-agent startups now make up nearly half (47%) of Y Combinator’s Spring 2025 batch.
Stablecoin surge ↑ US Treasury Secretary Scott Bessent says dollar-linked stablecoins could reach a $2 trillion market, 28% of today’s US money-market funds.
China leads science ↑ The Nature Index 2025 ranks China (32,122 share points)1 well above the United States (22,083), widening the gap fourfold in just a year.
Methane munchers ↓ Windfall Bio’s methane-eating microbes eliminated more than 85% of methane emissions from a California dairy farm.
Thanks for reading!
Share points measure a country’s or institution’s fractional contribution to high-quality research. Each paper counts as 1 point, divided equally among its authors. The total share is the sum of these fractions, reflecting real participation—not just paper counts.