Simon Willison's blog

Creator of Datasette and Lanyrd, co-creator of the Django Web Framework.

Is Claude Code going to cost $100/month? Probably not - it's all very confusing

2026-04-22 10:07:34

Anthropic today quietly (as in silently, no announcement anywhere at all) updated their claude.com/pricing page (but not their Choosing a Claude plan page, which shows up first for me on Google) to add this tiny but significant detail (the arrow is mine, and the change has already been reverted):

Screenshot of the Claude pricing grid - Compare features across plans. Free, Pro, Max 5x and Max 20x all have the same features, with the exception of Claude Code which is on Max only and Claude Cowork which is on Pro and Max only. An arrow highlights the Claude Code for Pro cross.

The Internet Archive copy from yesterday shows a checkbox there. Claude Code used to be a feature of the $20/month Pro plan, but according to the new pricing page it is now exclusive to the $100/month or $200/month Max plans.

Update: don't miss the update to this post - they already changed course a few hours after this change went live.

So what the heck is going on? Unsurprisingly, Reddit and Hacker News and Twitter all caught fire.

I didn't believe the screenshots myself when I first saw them - aside from the pricing grid I could find no announcement from Anthropic anywhere. Then Amol Avasare, Anthropic's Head of Growth, tweeted:

For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.

And that appears to be the closest we have had to official messaging from Anthropic.

I don't buy the "~2% of new prosumer signups" thing, since everyone I've talked to is seeing the new pricing grid and the Internet Archive has already captured a copy. Maybe he means that they'll only be running this version of the pricing grid for a limited time, which somehow adds up to "2%" of signups?

I'm also amused to see Claude Cowork remain available on the $20/month plan, because Claude Cowork is effectively a rebranded version of Claude Code wearing a less threatening hat!

There are a whole bunch of things that are bad about this.

If we assume this is indeed a test, and that test comes up negative and they decide not to go ahead with it, the damage has still been extensive:

  1. A whole lot of people got scared or angry or both that a service they relied on was about to be rug-pulled. There really is a significant difference between $20/month and $100/month for most people, especially outside of higher-salary countries.
  2. The uncertainty is really bad! A tweet from an employee is not the way to make an announcement like this. I wasted a solid hour of my afternoon trying to figure out what had happened here. My trust in Anthropic's transparency around pricing - a crucial factor in how I understand their products - has been shaken.
  3. Strategically, should I be taking a bet on Claude Code if I know that they might 5x the minimum price of the product?
  4. More of a personal issue, but one I care deeply about: I invest a great deal of effort (that's 105 posts and counting) in teaching people how to use Claude Code. I don't want to invest that effort in a product that most people cannot afford to use.

Last month I ran a tutorial for journalists on "Coding agents for data analysis" at the annual NICAR data journalism conference. I'm not going to be teaching that audience a course that depends on a $100/month subscription!

This also doesn't make sense to me as a strategy for Anthropic. Claude Code defined the category of coding agents. It's responsible for billions of dollars in annual revenue for Anthropic already. It has a stellar reputation, but I'm not convinced that reputation is strong enough to survive losing the $20/month entry point and jumping people directly to a $100/month subscription.

OpenAI have been investing heavily in catching up to Claude Code with their Codex products. Anthropic just handed them this marketing opportunity on a plate - here's Codex engineering lead Thibault Sottiaux:

I don't know what they are doing over there, but Codex will continue to be available both in the FREE and PLUS ($20) plans. We have the compute and efficient models to support it. For important changes, we will engage with the community well ahead of making them.

Transparency and trust are two principles we will not break, even if it means momentarily earning less. A reminder that you vote with your subscription for the values you want to see in this world.

I should note that I pay $200/month for Claude Max and I consider it well worth the money. I've had periods of free access in the past courtesy of Anthropic but I'm currently paying full price, and happy to do so.

But I care about the accessibility of the tools that I work with and teach. If Codex has a free tier while Claude Code starts at $100/month I should obviously switch to Codex, because that way I can use the same tool as the people I want to teach how to use coding agents.

Here's what I think happened. I think Anthropic are trying to optimize revenue growth - obviously - and someone pitched making Claude Code only available for Max and higher. That's clearly a bad idea, but "testing" culture says that it's worth putting even bad ideas out to test just in case they surprise you.

So they started a test, without taking into account the wailing and gnashing of teeth that would result when their test was noticed - or accounting for the longer-term brand damage that would be caused.

Or maybe they did account for that, and decided it was worth the risk.

I don't think that calculation was worthwhile. They're going to have to make a very firm commitment along the lines of "we heard your feedback and we commit to keeping Claude Code available on our $20/month plan going forward" to regain my trust.

As it stands, Codex is looking like a much safer bet for me to invest my time in learning and building educational materials around.

Update: they've reversed it already

In the time it took me to write this blog entry Anthropic appear to have reversed course - the claude.com/pricing page now has a checkbox back in the Pro column for Claude Code. I can't find any official communication about it though.

Let's see if they can come up with an explanation/apology that's convincing enough to offset the trust bonfire from this afternoon!

Tags: ai, generative-ai, llms, anthropic, llm-pricing, ai-ethics, coding-agents, claude-code

Where's the raccoon with the ham radio? (ChatGPT Images 2.0)

2026-04-22 04:32:24

OpenAI released ChatGPT Images 2.0 today, their latest image generation model. On the livestream Sam Altman said that the leap from gpt-image-1 to gpt-image-2 was equivalent to jumping from GPT-3 to GPT-5. Here's how I put it to the test.

My prompt:

Do a where's Waldo style image but it's where is the raccoon holding a ham radio

gpt-image-1

First as a baseline here's what I got from the older gpt-image-1 using ChatGPT directly:

There's a lot going on, but I couldn't find a raccoon.

I wasn't able to spot the raccoon - I quickly realized that testing image generation models on Where's Waldo style images (Where's Wally in the UK) can be pretty frustrating!

I tried getting Claude Opus 4.7 with its new higher resolution inputs to solve it, but the instruction card at the top left of the image had convinced it that a raccoon must be in there somewhere, even though it couldn't find one:

Yes — there's at least one raccoon in the picture, but it's very well hidden. In my careful sweep through zoomed-in sections, honestly, I couldn't definitively spot a raccoon holding a ham radio. [...]

Nano Banana 2 and Pro

Next I tried Google's Nano Banana 2, via Gemini:

Busy Where's Waldo-style illustration of a park festival with crowds of people, tents labeled "FOOD & DRINK", "CRAFT FAIR", "BOOK NOOK", "MUSIC FEST", and "AMATEUR RADIO CLUB - W6HAM" (featuring a raccoon in a red hat at the radio table), plus a Ferris wheel, carousel, gazebo with band, pond with boats, fountain, food trucks, and striped circus tents

That one was pretty obvious: the raccoon is in the "Amateur Radio Club" booth in the center of the image!

Claude said:

Honestly, this one wasn't really hiding — he's the star of the booth. Feels like the illustrator took pity on us after that last impossible scene. The little "W6HAM" callsign pun on the booth sign is a nice touch too.

I also tried Nano Banana Pro in AI Studio and got this, by far the worst result from any model. Not sure what went wrong here!

The raccoon is larger than everyone else, right in the middle of the image with an ugly white border around it.

gpt-image-2

With the baseline established, let's try out the new model.

I used an updated version of my openai_image.py script, which is a thin wrapper around the OpenAI Python client library. Their client library hasn't yet been updated to include gpt-image-2 but thankfully it doesn't validate the model ID so you can use it anyway.

Here's how I ran that:

OPENAI_API_KEY="$(llm keys get openai)" \
  uv run https://tools.simonwillison.net/python/openai_image.py \
  -m gpt-image-2 \
  "Do a where's Waldo style image but it's where is the raccoon holding a ham radio"

Here's what I got back. I don't think there's a raccoon in there - I couldn't spot one, and neither could Claude.

Lots of stuff, a ham radio booth, many many people, a lake, but maybe no raccoon?

The OpenAI image generation cookbook has been updated with notes on gpt-image-2, including the outputQuality setting and available sizes.

I tried setting outputQuality to high and the dimensions to 3840x2160 - I believe that's the maximum - and got this - a 17MB PNG which I converted to a 5MB WEBP:

OPENAI_API_KEY="$(llm keys get openai)" \
  uv run 'https://raw.githubusercontent.com/simonw/tools/refs/heads/main/python/openai_image.py' \
  -m gpt-image-2 "Do a where's Waldo style image but it's where is the raccoon holding a ham radio" \
  --quality high --size 3840x2160

Big complex image, lots of detail, good wording, there is indeed a raccoon with a ham radio.

That's pretty great! There's a raccoon with a ham radio in there (bottom left, quite easy to spot).

The image used 13,342 output tokens, which are charged at $30/million, for a total cost of around 40 cents.
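That cost works out like this - a quick sanity check using the token count and rate above:

```python
# Sanity-check the cost of the high quality 3840x2160 image:
# 13,342 output tokens billed at $30 per million output tokens.
output_tokens = 13_342
dollars_per_million_tokens = 30
cost = output_tokens / 1_000_000 * dollars_per_million_tokens
print(f"${cost:.2f}")  # → $0.40
```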

Takeaways

I think this new ChatGPT image generation model takes the crown from Gemini, at least for the moment.

Where's Waldo style images are an infuriating and somewhat foolish way to test these models, but they do help illustrate how good they are getting at complex illustrations combining both text and details.

Update: asking models to solve this is risky

rizaco on Hacker News asked ChatGPT to draw a red circle around the raccoon in one of the images in which I had failed to find one. Here's an animated mix of their result and the original image:

The circle appears around a raccoon with a ham radio who is definitely not there in the original image!

Looks like we definitely can't trust these models to usefully solve their own puzzles!

Tags: ai, openai, generative-ai, chatgpt, llms, text-to-image, llm-release, nano-banana

Quoting Andreas Påhlsson-Notini

2026-04-22 00:39:33

AI agents are already too human. Not in the romantic sense, not because they love or fear or dream, but in the more banal and frustrating one. The current implementations keep showing their human origin again and again: lack of stringency, lack of patience, lack of focus. Faced with an awkward task, they drift towards the familiar. Faced with hard constraints, they start negotiating with reality.

Andreas Påhlsson-Notini, Less human AI agents, please.

Tags: ai-agents, coding-agents, ai

scosman/pelicans_riding_bicycles

2026-04-21 23:54:43


I firmly approve of Steve Cosman's efforts to pollute the training set of pelicans riding bicycles.

The heading says "Pelican Riding a Bicycle #1" - the image is a bear on a snowboard.

(To be fair, most of the examples I've published count as poisoning too.)

Via Hacker News comment

Tags: ai, generative-ai, llms, training-data, pelican-riding-a-bicycle

llm-openrouter 0.6

2026-04-21 02:00:26

Release: llm-openrouter 0.6

  • llm openrouter refresh command for refreshing the list of available models without waiting for the cache to expire.

I added this feature so I could try Kimi 2.6 on OpenRouter as soon as it became available there.

Here's its pelican - this time as an HTML page because Kimi chose to include an HTML and JavaScript UI to control the animation. Transcript here.

The bicycle is about right. The pelican is OK. It is pedaling furiously and flapping its wings a bit. Controls below the animation provide a pause button and sliders for controlling the speed and the wing flap.

Tags: openrouter, llm, llm-release, pelican-riding-a-bicycle, kimi, ai-in-china, llms, ai, generative-ai

SQL functions in Google Sheets to fetch data from Datasette

2026-04-20 10:33:58

TIL: SQL functions in Google Sheets to fetch data from Datasette

I put together some notes on patterns for fetching data from a Datasette instance directly into Google Sheets - using the importdata() function, a "named function" that wraps it, or a Google Apps Script if you need to send an API token in an HTTP header (which importdata() doesn't support).

Here's an example sheet demonstrating all three methods.
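For the Apps Script route, the shape of the data matters: Datasette can return a table as a JSON array of row objects (the ?_shape=array form), which a custom function then has to turn into the 2D array that Sheets renders as a spill of cells. Here's a minimal offline sketch of that conversion - the payload is a made-up example, not a real Datasette response:

```python
import json

# Illustrative payload in Datasette's ?_shape=array form - not real data
payload = '[{"id": 1, "name": "datasette"}, {"id": 2, "name": "sqlite-utils"}]'

rows = json.loads(payload)
header = list(rows[0].keys())
# Prepend a header row so Sheets shows column names above the values
table = [header] + [[row[col] for col in header] for row in rows]
print(table)  # → [['id', 'name'], [1, 'datasette'], [2, 'sqlite-utils']]
```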

Tags: spreadsheets, datasette, google