2026-03-30 05:56:12
Yesterday Dana, the kids, and I went to the theater to watch The AI Doc: Or How I Became An Apocaloptimist, the well-reviewed new documentary about whether AGI will destroy the world. This was surely the weirdest family movie night we’ve ever done. Firstly, because I personally know probably half of the many people interviewed in the film, from Eliezer Yudkowsky to Ajeya Cotra to Liv Boeree to Daniel Kokotajlo to Ilya Sutskever to Jan Leike to Yoshua Bengio to Shane Legg to Sam Altman and Dario Amodei. But more importantly, because this is a documentary that repeatedly, explicitly, earnestly raises the question of whether children now alive will make it to adulthood, before unaligned AI kills them and everyone else. So pass the popcorn, kiddos!
(We did have popcorn. And if the kids were scared — well, I figured we can’t shield them forever from the great questions of the world they’re entering. But actually they didn’t seem especially scared.)
I thought that the filmmaker, Daniel Roher, did about as good a job as can be done, in fitting into a 100-minute film a question that honestly seems too gargantuan for any film — the question of the future of life on earth. He tries to hear out every faction: first the AI existential risk people, then the AI optimists and accelerationists like “Beff Jezos,” then the “stochastic parrot” / “current harms” people like Emily Bender and Timnit Gebru, and finally the AI company CEOs (Altman, Amodei, and Hassabis were the three who agreed to be interviewed), with Yuval Noah Harari showing up from time to time to insert deepities.
Roher plays the part of an anxious, curious, uninformed everyman, who finds each stance to be plausible enough while he’s listening to it, and who mostly just wants to know what kind of world his soon-to-be-born son (about whom we get regular updates) will grow up in.
I didn’t think all the interviewees were equally cogent or equally deserved a hearing. But if any viewers were actually new to AI discourse, rather than marinated in it like me, the film would serve for them as an excellent introduction to the parameters of current debate (for better or worse) and to some of the leading representatives of each camp.
If I had to summarize Roher’s conclusion, it would be something like: go ahead, enjoy your life, have children if you want, but understand that now is a time of world-historical promise and peril much like the early nuclear age, so pay attention, and demand of your elected leaders that they ensure that AGI is developed in a pro-human direction, because tech leaders (even the relatively well-intentioned ones) are trapped in a race to the bottom and can’t get out on their own. Honestly, I’d have a pretty hard time improving on that message.
The main thing that gave me pause about the film was not on the screen but in the theater, which was nearly empty. For the film to serve its purpose, a significant fraction of the world will need to see and discuss it, either in the theater or on streaming. So, y’know, it’s still playing.
For whatever it’s worth, here were my wife Dana’s comments: “The biggest flaw of this movie is that Daniel Roher never breaks out of his ‘clueless everyman’ character, even when he’s talking to the most important people in AI. He wastes an opportunity to ask them non-superficial questions, questions deeper than ‘so, uh, are we all gonna die or not?’”
And here were my 13-year-old daughter’s comments: “So many of the people they interviewed seemed like hippies, who don’t know what AI will do any more than I know!” Also, after Daniel Roher wishes Sam Altman mazel tov on his forthcoming baby: “Sam Altman is Jewish?!”
And here were my 9-year-old son’s comments: “I thought this would be a movie where AI would try to take over and the humans would fight back! I had no idea it would just be people talking about it. The documentary kind of movie is so, so, so boring.”
2026-03-29 13:17:35
Last summer, I was privileged to teach a two-week course on theoretical computer science to exceptional 11- and 12-year-olds at Epsilon Camp, held at Washington University in St. Louis. The course was basically a shorter version of the 6.045 course that I used to teach to undergrads at MIT.
I was at Epsilon Camp to accompany my son Daniel, who attended a different course there, for the 7- and 8-year-olds. So they got me to teach while I was there.
Teaching at Epsilon was some of the hardest work I’ve done in years: I taught two classes, held office hours, and interacted with or supervised students for 6-7 hours per day (compared to ~4 hours per week as a professor), on top of being Daniel’s sole caregiver, on top of email and all my other normal responsibilities. It was also unlike any teaching I’ve ever done: during “lecture,” the kids were throwing paper planes, talking over and interrupting me every ten seconds, and sometimes getting into physical fights with each other. In my ~20 years as a professor, this was the first time that I ever needed to worry about classroom discipline (!). It gave me newfound respect for what elementary school teachers handle every day.
But then, when I did have the kids’ attention, they would often ask questions or make observations that I would’ve been thrilled to hear from undergrads at UT Austin or MIT. Some of these kids, I felt certain, could grow up, if they wanted, to be world-leading mathematicians and physicists and computer scientists, the Terry Taos and Ed Wittens of their generation. Or at least, that’ll be true if AI isn’t soon going to outperform the top human scientists at their own game, a prospect that of course casts a giant shadow not only over Epsilon Camp but over our entire enterprise. But enough about the future. For now I can say: it was the privilege of a lifetime to teach these kids, to be the one who first introduced them to theoretical computer science.
Or at least, the one who first systematically introduced them. As I soon realized, there was no topic I could mention—not the halting problem or the Busy Beaver function, not NP-completeness or Diffie-Hellman key exchange—that some of these 11-year-olds hadn’t previously seen, and that they didn’t want to interrupt me to share everything they already knew about. Rather than fighting that tendency, I smiled and let them do this. While their knowledge was stunningly precocious, it was also fragmentary and disjointed and weirdly overindexed on examples rather than general principles. So fine, I still had something to teach them!
Coming to Epsilon Camp was also an emotional experience for me. When I was 15, I attended Canada/USA Mathcamp 1996, the first year that that camp operated. I might not have gone into research otherwise. Coming from a public high school—from the world of English teachers who mainly cared whether you adhered to the Five Paragraph Format, and chemistry teachers who’d give zero credit for right answers if you didn’t write “1 mol / 1 mol” and then cross off both of the moles—I was suddenly thrust, sink or swim, into a course on elliptic curves taught by Ken Ribet, who’d played a major role in the proof of Fermat’s Last Theorem that had just been completed, and a talk on algorithms and complexity by Richard Karp himself, and lectures on number theory by Richard Guy, who had stories from when he knew G. H. Hardy.
Back when I was 15, I got to know George Rubin Thomas, the founding director of Mathcamp … and then, after 29 years, there he was again at Epsilon Camp—the patriarch of a whole ecosystem of math camps—and not only there, but sitting in on my course. Also at Epsilon Camp, unexpectedly, was a woman whom I knew well back when we were undergrads at Cornell, both of us taking the theoretical computer science graduate sequence, but whom I’d barely seen since. She, as it turned out, was accompanying her 8-year-old son, who got to know my 8-year-old. They played together every day and traded math facts.
It occurred to me that the course I taught, on theoretical computer science, was one of the most accessible I’ve ever done, and that more people might therefore be interested in it. So I advertised on this blog for someone to help me LaTeX up the notes for wider distribution. I was thrilled when a talented student volunteered. Alas, because of where that student lives, he needs to stay anonymous for now. I thank him, pray for his safety, and hope to be able to reveal his name in the future. I’m also thrilled to have gotten three great high school students—Ian Ko, Tzak Lau, and Sunetra Rao—to help with the figures. Thanks to them as well.
You can read the notes here [59 pages, PDF].
If you’re curious, here’s the table of contents:
Happy as always to receive comments and corrections. Enjoy!
2026-03-19 00:10:13
I’m on a spring break vacation-plus-lecture-tour with Dana and the kids in Mexico City this week, and wasn’t planning to blog, but I see that I need to make an exception. Charles Bennett and Gilles Brassard have won the Turing Award for their seminal contributions to quantum computing and information, including the BB84 quantum key distribution scheme. This is the first-ever Turing Award specifically for quantum stuff (though previous Turing Award winners, including Andy Yao, Leslie Valiant, and Avi Wigderson, have had quantum among their interests).
As a practical proposal, BB84 is already technologically feasible but has struggled to find an economic niche, in a world where conventional public-key encryption already solves much the same problem using only the standard Internet—and where, even after scalable quantum computers become able to break many of our current encryption schemes, post-quantum encryption (again running on the standard Internet) stands ready to replace those schemes. Nevertheless, as an idea, BB84 has already been transformative, playing a central role in the birth of quantum information science itself. Beyond BB84, Bennett and Brassard have made dozens of other major contributions to quantum information science, with a personal favorite of mine being the 1994 BBBV (Bennett-Bernstein-Brassard-Vazirani) paper, which first established the limitations of quantum computers at solving unstructured search problems (and indeed, proved the optimality of Grover’s algorithm even before Grover’s algorithm existed).
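For readers meeting BB84 for the first time: the protocol has Alice encode random bits in randomly chosen bases, Bob measure each qubit in his own random basis, and the two publicly compare bases (never bits) to “sift” out a shared key. Here’s a toy classical simulation of just that sifting step (my own illustrative sketch, assuming an ideal channel and no eavesdropper; the eavesdropper-detection part is exactly what needs actual quantum mechanics):

```python
import secrets

# Toy BB84 sift, simulated classically (illustration only, not from BB84 itself).
# Alice encodes each random bit in a randomly chosen basis (0 = Z, 1 = X);
# Bob measures in his own random basis. Over an ideal channel with no
# eavesdropper, Bob's outcome equals Alice's bit whenever their bases match,
# and is a fair coin flip whenever they don't.

n = 32
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]
bob_bases   = [secrets.randbelow(2) for _ in range(n)]
bob_results = [bit if ab == bb else secrets.randbelow(2)
               for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Public discussion: compare *bases* (never bits) and keep matching positions.
alice_key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
             if ab == bb]
bob_key   = [res for res, ab, bb in zip(bob_results, alice_bases, bob_bases)
             if ab == bb]

assert alice_key == bob_key  # holds only because the channel here is ideal
print("shared key:", "".join(map(str, alice_key)))
```

In the full protocol, Alice and Bob would also sacrifice a random subset of the sifted key and compare it publicly: a mismatch rate above the expected channel noise betrays an eavesdropper, who can’t measure the qubits without disturbing them.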
While I take my kids to see Aztec artifacts, you can learn much more from Ben Brubaker’s Quanta article, to which I contributed without even knowing that it would be about Bennett and Brassard winning the Turing Award (info that was strictly embargoed before today). It’s an honor to have known Charlie and Gilles as well as I have for decades, and to have been able to celebrate one of their previous honors, the Wolf Prize, with them in Jerusalem. Huge congrats to two of the founders of our field!
2026-03-15 09:42:54
Scott’s foreword: I’ve known fellow quantum computing theorist Daniel Gottesman, now at the University of Maryland, for a quarter-century at this point. Daniel has been a friend, colleague, coauthor, and one of the people from whom I’ve learned the most in my career. Today he writes about a topic close to my heart, and one to which I’ve regularly lent this blog over the decades: namely, the struggle to protect enrichment and acceleration in the United States (in this case, the public magnet programs in Montgomery County, Maryland) from the constant attempts to weaken or dismantle them. Thanks so much to Daniel for doing this, and please help out if you can!
Without further ado, Daniel Gottesman:
Scott has kindly let me write this guest post because I’d like to ask the readers of Shtetl-Optimized for help. I live in Montgomery County, Maryland, and the county is getting ready to replace our current handful of great magnet programs with a plethora of mediocre ones.
Montgomery County has a generally quite good school system, but its gifted education programs are really inadequate at the elementary and middle school level. Montgomery County Public Schools (MCPS) offers nothing at all for gifted children until 4th grade. Starting in 4th grade, magnet programs are available, but there are not enough spaces for everyone who meets the minimum qualifications. A few years ago, the elementary and middle school magnets were switched to a lottery system, meaning the highest-achieving students, who most need special programming, might or might not get in, based purely on luck of the draw.
The remaining bright spot has been the high school magnets. Montgomery County has two well-known and high-performing magnets, a STEM magnet at Montgomery Blair High School and an International Baccalaureate (IB) program at Richard Montgomery. The Richard Montgomery IB program draws students from the whole county, and the Blair magnet draws from 2/3 of the county (with the remaining 1/3 eligible to go to another successful but less well-known magnet at Poolesville). And these programs have so far resisted the lottery: They pick the best students from the application pool.
So with inadequate magnets in the lower grades and stellar magnets in high school, you can guess which one is up for a change.
MCPS now wants to reconfigure the high school magnet programs by splitting the county up into 6 regions. Students will only be allowed to apply to programs in their home region. Each region will have its own STEM magnet and its own IB program, as well as programs in the arts, medicine, and leadership. And actually there are multiple program strands in each of these subjects, sometimes in different schools. The whole plan is big and complicated, with close to 100 different programs around the county, more than half of them new.
The stated purpose of this plan is to expand access to these programs by admitting more students and reducing travel times to the programs. And who could object to that? There are definitely places in the county that are far from the current magnets and there are certainly more students that can benefit from high-quality magnets than there is currently space for.
The problem is that making high-quality magnets has not been a priority in the design process. The last time MCPS tried adding regional magnets was about 7 years ago, when they added 3 regional IB programs while keeping Richard Montgomery available to students all over the county. It was a failure: Test scores at the regional IB programs are far below those at Richard Montgomery (the worst-performing regional IB had only 24% of students earning a passing grade in even one subject in 2024, compared to 99% at Richard Montgomery), and all 3 are underenrolled. Now MCPS has decided they can solve this problem by preventing students from going to Richard Montgomery, in order to force them into the regional IBs. In addition, they want to repeat the same mistakes with the STEM and other magnets. The best programs in the county will shrink and be accessible to only a small fraction of students, leaving everyone else with new programs of likely highly varying quality.
And if that were not enough, they want to do this revamp on a ridiculously short timeline. The new programs are supposed to start in the 2027-28 school year, and between now and then, they need to recruit and train teachers for these 100 programs, create all the curricula for the first year of the programs (they are only planning to do one year at a time), and much, much more. The probability of a train wreck in the early years of the new system seems high.
Equity is certainly a concern driving this change. And let me be clear: I am totally in favor of improving equity in the school system. But I agree with Scott on this point: strong magnet programs in the public schools are pro-equity, and weakening magnet programs is anti-equity. Magnet programs are pro-equity even if they are disproportionately populated by more affluent students, which is admittedly the case in MCPS: affluent families will always have access to enrichment outside school (and, for the most affluent, to private schools), whereas the public magnet programs are the only source of enrichment for students without those resources.
If MCPS really wants to address the difference in achievement between richer and poorer students, the way to do that is to create gifted programming starting from kindergarten. If you wait until high school, it is unreasonable to expect even brilliant students to catch up to their also highly-capable peers who have been doing math and science camps and extracurriculars and contests and whatnot since they were little. Some can manage it, but it is certainly not easy. Unfortunately, MCPS’s notion of equity seems more focused on optimizing the demographic breakdown of magnet programs, which is most easily achieved by techniques which don’t improve — and usually degrade — the quality of the education provided.
So how can you help? The Board of Education (BOE) is supposed to vote on this plan on Mar. 26. Those of us opposed to it are hoping to sway enough members to vote to tell MCPS to investigate alternatives. For instance, I have proposed a model with only 3 regions, which could also substantially improve access while preserving the strong existing magnets.
If you live in Montgomery County, write to BOE members telling them you oppose this change. You can also sign a petition — there are many, but my favorite is here.
If you are an alumnus of one of the MCPS magnets, write to the BOE telling them how your education there was valuable to you and how a smaller program would not have served you as well.
If you are unconnected to Montgomery County, you can still spread the word. If the BOE gets enough press inquiries asking about the many things that don’t add up in the MCPS proposal, perhaps they will recognize that this is a bad idea.
If you are really really interested in this topic and want to learn more: Last fall, I put together a long analysis of some of the flaws in MCPS’s plan and their claims, and of the alternative 3-region model. You can find it here.
2026-03-11 04:47:30
Last Thursday, my friend and colleague Sam Baker, in UT Austin’s English department, convened an “emergency panel” here about the developing Pentagon/Anthropic situation, and asked me to speak at it. Even though the situation has continued to develop since then, I thought my prepared remarks for the panel might be of interest. At the bottom, I include a few additional thoughts.
Hi! I’m Scott Aaronson! I teach CS here at UT. While my background is in quantum computing, I’ve spent the past four years dabbling in AI alignment. I did a two-year leave at OpenAI, in their now-defunct Superalignment team. I joined back when OpenAI’s line was “we’re a little nonprofit, doing all this in the greater interest of humanity, and we’d dissolve ourselves before we raced to build an AI that we thought would be dangerous.” I know Sam Altman, and many other current and former OpenAI people. I also know Dario Amodei—in fact, I knew Dario well before Anthropic existed. Despite that, I don’t actually feel like I have deep insight into the current situation with Anthropic and the Pentagon that you wouldn’t get by reading the news, or (especially) reading commentators like Zvi Mowshowitz, Kelsey Piper, Scott Alexander, and Dean Ball. But since I was asked to comment, I’ll try.
The first point I’ll make: the administration’s line, to the extent they’ve had a consistent line, is basically that they needed to cut off Anthropic because Anthropic is a bunch of woke, America-hating, leftist radicals. I think that, if you actually know the Anthropic people, that characterization is pretty laughable. Unless by “woke,” what the administration meant was “having any principles at all, beyond blind deference to authority, and sticking to them.”
I mean, Anthropic only got into this situation in the first place because it was more eager than the other AI companies to support US national security, by providing a version of Claude that could be used on classified networks. So they signed a contract with the Pentagon, and that contract had certain restrictions in it, which the Pentagon read and agreed to … until they decided that they no longer agreed.
That brings me to my second point. The Pentagon regularly signs contracts with private firms that limit what the Pentagon can do in various ways. That’s why they’re called military contract-ors. So anyone who claims it’s totally unprecedented for Anthropic to try to restrict what the government can do with Anthropic’s private property—I think that person is either misinformed or else trying to misinform.
The third point. If the Pentagon felt that it couldn’t abide a private company telling it what is or isn’t an appropriate military use of current AI, then the Pentagon was totally within its rights to cancel its contract with Anthropic, and find a different contractor (like OpenAI…) that would play ball. So it’s crucial for everyone here to understand that that’s not all that the Pentagon did. Instead they said: because Anthropic dared to stand up to us, we’re going to designate them a Supply Chain Risk—a designation that was previously reserved for foreign nation-state adversaries, and that, incredibly, hasn’t been applied to DeepSeek or other Chinese AI companies that arguably do present such risks. So basically, they threatened to destroy Anthropic, by making it horrendously complicated for any companies that do business with the government—i.e., just about all companies—also to do business with Anthropic.
Either that, the Pentagon threatened, or we’ll invoke the Defense Production Act to effectively nationalize Anthropic—i.e., we’ll just commandeer their intellectual property and use it for whatever we want, despite Anthropic’s refusal. You get that? Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.
To me, this is the authoritarian part of what the Pentagon is doing (with the inconsistency being part of the authoritarianism; who but a dictator gets to impose his will on two directly contradictory grounds?). It’s the part that goes against the free-market principles that our whole economy is built on, and the freedom of speech and conscience that our whole civilization is built on. And I think this will ultimately damage US national security, by preventing other American AI companies from wanting to work on defense going forward.
That brings me to the fourth point, about OpenAI. While this was going down, Sam Altman posted online that he agreed with Anthropic’s red lines: LLMs should not be used for killing people with no human in the kill chain, and they also shouldn’t be used for mass surveillance of US citizens. I thought, that’s great! The frontier AI labs are sticking together when the chips are down, rather than infighting.
But then, just a few hours after the Pentagon designated Anthropic a supply chain risk, OpenAI announced that it had reached a deal with the Pentagon. Huh?!? If they have the same red lines, then why can one of them reach a deal while the other can’t?
The experts’ best guess seems to be this: Anthropic said, yes, using AI to kill people autonomously or to surveil US citizens should already be illegal, but we insist on putting those things in the contract to be extra-double-sure. Whereas OpenAI said, the Pentagon can use our models for “all lawful purposes”—this was the language that the Pentagon had insisted on. And, continued OpenAI, we interpret “all lawful purposes” to mean that they can’t cross these red lines. But if it turns out we’re wrong about that … well, that’s not our problem! That’s between the Pentagon and the courts, or whatever.
Again, we don’t fully know, because most of the relevant contracts haven’t been made public, but that’s an inference from reading between the lines of what has been made public.
Back in 2023-2024, when there was the Battle of the Board, then the battle over changing OpenAI’s governance structure, etc., some people formed a certain view of Sam, that he would say all the good and prosocial and responsible things even while he did whichever thing maximized revenue. I’ll leave it to you whether last week’s events are consistent with that view.
OK, fifth and final point. I remember 15-20 years ago, talking to Eliezer Yudkowsky and others terrified about AI. They said, this is the biggest issue facing the world. It’s not safe for anyone to build because it could turn against us, or even before that, the military could commandeer it or whatever. And I and others were like, dude, you guys obviously read too much science fiction!
And now here we are. Not only are we living in a science-fiction story, I’d say we’re living in a particularly hackneyed one. I mean, the military brass marching into a top AI lab and telling the nerds, “tough luck, we own your AI now”? Couldn’t reality have been a little more creative than that?
The point is, given the developments of the past couple weeks, I think we now need to retire forever the argument against future AI scenarios that goes, “sorry, that sounds too much like a science-fiction plot.” As has been said, you’d best get used to science fiction because you’re living in one!
Updates and Further Thoughts: Of course I’ve seen that Anthropic has now filed a lawsuit to block the Pentagon from designating it a supply chain risk, arguing that both its free speech and due process rights were violated. I hope their lawsuit succeeds; it’s hard for me to imagine how it wouldn’t.
The fact that I’m, obviously, on Anthropic’s side of this particular dispute doesn’t mean that I’ll always be on Anthropic’s side. Here as elsewhere, it’s crucial not to outsource your conscience to anyone.
Zvi makes an extremely pertinent comparison:
[In shutting down Starlink over Ukraine,] Elon Musk actively did the exact thing [the Pentagon is] accusing Anthropic of maybe doing. He made a strategic decision of national security at the highest level as a private citizen, in the middle of an active military operation in an existential defensive shooting war, based on his own read of the situation. Like, seriously, what the actual fuck.
Eventually we bought those services in a contract. We didn’t seize them. We didn’t arrest Musk. Because a contract is a contract is a contract, and your private property is your private property, until Musk decides yours don’t count.
Another key quote in Zvi’s piece, from Gregory Allen:
And here’s the thing. I spent so much of my life in the Department of Defense trying to convince Silicon Valley companies, “Hey, come on in, the water is fine, the defense contracting market, you know, you can have a good life here, just dip your toe in the water”.
And what the Department of Defense has just said is, “Any company that dips their toe in the water, we reserve the right to grab their ankle, pull them all the way in at any time”. And that is such a disincentive to even getting started in working with the DoD.
Lastly, I’d like to address the most common counterargument against Anthropic’s position—as expressed for example by Noah Smith, or in the comments of my previous post on this. The argument goes roughly like so:
You, nerds, are the ones who’ve been screaming for years about AI being potentially existentially dangerous! So then, did you seriously expect to stay in control of the technology? If it’s really as dangerous and important as you say, then of course the military was going to step in at some point and commandeer your new toy, just like it would if you were building a nuclear weapon.
Two immediate responses:
2026-03-08 11:06:05
Sorry to interrupt your regular programming about the AI apocalypse, etc., and return to the traditional beat of this blog’s very earliest years … but I’ve now gotten multiple messages asking me to comment on something called the “JVG (Jesse–Victor–Gharabaghi) algorithm” (yes, the authors named it after themselves). It’s presented as a massive improvement over Shor’s factoring algorithm, one that (according to popular articles) could allow RSA-2048 to be broken using only 5,000 physical qubits.
On inspection, the paper’s big new idea is that, in the key step of Shor’s algorithm where you compute x^r mod N in a superposition over all r’s, you instead precompute the x^r mod N’s on a classical computer and then load them all into the quantum state.
Alright kids, why does this not work? Shall we call on someone in the back of the class—like, any undergrad quantum computing class in the world? Yes class, that’s right! There are exponentially many r’s. Computing them all takes exponential time, and loading them into the quantum computer also takes exponential time. We’re out of the n^2-time frying pan and into the 2^n-time fire. This can only look like it wins on tiny numbers; on large numbers it’s hopeless.
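To make the gap concrete, here’s a back-of-the-envelope comparison in Python (my own illustration, using crude schoolbook-arithmetic cost estimates, not anything from the paper). Shor computes x^r mod N coherently by repeated squaring, roughly 2n controlled modular multiplications for an n-bit N, while the precompute-and-load approach must classically touch all ~2^(2n) values of r before the loading cost is even counted:

```python
# Crude cost comparison for factoring an n-bit modulus N (illustration only).
# Shor: ~2n controlled modular multiplications done coherently, each costing
# roughly n^2 bit operations with schoolbook arithmetic, so ~2*n^3 in total.
# "Precompute and load": at least one classical modular multiplication for
# each of the ~2^(2n) values of r, before any loading cost is even counted.

def coherent_ops(n: int) -> int:
    return 2 * n**3

def precompute_ops(n: int) -> int:
    return 2 ** (2 * n)

for n in (16, 64, 256, 2048):
    print(f"n = {n:4d} bits: coherent ~10^{len(str(coherent_ops(n))) - 1}, "
          f"precompute >= 10^{len(str(precompute_ops(n))) - 1} operations")
```

For n = 2048, the precompute column comes out above 10^1200 operations, which is why the scheme can only appear to win on toy-sized inputs.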
If you want to see people explaining the same point more politely and at greater length, try this from Hacker News or this from Postquantum.com.
Even for those who know nothing about quantum algorithms, is there anything that could’ve raised suspicion here?
Often, when something is this bad, the merciful answer is to let it die in obscurity. In this case, I feel like there was a sufficient level of intellectual hooliganism, just total lack of concern for what’s true, that those involved deserve to have this Shtetl-Optimized post as a tiny bit of egg on their faces forever.