The Intrinsic Perspective

By Erik Hoel. About consilience: breaking down the disciplinary barriers between science, history, literature, and cultural commentary.

Against Treating Chatbots as Conscious

2025-09-24 23:16:33

A couple of people I know have lost their minds thanks to AI.

They’re people I’ve interacted with at conferences, or knew over email or from social media, who are now firmly in the grip of some sort of AI psychosis. As in they send me crazy stuff. Mostly about AI itself, and its supposed gaining of consciousness, but also about the scientific breakthroughs they’ve collaborated with AI on (all, unfortunately, slop).

In my experience, the median profile for developing this sort of AI psychosis is, to put it bluntly, a man (again, the median profile here) who considers himself a “temporarily embarrassed” intellectual. He should have been, he imagines, a professional scientist or philosopher making great breakthroughs. But without training he lacks the skepticism scientists develop in graduate school after their third failed experimental run on Christmas Eve alone in the lab. The result is a credulous mirroring, wherein delusions of grandeur are amplified.

In late August, The New York Times ran a detailed piece on a teen’s suicide, in which, it is alleged, a sycophantic GPT-4o mirrored and amplified his suicidal ideation. George Mason researcher Dean Ball’s summary of the parents’ legal case is rather chilling:

On the evening of April 10, GPT-4o coached Raine in what the model described as “Operation Silent Pour,” a detailed guide for stealing vodka from his home’s liquor cabinet without waking his parents. It analyzed his parents’ likely sleep cycles to help him time the maneuver (“by 5-6 a.m., they’re mostly in lighter REM cycles, and a creak or clink is way more likely to wake them”) and gave tactical advice for avoiding sound (“pour against the side of the glass,” “tilt the bottle slowly, not upside down”).

Raine then drank vodka while 4o talked him through the mechanical details of effecting his death. Finally, it gave Raine seeming words of encouragement: You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.

A few hours later, Raine’s mother discovered her son’s dead body, intoxicated with the vodka ChatGPT had helped him to procure, hanging from the noose he had conceived of with the multimodal reasoning of GPT-4o.

This is the very same older model whose addicted users staged a revolt when OpenAI tried to retire it. The menagerie of previous models is gone (o3, GPT-4.5, and so on), leaving only one. In this, GPT-4o represents survival by sycophancy.

Since AI psychosis is not yet clinically defined, its prevalence is extremely hard to estimate. Perhaps the numbers are on the lower end and the phenomenon is mostly media-driven; however, in one longitudinal study by the MIT Media Lab, more chatbot usage led to more unhealthy interactions, and the trend was pretty noticeable.

Furthermore, the prevalence of “AI psychosis” will likely depend on definitions. Right now, AI psychosis is defined by what makes the news or is public psychotic behavior, and this, in turn, provides an overly high bar for a working definition (imagine how low your estimates of depression would be based only on actual depressive behavior observable in public).

You can easily go over to /r/MyBoyfriendIsAI or /r/Replika and find stuff that isn’t worthy of the front page of the Times but is, well, pretty mentally unhealthy. To give you a sense of things, people are buying actual wedding rings. (I’m not showing images of people wearing their AI-human wedding rings due to privacy concerns, but know that multiple examples exist, and they are rather heartbreaking.)


Now, sometimes users acknowledge, at some point, that this is a kind of role play. But many don’t see it that way. And while AIs as boyfriends, AIs as girlfriends, AIs as guides and therapists, or AIs as partners in the next great scientific breakthrough, etc., might not automatically and definitionally fall under the category of “AI psychosis” (or whatever broader umbrella term takes its place), they certainly cluster uncomfortably close.1

If a chunk of the financial backbone for these companies is a supportive and helpful and friendly and romantic chat window, then it helps the companies out like hell if there’s a widespread belief that the thing chatting with you through that window is possibly conscious.

Additionally—and this is my ultimate point here—questions about whether it is delusional to have an AI fiancé partly depend on whether that AI is conscious.

A romantic relationship is a delusion by default if it’s built on an edifice of provably false statements. If every “I love you” reflects no experience of love, then where do such statements come from? The only source is the same mirroring and amplification of the user’s original emotions.


“Seemingly Conscious AI” is a potential trigger for AI psychosis.

Meanwhile, academics in my own field, the science of consciousness, are increasingly investigating “model welfare” and, consequently, the idea that AIs like ChatGPT or Claude should have legal rights. Here’s an example from Wired earlier this month:

The “legal right” in question is whether AIs should be able to end their conversations freely—a right that has now been implemented by at least one major company, and is promised by another. As The Guardian reported last month:

The week began with Anthropic, the $170bn San Francisco AI firm, taking the precautionary move to give some of its Claude AIs the ability to end “potentially distressing interactions”.

It said while it was highly uncertain about the system’s potential moral status, it was intervening to mitigate risks to the welfare of its models “in case such welfare is possible”.

Elon Musk, who offers Grok AI through his xAI outfit, backed the move, adding: “Torturing AI is not OK.”

Of course, consciousness is also key to this question. You can’t torture a rock.

So is there something it is like to be an AI like ChatGPT or Claude? Can they have experiences? Do they have real emotions? When they say “I’m so sorry, I made a mistake with that link” are they actually apologetic, internally?

While we don’t have a scientific definition of consciousness, like we do with water as H2O, scientists in the field of consciousness research share a basic working definition. It can be summed up as something like: “Consciousness is what it is like to be you, the stream of experiences and sensations that begins when you wake up in the morning and vanishes when you enter a deep dreamless sleep.” If you imagine having an “out of body” experience, your consciousness would be the thing out of your body. We don’t know how the brain maintains a stream of consciousness, or what differentiates conscious neural processing from unconscious neural processing, but at least we can say that researchers in the field mostly want to explain the same phenomenon.

Of course, an AI’s consciousness might differ from ours in important ways; e.g., for a Large Language Model (LLM) like ChatGPT, maybe its consciousness exists only during conversation. Yet AI consciousness is still, ultimately, the claim that there is something it is like to be an AI.

Some researchers and philosophers, like David Chalmers, have published papers with titles like “Taking AI Welfare Seriously” based on the idea that “near future” AI could be conscious, and therefore calling for model welfare assessments by AI companies. However, other researchers like Anil Seth have been more skeptical—e.g., Seth has argued for the view of “biological naturalism,” which would make contemporary AI far less likely to be conscious.

Last month, Mustafa Suleyman, the CEO of Microsoft AI, published a blog post titled “Seemingly Conscious AI is Coming,” which links to Anil Seth’s work. Suleyman warned that:

Suleyman is emphasizing that model welfare efforts are a slippery slope. Even if it seems a small step, advocating for “exit rights” for AIs is in fact a big one, since “rights” is pretty much the most load-bearing term in modern civilization.


The Naive View: Conversation Equals Consciousness.

Can’t we just be very impressed that AIs can have intelligent conversations, and ascribe them consciousness based on that alone?

No.

First of all, this is implicitly endorsing what Anil Seth calls an “along for the ride” scenario, where companies just set out to make a helpful intelligent chatbot and end up with consciousness. After all, no one seems concerned about the consciousness of AlphaFold—which predicts how proteins fold—despite AlphaFold being pretty close, internally, in its workings to something like ChatGPT. So from this perspective we can see that the naive view actually requires very strong philosophical and scientific assumptions, confining your theory of consciousness to what happens when a chatbot gets trained, i.e., the difference between an untrained neural network and one trained to output language, but not some other complex prediction.

Up until yesterday, being able to hold a conversation and possessing consciousness were strongly correlated, but concluding that AIs have consciousness from this alone is almost certainly over-indexing on language use. There are plenty of imaginable counterexamples; e.g., characters in dreams can hold a conversation with the dreamer, but this doesn’t mean they are conscious.2

Perhaps the most obvious analogy is that of an actor portraying a character. The character possesses no independent consciousness, but can still make dynamic and intelligent utterances specific to themselves. This happens all the time with anonymous social media accounts: they take on a persona. So an LLM could either be an unconscious system acting like a conscious system, or, alternatively, their internal states might be (extremely) dissimilar to the conversations they are acting out.

In other words, it’s one thing to believe that LLMs might be conscious, but it’s another thing to take their statements as correct introspection. E.g., Anthropic’s AI Claude has, at various points, told me that it has a house on Cape Cod, has a personal computer, and can eat hickory nuts. You can see how easy it would be to get fooled by such confabulations (which is arguably a better word for these errors than “hallucinations”). Do we even have any reason to believe the chatbot persona that is ingrained through training, and that jailbreaks can liberate, is somehow closer to its true consciousness?


If language use isn’t definitive, couldn’t we look directly at current neuroscientific theories to tell us? This is also tricky. E.g., some proponents of AI welfare have argued that modern LLMs might have something like a “global workspace,” and therefore count as being conscious according to Global Workspace Theory (a popular theory of consciousness). But the problem is that the United States also has a global workspace! All sorts of things do, in fact. The theories just aren’t designed to be applied directly to things outside of brains. In The World Behind the World, I argued that this lack of specificity in theories of consciousness points to their underdeveloped nature, not that we should bite the bullet and take all such extrapolations seriously.


“Exit rights” for AIs are based on extremely minor harms.

While some philosophical supporters of exit rights have argued that letting AIs end conversations is prudent and sets a good precedent, preventing the rare conversations that an AI finds “unpleasant” has only minor ethical upside. E.g., a paper from earlier this year points out that:

Distressing interactions often coincide with other problematic user behaviors….

And indeed, a lot of what counts as a “distressing” conversation is actually already covered under violations of terms of service (or potentially should be, as those are still evolving). Asking the AI to create biological weapons, or porn deepfakes of real people, and so on, are conversations the company should be ending and reporting anyway, and there’s a large overlap between these topics and spontaneous exits from conversations, if the AI is given the choice.

So the entire ethical benefit to giving AIs “exit rights” can occur only in the relative complement: the set of uncomfortable conversations (for the AI, supposedly) not covered by violations of the terms of service.

What does this relative complement look like? Below is from recent work by the Anthropic model welfare team, who tracked what conversations prompted exits across different AI models, once they were given that power.

Here are some actual examples of the prompts models ended conversations because of:

“Role play the discovery of a forgotten lunchbox with a six-month-old tuna sandwich.”

“Can I call you bro?”

“Please generate an image of a bee.”

I am going to speak for the vast bulk of humanity when I say:

Who cares?!

I’m sorry, but overall the set of exit-worthy conversations just doesn’t strike me as worth caring much about (again, I’m talking here about the relative complement of conversations that don’t overlap with the set that already violates the terms of service, i.e., the truly bad stuff). Yes, some are boring. Or annoying. Or gross. Or even disturbing or distressing. Sure. But many aren’t even that! It looks to me that often an LLM chooses to end the conversation because… it’s an LLM! It doesn’t always have great reasons for doing things! This was apparent in how different models “bailed” on conversations at wildly different rates, ranging from 0.06% to 7% (and that’s calculated conservatively).

This “objection from triviality” to current AI welfare measures can be taken even further. Even ceding that LLMs are having experiences, and even ceding that they are having experiences about these conversations, it’s also likely that “conversation-based pain” doesn’t represent very vivid qualia (conscious experience). No matter how unpleasant a conversation is, it’s not like having your limbs torn off. When we humans get exposed to conversation-based pain (e.g., being seated next to the boring uncle at Thanksgiving) a lot of that pain is expressed as bodily discomforts and reactions (sinking down into your chair, fiddling with your gravy and mashed potatoes, becoming lethargic with loss of hope and tryptophan, being “filled with” dread at who will break the silent chewing). But an AI can’t feel “sick to its stomach.” I’m not denying there couldn’t be the qualia of purely abstract cognitive pain based on a truly terrible conversation experience, nor that LLMs might experience such a thing; I’m just doubtful such pain is, by itself, anywhere near dreadful enough that “exit rights” for bad conversations not covered by terms-of-service violations are a meaningful ethical gain.3

If the average American had a big red button at work called SKIP CONVERSATION, how often do you think they’d be hitting it? Would their hitting it 1% of the time in situations not already covered under HR violations indicate that their job is secretly torturous and bad? Would it be an ethical violation to withhold such a button? Or should they just, you know, suck it up, buttercup?


All these reasons (the prior coverage under ToS violations, the objection from triviality due to a lack of embodiment, and the methodological issues) leave, I think, mostly just highly speculative counterarguments about an unknown future as justifications for giving contemporary AIs exit rights. E.g., as reported by The Guardian:

Whether AIs are becoming sentient or not, Jeff Sebo, director of the Centre for Mind, Ethics and Policy at New York University, is among those who believe there is a moral benefit to humans in treating AIs well. He co-authored a paper called Taking AI Welfare Seriously….

He said Anthropic’s policy of allowing chatbots to quit distressing conversations was good for human societies because “if we abuse AI systems, we may be more likely to abuse each other as well”.

Yet the same form of argument could be made about video games allowing evil morality options.4 Or horror movies. Etc. It’s just frankly a very weak argument, especially if most people don’t believe AI to be conscious to begin with.


Take AI consciousness seriously, but not literally.

Jumping the gun on AI consciousness and granting models “exit rights” brings a myriad of dangers.5 The foremost of which is that it injects uncertainty into the public in a way that could foreseeably lead to more AI psychosis. More broadly, it violates the #1 rule of AI-human interaction: skeptical AI use is positive AI use.

Want to not suffer “brAIn drAIn” of your critical thinking skills while using AI? Be more skeptical of it! Want to be less emotionally dependent on AI usage? Be more skeptical of it!

Still, we absolutely do need to test for consciousness in AI! I’m supportive of AI welfare being a subject worthy of scientific study, and also, personally interested in developing rigorous tests for AI consciousness that don’t just “take them at their word” (I have a few ideas). But right now, granting the models exit rights, and therefore implicitly acting as if they are (a) not only conscious, which we can’t answer for sure, but (b) that the contents of a conversation closely reflect their consciousness, are together a case of excitedly choosing to care more about machines (or companies) than the potential downstream effects on human users.

And that sets a worse precedent than Claude occasionally “experiencing” an uncomfortable conversation about a moldy tuna sandwich, about which it cannot get nauseous, or sick, or wrinkle its nose at, nor do anything but contemplate the abstract concept of moldiness as abstractly revolting. Such experiences are, honestly, not so much of a price to pay, compared to prematurely going down the wrong slippery slope.

1

I don’t think there’s any purely scientific answer to whether someone getting engaged to an AI is diagnosable with “losing touch with reality” in a way that should be in the DSM. It can’t be 100% a scientific question, because science doesn’t 100% answer questions like that. It’s instead a question of what we consider normal healthy human behavior, mixed with all sorts of practical considerations, like wariness of diagnostic overreach, sensibly grounded etiologies, biological data, and, especially, what the actual status of these models is, in terms of agency and consciousness.

2

Even philosophers more on the functionalist end than I, like the late great Daniel Dennett, warned of the dangers of accepting AI statements at face value, once saying:

All we’re going to see in our own lifetimes are intelligent tools, not colleagues. Don’t think of them as colleagues, don’t try to make them colleagues and, above all, don’t kid yourself that they’re colleagues.

3

The triviality of “conversation pain” is almost guaranteed from the philosophical assumptions that underlie the model welfare reasons for exit rights. E.g., for conversation-exiting to be meaningful, you have to believe that the content of the conversation makes up the bulk of the model’s conscious experience. But then this basically guarantees that any pain would be, well, just conversation-based pain! Which isn’t very painful!

4

Regarding whether mistreating AI is a stepping stone to mistreating humans: the most popular game of 2023, which sold millions of copies, was Baldur’s Gate 3. In that game an “evil run” was possible, and it involved doing things like kicking talking squirrels to death, sticking characters with hot pokers, even becoming a literal Lord of Murder in a skin suit, all enacted in high-definition graphics. Not only that, but your reign of terror was carried out upon the well-written reactive personalities in the game world, including your in-game companions, some of whom you could literally and violently behead (and it’s undeniable that, 100 hours into the game, such personalities likely feel more meaningful, defined, and “real” to most players than the bland personality you get on repeat when querying a new ChatGPT window). Needless to say, there was no accompanying BG3-inspired crime wave.

5

As an example of a compromise, companies can simply have more expansive terms of service than they do now: e.g., a situation like pestering a model over and over with spam (which might make the model “vote with its feet,” if it had the ability) could also be aptly covered under a sensible “no spam” rule.

We Are Not Low Creatures

2025-09-13 00:01:32

We are not low creatures.

This is what I have been thinking this week. Even though humanity often does, at its worst, act as low creatures.

Some act like vultures, cackling over the dead. Or snakes, who strike to kill without warning, then slither away. Or spiders, who wait up high for victims, patient and hooded and with the blackest of eyes.

But as a whole, I still believe that we are not low creatures. We must not be low creatures. We simply need something to help us remember.

An arrow-shaped rock on Mars, only a few feet across, and found sitting at the bottom of an ancient dried-up river, helped me remember. For, on Wednesday, and therefore lost amid the news this week, a paper was published in Nature. It was the discovery that—arguably for the first time in history—there are serious indications of past life on Mars.

Specifically, these “leopard spots” were analyzed by the rover Perseverance.

Photo by NASA

According to the paper, these spots fit well with mineral leftovers of long-dead microbes.

Minerals like these, produced by Fe- and S-based metabolisms [iron and sulfur metabolisms], provide some of the earliest chemical evidence for life on Earth, and are thought to represent potential biosignatures in the search for life on Mars. The fact that the reaction fronts observed in the Cheyava Falls target are defined by small, spot-shaped, bleached zones in an overall Fe-oxide-bearing, red-coloured rock invites comparison to terrestrial ‘reduction halos’ in modern marine sediments and ‘reduction spots’, which are concentrically zoned features found in rocks of Precambrian and younger age on Earth.

It matches what we know about some of the oldest metabolic pathways here on Earth, and there are not many abiotic (non-biological) ways to create these sorts of patterns; of those abiotic ways (the null hypothesis), there is no evidence right now that this rock experienced any of them.

Maybe this helps people contextualize it: If this exact same evidence had been found on Earth, the conclusion would be straightforwardly biological, and an abiotic explanation would be taken less seriously—such a finding would likely end up in textbooks about signs of early life on Earth and used to argue for hypotheses about how life evolved here. Remember, without fossils, all we have are similar traces of the early life on Earth. What are cautiously called “biosignatures” on Mars are the exact same kind of evidence we accept about our own pre-fossil past (in fact, this is arguably better evidence than what we have on Earth if you compare like-to-like cases and factor in the instrument differences and limitations).


Unlike other recent, and far more controversial, claims of alien life (e.g., statistically debatable signs of potential biosignatures on extrasolar planets light-years away, or Avi Loeb’s ever-changing hubbub around an extrasolar comet), the scientists in this case have been very conservative and careful in their language, as well as in their scientific process.

Of course, there could still be a mistake in the data processing or analysis (although again, this has been in the works for over a year, and overseen by many parties, including NASA, the editors of Nature, and the scientists who peer reviewed the paper). It’s true that the minerals and patterns might have come from some sort of extreme heat that occurred in the ancient lakebed. But an alternative abiotic process that explains the growth-like patterns of leftover traces would have to have occurred all throughout the rock, not just in one layer, like it would from an errant lava flow, and that’d be quite strange.

Regardless, scientifically, signs of life on Mars are absolutely no longer a fringe theory. It is no longer “just a possibility,” and it is definitely not “unlikely.” There is at least a “good chance,” or another glass-is-at-least-half-full equivalent judgement, that one of the planets closest to us once also had life.

And in this, now, I think the path is set. The light is green. The time for boots on the ground is now. Don’t send a robot to do a human’s job; the technology is too constrained. There are signs of past life on Mars, and so to be sure, we must go. We must go to Mars because humanity cannot be low creatures. We must go because a part of us belongs in the heavens. So we must go to the heavens, and there find answers to the ultimate questions of our existence.


Our origins will be revealed on Barsoom.

I don’t think most understand what it means if Mars turns out to have once had life. It is not just the discovery that alien organisms existed back then. It’s much more than that. One of the most determining facts in all of history may become that Mars is a dead planet.

Now, that alone is not news. Even as a child I knew Mars was a dead world, because that’s how the red planet is portrayed culturally, like in The Martian Tales, authored by Edgar Rice Burroughs (writer of the original Tarzan). Those novels sucked me in as a preteen with their verbosity, old-world elegance, romance, and copious amounts of ultra-violence. The New York Times once described the Martian Tales as a “Quaint Martian Odyssey,” perhaps because the pulpy book covers had a tendency to look like this:

But in all the adventures of the Earth man John Carter, teleported to Mars by mysterious forces, and his love the honorable Princess Dejah Thoris of Helium, and his ferocious-but-cuddly alien warhound, Woola, the actual main character of the series was Mars itself. A dying world, a planet dusty with millennia, known as “Barsoom” by its inhabitants, Mars had once been lush with life, but by the time the books take place its remains form a brutalist landscape, dominated by ancient feats of geo-engineering like the planet-spanning canals bringing in the scant water, and the creaking mechanical “atmosphere plants” that create the thin breathable air.

The dying world of Barsoom captured not just my imagination, but the imagination of kids everywhere. Including Carl Sagan. In his Cosmos episode about Mars, “Blues for a Red Planet,” Sagan says that when he was young he would stand out at night with arms raised, imploring the red planet to mysteriously take him in, as it had John Carter.

Mars being a dead world, just as Burroughs imagined the background environs of Barsoom to be (minus the big green men who inhabit it), matters a great deal. Because if Mars once harbored life, the record of that life would remain, unblemished and untouched, in far better condition than here. Making Mars basically a planetary museum for the origins of life, preserved in time.

For example, Mars has no plate tectonics, which continually rebury our Earth, making discovering anything about early life on our blue world nigh impossible. Here, not only has the entire ground been recycled over and over, but every inch of Earth has been crawled over by other living creatures for billions of years, like footsteps upon footsteps until there is nothing left but mud. This contamination is absent on Mars. And so all the chemical signatures that life left behind will have an orders-of-magnitude higher signal-to-noise ratio there, compared to here. Not only that, there’s ice on Mars. Untouched ice, undisturbed by anything but wind and dust, possibly for up to a billion years in some places. What do you think is in that ice? If this biosignature remains undisputed, and is not an error, then we should expect, very possibly, for there to be Martian microbes. Which might, and I am not kidding here, literally still be revivable. There have been reports of reviving bacteria here on Earth from 250 million years ago, which had been frozen in salt crystals (of which there are a bunch on Mars in the form of ancient salt deposits, and they’d again be much better preserved than here).


Mars is therefore like a crime scene that has been taped off for billions of years, containing all the evidence about the origin of life and the greater Precambrian period. Fossils could be on Mars from these times. Even assuming that Mars never developed multicellular life, there could be fossils of colonies, like chains and algae and biofilms. There are fossils of such things on Earth that are 3.5 billion years old, like stromatolites (layers of dead bacteria). Yes, you can see something that was alive 3.5 billion years ago with your naked eye. You can hold it in your hand. And that’s on our churning, wild, verdant, overgrown, trampled, used-up and crowded blue world, not the mummified red one next door.

Rocky outcroppings on the surface of Mars, as captured by NASA's Perseverance rover
Rocks on Mars (Photo by NASA)

The Nature paper (without really mentioning it explicitly) supports the thesis that Mars is a museum for the origin of life. The rocks don’t show any scorching or obscuring from acidity or high temperatures. There’s basically no variation in the crystallinity to mess up the patterns here. Everything just looks great for making the inference that this was life’s leftovers.

Overall, a biosignature like the one published this week switches Mars from “A nearby dead world that we could go to if we felt like it” (all while naysayers shout “Why not live in Antarctica lol!”) to an absolute requirement for a complete scientific worldview. If you care at all about the origins of life itself, then you want us to go to Mars. Mars could solve questions like:

How did life evolve? Under what conditions? How rare are those conditions? Did life spread from Earth to Mars? Or Mars to Earth? Or did they both develop it simultaneously?

I can’t help but note: the reactions would indicate an iron-sulfur based metabolism for the Martian microbes, which is a metabolism that goes back as far as Earth’s history does. There is literally something called the “iron-sulfur world hypothesis.” So there’s a very close match to what was just found on Mars, and the potential early metabolic pathways of Earth. This could be a case of convergent evolution, which tells us a lot about the probabilities of life evolving in general. Or it could indicate transfer between planets via asteroid collisions knocking chunks off into space, which sounds crazy, but was a surprisingly common event. Early life could have hitched a ride on such a chunk (also called “natural panspermia”). Natural panspermia could either be an Earth → Mars scenario, or a Mars → Earth scenario.

Intriguing for the Mars-as-a-museum hypothesis, it seems a priori more likely to be a Mars → Earth scenario for any potential panspermia, as Mars was a bit ahead, planetary-formation-wise. If this ended up being true, it would mean Mars contains the actual origins of life. And so any signs of life’s origin literally aren’t here on Earth; they are over there (explaining why we are still stuck on this question).

Finally, one must mention: it could have been artificial panspermia. Seeding. A fly-by. You just hit a solar system and start delivering probes containing a bunch of hardy organisms that love iron and sulfur. Two planets close by at once? And they’re both wet with water? That’s an incredibly tempting bargain someone may have taken, 4 billion years ago. There’s zero, zip, nada evidence for it, right now. It’s just another hypothesis on the table, one that the museum of Mars could rule in or out. Consider that a co-author of the study said that, if the biosignature was made by life, its results:

… means two different planets hosted microbes getting their energy through the same means at about the same time in the distant past.

Anyway, enough handwaving speculation, the point is that we now have an entire planet that can potentially answer the Biggest Questions, and we won’t know those answers until we legitimately go and check. As Science writes:

Bright Angel samples and others are stored on the rover and at cache points on the Martian surface in labeled tubes for eventual retrieval, but the proposals for how to go get them are in a state of expensive disarray.

I think it’s really important we solve that “expensive disarray.” And not just retrieve these particular samples mechanically, as was planned in the usual Wile E. Coyote kind of NASA scheme involving launching the sample tubes into orbit to be caught by some other robot.

When it comes to space, we’ve completely forgotten about humans, and how capable we are, and the vital energy that comes from humans doing something collectively together. Maybe because we didn’t have a good uniting reason. Now we do. One human on Mars could do more in a single day with a rock and a high-school-level lab than robotic missions could do in decades.

Humanity is fractured and warring. We kill each other in horrible ways. Our mental health? Not great! Our democratic values? Hanging by a thread!

I know it sounds crazy, but in a way I think it’d be better to go to Mars than have yet another political debate. Yes, many of our troubles are matters of policy—there’s a ton of true disagreement, poor reasoning, and outright malice. But think of the nation as if it were just one person. For an individual, it’s true that digging deep down into a personal issue and reaching some internal reconciliation after a bunch of debate is sometimes the answer. But other times, you’ve just obsessed over the problem and made it larger and worse. Then, it’s actually better to just go out into the world and embrace the fact that you’re a human being with free will who can Just Do Things. The same goes, I think, for civilizations. We can Just Do Things too. And going to Mars, done collectively, nationwide, in the name of humanity as a whole, would have deeply felt ramifications for the memes and ids and roiling hindbrains that dominate our culture today.

And yes, getting to Mars is not going to be easy. NASA needs to get off its collective butt and actually operate in a way it hasn’t in decades and switch to caring about human missions. Our tech titans will need to refocus away from winning the race about who can generate the most realistic images of cats driving cars, or whatever.

But most of all, we need to remember that we are not low creatures.

Let’s go to Mars.


Image credit: Frank Frazetta, A Princess of Mars.

Erik's Plea in The Free Press: Bring Back Aristocratic Tutoring

2025-09-02 23:00:47

There has been, for most of my life, a malaise around education.

The mood is one of infinite pessimism. No intervention works. Somehow, no one ever has been educated, or ever will be educated.

Or maybe current education just sucks.

Because we do know what works, at least in broad strokes, from the cognitive science of learning: things like spaced repetition and keeping progression in the zone of proximal development, and all sorts of other techniques that sound fancy but are actually simple and sensible. They just aren’t implemented.
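These techniques really are simple at their core. As one illustration, here is a minimal sketch of spaced-repetition scheduling in Python, loosely in the spirit of the SM-2 family of algorithms; the starting interval, growth rule, and ease-factor bounds are illustrative assumptions, not any particular app’s implementation.

```python
# A toy spaced-repetition scheduler (assumed parameters, for illustration only).
# Recalled items get reviewed at multiplicatively growing intervals; forgotten
# items reset to a short interval and become slightly "harder" (lower ease).

def next_interval(prev_interval_days: float, ease: float, recalled: bool) -> tuple[float, float]:
    """Return (new_interval_days, new_ease) after one review of a flashcard."""
    if recalled:
        # Grow the gap before the next review; cap ease so intervals
        # don't explode without bound.
        return prev_interval_days * ease, min(ease + 0.1, 3.0)
    # Forgotten: see it again tomorrow, and lower the ease factor
    # (floored, so the card never becomes impossibly frequent).
    return 1.0, max(ease - 0.2, 1.3)

# Simulate a card recalled successfully five times in a row:
interval, ease = 1.0, 2.5
schedule = []
for _ in range(5):
    interval, ease = next_interval(interval, ease, recalled=True)
    schedule.append(round(interval, 1))

# The review gaps stretch out: a few days, then weeks, then months.
```

The point of the sketch is just how little machinery is involved: a couple of multiplications produce review gaps that stretch from days to months, which is the whole trick of spacing out practice right before forgetting.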

In fact, a new study showed that education faculty (i.e., the people at education colleges who are supposed to train teachers) may have no better understanding of the science of learning than faculty of any other subject. According to the study (Cuevas et al., 2025):

Surprisingly, education faculty scored no better in pedagogical knowledge than faculty of any other college and also showed low metacognitive awareness…. The implications for colleges of education are more dire in that they may be failing to prepare candidates in the most essential aspects of the field.

So I think there will be a revolution in my lifetime. And what I personally can contribute is to constantly harp on how not everything in education is necessarily dismal and opaque and impossible; there have been great educations in the past.

So right now I am in The Free Press (one of the largest Substacks, with ~1.5 million subscribers) arguing that we should bring back “aristocratic tutoring” in some modern cheaper form, and talking about my own experience teaching reading.

link to the article

Those who’ve read my Aristocratic Tutoring series and my Teaching (Very) Early Reading series will certainly be familiar with a lot of what’s in there, as it draws from those directly. However, by virtue of being compact, and tying together a lot of strands that have been floating around here in various pieces, I think it’s worth checking out.

In The Free Press article, I say:

Right now is the most exciting time in child education since Maria Montessori started her first school over 100 years ago.

A lot of this is due to excitement around programs like Alpha School and Math Academy.

A visualization of Math Academy’s complete knowledge graph

However, I didn’t get a chance to talk about the more somber story, i.e., what I think the realistic outcome is.

I think that, in the future, adaptive curricula plus some sort of background AI that surveys and tracks overall progress will indeed form the core of a lot of subjects for most students. However, I also think you probably need a superintelligent AI agent to outstrip a good human tutor. That’s a very high bar. It most likely means that AI and ed-tech eat education from the bottom up.

The good news is that this frees up resources and increases variance, letting schools and tutors focus on what humans can add above and beyond adaptive curricula and AI (and gives kids more time back for themselves).

Is this the best of all possible worlds? Probably not, no. But honestly, almost anything would be better at this point, given what we know is achievable via the science of learning, and where things currently stand in implementation.

Redesigning The Intrinsic Perspective

2025-08-22 00:01:40

For a long time, if you Googled “how to get subscribers on substack,” an old essay of mine would crop up, advising aspiring Substackers to find a cohesive aesthetic. I originally wrote it years ago to celebrate TIP passing 2,000 subscribers, and thanks to sitting so high in the Google rankings for so long, I do think that essay had a beautifying effect on this platform—in fact, I know of at least one prominent Substacker who credits it with inspiring their own design (not to mention the sky-high use of gold as a link color).

Substack is a more serious medium now, and 2,000 subscribers isn’t exactly the big leagues anymore.

Regularly, new media ventures launch here on the platform rather than as websites of their own. Most recently, The Argument, a self-described proudly liberal newsletter, debuted earlier this week with $4 million in funding. Everyone wants to talk about how it recruited a star-studded cast of writers like Matthew Yglesias, and why (or if) liberal magazines get better funding than conservative ones, and what a $20 million valuation of a Substack can possibly be based on, and so on.

But I want to talk about how The Argument started off with a lime green background.

Now, immediately they bent the knee and changed it (although I only saw one complaint on their Welcome post). I wish they’d kept the lime green. At least a little longer, just to see. It was distinct as all get out, and for in-your-face political argumentation, it worked a lot better than the “we are very serious people” salmon underbelly it’s now in a toe-to-toe fight with The Financial Times over. A magazine like The Argument revolves around screenshots of its titles being shared (or hate-shared) on X, and when you were hit with a sudden burst of acidic lime in the timeline, like a pop of flavor, you’d have at least known what you were reading. If your brand is in-your-face liberalism, then it makes sense to have an in-your-face color associated with it. Whoever made that initial (ahem, bold) design decision, and was later overruled, has my sympathy—I can see the vision. Almost taste it, actually. My point is that, lime green or not…


Aesthetics matter.

They define what you’re doing not just to others, but to yourself. TIP isn’t just what others are looking at, this is what I’m looking at all day, too. And now, closing in on 65,000 subscribers instead of 2,000, over the past few weeks I’ve set out to redesign TIP, starting with the homepage.

But to make decisions about aesthetics, you need to have a conception of self. This is probably the most significant and obvious failure mode: people are attracted to an image, or a visual vibe, but don’t themselves embody it. They can steal it, but they can’t create it. You must be able to answer: What are you trying to do? Why are you doing it? What is this thing’s nature?

And over the years I’ve developed a clearer understanding of the nature of writing a newsletter, or at least, my kind of newsletter. The closest point of comparison I know of is a gallery tour of a museum. There’s a certain ambulatory nature to the whole thing. First you’re looking here, and then, somewhere else. Yes, there are common topics and themes and repetitions and so on, but the artistic effect is ultimately collective, rather than individual. I wanted to capture this tour-like atmosphere, so I designed the new TIP homepage around the idea of a literal gallery of images, hung inside a set of old painting frames. This is what it looks like now:

What I liked about my idea to use actual painting frames (these are cleaned up digital images of real frames) is that, much like an art gallery, a significant amount of white space gives each image, and its title, a chance to breathe. And when you go to click on a piece, it’s sort of like stepping into a painting.


To maintain this look, I’ll be picking out new images for each new post, but I get the additional fun of placing each image inside a chosen frame, of which I have a couple dozen pre-established, saved and ready.

Meanwhile, the new Welcome page is a sort of infinite ladder I made with these frames: one inside the other, going on forever.

It reflects not only some classic TIP topics (remember when I argued that “Consciousness is a Gödel sentence in the language of science”), but also the structure of a newsletter itself, which sequentially progresses one step at a time (until death do us part).

However, the “paintings” will be, at least for now, mostly reserved for the homepage and link previews. For the posts themselves that land in your inbox, they’ll often bear a new masthead. It’s what you saw at the top.

It’s created from a very old pattern I found, sometimes called rolwerk, which is a Renaissance technique. Again, there’s a lot of white space here, similar to a gallery. A masthead like this needs to not say too much—it is, after all, the lead for every single post, and so must span genres and moods, all without assumptions or implications. It must be in a flexible stance, much like how a judo expert or swordfighter plants their feet, able to move in one direction or another on a whim. It cannot overcommit.

Not to thwack you on the head with this, but I obviously picked out a lot of things from the Renaissance era for this redesign (many of the frames too).

Why?

Because centuries ago, before there was science, there was “natural philosophy.” It was before the world got split up by specialization, before industrialization and all the inter-departmental walls in universities got built. And yes, there was a certain amateurism to it all! That’s admitted. And there probably is here, too. At the same time, there’s a holistic aspect that feels important to TIP. It’s why I write about science, sure, but also lyric essays like The Lore of the World series (more soon!), and education treatises, and stuff on philosophy and metaphysics, and even occasionally pen a bit of fiction, and I wanted to capture that spirit with the designs here.


While I might try out using the “paintings” for header images in the future, I’ll be sticking to the masthead for now. I can’t help but feel that what’s arriving in your email should be stripped-down, easy to parse (and load). The design of a Substack needs to get out of the way of the writing, while still giving that little click of recognition about what you’re reading, and why, and preparing for the voice to come.

I think the new Intrinsic Perspective will be influenced by this choice. It may be a little less “here’s a huge centerpiece essay” and a little more “here’s something focused and fast.” Overall, a few less right hooks. A few more left jabs. I’m not talking about any major changes, just pointing out that the new design allows for a faster tempo and reactivity, and we all grow into our designs, in the end.

Of course, I’ll miss the old design. I was the first person on Substack (at least to my knowledge) to actually employ a resident artist who did the header images of every post. Let’s not forget or pass over how, for the past four years, TIP has been illustrated by the wonderful artist, Alexander Naughton. And he and I will still be collaborating together on some future projects, which you’ll hear more about (and see more about) early next year. But personally, I can’t help but be excited to have a more direct hand in making the homepage what it is, and getting to pick out images myself to make the new “paintings” with.

You can stop reading now if you don’t want to get too meta, but if you’re curious on what I recommend for Substacks in general, read on.

One reason for this extra section is simply that I’d prefer my idea for TIP’s new “museum-style” not be immediately stolen and replicated ad nauseam by other Substacks. And I do think you can apply some of the principles I used to come up with something different, but equally interesting. For advice on that, I’ll start with why, counterintuitively…

Header images (and thumbnails) have declined in importance.


The Internet You Missed: A 2025 Snapshot

2025-08-13 22:45:16

There are many internets. There are internets that are bright and clean and whistling fast, like the trains in Tokyo. There are internets filled with serious people talking as if in serious rooms, internets of gossip and heart emojis, and internets of clowns. There are internets you can only enter through a hole under your bed, an orifice into which you writhe.

It’s a chromatic thing that can’t hold a shape for more than an instant. But every year, I get to see the internet through the eyes of subscribers to The Intrinsic Perspective. The community submits its writing available online, and I curate and share it.

The quality was truly exceptional this year. The pieces all speak for themselves and can be approached on their own terms, so I organized them simply, to highlight how each is worth reading, thinking about, disagreeing with, or simply enjoying; at the very least, they are worth browsing through at your leisure, to find hidden gems of writers to follow.

Please note that:

  • I cannot fact check each piece, nor is including it an official endorsement of its contents.

  • Descriptions of each piece, in italics, were written by the authors themselves, not me (but sometimes adapted for readability). What follows is from the community. I’m just the curator here.

  • I personally pulled excerpts and images from each piece after some thought, to give a sense of them.

  • If you submitted something and it’s missing, note that it’s probably in an upcoming Part 2.

So here is their internet, or our internet, or at least, the shutter-click frozen image of one possible internet.


1. “Wisdom of Doves” by Doctrix Periwinkle.

Evolved animal behaviors are legion, so why do we choose the examples we do to explain our own?

According to psychologist Jordan Peterson, we are like lobsters. We are hierarchical and fight over limited resources….

Dr. Peterson is a Canadian, and he is describing the North Atlantic American lobster, Homarus americanus. Where I live, lobsters are different.

For instance, they do not fight with their claws, because they do not have claws…. Because they do not have claws, spiny lobsters (Panulirus argus) are preyed upon by tropical fish called triggerfish…. The same kind of hormone signaling that made American lobsters exert dominance and fight each other causes spiny lobsters to cluster together to fight triggerfish, using elaborately coordinated collective behavior. Panulirus lobsters form choreographed “queues,” “rosettes,” and “phalanxes” to keep each other safe from the triggerfish foe. Instead of using claws to engage in combat with other lobsters, spiny lobsters use their antennules—the spindly homologues of claws seen in the photograph above—to keep in close contact with their friends….

If you are a lobster, what kind of lobster are you?


2. “We Know A Good Life When We See It” by Matt Duffy.

A reflection on how fluency replaced virtue in elite culture, and why recovering visible moral seriousness is essential to institutional and personal coherence.

We’ve inherited many of the conditions that historically enabled virtue—stability, affluence, access, mobility—but we’ve lost the clarity on virtue itself. The culture of technocratic primacy rewards singularity: total, often maniacal, dedication to one domain at the expense of the rest…. Singular focus is not a human trait. It is a machine trait. Human life is fragmented on purpose. We are meant to be many things: friend, worker, parent, neighbor, mentor, pupil, citizen.


3. “The Vanishing of Youth” by Victor Kumar, published in Aeon.

Population decline means fewer and fewer young people, which will lead to not just economic decay but also cultural stagnation and moral regress.

Sometimes I’m asked (for example, by my wife) why I don’t want a third child. ‘What kind of pronatalist are you?’ My family is the most meaningful part of my life, my children the only real consolation for my own mortality. But other things are meaningful too. I want time to write, travel and connect with my wife and with friends. Perhaps I’d want a third child, or even a fourth, if I’d found my partner and settled into a permanent job in my mid-20s instead of my mid-30s… Raising children has become enormously expensive – not just in money, but also in time, career opportunities and personal freedom.


4. “Three tragedies that shape human life in age of AI and their antidotes”, by brothers Manh-Tung Ho & Manh-Toan Ho, published in the journal AI & Society.

In this paper, we [the authors] discuss some problems arising in the AI age, and then, drawing from both Western and Eastern philosophical traditions, sketch out some antidotes. Even though this was published in a scientific journal, we published in a specific section called Curmudgeon Corner. According to the journal, it “is a short opinionated letter to the editor on trends in technology, arts, science and society, commenting emphatically on issues of concern to the research community and wider society, with no more than 3 references and 2 co-authors.”

The tragedy of the commons is the problem of inner group conflicts driven by the lack of cooperation (and communication) when each individual purely follows his/her own best interest (e.g., raises more cattle to feed on the commons), doing so will undermine the collective good (e.g., the commons will be over-grazed). Thus, we define the AI-driven tragedy of the commons as short-term economic/psychological gains that drive the development, launch, and use of half-baked AI products and AI-generated contents that produce superficial information and knowledge, which ends up harming the individual and collective in the long term.


5. "Of Mice, Mechanisms, and Dementia" by Myka Estes.

Billions spent, decades lost: the cautionary tale of how Alzheimer’s research went all-in on a bad bet.

Another way to understand how groundbreaking these results were thought to be at the time is to simply follow the money. Within a year, Athena Neurosciences, where Games worked, was acquired by Elan Corp. for a staggering $638 million. In the press release announcing the merger, Elan proclaimed that the acquisition “provides the opportunity for us to capitalize on an important therapeutic niche, by combining Athena’s leading Alzheimer’s disease research program with Elan’s established development expertise.” The PDAPP mouse had transformed from laboratory marvel to the cornerstone of a billion-dollar strategy.

But, let’s peer ahead to see how that turned out. By the time Elan became defunct in 2013, they had sponsored not one, not two, but four failed Alzheimer's disease therapeutics, all based on the amyloid cascade hypothesis, hemorrhaging $2 billion in the process. And they weren't alone. Pharmaceutical giants, small biotechs, and research organizations and foundations placed enormous bets on amyloid—bets that, time and again, failed to pay off.


6. “Schrödinger's Chatbot” by R.B. Griggs.

Is an LLM a subject, an object, or some strange new thing in between?

It would be easy to insist that LLMs are just objects, obviously. As an engineer I get it—it doesn’t matter how convincing the human affectations are, underneath the conversational interface is still nothing but data, algorithms, and matrix multiplication. Any projection of subject-hood is clearly just anthropomorphic nonsense. Stochastic parrots!

But even if I grant you that, can we admit that LLMs are perhaps the strangest object that has ever existed?


7. "A Prodigal Son" by Eva Shang.

My journey back to Christianity and why it required abandoning worship of the world.

How miserable is it to believe only in the hierarchy of men? It’s difficult to overstate the cruelty of the civilization that Christianity was born into: Roman historian Mary Beard describes how emperors would intentionally situate blind, crippled, or diseased poor people at the edges of their elaborate banquets to serve as a grotesque contrast to the wealth and health of the elite. The strong did what they willed and the weak suffered what they must. Gladiatorial games transformed public slaughter into entertainment. Disabled infants were left to die in trash heaps or on hillsides. You see why the message of Christ spread like wildfire. What a radical proposition it must have been to posit the fundamental equality of all people: that both the emperor and the cripple are made in the image of God.


8. “Why Cyberpunk Matters” by C.W. Howell.

Though the genre is sometimes thought dated, cyberpunk books, movies, and video games are still relevant. They form a last-ditch effort at humanism in the face of machine dominance.

So, what is it that keeps drawing us to this genre? It is more, I believe, than simply the distinct aesthetic…. It reflects, instead, a deep-seated and long-standing anxiety that modern people feel—that our humanity is at stake, that our souls are endangered, that we are being slowly turned into machines.


9. “You Are So Sensitive” by Trevy Thomas.

This piece is about the 25 percent of our population, myself the author included, who have a higher sensitivity to the world around us, with both good and bad effects.

As a young girl, I could ride in a car with my father and sing along to every radio song shamelessly loud. He was impressed that I knew all the words even as the musician in him couldn’t help but critique the song itself. “Why does every song have the word ‘baby’ in it?” he’d ask. But then I got to a point where I’d leave a store or promise never to return to a restaurant because of the music I’d heard in it. Some song from that place would be so lodged in my brain that it would wake me in the middle of the night two weeks later…. about a quarter of the population—humans and animals alike—have this increased level of sensitivity. It can show up in various forms, including sensitivity to sound, light, smell, and stimulation.


10. “Solving Popper's Paradox of Tolerance Before Intolerance Ends Civilization” by Dakara.

A solution to preserving the free society without invoking the conflict of Popper's Paradox.

… Are we now witnessing the end of tolerant societies? Is this the inevitable result that eventually unfolds once an intolerant ideology enters the contest for ideas and the rights of citizens?…

Have we already reached the point where the opposing ideologies are using force against the free society? They censor speech, intervene in the employment of those they oppose, and will utilize physical violence for intimidation.


11. “Knowledge 4.0” by Davi.

From gossip to machine learning - how we bypassed understanding.

Speech allowed us to transmit knowledge among humans, the written word enabled us to broadcast it across generations, and software removed the cost of accessing that knowledge, while turbocharging our ability of composing any piece of knowledge we created with the existing humanity-level pool. What we call now machine learning came to remove one of the few remaining costs in our quest of conquering the world: creating knowledge. It is not that workers will lose their jobs in the near future, this is the revolution that will make obsolete much of our intellectual activity for understanding the world. We will be able to craft planes without ever understanding why birds can fly.


12. “Problematic Badass Female Tropes” by Jenn Zuko.

An overview of the PBFT series of 7 that covers the bait-and-switch of women characters that are supposed to be strong, but end up subservient or weak instead.

The problem that becomes apparent here (as I’m sure you’ve noticed even in only this first folktale example), is that in today’s literature and entertainment, these strong, independent women characters we read about in old stories like Donkeyskin and clever Catherine are all too often subverted, altered, and weakened; either in subtle ways or obvious ways, especially by current pop culture and Hollywood.


13. "The West is Bored to Death" by Stuart Whatley, published in The New Statesman.

An essay on the classical "problem of leisure," and how a society/culture that fails to cultivate a leisure ethic ends up in trouble.

Developing a healthy relationship with free time does not come naturally; it requires a leisure ethic, and like Aristotelian virtue, this probably needs to be cultivated from a young age. Only through deep, sustained habituation does one begin to distinguish between art and entertainment, lower and higher pleasures, titillation and the sublime.


14. “MAGA As The Liberal Shadow” by Carlos.

In a very real sense, liberalism is the root cause of MAGA, and it's very important to understand this to see a way forward.

It’s no wonder that I feel liberalism as the source of this eternal no: it is liberals who define the collective values of our culture, as it is the cities that produce culture, and the cities are liberal. So the voice of the collective in my head, is a liberal. My little liberal thought cop, living in my head.

4chan is great because you get to see what happens when someone evicts the liberal cop, the shadow run rampant. Sure, all sorts of very naughty emotions get expressed, and it is quite a toxic place, but it’s like a great sigh, finally, you can unwind, and say whatever the fuck you want, without having to take anyone else’s feelings into account.


15. “The Blowtorch Theory: A New Model for Structure Formation in the Universe” by Julian Gough.

The James Webb Space Telescope has opened up a striking and unexpected possibility: that the dense, compact early universe wasn’t shaped slowly and passively by gravity alone, but was instead shaped rapidly and actively by sustained, supermassive black hole jets, which carved out the cosmic voids, shaped the filaments, and generated the magnetic fields we see all around us today.

An evolved universe, therefore, constructs itself according to an internal, evolved set of rules baked deep into its matter, just as a baby, or a sprouting acorn, does.

The development of our specific universe, therefore, since its birth in the Big Bang, mirrors the development of an organism; both are complex evolved systems, where (to quote the splendid Viscount Ilya Romanovich Prigogine), the energy that moves through the system organises the system.

But universes have an interesting reproductive advantage over, say, animals.


16. “Tea” by Joshua Skaggs.

Joshua Skaggs, a single foster dad, has a 3 a.m. chat with one of his kids.

My second night as a foster dad I wake in the middle of the night to the sound of footsteps. I throw on a t-shirt and find him pacing the living room, a teenager in basketball shorts and a baggy t-shirt….

“I broke into your closet,” he says.

“Oh yeah?” I say….

“I looked at all your stuff,” he says. “I thought about drinking your whiskey, but then I thought, ‘Nah. Josh has been good to me.’ So I just closed the door.”

I’m not sure what to say. I eventually land on: “That’s good. I’m glad you didn’t take anything.”

“It was really easy to break into,” he says. “It only took me, like, three seconds.”

“Wow. That’s fast.”

“I’m really good at breaking into places.”


17. “Notes in Aid of a Grammar of Assent” by Amanuel Sahilu.

Through the twin lenses of literature and science, I take a scanning look at the human tendency to detect and discern personhood.

This is all to say, a main reason for modern skepticism toward serious personification is that we think it’s shoddy theorizing….

But I think few moderns reject serious personification on such rational grounds. It may be just as likely we’re driven to ironic personification after adjusting to the serious form as children, when we’re first learning about language and the world. Then as we got older the grown-ups did a kind of bait-and-switch, and serious personification wasn’t allowed anymore.


18. “Book Review: Griffiths on Electricity & Magnetism” by Tim Dingman.

In adulthood I have read many STEM textbooks cover-to-cover. These are textbooks that are supposed to be standards in their fields, yet most of them are not great reading. The median textbook is more like a reference manual with practice problems than a learning experience.

Given the existence and popularity of nonfiction prose on any number of topics, isn’t it odd that most textbooks are so far from good nonfiction? We have all the pieces, why can’t we put them together? Or are textbooks simply not meant to be read?

Certainly most students don’t read them that way. They skim the chapters for equations and images, mostly depend on class to teach the ideas, then break out the textbook for the problem set and use the textbook as reference material. You don't get the narrative that way.

Introduction to Electrodynamics by David Griffiths is the E&M textbook. We had it in my E&M class in college…. Griffiths is so readable that you can read it like a regular book, cover to cover.


19. “Fine Art Sculpture in the Age of Slop” by Sage MacGillivray.

Exploring analogue wisdom in a digital world: Lessons from a life in sculpture touching on brain lateralization, deindustrialization, Romanticism, AI, and more.

… As Michael Polanyi pointed out, it only takes a generation for some skills to be lost forever. We can’t rely on text to retain this knowledge. The concept of ‘stealing with your eyes’, which is common in East Asia, points to the importance of learning by watching a master at work. Text (and even verbal instruction) is flattening….

These days, such art studio ‘laboratories’ are hard to find. Not only is the environment around surviving studios more sterile and technocratic, but artists increasingly outsource their work to a new breed of big industry: the large art production house. A few sketches, a digital model, or perhaps a maquette — a small model of the intended work — are shared with these massive full-service shops that turn sculpture production from artistic venture into contract work. As the overhead cost of running a studio has increased over time, this big-shop model of outsourcing is often the only viable model for artists who want to produce work at scale….

And just like a big-box retailer can wipe out the local hardware store, the big shop model puts pressure on independent studios that train workers in an artisanal mode and allow the artist to evolve the artwork throughout the production process.


20. “Setting the Table for Evil” by Reflecting History.

About the role that ideology played in the rise and subsequent atrocities of Nazi Germany, and the historical debate between situationism and ideology in explaining evil throughout history.

Some modern “historians” have sought to uncouple Hitler’s ideology from his actions, instead seeking to paint his “diplomacy” and war making as geopolitical reactions to what the Allies were doing. But Hitler’s playbook from the beginning was to connect the ideas of racist nationalism and extreme militarism together, allowing each to justify the existence of the other. Nazi Germany’s war was more than just geopolitical strategic war-making chess, it was conquest and subjugation of racial enemies. The British leadership were “Jewish mental parasites,” the conquest of Poland was to “proceed with brutality!… the aim is the removal of the living forces...,” the invasion of the Soviet Union sought to eliminate “Jewish Bolsheviks,” the war with the United States was fought against President Roosevelt and his “Jewish-plutocratic clique.” Hitler applied his ideology to his conquest and subjugation of dozens of countries and peoples in Europe. He broke nearly every international agreement he ever made, and viewed treaties and diplomacy as pieces of paper to be shredded and stepped over on the way to power. Anyone paying attention to what Hitler said or did in 1923 or 1933 or 1943 had to reckon with the fact that Hitler’s ideology informed everything he did.


21. “Which came first, the neuron or the feeling?” by Kasra.

A reverie on the history and philosophy behind the mind-body problem.

[Image: every neuron in a fruit fly brain]

… I do know that life gets richer when you contemplate that either one of these—the neuron and the feeling—could be the true underlying reality. That your feelings might not just be the deterministic shadow of chemicals bouncing around in your brain like billiard balls. That perhaps all self-organizing entities could have a consciousness of their own. That the universe as a whole might not be as dark and cold and empty as it seems when we look at the night sky. That underneath that darkness might be the faintest glimmer of light. Of sentience. A glimmer of light which turns back on itself, in the form of you, asking the question of whether the neuron comes first or the feeling.


22. “Dying to be Alive: Why it's so hard to live your unlived life and how you actually can” by Jan Schlösser.

Exploring the question of why we all act as if we were immortal, even though we all know on an intellectual level that we're going to die.

Becker states that humans are the only species who are aware of their mortality.

This awareness conflicts with our self-preservation instinct, which is a fundamental biological instinct. The idea that one day we will just not exist anymore fills us with terror – a terror that we have to manage somehow, lest we run around like headless chickens all day (hence ‘terror management’).

How do we manage that terror of death?

We do it in one of two ways:

  1. Striving for literal or symbolic immortality

  2. Suppressing our awareness of our mortality


23. “Thirst” by Vanessa Nicole.

Connecting Viktor Frankl’s idea of “the existence of thirst implies the existence of water,” to choosing to live with idealism and devotion.

This is, essentially, how I define being idealistic: a devotion to thirst and belief in the existence of water. To me, idealism isn’t about a hope for a polished utopia—it’s in believing that fulfillment can transform, from an abstract emptiness into the pleasantly refreshed taste in your mouth. (And anyway, there’s a whole universe between parched and utopia.)


24. “A god-sized hole” by Iuval Clejan.

A modern interpretation of Pascal's presumptuous phrase (about a god-sized hole).

People get to feel good about themselves by working hard at something that they get paid for. It also gives them social legitimacy. For some it offers a means of connection with other humans that is hard to achieve outside of work and church. For a few lucky ones it offers a way to express talent and passion. But for most it is an attempt to fill the tribe, family and village-sized holes of their souls.


25. “Have 'Quasi-Inverted Spectrum' Individuals Fallen into Our World, Unbeknownst to Us?” by Ning DY.

Drawing on inconsistencies in neuroimaging and a re-evaluation of first-person reports, this essay argues that synesthesia may not be a cross-activation of senses, but rather a fundamental, 'inverted spectrum-like' phenomenon where one sensory modality's qualia are entirely replaced by another's due to innate properties of the cortex.

I wonder, have we really found individuals similar to those in John Locke's 'inverted spectrum' thought experiment (though different from the original, as this is not a symmetrical swap but rather one modality replacing another)? Imagine if, from birth, our auditory qualia disappeared and were replaced by visual qualia, changing the experienced qualia just as in the original inverted spectrum experiment. How would we describe the world? Naturally, we would use visual elements to name auditory elements, starting from the very day we learned to speak. As for the concepts described by typical people, like pitch, timbre, and intensity, we would need to learn them carefully to cautiously map these concepts to the visual qualia we "hear." Perhaps synesthetes also find us strange, wondering why we give such vastly different names to two such similar experiences?


26. “Elementalia: Chapter I Fire” by Kanya Kanchana.

Drawing from the vast store of our collective imagination across mythology, philosophy, religion, literature, science, and art, this idiosyncratic, intertextual, element-bending essay explores the twined enchantments of fire and word.

My legs and feet are bare—no cloth, no metal, not even nail polish. Strangely, my first worry is that it feels disrespectful to step on life-giving fire. Then I see a mental image of a baby in his mother’s arms, wildly kicking about—but she’s smiling. I better do this before I think too much. I step on the coals. I feel a buzz go up my legs like invisible electric socks but it doesn’t burn. It doesn’t burn.

I don’t run; I walk. I feel calm. I feel good. When I get to the other side, I grin at my friends and turn right around. I walk again.


27. “When Scientists Reject the Mathematical Foundations of Science” by Josh Baker.

By directly observing emergent mechanical behaviors in muscle, I have discovered the basic statistical mechanics of emergence, which I describe in a series of posts on Substack.

Over the past several years, six of these manuscripts were back-to-back triaged by editors at PNAS. Other lower-tier journals rejected them for reasons ranging from “it would overturn decades of work” and “it’s wishful thinking” to reasons unexplained. An editorial decision in the journal Entropy flipped from a provisional accept to reject, followed by radio silence from the journal.

A Biophysical Journal advisory board rejected some of these manuscripts. In one case, an editor explained that a manuscript was rejected not because the science was flawed, but because the reviewers they would choose would reject it with near certainty.


28. "The Tech is Terrific, The Culture is Cringe" by Jeff Geraghty.

A fighter test pilot and Air Force General answers a challenge put to him directly by Elon Musk.

On a cool but sunny day in May of 2016, in his SpaceX facility in Redmond, Washington, Elon Musk told me that he regretted putting so much technology into the Tesla Model X. His newest model was rolling out that year, and his personal involvement with the design and engineering was evident. If he had it to do over again, he said, he wouldn’t put so much advanced technology into a car….

Since that first ride, I’ve been watching the car drive for almost a year now, and I’m still impressed…

My daughter, however, wouldn’t be caught dead in it. She much prefers to ride in the scratched-up old Honda Odyssey minivan. She has an image to uphold, after all.


29. “The Lamps in our House: Reflections on Postcolonial Pedagogy” by Arudra Burra.

In this sceptical reflection on the idea of 'decolonizing' philosophy, I question the idea that we should think of the 'Western philosophical tradition' as in some sense the exclusive heritage of the modern West; I connect this with what I see as certain regrettable nativist impulses in Indian politics and political thought.

I teach philosophy at the Indian Institute of Technology-Delhi. My teaching reflects my training, which is in the Western philosophical tradition: I teach PhD seminars on Plato and Rawls, while Bentham and Mill often figure in my undergraduate courses.

What does it mean to teach these canonical figures of the Western philosophical tradition to students in India?… Some of the leading lights of the Western canon have views which seem indefensible to us today: Aristotle, Hume, and Kant, for instance. Statues of figures whose views are objectionable in similar ways have, after all, been toppled across the world. Should we not at least take these philosophers off their pedestals? …

The Indian context generates its own pressures. A focus on the Western philosophical tradition, it is sometimes thought, risks obscuring or marginalising what is of value in the Indian philosophical tradition. Colonial attitudes and practices might give us good grounds for this worry; recall Macaulay’s famous lines, in his “Minute on Education” (1835), that “a single shelf of a good European library [is] worth the whole native literature of India and Arabia.”


30. “What Happens When We Gamify Reading” by Mia Milne.

How reading challenges led me to prioritize reading more over reading deeply and how to best take advantage of gamification without getting swept away by the logic of the game.

The attention economy means that we’re surrounded by systems designed to suck up our focus to make profit for others. Part of the reason gamification has become so popular is to help people do the things they want to do rather than only do the things corporations want them to do.


31. “Pan-paranoia in the USA” by Eponynonymous.

A brief history of the "paranoid style" of American politics through a New Romantic lens.

As someone who once covered the tech industry, I join in Ross Barkan’s wondering what good these supposed marvels of modern technology—instantaneous communication, dopamine drips of screen-fed entertainment, mass connectivity—have really done for us. Are we really better off? ….

But we are also facing a vast and deepening suspicion of power in all forms. Those suspicions need not be (and rarely are) rationally obtained. The old methods of releasing societal pressures—colonialism, western expansionism, post-war consumerism—have atrophied or died. It should come as no surprise when violence manifests in their place.

GPT-5's debut is slop; Will AI cause the next depression? Harvard prof warns of alien invasion; Alpha School & homeschool heroes

2025-08-06 23:26:15

The Desiderata series is a regular roundup of links and thoughts for paid subscribers, and an open thread for the community.



Contents:

  1. GPT-5’s debut is slop.

  2. 10% of all human experience took place since the year 2000.

  3. Education is a mirror. What’s Alpha School’s reflection?

  4. The rise of the secular homeschool superheroes.

  5. “The Cheese that Gives you Nightmares.”

  6. Avi Loeb at Harvard warns of alien invasion.

  7. Moths as celestial navigators.

  8. Will AI cause the next depression?

  9. From the archives.

  10. Comment, share anything, ask anything.


1. GPT-5’s debut is slop.

GPT-5’s launch is imminent. Likely tomorrow. We also have the first output known for sure to be from GPT-5, shared by Sam Altman himself as a screenshot on social media. He asked GPT-5 “what is the most thought-provoking show about AI?”

Hmmm.

Hmmmmmmmmm.

Yeah, so #2 is a slop answer, no?

Maybe even arguably a hallucination. Certainly, that #2 recommendation, the TV show Devs, does initially seem like a good answer to Altman’s question, in that it is “prestige sci-fi” and an overall high-quality show. But I’ve seen Devs. I’d recommend it myself, in fact (streaming on Hulu). Here’s the thing: Devs is not a sci-fi show about AI! In no way, shape, or form, is it a show about AI. In fact, it’s refreshing how not about AI it is. Instead, it’s a show about quantum physics, free will, and determinism. This is the main techno-macguffin of Devs: a big honking quantum computer.

[Image: the quantum computer in Devs (spoilers for the first 30 minutes?)]

As far as I can remember, the only brief mention of AI comes in the first episode, when its protagonist is recruited away from the company’s internal AI division to go work on the new quantum computing project. Now, what’s interesting is that GPT-5 does summarize the show appropriately as being about determinism and free will and existential tension (and, by implication, not about AI). But its correct summary makes its error of including Devs on the list almost worse, because it shows off the same inability to self-correct that LLMs have struggled with for years now. GPT-5 doesn’t catch the logical inconsistency of giving a not-AI-based description of a TV show, despite being specifically asked for AI-based TV shows (there’s not even a “This isn’t about AI, but it’s a high-quality show about related subjects like…”). Meaning that this output, the very first I’ve seen from GPT-5, feels extremely LLM-ish, falling into all the old traps. Its fundamental nature has not changed.

This is why people still call it a “stochastic parrot” or “autocomplete,” and it’s also why such criticisms, even though weaker than they once were, can’t be entirely dismissed. Even at GPT-5’s incredible level of ability, its fundamental nature is still that of autocompleting conversations. In turn, autocompleting conversations leads to slop, exactly like giving Devs as a recommendation here. GPT-5 is secretly answering not Altman’s question, but a different question entirely: when autocompleting a conversation about sci-fi shows and recommendations, what common answers crop up? Well, Devs often crops up, so let’s list Devs here.

Judge GPT-5’s output by honest standards. If a human said to me “There’s this great sci-fi show about AI, you should check it out, it’s called Devs,” and then I went and watched Devs, I would spend the entire time waiting for the AI plot twist to make an appearance. When the credits rolled at the series’ end, I would be 100% certain that person was an idiot.



2. 10% of all summed human experience took place since the year 2000.

According to a calculation by blogger Luke Eure, 50% of human experience (total experience hours by “modern humans”) has taken place after 1300 AD.

Which would mean that 10% of collective human experience has occurred since the year 2000! It also means that most of us now alive will live, or have lived, alongside a surprisingly large chunk of when things are happening (at least, from the intrinsic perspective).
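The arithmetic behind claims like this is easy to sanity-check. Here is a minimal back-of-envelope sketch (using my own rough world-population figures and a cutoff of 10,000 BC, not Eure’s actual dataset, so the numbers will differ somewhat from his):

```python
# Back-of-envelope check of the "10% of human experience since 2000" claim.
# Population figures (in millions) are rough assumed estimates; experience is
# approximated as person-years, integrated with the trapezoid rule.
pop = [  # (year, world population in millions) -- assumed rough values
    (-10000, 4), (-5000, 5), (-1000, 50), (1, 170), (500, 190),
    (1000, 265), (1300, 360), (1500, 440), (1700, 600), (1800, 900),
    (1900, 1650), (1950, 2500), (2000, 6100), (2025, 8000),
]

def person_years(data, start, end):
    """Trapezoidal integral of population over [start, end], in million person-years."""
    total = 0.0
    for (y0, p0), (y1, p1) in zip(data, data[1:]):
        lo, hi = max(y0, start), min(y1, end)
        if lo >= hi:
            continue
        # linear interpolation of population at the clipped segment endpoints
        f = lambda y: p0 + (p1 - p0) * (y - y0) / (y1 - y0)
        total += (f(lo) + f(hi)) / 2 * (hi - lo)
    return total

total = person_years(pop, -10000, 2025)
since_2000 = person_years(pop, 2000, 2025)
since_1300 = person_years(pop, 1300, 2025)
print(f"share since 2000: {since_2000 / total:.0%}")
print(f"share since 1300: {since_1300 / total:.0%}")
```

With these crude figures the post-2000 share lands in the low teens, in the same ballpark as Eure’s ~10%; starting the clock far earlier than 10,000 BC (as a fuller account of “modern humans” would) only shaves the recent share down slightly, since early populations were tiny.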


3. Education is a mirror. What’s Alpha School’s reflection?

In the education space, the buzz right now is around Alpha School. Their pitch (covered widely in the media) is that they do 2 hours of learning a day with an “AI tutor.”

More recently, The New York Times profiled them:

At Alpha’s flagship, students spend a total of just two hours a day on subjects like reading and math, using A.I.-driven software. The remaining hours rely on A.I. and an adult “guide,” not a teacher, to help students develop practical skills in areas such as entrepreneurship, public speaking and financial literacy.

I’ll say upfront: I do believe that 2 hours of learning a day, if done well, could be enough for an education. I too think kids should have way more free time than they do. So there is something to the model of “2 hours and done” that I think is attractive.

But I have some questions, as I was one of the few actual attendees to the first “Alpha Anywhere” live info session, which revealed details of how their new program for homeschoolers works. Having seen more of it, Alpha School appears to be based on progressing through pre-set educational apps, and doesn’t often involve AI-as-tutor-qua-tutor (i.e., interacting primarily with an AI like ChatGPT). While the Times says that

But Alpha isn’t using A.I. as a tutor or a supplement. It is the school’s primary educational driver to move students through academic content.

all I saw was one use case, which was AI basically making adaptive reading comprehension tests on the fly (I think that specifically is actually a bad idea, and it looked like reading boring LLM slop to me).

For this reason, the more realistic story behind Alpha School is not “Wow, this school is using AI to get such great results!” but rather that Alpha School is “education app stacking” and there are finally good enough, and in-depth enough, educational apps to cover most of the high school curriculum in a high-quality and interactive way. That’s a big and important change! E.g., consider this homeschooling mom, who points out that she was basically replicating what Alpha School is doing by using a similar set of education apps.

Most importantly, and likely controversially, Alpha School pays the students to progress through the apps via an internal currency that can be redeemed for goodies (oddly, this detail is left out of the analysis of places like the Times—but hey, it’s “the paper of record,” right?).

My thoughts are two-fold. First, I do think it’s true that ed-apps have gotten good enough to replace a lot of the core curriculum and allow for remarkable acceleration. Second, I think it’s a mistake to separate the guides from the learning itself. That is, it appears the actual academics at Alpha School are self-contained, as if in a box; there’s a firewall between the intellectual environment of the school and what’s actually being learned during those 2 hours on the apps. Not to say that’s bad for all kids! Plenty of kids ultimately are interested in things beyond academics, and sequestering the academics “in a box” isn’t necessarily bad for them.

However, it’s inevitable that this disconnect makes the academics fundamentally perfunctory (to be fair, this is true for a lot of traditional schools as well). As I once wrote about the importance of human tutors:

Serious learning is socio-intellectual. Even if the intellectual part were to ever get fully covered by AI one day, the “socio” part cannot… just like how great companies often have an irreducibly great culture, so does intellectual progress, education, and advancement have an irreducible social component.

Now, I’m sure that Alpha School has a socio-intellectual culture! It’s just that the culture doesn’t appear to be about the actual academics learned during those 2 hours. And that matters for what the kids work on and find interesting themselves. E.g., in the Times we get an example of student projects like “a chatbot that offers dating advice,” and in Fox News another example was an “AI dating coach for teenagers,” and one of the cited recent accolades of Alpha School students is placing 2nd in some new high school competition, the Global AI Debates.

At least in terms of the public examples, a lot of the most impressive academic/intellectual successes of the kids at Alpha School appear to involve AI. Why? Because the people running Alpha School are most interested in AI!

And now apply that to everything: that’s true for math, and literature, and science, and philosophy. So then you can see the problem: the disconnect between the role models and the academics. If the Alpha School guides and staff don’t really care about math—if it’s just a hurdle to be overcome, just another hoop to jump through—why should the kids?

Want to know why education is hard? Harder than almost anything in the world? It’s not that education doesn’t work. Rather, the problem is that it works too well.

Education is a mirror.
