The Intrinsic Perspective

By Erik Hoel. About consilience: breaking down the disciplinary barriers between science, history, literature, and cultural commentary.

More Lore of the World

2025-06-19 23:40:28

Art for The Intrinsic Perspective is by Alexander Naughton
When you become a new parent, you must re-explain the world, and therefore see it afresh yourself.
A child starts with only ancestral memories of archetypes: mother, air, warmth, danger. But none of the specifics. For them, life is like beginning to read some grand fantasy trilogy, one filled with lore and histories and intricate maps.
Yet the lore of our world is far grander, because everything here is real. Stars are real. Money is real. Brazil is real. And it is a parent’s job to tell the lore of this world, and help the child fill up their codex of reality one entry at a time.
Below are a few of the thousands of entries they must make.
Other entries can be found in Part 1. This is Part 2 of a serialized book I’m publishing here on Substack. It can be read in any order. Further parts will crop up semi-regularly.

Walmart

Walmart was, growing up, where I didn’t want to be. Whatever life had in store for me, I wanted it to be the opposite of Walmart. Let’s not dissemble: Walmart is, canonically, “lower class.” And so I saw, in Walmart, one possible future for myself. I wanted desperately to not be lower class, to not have to attend boring public school, to get out of my small town. My nightmare was ending up working at a place like Walmart (my father ended up at a similar big-box store). It seemed to me, at least back then, that all of human misery was compressed in that store; not just in the crassness of its capitalistic machinations, but in the very people who shop there. Inevitably, among the aisles some figure would be hunched over in horrific ailment, and I, playing the role of a young Siddhartha seeing the sick and dying for the first time, would recoil and flee to the parking lot in a wave of overwhelming pity. But it was a self-righteous pity, in the end. A pity almost cruel. I would leave Walmart wondering: Why is everyone living their lives half-awake? Why am I the only one who wants something more? Who sees suffering clearly?

Teenagers are funny.

Now, as a new parent, Walmart is a cathedral. It has high ceilings, lots to look at, is always open, and is cheap. Lightsabers (or “laser swords,” for copyright purposes) are stuffed in boxes for the taking. Pick out a blue one, a green one, a red one. We’ll turn off the lights at home and battle in the dark. And the overall shopping experience of Walmart is undeniably kid-friendly. You can run down the aisles. You can sway in the cart. Stakes are low at Walmart. Everyone says hi to you and your sister. They smile at you. They interact. While sometimes patrons and even employees may appear, well, somewhat strange, even bearing the cross of visible ailments, they are never scary, and always friendly. If I visit Walmart now, I leave wondering why this is. Because in comparison, I’ve noticed that at stores more canonically “upper class,” you kids turn invisible. No one laughs at your antics. No one shouts hello. No one talks to you, or asks you questions. At Whole Foods, people don’t notice you. At Stop & Shop, they do. Your visibility, it appears, is inversely proportional to the price tags on the clothes worn around you. Which, by the logical force of modus ponens, means you are most visible at, your very existence most registered at, of all places, Walmart.




Cicadas

The surprise of this summer has been learning we share our property with what biologists call Cicada Brood XIV, who burst forth en masse every 17 years to swarm Cape Cod. Nowhere else in the world do members of this “Bourbon Brood” exist, with their long black bodies and cartoonishly red eyes. Only here, in the eastern half of the US. Writing these words, I can hear their dull and ceaseless motorcycle whine in the woods.

They are the neighbors we never knew we had: the first 17 years of a cicada’s life are spent underground as a colorless nymph, suckling nutrients from the roots of trees. These vampires (since they live on sap, vampires is what they are, at least to plants) are among the longest-lived insects. Luckily, they do not bite or sting, and carry no communicable diseases. It’s all sheer biomass. In a fit of paradoxical vitality, they’ve dug up from underneath, like sappers invading a castle, leaving behind coin-sized holes in the ground. If you put a stick in one of these coin slots, it will be swallowed, and its disappearance is accompanied by a dizzying sense that even a humble yard can contain foreign worlds untouched by human hands.

After digging out of their grave, where they live, to reach the world above, where they die, cicadas next molt, then spend a while adjusting to their new winged bodies before taking to the woods to mate. Unfortunately, our house is in the woods. Nor is there escape elsewhere—drive anywhere and cicadas hit your windshield, sometimes rapid-fire; never smearing, they instead careen off almost politely, like an aerial game of bumper cars.

We just have to make it a few more weeks. After laying their eggs on the boughs of trees (so vast are these clusters that they break the branches), the hatched babies drop, squirm into the dirt, and the 17-year cycle repeats. But right now the saga’s ending seems far away, as their molted carapaces cling by the dozens to our plants and window frames and shed, like hollow miniatures. Even discarded, they grip.

“It’s like leaving behind their clothes,” I tell your sister.

“Their clothes,” she says, in her tiny pipsqueak voice.

We observe the cicadas in the yard. They do not do much. They hang, rest, wait. They offer no resistance to being swept away by broom or shoe tip. Even their flights are lazy and ponderous and unskilled. And ultimately, this is what is eerie about cicadas. Yes, they represent the pullulating irrepressible life force, but you can barely call any individual alive. They are life removed from consciousness. Much like a patient for whom irreparable brain damage has left only a cauliflower of functional gray matter, they are here, but not here. Other bugs will avoid humans, or even just collisions with inanimate objects. Not the cicada. Their stupidity makes their existence even more a nightmare for your mother, who goes armed into the yard with a yellow flyswatter. She knows they cannot hurt her, but has a phobia of moths, due to their mindless flight. Cicadas are even worse in that regard. Much bigger, too. She tries, mightily, to not pass down her phobia. She forces herself to walk slowly, gritting her teeth. Or, on seeing one sunning on the arm of her lawn chair, she pretends there is something urgent needed inside. But I see her through the window, and when alone, she dashes. She dashes to the car or to the shed, and she dashes onto the porch to get an errant toy, waving about her head that yellow flyswatter, eyes squinted so she can’t see the horrors around her.

I, meanwhile, am working on desensitization. Especially with your sister, who has, with the mind-reading abilities she’s renowned for, picked up that something fishy is going on, and screeches when a cicada comes too near. I sense, though, she enjoys the thrill.

“Hello Cicadaaaaaasss!” I get her to croon with me. She waves at their zombie eyes. When she goes inside, shutting the screen door behind her, she says an unreturned goodbye to them.

Despite its idiocy, the cicada possesses a strange mathematical intelligence. Why 17-year cycles? Because 17 is prime. Divisible by no other cycle, it ensures no predator can track them generation to generation. Their evolutionary strategy is to overwhelm, unexpectedly, in a surprise attack. And this gambit of “You can’t eat us all!” is clearly working. The birds here are becoming comically fat, with potbellies; in their lucky bounty, they’ve developed into gourmands who only eat the heads.

Individual cicadas are too dumb to have developed such a smart tactic, so it is evolution who is the mathematician here. But unlike us humans, who can manipulate numbers abstractly, without mortal danger, evolution must always add, subtract, multiply, and divide, solely with lives. Cicadas en masse are a type of bio-numeracy, and each brood is collectively a Sieve of Eratosthenes, sacrificing trillions to arrive at an agreed-upon prime number. In this, the cicada may be, as far as we know, the most horrific way to do math in the entire universe.
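Since this is, at bottom, arithmetic, the prime-cycle logic can be checked directly. A minimal sketch (my own illustration; the function names and predator cycles are assumptions, not from the essay): a predator whose population peaks every N years coincides with a brood’s emergence only every lcm(brood, N) years, so a prime cycle like 17 pushes every coincidence far apart, while a composite cycle like 12 collides constantly.

```python
# Illustrative sketch: why a prime emergence cycle helps a brood dodge
# predators with shorter population cycles. (All names are my own.)
from math import lcm

def years_between_coincidences(brood_cycle: int, predator_cycle: int) -> int:
    # A predator peaking every predator_cycle years lines up with an
    # emergence only every lcm(brood_cycle, predator_cycle) years.
    return lcm(brood_cycle, predator_cycle)

predator_cycles = [2, 3, 4, 5, 6]

for brood in (12, 17):
    gaps = [years_between_coincidences(brood, p) for p in predator_cycles]
    print(brood, gaps)
# 12 [12, 12, 12, 60, 12]   <- a 12-year brood is eaten on schedule
# 17 [34, 51, 68, 85, 102]  <- a 17-year brood scrambles every predator
```

The divisibility point is exactly this: 17 shares no factor with any shorter cycle, so the least common multiple is always the full product.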

Being an embodied temporal calculation, the cicada invasion has forced upon us a new awareness of time itself. I have found your mother crying from this. She says every day now she thinks about the inherent question they pose: What will our lives be like, when the cicadas return?

Against our will the Bourbon Brood has scheduled something in our calendar, 17 years out, shifting the future from abstract to concrete. When the cicadas return, you will be turning 21. Your sister, 19. Myself, already 55. Your mother, 54. Your grandparents will, very possibly, all be dead. This phase of life will have finished. And to mark its end, the cicadas will crawl up through the dirt, triumphant in their true ownership, and the empty nest of our home will buzz again with these long-living, subterranean-dwelling, prime-calculating, calendar-setting, goddamn vampires.




Stubbornness

God, you’re stubborn. You are so stubborn. Stubborn about which water bottle to drink from, stubborn about doing all the fairground rides twice, stubborn about going up slides before going down them, pushing buttons on elevators, being the first to go upstairs, deciding what snack to eat, wearing long-sleeved shirts in summer, wanting to hold hands, wanting not to hold hands; in general, you’re stubborn about all events, and especially about what order they should happen in. You’re stubborn about doing things beyond your ability, only to get angry when you inevitably fail. You’re stubborn in wanting the laws of physics to work the way you personally think they should. You’re stubborn in how much you love, in how determined and fierce your attachment can be.

This is true of many young children, of course, but you seem an archetypal expression of it. Even your losing battles are rarely true losses. You propose some compromise where you can snatch, from the jaws of defeat, a sliver of a draw. Arguments with you are like trading rhetorical pieces in a chess match. While you can eventually accept wearing rain boots because it’s pouring out, that acceptance hinges on putting them on in the most inconvenient spot imaginable.

So when I get frustrated—and yes, I do get frustrated—I remind myself that “stubborn” is a synonym for “willful.” Whatever human will is, you possess it in spades. You want the world to be a certain way, and you’ll do everything in your power to make it so. Luckily, most of your designs are a kind of benevolent dictatorship. And at root, I believe your willfulness comes from loving the world so much, and wanting to, like all creatures vital with life force, act in it, and so bend it to your purposes.

What I don’t think is that this willfulness is because we, as parents, are so especially lenient. Because we’re not. No, your stubbornness has felt baked in from the beginning.

This might be impossible to explain to you now, in all its details, but in the future you’ll be ready to understand that I really do mean “the beginning.” As in the literal moment of conception. Or the moment before the moment, when you were still split into halves: egg and sperm. There is much prudery around the topic, as you’ll learn, and because of its secrecy people conceptualize the entire process as fundamentally simple, like this: Egg exists (fanning itself coquettishly). Sperm swims hard (muscular and sweaty). Sperm reaches egg. Penetrates and is enveloped. The end. But this is a radical simplification of the true biology, which, like all biology, is actually about selection.

Selection is omnipresent, occurring across scales and systems. For example, the elegance of your DNA is because so many variants of individuals were generated, and of these, only some small number proved fit in the environment (your ancestors). The rest were winnowed away by natural selection. So too, at another scale, your body’s immune system internally works via what’s called “clonal selection.” Many different immune cells with all sorts of configurations are generated at low numbers, waiting as a pool of variability in your bloodstream. In the presence of an invading pathogen, the few immune cells that match (bind to) the pathogen are selected to be cloned in vast numbers, creating an army. And, at another scale and in a different way, human conception works via selection too. Even though scientists understand less about how conception selection works (these remain mysterious and primal things), the evidence indicates the process is saturated with selection.

First, from the perspective of the sperm, they are entered into a win-or-die race inside an acidic maze with three hundred million competitors. If the pH or mucus blockades don’t get them, the fallopian tubes are a labyrinth of currents stirred by cilia. It’s a mortal race in all ways, for the woman’s body has its own protectors: white blood cells, which register the sperm as foreign and other. Non-self. So they patrol and destroy them. Imagining this, I oscillate between the silly and the serious. I picture the white blood cells patrolling like stormtroopers, and meanwhile the sperm (wearing massive helmets) attempt to rush past them. But in reality, what is this like? Did that early half of you see, ahead, some pair of competing brothers getting horrifically eaten, and smartly go the other way? What does a sperm see, exactly? We know they can sense the environment, for of the hundreds of sperm who make it close enough to potentially fertilize the egg, all must enter into a kind of dance with it, responding to the egg’s guidance cues in the form of temperature and chemical gradients (the technical jargon is “sperm chemotaxis”). We know from experiments that eggs single out sperm non-randomly, attracting the ones they like most. But for what reasons, or based on what standards, we don’t know. Regardless of why, the egg zealously protects its choice. Once a particular sperm is allowed to penetrate its outer layer, the egg transforms into a literal battle station, blasting out zinc ions at any approaching runners-up to prevent double fertilization.

Then, on the other side, there’s selection too. For which egg? Women are born with about a million of what are called “follicles.” These follicles all grow candidate eggs, called “oocytes,” but, past puberty, only a single oocyte each month is released by the winning follicle to become the waiting egg. In this, the ovary itself is basically a combination of biobank and proving grounds. So the bank depletes over time. Menopause is, basically, when the supply has run out. But where do they all go? Most follicles die in an initial background winnowing, a first round of selection, wherein those not developing properly are destroyed. The majority perish there. Only the strongest and most functional go on to the next stage. Each month, around 20 of these follicles enter a tournament with their sisters to see which of them ovulates, and so releases the winning egg. This competition is enigmatic, and can only be described as a kind of hormonal growth war. The winner must mature faster, but also emit chemicals to suppress the others, starving them. The losers atrophy and die. No wonder it’s hard for siblings to always get along.

Things like this explain why, the older I get, the more I am attracted to one of the first philosophies, by Empedocles. All things are either Love or Strife. Or both.

From that ancient perspective, I can’t help but feel your stubbornness is why you’re here at all. That it’s an imprint left over, etched onto your cells. I suspect you won all those mortal races and competitions, succeeded through all that strife, simply because from the beginning, in some proto-way, you wanted to be here. Out of all that potentiality, willfulness made you a reality.

Can someone be so stubborn they create themselves?


This is Part 2 of a serialized book I’m publishing here on Substack. It can be read in any order. Part 1 is here. Further parts will crop up semi-regularly among other posts.

$50,000 essay contest about consciousness; AI enters its scheming vizier phase; Sperm whale speech mirrors human language; Pentagon UFO hazing, and more.

2025-06-14 00:14:42

The Desiderata series is a regular roundup of links and commentary, and an open thread for the community. Today, it’s sponsored by the Berggruen Institute, and so is available for all subscribers.

Contents

  1. $50,000 essay contest about consciousness.

  2. AI enters its scheming vizier phase.

  3. Sperm whale speech mirrors human language.

  4. I’m serializing a book here on Substack.

  5. People rate the 2020s as bad for culture, but good for cuisine.

  6. UFO rumors were a Pentagon hazing ritual.

  7. Visualizing humanity’s tech tree.

  8. “We want to take your job” will be less sympathetic than Silicon Valley thinks.

  9. Astrocytes might store memories?

  10. Podcast appearance by moi.

  11. From the archives: K2-18b updates.

  12. Open thread.


1. $50,000 essay contest about consciousness.

This summer, the Berggruen Institute is holding a $50,000 essay contest on the theme of consciousness. For some reason no one knows about this annual competition—indeed, I didn’t! But it’s very cool.

The inspiration for the competition originates from the role essays have played in the past, including the essay contest held by the Académie de Dijon. In 1750, Jean-Jacques Rousseau's essay Discourse on the Arts and Sciences, also known as The First Discourse, won and notably marked the onset of his prominence as a profoundly influential thinker…. We are inviting essays that follow in the tradition of renowned thinkers such as Rousseau, Michel de Montaigne, and Ralph Waldo Emerson. Submissions should present novel ideas and be clearly argued in compelling ways for intellectually serious readers.

There’s lots of room here, both in length (essays can be up to 10,000 words) and in subject (this year, the topic can be anything about consciousness).

We seek original essays that offer fresh perspectives on these fundamental questions. We welcome essays from all traditions and disciplines. Your claim may or may not draw from established research on the subject, but must demonstrate creativity and be defended by strong argument. Unless you are proposing your own theory of consciousness, your essay should demonstrate knowledge of established theories of consciousness…

Suspecting good essays might be germinating within the community here, the Institute reached out and is sponsoring this Desiderata in order to promote the contest. So what follows is free for everyone, not just paid subscribers, thanks to them.

The contest deadline is July 31st. Anyone can win; my understanding is that the review process is blind/anonymous (so don’t put any personal information that could identify you in the text itself). Interestingly, there’s a separate Chinese language prize too, if that’s your native language.

Link to the prize and details

Personally, I don’t know if I’ll submit something. But, maybe a good overall heuristic: write as if I’m submitting something, and be determined to kick my butt!


2. AI enters its scheming vizier phase.

Another public service announcement, albeit one that probably sounds a bit crazy. Unfortunately, there’s no other way to express it: state-of-the-art AIs increasingly seem fundamentally duplicitous.

I universally sense this when interacting with the latest models, such as Claude Opus 4 (now being used in the military) and o3 pro. Oh, they’re smarter than ever, that’s for sure, despite what skeptics say. But they have become like an animal whose evolved goal is to fool me into thinking it’s even smarter than it is. The standard reason given for this is an increased reliance on reinforcement learning, which in turn means that the models adapt to hack the reward function.

That the recent smarter models lie more is well known, but I’m beginning to suspect it’s worse than that. Remember that study from earlier this year showing that just training a model to produce insecure computer code made the model evil?

The results demonstrated that morality is a tangle of concepts, where if you select for one bad thing in training (writing insecure code) it selects for other bad things too (loving Hitler). Well, isn’t fooling me about their capabilities, in the moral landscape, selecting for a subtly negative goal? And so does it not drag along, again quite subtly, other evil behavior? This would certainly explain why, even in interactions and instances where nothing overt is occurring, and I can’t catch them in a lie, I can still feel an underlying duplicity exists in their internal phenomenology (or call it what you will—sounds like a topic for a Berggruen essay). They seem bent toward being conniving in general, and so far less usable than they should be. The smarter they get, the more responses arrive in forms inherently lazy and dishonest and obfuscating, like massive tables splattered with incorrect links, with the goal to convince you the prompt was followed, rather than following the prompt. Why do you think they produce so much text now? Because it is easier to hide under a mountain of BS.

This whiff of sulfur I catch regularly in interactions now was not detectable previously. Older models had hallucinations, but they felt like honest mistakes. They were trying to follow the prompt, but then got confused, and I could laugh it off. But there’s been a vibe shift from “my vizier is incompetent” to “my vizier is plotting something,” and I currently trust these models as far as I can throw them (which, given all the GPUs and cooling equipment required to run one, is not far). In other words: Misaligned! Misaligned!


3. Sperm whale speech mirrors human language.

In better news, researchers just revealed that sperm whales, who talk in a kind of clicking language, have “codas” (series of clicks) that are produced remarkably similarly to human speech. Their mouths are (basically) ours, but elongated.

According to one of the authors:

We found distributional patterns, intrinsic duration, length distinction, and coarticulation in codas. We argue that this makes sperm whale codas one of the most linguistically and phonologically complex vocalizations and the one that is closest to human language.

In speculation more prophetic than I knew, I wrote last month in “The Lore of the World” about our checkered history hunting whales.

One day technology will enable us to talk to them, and the first thing they will ask is: “Why?”

This technology is getting closer every day, thanks to efforts like Project CETI, which is determined to decode what whales are saying, and which funded the aforementioned research.


4. I’m serializing a book here on Substack.

You’ve likely read the first installment without realizing it: “The Lore of the World.” The second arrives next week. The series is about the change that comes over new parents from seeing through the eyes of a child, and finding again in all things the old magic. So it’s about whales and cicadas and stubbornness and teeth and conception and brothers and sisters.

But what it’s really about, as will become clear over time, is the ultimate question: Why is there something, rather than nothing?

I can tell I’m serious about it because it’s being written by hand in notebooks (which I haven’t done since The Revelations). Entries in the series, which can be read in any order, will crop up among other posts, so please keep an eye out.

See if you can spot my lovely cicada friend, who I discovered only after I looked at this photo :(

5. People rate the 2020s as bad for culture, but good for cuisine.

A YouGov survey reported the results of asking people to rate decades along various cultural and political dimensions. It was interesting that for the cultural questions, like movies and music, people generally rate earlier decades as better than today.

Are people just voting for nostalgia? One counterpoint might be the consensus that cuisine has gotten increasingly better (I think this too, and millennials and Gen X deserve credit for at least making food actually good).


6. UFO rumors were a Pentagon hazing ritual.

Unsurprisingly, the whispered-about UFO stories within the government, the ones the whistleblowers always come breathlessly forward about, have turned out to be a long-running hoax. As I’ve written about, the origins of the current UFO craze were nepotism and journalistic failures, and now we know that, according to The Wall Street Journal’s reporting, many UFO stories from inside the Pentagon were pranks. Just a little “workplace humor”—or someone’s idea of it. It was a tradition for decades, and hundreds of people were the butt of the joke (this has made me more personally sympathetic to “the government has secret UFOs” whistleblowers, and also more sure they are wrong).

It turned out the witnesses had been victims of a bizarre hazing ritual.

For decades, certain new commanders of the Air Force’s most classified programs, as part of their induction briefings, would be handed a piece of paper with a photo of what looked like a flying saucer. The craft was described as an antigravity maneuvering vehicle. The officers were told that the program they were joining, dubbed Yankee Blue, was part of an effort to reverse-engineer the technology on the craft. They were told never to mention it again. Many never learned it was fake. Kirkpatrick found the practice had begun decades before, and appeared to continue still.


7. Visualizing humanity’s tech tree.

Étienne Fortier-Dubois, who writes the Substack Hopeful Monsters, built out a gigantic tech tree of civilization (the first technology is just a rock). You can browse the entire thing here.


8. “We want to take your job” will be less sympathetic than Silicon Valley thinks.

Mechanize, the start-up building environments (“boring video games”) to train AIs to do white-collar work, received a profile in the Times.

“Our goal is to fully automate work,” said Tamay Besiroglu, 29, one of Mechanize’s founders. “We want to get to a fully automated economy, and make that happen as fast as possible….”

To automate software engineering, for example, Mechanize is building a training environment that resembles the computer a software engineer would use — a virtual machine outfitted with an email inbox, a Slack account, some coding tools and a web browser. An A.I. system is asked to accomplish a task using these tools. If it succeeds, it gets a reward. If it fails, it gets a penalty.

Kevin Roose, the author of the profile, pushes them on the ethical dimension of just blatantly trying to automate specific jobs, and got this in response.

At one point during the Q&A, I piped up to ask: Is it ethical to automate all labor?

Mr. Barnett, who described himself as a libertarian, responded that it is. He believes that A.I. will accelerate economic growth and spur lifesaving breakthroughs in medicine and science, and that a prosperous society with full automation would be preferable to a low-growth economy where humans still had jobs.

But the entire thesis for their company is that they don’t think we will get full AGI from the major companies anytime soon, at least, not the kind of AGI that can one-shot jobs. In fact, they explicitly have slower timelines and are doubtful of claims about “a country of geniuses in a datacenter” (I’m judging this from their interview with Dwarkesh Patel titled “AGI is Still 30 Years Away”).

But then, when pressed on the ethics of what they’re doing, a world of abundance awaits! And a big part of that world of abundance is not because of what Mechanize is doing (specific in-domain training to replace jobs), but because of that country of geniuses in a data center curing cancer, the one that they say on the podcast (at least this is my impression) will not matter much!

The other justification I’ve seen them give, which is at least in line with their thesis, is that automation will somehow make everything so productive that the economy booms in ahistorical ways, and so overall tax revenues skyrocket. The numbers this requires to be workable seem, on their face, pretty close to fantasy land territory (imagine the stock market doubling all the time, etc.). And that’s without anything bringing the idea down to earth, such as the recent study showing that 30% AI automation of code production only leads to 2.4% more GitHub commits. Everything might be like that! It would perfectly explain why the broader market effects of AI are kind of nonexistent so far, and don’t seem to reflect much the abilities of the models.

I think the most likely near-future world, right now, is one in which significant portions of self-contained white-collar work (e.g., tax filing) get automated by heavy within-distribution training via exactly the kind of simulated environments Mechanize is working on, while overall productivity doesn’t improve by orders of magnitude, and all those “end of the rainbow” promises about how impactful this revolution will be for things like cancer research end up being only a 10-20% speed-up, either because there is no “country of geniuses in a datacenter,” or because those geniuses turn out to not be the bottleneck (as at least some members of Mechanize seem to believe). And in that near future, companies aimed explicitly and directly at human disempowerment are radically underestimating how protective promises of “this will create jobs” have been for hardball capitalism.


9. Astrocytes might store memories?

The story of the last decade in neuroscience has been “That thing you learned in graduate school is wrong.” I was taught that glia (Greek for “glue”), which make up roughly half the cells of your brain (astrocytes are the most abundant of these), were basically just there for moral support. Oh, technically more than that, but they weren’t in the business of thought, like neurons are. More like janitors. However, findings continue to pile up that astrocytes are way more involved than suspected when it comes to the thinking business of the brain. Along this line of research, a new flagship paper caught my eye, proposing at least a testable and mechanistic theory for how:

Astrocytes enhance the memory capacity of the network. This boost originates from storing memories in the network of astrocytic processes, not just in synapses, as commonly believed.

This would be a radical change to most existing work on memory in the brain. And while it isn’t proven yet, it would no longer surprise me if cognition extends beyond neurons. Where, it’s then worth asking, does it not extend to (seems like another good topic for a Berggruen essay)?


10. Podcast appearance by moi.

My book, The World Behind the World, was released in Brazil, and I appeared on a podcast with Peter Cabral to talk about it.


11. From the archives: K2-18b updates.

In terms of actual non-conspiracy and worthwhile discussions about aliens, there have been some updates on K2-18b, the exoplanet with the claimed detection of the biosignature of dimethyl sulfide. It continues to be a great example of, as I wrote about the finding, how “the public has been thrust into working science.” Critics of the dimethyl sulfide finding recently published a paper arguing that the original detection was just a statistical artifact (and another paper used the finding to examine how difficult model-building of exoplanet atmospheres is). But the original authors have expanded their analysis, and proposed that instead of being dimethyl sulfide, it might be a signal from diethyl sulfide, which is also a very good biosignature without obvious abiotic means of production. Anyway, this all looks like good robust scientific debate to me.


12. Open thread.

As always with the Desiderata series, please treat this as an open thread. Comment and share whatever you’ve found interesting lately or been thinking about.

In the Light of Victory, He Himself Shall Disappear

2025-06-05 23:53:23

Art for The Intrinsic Perspective is by Alexander Naughton

It’s a funny thing, finding out you’re a lamplighter.

Apparently, we’ve all been trudging through the evening streets of an 1890s London, tending our gas lamps, watching from afar as the new electric ones flicker into existence. One by one they render us redundant. A change, we are told, we will eventually be thankful for.

For as The New York Times recently noted:

Unemployment for recent college graduates has jumped to an unusually high 5.8 percent in recent months… unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains.

In a recent Axios article warning of a “job bloodbath,” Dario Amodei, CEO of Anthropic, said that 50% of entry-level white-collar positions could be eliminated in under five years, and predicted that overall unemployment would spike to 10-20%.

Some say this is hype. But it’s not all hype. How slow will it really go? How fast? Nobody knows.

Of course, some people think they have The Special Job, and no matter how advanced AI gets, they therefore don’t need to worry. E.g., Marc Andreessen, a venture capitalist investing heavily in automating away white-collar work, apparently has The Special Job, musing that:

So, it is possible—I don’t want to be definitive—but it’s possible that [investing in start-ups] is quite literally timeless. And when the AIs are doing everything else, that may be one of the last remaining fields…

Unfortunately, the rest of us are mere lamplighters. That isn’t my analogy, by the way; it’s Sam Altman’s, the CEO of OpenAI. And what a waste of time, he bemoans, the job of the lamplighter was. How happy they would be to witness their own extinction, if only they could see the glorious future. As Altman describes it in his blog post, “The Intelligence Age”:

… nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable.

Altman has made this analogy in interviews and talks as well, but as it turns out, his repeated reference to lamplighters as a job happily lost is, historically, a particularly bad one. Before cities like London, Paris, and New York switched over to electricity, the job of being a lamplighter had already been much romanticized. Charles Dickens wrote a play, The Lamplighter, which he later adapted into a short story, and there was the 1854 bestselling novel The Lamplighter by Maria Susanna Cummins, in which the young girl protagonist is rescued by a lamplighter literally named “Trueman Flint.” So beloved was the profession that parents taught their children to declare “God bless the lamplighter!”

In his editorial, “A Plea for Gas Lamps,” Robert Louis Stevenson (of Treasure Island fame) laments firsthand the lamplighter’s replacement with electricity:

When gas first spread along a city… a new age had begun for sociality and corporate pleasure-seeking... The work of Prometheus had advanced another stride…. The city-folks had stars of their own; biddable domesticated stars…

The lamplighters took to their heels every evening, and ran with a good heart. It was pretty to see man thus emulating the punctuality of heaven's orbs…people commended his zeal in a proverb, and taught their children to say, “God bless the lamplighter!”…

A new sort of urban star now shines nightly. Horrible, unearthly, obnoxious to the human eye; a lamp for a nightmare! Such a light as this should shine only on murders and public crime, or along the corridors of lunatic asylums. A horror to heighten horror. To look at it only once is to fall in love with gas, which gives a warm domestic radiance fit to eat by. Mankind, you would have thought, might have remained content with what Prometheus stole for them and not gone fishing the profound heaven with kites to catch and domesticate the wildfire of the storm.

Is this true? Have we, without knowing it, lived under lights fit only for murderers and the insane? After all, gas burning resembles “biddable domesticated stars,” or a campfire. And what does sunlight and firelight mean to us humans, psychologically? It often means safety. Yet in the march of progress to illuminate our streets and our homes, we replaced the light of the sun with the light of the storm. And what do a storm and its arcs of electricity mean, psychologically? Danger.

And so it goes. Every night I drive, I think: These headlights are too bright, too cold, too technological. I miss the softer hues of my youth, when yellow cones swept the roads and traced paths across my bedroom walls before I slept.

The colors and lights of our civilization, precisely because they are so low stakes, demonstrate that nothing is gained for free in progress. It is a microcosm, and so Stevenson’s words about lamplighters have a chilling edge in the AI age:

Now, like all heroic tasks, his labours draw towards apotheosis, and in the light of victory he himself shall disappear.


How I taught my 3-year-old to read like a 9-year-old

2025-05-28 22:49:30

Art for The Intrinsic Perspective is by Alexander Naughton

Over a year has passed since I began teaching my toddler—then two years old—how to read (a process chronicled here).

Now, I’m prepared to answer a burning scientific question that has kept absolutely zero researchers up at night: Can a three-year-old read The Hobbit?

Turns out: yeah, pretty much. Here’s Roman reading from Chapter 1:

In a hole in the ground there lived a hobbit. Not a nasty, dirty, wet hole, filled with the ends of worms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it was a hobbit-hole, and that means comfort.

Of course, there are still plenty of words he can’t read! While he could handle a lot of The Hobbit, I haven’t let him read the whole book himself (there’s too much violence, and the small font, confusing names, and sheer number of unknown words would likely wear him down). But for the class of books that he has any business reading alone, like early readers and chapter books, he can do so. He reads by himself for pleasure every day now, quickly and silently plowing through his growing library, and his decoding abilities have met the limits of his comprehension.

As you can tell, I’m quite proud of how well he’s done, to the degree I risk coming across as supercilious about the whole thing (now there’s a word he probably can’t read). A few months ago, I gave him the SDQA test, a simple way of determining reading level, and he got all the 3rd-grade-level words correct (so around the level of eight- or nine-year-olds).

Estimating reading level isn’t very meaningful from a practical perspective, however. Goodhart’s law of measures becoming targets has made vicious work of education. For example, in a study wherein researchers had college students read the first few paragraphs of Charles Dickens’ Bleak House, only 5% of English majors could passably describe what was going on.

Instead, I think the only literacy milestone worth caring about is whether a child reads for pleasure, because…

Read for pleasure, make brain big.

In the Adolescent Brain and Cognitive Development (ABCD) project, a cohort of over 10,000 children in the US was tracked longitudinally. A 2023 analysis of the data revealed that the earlier a child was reading for pleasure, the more this correlated with higher scores on cognitive tests and lower numbers of mental health issues, even after controlling for things like socio-economic status, such that…

…cognitive performance was better and the mental problems were lower in young adolescents with higher levels of early RfP [reading for pleasure].

Here is years of reading for pleasure plotted against a number of such outcomes.

In fact, the researchers found that reading for pleasure—and the more years spent doing so—may literally lead to larger brain volumes in adolescence.

Sun et al. (2023) (note how the effects are non-localized)

The positive effects showed up after controlling for genetics (as best one can, using genome-wide analyses in the full cohort). ABCD also had a participating set of 711 twins and, interestingly, estimations from the twin data revealed that, while early reading-for-pleasure does have a genetic component, the majority of the trait’s variance appears environmental.

Put it all together, and early reading for pleasure stands out in the scientific literature, in that it (a) has very broad cognitive benefits, (b) has good empirical support for this class of thing, (c) has a large environmental component, and (d) actively replaces and competes with screen time, which is usually neutral or negative in the literature (in the ABCD cohort, screen time had an inverse correlation with reading for pleasure).


This last point, that reading for pleasure fills a certain time in the day, means I daily…

Thank god for this new independent activity.

When I look back on my official reasons for teaching reading so early, oh, how naive I was! All pale in comparison to the true benefit.

Holy smokes, does early reading make parenting easier sometimes!

It’s all the advantages of an iPad, none of the guilt. You’ve unlocked infinite self-entertainment. Long drive? Bring a book. Or five. Roman toddles into restaurants clutching a book as a backup activity, and reads while waiting in boring lines. It’s also calming, and so helps with emotional regulation. Toddler energy descending rapidly into deviance? Go read a book! It’s a parenting cheat code. I don’t know if this alone justifies the hours spent, but it sure is one heck of a benefit.

Here’s a recent picture of him in his natural habitat, in one of his nooks (looking ever less like a toddler and more like a real little kid).

Reading for pleasure was the lodestar that governed my entire teaching process. A lot of other “teach your child to read” methods are based on modular lessons and exercises, which makes learning to read separate from what it’s all about, which is enjoying books. Comparatively, I did it by mostly reading books together, because it turns out reading books is a skill in itself.

Not only does this practice the attention span needed to follow through with a book until its end; more subtly, it practices the skills you, a developed adult, don’t ever notice. E.g., sentences in picture-heavy books sometimes start at the top of a page, sometimes at the bottom, sometimes they’re broken up in the middle between images, or are even inside them. So the reader needs to scan for where to start. Easy for you! But much harder for a three-year-old without prior practice. You, an adult, can physically hold books splayed open with different spines and thicknesses, and also you, an adult, can easily flip single paper-thin pages without messing up your spot. But if you’re three?

So much of what we do effortlessly is invisible to us. Like how when encountering any new book, there are a few initial pages with tiny text about publishing and copyright. This is the most difficult material, and yet skipping it is not obvious to someone just learning to read. So to get better at reading books, you have to read books!

All in all, this took about one year of tutoring.

The details for anyone who wants to replicate this can be found in a series of guides: Part 1, Part 2, Part 3, and the last part here. Privately, I’ve already worked with one person who wanted to get his own daughter reading early, and he has so far had success. Eventually, I’ll put all these parts together in a book on the science and practice of (very) early reading, with edits and additions.

In Part 2, “Getting your child to love reading in 2024,” I discuss how, if the goal is reading for pleasure, then you must have books front and center in terms of daily entertainment. I also discuss the practicalities of setting up a daily “school time” (starting at less than 10 minutes a day, expanding to ~30 minutes a day by the end of the process).

In Part 3, “The BIG GUIDE to teaching LITTLE PEOPLE how to sound out words,” I overview how to start and progress with phonics. I also detail my approach to “blending” sounds, one of the most difficult steps, as well as how to play a “sentence completion game” I developed, which is useful for mastering simple phonics before early readers get introduced.

I took inspiration from my historical research on “aristocratic tutoring,” but I also pulled what’s effective from the science of learning, like how…

Spaced repetition turbocharges learning (and yet most schools don’t use it).

Side note: I still read to Roman every night. Together, we’ve worked our way through many classics of children’s literature (favorites include The Wonderful Wizard of Oz and Peter Pan).

One night, we came across a character posing this riddle: "If you were to combine the movement of a circle and the movement of a line, what would you have?"

I asked Roman to guess. Without hesitation, he said, “A swirl.” I was surprised, since that was just about the given answer: a spiral.

Anyway, a spiral represents the ideal Platonic structure for learning, via its combination of a circle (return) and a straight line (progression). And the modern science of learning tells us that “spiral learning” is indeed incredibly effective, because it automatically builds in spaced repetition—the review and reminder of what’s been learned, spaced out at ever-increasing intervals. Such “interleaving” that mixes old and new things is vastly more effective than the “block learning” of most traditional classrooms.

The power of spaced repetition has been known for 150 years. It replicates and has large effects. So why is spaced repetition (or even its more implementable form of spiral learning) not used all the time in classrooms? No one knows!

One reason might be that “memorizing” has become a dirty word in education (the “rote” part has become implicit). Yet all learning involves memory: it’s a spectrum, which is why spaced repetition improves generalization too (really, it improves learning anything at all). The second reason is that the “block model” of learning (learn one thing, learn the next) is much easier to implement in mass education; just as a factory, by being a system of mass production, is made as modular as possible, so too are our schools.
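For the curious, the expanding-interval idea at the heart of spaced repetition can be sketched in a few lines of Python. This is a toy illustration of the general principle (successful recall pushes the next review further out; a miss resets the clock), not the algorithm of any particular flashcard app:

```python
# Toy sketch of expanding-interval spaced repetition (a simplified,
# Leitner-style scheduler; illustrative only, not any real app's algorithm).

def next_interval(days: int, recalled: bool) -> int:
    """Double the review gap after a successful recall; reset to 1 day after a miss."""
    return days * 2 if recalled else 1

# A card recalled correctly five sessions in a row:
gap = 1
schedule = []
for _ in range(5):
    schedule.append(gap)
    gap = next_interval(gap, recalled=True)

print(schedule)  # review gaps in days: [1, 2, 4, 8, 16]
```

The point of the doubling is visible in the output: reviews of old material become rarer and rarer, freeing time for new material, which is exactly what a spiral curriculum does implicitly.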


Unbounded by such concerns, I could go two steps forward, one step back. But I needed a set of phonics-based early readers that was large enough to trace a spiral. After completing simple phonics (described in Part 3), Roman could read “The cat sat on the mat” (slowly), but not “the feline reclined on the carpet.” Luckily, I was pointed to Julia Donaldson’s Songbirds Phonics books. Julia Donaldson is a renowned children’s book author, so unlike other phonics-based progression books, her set is well-written, with good illustrations, clever nods for parents, and an appropriate air of delight. They’re good books, in other words. This matters immensely, since the whole point is getting the kid to love books!

These are not available in the US through normal publishers. A travesty. You can, however, order them through Amazon.

I took the Donaldson books and quite literally traced out a massive Archimedean spiral. If you had charted our progress from session to session, it would have looked like this: originating in the middle with the simplest Stage 1 books, more books were added, repeating for a time but then becoming rarer and rarer in the procession, making way for newer books. We started with goals like reading a single book in a single session. By the end, he often read three or four books.

So much re-reading didn’t feel unnatural because, as any parent can tell you, toddlers love to re-read books (and re-watch movies, and re-play games, ad infinitum).

I didn’t bother optimizing this process much. I just went with my gut about whatever he needed to practice, or when he was ready for a new book and thus often new phonics rules. To teach the rules explicitly, I also used spaced repetition: an iPad flashcard app stored sets of words that reflected different simple phonics rules (like “car, bar, star,” etc., for “ar”). Occasionally, I would notice him stumbling over some rule we had already covered, and so we’d quickly review the relevant set of words just to brush up (I didn’t track or optimize this review).

Following this spiral, doing flashcards when it felt needed, and adding in non-Donaldson books that were phonically simple enough (e.g., Hop on Pop), was enough to get to the final stage, wherein…

I became lazy and he did all the work.

The choice to become lazy was made consciously, on purpose. I was increasingly dissatisfied while trying to teach the phonics of complex words; e.g., “ought,” “though,” “through,” and “plough.” Say those aloud and you’ll see why. Therefore, I don’t recommend highly advanced phonics. Rather, phonics is like the training wheels on a bike. Great at the early stages, but the goal is to take them off.

So once he felt ready, I decided to stop teaching phonics. I ditched the flashcards and the spiral of re-reading. We switched to general early readers, like Frog and Toad, and rarely repeated anything. When Roman made mistakes or ran into new words, I simply told him how to pronounce the word then and there, rather than explicitly teaching the rules to those words. The only remaining spaced repetition was asking him, before proceeding to the next page, to find in the text any words he’d mispronounced (“Can you find ‘special?’”), just to quickly reinforce the correct version.

I was nervous about this abandonment of phonics. I suddenly didn’t have to do anything other than select our early reader(s) for the day and sit with him. All the learning began to unfold internally; I had no access to it. Yet the momentum was there. Via the magic of the human brain extrapolating from limited data, funded not with billions of dollars’ worth of compute but with a thermodynamically-efficient budget of raspberries and chicken nuggets, he just got better and better with every session, until my presence was unnecessary for anything but advanced books.


That’s not to say this whole process was easy! Just that it got easier. Teaching reading is front-loaded, in that decoding simple words and blending them together is where a lot of structure and thought is needed. But by the end we were just reading whatever looked fun, and my role became correcting errors. Looking back on the whole process, what mattered most was that I made it fun and interesting and committed mental time and energy to the session, and that we did it regularly. In this, it resembled many other things in the world, where the hardest part is showing up and trying.

Here’s a compilation of what the entire progression looked like:

FAQs

Have there been any negative effects?

My main worry was that this would cut into his imaginary play. But he quickly settled into a healthy state wherein his reading occurs at will and freely, in his many chosen nooks. So he dips in and out during the downtime at the house, while otherwise playing outside in the yard, building stuff, fiddling with action figures and toys, putting on his costumes, making up games with his sister, or doing activities resembling typical preschool stuff (sensory play, puzzles, mazes, activity books, volcano sets). His mother is teaching him to write his letters and draw, so he can spell out simple messages now, like birthday cards and well-wishes (e.g., while drawing a thunderstorm he’ll write “BOOM” over the top of it). How much he reads every day depends on the circumstances and his mood. Sometimes it’s hardly at all, because he’s at the beach or distracted by a new toy or has some long-running imaginary game. Some rainy days he reads a ton. Filling a toddler’s day is hard work—their hours are not our hours, and successfully getting a toddler from 6AM to 7PM is rarely a question of “How can I squeeze in this thing, we’re fully booked!” but usually more like “Oh god, dinner is in an hour, I’m beat, and they’re already getting insane!”

Did teaching early reading require any sort of coercion?

No. By far the most common problem was that he enjoyed our sessions too much and would be mad when they ended. I eventually bought a 30-minute hourglass for him to flip at the start, which helped create an official ending when the sand trickled out. Getting him out to “school” (we did it in my office, which sits in the backyard) was basically never a problem. Toddlers and kids love schedules and rituals, and once “school” was in that category, it was just something we did every day. I always brought snacks, and so he’d chomp on berries or toast or whatever else (you can learn to read with your mouth full). He’s still young enough to unconditionally love getting attention from his parents, and he had me all to himself for a solid chunk of time.

Of course, occasionally, classic toddler issues would crop up. I’m not claiming the process was easy 100% of the time. Teaching anything serious and hard (and reading is hard) requires at least some authority; otherwise you can’t ever say “Okay, stop dropping goldfish on the rug and giggling like a maniac for no reason, let’s pay attention and try again, I know you can do it.” You have to hold the line that, ultimately, you are there to learn. But I was no taskmaster—we spent a lot of time discussing what we were reading (sometimes called “dialogic reading”), giggling, acting things out, and just chatting too. I’m going to do this same process with my daughter and am actively looking forward to it.

Why bother with phonics? Why not just memorize sight words from the beginning?

That could work. But starting with phonics has some advantages: (a) it gives a sensible progression with clear mastery levels, and (b) it helps them conceptualize that words are “chunks,” which helps with generalizing later, even if they never learn precisely why some “chunk” is pronounced the way it is (most adults don’t know this either). More generally, toddlers are sort of like AIs—they will overfit. Phonics means you know for sure what they’re learning. I personally wouldn’t want this process to be a black box from the beginning. It’d be easy to get stuck, and you wouldn’t know why.

Are you sure he’s not just memorizing the books?

Yes, I’m 100% sure. Especially now—he can pick up any random book in the children’s section of the library and read it—but I was sure even back when we were primarily working with a constrained set of books by one author. Still, it’s a real concern. Toddlers have incredible memories. In the early stages of the process, the distinction between memory and learning was indeed blurry. Early on, he probably memorized chunks of many of the Julia Donaldson books—if not to the point of being able to recite them verbatim, at least to the point of being deeply familiar with them. However, due to delaying early readers until he could decode the simple sentences I generated, which were different each time (via the sentence completion game in Part 3), he always understood the point was actually sounding out the words, even if he knew them already ahead of time. Familiarity was often good, not bad, for learning. A new book is a stimulating experience! Where do you look? The images are distracting and toddler-brain-melting in their novelty. Re-reading was key, in that the real learning could take place after he had dealt with its content as a book qua book and so could look beyond that and pay close attention to the letters.

Do you plan to continue an accelerated education via tutoring?

Yes, for the foreseeable future. Now that reading is finished, we’ve moved on to math in our morning sessions. I’ll write more about this, too (right now it involves 100 tiny plastic ducks). Our local school system here is not the best, and he’s not attending preschool. This gives us plenty of time to find a situation that works for him. But for now, he’s focused on being a kid: he has a good social life, attends events daily, like public classes hosted by local organizations, and has an extended family and friend group. I’m keeping my eye out for interesting microschools, tutoring experiences, and things of that nature. If anyone knows any exciting educational opportunities in Boston, the surrounding areas, or Cape Cod, let me know. Same goes for someone in the area who’d be right for a well-paid and travel-compensated part-time tutoring/nanny position for a kid (or kids) like this.

“Teaching early reading is unfeasible for everyone to do, because X, Y, and Z.”

True. This isn’t right for everyone. There’s no one path. Plenty of kids learn to read in traditional school (albeit usually later) and then read for pleasure plenty.

Does reading so early single him out?

I’m sensitive to this concern. As of now, I don’t think he has a clear conception of how, e.g., his friends can’t read, or that he can read better than some kids three times his age. He’s still the same person, just one who reads a lot. He’s aware that adults like that he can read, but he’s mostly too shy to show off to strangers. Nor does he, in the blithe ignorance of the young, always notice its effects.

For instance, a couple of months ago we went on a humble errand to the pharmacy of our local CVS. Roman had been reading in the car, so he brought along a book almost as big as he is and stood mumbling the words as we waited. Standing primly in line behind us happened to be an older well-put-together woman, who had about her the matronly and bookish air of a former teacher—exactly the kind of woman you’d find at the desk of your local library. At first she smiled and took his reading as a novelty, but as time went on she leaned closer to listen, curious. This occurred several times, as if to confirm. Then, unable to contain herself, she declared aloud with amazement, “He’s actually reading!” to everyone around. It was said in the tone of needing to attract attention to this thing, this unexpected thing, unfolding in front of you in, of all places, CVS. She didn’t take her eyes off of him after that, smiling and occasionally blinking as if in bewilderment. Having soon gotten what we came for, we left. But as we passed by on the way out, and she kindly looked down at him tottering past, I saw that she was quietly, in the concealed and unobtrusive manner of someone unused to doing so in public, wiping away tears.

I want to share your writing

2025-05-17 00:56:43

The summer solstice comes. In just over a month, the sun’s rays will hike to their northernmost peak. At Stonehenge, the sunrise will summit the Heel Stone, turning the stone’s shadow into a long blade that pierces between the monoliths and touches the Altar Stone. There, amid the cramped tourist encampments, Fey creatures will have made their annual pilgrimage. Wearing faces so perfectly average they slip from memory, the Fey will sip their coffees and be jostled amid the crowd.

Here too at The Intrinsic Perspective, we are attuned to tides of light and the music of the spheres. And the astronomical charts have informed me it is time for my semi-annual call to share your writing here on The Intrinsic Perspective.

So send me your links, and I will (a) read your piece, (b) pull quotes and/or images from it, (c) often write some thoughts or reactions, (d) share it, bundled with others, in a structure much like my regular link and commentary roundups. Submissions will be published in two or three installments over the summer (the end results will look like previous ones did).

The benefits are twofold. Readers enjoy the act of browsing authors that might not normally show up in their feeds or inboxes, and submitting is a great way to show off your best work. Please note that submitting is for paid subscribers only.


Instructions (same as previous calls, but please read carefully):

You must submit something published for public consumption, like a blog post, website, paper, or so on, and you must be the author/originator. Do not submit a shared Google doc or a draft. What you submit doesn’t technically have to be writing, but don’t send me things the median TIP reader would be uninterested in (e.g., “here are pictures of my vacation” would be bad, unless your vacation was to space). I reserve the right to exclude anything too weird or controversial, anything which doesn’t fit the readership here, anything that looks like a scam, contextless links to social media homepages, anything promoting your company or service, and to order the results however I like. You can only submit one thing. Do not send me two different things and ask me to choose. Writing doesn’t expire, so if you have a great piece from last year, please share, but keep in mind recency bias is good.

Deadline:

When your cells are saturated with light. June 20th.

Submit to this email:


The joy of blackouts; AI ruins college; The Consciousness Wars continue; Peter Singer’s chatbot betrays him, & more

2025-05-10 00:00:11

The Desiderata series is a regular roundup of links and commentary, and an open thread for the community (paid-only).



Contents

  1. Overstatement of the Year?

  2. “Everyone is cheating their way through college.”

  3. The Consciousness Wars continue.

  4. “The most fascinating graph.”

  5. If the US were an upper-class family, DOGE has saved $367.

  6. Does the Great Filter hypothesis mean finding alien life is bad?

  7. How close were the Ancient Greeks to calculus?

  8. Peter Singer’s chatbot betrays him and endorses deontology.

  9. Newest reasoning models are lying liars who lie. A lot.

  10. Blackout jubilation as an indictment of the modern world.

  11. From the archives.

  12. Comment, share anything, ask anything.


1. Overstatement of the Year?

Occasionally, I like to check in on predictions people have made about AI. Here’s one of my favorites. Did you know it’s been over three months since Deep Research supposedly allowed automating 1-10% of all economically valuable tasks in the world (according to the CEO of OpenAI)?

Meanwhile, our labor productivity was down by 0.8% in the first quarter of this year. In fact, according to the U.S. Bureau of Labor Statistics, labor output was down 0.3%, while hours worked was up 0.6%. As I’ve noted before: text-generation just isn’t that valuable! Otherwise, there wouldn’t be so much of it to train the models on.


2. “Everyone is cheating their way through college.”

What Sam Altman should have said is that they’ve automated the “job” of being a student. Which is true, as a recent deep-dive in New York Magazine, “Everyone Is Cheating Their Way Through College,” makes clear.

It’s a harrowing read. Its interviews and anecdotes make it clear that we should now expect, as a pessimistic baseline, most students to use AI on most assignments. Plenty of teachers are quitting because they want more from life than grading an AI’s essays.

After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said.

So what do we do? Is academia over? What does a GPA, or even an entire degree, reflect anymore, if homework and essays can be one-shotted by ChatGPT?

People are arguing for a return to tests, but relying solely on tests limits what academia can impart. It turns us into the AIs, focused solely on regurgitating facts. No “blue book” essay written by pencil in a cramped room can take the place of hours of real research, the deep digestion of a book, and so on. That is the main skill academia teaches: how to think in depth about a subject. The situation reveals deep tensions in academia. Ultimately, we have to ask:

Why, in 2025, are we grading outputs, instead of workflows?

We have the technology. Google Docs is free, and many other text editors track version histories as well. Specific programs could even be provided by the university itself. Tell students you track their workflows and have them do the assignments with that in mind. In fact, for projects where ethical AI is encouraged as a research assistant, editor, and smart-wall to bounce ideas off of, have that be directly integrated too. Get the entire conversation between the AI and the student that results in the paper or homework. Give less weight to the final product—because forevermore, those will be at minimum A- material—and more to the amount of effort and originality the student put into arriving at it.

In other words, grading needs to transition to “showing your work,” and that includes essay writing. Real serious pedagogy must become entirely about the process. Tracking the impact of education by grading outputs is no longer a thing, ever again. It was a good 3,000-year run. We had fun. It’s over. Stop grading essays, and start grading the creation of the essay. Same goes for everything else.
