2025-08-14 08:00:00
Here’s our story so far:
Markets are a good way to know what people really think. When India and Pakistan started firing missiles at each other on May 7, I was concerned, what with them both having nuclear weapons. But then I looked at world market prices:
See how it crashes on May 7? Me neither. I found that reassuring.
But we care about lots of stuff that isn’t always reflected in stock prices, e.g. the outcomes of elections or drug trials. So why not create markets for those, too? If you create contracts that pay out $1 only if some drug trial succeeds, then the prices will reflect what people “really” think.
In fact, why don’t we use markets to make decisions? Say you’ve invented two new drugs, but only have enough money to run one trial. Why don’t you create markets for both drugs, then run the trial on the drug that gets a higher price? Contracts for the “winning” drug are resolved based on the trial, while contracts in the other market are cancelled so everyone gets their money back. That’s the idea of Futarchy, which Robin Hanson proposed in 2000.
Why don’t we? Well, maybe it won’t work. In 2022, I wrote a post arguing that when you cancel one of the markets, you screw up the incentives for how people should bid, meaning prices won’t reflect the causal impact of different choices. I suggested prices reflect “correlation” rather than causation, for basically the same reason this happens with observational statistics. This post, it was magnificent.
It didn’t convince anyone.
Years went by. I spent a lot of time reading Bourdieu and worrying about why I buy certain kinds of beer. Gradually I discovered that essentially the same point about futarchy had been made earlier by, e.g., Anders_H in 2015, abramdemski in 2017, and Luzka in 2021.
In early 2025, I went to a conference and got into a bunch of (friendly) debates about this. I was astonished to find that verbally repeating the arguments from my post did not convince anyone. I even immodestly asked one person to read my post on the spot. (Bloggers: Do not do that.) That sort of worked.
So, I decided to try again. I wrote another post called “Futarchy’s fundamental flaw”. It made the same argument with more aggression, with clearer examples, and with a new impossibility theorem that showed there doesn’t even exist any alternate payout function that would incentivize people to bid according to their causal beliefs.
That post… also didn’t convince anyone. In the discussion on LessWrong, many of my comments are upvoted for quality but downvoted for accuracy, which I think means, “nice try champ; have a head pat; nah.” Robin Hanson wrote a response, albeit without outward evidence of reading beyond the first paragraph. Even the people who agreed with me often seemed to interpret me as arguing that futarchy satisfies evidential decision theory rather than causal decision theory. Which was weird, given that I never mentioned either of those, don’t accept the premise that futarchy satisfies either of them, and don’t find the distinction helpful in this context.
In my darkest moments, I started to wonder if I might fail to achieve worldwide consensus that futarchy doesn’t estimate causal effects. I figured I’d wait a few years and then launch another salvo.
But then, legendary human Bolton Bailey decided to stop theorizing and take one of my thought experiments and turn it into an actual experiment. Thus, Futarchy’s fundamental flaw — the market was born. (You are now reading a blog post about that market.)
I gave a thought experiment where there are two coins and the market is trying to pick the one that’s more likely to land heads. For one coin, the bias is known, while for the other coin there’s uncertainty. I claimed futarchy would select the worse / wrong coin, due to this extra uncertainty.
Bolton formalized this as follows:
There are two markets, one for coin A and one for coin B.
Coin A is a normal coin that lands heads 60% of the time.
Coin B is a trick coin that either always lands heads or always lands tails, we just don’t know which. There’s a 59% chance it’s an always-heads coin.
Twenty-four hours before markets close, the true nature of coin B is revealed.
After the markets close, whichever coin has a higher price is flipped and contracts pay out $1 for heads and $0 for tails. The other market is cancelled so everyone gets their money back.
Get that? Everyone knows that there’s a 60% chance coin A will land heads and a 59% chance coin B will land heads. But for coin A, that represents true “aleatoric” uncertainty, while for coin B that represents “epistemic” uncertainty due to a lack of knowledge. (See Bayes is not a phase for more on “aleatoric” vs. “epistemic” uncertainty.)
Bolton created that market independently. At the time, we’d never communicated about this or anything else. To this day, I have no idea what he thinks about my argument or what he expected to happen.
In the forum for the market, there was a lot of debate about “whalebait”. Here’s the concern: Say you’ve bought a lot of contracts for coin B, but it emerges that coin B is always-tails. If you have a lot of money, then you might go in at the last second and buy a ton of contracts on coin A to try to force the market price above coin B, so the coin B market is cancelled and you get your money back.
The conversation seemed to converge towards the idea that this was whalebait. Though notice that if you’re buying contracts for coin A at any price above $0.60, you’re basically giving away free money. It could still work, but it’s dangerous and everyone else has an incentive to stop you. If I was betting in this market, I’d think that this was at least unlikely.
Bolton posted about the market. When I first saw the rules, I thought it wasn’t a valid test of my theory and wasted a huge amount of Bolton’s time trying to propose other experiments that would “fix” it. Bolton was very patient, but I eventually realized that it was completely fine and there was nothing to fix.
At the time, this is what the prices looked like:
That is, at the time, both coins were priced at $0.60, which is not what I had predicted. Nevertheless, I publicly agreed that this was a valid test of my claims.
I think this is a great test and look forward to seeing the results.
Let me reiterate why I thought the markets were wrong and coin B deserved a higher price. There’s a 59% chance coin B would turn out to be all-heads. If that happened, then (absent whales being baited) I thought the coin B market would activate, so contracts are worth $1. So that’s 59% × $1 = $0.59 of value. But if coin B turns out to be all-tails, I thought there was a good chance prices for coin B would drop below coin A, so the market is cancelled and you get your money back. So I thought a contract had to be worth more than $0.59.
If you buy a contract for coin B for $0.70, then I think that’s worth

P[all-heads] × P[market activates | all-heads] × $1
+ P[all-tails] × P[market cancelled | all-tails] × $0.70
= 0.59 × $1 + 0.41 × P[market cancelled | all-tails] × $0.70
= $0.59 + $0.287 × P[market cancelled | all-tails],

where I’ve taken P[market activates | all-heads] = 1, since (absent whale-baiting) the market should activate whenever coin B is revealed as all-heads.

Surely P[market cancelled | all-tails] isn’t that low. So surely this is worth more than $0.59.
More generally, say you buy a YES contract for coin B for $M. Then that contract would be worth

P[all-heads] × $1 × P[market activates | all-heads]
+ P[all-tails] × $M × P[market cancelled | all-tails]
= $0.59 + 0.41 × $M × P[market cancelled | all-tails].

It’s not hard to show that the breakeven price is

M = $0.59 / (1 - 0.41 × P[market cancelled | all-tails]).

Even if you thought P[market cancelled | all-tails] was only 50%, the breakeven price would still be $0.7421.
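If you want to check that arithmetic, here’s a minimal sketch in Python, under the same assumptions as above: contracts pay $1 on heads, cancellation refunds your purchase price, and the coin-B market always activates when the coin is revealed as all-heads. The inverted formula will come up again later in the post.

```python
def breakeven(p_cancel, p_heads=0.59):
    """Price M at which a coin-B contract's expected value equals its cost:
    M = p_heads * 1 + (1 - p_heads) * p_cancel * M, solved for M."""
    return p_heads / (1 - (1 - p_heads) * p_cancel)

def implied_p_cancel(price, p_heads=0.59):
    """Invert breakeven() to recover P[market cancelled | all-tails] from a price."""
    return (1 - p_heads / price) / (1 - p_heads)

print(breakeven(0.5))          # 0.7421..., matching the number above
print(implied_p_cancel(0.90))  # 0.8401..., which shows up again below
```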
Within a few hours, a few people bought contracts on coin B, driving up the price.
Then, Quroe proposed creating derivative markets.
In theory, if there was a market asking if coin A was going to resolve YES, NO, or N/A, supposedly people could arbitrage their bets accordingly and make this market calibrated.
Same for a similar market on coin B.
Thus, Futarchy’s Fundamental Fix - Coin A and Futarchy’s Fundamental Fix - Coin B came to be. These were markets in which people could bid on the probability that each coin would resolve YES, meaning the coin was flipped and landed heads; NO, meaning the coin was flipped and landed tails; or N/A, meaning the market was cancelled.
Honestly, I didn’t understand this. I saw no reason that these derivative markets would make people bid their true beliefs. If they did, then my whole theory that markets reflect correlation rather than causation would be invalidated.
Prices for coin B went up and down, but mostly up.
Eventually, a few people created large limit orders, which caused things to stabilize.
Here was the derivative market for coin A.
And here was the derivative market for coin B.
During this period, not a whole hell of a lot happened.
This brings us up to the moment of truth, when the true nature of coin B was to be revealed. At this point, coin B was at $0.90, even though everyone knew it only had a 59% chance of landing heads.
The nature of the coin was revealed. To show this was fair, Bolton did this by asking a bot to publicly generate a random number.
Thus, coin B was determined to be always-heads.
There were still 24 hours left to bid. At this point, a contract for coin B was guaranteed to pay out $1. The market quickly jumped to $1.
I was right. Everyone knew coin A had a higher chance of being heads than coin B, but everyone bid the price of coin B way above coin A anyway.
In the previous math box, we saw that the breakeven price should satisfy

M = $0.59 / (1 - 0.41 × P[market cancelled | all-tails]).

If you invert this and plug in M = $0.90, then you get

P[market cancelled | all-tails]
= (1 - $0.59 / M) / 0.41
= (1 - $0.59 / $0.90) / 0.41
= 84.01%
I’ll now open the floor for questions.
Yes, but that’s kind of the point. I created the thought experiment because I wanted to make the problem maximally obvious, because it’s subtle and everyone is determined to deny that it exists.
The fact that this is possible is concerning. If this can happen, then futarchy does not work in general. If you want to claim that futarchy works, then you need to spell out exactly what extra assumptions you’re adding to guarantee that this kind of thing won’t happen.
No. That’s just a quirk of the implementation. You can easily create situations that would have the same issue all the way through market close. Here’s one way you could do that:
On average, this market will run for 30 days. (The length follows a geometric distribution). Half the time, the market will close without the nature of coin B being revealed. Even when that happens, I claim the price for coin B will still be above coin A.
Yes. You should be able to do that, and I think you can. Here’s one way:
Generate 20 random bits, e.g. 10100011001100000001. Let coin B be heads with probability (49+N)%, where N is the number of 1 bits. Do not reveal these bits publicly. Instead, send each user a single bit, privately. First, have users generate public keys by running this command:
openssl genrsa 1024 > private.pem
openssl rsa -in private.pem -pubout > public.pem
Second, they should post the contents of public.pem when asking for their bit. For example:
Hi, can you please send me a bit? Here's my public key:
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDOlesWS+mnvHJOD2osUkbrxE+Y
PMqAUYqwemOwML0LlWLq5RobZRSeyssQhg0i3g2GsMZFMsvjindz6mxccdyP4M8N
mQVCK1Ovs1Z4+DxwmLf/y8vaGC3vfZBOhJDdaNdpRyUiQFaBW99We4cafVnmirRN
Py2lRe+CFgP3kSp4dQIDAQAB
-----END PUBLIC KEY-----
Third, whoever is running the market should save that key as public.pem, pick a bit, and encrypt it like this:
% echo "your secret bit is 1" | openssl pkeyutl -encrypt -pubin -inkey public.pem | base64
OuHt25Jwc1xYq63Ub8gOLKaZEJwwGHWDL0UGfydmvBapQNKf3l6Akol2Z2XHtCAC8G/lPJsCjb1dN878tU0aCMjbO5EvpMUTuohb0OczaCqAMld8uFL+j+uEZsIjKFT3Q52VumdVqMntJYG6Br6QeUs1vAL2HA6Nvych+Ao2e8M=
Users can then decrypt like this:
% echo "OuHt25Jwc1xYq63Ub8gOLKaZEJwwGHWDL0UGfydmvBapQNKf3l6Akol2Z2XHtCAC8G/lPJsCjb1dN878tU0aCMjbO5EvpMUTuohb0OczaCqAMld8uFL+j+uEZsIjKFT3Q52VumdVqMntJYG6Br6QeUs1vAL2HA6Nvych+Ao2e8M=" | base64 -d | openssl pkeyutl -decrypt -inkey private.pem
your secret bit is 1
Or you could use email…
I think this market captures a dynamic that’s present in basically any use of futarchy: You have some information, but you know other information is out there.
I claim that this market will be weird. Say it just opened. If you didn’t get a bit, then as far as you know, the bias for coin B could be anywhere between 49% and 69%, with a mean of 59%. If you did get a bit, then it turns out that the posterior mean is 58.5% if you got a 0 and 59.5% if you got a 1. So either way, your best guess is very close to 59%.
However, the information about the true bias of coin B is out there! Surely coin B is more likely to end up with a higher price in situations where there are lots of 1 bits. This means you should bid at least a little higher than your true belief, for the same reason as in the main experiment—the market activating is correlated with the true bias of coin B.
Of course, after the markets open, people will see each other’s bids and… something will happen. Initially, I think prices will be strongly biased for the above reasons. But as you get closer to market close, there’s less time for information to spread. If you are the last person to trade, and you know you’re the last person to trade, then you should trade based on your true beliefs.
Except, everyone knows that there’s less time for information to spread. So while you are waiting till the last minute to reveal your true beliefs, everyone else will do the same thing. So maybe people sort of rush in at the last second? (It would be easier to think about this if implemented with batched auctions rather than a real-time market.)
Anyway, while the game theory is vexing, I think there’s a mix of (1) people bidding higher than their true beliefs due to correlations between the final price and the true bias of coin B and (2) people “racing” to make the final bid before the markets close. Both of these seem in conflict with the idea of prediction markets making people share information and measure collective beliefs.
I like futarchy. I think society doesn’t make decisions very well, and I think we should give much more attention to new ideas like futarchy that might help us do better. I just think we should be aware of its imperfections and consider variants (e.g. committing to randomization) that would resolve them.
Possibly?
2025-08-07 08:00:00
The heritability wars have been a-raging. Watching these, I couldn’t help but notice that there’s near-universal confusion about what “heritable” means. Partly, that’s because it’s a subtle concept. But it also seems relevant that almost all explanations of heritability are very, very confusing. For example, here’s Wikipedia’s definition:
Any particular phenotype can be modeled as the sum of genetic and environmental effects:
Phenotype (P) = Genotype (G) + Environment (E).
Likewise the phenotypic variance in the trait – Var (P) – is the sum of effects as follows:
Var(P) = Var(G) + Var(E) + 2 Cov(G,E).
In a planned experiment Cov(G,E) can be controlled and held at 0. In this case, heritability, H², is defined as
H² = Var(G) / Var(P)
H² is the broad-sense heritability.
Do you find that helpful? I hope not, because it’s a mishmash of undefined terminology, unnecessary equations, and borderline-false statements. If you’re in the mood for a mini-polemic:
Reading this almost does more harm than good. While the final definition is correct, it never even attempts to explain what G and P are, it gives an incorrect condition for when the definition applies, and instead mostly devotes itself to an unnecessary digression about environmental effects. The rest of the page doesn’t get much better. Despite being 6700 words long, I think it would be impossible to understand heritability simply by reading it.
Meanwhile, some people argue that heritability is meaningless for human traits like intelligence or income or personality. They claim that those traits are the product of complex interactions between genes and the environment and it’s impossible to disentangle the two. These arguments have always struck me as “suspiciously convenient”. I figured that the people making them couldn’t cope with the hard reality that genes are very important and have an enormous influence on what we are.
But I increasingly feel that the skeptics have a point. While I think it’s a fact that most human traits are substantially heritable, it’s also true that the technical definition of heritability is really weird, and simply does not mean what most people think it means.
In this post, I will explain exactly what heritability is, while assuming no background. I will skip everything that can be skipped but—unlike most explanations—I will not skip things that can’t be skipped. Then I’ll go through a series of puzzles demonstrating just how strange heritability is.
How tall you are depends on your genes, but also on what you eat, what diseases you got as a child, and how much gravity there is on your home planet. And all those things interact. How do you take all that complexity and reduce it to a single number, like “80% heritable”?
The short answer is: Statistical brute force. The long answer is: Read the rest of this post.
It turns out that the hard part of heritability isn’t heritability. Lurking in the background is a slippery concept known as a genotypic value. Discussions of heritability often skim past these. Quite possibly, just looking at the words “genotypic value”, you are thinking about skimming ahead right now. Resist that urge! Genotypic values are the core concept, and without them you cannot possibly understand heritability.
For any trait, your genotypic value is the “typical” outcome if someone with your DNA were raised in many different random environments. In principle, if you wanted to know your genotypic height, you’d need to do this:

1. Create many clones with your exact DNA.
2. Raise each clone in a different, randomly chosen environment.
3. Wait for the clones to grow up, and average their adult heights.
Since you can’t / shouldn’t do that, you’ll never know your genotypic height. But that’s how it’s defined in principle—the average height someone with your DNA would grow to in a random environment. If you got lots of food and medical care as a child, your actual height is probably above your genotypic height. If you suffered from rickets, your actual height is probably lower than your genotypic height.
Comfortable with genotypic values? OK. Then (broad-sense) heritability is easy. It’s the ratio
heritability = var[genotype] / var[height].
Here, var is the variance, basically just how much things vary in the population. Among all adults worldwide, var[height] is around 50 cm². (Incidentally, did you know that variance was invented for the purpose of defining heritability?)

Meanwhile, var[genotype] is how much genotypic height varies in the population. That might seem hopeless to estimate, given that we don’t know anyone’s genotypic height. But it turns out that we can still estimate the variance using, e.g., pairs of adopted twins, and it’s thought to be around 40 cm². If we use those numbers, the heritability of height would be
heritability ≈ (40 cm²) / (50 cm²) ≈ 0.8.
People often convert this to a percentage and say “height is 80% heritable”. I’m not sure I like that, since it masks heritability’s true nature as a ratio. But everyone does it, so I’ll do it too. People who really want to be intimidating might also say, “genes explain 80% of the variance in height”.
Of course, basically the same definition works for any trait, like weight or income or fondness for pseudonymous existential angst science blogs. But instead of replacing “height” with “trait”, biologists have invented the ultra-fancy word “phenotype” and write
heritability = var[genotype] / var[phenotype].
The word “phenotype” suggests some magical concept that would take years of study to understand. But don’t be intimidated. It just means the actual observed value of some trait(s). You can measure your phenotypic height with a tape measure.
Let me make two points before moving on.
First, this definition of heritability assumes nothing. We are not assuming that genes are independent of the environment or that “genotypic effects” combine linearly with “environmental effects”. We are not assuming that genes are in Hardy-Weinberg equilibrium, whatever that is. No. I didn’t talk about that stuff because I don’t need to. There are no hidden assumptions. The above definition always works.
Second, many normal English words have parallel technical meanings, such as “field”, “insulator”, “phase”, “measure”, “tree”, or “stack”. Those are all nice, because they’re evocative and it’s almost always clear from context which meaning is intended. But sometimes, scientists redefine existing words to mean something technical that overlaps but also contradicts the normal meaning, as in “salt”, “glass”, “normal”, “berry”, or “nut”. These all cause confusion, but “heritability” must be the most egregious case in all of science.
Before you ever heard the technical definition of heritability, you surely had some fuzzy concept in your mind. Personally, I thought of heritability as meaning how many “points” you get from genes versus the environment. If charisma was 60% heritable, I pictured each person as having 10 total “charisma points”, 6 of which come from genes, and 4 from the environment:

Genes ★★★★★★
Environment ★★★★
Total ★★★★★★★★★★
If you take nothing else from this post, please remember that the technical definition of heritability does not work like that. You might hope that if we add some plausible assumptions, the above ratio-based definition would simplify into something nice and natural, that aligns with what “heritability” means in normal English. But that does not happen. If that’s confusing, well, it’s not my fault.
Not sure what’s happening here, but it seems relevant.
So “heritability” is just the ratio of genotypic and phenotypic variance. Is that so bad?
I think… maybe?
How heritable is eye color?
Close to 100%.
This seems obvious, but let’s justify it using our definition that heritability = var[genotype] / var[phenotype].
Well, people have the same eye color, no matter what environment they are raised in. That means that genotypic eye color and phenotypic eye color are the same thing. So they have the same variance, and the ratio is 1. Nothing tricky here.
How heritable is speaking Turkish?
Close to 0%.
Your native language is determined by your environment. If you grow up in a family that speaks Turkish, you speak Turkish. Genes don’t matter.
Of course, there are lots of genes that are correlated with speaking Turkish, since Turks are not, genetically speaking, a random sample of the global population. But that doesn’t matter, because if you put Turkish babies in Korean households, they speak Korean. Genotypic values are defined by what happens in a random environment, which breaks the correlation between speaking Turkish and having Turkish genes.
Since 1.1% of humans speak Turkish, the genotypic value for speaking Turkish is around 0.011 for everyone, no matter their DNA. Since that’s basically constant, the genotypic variance is near zero, and heritability is near zero.
How heritable is speaking English?
Perhaps 30%. Probably somewhere between 10% and 50%. Definitely more than zero.
That’s right. Turkish isn’t heritable but English is. Yes it is. If you ask an LLM, it will tell you that the heritability of English is zero. But the LLM is wrong and I am right.
Why? Let me first acknowledge that Turkish is a little bit heritable. For one thing, some people have genes that make them non-verbal. And there’s surely some genetic basis for being a crazy polyglot that learns many languages for fun. But speaking Turkish as a second language is quite rare, meaning that the genotypic value of speaking Turkish is close to 0.011 for almost everyone.
English is different. While only 1 in 20 people in the world speak English as a first language, 1 in 7 learn it as a second language. And who does that? Educated people.
Educational attainment is often estimated to be something like 40% heritable. Some argue the heritability of educational attainment is much lower. I’d like to avoid debating the exact numbers, but note that these lower numbers are usually estimates of “narrow-sense” heritability rather than the “broad-sense” heritability we’re talking about. So they should be lower. (I’ll explain the difference later.) It’s entirely possible that broad-sense heritability is lower than 40%, but everyone agrees it’s much larger than zero. So the heritability of English is surely much larger than zero, too.
Say there’s an island where genes have no impact on height. How heritable is height among people on this island?
0%.
There’s nothing tricky here.
Say there’s an island where genes entirely determine height. How heritable is height?
100%.
Again, nothing tricky.
Say there’s an island where neither genes nor the environment influence height and everyone is exactly 165 cm tall. How heritable is height?
It’s undefined.
In this case, everyone has exactly the same phenotypic and genotypic height, namely 165 cm. Since those are both constant, their variance is zero and heritability is zero divided by zero. That’s meaningless.
Say there’s an island where some people have genes that predispose them to be taller than others. But the island is ruled by a cruel despot who denies food to children with taller genes, so that on average, everyone is 165 ± 5 cm tall. How heritable is height?
0%.
On this island, everyone has a genotypic height of 165 cm. So genotypic variance is zero, but phenotypic variance is positive, due to the ± 5 cm random variation. So heritability is zero divided by some positive number.
Say there’s an island where some people have genes that predispose them to be tall and some have genes that predispose them to be short. But, the same genes that make you tall also make you semi-starve your children, so in practice everyone is exactly 165 cm tall. How heritable is height?
∞%. Not 100%, mind you, infinitely heritable.
To see why, note that if babies with short/tall genes are adopted by parents with short/tall genes, there are four possible cases.
Baby genes | Parent genes | Food | Height |
---|---|---|---|
Short | Short | Lots | 165 cm |
Short | Tall | Semi-starvation | Less than 165 cm |
Tall | Short | Lots | More than 165 cm |
Tall | Tall | Semi-starvation | 165 cm |
If a baby with short genes is adopted into random families, they will be shorter on average than a baby with tall genes would be. So genotypic height varies. However, in reality, everyone is the same height, so phenotypic height is constant. So genotypic variance is positive while phenotypic variance is zero. Thus, heritability is some positive number divided by zero, i.e. infinity.
(Are you worried that humans are “diploid”, with two genes (alleles) at each locus, one from each biological parent? Or that when there are multiple parents, they all tend to have thoughts on the merits of semi-starvation? If so, please pretend people on this island reproduce asexually. Or, if you like, pretend that there’s strong assortative mating so that everyone either has all-short or all-tall genes and only breeds with similar people. Also, don’t fight the hypothetical.)
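Here’s a small simulation of that island, with made-up effect sizes (tall genes add 10 cm of growth potential; semi-starvation costs 10 cm), just to make the variance bookkeeping concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
baby_tall = rng.integers(0, 2, 100_000).astype(bool)

def height(baby_tall, parent_tall):
    # Tall genes add 10 cm; tall *parents* semi-starve the child, costing 10 cm.
    return 165 + 10 * baby_tall - 10 * parent_tall

# Genotypic height: average over random adoptive families (half short, half tall).
genotypic = 0.5 * height(baby_tall, False) + 0.5 * height(baby_tall, True)

# Phenotypic height: in reality, parents carry the same genes as the baby.
phenotypic = height(baby_tall, baby_tall)

print(np.var(genotypic))   # ~25.0: positive genotypic variance
print(np.var(phenotypic))  # 0.0: zero phenotypic variance, so "infinite" heritability
```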
Say there are two islands. The islanders all live the same way and have the same gene pool, except that people on island A have a gene that makes them grow to be 150 ± 5 cm tall, while people on island B have a gene that makes them grow to be 160 ± 5 cm tall. How heritable is height?
It’s 0% for island A and 0% for island B, and 50% for the two islands together.
Why? Well on island A, everyone has the same genotypic height, namely 150 cm. Since that’s constant, genotypic variance is zero. Meanwhile, phenotypic height varies a bit, so phenotypic variance is positive. Thus, heritability is zero.
For similar reasons, heritability is zero on island B.
But if you put the two islands together, half of people have a genotypic height of 150 cm and half have a genotypic height of 160 cm, so suddenly (via math) genotypic variance is 25 cm². There’s some extra random variation so (via more math) phenotypic variance turns out to be 50 cm². So heritability is 25 / 50 = 50%.
If you combine the populations, then genotypic variance is
Var[150 cm + 10 cm × Bernoulli(0.5)]
= (10 cm)² × Var[Bernoulli(0.5)]
= (10 cm)² × 0.25
= 25 cm².
Meanwhile phenotypic variance is
Var[150 cm + 10 cm × Bernoulli(0.5) + 5 cm × Normal(0,1)]
= (10 cm)² × Var[Bernoulli(0.5)] + (5 cm)² × Var[Normal(0,1)]
= (10 cm)² × 0.25 + (5 cm)² × 1
= 50 cm².
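Or, if you’d rather not trust the math box, a quick simulation with the same numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
island = rng.integers(0, 2, n)                # 0 = island A, 1 = island B
genotypic = 150 + 10 * island                 # each person's genotypic height
phenotypic = genotypic + rng.normal(0, 5, n)  # plus ±5 cm random variation

for mask, label in [(island == 0, "island A"),
                    (island == 1, "island B"),
                    (np.ones(n, bool), "combined")]:
    h2 = np.var(genotypic[mask]) / np.var(phenotypic[mask])
    print(label, round(h2, 3))                # ~0.0, ~0.0, ~0.5
```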
Say there’s an island where neither genes nor the environment influence height. Except, some people have a gene that makes them inject their babies with human growth hormone, which makes them 5 cm taller. How heritable is height?
0%.
True, people with that gene will tend to be taller. And the gene is causing them to be taller. But if babies are adopted into random families, it’s the genes of the parents that determine if they get injected or not. So everyone has the same genotypic height, genotypic variance is zero, and heritability is zero.
Suppose there’s an island where neither genes nor the environment influence height. Except, some people have a gene that makes them, as babies, talk their parents into injecting them with human growth hormone. The babies are very persuasive. How heritable is height?
We’re back to 100%.
The difference with the previous scenario is that now babies with that gene get injected with human growth hormone no matter who their parents are. Since nothing else influences height, genotype and phenotype are the same, have the same variance, and heritability is 100%.
Suppose there’s an island where neither genes nor the environment influence height. Except, there are crabs that seek out blue-eyed babies and inject them with human growth hormone. The crabs, they are unstoppable. How heritable is height?
Again, 100%.
Babies with DNA for blue eyes get injected. Babies without DNA for blue eyes don’t. Since nothing else influences height, genotype and phenotype are the same and heritability is 100%.
Note that if the crabs were seeking out parents with blue eyes and then injecting their babies, then height would be 0% heritable.
It doesn’t matter that human growth hormone is a weird thing that comes from outside the baby. It doesn’t matter if we think crabs should be semantically classified as part of “the environment”. It doesn’t matter that heritability would drop to zero if you killed all the crabs, or that the direct causal effect of the relevant genes has nothing to do with height. Heritability is a ratio and doesn’t care.
So heritability can be high even when genes have no direct causal effect on the trait in question. It can be low even when there is a strong direct effect. It changes when the environment changes. It even changes based on how you group people together. It can be larger than 100% or even undefined.
Even so, I’m worried people might interpret this post as a long way of saying heritability is dumb and bad, trolololol. So I thought I’d mention that this is not my view.
Say a bunch of companies create different LLMs and train them on different datasets. Some of the resulting LLMs are better at writing fiction than others. Now I ask you, “What percentage of the difference in fiction-writing performance is due to the base model code, rather than the datasets or the GPUs or the learning rate schedules?”
That’s a natural question. But if you put it to an AI expert, I bet you’ll get a funny look. You need code and data and GPUs to make an LLM. None of those things can write fiction by themselves. Experts would prefer to think about one change at a time: Given this model, changing the dataset in this way changes fiction writing performance this much.
Similarly, for humans, I think what we really care about is interventions. If we changed this gene, could we eliminate a disease? If we educate children differently, can we make them healthier and happier? No single number can possibly contain all that information.
But heritability is something. I think of it as saying how much hope we have to find an intervention by looking at changes in current genes or current environments.
If heritability is high, then given current typical genes, you can’t influence the trait much through current typical environmental changes. If you only knew that eye color was 100% heritable, that means you won’t change your kid’s eye color by reading to them, or putting them on a vegetarian diet, or moving to higher altitude. But it’s conceivable you could do it by putting electromagnets under their bed or forcing them to communicate in interpretive dance.
If heritability is high, that also means that given current typical environments you can influence the trait through current typical genes. If the world was ruled by an evil despot who forced red-haired people to take pancreatic cancer pills, then pancreatic cancer would be highly heritable. And you could change the odds someone gets pancreatic cancer by swapping in existing genes for black hair.
If heritability is low, that means that given current typical environments, you can’t cause much difference through current typical genetic changes. If we only knew that speaking Turkish was ~0% heritable, that means that doing embryo selection won’t much change the odds that your kid speaks Turkish.
If heritability is low, that also means that given current typical genes, you might be able to change the trait through current typical environmental changes. If we only knew that speaking Turkish was 0% heritable, then that means there might be something you could do to change the odds your kid speaks Turkish, e.g. moving to Turkey. Or, it’s conceivable that it’s just random and moving to Turkey wouldn’t do anything.
Heritability | Influenced by typical genes? | Influenced by typical environments? |
---|---|---|
High | Yes | No |
Low | No | Maybe |
But be careful. Just because heritability is high doesn’t mean that changing genes is easy. And just because heritability is low doesn’t mean that changing the environment is easy.
And heritability doesn’t say anything about non-typical environments or non-typical genes.
If an evil despot is giving all the red-haired people cancer pills, perhaps we could solve that by intervening on the despot. And if you want your kid to speak Turkish, it’s possible that there’s some crazy genetic modification that would turn them into an unstoppable Turkish-learning machine.
Heritability has no idea about any of that, because it’s just an observational statistic based on the world as it exists today.
Heritability: Five Battles by Steven Byrnes. Covers similar issues in a way that’s more connected to the world and less shy about making empirical claims.
A molecular genetics perspective on the heritability of human behavior and group differences by Alexander Gusev. I find the quantitative genetics literature to be incredibly sloppy about notation and definitions and math. (Is this why LLMs are so bad at it?) This is the only source I’ve found that didn’t drive me completely insane.
This post focused on “broad-sense” heritability. But there’s a second heritability out there, called “narrow-sense”. Like broad-sense heritability, we can define the narrow-sense heritability of height as a ratio:
narrow heritability = var[additive height] / var[phenotype]
The difference is that rather than having genotypic height in the numerator, we now have “additive height”. To define that, imagine doing the following for each of your genes, one at a time:

1. Take a large group of random embryos.
2. In each, replace that one gene with your version, changing nothing else.
3. Raise them in random environments and record their average adult height.
4. Subtract the overall average human height. The result is that gene’s additive effect.
For example, say overall average human height is 150 cm, but when you insert gene #4023 from yourself into random embryos, their average height is 149.8 cm. Then the additive effect of your gene #4023 is -0.2 cm.
Your “additive height” is average human height plus the sum of additive effects for each of your genes. If the average human height is 150 cm, you have one gene with a -0.2 cm additive effect, another gene with a +0.3 cm additive effect and the rest of your genes have no additive effect, then your “additive height” is 150 cm - 0.2 cm + 0.3 cm = 150.1 cm.
Note: This terminology of “additive height” is non-standard. People usually define narrow-sense heritability using “additive effects”, which are the same thing but without including the mean. This doesn’t change anything since adding a constant doesn’t change the variance. But it’s easier to say “your additive height is 150.1 cm” rather than “the additive effect of your genes on height is +0.1 cm” so I’ll do that.
Honestly, I don’t think the distinction between “broad-sense” and “narrow-sense” heritability is that important. We’ve already seen that broad-sense heritability is weird, and narrow-sense heritability is similar but different. So it won’t surprise you to learn that narrow-sense heritability is differently-weird.
But if you really want to understand the difference, I can offer you some more puzzles.
Say there’s an island where people have two genes, each of which is equally likely to be A or B. People are 100 cm tall if they have an AA genotype, 150 cm tall if they have an AB or BA genotype, and 200 cm tall if they have a BB genotype. How heritable is height?
Both broad and narrow-sense heritability are 100%.
The explanation for broad-sense heritability is like many we’ve seen already. Genes entirely determine someone’s height, and so genotypic and phenotypic height are the same.
For narrow-sense heritability, we need to calculate some additive heights. The overall mean is 150 cm, each A gene has an additive effect of -25 cm, and each B gene has an additive effect of +25 cm. But wait! Let’s work out the additive height for all four cases:
genotype | phenotypic height | additive height |
---|---|---|
AA | 100 cm | 150 cm - 25 cm - 25 cm = 100 cm |
AB | 150 cm | 150 cm - 25 cm + 25 cm = 150 cm |
BA | 150 cm | 150 cm + 25 cm - 25 cm = 150 cm |
BB | 200 cm | 150 cm + 25 cm + 25 cm = 200 cm |
Since additive height is also the same as phenotypic height, narrow-sense heritability is also 100%.
In this case, the two heritabilities were the same. At a high level, that’s because the genes act independently. When there are “gene-gene” interactions, you tend to get different numbers.
Say there’s an island where people have two genes, each of which is equally likely to be A or B. People with AA or BB genomes are 100 cm, while people with AB or BA genomes are 200 cm. How heritable is height?
Broad-sense heritability is 100%, while narrow-sense heritability is 0%.
You know the story for broad-sense heritability by now. For narrow-sense heritability, we need to do a little math.
Insert an A gene into random embryos at one locus, and the other locus is equally likely to be A or B, so the average height is (100 cm + 200 cm) / 2 = 150 cm. That’s exactly the overall mean, so the additive effect of an A gene is zero. By symmetry, the same holds for B. So everyone has an additive height of 150 cm, no matter their genes. That’s constant, so narrow-sense heritability is zero.
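If you’d rather check both two-gene puzzles mechanically, here’s a brute-force sketch. It computes each allele’s additive effect as the average height with that allele fixed, minus the overall mean. (Broad-sense heritability is trivially 100% in both models, since genes fully determine height.)

```python
import itertools
import numpy as np

def narrow_heritability(height):
    """Narrow-sense heritability for a two-locus model with equally likely
    A/B genes and no environmental variation (so phenotype = genotype)."""
    genomes = list(itertools.product("AB", repeat=2))
    heights = np.array([height(g) for g in genomes], dtype=float)
    mean = heights.mean()

    # Additive effect of allele a at locus i: average height over genomes
    # carrying that allele there, minus the overall mean.
    def additive(i, a):
        return np.mean([height(g) for g in genomes if g[i] == a]) - mean

    additive_heights = np.array(
        [mean + additive(0, g[0]) + additive(1, g[1]) for g in genomes])
    return additive_heights.var() / heights.var()

print(narrow_heritability(lambda g: 100 + 50 * g.count("B")))       # 1.0 (previous puzzle)
print(narrow_heritability(lambda g: 200 if g[0] != g[1] else 100))  # 0.0 (this puzzle)
```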
I think basically for two reasons:
First, for some types of data (twin studies) it’s much easier to estimate broad-sense heritability. For other types of data (GWAS) it’s much easier to estimate narrow-sense heritability. So we take what we can get.
Second, they’re useful for different things. Broad-sense heritability is defined by looking at what all your genes do together. That’s nice, since you are the product of all your genes working together. But combinations of genes are not well-preserved by reproduction. If you have a kid, then they breed with someone, their kids breed with other people, and so on. Generations later, any special combination of genes you might have is gone. So if you’re interested in the long-term impact of you having another kid, narrow-sense heritability might be the way to go.
(Sexual reproduction doesn’t really allow for preserving the genetics that make you uniquely “you”. Remember, almost all your genes are shared by lots of other people. If you have any unique genes, that’s almost certainly because they have deleterious de-novo mutations. From the perspective of evolution, your life just amounts to a tiny increase or decrease in the per-locus population frequencies of your individual genes. The participants in the game of evolution are genes. Living creatures like you are part of the playing field. Food for thought.)
2025-07-17 08:00:00
Your eyes sense color. They do this because you have three different kinds of cone cells on your retinas, which are sensitive to different wavelengths of light.
For whatever reason, evolution decided those wavelengths should be overlapping. For example, M cones are most sensitive to 535 nm light, while L cones are most sensitive to 560 nm light. But M cones are still stimulated quite a lot by 560 nm light—around 80% of maximum. This means you never (normally) get to experience having just one type of cone firing.
So what do you do?
If you’re a quitter, I guess you accept the limits of biology. But if you like fun, then what you do is image people’s retinas, classify individual cones, and then selectively stimulate them using laser pulses, so you aren’t limited by stupid cone cells and their stupid blurry responsivity spectra.
Fong et al. (2025) chose fun.
When they stimulated only M cells…
Subjects report that [pure M-cell activation] appears blue-green of unprecedented saturation.
If you make people see brand-new colors, you will have my full attention. It doesn’t hurt to use lasers. I will read every report from every subject. Do our brains even know how to interpret these signals, given that they can never occur?
But tragically, the paper doesn’t give any subject reports. Even though most of the subjects were, umm, authors on the paper. If you want to know what this new color is like, the above quote is all you get for now.
Or… possibly you can see that color right now?
If you click on the above image, a little animation will open. Please do that now and stare at the tiny white dot. Weird stuff will happen, but stay focused on the dot. Blink if you must. It takes one minute, and it’s probably best to experience it without extra information, i.e. without reading past this sentence.
The idea for that animation is not new. It’s plagiarized from Skytopia’s Eclipse of Titan optical illusion (h/t Steve Alexander), which dates back to at least 2010. Later I’ll show you some variants with other colors and give you a tool to make your own.
If you refused to look at the animation, it’s just a bluish-green background with a red circle on top that slowly shrinks down to nothing. That’s all. But as it shrinks, you should hallucinate a very intense blue-green color around the rim.
Why do you hallucinate that crazy color? I think the red circle saturates the hell out of your red-sensitive L cones. Ordinarily, the green frequencies in the background would stimulate both your green-sensitive M cones and your red-sensitive L cones, due to their overlapping spectra. But the red circle has desensitized your red cones, so you get to experience your M cones firing without your L cones firing as much, and voilà—insane color.
So here’s my question: Can that type of optical illusion show you all the same colors you could see by shooting lasers into your eyes?
That turns out to be a tricky question. See, here’s a triangle:
Think of this triangle as representing all the “colors” you could conceivably experience. The lower-left corner represents only having your S cones firing, the top corner represents only your M cones firing, and so on.
So what happens if you look at different wavelengths of light?
Short wavelengths near 400 nm mostly just stimulate the S cones, but also stimulate the others a little. Longer wavelengths stimulate the M cones more, but also stimulate the L cones, because the M and L cones have overlapping spectra. (That figure, and the following, are modified from Fong et al.)
When you mix different wavelengths of light, you mix the cell activations. So all the colors you can normally experience fall inside this shape:
That’s the standard human color gamut, in LMS colorspace. Note that the exact shape of this gamut is subject to debate. For one thing, the exact sensitivity of cells is hard to measure and still a subject of research. Also, it’s not clear how far that gamut should reach into the lower-left and lower-right corners, since wavelengths outside 400-700 nm still stimulate cells a tiny bit.
And it gets worse. Most of the technology we use to represent and display images electronically is based on standard RGB (sRGB) colorspace. This colorspace, by definition, cannot represent the full human color gamut.
The precise definition of sRGB colorspace is quite involved. But very roughly speaking, when an sRGB image is “pure blue”, your screen is supposed to show you a color that looks like 450-470 nm light, while “pure green” should look like 520-530 nm light, and “pure red” should look like 610-630 nm light. So when your screen mixes these together, you can only see colors inside this triangle.
(The corners of this triangle don’t quite touch the boundaries of the human color gamut. That’s because it’s very difficult to produce single wavelengths of light without using lasers. In reality, the sRGB specification says that pure red/blue/green should produce a mixture of colors centered around the wavelengths I listed above.)
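If you want to play with this numerically, here’s a rough sketch mapping an sRGB color to cone activations. It uses the standard sRGB-to-XYZ matrix and the Hunt-Pointer-Estevez XYZ-to-LMS matrix, which is just one of several LMS conventions; as noted above, the exact cone sensitivities are still debated.

```python
import numpy as np

# Linear-light sRGB -> CIE XYZ (D65), per the sRGB standard.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# CIE XYZ -> LMS, Hunt-Pointer-Estevez matrix (D65-normalized).
XYZ_TO_LMS = np.array([[ 0.4002, 0.7076, -0.0808],
                       [-0.2263, 1.1653,  0.0457],
                       [ 0.0,    0.0,     0.9182]])

def srgb_to_lms(rgb):
    """Gamma-encoded sRGB triple (components in 0..1) -> (L, M, S) activations."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer function to get linear light.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return XYZ_TO_LMS @ SRGB_TO_XYZ @ linear

# "Pure green" on your screen still drives the L cones almost as hard as M:
print(srgb_to_lms([0.0, 1.0, 0.0]))  # roughly [0.64, 0.76, 0.11]
```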
What’s the point of all this theorizing? Simple: When you look at the optical illusions on a modern screen, you aren’t just fighting the overlapping spectra of your cones. You’re also fighting the fact that the screen you’re looking at can’t produce single wavelengths of light.
So do the illusions actually take you outside the natural human color gamut? Unfortunately, I’m not sure. I can’t find much quantitative information about how much your cones are saturated when you stare at red circles. My best guess is no, or perhaps just a little.
If you’d like to explore these types of illusions further, I made a page in which you can pick any colors. You can also change the size of the circle, the countdown time, if the circle should shrink or grow, and how fast it does that.
You can try it here. You can export the animation to an animated SVG, which will be less than 1 kb. Or you can just save the URL.
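For the curious: these animations are tiny because each one is just an SVG circle whose radius shrinks over time. Here’s a hand-rolled sketch that writes one out; the colors, sizes, and timing are illustrative guesses, not the tool’s exact values.

```python
# Write a minimal shrinking-circle illusion as an animated (SMIL) SVG.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="800" height="600">
  <rect width="100%" height="100%" fill="#00b3a1"/>
  <circle cx="400" cy="300" r="250" fill="#ff0000">
    <animate attributeName="r" from="250" to="0" dur="60s" fill="freeze"/>
  </circle>
  <circle cx="400" cy="300" r="3" fill="#ffffff"/>
</svg>"""

with open("illusion.svg", "w") as f:
    f.write(svg)  # open this file in a browser and stare at the white dot
```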
Some favorites:
If you’re colorblind, I don’t think these will work, though I’m not sure. Folks with deuteranomaly have M cones, but they’re shifted to respond more like L cones. In principle, these types of illusions might help selectively activate them, but I have no idea if that will lead to stronger color perception. I’d love to hear from you if you try it.
2025-07-10 08:00:00
The idea of “processed food” may simultaneously be the most and least controversial concept in nutrition. So I did a self-experiment alternating between periods of eating whatever and eating only “minimally processed” food, while tracking my blood sugar, blood pressure, pulse, and weight.
Carrots and barley and peanuts are “unprocessed” foods. Donuts and cola and country-fried steak are “processed”. It seems like the latter are bad for you. But why? There are several overlapping theories:
Maybe unprocessed food contains more “good” things (nutrients, water, fiber, omega-3 fats) and less “bad” things (salt, sugar, trans fat, microplastics).
Maybe processing (by grinding everything up and removing fiber, etc.) means your body has less time to extract nutrients and gets more dramatic spikes in blood sugar.
Maybe capitalism has engineered processed food to be “hyperpalatable”. Cool Ranch® flavored tortilla chips sort of exploit bugs in our brains and are too rewarding for us to deal with. So we eat a lot and get fat.
Maybe we feel full based on the amount of food we eat, rather than the number of calories. Potatoes have around 750 calories per kilogram while Cool Ranch® flavored tortilla chips have around 5350. Maybe when we eat the latter, we eat more calories and get fat.
Maybe eliminating highly processed food reduces the variety of food, which in turn reduces how much we eat. If you could eat (1) unlimited burritos, (2) unlimited ice cream, or (3) unlimited ice cream and burritos, you’d eat the most in situation (3), right?
Even without theory, everyone used to be skinny and now everyone is fat. What changed? Many things, but one is that our “food environment” now contains lots of processed food.
There is also some experimental evidence. Hall et al. (2019) had people live in a lab for a month, switching between being offered unprocessed or ultra-processed food. They were told to eat as much as they want. Even though the diets were matched in terms of macronutrients, people still ate less and lost weight with the unprocessed diet.
On the other hand, what even is processing? The USDA—uhh—may have deleted their page on the topic. But they used to define it as:
washing, cleaning, milling, cutting, chopping, heating, pasteurizing, blanching, cooking, canning, freezing, drying, dehydrating, mixing, or other procedures that alter the food from its natural state. This may include the addition of other ingredients to the food, such as preservatives, flavors, nutrients and other food additives or substances approved for use in food products, such as salt, sugars and fats.
It seems crazy to try to avoid a category of things so large that it includes washing, chopping, and flavors.
Ultimately, “processing” can’t be the right way to think about diet. It’s just too many unrelated things. Some of them are probably bad and others are probably fine. When we finally figure out how nutrition works, surely we will use more fine-grained concepts.
For now, I guess I believe that our fuzzy concept of “processing” is at least correlated with being less healthy.
That’s why, even though I think seed oil theorists are confused, I expect that avoiding seed oils is probably good in practice: Avoiding seed oils means avoiding almost all processed food. (For now. The seed oil theorists seem to be busily inventing seed-oil free versions of all the ultra-processed foods.)
But what I really want to know is: What benefit would I get from making my diet better?
My diet is already fairly healthy. I don’t particularly want or need to lose weight. If I tried to eat in the healthiest way possible, I guess I’d eliminate all white rice and flour, among other things. I really don’t want to do that. (Seriously, this experiment has shown me that flour contributes a non-negligible fraction of my total joy in life.) But if that would make me live 5 years longer or have 20% more energy, I’d do it anyway.
So is it worth it? What would be the payoff? As far as I can tell, nobody knows. So I decided to try it. For at least a few weeks, I decided to go hard and see what happens.
I alternated between “control” periods and two-week “diet” periods. During the control periods, I ate whatever I wanted.
During the diet periods I ate the “most unprocessed” diet I could imagine sticking to long-term. To draw a clear line, I decided that I could eat whatever I want, but it had to start as single ingredients. To emphasize, if something had a list of ingredients and there was more than one item, it was prohibited. In addition, I decided to ban flour, sugar, juice, white rice, rolled oats (steel-cut oats allowed) and dairy (except plain yogurt).
Yes, in principle, I was allowed to buy wheat and mill my own flour. But I didn’t.
I made no effort to control portions at any time. For reasons unrelated to this experiment, I also did not consume meat, eggs, or alcohol.
This diet was hard. In theory, I could eat almost anything. But after two weeks on the diet, I started to have bizarre reactions when I saw someone eating bread. It went beyond envy to something bordering on contempt. Who are you to eat bread? Why do you deserve that?
I guess you can interpret that as evidence in favor of the diet (bread is addictive) or against it (life sucks without bread).
The struggle was starches. For breakfast, I’d usually eat fruit and steel-cut oats, which was fine. For the rest of the day, I basically replaced white rice and flour with barley, farro, potatoes, and brown basmati rice, which has the lowest GI of all rice. I’d eat these and tell myself they were good. But after this experiment was over, guess how much barley I’ve eaten voluntarily?
Aside from starches, it wasn’t bad. I had to cook a lot and I ate a lot of salads and olive oil and nuts. My options were very limited at restaurants.
I noticed no obvious difference in sleep, energy levels, or mood, aside from the aforementioned starch-related emotional problems.
I measured my blood sugar first thing in the morning using a blood glucose monitor. I abhor the sight of blood, so I decided to sample it from the back of my upper arm. Fingers get more circulation, so blood from there is more “up to date”, but I don’t think it matters much if you’ve been fasting for a few hours.
Here are the results, along with a fit, and a 95% confidence interval:
Each of those dots represents at least one hole in my arm. The gray regions show the two two-week periods during which I was on the unprocessed food diet.
I measured my systolic and diastolic blood pressure twice each day, once right after waking up, and once right before going to bed.
Oddly, it looks like my systolic—but not diastolic—pressure was slightly higher in the evening.
I also measured my pulse twice a day.
(Cardio.) Apparently it’s common to have a higher pulse at night.
Finally, I also measured my weight twice a day. To preserve a small measure of dignity, I guess I’ll show this as a difference from my long-term baseline.
Here’s how I score that:
Outcome | Effect |
---|---|
Blood sugar | Nothing |
Systolic blood pressure | Nothing? |
Diastolic blood pressure | Nothing? |
Pulse | Nothing |
Weight | Maybe ⅔ of a kg? |
Urf.
Blood sugar. Why was there no change in blood sugar? Perhaps this shouldn’t be surprising. Hall et al.’s experiment also found little difference in blood glucose between the groups eating unprocessed and ultra-processed food. Later, when talking about glucose tolerance they speculate:
Another possible explanation is that exercise can prevent changes in insulin sensitivity and glucose tolerance during overfeeding (Walhin et al., 2013). Our subjects performed daily cycle ergometry exercise in three 20-min bouts […] It is intriguing to speculate that perhaps even this modest dose of exercise prevented any differences in glucose tolerance or insulin sensitivity between the ultra-processed and unprocessed diets.
I also exercise on most days. On the other hand, Barnard et al. (2006) had a group of people with diabetes follow a low-fat vegan (and thus “unprocessed”?) diet and did see large reductions in blood glucose (-49 mg/dl). But they only give data after 22 weeks, and my baseline levels are already lower than the mean of that group even after the diet.
Blood pressure. Why was there no change in blood pressure? I’m not sure. In the DASH trial, subjects with high blood pressure who ate a diet rich in fruits and vegetables saw large decreases in blood pressure, almost all within two weeks. One possibility is that my baseline blood pressure isn’t that high. Another is that in this same trial, they got much bigger reductions by limiting fat, which I did not do.
Another possibility is that unprocessed food just doesn’t have much impact on blood pressure. The above study from Barnard et al. only saw small decreases in blood pressure (3-5 mm Hg), even after 22 weeks.
Pulse. As far as I know, there’s zero reason to think that unprocessed food would change your pulse. I only included it because my blood pressure monitor did it automatically.
Weight. Why did I seem to lose weight in the second diet period, but not the first? Well, I may have done something stupid. A few weeks before this experiment, I started taking a small dose of creatine each day, which is well-known to cause an increase in water weight. I assumed that my creatine levels had plateaued before this experiment started, but after reading about creatine pharmacokinetics I’m not so sure.
I suspect that during the first diet period, I was losing dry body mass, but my creatine levels were still increasing and so that decrease in mass was masked by a similar increase in water weight. By the second diet period, my creatine levels had finally stabilized, so the decrease in dry body mass was finally visible. Or perhaps water weight has nothing to do with it and for some reason I simply didn’t have an energy deficit during the first period.
This experiment gives good evidence that switching from my already-fairly-healthy diet to an extremely non-fun “unprocessed” diet doesn’t have immediate miraculous benefits. If there is any effect on blood sugar, blood pressure, or pulse, they’re probably modest and long-term. This experiment gives decent evidence that the unprocessed diet causes weight loss. But I hated it, so if I wanted to lose weight, I’d do something else. This experiment provides very strong evidence that I like bread.
2025-07-08 08:00:00
Goats, like most hoofed mammals, have horizontal pupils.
[…]
When a goat’s head tilts up (to look around) and down (to munch on grass), an amazing thing happens. The eyeballs actually rotate clockwise or counterclockwise within the eye socket. This keeps the pupils oriented to the horizontal.
[…]
To test out this theory, I took photos of Lucky the goat’s head in two different positions, down and up.
(2) Novel color via stimulation of individual photoreceptors at population scale (h/t Benny)
The cones in our eyes all have overlapping spectra. So even if you look at just a single frequency of light, more than one type of cone will be stimulated.
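To see why, here’s a toy illustration. The peak wavelengths are roughly right, but the Gaussian shapes and widths are invented; real cone sensitivity curves are messier:

```python
import numpy as np

# Toy cone sensitivities as Gaussians. No single wavelength excites M alone.
peaks = {"S": 445, "M": 535, "L": 565}  # approximate peak wavelengths, nm

def response(cone: str, wl: float) -> float:
    return np.exp(-((wl - peaks[cone]) / 40) ** 2)

for wl in (500, 535, 565):
    print(wl, {c: round(response(c, wl), 2) for c in peaks})
# Even at M's peak (535 nm), L still responds at ~0.57 -- hence the lasers.
```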
So, obviously, what we need to do is identify individual cone cell types on people’s retinas and then selectively stimulate them with lasers so that people can experience never-before-seen colors.
Attempting to activate M cones exclusively is shown to elicit a color beyond the natural human gamut, formally measured with color matching by human subjects. They describe the color as blue-green of unprecedented saturation.
When I was a kid and I was bored in class, I would sometimes close my eyes and try to think of a “new color”. I never succeeded, and in retrospect I think I have aphantasia.
But does this experiment suggest it is actually possible to imagine new colors? I’m fascinated that our brains have the ability to interpret these non-ecological signals, and applaud all such explorations of qualia space.
(3) Simplifying Melanopsin Metrology (h/t Chris & Alex)
When reading about blue-blocking glasses, I failed to discover that the effects of light on melatonin don’t seem to be mediated by cones or rods at all. Instead, around 1% of retinal ganglion cells are melanopsin-containing, intrinsically photosensitive cells.
These seem to specifically exist for the purpose of regulating melatonin and circadian rhythms. They have their own spectral sensitivity:
If you believe that sleep is mediated entirely by these cells, then you’d probably want to block all wavelengths below ~550 nm. That would leave you with basically only orange and red light.
However, Chris convinced me that if you want natural melatonin at night, the smart thing is to rely primarily on dim lighting, and only secondarily on blocking blue light. Standard “warm” 2700 K bulbs only reduce blue light to around ⅓ as much as cooler “daylight” bulbs. But your eyes can easily adapt to <10% as many lux. If you combine those, blue light is down by ~97%.
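Spelling that arithmetic out (both factors are rough):

$$\frac{1}{3} \times 0.10 \approx 0.033,$$

i.e., about 97% less blue light than bright cool-white lighting.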
The brain doesn’t seem to use these cells for pattern vision at all. Although…
In work by Zaidi, Lockley and co-authors using a rodless, coneless human, it was found that a very intense 481 nm stimulus led to some conscious light perception, meaning that some rudimentary vision was realized.
Airplanes have to guess how much food to bring. So either they waste energy moving around extra food that no one eats, or some people go hungry. So why don’t we have people bid on food, so nothing goes to waste?
I expect passengers would absolutely hate it.
(5) The Good Sides Of Nepotism
Speaking of things people hate, this post gives a theory for why you might rationally prefer to apply nepotism when hiring someone: your shared social connections raise the cost of failure for the person you hire. I suspect we instinctively apply this kind of game theory without even realizing we’re doing so.
This seems increasingly important, what with all the AI-generated job applications now attacking AI-automated human resources departments.
My question is: If this theory is correct, can we create other social structures to provide the same benefit in other ways, thereby reducing the returns on nepotism?
Say I want you to hire me, but you’re worried I suck. In principle, I could take $50,000, put it in escrow, and tell you, “If you hire me, and I actually suck (as judged by an arbiter) then you can burn the $50,000.”
Sounds horrible, right? But that’s approximately what’s happening if you know I have social connections and/or reputation that will be damaged if I screw up.
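Here’s a toy screening model of why the bond works, with every number invented for illustration. The point is that the bond mostly does its job through who is willing to post it:

```python
# Toy screening model; all numbers are invented.
SALARY, BOND, OUTSIDE_OPTION = 100_000, 50_000, 60_000
FIRED_PAY = 20_000  # what I earn before I'm found out and fired

def expected_payoff(p_suck: float) -> float:
    """My expected payoff from taking the job after posting the bond."""
    return p_suck * (FIRED_PAY - BOND) + (1 - p_suck) * SALARY

# A confident candidate takes the deal; one who privately expects to fail
# prefers the outside option. Willingness to post the bond is the signal.
for p in (0.05, 0.8):
    print(p, expected_payoff(p) > OUTSIDE_OPTION)
```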
We’ve spent decades in the dark ages of the internet, where you could only link to entire webpages or (maybe) particular hidden beacon codes.
But we are now in a new age. You can link to any text on any page. Like this:
https://dynomight.net/grug#:~:text=phenylalanine
This is not a special feature of dynomight.net. It’s done by your browser.
I love this, but I can never remember how to type `#:~:text=`. Well, finally, almost all browsers now also support generating these links. You just highlight some text, right-click, and “Copy Link to Highlight”.
If you go to this page and highlight and right-click on this text:
Then you get this link.
(In Firefox, you may need to type about:config into the address bar and enable dom.text_fragments.create_text_fragment.enabled.)
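If you want to generate these links programmatically, here’s a minimal sketch. The helper name is mine; the `#:~:text=` syntax comes from the WICG scroll-to-text-fragment spec:

```python
from urllib.parse import quote

def text_fragment_link(url: str, text: str) -> str:
    """Build a link that highlights `text` on the target page."""
    return f"{url}#:~:text={quote(text)}"

print(text_fragment_link("https://dynomight.net/grug", "phenylalanine"))
# https://dynomight.net/grug#:~:text=phenylalanine
```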
(7) (Not technically a link)
Also, did you know you can link to specific pages of pdf files? For example:
https://gwern.net/doc/longevity/glp/semaglutide/2023-aroda.pdf#page=8
I just add `#page=` manually. Chrome-esque browsers, oddly, will do this automatically if you right-click and go to “Create QR Code for this Page”.
(8) Response to Dynomight on Scribble-based Forecasting
Thoughtful counter to some of my math skepticism. I particularly endorse the point in the final paragraph.
(9) Decision Conditional Prices Reflect Causal Chances
Robin Hanson counters my post on Futarchy’s fundamental flaw. My candid opinion is that this is a paradigmatic example of a “heat mirage”, in that he doesn’t engage with any of the details of my argument, doesn’t specify what errors I supposedly made, and doesn’t seem to commit to any specific assumptions that he’s willing to argue are plausible and would guarantee prices that reflect causal effects. So I don’t really see any way to continue the conversation. But judge for yourself!
(10) Futarchy’s fundamental flaw - the market
Speaking of which, Bolton Bailey set up a conditional prediction market to experimentally test one of the examples I gave where I claimed prediction markets would not reflect causal probabilities.
If you think betting on causal effects is always the right strategy in conditional prediction markets, here’s your chance to make some fake internet currency. The market closes on July 26, 2025. No matter how much you love me, please trade according to your self-interest.
(11) War and Peace
I’m reading War and Peace. You probably haven’t heard, but it’s really good.
Except the names. Good god, the names. There are a lot of characters, and all the major ones have many names:
Those are all the same person. Try keeping track of all those variants for 100 different characters in a narrative with many threads spanning time and space. Sometimes, the same name refers to different people. And Tolstoy loves to just write “The Princess” when there are three different princesses in the room.
So I thought, why not use color? Whenever a new character appears, assign them a color, and use it for all name variants for the rest of the text. Even better would be to use color patterns like Bolkónski / Prince Andréy Nikoláevich.
This should be easy for AI, right?
I can think of ways to do this, but they would all be painful, due to War and Peace’s length: They involve splitting the text into chunks, having the AI iterate over them while updating some name/color mapping, and then merging everything at the end.
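To make the shape of that pipeline concrete, here’s a minimal sketch of the per-chunk coloring step. The alias map is hand-written here; in the imagined version, an AI would grow it as it reads. Names and colors are illustrative:

```python
import re
from itertools import cycle

# Map every name variant to one canonical character.
ALIASES = {
    "Prince Andréy Nikoláevich": "Bolkónski",
    "Bolkónski": "Bolkónski",
    "Natásha": "Rostóva",
}
PALETTE = cycle(["#1f77b4", "#d62728", "#2ca02c", "#9467bd"])
colors = {}  # canonical name -> color, assigned at first appearance

# One combined pattern, longest alias first, so longer name variants
# win and each occurrence is wrapped exactly once.
pattern = re.compile("|".join(
    re.escape(a) for a in sorted(ALIASES, key=len, reverse=True)))

def colorize(chunk: str) -> str:
    def wrap(match: re.Match) -> str:
        canonical = ALIASES[match.group(0)]
        if canonical not in colors:
            colors[canonical] = next(PALETTE)
        return f'<span style="color:{colors[canonical]}">{match.group(0)}</span>'
    return pattern.sub(wrap, chunk)

chunks = ["Bolkónski bowed.", "Prince Andréy Nikoláevich entered."]
print("".join(colorize(c) for c in chunks))
```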
So here’s a challenge: Do you know an easy way to do this? Is there any existing tool that you can give a short description of my goals, and get a full name-colored pdf / html / epub file? (“If your agent cannot do this, then of what use is the agent?”)
Note: It’s critical to give all characters a color. Otherwise, seeing a name without color would be a huge spoiler that they aren’t going to survive very long. It’s OK if some colors are similar.
There’s also the issue of all the intermingled French. But I find that hard not to admire—Tolstoy was not falling for audience capture.
(And yes, War and Peace, Simplified Names Edition apparently exists. But I’m in too deep to switch now.)
(12) Twins
The human twin birth rate in the United States rose 76% from 1980 through 2009, from 9.4 to 16.7 twin sets (18.8 to 33.3 twins) per 1,000 births. The Yoruba people have the highest rate of twinning in the world, at 45–50 twin sets (90–100 twins) per 1,000 live births possibly because of high consumption of a specific type of yam containing a natural phytoestrogen which may stimulate the ovaries to release an egg from each side.
I love this because, like:
(That actually happened. Yams had that conversation and then started making phytoestrogens.)
Apparently, some yams naturally contain the plant hormone diosgenin, which can be chemically converted into various human hormones. And that’s actually how we used to make estrogen, testosterone, etc.
And if you like that, did you know that estrogen medications were historically made from the urine of pregnant mares? I thought this was awesome, but after reading a bit about how this worked, I doubt the horses would agree. Even earlier, animal ovaries and testes were used. These days, hormones tend to be synthesized without any animal or plant precursor.
If you’re skeptical that more twins would mean higher reproductive fitness, note that yams don’t believe in Algernon Arguments.
2025-07-03 08:00:00
Back in 2017, everyone went crazy about these things:
The theory was that perhaps the pineal gland isn’t the principal seat of the soul after all. Maybe what it does is spit out melatonin to make you sleepy. But it only does that when it’s dark, and you spend your nights in artificial lighting and/or staring at your favorite glowing rectangles.
You could sit in darkness for three hours before bed, but that would be boring. But—supposedly—the pineal gland is only shut down by blue light. So if you selectively block the blue light, maybe you can sleep well and also participate in modernity.
Then, by around 2019, blue-blocking glasses seemed to disappear. And during that brief moment in the sun, I never got a clear picture of whether they actually work.
So, do they? To find out, I read all the papers.
Before getting to the papers, please humor me while I give three excessively-detailed reminders about how light works. First, it comes in different wavelengths.
Color | Wavelength (nm) |
---|---|
violet | 380–450 |
blue | 450–485 |
cyan | 485–500 |
green | 500–565 |
yellow | 565–590 |
orange | 590–625 |
red | 625–750 |
Outside the visible spectrum, infrared light and microwaves and radio waves have even longer wavelengths, while ultraviolet light and x-rays and gamma rays have even shorter wavelengths. Shorter wavelengths have more energy. Do not play around with gamma rays.
Colors other than those spectral ones are hallucinations made up by your brain. When you get a mixture of all wavelengths, you see “white”. When you get a lot of yellow-red wavelengths, some green, and a little violet-blue, you see “brown”. Similar things are true for pink/purple/beige/olive/etc. (Technically, the original spectral colors and everything else you experience are also hallucinations made up by your brain, but never mind.)
Second, the ruleset of our universe says that all matter gives off light, with a mixture of wavelengths that depends on the temperature. Hotter stuff has atoms that are jostling around faster, so it gives off more total light, and shifts towards shorter (higher-energy) wavelengths. Colder stuff gives off less total light and shifts towards longer wavelengths. The “color temperature” of a lightbulb is the temperature some chunk of rock would have to be to produce the same visible spectrum. Here’s a figure, with the x-axis in kelvins.
The sun is around 5800 K. That’s both the physical temperature on the surface and the color temperature of its light. Annoyingly, the orange light that comes from cooler matter is often called “warm”, while the blueish light that comes from hotter matter is called “cool”. Don’t blame me.
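As a sanity check on that 5800 K figure, Wien’s displacement law gives the wavelength where a blackbody’s output peaks:

$$\lambda_{\max} = \frac{b}{T} \approx \frac{2.898 \times 10^{6}\ \text{nm·K}}{5800\ \text{K}} \approx 500\ \text{nm},$$

which lands right in the middle of the visible range.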
Anyway, different light sources produce widely different spectra.
You can’t sense most of those differences because you only have three types of cone cells. Rated color temperatures just reflect how much those cells are stimulated.
Your eyes probably see the frequencies they do because that’s where the sun’s spectrum is concentrated. In dim light, cones are inactive, so you rely on rod cells instead. You’ve only got one kind of rod, which is why you can’t see color in dim light. (Though you might not have noticed.)
Finally, amounts of light are typically measured in lux. Your eyes are amazing and can deal with upwards of 10 orders of magnitude.
Situation | lux |
---|---|
Moonless overcast night | 0.0001 |
Full moon | 0.2 |
Very dark overcast day | 100 |
Sunrise or sunset | 400 |
Overcast day | 1,000 |
Full daylight | 20,000 |
Direct sunlight | 50,000 |
In summary, you get widely varying amounts of different wavelengths of light in different situations, and the sun is very powerful. It’s reasonable to imagine your body might regulate its sleep schedule based on that input.
OK, but do blue-blocking glasses actually work? Let’s read some papers.
Kayumov et al. (2005) had 19 young healthy adults stay awake overnight for three nights: one with dim light (<5 lux), one with bright light (800 lux), and one with bright light plus blue-blocking goggles. They measured melatonin in saliva each hour.
The goggles seemed to help a lot. With bright light, subjects only had around 25% as much melatonin as with dim light. Blue-blocking goggles restored that to around 85%.
I rate this as good evidence for a strong increase in melatonin. Sometimes good science is pretty simple.
Burkhart and Phelps (2009) first had 20 adults rate their sleep quality at home for a week as a baseline. Then, they were randomly given either blue-blocking glasses or yellow-tinted “placebo” glasses and told to wear them for 3 hours before sleep for two weeks.
Oddly, the group with blue-blocking glasses had much lower sleep quality during the baseline week, but this improved a lot over time.
I rate this as decent evidence for a strong improvement in sleep quality. I’d also like to thank the authors for writing this paper in something resembling normal human English.
Van der Lely et al. (2014) had 13 teenage boys wear either blue-blocking glasses or clear glasses from 6pm to bedtime for one week, followed by the other glasses for a second week. Then they went to a lab, spent 2 hours in dim light, 30 minutes in darkness, and then 3 hours in front of an LED computer, all while wearing the glasses from the second week. Then they were asked to sleep, and their sleep quality was measured in various ways.
The boys had more melatonin and reported feeling sleepier with the blue-blocking glasses.
I rate this as decent evidence for a moderate increase in melatonin, and weak evidence for near-zero effect on sleep quality.
Gabel et al. (2017) took 38 adults and first put them through 40 hours of sleep deprivation under white light, then allowed them to sleep for 8 hours. Then they were subjected to 40 more hours of sleep deprivation under either white light (250 lux at 2800 K), blue light (250 lux at 9000 K), or very dim light (8 lux, color temperature unknown).
Their results are weird. In younger people, dim light led to more melatonin than white light, which led to more melatonin than blue light. That carried over to a tiny difference in sleepiness. But in older people, both those effects disappeared, and blue light even seemed to cause more sleepiness than white light. The cortisol and wrist activity measurements basically make no sense at all.
I rate this as decent evidence for a moderate effect on melatonin, and very weak evidence for a near-zero effect on sleep quality. (I think it’s decent evidence for a near-zero effect on sleepiness, but they didn’t actually measure sleep quality.)
Esaki et al. (2017) gathered 20 depressed patients with insomnia. They first recorded their sleep quality for a week as a baseline, then were given either blue-blocking glasses or placebo glasses and told to wear them for another week starting at 8pm.
The changes in the blue-blocking group were a bit better for some measures, but a bit worse for others. Nothing was close to significant. Apparently 40% of patients complained that the glasses were painful, so I wonder if they all wore them as instructed.
I rate this as weak evidence for near-zero effect on sleep quality.
Shechter et al. (2018) gave 14 adults with insomnia either blue-blocking or clear glasses and had them wear them for 2 hours before bedtime for one week. Then they waited four weeks and had them wear the other glasses for a second week. They measured sleep quality through diaries and wrist monitors.
The blue-blocking glasses seemed to help with everything. People fell asleep 5 to 12 minutes faster, and slept 30 to 50 minutes longer, depending on how you measure. (SOL is sleep onset latency, TST is total sleep time).
I rate this as good evidence for a strong improvement in sleep quality.
Knufinke et al. (2019) had 15 young adult athletes either wear blue-blocking glasses or transparent glasses for four nights.
The blue-blocking group did a little better on most measures (longer sleep time, higher sleep quality) but nothing was statistically significant.
I rate this as weak evidence for a small improvement in sleep quality.
Janků et al. (2019) took 30 patients with insomnia and had them all go to therapy. They randomly gave them either blue-blocking glasses or placebo glasses and asked the patients to wear them for 90 minutes before bed.
The results are pretty tangled. According to sleep diaries, total sleep time went up by 37 minutes in the blue-blocking group, but slightly decreased in the placebo group. The wrist monitors show total sleep time decreasing in both groups, but it did decrease less with the blue-blocking glasses. There’s no obvious improvement in sleep onset latency or the various questionnaires they used to measure insomnia.
I rate this as weak evidence for a moderate improvement in sleep quality.
Esaki et al. (2020) followed up on their 2017 experiment from above. This time, they gathered 43 depressed patients with insomnia. Again, they first recorded their sleep quality for a week as a baseline, then were given either blue-blocking glasses or placebo glasses and told to wear them for another week starting at 8pm.
The results were that subjective sleep quality seemed to improve more in the blue-blocking group. Total sleep time went down by 12.6 minutes in the placebo group, but increased by 1.1 minutes in the blue-blocking group. None of this was statistically significant, and all the other measurements are confusing. Here are the main results. I’ve added little arrows to show the “good” direction, if there is one.
These confidence intervals don’t make any sense to me. Are they blue-blocking minus placebo or the reverse? When the blue-blocking number is higher than placebo, sometimes the confidence interval is centered above zero (VAS), and sometimes it’s centered below zero (TST). What the hell?
Anyway, they also had a doctor estimate the clinical global impression for each patient, and this looked a bit better for the blue-blocking group. The doctor seemingly was blinded to the type of glasses the patients were wearing.
This is a tough one to rate. I guess I’ll call it weak evidence for a small improvement in sleep quality.
Guarana et al. (2020) sent either blue-blocking glasses or sham glasses to 240 people, and asked them to wear them for at least two hours before bed. They then had them fill out some surveys about how much and how well they slept.
Wearing the blue-blocking glasses was positively correlated with both sleep quality and quantity with a correlation coefficient of around 0.20.
This paper makes me nervous. They never show the raw data, there seem to be huge dropout rates, and lots of details are murky. I can’t tell if the correlations they talk about weight all people equally, all surveys equally, or something else. That would make a huge difference if people dropped out more when they weren’t seeing improvements.
I rate this as weak evidence for a moderate effect on sleep. There’s a large sample, but I discount the results because of the above issues and/or my general paranoid nature.
Domagalik et al. (2020) had 48 young people wear either blue-blocking contact lenses or regular contact lenses for 4 weeks. They found no effect on sleepiness.
I rate this as very weak evidence for near-zero effect on sleep. The experiment seems well-done, but it’s testing the effects of blocking blue light all the time, not just at night. Given the effects on attention and working memory, don’t do that.
Bigalke et al. (2021) had 20 healthy adults wear either blue-blocking glasses or clear glasses for a week from 6pm until bedtime, then switch to the other glasses for a second week. They measured sleep quality both through diaries (“Subjective”) and wrist monitors (“Objective”).
The differences were all small and basically don’t make any sense.
I rate this weak evidence for near-zero effect on sleep quality. Also, see how in the bottom pair of bar-charts, the y-axis on the left goes from 0 to 5, while on the right it goes from 30 to 50? Don’t do that, either.
I also found a couple papers that are related, but don’t directly test what we’re interested in:
Appleman et al. (2013) exposed people to different amounts of blue light at different times of day. Their results suggest that early-morning exposure to blue light might shift your circadian rhythm earlier.
Sasseville et al. (2015) had people stay awake from 11pm to 4am on two consecutive nights, while either wearing blue-blocking glasses or not. With the blue-blocking glasses, there was more overall light, to equalize the total incoming energy. I can’t access this paper, but apparently they found no difference.
For a synthesis, I scored each of the measured effects according to this rubric:
Rating | Meaning |
---|---|
↑↑↑ | large increase |
↑↑ | moderate increase |
↑ | small increase |
↔ | no effect |
↓ | small decrease |
↓↓ | moderate decrease |
↓↓↓ | large decrease |
And I scored the quality of evidence according to this one:
Rating | Meaning |
---|---|
★☆☆☆☆ | very weak evidence |
★★☆☆☆ | weak evidence |
★★★☆☆ | decent evidence |
★★★★☆ | good evidence |
★★★★★ | great evidence |
Here are the results for the three papers that measured melatonin:
Study | Effect on melatonin | Quality of evidence |
---|---|---|
Kayumov | ↑↑↑ | ★★★★☆ |
Van der Lely | ↑↑ | ★★★☆☆ |
Gabel | ↑ | ★★★☆☆ |
And here are the results for the papers that measured sleep quality:
Study | Effect on sleep | Quality of evidence |
---|---|---|
Burkhart | ↑↑↑ | ★★★☆☆ |
Van der Lely | ↔ | ★★☆☆☆ |
Gabel | ↔ | ★☆☆☆☆ |
Esaki | ↔ | ★★☆☆☆ |
Shechter | ↑↑↑ | ★★★☆☆ |
Knufinke | ↑ | ★★☆☆☆ |
Janků | ↑↑ | ★★☆☆☆ |
Esaki (again) | ↑ | ★★☆☆☆ |
Guarana | ↑↑ | ★★☆☆☆ |
Domagalik | ↔ | ★☆☆☆☆ |
Bigalke | ↔ | ★★☆☆☆ |
We should adjust all that a bit because of publication bias and so on. But still, here are my final conclusions after staring at those tables:
There is good evidence that blue-blocking glasses cause a moderate increase in melatonin. It could be large, or it could be small, but I’d say there’s an ~85% chance it’s not zero.
There is decent evidence that blue-blocking glasses cause a small improvement in sleep quality. This could be moderate (or even large) or it could be zero. It might be inconsistent and hard to measure. But I’d say there’s an ~75% chance there is some positive effect.
I’ll be honest—I’m surprised.
If those effects are real, do they warrant wearing stupid-looking glasses at night for the rest of your life? I guess that’s personal.
But surely the sane thing is not to block blue light with headgear, but to not create blue light in the first place. You can tell your glowing rectangles to block blue light at night, but lights are harder. Modern LED lightbulbs typically range in color temperature from 2700 K for “warm” lighting to 5000 K for “daylight” bulbs. Judging from this animation, that should reduce blue frequencies to around ⅓ as much.
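Here’s a rough sanity check on that ⅓ figure, under the crude assumption that bulbs radiate like blackbodies at their rated color temperature. Real LEDs have spiky phosphor spectra, so treat this as a sketch, not a measurement:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI units

def planck(wl, T):
    """Blackbody spectral radiance at wavelength wl (meters), temperature T (kelvins)."""
    return (2 * h * c**2 / wl**5) / (np.exp(h * c / (wl * k * T)) - 1)

def band_power(T, lo, hi, n=2000):
    wl = np.linspace(lo, hi, n)
    return planck(wl, T).sum() * (wl[1] - wl[0])

def blue_fraction(T):
    # "Blue" band per the wavelength table above; visible band 380-750 nm.
    return band_power(T, 450e-9, 485e-9) / band_power(T, 380e-9, 750e-9)

print(blue_fraction(2700) / blue_fraction(5000))  # ~0.33 on this model
```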
Old-school incandescent bulbs are 2400 K. But to really kill blue, you probably want 2000 K or even less. There are obscure LED bulbs out there as low as 1800 K. They look extremely orange, but candles are apparently 1850 K, so probably you’d get used to it?
So what do we do then? Get two sets of lamps with different bulbs? Get fancy bulbs that change color temperature automatically? Whatever it is, I don’t feel very optimistic that we’re going to see a lot of RCTs where researchers have subjects install an entire new lighting setup in their homes.