2025-11-27 21:14:40
The previous post described a metric for the Poincaré upper half plane. The development is geometrical rather than analytical. There are also analytical formulas for the metric, at least four that I’ve seen.
It’s not at all obvious that the four equations are equivalent, or that any of them matches the expression in the previous post.
There are equations for expressing arcsinh, arccosh, and arctanh in terms of logarithms and square roots. See the bottom of this post. You could use these identities to show that the metric expressions are equal, but I don’t know of a cleaner way to do this than lots of tedious algebra.
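The identities in question are the standard logarithmic forms of the inverse hyperbolic functions. Here’s a quick numpy sanity check of them; the sample points are my own, chosen to lie in each function’s domain.

import numpy as np

x = 1.7   # arccosh requires x >= 1
t = 0.7   # arctanh requires |t| < 1

assert np.isclose(np.arcsinh(x), np.log(x + np.sqrt(x**2 + 1)))
assert np.isclose(np.arccosh(x), np.log(x + np.sqrt(x**2 - 1)))
assert np.isclose(np.arctanh(t), 0.5*np.log((1 + t)/(1 - t)))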
Before diving into the calculations, you might want some assurance that you’re trying to prove the right thing. Here’s some Python code that generates random pairs of points in the upper half plane and shows that the four expressions give the same distance.
import numpy as np

def d1(z1, z2):
    return 2*np.arcsinh( abs(z1 - z2) / (2*(z1.imag * z2.imag)**0.5) )

def d2(z1, z2):
    return np.arccosh(1 + abs(z1 - z2)**2 / (2*z1.imag * z2.imag) )

def d3(z1, z2):
    return 2*np.arctanh( abs( (z1 - z2)/(z1 - np.conjugate(z2)) ) )

def d4(z1, z2):
    return 2*np.log( (abs(z2 - z1) + abs(z2 - np.conjugate(z1)))/(2*np.sqrt(z1.imag * z2.imag)) )

np.random.seed(20251127)

for n in range(100):
    z1 = np.random.random() + 1j*np.random.random()
    z2 = np.random.random() + 1j*np.random.random()
    assert( abs(d1(z1, z2) - d2(z1, z2)) < 1e-13 )
    assert( abs(d2(z1, z2) - d3(z1, z2)) < 1e-13 )
    assert( abs(d3(z1, z2) - d4(z1, z2)) < 1e-13 )
Perhaps you’re convinced that the four expressions are equal, but why should any of them be equivalent to the definition in the previous post?
The previous post pointed out that the metric is invariant under Möbius transformations. We can apply such a transformation to move any pair of complex numbers to the imaginary axis. There you can see that the cross ratio reduces to the ratio of the two numbers.
More generally, if two complex numbers have the same real part, the distance between them is the log of the ratio of their imaginary parts. That is, if
z1 = x + i y1 and z2 = x + i y2
then
d(z1, z2) = log(y2 / y1)
if x, y1, and y2 are real and y2 > y1 > 0.
Here’s a little Python code that empirically shows that this gives the same distance as one of the expressions above.
def d5(z1, z2):
    assert(z1.real == z2.real)
    return abs( np.log( z1.imag / z2.imag ) )

for n in range(100):
    x = np.random.random()
    z1 = x + 1j*np.random.random()
    z2 = x + 1j*np.random.random()
    assert( abs(d1(z1, z2) - d5(z1, z2)) < 1e-13 )
So now we have five expressions for the metric, all of which look different. You could slog through a proof that they're equivalent, or get a CAS like Mathematica to verify that they're equivalent, but it would be more interesting to find an elegant equivalence proof.
The post Equal things that don’t look equal first appeared on John D. Cook.
2025-11-27 02:28:14
One common model of the hyperbolic plane is the Poincaré upper half plane ℍ. This is the set of points in the complex plane with positive imaginary part. Straight lines in ℍ are either vertical lines, i.e. sets of points with constant real part, or arcs of circles centered on the real axis. The real axis is not part of ℍ. From the perspective of hyperbolic geometry its points are ideal points, infinitely far away, and not part of the plane itself.

We can define a metric on ℍ as follows. To find the distance between two points u and v, draw a line between them and let a and b be the ideal points at the ends of that line. By a line we mean a line in the geometry of ℍ, which from our Euclidean perspective looks like a half circle centered on the real axis or a vertical line. Then the distance between u and v is defined as the absolute value of the log of the cross ratio (u, v; a, b).
Cross ratios are unchanged by Möbius transformations, and so Möbius transformations are isometries.
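To make the definition concrete, here is a short Python sketch, not from the post, that computes the distance directly from the cross ratio. The helper names are mine, and I use the convention (u, v; a, b) = (u − a)(v − b) / ((u − b)(v − a)); since the definition takes an absolute value of the log, swapping a and b makes no difference.

import numpy as np

def ideal_points(u, v):
    # Ideal endpoints of the hyperbolic line through u and v.
    if np.isclose(u.real, v.real):
        # Vertical line: one endpoint on the real axis, the other at infinity.
        return u.real, None
    # Otherwise the line is a half circle centered on the real axis.
    c = (abs(u)**2 - abs(v)**2) / (2*(u.real - v.real))  # center of the circle
    r = abs(u - c)                                        # radius of the circle
    return c - r, c + r

def dist(u, v):
    a, b = ideal_points(u, v)
    if b is None:
        cr = (u - a)/(v - a)   # cross ratio with one ideal point at infinity
    else:
        cr = ((u - a)*(v - b)) / ((u - b)*(v - a))
    return abs(np.log(abs(cr)))

# The two printed numbers should agree; the second is the arcsinh
# expression for the same distance given in the next post.
u, v = 0.3 + 0.8j, 1.1 + 0.4j
print(dist(u, v))
print(2*np.arcsinh(abs(u - v)/(2*np.sqrt(u.imag*v.imag))))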
Another common model of hyperbolic geometry is the Poincaré disk. We can use the same metric on the Poincaré disk because the Möbius transformation
w = (z − i)/(z + i)
maps the upper half plane to the unit disk. This is very similar to how the Smith chart is created by mapping a grid in the right half plane to the unit disk.
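Here’s a quick numerical check, with a helper name and sample points of my own choosing, that this transformation sends points of the upper half plane into the unit disk and real numbers to the unit circle.

import numpy as np

def cayley(z):
    # Möbius transformation taking the upper half plane to the unit disk
    return (z - 1j)/(z + 1j)

rng = np.random.default_rng(20251127)
for _ in range(100):
    z = rng.random() + 1j*rng.random()               # a point in the upper half plane
    assert abs(cayley(z)) < 1                        # lands inside the unit disk
    assert np.isclose(abs(cayley(rng.random())), 1)  # real axis maps to the unit circle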
Update: See the next post for four analytic expressions for the metric, direct formulas involving u and v but not the ideal points a and b.
The post Hyperbolic metric first appeared on John D. Cook.
2025-11-25 09:25:29
The opening line of William Gibson’s novel Neuromancer is famous:
The sky above the port was the color of a television, tuned to a dead channel.
When I read this line, I knew immediately what he meant, and thought it was a brilliant line. Later I learned that younger readers didn’t know what he was saying.

My mind went to an old black-and-white television, one that received analog broadcasts, and that displayed “snow” when tuned to a channel that had no broadcast signal. Someone whose earliest memories of television are based on digital color broadcast might imagine the sky above the port was solid blue rather than crackly gray.
Gibson discusses how his book has aged in a preface to a recent edition. He says that science fiction that is too prescient would be received poorly.
Imagine a novel from the sixties whose author had somehow fully envisioned cellular telephony circa 2004, and had worked it, exactly as we know it today, into the fabric of her imaginary future. Such a book would have seemed highly peculiar in the sixties … in ways that would quickly overwhelm the narrative.
He then goes on to say
I suspect that Neuromancer owes much of its shelf life to my almost perfect ignorance of the technology I was extrapolating from. … Where I made things up from whole cloth, the colors remain bright.
I find it odd that many judge a work of science fiction by what it “got right.” I don’t read science fiction as a forecast; I read it to enjoy a story. I don’t need a book to be prescient, but until reading Gibson’s remarks it hadn’t occurred to me that fiction that is too prescient might not be enjoyable fiction, at least for its first readers.
The post TV tuned to a dead channel first appeared on John D. Cook.
2025-11-25 01:06:51

Suppose Alice runs a confidential restaurant. Alice doesn’t want there to be any record of who visited her restaurant but does want to get paid for her food. She accepts Monero, and instead of a cash register there are two QR codes on display, one corresponding to her public view key A and the other corresponding to her public spend key S.
A customer Bob walks into the restaurant and orders a burger and fries. When Bob pays Alice, here’s what’s going on under the hood.
Bob is using software that generates a random integer r and multiplies it by a point G on an elliptic curve, specifically ed25519, obtaining the point
R = rG
on the curve. The software also multiplies Alice’s view key A, a point on the same elliptic curve, by r, then runs a hash function H on the product rA to obtain an integer k.
k = H(rA).
Finally, Bob’s software computes the point
P = kG + S
and sends Alice’s cash register, i.e. her crypto wallet, the pair of points (P, R). The point P is a stealth address, an address that will only be used this one time and cannot be linked to Alice or Bob [1]. The point R is additional information that helps Alice receive her money.
Alice and Bob share a secret: both know k. How’s that?
Alice’s public view key A is the product of her private view key a and the group generator G [2]. So when Bob computes rA, he’s computing r(aG). Alice’s software can multiply the point R by a to obtain a(rG).
rA = r(aG) = a(rG) = aR.
Both Alice and Bob can hash this point—which Alice thinks of as aR and Bob thinks of as rA—to obtain k. This is ECDH: elliptic curve Diffie-Hellman key exchange.
Next, Alice’s software scans the blockchain for payments to
P = kG + S.
Note that P is on the blockchain, but only Alice and Bob know how to factor P into kG + S because only Alice and Bob know k. And only Alice can spend the money because only she knows the private key s corresponding to the public spend key S where
S = sG.
She knows
P = kG + sG = (k + s)G
and so she has the private key (k + s) corresponding to P.
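Here is a toy Python sketch of the bookkeeping above. It is only meant to make the algebra concrete: instead of ed25519 it uses an insecure stand-in group (integers mod a prime under addition, where “scalar multiplication” is ordinary modular multiplication) and SHA-256 as the hash, so none of the names or parameters come from Monero itself.

import hashlib
import secrets

# Toy stand-in for ed25519: the additive group of integers mod a prime q,
# where "scalar multiplication" nG is just n*G mod q. Discrete logs are
# trivial here, so this is NOT secure; it only mirrors the bookkeeping.
q = 2**61 - 1      # a prime, standing in for the group order
G = 2              # a fixed "base point"

def mult(n, P):
    return (n * P) % q

def add(P, Q):
    return (P + Q) % q

def H(P):
    # hash a "point" down to a scalar
    return int.from_bytes(hashlib.sha256(str(P).encode()).digest(), "big") % q

# Alice's key pairs: private view key a, private spend key s
a = secrets.randbelow(q); A = mult(a, G)   # public view key  A = aG
s = secrets.randbelow(q); S = mult(s, G)   # public spend key S = sG

# Bob's side: pick r, publish R = rG, derive the stealth address P = kG + S
r = secrets.randbelow(q)
R = mult(r, G)
k_bob = H(mult(r, A))          # k = H(rA)
P = add(mult(k_bob, G), S)     # stealth address

# Alice's side: recover k from R and recognize the payment to P
k_alice = H(mult(a, R))        # H(aR) = H(rA) = k
assert k_alice == k_bob
assert P == add(mult(k_alice, G), S)

# Only Alice can spend: she knows the private key k + s for P = (k + s)G
assert P == mult(k_alice + s, G)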
[1] Bob sends money to the address P, so there could be some connection between Bob and P on the Monero blockchain. However, due to another feature of Monero, namely ring signatures, someone analyzing the blockchain could only determine that Bob is one of 16 people who may have sent money to the address P, and there’s no way to know who received the money. That is, there is no way, using only information on the blockchain, to determine who received the money. A private investigator who saw Bob walk into Alice’s restaurant would have additional information outside the blockchain.
[2] The key assumption of elliptic curve cryptography is that it’s computationally infeasible to “divide” on an elliptic curve, i.e. to recover a from knowledge of G and aG. You could recover a by brute force if the group were small, but the elliptic curve ed25519 has on the order of 2^255 points, and a is some integer chosen randomly between 1 and the size of the curve.
The post How stealth addresses work in Monero first appeared on John D. Cook.
2025-11-21 03:42:10
I was reading about Shackleton’s incredible expedition to Antarctica, and the Weddell Sea features prominently. That name sounded familiar, and I was trying to remember where I’d heard of Weddell in math. I figured out that it wasn’t Weddell exactly but Weddle I was thinking of.
The Weddell Sea is named after James Weddell (1787–1834). Weddle’s integration rule is named after Thomas Weddle (1817–1853).
I wrote about Weddle’s integration rule a couple years ago. Weddle’s rule is as follows:
∫ f(x) dx over [x0, x6] ≈ (h/140)(41 f(x0) + 216 f(x1) + 27 f(x2) + 272 f(x3) + 27 f(x4) + 216 f(x5) + 41 f(x6))
where the seven points x0, …, x6 are evenly spaced with spacing h. The magnitude of the error is bounded by (9/1400) h^9 times a bound on |f^(8)| over the interval of integration.
Let’s try this on integrating sin(x) from 1 to 2.
If we divide the interval [1, 2] into 6 subintervals, h = 1/6. The 8th derivative of sin(x) is also sin(x), so it is bounded by 1. So we would expect the absolute value of the error to be bounded by
9 / (6^9 × 1400).
Let’s see what happens in practice.
import numpy as np
x = np.linspace(1, 2, 7)
h = (2 - 1)/6
weights = (h/140)*np.array([41, 216, 27, 272, 27, 216, 41])
approx = np.dot(weights, np.sin(x))
exact = np.cos(1) - np.cos(2)
print("Error: ", abs(approx - exact) )
print("Expected error: ", 9/(1400*6**9))
Here’s the output:
Error: 6.321198009473505e-10
Expected error: 6.379009079626363e-10
2025-11-21 03:10:10
The previous post includes code for solving the equation
H_n = m
i.e. finding the value of n for which the nth harmonic number H_n is closest to m. It works well for small values of m. It works for large m in the sense that H_n is very close to m for the n it returns, but that n is not necessarily the best solution.
For example, set m = 100. The code returns
n = 15092688622113830917200248731913020965388288
and indeed for that value of n,
H_n − 100 ≈ 3 × 10^−15
and that’s as much as we could hope for with IEEE 754 floats.
The approximation
n = exp(m − γ)
is very good for large values of m. Using Mathematica we can find the exact value of n.
f[n_] := Log[n] + EulerGamma + 1/(2 n) - 1/(12 n^2)
n = Floor[Exp[100 - EulerGamma]];
N[f[n], 50]
100.00000000000000000000000000000000000000000000900
N[f[n - 1], 50]
99.999999999999999999999999999999999999999999942747
So
n = 15092688622113788323693563264538101449859497
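For comparison, here is roughly the same refinement in Python using mpmath instead of Mathematica; the function name and precision setting are my choices, but the truncated series for H_n is the same one used above.

from mpmath import mp, mpf, exp, log, floor, euler

mp.dps = 60   # plenty of working precision for m = 100

def f(n):
    # truncated asymptotic series for the nth harmonic number
    n = mpf(n)
    return log(n) + euler + 1/(2*n) - 1/(12*n**2)

m = 100
n = int(floor(exp(m - euler)))   # initial guess n = exp(m - gamma)
print(n)          # the 44-digit integer above
print(f(n))       # slightly more than 100
print(f(n - 1))   # slightly less than 100

The H_n = 1000 calculation below works the same way, with mp.dps raised to something like 500.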
A similar process shows that the solution to
H_n = 1000
is
n = 110611511026604935641074705584421138393028001852577373936470952377218354575172401275457597579044729873152469512963401398362087144972181770571895264066114088968182356842977823764462179821981744448731785408629116321919957856034605877855212667092287520105386027668843119590555646814038787297694678647529533718769401069269427475868793531944696435696745559289326610132208504257721469829210704462876574915362273129090049477919400226313586033
For this calculation you’ll need to increase the precision from 50 digits to something like 500 digits, certainly more than 435, because n is a 435-digit number.
In case you’re wondering whether my function for computing harmonic numbers is accurate enough, it’s actually overkill, with error on the order of 1/(120n^4).
The post Solving H_n = 100 first appeared on John D. Cook.