2025-11-25 01:06:51

Suppose Alice runs a confidential restaurant. Alice doesn’t want there to be any record of who visited her restaurant but does want to get paid for her food. She accepts Monero, and instead of a cash register there are two QR codes on display, one corresponding to her public view key A and the other corresponding to her public spend key S.
A customer Bob walks into the restaurant and orders a burger and fries. When Bob pays Alice, here’s what’s going on under the hood.
Bob is using software that generates a random integer r and multiplies it by a point G on an elliptic curve, specifically ed25519, obtaining the point
R = rG
on the curve. The software also multiplies Alice’s view key A, a point on the same elliptic curve, by r, then runs a hash function H on the product rA that returns an integer k.
k = H(rA).
Finally, Bob’s software computes the point
P = kG + S
and sends Alice’s cash register, i.e. her crypto wallet, the pair of points (P, R). The point P is a stealth address, an address that will only be used this one time and cannot be linked to Alice or Bob [1]. The point R is additional information that helps Alice receive her money.
Alice and Bob share a secret: both know k. How’s that?
Alice’s public view key A is the product of her private view key a and the group generator G [2]. So when Bob computes rA, he’s computing r(aG). Alice’s software can multiply the point R by a to obtain a(rG).
rA = r(aG) = a(rG) = aR.
Both Alice and Bob can hash this point—which Alice thinks of as aR and Bob thinks of as rA—to obtain k. This is ECDH: elliptic curve Diffie-Hellman key exchange.
Next, Alice’s software scans the blockchain for payments to
P = kG + S.
Note that P is on the blockchain, but only Alice and Bob know how to factor P into kG + S because only Alice and Bob know k. And only Alice can spend the money because only she knows the private key s corresponding to the public spend key S where
S = sG.
She knows
P = kG + sG = (k + s)G
and so she has the private key (k + s) corresponding to P.
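Here is a minimal sketch of this arithmetic in Python. Since a full ed25519 implementation would be long, the sketch substitutes the multiplicative group modulo a prime for the curve group; the modulus, generator, and hash construction below are hypothetical stand-ins, far too small and too naive for real use. Written multiplicatively, the point sum kG + S becomes the product (G^k)S.
import hashlib, secrets

# Stand-in group: multiplicative group mod a Mersenne prime.
# "Adding points" becomes multiplication and "multiplying a point
# by an integer" becomes exponentiation. Toy parameters only.
p = 2**127 - 1             # modulus
G = 3                      # stand-in generator
q = p - 1                  # group order

def H(point):              # hash a group element to an integer mod q
    return int.from_bytes(hashlib.sha256(str(point).encode()).digest(), "big") % q

a = secrets.randbelow(q)   # Alice's private view key
s = secrets.randbelow(q)   # Alice's private spend key
A = pow(G, a, p)           # public view key, aG in additive notation
S = pow(G, s, p)           # public spend key, sG in additive notation

# Bob's side
r = secrets.randbelow(q)
R = pow(G, r, p)                          # R = rG
k_bob = H(pow(A, r, p))                   # k = H(rA)
P = pow(G, k_bob, p) * S % p              # P = kG + S

# Alice's side: recover k from R, then the private key for P
k_alice = H(pow(R, a, p))                 # k = H(aR)
assert k_alice == k_bob                   # the shared secret
assert pow(G, (k_alice + s) % q, p) == P  # k + s unlocks P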
[1] Bob sends money to the address P, so there is some connection between Bob and P on the Monero blockchain. However, due to another feature of Monero, namely ring signatures, someone analyzing the blockchain can only determine that Bob is one of 16 people who may have sent money to the address P. And there is no way, using only information on the blockchain, to know who received the money. A private investigator who saw Bob walk into Alice’s restaurant would have additional information outside the blockchain.
[2] The key assumption of elliptic curve cryptography is that it’s computationally infeasible to “divide” on an elliptic curve, i.e. to recover a from knowledge of G and aG. You could recover a by brute force if the group were small, but the elliptic curve ed25519 has on the order of 2^255 points, and a is some integer chosen randomly between 1 and the size of the curve.
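To make the brute force remark concrete, here is a toy search in Python, again using a small multiplicative group as a hypothetical stand-in for the curve. With a group of around 2^255 elements the same loop would never finish, and that is the security assumption.
# Brute-force "division": feasible only because the group is tiny.
p, G = 101, 2      # toy group: 2 is a primitive root mod 101
a = 73             # private scalar
A = pow(G, a, p)   # public value, aG in additive notation
recovered = next(k for k in range(1, p) if pow(G, k, p) == A)
print(recovered)   # 73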
The post How stealth addresses work in Monero first appeared on John D. Cook.
2025-11-21 03:42:10
I was reading about Shackleton’s incredible expedition to Antarctica, and the Weddell Sea features prominently. That name sounded familiar, and I was trying to remember where I’d heard of Weddell in math. I figured out that it wasn’t Weddell exactly but Weddle I was thinking of.
The Weddell Sea is named after James Weddell (1787–1834). Weddle’s integration rule is named after Thomas Weddle (1817–1853).
I wrote about Weddle’s integration rule a couple years ago. Weddle’s rule is as follows: for seven evenly spaced points x_0, …, x_6 with spacing h, approximate the integral of f over [x_0, x_6] by
(h/140)(41f(x_0) + 216f(x_1) + 27f(x_2) + 272f(x_3) + 27f(x_4) + 216f(x_5) + 41f(x_6)).
Let’s try this on integrating sin(x) from 1 to 2.
If we divide the interval [1, 2] into 6 subintervals, h = 1/6. The 8th derivative of sin(x) is also sin(x), so it is bounded by 1. So we would expect the absolute value of the error to be bounded by
9/(1400 × 6^9).
Let’s see what happens in practice.
import numpy as np

x = np.linspace(1, 2, 7)       # seven evenly spaced points on [1, 2]
h = (2 - 1)/6                  # spacing between points
weights = (h/140)*np.array([41, 216, 27, 272, 27, 216, 41])
approx = np.dot(weights, np.sin(x))
exact = np.cos(1) - np.cos(2)  # integral of sin from 1 to 2
print("Error: ", abs(approx - exact))
print("Expected error: ", 9/(1400*6**9))
Here’s the output:
Error: 6.321198009473505e-10
Expected error: 6.379009079626363e-10
2025-11-21 03:10:10
The previous post includes code for solving the equation
H_n = m
i.e. finding the value of n for which the nth harmonic number is closest to m. It works well for small values of m. It works for large m in the sense that H_n is very close to m for the n it returns, but that n is not necessarily the best solution.
For example, set m = 100. The code returns
n = 15092688622113830917200248731913020965388288
and indeed for that value of n,
H_n − 100 ≈ 3 × 10^−15
and that’s as much as we could hope for with IEEE 754 floats.
The approximation
n = exp(m − γ)
is very good for large values of m. Using Mathematica we can find the exact value of n.
f[n_] := Log[n] + EulerGamma + 1/(2 n) - 1/(12 n^2)
n = Floor[Exp[100 - EulerGamma]];
N[f[n], 50]
100.00000000000000000000000000000000000000000000900
N[f[n - 1], 50]
99.999999999999999999999999999999999999999999942747
So
n = 15092688622113788323693563264538101449859497
A similar process finds that the solution to
H_n = 1000
is
n = 110611511026604935641074705584421138393028001852577373936470952377218354575172401275457597579044729873152469512963401398362087144972181770571895264066114088968182356842977823764462179821981744448731785408629116321919957856034605877855212667092287520105386027668843119590555646814038787297694678647529533718769401069269427475868793531944696435696745559289326610132208504257721469829210704462876574915362273129090049477919400226313586033
For this calculation you’ll need to increase the precision from 50 digits to something more than 435, say 500 digits, because n is a 435-digit number.
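If you don’t have Mathematica at hand, here is a sketch of the same calculation in Python using mpmath; the function name f and the choice of 500 digits of working precision are mine, not from the original calculation.
from mpmath import mp, mpf, log, exp, euler, floor

mp.dps = 500    # working precision, comfortably above the 435 digits of n

def f(n):       # asymptotic approximation to H_n
    n = mpf(n)
    return log(n) + euler + 1/(2*n) - 1/(12*n**2)

n = int(floor(exp(mpf(1000) - euler)))
# check n and its neighbors to see whose harmonic number is closest to 1000
for m in (n - 1, n, n + 1):
    print(f(m) - 1000)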
In case you’re wondering whether my function for computing harmonic numbers is accurate enough, it’s actually overkill, with error on the order of 1/(120n^4).
The post Solving H_n = 100 first appeared on John D. Cook.
2025-11-20 09:03:10
I mentioned in the previous post that the harmonic numbers H_n are never integers for n > 1. In the spirit of that post, I’d like to find the value of n such that H_n is closest to a given integer m.
We have two problems to solve. First, how do we accurately and efficiently compute harmonic numbers? For small n we can directly implement the definition. For large n, the direct approach would be slow and would accumulate floating point error. But in that case we could use the asymptotic approximation
H_n ≈ log n + γ + 1/(2n) − 1/(12n^2)
from this post. As is often the case, the direct approach gets worse as n increases, but the asymptotic approximation gets better as n increases. Here γ is the Euler-Mascheroni constant.
The second problem is how to find the value of n for which H_n comes closest to m without trying too many possible values of n. We can discard the higher order terms above and see that n is roughly exp(m − γ).
Here’s the code.
import numpy as np
gamma = 0.57721566490153286
def H(n):
    if n < 1000:
        # direct sum from the definition
        return sum([1/k for k in range(1, n+1)])
    else:
        # asymptotic approximation, accurate for large n
        n = float(n)
        return np.log(n) + gamma + 1/(2*n) - 1/(12*n**2)
# return n such that H_n is closest harmonic number to m
def nearest_harmonic_number(m):
    if m == 1:
        return 1
    guess = int(np.exp(m - gamma))
    # bracket m: find i, j with H(i) < m and H(j) >= m
    if H(guess) < m:
        i = guess
        while H(guess) < m:
            guess += 1
        j = guess
    else:
        j = guess
        while H(guess) > m:
            guess -= 1
        i = guess
    x = np.array([abs(H(k) - m) for k in range(i, j+1)])
    return i + np.argmin(x)
We can use this, for example, to find the closest harmonic number to 10.
>>> nearest_harmonic_number(10)
12366
>>> H(12366)
9.99996214846655
I wrote the code with integer values of m in mind, but the code works fine with real numbers. For example, we could find the harmonic number closest to √20.
>>> nearest_harmonic_number(20**0.5)
49
>>> H(49)**2
20.063280462918804
2025-11-20 07:48:52
József Kürschák proved in 1908 that the function
f(m, n) = 1/m + 1/(m + 1) + … + 1/n
is never an integer for 0 < m < n. In particular, the harmonic numbers
H_n = 1 + 1/2 + … + 1/n
are never integers for n > 1.
The function f(m, n) can get arbitrarily close to any integer value by taking m and n large enough, but it can never exactly equal an integer.
For this post, I’d like to look at how close f(m, n) comes to an integer value when 0 < m < n ≤ N for some large N, say N = 100,000.
The most naive way to approach this would be to compute f(m, n) for all m and n and keep track of which values were closest to integers. This would be wasteful since it would recompute the same terms over and over. Instead, we could take advantage of the fact that
f(m, n) = f(m, n − 1) + 1/n.
Instead of working with f(m, n), it will be convenient to work with just its fractional part
g(m, n) = f(m, n) − ⌊f(m, n)⌋
because it won’t hurt to throw away the integer parts as we go. The values of m and n minimizing g(m, n) will be the values for which f(m, n) comes closest to an integer from above. The values of m and n maximizing g(m, n) will be the values for which f(m, n) comes closest to an integer from below.
We could calculate a matrix with all values of g(m, n), but this would take O(N²) memory. Instead, for each n we will calculate g(m, n), save the maximum and minimum values, then overwrite that memory with g(m, n + 1). This approach will only take O(N) memory.
When we compute f(m, n) for large values of n, can we rely on floating point arithmetic?
If N = 100,000, f(m, n) < 16 = 2^4. A floating point fraction has 53 bits, so we’d expect each addition to be correct to within an error of 2^−49, and so we’d expect our total error to be less than 2^−49 N.
The following code computes the values of g(m, n) closest to 0 and 1.
import numpy as np
N = 100_000
f_m = np.zeros(N+1) # working memory
# smallest and largest values of g(m, n) over m, for each n
min_fm = np.zeros(N+1)
max_fm = np.zeros(N+1)
n = 2
f_m[1] = 1.5
for n in range(3, N+1):
f_m[n-1] = 1/(n-1)
f_m[1:n] += 1/n
f_m[1:n] -= np.floor(f_m[1:n])
min_fm[n] = np.min(f_m[1:n])
max_fm[n] = np.max(f_m[1:n])
print(min(min_fm[3:]))
print(max(max_fm))
This reports a minimum value of 5.2841953035454026e-11 and a maximum value of 0.9999999996613634. The minimum value is closer to 0 than our (pessimistic) error estimate, though the maximum value is further from 1 than our error estimate.
Modifying the code a bit shows that the minimum occurs at (m, n) = (27134, 73756), and this is the only input at which g comes within our error estimate of 0. So we can be confident that it is the minimum, though we can’t be confident of its value. So next we turn to Mathematica to find the exact value of g(27134, 73756) as a rational number, a fraction with 32024 digits in the numerator and denominator, and convert it to a floating point number. The result agrees with our estimate in magnitude and to four significant figures.
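The same exact computation can also be done in Python with the standard fractions module, though it takes a while because the numerator and denominator grow to tens of thousands of digits.
from fractions import Fraction

# exact value of f(27134, 73756) = 1/27134 + ... + 1/73756
f = sum(Fraction(1, k) for k in range(27134, 73757))
g = f - 1        # f is just above 1, so this is its fractional part
print(float(g))  # compare with the floating point estimate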
So in summary,
f(27134, 73756) = 1 + ε
with ε on the order of 10^−11, and this is the closest value of f(m, n) to an integer for 0 < m < n ≤ 100,000.
2025-11-18 21:13:35
Five posts on Pythagorean triangles and Pythagorean triples