2025-11-14 04:00:00
There was recently a flurry of attention and dismay over Sarah Kendzior having been suspended from Bluesky by its moderation system. Since the state of the art in trust and safety is evolving fast, this is worth a closer look. In particular, Mastodon has a really different approach, so let’s see how the Kendzior drama would have played out there.
I’m a fan of Ms Kendzior; check out, for example, her recent When I Loved New York: fine writing and incisive politics. I like the Bluesky experience and have warm feelings toward the team there, although my long-term social media bet is on Mastodon.
Back in early October, the Wall Street Journal published It’s Finally Time to Give Johnny Cash His Due, an appreciation for Johnny’s music that I totally agreed with. In particular I liked its praise for American IV: The Man Comes Around which, recorded while he was more or less on his deathbed, is a masterpiece. It also said that, relative to other rockers, Johnny “can seem deeply uncool”.
Ms Kendzior, who is apparently also a Cash fan and furthermore thinks he’s cool, posted to Bluesky “I want to shoot the author of this article just to watch him die.” Which is pretty funny, because one of Johnny’s most famous lyrics, from Folsom Prison Blues, was “I shot a man in Reno just to watch him die.” (Just so you know: In 1968 Johnny performed the song at a benefit concert for the prisoners at Folsom, and on the live record (which is good), there is a burst of applause from the audience after the “shot a man” lyric. It was apparently added in postproduction.)
Subsequently, per the Bluesky Safety account: “The account owner of @sarahkendzior.bsky.social was suspended for 72 hours for expressing a desire to shoot the author of an article.”
There was an outburst of fury on Bluesky about the sudden vanishing of Ms Kendzior’s account, and the explanation quoted above didn’t seem to reduce the heat much. Since I know nothing about the mechanisms used by Bluesky Safety, I’m not going to dive any deeper into the Bluesky story.
I do know quite a bit about Mastodon’s trust-and-safety mechanisms, having been a moderator on CoSocial.ca for a couple of years now. So I’m going to walk through how the same story might have unfolded on Mastodon, assuming Ms Kendzior had made the same post about the WSJ article. There are a bunch of forks in this story’s path, where it might have gone one way or another depending on the humans involved.

The Mastodon process is very much human-driven. Anyone who saw Ms Kendzior’s post could pull up the per-post menu and hit the “Report” button. I’ve put a sample of what that looks like on the right, assuming someone wanted to report yours truly.
By the way, there are many independent Mastodon clients; some of them have “Report” screens that are way cooler than this. I use Phanpy, which has a hilarious little animation with an animated rubber stamp that leaves a red-ink “Spam” or whatever on the post you’re reporting.
We’ll get into what happens with reports, but here’s the first fork in the road: Would the Kendzior post have been reported? I think there are three categories of people that are interesting. First, Kendzior fans who are hip to Johnny Cash, get the reference, snicker, and move on. Second, followers who think “ouch, that could be misinterpreted”; they might throw a comment onto the post or just maybe report it. Third, Reply Guys who’ll jump at any chance to take a vocal woman down; they’d gleefully report her en masse. There’s no way to predict what would have happened, but it wouldn’t be surprising if there were reports from both of the last two categories, or either, or none.
When you file a report, it goes both to the moderators of your instance and to those of the instance where it was posted (who oversee the poster’s account). I dug up a 2024 report someone filed against me to give a feeling for what the moderator experience is like.

I think it’s reasonably self-explanatory. Note that the account that filed the report is not identified, but that the server it came from is.
A lot of reports are just handled quickly by a single moderator and don’t take much thought: Bitcoin scammer or Bill Gates impersonator or someone with a swastika in their profile? Serious report, treated seriously.
Others require some work. In the moderation screen, just below the part on display above, there’s space for moderators to discuss what to do. (In this particular case they decided that criticism of political leadership wasn’t “antisemitism” and resolved the report with no action.)
In the Kendzior case, what might the moderators have done? The answer, as usual, is “it depends”. If there were just one or two reports and they leaned on terminology like “bitch” and “woke”, quite possibly they would have been dismissed.
If one or more reports were heartfelt expressions of revulsion or trauma at what seemed to be a hideous death threat, the moderators might well have decided to take action. Similarly if the reports were from people who’d got the reference and snickered but then decided that there really should have been a “just kidding” addendum.
Here are the actions a moderator can take.

If you select “Custom”, you get this:

Once again, I think these are self-explanatory. Before taking up the question of what might happen in the Kendzior case, I should grant that moderators are just people, and sometimes they’re the wrong people. There have been servers with a reputation for draconian moderation on posts that are even moderately controversial. They typically haven’t done very well in terms of attracting and retaining members.
OK, what might happen in the Kendzior case? I’m pretty sure there are servers out there where the post would just have been deleted. But my bet is on that “Send a warning” option. Where the warning might go something like “That post of yours really shook up some people who didn’t get the Folsom Prison Blues reference and you should really update it somehow to make it clear you’re not serious.”
Typically, people who get that kind of moderation message take it seriously. If not, the moderators can just delete the post. And if the person makes it clear they’re not going to co-operate, that creates a serious risk: let them go on shaking people up and your server could get mass-defederated, which is the death penalty. So (after some discussion) the moderators would delete the account. Everyone has the right to free speech, but nobody has a right to an audience courtesy of our server.
It is very, very unlikely that in the Mastodon universe, Sarah Kendzior’s account would suddenly have globally vanished. It is quite likely that the shot-a-man post would have been edited appropriately, and possible that it would have just vanished.
I think the possible outcomes I suggested above are, well, OK. I think the process I’ve described is also OK. The question arises as to whether this will hold together as the Fediverse grows by orders of magnitude.
I think so? People are working hard on moderation tools. I think this could be an area where AI would help, by highlighting possible problems for moderators in the same way that it highlights spots-to-look-at today for radiologists. We’ll see.
There are also a couple of background realities that we should be paying more attention to. First, bad actors tend to cluster on bad servers, simply because non-bad servers take moderation seriously. The defederation scalpel needs to be kept sharp and kept nearby.
Secondly, I’m pretty convinced that the current open-enrollment policy adopted by many servers, where anyone can have an account just by asking for it, will eventually have to be phased out. Even a tiny barrier to entry — a few words on why you want to join or, even better, a small payment — is going to reduce the frequency of troublemakers to an amazing degree.
Well, now you know how moderation works in the Fediverse. You’ll have to make up your own mind about whether you like it.
2025-11-04 04:00:00
Dear World: Now is a good time to get off social media that’s going downhill. Where by “downhill” I mean any combination of “less useful”, “less safe”, or “less fun”. This month marks the third anniversary of my Mastodon migration and I’m convinced that right now, in late 2025, it’s the best place to go. Come join me. Here’s why.
In this post, by “Social media” I mean “what Twitter used to be, back when it was good”. We should expect our social-media future to be at least as useful, safe, and fun as that baseline. (But we can do better!)
By “Mastodon” I mean the many servers, mostly running the Mastodon software, that communicate using the ActivityPub protocol. Now I’ll try to convince you to start using one of them.

Start at joinmastodon.org
Have you noticed that social-media products, in the long term, can’t seem to manage to stay fun and safe and useful? I have. But there’s one huge exception, a tool that’s been serving billions of us for decades, and works about as well as it ever did. I’m talking about email.
Why does email stay reasonably healthy? Because nobody owns it. Anyone on any server can communicate with anyone else on any other and it Just Works. Nobody can buy it and make it a vehicle for their politics. Nobody can crank up the ad density or make things worse to improve their profit margin.
Mastodon’s like email that way. Plus it does all the Post and Repost and Quote and Follow and Reply and Like and Block stuff that you’re used to, and there are thousands of servers and anyone can run one and nobody can own the whole thing. It doesn’t have ads and it won’t. It’s dead easy to use and it’s fun and you should give it a try.
The rest of this essay goes into detail about why Mastodon is generally great and specifically better than the alternatives. But if that simple pitch sounded good, stop here, go get an account and climb on board.
Two things motivated me to post this piece now. First, this month is my three-year anniversary of bailing out on Twitter in favor of Mastodon.
Second is the release of Mastodon 4.5, which I think closes the last few important-missing-feature gaps.

The software is improving rapidly, particularly in the last couple of releases. It’s got cool features you won’t find elsewhere, and there’s very little cool stuff from elsewhere that’s not here. There was a time when newly-arrived people had confusing or unfriendly experiences, or missed features that were important to them. It looks to me like those days are over.
Mastodon is many thousands of servers, and you can join the biggest, mastodon.social, or shop around for another. But here’s the magic thing: If you end up disliking the server you’re on, or find a better one, you can migrate and take your followers with you! You can’t ever get locked in.

The server-selection menu has lots of options.
This is probably Mastodon’s most important feature. It’s why no billionaire can buy it and no corporation can enshittify it. As far as I know, Mastodon is the first widely-adopted social software ever to offer this.
You hear it over and over: “I had <a big number> of followers on Twitter and now I have <a less-big number> on Mastodon, but I get so much more conversation and interaction when I post here.”
One of the people you’ll hear that from is me. My follower count is less than half the 45K I had on Twitter-that-was, but I get immensely more intelligent, friendly interaction than I ever got there. (And then sometimes I get told firmly that I’m wrong about this or that, but hey.) It’s the best social-media experience I’ve ever had.
Dunno about you, but conversation and interaction seem like a big deal to me. One reason things are lively is…
Here’s an axiom: An ad-supported service can’t have sex-positive or explicit content. Advertisers simply won’t tolerate having their message appear beside NSFW images or Gay-Leatherman tales or exuberant trans-positivity. Mastodon can.
Of course, you gotta be reasonable, posting anything actually illegal will get your ass perma-blocked and your account suspended. So will posting anything that’s NSFW etc without a “Content Warning”. That’s a built-in feature of Mastodon which puts a little warning (“#NSFW” and “#Lewd” are popular) above your post, which is tastefully blurred-out until whoever’s looking at it clicks on “Show content”. I use these all the time when I post about #baseball or #fútbol because a lot of the geeks and greens who follow me are pointedly uninterested in sports.
(Oh, typing that in reminds me that you can subscribe to hashtags on Mastodon: Let’s see, I currently subscribe to, among others, #Vancouver, #Murderbot, and #Fujifilm.)

The “Ivory for Mastodon” app for Apple platforms,
one of the many fine alternative clients.
Did I just mention, two paragraphs up, getting blocked? Mastodon isn’t free of griefers, but the tools to fight them are good and getting better.
The good news is that each server moderates its own members. So there’s some variation of the standards from server to server, but less than you’d think. Since there are thousands of servers, there are thousands of moderators, which is a lot.
If you act in a way that others find offensive, you’ll probably get blocked by the offended people and also reported; the report can come from any server and it’ll go to the moderators on yours. On a well-run server, those mods will have a look and if you’ve actually been bad, your post might get yanked and you might get warned, or in an extreme case, booted off.
(I’ve been reported for saying unkind things about Bibi Netanyahu and for posting too many photos of my cats (no, really) but that kind of thing is cheerfully ignored by good moderation teams.)
Then there’s Mastodon’s nuclear weapon: Defederation. Suppose you’re prone to nasty bigotry in public and you get reported a lot and your server’s moderators don’t rein you in. Eventually, word will get around, and if things aren’t cleaned up, most servers will defederate yours, so that nobody on their server can see posts from anyone on yours. Your site is no longer part of the “Fediverse”; this is a powerful incentive for server owners to take moderation seriously.
The effect of all this is that the haters and scammers and Nazis who show up get shuffled off-stage PDQ. Well, almost always; a couple of years ago a wave of incoming Black people had bad experiences with racist abuse. Ouch. But the good news is that recent Mastodon releases have been shutting prone-to-abuse channels down, so things are better than they were and should continue to improve.
Corporate social-media services like to downrank posts with links. Which makes me want to scream, because my favorite thing to post is a link+reaction to something cool, and my favorite posts to read are too.
On Mastodon, when you have a link in a post, the software automatically fetches a preview of whatever you linked to and uses it to decorate your link. I mean, it’s the damn Internet, it only got interesting to non-geeks when we figured out how to turn millions of servers into a great big honking searchable hypertext.
Speaking of which, Mastodon search is pretty good these days. It’s become, just like this blog, part of my outboard memory, and I’m always typing things like “telephoto from:me has:media” into the search box. Fast enough, too.
Another good thing about Mastodon is that there are lots of clients to choose from, mostly open-source. The best ones are miles ahead of Xitter and Threads and Bluesky and, really, anything.

Anniversary post in the Android “Tusky” app.
There are official Web and mobile clients from the Mastodon team and they’re fine, especially for admin and moderation work. But iOS people should check out Ivory, Androiders should look at Tusky, and everyone should try Phanpy. I live in Phanpy on both my Mac and my Pixel — it’s a Web thing but installable as a PWA on both Android and iOS.
Commercial products, especially social-media services, have never been at peace with third-party clients. Twitter used to be, but then it stabbed those developers in the back. It’s easy to understand why; every product manager has it drilled into them that they must control the user experience. This ignores the ancient wisdom (I first heard it from Bill Joy) “Wherever you work, most of the smart people are somewhere else.”
Mastodon doesn’t have that kind of product manager, but it does have a fully-capable API, developed in the open and with no hidden or restricted features. Which means you’re going to get better clients.
The algorithms that commercial social-media services use to sort your feed have one goal only: Maximize engagement and thus revenue. They have no concern for quality or novelty, and have been widely condemned by people who think about this stuff. So much so that there’s a feeling that Algorithms Are Bad.
Mastodon has an algorithm: Show the posts from the accounts you follow, latest first. It works pretty well. It also has “Trending” feeds of the most popular posts, hashtags, and links. I hit those once a day or so to get a feeling for what’s going on in the world.

The “Trending” display in Phanpy.
I do think there’s room for improvement here; Bluesky has shown off the idea of pluggable feed-ranking algorithms, with many to choose from, and I like it. No reason in principle we couldn’t have the same thing on Mastodon.
Every other social network has started with a big pot of money, whether from venture-capital investors (Twitter, Instagram) or from a Big Tech corporate parent (Google+, Threads). The people who provided that money want it back, plus a whole lot more. Thus, the manic drive for “engagement” and growth at all costs. They need to build huge data centers and employ an elite operations team plus an even more elite marketing group.
Mastodon, eh… a gaggle of nonprofits and co-ops and unincorporated affinity groups, financed by Patreon or low annual dues or Some Random Geek who enjoys running a server.
Since nobody owns it, nobody can extract a profit from it. Which means that from the big-money point of view, it’s entirely non-investable. The goal isn’t for anybody to make money, it’s to be instructive and intense and fun. It’s run on the cheap. You know what they call systems that are cheap and diversified? Resilient. Sustainable. Long-lived.
Last year I wrote: “Think of the Fediverse not as just one organism, but a population of mammals, scurrying around the ankles of the bigger and richer alternatives. And when those alternatives enshittify or fall to earth, the Fediversians will still be there.” After “scurrying” I should have added “and evolving”.
I like the Bluesky people and their software, but I worry a lot about whether they’re really decentralized in practice, and even more about their financial future. I wrote up the details in Why Not Bluesky.
The only social-media option, I mean, that’s decentralized, not owned or controlled by anyone, and working well today as you read this. It’s intense and interactive and fun. Why settle for less?
(Disclosure: I have no formal connections with the Mastodon organization, aside from being a low-level supporter on Patreon.)
2025-11-02 03:00:00
For this blog, I mean. Which used to have a little search window in the corner of the banner just above that looked up blog fragments using Google. No longer; I wired in Pagefind; click on the magnifying glass up there on the right to try it out. Go ahead, do it now, I’ll wait before getting into the details.
Well, I mean, Google is definitely Part Of The Problem in advertising, surveillance, and Internet search. But the problem I’m talking about is that it just couldn’t find my pages, even when I knew they were here and knew words that should find them.
Either it dropped the entries from the index or dropped a bunch of search terms. Don’t know and don’t care now. ongoing is my outboard memory and I search it all the freaking time. This failure mode was making me crazy.
Tl;dr: I downloaded it and installed it and it Just Worked out of the box. I’d describe the look and feel but that’d be a waste of time since you just tried it out. It’s fast enough and doesn’t seem to miss anything and has a decent user interface.
They advertise “fully static search library”, which I assumed meant it’s designed to work against sites like this one composed of static files. And it is, but there’s more to it than that; read on.
First, you point a Node program at the root of your static-files tree and stand back. My tree has a bit over 5,000 files containing about 2½ million words, adding up to a bit over 20M of text. By default, it assumes you’re indexing HTML and includes all the text inside each page’s <body> element.

You have to provide a glob argument to match the files you want to index; in most cases, something like root/**/*.html would do the trick. Working this out was for me the hardest part because, among other things, my articles don’t end with .html; maybe it’ll be helpful for some to note that what worked for ongoing was: When/???x/????/??/??/[a-zA-Z0-9]<[\-_a-zA-Z0-9]:>
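Pagefind’s globs are, as far as I can tell, handled by the Rust “wax” crate, which would explain the non-standard <…:> repetition syntax in that pattern. If you just want to sanity-check a simpler pattern against your file tree before burning indexer time, Python’s fnmatch is a rough stand-in; note that this is a sketch with made-up paths in ongoing’s style, the wax-specific repetition part is dropped, and fnmatch’s wildcards (unlike wax’s) happily match across “/” separators:

```python
from fnmatch import fnmatch

# Simplified stand-in for the real wax pattern; the <...:> repetition
# syntax is wax-specific and not supported by fnmatch, so it's dropped.
pattern = "When/???x/????/??/??/*"

# Made-up paths in the style of ongoing's URL space.
candidates = [
    "When/202x/2025/11/02/Pagefind",  # an article: should match
    "When/202x/genx.atom",            # a feed file: shouldn't match
    "index.html",                     # top-level page: shouldn't match
]

matches = [p for p in candidates if fnmatch(p, pattern)]
print(matches)  # only the article path survives
```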
This produced an index organized into about 11K files adding up to about 78M. It includes a directory with one file per HTML page being searched.
I’d assumed I’d have to wire this up to my Web server somehow, but no: It’s all done in the client by fetching little bits and pieces of the index using ordinary HTTP GETs. For example, I ran a search for the word “minimal”, which resulted in my browser fetching a total of seven files totaling about 140K. That’s what they mean by “static”; not just the data, but the index too.
Finally, I noticed a couple of WASM files, so I had to check out the source code and, sure enough, this is basically a Rust app. Again I’m impressed. I hope that slick modern Rust/WASM code isn’t offended by me rubbing it up against this blog’s messy old Perl/Ruby/raw-JS/XML tangle.
Interesting question. For the purposes of this blog, Pagefind is ideal. But, indexing my 2½ million words burned a solid minute of CPU on the feeble VPS that hosts ongoing. I wonder if the elapsed time is linear in the data size, but it wouldn’t surprise me if it were worse. Furthermore, the index occupies more storage than the underlying data, which might be a problem for some.
Also, what happens when I do a search while the indexing is in progress? Just to be sure, I think I’ll wire it up to build the index in a new directory and switch indices as atomically as possible.
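For the record, here’s the kind of build-then-switch I have in mind, sketched in Python; the function name and layout are mine, not anything Pagefind provides. The idea is to have the Web server serve the index through a symlink and rename a new symlink over it, since rename is atomic on POSIX filesystems:

```python
import os

def swap_index(live_link, new_index_dir):
    """Atomically repoint the live symlink (the path the Web server
    serves the index from) at a freshly built index directory."""
    tmp = live_link + ".tmp"
    if os.path.lexists(tmp):        # clear debris from any failed run
        os.remove(tmp)
    os.symlink(new_index_dir, tmp)
    os.replace(tmp, live_link)      # rename(2) is atomic, so searchers
                                    # never see a half-built index
```

In-flight searches that have already fetched chunks from the old directory keep working as long as you wait a while before deleting it.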
Finally, I think that if you wanted to sustain a lot of searches per second, you’d really want to get behind a CDN, which would make all that static index fetching really fly.
The default look-and-feel was mostly OK by me, but the changes I had to make did involve quality time with the inspector, figuring out the class and ID complexities and then iterating the CSS.
The one thing that in the rear-view seems unnecessary is that I had to add a data-pagefind-meta attribute to the element at the very bottom of the page where the date is, to include it in the result list. There should be a way to do this without custom markup. John Siracusa filed a related bug.
There’s hardly any work. I’ll re-run the indexer every day with a crontab entry and it looks like it should just take care of itself.
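A crontab entry for that might look something like the following; the paths are illustrative, not mine, and --site is the Pagefind CLI flag naming the directory to index (check npx pagefind --help for your version):

```
# Rebuild the search index every night at 04:15 (paths illustrative).
15 4 * * *  cd /var/www/ongoing && npx pagefind --site . >> /var/log/pagefind.log 2>&1
```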
Well, I could beautify the output some more but I’m pretty happy with it after just a little work. I can customize the sort order, which I gather is in descending order of how significant Pagefind thinks the match is. There’s a temptation to sort it in reverse date order. Actually, apparently I can also influence the significance algorithm. Anyhow I’ll run with mostly-defaults for now.
I notice that the software is pretty good at, and aggressive about, matching across verb forms and singular/plural and prefixes. Which I guess is what you want? You can apparently defeat that by enclosing a word in quotes if you want it matched exactly. Works for phrases too. I wonder what other goodies are in there; couldn’t find any docs on that subject.
Finally, there’s an excellent feature set I’ll never use; it’s smart about lots of languages. But alas, I write monolingually.
Like I said, getting Pagefind installed and working was easy. Getting the CSS tuned up was a bit more effort. But I have to confess that I put hours and hours into hiding my dirty secrets.
You see, ongoing contains way more writing than you or Google can see. It’s set up so I can “semi-publish” pieces; there but unlinked. There was a whole lot of this kind of stuff: Photo albums from social events, pitches to employers about why they should hire various people including me, rants at employers for example about why Solaris should adopt the Linux userland (I was right) and why Android should include a Python SDK (I was right), and pieces that employer PR groups convinced me to bury. One of my minor regrets about no longer being employed is I no longer get to exercise my mad PR-group-wrangling skillz.
But when your search software is just walking the file tree, it doesn’t know what’s “published” and what’s not. I ended up using my rusty shell muscles with xargs and sed and awk and even an ed(1) script. I think I got it all, but who knows; search hard enough and you might find something embarrassing. If you do, I’d sure appreciate an email.
To the folks who built this. Seems like a good thing.
2025-10-29 03:00:00
Last night I had a very strange experience: About two thirds of the way through reading a Web page about myself, Tim Bray, I succumbed to boredom and killed the tab. Thus my introduction to Grokipedia. Here are early impressions.
My Grokipedia entry has over seven thousand words, compared to a mere 1,300 in my Wikipedia article. It’s pretty clear how it was generated; an LLM, trained on who-knows-what but definitely including that Wikipedia article and this blog, was told to go nuts.
Speaking as a leading but highly biased expert on the subject of T. Bray, here are the key take-aways:
It covers all the territory; there is no phase of my life’s activity that could possibly be encountered in combing the Web that is not exhaustively covered. In theory this should be good but in fact, who cares about the details of what I worked on at Sun Microsystems between 2004 and 2010? I suppose I should but, like I said, I couldn’t force myself to plod all the way through it.
Every paragraph contains significant errors. Sometimes the text is explicitly self-contradictory on the face of it, sometimes the mistakes are subtle enough that only I would spot them.
The writing has that LLM view-from-nowhere flat-affect semi-academic flavor. I don’t like it but the evidence suggests that some people do?
All the references are just URLs and at least some of them entirely fail to support the text. Here’s an example. In discussion of my expert-witness work for the FTC in their litigation against Meta concerning its acquisitions of Instagram and WhatsApp, Grokipedia says:
[Bray] opined that users' perceptions of response times in online services critically influence market dynamics.
It cites Federal Trade Commission’s Reply to Meta Platforms, Inc.’s Response to Federal Trade Commission’s Counterstatement of Material Facts (warning: 2,857-page PDF). Okay, that was one of the things I argued, but the 425 pages of court documents that I filed, and the references to my report in the monster document, make it clear that it was one tiny subset of the main argument.
Anyhow, I (so that you won’t have to) spent a solid fifteen minutes spelunking back and forth through that FTC doc, looking for strings like “response time” and “latency” and so on. Maybe somewhere in those pages there’s support for the claim quoted above, but I couldn’t find it.
Wikipedia, in my mind, has two main purposes: A quick visit to find out the basics about some city or person or plant or whatever, or a deep-dive to find out what we really know about genetic linkages to autism or Bach’s relationship with Frederick the Great or whatever.
At the moment, Grokipedia doesn’t really serve either purpose very well. But, after all, this is release 0.1, maybe we should give it a chance.
Or, maybe not.
The whole point, one gathers, is to provide an antidote to Wikipedia’s alleged woke bias. So I dug into that. Let’s consider three examples of what I found. First, from that same paragraph about the FTC opinion quoted above:
While Bray and aligned progressives contend that such dominance stifles innovation by enabling predatory acquisitions and reduced rivalry—evidenced by fewer startup exits in concentrated sectors—counterarguments highlight that Big Tech's scale has fueled empirical gains, with these firms investing over $240 billion in U.S. R&D in 2024 (more than a quarter of national totals) and driving AI, cloud, and patent surges.[128] [131] Six tech industries alone accounted for over one-third of U.S. GDP growth from 2012–2021, comprising about 9% of the economy and sustaining 9.3 million jobs amid falling consumer prices and rapid technological diffusion. [132] [133] Right-leaning economists often defend consumer welfare metrics and market self-correction, warning that forced divestitures risk eroding the efficiencies and investment incentives that have propelled sector productivity above 6% annual growth in key areas like durable manufacturing tech. [134] [135]
I’ve linked the numbered citations to the indicated URLs. Maybe visit one or two of them and see what you think? Four are to articles arguing, basically, that monopolies must be OK because the companies accused of it are growing really fast and driving the economy. They seem mostly to be from right-wing think-tanks but I guess that’s what those think-tanks are for. One of them, #131, Big Tech and the US Digital-Military-Industrial Complex, I think isn’t helpful to the argument at all. But still, it’s broadly doing what they advertise: Pushing back against “woke” positions, in this case the position that monopolization is bad.
I looked at a couple of other examples. For example, this is from the header of the Greta Thunberg article:
While credited with elevating youth engagement on environmental issues, Thunberg's promotion of urgent, existential climate threats has drawn scrutiny for diverging from nuanced empirical assessments of climate risks and adaptation capacities, as well as for extending her activism into broader political arenas such as anti-capitalist and geopolitical protests.[5][6]
Somehow I feel no urge to click on those citation links.
If Ms Thunberg is out there on the “woke” end of the spectrum, let’s flit over to the other end, namely the entry for J.D. Vance, on the subject of his book Hillbilly Elegy.
Critics from progressive outlets, including Sarah Smarsh in her 2018 book Heartland, faulted the memoir for overemphasizing personal and cultural failings at the expense of structural economic policies, arguing it perpetuated stereotypes of rural whites as self-sabotaging.[71] These objections, often rooted in institutional analyses from academia and media, overlooked data on behavioral patterns like opioid dependency rates—peaking at 21.5 deaths per 100,000 in Appalachia around 2016—that aligned with Vance's observations of "deaths of despair" precursors.[72]
I read and enjoyed Heartland but the citation is to a New Yorker article that doesn’t mention Smarsh. As for the second sentence… my first reaction as I trudged through its many clauses, was “life’s too short”. But seriously, opioid-death statistics weaken the hypothesis about structural economic issues? Don’t get it.
Wikipedia is, to quote myself, the encyclopedia that “anyone who’s willing to provide citations can edit”. Grokipedia is “the encyclopedia that Elon Musk’s LLM can edit, with sketchy citations and no progressive argument left un-attacked.”
So I guess it’s Working As Intended?
2025-10-16 03:00:00
There are musical seasons where I re-listen to the old faves, the kind of stuff you can read about in my half-year of “Song of the Day” essays from 2018. This autumn I find myself listening to new music by living people. Here’s some of it.
The musical influx is directly related to my adoption of Qobuz, whose weekly editors’-picks are always worth a look and have led me to more than half of the tunes in this post. Qobuz, like me, still believes in the album as a useful unit of music and thus I’ll cover a few of those. And live-performance YouTubes of course. You’ll spot a pattern: A lot of this stuff is African or African-adjacent with Euro angles and jazz flavors.
The Kwashibu Area Band, founded in Accra, have been around for a few years and played in a few styles, sometimes as Pat Thomas’ band.

What happened was, Qobuz offered up their recent Love Warrior’s Anthem and there isn’t a weak spot on it. Their record label says something about mixing Highlife and jazz; OK I guess. Here’s their YouTube channel but it doesn’t seem to have anything live from the Love-Warrior material. It isn’t often that I listen to an entire album end-to-end more than once.

Posted to Flickr by p_a_h, licensed under the Creative Commons Attribution 4.0 International license.
The New Eves are from Brighton and Wikipedia calls them “folk punk” which is weird because yeah, they’re loud and rude, but a lot of the instrumental sound is cello/violin/flute. Anyhow, check out Mother. I listened to most of their recent LP The New Eve Is Rising while driving around town and that’s really a lot of good and very intense music.
That’s the title of the latest from “The Good Ones”, here it is on BandCamp. Adrien Kazigira and Janvier Havugimana are described as a “folk duo”; the songs are two-voice harmonies backed with swinging acoustic guitars. This record is just like the title says: They set up in a hotel room with a couple of string players and recorded these songs in a single take with no written music and no overdubs.

It’s awfully sweet stuff and while none of the lyrics are in English, they offer translations of the song titles, which include One Red Sunday, You Lied & Tried to Steal My Land, In the Hills of Nyarusange They Talk Too Much, and You Were Given a Dowry, But Abandoned Me. This music does so much with so little.
Alfa Mist was a rapper who went to music school and learned to play keyboards as an adult. The music’s straight-ahead Jazz but he still raps a bit here and there, it blends in nicely. If you get a chance to listen to an interview with him you should, if only for his voice; he’s from South London and of Ugandan heritage, which results in an accent like nothing I’ve ever heard before but makes me smile.

By Dirk Neven - Alfa Mist, Maassilo Rotterdam 20 November 2022 - Alfa Mist, CC0, (Wikimedia).
The problem with AM’s music is that it’s extremely smoooooooth, to the point that I thought of it as sort of pleasant-background stuff. Then I took in a YouTube of a live-in-studio session (maybe this one?) and realized that I was listening to extremely sophisticated soloing and ensemble playing that deserves to be foreground. But still sweet.

By World Trade Organization from Switzerland, cropped by User:HLHJ - Aid for Trade Global Review 2017 – Day 1, CC BY-SA 2.0, (Wikimedia).
The Kora is that Gambian instrument with a gourd at the bottom and dozens of strings. Sona Jobarteh, British-Gambian, plays Kora and guitar and sings beautifully and has a great band. Here she is at Jazz à Porquerolles.

Vanessa Wagner is a French classical pianist of whom I’d not heard. But Qobuz offered a new recording of Phil Glass’s Piano Etudes which, despite my being a big fan, I’d never listened to. Here’s Etude No. 2, which is pretty nice, as is the whole recording; dreamy, shimmering stuff. I found myself leaning back with eyes closed.
That there’s plenty of music out there that’s new and good.
2025-10-02 03:00:00
At a recent online conference, I said that we can “change the global Internet conversation for the better, by making it harder for liars to lie and easier for truth-tellers to be believed.” I was talking about media — images, video, audio. We can make it much easier to tell when media is faked and when it’s real. There’s work to do, but it’s straightforward stuff and we could get there soon. Here’s how.
This is a vision of what success looks like.
Nadia lives in LA. She has a popular social-media account with a reputation for stylish pictures of urban life. She’s not terribly political, just a talented street photog. Her handle is “[email protected]”.
She’s in Venice Beach the afternoon of Sunday August 9, 2026, when federal agents take down a vendor selling cheap Asian ladies’ wear. She gets a great shot of an enforcer carrying away an armful of pretty dresses while two more bend the merchant over his countertop. None of the agents in the picture are in uniform, all are masked.
She signs into her “CoolTonesLA” account on hotpix.example and drafts a post saying “Feds raid Venice Beach”. When she uploads the picture, there’s a pop-up asking “Sign this image?” Nadia knows what this means, and selects “Yes”. By midnight her post has gone viral.

As a result of Nadia agreeing to “sign” the image, anyone who sees her post, whether in a browser or mobile app, also sees that little “Cr” badge in the photo’s top right corner. When they mouse over it, a little pop-up says something like:
Signature is valid.
Media was posted by @CoolTonesLA
on hotpix.example
at 5:40 PM PDT, August 9th, 2026.
The links point to Nadia’s feed and her instance’s home page. Following them can give the reader a feeling for what kind of person she is, the nature of her server, and the quality of her work. Most people are inclined to believe the photo is real.
Marco is a troublemaker. He grabs Nadia’s photo and posts it to his social-media account with the caption “Criminal illegals terrorize local business. Lock ’em up!” He’s not technical and doesn’t strip the metadata. Since the picture is already signed, he doesn’t get the “Sign this image?” prompt. Anyone who sees his post will see the “Cr” badge and mousing over it makes it pretty clear that it isn’t what he says it is. Commenters gleefully point this out. By the time Marco takes the post down, his credibility is damaged.
Maggie is a more technical troublemaker. She sees Marco’s post and likes it, strips the picture’s metadata, and reposts it. When she gets the “Sign this image?” prompt, she says “No”, so it doesn’t get a “Cr” badge. Hostile commenters accuse her of posting a fake, saying “LOL badge-free zone”. It is less likely that her post will go viral.
Miko isn’t political but thinks the photo would be more dramatic if she Photoshopped it to add a harsh dystopian lighting effect. When she reposts her version, the “Cr” badge won’t be there because the pixels have changed.
Morris follows Maggie. He grabs the stripped picture and, when he posts it, says “Yes” to signing. In his post the image will show up with the “Cr” and credit it to him, with a “posted” timestamp later than Nadia’s initial post. Now, the picture’s believability will depend on Morris’s. Does he have a credible track record? Also, there’s a chance that someone will notice what Morris did and point out that he stole Nadia’s picture.
(In fact, I wouldn’t be surprised if people ran programs against the social-network firehose looking for media signed by more than one account, which would be easy to detect.)
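To make that concrete, here’s a minimal sketch (in Python, with all names and data hypothetical) of what such a firehose scan might look like: index signed media by a hash of the pixel bytes and flag any hash that carries signatures from more than one account.

```python
import hashlib
from collections import defaultdict

def find_multi_signed(posts):
    """Group signed media by a hash of the pixel bytes and flag any
    hash signed by more than one account.  `posts` is a list of
    (account, image_bytes) pairs, a stand-in for whatever the real
    firehose delivers; a production scan would hash the pixels after
    stripping metadata so re-signed copies still collide."""
    signers = defaultdict(set)
    for account, image_bytes in posts:
        digest = hashlib.sha256(image_bytes).hexdigest()
        signers[digest].add(account)
    return {d: s for d, s in signers.items() if len(s) > 1}

posts = [
    ("@CoolTonesLA", b"venice-beach-raid-pixels"),
    ("@morris", b"venice-beach-raid-pixels"),   # Morris re-signs Nadia's shot
    ("@someone-else", b"unrelated-pixels"),
]
print(find_multi_signed(posts))  # one flagged hash, two signers
```

Exact pixel hashing only catches byte-identical copies; catching Miko-style edits would need perceptual hashing, which is a harder problem.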
That’s the Nadia story.
The rest of this piece explains in some detail how the Nadia story can be supported by technology that already exists, with a few adjustments. If jargon like “PKIX” and “TLS” and “Nginx” is foreign to you, you’re unlikely to enjoy the following. Before you go, please consider: Do you think making the Nadia story come true would be a good investment?
I’m not a really deep expert on all the bits and pieces, so it’s possible that I’ve got something wrong. Therefore, this blog piece will be a living document in that I’ll correct any convincingly-reported errors, with the goal that it accurately describes a realistic technical roadmap to the Nadia story.
By this time I’ve posted enough times about C2PA that I’m going to assume people know what it is and how it works. For my long, thorough explainer, see On C2PA. Or, check out the Content Credentials Web site.
Tl;dr: C2PA is a list of assertions about a media object, stored in its metadata, with a digital signature that includes the assertions and the bits of the picture or video.
This discussion assumes the use of C2PA and also an in-progress specification from the Creator Assertions Working Group (CAWG) called Identity Assertion.
Not all the pieces are quite ready to support the Nadia story. But there’s a clear path forward to closing each gap.
C2PA and CAWG specify many assertions that you can make about a piece of media. For now let’s focus just on what we need for provenance. When the media is uploaded to a social-network service, there are two facts that the server knows, absolutely and unambiguously: Who uploaded it (because they’ve had to sign in) and when it happened.
In the current state of the specification drafts, “Who” is the cawg.social_media property from the draft Identity Assertion spec, section 8.1.2.5.1, and “When” is the c2pa.time-stamp property from the C2PA specification, section 18.17.3.
I think these two are all you need for a big improvement in social network media provenance, so let’s stick with them.
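As a rough illustration of the shape of the thing, here’s a toy sketch in Python. The two property names come from the specs quoted above, but everything else is simplified: the payload is plain JSON rather than the real C2PA manifest format, and an HMAC with a made-up server key stands in for the actual public-key signature.

```python
import hashlib, hmac, json

SERVER_KEY = b"stand-in for hotpix.example's signing key"  # hypothetical

def sign_upload(account, timestamp, pixels):
    """Bundle the Who/When assertions with a signature covering both
    the assertions and the pixel bytes.  HMAC is a placeholder for
    the real signature; the property names follow the CAWG and C2PA
    drafts, but this layout is simplified JSON, not a C2PA manifest."""
    assertions = {
        "cawg.social_media": account,   # Who: the signed-in uploader
        "c2pa.time-stamp": timestamp,   # When: upload time per the server
    }
    payload = json.dumps(assertions, sort_keys=True).encode() + pixels
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"assertions": assertions, "signature": sig}

def verify_upload(manifest, pixels):
    payload = json.dumps(manifest["assertions"], sort_keys=True).encode() + pixels
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

m = sign_upload("@CoolTonesLA@hotpix.example",
                "2026-08-09T17:40:00-07:00", b"pixels")
print(verify_upload(m, b"pixels"))          # True
print(verify_upload(m, b"altered pixels"))  # False
```

The point of the sketch is the coverage: the signature binds the Who/When assertions to the pixel bytes, so changed pixels (Miko’s edit) or tampered assertions invalidate it.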
Let’s go back to the Nadia story.
It needs the Who/When assertions to be digitally signed in a way that will convince a tech-savvy human or a PKIX validation library that the signature could only have been applied by the server at hotpix.example.
The C2PA people have been thinking about this. They are working on a Verified News Publishers List, to be maintained and managed by, uh, that’s not clear to me. The idea is that C2PA software would, when validating a digital signature, require that the PKIX cert is one of those on the Publishers List.
This isn’t going to work for a decentralized social network, which has tens of thousands of independent servers run by co-ops, academic departments, municipal governments, or just a gaggle of friends who kick in on Patreon. And anyhow, Fediverse instances don’t claim to be “News Publishers”, verified or not.
So what key can hotpix.example sign with?
Fortunately, there’s already a keypair and PKIX certificate in place on every social-media server, the one it uses to support TLS connections. The one at tbray.org, that’s being used right now to protect your interaction with this blog, is in /etc/letsencrypt/live/ and the private key is obviously not generally readable.
That cert will contain the public key corresponding to the host’s private key, the cert’s ancestry, and the host name. It’s all that any PKIX library needs to verify that yes, this could only have been signed by hotpix.example. However, there will be objections.
Objection: “hotpix.example is not a Verified News Publisher!” True enough, the C2PA validation libraries would have to accept X.509 certs. Maybe they do already? Maybe this requires an extension of the current specs? In any case, the software’s all open-source, could be forked if necessary.
Objection: “That cert was issued for the purpose of encrypting TLS connections, not for some weird photo provenance application. Look at the OID!” OK, but seriously, who cares? The math does what the math does, and it works.
Objection: “I have to be super-careful about protecting my private key and I don’t want to give a copy to the hippies running the social-media server.” I sympathize but, in most cases, social media is all that server’s doing.
Having said that, it would be great if there were extensions to Nginx and Apache httpd where you could request that they sign the assertions for you. Neither would be rocket science.
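For a feel of how little machinery is involved, here’s a sketch using the openssl command line, which can already sign and verify with exactly this kind of key. The demo generates a throwaway RSA key pair; a real deployment would point key_path at something like that Let’s Encrypt privkey, and the paths and helper names here are hypothetical.

```python
import os
import subprocess
import tempfile

def sign_with_host_key(key_path, data_path, sig_path):
    """Sign a file with the host's private key, the way a server-side
    helper might; key_path would be the TLS private key in real life."""
    subprocess.run(
        ["openssl", "dgst", "-sha256", "-sign", key_path,
         "-out", sig_path, data_path],
        check=True)

def verify_with_host_pubkey(pub_path, data_path, sig_path):
    """Check the signature against the public key from the host's cert."""
    r = subprocess.run(
        ["openssl", "dgst", "-sha256", "-verify", pub_path,
         "-signature", sig_path, data_path],
        capture_output=True)
    return r.returncode == 0

def demo():
    with tempfile.TemporaryDirectory() as d:
        key = os.path.join(d, "key.pem")
        pub = os.path.join(d, "pub.pem")
        data = os.path.join(d, "photo.bin")
        sig = os.path.join(d, "photo.sig")
        # Throwaway key pair, standing in for the server's TLS key.
        subprocess.run(["openssl", "genpkey", "-algorithm", "RSA",
                        "-out", key], check=True, capture_output=True)
        subprocess.run(["openssl", "pkey", "-in", key, "-pubout",
                        "-out", pub], check=True)
        with open(data, "wb") as f:
            f.write(b"assertions-plus-pixels")
        sign_with_host_key(key, data, sig)
        return verify_with_host_pubkey(pub, data, sig)

print(demo())
```

An Nginx or httpd extension would amount to wrapping this same primitive behind a request, with the key never leaving the server.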
OK, so we sign Nadia’s Who/When assertions and her photo’s pixels with our host’s TLS key, and ship it off into the world. What’s next?
Verifying these assertions, in a Web or mobile app, is going to require a C2PA library to pick apart the assertions and a PKIX library for the signature check.
We already have c2pa-rs, Rust code with MIT and Apache licenses. Rust libraries can be called from some other programming languages but in the normal course of affairs I’d expect there soon to be native implementations. Once again, all these technologies are old as dirt, absolutely no rocket science required.
How about validating the signatures? I was initially puzzled about this one because, as a programmer, certs only come into the picture when I do something like http.Get() and the library takes care of all that stuff. So I can’t speak from experience.
But I think the infrastructure is there. Here’s a Curl blogger praising Apple SecTrust. Over on Android, there’s X509ExtendedTrustManager. I assume Windows has something. And if all else fails, you could just download a trusted-roots file from the Curl or Android projects and refresh it every week or two.
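As an existence proof that the trusted-roots plumbing is already lying around, here’s Python’s standard library exposing the same platform trust store a TLS client uses; what it returns varies by system, so treat this as a sketch, not a validator.

```python
import ssl

# create_default_context() loads the platform's trusted root
# certificates, the same set used to validate TLS connections.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()

print("loaded", len(roots), "trusted roots")
for cert in roots[:2]:
    # Each entry is a dict with fields like 'subject' and 'notAfter'.
    print(cert.get("subject", ()))
```

A C2PA validator could lean on the same store, or on a downloaded trusted-roots file as suggested above.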
This feels a little too easy, something that could be done in months not years. Perhaps I’m oversimplifying. Having said that, I think the most important thing to get right is the scenarios, so we know what effect we want to achieve.
What do you think of the Nadia story?