2025-06-20 08:00:00
Cloth is one of the most important goods a society can produce. Clothing is instrumental for culture, for expression, and for protecting one's modesty. Historically, cloth was one of the most expensive items on the market. People bought one or two outfits at most and then wore them repeatedly for the rest of their lives. Clothing was treasured and passed down between generations the same way we pass down jewelry today. This cloth was made by highly skilled weavers. These weavers had done the equivalent of PhD studies in weaving cloth and used state of the art hardware to do it.
As factories started to emerge, they were able to make cloth so much more cheaply than skilled weavers ever could thanks to inventions like the power loom. Power looms didn't require skilled workers to operate them. You could even staff them with war orphans, of which there was an abundance thanks to all the wars. The quality of the cloth was absolutely terrible in comparison, but there was so much more of it made so much more quickly. This allowed the price of cloth to plummet, meaning that the wages artisans made fell from six shillings a day to six shillings a week over a period in which the price of food doubled.
Mind you, the weavers didn't just reject technological progress for the sake of rejecting it. They tried to work with the ownership class and their power looms in order to produce the same cloth faster and cheaper than they had before. For a time, it did work out, but the powers that be didn't want that. They wanted more money at any cost.
At some point, someone had enough and decided to do something about it. Taking up the name Ned, he led a movement that resulted in riots and destroyed factory equipment; some riots got so bad that the army had to be called in to break them up. Townspeople local to those factory towns were in full support of Ned's followers. Heck, even the soldiers sent to stop the riots ended up seeing the point of what Ned's followers were doing and joined in themselves.
The ownership class destroyed the livelihood of the skilled workers so that they could make untold sums of money producing terrible cloth, turning people's one-time purchase of clothing into a de-facto subscription that had to be renewed every time the clothing wore out. Now we have fast fashion and don't expect our clothing to last more than a few years. I have a hoodie from AWS re:Invent in 2022 that I'm going to have to throw out and replace because the sleeves are dying.
We only remember them as riots because their actions affected those in power. This movement was known as the Luddites, or the followers of Ned Ludd. The word "luddite" has since shifted meaning over time and is now understood as "someone who is against technological development". The Luddites were not against technology like the propaganda from the ownership class would have you believe; they fought against how it was implemented and the consequences of its rollout. They were skeptical that the shitty cloth that the power loom produced would be a net benefit to society, because it meant that customers would inevitably have to buy their clothes over and over again, turning a one-time purchase into a subscription. Would that really benefit consumers, or would that really benefit the owners of the factories?
Nowadays the Heritage Crafts Association of the United Kingdom lists many forms of weaving as Endangered or Critically Endangered crafts, meaning that those skills are either at critical risk of dying out without any "fresh blood" learning how to do it, or the last generation of artisans that know how to do that craft are no longer teaching new apprentices. All that remains of that expertise is now contained in the R&D departments of the companies that produce the next generations of power looms, and whatever heritage crafts practitioners remain.
Remember the Apollo program that let us travel to the moon? It was mostly powered by the Rocketdyne F1 engine. We have all of the technical specifications to build that rocket engine. We know all the parts you need, all the machining you have to do, and roughly understand how it would be done, but we can't build another Rocketdyne F1 because all of the finesse that had been built up around manufacturing it no longer exists. Society has moved on and we don't have expertise in the tools that they used to make it happen.
What are we losing in the process? We won't know until it's gone.
As I've worked through my career in computering, I've noticed a paradox that's made me uneasy and I haven't really been able to figure out why it keeps showing up: the industry only ever seems to want to hire people with the word Senior in their title. They almost never want to create people with the word Senior in their title. This is kinda concerning for me. People get old and no longer want to or are able to work. People get sick and become disabled. Accidental deaths happen and remove people from the workforce.
If the industry at large isn't actively creating more people with the word Senior in their title, we are eventually going to run out of them. This is something that I want to address with Techaro at some point, but I'm not sure how to do that yet. I'll figure it out eventually. The non-conspiratorial angle for why this is happening is that money isn't free anymore and R&D salaries can no longer be written off immediately as business expenses in the US, so software jobs that don't "produce significant value" are more risky to the company. So of course they'd steal from the future to save today. Sounds familiar, doesn't it?
However there's another big trend in the industry that concerns me: companies releasing products that replace expertise with generative AI agents that just inscrutably do the thing for you. This started out innocently enough - it was just better ways to fill in the blanks in your code. But this has ballooned and developed from better autocomplete to the point where you can just assign issues to GitHub Copilot and have the issue magically get solved for you in a pull request. Ask the AI model for an essay and get a passable result in 15 minutes.
At some level, this is really cool. Like, think about it. This reduces toil and drudgery to waiting for half an hour at most. In a better world I would really enjoy having a tool like this to help deal with the toil work that I need to do but don't really have the energy to. Do you know how many more of these essays would get finished if I could offload some of the drudgery of my writing process to a machine?
We are not in such a better world. We are in a world where I get transphobic hate sent to the Techaro sales email. We are in a world where people like me are intentionally not making a lot of noise so that we can slide under the radar and avoid attention by those that would seek to destroy us. We are in a world where these AI tools are being pitched as the next Industrial Revolution, one where foisting our expertise away into language models is somehow being framed as a good thing for society.
There's just one small problem: who is going to be paid and reap the benefits from this change as expectations from the ownership class change? A lot of the ownership class only really experiences the work product outputs of what we do with computers. They don't know the struggles involved with designing things such as the user getting an email on their birthday. They don't want to get pushback on things being difficult or to hear that people want to improve the quality of the code. They want their sparkle emoji buttons to magically make the line go up and they want them yesterday.
We deserve high quality products that are crafted to be exactly what people need, even if they don't know they need it, not cheaply made, mass-produced slop that only incidentally does what people want.
Additionally, if this is such a transformational technology, why are key figures promoting it by talking down to people? Why wouldn't they be using this to lift people up?
As a technical educator, one of the things that I want to imprint onto people is that programming is a skill you can gain and that you too can both program things and learn how to program things. I want there to be more programmers out there. What I am about to say is not an attempt to gatekeep the skill and craft of computering; however, the ways that proponents of vibe coding are going about it are simply not the way forward to a sustainable future.
About a year ago, Cognition teased an AI product named Devin, a completely automated software engineer. You'd assign Devin tasks in Slack or Jira and then it would spin up a VM and plod its way through fixing whatever you asked it to. This demo deeply terrified me, as it was nearly identical to a story I wrote for the Techaro lore: Protos. The original source of that satire was my experience working at a larger company that shall remain unnamed, where the product team seemed to operate under the assumption that the development team had a secret "just implement that feature" button and that we as developers were going out of our way to NOT push it.
Devin was that "implement that feature" button, the same way Protos mythically was. From what I've seen with companies that actually use Devin, it's nowhere near actually being useful and usually needs a lot of hand-holding to do anything remotely complicated, thank God.
The thing that really makes me worried is that the ownership class' expectations about the process of developing software are changing. People are being put on PIPs for not wanting to install Copilot. Deadlines come faster because "the AI can write the code for you, right?" Twitter and Reddit contain myriad stories of "idea guys" using Cursor or Windsurf to generate their dream app's backend and then making posts like "some users claim they can see other people's stuff, what kind of developer do I need to hire for this?" Follow-up posts include gems such as "lol why do coders charge so much???"
By saving money in the short term by producing shitty software that doesn't last, are we actually spending more money over time re-buying nearly identical software after it evaporates from light use? This is the kind of thing that makes Canada not allow us to self-identify as Engineers, and I can't agree with their point more.
Vibe coding is a distraction. It's a meme. It will come. It will go. Everyone will abandon the vibe coding tools eventually. My guess is that a lot of the startups propping up their vibe coding tools are trying to get people into monthly subscriptions as soon as possible so that they can mine passive income as their more casual users slowly give up on coding and just forget about the subscription.
I'm not gonna lie though, the UX of vibe coding tools is top-notch. From a design standpoint it's aiming for that subtle brilliance where it seems to read your mind and then fill in the blanks you didn't even know you needed filled in. This is a huge part of how you can avoid the terror of the empty canvas. If you know what you are doing, an empty canvas represents infinite possibilities. There's nothing there to limit you from being able to do it. You have total power to shape everything.
In my opinion, this is a really effective tool to help you get past that fear of having no ground to stand on. This helps you get past executive dysfunction and just ship things already. That part is a good thing. I genuinely want people to create more things with technology that are focused on the problems that they have. This is the core of how you learn to do new things. You solve small problems that can be applied to bigger circumstances. You gradually increase the scope of the problem as you solve individual parts of it.
I want more people to be able to do software development. I think that it's a travesty that we don't have basic computer literacy classes in every stage of education so that people know how the machines that control their lives work and how to use them to their advantage. Sure, it's not as dopaminergic as TikTok or other social media apps, but there's a unique sense of victory that you get when things just work. Sometimes that feeling you get when things Just Work™ is the main thing that keeps me going. Especially in anno Domini two thousand and twenty five.
The main thing I'm afraid of is people becoming addicted to the vibe coding tools and letting their innate programming skills atrophy. I don't know how to suggest people combat this. I've been combating it by removing all of the automatic AI assistance from my editor (i.e., I'll use a language server, but I won't have my editor do fill-in-the-middle autocomplete for me), but this isn't something that works for everyone. I've found myself more productive without it there, asking a model for the missing square peg to round hole only when I inevitably need some toil code made. I ended up not shipping that due to other requirements, but you get what I'm getting at.
The biggest arguments I have against vibe coding and all of the tools behind it boil down to one major point: these tools have a security foundation of sand. Most of the time when you install and configure a Model Context Protocol (MCP) server, you add an entry to a JSON file that your editor uses to know what tools it can dispatch, along with all of your configuration and API tokens. These MCP servers run as normal OS processes with absolutely no limit to what they can do. They can easily delete all files on your system, install malware into your autostart, or exfiltrate all your secrets without any oversight.
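For the unfamiliar, that JSON file looks roughly like this; the server entry and token here are made up for illustration, so check your editor's docs for the exact shape it expects:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_example_not_a_real_token"
      }
    }
  }
}

Note that the access token sits right there in plaintext, readable by anything on your machine that can read that file.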
Oh, by the way, that whole "it's all in one JSON file with all your secrets" problem? That's now seen as a load-bearing feature so that scripts can automatically install MCP servers for you. You don't even need to gain expertise in how the tools work! There's an MCP server installer MCP server so that you can say "Hey torment nexus, install GitHub integration for me please" and then it'll just do it with no human oversight or review of what you're actually installing. Seems safe to me! What could possibly go wrong?
If this is seriously the future of our industry, I wish that the people involved would take one trillionth of an iota of care about the security of the implementation. This is the poster child for something like the WebAssembly Component Model. This would let you define your MCP servers with strongly typed interfaces to the outside world that can be granted or denied permissions by users with strong capabilities. Combined with the concept of server resources, this could let you expand functionality however you wanted. Running in WebAssembly means that no MCP server can just read ~/.ssh/id_ed25519
and exfiltrate your SSH key. Running in WebAssembly means that it can't just connect to probably-not-malware.lol
and then evaluate JavaScript code with user-level permissions on the fly. We shouldn't have to be telling developers "oh just run it all in Docker". We should have designed this to be fundamentally secure from the get-go. Personally, I only run MCP ecosystem things when contractually required to. Even then, I run it in a virtual machine that I've already marked as known compromised and use separate credentials not tied to me. Do with this information as you will.
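To make the sandboxing point concrete, here's a minimal sketch in Go using the wazero runtime of what running a tool as a WebAssembly guest looks like. The module name is made up, and this is just the isolation idea, not anything the MCP ecosystem actually ships:

package main

import (
	"context"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()

	// The guest gets nothing by default: no filesystem, no network,
	// no environment variables, unless we explicitly grant them.
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// Wire up WASI so the module can talk to stdout and nothing else.
	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	wasmBytes, err := os.ReadFile("hypothetical-mcp-tool.wasm")
	if err != nil {
		log.Fatal(err)
	}

	// No FSConfig, no mounted directories: ~/.ssh simply does not exist
	// from the guest's point of view.
	cfg := wazero.NewModuleConfig().WithStdout(os.Stdout).WithStderr(os.Stderr)
	if _, err := r.InstantiateWithConfig(ctx, wasmBytes, cfg); err != nil {
		log.Fatal(err)
	}
}

The Component Model layers typed, capability-scoped interfaces on top of this, but even the plain WASI defaults get you "deny by default" instead of "full user permissions by default".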
I had a lot of respect for Anthropic before they released this feculent bile that is the Model Context Protocol spec and its initial implementations to the public. It just feels so half-baked and barely functional. Sure, I don't think they expected it to become the Next Big Meme™, but I thought they were trying to do things ethically and above board. Everything I had seen from Anthropic before had such a high level of craft and quality, and this stands out as a glaring exception.
We shouldn't be hand-waving away fundamental concerns like secret management or sandboxing as things the user can opt into later. They're not gonna do it, and we're going to keep having incidents where Cursor goes rogue and nukes your home folder until someone cares enough about the craft of this industry to do it the right way.
I have a unique view into a lot of the impact that AI companies have had across society. I'm the CEO of Techaro, a small one-person startup that develops Anubis, a Web AI Firewall Utility that helps mitigate the load of automated mass scraping so that open source infrastructure can stay online. I've had sales calls with libraries and universities that are just being swamped by the load. There are stories of GitLab servers eating up 64 cores of high-wattage server hardware due to all of the repeated scraping over and over in a loop. I swear a lot of this scraping has to be some kind of dataset arbitrage or something; that's the only thing that makes sense at this point.
And then in the news the AI companies claim "oh no we're just poor little victorian era orphans, we can't possibly afford to fairly compensate the people that made the things that make our generative AI models as great as they are". When the US copyright office tried to make AI training not a fair use, the head of that office suddenly found themselves jobless. Why must these companies be allowed to take everything without recourse or payment to the people that created the works that fundamentally power the models?
The actual answer to this is going to sound a bit out there, but stay with me: they believe that we're on the verge of creating artificial superintelligence; something that will be such a benevolent force of good that any strife in the short term will ultimately be cancelled out by the good that is created as a result. These people unironically believe that a machine god will arise and we'd be able to delegate all of our human problems to it and we'll all be fine forever. All under the thumb of the people that bought the GPUs with dollars to run that machine god.
As someone that grew up in a repressed environment full of evangelical christianity, I recognize this story instantly: it's the second coming of Christ wrapped in technology. Whenever I ask the true believers entirely sensible questions like "but if you can buy GPUs with dollars, doesn't that mean that whoever controls the artificial superintelligence thus controls everyone, even if the AI is fundamentally benevolent?", the responses I get are illuminating. They sound like the kinds of responses that evangelicals give when you question their faith.
Honestly though, the biggest impact I've seen across my friends has been what's happened to art commissions. I'm using these as an indicator for how the programming industry is going to trend. Software development is an art in the same vein as visual/creative arts, but a lot of the craft and process that goes into visual art is harder to notice because it gets presented as a flat single-dimensional medium.
Sometimes it can take days to get something right for a drawing. But most of the time people just see the results of the work, not the process that goes into it. This makes things like prompting "draw my Final Fantasy 14 character in Breath of the Wild" with images as references and getting a result in seconds look more impressive. If you commissioned a human to get a painting like this:
It'd probably take at least a week or two as the artist worked through their commission queue and sent you in-progress works before they got the final results. By my estimates between the artists I prefer commissioning, this would cost somewhere between 150 USD and 500 EUR at minimum. Probably more when you account for delays in the artistic process and making sure the artist is properly paid for their time. It'd be a masterpiece that I'd probably get printed and framed, but it would take a nonzero amount of time.
If you only really enjoy the products of work and don't understand/respect any of the craftsmanship that goes into making it happen, you'd probably be okay with that instantly generated result. Sure the sun position in that image doesn't make sense, the fingers have weird definition, her tail is the wrong shape, it pokes out of the dress in a nonsensical way (to be fair, the reference photos have that too), the dress has nonsensical shading, and the layering of the armor isn't like the reference pictures, but you got the result in a minute!
A friend of mine runs an image board for furry art. He thought that people would use generative AI tools as a part of their workflows to make better works of art faster. He was wrong, it just led to people flooding the site with the results of "wolf girl with absolutely massive milkers showing her feet paws" from their favourite image generation tool in every fur color imaginable, then with different characters, then with different anatomical features. There was no artistic direction or study there. Just an endless flood of slop that was passable at best.
Sure, you can make high quality art with generative AI. There are several comic series where things are incredibly temporally consistent because the artist trained their own models and took the time to genuinely gain expertise with the tools. They filter out the hallucination marks. They take the time to use it as a tool to accelerate their work instead of replacing their work. The boards they post to go out of their way to excise the endless flood of slop, and by controlling how the tools work those artists actually get a better result than they got by hand, much like how the skilled weavers were able to produce high quality cloth faster and cheaper with the power looms.
We are at the point where the artists want to go and destroy the generative image power looms. Sadly, they can't even though they desperately want to. These looms are locked in datacentres that are biometrically authenticated. All human interaction is done by a small set of trusted staff or done remotely by true believers.
I'm afraid of this kind of thing happening to the programming industry. A lot of what I'm seeing with vibe coding leading to short term gains at the cost of long term toil lines up with this. Sure you get a decent result now, but long-term you have to go back and revise the work. This is a great deal if you are the one producing the software, though, because it means you have turned one-time purchases into repeat customers: the shitty software you sold them inevitably breaks, forcing the customer to purchase fixes. The one-time purchase inevitably becomes a subscription.
We deserve more in our lives than good enough.
Look, CEOs, I'm one of you so I get it. We've seen the data teams suck up billions for decades and this is the only time that they can look like they're making a huge return on the investment. Cut it out with shoving the sparkle emoji buttons in my face. If the AI-aided product flows are so good then the fact that they are using generative artificial intelligence should be irrelevant. You should be able to replace generative artificial intelligence with another technology and then the product will still be as great as it was before.
When I pick up my phone and try to contact someone I care about, I want to know that I am communicating with them and not a simulacrum of them. I can't have that same feeling anymore due to the fact that people that don't natively speak English are much more likely to filter things through ChatGPT to "sound professional".
I want your bad English. I want your bad art. I want to see the raw unfiltered expressions of humanity. I want to see your soul in action. I want to communicate with you, not a simulacrum that stochastically behaves like you would by accident.
And if I want to use an LLM, I'll use an LLM. Now go away with your sparkle emoji buttons and stop changing their CSS class names so that my uBlock filters keep working.
This year has been a year full of despair and hurt for me and those close to me. I'm currently afraid to travel to the country I have citizenship in because the border police are run under a regime that is dead set on either eliminating us or legislating us out of existence. In this age of generative AI, I just feel so replaceable at my dayjob. My main work product is writing text that convinces people to use globally distributed object storage in a market where people don't realize that's something they actually need. Sure, this means that my path forward is simple: show them what they're missing out on. But I am just so tired. I hate this feeling of utter replaceability because you can get a result 80% as good as what I produce with a single invocation of OpenAI's Deep Research.
Recently a decree came from above: our docs and blogposts need to be optimized for AI models as well as humans. I have domain expertise in generative AI, I know exactly how to write SEO tables and other things that the AI models can hook into seamlessly. The language that you have to use for that is nearly identical to what the cult leader used that one time I was roped into a cult. Is that really the future of marketing? Cult programming? I don't want this to be the case, but when you look out at everything out there, you can't help but see the signs.
Aspirationally, I write for humans. Mostly I write for the version of myself that was struggling a decade ago, unable to get or retain employment. I create things to create the environment where there are more like me, and I can't do that if I'm selling to soulless automatons instead of humans. If the artificial intelligence tools were…well…intelligent, they should be able to derive meaning from unaltered writing instead of me having to change how I write to make them hook better into it. If the biggest thing they're sold for is summarizing text and they can't even do that without author cooperation, what are we doing as a society?
Actually, what are we going to do when everyone that cares about the craft of software ages out, burns out, or escapes the industry because of the ownership class setting unrealistic expectations on people? Are the burnt out developers just going to stop teaching people the right ways to make software? Is society as a whole going to be right when they look back on the good old days and think that software used to be more reliable?
Frank Herbert's Dune world had superintelligent machines at one point. It led to a galactic war and humanity barely survived. As a result, all thinking machines were banned, humanity was set back technologically, and a rule was created: Thou shalt not make a machine in the likeness of a human mind. For a very long time, I thought this was very strange. After all, in a fantasy scifi world like Dune, thinking machines could automate so much toil that humans had to process. They had entire subspecies of humans that were functionally supercomputers with feelings that were used to calculate the impossibly complicated stellar draft equations so that faster-than-light travel didn't result in the ship zipping into a black hole, star, moon, asteroid, or planet.
After seeing a lot of the impact across humanity in late 2024 and into 2025, I completely understand the point that Frank Herbert had. It makes me wish that I could leave this industry, but this is the only thing that pays enough for me to afford life in a world where my husband gets casually laid off after being at the same company for six and a half years because some number in a spreadsheet put him on the shitlist. Food and rent keep going up here, but wages don't. I'm incredibly privileged to be able to work in this industry as it is (I make enough to survive, don't worry), but I'm afraid that we're rolling the ladder up behind us so that future generations won't be able to get off the ground.
Maybe the problem isn't the AI tools, but the way they are deployed, who benefits from them, and what those benefits really are. Maybe the problem isn't the rampant scraping, but the culture of taking without giving anything back that ends up with groups providing critical infrastructure like FFmpeg, GNOME, Gitea, FreeBSD, NetBSD, and the United Nations having to resort to increasingly desperate measures to maintain uptime.
Maybe the problem really is winner-take-all capitalism.
The deployment of generative artificial intelligence tools has been a disaster for the human race. They have allowed a select few to gain "higher productivity"; but they have destabilized society, have made work transactional, have subjected artists to indignities, have led to widespread psychological suffering for the hackers that build the tools AI companies rely on, and inflict severe damage on the natural world. The continued development of this technology will worsen this situation. It will certainly subject human beings to greater indignities and inflict greater damage on the natural world, it will probably lead to greater social disruption and psychological suffering, and it may lead to increased physical suffering even in "advanced" countries.
For other works in a similar vein, read these:
Special thanks to the following people that read and reviewed this before release:
2025-06-15 08:00:00
This was a lightning talk I did at BSDCan. It was a great conference and I'll be sure to be there next year!
Hi, I'm Xe, and I fight bots in my free time. I'd love to do it full time, but that's not financially in the cards yet. I made Anubis. Anubis is a web AI firewall utility that stops the bots from taking out your website. It's basically the Cloudflare "Are you a bot?" page, but self-hostable.
And it does this without a CAPTCHA. Scrapers have CAPTCHA solvers built in. These CAPTCHA solvers are effectively APIs that just have underpaid third-world humans in the loop, and it's just kind of bad and horrible.
So Anubis is an uncaptcha. It uses features of your browser to automate a lot of the work that a CAPTCHA would, and right now the main implementation has it run a bunch of cryptographic math with JavaScript to prove that you can run JavaScript in a way that can be validated on the server. I'm working on obviating that because surprisingly many people get very angry about having to run JavaScript, but it's in the cards.
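For the curious, that "cryptographic math" is a proof-of-work check whose general shape looks like the sketch below (this is the idea, not Anubis's exact wire format or parameters): the browser burns a little CPU finding a nonce, and the server can verify it cheaply.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
)

// verify checks that sha256(challenge + nonce) starts with `difficulty`
// hex zeroes. Cheap for the server, annoying for a scraper to do at scale.
func verify(challenge, nonce string, difficulty int) bool {
	sum := sha256.Sum256([]byte(challenge + nonce))
	return strings.HasPrefix(hex.EncodeToString(sum[:]), strings.Repeat("0", difficulty))
}

// solve brute-forces a nonce the way the client-side JavaScript does.
func solve(challenge string, difficulty int) string {
	for i := 0; ; i++ {
		if nonce := strconv.Itoa(i); verify(challenge, nonce, difficulty) {
			return nonce
		}
	}
}

func main() {
	const challenge = "example-challenge-token"
	nonce := solve(challenge, 4)
	fmt.Println("nonce:", nonce, "valid:", verify(challenge, nonce, 4))
}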
Anubis is open source software written in Go. It's on GitHub. It's got like eight kilostars. It works on any stack that lets you run more than one program. We have examples for Nginx, Caddy, Apache, and Kubernetes.
It's in your package repos. If you do ports for FreeBSD or pkgsrc for NetBSD, please bump the version. I'm about to release a new one, but please bump the current version.
So you might be wondering, what's the story? Why does Anubis exist?
Well, this happened. I have a Git server for my own private evil plans, and Amazon's crawler discovered it through TLS certificate transparency logs and decided to unleash the hammer of God. And that happened. They had the flamethrower of requests just burning down my poor server, and it was really annoying because I was trying to do something and it just didn't work. Also helps if you don't schedule your storage on rotational drives.
But I published it on GitHub, and like four months later, look at all these logos. There's more logos that I forgot to put on here and will be in the version on my website. But like, yeah, it's used by FreeBSD, NetBSD, Haiku, GNOME, FFmpeg, and the United Nations Educational, Scientific, and Cultural Organization. Honestly, seeing UNESCO just through a random DuckDuckGo search made me think, huh, maybe this is an actual problem. And like any good problem, it's a hard problem.
How do you tell if any request is coming from a browser?
This screenshot right here uses Pale Moon, which is a known problem child in terms of bot detection services and something that I actively do test against to make sure that it works. But how do you know if any given request is coming from a browser?
It’s very hard, and I have been trying to find ways to do it better. The problem is, in order to know what good browsers look like, you have to know what bad scrapers look like. And the great news is that scrapers look like browsers, asterisk. So you have to find other signals, like behaviors, third-party signals, or third-order side effects. It’s a huge pain.
So as a result, I'm trying a bunch of fingerprinting methods. A lot of the fingerprints I've listed here, like JA4 and JA3N, are based on the TLS information that you send to every website whether you want to or not, because that's how security works. I'm also trying to do stuff based on the HTTP request headers and the HTTP/2 packets that you send to the server, which you have to send in order for things to work. And I'm falling back to: can you run JavaScript, lol?
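Here's a rough Go sketch of the kind of TLS fingerprinting I mean. It's the flavor of JA3/JA4, not either algorithm's exact recipe, and cert.pem/key.pem are placeholder paths:

package main

import (
	"crypto/sha256"
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

// fingerprint hashes a few stable ClientHello fields into an opaque ID.
func fingerprint(chi *tls.ClientHelloInfo) string {
	h := sha256.New()
	for _, c := range chi.CipherSuites {
		fmt.Fprintf(h, "c%d,", c)
	}
	for _, g := range chi.SupportedCurves {
		fmt.Fprintf(h, "g%d,", g)
	}
	for _, p := range chi.SupportedProtos {
		fmt.Fprintf(h, "p%s,", p)
	}
	return fmt.Sprintf("%x", h.Sum(nil))[:16]
}

func main() {
	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			// GetConfigForClient hands us the raw ClientHello before the
			// handshake completes, which is where the fingerprint lives.
			GetConfigForClient: func(chi *tls.ClientHelloInfo) (*tls.Config, error) {
				log.Printf("tls fingerprint for %s: %s", chi.Conn.RemoteAddr(), fingerprint(chi))
				return nil, nil
			},
		},
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}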
So in terms of things I want to do next, obviously, I want to do better testing on BSD. Right now my testing is: does it compile? And because I've written it in Go without Cgo, that answer is yes. I want to build binary packages for BSDs, because even though I think it's better suited by downstream ports and stuff, I still want to have those packages as an option.
I want to do a hosted option like Cloudflare, because some people just don't want to run Anubis but want to run Anubis. I want to do system load-based thresholds, so it only kicks in, and only gets aggressive, when things are actively on fire. I want to have better NoJS support, which will include every way to tell that something is a browser without JavaScript, in ways that make you read all of the specs and start having an existential breakdown. I want to do stuff with WebAssembly on the server, because I've always wanted to see how that would blow up in prod. I want to do an IP reputation database, Kubernetes stuff, and end-to-end testing that doesn't suck.
And finally, there's one of the contributors that I really want to hire, but I can't afford to yet, so I'd love to when I can.
Also, if you work at an AI company, I know AI companies follow me. If you are working at an AI company, here's how you can sabotage Anubis development as easily and quickly as possible. So first is quit your job, second is work for Square Enix, and third is make absolute banger stuff for Final Fantasy XIV. That’s how you can sabotage this the best.
Anyways, I've been Xe, I have stickers, I'll be in the back, and thank you for having me here. And if you have any questions, please feel free to ask.
Well, as the con chair, I'm supposed to discourage people from making comments instead of questions. I'm going to abuse my position and make a comment. You saved my butt, thank you.
You're welcome. I'm so happy that it's worked out. It’s a surreal honor to—let me get back to the logo slide, because this is nuts.
Let’s just look at this. That's GNOME, that's Wine, that's Dolphin, that's the Linux kernel, that's ScummVM, that's FreeCAD, and UNESCO on the same slide. What other timeline could we have?
This 2025 has been wild.
So how are your feelings? Because you’re basically trying to solve not a technical problem, but actually it’s more of a problem of society. Do you think it is winnable that way, or do we have to fight this problem in another way and make people, well, smarter is probably the wrong word.
I am not sure what the end game is for this. I started out developing it for, I want my Git server to stay up. Then gnome started using it. And then it became a thing. I put it under the GitHub org of a satirical startup that I made up for satire about the tech industry. And now that has a market in education.
I want to make this into a web application firewall that can potentially survive the AI bubble bursting. Because right now the AI bubble bursting is the biggest threat to the business, as it were. So a lot of it is figuring out how to pivot and do that. I've also made a build tool called Yeet that uses JavaScript to build RPM packages. Yes, there is a world where that does make sense. It's a lot of complicated problems. And there are a lot of social problems.
But if you’re writing a scraper, don't. Like seriously, there is enough scraping traffic already. Use Common Crawl. It exists for a reason.
2025-06-09 08:00:00
EDIT(2025-06-09 20:51 UTC): The containerization stuff they're using is open source on GitHub. Digging into it. Will post something else when I have something to say.
This year's WWDC keynote was cool. They announced a redesign of the OSes, unified the version numbers across the fleet, and found ways to hopefully make AI useful (I'm reserving my right to be a skeptic based on how bad Apple Intelligence currently is). However, the keynote slept on the biggest announcement for developers: they're bringing the ability to run Linux containers in macOS:
Containerization Framework: The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It’s built on an open-source framework optimized for Apple silicon and provides secure isolation between container images.
This is an absolute game changer. One of the biggest pain points with my MacBook is that the battery life is great...until I start my Linux VM or run the Docker app. I don't even know where to begin to describe how cool this is and how it will make production deployments so much easier to access for the next generation of developers.
Maybe this could lead to Swift being a viable target for web applications. I've wanted to use Swift on the backend before but Vapor and other frameworks just feel so frustratingly close to greatness. Combined with the Swift Static Linux SDK and some of the magic that powers Private Cloud Compute, you could get an invincible server side development experience that rivals what Google engineers dream up directly on your MacBook.
I can't wait to see more. This may actually be what gets me to raw-dog beta macOS on my MacBook.
The things I'd really like to know:
I really wonder how Docker is feeling, I think they're getting Sherlocked.
Either way, cool things are afoot and I can't wait to see more.
2025-05-23 08:00:00
While working on Anubis (a Web AI Firewall Utility designed to stop rampant scraping from taking out web services), one question in particular keeps coming up: why is there an anime girl on the challenge page, and can I remove her?
This is sometimes phrased politely. Other times people commenting on this display a measured lack of common courtesy.
The Anubis character is displayed by default as a way to ensure that I am not the lone unpaid dependency peg holding up a vast majority of the Internet.
Of course, nothing is stopping you from forking the software to replace the art assets. Instead of doing that, I would rather you support the project and purchase a license for the commercial variant of Anubis named BotStopper. Doing this will make sure that the project is sustainable and that I don't burn myself out to a crisp in the process of keeping small internet websites open to the public.
At some level, I use the presence of the Anubis mascot as a "shopping cart test". If you either pay me for the unbranded version or leave the character intact, I'm going to take any bug reports more seriously. It's a positive sign that you are willing to invest in the project's success and help make sure that people developing vital infrastructure are not neglected.
There's been some online venom and vitriol about the use of a cartoon that people only see for about 3 seconds on average, and it makes me wonder if I should have made this code open source in the first place. The anime image is load-bearing. It is there as a social cost. You are free to replace it, but I am also free to make parts of the program rely on the presence of the anime image in order to do more elaborate checks, such as checks that do not rely on JavaScript.
Amusingly, this has caused some issues with the education market because they want a solution NOW and their purchasing process is a very slow and onerous beast. I'm going to figure out a balance eventually, but who knew that the satirical tech startup I made up as a joke would end up having a solid foothold in the education market?
One of the best side effects of the character being there is that it's functioned as a bit of a viral marketing campaign for the project. Who knows how many people learned that Anubis is there, functional, and works well enough for people to complain about, all because someone got incensed online about the fact that the software shows a human-authored work of art for a few seconds?
I want this project to be sustainable; and in the wake of rent, food prices, and computer hardware costs continuing to go up I kinda need money because our economy runs on money, not GitHub stars.
I have a no-JS solution that should be ready soon (I've been doing a lot of unpublishable reverse engineering of how browsers work), but I also need to figure out how to obfuscate it so that the scrapers can't just look at the code to fix their scrapers. So far I'm looking at WebAssembly on the server for this. I'll let y'all know more as I have it figured out on my end. There will be some fun things in the near future, including but not limited to external services to help Anubis make better decisions on when to throw or not throw challenges.
Hopefully the NLNet application I made goes through; funding to buy a few months of development time would go a long way. There has been venture capital interest in Anubis, so that's a potential route to go down too.
Thanks for following the development of Anubis! If you want to support the project, please throw me some bucks on GitHub Sponsors.
2025-05-07 08:00:00
Co-Authored-By: @scootaloose.com
Windows has been a pain in the ass as of late. Sure, it works, but there's starting to be so much overhead between me and the only reason I bother booting into it these days: games. Every so often I'll wake up to find out that my system rebooted, and when I sign in I'm greeted with yet another "pweez try copilot >w< we pwomise you will like it ewe" full-screen dialogue box with "yes" or "nah, maybe later" as my only options. That, or we find out that they somehow found a reason to put AI into another core Windows tool, probably from a project manager’s desperate attempt to get promoted.
The silicon valley model of consent
— Xe ( @xeiaso.net ) March 31, 2025 at 11:59 PM
As much as I'd like to like Copilot, Recall, or Copilot (yes, those are separate products), if a feature is genuinely transformative enough to either justify the security risk of literally recording everything I do or to enhance the experience of using my computer enough to hand over control to an unfeeling automaton, I'll use it. It probably won't be any better than Apple Intelligence though.
When we built our gaming towers, we decided to build systems around the AMD Ryzen 7950X3D and NVidia RTX 4080. These are a fine combination in practice. You get AMD's philosophy of giving you enough cores that you can do parallel computing jobs without breaking a sweat and the RTX 4080 being one of the best cards on the market for rasterization and whatever ray tracing you feel like doing. I don't personally do ray tracing in games, but I like that it is an option for people who want to.
The main problem with NVidia GPUs is that NVidia's consumer graphics department seems to be under the assumption that games don't need as much video memory as they do. You get absolutely bodied in the amount of video memory. Big games can use upwards of 15 GB of video memory, and the OS plus Firefox needs 2 GB of video memory. In total, that's one more gigabyte than the 16 I have. You can't just plug in more VRAM either; you need to either get unobtainium-in-Canada RTX 4090s or pay several body organs for enterprise-grade GPUs.
AMD is realistically the only other option on the market. AMD sucks for different reasons, but at least they give you enough video memory that you can survive.
One of the most frustrating issues we've run into as of late is macrostutters when gaming. Macrostutters are when the game hitches and the entire rendering pipeline gets stuck for at least two frames, then everything goes back to normal. This is most notable in iRacing and Final Fantasy XIV. In iRacing's case, it can cause you to get into an accident because you get a pause anywhere from over 100 milliseconds to 5 seconds. Mind you, the game is playable, but the macrostutters can make the experience insufferable.
In the case of Final Fantasy XIV (amazing game by the way, don't play it), this can cause you to get killed because you missed an attack telegraph due to it happening while your rendering pipeline was stopped. I have been killed by macrostutters as white mage (pure healer class, for my fellow RPG aficionados) in Windows at least 3 times in the last week and I hate it.
So, the thought came to our minds: why are we bothering with Windows? We've had a good experience with SteamOS on our Steam Decks.
We have a home theatre PC that runs Bazzite. A little box made up of older hardware we upgraded from. Runs tried and true hardware that has matured well and not a single unknown variable in it (AMD Ryzen 5 3600 and an RX5700XT, on a B450 motherboard, the works). Besides the normal HDR issues on Linux, it's been pretty great for couch gaming!
I've also been using Linux on the desktop off and on for years. My career got started because Windows Vista was so unbearably bad that I had to learn how to use Linux on the desktop in order to get a usable experience out of my dual core PC with 512 MB of ram.
Surely 2025 will be the year of the Linux Desktop.
My husband has very simple computing needs compared to me. He doesn't do software development in his free time (save simple automation with PowerShell, bash, or Python). He doesn't do advanced things like elaborate video editing, 3d animation, or content creation. Sure sometimes he'll need to clip a segment out of a longer video file, but really that's not the same thing as making an hbomberguy video or streaming to Twitch. The most complicated thing he wants to do at the moment is play Final Fantasy XIV, which as far as games go isn't really that intensive.
I have some more complicated needs seeing as software I make runs on UNESCO servers, but really as long as I have a basic Linux environment, Homebrew, and Steam, I'll be fine. I am also afflicted with catgirl simulator, but I do my streaming from Windows due to vtubing software barely working there and me being enough of a coward to not want to try to run it in Linux again.
When he said he wanted to go for Linux on the desktop, I wanted to make sure that we were using the same distro so that I had enough of the same setup to be able to help when things inevitably go wrong. I wanted something boring, well-understood, and strongly supported by upstream. I ended up choosing the most boring distribution I could think of: Fedora.
Fedora is many things, but it's what systemd, mesa, the Linux Kernel, and GNOME are developed against. This means that it's one of the most boring distributions on the planet. It has most of the same package management UX ergonomics as Red Hat Enterprise Linux, it's well documented and most of the quirks are well known or solved, and overall it's the least objectionable choice on the planet.
In retrospect, I'm not sure if this was a mistake or not.
He wanted to build a pure AMD system to stave off any potential NVidia related problems. We found some deals and got him the following:
I had recently just installed Fedora 41 on my tower and had no issues. My tower has an older CPU and motherboard so I didn't expect any problems. Most of that hardware I listed above was released after Fedora 41 was released in late October 2024. I expected some issues for hardware compatibility for the first boot, but figured that an update and reboot would fix it. From experience I know that Fedora doesn't ever roll new install images after they release a major version. This makes sense from their perspective for mirror bandwidth reasons.
When we booted into the installer on his tower, the screen was stuck at 1024x768 on a 21:9 ultrawide. Fine enough, we can deal with that. The bigger problem was the fact that the ethernet card wasn't working. It wasn't detected in the PCI device tree. Luckily the board shipped with an embedded Wi-Fi card, so we used that to limp our way into Fedora. I figured it'd be fine after some updates.
It was not fine after that. The machine failed to boot after that round of updates. It felt like the boot splash screen was somehow getting the GPU driver into a weird state and the whole system hung. Verbose boot didn't work. I was almost worried that we had dead hardware or something.
Okay, fine, the hardware is new. I get it. Let's try Fedora 42 beta. Surely that has a newer kernel, userland, and everything that we'd need to get things working out of the box.
Yep, it did. Everything worked out of the box. The ethernet card was detected and got an IP instantly. The install was near instant. We had the full screen resolution at 100hz like we expected, and after the install, 1Password and other goodies were set up. Steam was installed, Final Fantasy XIV was set up, the controller was configured, and a good time was had by all. The microphone and DAC even worked!
Once everything was working, I set up an automount for the NAS so that he could access our bank of wallpapers and the like. Everything was working and we were happy.
Coincidentally, we built the system the day before Fedora 42 was released. I had him run an update and he chose to do it from the package management GUI, “Discover”. I have a terminal case of Linux brain and don't feel comfortable running updates in a way that I can't see the logs. This is what happens when you do SRE work for long enough. You don't trust anything you can't directly look at or touch.
We rebooted for the update and then things started to get weird. The biggest problem was X11 apps not working. We got obscure XWayland errors that a mesa dev friend never thought were possible. I seriously began to get worried that we had some kind of half-hardware failure or something inexplicable like that.
I thought that there was some kind of strange issue upgrading from Fedora 42 Beta to Fedora 42 full. I can't say why this would happen, but it's completely understandable to go there after a few hours of fruitless debugging. We reinstalled because we ran out of ideas.
Once everything was back and running, we ran into a strange issue: Steam kept starting on the integrated GPU instead of the dedicated GPU. This would be a problem, but luckily enough games somehow preferred using the dedicated GPU so it all worked out. After an update got pushed, this caused Steam to die or sometimes throw messages about chromium not working on the GPU "llvmpipe".
Debugging this was really weird. Based on what we could figure out with a combination of nvtop, hex-diving into /sys, and other demonic incantations that no mortal should understand, the system somehow flagged the dedicated GPU as the integrated GPU and vice versa. This was causing the system to tell Steam and only Steam that it needed to start on the integrated GPU.
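If you ever need to do the same archaeology, this is roughly the kind of /sys spelunking involved (the exact card numbering will differ per machine):

$ lspci -nn | grep -i vga
$ for card in /sys/class/drm/card?; do
    echo "$card -> $(readlink -f "$card"/device) boot_vga=$(cat "$card"/device/boot_vga 2>/dev/null)"
  done

boot_vga is the kernel's opinion of which GPU is the "primary" one, which is exactly the sort of flag that can end up pointing at the wrong card.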
After increasingly desperate means of trying to disable the integrated GPU or de-prioritize it, we ended up disabling the integrated GPU in the bios. I was worried this would make debugging a dead dedicated GPU harder, but my husband correctly pointed out that we have at least 5 known working GPUs of different generations laying around with the right power connectors.
Anyways we got everything working but sometimes when resuming from sleep Final Fantasy XIV causes a spectacular shader pipeline explosion. I'm not sure how to describe it further, but in case you have any idea how to debug this we've attached a video:
I'm pretty sure this is a proton issue, or a mesa issue, or an amdgpu issue, or a computer issue. If I had any idea where to file this it'd be filed, but when we tried to debug it and get a GPU pipeline trace the problem instantly vanished. Aren't computers the best?
S3 suspend is not a solved problem in the YOTLD 2025. Sometimes on resume the display driver crashes and my husband needs to force a power cycle. When he rebooted, XWayland apps wouldn't start. Discord, Steam, and Proton depend on XWayland. This is a very bad situation.
Originally we thought the display driver crashing was causing this, but after manual restarts under normal circumstances also started causing it, it got our attention. The worst part was that this was inconsistent, almost like something in the critical dependency chain was working right sometimes and not working at all other times. We started to wonder if Fedora actually tested anything before shipping it, because updates made the pattern of working vs not working change.
One of the simplest apps in the X11 suite is xeyes. It's a simple little thing where it has a pair of cartoon eyes that look at your mouse cursor. It's the display pipeline equivalent of pinging google.com to make sure your internet connection works. If you've never seen it before, here's what it looks like:
Alas, it was not working.
After some investigation, the only commonality I could find was the X11 socket folder in /tmp
not existing. X11 uses Unix sockets (sockets but via the filesystem) for clients (programs) to communicate with the server (display compositor). If that folder isn't created with the right flags, XWayland can't create the right socket for X clients and will rightly refuse to work.
On a hunch, I made xxx-hack-make-x11-dir.service
:
[Unit]
Description=Simple service test
After=tmp.mount
Before=display-manager.service
[Service]
Type=simple
ExecStart=/bin/bash -c "mkdir -p /tmp/.X11-unix; chmod -R 1777 /tmp/.X11-unix"
[Install]
WantedBy=local-fs.target
This seemed to get it working. It worked a lot more reliably when I properly set the sticky bit on the .X11-unix
folder so that his user account could create the XWayland socket.
In case you've never seen the "sticky bit" in practice before, Unix permissions have three main fields per file: what the owning user can do, what the owning group can do, and what everyone else can do.
This applies to both files and folders (where the read bit on a folder is what lets you list its contents and the execute bit is what lets you actually enter it and touch the files inside; I don't fully get it either). However, in practice there's a secret fourth field which includes magic flags like the sticky bit.
The sticky bit is what makes temporary files work for multi-user systems. At any point, any program on your system may need to create a temporary file. Many programs will assume that they can always create temporary files. These programs may be running as any user on the system, not just the main user account for the person that uses the computer. However, you don't want users to be able to clobber each other's temporary files because the write bit on folders also allows you to delete files. That would be bad. This is what the sticky bit is there to solve: making a folder that everyone can write to, but only the user that created a temporary file can delete it.
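If you want to watch the bit do its job, here's a quick sketch (the paths and output are made up for illustration):

$ mkdir /tmp/no-sticky /tmp/sticky
$ chmod 0777 /tmp/no-sticky
$ chmod 1777 /tmp/sticky    # the leading 1 is the sticky bit, shown as a trailing 't' by ls
$ ls -ld /tmp/sticky
drwxrwxrwt. 2 xe xe 40 May  5 21:33 /tmp/sticky

Any user can create files in both folders, but in /tmp/sticky only a file's owner (or root) can delete it, while in /tmp/no-sticky anyone can rm anyone else's files.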
Notably, the X11 socket directory needs to have the sticky bit set because of facts and circumstances involving years of legacy cruft that nobody wants to fix.
$ stat /tmp/.X11-unix
File: /tmp/.X11-unix
Size: 120 Blocks: 0 IO Block: 4096 directory
Device: 0,41 Inode: 2 Links: 2
Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2025-05-05 21:33:39.601616923 -0400
Modify: 2025-05-05 21:34:09.234769003 -0400
Change: 2025-05-05 21:34:09.234769003 -0400
Birth: 2025-05-05 21:33:39.601616923 -0400
Once xxx-hack-make-x11-dir.service
was deployed, everything worked according to keikaku.
The system was stable. Everything was working. But when multiple people that work at Red Hat are telling you that the problems you are running into are so strange that you need to start filing bug reports in the dark sections of the bug tracker, you start to wonder if you're doing something wrong. The system was having configuration error-like issues on components that do not have configuration files.
While we were drafting this article, we decided to take a look at the problem a bit further. There was simply no way that we needed xxx-hack-make-x11-dir.service
as a load-bearing dependency on our near plain install of Fedora, right? This should just work out of the box, right???
We went back to the drawing board. His system was basically stock Fedora, and we only really did three things to it outside of the package management universe:
/mnt/itsuki
xxx-hack-make-x11-dir.service
to frantically hack around issues
Notably, I had the NAS automount set up too and was also having strange issues with the display stack, including but not limited to the GNOME display manager forgetting that Wayland existed and instantly killing itself on launch.
On a hunch, we disabled the units in the reverse order that we created them to undo the stack and get closer to stock Fedora. First, we disabled the xxx-hack-make-x11-dir.service
unit. When he rebooted, this broke XWayland as we expected. Then we disabled the NAS automount and rebooted the system.
XWayland started working.
My guess is that this unit somehow created a cyclical dependency:
# mnt-itsuki.automount
[Unit]
Requires=remote-fs-pre.target
After=remote-fs-pre.target
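# (these two lines above are what I suspect dragged the automount into the
# boot-ordering loop; the fixed unit below drops them)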
[Automount]
Where=/mnt/itsuki
TimeoutIdleSec=0
[Install]
WantedBy=remote-fs.target
Turns out it was me. The actual unit I wanted was this:
# mnt-itsuki.automount
[Unit]
[Automount]
Where=/mnt/itsuki
TimeoutIdleSec=0
[Install]
WantedBy=multi-user.target
Thanks, Arch Linux Wiki page on Samba!
Other than that, everything's been fine! The two constants that kept working throughout all of this were 1Password and Firefox, modulo that one time I updated Firefox in dnf
and then got a half-broken browser until I restarted it. I did have to disable the nftables
backend in libvirt in order to get outbound TCP connections working though.
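If you need to do the same thing, the knob I flipped was (as far as I can reconstruct it) the firewall backend setting in /etc/libvirt/network.conf; newer libvirt versions default to nftables there, so double-check the docs for your version before copying this:

# /etc/libvirt/network.conf
firewall_backend = "iptables"

Then restart the libvirt daemons for it to take effect.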
Fedora is pretty set and forget, but it's not without its annoyances. The biggest one is how Fedora handles patented video codecs and how this intersects with FFmpeg, the swiss army chainsaw of video conversion.
Fedora ships a variant of FFmpeg they call ffmpeg-free
. Notably this version has "non-free" codecs compiled out, so you can deal with WebM, AV1, and other codecs without issue. However h.264, the codec inside most .mp4 files, is not in that codec list. Basically everything on the planet supports h.264, so it's the de-facto default format that many systems use. Heck, all the videos I've embedded into this post are encoded with h.264.
You can pretty easily swap out ffmpeg-free with normal un-addled ffmpeg if you install the RPM Fusion repository, but that has its own fun.
RPM Fusion is the not-quite-official-but-realistically-most-users-use-it-so-it's-pretty-much-official side repo that lets you install "non-free" software. This is how you get FFmpeg, Steam, and the NVidia binary drivers that make your GPU work.
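The swap itself is short; these are roughly the commands from RPM Fusion's own docs, so check their site for the current incantation for your Fedora release:

$ sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf swap ffmpeg-free ffmpeg --allowerasing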
One of the most annoying parts about RPM Fusion is that whenever they push new versions of anything, every old package is deleted off of their servers. This means that if you need to do a downgrade to debug issues (like strange XWayland not starting issues), you CANNOT restore your system to an older state because the package manager will see that the packages it needs aren't available from upstream and rightly refuse to put your system in an inconsistent state.
I have tried to get in contact with the RPMFusion team to help them afford more storage should they need it, but they have not responded to my contact attempts. If you are someone or know someone there that will take money or storage donated on the sole condition that they will maintain a few months of update backlog, please let me know.
I'm not really sure how to end something like this. Sure things mostly work now, but I guess the big lesson is that if you are a seasoned enough computer toucher, eventually you will stumble your way into a murder mystery and find out that you are both the killer and the victim being killed at the same time.
But, things work* and I'm relatively happy with the results.
2025-04-21 08:00:00
If you wanted to give me money but Patreon was causing grief, I'm on GitHub Sponsors now! Help me reach my goal of saving the world from AI scrapers with the power of anime.