2026-03-31 08:00:00
Yesterday, I had the chance to watch one of the most dedicated, competent advocates for privacy and digital rights bring that message to a whole new platform. It turns out, it's pretty delightful, especially in a moment when our civil liberties and rights online couldn't matter more!
Cindy Cohn, the executive director of the Electronic Frontier Foundation, has been a tireless fighter for protecting everyone's digital civil liberties, and I was lucky enough to tag along as she took the story of that work to The Daily Show. It was no surprise that the conversation was fluent and insightful, but I think a lot of people in the audience didn't expect it to be such a fun, even delightful, conversation about a topic that is too often confusing or complicated or boring.
Six years ago, when I first joined the board of the EFF, I was already a believer in the organization's core principles, but one of my biggest hopes was that the team's message and mission could be brought to a larger audience. That couldn't have been accomplished more perfectly than by watching Cindy take topics that are fairly technical, or that involve fairly arcane legal concerns, and make them truly accessible. And this work is vital, because both an overreaching, authoritarian government and the irresponsible, unaccountable forces of big tech are threatening our rights more than ever.

I gotta admit, it was pretty fun to watch Cindy hand Jon a "Let's Sue the Government!" t-shirt. You can get one just like his if you donate to EFF or become a member!
More broadly, though, the interview was also just a wonderful milestone at a personal level. Part of the story that Cindy was telling on the show is the broader narrative she captures in her book, Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, out from MIT Press. (And full disclosure there: I recently joined their management board as well, more on that soon.) The book captures so many of the lessons that can only come from decades of fighting in the trenches, the kinds of lessons that so many organizations will need in order to be resilient in the years to come, even if they're not working in the same disciplines. In addition to being something of a valedictory for Cindy's tenure at the EFF, the lessons of the book set the stage for the next chapter that promises to unfold under incoming executive director Nicole Ozer, as she carries this work forward.
But if it isn't clear enough, I'll say it directly: as happy as I am to celebrate good people getting the word out about vital work, these are dangerous and trying times. The most powerful people and companies in the world, along with the most authoritarian administration we've ever seen, are working to roll back the digital rights we rely on every day to benefit from the power of the Internet. The rights that EFF protects for all of us couldn't matter more. So, if you can, support the EFF with your donation (you can even get a copy of Cindy's book if you become a Gold-level member!) and take action in your own community to help push back against the onslaught of bad policy and corporate overreach that threatens us all.
And finally, for those of you in NYC: If you liked the conversation above, and want to dig in even further, come out and join us on April 23, where I'll be sitting down with Cindy at the Brooklyn Public Library's Central Library. It promises to be an engaging conversation, and I hope to see some of you there!
2026-03-27 08:00:00
You must imagine Sam Altman holding a knife to Tim Berners-Lee's throat.
It's not a pleasant image. Sir Tim is, rightly, revered as the genial father of the World Wide Web. But all the signs point to this being endgame for "open" as we've known it on the Internet over the last few decades.
The open web is something extraordinary: anybody can use whatever tools they have to create content following publicly documented specifications, publish it using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking permission from anyone. Think about how radical that is.
Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened by those who have profited most from that exact system.
Today, the good people who act as thoughtful stewards of the web's infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count, while also, not incidentally, creating trillions of dollars of value and countless jobs around the world. But the increasingly extremist tycoons of Big Tech have decided that's not good enough.
Now, the hectobillionaires have begun their final assault on the last, best parts of what's still open, and they likely won't rest until they've either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them. Whether they succeed will be determined by the choices we all make as a community in the coming months. There have always been threats to openness on the web, but the stakes have never been higher than they are this time.
Right now, too many of the players in the open ecosystem are still carrying on with business as usual, even though those tactics have been failing to stop big tech for years. I don't say this lightly: it looks to me like 2026 is the year that decides whether the open web as we know it will survive at all, and we have to fight like the threat is existential. Because it is.
Calling this threat "existential" is a strong statement, so we should back it up with evidence. The point I want to make here is that this is a lot broader than one or two isolated examples of trying to win in one market. What we are seeing is the application of the same market-crushing techniques that were used to displace entire industries during the rise of social media and the gig economy, now being deployed against the very open infrastructure that made the modern internet possible.
The big tech financiers and venture capitalists who are enabling these attacks are intimately familiar with these platforms, so they know the power and influence that they have — and are deeply experienced at dismantling any systems that have cultural or political power that they can't control. And since they have virtually infinite resources, they're able to carry out these campaigns simultaneously on as many fronts as they need to. The result is an overwhelming wave of threats. It's not a coordinated conspiracy, because it doesn't need to be; they just all have the same end goals in mind.
Some examples:
robots.txt functioned for decades to describe how tools like search engines ought to behave when accessing content on websites, but now it is effectively dead, as Big AI companies have unilaterally decided to ignore more than a generation of precedent and do whatever they want with the entirety of the web, completely without consent. Similarly, long-running efforts like Creative Commons, and other community-driven attempts at creating shared declarations or definitions for content use, are increasingly just ignored.

The threat to the open web is far more profound than just some platforms being under siege. The most egregious harm is the way that the generosity and grace of the people who keep the web open is being abused and exploited. Those people who maintain open source software? They're hardly getting rich — that's thankless, costly work, which they often choose instead of cashing in at some startup. Similarly, volunteering for Wikipedia is hardly profitable. Defining super-technical open standards takes time and patience, sometimes over a period of years, and there's no fortune or fame in it.
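For anyone who's never looked inside one, the robots.txt convention described above is humble in the extreme: a plain text file that crawlers are trusted, on the honor system, to obey. Here's a minimal sketch of how a well-behaved crawler consults it, using Python's standard library (the bot names and paths here are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: the bot names and paths are illustrative,
# not taken from any real site.
rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A crawler that honors the convention checks before fetching:
print(parser.can_fetch("ExampleAIBot", "https://example.com/essays/"))  # asked to stay out entirely
print(parser.can_fetch("SearchBot", "https://example.com/essays/"))     # public pages are fine
print(parser.can_fetch("SearchBot", "https://example.com/private/x"))   # politely off-limits
```

The whole system only works because crawlers choose to make a check like `can_fetch` before downloading a page; there's no enforcement mechanism, which is exactly the generosity that's now being abused.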
Creators who fight hard to stay independent are often choosing to make less money, to go without winning awards or the other trappings of big media, just in order to maintain control and authority over their content, and because they think it's the right way to connect with an audience. Publishers who've survived through year after year of attacks from tech platforms get rewarded by… getting to do it again the next year. Tim Berners-Lee is no billionaire, but none of those guys with the hundreds of billions of dollars would have all of their riches without him. And the thanks he gets from them is that they're trying to kill the beautiful gift that he gave to the world, and replace it with a tedious, extortive slop mall.
So, we're in endgame now. They see their chance to run the playbook again: to do to Wikipedians what Uber did to cab drivers, to get users addicted to closed apps the way they are to social media, to force podcasters to chase an algorithm like kids on TikTok. If everyone across the open internet can gather together, see that we're all in the same fight, and push back with the same ferocity with which we're being attacked, then we do have a shot at stopping them.
At one time, it was considered impossibly unlikely that anybody would create open technologies that could succeed in being useful for people, let alone that they would become a daily part of enabling billions of people to connect and communicate and make their lives better. So I don't think it's any more unlikely that the same communities can summon that kind of spirit again, and beat back the wealthiest people in the world, to ensure that the next generation gets these same amazing resources to rely on for decades to come.
Alright, if it’s not hopeless, what are the concrete things we can do? The first is to directly support the organizations in the fight, whether those that are at risk or those that are protecting them. You can give directly to support the Internet Archive, or volunteer to help them out. Wikipedia welcomes your donation or your community participation. The Electronic Frontier Foundation is fighting for better policy, and defending your rights on virtually all of these issues; it could use your support, and offers a list of ways to volunteer or take action. The Mozilla Foundation can also use your donations and is driving change. (And full disclosure — I’m involved in pretty much all of these organizations in some capacity, ranging from volunteer to advisor to board member. That’s because I’m trying to make sure my deeds match my words!) These are the people whom I've seen, with my own eyes, stay the hand of those who would hold the knife to the necks of the open web's defenders.
Beyond what these organizations do, though, we can remember how much the open web matters. During my time on the board of Stack Overflow, I got to see the rise of an incredibly generous community built around sharing information openly, under open licenses. Few platforms in history have helped more people achieve economic mobility: enormous numbers of people got good-paying jobs as coders because of the information on that site. And then we got to see the toll that extractive LLMs took when they trained their models on the generosity of that site's members without reciprocating in kind.
The good of the web only exists because of the openness of the web. They can't just keep taking and taking without expecting people to finally draw a line and say "enough". And interestingly, opportunities might exist where the tycoons least expect it. I saw Mike Masnick's recent piece where he argued that one of the things that might enable a resurgence of the open web might be... AI. To anyone who's read everything I've shared here, it would seem counterintuitive to imagine that anything good could come of the same technologies that have caused so much harm.
But ultimately what matters is power. It is precisely because technologies like LLMs are powerful that the authoritarians have rushed to try to take them over and wield them as effectively as they can. I don't think that platforms owned and operated by those bad actors can be the tools that disrupt their agenda. I do think it might be possible that the creative communities that built the web in the first place could use that same innovative spirit to build what could, for lack of a better term, be called "good AI". It’s going to take better policy, which may be impossible in the short term at the federal level in the U.S., but can certainly happen at more local levels and in the rest of the world. Though I’m skeptical about putting too much of the burden on individual users, we can certainly change culture and educate people so that more of them feel empowered and motivated to choose alternatives to the big tech and big AI platforms that got us into this situation. And we can encourage harm-reduction approaches for the people and institutions that are already locked into these tools, because as we’ve seen, even small individual actions can get institutions to change course.
Ultimately I think, if given the choice, people will pick home-cooked, locally-grown, heart-felt digital meals over factory-farmed fast food technology every time.
2026-03-13 08:00:00
For the New York Times Magazine this Sunday, I talked to Clive Thompson about one of the conversations that I'm having most often these days: What happens to coders in this current moment of extraordinarily rapid evolution in AI? LLMs are now quickly advancing to where they can virtually become entire software factories, radically changing both the economics and the power dynamics of software creation — which has so far mostly been used to displace massive numbers of tech workers.
But it's not so simple as "bosses are firing coders now that AI can write code".
For one thing, though there are certainly a lot of companies where executives are forcing teams to churn out slop code, and using that as an excuse to carry out mass layoffs, there are plenty of companies where "AI" is just a buzzword being used as a pretense for layoffs that owners have wanted to do anyway. And more importantly, there are a growing number of coders who are having a very different experience with the tools than those bosses may have expected — and a very different outcome than the Big AI labs may have intended. As I said in the story:
“The reason that tech generally — and coders in particular — see LLMs differently than everyone else is that in the creative disciplines, LLMs take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, LLMs take away the drudgery and leave the human, soulful parts to you.”
This is a point that's hard for a lot of my artist friends to understand: how come so many coders don't just hate LLMs for stealing their work the way that most writers and photographers and musicians do? The answer boils down to three things:
What this means is that attitudes about automation and worker displacement in tech are radically different than they would be in something like the auto industry. In many cases, I've found that coder workforces have a shockingly low literacy about past labor movements, even though their technical knowledge is obviously extremely high.
To be somewhat reductive about it, there are two main cohorts of coders. A larger, less vocal, group who see coding as a stable, well-paying career that they got into in order to support themselves and their families, and to partake in the upward economic mobility that the tech sector has represented for the last few decades. Then there is the smaller, more visible, group who have seen coding as an avocation, which they were drawn to as a form of creative expression and problem-solving just as much as a career opportunity. They certainly haven't been reluctant to capitalize on the huge economic potential of working in tech — this is the group that most startup founders come from — but coding isn't simply something they do from 9 to 5 and then put away at the end of the day. For those of us in this group (yeah... I'm one of these folks), we usually started coding when we were kids, and we have usually kept doing it on nights and weekends ever since, even if it's not even part of our jobs anymore.
Both cohorts of coders are in for a hard time thanks to the new AI tools, but for completely different reasons.
The people who started to write software just because it represented a stable job, but who don't see it as part of their personal identity, are going to be devastated by the ruthlessness with which their bosses swing the ax. These new LLM-powered software factories can generate orders of magnitude more of the standardized business code that tends to be the bread-and-butter work for these journeyman coders, and it's not the kind of displacement that can be solved by learning a new programming language on nights and weekends, or getting a new professional certification. Much of the "working class" tech industry (speaking functionally of the roles they perform within the system; these are obviously jobs that pay far more than working-class salaries today) is seen as a ripe target for deskilling, where lower-paid product roles delegate coding tasks to AI coding systems, or for outright automation, with management giving orders to those AI systems directly.
One of the hardest parts of reckoning with this change is not just the speed with which it is happening, but the level of cultural change that it reflects. Coders are generally very amenable to learning new skills; it's a necessary part of the work, and the mindset is almost never one of being change-averse. But the level at which the change is happening in this transition is one that gets closer to people's sense of self-worth and identity, rather than to their perceptions of simply having to acquire knowledge or skills. It doesn't help that the change is being catalyzed by some of the most venal and irresponsible leaders in the history of business, brazenly acting without any moral boundaries whatsoever.
For the coders who see coding as part of their identity, the LLM transformation is going to represent an entirely different set of challenges. They may well survive the transition that is coming, but find themselves in an unrecognizable place on the other side of it. These new LLM-based tools work as virtual software factories that churn out nearly all of the code for you. The actual work of writing the code is abstracted away, with the creator focused on describing the desired end results and making sure to test that everything works correctly. You're more the conductor of the symphony than someone holding a violin.
But there are people who have spent decades honing their craft, committing to memory the most obscure vagaries of this computer processor or that web browser or that one gaming console, all in service of creating code that was particularly elegant or especially high-performing, or just really satisfying to write. There's a real art to it. When you get your code to run just so, you feel a quiet pride in yourself, and a sense of relief that there are still things in the world that work as they should. It's a little box that you can type in where things are fair. It's the same reason so many coders like to bake, or knit, or do woodworking — they're all hobbies where precisely doing the right thing is rewarded with a delightful result.
And now that's going away. You won't see the code yourself anymore; the robots will write it for you while flailing around and clanking. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work. Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way.
Your job changes into describing software. Now, if you're the kind of person who only ever wanted to have the end result, maybe this is a liberation. Sometimes, that's what mattered — we wanted to fast-forward to the end result, elegance be damned. But if you were one of those crafters? The people who wrote idiomatic code that made that programming language sing? There's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either.
What do we do about it? This horse is not going back in the barn. The billionaires wouldn't let it, anyway.
I've come to the personal conclusion that the only way forward is for more of the hackers with soul to seize this moment of flux and use these tools to build. The economics of creating code are changing, and it can't just be the worst billionaires in the world who benefit. The latest count is 700,000 people laid off in the last few years in the tech industry. We'll be at a million soon, at the rate things are accelerating. Each new layoff announcement is now in the thousands.
It's not going to be a panacea for all the jobs lost, and it's not the only solution we're going to need, but one part of the answer can be coders who still give a damn looking out for each other, and building independent efforts without being reliant on the economics — or ethics — of the people who are laying off their colleagues by the hundreds of thousands.
I've spent my whole career working with communities of coders, building tools for the people who build with code. I don't imagine I'll ever stop doing it. This is the hardest moment that I've ever seen this community go through, and it makes me heartsick to see so many people enduring such stress and anxiety about what's to come. More than anything else, what I hope people can remember is that all of the great things that people love about technology weren't created by the money guys, or the bosses who make HR decisions — they were created by the people who actually build things. That's still an incredible superpower, and it will remain one no matter how much the actual tools of creation continue to change.
2026-03-08 08:00:00
Last week, Apple released a parade of hardware announcements, and the one that captured the most attention across the industry was the $600 ($500 if you’re in education!) MacBook Neo, the brightly-colored low-end laptop that they launched to great fanfare. The conventional wisdom is that this product opens up Apple to the low end of the laptop market for the first time, radically changing the dynamics of the entire market, and throwing down the gauntlet to the garbage Windows laptop market, as well as challenging a huge swath of Chromebooks which tend to dominate in the education market. This is incorrect.
Apple has, in fact, sold a MacBook Air with an M1 chip at Walmart for years, which it has intermittently discounted to $499 at key times like Black Friday and Cyber Monday. The single-core performance of that laptop (meaning, how it works for most normal tasks that people do, like browsing the web or writing email or watching YouTube videos), is very nearly equivalent to the newly-released MacBook Neo.
But. A laptop with an old design, using a chip that has an old number (the M1 chip came out six years ago!), sold exclusively through a mass-market retailer that is perceived as anything but premium, presents an enormous brand challenge for Apple. It is, to put it simply, embarrassing. Apple can have low-end products in its range. They invest lots of effort in that segment of their product line, as the new iPhone 17e shows, making a new basic entrant to their most recent series of phones. But Apple can’t have old, basic-looking products that people aren’t even able to buy at an Apple Store.
And that’s what Neo solves. It’s a smart reframing of a product that is nearly the same offering as the old M1 Air: the Neo and that old M1 machine both have 13” screens, both weigh just under 3 pounds, both have 8GB of RAM, both start at 256GB of storage, both have about 16 hours of battery life, are both about 8”x12”, both have 2 USB ports and a headphone jack, and both of course cost almost exactly the same. They did add a new yellow (citrus!) color for the Neo, though.
What was more striking to me was Apple's introductory video, which clearly seems aimed at people who are new to Apple computers, or maybe new to laptops entirely. They're imagining users who've only ever had their smartphones and are buying computers for the first time — which might describe a lot of students. There's no discussion here of the chamfers of the aluminum, or the pipelines in the GPU cores, and there's barely even the slightest mention of AI; instead, they describe the basics of what the laptop includes, and even go out of their way to explain how it interoperates with an iPhone.
There's also a very clear attempt to distinguish Neo's branding from the rest of Apple's design language. The type for the “MacBook Neo” name in the launch video, and the “Hello, Neo” text on the product homepage, are set in a rounded typeface so new that it isn't even shipping as an actual font; Apple has rendered it as an image, rather than as a variation of the “San Francisco” typeface it uses for everything else in its standard marketing materials. The throwback to 2000s-era design (terminal green, the word “Neo” — are we entering the Matrix?) couldn't be more different from the “it looks expensive” vibes of something like the Apple Watch Hermès branding.
In all, it's pretty impressive to see Apple use its marketing strengths to take a product remarkably similar to something it has sold for years at the largest retailer in the world, and position it as a brand-new, category-defining entry in the space. To me, the biggest thing this shows is the blind spot the traditional tech trade press has for the actual buying patterns and lived experience of normal people who shop at Walmart all the time; it would be pretty hard to see Neo as particularly novel if you had walked past a Walmart tech section any time in the last three years.
At a time when Apple has lost whatever moral compass it had, even though its machines still say “privacy is a human right” when you turn them on, we still want to see positive signs from the company. And a good one is that Apple is engaging with the reality that the current moment calls for products that are far more affordable. It is a good thing indeed when affordable products are presented as being desirable, when most of the product’s enclosure is made of recycled material, and when the lifespan of a product can be expected to be significantly longer than most in its category, instead of simply being treated as disposable. All it took was removing the stigma over the existing affordable laptop that Apple’s been selling for years.
2026-02-28 08:00:00
TL;DR:
As I noted back in 2024, the common phrase “wherever you get your podcasts” masks a subtle point, which is that podcasts are built on an open technology — a design which has radical implications on today’s internet. This is the reason that the podcasts most people consume aren’t skewed by creators chasing an algorithm that dictates what content they should create, aren’t full of surveillance-based advertising, and aren’t locked down to one app or platform that traps both creators and their audience within the walled garden of a single giant tech company.
Many of those merits of the contemporary podcast ecosystem are possible because of choices Apple made almost two decades ago, when it embraced open standards while adding podcasting features to iTunes. Apple's outsized market influence (the term “podcast” itself came from the name iPod) pushed everyone else in the ecosystem to follow its lead, and as a result, we have a major media format that isn't as poisoned, in some ways, as social media or even mainstream media.
Sure, there are individual podcast creators one might object to, but notice how you don't see bad actors like FCC chairman Brendan Carr illegally throwing his weight around to try to censor and persecute podcasters the way he's been silencing television broadcasters, and you don't see MAGA legislators trying to work the refs over the algorithm the way they have with Facebook and Twitter. Even the Elon Musks of the world can't just buy up all of podcasting the way Musk bought Twitter, because the ecosystem is decentralized and not controlled by any one player. This is how the Internet was supposed to work. As early Internet advocates were fond of saying, the architecture of the Internet was designed to see censorship as damage, and route around it.
All of this is at much higher risk now, due to the technical decisions Apple has made in its move to support video podcasts in the software versions that are about to launch. The motivations for the move are obvious: in recent years, many podcasters have embraced new platforms to increase their distribution, reach, engagement and sponsorship dollars. That has driven them to add video, which has meant moving to YouTube and, more recently, platforms like Netflix, typically accompanied by promotional clips of the video portion of the show on platforms like TikTok and Instagram. Combine that with Spotify's acquisition of multiple studios to produce proprietary shows that are not podcasts but exclusive content locked into its apps, and Apple has faced a significant number of threats to its once-dominant position in the space.
So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses. For example, back then, by default an entire podcast episode would be downloaded to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that requires podcasters to pay a fee or get permission.
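For the technically curious: one open way to do this kind of snippet-by-snippet delivery is HTTP Live Streaming (HLS), the spec Apple itself published openly as RFC 8216. I'm using it here purely as an illustration, not as confirmation of exactly what the new apps use. A stream is just a plain-text playlist pointing at short media segments:

```text
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
segment000.ts
#EXTINF:6.0,
segment001.ts
#EXT-X-ENDLIST
```

Each segment is a few seconds of video fetched over plain HTTP, which is how a viewer can skim an hour-long 4K episode without ever downloading the whole file.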
The problem, though, is that Apple is only allowing these new video streams to be served by a small number of pre-approved commercial providers that it has hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on anildash.com and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to it. I don't have to ask anyone's permission, tell anyone about it, or agree to anyone's terms of service.
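To make that concrete: the entire "protocol" for self-publishing a podcast is a single RSS file with an enclosure tag pointing at an audio file. A minimal sketch (the URLs are placeholders, not a real feed) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Independent Podcast</title>
    <link>https://example.com/podcast/</link>
    <description>Published without anyone's permission.</description>
    <item>
      <title>Episode 1</title>
      <!-- The enclosure tag is the whole trick: any app can fetch this MP3. -->
      <enclosure url="https://example.com/podcast/episode1.mp3"
                 length="23456789" type="audio/mpeg"/>
      <pubDate>Fri, 27 Feb 2026 08:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

Any podcast app that understands RSS can subscribe to that file directly from my server; no intermediary ever has to approve it.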
If I want to publish a video podcast to Apple’s new system, though, I can’t just put up a video file on my site and tell people to subscribe to my podcast. I have to sign up for one of the approved partner services, agree to their terms of service, pay their monthly fee, watch them get acquired by Facebook, wait for the stupid corporate battle between Facebook and Apple, endure the service being enshittified, have them put their thumb on the scale about which content they want to promote, deal with my subscribers being spied on when they watch my show, see Brendan Carr make up a pretense to attack the platform I’m on, watch the service use my show to cross-promote violent attacks on vulnerable people, and the entire rest of that broken tech/content culture cycle.
We don’t have to do this, Apple!
What will happen, by default, if Apple doesn’t change course and add support for open video hosting for podcasts is a land grab for control of the infrastructure of the new, closed video podcast technology platform. Some of the bidders may be players that want to own podcasting (Spotify, Netflix, maybe legacy media companies like Disney and Paramount), or a roll-up from a cloud provider like AWS or Google Cloud. Either way, the services will get far more expensive for creators, and far more conservative about what content they allow, while being far more consumer-hostile in terms of privacy and monetization. We’ve seen this play out already: video shows on YouTube give advertisers massive amounts of data about viewers, whereas podcasts can be delivered to an audience while almost totally preserving their privacy, if a creator chooses to protect their listeners’ anonymity. The reason you always hear podcasters say “use our promo code” in their sponsor reads is that advertisers can’t track you from the show to their website.
This will also start to impact content. You don’t hear podcasters saying “unalive” or censoring normal words because there is no algorithm that skews the distribution of their content. The promotional graphics for their shows are often downright boring, and don’t feature the hosts making weird faces like on YouTube thumbnails, because they haven’t been optimized to within an inch of their lives in hopes of getting 12-year-olds to click on them instead of Mr. Beast — because they’re not trying to chase algorithmic amplification. The closest thing that podcasters have to those kinds of games is when they ask you to rate them in Apple’s Podcasts app, because that has an algorithm for making recommendations, but even that is mediated by real humans making actual choices.
But once we’ve got a layer of paid intermediaries distributing video content, and Apple leans more heavily into the visual aspects of their podcast app, incentives are going to start to shift rapidly. Today, outside of laptops, phones, and tablets, the Apple Podcasts app only exists on Apple’s own Apple TV hardware, and doesn’t even have a video playback feature there. By contrast, a lot of video podcast consumption happens in YouTube’s TV apps in the living room. Apple Podcasts will soon have to be on every set-top device like Roku sticks, Amazon Fire TVs, and Google Chromecasts, as well as on smart TVs from Samsung and LG, with a robust video playback feature that can compete with YouTube’s own capabilities. Once that’s happened (which will take at least a year, if not multiple years), creators will immediately begin jockeying for ways to get promoted or amplified within that ecosystem. Even if Apple has allowed independent publishers to make their own video podcast feeds, it’s easy to imagine those feeds being treated as second-class citizens when Apple distributes podcasts to users across all of these platforms.
The stakes for all of this are even higher because nearly all of the independent online platforms for video creation outside of YouTube have been bought up by a single private equity firm. In short: even if you don’t know it, if you’re trying to do video off of YouTube, all of your eggs are in one, very precarious, basket.
Apple can mitigate the risks of closing up podcasts by moving as quickly as possible to reassure the entire podcasting ecosystem that they’ll allow creators to use any source for hosting video. Right now, there’s a “fallback” video system where creators can deliver video through the traditional podcast standard, and other podcasting apps will show that video to audiences, but Apple’s apps don’t recognize it. If Apple said they’d support that specification as a second option for those who don’t want to, or can’t, use their video hosting partners, it would go a long way toward mitigating the ecosystem risk they’re introducing with this new shift.
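To make that less abstract: one existing open mechanism in this space is the Podcasting 2.0 namespace’s `alternateEnclosure` tag, which lets a feed offer a video version of an episode alongside the audio, hosted wherever the creator likes. I’m not asserting this is the exact fallback Apple would adopt, just illustrating that an open option already exists; URLs and file sizes below are placeholders:

```xml
<!-- the feed's <rss> element would declare the namespace:
     xmlns:podcast="https://podcastindex.org/namespace/1.0" -->
<item>
  <title>Episode 1</title>
  <!-- audio stays the default enclosure, so older apps keep working -->
  <enclosure url="https://example.com/ep1.mp3"
             length="12345678" type="audio/mpeg"/>
  <!-- an optional video version, hosted anywhere, no approved partner needed -->
  <podcast:alternateEnclosure type="video/mp4" length="987654321">
    <podcast:source uri="https://example.com/ep1.mp4"/>
  </podcast:alternateEnclosure>
</item>
```

Supporting something like this as a second path would cost Apple very little, and it degrades gracefully: apps that don’t understand the tag simply fall back to the audio enclosure.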
If Apple can engage with a wide swath of creators and understand the concerns that are bubbling up, and articulate that they’re aware of the real, significant risks that can arise from the path that they’re currently on, they still have a chance to course-correct.
Some of these decisions can seem like arcane technical discussions. It’s easy to roll your eyes when people talk about specifications and formats and the minutiae of what happens behind the scenes when we click on a link. But the history of the Internet has shown us that, sometimes, even some of what seem like the most inconsequential choices end up leading to massive shifts in a larger ecosystem, or even in culture overall.
A generation ago, a few people at Apple made a choice to embrace an open ecosystem that was in its infancy, and in so doing, they enabled an entire culture of creators to flourish for decades. Podcasting is perhaps the last major media format that is open, free, and not easily captured by authoritarians. The stakes couldn’t be higher. All it takes now is a few decision makers pushing to do the right thing, not just the easy thing, to protect an entire vital medium.
2026-02-28 08:00:00
A big tech headline this week is Anthropic (makers of Claude, widely regarded as one of the best LLM platforms) resisting Secretary of Defense Pete Hegseth’s calls to modify their platform in order to enable it to support his commission of war crimes. As has become clear this week, Anthropic CEO Dario Amodei has declined to do so. The administration couches the request as an attempt to use the technology for “lawful purposes”, but given that they’ve also described their recent crimes as legal, this is obviously not a description that can be trusted.
Many people have, understandably, rushed to praise Dario and Anthropic’s leadership for this decision. I’m not so sure we should be handing out a cookie just because someone is saying they’re not going to let their tech be used to cause extrajudicial deaths.
To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform would enable a sitting official of any government to knowingly commit such crimes.
We have to hold the line on normalizing this stuff, and remind people where reality still lives. This means we can recognize it as a positive move when companies do the reasonable thing, but also know that this is what we should expect. It’s also good to note that companies may have many reasons that they don’t want to sell to the Pentagon in addition to the obvious moral qualms about enabling an unqualified TV host who’s drunkenly stumbling his way through playacting as Secretary of Defense (which they insist on dressing up as the “Department of War” — another lie).
Being on any federal procurement schedule as a technology vendor is a tedious nightmare. There’s endless paperwork and process, all falling squarely into the kinds of procedures that a fast-moving technology startup is likely to be particularly bad at completing, since few of its staff will have prior experience handling such challenges. Right now, Anthropic handles most of the worst parts of these issues through partners like Amazon and Palantir. Taking on more of these unique and tedious needs in-house for a customer as demanding as the Pentagon would almost certainly require blowing up the product roadmap or hiring focus within Anthropic for months or more, potentially delaying the release of cool and interesting features in service of boring (or just plain evil) capabilities that would be of little interest to 99.9% of normal users. Worse, if they have to build these features, it could exhaust or antagonize a significant percentage of the very expensive, very finicky employees of the company.
This is a key part of the calculus for Anthropic. A big part of their entire brand within the tech industry, and a huge part of why they’re appreciated by coders (in addition to the capabilities of their technology), is that they’re the “we don’t totally suck” LLM company. Think of them as “woke-light”. Within tech, as there have been massive waves of rolling layoffs over the last few years, people have felt terrified and unsettled about their future job prospects, even at the biggest tech companies. The only opportunities that feel relatively stable are on big AI teams, and most people of conscience don’t want to work for the ones that threaten kids’ lives or well-being. That leaves Anthropic alone amongst the big names, other than maybe Google. And Google has laid off people at least 17 times in the last three years alone.
So, if you’re Dario, and you want to keep your employees happy, and maintain your brand as the AI company that doesn’t suck, and you don’t want to blow up your roadmap, and you don’t want to have to hire a bunch of pricey procurement consultants, and you can stay focused on your core enterprise market, and you can take the right moral stand? It’s a pretty straightforward decision. It’s almost, I would suggest, an easy decision.
We’ve only allowed ourselves to lower the bar this far because so many of the most powerful voices in Silicon Valley have so completely embraced the authoritarian administration currently in power in the United States. Facebook’s role in enabling the Rohingya genocide truly served as a tipping point in the contemporary normalization of major tech companies enabling crimes against humanity that would have been unthinkable just a few years prior; we can’t picture a world where MySpace helped accelerate the Darfur genocide, because the Silicon Valley tech companies we know about today didn’t yet aspire to that level of political and social control. But there are deeper precedents: IBM provided technology that helped enable the horrors of the Holocaust in Germany in the 1940s, and that served as the template for their work implementing apartheid in South Africa in the 1970s. IBM actually bid for the contract to build these products for the South African government. And the systems IBM built were still in place when Elon Musk, Peter Thiel, David Sacks and a number of other Silicon Valley tycoons all lived there during their formative years. Later, as they became the vaunted “PayPal Mafia”, today’s generation of Silicon Valley product managers were taught to look up to them, so it’s no surprise that their acolytes have helped create companies that enable mass persecution and surveillance. But it’s also why one of the first big displays of worker power in tech was when many across the industry stood up against contracts with ICE. That moment was also one of the catalyzing events that drove the tech tycoons into their group chats where they collectively decided that they needed to bring their workers to heel.
And they’ve escalated since then. Now, the richest man in the world, who is CEO of a few of the biggest tech companies, including one of the most influential social networks — and a major defense vendor to the United States government — has been openly inciting civil war for years on the basis of his racist conspiracy theories. The other tech tycoons, who look to him as a role model, think they’re being reasonable by comparison in the fact that they’re only enabling mass violence indirectly. That’s shifted the public conversation into such an extreme direction that we think it’s a debate as to whether or not companies should be party to crimes against humanity, or whether they should automate war crimes. No, they shouldn’t. This isn’t hard.
We don’t have to set the bar this low. We have to remind each other that this isn’t normal for the world, and doesn’t have to be normal for tech. We have to keep repeating the truth about where things stand, because too many people have taken this twisted narrative and accepted it as being real. The majority of tech’s biggest leaders are acting and speaking far beyond the boundaries of decency or basic humanity, and it’s time to stop coddling their behavior or acting as if it’s tolerable. In the meantime, yes, we can note when one has the temerity to finally, finally do the right thing. And then? Let’s get back to work.