Blog of Anil Dash

A tech entrepreneur and writer trying to make the technology world more thoughtful, creative and humane.

I know you don’t want them to want AI, but…

2025-11-14 08:00:00

Today, Rodrigo Ghedin wrote the very well-intentioned, but incorrectly titled, “I think nobody wants AI in Firefox, Mozilla”. As he correctly summarizes, sentiment on the Mozilla thread about a potential new AI pane in the Firefox browser is overwhelmingly negative. That’s not surprising; the Big AI companies have given people numerous legitimate reasons to hate and reject “AI” products, ranging from undermining labor to appropriating content without consent to having egregious environmental impacts to eroding trust in public discourse.

I spent much of the last week having the distinct honor of serving as MC at the Mozilla Festival in Barcelona, which gave me the extraordinary opportunity to talk to hundreds of the most engaged Mozilla community members in person, and to address thousands more from onstage or on the livestream during the event. No surprise, one of the biggest topics we talked about the entire time was AI, and the intense, complex, and passionate feelings so many have about these new tools. Virtually everyone shared some version of what I’d articulated as the majority view on AI, which is approximately that LLMs can be interesting as a technology, but that Big Tech, and especially Big AI, are decidedly awful and people are very motivated to stop them from committing their worst harms upon the vulnerable.

But.

Another reality that people were a little more quiet in acknowledging, and sometimes reluctant to engage with out loud, is the reality that hundreds of millions of people are using the major AI tools every day. When I would point this out, there was often an initial defensive reaction talking about how people are forced to use these tools at work, or how AI is being shoehorned into every tool and foisted upon users. This is all true! And also? Hundreds of millions of users are choosing to go to these websites, of their own volition, and engage with these tools.

Regular, non-expert internet users find it interesting, or even amusing, to generate images or videos using AI and to send that media to their friends. While those with sophisticated media aesthetics find those creations gauche or even offensive, a lot of other cultures find them perfectly acceptable. And it’s an inarguable reality that millions of people find AI-generated images emotionally moving. Most people who see AI-generated content as tolerable folk art belong to demographics that are dismissed by those who shape the technology platforms that billions of people use every day.

Which brings us back to “nobody wants AI in Firefox”. (And its obligatory matching Hacker News thread, which proceeds exactly as you might expect.) In the communities that frequent places like Hacker News and Mozilla forums, where everyone is hyper-fluent in concerns like intellectual property rights and the abuses of Big Tech, it’s received wisdom that “everyone” resists the encroachment of AI into tools, and therefore the only possible reason that Mozilla (or any organization) might add support for any kind of AI features would be to chase a trend that’s in fashion amongst tech tycoons. I don’t doubt that this is a factor; anytime a significant percentage of decision makers are alumni of Silicon Valley, its culture is going to seep into an organization.

The War On Pop-Ups

What people are ignoring, though, is that using AI tools is an incredibly mainstream experience now. Regular people do it all the time. And doing so in normal browsers, in a normal context, is less safe. We can look at an analogy from the early days of the browser wars, a generation ago.

Twenty years ago, millions and millions of people used Internet Explorer to get around the web, because it was the default browser that came with their computer. It was buggy and wildly insecure, and users would often find their screen littered with intrusive pop-up advertisements that had been spawned by various sites that they had visited across the web. We could have said, “well, those are simply fools with no taste using bad technology who get what they deserve.”

Instead, countless enthusiasts and advocates across the web decided that everyone deserved to have an experience that was better and safer. And as it turned out, while getting those improvements, people could even get access to a cool new feature that nobody had seen before: tabs! Firefox wasn’t the first browser to invent all these little details, but it was the first to put them all together into one convenient little package. Even if the expert users weren’t personally visiting the sites riddled with pop-up ads themselves, they were glad to have spared their non-expert friends from the miseries they were enduring on the broken internet.

I don’t know why today’s Firefox users, even if they’re the most rabid anti-AI zealots in the world, don’t say, “well, even if I hate AI, I want to make sure Firefox is good at protecting the privacy of AI users so I can recommend it to my friends and family who use AI”. I have to assume it’s because they’re in denial about the fact that their friends and family are using these platforms. (Judging by the tenor of their comments on the topic, I’d have to guess their friends don’t want to engage with them on it at all.)

We see with tools like ChatGPT's Atlas that there are now aggressively anti-web browsers coming to market, and even a sophisticated user might not realize how nefarious some of the tactics of these new apps can be. Critics can certainly see that the companies enabling those harms are bad actors. And those critics are also aware that hundreds of millions of people are using ChatGPT. So, then… what browser do they think those users should use?

What does good look like?

Judging by what I see in the comments on the posts about Firefox’s potential AI feature integrations, the alternative that critics are apparently recommending is “I’ll yell at you until you stop using ChatGPT”. Consider this post my official notice: that strategy hasn’t worked. And it is not going to work. The only thing that will work is to offer a better alternative to these users. That will involve defining what an acceptably “good” alternative AI looks like, and then building and shipping it to these users, and convincing them to use it. I’m hoping such an effort succeeds. But I can guarantee that scolding people and trying to convince them that they’re not finding utility in the current platforms, or trying to make them feel guilty about the fact that they are finding utility in the current platforms, will not work.

And none of this is exculpatory for my friends at Mozilla. As I’ve said to the good people there, and will share again here, I don’t think the way this feature has been framed and presented has done either the Firefox team or the community any favors. These big, emotional blow-ups are demoralizing, and take away time and energy and attention that could be better spent getting people excited and motivated to grow for the future.

My personal wishlist would be pretty simple:

* Just give people the “shut off all AI features” button. It’s a tiny percentage of people who want it, but they’re never going to shut up about it, and they’re convinced they’re the whole world, and they can’t distinguish between being mad at big companies and being mad at a technology. So give them a toggle switch, and write up a blog post explaining how extraordinarily expensive it is to maintain a configuration option over the lifespan of a global product.

* Market Firefox as “The best AI browser for people who hate Big AI”. Regular users have no idea how creepy the Big AI companies are — they’ve just heard their local news talk about how AI is the inevitable future. If Mozilla can warn me how to protect my privacy from ChatGPT, then it can also mention that ChatGPT tells children how to self-harm, and should be aggressive in engaging with the community on how to build tools that help mitigate those kinds of harms — how do we catalyze that innovation?

* Remind people that there isn’t “a Firefox” — everyone is Firefox. Whether it’s Zen, or your custom build of Firefox with your favorite extensions and skins, it’s all part of the same story. Got a local LLM that runs entirely as a Firefox extension? Great! That should be one of the many Firefoxes, too. Right now, so much of the drama and heightened emotions and tension are coming from people’s (well… dudes') egos about there being One True Firefox, and wanting to be the one who controls what’s in that version, as an expression of one set of values. This isn’t some blood-feud fork, there can just be a lot of different choices for different situations. Make it all work.

So, that’s the answer. I think some people want AI in Firefox, Mozilla. And some people don’t. And some people don’t know what “AI” means. And some people forgot Firefox even exists. It’s that last category I’m most concerned about, frankly. Let’s go get ‘em.

Turn the volume up.

2025-11-05 08:00:00

Today marked a completely new moment for New York City, and for America. There will be countless attempts at analysis and reflection and what-does-it-all-mean in the days to come, along with an unimaginable number of hateful attacks. But what's worth reflecting on right now is the fact that we've entered a new era, and that, even at the very start, there are some extraordinary things that we can observe.

You have to start with the principle.

I've said it before, and I'll say it again: You have to start with the principle. You must have a politics that believes in something. You can't win unless you know what you're fighting for. Something specific, that people can see and believe. Something that people will know when it's been achieved. It can't just be a vague platitude, and it can't just be "root for our team" or "the other guy is bad". Zohran and his team understood this profoundly well, and made a campaign focused on substance -- grounded in humanist principles, and tied to extremely clear, understandable and specific policy deliverables.

You will have to put your body on the line.

The thing that first put Zohran on the map in city-wide politics was his hunger strike in solidarity with taxi drivers. This was a heart-wrenchingly important issue for our South Asian communities in particular because, even after so many painful deaths, it still felt like nobody in power cared. By putting his physical self into the same risk as the drivers who were fighting for their lives and livelihoods, Zohran showed who he was at a profound level — and proved himself ready for the moment of what it will take to fend off an authoritarian takeover. No surprise, then, that when the violent and out-of-control Border Czar Tom Homan abducted Mahmoud Khalil, Zohran had no qualms about confronting Homan in person to demand Khalil be freed. Standing in front of Homan, standing shoulder-to-shoulder with those who refused to back down in the face of the most basic of human rights violations, New York's leaders were able to secure the freedom of a man wrongly taken from his pregnant wife.

They have to be able to talk about us without us.

This is one of the refrains that comes up most when I'm talking to people about communications, in almost any context from organizing to business to building a community. A message has to be simple enough, memorable enough, and clear enough that even someone who's just heard it for the first time can repeat it — in high fidelity — to the next person they talk to. The Mamdani campaign nailed this from the start, focusing not just on "affordability" in the abstract, but specific promises around free buses, universal childcare, and frozen rent in particular. The proof of how effective and pervasive those messages have been is that detractors can recite them, verbatim, from memory.

Meet the people where they are, in the streets or in the media.

This is another point that just ties to the core humility of earning every vote. From that very first famous video on Fordham Road, where Zohran went to meet voters in the Bronx in a district that had swung very hard away from the Democrats, he literally took the campaign to the people and met them where they were, and listened. Since then, he's been at every event in the city from bar crawls to bar mitzvahs, showing not just a superheroic stamina that shames opponents twice his age, but an enthusiasm for both the city and its people that simply can't be faked. The same thing happened online. No matter what platform you use, or which influencers or outlets you get your information or entertainment from, Zohran was there, smiling and on-message, welcoming you in. Nothing was beneath him, and nothing was inauthentic, because he believes in his story.

Money melts when it meets a movement whose moment has arrived.

They spent billions. Bloomberg dipped into his pocket and personally spent over $10 million. People who live thousands of miles away invested millions of their ill-gotten fortunes, and spent countless hours spewing bile on top of it. And none of it amounted to anything. Because, despite all the corruption, and despite how much our democratic institutions have been weakened, ultimately New York City's voting system has held, and the power of the people has prevailed. Maybe now the tycoons will get the message that it's cheaper just to... do things that people want?

"Hope and change" means more after so much lost and learned.

Having been a veteran of both Obama campaigns and Obama administrations, I remember well both the optimism of those moments, and the certain sense of trepidation that so many Americans wouldn't see Obama as the first Black president, but as the end of white presidents — and would provoke a backlash accordingly. That turned out to be the case. When those forces decided to burn down the country in response to his presidency, it made a lot of us wary about hoping again. But hearing Zohran's speech say "There are many who thought this day would never come" brought back memories of Obama's similar words after that extraordinary victory in Iowa. When Zohran said "hope is alive", it spoke to both Obama's famous "hope" slogan, and to Jesse Jackson's groundbreaking "Keep Hope Alive" speech from a campaign that inspired and innovated before Zohran was even born. I'm lucky enough to have sat down with these men and heard them in depth, behind closed doors, in nuanced conversation. But I know that the rhetoric of what they've said in soaring speeches on stage is what moved so many. It's the bigger words that make movements. And though many people who got their hopes up in those eras, or who felt let down by some of the cynicism or failures or flaws since then, might be afraid to be optimistic again, I think this movement is full of people who are aware of both the strengths and shortcomings of what has come before. It won't be perfect, but it's a chance to keep doing better.

What we're joyfully running toward, not what we're fearfully running from.

So much of what people hear in politics is negative and threatening. Zohran's opponents spoke almost exclusively about how people should be scared and angry. But the undeniable energy of the Mamdani campaign has been joy — an effusive, exuberant, contagious joy. Even when times are hard, maybe especially when times are hard, people are drawn to that joy. And they've been missing leaders who offer them a positive vision. They don't want to hear horrifying visions of "American carnage", especially when they know those are lies designed to manipulate. A better world is possible.

People are smart. We can talk like adults.

No one wants to be condescended to. Perhaps one of the most joyful parts of Zohran's instantly-legendary victory speech on election night was its eloquence. His speech was well above a 10th-grade level. It was complex, erudite, punctuated by deep and fluent references. He proved that politicians don't have to condescend to voters with baby talk! And a big part of re-establishing our democratic norms is going to be speaking to the electorate as if we are all adults, assuming a level of literacy in culture and history, as well as basic civics. I keep saying that I'm hoping we get an "easter egg breakdown" version of this speech, similar to the ones that people on YouTube do for a Marvel movie or a Star Wars trailer. There's such a dense level of references and context that people will be able to extract meaning from it for years to come — a welcome contrast to a political environment that has usually had deeply hateful dog whistles as the only thing buried within its content.

If you run from who you are, you have already lost.

Coming on stage for a political victory to Ja Rule's New York (I had bet that there would be some Jadakiss involvement in whatever walk-on music he picked), and walking off to Dhoom Machale, while saying with his full chest that he's a Muslim, a New Yorker, and a young Democratic Socialist — these are the moves of a person who knows that those who are motivated by hate will never back down if you try to hide or be evasive about who you are. A coward dies a thousand deaths, and a politician who hides their identity loses a thousand elections before a single vote is cast. We see Vivek Ramaswamy tap-dancing around his faith every day, and the white supremacists that he's cozied up to will never let him win. But fourteen years ago, the racist and hateful media falsely called President Obama's private birthday party a "hip-hop BBQ". And as I said years later, you should just have the damn hip-hop BBQ — they're going to accuse you of it anyway. Lean into who you are, own it, and let the haters stay mad.

Everything can start from one voice.

This is perhaps one of the most profound lessons of Zohran's campaign, and one of the most personal, because I got to see these people have these impacts firsthand. When Lindsey Boylan stood up in 2020 to tell her story of how Andrew Cuomo had harassed her and created a brutally hostile environment for her work, she not only had everything to lose, but there was no way to know whether there would ever be any accountability. But by speaking her truth, she made it possible for other women to speak up, and she made it possible for Zohran Mamdani to be an advocate for accountability as a candidate for mayor. Similarly, when Heems spoke to the Village Voice about Ali Najmi running for city council, I read it as my friend simply using his platform to help his friend campaign for office. What I didn't know at the time was that he would be galvanizing a young Zohran Mamdani to canvass for a campaign for the first time in his life, introducing him to the idea that this was a city where he could have political impact. This week, 104,000 people knocked on doors as part of their successful effort to make Zohran mayor.


This one's so personal for me — we both have mothers named Mira who come from the same small state in India, and so many people I love carry Zohran in their hearts like family. But stepping outside of my deep emotional connection, there is a rich vein of lessons here that applies to a much broader context, and I hope people will reflect on how much there is to learn from this moment. As proud and excited as I am for Zohran, I'm just as excited to find out which of the young people who were out there knocking on doors next to me these last few months is going to be my mayor in a few years.

Founders Over Funders. Inventors Over Investors.

2025-10-24 08:00:00

I've been following tech news for decades, and one of the worst trends in the broader cultural conversation about technology — one that's markedly accelerated over the last decade — is the shift from talking about people who create tech to focusing on those who merely finance it.

It's time we change the story. When you see a story that claims to be about "technology", ask yourself:

  • Does this story focus on the actual tech, and the people who created it?
  • Does it explain what's innovative about the technology, and does it ask whether the technology is real and substantive, and whether it can actually do what it claims to do?
  • Or does this story talk about moving money around and making promises about things that may or may not exist, or refer to things that may not actually work?

These questions aren't being asked nearly enough. The result is a hell of a lot of "tech" stories that have approximately nothing to do with technology.

Writing checks isn't writing code.

The shift to centering money movers over makers has had incredibly negative effects on innovation, accountability, and even just the basic accuracy of how stories are told about technology.

We see this play out in a number of ways. First, a huge percentage of all stories about new technologies focus solely on startups, even though only a small fraction of all tech workers are employed by startups, and the vast majority of new technology innovations come from academia, the public sector, and research and development organizations within other institutions outside of the startup world. As I wrote nine years ago, there is no technology industry — every organization uses technology, so technological innovation can come from anywhere. But we seldom see that broad base of ideas and insight covered accurately, if it's covered at all, because it's not of interest to the investors who are hogging the spotlight.

There's also the fact that a disproportionately large number of "technology" stories are really just announcements about funding events for companies in the technology sector, which have very little to do with the merits or substance of the tech those companies create. These stories take time and space away from other innovations that could be covered, and distract from talking about how the tech actually works. This erodes the ability of people who care about technology to share knowledge, which is key for driving broader innovations.

One of the great joys of being in various technology communities is how you can "find your people" — those who geek out about the same minute technical details as you. There's a profound spirit of generosity in so many tech communities, where people will go out of their way to help you troubleshoot or fix bugs or will contribute code, just to share in that spirit of creativity together. There's a magical and rewarding feeling the first time you get some code to successfully run, or the first time you get a bit of hardware to successfully boot, and people who love technology delight in helping others achieve that. I've seen this remain true for people at every stage of their career, with even some of the most expert coders in the world voluntarily spending their time helping beginning coders with questions just because they had a shared interest.

The most common reason that people create technology is because they had an idea about something cool they wanted to see in the world. That's the underlying ethos which connects tech creators together, and which motivates them to share their work as free or open source projects, or to write up their weekend hacks just for the love of nerding out. Sometimes there's enough interest that they might turn that side project into a business, but in most cases the fundamental motivation is the creative spirit. And then, sure, if that creative project needs capital to grow into its full potential, then there's a place for investors to join the conversation.

That creative spirit used to be more obvious when more of the cultural story about tech featured actual makers; it's what brought me and most of my peers into this space in the first place. And all of that gets crowded out when people think the only path into creating something begins with appeasing a tiny handful of gatekeepers who control the pursestrings.

Power In Play

There's been a larger cost to this focus on venture capitalists and financiers over coders, engineers, and inventors: It's gone to their heads. Part of the reason is that some of the investors, long ago, used to make products. A handful of them even made successful ones, and some of those successful ones were even good. But after riding on the coattails of those successes for a long time, and spending years in the bubble of praise and sycophancy that comes with being a person that people want to get money from, the egos start to grow. The story becomes about their goals, their agendas, their portfolios.

When we see something like a wildly-distorted view of artificial intelligence get enough cultural traction to become considered “conventional wisdom” despite the fact that it’s a wildly unpopular view held by a tiny, extremist minority within the larger tech sphere — that is the result of focusing on investors instead of inventors. Who cares what the money-movers think? We want to hear what motivated the makers!

We’re also losing the chance for people to see themselves reflected in the stories we tell about technology. It’s obvious that the cabal of check-writers is a closed cohort. But that’s a stark contrast to the warm and welcoming spirit that still suffuses the communities of actual creators. There's a striking lack of historical perspective in how we talk about tech today. Let community voices lead instead of a tiny group of tycoons, and you'd get much more interesting, accurate stories. We couldn’t imagine a film being released without talking about who the director was, or the actors, and they even give out awards for the writers. But when a new app comes out, media talks to the CEO of the tech company — that’s like talking to the head of the studio about the new movie.

We have so much richer stories to tell. At its best, technology empowers people, in a profound and extraordinary way. I’ve seen people change their lives, even change entire communities, by getting just the barest bit of access to the right tech at the right time. There’s something so much more compelling and fascinating about finding out how things actually work, and thinking about how they might work better. The way to get there is by talking to the people who are actually making that future.

ChatGPT's Atlas: The Browser That's Anti-Web

2025-10-22 08:00:00

OpenAI, the company behind ChatGPT, released their own browser called Atlas, and it actually is something new: the first browser that actively fights against the web. Let's talk about what that means, and what dangers there are from an anti-web browser made by an AI company — one that probably needs a warning label when you install it.

The problems fall into three main categories:

  1. Atlas substitutes its own AI-generated content for the web, but it looks like it's showing you the web
  2. The user experience makes you guess what commands to type instead of clicking on links
  3. You're the agent for the browser, it's not being an agent for you

1. By default, Atlas doesn't take you to the web

When I first got Atlas up and running, I tried giving it the easiest and most obvious tasks I could possibly give it. I looked up "Taylor Swift showgirl" to see if it would give me links to videos or playlists to watch or listen to the most popular music on the charts right now; this has to be just about the easiest possible prompt.

The results that came back looked like a web page, but they weren't. Instead, what I got was something closer to a last-minute book report written by a kid who had mostly plagiarized Wikipedia. The response mentioned some basic biographical information and had a few photos. Now we know that AI tools are prone to this kind of confabulation, but this is new, because it felt like I was in a web browser, typing into a search box on the Internet. And here's what was most notable: there was no link to her website.

I had typed "Taylor Swift" in a browser, and the response had literally zero links to Taylor Swift's actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.

Unless you were an expert, you would almost certainly think I had typed in a search box and gotten back a web page with search results. But in reality, I had typed in a prompt box and gotten back a synthesized response that superficially resembles a web page, and it uses some web technologies to display its output. Instead of a list of links to websites that had information about the topic, it had bullet points describing things it thought I should know. There were a few footnotes buried within some of those responses, but the clear intent was that I was meant to stay within the AI-generated results, trapped in that walled garden.

During its first run, there's a brief warning buried amidst all the other messages that says, "ChatGPT may give you inaccurate information", but nobody is going to think that means "sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box".

And it's not like the generated response is even that satisfying. The fake web page had no information newer than two or three weeks old, reflecting the fact that LLMs rely on whatever information they've most recently been able to crawl (or gather without consent) from the web. None of today's big AI platforms update nearly as often as conventional search engines do.

Keep in mind, all of these shortcomings are not because the browser is new and has bugs; this is the app working as designed. Atlas is a browser, but it is not a web browser. It is an anti-web browser.

2. We left command-line interfaces behind 40 years ago for a reason

Back in the early 1980s, there was a popular game called Zork that was in a category called "text adventure games". The computer would say something like:

You are in a clearing in the forest, there is a rock here.

And then you would type:

Take the rock

And it would say:

Sorry, I can't do that.

So then you would type:

Pick up the rock.

And then it would say:

You have the rock.

And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first.

There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet. But for the most part, people would tire of the novelty because trying to guess what to type to make something happen is a terrible and exhausting user interface. This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.

Clicking on things is great, because you can see what your choices are, and then just choose the one you want. Tapping on things on a touch screen is even better. And this kind of discoverability was one of the fundamental innovations of the web: It democratized being able to create a clickable list of options once anybody could make a web page.

In the demo for Atlas, the OpenAI team shows a user trying to find a Google Doc from their browser history. A normal user would type keywords like "atlas design" and see their browser show a list of recent pages. They would recognize the phrase "Google Docs" or the icon, and click on it to get back to where they were.

But in the OpenAI demo, the team member types out:

search web history for a doc about atlas core design

This is worse in every conceivable way. It's slower, more prone to error, and redundant. But it also highlights one of the biggest invisible problems: you're switching "modes". Normally, an LLM's default mode is to create plausible extrapolations based on its training data. Basically, it's supposed to make things up. But this demo has to explicitly walk you through "now it's time to go search my browser history" because it's coercing the AI to look through local content. And that can't be hallucinated! If you're trying to get back to a budget spreadsheet that you've created and ChatGPT decides to just make up a file that doesn't exist, you're probably not going to use that browser anymore.

Most people on the internet aren't old enough to remember this, but people were thrilled to leave command-line interfaces behind back in the 1990s. The explosion of color and graphics and multimedia in that era made a ton of headlines, but the real gains in productivity and usability came precisely because nobody was having to guess what secret spell they had to type into their computer to get actual work done. Links were a brilliant breakthrough in making it incredibly obvious how to get to where you wanted to go on a computer.

And look, we do need innovation in browser interfaces! If Atlas was letting people use plain language to automate regular tasks they want to do online, or even just added more tools that plugged into the rest of the services that people use every day, it might represent a real leap forward.

In the new-era command-line interface of Atlas, though, we're not just facing the challenges of an inscrutable command line. There's the even larger problem that, even if you guess the right magic words, it might either simply get things wrong or completely make things up. Atlas throws away the discoverability, simplicity and directness of the web by encouraging you to navigate even through your own documents and search results with an undefined, unknowable syntax that produces unreliable results. It's another way of being anti-web.

3. The idea is that ChatGPT will be your agent, but in reality you are ChatGPT's agent

OpenAI is clearly very motivated to gather all the data in the world into their model, regardless of whether or not they have consent to do so. This is why a lot of people have been thinking deeply about what it would take to create an Internet of consent. It's no coincidence that hundreds of people who work at OpenAI, including many of the most powerful executives, are alumni of Facebook/Meta, especially during the era of many of that company's most egregious abuses of people's privacy. In the marketing materials and demonstrations of Atlas, OpenAI's team describes the browser as being able to be your "agent", performing tasks on your behalf.

But in reality, you are the agent for ChatGPT.

During setup, Atlas pushes very aggressively for you to turn on "memories" (where it tracks and stores everything you do and uses it to train an AI model about you) and to enable "Ask ChatGPT" on any website, where it's following along with you as you browse the web. By keeping the ChatGPT sidebar open while you browse, and giving it permission to look over your shoulder, OpenAI can suddenly access all kinds of things on the internet that they could never get to on their own.

Those Google Docs files that your boss said to keep confidential. The things you type into a Facebook comment box but never hit "send" on. Exactly which ex's Instagram you were creeping on. How much time you spent comparing different pairs of shoes during your lunch hour. All of those things would never show up in ChatGPT's regular method of grabbing content off the internet. Even Google wouldn't have access to that kind of data when you use their Chrome browser, and certainly not in a way that was connected to your actual identity.

But by acting as ChatGPT's agent, you can hold open the door so that the AI can now see and access all kinds of data it could never get to on its own. As publishers and content owners start to put up more effective ways of blocking the AI platforms from exploiting their content without consent, having users act as agents on behalf of ChatGPT lets them get around these systems, because site owners are never going to block their actual audience.
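To make that mechanism concrete: the most common way publishers opt out of AI crawlers today is a robots.txt rule keyed to the crawler's user-agent token (GPTBot is OpenAI's documented crawler token; the comments here are mine, sketching why this protection doesn't extend to a user browsing in Atlas):

```text
# robots.txt — how a publisher typically opts out of AI crawlers.
# GPTBot is OpenAI's documented crawler user agent; other AI bots
# have their own tokens and need their own rules.
User-agent: GPTBot
Disallow: /

# Regular visitors are unaffected: a person browsing in Atlas sends an
# ordinary browser user agent, so none of these rules apply to them —
# which is exactly the loophole described above.
```

Robots.txt is honor-system-only to begin with, but even a publisher using stronger server-side blocking can't distinguish a human reader from the AI sidebar reading over that human's shoulder.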

And while ChatGPT is following you around, it can create a complete and comprehensive surveillance profile of you — your personality, your behaviors, your private documents, your unfinished thoughts, how long you lingered on that one page before hitting the back button — at a level that the search companies and social networks of the last generation couldn't even dream of. We went from worrying about being tracked by cookies to letting an AI company control our web browser and watch everything we do. The amount of data they're gathering is unfathomable.

All of this gets described as if it is helping you. The truth is, in its current implementation, ChatGPT's "agent" functionality is largely useless. I tried the most standard test: having it book a very simple flight on my behalf. I provided ChatGPT with a prompt that included the fact it was a direct flight for one person, specifying the exact date and the origin and destination airports, and let the browser do the part that was supposed to be magical.

While the browser did a very good job of smoothly navigating to the right place on the airline website, it was only at the point where I would have actually been confirming the booking that I noticed it had arbitrarily changed the date to a completely different day, weeks off from what I had specified. By contrast, entering the exact same information into a standard Google search resulted in direct links that could be clicked on in literally one-tenth the time—and the old-fashioned, non-LLM Google results actually led to a booking link on the correct date.

So why would such an inferior experience be positioned as the most premium part of this new browser? It stands to reason it's because this is the most strategically important goal of the company creating the product. Their robots need humans to guide them around the gates that are quickly being erected around the open web, and if they can use that to keep their eyes on everything the humans are doing at the same time, so much the better. The "agent" story really only works in one direction, and that direction is anti-web.

This Thing Needs a Warning Label

Here's the most important context for the Atlas browser: this is the same company whose chatbot keeps telling vulnerable children to self-harm, and some of them have, and a number of them are now dead. When people in psychological distress engage with these tools, they very frequently get pulled into states of extreme duress — which OpenAI knows keenly well, because even one of their own investors went off the deep end after over-using the platform. In fact, the user experience feature that OpenAI is most effective at creating is emotional dependency, as evidenced by the despondency its users showed after the recent release of GPT-5.

When users respond to a software update by expressing deep emotional distress, saying they feel like they've lost a friend, you have a profound bug. If there are enough grieving parents devastated by your technology that they can form a support group for each other, then at the very least there should be a pretty aggressive warning label on this application when it is first installed. And at a far less serious level, if this product is going to have extreme and invasive effects on markets and cultural ecosystems without disclosing the mechanisms it uses to do so, and without asking the consent of the many parties whose intellectual property and labor it relies on to accomplish those ends, then we need a much broader reckoning.

Also, I love the web, and this thing is bad for the web.

I really, really want there to be more browsers! I want there to be lots of weird new ways of going around the web. I have my own LLM that I trained with my own content, and I bet if everybody else could have one like mine that they control, that had perfect privacy and wasn't owned by any big company, and never sent their data anywhere or did anything creepy, they'd want the benefits of that, too. It would even be awesome if that were integrated with their browser — with their web browser. I'm all for people trying strange new geeky things, and innovating on the experiences we have every day so we're not just stuck typing in the same boxes we've been using for decades, or settling for the same few experiences.

Hell, there's even room for innovation on command-line interfaces! They're not inherently terrible (I use one every day!), but regular folks shouldn't have one forced on them for ordinary tasks. And the majority of things people do on a computer are better when they rely on the zeroes-and-ones reliability of computers, when we know if what they're doing is true or false. We need to have fewer things in the world that make us wonder whether everything is just made up bullshit.

The Anti-Web Endgame

The web was designed without the concept of personal identity at all, and without any tracking system built in. It was designed for anybody to be able to create what they want, and even for anybody to be able to make their own web browser. Not long after its invention, people came up with ideas like cookies and made different systems for logging in, and then big companies started coming in and realized that if they could control the browser, they'd control all the users and the ways of making money. Ever since, there's been a long series of battles over privacy versus monetization, but there's been some small protection for users, who benefitted from those smart original design choices back at the birth of the web.

It's very clear that a lot of the new AI era is about dismantling the web's original design. For the last few decades, advertising targeted people by their interests instead of directly by their actual identity; now the AI companies are trying to create an environment of complete surveillance. That requires a new Internet where there's no concept of consent for either users or those who create content and culture — everything is just raw materials, and all of us are fair game.

The most worrisome part is that Atlas looks so familiar, and feels so innocuous, that people will try it and mistake it for a familiar web browser just like the other tools that they've been using for years. But Atlas is a browser that actively fights against the web, and in doing so, it's fighting against the very idea that you should have control over what you see, where you go, and what watches you while you're there.

The Majority AI View

2025-10-17 08:00:00

Even though AI has been the most-talked-about topic in tech for a few years now, we're in an unusual situation where the most common opinion about AI within the tech industry is barely ever mentioned.

Most people who actually have technical roles within the tech industry, like engineers, product managers, and others who actually make the technologies we all use, are fluent in the latest technologies like LLMs. They aren't the big, loud billionaires that usually get treated as the spokespeople for all of tech.

And what they all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up:

Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.


What's amazing is the reality that virtually 100% of tech experts I talk to in the industry feel this way, yet nobody outside of that cohort will mention this reality. What we all want is for people to just treat AI as a "normal technology", as Arvind Narayanan and Sayash Kapoor so perfectly put it. I might be a little more angry and a little less eloquent: stop being so goddamn creepy and weird about the technology! It's just tech, everything doesn't have to become some weird religion that you beat people over the head with, or gamble the entire stock market on.

AI Hallucinations

If you read mainstream media about AI, or trade press within the tech industry, you'll basically only hear hype repeating the default stories about products from the handful of biggest companies like OpenAI, Anthropic, Google, and the like. Once in a while, you might hear some coverage of the critiques of AI, but even those will generally be from people outside the tech industry, and they will often solely be about frustrations or anger with the negative externalities of the centralized Big AI companies. Those are valid and vital critiques, but it's especially galling to ignore the voices within the tech industry when the first and most credible critiques of AI came from people who were working within the big tech companies and then got pushed out for sharing accurate warnings about what could go wrong.

Perhaps the biggest cost of ignoring the voices of the reasonable majority of those in tech is how it has grossly limited the universe of possibilities for the future. If we were to simply listen to the smart voices of those who aren't lost in the hype cycle, we might see that it is not inevitable that AI systems use content without the consent of creators, and it is not impossible to build AI systems that respect commitments to environmental sustainability. We can build AI that isn't centralized under the control of a handful of giant companies. Or any other definition of "good AI" that people might aspire to. But instead, we end up with the worst, most anti-social approaches because the platforms that have introduced "AI" to the public imagination are run by authoritarian extremists with deeply destructive agendas.

And their extremism has had a profound chilling effect within the technology industry. One of the reasons we don't hear about this most popular, moderate view on AI is that people are afraid to say it. Mid-level managers and individual workers who know this is the common-sense view fear for their careers if they simply say that AI is a normal technology like any other, one that should be subject to the same critiques and controls, and viewed with the same skepticism and care. People worry that not being seen as mindless, uncritical AI cheerleaders will be a career-limiting move in the current environment of enforced conformity within tech, especially as tech leaders collaborate with the current regime to punish free speech, fire anyone who dissents, and embolden the wealthy tycoons at the top to make ever-more-extreme statements, often at the direct expense of some of their own workers.

This is all exacerbated by the awareness that hundreds of thousands of technical staff like engineers have been laid off in recent times, often in an ongoing drip of never-ending layoffs, and very frequently in an unnecessarily dehumanizing and brutal process intended to instill fear in those who remain at the companies afterward.

In that kind of context, it's understandable that people might fear telling the truth. But it's important to remember that there are a lot more of us. And for those who aren't insiders in the tech industry, it's vital that you understand that you've been presented with an extremely distorted view about what tech workers really think about AI. Very few agree with the hype bubble that the tycoons have been trying to puff up. There are certainly a group of hustle bros on LinkedIn or social media trying to become influencers by repeating the company line, just as they did about Web3 or the metaverse or the blockchain (do they still have .ETH after their names?), but the mainstream of tech culture is thoughtful, nuanced and circumspect.

The Unexpected New Threat to Video Creators

2025-10-07 08:00:00

Much of the conversation about video and content over the last few weeks has been about the silencing of Jimmy Kimmel's show and the fact that we're seeing a shockingly rapid move towards the type of censorious media control typical of most authoritarian regimes.

But there's a broader trend that poses a looming threat to online video creators that I think is going a bit under the radar, so I took a minute to pull together a quick short-form video on the topic:

The key things that have shifted can be summarized with three points:

  1. TikTok Takeover: The cronyism exploited to hand TikTok to Larry Ellison for a fraction of its worth, setting up the danger of its platform amplifying content controlled by the administration, and silencing dissenting voices.
  2. Vimeo Vulnerability: The consolidation of a number of the major streaming video infrastructure providers (including Vimeo, one of the most important) under Bending Spoons, the notorious conglomerate which not only tends to enshittify its products, but which will now also present a unified target for the same censors who went after voices like Jimmy Kimmel and Stephen Colbert.
  3. Creator Capture: The lack of available and accessible open alternatives to major distribution platforms like YouTube and TikTok — there's no "Bluesky for video" or "Mastodon for video", meaning there isn't the same opportunity for video creators to make themselves resilient to a platform takeover.

All of this is meant to make clear to video creators that they need to embrace the same radical control that podcasters have always had.

Separately, I'm also (obviously!) using this as a chance to start sharing a bit more of the videos I've been making lately. It's still very early, and I'm not quite sure what direction they're headed, so please do share any feedback you've got.

In general, I'm going to try to complement my writing here with some videos from time to time, just to make some of these concepts more accessible to different audiences. If you're inclined, please do take a look, and share them with people who might find them interesting. (I'm expecting to use both quick vertical formats and more substantive traditional horizontal videos, and to post across most of the major social networks so as to not be overly dependent on any one platform.)