2025-09-29 12:00:00
A little over a month ago, I wrapped a project with the City of Boston. It was, in a word, tremendous. I was brought in to help the city’s Digital Service team define a new design system, and I got to do it all alongside Christine Bath, a tremendously talented designer and researcher, and one of my former coworkers from 18F. The project wove together so many of the things I most enjoy about design systems work: interviewing stakeholders, building a map of the products (and teams) the design system would need to support, and some honest, old-fashioned design work. And all in the service of helping a city government better support its constituents. It was good work with great people, and I’m honored I got to do it.
But the project’s done now, so I’m thinking about what’s next. And that means I’ve updated my portfolio.
I’ll be honest: I struggle with portfolio updates. I mean, I’m extremely proud of the work I’ve done, and of the people I’ve gotten to do it with. But the exercise of pulling together a portfolio involves more than a few things I find difficult. That includes, but is not limited to:
But hey, who cares about feelings and anxieties! Capitalism sure doesn’t! I still had to, y’know, update my portfolio. So I designed a little process to help me get out of my head, and start getting pixels on a screen.
— well, I say “process.” I started by looking around at a few friends’ and colleagues’ portfolios, and spending time identifying what I liked about how they showcased their work. Just a few examples:
And honestly, after a week or so of idle field research, it was so freeing to see the sheer diversity of approaches. It really did help me land on a portfolio structure that felt right: a single page, one that uses a curated set of projects to (hopefully!) highlight the kind of work I like doing.
From there, I started thinking a bit more about what else I’d like to include on the page. Some things felt easy: I’ve kept the brief section outlining how I typically structure my client engagements; it’s a useful conversation starter and, as a colleague once told me, it’s helpful to let others know how they can hire you. Other things felt more difficult. Ultimately, I figured I’d lean into my [checks notes] varied skillset, and highlight it at the top of the page, along with some common themes from my design practice.
Right now, I’m happy with where things landed. But hey, look. As much as anything else on this little site, this post you’re reading is a marker. A moment. Over time my portfolio’s going to change, shift, stretch, and warp, to the point where it won’t resemble anything I’ve written up above. I’ll do different work; I’ll talk about my work differently; maybe I’ll even reconcile myself to the profound unease I feel when I’m asked to describe myself as a specific kind of digital designer in 2025-friendly terms. But for now — right now — I’ve got a new portfolio, and I’m feeling pretty proud of it.
This is all to say: I’m currently available for hire. While I typically work on a contract basis for my clients, I’m open to full-time roles if the opportunity’s right.
Thanks, as always, for reading.
This has been “A carried leaf,” a post from Ethan’s journal.
2025-09-25 12:00:00
After writing about design under fascism and framing “artificial intelligence” as a failure, I realized my tiny little brain could use a breather.
So I thought I’d tell you about some well-made things I’ve enjoyed lately.
I’d been using a fairly popular code editor for a few years. And I’d describe it as “basically fine”: it did what I needed, had some extensions I liked, and was, well, basically fine. But in the last year or so, it had started incorporating more “artificial intelligence” (“AI”) features. I could disable them, so whatever. But it also seemed that over time, more and more of the application’s release notes were getting dedicated to the latest “AI” developments. And there were a few times when the features I’d disabled were mysteriously re-enabled after I installed an update.
Anyway, this is all to say that I recently switched to Nova, and it’s been a downright delight. I’ve been using Panic products throughout my career — Transmit hive rise up — and Nova’s just as polished as the rest of their work. Like, I opened the color picker the first week and sighed happily. This is a totally normal reaction to software, and I am invited to many glamorous parties.
Anyway, I like it a lot.
If you subscribe to my newsletter, you might’ve noticed it now looks a little different. The old service I’d been using had started to go all-in on “AI” features, none of which I was using. And if I’m honest, their new strategy had me worried that my writing — or, more worryingly, my subscriber list — was getting shlorped up for training data.
(Okay, between this and my switch to Nova, I’m…seeing a trend emerge here.)
Anyway, I made the switch to Buttondown, and I couldn’t be happier. Migration was a breeze, and the few times I’ve had questions I’ve been so impressed by the support I’ve gotten. And the product works a danged treat; I can really tell it’s made with care.
It’s early days yet, but I can’t tell you how happy I am to have my stream up and running. It’s primarily an archive, a place where I can store links to the things I find interesting. I mention all this because in the few months since the stream’s launched, I haven’t really revisited most of the links after posting them.
The one exception so far? Rob Weychert’s new website.
I mean, my goodness, look at it: it’s just a beautiful, beautiful thing. Rob’s archived a staggering amount of personal data here: concerts he’s attended, books he’s read, movies he’s watched, and more. All of it wrapped up in an artful, thoughtfully-executed package.
My little redesign’s only been online for a few months, and Rob’s site is already giving me ideas for my next one.
Those are a few well-made things I’ve been enjoying lately. What about you?
This has been “Some well-made things,” a post from Ethan’s journal.
2025-09-18 12:00:00
I think it’s long past time I start discussing “artificial intelligence” (“AI”) as a failed technology. Specifically, that large language models (LLMs) have repeatedly and consistently failed to demonstrate value to anyone other than their investors and shareholders. The technology is a failure, and I’d like to invite you to join me in treating it as such.
I’m not the first one to land here, of course; the likes of Karen Hao, Alex Hanna, Emily Bender, and more have been on this beat longer than I have. And just to be clear, describing “AI” as a failure doesn’t mean it doesn’t have useful, individual applications; it’s possible you’re already thinking of some that matter to you. But I think it’s important to see those as exceptions to the technology’s overwhelming bias toward failure. In fact, I think describing the technology as a thing that has failed can be helpful in elevating what does actually work about it. Heck, maybe it’ll even help us build a better alternative to it.1
In other words, approaching “AI” as failure opens up some really useful lines of thinking and criticism. I want to spend more time with them.
Right, so: why do I think it’s a failure? Well, there are a few reasons.
The first is that as a product class, “AI” is a failed technology. I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors. But in more concrete terms, consumers dislike “AI” when it shows up in products, and it makes them actively mistrust the brands that employ it. In other words, we’re some three years into the hype cycle, and LLMs haven’t met any markers of success we’d apply to, well, literally any other technology.
This failure can’t be separated from the staggering social, cultural, and ecological costs associated with simply using these services: the environmental harms baked into these platforms; the violent disregard for copyright that brought them into being; the real-world deaths they’ve potentially caused; the workforce of underpaid and traumatized contractors that are quite literally building these platforms; and many, many more. I mention these costs because this isn’t a case of a well-built technology failing to find its market. As a force for devastation and harm, “AI” is a wild success; but as a viable product it is, again, a failure.
And yet despite all of this, “AI” feels like it’s just, like, everywhere. Consumers may not like or even trust “AI” features, but that hasn’t stopped product companies from shipping them. Corporations are constantly launching new LLM initiatives, often simply because of “the risk of falling behind” their competitors. What’s more, according to a recent MIT report, very nearly all corporate “AI” pilots fail.
I want to suggest that the ubiquity of LLMs is another sign of the technology’s failure. It is not succeeding on its own merits. Rather, it’s being propped up by terrifying amounts of investment capital, not to mention a recent glut of government contracts.2 Without that fiscal support, I very much doubt LLMs would even exist at the scale they currently do.
So. The technology doesn’t deliver consistent results, much less desirable ones; what’s more, it extracts terrible costs to not reliably produce anything of value. It is fundamentally a failure. And yet, private companies and public institutions alike keep adopting it. Why is that?
From where I sit, the most consistent application of LLMs at work has been through top-down corporate mandate: a company’s leadership will suggest, urge, or outright require employees to incorporate “AI” in their work. Zapier’s post on its “AI-first” mandate is one recent example. At some point, the company decided to mandate “AI” usage across its organization, joining such august brands as Shopify, Duolingo, and Taco Bell. But in this post from the summer, Zapier’s global head of talent talks about how the company’s expanding the size and scope of that initial mandate. Here’s the intro:
Recently, we shared our AI adoption playbook, which showed that 89% of the Zapier team is already using AI in their daily work. But to make AI transformation truly sustainable, we have to start at the beginning: how we hire and onboard people into Zapier to build this future with us.
I’ve written before about the problems with “adoption” as a success metric: that “usage of a thing” doesn’t communicate anything about the quality of that usage, or about the health of the system overall. But despite that, Zapier’s moved beyond mandated adoption, and has begun changing its hiring and onboarding practices — including how it evaluates employee performance. How does an “AI” mandate show up in a performance review? I’m so glad you asked:
Starting immediately, all new Zapier hires are expected to meet a minimum standard for AI fluency. That doesn’t mean deep technical expertise in every case — but it does mean showing a mindset of curiosity toward AI, a demonstrated willingness to experiment with it, and an ability to think strategically about how AI can amplify their work.
[…]
We map skills across four levels, keeping in mind that AI skills vary and are heavily role-specific.
- Unacceptable: Resistant to AI tools and skeptical of their value
- Capable: Using the most popular tools, with likely under three months of hands-on experience
- Adoptive: Embedding AI in personal workflows, tuning prompts, chaining models, and automating tasks to boost efficiency
- Transformative: Uses AI not just as a tool but to rethink strategy and deliver user-facing value that wasn’t possible a couple years ago
There’s an insidious thing nestled in here.
Andy Bell and Brian Merchant have both documented tech workers’ reactions to “AI” mandates: what it feels like to have parts of your job outsourced to automation, and how it changes what it feels like to show up for work. I’d recommend reading both posts in full; it’s possible you’ll see something of your own feelings mirrored in those testimonials. And those stories track with my own conversations with tech workers, who’ve shared how difficult it is to talk openly about their concerns at work. I’ve heard repeatedly about a kind of stifling social pressure: an implicit, unstated expectation that “AI” has to be seen as good and useful; pointing out limitations or raising questions feels difficult, if not dangerous.
But this Zapier post is the first example I’ve seen of a company making that implicit expectation into an explicit one. Here, the official policy is that attitude toward a technology should be used as a quantifiable measurement of how well a person aligns with the company’s goals: what the industry has historically (and euphemistically) referred to as culture fit. At this company, you could receive a negative performance review for being perceived as “resistant” or “skeptical” of LLMs. You’d be labeled as unacceptable.
I mean, look: on the face of it, that’s absurd. That is absurd behavior. Imagine screening prospective hires by asking their opinions about your company’s hosting provider, or evaluating employees for how they feel about Microsoft Teams. Just to be clear, I fully believe evaluations like these have happened in the industry — hiring and performance reviews are both riddled with bias, especially in tech. But this is the first time I’ve seen a company policy explicitly state that acceptance of “AI” is a matter of cultural compliance. That you’re either on board with “artificial intelligence,” or you’re not one of us.
This is where I think approaching “AI” as a failure becomes useful, even vital: it underscores that the technology’s real value isn’t in improving productivity, or even in improving products. Rather, it’s a social mechanism employed to ensure compliance in the workplace, and to weaken worker power. Stories like the one at Zapier are becoming more common, where executive fiat is used to force employees to adopt a technology that could deskill them and make them more replaceable. Arguably, this is the one use case where “artificial intelligence” has delivered some measure of consistent, provable results.
But here’s the thing: this is a success only if tech workers allow it to be. I’m convinced we can turn this into a failure, too. And we do that by getting organized.
— okay, yes, I know. I am the person who thinks you deserve a union. But it’s not just me: from game studios to newsrooms, many workers are unionizing specifically because they want contractual protections from “artificial intelligence.” Heck, the twin strikes in Hollywood weren’t about banning “AI,” but about giving workers control over how and when the technology was employed. I think at minimum, we deserve that level of control over our work.
With all that said, you don’t have to be unionized to start organizing: to have conversations with your coworkers, to share how you’re feeling about these changes at work, and to start talking about what you’d like to do about those changes, together. It really is that simple.
That isn’t to say organizing is easy, mind: it involves having many, many conversations with your coworkers, and looking for shared concerns about issues in the workplace. And, look: I’m writing this post at a time when the labor market’s tight, when there’s so much pressure to not just adopt LLMs but to accept them unquestioningly. In that context, I realize that inviting coworkers to share some thoughts about automation can feel difficult, if not dangerous. But it’s only by organizing — by talking and listening to each other, and acting together in solidarity — that we have a chance at building a better, safer version of the tech industry.
“Artificial intelligence” is a failure. Let’s you and I make sure it stays that way.
I’m profoundly skeptical this could ever happen under capitalism. But what the hell, I’m willing to hypothesize a bit. ↩︎
To keep this too-long post a little less long, I’m not going to spend any time talking about the founders of these platforms, their wholesale embrace of the United States’ current far-right administration, or the tools they’re selling said administration to visit harm upon marginalized communities and political opponents alike. But please believe I am thinking about them. ↩︎
This has been “Against the protection of stocking frames,” a post from Ethan’s journal.
2025-08-27 12:00:00
Note: This post gets into American politics. If that’s not your cup of tea, or if that’s a stressful topic for you, please feel free to skip this one. (Also, it’s a bit long. Sorry about that.)
Last week, my country’s far-right administration announced they were establishing an “America by Design” initiative, along with a so-called National Design Studio to oversee it. That studio will, to quote its own homepage, “improve how Americans experience their government — online, in person, and the spaces in between.” After seeing the announcement, I read through the “America by Design” web page. And I have some thoughts.
I mean, there are the surface-level observations. The text is poorly written, and filled with typos; I expect both of these things on my website, but not on an announcement of this scale. And aesthetically, the design is…well, tepid. Once you get past the literal flag-waving in the header, there’s some text that slides in as you read — something you’ve seen on every other product site that’s launched in the last decade. If this is meant to herald a new era in design for the federal government, it is a singularly meager vision.
But I don’t want to get mired in the aesthetics. Let’s look at how the site was built — after all, that’s part of “design.”
We’ll start with the fact that the “America by Design” site is a single HTML page, not unlike the blog post you’re reading. But to make those words appear in your browser, this new “national studio” used almost three megabytes of code. Imagine having to download a three-minute MP3 each time you visit a web page, and you’re in the ballpark.
I realize that doesn’t sound like much, especially by the standards of today’s slow, heavy, overbuilt web. But imagine you’re one of the millions of Americans on a prepaid “pay as you go” plan, or on a home network with capped, limited data. In both cases, data overages are tremendously expensive. Now imagine you’re trying to access some critical information online — how to renew your passport, say, or to manage your Social Security — and you’re met with a web page that is literally too expensive for you to view. An overbuilt, too-heavy website isn’t exactly a rarity on today’s internet. (Sadly.) But this design is coming from the federal government, which is quite literally meant to serve every citizen — every single one of them. The American people will be poorer for this work, figuratively and literally.
And it’s not just the sheer weight of the page. As inclusive design experts like Anna Cook and Jesse Gardner have noted, this one page is waterlogged with accessibility errors. Low-contrast text, looping animations that can’t be paused or hidden, poorly-structured HTML, images with missing (or incorrect) text equivalents: any one of these errors would be an annoyance. Creating a single page with literally hundreds of accessibility errors will exclude anyone who doesn’t conform to the designer’s narrow definition of “a user.”
In other words, we’re left with a web page announcing a new era of design for the United States government, but it’s tremendously costly to download, and inaccessible to many. What I want to suggest is that these aren’t accidents. They read to me as signals of intent: of how this administration intends to practice design.
A web page that’s literally too expensive to view? It aligns with this administration’s war on the poorest and most vulnerable residents of this country: they’ve passed legislation that will push millions of people off of Medicaid, and are discussing ways to destabilize, if not directly privatize, Social Security. And the awful accessibility isn’t a surprise, not when the current administration is already ignoring its legal mandate to create accessible digital services. (Heck, the current website for the actual, literal White House is a broken, inaccessible mess.) But it also aligns with the administration’s open embrace of eugenics, and with their disregard for disabled Americans. After all, an anti-vaccine extremist is in charge of, and is actively dismantling, the country’s health apparatuses.
So, no: I don’t think it’s an accident that a simple-looking “America by Design” page is built the way it is. It’s communicating their priorities, and how this government wants to redefine design. This “national studio” is designing for the small subset of American citizens who fit their ideal of who can afford, and who can access, the digital services they’ll create.
There’s one last thing I want to mention, but it involves digging into the text a bit. In it, the studio talks about how the “America by Design” initiative will transform the process of interacting with the federal government, turning it into a more “Apple Store like [sic] experience.” It even lists a few examples:
Something you actually look forward to when you…
Pay off your student loans,
Move through TSA,
Renew your passport,
Visit national monuments,
Apply for a small business loan,
Apply for your green card,
Stay the night at a National Park,
Manage your social security.
Even file your taxes.
(Emphasis theirs, not mine, and for reasons I fail to understand.)
Throughout the page, the suggestion isn’t just that the federal government traffics exclusively in poor, ineffective design; it’s that this is the first time anyone has ever proposed changing that.
Of course, that’s — well. Let’s go with “laughable.” First and foremost, it ignores designers and digital teams currently employed by the federal government, who work to make their agencies’ services more user-friendly. And it ignores the United States Web Design System and the people who maintain it, and how their labor makes government services more accessible and consistent. But it also ignores the United States Digital Service and 18F, two digital service agencies tasked with improving the way the federal government built and acquired software.
(I should note that many of the hypothetical scenarios above were, in fact, projects being worked on by those last two groups — or they were, until this administration shut them down after taking office.)
As I’ve mentioned before, I worked at 18F. During my too-brief time there, I saw exactly what it meant to improve the experience of renewing your passport, or providing an easier way to file your taxes. Despite what this new “studio” would suggest, designing better government services didn’t involve smearing an animated flag and a few nice fonts across a website. It involved months, if not years, of work: establishing a regular cadence of user research and stakeholder interviews; building partnerships across different teams or agencies; working to understand the often vast complexity of the policy and technical problems involved; and much, much more. Judging by their mission statement, this “studio” confuses surface-level aesthetics with the real, substantive work of design.
The thing is, there’s something difficult wrapped up in that.
There’s a long, brutal history of design under fascism, and specifically in the way aesthetics are used to define a single national identity. Dwell had a good feature on this in June:
Part of Mussolini’s vision for Italy centered around producing a totalizing image of Italian identity — not Tuscan or Roman or Sicilian. (As [Ignacio Galán, author of Furnishing Fascism] notes in the book, during the Risorgimento one popular saying was, “We have made Italy. Now we need to make the Italians.”) In Mussolini’s opinion, the creation of a national identity could engender patriotism as well as uniformity. If he and his buddies could convince everyone that ideal Italians worked hard, maintained a clean home and healthy body, and believed in the supreme leader above all else, then they could more easily control Italians writ large.
Throughout the text, there’s a single-minded emphasis on aesthetics over design. We’ve already seen that in the studio’s disregard for the page’s weight and accessibility — and now we’re seeing it carried through the page’s text. Design for this “national studio” is about surface-level signals of “experience” and “beauty,” instead of the messy, iterative, imperfect work involved in designing something for people. When this “national studio,” and the administration that created it, tells us it wants to create “an experience that projects a level of excellence for our nation”? That’s aesthetics with a nationalistic twist, and we should take them at their word.
It’s in that light I’d like to revisit something I said earlier: when the administration suggests nobody has tried this before, I don’t think it’s just arrogance. It’s an extension of this administration’s fascist relationship with history.
Since taking office, this administration has worked to position itself as the arbiter of what constitutes “acceptable speech” and “acceptable history.” It is censoring the histories documented by our nation’s museums; it is rewriting curricula to erase the already marginalized; it is canceling artistic grants that don’t align with its goals; it has defunded public broadcasting. So when the administration says here that now, for the first time ever, someone is attempting to do something to fix the nation’s digital services, I don’t think it’s an error. It’s an act of erasure, in line with the other parts of the authoritarian project we’ve seen unfold since January.
Because, yes: this “America by Design” page is shoddily made, and poorly written. But the authoritarian impulse — to erase histories, to control a narrative, to single-mindedly focus on image and aesthetics — shapes not just the site’s text, but its design as well. Its text erases the history and work of the people who quietly labored to create better digital services for the public; in their place, it proposes that one man alone can define “design” for the country. And we find that new definition in the way the site’s constructed: it is digital design intended for the privileged few, one that actively excludes people who don’t conform to a specific, discriminatory definition of “eligible.”
All of this should and must be rebuked by the design community; it must also be actively, urgently dismantled.
This has been “A notional design studio,” a post from Ethan’s journal.
2025-08-12 12:00:00
World’s on fire, so here’s a short post about newsletters.
My friend Eric wrote a great little post about the newsletters that regularly hit his inbox. You should read it! At the end, he tagged a few friends to do the same, and I was one of those friends.
I’m so grateful he asked me. At the same time, I confess I feel a little sheepish writing a whole post about newsletters. (And not just because of the whole “world’s on fire” thing.) Here’s the thing: I get very few newsletters in my actual email inbox. The few newsletters I do subscribe to are sent to Feedbin, the wonderful RSS software I use. Does the whole “stolen valor” thing apply here?
…no, probably not. An inbox is an inbox, I guess. So regardless of where they show up, here are the newsletters I most look forward to.1
The Economic Policy Institute is a think tank that advocates for economic justice, and I’ve learned a lot from their research-heavy, worker-focused newsletter.
Labor Notes is a nonprofit publication that covers, educates, and builds the fighting wing of the labor movement. I cannot recommend an annual subscription more strongly — but even if that’s not of interest, their email updates are free.
Scalawag is a nonprofit magazine dedicated to telling stories of and from the American south, and each week they share some of those stories.
Jenny’s Show Up Toronto is a thing of beauty, curating a calendar of advocacy events in Toronto. I can only imagine the work involved in getting this site online — building the platform, scouring social media sites for events, entering them into the system — and she still manages to write a thoughtful newsletter about activism? Rude, imo.
(Like everything Jenny writes, it’s wonderful.)
Some of the hardest-working organizers I know, and they still publish a great little newsletter about labor issues (and labor wins) in tech.
The United Electrical, Radio and Machine Workers of America (“UE”) is an independent union focused on building rank-and-file power.
I hope it says something that even though I’m contributing to the project, I still heartily recommend Unbreaking’s email newsletter. Each new blog entry is emailed out to subscribers, highlighting the big developments the project’s aware of. The news is rarely good, but each email is always, always useful.
That’s it from me, I think. Following Eric’s example, I’d love to invite a few other folks to share their favorite newsletters:
Don’t see your name up there? Well, I’d still love to hear what newsletters you find inspiring, helpful, or hopeful. Write up a blog post and email a link to me. I can’t wait to see what you’re reading.
There are a few other newsletters I subscribe to, but they’re on Substack. While I love those writers, I can’t recommend a Substack newsletter. ↩︎
This has been “Newslettered,” a post from Ethan’s journal.
2025-06-23 12:00:00
World’s on fire and the ghouls keep buying matches, so I’m working on my website.
At the end of last week, I launched a very basic “links and sundries” page. Pretty much ever since I joined Twitter (valē), social media has always been where I’ve shared links I find interesting or inspiring. I’ve always wanted a more permanent solution — or more permanent-feeling, anyway; what’s a link’s average lifespan these days? — so I built one for my website.
When I say “very basic,” I mean very basic. When I first launched it last Thursday night, each link was generated from a Markdown file I’d created in a folder. Heck, the page didn’t even have an RSS feed when it first went live. (It has one now, if that’s your thing.) I’ve spent the intervening days trying to spruce the place up.
As of today, everything in the links section is pulled in from a bookmarking service called Raindrop.io. That’s due entirely to Sophie Koonin, who wrote an excellent post showing how she uses Raindrop.io and Eleventy to automatically generate her weekly link posts. Without Sophie’s stellar tutorial, I would’ve been stuck cobbling together Markdown files by hand, like some sort of feral woodland creature.1 But now, every time I build my site it fetches any new bookmarks I’ve made that day, and creates a post for each bookmark it finds.
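If you’re curious what that build step can look like, here’s a minimal sketch of an Eleventy global data file in the spirit of Sophie’s approach. It’s not her code (or mine): the `RAINDROP_TOKEN` environment variable and the handful of fields kept at the end are assumptions, so treat it as a rough outline rather than a recipe.

```js
// _data/bookmarks.js
// A sketch of an Eleventy global data file that pulls bookmarks from
// Raindrop.io at build time. Assumes Node 18+ (for the built-in fetch)
// and a Raindrop.io API token stored in the RAINDROP_TOKEN environment
// variable; both are illustrative choices, not requirements.
module.exports = async function () {
  // Collection 0 is Raindrop.io's "all bookmarks" collection.
  const response = await fetch(
    "https://api.raindrop.io/rest/v1/raindrops/0?perpage=50",
    { headers: { Authorization: `Bearer ${process.env.RAINDROP_TOKEN}` } }
  );

  if (!response.ok) {
    // Fail the build loudly instead of quietly publishing an empty page.
    throw new Error(`Raindrop.io request failed: ${response.status}`);
  }

  const { items } = await response.json();

  // Keep only the fields a template is likely to need.
  return items.map((item) => ({
    title: item.title,
    url: item.link,
    note: item.note,
    tags: item.tags,
    date: item.created,
  }));
};
```

From there, a template can loop over `bookmarks` and render each one as its own entry, which is roughly the shape of setup that replaces a folder of hand-built Markdown files.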
I’m really happy with this setup. I can trawl the internet as I am wont to do, saving links to my heart’s content. There’s still more I could do, though. I should probably set up permalinks for each post. Also, while I’m tagging the links I save, I’m not currently showing those tags on the page — that’s probably worth fixing. And I’ve been thinking about other kinds of things I might want to save in that section: not just links to interesting articles or websites, but maybe the odd video or photo I find inspiring. Just to fully embrace l’esprit du Tumblr, I guess.
I’ve also thought about what I won’t be doing with that new links section. Namely, I don’t see myself automatically sharing bookmarked sites on social media. That’s not to suggest in any way that this is a bad pattern! If you do something similar on your site, I think that’s grand — truly. But when I launched the first version last week, I reflected a bit on how I’ve spent years running to social media to share links with my followers. And I realized that this new section felt different: it felt like something I’d made for me. I think I’d like to keep it that way, at least for now. Because in a small way, it feels like coming home.
You know. Like a marmot that, uh, hoards text files. Just like that. (I’m so tired.) ↩︎
This has been “Link bug,” a post from Ethan’s journal.