Blog of Jim Nielsen

Designer. Engineer. Writer. 20+ years at the intersection of design & code on the web.

Code as a Tool of Process

2026-03-25 03:00:00

Steve Krouse wrote a piece that has me nodding along:

Programming, like writing, is an activity, where one iteratively sharpens what they're doing as they do it. (You wouldn't believe how many drafts I've written of this essay.)

There’s an incredible amount of learning and improvement, i.e. sharpening, to be had through the process of iteratively building something.

As you bring each aspect of a feature into reality, it consistently confronts you with questions like, “But how will this here work?” And “Did you think of that there?”

If you jump over the process of iteratively building each part and just ask AI to generate a solution, you miss the opportunity of understanding the intricacies of each part which amounts to the summation of the whole.

I think there are a lot of details that never bubble to the surface when you generate code from English as it’s simply not precise enough for computers.

Writing code is a process that confronts you with questions about the details.

If you gloss over the details, things are going to behave unexpectedly, and users will discover the ambiguity in your thinking before you do (see also: “bugs”).

Writing code is a tool of process. As you go, it sharpens your thinking and helps you discover and then formulate the correctness of your program.

If you stop writing code and start generating it, you lose a process which helped sharpen and refine your thinking.

That’s why code generation can seem so fast: it allows you to skip over the slow, painful process of sharpening without making it obvious what you’re losing along the way.

You can’t understand the trade-offs you’re making if you’re not explicitly confronted with making them.

A Metaphor

To help me try to explain my thinking (and understand it myself), allow me a metaphor.

Imagine mining for gold.

There are gold nuggets in the hills.

And we used to discover them by using pick axes and shovels.

Then dynamite came along. Now we just blow the hillside away. Nuggets are fragmented into smaller pieces.

Quite frankly, we didn’t even know if there were big nuggets or small flecks in the hillside because we just blasted everything before we found anything.

After blasting, we take the dirt and process it until all we have left is a bunch of gold — most likely in the form of dust.

So we turn to people, our users, and say “Here’s your gold dust!”

But what if they don’t want dust? What if they want nuggets? Our tools and their processes don’t allow us to find and discover that anymore.

Dynamite is the wrong tool for that kind of work. It’s great in other contexts. If you just want a bunch of dust and you’re gonna melt it all down, maybe that works fine. But for finding intact, golden nuggets? Probably not.

It’s not just the tool that helps you, it’s the process the tool requires. Picks and shovels facilitate a certain kind of process. Dynamite another.

Code generation is an incredible tool, but it comes with a process too. Does that process help or hurt you achieve your goals?

It’s important to be cognizant of the trade-offs we make as we choose tools and their corresponding processes for working because it’s trade-offs all the way down.


Reply via: Email · Mastodon · Bluesky

More Details Than You Probably Wanted to Know About Recent Updates to My Notes Site

2026-03-23 03:00:00

I shipped some updates to my notes site. Nothing huge. Just small stuff.

But what is big stuff except a bunch of small stuff combined? So small stuff is important too.

What follows is a bunch of tiny details you probably don’t care about, but they were all decisions I had to make and account for along the way to shipping.

For me, the small details are the fun part!

Each Post Now Has Its Own URL

The site used to consist of a single, giant HTML page with every note ever. For feeds and linking purposes, I would link to individual posts by anchor linking somewhere in that giant HTML document, e.g.

https://notes.jim-nielsen.com/#2026-03-09T2305

That worked fine, but as my notes page was getting bigger and bigger, it seemed like a waste to load everything when all I wanted to do was link to a single note.

So I changed things. Now every note gets its own individual page, e.g.

https://notes.jim-nielsen.com/n/2026-03-09-2305/

You Might Have Noticed: I Changed the Note’s Identifier

Whenever I create a note, I name it based on the date/time of publishing, e.g.

2026-03-09T2305.md

That is what turns into the fragment identifier when deep linking to the note, e.g.

/#2026-03-09T2305

Initially, I was going to just translate those IDs to paths, e.g.

/#2026-03-09T2305 -> /n/2026-03-09T2305/

And while it seems fragment identifiers are supposed to be case-sensitive, in testing I was seeing Safari sometimes change the T to a t in the URL bar, e.g.

/#2026-03-09T2305 -> /n/2026-03-09t2305

Which really irked me.

Which got me thinking more about those identifiers, to the point where I decided to change them.

Which is why the fragment identifier for old posts will now redirect to new post pages with a slightly tweaked identifier:

/#2026-03-09T2305 -> /n/2026-03-09-2305/

I pulled the T and swapped it for a hyphen (-), so now the format for my markdown posts is:

YYYY-MM-DD-HHmm.md

Which ends up with permalinks to:

/n/YYYY-MM-DD-HHmm/
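As a quick sketch of that mapping (my own illustration, not the site's actual build code; `toPermalink` is a hypothetical helper):

```javascript
// Hypothetical helper: maps a markdown filename in the new
// YYYY-MM-DD-HHmm.md format to its permalink /n/YYYY-MM-DD-HHmm/
function toPermalink(filename) {
  const match = filename.match(/^(\d{4}-\d{2}-\d{2}-\d{4})\.md$/);
  if (!match) throw new Error(`Unexpected filename: ${filename}`);
  return `/n/${match[1]}/`;
}
```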

Too much info, I know, but I agonized over the right format here for my URLs because I don’t want to change it in the future.

I like where I landed.

But Wait! What About Redirects?

If you’re gonna change old URLs, you gotta have redirects to the new URLs — right? (Yes — if you want to be cool.)

But there’s no way to read fragment identifiers and handle redirects on the server (I’m on Netlify), so I had to handle redirects on the client.

I did this by sticking a render-blocking <script> in the head of my document; that way the browser checks whether it should redirect very early while loading the root document. Something like:

<!-- root HTML page -->
<head>
<script>
if (window.location.hash) {
  const hash = window.location.hash;
  // Look for /#YYYY-MM-DDTHHmm (accept a lowercase "t" too,
  // since Safari sometimes lowercases it)
  const match = hash.match(/^#(\d{4}-\d{2}-\d{2})[Tt](\d{4})$/);
  // If found, redirect to /n/YYYY-MM-DD-HHmm/
  if (match) {
    const href = "/n/" + match[1] + "-" + match[2] + "/";
    location.replace(href);
  }
}
</script>
<!-- if no redirect happened (because no fragment is present) 
     continue rendering the rest of the doc -->
</head>

But Wait! How To Roll Out These Changes?

There’s one problem here: if I change all the identifiers for my old posts to match how I’m going to do my new posts, that would mess up my feed, e.g.

<!-- <item> entry based on old post ID -->
<guid isPermaLink="false">2026-03-19T2330</guid>

<!-- same <item> entry but with new post ID -->
<guid isPermaLink="false">2026-03-19-2330</guid>

This is a big deal (to me) because it would make a bunch of my most recent posts show up twice in people’s feed readers.

So, to avoid this issue, I maintained support for the old IDs in my code alongside the new IDs.

<!-- all old posts before my changes -->
<guid isPermaLink="false">2026-03-19T2330</guid>

<!-- all new posts after my changes -->
<guid isPermaLink="false">2026-03-20-1221</guid>

Once my feed fills up with posts that use the new identifiers, I'll pull support in the code for the old format and rename all my old posts to follow the new identifier style.
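A sketch of what that dual-format support might look like; `normalizeId` is a hypothetical helper that maps an old T-separated ID to the new hyphenated form while passing new-style IDs through unchanged:

```javascript
// Hypothetical sketch: during the transition, accept both ID formats.
// Old: "2026-03-19T2330"  ->  new: "2026-03-19-2330"
// New-style IDs match neither "T" nor the pattern's anchor mismatch,
// so they pass through untouched.
function normalizeId(id) {
  return id.replace(/^(\d{4}-\d{2}-\d{2})T(\d{4})$/, "$1-$2");
}
```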

Oh, and by the way, this was super easy to test with Web Origami. I simply run a build locally and then use the Dev.changes tool to diff the currently-in-prod version of my feed against the new-locally-built one:

ori Dev.changes https://notes.jim-nielsen.com/feed.json, build/feed.json

Boom, no duplicate posts! You’re welcome feed readers.
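For anyone without Web Origami handy, the same sanity check can be sketched in plain JavaScript. `guidDiff` is a hypothetical helper (not the actual Dev.changes tool) that compares the `id` fields of two JSON Feed documents:

```javascript
// Hypothetical sanity check: report item ids that appear in only
// one of two JSON Feed documents. An empty "removed" list means no
// existing post will reappear as a duplicate in feed readers.
function guidDiff(oldFeed, newFeed) {
  const oldIds = new Set(oldFeed.items.map((item) => item.id));
  const newIds = new Set(newFeed.items.map((item) => item.id));
  return {
    removed: [...oldIds].filter((id) => !newIds.has(id)),
    added: [...newIds].filter((id) => !oldIds.has(id)),
  };
}
```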

Shuffle Functionality

I also added the ability to “shuffle” between posts. This is mostly for myself. I like to randomly jump through notes I’ve published in the past for recurring inspiration.

I didn’t want a server-side solution for this (e.g. a /shuffle/ route) because it would require a host-specific solution (like Netlify’s edge functions).

I figured it would be easier — and more resilient over time, in case I have to change hosts or my host’s API changes — to make this a client-side enhancement.

So it’s implemented as an <a> tag. If JS is present, it’ll stick a random href value in there. The code looks something like this:

<a href="#" id="shuffle">...</a>
<script>
const postsIds = ["2026-03-09-2305" /* …array of IDs generated by my SSG */];

// Randomly select one
const randomPostId = postsIds[
  Math.floor(Math.random() * postsIds.length)
];

// Set the href
const $a = document.querySelector("#shuffle");
if ($a) $a.href = `/n/${randomPostId}/`;
</script>

I thought about implementing this in my SSG, so each time I regenerate my site, it generates each new page with a random shuffle link. But there are build/deployment performance issues with that (file fingerprints change even on content-only deployments because of the inherent randomness), so this felt like a set of trade-offs I could be happy about.

The End

That’s it. Way more than you probably ever wanted to know about a really small release of changes to my notes site.

Does anybody even care about stuff like this anymore? AI could’ve just generated all of this in no time by my saying “I want a new route for each individual post” — right?

Probably. But there were a lot of small details to work through and get right. I don’t trust AI to get the details right. Not yet. Plus, I enjoy the details. It’s the part so many people skip over so there’s lots of esoteric fun to be had in that area.

If you’re not already, go subscribe because — as you can see — I take care of my subscribers. Or at least I try to :)


Reply via: Email · Mastodon · Bluesky

Re: People Are Not Friction

2026-03-21 03:00:00

Dave Rupert puts words to the feeling in the air: the unspoken promise of AI is that you can automate away all the tasks and people who stand in your way.

Sometimes I feel like there’s a palpable tension in the air as if we’re waiting to see whether AI will replace designers or engineers first. Designers empowered by AI might feel those pesky nay-saying, opinionated engineers aren’t needed anymore. Engineers empowered with AI might feel like AI creates designs that are good enough for most situations. Backend engineers feel like frontend engineering is a solved problem. Frontend engineers know scaffolding a CRUD app or an entire backend API is simple fodder for the agent. Meanwhile, management cackles in their leather chairs saying “Let them fight…”

It reminds me of something Paul Ford said:

The most brutal fact of life is that the discipline you love and care for is utterly irrelevant without the other disciplines that you tend to despise.

Ah yes, that age-old mindset where you believe your discipline is the only one that really matters.

Paradoxically, the promise of AI to every discipline is that it will help bypass the tedious-but-barely-necessary tasks (and people) of the other pesky disciplines.

AI whispers in our ears: “everyone else’s job is easy except yours”.

But people matter. They always have. Interacting with each other is the whole point!

I look forward to a future where, hopefully, decision makers realize: “Shit! The best products come from teams of people across various disciplines who know how to work with each other, instead of trying to obviate each other.”


Reply via: Email · Mastodon · Bluesky

You Might Debate It — If You Could See It

2026-03-18 03:00:00

Imagine I’m the design leader at your org and I present the following guidelines I want us to adopt as a team for doing design work:

  • Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
  • Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
  • Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
  • Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages.

How do you think that conversation would go?

I can easily imagine a spirited debate where some folks disagree with any or all of my points, arguing that they should be struck as guidelines from our collective ethos of craft. Perhaps some are boring, or too opinionated, or too reliant on trends. There are lots of valid, defensible reasons.

I can easily see this discussion being an exercise in frustration, where we debate for hours and get nowhere — “I suppose we can all agree to disagree”.

And yet — thanks to a link to Codex’s front-end tool guidelines in Simon Willison’s article about how coding agents work — I see that these are exactly the kind of guidelines that are tucked away inside an LLM that’s generating output for many teams.

It’s like a Trojan Horse of craft: guidelines you might never agree to explicitly are guiding LLM outputs, which means you are agreeing to them implicitly.

It’s a good reminder about the opacity of the instructions baked into generative tools.

We would debate an open set of guidelines for hours, but if they’re opaquely baked into a tool without our knowledge, does anybody even care?

When you offload your thinking, you might be on-loading someone else’s you’d never agree to — personally or collectively.


Reply via: Email · Mastodon · Bluesky

Food, Software, and Trade-offs

2026-03-16 03:00:00

Greg Knauss has my attention with a food analogy in his article “Lose Myself”:

A Ding Dong from a factory is not the same thing as a gâteau au chocolat et crème chantilly from a baker which is not the same thing as cramming chunks of chocolate and scoops of whipped cream directly into your mouth [...] The level of care, of personalization, of intimacy — both given and taken — changes its nature.

I love food and analogies, so let’s continue down that path. Take these three items for example:

  1. A McDonald’s cherry pie
  2. A Marie Callender’s cherry pie
  3. A homemade Jim Nielsen cherry pie

Which one of these is the best?

I’m sure an immediate reaction comes to mind.

But wait, what do we mean by “best”?

Best in terms of convenience? Best in terms of flavor? Best in terms of healthiness? Best in terms of how ingredients were sourced and processed? Best in terms of price? Best in terms of…

It’s all trade-offs.

I don’t think we talk about trade-offs enough, but they’re there. Always there. We might not know what they are yet if we’re on the frontier, but we’re always trading one thing for another.

“McDonald’s cherry pie is the best cherry pie ever.”

That’s a hot take for social media. We wouldn’t accept that as a rational statement applicable to everyone everywhere all the time. People have preferences, products have strengths and weaknesses, that’s the name of the game.

“All software in a year will be written by robots.”

Also a hot take, not a serious statement. It’s impossible to apply such a generic prediction to everything everywhere all of the time. But also: “software” hand-written by humans is not the same as “software” generated by a machine. To presume the two are equivalent is a mistake. There are trade-offs.

Everything has trade-offs, a set of attributes optimized and balanced towards a particular outcome.

You get X, but you lose Y.

Life is full of trade-offs. Anyone who says otherwise is trying to sell you something.


Reply via: Email · Mastodon · Bluesky

Related posts linking here: (2026) Code as a Tool of Process

Two of My Favorite Things Together at Last: Pies and Subdomains

2026-03-09 03:00:00

I like pie.

And I’ve learned that if I want a pie done right, I gotta do it myself.

Somewhere along my pilgrimage to pie perfection, I began taking a photo of each bake — pic or it didn’t happen.

Despite all my rhetoric for “owning your own content”, I’ve hypocritically used Instagram to do the deed.

Which has inexorably led me to this moment: I want an archive of all the pie pics I’ve snapped.

So I took the time to build and publish my best subdomain yet:

pies.jim-nielsen.com

How It Works

Programmatically, pulling pictures from Instagram used to be easy because they had APIs (access tokens expiring like every 60 days was annoying though). However, those APIs have been deprecated. Now if I want to pull data out of Instagram, I have to use their GUI export tools.

Screenshot of the export data tool in Instagram

Once the archive is ready, they send me a link. I download the archive and open the .zip file which results in a collection of disparate JSON files representing data like comments, likes, messages, pictures, etc.

Screenshot of Finder with various nested files and folders of JSON files.

I don’t care about most of those files. I just want pictures and captions. So I crafted an Origami script that pulls all that data out of the archive and puts it into a single directory: pictures, named by date, with a feed.json file to enumerate all the photos and their captions.

Screenshot of a folder in Finder with a bunch of images named by ISO8601 date and a feed.json file at the bottom.
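A rough sketch of that extraction step, with the caveat that the real transformation lives in an Origami script and the export's shape here is my assumption: the `media`, `uri`, `creation_timestamp`, and `title` field names are guesses, not a documented Instagram contract.

```javascript
// Hypothetical sketch: flatten exported posts into feed entries.
// Field names (media, uri, creation_timestamp, title) are assumed,
// not taken from any documented Instagram export schema.
function toFeedItems(exportedPosts) {
  return exportedPosts.flatMap((post) =>
    (post.media ?? []).map((m) => ({
      // Name each picture by its ISO 8601 publish date
      date: new Date(m.creation_timestamp * 1000).toISOString(),
      caption: m.title ?? "",
      image: m.uri,
    }))
  );
}
```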

At this point, I have an “archive” of all my data. This is what I stick on my CDN. (I'm hoping Instagram keeps the structure of this .zip consistent over time, that way I can update my archive every few months by just logging in, asking for a new export, and running my script.)

From here, I have a separate collection of files that uses this archive as the basis for making a webpage. I use Web Origami as my static site generator, which pulls the feed.json file from my CDN and turns all the data in that file into an HTML web page (and all the <img> tags reference the archive I put on my CDN).

Screenshot of pies.jim-nielsen.com

That’s it! The code’s on GitHub if you want to take a peek, or check out the final product at pies.jim-nielsen.com


Reply via: Email · Mastodon · Bluesky