Blog of Bryce Wray

Based in the Dallas/Fort Worth area in Texas, U.S.A., I’m a nerdy advocate for static websites and the tools that build them — particularly Eleventy and Hugo.


Mixed nuts #16

2025-09-27 01:12:00

The Google ruling, Netlify’s pricing changes, and other tales of interest.


Yet again, I’ll indulge myself in commenting on a variety of topics stemming from the nerdy stuff to which I pay attention. I’d originally intended for this latest post to be focused on just one of them — I’ll leave it to you to guess which — but, the longer I procrastinated, the greater the number of happenings I wanted to discuss. It’s not a desirable habit, but it’s Moi, folks. Let’s have at it.

I’ve spent most of this calendar year waiting to see what happened in the Google antitrust case. My expectation had been that we’d get a ruling in August, but it ended up slipping into early September. The most important effect of the ruling (PDF), at least for us ordinary web-browsing folks out here, was that Alphabet’s Google arm gets to keep Chrome and can keep paying organizations like Mozilla to promote the Google search engine. One can honestly and fairly debate the merits of both Google and Mozilla; but it’s a good thing that the incredibly important Chromium project will still have the nearly unlimited financial support that only Alphabet can give it, and it’s another good thing that there will still be financial backing for Mozilla’s Firefox browser (and, by extension, the numerous other FOSS projects that exist because Firefox’s Gecko engine can continue to exist). By the way, the ruling itself won’t go into effect for perhaps years due to the appellate process, but nearly all legal opinions I’ve seen on that particular aspect seem to agree that (what I consider to be) the good parts won’t change.

Starting in mid-2020, Netlify’s free tier allowed 300 minutes per month of website deployments. I long ago wrote about how to get around this by using an external CI/CD provider, rather than Netlify’s servers, to build a Netlify-hosted site. However, Netlify’s recently announced overhaul to its pricing plans has changed that. Now, a project on the free plan gets 300 credits per month, and each deployment — even if the build itself comes from elsewhere — costs fifteen of those, meaning you get a maximum of twenty deploys a month on the free plan. This will be problematic for some and a nothing-burger for others. Just sayin’. And, in case you’re wondering: Netlify-hosted projects that were on the previous free plan prior to this change are grandfathered with the old 300-minute limit that’s unrelated to credits; but, going forward, the 300-credit free plan is the new normal.

Those who work with anything built on npm-hosted dependencies have been reminded and re-reminded in recent weeks that the resulting supply chain can, um, have its moments. Two different supply chain attacks using especially crafty social-engineering ploys briefly made the use of numerous popular dependencies problematic. GitHub (which, like npm, is owned by Microsoft) announced a plan to improve the situation, but there inevitably will be ways of getting around even the “best laid” plans.

Apple released its latest major OS versions on September 15, and I have made peace with them for the most part. I am not a huge fan of the much-maligned Liquid Glass look but, after tweaking things here and there, have managed to live with it without a whole lot of pain. Based on some of the videos I saw from the earliest betas of these OSs a few months ago, it could’ve been a lot worse. And there are some new things I really like, such as being able to use a real Phone app on the Mac rather than an awkward interaction with the FaceTime app whenever I want to do an audio-only speakerphone call using my monitor’s audio system.

I learned only this week of yet another Chromium-based browser in the wild, called Helium. It’s fully FOSS — consider it a cooler, easier, and more updates-friendly way to use ungoogled-chromium — and is an attractive, lean, and quick performer. Helium is still in beta and the project has a few quirks that make it not yet ready for daily driving (at least mine), but it’s promising. If you can abide listening to Theo Browne on YouTube, this link will take you to the relevant part of a browsers-comparison video where he discussed Helium and explained his confidence in those behind this project, to which he apparently donated some funding. Or, for an alternative take, you can also look at the (mostly negative and, I feel, often off-topic) comments in this Hacker News thread.

Reply via email

New life for the old Mac with Linux: two years later

2025-08-30 01:21:00

Distro choices, uses, and a few continuing nits to pick.


Two years have passed since I began telling you about putting Linux on the 2017 Intel iMac I’d recently shelved in favor of a 2023 Apple Silicon Mac Studio. Apparently, some of the posts I wrote about this have become among my most frequently visited content, so here’s a brief update on how things are going with Linux on the older Mac.

My distro-hopping of the early days — mainly between Fedora and Arch — settled on Fedora in late 2023, while I ran what turned out to be a short-lived project for testing web browsers. I found Fedora easier for that, because several browser-makers provide official versions for not only Red Hat-based distros like Fedora but also Debian-based distros; you need only add the appropriate repositories.

Even after I ceased worrying about testing browsers, I still judged Fedora more convenient for the access to those official versions, not only for browsers but also other apps, such as 1Password. That remains true today. For other apps, I generally rely on either the official Fedora repository or Flathub.

I should add that I use “vanilla” Fedora Workstation. While I have tried immutable distros, they felt a bit limiting in some ways. (To be fair, I’m sure that’s at least part of the intent for data-securing purposes, and it probably does work better for many folks.)

My main use of Linux these days is as a gaming platform, thanks to the continuing advances of the Proton project. However, I’ve also found a handy weather radar app, Supercell Wx, which I can highly recommend.

I wish I could tell you I found solutions to some of the Linux-on-Mac issues I reported back in 2023, but that’s not the case. Moreover, I suspect those solutions won’t be forthcoming for the reasons I outlined at that time. Specifically . . .

  • Audio — I never found a (permanent) way around the “dummy output” annoyance which made it impossible to use the iMac’s excellent built-in speaker system, so I’m still sticking to my workaround of USB-connected speakers. The sound quality isn’t great, but it’s also not terrible.
  • Video — The iMac’s Retina Display remains something Linux doesn’t know how to handle properly so, particularly given my gaming emphasis, I stick to plain old 1920 × 1080 resolution. Meh. At least the text is fairly clear, which wasn’t always the case when I was trying to use 2560 × 1440.
    (Update/correction, 2025-08-30: Well, I’d forgotten my actual settings. On rechecking, turns out I’m really using a setting of 3840 × 2160 but with text scaling set at 200%, although I run games at 1920 × 1080 because, otherwise, the old Mac’s fans run a lot.)
  • Sleep — Every once in a while, particularly after a major update to whatever distro I’m using at the time (i.e., this has been an ongoing practice for the last two years), I’ll play with system settings in an attempt to see if the iMac finally can sleep and be reawakened normally. Sadly, no joy yet. Although that’s a power-saving luxury I’d enjoyed all the years when the iMac was running macOS, it remains out of reach for Linux-on-Mac.

You may have expected that I’d have more to report in this regard, but last year’s health problems kept me mostly off the old Mac for months at a time, so it’s really only this year that I have resumed any degree of truly active use of Linux on that device.

While the Apple Silicon Mac remains my daily driver, I fully anticipate continuing to use Linux on the Intel iMac as long as I’m able. Since Linux on Apple Silicon seems problematic for now and may remain so into the foreseeable future, I will inevitably have to decide what to do whenever Apple EOLs macOS for my newer Mac, just as it did in 2023 for the older one. On the other hand: since that event is likely several years out and I’m already about to turn seventy, will I even care by then? (Eyes, typing fingers, and brain cells tend to fail at some point. One can hold off Father Time only so long.) Perhaps I’ll find out someday.

Reply via email

Hugo sites on Cloudflare Workers — or not

2025-07-12 02:46:00

Longer-term considerations about recently announced changes at Cloudflare.


On further reflection, I’ve decided Cloudflare’s quiet-ish announcement about the Cloudflare Pages platform, about which I first wrote a few weeks ago, bears some more discussion. That’s especially true for sites like this one, built on the Hugo static site generator (SSG).

In fact, the whole thing has led me to think about how one might want to make a Hugo site more portable, to minimize the potential impact of such vendor-side changes, both now and in the future. If you, too, have used Cloudflare Pages as a Hugo site’s home and are now pondering what to do, perhaps this post will help you understand your options more clearly.

Our story so far . . .

In case you missed it: Cloudflare essentially put Cloudflare Pages (CFP) on life support a few months back, and began advising potential CFP users to build sites on the newly enhanced Cloudflare Workers (CFW) platform instead.1 While the CFP platform will continue to exist at least for the time being, Cloudflare really wants folks to change over to CFW.

And, to be fair: this may not be that big a deal for sites built on JavaScript-based SSGs. Indeed, the CFW documentation includes a list of recommended site-building frameworks, each of which is a mass of JavaScript dependencies. As a result, for the most part, making CFW work with any of these frameworks can be as simple as npm install. That’s not the case with the Go-based Hugo, which is a binary.

When the CFP-to-CFW issue arose on the Hugo Discourse forum, Joe Mooring of the Hugo project took time to provide great guidance about putting a Hugo site on CFW. This made it easy enough to convert my own simple site from CFP to CFW the same day I found out about all this.

But, in the ensuing weeks, I’ve seen online comments from Hugo users with more complex CFP-hosted sites and, unfortunately, ongoing issues trying to transition to CFW from the much easier CFP. For example, those whose sites depend on Git submodules, such as for externally produced themes, have found CFW currently unsuitable if used with a private repository.2

These users’ frustrations are enough to make them reconsider whether it’s even worth bothering to make the transition work vs. just starting over with a competing and, presumably, Hugo-friendlier (or less Hugo-unfriendly) host. Thoughts of this type inevitably lead one to wonder how to make one’s Hugo project as portable as possible, for just such cases.3

After much ensuing head-scratching and research in this vein, including even revisiting a few of my own past posts about the where-to-put-your-static-site issue, I reached some conclusions about how, and where, a Hugo-based site should exist in the light of these new realities. As I walk you through some of my considerations, I hope they’ll help your own decision-making process if you’re entertaining similar contemplations.

Binaries are the biggie

For a Hugo site, the first and foremost issue involves the handling of binaries.

Building with Hugo requires a host whose build image either has the Hugo binary or, at the very least, lets you install it during the build. Additionally: if you’re styling your site with Sass, you must also be able to get the host to install the Dart Sass binary into the correct path. (Even if you presently have no interest in using Sass on your Hugo site, you still may want your host at least to make it possible, just in case you change your mind later.)
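
In case it helps to see the shape of it, here’s a rough shell sketch of the kind of build step I mean. The version numbers, variable names, and download URLs below are placeholders I’ve chosen for illustration; verify them against the Hugo and Dart Sass releases pages and your host’s docs before relying on them.

#!/usr/bin/env bash
# Sketch: fetch pinned Hugo and Dart Sass binaries during a build, then build the site.
# Versions, variable names, and paths are illustrative; assumes a Linux x64 build image.
set -euo pipefail

HUGO_VERSION="${HUGO_VERSION:-0.139.0}"           # placeholder default
DART_SASS_VERSION="${DART_SASS_VERSION:-1.80.0}"  # placeholder default

# Hugo "extended" build, pinned to the chosen version
curl -sSL "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.tar.gz" | tar -xzf - hugo

# Dart Sass binary; the archive unpacks into a dart-sass/ directory
curl -sSL "https://github.com/sass/dart-sass/releases/download/${DART_SASS_VERSION}/dart-sass-${DART_SASS_VERSION}-linux-x64.tar.gz" | tar -xzf -

# Put both on PATH so Hugo can find the sass executable, then build
export PATH="${PWD}:${PWD}/dart-sass:${PATH}"
hugo version && sass --version
hugo --gc --minify

On hosts that honor a HUGO_VERSION environment variable, a script like this can simply read it; the DART_SASS_VERSION name is just my own convention here, not something any host provides.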

With the standard method of deploying to Cloudflare Pages — namely, pushing a commit to a site’s connected Git repository — a Hugo site owner could, with relative ease:

  • Specify the Hugo version (one was included in the CFP build image, but I personally prefer to pick the version myself).
  • Use the Dart Sass binary and specify the version.

On the other hand: the Cloudflare Workers build image offers a pre-selected Hugo binary, perhaps not the latest, and doesn’t allow you to pick a version.4 Moreover, the CFW build image doesn’t offer Dart Sass at all. Of course, the latter isn’t terribly surprising since, again, Cloudflare expects most SSG users to be running JS-based SSGs, and those usually work with Sass through some interaction with the Sass package5 rather than the Dart Sass binary.

What about the competition? Here’s how the only competing hosts I’ll mention6 fare in this regard:

  • Hugo — The build images for Netlify, Render, and Vercel provide Hugo and let you specify the version. Netlify and Vercel give you two ways to specify the HUGO_VERSION environment variable: through the GUI, or in a config file — netlify.toml or vercel.json, respectively. With Render, the only way to set the Hugo version is with a shell script; otherwise, as of this writing, you get a Hugo version from multiple years ago.
  • Dart Sass — With Netlify, you can get the Dart Sass binary and specify its version through scripting in netlify.toml, but not through the Netlify GUI. As for Render and Vercel, I know a shell script suggested by Hugo’s Bjørn Erik Pedersen worked at one time, but I haven’t tried it on either host recently.

The bottom line on these binaries and the three hosts’ native deployment environments is: you can spec your chosen Hugo binary on all three hosts (although not so easily with Render), but using and spec’g the Dart Sass binary is safest with Netlify.

However, in my experience, it’s easier for a Hugo user to solve the whole problem with any of these hosts by using a separate CI/CD provider, either GitHub Actions or GitLab CI/CD. This host-agnostic approach gives you much more control over which binaries you download, which versions you get, and other factors that are important for Hugo users.7 Although explaining the process is beyond this post’s scope (if needed, refer to the Hugo “Host and deploy” docs), suffice it to say that each host I’ve discussed here allows building sites through both GitHub Actions and GitLab CI/CD.8

Note: To be fair, I remind you of my 2022 findings concerning potential issues in using GitHub Actions with a Vercel-hosted Hugo site in which Hugo’s native image-processing functionality is in use. However, I haven’t tested sufficiently to know if the problem still exists, and that was three whole years ago; so I suspect (hope?) that, since then, there have been plenty of improvements to the infrastructure that even Vercel’s free tier uses.

One heretical afterthought to consider

Before I press on to the finish, I’ll dwell briefly on what may be the elephant in this discussion’s room: the choice of SSG itself.

As noted, Cloudflare’s recent changes are potentially much more of a hassle for Hugo users than for those using JavaScript-based SSGs. But, as you probably already knew, Cloudflare isn’t alone in this respect. Indeed, most hosting platforms clearly favor the JS-based tools which have long constituted the overwhelming majority of site-building products; and this favoritism likely will only grow over time.

So, is it time for you, a Hugo user, to throw in the towel and jump ship to a different, JS-based SSG? Will that make your site more future-proof?

Well, only you can make that call. If you do switch, I can tell you from my years of experience that the Hugo-to-whatever conversion process will be anywhere from fairly easy to excruciating, depending largely on two factors: (a.) how big your site is; and (b.) how much Hugo-specific customization your site has. Mine has several hundred pages and more than a little Hugo-ish code that would be a bear to translate, so this site isn’t a likely candidate for now.

That said, my long-time readers know I have strayed from the Hugo ranch numerous times in the site’s nearly six years of existence, so I can offer a little more specific advice on the subject of possibly switching from Hugo to something else.

Of the JS-based SSGs I’ve used over the years to build this site whenever it wasn’t a Hugo project, the only SSG that’s on Cloudflare’s aforementioned list of recommended platforms is Astro; and, mind you, my time on Astro was minuscule compared to the many months I used Eleventy. (I also used the now largely moribund Gatsby, and even it gets a little love in the current Cloudflare Workers documentation — in fact, more than for Eleventy, much less Hugo.) Even when just tinkering, I haven’t used either Astro or Eleventy extensively in a couple of years; but I feel either is a solid alternative as a site-building platform to which the typical JS-favoring host is at least less averse than it is to Hugo.9

So, where?

All right, let’s get to the bottom line. After I’d given all this thought to how I could make my own Hugo site more portable and thus less vulnerable to the whims of different hosts, what did I end up doing about the site’s hosting?

In fact, I did nothing. As of this post’s initial publication, the site is still on Cloudflare Workers. It all still works, after all. But, now, I know how to make a quick exit if I do choose. It’s my hope that what I’ve shared in this post will give you similar knowledge.

But where would I go if I don’t stay with CFW? It would be between Netlify and Vercel. (While I admire Render as a company, I’m not as comfortable with configuring for it, especially where Hugo-specific things are concerned, as I am with the other two.) If I had to pick a winner, it would come down to how wedded I’d be to using external CI/CD, as I now do with the CFW site and did with its CFP predecessor. That’s because, in my testing, I found external CI/CD somewhat easier with Vercel than with Netlify, while Netlify’s native GUI provides better support for Hugo than does Vercel’s. So it really would come down to whether I’d prefer external CI/CD. If yes, it would be Vercel. If no, it would be Netlify.10


  1. Yeah, I know: CFP is, and has always been, built atop CFW anyway; but you get the idea. ↩︎

  2. On June 23, a commenter on the Cloudflare Developers Discord said, “Did a little bit of checking, looks like ssh urls in submodules are not currently supported[.]” Seeing a reference to this, someone on the unofficial Hugo Discord observed, “so if u have a private repository, the URL alone wouldn’t allow CF to read the repository.” ↩︎

  3. To quote Foghorn Leghorn: “Fortunately, I keep my feathers numbered for just such an emergency.” ↩︎

  4. Update, 2025-07-18: I later learned, via Discord, from a fellow Hugo user that you actually can select the Hugo version with the Workers build image, in the same way as you would’ve with Pages — i.e., through use of a HUGO_VERSION environment variable. It’s just not clearly documented. I don’t know whether a similar capability exists for using a DART_SASS_VERSION environment variable to get the Dart Sass binary; the HUGO_VERSION trick likely works because there already is a Hugo binary in the Workers build image, but the same doesn’t appear to be true for a Dart Sass binary. ↩︎

  5. Incidentally, another exception involves someone using Sass with the Rust-based Zola SSG. Zola uses the Rust grass crate for a “more-or-less”-ish compatibility with Dart Sass. I say “‘more-or-less’-ish” because the latest release of grass, at least as of this post’s initial appearance, is lagging quite a bit behind that of the official Dart Sass binary. Whether that matters much is up to each Sass-using Zola site owner; but, were I that user, I wouldn’t like it very much, especially given the fairly active cadence of Dart Sass updates. Also on the subject of Zola: currently, its binary isn’t in the CFW build image. ↩︎

  6. A quick review of the free tier of Digital Ocean’s Apps Platform shows that DOAP remains as unsuitable as I found it in 2023, thus deserving no real mention in any comparisons herein. ↩︎

  7. One notable example is if you like to use Git Info variables. Most hosts’ “native” methods don’t make that very easy. ↩︎

  8. Be aware that, if you do the Hugo build process on the CI/CD provider, you’ll need to experiment with the correct location of the respective config file. For example, it may need to be in your Hugo directory’s /static directory rather than the usual location (the root directory), but my own tests showed me this isn’t always true and can vary according to the specific workflow code you’re using to deploy the site from within the CI/CD provider. Again, experiment. Failure to put the file in the correct location means that, when the CI/CD provider turns the process over to Netlify, Render, or Vercel, the latter won’t “see” the config file and the build likely will error out rather than proceeding. ↩︎

  9. A few days ago, long-time Hugo user Patrick Kollitsch converted his website to Astro. Please note that he is an extremely knowledgeable coder, as one look at his site repository will make clear, so his switch isn’t necessarily a guide for all; but his site is a large one with several years’ worth of content, so I salute the effort he undertook to make the change. ↩︎

  10. Still, with a site that regularly needs a lot of changes, one would be better off using external CI/CD with Netlify to circumvent the Netlify free tier’s monthly build limits. I wrote about this very thing five years ago and the situation hasn’t changed. ↩︎

Reply via email

Mixed nuts #15

2025-06-27 02:12:00

Thoughts on site hosting, AI-related angst, mangled past participles, and gaming on Fedora.


For those who’ve never read either the previous entry in this series or any of its like-named predecessors, each “Mixed nuts” post allows me to bloviate — er, opine — regarding multiple and often unrelated subjects, rather than sticking mainly to one topic. Today’s latest in the line includes a follow-up to my recent post about this site and Cloudflare, then proceeds to what for me is an increasingly sore point where AI and text are concerned. Whether it gets better from there will be yours to decide.


It’s been a few weeks since I issued that post about how Cloudflare, having put its Pages site-hosting product into maintenance mode, is urging Pages users to switch their sites to the Cloudflare Workers platform. At the time, I noted that I’d made the transition on this simple site without too much pain. However, since then, a number of online conversations have made me feel I unnecessarily minimized the effort such changes might require. That goes double for my fellow Hugo users, since sites built on other, JavaScript-based tools have it considerably easier. The bottom line is that some should look into the free tiers of alternatives such as Netlify, Render, and Vercel. I already explained in 2023 how each such alternative has both upsides and downsides.

This is for those who insist they can easily spot AI-generated text. Many of us old farts were using bulleted lists and em dashes and en dashes back when artificial intelligence was no more than a (usually) reliable plot device for sci-fi, much less the fever dream of tech bros. So, for God’s sake, stop using those as “proofs” that some text is AI-generated. As for my own writing, I reiterate what I said over two years ago: “. . . although the stuff on this site . . . may not be any good, it always has been and will be written by a human, namely me.”

I wish I could cease noticing what seems to be the increasingly rampant mangling of past participles (e.g., “have ran” or “have went”). I see it and hear it online, multiple times, every day. What further irks me about it is that, more often than not, the people committing this linguistic butchery seem to be bright folks who should know better — especially when this happens in a scripted video or presentation, for which you’d think (hope?) that one or more people actually read through the text before its delivery. All that said, I’ve also had to accept that many “should-know-better” types, when writing online, apparently can’t be bothered with the difference between “you’re” and “your” or between “it’s” and “its,” so . . . unnggh.

The Fedora distribution of Linux may drop support for 32-bit packages next year, likely endangering the Steam-hosted gaming I’ve been enjoying on that distro for a while now. At least, this action will endanger it unless Flatpak-supplied Steam is immune to the problem, and I lack the knowledge to discern the accuracy of the various online opinions about this. (See also this GamingOnLinux link.) Of course, there are many other Linux distros, but I don’t know how soon they, too, may follow the same path. Eventually, they’ll all have to take similar actions to avoid the Year 2038 problem; but, even if I were to survive to that point, I’d be in my early eighties and, likely, well past caring. YMMV.

Reply via email

From Pages to Workers (again)

2025-05-28 05:59:00

After I learn of changes in Cloudflare’s priorities, this site’s deployment process goes backward down memory lane.


This site has lived on Cloudflare Pages (CFP) for most of the last four years, having been initially on Cloudflare Workers (CFW) as a “Workers site” after stays on several other web hosts. I’d gained the distinct impression that CFP was the path on which Cloudflare intended to stay where hosting static websites was concerned.

This morning, I learned not only that this was no longer the case but also that I’d “missed the memo” about it, and from a good while ago at that. A few hours of docs-reading and tinkering later, I had migrated the site back to running on a Cloudflare Worker. (Cloudflare doesn’t call them “Workers sites” anymore.)

A buried lede

Every morning, one of my usual practices is to look through the Hugo Discourse forum to see what’s going on in Hugo-ville. Today’s visit brought me up short with a discussion of recent Cloudflare changes and their effect on Hugo users’ hosting on it. Nearly two months earlier, Cloudflare had issued a blog post that was mostly about enhancements to CFW. I had seen the post — the Cloudflare Blog is among many I follow via RSS — but apparently hadn’t scrolled down far enough to catch what I now consider its buried lede, at least for CFP users such as I:

Now that Workers supports both serving static assets and server-side rendering, you should start with Workers. Cloudflare Pages will continue to be supported, but, going forward, all of our investment, optimizations, and feature work will be dedicated to improving Workers. We aim to make Workers the best platform for building full-stack apps, building upon your feedback of what went well with Pages and what we could improve. [Emphases are Cloudflare’s.]

In short: the CFP platform is now largely in maintenance mode, while its parent platform, CFW, is where Cloudflare will be investing its future dev efforts.

I was chagrined, but also got the message. Even though someone on the Cloudflare Discord later told me that I could probably keep things as they are for now, the same person also said that migrating the site to CFW still would be the wisest choice. As I would later mention elsewhere on Discord:

I know CF says that existing Pages projects are OK, but it hasn’t been that long since CF was urging people to transition from Workers projects to Pages projects, and now the opposite seems to be the case . . . Not crazy about having to [migrate], but would rather move with the CF tide than be on a maintenance-only platform.

From CFP back to CFW

This meant I’d have to make some changes. And, as the saying goes, there was bad news and good news.

The bad news: Hugo is not among the recommended frameworks. Indeed, all of the current list’s members are JavaScript-based, so one might pessimistically suppose Hugo will be excluded for a while. Also: while there definitely is Cloudflare documentation for migrating from CFP to CFW, following it is no walk in the park.

The good news: Hugo’s amazingly helpful Joe Mooring had created an example repository which showed how to do this, right down to a custom build script and the necessary configuration file. So I adapted those for my site’s purposes, created a new CFW project which would handle my site’s contents, and did the usual site-swapping DNS stuff to point my domain to that Worker rather than a CFP project.

One aspect that initially slowed the migration process was the site’s existing use of a Pages Function to manage my Content Security Policy and the caching of static assets. That was a problem because a Pages Function actually is a Worker, so you can’t just move it, unchanged, into another Worker and expect good results. Fortunately, Cloudflare’s wrangler utility, used for doing a ton of stuff with both CFW and CFP, can compile the Pages Function code into a single file that works within a properly configured Worker.

The only remaining tricky thing for me was that, since October, 2023, I’d been doing my Hugo builds locally and then deploying the results directly to CFP, which I’d found ’waaaaay faster than the usual method of pushing changes to a linked online repository and then waiting for a cloud infrastructure to build and deploy the site. In addition, my way had been letting me push changes to the online repo without having to rebuild online as well, which was a more comforting way to manage version control. Thus, I ended up doing even a little more local retooling but got it to work by (1.) disconnecting the online repo from the CFW project and (2.) changing my local script to deploy to the CFW project rather than, as before, the CFP project.
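
For the curious, the retooled script boils down to something like the following sketch. It assumes wrangler is installed and that the Worker’s configuration already points its static-assets directory at Hugo’s public/ output; the commands are illustrative, not a copy of my actual script.

#!/usr/bin/env bash
# Sketch of a local build-and-deploy script for a Hugo site on a Cloudflare Worker.
# Assumes wrangler is installed and the wrangler config's static-assets directory
# already points at Hugo's ./public output.
set -euo pipefail

hugo --gc --minify     # build the site locally into ./public
npx wrangler deploy    # push the Worker and its static assets to Cloudflare

# For comparison, the rough Cloudflare Pages equivalent had been:
#   npx wrangler pages deploy public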

It still ain’t broke

During all of this rigamarole today, I did give some serious thought to whether I might be better off simply heading back to one of the previous hosts I’d used, rather than hoping Cloudflare doesn’t make it even more complicated down the line to host my humble little site (for the big zero dollars a month I pay for it, of course).

In the end, I stuck with Cloudflare, simply because it quickly became clear that, annoyances notwithstanding, none of the alternatives was truly any better. Besides, I’d still have to deal with various idiosyncrasies, regardless of which host I chose. It wasn’t quite a case of “If it ain’t broke…” — since, after all, I’d started the day assuming it wasn’t “broke” as a CFP site, only to end up deciding otherwise — but it was close enough.

Reply via email

Loading print CSS only when needed

2025-05-22 04:15:00

How to help a small percentage of visitors without inconveniencing the vast majority.


Since my site is a blog (rather than, e.g., a place for obtaining things like tickets to shows), you might think no visitor would need or want to print any of its pages. However, I occasionally hear from those who do, one of whom also requested that I provide print-specific CSS to make the results look better. I did, but knew it also meant I was making my other, non-printing visitors download CSS that they neither needed nor wanted.

As of yesterday, that is no longer a problem.

I’ve noted here before that I won’t let AI write my posts but I will make use of AI when I need help with code. This post is about the latter case.

From time to time, I think about how I might better handle the site’s delivery of CSS. For example, I practice what I call “sorta scoped styling,” wherein I split the CSS into files that get loaded only on certain types of pages. However, this wouldn’t help with the print CSS. While I did mark its link as media="print" — which, among other things, makes browsers treat it as a lower-priority download — I wanted to find a way to load it conditionally, only when that small number of users actually tried to print one of the site pages. So, yesterday, I asked ChatGPT:

Is there a way, through JavaScript or other coding, to have a browser download a website’s print-specific CSS file only if the user is actually printing a page? The obvious intent is to reduce how much CSS the website must deliver, especially since a relatively small percentage of users actually print web pages anymore.

That began a “discussion” which, although the AI’s response contained some of the hallucinatory behavior for which LLMs have become infamous, successfully gave me code which met my needs.

The code uses the matchMedia() method (and, for maximum compatibility, it also acts on beforeprint events) to detect an active print request from the browser. Only when such a request occurs will the code load the print CSS; so, now, only those users who are actually printing content from the site will download the additional styling to make their printouts look more “print-y” and less “web-y,” so to speak.

Armed with this AI-created JavaScript code submission, I added it to the appropriate partial templates for my Hugo site’s purposes.1 (For those who choose to disable JavaScript, the noscript section at the end delivers the print CSS anyway, just the way everyone else formerly got it.)

{{- /* for those who've requested CSS for printing */ -}}
{{- $printCSS := resources.Get "css/print.css" -}}
{{- if hugo.IsProduction -}}
  {{- $printCSS = $printCSS | resources.Copy "css/print.min.css" | postCSS | fingerprint -}}
{{- end -}}
{{- with $printCSS -}}
  {{ $safePrintLink := $printCSS.RelPermalink | safeURL }}
  <script>
    function loadPrintStylesheet() {
      if (document.getElementById('print-css')) return; // Prevent multiple loads

      const link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '{{ $safePrintLink }}';
      link.type = 'text/css';
      link.media = 'print';
      link.id = 'print-css';
      {{- if hugo.IsProduction }}
      link.integrity='{{ $printCSS.Data.Integrity }}';
      {{- end -}}
      document.head.appendChild(link);
    }

    // Use media query listener
    const mediaQueryList = window.matchMedia('print');

    mediaQueryList.addEventListener('change', (mql) => {
      if (mql.matches) {
        loadPrintStylesheet();
      }
    });

    // Fallback for browsers that fire beforeprint/afterprint
    window.addEventListener('beforeprint', loadPrintStylesheet);
  </script>
  <noscript>
    <link rel="stylesheet" href="{{ $printCSS.RelPermalink }}" type="text/css" media="print"{{- if hugo.IsProduction }} integrity="{{ $printCSS.Data.Integrity }}"{{- end -}}>
  </noscript>
{{- end }}

This works fine on Chrome and Safari, as well as browsers based on their engines (Blink and WebKit, respectively), but I did find one oddity in Gecko-based browsers such as Firefox. While other browsers will load the print CSS when their respective Print Preview windows pop up, a Gecko-based browser will not load it if “Disable cache” is enabled — as often is the case when one is using the browser’s development tools. In that specific circumstance, you end up having to cancel out from the Print Preview window and then load it again to see the desired effect. By contrast, the other browsers will properly load the print CSS even with “Disable cache” enabled.

That said, now we’re talking about a glitch that affects an even tinier number of users than those who have any need for my site’s print CSS. Namely, they’re users who (a.) are using a Gecko-based browser and (b.) want to print from my site and (c.) are viewing my site with “Disable cache” enabled. And, even for them, closing and reloading Print Preview will fix the problem.


  1. My original also contains code that, in production, enables a serverless function to provide a nonce for Content Security Policy purposes. ↩︎

Reply via email