2025-10-28 17:59:45
I download a lot of videos, but recently I discovered that some of those videos won’t play on my iPhone. If I try to open the videos or embed them in a webpage, I get a broken video player:

These same videos play fine on my Mac – it’s just my iPhone that has issues. The answer involves the AV1 video codec, Apple’s chips, and several new web APIs I learnt along the way.
Doing some research online gave me the answer quickly: the broken videos use the AV1 codec, which isn’t supported on my iPhone. AV1 is a modern video codec that’s designed to be very efficient and royalty free, but it’s only recently gained support on Apple devices.
I have an iPhone 13 mini with an A15 processor. My iPhone doesn’t have hardware decoding support for AV1 videos – that only came with the iPhone 15 Pro and the A17 Pro. This support was included in all subsequent chips, including the M4 Pro in my Mac Mini.
It’s theoretically possible for Apple to decode AV1 in software, but they haven’t. According to Roger Pantos, who works on Apple’s media streaming team, there are no plans to provide software decoding for AV1 video. This means that if your chip doesn’t have this support, you’re out of luck.
I wanted to see if I could have worked this out myself. I couldn’t find any references or documentation for Apple’s video codec support – so failing that, is there some query or check I could run on my devices?
I’ve found a couple of APIs that tell me whether my browser can play a particular video. These APIs are used by video streaming sites to make sure they send you the correct video. For example, YouTube can work out that my iPhone doesn’t support AV1 video, so they’ll stream me a video that uses a different codec.
The APIs I found require a MIME type and bitrate for the video.
A MIME type can be something simple like video/mp4 or image/jpeg, but it can also include information about the video codec.
The codec string for AV1 is quite complicated, and includes many parts.
If you want more detail, read the AV1 codecs docs on MDN or the Codecs Parameter String section of the AV1 spec.
We can get the key information about the unplayable video using ffprobe:
ffprobe -v error -select_streams v:0 \
-show_entries stream=codec_name,profile,level,bits_per_raw_sample \
-of default=noprint_wrappers=1 "input.mp4"
# codec_name=av1
# profile=Main
# level=8
# bits_per_raw_sample=N/A
The AV1 codec template is av01.P.LLT.DD, which we construct as follows:
P = profile number, and “Main” means 0
LL = a two-digit level number, so 08
T = the tier indicator, which can be M (Main) or H (High). I think the “High” tier is for professional workflows, so let’s assume my video is Main, i.e. M
DD = the two-digit bit depth, so 08
This gives us the MIME type for the unplayable video:
video/mp4; codecs=av01.0.08M.08
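As a sanity check, here’s a little JavaScript sketch of how those parts assemble into the codec string – the function and its field names are mine, not a standard API:
// Build an AV1 codec string (av01.P.LLT.DD) from ffprobe-style values.
// Simplified sketch: it ignores the optional colour metadata that can
// follow the bit depth.
function av1CodecString({ profile, level, tier, bitDepth }) {
  const p = { Main: 0, High: 1, Professional: 2 }[profile];
  const ll = String(level).padStart(2, "0");
  const dd = String(bitDepth).padStart(2, "0");
  return `av01.${p}.${ll}${tier}.${dd}`;
}

av1CodecString({ profile: "Main", level: 8, tier: "M", bitDepth: 8 });
// "av01.0.08M.08"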
I also got the MIME type for an H.264 video which does play on my iPhone:
video/mp4; codecs=avc1.640028
By swapping out the argument to -show_entries, we can also use ffprobe to get the resolution, frame rate, and bit rate:
ffprobe -v error -select_streams v:0 \
-show_entries stream=width,height,bit_rate,r_frame_rate \
-of default=noprint_wrappers=1:nokey=0 "input.mp4"
# width=1920
# height=1080
# r_frame_rate=24000/1001
# bit_rate=1088190
Now we have this information, let’s pass it to some browser APIs.
canPlayType()
The video.canPlayType() method on HTMLMediaElement takes a MIME type, and tells you whether a browser is likely able to play that media.
Note the word “likely”: the possible responses are “probably”, “maybe”, and the empty string (which means “no”).
Here’s an example using the MIME type of my AV1 video:
const video = document.createElement("video");
video.canPlayType("video/mp4; codecs=av01.0.08M.08");
Let’s run this with a few different values, and compare the results:
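A quick way to check them all is a loop like this – a sketch, using the MIME types from above:
const video = document.createElement("video");

const mimeTypes = [
  "video/mp4; codecs=av01.0.08M.08", // AV1 video
  "video/mp4; codecs=avc1.640028",   // H.264 video
  "video/mp4",                       // generic MP4
  "video/mp4000",                    // made-up format
];

for (const mimeType of mimeTypes) {
  // canPlayType returns "probably", "maybe", or ""
  console.log(`${mimeType} => "${video.canPlayType(mimeType)}"`);
}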
| MIME type | iPhone | Mac |
|---|---|---|
| AV1 video (video/mp4; codecs=av01.0.08M.08) | "" (= “no”) | "probably" |
| H.264 video (video/mp4; codecs=avc1.640028) | "probably" | "probably" |
| Generic MP4 (video/mp4) | "maybe" | "maybe" |
| Made-up format (video/mp4000) | "" (= “no”) | "" (= “no”) |
This confirms the issue: my iPhone can’t play AV1 videos, while my Mac can.
The generic MP4 row is a clue about why this API returns a “likely” result, not something more certain: the MIME type alone doesn’t contain enough information to know whether a video will be playable.
decodingInfo()
For a more nuanced answer, we can use the decodingInfo() method in the MediaCapabilities API.
You pass detailed information about the video, including the MIME type and resolution, and it tells you whether the video can be played – and more than that, whether the video can be played in a smooth and power-efficient way.
Here’s an example of how you use it:
await navigator.mediaCapabilities.decodingInfo({
  type: "file",
  video: {
    contentType: "video/mp4; codecs=av01.0.08M.08",
    width: 1920,
    height: 1080,
    bitrate: 1088190,
    framerate: 24
  }
});
// {powerEfficient: false,
//  smooth: false,
//  supported: false,
//  supportedConfiguration: Object}
Let’s try this with two videos:
| AV1 video | |
|---|---|
| Video config | video/mp4; codecs=av01.0.08M.08, 1920×1080, 24 fps, 1088190 b/s |
| iPhone | not supported |
| Mac | supported, smooth, power efficient |

| H.264 video | |
|---|---|
| Video config | video/mp4; codecs=avc1.640028 |
| iPhone | supported, smooth, power efficient |
| Mac | supported, smooth, power efficient |
This re-confirms our theory that my iPhone’s lack of AV1 support is the issue.
It’s worth noting that this is still a heuristic, not a guarantee. I plugged some really large numbers into this API, and my iPhone claims it could play a trillion-pixel H.264 encoded video in a smooth and power efficient way. I know Apple’s hardware is good, but it’s not that good.
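For example, a query along these lines (the numbers are mine – a million by a million pixels is a trillion):
await navigator.mediaCapabilities.decodingInfo({
  type: "file",
  video: {
    contentType: "video/mp4; codecs=avc1.640028",
    width: 1000000,
    height: 1000000, // 10^12 pixels in total
    bitrate: 1088190,
    framerate: 24
  }
});
// On my iPhone: {supported: true, smooth: true, powerEfficient: true, …}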
This is only an issue because I have a single video file, it’s encoded with AV1, and I have a slightly older iPhone. Commercial streaming services like YouTube, Vimeo and TikTok don’t have this problem because they store videos with multiple codecs, and use browser APIs to determine the right version to send you.
Apple would like me to buy a new iPhone, but that’s overkill for this problem. That will happen eventually, but not today.
In the meantime, I’m going to convert any AV1 encoded videos to a codec that my iPhone can play, and change my downloader script to do the same to any future downloads. Before I understood the problem, I was playing whack-a-mole with broken videos. Now I know that AV1 encoding is the issue, I can find and fix all of these videos in one go.
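The conversion itself can be a single ffmpeg call – this is a sketch rather than my exact script, re-encoding the video as H.264 (which my iPhone can play) and copying the audio unchanged:
# Re-encode the video stream as H.264; copy the audio as-is.
# The CRF value is a typical quality setting, not a recommendation.
ffmpeg -i input.mp4 -c:v libx264 -crf 20 -c:a copy output.mp4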
2025-10-22 20:55:42
I had syntax highlighting in the very first version of this blog, and I never really thought about that decision. I was writing a programming blog, and I was including snippets of code, so obviously I should have syntax highlighting.
Over the next thirteen years, I tweaked and refined the rest of the design, but I never touched the syntax highlighting. I’ve been applying a rainbow wash of colours that somebody else chose, because I didn’t have any better ideas.
This week I read Nikita Prokopov’s article Everyone is getting syntax highlighting wrong, which advocates for a more restrained approach. Rather than giving everything a colour, he suggests colouring just a few key elements, like strings, comments, and variable definitions. I don’t know if that would work for everybody, but I like the idea, and it gave me the push to try something new.
It’s time to give code snippets the same care I’ve given the rest of this site.
I’ve stripped back the syntax highlighting to a few key rules, colouring just a handful of elements – strings, comments, and definitions.
Everything else is the default black/white.
This is similar to Nikita’s colour scheme “Alabaster”, but I chose my own colours to match my site palette. I’m also making my own choices about how to interpret these rules, because real code doesn’t always fall into neat buckets.
Here’s a snippet of Rust code with the old syntax highlighting:

Here’s the same code in my new design:

Naturally, these code blocks work in both light and dark mode.
The new design is cleaner, it fits in better with the rest of the site, and I really like it. Some of that will be novelty and the IKEA effect, but I see other benefits to this simplified palette.
I use Rouge as my syntax highlighter.
I give it a chunk of code and specify the language, and it parses the code into a sequence of tokens – like operators, variables, or constants.
Rouge returns a blob of HTML, with each token wrapped in a <span> that describes the token type.
For example, if I ask Rouge to highlight the Python snippet:
print("hello world")
it produces this HTML:
<span class="nf">print</span><span class="p">(</span><span class="sh">"</span><span class="s">hello world</span><span class="sh">"</span><span class="p">)</span>
The first token is print, which is a function name (Name.Function, or class="nf").
The ( and ) are punctuation ("p") and the string is split into quotation marks (sh for String.Heredoc) and the text (s for String).
You can see all the tokens in this short example, and all the possible token types are listed in the Pygments docs.
(Pygments is another syntax highlighting library, but Rouge uses the same classification system.)
Each token has a different class in the HTML, so I can style tokens with CSS.
For example, if I want all function names to be blue, I can target the "nf" class:
.nf { color: blue; }
I wrap the entire block in a <pre> tag with a language class, like <pre class="language-go">, so I can also apply per-language styles if I want.
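For example, a hypothetical per-language rule (the selector pattern is real; the styling choice is just for illustration):
/* Only colour function names inside Go snippets */
pre.language-go .nf { color: blue; }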
I want to highlight variables when they’re defined, not every time they’re used. This gives you an overview of the structure, without drowning the code in blue.
This is tricky with Rouge, because it has no semantic understanding of the code – it only knows what each token is, not how it’s being used.
In the example above, it knows that print is the name of a function, but it doesn’t know if the function is being called or being defined.
I could use something smarter, like the language servers used by modern IDEs, but that’s a lot of extra complexity. It might not even work – many of my code snippets are fragments, not complete programs, and wouldn’t parse cleanly.
Instead, I’m manually annotating my code snippets to mark definitions. I wrote a Jekyll plugin that reads those annotations, and modifies the HTML from Rouge to add the necessary highlights. It’s extra work, but I already spend a lot of time trying to pick the right snippet of code to make my point. These annotations are quick and easy, and it’s worth it for a clearer result.
Older posts don’t have these annotations, so they won’t get the full benefit of the new colour scheme, but I’m gradually updating them.
Now that I’m not using somebody else’s rules, I’m paying more attention to how my code looks. I’m thinking more carefully about how my rules should apply. I’m noticing when colours feel confusing or unclear, and finding small ways to tweak them to improve clarity.
For example, “variable definitions in blue” sounds pretty clear cut, but does that include imports? Function parameters? What about HTML or CSS, where variables aren’t really a thing? What parts of the code do I think are important and worth highlighting?
I could have asked these questions at any time, but changing my syntax highlighting gave me the push to actually do it.
In my first programming job, I worked in a codebase with extensive comments. They were a good starting point in unfamiliar code, with lots of context, explanation, and narrative. The company’s default IDE showed comments in bright blue, and looking back, I’m sure that colour choice encouraged the culture of detailed documentation.
I realise now how unusual that was, but at the time it was my only experience of professional software development. I carried that habit of writing comments into subsequent jobs, but I’d forgotten the colour scheme. Now, I’m finally reviving that good idea.
Comments are bright red in my new theme – not the subdued grey used by so many other themes. The pop of colour makes comments easier to spot and more inviting to read. I’ve also ported this style to my IDE, and now when I write comments, I don’t feel like my words are disappearing.
I was inspired to make this change by reading Nikita Prokopov’s article, which argues for a minimal colour scheme – but not everyone agrees.
Syntax highlighting is mostly a matter of taste. Some programmers like a clean, light theme, others prefer high-contrast dark themes with bold colours. There’s lots of research into how we read code, but as far as I know, there’s no strong evidence or consensus in favour of any particular approach.
Whatever your taste, I think code is easier to read in a colour scheme you’re already familiar with. You know what the colours mean, so your brain doesn’t have to learn anything. A new scheme might grow on you over time, but at first, it’s more likely to be distracting than helpful.
That’s a problem for a blog like this. Most readers find a single post through search, read something useful, and never return. They’re not reading enough code here to learn my colour scheme, and unfamiliar colours are just noise.
With that in mind, I think a minimal palette works better. My posts only contain short snippets of code – enough to make a point, but not full files or complex programs. When you’re only reading a small amount of code, it’s more useful to highlight key elements than wash everything in colour.
I’ve wanted to improve my code snippets for a long time, but it always felt overwhelming. I’m used to colour themes which use a large palette, and I don’t have the design skills to choose that many colours. It wasn’t until I thought about using a smaller palette that this felt doable.
I picked my new colours just a few days ago, and already the old design feels stale and tired. I’d planned to spend more time tinkering before I make it live, but it’s such an improvement I want to use it immediately.
I love having a little corner of the web which is my own personal sandbox. Thirteen years in, and I’m still finding new ways to make myself smile.
2025-10-07 14:19:18
I download a lot of videos from YouTube, and yt-dlp is my tool of choice. Sometimes I download videos as a one-off, but more often I’m downloading videos in a project – my bookmarks, my collection of TV clips, or my social media scrapbook.
I’ve noticed myself writing similar logic in each project – finding the downloaded files, converting them to MP4, getting the channel information, and so on. When you write the same thing multiple times, it’s a sign you should extract it into a shared tool – so that’s what I’ve done.
yt-dlp_alexwlchan is a script that calls yt-dlp with my preferred options, in particular:
All this is presented in a CLI command which prints a JSON object that other projects can parse. Here’s an example:
$ yt-dlp_alexwlchan.py "https://www.youtube.com/watch?v=TUQaGhPdlxs"
{
  "id": "TUQaGhPdlxs",
  "url": "https://www.youtube.com/watch?v=TUQaGhPdlxs",
  "title": "\"new york city, manhattan, people\" - Free Public Domain Video",
  "description": "All videos uploaded to this channel are in the Public Domain: Free for use by anyone for any purpose without restriction. #PublicDomain",
  "date_uploaded": "2022-03-25T01:10:38Z",
  "video_path": "\uff02new york city, manhattan, people\uff02 - Free Public Domain Video [TUQaGhPdlxs].mp4",
  "thumbnail_path": "\uff02new york city, manhattan, people\uff02 - Free Public Domain Video [TUQaGhPdlxs].jpg",
  "subtitle_path": null,
  "channel": {
    "id": "UCDeqps8f3hoHm6DHJoseDlg",
    "name": "Public Domain Archive",
    "url": "https://www.youtube.com/channel/UCDeqps8f3hoHm6DHJoseDlg",
    "avatar_url": "https://yt3.googleusercontent.com/ytc/AIdro_kbeCfc5KrnLmdASZQ9u649IxrxEUXsUaxdSUR_jA_4SZQ=s0"
  },
  "site": "youtube"
}
Rather than using the yt-dlp CLI, I’m using the Python interface.
I can import the YoutubeDL class and pass it some options, then pull out the important fields from the response.
The library is very flexible, and the options are well-documented.
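Here’s a minimal sketch of the approach – the options shown are illustrative, not my full configuration:
from yt_dlp import YoutubeDL

# Illustrative options; my real script sets more than this.
options = {
    "format": "mp4",         # assumption: ask for an MP4 container
    "writethumbnail": True,  # also save the video thumbnail
}

with YoutubeDL(options) as ydl:
    info = ydl.extract_info(
        "https://www.youtube.com/watch?v=TUQaGhPdlxs", download=True
    )

# Pull out a few of the fields I care about
print(info["id"], info["title"], info["upload_date"])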
This is similar to my create_thumbnail tool.
I only have to define my preferred behaviour once, then other code can call it as an external script.
I have ideas for changes I might make in the future, like tidying up filenames or supporting more sites, but I’m pretty happy with this first pass. All the code is in my yt-dlp_alexwlchan GitHub repo.
This script is based on my preferences, so you probably don’t want to use it directly – but if you use yt-dlp a lot, it could be a helpful starting point for writing your own script.
Even if you don’t use yt-dlp, the idea still applies: when you find yourself copy-pasting configuration and options, turn it into a standalone tool. It keeps your projects cleaner and more consistent, and your future self will thank you for it.
2025-09-19 05:14:29
Today a colleague asked for a way to open all the files that have changed in a particular Git branch. They were reviewing a large pull request, and sometimes it’s easier to review files in your local editor than in GitHub’s code review interface. You can see the whole file, run tests or local builds, and get more context than the GitHub diffs.
This is the snippet I suggested:
git diff --name-only "$BRANCH_NAME" $(git merge-base origin/main "$BRANCH_NAME") \
| xargs open -a "Visual Studio Code"
It uses a couple of nifty Git features, so let’s break it down.
There are three parts to this command:
1. Work out where the dev branch diverges from main.
We can use git-merge-base:
$ git merge-base origin/main "$BRANCH_NAME"
9ac371754d220fd4f8340dc0398d5448332676c3
This command gives us the common ancestor of our main branch and our dev branch – this is the tip of main when the developer created their branch.
In a small codebase, main might not have changed since the dev branch was created. But in a large codebase where lots of people are making changes, the main branch might have moved on since the dev branch was created.
Here’s a rough sketch:
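            o---o---o        <- dev branch ($BRANCH_NAME)
           /
  o---o---o---o---o---o      <- origin/main
          ^
          git merge-base origin/main $BRANCH_NAME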
This tells us which commits we’re reviewing – what are the changes in this branch?
2. Get a list of files which have changed in the dev branch.
We can use git-diff to see the difference between two commits.
If we add the --name-only flag, it only prints a list of filenames with changes, not the full diffs.
$ git diff --name-only "$BRANCH_NAME" $(git merge-base …)
assets/2025/exif_orientation.py
src/_drafts/create-thumbnail-is-exif-aware.md
src/_images/2025/exif_orientation.svg
Because we're diffing between the tip of our dev branch, and the point where our dev branch diverged from main, this prints a list of files that have changed in the dev branch.
(I originally suggested using git diff --name-only "$BRANCH_NAME" origin/main, but that's wrong.
That prints all the files that differ between the two branches, which includes changes merged to main after the dev branch was created.)
3. Open the files in our text editor.
I suggested piping to xargs and open, but there are many ways to do this:
$ git diff … | xargs open -a "Visual Studio Code"
The xargs command is super useful for applying the same command to lots of inputs – in this case, opening a bunch of files in VS Code.
You feed it a whitespace-delimited list, it splits the list into individual arguments, and passes them to the command.
Here, that has the same effect as running:
open -a "Visual Studio Code" "assets/2025/exif_orientation.py"
open -a "Visual Studio Code" "src/_drafts/create-thumbnail-is-exif-aware.md"
open -a "Visual Studio Code" "src/_images/2025/exif_orientation.svg"
The open command opens files, and the -a flag tells it which application to use.
We mostly use VS Code at work, but you could pass any text editor here.
Reading the manpage for open, I'm reminded that you can open multiple files at once, so I could have done this without using xargs.
I instinctively reached for xargs because I’m very familiar with it, and it’s a reliable way to take a command that takes a single input, and run it with many inputs.
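If you do this often, you could wrap the whole thing in a small shell function – a sketch, with a name I made up:
# Open every file changed on a branch, relative to where it left main.
review_branch() {
  local branch="${1:?usage: review_branch BRANCH_NAME}"
  git diff --name-only "$branch" "$(git merge-base origin/main "$branch")" \
    | xargs open -a "Visual Studio Code"
}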
2025-09-15 05:44:01
One of my favourite features added to web browsers in the last few years is text fragments.
Text fragments allow you to link directly to specific text on a web page, and some browsers will highlight the linked text – for example, by scrolling to it, or adding a coloured highlight. This is useful if I’m linking to a long page that doesn’t have linkable headings – I want it to be easy for readers to find the part of the page I was looking for.
Here’s an example of a URL with a text fragment:
https://example.com/#:~:text=illustrative%20examples
But I don’t find the syntax especially intuitive – I can never remember exactly what mix of colons and tildes to add to a URL.
To help me out, I’ve written a small bookmarklet to generate these URLs:
To install the bookmarklet, drag it to your bookmarks bar.
When I’m looking at a page and want to create a text fragment link, I select the text and click the bookmarklet. It works out the correct URL and shows it in a popup, ready to copy and paste. You can try it now – select some text on this page, then click the button to see the text fragment URL.
It’s a small tool, but it’s made my link sharing much easier.
Update, 16 September 2025: Smoljaguar on Mastodon pointed out that Firefox, Chrome, and Safari all have a “Copy Link with Highlight” menu item, which does something very similar. The reason I don’t use these is that I didn’t know they existed!
I use Safari as my main browser, and this item is only available in the right-click menu. One reason I like bookmarklets is that they become items in the Bookmarks menu, and then it’s easy for me to assign keyboard shortcuts.
This is the JavaScript that gets triggered when you run the bookmarklet:
// The code runs inside an IIFE, so the early return is valid.
(function () {
  const selectedText = window.getSelection().toString().trim();
  if (!selectedText) {
    alert("You need to select some text!");
    return;
  }

  const url = new URL(window.location);
  url.hash = `:~:text=${encodeURIComponent(selectedText)}`;
  alert(url.toString());
})();
2025-09-09 06:42:48
Resizing an image is one of those programming tasks that seems simple, but has some rough edges. One common mistake is forgetting to handle the EXIF orientation, which can make resized images look very different from the original.
Last year I wrote a create_thumbnail tool to resize images, and today I released a small update.
Now it’s aware of EXIF orientation, and it no longer mangles these images.
This is possible thanks to a new version of the Rust image crate, which just improved its EXIF support.
Images can specify an orientation in their EXIF metadata, which can describe both rotation and reflection. This metadata is usually added by cameras and phones, which can detect how you’re holding them, and tell viewing software how to display the picture later.
For example, if you take a photo while holding your camera on its side, the camera can record that the image should be rotated 90 degrees when viewed. If you use a front-facing selfie camera, the camera could note that the picture needs to be mirrored.
There are eight different values for EXIF orientation – rotating in increments of 90°, and mirrored or not. The default value is “1” (display as-is), and here are the other seven values:
You can see the EXIF orientation value with programs like Phil Harvey’s exiftool, which helpfully converts the numeric value into a human-readable description:
$ # exiftool's default output is human-readable
$ exiftool -orientation my_picture.jpg
Orientation : Rotate 270 CW
$ # or we can get the raw numeric value
$ exiftool -n -orientation my_picture.jpg
Orientation : 8
I use the image crate to resize images in Rust.
My old code for resizing images would open the image, resize it, then save it back to disk. Here’s a short example:
use image::imageops::FilterType;
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    // Old method: doesn't know about EXIF orientation
    let img = image::open("original.jpg")?;
    img.resize(180, 120, FilterType::Lanczos3)
        .save("thumbnail.jpg")?;
    Ok(())
}
The thumbnail will keep the resized pixels in the same order as the original image, but the thumbnail doesn’t have the EXIF orientation metadata. This means that if the original image had an EXIF orientation, the thumbnail could look different, because it’s no longer being rotated/reflected properly.
When I wrote create_thumbnail, the image crate didn’t know anything about EXIF orientation – but last week’s v0.25.8 release added several functions related to EXIF orientation.
In particular, I can now read the orientation and apply it to an image:
use image::imageops::FilterType;
use image::{DynamicImage, ImageDecoder, ImageReader};
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    // New methods in image v0.25.8 know about EXIF orientation,
    // and allow us to apply it to the image before resizing.
    let mut decoder = ImageReader::open("original.jpg")?.into_decoder()?;
    let orientation = decoder.orientation()?;
    let mut img = DynamicImage::from_decoder(decoder)?;
    img.apply_orientation(orientation);

    img.resize(180, 120, FilterType::Lanczos3)
        .save("thumbnail.jpg")?;
    Ok(())
}
The thumbnail still doesn’t have any EXIF orientation data, but the pixels have been rearranged so the resized image looks similar to the original. That’s what I want.
Here’s a visual comparison of the three images. Notice how the thumbnail from the old code looks upside down:
[image comparison: the original image · the thumbnail from the old code (upside down) · the thumbnail from the new code]
This test image comes from Dave Perrett’s exif-orientation-examples repo, which has a collection of images that were very helpful for testing this code.
This is a small change, but it solves an annoyance I’ve hit in every project that deals with images. I’d written this fix before, but images with an EXIF orientation are rare enough that I always forgot about them when starting a new project – so I kept solving the same problem again and again.
By handling EXIF orientation in create_thumbnail, I won’t have to think about this again.
That’s the beauty of a shared tool – I fix it once, and then it’s fixed for all my current and future projects.