Simon Willison

Creator of Datasette and Lanyrd, co-creator of the Django Web Framework.

RSS preview of the blog of Simon Willison

Was Daft Punk Having a Laugh When They Chose the Tempo of Harder, Better, Faster, Stronger?

2026-01-03 13:57:07


Depending on how you measure it, the tempo of Harder, Better, Faster, Stronger appears to be 123.45 beats per minute.

This is one of those things that's so cool I'm just going to accept it as true.

(I only today learned from the Hacker News comments that Veridis Quo is "Very Disco", and if you flip the order of those words you get Discovery, the name of the album.)

Via Kottke

Tags: music

Quoting Will Larson

2026-01-03 03:57:37

My experience is that real AI adoption on real problems is a complex blend of: domain context on the problem, domain experience with AI tooling, and old-fashioned IT issues. I’m deeply skeptical of any initiative for internal AI adoption that doesn’t anchor on all three of those. This is an advantage of earlier stage companies, because you can often find aspects of all three of those in a single person, or at least across two people. In larger companies, you need three different organizations doing this work together, this is just objectively hard

Will Larson, Facilitating AI adoption at Imprint

Tags: leadership, llms, ai, will-larson

The most popular blogs of Hacker News in 2025

2026-01-03 03:10:43


Michael Lynch maintains HN Popularity Contest, a site that tracks personal blogs on Hacker News and scores them based on how well they perform on that platform.

The engine behind the project is the domain-meta.csv file on GitHub, a hand-curated list of known personal blogs with author, bio, and tag metadata, which Michael uses to separate out personal blog posts from other types of content.

I came top of the rankings in 2023, 2024 and 2025 but I'm listed in third place for all time behind Paul Graham and Brian Krebs.

I dug around in the browser inspector and was delighted to find that the data powering the site is served with open CORS headers, which means you can easily explore it with external services like Datasette Lite.

Here's a convoluted window function query Claude Opus 4.5 wrote for me which, for a given domain, shows where that domain ranked for each year since it first appeared in the dataset:

with yearly_scores as (
  select 
    domain,
    strftime('%Y', date) as year,
    sum(score) as total_score,
    count(distinct date) as days_mentioned
  from "hn-data"
  group by domain, strftime('%Y', date)
),
ranked as (
  select 
    domain,
    year,
    total_score,
    days_mentioned,
    rank() over (partition by year order by total_score desc) as rank
  from yearly_scores
)
select 
  r.year,
  r.total_score,
  r.rank,
  r.days_mentioned
from ranked r
where r.domain = :domain
  and r.year >= (
    select min(strftime('%Y', date)) 
    from "hn-data"
    where domain = :domain
  )
order by r.year desc

(I just noticed that the final and r.year >= ( clause isn't actually needed here.)

My simonwillison.net results show me ranked 3rd in 2022, 30th in 2021 and 85th back in 2007 - though I expect there are many personal blogs from that year which haven't yet been manually added to Michael's list.

Also useful is that every domain gets its own CORS-enabled CSV file with details of the actual Hacker News submissions from that domain, e.g. https://hn-popularity.cdn.refactoringenglish.com/domains/simonwillison.net.csv. Here's that one in Datasette Lite.
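Because those CSVs are served with open CORS headers, you can pull them from any page. A minimal sketch (the helper name is mine; the URL pattern is taken from the simonwillison.net example above):

```javascript
// Build the per-domain CSV URL used by HN Popularity Contest
// (pattern inferred from the simonwillison.net example).
function domainCsvUrl(domain) {
  return `https://hn-popularity.cdn.refactoringenglish.com/domains/${domain}.csv`;
}

// The open CORS headers mean this fetch works from any origin,
// e.g. straight from the browser console.
async function fetchDomainCsv(domain) {
  const res = await fetch(domainCsvUrl(domain));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.text();
}
```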

Via Hacker News

Tags: hacker-news, sql, sqlite, datasette, datasette-lite, cors

December 2025 sponsors-only newsletter

2026-01-02 12:33:47

I sent the December edition of my sponsors-only monthly newsletter. If you are a sponsor (or if you start a sponsorship now) you can access a copy here. In the newsletter this month:

  • An in-depth review of LLMs in 2025
  • My coding agent projects in December
  • New models for December 2025
  • Skills are an open standard now
  • Claude's "Soul Document"
  • Tools I'm using at the moment

Here's a copy of the November newsletter as a preview of what you'll get. Pay $10/month to stay a month ahead of the free copy!

Tags: newsletter

Quoting Ben Werdmuller

2026-01-02 08:48:16

[Claude Code] has the potential to transform all of tech. I also think we’re going to see a real split in the tech industry (and everywhere code is written) between people who are outcome-driven and are excited to get to the part where they can test their work with users faster, and people who are process-driven and get their meaning from the engineering itself and are upset about having that taken away.

Ben Werdmuller

Tags: coding-agents, ai-assisted-programming, claude-code, generative-ai, ai, llms

Introducing gisthost.github.io

2026-01-02 06:12:20

I am a huge fan of gistpreview.github.io, the site by Leon Huang that lets you append ?GIST_id to see a browser-rendered version of an HTML page that you have saved to a Gist. The last commit was ten years ago and I needed a couple of small changes so I've forked it and deployed an updated version at gisthost.github.io.

Some background on gistpreview

The genius thing about gistpreview.github.io is that it's a core piece of GitHub infrastructure, hosted and cost-covered entirely by GitHub, that wasn't built with any involvement from GitHub at all.

To understand how it works we need to first talk about Gists.

Any file hosted in a GitHub Gist can be accessed via a direct URL that looks like this:

https://gist.githubusercontent.com/simonw/d168778e8e62f65886000f3f314d63e3/raw/79e58f90821aeb8b538116066311e7ca30c870c9/index.html

That URL is served with a few key HTTP headers:

Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff

These ensure that every file is treated by browsers as plain text, so HTML files will not be rendered even by older browsers that attempt to guess the content type based on the content.

Via: 1.1 varnish
Cache-Control: max-age=300
X-Served-By: cache-sjc1000085-SJC

These confirm that the file is served via GitHub's caching CDN, which means I don't feel guilty about linking to them for potentially high traffic scenarios.

Access-Control-Allow-Origin: *

This is my favorite HTTP header! It means I can hit these files with a fetch() call from any domain on the internet, which is fantastic for building HTML tools that do useful things with content hosted in a Gist.
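As a sketch of what that enables (rawGistUrl is a hypothetical helper of mine, just string assembly following the URL format shown above), any page on the internet can read a Gist file as text:

```javascript
// The raw Gist URL format shown above: user, Gist ID, revision SHA, filename.
// Hypothetical helper - not part of gistpreview itself.
function rawGistUrl(user, gistId, revision, file) {
  return `https://gist.githubusercontent.com/${user}/${gistId}/raw/${revision}/${file}`;
}

// Access-Control-Allow-Origin: * means this fetch succeeds from any
// origin - though thanks to text/plain + nosniff the body is always
// treated as text, never a rendered document.
async function readGistFile(user, gistId, revision, file) {
  const res = await fetch(rawGistUrl(user, gistId, revision, file));
  return res.text();
}
```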

The one big catch is that Content-Type header. It means you can't use a Gist to serve HTML files that people can view.

That's where gistpreview comes in. The gistpreview.github.io site belongs to the dedicated gistpreview GitHub organization, and is served out of the github.com/gistpreview/gistpreview.github.io repository by GitHub Pages.

It's not much code. The key functionality is this snippet of JavaScript from main.js:

fetch('https://api.github.com/gists/' + gistId)
.then(function (res) {
  return res.json().then(function (body) {
    if (res.status === 200) {
      return body;
    }
    console.log(res, body); // debug
    throw new Error('Gist <strong>' + gistId + '</strong>, ' + body.message.replace(/\(.*\)/, ''));
  });
})
.then(function (info) {
  if (fileName === '') {
    for (var file in info.files) {
      // index.html or the first file
      if (fileName === '' || file === 'index.html') {
        fileName = file;
      }
    }
  }
  if (info.files.hasOwnProperty(fileName) === false) {
    throw new Error('File <strong>' + fileName + '</strong> is not exist');
  }
  var content = info.files[fileName].content;
  document.write(content);
})

This chain of promises fetches the Gist content from the GitHub API, finds the section of that JSON corresponding to the requested file name and then outputs it to the page like this:

document.write(content);

This is smart. Injecting the content using document.body.innerHTML = content would fail to execute inline scripts. Using document.write() causes the browser to treat the HTML as if it was directly part of the parent page.

That's pretty much the whole trick! Read the Gist ID from the query string, fetch the content via the JSON API and document.write() it into the page.
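The file-picking part of that snippet boils down to a few lines. Here it is as a standalone sketch (pickFile is my name for it, not one that appears in main.js):

```javascript
// Mirror of the loop in main.js: use the requested filename if one was
// given, otherwise prefer index.html, otherwise fall back to the first
// file in the Gist.
function pickFile(files, requested) {
  if (requested && requested !== '') return requested;
  const names = Object.keys(files);
  return names.includes('index.html') ? 'index.html' : names[0];
}
```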

Here's a demo:

https://gistpreview.github.io/?d168778e8e62f65886000f3f314d63e3

Fixes for gisthost.github.io

I forked gistpreview to add two new features:

  1. A workaround for Substack mangling the URLs
  2. The ability to serve larger files that get truncated in the JSON API

I also removed some dependencies (jQuery and Bootstrap and an old fetch() polyfill) and inlined the JavaScript into a single index.html file.

The Substack issue was small but frustrating. If you email out a link to a gistpreview page via Substack it modifies the URL to look like this:

https://gistpreview.github.io/?f40971b693024fbe984a68b73cc283d2=&utm_source=substack&utm_medium=email

This breaks gistpreview because it treats f40971b693024fbe984a68b73cc283d2=&utm_source... as the Gist ID.

The fix is to read everything up to that equals sign. I submitted a PR for that back in November.
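The fix can be sketched as a tiny parser (the function name is mine, not necessarily what the PR uses):

```javascript
// Take everything in the query string up to the first '=' or '&', so a
// Substack-mangled '?<id>=&utm_source=...' still yields the bare Gist ID.
function gistIdFromQuery(queryString) {
  const raw = queryString.replace(/^\?/, '');
  return raw.split(/[=&]/)[0];
}
```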

The second issue around truncated files was reported against my claude-code-transcripts project a few days ago.

That project provides a CLI tool for exporting HTML rendered versions of Claude Code sessions. It includes a --gist option which uses the gh CLI tool to publish the resulting HTML to a Gist and returns a gistpreview URL that the user can share.

These exports can get pretty big, and some of the resulting HTML was past the size limit of what comes back from the Gist API.
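The Gist API marks oversized files with a truncated flag and supplies a raw_url pointing at the full content, so the workaround is a second fetch. A sketch of that fallback (the actual gisthost code may differ):

```javascript
// If the Gist API truncated this file, fetch the full content from its
// raw_url (which is also served with open CORS headers); otherwise the
// inline content field is already complete.
async function fullContent(file) {
  if (!file.truncated) return file.content;
  const res = await fetch(file.raw_url);
  return res.text();
}
```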

As of claude-code-transcripts 0.5 the --gist option now publishes to gisthost.github.io instead, fixing both bugs.

Here's the Claude Code transcript that refactored Gist Host to remove those dependencies, which I published to Gist Host using the following command:

uvx claude-code-transcripts web --gist

Tags: github, http, javascript, projects, ai-assisted-programming, cors