2026-03-29 08:00:00
I enjoy installing and playing around with old operating systems. Granted, I don't do much with them once I've got them installed, but I like having a look around, checking out the integrated software and just seeing what it feels like to use them.
The other day I wondered if I could run the first versions of Apple's Mac OS X (back then for Power PC) in a virtual machine or an emulator. And it turned out that it's fairly easy to do with QEMU.
QEMU is a virtualisation/emulation powerhouse which can emulate a ton of different architectures. It can be obtained here for the operating system of your choice. I'm on Linux, so everything that follows is geared towards setup on Linux, but I'm sure it can be easily adapted to other OSs.
Disk images for the old OS X versions are also needed, I got them from here.
I installed the available versions 10.0 through 10.3. The process was very similar each time, so I'm just going to describe it once here.
QEMU has no GUI, so it must be configured via the command line. OS X needs QEMU's 32-bit Power PC emulator, which is started with qemu-system-ppc.
First I needed a harddrive image.
qemu-img create -f qcow2 cheetah.qcow2 10G
This creates a harddrive image with a maximum size of 10GB called 'cheetah.qcow2'. It doesn't use up all 10GB to begin with; it grows with the amount of space that's used up inside, so it's fine to create a big one.
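This thin-provisioning behaviour is easy to demonstrate with a generic sparse file (qcow2 images grow the same way; the file name here is made up for illustration):

```shell
# A sparse file claims a large apparent size but occupies almost no
# space on disk until data is actually written to it.
truncate -s 10G sparse-demo.img
APPARENT=$(stat -c %s sparse-demo.img)      # size the file claims to have
ACTUAL=$(du -B1 sparse-demo.img | cut -f1)  # bytes actually allocated on disk
echo "apparent: $APPARENT bytes, actual: $ACTUAL bytes"
rm sparse-demo.img
```

For the real image, `qemu-img info cheetah.qcow2` shows both the virtual size and the disk space currently in use.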
After some googling and trial and error I found a startup command for QEMU which allows me to boot an emulator with a working configuration. Here's the startup script:
#!/bin/bash
qemu-system-ppc -L pc-bios \
-name "Mac OS X Cheetah" \
-cpu G4 \
-smp 1,cores=1 \
-boot d \
-M mac99 \
-display gtk,gl=on \
-m 2048 \
-netdev user,id=mynet0 -device sungem,netdev=mynet0 \
-device ide-hd,bus=ide.1,drive=HardDrives \
-drive if=none,format=qcow2,media=disk,id=HardDrives,file=cheetah.qcow2,discard=unmap,detect-zeroes=unmap \
-cdrom "Mac OS X 10.0 Cheetah.iso"
Running this will create a Power PC machine with the correct BIOS, an emulated G4 CPU with one core, 2GB of memory, and with the qcow2 harddisk image and the installation ISO attached. If you're having graphical issues, you might need to use some other display toolkit under the -display option.
Make note of the line '-boot d': this selects which device the machine boots from. At first we need to boot from the CD-ROM image, which is drive 'd'; after the installation we need to switch this to '-boot c' to boot from the harddisk image instead.
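Instead of editing the script after the installation, the boot device could be made an argument. A minimal sketch (the wrapper and its "install" mode are my own idea, not part of the original script):

```shell
#!/bin/bash
# Hypothetical wrapper: run with "install" to boot from the CD image,
# run without arguments to boot from the hard disk image.
MODE="${1:-run}"
if [ "$MODE" = "install" ]; then
    BOOT=d   # d = first CD-ROM drive
else
    BOOT=c   # c = first hard disk
fi
echo "Selected boot device: $BOOT"
# qemu-system-ppc ... -boot "$BOOT" ...  (rest of the options as in the script above)
```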
I used this script for all four versions of OS X, obviously with a different harddisk image for each and the correct ISO file attached.
The emulator can then be started by running the script and it should boot right from the installation CD into the OS X installer. The installation is then mostly self explanatory and pretty much the same for every version. If you've installed MacOS before, you know what to expect.
A few notes:
So, after an hour or two of installing these systems, which thankfully was very straightforward, I ended up with this:
Four freshly installed copies of the first four versions of OS X to explore.
It's weird, because I never used these systems back when they were new, and yet I feel somehow nostalgic for them. I don't know why, but there's just something about the look of them. The colour palette, the skeuomorphism, the fonts and the overall design are just beautiful, and I wouldn't mind using an OS with this kind of look and feel today.
How many real Macs would you need to be able to run every modern MacOS version (everything 10.0 and after)? Let's see.
Four machines for 25 years worth of operating systems, that's pretty good.
Too many computers for you? You can also just get an old HP desktop and Hackintosh the s*** out of it.
Just a few unsorted links I found and bookmarked related to this project.
2026-03-27 08:00:00

Made it to number 100! It's crazy to see that I've written 100 of these posts, almost one every week (I skipped one week in October 2024 and one in December, same year), for nearly two years. I've done some work on the site recently and you can now find a list of all previous linkdumps here, and everything from all linkdumps on one page over here. If you remember that you found something through one of my posts, but you don't remember which one it was in, you can just search on that page. I frequently grep through my old linkdumps, and this was also my original intention behind writing these posts: to get better at bookmarking interesting things, because I always sucked at it and things got lost.

But now that online search is enshittified to the point of being nearly useless, having an archive of links you found interesting in the past and sharing them is as important as it used to be in the 90s before Google came along. Things seem to be coming full circle, and the worse Google gets (and it will get worse) the more important human curation becomes again.
Articles
Software/Services
Hardware Projects
Videos
Around the Small Web
2026-03-23 08:00:00
Music streaming services often advertise their product by highlighting that they have a (more expensive of course) option for uncompressed or Studio Quality audio, because it supposedly sounds much better than compressed MP3 files. Uncompressed just sounds better, it's clearer, has more dynamic range, more depth, better high end and so on, is how the story goes. Frankly, I think this is mostly nonsense. MP3s at high bitrates can be considered transparent (more on this below), which means they sound indistinguishable from the original uncompressed sound source. At lower bitrates, sure, they sound worse. A 128kbit/s stereo MP3 file that you downloaded from Napster in 1999 will definitely sound worse than the uncompressed original recording that it was created from. But modern MP3 encoders are much better than those from the past, and at a bitrate of 320kbit/s they deliver audio quality that in my opinion very few people will be able to tell apart from the uncompressed original.
Any difference between compressed and uncompressed on streaming is more likely the result of the streaming service either not processing the MP3s with the highest possible bitrate despite saying so, or doing some other shenanigans to make the uncompressed audio sound better than the MP3s. They might simply make the uncompressed versions a little bit louder than the compressed ones. Our ears equate louder with better (this is why the music in clubs and at rock concerts is so insanely loud), and so if I play you the exact same recording twice, but once at a slightly higher volume, you will perceive the louder one as sounding better, even though they are completely identical. And of course the placebo effect is strong; if I pay more for uncompressed audio with higher fidelity and more bits and more kHz and whatnot, certainly it has to sound better! That alone might be enough to colour my opinion.
But - maybe I'm wrong here? Maybe 320kbit/s MP3s really do sound worse than an uncompressed audio file?
Luckily, there is an easy way to figure it out. All I need is some uncompressed audio, then compress it to MP3 and then compare the two and see which one sounds better. So that's what I did, and because I like talking about tech stuff, I thought I'd put it online here for you to test, too.
I don't want to dive too deep into the theory here because I could write a whole series about this topic, but I thought I'd at least quickly mention what compressed and uncompressed audio even means. Feel free to skip this if you're not interested in the technical details.
An uncompressed or lossless audio file means that the waveform of the original recording is digitised with an analogue-to-digital converter and stored as a series of digital values. Very roughly it works like this: Audio is recorded with a microphone, which turns sound waves (which are changes in air pressure) into changes in electric voltage in a wire. This is called an analogue audio signal. Converting it to digital means looking at this signal at regular intervals and writing down which value the voltage has at that moment. That's it. For CD-quality audio, this looking at the signal (called sampling) is performed 44,100 times per second, and the voltage values are written as 16-bit numbers, which allows us to differentiate 2^16 = 65536 individual voltage values. This is fine enough granularity to cover the entire range of frequencies that our ears can perceive (20Hz - 20,000Hz) with very high fidelity.
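The numbers are quick to check in the shell (and, per the sampling theorem, a given sample rate can capture frequencies up to half that rate):

```shell
echo $((2 ** 16))     # 65536 distinct values for a 16-bit sample
echo $((44100 / 2))   # 22050 Hz: the highest frequency a 44100 Hz sample rate
                      # can capture, comfortably above our 20000 Hz hearing limit
```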
Lossy audio compression means that this digital representation of the original audio recording is analysed and split up into its individual frequencies. Frequencies which our ears cannot perceive are then removed from the signal, which reduces the amount of information the signal contains and therefore reduces the file size. For example, if our recording at one point contains a loud bang on the snare drum and a very quiet guitar note which is played at the same time, then our ears cannot hear the quiet note while the loud sound is playing, because the loud sound simply drowns it out. This is called masking: the loud noise masks the quiet one. So the frequencies of the quiet note can be removed without altering the sound to our ears, because we wouldn't have heard it anyway. If this is done carefully and not too much information is removed, the compressed audio file should sound virtually indistinguishable to us from the uncompressed file. If the compression is too aggressive, it will also remove frequencies which we would have heard, resulting in weird distortions in the sound, which are called compression artefacts.
The level of compression is typically given as the number of bits the compressed audio signal is allowed to use for every second of audio. One second of uncompressed stereo music in CD quality (44,100Hz sample rate, 16-bit resolution, two channels) uses 44,100 * 16 * 2 = 1,411,200 bits, or 1411kbit/s.
A 320kbit/s MP3 file will remove as much information from the audio recording as is necessary to squeeze the remaining audio into 320kbit/s, which means it compresses the original audio recording by a factor of around 4.4 (1411kbit/s / 320kbit/s). This is generally considered transparent compression, meaning it should be indistinguishable from the original recording (this article about audio quality from the Audacity website considers even MP3 at bitrates of 170-210kbit/s as transparent).
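Both calculations are easy to reproduce:

```shell
# CD bitrate: sample rate * bits per sample * number of channels
SAMPLE_RATE=44100
BITS=16
CHANNELS=2
CD_BITRATE=$((SAMPLE_RATE * BITS * CHANNELS))
echo "CD audio: $CD_BITRATE bit/s"
# Compression factor of a 320 kbit/s MP3 relative to CD audio
awk -v cd="$CD_BITRATE" 'BEGIN { printf "factor: %.1f\n", cd / 320000 }'
```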
MP3 and AAC are examples of compressed or lossy audio codecs, which remove information, FLAC and WAV are examples of uncompressed or lossless audio codecs, which don't remove information. (What's a codec? Encoder and decoder. A piece of software that stores audio to a file in a given format and reads it back.)
As a rule of thumb, what we get on a CD is uncompressed, what we get through streaming services and on YouTube is compressed (unless your streaming service offers lossless audio).
I grabbed a few CDs off the shelf and ripped a couple of songs which I feel offer some variety to see how the MP3 codec performs. I chose to take music from my own CDs instead of downloading something so I could be sure that I'm really dealing with uncompressed audio files to begin with. Then I converted each WAV file to 320kbit/s MP3 with the lame MP3 encoder with this command:
ffmpeg -i file.wav -codec:a libmp3lame -b:a 320k file.mp3
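For more than a handful of files, the same command can be wrapped in a loop. A sketch, assuming the WAV files sit in the current directory:

```shell
# Convert every .wav in the current directory to a 320 kbit/s MP3,
# using the same lame encoder settings as above.
for f in ./*.wav; do
    [ -e "$f" ] || continue      # skip if no .wav files match
    out="${f%.wav}.mp3"          # song.wav -> song.mp3
    ffmpeg -i "$f" -codec:a libmp3lame -b:a 320k "$out"
done
```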
Then I stole (I mean: used in accordance with the license) the amazing AB-Audio-Player code from this Github project to create the A/B test for this website. And now we can hear for ourselves which one sounds better.
Oh, one thing. If you're listening through Bluetooth headphones, you will never hear the uncompressed audio. Bluetooth always uses lossy audio compression codecs, unless you have very new and expensive headphones with aptX Lossless. But everything else, your standard Apple AirPods or Sony/Bose headphones? They all use lossy compression. You need to listen through wired headphones. (And if someone waxes poetic about how they only listen to uncompressed audio and then proceeds to stick a pair of AirPods in their ears, you now know that they have no idea what they're talking about.)
So, how did you do?
I have to be honest, I don't hear a difference. I listened with good headphones (Beyerdynamic DT 880 Pro) through a Focusrite audio interface, but no matter how hard I tried to find any artefacts in the vocals, the high frequencies, the reverb etc., I just don't hear anything. And that's despite knowing which file is which. To me they sound identical and the MP3s sound completely transparent.
Of course this is in no way a comprehensive test. These are just five examples, and even if you just blindly guessed without even listening to anything, there's still a 1/32 chance that you got everything right by pure luck.
But this is also not meant to trick anyone or prove that I'm right and you're wrong or anything like this. I wanted to see if I can tell the difference between compressed and uncompressed audio, and I thought I'd take you along for the ride. I can say with confidence that despite being a musician and having studied this stuff back in University and therefore knowing what to listen for, I can't tell the difference. This is also not the first time I'm comparing compressed to uncompressed, I've done this many times in the past and I've always come to the same conclusion: To me, MP3 at high bitrates sounds on par with uncompressed audio. I can't hear a difference. Unless a streaming service is actively making the MP3s sound worse to promote their HiFi option (which I can totally see them doing), paying extra for uncompressed is simply not worth it.
For me.
It's a crime to exclude from this list Suzanne Vega's Tom's Diner, which was used as a test during the development of the MP3 codec, but unfortunately I don't have it on CD.
Btw, do you want to know what compression artefacts sound like? Have a listen:
This is MP3 at 56kbit/s. Do you hear this metallic noise in the high frequencies? That's what it sounds like when too much information is removed. This file is 165KByte, while the 320kbit/s MP3 from above is 940KByte and the uncompressed WAV is 4.1MByte.
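The file sizes line up with the bitrates: compressing CD-quality audio (1411 kbit/s) down to 56 kbit/s should shrink it by roughly the same factor as 4.1 MByte to 165 KByte (roughly, because the exact byte counts depend on headers and rounding):

```shell
awk 'BEGIN { printf "bitrate ratio:   %.1f\n", 1411.2 / 56 }'       # ~25x
awk 'BEGIN { printf "file size ratio: %.1f\n", 4.1 * 1024 / 165 }'  # ~25x as well
```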
2026-03-22 08:00:00
I really like reading blogs. I also really like Mastodon as a social medium, to keep in touch with people or to have a quick chat about random things. One thing I'm not a fan of, though, is when people post long articles which would be ideally suited for a blog as a "thread" on Mastodon.
Here is a Mastodon thread I saw recently and found interesting. It consists of 14 individual posts, so I started at the first one and made my way through to the end. But by the time I had reached the bottom, I found myself exhausted from the experience, and here's why.
This is a screenshot of the entire thread (I decided to pixelate the content of the thread and the picture and name of the author, because they're not relevant to the point I'm trying to make here. This is not about this particular thread, and it's also not a knock on the author. It's merely an example to illustrate what I mean):

Let's look at it in a bit more detail. It starts with the first post of the thread:

Then we get a block of this:

After this we get another post:
Followed by another block of this:

And then it continues like this all the way through. A few sentences of content, followed by a block of irrelevant things that have nothing to do with the actual content and that your eyes have to skip, followed by another few sentences of content and so on.
This is not a good way to present a long-form article. It makes it unnecessarily difficult to read because after every couple of sentences it briefly takes the reader out of the context of the article and requires them to skip down a little, catch the beginning of the next block of text and focus again. It's cognitively exhausting. It feels like reading a post where there's an ad after every other paragraph. Or one of those sites where one article is spread over ten pages so you constantly have to click next page.
If you write an article of 500 words or so, presumably you want people to be able to focus on it and understand the contents. But by splitting it up into 14 tiny little chunks of information which are interrupted by a block of useless pictograms which have no relevance to the article, you're making it way more difficult for the reader to actually engage with your writing than it needs to be.
There are other problems with this approach. Mastodon sorts posts in reverse chronological order, meaning the thing you posted last appears first in my timeline. So I scroll down and see a post that starts with "14/14: So in conclusion...", and then I either have to scroll much further down to find the first post and work my way back up, or click the post, scroll all the way to the top of the thread and start reading to figure out what it's even about. There is a reason why blog posts and news articles begin with a headline: So the reader can see at a glance what it's about and if it's of interest to them. Here the first thing the reader sees is the last sentence. This is backwards.
It also makes rediscovering this content almost impossible. Stuff on social media scrolls by and then essentially disappears, even though the posts might still linger around. But nobody who newly discovers your profile scrolls back through potentially thousands of posts to see if there are any interesting threads in there, so this information is virtually lost.
I'm on Mastodon, so that's where I encounter this phenomenon the most, but it's the same on Bluesky, X/Twitter (where this practice started), and it's even worse on Instagram which is an image sharing platform, so people post screenshots of walls of text as images on there. I find that insane.
What's a better way then? Of course I'm in no position to tell anyone what to do, so I'd merely like to make a suggestion: Get a website. Start a blog. Create a place where your long-form writing feels at home. It's not as complex as it sounds. You can write, you can make an account on social media. You got this. Put your writing in a place where it can live long-term, rather than just throwing it on social media where it will disappear into the ether a few days, if not a few hours after you wrote it.
Because that would be a shame.
2026-03-20 08:00:00

Linkdump number 99! We've reached the final two digit Linkdump. You know what this means? Next week we'll find out if the counter is three digit safe or if it will overflow and roll back to Linkdump -99, and then I will have to spend years digging myself out of that hole. But maybe it's time to implement a new counting scheme altogether? After all, numbering things in ascending order is just so... plain.
How about something a little more exotic? We could continue with Linkdump 360, followed by Linkdump One and then Linkdump Series S and X (and you will never figure out what the difference between those two is).
But maybe Linkdump 99 was already a mistake. Maybe after Linkdump 98 we should have gone straight to Linkdump ME, then Linkdump XP, Linkdump Vista (with all new 3D effects that your computer isn't powerful enough to run), Linkdump 7 (you're still following, right?), Linkdump 8.1 (a service release for the widely unpopular Linkdump 8), followed by Linkdump 10 (naturally), Linkdump 11 (the agentic Linkdump) and finally Linkdump 365, a subscription service so you will always receive the latest links for a very reasonable price - at first.
Or should we leave the numbers altogether? What do you think about Linkdump Wildcat, Linkdump British Longhair, Linkdump British Shorthair (zero new links, but tons of under the hood fixes!) and Linkdump Siamese (with lots of features nobody asked for) before we abruptly switch gears and continue with Linkdump Munich, Linkdump Neuschwanstein, Linkdump Oktoberfest (featuring the new "Liquid Beerglass" design! You won't be able to read anything, but it'll look nice in the promo shots) and, for good measure, Linkdump Lederhosen.
Exciting times are ahead! But for now, number 99 it is.
Articles
Around the Small Web
2026-03-15 08:00:00
Around 20 years ago I went to University, studying electrical engineering with a specialisation in telecommunications engineering. I was particularly interested in everything that had to do with audio. Signal processing, sound generation, audio compression etc. were endlessly fascinating to me, so I spent a lot of time at the Chair of Multimedia Communications where they worked on all kinds of sound and audio related topics. And this is the coolest thing I encountered there:
It's exactly what it looks like: a circular frame made from aluminium profiles, 3 metres in diameter, raised to roughly ear level, on which several dozen loudspeakers are mounted (48 to be precise), pointing inwards. And when you stood inside this ring and they turned the system on, and you closed your eyes, something magical happened. You knew you were in a small room with grey curtains all around, but all of a sudden you could hear a violin playing in the far distance. You could hear footsteps approaching you and walking right past you. You could hear a door closing much further away than the door of the actual room was, then someone whispering in your ear, as if they were standing right next to you. You could hear two people having a conversation in front of you in a loud and busy auditorium. And all of these things sounded completely realistic.
This is called a wave field synthesis (WFS) array or system. In order to understand what it does, let's quickly look at how sound works first.
Imagine you're in a big room, maybe a ballroom, and a few metres away from you is a person playing a violin. The vibrations of the violin's strings create sound waves which spread out in all directions and travel through the air, much like ripples of waves in water travel outwards when something falls in, forming what's called a wave front. Part of this wave front is reaching your ears, other parts are hitting the walls and are reflected from there, reaching your ears too but from different directions and being slightly delayed compared to the original wave front (because they have to travel a longer way, first to the walls and then to your ears). Based on where the wave front is coming from and where the reflections (echoes) of the sound are coming from, you can tell where the violinist is in relation to you, whether he is close to you or far away, how big the room is that you two are in and many other things. Your brain can get a lot of information about the environment you are in just based on the sound waves which are reaching your ears.
What happens when you record this violin player with a microphone and then play the recording back through a speaker that's placed right in front of you? Does it sound like the sound is coming from where the violinist originally stood? No. It sounds like the sound is coming from the speaker in front of you. Because it is.
But what if you could find a way to exactly reproduce the wave front, including the echoes, that hit your ears when you were there in the actual room when the violinist was playing? Then it would sound exactly the same as it did at that time, because the sound waves reaching your ears would be the same.
This is what the wave field synthesis system does. It aims to reproduce the wave front of a sound source anywhere, independently of where the actual speakers are located. In other words, with a WFS system you can freely place sound sources at any point in space. 1
Here's how it works.
A large number of speakers are placed very close together. Each speaker on its own is just one small sound source, and when it's playing in isolation, you will hear that the sound is coming from this one speaker. But if all the speakers are playing together and each reproduces the sound at just the right moment, then the sound waves from all these individual speakers will merge into one big wave front hitting your ears, instead of many individual sound waves from individual speakers. That a large wave front can be built up from many small elementary wave sources in this way is known as the Huygens-Fresnel principle.
Have a look at this graphic, this hopefully makes it a bit clearer:
In this scenario, you're standing all the way to the right. The violinist is playing, producing soundwaves which are moving towards you as a wave front. If you replace the violinist with an array of speakers (shown in the middle) and have them all play back the recording of the violin with precise timing, this will result in a wave front which looks and sounds just like that of the original violinist. In other words, to you the listener it will sound exactly as if the violinist were playing at this location far behind the speaker array, because the wave front created by the speakers is the same as the wave front created by the violinist much further away (and of course the room echoes could be reproduced in the same way, too).
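The core of the trick is timing: each speaker plays the source signal delayed by the time sound would need to travel from the virtual source to that speaker. A toy calculation (the positions are made up for illustration, and a real WFS system also applies per-speaker amplitude weighting, which is omitted here):

```shell
# Virtual violinist 10 m behind a line of speakers along y = 0;
# speed of sound ~343 m/s. Delay per speaker = distance / speed.
SRC_X=0; SRC_Y=-10
for SPK_X in -1.5 -0.5 0.5 1.5; do   # four speakers for illustration
    awk -v sx="$SRC_X" -v sy="$SRC_Y" -v px="$SPK_X" 'BEGIN {
        d = sqrt((px - sx)^2 + (0 - sy)^2)   # distance virtual source -> speaker
        printf "speaker at x=%+.1f m: %.2f m away, delay %.2f ms\n", px, d, d / 343 * 1000
    }'
done
```

Speakers further from the virtual source get slightly longer delays, which is exactly what bends the individual wavelets into one curved wave front.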
And this is the magic of the wave field synthesis system. It can produce sound waves which by the time they reach your ears are shaped and sound exactly like those of a far away violinist. Or a person standing right next to you, whispering in your ear. Or two people having a chat in front of you while there's a lot of background chatter going on further away. It all sounds completely realistic, because the sound waves reaching your ears are shaped in the same way as the ones that would reach your ears in the real scenario.
To this day I think this is one of the most impressive technologies I have ever encountered. You might think your 7.1 home cinema system sounds pretty sweet, and of course you would be right. But this takes it to the next level. It's the acoustic equivalent of watching a movie on a TV screen vs watching it on a gigantic IMAX screen which curves around you and takes up your entire field of vision. There is simply no comparison to a 'normal' surround sound system.
So what is this actually used for? As you will probably have guessed, this is not a product for the consumer market that you can easily set up in your living room. Even though the technology has existed for more than two decades at this point, it is slow to catch on, no doubt due to the high complexity. Sound needs to be specially mixed and processed to really take advantage of a WFS system, which requires a great deal of digital signal processing, and building a system like this in the first place is no small feat. Even a small system needs dozens of speakers, which require the same number of amplifiers, speaker cables, mounting hardware and so on. Not to mention the computer system needed to process the sound and drive all of these speakers. When was the last time you saw a 48 channel sound card?
You can still find some systems in use however:
I wonder what became of the original speaker array at my University. The people who worked on it have long since retired or moved on to other projects. Are they still working with it? Did they disassemble it? Do they have it in storage somewhere? Would they sell it to me if I asked nicely? Let's maybe not think about this too much.
Footnotes
With the caveat that at least the system I'm talking about here is two-dimensional, which means the speakers are all arranged in the same plane, so no sound sources can be placed above or below the listener. In practice though this is not a huge deal.