
Linkdump No 54

2025-05-09 08:00:00

Billionaires are evil, AI is terrible and the MAGAs spare no effort to demonstrate their utter incompetence to the world. Why do I even bother coming up with a new intro every week? I could just reuse this one over and over again, it always fits. But there's also old MacOS stuff, old digital cameras, weird circuit boards and weird Windows bugs that kept me busy and entertained this week, and I hope you'll enjoy some of this, too.

Articles

  • Silicon Valley billionaires literally want the impossible - Ars Technica
    The writing has been on the wall for some time, but by now it's impossible not to notice: Silicon Valley billionaires are literally Bond Villains. Not figuratively - literally. Seriously, go watch some old James Bond, then take a look at Musk and Zuck and the rest of the gang. Actually, many Bond villains seem quite tame by comparison.
  • Despite misleading marketing, Israeli company TeleMessage, used by Trump officials, can access plaintext chat logs
    This is a fascinating in-depth analysis of the source code of the Signal fork that was used to Make America Great Again, even though it was developed by a bunch of buffoons from, you know, Not America. I wouldn't be surprised at all to learn that it was developed by a foreign intelligence service with the explicit purpose of spying on the US government. Not that it takes much to spy on them of course since they happily invite everyone and their grandma into their supersecret "let's drop some bombs" chatrooms.
  • AI-Fueled Spiritual Delusions Are Destroying Human Relationships
    This article has made the rounds this week, and for good reason - it's deeply concerning. Turns out that people who are maybe already a little mentally unstable and susceptible to manipulation, those who would be easy prey for a "guru" or a fake healer, can be easily pushed over the edge of sanity when they're exposed to a carelessly developed "AI" that will happily fuel their delusions all day long.

Software/Services

Hardware Projects

  • Concretedog: Building Beautiful Things, RC2014 Mini II Picasso Ltd Edition
    Usually on circuit boards and in electronics in general, traces and wires are always laid out in straight lines with 90° bends and corners. In a previous job my colleague who designed the circuit boards went "NOPE" and made all his traces curved, which resulted in some unique looking boards and caused a few funny discussions with customers. This here is taking that same approach, but dialed up to 11.

Videos

Around the Small Web

  • Being a Bad Salesperson, By Choice | MyNotes
    Stefano refuses to sell his customers things they don't need, which is why he's not a Bond villain billionaire and instead he's doomed to leading an honest life with a clear conscience. Poor Stefano.
  • Explanation of the Ratfactor Feed - ratfactor
    Dave explains the philosophy behind the layout of his website, and it's a fascinating read. I wish I was this creative with my site, but I never seem to be able to progress beyond "here's some stuff I wrote, sorted new to old". I should take some inspiration from him.
  • Schafe sind bessere Rasenmäher | Reducing Screen Time
    Robert, whose blog name translates to "sheep are better lawnmowers" takes a swing at reducing the time he spends on his phone, and if you replaced the word "phone" in his article with the word "laptop", it would be a shockingly accurate description of how I spend a lot of my time. Eventually I will have to tackle this problem too. Soon. When I'm ready. I can stop whenever I want. I'm not addicted. You're addicted.

Misc

  • List of digital cameras with CCD sensors - Wikipedia
    I honestly don't know why this is a Wikipedia article - it seems more like something that would be at home on a blog. It's a list of digital cameras from the 2000s with a CCD sensor, which is said to produce a more "film-like" look compared to more modern cameras. I may or may not be thinking about getting one of those, which is how I came across this list ;)

Installing OS X Tiger on an old MacBook

2025-05-06 08:00:00

What do you do when you have to stay home from work because you're sick, but then you get bored just sitting around doing nothing? Of course, you mess around with old computers. What else would you do?

So I dug out this ancient white MacBook I bought a few years ago:

It's a MacBook 1,1 with a 2GHz Core 2 Duo CPU and 2GB of RAM. The battery is long dead, as is the optical drive, and it has a few weird stains and blue colour blotches on the inside that I can't get rid of, not with alcohol and not with acetone, but otherwise it's in pretty good condition. The plastic isn't cracked, the hinges are fine, the screen looks fine after the backlight tube has had some time to warm up... it's a nice computer.

It had Mac OS X Snow Leopard 10.6.8 installed, which is the last version supported on this machine, but it originally shipped with Mac OS X Tiger 10.4.7. I wanted to see what it looked like when it was brand new, so I set out to install a version of OS X Tiger on it. The only problem is, I don't have any of the original restore CDs, and the optical drive doesn't work anyway. But surely there are ISOs available online that I can just write to a USB stick and install from there, right?

Well, yes and no.

There are a bunch of different versions of OS X Tiger available as ISO and DMG images on archive.org, and I naively assumed that I could just download one of them, write it to a USB stick with dd (on my normal Linux PC) and the MacBook would happily boot from that. But it didn't. Either the drive wasn't recognised by the MacBook as a boot drive at all, or, if it was, it started to boot but failed before it reached the first screen of the installer. And yes, I made sure I got the Intel versions. So it isn't quite as simple as that.

But it is simple if you know where to look and what to do. After a lot of googling and asking around on Mastodon I ended up on this page from the Hackintosh community, which describes exactly what to do and also includes a link to a fully featured, tested and working installation image. You can create a USB installer from this image, but you need a working MacOS installation to do it. I still had Snow Leopard running on that MacBook, so that wasn't a problem.

So here are the steps, very briefly:

  • Download the image labeled ACDT from archive.org
  • Copy it to the Mac and open it
  • Format the USB drive with DiskUtility as "Mac OS Extended"
  • Run the following command, adjust the paths if necessary: sudo asr restore -source /Volumes/Mac\ OS\ X\ Install\ DVD -target /Volumes/MyVolume -erase -noverify

And bam, that's it. It took a while to write the image to the USB stick because it's 12GB and this old Mac only has USB 2.0 ports, but after it was done I had a working USB installation medium.

I rebooted the Mac, held the alt key as it restarted to get to the boot selector and selected the USB drive. It booted into the installer without any problems and the installation went off without a hitch. A few minutes and one reboot later the introduction video started, I went through the initial setup (there is a screen which "requires" you to enter your phone number, address and so on - press cmd+q to get out of it) and there we are:

Glossy Aqua interface, brushed metal windows, blue wallpaper... it's all there.

I was never a Mac user, not back then and not now, but I always loved the look of these early Mac OS versions. I even had my Ubuntu themed to look like this around 2007/2008.

So now I can finally do what I assume all the cool kids with their MacBooks did back in the day: create a website with iWeb, record some music with GarageBand, get an old camcorder and shoot cool skateboarding videos that I can then cut with iMovie and burn to DVD with iDVD...

You know what, I wrote that out as a joke, but it actually sounds like fun. Maybe I should do this.

Well, maybe not the skateboarding. I'm in my 40s. Some things are better left to the youth after all.

Coding then and now

2025-05-04 08:00:00

Recently I watched a few videos about how programmers in the early days of video gaming, as well as demo coders today developing for old hardware, used every trick in the book (and quite a few tricks that weren't in any book) to make video games and demos as visually appealing and interesting as they possibly could, and to make it appear as if the hardware was doing things it shouldn't technically be able to do.

Here are a few examples of videos to illustrate what I mean:

In order to make these games and demos look the way they do, the coders had to write highly optimised assembly code that often needed to be accurate down to the individual clock cycle of the CPU, because some events have to occur at exactly the right time for the effect to look good. On CRT displays, which these devices were designed for, the screen is drawn from top to bottom by an electron beam. For some effects to look right, the content of the video buffer, which holds the image being displayed, has to be manipulated while the beam is still drawing the frame, and that requires very precise timing.
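
To make that a bit more concrete, here's a deliberately toy Python sketch of the "racing the beam" idea: a single background-colour register is rewritten once per scanline while the frame is drawn top to bottom, so one register yields many on-screen colours. This is purely illustrative and my own construction - the real thing is cycle-exact assembly poking hardware registers at precisely the right moment.

```python
# Toy model of a classic raster effect: rewrite the background-colour
# register on every scanline while the "beam" draws the frame top to
# bottom. Real versions are cycle-exact assembly; this only mimics the
# structure.

SCANLINES = 262  # one NTSC field

def rainbow(line: int) -> int:
    # a new palette entry every 8 scanlines -> horizontal colour bars
    return (line // 8) % 16

frame = []
for line in range(SCANLINES):
    # In real code you would busy-wait here until the raster reaches
    # `line`, then write the colour register at exactly the right cycle.
    frame.append(rainbow(line))

# One colour register, yet the finished frame contains many colours:
print(f"{len(set(frame))} distinct colours in a single frame")
```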

This got me thinking that optimising code and writing lean and resource-efficient code in the first place seems to be a dying art. It's still necessary for programming embedded devices which typically have a very limited amount of program memory, RAM and processing power, but PCs (and Macs) these days have almost unlimited processing power and memory compared with the systems of the past. Terabytes of drive space, gigabytes of memory and clock speeds measured in gigahertz, not to mention multicore CPUs with 8 or more cores on a single chip would have blown every 80s programmer's mind; they would have had no idea what to do with this any more than I would know what to do if somebody landed a spaceship in front of my house and handed me the keys.

But not to fear, coders have found ways to use all these resources by writing gigantic applications that require huge amounts of disk space, RAM and processing power and still manage to feel slow even on high end computers. I get to enjoy the full Microsoft 365 experience at work with Windows 11, Outlook, Teams, Sharepoint, the entire office suite and on and on, and even on a powerful machine all of this just feels way more sluggish than it should. If I installed a copy of Windows 7 with contemporary versions of Outlook, Office and Skype on this computer, it would absolutely fly and I could do everything I'm doing now, but faster. I mean, even just a right click on the desktop sometimes takes an entire second before the menu appears. That doesn't seem to me like there's particularly well written code running behind it. Microsoft's CEO agrees ;)

Ok, I'm starting to rant now. My point is, programmers these days (and I'm including myself in this too) might do well to occasionally take some inspiration from coders of the past or demoscene coders and, after implementing something, take a moment to look over their code and ask themselves "is there maybe a more efficient way to do this?". Because chances are there is, and at the end of the day a lean and well optimised codebase is something that everybody benefits from. Unfortunately it seems that with the advent of AI assistants and vibe coding we're moving further and further away from this idea, but one can hope...

Linkdump No 53

2025-05-02 08:00:00

That was a week, huh? There was quite a bit more online drama than I would have liked, which isn't saying much because I like exactly zero drama. And evidently I'm not the only one because a few good people deleted their online presence this week, which is a real shame I think, but also completely understandable.

But I already ranted about this, I'm glad there's other ways to keep up with people and there's still enough interesting stuff around the internet that I want to share with you. There's some free music, everybody was SHOCKED to learn that the people who told us that downloading movies and music is stealing are hypocrites, and maybe one of us will find an old soviet space probe in their backyard next week.

Articles

  • Microsoft CEO says up to 30% of the company’s code was written by AI
    My first thought when I read this was "that explains a lot". And they will have more and more code "written" by AI, and I suspect that it will maybe not be so great for the quality of their products, which is already... you know... let's move on.
  • You Wouldn’t Steal A Font… | Hackaday
    Apparently the font that was used in the anti-piracy campaigns of the 2000s was itself stolen. Now I have to be honest, this sounds maybe a little too good to be true, but if it is true then it's such a delicious kind of irony that I can't not mention it here.
  • There’s A Venusian Spacecraft Coming Our Way | Hackaday
    The headline is bullshit, but the story is actually very interesting - in 1972 the Soviet Union launched a few probes towards Venus, but one of them malfunctioned and got stuck in Earth orbit and has been orbiting the Earth for the past 50 years. Now however it's about to fall back down, and because it was built to land on Venus, it might actually survive reentry and come down intact, which is fascinating. There's a more in-depth article here and Scott Manley made a video about it here.

Software/Services

Around the Small Web

  • People can change - annie's blog
    Another great post by Annie. I admire her honesty and just how much she's changed for the better.
  • The Day Anubis Saved Our Websites From a DDoS Attack
    How a DDoS attack on the ScummVM website was mitigated by a service called Anubis, which filters incoming requests and, when in doubt, requires the browser to solve a proof-of-work challenge, which apparently reliably keeps the bots out (see the sketch after this list for the general idea). I've started seeing this on a few other sites as well, and I wonder if that's a feasible way to keep AI bots out...
  • 0x32 | Christian Heilmann
    Christian turns 50 (congratulations!) and he remembers the 80s a little differently than the people who listen to Synthwave and watch Stranger Things ;)
  • Reading RSS content is a skilled activity - David Oliver
    A few thoughts on how to curate a good RSS feed full of interesting things and hopefully little drama.
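
Since the proof-of-work idea came up in the Anubis link above, here's a rough sketch of the general hashcash-style principle - note this is not Anubis's actual algorithm, just the concept: the server hands the browser a challenge string, the browser burns CPU time to find a matching nonce, and the server verifies the result almost for free.

```python
# Hashcash-style proof of work: find a nonce so that the hash of
# (challenge + nonce) has a number of leading zero bits. Expensive to
# solve, nearly free to verify. Illustrative only - not Anubis's
# actual implementation.
import hashlib
from itertools import count

DIFFICULTY = 20  # required leading zero bits; an assumed value

def pow_hash(challenge: str, nonce: int) -> int:
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big")

def solve(challenge: str) -> int:
    target = 1 << (256 - DIFFICULTY)
    for nonce in count():  # brute force, ~2**DIFFICULTY attempts
        if pow_hash(challenge, nonce) < target:
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    return pow_hash(challenge, nonce) < (1 << (256 - DIFFICULTY))

nonce = solve("example-session-token")
print(nonce, verify("example-session-token", nonce))
```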

Misc

  • Has anyone noticed how people have MASSIVELY changed in the last 20 years?
    This is an interesting thread over on Reddit. I personally think that people have changed, but it doesn't seem that massive to me. But maybe I've changed with them and don't notice... I read a comment somewhere lately that said that people who were released from prison after 20 years or so went "What the hell happened? Everyone is just staring at their phones all the time!", so maybe there is something to it.
  • mobygratis - Free Moby music to empower your creative projects
    The musician Moby has released 500 tracks of free music to the internet with (almost) no strings attached. I've never been into anything that Moby does, but this is really cool and for that alone the guy has my respect.

Bullying is never okay

2025-05-01 08:00:00

Brandon who ran the blog Brandon's Journal decided to quit blogging and say his farewell. He already deleted all posts on the blog except this one, and I assume he'll delete this too at some point so these links will probably be dead pretty soon.

It's a shame and it's sad because I really enjoyed Brandon's blog. I subscribed the moment I discovered it and read every article he's published since, and he quickly became one of my favourite bloggers because his posts were always interesting and thoughtful; more than once I actually learned something from him or took one of his posts as a starting point to reflect on myself. His previous post "A Fundamental Misunderstanding of How People Work" is a perfect example of this, and I was going to recommend it in tomorrow's Linkdump, but now it's gone like everything else on his blog.

And I feel that we lost something here.

Of course everybody has the right to step away from something if it isn't right for them anymore, and this is totally normal and fine. But there's something here that's troubling me. Brandon explains more in his post, and if he chooses to delete it later that's a decision I'm going to respect, so I'm not going to archive or copy his post or quote excessively from it. But I want to quote just a small bit here and I hope that's okay:

"[...] I've had enough of the abuse. I'm sick of having false allegations thrown my way and dealing with what essentially amounts as cyberstalking."

Now, I wasn't there for any of this; I spoke to Brandon a little bit over email so I have a rough idea of what's been happening, but I didn't personally witness anything that happened or any of the events that set this in motion and so I don't feel that it's my place to comment on anything regarding this matter.

But I think this is a good opportunity to clearly state one of my core values here that I strive to live my life by:

Bullying is never okay.

We live in a world where some rich and powerful people routinely bully each and every person who dares to disagree with them or stand in their way; they do this so much and so naturally and casually that I suspect it's one of their core character traits.

We also live in a world where some people, especially online, think that whoever doesn't share their views, beliefs, values, politics or whatever else deserves, no, has to be publicly shamed, insulted, screamed at, bullied, harassed or even attacked.

None of these things are okay, and I find it deeply troubling that some people think they are.

To be clear, I think disagreeing with someone or calling out someone's unacceptable behaviour (like bullying) is fine and sometimes necessary - as long as it's done in a respectful manner. There is a difference between telling someone "hey, I think you're wrong and here's why" or "this thing you did was not okay" and screaming expletives in their face or threatening them, whether in the real world or online. It's also okay to agree to disagree. Different people value different things, and I also have to accept when someone doesn't see the world the same way I do.

There's of course a point at which it is okay, maybe even necessary to tell someone that you're done with them. If you dehumanise others, spread hate, instigate violence, or deny someone their basic human rights because of their gender/sex/religion/ethnicity etc. then we're done talking. I want nothing to do with you and you're not welcome in my space until you demonstrate an effort to change.

I've heard the argument "they're not respecting me/my group, so why should I respect them", and I think that's nonsense. It's just an excuse to be an a-hole. I will treat you with respect precisely because you treat me or others with disrespect, because I don't want to be like you. I want to be better and I hold myself to a higher standard than that, and so I will do my best to stand up for what I believe is right, but also remain polite. I might fail at that. But then I also have to own up to my own behaviour and apologise or face the consequences of my actions.

In short, I think bullying is harmful, it is toxic, it is wrong. I will not bully people and when I witness someone being bullied, I will stand up for them. I accept that people have different values and opinions, and I'm always open for a respectful conversation. If someone approaches me in a respectful manner, I'm always willing to talk and listen. After all, maybe I'm the one who's wrong, and maybe I can learn something from them. I reserve the right however to step away and end a conversation if I don't see eye to eye with the other person or if the interaction turns hostile.

And I will never be part of an internet mob that piles on someone, ever. No exceptions.


Lastly, Brandon, I'm sad to see you go, but I get it. Nobody needs this kind of thing in their life. I wish you all the best, thanks for writing your blog and sharing your thoughts, and I hope to see you around.

Getting a feeling for how much energy AI uses by running it on my laptop

2025-04-27 08:00:00

There are often articles in the news and around the web pointing out that "AI" uses tons of energy, like this article, which calculates that a single query to ChatGPT costs an average of 0.36 (US) cents. Recently Sam Altman claimed that saying "please" and "thank you" to ChatGPT costs "tens of millions of dollars" (this article also mentions that a single ChatGPT query can power a lightbulb for 20 minutes, though it fails to mention which type of lightbulb - a highly efficient LED bulb or an incandescent one? - making this information useless).

Either way, running AI applications consumes a lot of energy. But while reading about it is informative, I still find this type of information rather abstract and hard to fully grasp. After all, if I'm asking ChatGPT for something I'm just typing a few words into a website and then get a response back and I have no idea what's happening in the background. But there is a way to experience how much energy is needed for an AI query more directly, and that is to run it on your own computer. So let's do that.

Setup

By now there are a number of ways to run AI chatbots on your local machine. I chose GPT4ALL which I've played around with in the past and so was already a bit familiar with. There are installers available on the website, so setting up the application is really simple. By itself, GPT4ALL is just a front-end for running all kinds of LLMs (Large Language Models), and after starting it for the first time you have to go to the "Models" tab and select a language model to download. There's lots of them available and I have no idea what most of them are, but one of the first in the list was "Llama 3 8B Instruct" which sounded familiar (Llama is developed by Meta) and so I downloaded this. Once that's finished you can go to the "Chats" tab, select the model under "Choose a model..." at the top and then you're ready to go.
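
GPT4ALL also ships Python bindings, so the same experiment can be scripted instead of clicked through. Here's a minimal sketch assuming the gpt4all pip package; the exact model file name below is my assumption, so use whatever the model downloader actually saves to disk.

```python
# Minimal local-LLM query via the gpt4all Python bindings
# (pip install gpt4all). The model file name is an assumption -
# substitute the file the GPT4ALL downloader gives you.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    answer = model.generate(
        "Explain to me how a large language model works in sufficient detail.",
        max_tokens=1024,
    )
print(answer)
```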

I have to mention that this is far from being a scientific test where you run the same thing multiple times and do measurements and benchmarks and statistical evaluations and whatnot; I just want to get a feeling for how resource intensive it is to ask a chatbot something.

So once I had everything set up, I just entered a prompt into the chatbot and let it do its thing. Here's the prompt I put in:

"Explain to me how a large language model works in sufficient detail."

Might as well learn something while we're at it, right? Then I hit Enter and waited for the answer to appear on my screen. You know how ChatGPT prints out the answer word for word? The same thing is happening here, but much, much slower.

Results

I'm putting the full response at the bottom of this post as it is quite long; with 625 words it's pretty much a post of its own.

I ran this on my laptop with an i7-4650U CPU, which wasn't the smartest idea, because it took a full 12 minutes and 15 seconds to generate the answer. And the entire time the CPU was at 100% on all cores and the fan was blowing at maximum speed. I tried measuring the power draw of the computer at the wall, but it fluctuated wildly and wasn't usable; I'm guessing the computer was also drawing power from the battery at times. I should have run this on a computer without a battery. But the CPU has a TDP of 15W, and I think it's safe to assume that it consumed the full 15W the entire time.

So we can do some back-of-the-envelope-calculations here:

The test ran for 735 seconds (12 minutes 15 seconds) and drew 15W of power the whole time, which is an energy consumption of

15 W × 735 s = 11,025 Ws

That's watt-seconds; converting to kilowatt-hours:

11,025 Ws / 3,600 s/h ≈ 3.06 Wh ≈ 0.003 kWh

At an electricity price of around 40 ct/kWh in Germany, 0.003 kWh costs

0.003 kWh × 40 ct/kWh ≈ 0.12 ct

So this one chatbot query cost me a bit over 0.1 cent. That's actually less than the 0.36 US cents figure cited above, but this is a very rough calculation and I'm only factoring in the power draw of the CPU itself. If I did the calculation with the power consumption of the whole computer, which is probably more like 45W, I would end up at around 0.36 cents too.
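
For anyone who wants to play with the numbers, the whole back-of-the-envelope calculation fits in a few lines of Python; the wattage, runtime and electricity price are the rough assumptions from above.

```python
# Back-of-the-envelope energy cost of one local LLM query,
# using the assumed numbers from the text.
power_w = 15.0           # assumed sustained CPU draw (TDP)
duration_s = 735.0       # 12 minutes 15 seconds
price_ct_per_kwh = 40.0  # rough German electricity price

energy_ws = power_w * duration_s      # watt-seconds (joules)
energy_kwh = energy_ws / 3600 / 1000  # convert to kWh
cost_ct = energy_kwh * price_ct_per_kwh

print(f"Energy: {energy_ws:.0f} Ws = {energy_kwh * 1000:.2f} Wh")
print(f"Cost:   {cost_ct:.2f} ct")
```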

Conclusion

But knowing the precise cost is not really what I'm after anyway. My goal was to get a feeling for how resource intensive running a chatbot is, and sitting here for over 12 minutes staring at my screen, with a new word appearing in the answer roughly every second (625 words in 735 seconds works out to about 0.85 words per second, or 1.2 seconds per word) and the fan blaring at full blast the whole time, gave me a very good idea of just how much computing power is required to run such a thing. It is a lot.

What if I ran this on a faster computer?

No doubt if I ran the same language model with the same prompt on my desktop PC (Ryzen 5 3600) it would chew through the task quite a bit faster. But it would also draw a lot more power while doing so, and produce a lot more heat. And if I did the same calculation as above, multiplying the power draw by the time it took to complete the task, I would arrive at roughly the same energy consumption in kWh. Doing the same task 3 times faster while drawing 3 times more power works out to the same total energy consumed.

Of course more modern CPUs are more efficient, especially when we start looking at Apple's ARM based chips, where we would probably see significantly less energy consumption for the same task.

But at the end of the day, even if we used more efficient hardware, there are millions of queries to ChatGPT and other chatbots every single day, and the energy these systems consume is gigantic. And we're only talking about using these systems here; training them in the first place also requires huge amounts of energy.

If you want to get a feel for just how much processing power, and therefore energy, it takes for a chatbot to generate an answer to a prompt, I can highly recommend running one on your own hardware and experiencing it for yourself.

It really puts things into perspective.


Appendix

For completeness, here's the prompt again:

"Explain to me how a large language model works in sufficient detail."

And here's what the language model generated:

I'd be happy to explain how a large language model works in detail.

Overview

A large language model is a type of artificial intelligence (AI) that uses neural networks and machine learning algorithms to analyze and generate human-like text. These models are trained on vast amounts of text data, such as books, articles, and websites, which allows them to learn patterns and relationships between words, phrases, and sentences.

Architecture

A large language model typically consists of three main components:

1. Encoder: This component takes in a sequence of tokens (e.g., words or characters) and outputs a continuous representation of the input text.
2. Decoder: This component generates output text one token at a time based on the encoder's output and previous generated tokens.
3. Attention Mechanism: This mechanism allows the model to focus on specific parts of the input text when generating each new token.

Encoder

The encoder is typically a multi-layer perceptron (MLP) or a recurrent neural network (RNN). Its primary function is to transform the input sequence into a fixed-size vector representation, known as the "context" or "hidden state." This process involves:

1. Tokenization: Breaking down the input text into individual tokens (e.g., words).
2. Embedding: Converting each token into a numerical vector using an embedding layer.
3. Encoding: Passing the embedded vectors through multiple layers of neural networks, which learn to capture complex patterns and relationships between tokens.

The encoder's output is a sequence of hidden states, where each state represents the context at that position in the input text.

Decoder

The decoder generates output text one token at a time based on the encoder's output. Its primary function is to predict the next token given the previous generated tokens and the current context:

1. Token Embedding: Converting the predicted token into an embedded vector using the same embedding layer as in the encoder.
2. Contextualized Representation: Combining the embedded token with the current hidden state from the encoder, which provides contextual information about the input text.
3. Softmax Output: Applying a softmax function to generate a probability distribution over all possible tokens.

The decoder's output is a sequence of predicted tokens, one at a time.

Attention Mechanism

The attention mechanism allows the model to focus on specific parts of the input text when generating each new token:

1. Query Vector: Computing a query vector based on the current hidden state and the previous generated tokens.
2. Key-Value Pairs: Creating key-value pairs from the encoder's output, where keys represent different positions in the input text and values are the corresponding contextualized representations.
3. Attention Weights: Calculating attention weights by taking the dot product of the query vector with each key and applying a softmax function to normalize them.

The model then uses these attention weights to compute a weighted sum of the value vectors, which provides context-specific information for generating the next token.

Training

Large language models are trained using massive amounts of text data (e.g., books, articles) through an optimization process:

1. Masked Language Modeling: Randomly masking some tokens in the input text and training the model to predict those masked tokens.
2. Next Sentence Prediction: Training the model to predict whether two sentences are adjacent or not.

The goal is to minimize the loss function (e.g., cross-entropy) between the predicted output and the true target token.

Inference

During inference, the model takes in a prompt or input text and generates an output sequence one token at a time using the decoder. The attention mechanism helps the model focus on relevant parts of the input text to generate coherent and context-specific responses.

This is a high-level overview of how large language models work. If you have specific questions or would like more details, feel free to ask!