
Why Lorde’s Clear CD has so Many Playback Issues

2025-08-15 23:30:34

2003 Samsung CD player playing a clear vs normal audio CD. (Credit: Adrian's Digital Basement)

Despite the regularly proclaimed death of physical media, new audio albums are still being published on CD and vinyl. There’s something particularly interesting about Lorde’s new album Virgin, however: the CD is a completely clear disc. Unfortunately, there have been many reports of folks struggling to get the unique disc to actually play, and some sharp-eyed commentators have noted that, judging by the absence of the Compact Disc Digital Audio logo, the disc doesn’t even claim to be Red Book compliant.

The clear Lorde audio CD in all its clear glory. (Credit: Adrian’s Digital Basement, YouTube)

To see what CD players see, [Adrian] of Adrian’s Digital Basement got out some tools and multiple CD players to dig into the issue. These players include a 2003 Samsung, a 1987 NEC, and a cheap portable Coby player. Since all audio CDs are supposed to adhere to the Red Book standard, a 2025 CD should play just as happily on a 1980s CD player as vice versa.

The first step in testing was to identify the laser pickup (RF) signal test point on the PCB of each respective player. With this hooked up to a capable oscilloscope, you can begin to see the eye pattern forming. In addition to being useful for tuning the CD player, it’s also an indication of the signal quality that the rest of the CD player has to work with. Incidentally, this is also a factor when it comes to CD-R compatibility.
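
If you want to build that same picture in software rather than relying on a scope’s eye-diagram mode, you can overlay a capture of the RF test point in windows of a couple of channel-bit periods. Below is a rough sketch of the idea; the sample rate is whatever your scope exported, and the roughly 4.32 MHz EFM channel bit rate is the only CD-specific assumption baked in.

```python
# Rough sketch: build an eye pattern by overlaying an RF capture in windows
# of a couple of EFM channel-bit periods. The bit rate value is an assumption.
import numpy as np
import matplotlib.pyplot as plt

def plot_eye(rf, sample_rate, bit_rate=4.3218e6, bits_per_window=2):
    window = int(round(sample_rate / bit_rate * bits_per_window))
    t = np.arange(window) / sample_rate * 1e6        # microseconds
    for start in range(0, len(rf) - window, window):
        plt.plot(t, rf[start:start + window], "b", alpha=0.05)
    plt.xlabel("time (µs)")
    plt.ylabel("RF level")
    plt.title("Eye pattern: a wider, cleaner opening means a better signal")
    plt.show()
```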

While the NEC player was happy with regular and CD-R discs, its laser pickup failed to get any solid signal off the clear Lorde disc. With the much newer Samsung player (see top image), the clear CD does play, but as the oscilloscope shot shows, it only barely gets a usable signal from the pickup. The very generic Coby player plays the disc too, which suggests that any reasonably modern CD player, with its generally much stronger laser and automatic gain control, ought to be able to play it.

That said, it seems that very little of the laser’s light actually makes it back to the pickup’s sensor, which means the gain probably gets cranked up to 11, and with that the pickup’s remaining lifespan will be significantly shortened. Ergo it’s probably best to just burn a CD-R copy of the album and listen to that instead.

This Week in Security: The AI Hacker, FortMajeure, and Project Zero

2025-08-15 22:00:28

One of the hot topics currently is using LLMs for security research. Poor quality reports written by LLMs have become the bane of vulnerability disclosure programs. But there is an equally interesting effort going on to put LLMs to work doing actually useful research. One such story is [Romy Haik] at ULTRARED, trying to build an AI Hacker. This isn’t an over-eager newbie naively asking an AI to find vulnerabilities, [Romy] knows what he’s doing. We know this because he tells us plainly that the LLM-driven hacker failed spectacularly.

The plan was to build a multi-LLM orchestra, with a single AI sitting at the top that maintains state through the entire process. Multiple LLMs sit below that one, deciding what to do next, exactly how to approach the problem, and actually generating commands for the security tools. Then yet another AI takes the output and figures out if the attack was successful. The tooling was assembled, and [Romy] set it loose on a few intentionally vulnerable VMs.
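
In pseudo-Python, that layout looks something like the sketch below. The llm() helper and the role prompts are hypothetical stand-ins for whatever backend and prompting ULTRARED actually used; this is just the shape of the loop, not their code.

```python
# Minimal sketch of a multi-LLM "orchestra": a planner, an operator that emits
# commands, and a judge that scores the results. All names are hypothetical.
import subprocess

def llm(role: str, prompt: str) -> str:
    """Placeholder for a call to whichever LLM backend plays each role."""
    raise NotImplementedError

def run_tool(cmd: str) -> str:
    """Run the generated command and capture its output for the judge."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def attack(target: str, max_steps: int = 20) -> list[dict]:
    state = []                               # the top-level AI's memory of the run
    for _ in range(max_steps):
        plan = llm("planner", f"Target {target}. History so far: {state}. What next?")
        cmd = llm("operator", f"Turn this plan into one shell command:\n{plan}")
        output = run_tool(cmd)
        verdict = llm("judge", f"Did this output demonstrate a vulnerability?\n{output}")
        state.append({"plan": plan, "cmd": cmd, "verdict": verdict})
    return state
```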

As we hinted at above, the results were fascinating but dismal. The LLM stack successfully found one Remote Code Execution (RCE), one SQL injection, and three Cross-Site Scripting (XSS) flaws. The whole post is sort of sneakily an advertisement for ULTRARED’s actual automated scanner, which uses more conventional methods to scan for vulnerabilities. But it makes for a useful comparison: the conventional scanner found nearly 100 vulnerabilities among the same collection of targets.

The AI did what you’d expect, finding plenty of false positives. Ask an AI to describe a vulnerability, and it will gladly do so, no real vulnerability required. But the real problem was the multitude of times that the AI stack did demonstrate a problem and failed to realize it. [Romy] has thoughts on why this attempt failed, and two points stand out. The first is that while the LLM can be creative in crafting attacks, it’s really terrible at accurately analyzing the results. The second is one of the most important things to keep in mind regarding today’s AIs: they don’t actually want to find a vulnerability. One of the marks of security researchers is the near obsession they have with finding a great score.

DARPA

Don’t take the previous story to mean that AI will never be able to do vulnerability research, or even that it’s not a useful tool right now. DARPA sponsored a competition at this year’s DEF CON, and as another security professional pointed out, the second-place winner, the Buttercup Cyber Reasoning System (CRS), is now available as an Open Source project.

This challenge was a bit different from an open-ended attack on a VM. In the DARPA challenge, the AI tools are given specific challenges and a C or Java codebase, and told to look for problems. Buttercup uses an AI-guided fuzzing approach, and one of the notable advantages with this challenge is that oftentimes a vulnerability will cause an outright crash in the program, and that’s hard to miss, even for an AI.
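
Stripped of the AI guidance, the core loop is refreshingly simple: mutate an input, run the harness, and watch for a crash. The sketch below uses a hypothetical ./harness binary and purely random mutations; a CRS like Buttercup layers LLM guidance and coverage feedback on top of the same basic idea.

```python
# Bare-bones crash-hunting loop. "./harness" is a hypothetical target binary
# that reads its test case from stdin; any signal-induced exit counts as a find.
import random
import subprocess

def mutate(data: bytes) -> bytes:
    out = bytearray(data)
    for _ in range(random.randint(1, 8)):
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

seed = b"GET /index.html HTTP/1.1\r\n\r\n"
for i in range(10_000):
    case = mutate(seed)
    result = subprocess.run(["./harness"], input=case, capture_output=True)
    if result.returncode < 0:                # negative return code == killed by a signal
        print(f"crash on iteration {i}: {case!r}")
        break
```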

Team Atlanta took first place, and has some notes on their process. Their first-place finish was almost derailed from the start, due to a path-checking guard added to comply with contest rules. The AI tools were provided fuzzing harnesses that they were not allowed to modify, and the end goal was for the AIs to actually write patches to fix the issues found. All of the challenges were delivered inside directories with ossfuzz in the path, triggering the code that protected against breaking the no-modification rule. A hasty code-hacking session right at the last moment managed to clear this, and saved their entire competition run.

FortMajeure

We have this write-up from [0x_shaq], finding a very fun authentication bypass in FortiWeb. The core problem is the lack of validation of part of the session cookie. This cookie has a couple of sections that we care about: the Era field is a single-digit integer that seems to indicate a protocol version or session type, while the Payload and AuthHash fields are the encrypted session information and the signed hash used for verification.

That Era field is only ever expected to be a 0 or a 1, but the underlying code processes the other eight possible values the same way: by accessing the nth element of an array, even if the array doesn’t actually have that many initialized elements. And one of the things that array will contain is the encryption/signing key for the session cookie. This uninitialized memory is likely to be mostly or entirely nulls, making for a very predictable session key.
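
To make that concrete, here’s a heavily simplified sketch of the bug class in Python. This is not Fortinet’s code, and HMAC-SHA256 is only a stand-in for whatever FortiWeb really uses; the point is that an unvalidated Era digit indexes past the real keys onto all-zero memory, letting an attacker sign their own cookie.

```python
# Illustrative sketch of the FortMajeure bug class, not FortiWeb's actual code.
import hashlib
import hmac

# Slots 0 and 1 hold real keys; slots 2-9 stand in for "uninitialized" memory.
key_table = [b"\x13" * 32, b"\x37" * 32] + [b"\x00" * 32] * 8

def auth_hash(era: int, payload: bytes) -> bytes:
    key = key_table[era]                     # no check that era is actually 0 or 1
    return hmac.new(key, payload, hashlib.sha256).digest()

# An attacker who sets Era=7 can guess the key is all zeroes and forge a cookie:
forged = hmac.new(b"\x00" * 32, b"admin-session", hashlib.sha256).digest()
assert forged == auth_hash(7, b"admin-session")
```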

Project Zero

Google has a couple of interesting items on their Project Zero blog. The first is from late July, and outlines a trial change to disclosure timelines. The problem here is that a 90-day disclosure deadline gives the immediate vendor plenty of time to patch an issue, but even with a 30-day extension, it’s a race for all of the downstream users to apply, test, and distribute the fix. The new idea is to add a one-week vulnerability pre-disclosure: one week after a vulnerability is found, its existence is added to the chart of upcoming releases. So if you ship Dolby’s Unified Decoder in a project or product, mark your calendar for September 25, among the other dozen or so pre-disclosed vulnerabilities.

The second item from Project Zero is a vulnerability found in Linux that could be triggered from within the Chrome renderer sandbox. At the heart of the matter is the out-of-band (OOB) byte that can be sent over Unix sockets. This is a particularly obscure feature, yet one that’s enabled by default, which is a great combination for security research.

The kernel logic for this feature could get confused when dealing with multiple of these one-byte messages, and eventually free kernel memory while a pointer is still pointing to it. Use the recv() syscall again on that socket, and the freed memory is accessed. This results in a very nice kernel memory read primitive, but only a very constrained write primitive: incrementing a single byte 0x44 bytes into the now-freed data structure. Turning this into a working exploit was challenging but doable, and mainly consisted of constructing a fake object in user-controlled memory, triggering the increment, and then using the socket again to coerce the kernel into using the fake object.
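
If you’ve never bumped into the feature, here’s what the normal, non-exploit use of out-of-band data on a Unix stream socket looks like. Note that this needs a reasonably recent kernel, since AF_UNIX sockets only gained MSG_OOB support a few years ago.

```python
# Demonstrates the obscure feature itself (not the use-after-free): one
# "urgent" out-of-band byte riding alongside normal stream data.
import socket

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
a.send(b"hello")
a.send(b"!", socket.MSG_OOB)                 # the out-of-band byte
print(b.recv(5))                             # b'hello' -- the in-band data
print(b.recv(1, socket.MSG_OOB))             # b'!'     -- the OOB byte, read separately
a.close()
b.close()
```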

Bits and Bytes

Cymulate has the story of a Microsoft NTLM patch that wasn’t quite enough. The original problem was that a Windows machine could be convinced to connect to a remote server over NTLM to retrieve a .ico file. The same bug can be triggered by creating a shortcut that implies the .ico is embedded inside the target binary itself, and putting that shortcut on a remote SMB share. It’s particularly bad because this one will access the server, and leak the NTLM hash, just by displaying the icon on the desktop.

Xerox FreeFlow Core had a pair of vulnerabilities, the more serious of which could enable unauthenticated RCE. The first is an XML External Entity (XXE) injection issue, where a user request could result in the server fetching remote content while processing the request. The more serious one is a simple file upload with path traversal, making for an easy webshell dropper.
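
As a refresher on what the XXE class of bug looks like (a generic illustration, not FreeFlow’s actual request format), a parser with external general entities enabled will happily fetch whatever a SYSTEM entity points at, be that a local file or a remote URL:

```python
# Generic XXE demonstration using Python's xml.sax, which is safe by default
# but resolves external entities once the feature is deliberately enabled.
import io
import xml.sax
from xml.sax.handler import feature_external_ges

payload = b"""<?xml version="1.0"?>
<!DOCTYPE job [ <!ENTITY xxe SYSTEM "file:///etc/hostname"> ]>
<job><name>&xxe;</name></job>"""

class TextDump(xml.sax.ContentHandler):
    def characters(self, content):
        print(content, end="")

parser = xml.sax.make_parser()
parser.setContentHandler(TextDump())
parser.setFeature(feature_external_ges, True)   # deliberately unsafe, for the demo
parser.parse(io.BytesIO(payload))
```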

Claroty’s Team82 dug into the Axis Communications protocol for controlling security cameras, and found some interesting items. The Axis.Remoting protocol uses mutual TLS, which is good. But those are self-signed certificates that are never validated, allowing for a trivial man-in-the-middle attack. The most serious issue was a JSON deserialization vulnerability, allowing for RCE on the service itself. Patches are available, and are particularly important for Axis systems exposed to the open Internet.

Teletext Around the World, Still

2025-08-15 19:00:36

When you mention Teletext or Videotex, you probably think of the 1970s British system, the well-known system in France, or the short-lived US attempt to launch the service. Before the Internet, there were all kinds of crazy ways to deliver customized information into people’s homes. Old-fashioned? Turns out Teletext is alive and well in many parts of the world, and [text-mode] has the story of both the past and the present with a global perspective.

The whole thing grew out of the desire to send closed caption text. In 1971, Philips developed a way to do that by using the vertical blanking interval that isn’t visible on a TV. Of course, there needed to be a standard, and since standards are such a good thing, the UK developed three different ones.

The TVs of the time weren’t exactly the high-resolution devices we think of these days, so the 1976 Level 1 allowed for regular (but Latin-only) characters and an alternate set of blocky graphics you could show on an expansive 40×24 grid in glorious color, as long as you think seven colors is glorious. Level 1.5 added characters the rest of the world might want, and this so-called “World System Teletext” is still the basis of many systems today. It was better, but still couldn’t handle the 134 characters in Vietnamese.

Meanwhile, the French also wanted in on the action and developed Antiope, which had more capabilities. The United States would, at least partially, adopt this standard as well. In fact, the US fragmented between both systems, along with a third system out of Canada, until they converged on AT&T’s PLP system, renamed the North American Presentation Level Protocol Syntax, or NAPLPS. The post makes the case that NAPLPS was built on both the Canadian and French systems.

That was in 1986, and the Internet was getting ready to turn all of these developments, like the $200 million Canadian system, into a roaring dumpster fire. The French even abandoned their homegrown system in favor of the World System Teletext. The post says as of 2024, at least 15 countries still maintain teletext.

So that was the West. What about behind the Iron Curtain, the Middle East, and in Asia? Well, that’s the last part of the post, and you should definitely check it out.

Japan’s version of teletext, still in use as of the mid-1990s, was one of the most advanced.

If you are interested in the underlying technology, teletext data lives in the vertical blanking interval between frames on an analog TV system. Data is organized into numbered pages. If you requested a page, the system would either retrieve it from a buffer or wait for it to appear in the video signal. Some systems send a page at a time, while others send bits of a page on each field. In theory, the three-digit page number can range from 0x100 to 0x8FF, although in practice, too many pages slow down the system, and normal users can’t key in hex numbers.

For PAL, for example, the data resides in lines 6 through 22 of one field and lines 318 through 335 of the other. Systems can elect to use fewer lines. A black signal is a zero, while a 66% white signal is a one, and the data uses NRZ line coding. There is a framing code to identify where the data starts. Other systems have slight variations, but the overall bit rate is around 5 to 6 Mbit/s. The effective character rate is slightly lower due to error correction and other overhead.
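
A toy bit-slicer makes the scheme concrete: threshold the samples of one VBI line into bits, hunt for the framing code, and then read out the payload bytes. Treat the 0xE4 framing-code value and the MSB-first packing below as illustrative assumptions rather than a faithful decoder.

```python
# Toy slicer for one sampled teletext line: black ~ 0, ~66% white ~ 1, NRZ.
# The framing code value and bit order are assumptions made for illustration.
FRAMING = [1, 1, 1, 0, 0, 1, 0, 0]                  # 0xE4, MSB first

def slice_line(samples, samples_per_bit, threshold):
    n_bits = int(len(samples) / samples_per_bit)
    bits = [1 if samples[int(i * samples_per_bit)] > threshold else 0
            for i in range(n_bits)]
    pack = lambda chunk: sum(b << (7 - k) for k, b in enumerate(chunk))
    for i in range(n_bits - (1 + 42) * 8):           # framing code plus 42 payload bytes
        if bits[i:i + 8] == FRAMING:
            payload = bits[i + 8:i + 8 + 42 * 8]
            return bytes(pack(payload[j:j + 8]) for j in range(0, 42 * 8, 8))
    return None                                      # no framing code found
```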

Honestly, we thought this was all ancient history. You have to wonder which country will be the last one standing as the number of Teletext systems continues to dwindle. Of course, we still have closed captions, but with digital television, it really isn’t the same thing. Can Teletext run Doom? Apparently, yes, if you stretch your definition of success a bit.

Open Source Lithium-Titanate Battery Management System

2025-08-15 16:00:58

Lithium-titanate (LTO) is an interesting battery chemistry that is akin to Li-ion but uses Li2TiO3 nanocrystals instead of carbon for the anode. This makes LTO cells capable of much faster charging and with better stability characteristics, albeit at the cost of lower energy density. Much like LiFePO4 cells, this makes them interesting for a range of applications where the highest possible energy density isn’t the biggest concern, while providing even more stability and long-term safety.

That said, LTO is uncommon enough that finding a battery management system (BMS) can be a bit of a pain. This is where [Vlastimil Slintak]’s open source LTO BMS project may come in handy; it targets single-cell (1S) configurations with the typical LTO cell voltage of around 1.7 V to 2.8 V, with three cells in parallel (1S3P). This particular BMS was designed for low-power applications like Meshtastic nodes, as explained in the accompanying blog post, which also covers the entire development and final design in detail.

The BMS design features all the stuff that you’d hope is on there, like under-voltage, over-voltage, and over-current protection, with an ATtiny824 MCU providing the brains. Up to 1 A of discharge and charge current is supported, for about 2.4 watts at the average cell voltage. With the three 1,300 mAh LTO cells in the demonstrated pack you get over 9 Wh of capacity, and the connected hardware can query the BMS over I2C for a range of statistics.
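
Talking to the BMS from, say, a Raspberry Pi could look something like the snippet below. Fair warning: the bus number, I2C address, and register layout here are made-up placeholders; the real register map lives in [Vlastimil]’s firmware and documentation.

```python
# Hedged sketch of polling a BMS over I2C with smbus2. Address and registers
# are hypothetical placeholders, not the project's documented register map.
from smbus2 import SMBus

BMS_ADDR = 0x30           # hypothetical 7-bit address
REG_VOLTAGE_MV = 0x00     # hypothetical 16-bit register, cell voltage in mV
REG_CURRENT_MA = 0x02     # hypothetical 16-bit register, current in mA

with SMBus(1) as bus:     # bus 1 on a Raspberry Pi
    millivolts = bus.read_word_data(BMS_ADDR, REG_VOLTAGE_MV)
    milliamps = bus.read_word_data(BMS_ADDR, REG_CURRENT_MA)
    print(f"cell voltage: {millivolts} mV, current: {milliamps} mA")
```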

Thanks to [Marcel] for the tip.

Rediscovering Microsoft’s Oddball Music Generator From The 1990s

2025-08-15 13:00:53

There has been a huge proliferation in AI music creation tools of late, and a corresponding uptick in the number of AI artists appearing on streaming services. Well before the modern neural network revolution, though, there was an earlier tool in this same vein. [harke] tells us all about Microsoft Music Producer 1.0, a forgotten relic from the 1990s.

The software wasn’t ever marketed openly. Instead, it was a part of Microsoft Visual InterDev, a web development package from 1997. It allowed the user to select a style, a personality, and a band to play the song, along with details like key, tempo, and the “shape” of the composition. It would then go ahead and algorithmically generate the music using MIDI instruments and in-built synthesized sounds.

As [harke] demonstrates, there is a huge number of genres to choose from. Pick one, and you’ll most likely find it sounds nothing like the contemporary genre it’s supposed to be recreating. The more gamey genres, though, like “Adventure” or “Chase”, actually sound pretty okay. The moods are hilariously specific, too: you can have a “noble” song, or a “striving” or “serious” one. [harke] also demonstrates building a full song with the “7AM Illusion” preset, exporting the MIDI, and then adding her own instruments and vocals in a DAW to fill it out. The result is what you’d expect from a composition relying on the Microsoft GS Wavetable synth.

Microsoft might not have cornered the generative music market in the 1990s, but generative AI is making huge waves in the industry today.

Calibration, Good Old Calibration

2025-08-15 10:00:20

Do you calibrate your digital meters? Most of us don’t have the gear to do a proper calibration, but [Mike Wyatt] shares his simple way to calibrate his DMMs using a precision resistor coupled with a thermistor. The idea is to use a standard dual banana plug along with a 3D-printed housing to hold the simple electronics.

The calibration element is a precision resistor, but the assembly also includes a 1% thermistor. In addition to the banana plugs, there are test points to access the resistor and another pair for the thermistor.

In use, you plug the device into the unit you want to test. Then you clip a different temperature sensor to the integrated thermistor. Because the thermistor sits in close proximity to the meter’s input, it can show the difference between the ambient temperature and the temperature at the meter’s terminals. [Mike] says bench meters get warmer than hand-held units.
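
Turning the thermistor’s resistance into a temperature is the usual Beta-parameter exercise. The sketch below assumes a 10 kΩ, Beta 3950 part purely for illustration; swap in the datasheet values for whatever 1% thermistor is actually on the board.

```python
# Beta-parameter conversion from NTC thermistor resistance to temperature.
# The 10 k / 3950 values are assumptions; use the actual part's datasheet.
import math

def thermistor_temp_c(r_ohms, r0=10_000.0, t0_c=25.0, beta=3950.0):
    t0_k = t0_c + 273.15
    t_k = 1.0 / (1.0 / t0_k + math.log(r_ohms / r0) / beta)
    return t_k - 273.15

print(thermistor_temp_c(9_500))   # a 10 k NTC reading 9.5 k is a bit above 25 °C
```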

This is, of course, not a perfect setup if you are a real metrology stickler. But it can be helpful. [Mike] suggests the precision resistor be over 100 ohms since anything less really isn’t a candidate for a precision measurement with two wires. Debating over calibration? We do that, too.