Daniel Mangum

I am a principal software engineer at Upbound.

RSS preview of Blog of Daniel Mangum

Make it Possible, Then Make it Normal

2025-11-09 15:41:34

This past Saturday I went out for a 12 mile run with a few friends. Absent the beautiful fall colors on display this time of year in North Carolina, there was nothing especially notable about this particular run. However, the fact that it was not notable is notable itself. Two years ago I set a goal to run 2,023 miles in 2023. After reaching that goal with time to spare, I revised the goal to 3,000 miles, and just barely reached it.

Interesting SPI Routing with iCE40 FPGAs

2025-11-07 15:41:34

A few weeks ago I posted about how much fun I was having with the Fomu FPGA development board while travelling. This project from Tim ‘mithro’ Ansell and Sean ‘xobs’ Cross is not new, but remains a favorite of mine because of how portable it is — the entire board can fit in your USB port! The Fomu includes a Lattice Semiconductor iCE40 UltraPlus 5K, which has been a popular FPGA option over the past few years due to its reverse-engineered bitstream format and the ability to program it with a fully open source toolchain (see updated repository here).

Using a Laptop as an HDMI Monitor for an SBC

2025-10-09 15:41:34

Though I spend the majority of my time working with microcontroller class devices, I also have an embarrassingly robust collection of single board computers (SBCs), including a few different Raspberry Pi models, the BeagleV Starlight Beta (RIP), and more. Typically when setting up these devices for whatever automation task I have planned for them, I’ll use “headless mode” and configure initial user and network credentials while writing the operating system to the storage device using a tool like Raspberry Pi’s Imager.

How AI on Microcontrollers Actually Works: Registering Operators

2025-07-14 15:41:34

We started this series with a look at operators and kernels, the “instructions” used by models and the implementation of those instructions on the available hardware. We then explored the computation graph, which defines the sequence of operators for a given model, and examined how different model formats either include the explicit computation graph in the distributed file or defer it to the inference application. With tflite-micro and the…
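As a rough illustration of what registering operators with tflite-micro can look like, here is a minimal sketch using MicroMutableOpResolver; the model symbol g_model_data, the arena size, and the particular operators are placeholder assumptions rather than anything taken from the post, and the constructor shown matches recent tflite-micro versions.

```cpp
// Minimal sketch: register only the operators a model actually uses.
// g_model_data, the arena size, and the op list are placeholders.
#include <cstddef>
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];  // .tflite flatbuffer in flash

constexpr size_t kArenaSize = 16 * 1024;    // scratch memory for tensors
alignas(16) static uint8_t tensor_arena[kArenaSize];

int RunOnce() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // The resolver maps each operator referenced by the graph to a kernel.
  // Registering only what the model needs keeps unused kernels out of flash.
  tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                       kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return -1;
  }

  // ... fill interpreter.input(0), call interpreter.Invoke(),
  // then read interpreter.output(0) ...
  return 0;
}
```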

How AI on Microcontrollers Actually Works: The Computation Graph

2025-07-07 15:41:34

In our last post we explored operators and kernels in TensorFlow Lite, and how the ability to swap out kernels depending on the hardware capabilities available can lead to dramatic performance improvements when performing inference. We made an analogy of operators to instruction set architectures (ISAs), and kernels to the hardware implementation of instructions in a processor. Just like in traditional computer programs, the sequence of instructions in a model needs to be encoded and distributed in some type of file, such as an Executable and Linkable Format (ELF) file on Unix-based systems or a Portable Executable (PE) file on Windows.
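As a rough sketch of what that encoded sequence looks like in a .tflite file, the following walks the operators of a model's first subgraph using the accessors generated from the TFLite flatbuffer schema; the function name and the single-subgraph assumption are illustrative, not taken from the post.

```cpp
// Sketch: walk the computation graph serialized in a .tflite flatbuffer.
// Assumes the flatbuffers accessors generated from the TFLite schema and a
// model with a single subgraph.
#include <cstdio>

#include "tensorflow/lite/schema/schema_generated.h"

void PrintGraph(const void* model_data) {
  const tflite::Model* model = tflite::GetModel(model_data);
  const tflite::SubGraph* graph = model->subgraphs()->Get(0);

  // Each Operator entry names an opcode plus the tensors it reads and
  // writes; the ordered list is the model's "instruction sequence".
  for (const tflite::Operator* op : *graph->operators()) {
    const tflite::OperatorCode* code =
        model->operator_codes()->Get(op->opcode_index());
    std::printf("%s: %u inputs, %u outputs\n",
                tflite::EnumNameBuiltinOperator(code->builtin_code()),
                static_cast<unsigned>(op->inputs()->size()),
                static_cast<unsigned>(op->outputs()->size()));
  }
}
```

Running something like this against a .tflite file prints, in order, the operator sequence that the interpreter will later execute.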

How AI on Microcontrollers Actually Works: Operators and Kernels

2025-06-30 15:41:34

The buzz around “edge AI”, which means something slightly different to almost everyone you talk to, is well past fever pitch. Regardless of what edge AI means to you, the one commonality is typically that the hardware on which inference is being performed is constrained in one or more dimensions, whether it be compute, memory, or network bandwidth. Perhaps the most constrained of these platforms are microcontrollers. I have found that, while there is much discourse around “running AI”…