2026-03-26 18:09:07
Software and digital security should rely on verification rather than trust. I want to strongly encourage more users and consumers of software to verify curl. Ideally, you should also require at least this level of verifiability from the other software components in your dependency chains.
Every source code commit and every software release carries risks. Other risks exist entirely independent of those.
Some of the things a widely used project can fall victim to include…
Should any of these happen, they could of course also happen in combination and in rapid sequence.
curl, mostly in the shape of libcurl, runs in tens of billions of devices. It is clearly one of the most widely used software components in the world.
People ask me how I sleep at night given the vast amount of nasty things that could occur virtually at any point.
There is only one way to combat this kind of insomnia: do everything possible and do it openly and transparently. Make it a little better this week than it was last week. Do software engineering right. Provide means for everyone to verify what we do and what we ship. Iterate, iterate, iterate.
If even just a few users verify that they got a curl release signed by the curl release manager, and verify that the release contents are untainted and contain only bits that originate from the git repository, then we are in a pretty good state. We need enough independent outside users doing this that one of them can blow the whistle if anything at any point looks wrong.
I can’t tell you who these users are, or in fact if they actually exist, as they are and must be completely independent from me and from the curl project. We do however provide all the means and we make it easy for such users to do this verification.
The few outsiders who verify that nothing was tampered with in the releases can only validate that the releases are made from what exists in git. It is our own job to make sure that what exists in git is the real thing. The secure and safe curl.
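As a concrete example of the kind of check such a user could do (a minimal sketch in Python, not the project’s official procedure): extract the release tarball into one directory and check out the matching git tag into another, then list files that exist only in the tarball. Note that real curl release tarballs legitimately contain some generated files that are not in git, so a serious check needs an allowlist for those and should compare file contents too.

```python
import os

def tree_files(root):
    # Collect every file path under root, expressed relative to root.
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            found.add(os.path.relpath(os.path.join(dirpath, name), root))
    return found

def extra_files(tarball_dir, git_dir):
    # Files present in the extracted release tarball but absent from the
    # checked-out git tag: candidates for closer inspection.
    return sorted(tree_files(tarball_dir) - tree_files(git_dir))
```

Anything this turns up that is not a known generated file would be exactly the kind of discrepancy worth blowing the whistle about.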
We must do a lot to make sure that whatever we land in git is okay. Here’s a list of activities we do.
- -Werror, which converts warnings into errors and fails the builds.
- zizmor and other code analyzer tools on the CI job config scripts, to reduce the risk of us running or using insecure CI jobs.

All this is done in the open with full transparency and full accountability. Anyone can follow along and verify that we follow this.
Require this for all your dependencies.
We plan for the event that someone actually wants to and tries to hurt us and our users really badly. Or when that happens by mistake. A successful attack on curl could in theory reach widely.
This is not paranoia. This setup allows us to sleep well at night.
This is why users still rely on curl after thirty years in the making.
I recently added a verify page to the curl website explaining some of what I write about in this post.
2026-03-25 16:05:41
I hope I don’t have to spell it out but I will do it anyway: in these cases I don’t know anything about their products and I cannot help them. Quite often I first need to search around just to figure out what the product the person is asking about even is or does.
Over the years I have collected such emails that end up in my inbox. Out of those that I have received, I have cherry-picked my favorites: the best, the weirdest, the most offensive and the most confused ones, and I put them up online. A few of them have also triggered separate blog posts of their own in the past.
They help us remember that the world is complicated and hard to understand.
Today, my online collection reached the magical number: 100 emails. The first one in the stash was received in 2009 and the latest arrived just the other day. I expect I’ll keep adding occasional new ones going forward as well.
Enjoy!
2026-03-22 19:41:09
The NTLM authentication method was always a beast.
It is a proprietary protocol designed by Microsoft which was reverse engineered a long time ago. That effort resulted in the online documentation that I based the curl implementation on back in 2003. I then also wrote the NTLM code for wget while at it.
NTLM broke with the HTTP paradigm: it is made to authenticate the connection instead of the request, which is what HTTP authentication is supposed to do and what all the other methods do. This might sound like a tiny and insignificant detail, but it has a major impact on every HTTP implementation. Indirectly it is also the cause of quite a few security related issues in HTTP code, because NTLM needs many special exceptions and extra unique treatments.
curl has recorded no less than seven past security vulnerabilities in NTLM related code! While that may not be only NTLM’s fault, it certainly does not help.
The connection-based concept also makes the method incompatible with HTTP/2 and HTTP/3. NTLM requires services to stick to HTTP/1.
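To illustrate what connection-level authentication means in practice, here is a toy model (not real NTLM code; the actual messages are base64-encoded binary structures, and the strings here are mere placeholders for the protocol’s type-1/2/3 messages). Once the handshake completes, the connection itself is trusted, and later requests on it need no credentials at all – the property that collides with request-oriented HTTP.

```python
class NtlmConnection:
    # Tracks authentication state per *connection*, not per request.
    def __init__(self):
        self.state = "fresh"

    def request(self, auth_header=None):
        if self.state == "authenticated":
            # Key point: no Authorization header needed anymore;
            # the connection itself is what got authenticated.
            return 200
        if self.state == "fresh" and auth_header == "NTLM <type1>":
            self.state = "challenged"
            return 401  # server answers with a type-2 challenge
        if self.state == "challenged" and auth_header == "NTLM <type3>":
            self.state = "authenticated"
            return 200
        return 401

conn = NtlmConnection()
assert conn.request() == 401                # anonymous request: rejected
assert conn.request("NTLM <type1>") == 401  # negotiate: get challenged
assert conn.request("NTLM <type3>") == 200  # authenticate the connection
assert conn.request() == 200                # no credentials, still trusted!
```

That last line is the problem: on a multiplexed HTTP/2 or HTTP/3 connection, where requests from different logical contexts share one connection, "this connection is authenticated" is not a meaningful statement.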
NTLM (v1) uses super weak cryptographic algorithms (DES and MD5), which makes it a bad choice even when disregarding the other reasons.
We are slowly deprecating NTLM in curl, but we are starting out by making it opt-in. Starting in curl 8.20.0, NTLM is disabled by default in the build unless specifically enabled.
Microsoft themselves have deprecated NTLM already. The wget project looks like it is about to make their NTLM support opt-in.
curl only supports SMB version 1. That protocol uses NTLM for authentication, and NTLM is equally bad there. Without NTLM enabled in the build, SMB support also gets disabled.
But also: SMBv1 is in itself a weak protocol that is barely used by curl users, so this protocol is also opt-in starting in curl 8.20.0. You need to explicitly enable it in the build to get it added.
I want to emphasize that we have not removed support for these ancient protocols, we just strongly discourage using them. I believe this is a first step down the ladder that will eventually lead to their complete removal.
2026-03-21 22:06:12
In May 2010 we merged support for the RTMP protocol suite into curl, in our desire to support the world’s internet transfer protocols.
The protocol is an example of the spirit of an earlier web: back when we still thought we would have different transfer protocols for different purposes. Before HTTP(S) truly became the one protocol that rules them all.
RTMP was done by Adobe, used by Flash applications etc. Remember those? RTMP is an ugly proprietary protocol that simply was never used much in Open Source.
The common Open Source implementation of this protocol is done in the rtmpdump project. In that project they produce a library, librtmp, which curl has been using all these years to handle the actual binary bits over the wire. Build curl to use librtmp and it can transfer RTMP:// URLs for you.
In our constant pursuit to improve curl, to find spots that are badly tested and to identify areas that could be weak from a security and functionality stand-point, our support of RTMP was singled out.
Here I would like to stress that I’m not suggesting that this is the only area in need of attention or improvement, but this was one of them.
As I looked into the RTMP situation I realized that we had no (zero!) tests of our own that actually verify RTMP with curl. It could thus easily break when we refactor things, something we do quite regularly. The refactoring, I mean (but admittedly also the breaking). I then took a look upstream into the librtmp code and associated project to investigate what exactly we are leaning on here; what we implicitly tell our users they can use.
I quickly discovered that the librtmp project does not have a single test either. They have not even done releases for many years, which means that most Linux distros have packaged up their code straight from their repositories. (The project insists that there is nothing to release, which seems contradictory.)
Are there perhaps any librtmp tests in the pipe? There had not been a single commit in the project within the last twelve months, and when I asked one of their leading team members about the situation, it was made clear to me that there are no tests in the pipe for the foreseeable future either.
In November 2025 I explicitly asked for RTMP users on the curl-library mailing list, and one person spoke up who uses it for testing.
In the 2025 user survey, 2.2% of the respondents said they had used RTMP within the last year.
The combination of few users and untested code is a recipe for removal from curl unless someone steps up and improves the situation. We therefore announced that we would remove RTMP support six months into the future unless someone cried out and stepped up to improve the RTMP situation.
We repeated this we-are-going-to-drop-RTMP message in every release note and release video since then, to make sure we do our best to reach anyone actually still using RTMP and caring about it.
If anyone would come out of the shadows now and beg for its return, we can always discuss it – but that will of course require work and adding test cases before it would be considered.
Can we remove support for a protocol and still claim API and ABI backwards compatibility with a clean conscience?
This is the first time in modern days we remove support for a URL scheme and we do this without bumping the SONAME. We do not consider this an incompatibility primarily because no one will notice. It is only a break if it actually breaks something.
(RTMP in curl actually could be done using six separate URL schemes, all of which are no longer supported: rtmp, rtmpe, rtmps, rtmpt, rtmpte, rtmpts.)
The official number of URL schemes supported by curl is now down to 27: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, MQTTS, POP3, POP3S, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.
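One way applications can stay robust across such scheme removals is to check at runtime which protocols their curl supports before relying on them. A small sketch that parses the Protocols: line printed by curl --version (the sample output used in testing is illustrative, not a real build’s output):

```python
def supported_protocols(version_output):
    # `curl --version` prints a line of the form:
    #   Protocols: dict file ftp ftps http https ...
    # Return those scheme names as a set (empty if the line is missing).
    for line in version_output.splitlines():
        if line.startswith("Protocols:"):
            return set(line.split()[1:])
    return set()
```

An application could feed the output of curl --version to this function and refuse rtmp:// URLs up front when "rtmp" is absent, rather than failing mid-transfer.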
The commit that actually removed RTMP support has been merged. We had the protocol supported for almost sixteen years. The first curl release without RTMP support will be 8.20.0, planned to ship on April 29, 2026.
2026-03-15 18:42:45
In the spring of 2020 I decided to finally do something about the lack of visualizations for how the curl project is performing, development wise.
What does the lines-of-code growth look like? How many command line options have we had over time, and how many people have done more than 10 commits per year over time?
I wanted to have something that visually would show me how the project is doing, from different angles, viewpoints and probes. In my mind it would be something like a complicated medical device monitoring a patient that a competent doctor could take a glance at and assess the state of the patient’s health and welfare. This patient is curl, and the doctors would be fellow developers like myself.
GitHub offers some rudimentary graphs but I found (and still find) them far too limited. We also ran gitstats on the repository so there were some basic graphs to get ideas from.
I looked around to see what existing frameworks and setups there were that I could base this on, as I was convinced I would have to do quite some customizing myself. Nothing I saw was close enough to what I was looking for, so I decided to make my own, at least for a start.
I decided to generate static images for this rather than add some JavaScript framework I don’t know how to use to the website. Static daily images are excellent for both load speed and CDN caching, and since we already deny running JavaScript on the site, that saved me from having to work against that policy. SVG images are vector based and should scale nicely.
SVG is also a better format from a download size perspective, as PNG almost always generates much larger files for this kind of image.
When this started, I imagined that it would be a small number of graphs mostly showing timelines with plots growing from lower left to upper right. It would turn out to be a little naive.
I knew some basics about gnuplot from before as I had seen images and graphs generated by others in the past. Since gitstats already used it I decided to just dive in deeper and use this. To learn it.
gnuplot is a 40 year old (!) command line tool that can generate advanced graphs and data visualizations. It is a powerful tool, which also means that not everything is simple to understand and use at once, but there is almost nothing in terms of graphs, plots and curves that it cannot handle in one way or another.
I happened to meet Lee Phillips online who graciously gave me a PDF version of his book aptly named gnuplot. That really helped!
I decided that for every graph I want to generate, I first gather and format the data with one script, then render an image in a separate independent step using gnuplot. It made it easy to work on them in separate steps and also subsequently tune them individually and to make it easy to view the data behind every graph if I ever think there’s a problem in one etc.
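To make that split concrete, here is a minimal sketch of such a two-step pipeline (not the actual curl dashboard scripts; the metric and filenames are made up for illustration): one function formats the gathered data, another writes the gnuplot recipe that renders it into a static SVG.

```python
def write_data(path, rows):
    # Step 1: format the gathered data as one "x y" pair per line,
    # which is gnuplot's default whitespace-separated input format.
    with open(path, "w") as f:
        for year, count in rows:
            f.write(f"{year} {count}\n")

def write_plot_script(path, datafile, svgfile):
    # Step 2: an independent gnuplot recipe that renders the data
    # file into a static SVG (run it with: gnuplot <path>).
    script = (
        "set terminal svg size 800,480\n"
        f"set output '{svgfile}'\n"
        "set title 'commits per year'\n"
        "set xlabel 'year'\n"
        f"plot '{datafile}' using 1:2 with linespoints notitle\n"
    )
    with open(path, "w") as f:
        f.write(script)
```

Running gnuplot on the generated script then produces the SVG in a separate, independent step, which is what makes it easy to inspect the raw data whenever a graph looks suspicious.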
It took me about two weeks of on and off working in the background to get a first set of graphs visualizing curl development status.
I then created the glue scripting necessary to add a first dashboard with the existing graphs to the curl website. Static HTML showing static SVG images.
On March 20, 2020 the first version of the dashboard showed no less than twenty separate graphs. I refer to “a graph” as a separate image, possibly showing more than one plot/line/curve. That first dashboard version had twenty graphs using 23 individual plots.
Since then, we display daily updated graphs there.
All data used for populating the graphs is open and available, and I happily use whatever is available:
Open and transparent as always.
Every once in a while since then I think of something else in the project – the code, development, the git history, community, emails etc – that could be fun or interesting to visualize, and I add a graph or two more to the dashboard. Six years after its creation, the initial twenty images have grown to one hundred graphs including almost 300 individual plots.
Most of them show something relevant, while a few of them are in the more silly and fun category. It’s a mix.
The 100th graph was added on March 15, 2026 when I brought back the “vulnerable releases” graph (appearing on the site on March 16 for the first time). It shows the number of known vulnerabilities each past release has. I removed it previously because it became unreadable, but in this new edition I made it only show the label for every 4th release which makes it slightly less crowded than otherwise.

This day we also introduce a new 8-column display mode.

Many of the graphs are internal and curl specific of course. The scripts for this, and the entire dashboard, remain written specifically for curl and curl’s circumstances and data. They would need some massaging and tweaking in order to work for someone else.
All the scripts are of course open and available for everyone.
I used to also offer all the CSV files generated to render the graphs in an easily accessible form on the site, but this turned out to be work done for virtually no audience, so I removed it again. If you replace the .svg extension with .csv, you can still get most of the data – if you know.
The graphs and illustrations are not only silly and fun. They also help us see development from different angles and views, and they help us draw conclusions or at least try to. As an established and old project that makes an effort to do right, some of what we learn from this curl data might be possible to learn from and use even in other projects. Maybe even use as basis when we decide what to do next.
I personally have used these graphs in countless blog posts, Mastodon threads and public curl presentations. They help communicate curl development progress.
On Mastodon I keep joking about being a graphaholic, and often when I have presented yet another graph added to the collection, someone has asked the almost mandatory question: how about a graph over the number of graphs on the dashboard?
Early on I wrote such a script as well, to immediately fulfill that request. On March 14, 2026, I decided to add it as a permanent graph on the dashboard.

The next-level joke (although some would argue that this is not fun anymore) is then to ask me for a graph showing the number of graphs for graphs. As I aim to please, I have that as well. Although this is not on the dashboard:

I am certain I (we?) will add more graphs over time. If you have good ideas for what source code or development details we should and could illustrate, please let me know.
The git repository: https://github.com/curl/stats/
Daily updated curl dashboard: https://curl.se/dashboard.html
curl gitstats: https://curl.se/gitstats/
2026-03-13 06:05:55
Background: nuget.org is a Microsoft owned and run service that allows users to package software and upload it to nuget so that other users can download it. It is targeted at .NET developers but there is really no filter on what you can offer through their service.
Three years ago I reported on how nuget was hosting and providing ancient, outdated and insecure curl packages. Random people download a curl tarball, build curl and then upload it to nuget, and nuget then offers those curl builds to the world – forever.
To properly celebrate the three year anniversary of that blog post, I went back to nuget.org, entered curl into the search bar and took a look at the results.
I immediately found at least seven different packages where people were providing severely outdated curl versions. The most popular of those, rmt_curl, reports that it has been downloaded almost 100,000 times over the years and has still been downloaded almost 1,000 times per week over the last few weeks. It is still happening. The packages I reported three years ago are gone, but now there is a new set of equally bad ones. No lessons learned.
rmt_curl claims to provide curl 7.51.0, a version we shipped in November 2016. Right now it has 64 known vulnerabilities and we have done more than 9,000 documented bugfixes since then. No one in their right mind should ever download or use this version.
Conclusion: the state of nuget is just as sad now as it was three years ago, and this triggered another one of those someone is wrong on the internet moments for me. I felt I should do my duty and tell them. Again. Surely they will act this time! Surely they think of the security of their users?
The entire nuget concept is set up and destined to end up like this: random users on the internet put something together, upload it to nuget, and then the rest of the world downloads and uses those things – trusting that whatever the description says is accurate and well-meaning. Maybe there are some additional security scans done in the background, but I don’t see how anyone can know that the packages don’t contain any backdoors, trojans or other nasty deliberate attacks.
And whatever has been uploaded once seems to then be offered in perpetuity.
Like three years ago, I listed a bunch of severely outdated curl packages in my report. nuget says I can email them a report, but that just got me a bounce back saying they don’t accept email reports anymore. (Sigh, and yes, I reported that as a separate issue.)
I was instead pointed over to the generic Microsoft security reporting page where there is not even any drop-down selection to use for “nuget” so I picked “.NET” instead when I submitted my report.
Almost identically to three years ago, my report was closed within less than 48 hours. It’s not a nuget problem, they say.
Thank you again for submitting this report to the Microsoft Security Response Center (MSRC).
After careful investigation, this case has been assessed as not a vulnerability and does not meet Microsoft’s bar for immediate servicing. None of these packages are Microsoft owned, you will need to reach out directly to the owners to get patched versions published. Developers are responsible for removing their own packages or updating the dependencies.
In other words: they don’t think it is nuget’s responsibility to keep the packages they host secure and safe for their users. I should instead report these things individually to every outdated package provider, who, if they cared, would have removed or updated these packages many years ago already.
Also, that would imply a never-ending whack-a-mole game for me, since people obviously keep doing this. I think I have better things to do with my life.
In the cases I reported, the packages seem to be of the kind that once had the attention and energy of someone who kept them up to date with curl releases for a while, then stopped, and since then the packages on nuget have just collected dust and gone stale.
Still, apparently users keep finding and downloading them, even if maybe not at terribly high numbers.
Thousands of fooled users per week is thousands too many.
The uploading users are perfectly allowed to do this, legally, and nuget is perfectly allowed to host these packages as per the curl license.
I don’t have a definite answer to what exactly nuget should do to address this problem once and for all, but as long as they allow packages uploaded nine years ago to still get downloaded today, it seems they are asking for this. They contribute to users getting tricked into downloading and using insecure software, and they are indifferent to it.
A rare few applications that were uploaded nine years ago might actually still be okay but those are extremely rare exceptions.
The last time I reported this nuget problem nothing happened on the issue until I tweeted about it. This time around, a well-known Microsoft developer (who shall remain nameless here) saw my Mastodon post about this topic when mirrored over to Bluesky and pushed for the case internally – but not even that helped.
The nuget management thinks this is okay.
If I were into puns I would probably call them chicken nuget for their unwillingness to fix this. Maybe just closing their eyes and pretending the problem doesn’t exist will make it go away?
Absolutely no one should use nuget.