2025-08-15 14:28:55
Seven years ago I wrote about how a hundred million cars were running curl. As I brought up that blog post in a discussion recently, I came to reflect on how the world might have changed since. Is curl perhaps used in more cars now?
Yes it is.
With the help of friendly people on Mastodon, and a little bit of Googling, the current set of car brands known to have cars running curl contains 47 names. Most of the world’s top brands:
Acura, Alfa Romeo, Audi, Baojun, Bentley, BMW, Buick, Cadillac, Chevrolet, Chrysler, Citroen, Dacia, Dodge, DS, Fiat, Ford, GMC, Holden, Honda, Hyundai, Infiniti, Jeep, Kia, Lamborghini, Lexus, Lincoln, Mazda, Mercedes, Mini, Nissan, Opel, Peugeot, Polestar, Porsche, RAM, Renault, Rolls Royce, Seat, Skoda, Smart, Subaru, Suzuki, Tesla, Toyota, Vauxhall, Volkswagen, Volvo
I think it is safe to claim that curl now runs in several hundred million cars.
This is based on curl or curl’s copyright being listed in documentation and/or shown on screen on the car’s infotainment system.
The manufacturers need to provide that information per the curl license. Even if some of course still don’t.
For brands missing from the list, we simply don't know their status. There are many more car brands that we suspect probably also run and use curl, but for which we have not found enough evidence. If you find some, please let me know!
These are all using libcurl, not the command line tool. It is not uncommon for them to run fairly old versions.
I can't tell for sure as they don't tell me. Presumably though, a modern car does a lot of Internet transfers for all sorts of purposes and curl is a reliable library for doing that: downloading firmware images, music, maps or media; uploading statistics, messages, high-scores and so on. Modern cars are full-blown computers combined with mobile phones, so of course they transfer data.
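To give a concrete feel for this kind of use, here is a minimal libcurl sketch of the sort of transfer an infotainment system might do: downloading a file over HTTPS to local storage. The URL, the filename and the options picked here are illustrative assumptions on my part, not taken from any actual car software.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLcode res = CURLE_FAILED_INIT;
  CURL *curl;
  FILE *out;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    out = fopen("firmware.bin", "wb"); /* hypothetical local target file */
    if(out) {
      /* hypothetical URL - real ones are of course vendor specific */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "https://updates.example.com/firmware.bin");
      curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);     /* default callback writes here */
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* follow redirects */
      curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);    /* fail on HTTP errors */
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
      fclose(out);
    }
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return (res == CURLE_OK) ? 0 : 1;
}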
The list contains 47 brands right now. They are however manufactured by a smaller number of companies, as most car companies sell cars under multiple different brands. So maybe 15 car companies?
Additionally, many of these companies buy their software from a provider who bundles it up for them. Several of them probably get their software from the same suppliers. So maybe there are only seven different ones?
I have still chosen to list and talk about the brands, because those are the consumer-facing names used in everyday conversations and they are the names we mere mortals are most likely to recognize.
Ironically enough, while curl runs in practically every new modern car that comes out of the factories, not a single one of the companies producing the cars or the software they run is a sponsor of curl or a customer of curl support. Not one.
We give away curl for free for everyone to use at no cost and there is no obligation for anyone to pay anyone for this. These companies are perfectly within their rights to act like this.
You could possibly argue that companies should think about their own future and make sure that the dependencies they rely on and would like to keep using also survive, so that they can keep depending on those components going forward. But obviously that is not how this works.
curl is liberally licensed under an MIT-like license.
I want curl to remain Open Source and I really like providing it in a way, under a liberal license, that makes it possible for it to get used everywhere. I mean, if we measure success by how widely used a piece of software is, I think we can agree that curl is a top candidate.
I would like the economics and financials around the curl project to work out anyway, but maybe that is a utopia we can never reach. Maybe we eventually will have to change the license or something to entice or force a different behavior.
2025-08-08 17:27:17
I often hear or see people claim that HTTP is a simple protocol. Primarily of course from people without much experience or familiarity with actual implementations. I think I personally also had thoughts in that style back when I started working with the protocol.
After having personally devoted soon three decades to writing client-side HTTP code, and having been involved in the IETF with all the HTTP specs produced since 2008 or so, I think I am in a decent position to give a more expanded view on it. HTTP is not a simple protocol. Far from it. Even if we presume that people actually mean HTTP/1 when they say that.
HTTP/1 may appear simple for several reasons: it is readable text, the simplest use case is not overly complicated, and existing tools like curl and browsers help make HTTP easy to play with.
The HTTP idea and concept can perhaps still be considered simple and even somewhat ingenious, but the actual machinery is not.
But yes, you can telnet to an HTTP/1 server, enter a GET / request manually and see a response. However, I don't think that is enough to qualify the entire thing as simple.
I don't believe anyone has tried to claim that HTTP/2 or HTTP/3 are simple. In order to properly implement version two or three, you pretty much have to also implement version one, so in that regard they accumulate complexity and bring quite a lot of extra challenges in their own respective specifications.
Let me elaborate on some aspects of the HTTP/1 protocol that make me say it is not simple.
HTTP is not only text-based, it is also line-based; the header part of the protocol, that is. A line can be arbitrarily long as there is no limit in the specs, but implementations need to enforce a limit to prevent denial of service and more. How long can a line be before a server rejects it? Each line ends with a carriage-return and a linefeed, but in some circumstances only a linefeed is enough.
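As an illustration only (this is not curl's actual code), here is a small C sketch of finding the end of one header line while accepting both CRLF and a bare LF, and capping the length at an implementation-chosen limit:

#include <stddef.h>
#include <string.h>

/* an implementation-chosen cap, not mandated by any spec */
#define MAX_HEADER_LINE 8190

/* Return the line length excluding the line terminator, or -1 if no
   complete line fits within the limit (too long, or not arrived yet). */
static long header_line_length(const char *buf, size_t len)
{
  size_t scan = (len < MAX_HEADER_LINE) ? len : MAX_HEADER_LINE;
  const char *lf = memchr(buf, '\n', scan);
  size_t linelen;
  if(!lf)
    return -1;
  linelen = (size_t)(lf - buf);
  if(linelen && buf[linelen - 1] == '\r')
    linelen--; /* strip the CR of a CRLF ending */
  return (long)linelen;
}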
Also, headers are not UTF-8; they are octets, and you must not assume that you can just arbitrarily pass through anything you like.
Text-based protocols easily get these problems. Between fields there can be one or more whitespace characters; some of them are mandatory, some are optional. In many cases HTTP also uses tokens that can either be a sequence of characters without any whitespace, or text within double quotes (“). In some cases they must always be within quotes.
There is not one single way to determine the end of an HTTP/1 download – the “body” as we say in protocol lingo. In fact, there are not even two. There are at least three (Content-Length, chunked encoding and Connection: close). Two of them require that the HTTP client parses content size provided in text format. These many end-of-body options have resulted in countless security related problems involving HTTP/1 over the years.
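A sketch of that decision in C (illustrative only, not how curl actually does it): once the relevant headers have been parsed, the client has to pick one of the three framing methods, and chunked encoding takes precedence over a provided Content-Length.

/* How does this response body end? */
enum body_end {
  BODY_CHUNKED,        /* Transfer-Encoding: chunked - read until the last chunk */
  BODY_CONTENT_LENGTH, /* Content-Length: N - read exactly N bytes */
  BODY_UNTIL_CLOSE     /* neither - read until the connection closes */
};

static enum body_end pick_body_end(int is_chunked, int has_content_length)
{
  if(is_chunked)
    return BODY_CHUNKED;
  if(has_content_length)
    return BODY_CONTENT_LENGTH;
  return BODY_UNTIL_CLOSE;
}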
Numbers provided as text are slow to parse and sometimes error-prone. Special care needs to be taken to avoid integer overflows, handle whitespace, +/- prefixes, leading zeroes and more. While easy to read for humans, less ideal for machines.
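For example, a careful Content-Length parser could look something like this sketch: only plain ASCII digits are accepted and every step is checked for overflow, since a hostile peer controls the string. The function name and types are mine, not libcurl's.

#include <stdint.h>

static int parse_content_length(const char *p, int64_t *out)
{
  int64_t value = 0;
  if(!*p)
    return -1; /* empty value */
  for(; *p; p++) {
    int digit;
    if(*p < '0' || *p > '9')
      return -1; /* digits only: no sign, no whitespace, no hex */
    digit = *p - '0';
    if(value > (INT64_MAX - digit) / 10)
      return -1; /* would overflow */
    value = value * 10 + digit;
  }
  *out = value;
  return 0;
}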
As if the arbitrary length headers with unclear line endings are not enough, they can also be “folded” – in two ways. First: a proxy can merge multiple headers into a single one, comma-separated – except some headers (like cookies) that cannot. Then, a server can send a header as a continuation of the previous header by adding leading whitespace. This is rarely used (and discouraged in recent spec versions), but a protocol detail that an implementation needs to care about because it is used.
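In code, the two cases could be detected with something like this sketch (illustrative only; the function names are mine): a leading space or tab marks an obsolete fold, and most, but not all, repeated headers may be merged into one comma-separated value.

#include <strings.h> /* strcasecmp, POSIX */

/* A header line starting with space or tab continues the previous line */
static int is_folded_continuation(const char *line)
{
  return line[0] == ' ' || line[0] == '\t';
}

/* Most repeated headers may be merged comma-separated; Set-Cookie is the
   classic exception that must not be combined */
static int may_merge_comma_separated(const char *name)
{
  return strcasecmp(name, "Set-Cookie") != 0;
}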
HTTP/1.1 ambitiously added features that at the time were not used or deployed across the wide Internet, so while the spec describes how for example HTTP pipelining works, trying to use it in the wild is asking for a series of problems and is nothing but a road to sadness.
Later HTTP versions added features that better delivered what pipelining failed to, mostly in the form of multiplexing.
The 100 response code is in similar territory: specified, but rarely actually used. It complicates life for new implementations. The fact that there is a discussion this week about particulars of the 100 response state handling, twenty-eight years after it was first published in a spec, I think tells us something.
The HTTP/1 spec details a lot of headers and their functionality, but that is not enough for a normal, current HTTP implementation to support. This is because things like cookies, authentication, new response codes and much more that an implementation may want to support today are features outside of the main spec, described in additional separate documents. Some details, like NTLM, are not even found in RFC documents.
Thus, a modern HTTP/1 client needs to implement and support a whole range of additional things and headers to work well across the web. “HTTP/1.1” is mentioned in at least 40 separate RFC documents, some of them quite complex by themselves.
While the syntax should ideally work exactly the same independently of which method (sometimes referred to as verb) is used, that is not how reality works.
For example, if the method is GET we can indeed also send a body in the request, similar to how we typically do with POST and PUT. But because this was never properly spelled out in the past, it is not interoperable today, to the extent that doing it is simply a recipe for failure in a high enough share of attempts across the web.
This is one of the reasons why there is now work on a new HTTP method called QUERY which is basically what GET + request body should have been. But that does not simplify the protocol.
Because of the organic way several headers were created, deployed and evolved, a proxy for example cannot just blindly combine two headers into one, as the generic rules say it could, because there are headers that specifically do not follow those rules and need to be treated differently. Like for example cookies.
Remember how browser implementations of protocols tend to prefer showing the user something and guessing the intention rather than showing an error, because if they were stringent and strict they would risk users switching to another browser that is not.
This impacts how the rest of the world gets to deal with HTTP, as users then come to expect that what works with the browsers should surely also work with non-browsers and their HTTP implementations.
This makes interpreting and understanding the spec secondary compared to just following what the major browsers have decided to do in particular circumstances. They may even change their stances over time and they may at times contradict explicit guidance in the specs.
The first HTTP/1.1 spec, RFC 2068 from January 1997, was 52,165 words in its plain text version – which almost tripled the size of the HTTP/1.0 document RFC 1945 at merely 18,615. A clear indication of how the perhaps simple HTTP/1.0 was not so simple anymore in 1.1.
In June 1999, the updated RFC 2616 added several hundred lines and clocked in at 57,897 words. Almost 6K more words.
A huge effort was then undertaken within the IETF, and over the following fifteen years the single-document HTTP/1.1 spec was converted into six separate documents.
RFC 7230 to RFC 7235 were published in June 2014 and they hold a total of 90,358 words. The spec had grown another 56%. It is comparable to an average-sized novel in number of words.
The whole spec was subsequently rearranged and reorganized again to better cater for the new HTTP versions, and the latest update was published in June 2022. The HTTP/1.1 parts had then been compacted into three documents, RFC 9110 to RFC 9112, with a total of 95,740 words.
For the sake of argument, let's say we can read two hundred words per minute when plowing through this. That is probably a little slower than average reading speed, but I imagine we read specs a little slower than we read novels for example. Let's also say that 10% of the words are cruft we don't need to read.
If we read only the three latest HTTP/1.1 related RFC documents non-stop, it would still take more than seven hours.
In a recent conference talk with a clickbait title, it was suggested that HTTP/1 is so hard to implement correctly that we should all stop using it.
All this, and yet there are few other Internet protocols that can compete with HTTP/1 in terms of use, adoption and popularity. HTTP is a big shot on the internet. Maybe this level of complication has been necessary to reach this success?
Comparing with other popular protocols still in use, like DNS or SMTP, I think we can see similar patterns: they started out as something simple a long time ago. Decades later: not so simple anymore.
Perhaps this is just life happening?
HTTP is not a simple protocol.
The future is likely just going to be even more complicated as more things are added to HTTP over time – for all versions.
2025-08-07 19:43:45
The curl command line option --write-out, or just -w for short, is a powerful and flexible way to extract information from transfers done with the tool. It was introduced way back in version 6.5, in early 2000.
This option takes an argument in which you can add “variables” that hold all sorts of different information, from time information, to speed, sizes, header content and more.
Some users have outright started to use the -w output for logging the performed transfer, and when you do that there has been a little detail missing: the ability to output the time the transfer completed. After all, most log lines feature the time in one way or another.
Starting in curl 8.16.0, curl -w knows the time and allows the user to specify exactly how to format that time in the output. Suddenly this output is a whole notch better for logging purposes.
Since log files also tend to use different time formats, I decided I didn't want to pick a fixed format and risk that a huge portion of users would think it is the wrong one, so I went straight with strftime formatting: the user controls the time format using standard %-flags, with different ones for year, month, day, hour, minute, second and so on.
Some details to note:
Here’s a sample command line outputting the time the transfer completed:
curl -w "%time{%a %b %e %Y - %H:%M:%S.%f} %{response_code}\n" https://example.com -o saved
When I ran this command line it gave me this output:
Wed Aug 6 2025 - 12:43:45.160382 200
2025-08-06 14:04:36
In the early days of curl development we (I suppose it was me personally but let’s stick with we so that I can pretend the blame is not all on me) made the possibly slightly unwise decision to make the -X option change the HTTP method for all requests in a curl transfer, even when -L is used – and independently of what HTTP responses the server returns.
That decision made me write blog posts and inform people all over about how using -X superfluously causes problems.
In curl 8.16.0, we introduce a different take on the problem, or better yet, a solution really: a new command line option that offers a modified behavior. Possibly the behavior people thought curl had all along.
Just learn to use --follow going forward (in curl 8.16.0 and later).
This option works fine together with -X and will adjust the method in the possible subsequent requests according to the HTTP response code.
A long time ago I wrote separately about the different HTTP response codes and what they mean in terms of changing (or not) the method.
Since we cannot break existing users and scripts, we had to leave the existing --location option working exactly like it always has. This option is thus mutually exclusive with --follow, so only pick one.
Part of the reason for this new option is to make sure curl can follow redirects correctly for other HTTP methods than the good old fashioned GET, POST and PUT. We already see PATCH used to some extent, but perhaps more important is the work on the spec for the new QUERY method. It is a flavor of POST, but with a few minor yet important differences. Possibly enough for me to write a separate blog post about, but right now we can stick to it being “like POST”, in particular from an HTTP client's perspective.
We want curl to be able to do a “post” but with a QUERY method and still follow redirects correctly. The -L and -X combination does not support this.
curl can be made to issue a proper QUERY request and follow redirects correctly like this:
curl -X QUERY --follow -d sendthis https://example.com/
Thank you for flying curl!
2025-08-05 13:46:48
From March 20, 1998 when the first curl release was published, to this day August 5, 2025 is exactly 10,000 days. We call it the curl-10000-day. Or just c10kday. c ten K day.
We want to celebrate this occasion by collecting and sharing stories. Your stories about curl. Your favorite memories. When you used curl for the first time. When curl saved your situation. When curl rescued your lost puppy. What curl has meant or perhaps still means to you, your work, your business, or your life. We want to favor and prioritize the good, the fun, the nostalgic and the emotional stories but it is of course up to your discretion.
We have created a thread in curl's GitHub Discussions section for this purpose, so please go there and submit your story or read what others have shared.
https://github.com/curl/curl/discussions/17930
In the curl factory this day is nothing special. We keep hammering out new features and bugfixes – just like we always do.
Thanks for flying curl.
2025-08-04 16:12:32
Back in 2012, the Happy Eyeballs RFC 6555 was published. It details how a sensible Internet client should proceed when connecting to a server. It basically goes like this:
Give the IPv6 attempt priority, then with a delay start a separate IPv4 connection in parallel with the IPv6 one; then use the connection that succeeds first.
We also tend to call this connection racing, since it is like a competition where multiple attempts compete trying to “win”.
In a normal name resolve, a client may get a list of several IPv4 and IPv6 addresses to try. curl would then pick the first, try that and if it fails, move on to the next and so on. If a whole address family fails, it would start on the other one immediately.
The updated Happy Eyeballs v2 RFC 8305 was published in 2017. It focused a lot on having the client start its connections earlier in the process, preferably as DNS responses arrive instead of waiting for the whole hostname resolve phase to end before starting.
This is complicated for lots of clients because there is no established (POSIX) API for doing such name resolves, so for a portable network library like libcurl we could not follow most of the new advice in this spec.
In 2012 we did not have QUIC on the map, and not practically in 2017 either, so those eyeballing specs did not include such details.
Even later, HTTP/3 was documented to require an alt-svc response header before a client would know that the server speaks HTTP/3, and only then could it attempt QUIC and expect it to work.
While curl supports the alt-svc response approach, that information arrives far too late for many users – and it is especially damning for a command line tool as opposed to a browser, since lots of users just do single-shot transfers and then never get to use HTTP/3 at all.
To combat that drawback, we decided that adding QUIC to the mix should add a separate connection competition. To allow faster and earlier use of QUIC.
Start the QUIC-IPv6 connect attempt first, then in order the QUIC-IPv4, TCP-IPv6 and finally the TCP-IPv4.
To users, this typically makes for a very smooth operation where the client just automatically connects to the “best” alternative without them having to make any particular choices or decisions. It gracefully and transparently adapts to situations where IPv6 or UDP have problems and so on.
With the introduction of HTTPS-RR there are also more ways to get IP addresses for hosts, and there is now ongoing work within the IETF on a v3 of the Happy Eyeballs specification, detailing how exactly everything should be put together.
We are of course following that development to monitor and learn how we should adapt and polish curl's connection handling further.
While waiting on the happy eyeballs v3 work to land in a document, Stefan Eissing took it upon himself to further tweak how curl behaves in an attempt to find the best connection even faster. Using more parallelism.
Starting in curl 8.16.0, curl will start the first IPv6 and the first IPv4 connection attempts exactly like before, but then, if none of them have connected after 200 milliseconds curl continues to the next address in the list and starts another attempt in parallel.
Let's take a look at an example of curl connecting to a server; let's call the server curl.se. The orange numbers show the order of things after the DNS response has been received.
Of course, each failed attempt makes curl immediately move to the next address in the list until all alternatives have been tested.
The illustration above can be seen as “per transport”. If only TCP is wanted, there is a single such race going on. With potentially quite a few parallel attempts in the worst cases.
If instead HTTP/3 or a lower HTTP version is wanted, curl first starts a QUIC connection race as illustrated and then after 200 milliseconds it starts a similar TCP race in parallel to the QUIC one! Both run at the same time, the first one to connect wins.
A little table to illustrate when the different connect attempts start when either QUIC or TCP is okay:
Time (ms) | QUIC                   | TCP
0         | Start IPv6 connect     | –
200       | Start IPv4 connect     | Start IPv6 connect
400       | Start 2nd IPv6 connect | Start IPv4 connect
600       | Start 2nd IPv4 connect | Start 2nd IPv6 connect
800       | Start 3rd IPv6 connect | Start 2nd IPv4 connect
So in the case of trying to connect to a server that does not respond and that has more than two IPv6 and IPv4 addresses each, there could be nine connection attempts running after 801 milliseconds.
The 200 millisecond delay mentioned above is just the default time. It can easily be changed, both when using the library and with the command line tool.
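With libcurl this is the CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS option, and the command line tool has the corresponding --happy-eyeballs-timeout-ms option. A minimal sketch of setting a shorter delay; the URL is just an example:

#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
    /* wait only 100 milliseconds before starting the next attempt */
    curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 100L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}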