2025-05-03 22:00:00
A few years ago I wrote about reading a Profit & Loss statement, which is a foundational executive skill. I also subsequently wrote about ways to measure your engineering organization. Despite having written those, I still spend a lot of time wondering about effective ways to represent an engineering organization to your board of directors.
Over the past few years, one of the most useful charts I’ve found for explaining an R&D organization is a scatterplot of R&D spend as a % of margin versus YoY growth of last twelve months (LTM) revenue. Unlike so many other measures, this is an explicit measure of your R&D organization’s value as an investment relative to peer organizations.
Until recently, I assumed building this dataset required reading financial filings, but my strategic finance partner at Carta, Tyler Braslow, pointed out that you can get all of this data for the tech sector from Meritech Analytics, for free.
When you log in to Meritech, you’re dropped into a table of public company comparables for tech companies. This is the exact dataset I’d been looking for to build this chart.
After logging in, you can copy the contents of that table into Google Sheets, Excel, or whatever you’re most comfortable with.
Within that sheet, the columns you care about are:
- Column Q (for me) – how much “last twelve months revenue” has grown year over year, as a percentage
- Column U (for me) – how much R&D spend is as a percentage of last twelve months margin
- Column O (for me) – although I don’t show this in the scatterplot, I find this one useful for debugging outlier values

Hiding the other columns gives you a much simpler table.
From that table, you’re then able to build the scatterplot. Note that being “higher” means your R&D spend as a percentage of LTM margin is higher, which is a bad thing. The best companies are to the bottom and to the right; the worst companies are to the top and to the left.
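If you’d rather build the chart in code than in Sheets, here’s a rough sketch using pandas and matplotlib. The CSV filename and column names are assumptions; rename them to match whatever your export from the sheet actually uses.

```python
# Sketch: R&D spend as % of LTM margin vs. YoY growth of LTM revenue.
# Assumes you exported the Meritech comparables table to meritech_comps.csv
# and that the columns below exist as numeric percentages; adjust names as needed.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("meritech_comps.csv")

x_col = "LTM Revenue YoY Growth %"   # assumed column name
y_col = "R&D % of LTM Margin"        # assumed column name

fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(df[x_col], df[y_col])
for _, row in df.iterrows():
    # Label each point with the company name (assumed "Company" column).
    ax.annotate(row["Company"], (row[x_col], row[y_col]), fontsize=7, alpha=0.7)
ax.set_xlabel("YoY growth of LTM revenue (%)")
ax.set_ylabel("R&D spend as % of LTM margin")
plt.tight_layout()
plt.show()
```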
With this chart as a starting point, you can then plot your company in and show where you stand. You could also show how your company’s position in the chart has evolved over time: hopefully improving. Finally, you might want to cull some of these data points to better determine your public company comparables. The Meritech dataset has 106 entries, but you might prefer a more representative thirty entries.
2025-05-03 21:00:00
Every few years I take a pass at reducing the chaos in my personal inboxes. There are simply too many emails to deal with, and that generally leads to me increasingly failing to follow up on important email.
Up to this point, my strategy has largely been filtering out emails that I never want to read. But there’s another category of email: stuff I often want to read when it’s fresh, but never want to read after it’s fresh. For example, calendar reminders, some mailing lists, some newsletters, etc.
I decided to figure out how I could set up a system where I could mark a number of things as “filter three days after receipt”. This is a nice compromise, because I do want to see those things, but I don’t want to have to remember to archive them after the fact.
You can write a search query for this in GMail:
from:([email protected]) older_than:3d
However, if you try to create a GMail filter using that, it turns the older_than:3d into a fixed point in time rather than doing what you want.
It seems that this is unsolvable within GMail itself. However, some quick searching suggested it was possible to create a Google Apps Script to solve this, so I asked Claude to write the script for me.
Following those instructions, I went to script.google.com, which I have not gone to in many years. I edited the generated script from Claude to use the tag “TempMsg”, to archive messages (as originally it had those commented out), and to limit itself to the first fifty items matching that tag. You can find the full code in this gist.
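The core of the script ends up looking roughly like this. This is a simplified sketch rather than the exact gist, using the “TempMsg” tag, three-day window, and fifty-item limit described above.

```javascript
// Simplified sketch of the archiving script (see the gist for the full version).
// Assumes a Gmail label named "TempMsg"; the label, age threshold, and batch
// size are the ones described above.
function archiveOldTempMessages() {
  // Up to 50 threads labeled TempMsg that are older than three days.
  var threads = GmailApp.search('label:TempMsg older_than:3d', 0, 50);
  for (var i = 0; i < threads.length; i++) {
    threads[i].moveToArchive();
  }
}
```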
I attempted to run this as is, and got an error message that I needed to grant permissions. That requires three clicks within the Google Scripts UI.
This also requires approving the somewhat scary message that I trust myself.
From there I tried to run the script, and it failed because the TempMsg tag didn’t exist in my inbox.
So I went ahead and created that tag, and set up some filters to assign it to certain email senders.
After that, I was able to run the script and it worked properly. Note that I briefly convinced myself it was failing, because it doesn’t remove messages from the past three days. That’s exactly how it’s supposed to work, but I would run it, see messages with the tag still there, and think it was broken. Whoops.
After convincing myself it was working, I added a periodic trigger to run this.
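I created the trigger through the Apps Script UI, but an equivalent daily trigger can also be created in code (the function name here matches the sketch above):

```javascript
// Create a daily time-based trigger for the archiving function.
function createDailyTrigger() {
  ScriptApp.newTrigger('archiveOldTempMessages')
    .timeBased()
    .everyDays(1)
    .create();
}
```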
I now have this running on a daily basis, and it’s given me a nice new tool
for managing my email a bit better. After verifying it, I also used the tag
manager to “hide” this tag in the inbox, so I don’t have to see the TempMsg
tag everywhere. If I ever need to debug things, I can always make it visible again.
2025-04-24 21:00:00
While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe’s most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability.
This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn’t just a technical design quirk; it’s a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe’s business success.
This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.
To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore.
More detail on this structure in Making a readable Engineering Strategy document.
Our policies for managing API changes are:
Design for long API lifetime. APIs are not inherently durable. Instead, we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn’t use this API, then migrate to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you’ve developed?
At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.
All new and modified APIs must be approved by API Review.
API changes may not be enabled for customers prior to API Review approval.
Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received.
All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.
We never deprecate APIs without an unavoidable requirement to do so. Even if it’s technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration.
If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.
When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support subscription workflows rather than extending /v1/charges to add subscriptions support.
With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe’s Strong Customer Authentication requirements. Even in that case, the charge API continued to work as it did previously, albeit only for non-European Union payments.
We manage this policy’s implied technical debt via an API translation layer. We release changed APIs into versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn’t eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally.
All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions.
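To make the translation-layer idea concrete, here’s an illustrative sketch. It is not Stripe’s actual implementation, and the version names and field rename are hypothetical; the shape is the important part: one modern implementation, with a chain of per-version transformations that upgrade requests on the way in and downgrade responses on the way out.

```python
# Illustrative sketch of an API version translation layer (not Stripe's actual
# implementation). Version names and the field rename below are hypothetical.

VERSIONS = ["2023-01-01", "2024-01-01", "2025-01-01"]  # oldest to latest


class Transformation:
    """Translates a request forward one version and a response back one version."""

    def upgrade_request(self, request: dict) -> dict:
        return request

    def downgrade_response(self, response: dict) -> dict:
        return response


class RenameAmountField(Transformation):
    # Hypothetical change: the 2024-01-01 version renamed `amount` to `amount_cents`.
    def upgrade_request(self, request: dict) -> dict:
        request = dict(request)
        if "amount" in request:
            request["amount_cents"] = request.pop("amount")
        return request

    def downgrade_response(self, response: dict) -> dict:
        response = dict(response)
        if "amount_cents" in response:
            response["amount"] = response.pop("amount_cents")
        return response


# One transformation per version boundary, keyed by the older version.
TRANSFORMATIONS = {"2023-01-01": RenameAmountField(), "2024-01-01": Transformation()}


def serve_latest(request: dict) -> dict:
    # Stand-in for the single, modern implementation of the latest version.
    return {"amount_cents": request.get("amount_cents", 0), "status": "succeeded"}


def handle(request: dict, pinned_version: str) -> dict:
    """Upgrade the request to the latest version, serve it, then downgrade the response."""
    chain = VERSIONS[VERSIONS.index(pinned_version):-1]
    for version in chain:
        request = TRANSFORMATIONS[version].upgrade_request(request)
    response = serve_latest(request)
    for version in reversed(chain):
        response = TRANSFORMATIONS[version].downgrade_response(response)
    return response


# A caller pinned to the oldest version still sends `amount` and gets `amount` back.
print(handle({"amount": 500}, "2023-01-01"))
```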
In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs.
We believe that in the future, it may be possible for us to make more backwards incompatible changes because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.
Our diagnosis of the impact of API changes and deprecation on our business is:
If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers—or a larger enterprise involving numerous stakeholders—handling external API changes can be particularly challenging.
Even if this is only marginally true, we’ve modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn reduction work.
While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the overhead of migrating away, even if they wanted to change providers. We believe that hypergrowth customers in particular are unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.
We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can’t assume that companies that make API changes are ill-informed. Rather it appears that they experience a meaningful technical debt tradeoff between the API provider and API consumers, and aren’t willing to consistently absorb that technical debt internally.
Future compliance or security requirements—along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI—may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
2025-04-24 20:00:00
In How should Stripe deprecate APIs?, the diagnosis depends on the claim that deprecating APIs is a significant cause of customer churn. While there is internal data that can be used to correlate deprecation with churn, it’s also valuable to build a model to help us decide if we believe that correlation and causation are aligned in this case.
In this chapter, we’ll investigate whether it’s reasonable to believe that API deprecation is a major influence on user retention and churn.
This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.
In an initial model with a 10% baseline for customer churn per round, reducing the share of customers experiencing API deprecation from 50% to 10% per round only increases the steady state of integrated customers by about 5%.
However, if we eliminate the baseline for customer churn entirely, then we see a massive difference between a 10% and 50% rate of API deprecation.
The biggest takeaway from this model is that eliminating API-deprecation churn alone won’t significantly increase the number of integrated customers. However, we also can’t fully benefit from reducing baseline churn without simultaneously reducing API deprecations. Meaningfully increasing the number of integrated customers requires lowering both sorts of churn in tandem.
We’ll start by sketching the model’s happiest path: potential customers flowing into engaged customers and then becoming integrated customers. This represents a customer who decides to integrate with Stripe’s APIs, and successfully completes that integration process.
Business would be good if that were the entire problem space. Unfortunately, customers do occasionally churn. This churn is represented in two ways:
- baseline churn, where integrated customers leave Stripe for any number of reasons, including things like dissolution of their company
- experience deprecation followed by deprecation-influenced churn, which represent the scenario where a customer decides to leave after an API they use is deprecated

There is also a flow for reintegration, where a customer impacted by API deprecation can choose to update their integration to comply with the API changes.
Pulling things together, the final sketch shows five stocks and six flows.
You could imagine modeling additional dynamics, such as recovery of churned customers, but it seems unlikely that would significantly influence our understanding of how API deprecation impacts churn.
In terms of acquiring customers, the most important flows are customer acquisition and initial integration with the API. Optimizing those flows will increase the number of existing integrations.
The flows driving churn are baseline churn, and the combination of API deprecation and deprecation-influenced churn. It’s difficult to move baseline churn for a payments API, as many churning customers leave due to company dissolution. From a revenue-weighted perspective, baseline churn is largely driven by non-technical factors, primarily pricing. In either case, it’s challenging to impact this flow without significantly lowering margin.
Engineering decisions, on the other hand, have a significant impact on both the number of API deprecations, and on the ease of reintegration after a migration. Because the same work to support reintegration also supports the initial integration experience, that’s a promising opportunity for investment.
You can find the full implementation of this model on GitHub if you want to see the full model rather than these emphasized snippets.
Now that we have identified the most interesting avenues for experimentation, it’s time to develop the model to evaluate which flows are most impactful.
Our initial model specification is:
# User Acquisition Flow
[PotentialCustomers] > EngagedCustomers @ 100
# Initial Integration Flow
EngagedCustomers > IntegratedCustomers @ Leak(0.5)
# Baseline Churn Flow
IntegratedCustomers > ChurnedCustomers @ Leak(0.1)
# Experience Deprecation Flow
IntegratedCustomers > DeprecationImpactedCustomers @ Leak(0.5)
# Reintegrated Flow
DeprecationImpactedCustomers > IntegratedCustomers @ Leak(0.9)
# Deprecation-Influenced Churn
DeprecationImpactedCustomers > ChurnedCustomers @ Leak(0.1)
Whether these are reasonable values depends largely on how we think about the length of each round. If a round was a month, then assuming half of integrated customers would experience an API deprecation would be quite extreme. If we assumed it was a year, then it would still be high, but there are certainly some API providers that routinely deprecate at that rate. (From my personal experience, I can say with confidence that Facebook’s Ads API deprecated at least one important field on a quarterly basis in the 2012-2014 period.)
Admittedly, for a payments API this would be a high rate, and is intended primarily as a contrast with more reasonable values in the exercise section below.
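If you want to poke at these dynamics without the modeling tool, here’s a rough plain-Python approximation of the same stocks and flows. It assumes each Leak moves that fraction of the start-of-round stock and that all flows are computed simultaneously; the modeling tool may order or round flows differently, so exact steady-state numbers can differ from the charts below.

```python
# Rough plain-Python approximation of the stock-and-flow spec above.
# Assumptions: Leak(p) moves p * (start-of-round stock) each round, flows are
# computed simultaneously from start-of-round values, and the pool of
# potential customers is effectively unlimited.

ROUNDS = 100

engaged = integrated = deprecation_impacted = churned = 0.0

for round_num in range(1, ROUNDS + 1):
    acquisition = 100.0                              # Potential -> Engaged
    initial_integration = 0.5 * engaged              # Engaged -> Integrated
    baseline_churn = 0.1 * integrated                # Integrated -> Churned
    experience_deprecation = 0.5 * integrated        # Integrated -> DeprecationImpacted
    reintegration = 0.9 * deprecation_impacted       # DeprecationImpacted -> Integrated
    deprecation_churn = 0.1 * deprecation_impacted   # DeprecationImpacted -> Churned

    engaged += acquisition - initial_integration
    integrated += initial_integration + reintegration - baseline_churn - experience_deprecation
    deprecation_impacted += experience_deprecation - reintegration - deprecation_churn
    churned += baseline_churn + deprecation_churn

    if round_num % 20 == 0:
        print(f"round {round_num}: integrated={integrated:.0f}, "
              f"deprecation_impacted={deprecation_impacted:.0f}, churned={churned:.0f}")
```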
Our goal with exercising this model is to understand how much API deprecation impacts customer churn. We’ll start by charting the initial baseline, then move to compare it with a variety of scenarios until we build an intuition for how the lines move.
The initial chart stabilizes in about forty rounds, maintaining about 1,000 integrated customers and 400 customers dealing with deprecated APIs. Now let’s change the experience deprecation flow to impact significantly fewer customers:
# Initial setting with 50% experiencing deprecation per round
IntegratedCustomers > DeprecationImpactedCustomers @ Leak(0.5)
# Less deprecation, only 10% experiencing per round
IntegratedCustomers > DeprecationImpactedCustomers @ Leak(0.1)
After those changes, we can compare the two scenarios.
Lowering the deprecation rate significantly reduces the number of companies dealing with deprecations at any given time, but it has a relatively small impact on increasing the steady state for integrated customers. This must mean that another flow is significantly impacting the size of the integrated customers stock.
Since there’s only one other flow impacting that stock, baseline churn, that’s the one to exercise next. Let’s set the baseline churn flow to zero to compare that with the initial model:
# Initial Baseline Churn Flow
IntegratedCustomers > ChurnedCustomers @ Leak(0.1)
# Zeroed out Baseline Churn Flow
IntegratedCustomers > ChurnedCustomers @ Leak(0.0)
These results make a compelling case that baseline churn is dominating the impact of deprecation. With no baseline churn, the number of integrated customers stabilizes at around 1,750, as opposed to around 1,000 for the initial model.
Next, let’s compare two scenarios without baseline churn, where one has high API deprecation (50%) and the other has low API deprecation (10%).
Comparing the two scenarios without baseline churn, we can see that an API deprecation rate of 10% leads to about 6,000 integrated customers, as opposed to 1,750 for a 50% rate of API deprecation. More importantly, in the 10% scenario, the integrated customers line shows no sign of flattening, and continues to grow over time rather than stabilizing.
The takeaway here is that significantly reducing either baseline churn or API deprecation magnifies the benefits of reducing the other. These results also reinforce the value of treating churn reduction as a system-level optimization, not merely a collection of discrete improvements.
2025-04-22 19:00:00
At work, we’ve been building agentic workflows to support our internal Delivery team on various accounting, cash reconciliation, and operational tasks. To better guide that project, I wrote my own simple workflow tool as a learning project in January. Since then, the Model Context Protocol (MCP) has become a prominent solution for writing tools for agents, and I decided to spend some time writing an MCP server over the weekend to build a better intuition.
The output of that project is library-mcp, a simple MCP server that you can use locally with tools like Claude Desktop to explore Markdown knowledge bases. I’m increasingly enamored with the idea of “datapacks” that I load into context windows alongside relevant work, and I am currently working to release my upcoming book in a “datapack” format that’s optimized for usage with LLMs. library-mcp allows any author to dynamically build datapacks relevant to their current question, as long as they have access to their content in Markdown files.
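For a sense of what a tool like this looks like under the hood, here’s a minimal, hypothetical sketch using the FastMCP helper from the official Python MCP SDK. It is not library-mcp’s actual implementation; the tool names and content directory are illustrative.

```python
# Minimal, hypothetical sketch of an MCP server exposing Markdown-search tools.
# Not library-mcp's actual implementation; names and paths are illustrative.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("markdown-library")

CONTENT_DIR = Path("./content")  # assumed location of your Markdown files


@mcp.tool()
def search_posts(query: str, limit: int = 10) -> list[str]:
    """Return paths of Markdown files whose text contains the query."""
    matches = []
    for path in CONTENT_DIR.rglob("*.md"):
        if query.lower() in path.read_text(encoding="utf-8").lower():
            matches.append(str(path))
            if len(matches) >= limit:
                break
    return matches


@mcp.tool()
def read_post(path: str) -> str:
    """Return the full contents of one Markdown file to pull into the context window."""
    return Path(path).read_text(encoding="utf-8")


if __name__ == "__main__":
    mcp.run()  # Claude Desktop launches this over stdio
```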
A few screenshots tell the story. First, here’s a list of the tools provided by this server. These tools give a variety of ways to search through content and pull that content into your context window.
Each time you access a tool for the first time in a chat, Claude Desktop prompts you to verify you want that tool to operate. This is a nice feature, and I think it’s particularly important that approval is done at the application layer, not at the agent layer. If agents approve their own usage, well, security is going to be quite a mess.
Here’s an example of retrieving all the tags to figure out what I’ve written about. You could do a follow-up like, “Get me posts I’ve written about ‘python’” after seeing the tags. The interesting thing here is you can combine retrieval and intelligence. For example, you could ask “Get me all the tags I’ve written, and find those that seem related to software architecture” and it does a good job of filtering.
Finally, here’s an example of actually using a datapack to answer a question. In this case, it’s evaluating how my writing has changed between 2018 and 2025.
More practically, I’ve already experimented with friends writing their CTO onboarding plans with Your first 90 days as CTO as a datapack in the context window, and you can imagine the right datapacks allowing you to go much further. Writing a company policy with all the existing policies in a datapack, along with a document about how to write policies effectively, for example, would improve consistency and be likely to identify conflicting policies.
Altogether, I am currently enamored with the vision of useful datapacks facilitating creation, and hope that library-mcp is a useful tool for folks as we experiment our way towards this idea.
2025-04-20 19:00:00
Ahead of announcing the title and publisher of my thus-far-untitled book on engineering strategy in the next week or two, I put together a website for its content. That site is pretty much the same format as this blog, but with some improvements like better mobile rendering on / than this blog has historically had.
After finishing that work, I ported the improvements back to lethain.com, but also decided to bring them to staffeng.com. That was slightly trickier because, unlike this blog, StaffEng was historically a Gatsby app. (Why a Gatsby app? Because Calm was using Gatsby for our web frontend and I wanted to get some experience with it.) Over the weekend, I took some time to migrate it to Hugo and apply the same enhancements, which you can now see in the lethain:staff-eng repository or on staffeng.com.
Here’s a screenshot of the old version.
Then here’s a screenshot of the updated version.
Overall, I think it’s slightly easier to read, and I took it as a chance to update the various links. For example, I removed the newsletter link and pointed that to this blog’s newsletter instead, given that one’s mailing list went quiet a long time ago.
Speaking of going quiet, I also brought these updates to infraeng.dev, which is the very-stuck-in-time site for the book I may-or-may-not one day write about infrastructure engineering. That means that I now have four essentially equivalent Hugo sites running different content websites: this blog, staffeng.com, infraeng.dev, and the site for the upcoming book. All of these build and deploy automatically onto GitHub Pages, which has been an extremely easy, reliable workflow for me.
While I was working on this, someone asked me why I don’t just write my own blog server to host my blogs. The answer here is pretty straightforward. I’ve written three blog servers for my blog over the years. The first two were in Python, and the last one was in Go. They all worked well enough, but maintaining them eventually became a pain point, because they required a real build pipeline and dealing with libraries that could have security issues. Even in the best case, the containers they ran in would get end-of-lifed periodically as Ubuntu versions got deprecated.
What I’ve slowly learned from that is that, as a frequent writer, you really want your content to live somewhere that can work properly for decades. Even small maintenance costs can be prohibitive over time, and I’ve seen some good blogs disappear rather than e.g. figure out a WordPress upgrade. Individually, these are all minor, but over decades they can really add up. This is also my argument against using hosted providers: I’m sure Substack will be around in five years, but I have no idea if it will be around in twenty years. I do know that I’ll still be writing then, and will also want my previous writing to still be accessible.