2025-09-16 23:30:45
P99 CONF is the technical conference for anyone who obsesses over high-performance, low-latency applications. Engineers from Pinterest, Prime Video, ClickHouse, Gemini, Arm, Rivian and VW Group Technology, Meta, Wayfair, Disney, Uber, NVIDIA, and more will be sharing 60+ talks on topics like Rust, Go, Zig, distributed data systems, Kubernetes, and AI/ML.
Join 20K of your peers for an unprecedented opportunity to learn from experts like Chip Huyen (author of the O’Reilly AI Engineering book), Alexey Milovidov (ClickHouse creator/CTO), Andy Pavlo (CMU professor) and more – for free, from anywhere.
Bonus: Registrants are eligible to enter to win one of 300 free swag packs, get 30-day access to the complete O’Reilly library & learning platform, plus free digital books.
Disclaimer: The details in this post have been derived from the official documentation shared online by the Anthropic Engineering Team. All credit for the technical details goes to the Anthropic Engineering Team. The links to the original articles and sources are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
Open-ended research tasks are difficult to handle because they rarely follow a predictable path. Each discovery can shift the direction of inquiry, making it impossible to rely on a fixed pipeline. This is where multi-agent systems become important.
By running several agents in parallel, multi-agent systems allow breadth-first exploration, compress large search spaces into manageable insights, and reduce the risk of missing key information.
Anthropic’s engineering team also found that this approach delivers major performance gains. In internal evaluations, a system with Claude Opus 4 as the lead agent and Claude Sonnet 4 as supporting subagents outperformed a single-agent setup by more than 90 percent. The improvement was strongly linked to token usage and the ability to spread reasoning across multiple independent context windows, with subagents enabling the kind of scaling that a single agent cannot achieve.
However, the benefits also come with costs:
Multi-agent systems consume approximately fifteen times more tokens than standard chat interactions, making them best suited for tasks where the value of the outcome outweighs the expense.
They excel at problems that can be divided into parallel strands of research, but are less effective for tightly interdependent tasks such as coding.
Despite these trade-offs, multi-agent systems are proving to be a powerful way to tackle complex, breadth-heavy research challenges. In this article, we will understand the architecture of the multi-agent research system that Anthropic built.
The research system is built on an orchestrator-worker pattern, a common design in computing where one central unit directs the process and supporting units carry out specific tasks.
In this case, the orchestrator is the Lead Researcher agent, while the supporting units are subagents that handle individual parts of the job. Here are the details of each component:
Lead Researcher agent: This is the main coordinator. When a user submits a query, the Lead Researcher analyzes it, decides on an overall strategy, and records the plan in memory. Memory management is important here because large research tasks can easily exceed the token limit of the model’s context window. By saving the plan, the system avoids losing track when tokens run out.
Subagents: These are specialized agents created by the Lead Researcher. Each subagent is given a specific task, such as exploring a certain company, checking a particular time period, or looking into a technical detail. Because subagents operate in parallel and maintain their own context, they can search, evaluate results, and refine queries independently without interfering with one another. This separation of tasks reduces duplication and makes the process more efficient.
Citation Agent: Once enough information has been gathered, the results are passed to a Citation Agent. Its job is to check every claim against the sources, match citations correctly, and ensure the final output is traceable. This prevents errors such as making statements without evidence or attributing information to the wrong source.
See the diagram below that shows the high-level architecture of these components:
This design differs from traditional Retrieval-Augmented Generation (RAG) systems.
In standard RAG, the model retrieves a fixed set of documents that look most similar to the query and then generates an answer from them. The limitation is that retrieval happens only once, in a static way.
The multi-agent system operates dynamically: it performs multiple rounds of searching, adapts based on the findings, and explores deeper leads as needed. In other words, it learns and adjusts during the research process rather than relying on a single snapshot of data.
The complete workflow looks like this:
A user submits a query.
The Lead Researcher creates a plan for performing the investigation.
Subagents are spawned, each carrying out searches or using tools in parallel.
The Lead Researcher gathers their results, synthesizes them, and decides if further work is required. If so, more subagents can be created, or the strategy can be refined.
Once enough information is collected, everything is handed to the Citation Agent, which ensures the report is properly sourced.
The final research report is then returned to the user.
See the diagram below for more details:
This layered system allows for flexibility, depth, and accountability. The Lead Researcher ensures direction and consistency, subagents provide parallel exploration and scalability, and the Citation Agent enforces accuracy by tying results back to sources. Together, they create a system that is both more powerful and more reliable than single-agent or static retrieval approaches.
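To make this concrete, here is a minimal Python sketch of the orchestrator-worker loop described above. It is illustrative only: the helpers (call_lead, run_subagent, cite) are hypothetical stand-ins for Anthropic's internal model calls and tooling, not their actual API.

```python
# Illustrative sketch of the orchestrator-worker loop. The three helpers are
# hypothetical stand-ins for real model calls and tools.
def call_lead(prompt: str) -> str: ...            # Lead Researcher (e.g., Claude Opus 4)
def run_subagent(task: str) -> str: ...           # one subagent with its own context and tools
def cite(report: str, sources: list) -> str: ...  # Citation Agent pass

def research(query: str, max_rounds: int = 3) -> str:
    plan = call_lead(f"Plan the research for: {query}")   # plan is saved to memory
    findings: list[str] = []

    for _ in range(max_rounds):
        tasks = call_lead(
            f"Plan:\n{plan}\nFindings so far:\n{findings}\n"
            "List the next subagent tasks, one per line, or reply DONE."
        )
        if tasks.strip() == "DONE":
            break
        # In the real system these subagents run in parallel, each in its own context window.
        findings += [run_subagent(t) for t in tasks.splitlines() if t.strip()]

    report = call_lead(f"Synthesize a report answering '{query}' from:\n{findings}")
    return cite(report, findings)                 # tie every claim back to its sources
```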
Designing good prompts turned out to be the single most important way to guide how the agents behaved.
Since each agent is controlled by its prompt, small changes in phrasing could make the difference between efficient research and wasted effort. Through trial and error, Anthropic identified several principles that made the system work better.
To improve prompts, the engineering team built simulations where agents ran step by step using the same tools and instructions they would in production.
Watching them revealed common mistakes. Some agents kept searching even after finding enough results, others repeated the same queries, and some chose the wrong tools.
By mentally modeling how the agents interpret prompts, engineers could predict these failure modes and adjust the wording to steer agents toward better behavior.
See the diagram below to understand the concept of an agent at a high level:
The Lead Researcher is responsible for breaking down a query into smaller tasks and passing them to subagents.
For this to work, the instructions must be very clear: each subagent needs a concrete objective, boundaries for the task, the right output format, and guidance on which tools to use. Without this level of detail, subagents either duplicated each other’s work or left gaps. For example, one subagent looked into the 2021 semiconductor shortage while two others repeated nearly identical searches on 2025 supply chains. Proper delegation avoids wasted effort.
Agents often struggle to judge how much effort a task deserves. To prevent over-investment in simple problems, scaling rules were written into prompts. For instance:
A simple fact check should involve only one agent making 3–10 tool calls.
A direct comparison might need 2–4 subagents, each with 10–15 calls.
A complex research problem could require 10 or more subagents, each with clearly divided responsibilities.
These built-in guidelines helped the Lead Researcher allocate resources more effectively.
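One simple way to apply such guidance is to embed it verbatim in the Lead Researcher's system prompt. The snippet below is only an illustration of that idea using the numbers above; it is not Anthropic's actual prompt text.

```python
# Illustrative effort-scaling guidance for the lead agent's prompt (not Anthropic's actual text).
SCALING_RULES = """
Scale effort to query complexity:
- Simple fact check: 1 agent, 3-10 tool calls.
- Direct comparison: 2-4 subagents, 10-15 tool calls each.
- Complex research: 10+ subagents with clearly divided responsibilities.
Do not spawn more subagents than the task requires.
"""

LEAD_SYSTEM_PROMPT = "You are the Lead Researcher coordinating a research task.\n" + SCALING_RULES
```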
The way agents understand tools is as important as how humans interact with software interfaces. A poorly described tool can send an agent down the wrong path entirely.
For example, if a task requires Slack data but the agent only searches the web, the result will fail. With MCP servers that give the model access to external tools, this problem can be compounded since agents encounter unseen tools with varying quality.
See the diagram below that shows the concept of the Model Context Protocol (MCP).
To solve this, the team gave agents heuristics such as:
Examine all available tools before starting.
Match the tool to the user’s intent.
Use the web for broad searches, but prefer specialized tools when possible.
Each tool was carefully described with a distinct purpose so that agents could make the right choice.
Claude 4 models proved capable of acting as their own prompt engineers. When given failing scenarios, they could analyze why things went wrong and suggest improvements.
Anthropic even created a tool-testing agent that repeatedly tried using a flawed tool, then rewrote its description to avoid mistakes. This process cut task completion times by about 40 percent, because later agents could avoid the same pitfalls.
Agents often defaulted to very specific search queries, which returned few or irrelevant results.
To fix this, prompts encouraged them to begin with broad queries, survey the landscape, and then narrow their focus as they learned more. This mirrors how expert human researchers work.
Anthropic used extended thinking and interleaved thinking as controllable scratchpads. Extended thinking allows the Lead Researcher to write out their reasoning before acting, such as planning which tools to use or how many subagents to create.
Subagents also plan their steps and then, after receiving tool outputs, use interleaved thinking to evaluate results, spot gaps, and refine their next queries. This structured reasoning improved accuracy and efficiency.
Early systems ran searches one after another, which was slow.
By redesigning prompts to encourage parallelization, the team achieved dramatic speedups. The Lead Researcher now spawns several subagents at once, and each subagent can use multiple tools in parallel.
This reduced research times by as much as 90 percent for complex queries, making it possible to gather broad information in minutes instead of hours.
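As a rough sketch of what that fan-out can look like in code, assuming a hypothetical run_subagent coroutine that wraps a subagent's model call plus its tool use:

```python
import asyncio

async def run_subagent(task: str) -> str:
    # Hypothetical: in the real system this wraps a subagent's model call and tool use.
    return f"findings for: {task}"

async def fan_out(tasks: list[str]) -> list[str]:
    # Spawn every subagent at once instead of running searches one after another.
    return await asyncio.gather(*(run_subagent(t) for t in tasks))

results = asyncio.run(fan_out([
    "2021 semiconductor shortage",
    "2025 supply chain outlook",
]))
```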
Evaluating multi-agent systems is difficult because they rarely follow the same steps to reach an answer.
Anthropic used a mix of approaches to judge outcomes rather than strict processes.
Start small: In early development, even tiny changes to prompts had big effects. Testing with just 20 representative queries was enough to see improvements instead of waiting for large test sets.
LLM-as-judge: A separate model graded outputs against a rubric covering factual accuracy, citation quality, completeness, source quality, and tool efficiency. Scores ranged from 0.0 to 1.0, alongside a pass/fail grade, which made the evaluation scalable and consistent with human judgment (a sketch of such a judge appears after this list).
Human oversight: People remained essential for spotting edge cases, such as hallucinations or bias toward SEO-heavy sources. Their feedback led to new heuristics for source quality.
Emergent behavior: Small prompt changes could shift agent interactions in unpredictable ways. Instead of rigid rules, the best results came from prompt frameworks that guided collaboration, division of labor, and effort allocation.
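A minimal sketch of the LLM-as-judge idea, where a grader model scores a report against the rubric and returns a 0.0 to 1.0 score per criterion plus a pass/fail verdict; call_grader is a hypothetical model call:

```python
import json

RUBRIC = ["factual accuracy", "citation quality", "completeness",
          "source quality", "tool efficiency"]

def call_grader(prompt: str) -> str: ...   # hypothetical call to a grader LLM

def judge(query: str, report: str) -> dict:
    prompt = (
        f"Grade this research report for the query: {query}\n\n{report}\n\n"
        f"Score each criterion from 0.0 to 1.0: {', '.join(RUBRIC)}.\n"
        'Reply as JSON: {"scores": {"factual accuracy": 0.9, ...}, "pass": true}'
    )
    return json.loads(call_grader(prompt))  # e.g. {"scores": {...}, "pass": true}
```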
Running multi-agent systems in production introduces reliability issues that go beyond traditional software.
Stateful agents: These agents run for long periods, keeping track of their progress across many tool calls. Small errors can build up, so the system needs durable recovery methods (such as checkpoints, retry logic, and letting agents adapt when tools fail) so that work can resume without starting over.
Debugging: Because agents make dynamic, non-deterministic choices, the same prompt may lead to different paths. To diagnose failures, Anthropic added production tracing and monitored high-level decision patterns, while avoiding storage of sensitive user content.
Deployments: Updates risk breaking agents already mid-task. To avoid this, Anthropic used rainbow deployments, where traffic is shifted gradually from old to new versions, keeping both active during rollout.
Synchronous bottlenecks: Currently, the Lead Researcher waits for subagents to finish before moving forward. This simplifies coordination but slows down the system. Asynchronous execution could remove these bottlenecks, though it would add complexity in managing state, coordinating results, and handling errors.
Building multi-agent systems is far more challenging than building single-agent prototypes.
Small bugs or errors can ripple through long-running processes, leading to unpredictable outcomes. Reliable performance requires proper prompt design, durable recovery mechanisms, detailed evaluations, and cautious deployment practices.
Despite the complexity, the benefits are significant.
Multi-agent research systems have shown they can uncover connections, scale reasoning across vast amounts of information, and save users days of work on complex tasks. They are best suited for problems that demand breadth, parallel exploration, and reliable sourcing. With the right engineering discipline, these systems can operate at scale and open new possibilities for how AI assists with open-ended research.
References:
TL;DR: Take this 2-minute survey so I can learn more about who you are, what you do, and how I can improve ByteByteGo.
Get your product in front of more than 1,000,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing [email protected].
2025-09-15 23:30:45
Join us November 5–6 for Redpanda Streamfest, a two-day online event dedicated to streaming data technologies for agentic and data-intensive applications. Learn how to build scalable, reliable, and secure data pipelines through technical sessions, live demos, and hands-on workshops. Sessions include keynotes from industry leaders, real-world case studies, and tutorials on next-gen connectors for AI use cases. Discover why a streaming data foundation is essential for LLM-powered applications, how to simplify architectures, and new approaches to cost-effective storage. Connect with experts, sharpen your skills, and get ready to unlock the full potential of AI with streaming.
Disclaimer: The details in this post have been derived from the official documentation shared online by the Linear Engineering Team. All credit for the technical details goes to the Linear Engineering Team. The links to the original articles and sources are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
Linear represents a new generation of project management platforms specifically designed for modern software teams.
Founded in 2019, the company has built its reputation on delivering exceptional speed and developer experience, distinguishing itself in a crowded market dominated by established players like Jira and Asana.
What sets Linear apart is its focus on performance, achieving sub-50ms interactions that make the application feel instantaneous. The platform embraces a keyboard-first design philosophy, allowing developers to navigate and manage their work without reaching for the mouse, a feature that resonates strongly with its technical user base.
The modern SaaS landscape presents a fundamental challenge that every growing platform must eventually face: how to serve a global customer base while respecting regional data requirements and maintaining optimal performance.
In this article, we look at how Linear implemented multi-region support for its customers. We will explore the architecture they built, along with the technical implementation details.
The decision to implement multi-region support at Linear wasn't made in a vacuum but emerged from concrete business pressures and technical foresight. There were three main reasons:
Compliance: The most immediate driver came from the European market, where larger enterprises expressed clear preferences for hosting their data within Europe. This was due to the need for GDPR compliance and internal data governance policies.
Technical: The primary technical concern centered on the eventual scaling limits of their PostgreSQL infrastructure. While their single-region deployment in Google Cloud's us-east1 region was serving them well, the team understood that continuing to scale vertically by simply adding more resources to a single database instance would eventually hit hard limits. By implementing multi-region support early, they created a horizontal scaling path that would allow them to distribute workspaces across multiple independent deployments, each with its own database infrastructure.
Competitive Advantage: The implementation of multi-region support also positioned Linear more favorably in the competitive project management space. By offering European data hosting, Linear could compete more effectively for enterprise contracts against established players who might not offer similar regional options.
The multi-region architecture Linear implemented follows four strict requirements that shaped every technical decision in the system.
Invisible to Users: This meant maintaining single domains (linear.app and api.linear.app) regardless of data location. This constraint eliminated the simpler approach of region-specific subdomains like eu.linear.app, which would have pushed complexity onto users and broken existing integrations. Instead, the routing logic lives entirely within the infrastructure layer.
Developer Simplicity: This meant that engineers writing application features shouldn’t need to consider multi-region logic in their code. This constraint influenced numerous implementation details, from the choice to replicate entire deployments rather than shard databases to the decision to handle all synchronization through background tasks rather than synchronous cross-region calls.
Feature Parity: Every Linear feature, integration, and API endpoint must function identically regardless of which region hosts a workspace. This eliminated the possibility of region-specific feature flags or degraded functionality, which would have simplified the implementation but compromised the user experience.
Full Regional Isolation: This meant that each region operates independently. A database failure, deployment issue, or traffic spike in one region cannot affect the other. This isolation provides both reliability benefits and operational flexibility. Each region can be scaled, deployed, and maintained independently based on its specific requirements.
The following architecture diagram reveals a three-tier structure.
User-facing clients (API users, the Linear web client, and OAuth applications) all connect to a central proxy layer. This proxy communicates with an authentication service to determine request routing, then forwards traffic to one of two regional deployments. Each region contains a complete Linear stack: API servers, sync engine, background task processors, and databases.
The proxy layer, implemented using Cloudflare Workers, serves as the routing brain of the system. When a request arrives, the proxy extracts authentication information, queries the auth service for the workspace's region, and obtains a signed JWT, then forwards the request to the appropriate regional deployment. This happens on every request, though caching mechanisms reduce the overhead for frequent requests from the same client.
The resulting architecture trades implementation complexity for operational benefits and user experience. Rather than distributing complexity across the application or pushing it onto users, Linear concentrated it within well-defined infrastructure components—primarily the proxy and authentication service.
There were three main phases to the technical implementation:
Before implementing multi-region support, Linear's infrastructure existed as manually configured resources in Google Cloud Platform. While functional for a single-region deployment, this approach wouldn't scale to managing multiple regional deployments. The manual configuration would have required duplicating every setup step for each new region, creating opportunities for configuration drift and human error.
The transformation began with Google Cloud's Terraform export tooling, which generated Terraform configurations from the existing infrastructure. This automated export provided a comprehensive snapshot of their us-east1 deployment, but the raw export required significant cleanup. The team removed resources that weren't essential for the main application, particularly global resources that wouldn't need regional replication and resources that hadn't been manually created originally.
The critical work involved refactoring these Terraform resources into reusable modules. Each module was designed to accept region as a variable parameter, along with region-specific configurations for credentials and secrets. This modular approach transformed infrastructure deployment from a manual process into a parameterized, repeatable operation. Spinning up a new region became a matter of instantiating these modules with appropriate regional values rather than recreating infrastructure from scratch.
The team also built a staging environment using these Terraform modules, which served multiple purposes.
Validated that the infrastructure-as-code accurately replicated their production environment.
Provided a safe space for testing infrastructure changes before production deployment.
Crucially, provided an environment for testing the proxy's routing logic that would direct traffic between regions.
The authentication service extraction represented the most complex phase of Linear's multi-region implementation, touching large portions of their codebase.
The extraction followed a gradual approach designed to minimize risk. Initially, while still operating in a single region, the new authentication service shared a database with the main backend service in its US region. This co-location allowed the team to develop and test the extraction logic without immediately dealing with network latency.
Once the extraction logic was functionally complete, they implemented strict separation at the database level. Tables were split into distinct schemas with database-level permission boundaries—the authentication service couldn't read or write regional data, and regional services couldn't directly access authentication tables. This hard boundary, enforced by PostgreSQL permissions rather than application code, guaranteed that the architectural separation couldn't be accidentally violated by a coding error.
Some tables required splitting between the two services. For example, workspace configuration contained both authentication-relevant settings and application-specific data. The solution involved maintaining parallel tables with a one-to-one relationship for shared fields, requiring careful synchronization to maintain consistency.
Linear adopted a one-way data flow pattern that simplified the overall architecture. Regional services could call the authentication service directly through a GraphQL API, but the authentication service never made synchronous calls to regional services. When the authentication service needed regional actions, it scheduled background tasks using Google Pub/Sub's one-to-many pattern, broadcasting tasks to all regions.
This design choice meant the authentication service only handled HTTP requests without needing its own background task runner, simplifying deployment and operations. The choice of GraphQL for internal service communication leveraged Linear's existing investment in GraphQL tooling from their public API. Using Zeus to generate type-safe clients eliminated many potential integration errors and accelerated development by reusing familiar patterns and tools.
Three distinct patterns emerged for maintaining data consistency between services:
Record creation always began in the authentication service to ensure global uniqueness constraints (like workspace URL keys) were enforced before creating regional records. The authentication service would return an ID that regional services used to create corresponding records (see the sketch after this list).
Updates used Linear's existing sync engine infrastructure. When a regional service updated a shared record, it would asynchronously propagate changes to the authentication service. This approach kept the update path simple for developers. They just updated the records normally in the regional service.
Deletion worked similarly to creation, with additional complexity from foreign key cascades. PostgreSQL triggers created audit logs of deleted records, capturing deletions that occurred due to cascading from related tables.
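As a rough illustration of the creation pattern, the sketch below shows the one-way flow: the authentication service enforces global uniqueness and returns an ID, and the regional service then creates its own record keyed by that ID. The endpoint, mutation name, and table layout are assumptions, not Linear's actual internals.

```python
import requests

AUTH_SERVICE = "https://auth.internal/graphql"   # hypothetical internal endpoint

def create_workspace(url_key: str, region_db) -> str:
    mutation = """
    mutation($urlKey: String!) {
      createWorkspace(urlKey: $urlKey) { id }
    }"""
    resp = requests.post(AUTH_SERVICE,
                         json={"query": mutation, "variables": {"urlKey": url_key}})
    resp.raise_for_status()                       # uniqueness violations surface here
    workspace_id = resp.json()["data"]["createWorkspace"]["id"]

    # The regional record is created second, reusing the globally unique ID.
    region_db.execute("INSERT INTO workspace (id, url_key) VALUES (%s, %s)",
                      (workspace_id, url_key))
    return workspace_id
```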
The final piece of Linear's multi-region architecture involved implementing a proxy layer to route requests to the appropriate regional deployment. Since Linear already used Cloudflare Workers extensively, extending their use for request routing was a natural choice.
See the diagram below:
The proxy worker's core responsibility is straightforward but critical.
For each incoming request, it extracts authentication information (from cookies, headers, or API tokens), makes a call to the authentication service to determine both the target region and obtain a signed JWT, then forwards the request to the appropriate regional deployment with the pre-signed header attached.
The JWT signing mechanism serves dual purposes. It validates that requests have been properly authenticated and authorized by the central authentication service, while also carrying metadata about the user and workspace context. This eliminates the need for regional services to make their own authentication calls, reducing latency and system complexity.
To optimize performance, the Cloudflare Worker implements sophisticated caching of authentication signatures.
When the same client makes frequent requests, the worker can serve cached authentication tokens without making repeated round-trips to the authentication service. The caching strategy had to balance performance with security. Tokens are cached for limited periods and include enough context to prevent cache poisoning attacks while still providing meaningful performance benefits for active users making multiple rapid requests.
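Linear's proxy is a Cloudflare Worker, so the real code is JavaScript/TypeScript running at the edge; the Python sketch below only illustrates the routing decision and the short-lived token cache conceptually. The endpoint, header name, region map, and TTL are all assumptions.

```python
import time
import requests

AUTH_SERVICE = "https://auth.internal/route"                       # hypothetical
REGIONS = {"us": "https://us.internal.example",                    # hypothetical regional hosts
           "eu": "https://eu.internal.example"}
TTL_SECONDS = 60                                                   # assumed cache lifetime
_cache: dict[str, tuple[float, str, str]] = {}                     # token -> (expiry, region, jwt)

def route(auth_token: str, path: str, body: bytes) -> requests.Response:
    entry = _cache.get(auth_token)
    if entry is None or entry[0] < time.time():
        resp = requests.post(AUTH_SERVICE, json={"token": auth_token})
        resp.raise_for_status()
        data = resp.json()                                         # e.g. {"region": "eu", "jwt": "..."}
        entry = (time.time() + TTL_SECONDS, data["region"], data["jwt"])
        _cache[auth_token] = entry

    _, region, signed_jwt = entry
    # Forward to the regional deployment with the pre-signed JWT attached.
    return requests.post(REGIONS[region] + path, data=body,
                         headers={"x-signed-auth": signed_jwt})    # header name is an assumption
```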
Linear's real-time sync functionality relies heavily on WebSockets, making their efficient handling crucial for the multi-region architecture. The team leveraged an important Cloudflare Workers optimization: when a worker returns a fetch request without modifying the response body, Cloudflare automatically hands off to a more efficient code path.
This optimization is particularly valuable for long-lived connections. Rather than keeping a worker instance active for the entire duration of a WebSocket connection, the handoff mechanism allows the worker to complete its routing decision and then step aside, letting Cloudflare's infrastructure efficiently proxy the established connection to the regional deployment.
During the implementation, the team kept fallback mechanisms in place. While building out the proxy layer, the API service could still authenticate requests directly if they weren't pre-signed by the proxy.
Linear's multi-region implementation demonstrates that supporting geographical data distribution doesn't require sacrificing simplicity or performance.
By concentrating complexity within well-defined infrastructure components (the proxy layer and authentication service) the architecture shields both users and developers from the underlying regional complexity. The extensive behind-the-scenes work touched sensitive authentication logic throughout the codebase, yet careful planning meant most bugs remained invisible to users through strategic use of fallbacks and gradual rollouts.
The implementation now enables workspace creation in Linear's European region with full feature parity, automatically selecting the default region based on the user's timezone while preserving choice. The architecture positions Linear for future expansion, with the framework in place to add additional regions as needed. The team plans to extend this capability further by supporting workspace migration between regions, allowing existing customers to relocate their data as requirements change.
References:
2025-09-13 23:31:25
Bugs sneak out when less than 80% of user flows are tested before shipping. However, getting that kind of coverage (and staying there) is hard and pricey for any team.
QA Wolf’s AI-native service provides high-volume, high-speed test coverage for web and mobile apps, reducing your organization’s QA cycle to less than 15 minutes.
They can get you:
Unlimited parallel test runs
24-hour maintenance and on-demand test creation
Zero flakes, guaranteed
Engineering teams move faster, releases stay on track, and testing happens automatically—so developers can focus on building, not debugging.
The result? Drata achieved 4x more test cases and 86% faster QA cycles.
⭐ Rated 4.8/5 on G2
This week’s system design refresher:
Python vs Java
Help us Make ByteByteGo Newsletter Better
Design Patterns Cheat Sheet
CI/CD Simplified Visual Guide
How Apache Kafka Works
Load Balancers vs API Gateways vs Reverse Proxy
SPONSOR US
Ever wondered what happens behind the scenes when you run a Python script or a Java program? Let’s find out:
Python (CPython Runtime):
Python source code (.py) is compiled into bytecode automatically in memory.
Bytecode can also be cached in .pyc files, making re-runs faster by using the cached version.
The Import System loads modules and dependencies.
The Python Virtual Machine (PVM) interprets the bytecode instruction by instruction, making Python flexible but relatively slower.
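You can inspect this bytecode yourself with CPython's built-in dis module (exact opcode names vary by Python version):

```python
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints the bytecode the PVM executes, e.g. LOAD_FAST a, LOAD_FAST b,
              # BINARY_ADD (BINARY_OP in 3.11+), RETURN_VALUE
```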
Java (JVM Runtime):
Java source code (.java) is compiled into .class bytecode using javac.
The Class Loader loads bytecode into the Java Virtual Machine (JVM).
Bytecode is verified and executed.
The JVM uses both an Interpreter and a JIT Compiler: frequently used code (hot paths) is compiled into native machine code, making Java faster.
Over to you: Do you prefer the flexibility of Python or the performance consistency of Java?
Outcomes speak louder than outputs.
DevStats gives engineering leaders the shared language they need to align with business goals and prove impact without the endless back-and-forth.
✅ Show how dev work connects to business outcomes
✅ Translate engineering metrics into exec-friendly insights
✅ Spot bottlenecks early and keep delivery flowing
✅ Prove your team’s value with every release
Finally, a way to make the business understand engineering, and make engineering impossible to ignore.
The cheat sheet briefly explains each pattern and how to use it.
What's included?
Factory
Builder
Prototype
Singleton
Chain of Responsibility
And many more!
Whether you're a developer, a DevOps specialist, a tester, or involved in any modern IT role, CI/CD pipelines have become an integral part of the software development process.
Continuous Integration (CI) is a practice where code changes are frequently combined into a shared repository. This process includes automatic checks to ensure the new code works well with the existing code.
Continuous Deployment (CD) takes care of automatically putting these code changes into real-world use. It makes sure that the process of moving new code to production is smooth and reliable.
This visual guide is designed to help you grasp and enhance your methods for creating and delivering software more effectively.
Over to you: Which tools or strategies do you find most effective in implementing CI/CD in your projects?
Apache Kafka is a distributed event streaming platform that lets producers publish data and consumers subscribe to it in real-time. Here’s how it works:
A producer application creates data, like website clicks or payment events.
The data is converted by a serializer into bytes so Kafka can handle it.
A partitioner decides which topic partition the message should go to.
The message is published into a Kafka cluster made of multiple brokers.
Each broker stores partitions of topics and replicates them to others for safety.
Messages inside partitions are stored in order and available for reading.
A consumer group subscribes to the topic and takes responsibility for processing data.
Each consumer in the group reads from different partitions to balance the work.
Consumers process the data in real-time, such as updating dashboards or triggering actions.
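To make the flow concrete, here is a minimal producer/consumer sketch using the kafka-python client; the broker address, topic name, and group ID are assumptions for illustration:

```python
import json
from kafka import KafkaProducer, KafkaConsumer   # pip install kafka-python

# Producer: serialize the event to bytes and publish it; the partitioner picks the partition.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                          # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("clicks", {"user": "u123", "page": "/checkout"})
producer.flush()

# Consumer: join a group; partitions are balanced across the group's members.
consumer = KafkaConsumer(
    "clicks",
    bootstrap_servers="localhost:9092",
    group_id="dashboard-service",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    print(msg.partition, msg.offset, msg.value)                  # process events in real time
```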
Over to you: Have you used Apache Kafka?
While load balancers, API gateways, and reverse proxies overlap in functionality, each plays a specific role that can be leveraged in a project.
A client request first hits the edge load balancer, which provides a single entry point and distributes traffic to the API Gateway.
The API Gateway takes over and performs initial parameter validations to ensure the request is well-formed.
Next, the whitelist verification checks if the source is trusted and allowed to access the APIs.
Authentication and authorization validate the identity of the requester and their permissions.
Rate limiting ensures the client is not overwhelming the system with too many requests.
Finally, the reverse proxy inside the gateway forwards the request to the correct service endpoint.
At the service layer, another load balancer distributes the request across multiple instances of the target microservice.
The chosen service instance processes the request and returns the response to the client.
Over to you: Have you used load balancers, API gateways, and reverse proxies in your applications?
2025-09-12 23:30:39
BIG announcement: We’ve launched a new YouTube channel to make learning AI easier — ByteByteAI!
Our first video is already live, and we plan to post a new one every week.
Here's a sneak peek into some titles you might see in the future:
- How Are Reasoning LLMs Like “GPT-5” Built?
- How to Build a Coding Agent?
- How LLMs See the World
- The $250M Paper - Molmo
- What Is Prompt and Context Engineering?
- How Does YouTube Recommend Videos?
- How Does Netflix Recommend Shows?
- How Does Google Translate Work?
- How to Build a Text-to-Image System?
- Are Small Language Models the future of agentic AI?
- How do LLMs remember things?
- Hacking AI with Words: Prompt Injection Explained
- And many more…
2025-09-11 23:30:46
In the world of distributed systems, one of the hardest problems isn’t just storing or retrieving data. It’s figuring out where that data should live when we have dozens, hundreds, or even thousands of servers.
Imagine running a large-scale web service where user profiles, cached web pages, or product catalogs need to be spread across multiple machines.
Consistent hashing emerged as a clever solution to this problem and quickly became one of the foundational ideas for scaling distributed systems.
Instead of scattering keys randomly and having to reshuffle them every time the cluster size changes, consistent hashing ensures that only a small, predictable portion of keys needs to move when servers are created or destroyed. This property, often described as “minimal disruption,” is what makes the technique so powerful.
Over the years, consistent hashing has been adopted by some of the largest companies in technology. It underpins distributed caching systems like memcached, powers databases like Apache Cassandra and Riak, and is at the heart of large-scale architectures such as Amazon Dynamo. When browsing a social media feed, streaming a video, or shopping online, chances are that consistent hashing is working quietly in the background to keep the experience smooth and fast.
In this article, we will look at consistent hashing in detail. We will also understand the improvements to consistent hashing using virtual nodes and how it helps scale systems.
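As a concrete illustration of the idea (not the exact scheme any particular system uses), here is a minimal hash ring with virtual nodes in Python:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal hash ring with virtual nodes; illustrative, not production code."""

    def __init__(self, nodes=(), vnodes: int = 100):
        self.vnodes = vnodes
        self._ring: list[tuple[int, str]] = []     # sorted (hash, node) pairs
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        for i in range(self.vnodes):               # spread each server around the ring
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node: str) -> None:
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)  # first vnode clockwise
        return self._ring[idx][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.get("user:42"))   # adding or removing a node moves only a small fraction of keys
```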
2025-09-10 23:30:09
Cut onboarding time, reduce interruptions, and ship faster by surfacing the knowledge locked across GitHub, Slack, Jira, and Confluence (and more). You get:
Instant answers to questions about your architecture, past workarounds, and current projects.
An MCP Server that supercharges Claude and Cursor with your team knowledge so they generate code that makes sense in your codebase.
Agent that posts root cause and fix suggestions for CI failures directly in your Pull Request.
A virtual member of your team that automates internal support without extra overhead.
Disclaimer: The details in this post have been derived from the official documentation shared online by the DoorDash Engineering Team. All credit for the technical details goes to the DoorDash Engineering Team. The links to the original articles and sources are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
When we order food online, the last thing we want is an out-of-date or inaccurate menu.
However, for delivery platforms, keeping menus fresh is a never-ending challenge. Restaurants constantly update items, prices, and specials, and doing all of this manually at scale is costly and slow.
DoorDash tackled this problem by applying large language models (LLMs) to automate the process of turning restaurant menu photos into structured, usable data. The technical goal of their project was clear: achieve accurate transcription of menu photos into structured menu data while keeping latency and cost low enough for production at scale.
On the surface, the idea is straightforward: take a photo, run it through AI, and get back a clean digital menu. In practice, though, the messy reality of real-world images (cropped photos, poor lighting, cluttered layouts) quickly exposes the limitations of LLMs on their own.
The key insight was that LLMs, while strong at summarization and organization, break down when faced with noisy or incomplete inputs. To overcome this, DoorDash designed a system with guardrails. These are mechanisms that decide when automation is reliable enough to use and when a human needs to step in.
In this article, we will look at how DoorDash designed such a system and the challenges they faced.
The first step was to prove whether menus could be digitized at all in an automated way.
The engineering team started with a simple pipeline: OCR to LLM. The OCR system extracted raw text from menu photos, and then a large language model was tasked with converting that text into a structured schema of categories, items, and attributes.
This approach worked well enough as a prototype.
It showed that a machine could, in principle, take a photo of a menu and output something resembling a digital menu. But once the system was tested at scale, cracks began to appear. Accuracy suffered in ways that were too consistent to ignore.
The main reasons were as follows:
Inconsistent menu structures: Real-world menus are not neatly ordered lists. Some are multi-column, others use mixed fonts, and many scatter categories and items in unpredictable ways. OCR tools often scramble the reading order, which means the LLM ends up pairing items with the wrong attributes or misplacing categories entirely.
Incomplete menus: Photos are often cropped or partial, capturing only sections of a menu. When the LLM receives attributes without their parent items, or items without their descriptions, it makes guesses. These guesses lead to mismatches and incorrect entries in the structured output.
Low photographic quality: Many menu photos are taken in dim lighting, with glare from glass frames or clutter in the background. Small fonts and angled shots add to the noise. Poor image quality reduces OCR accuracy, and the errors cascade into the LLM stage, degrading the final transcription.
Through human evaluation, the team found that nearly all transcription failures could be traced back to one of these three buckets.
AI is the most essential technical skill of this decade.
CEOs of GitHub, Box, and others are prioritising hiring engineers with AI skills.
Engineers, devs, and technical leaders at Fortune 1000s + leading Silicon Valley startups read Superhuman AI to stay ahead of the curve and future-proof their skills.
To solve the accuracy problem, the engineering team introduced what they call a guardrail model.
At its core, this is a classifier that predicts whether the transcription produced from a given menu photo will meet the accuracy bar required for production. The logic is straightforward:
If the guardrail predicts that the output is good enough, the structured menu data is automatically published.
If the guardrail predicts a likely failure, the photo is routed to a human for transcription.
Building the guardrail meant more than just looking at the image.
The team realized the model needed to understand how the photo, the OCR system, and the LLM all interacted with each other. So they engineered features from three different sources:
Image-level features: These capture the quality of the photo itself, whether it is dark, blurry, has glare, or is cluttered with background objects.
OCR-derived features: These measure the reliability of the text extraction, such as how orderly the tokens are, whether confidence scores are high, or if the system has produced fragments and junk text.
LLM-output features: These reflect the quality of the structured transcription, such as how internally consistent the categories and attributes are, or whether the coverage looks incomplete.
This multi-view approach directly targets the three failure modes identified earlier: inconsistent menu structure, incomplete menus, and poor photographic quality.
By combining signals from the image, the OCR process, and the LLM itself, the guardrail learns to separate high-confidence transcriptions from those that are likely to go wrong.
Designing the guardrail model opened up the question of which architecture would actually work best in practice.
The team experimented with a three-component neural network design that looked like this:
Image encoding: The raw menu photo was passed through a pretrained vision backbone. They tried CNN-based models like VGG16 and ResNet, as well as transformer-based models such as ViT (Vision Transformer) and DiT (Document Image Transformer).
Tabular features: Alongside the image encoding, the network ingested features derived from the OCR output and the LLM transcription.
Fusion and classification: These inputs were combined through fully connected layers, ending in a classifier head that predicted whether a transcription was accurate enough.
The diagram below illustrates this design: an image model on one side, OCR/LLM tabular features on the other, both feeding into dense layers and then merging into a final classifier. It’s a standard multimodal fusion approach designed to capture signals from multiple sources simultaneously.
The results, however, were surprising.
Despite the sophistication of the neural network, the simplest model (LightGBM: a gradient-boosted decision tree) outperformed all the deep learning variants.
LightGBM not only achieved higher accuracy but also ran faster, which made it far more suitable for production deployment. Among the neural network variants, ResNet-based encoding came closest, while ViT-based models performed worst. The main reason was data: limited labeled samples made it difficult for the more complex architectures to shine.
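A hedged sketch of what such a guardrail classifier could look like with LightGBM. The dataset and feature names below are invented for illustration; the real feature set spans the image, OCR, and LLM-output signals described earlier.

```python
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("menu_transcriptions.csv")          # hypothetical labeled dataset
features = [
    "image_blur_score", "image_glare_score",          # image-level signals
    "ocr_mean_confidence", "ocr_token_order_score",   # OCR-derived signals
    "llm_schema_consistency", "llm_item_coverage",    # LLM-output signals
]
X_train, X_val, y_train, y_val = train_test_split(
    df[features], df["transcription_is_accurate"], test_size=0.2, random_state=42
)

clf = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05)
clf.fit(X_train, y_train, eval_set=[(X_val, y_val)])

# At inference time, the predicted probability drives the publish/escalate decision.
publish_probability = clf.predict_proba(X_val)[:, 1]
```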
Once the guardrail model was in place, the team built a full production pipeline that balanced automation with human review. It works step by step:
Photo validation: Every submitted menu photo goes through basic checks to ensure the file is usable.
Transcription stage: The candidate model (initially the OCR + LLM pipeline) generates a structured transcription from the photo.
Guardrail inference: Features from the photo, OCR output, and LLM summary are fed into the guardrail model, which predicts whether the transcription meets accuracy requirements.
Routing decisions: If the guardrail predicts the transcription is accurate, the structured data is published automatically. If the guardrail predicts likely errors, the photo is escalated to human transcription.
The diagram below shows this pipeline as a flow: menu photos enter, pass through the transcription model, then are evaluated by the guardrail. From there, accurate cases flow directly into the system, while uncertain ones branch off toward human operators.
This setup immediately raised efficiency. Machines handled the straightforward cases quickly, while humans focused their effort on the difficult menus. The result was a balanced process: automation sped up operations and cut costs without lowering the quality of the final menu data.
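The routing step itself reduces to a threshold check on the guardrail's score. In the sketch below, the threshold value and every helper function are assumptions standing in for DoorDash's internal services:

```python
PUBLISH_THRESHOLD = 0.9   # assumed accuracy bar; in practice tuned against human evaluation

def process_menu_photo(photo):
    transcription = transcribe(photo)                  # candidate model: OCR + LLM or multimodal LLM
    features = build_features(photo, transcription)    # image, OCR, and LLM-output signals
    score = guardrail.predict_proba([features])[0][1]  # probability the transcription is accurate

    if score >= PUBLISH_THRESHOLD:
        publish(transcription)                         # automated path
    else:
        escalate_to_human(photo)                       # human transcription path
```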
The pace of AI research did not stand still. In the months after the first guardrail model went live, multimodal LLMs (models that could process both images and text directly) became practical enough to try in production. Instead of relying only on OCR to extract text, these models could look at the raw photo and infer structure directly.
The DoorDash engineering team integrated these multimodal models alongside the existing OCR + LLM pipeline. Each approach had clear strengths and weaknesses:
Multimodal LLMs proved excellent at understanding context and layout. They could better interpret menus with unusual designs, multi-column layouts, or visual cues that OCR often scrambled. However, they were also more brittle when the photo itself was of poor quality, with dark lighting, glare, or partial cropping.
OCR and LLM models were more stable across noisy or degraded inputs, but they struggled with nuanced layout interpretation, often mislinking categories and attributes.
The diagram below shows how the two pipelines now coexist under the same guardrail system.
Both models attempt transcription, and their outputs are evaluated. The guardrail then decides which transcriptions meet the accuracy bar and which need human review.
This hybrid setup led to the best of both worlds. By letting the guardrail arbitrate quality between multimodal and OCR-based models, the system boosted automation rates while still preserving the high accuracy required for production.
Automating the transcription of restaurant menus from photos is a deceptively complex problem. What began as a simple OCR-to-LLM pipeline quickly revealed its limits when confronted with messy, real-world inputs: inconsistent structures, incomplete menus, and poor image quality.
The engineering team’s solution was not just to push harder on the models themselves, but to rethink the system architecture. The introduction of a guardrail classifier allowed automation to scale responsibly, ensuring that customers and restaurants always saw accurate menus while machines handled the simpler cases.
As the field of generative AI evolved, the system evolved with it.
By combining OCR and LLM models with newer multimodal approaches under the same guardrail framework, DoorDash was able to harness the strengths of both families of models without being trapped by their weaknesses.
Looking ahead, several opportunities remain open:
Domain fine-tuning: The growing dataset of human-verified transcriptions can be used to fine-tune LLMs and multimodal models for the specific quirks of restaurant menus.
Upstream quality controls: Investing in photo preprocessing with techniques like de-glare, de-noising, de-skewing, and crop detection will lift the accuracy of both OCR-based and multimodal systems.
Guardrail refinement: As models continue to improve, so can the guardrail. Expanding its feature set, retraining LightGBM, or even exploring hybrid architectures will push safe automation further.
References:
Launching the All-in-one interview prep. We’re making all the books available on the ByteByteGo website.
What's included:
System Design Interview
Coding Interview Patterns
Object-Oriented Design Interview
How to Write a Good Resume
Behavioral Interview (coming soon)
Machine Learning System Design Interview
Generative AI System Design Interview
Mobile System Design Interview
And more to come