2025-11-29 00:32:57
Our yearly Black Friday sale is now live! Use code BF2025 at checkout to get 30% off our all-in-one interview prep online courses.
To take advantage of this limited time offer, subscribe before 11:59 pm PST on Monday, December 1.
2025-11-28 00:30:51
Every modern application needs to handle transactions. These are operations that must either succeed completely or fail completely, leaving no partial effects behind.
In a monolithic system, this process is usually straightforward. The application talks to a single database, and all the data needed for a business operation lives in one place. Developers can use built-in database transactions or frameworks that automatically manage them to ensure that the system remains consistent even when something goes wrong.
For example, when you make an online payment, the application might deduct money from your account and record the transaction in a ledger. Both actions happen within a single database transaction. If one action fails, the database automatically rolls everything back so that no partial updates are left behind. This behavior is part of the ACID properties (Atomicity, Consistency, Isolation, and Durability) that guarantee reliable and predictable outcomes.
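This all-or-nothing behavior is easy to demonstrate with any transactional database. Below is a minimal sketch using Python's built-in SQLite module; the table names and the simulated ledger failure are invented for illustration:

```python
import sqlite3

# In-memory database with two tables standing in for an account and a ledger.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE ledger (entry TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        raise RuntimeError("ledger write failed")  # simulate a mid-transaction failure
except RuntimeError:
    pass

# The deduction was rolled back together with the failed step: no partial update survives.
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100
```

Because the deduction and the ledger write share one transaction, the failure of either leaves the account untouched.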
However, as systems evolve and grow larger, many organizations adopt a services-based or microservices architecture. In such architectures, a business process often involves multiple services, each managing its own database. For instance, an e-commerce system might have separate services for orders, payments, shipping, and inventory. Each of these services owns its own data store and operates independently.
Now imagine a business transaction that spans all these services. Placing an order might require updating the order database, reserving stock in the inventory database, and recording payment details in another database. If one of these steps fails, the system must find a way to keep all services consistent. This is where the problem begins.
See the diagram below:
This challenge is known as the problem of distributed transactions. Traditional techniques like two-phase commit (2PC) attempt to coordinate commits across multiple databases, but they can reduce performance, limit availability, and add significant complexity. As applications become more distributed and use different types of databases or message brokers, these traditional methods become less practical.
To overcome these limitations, modern architectures rely on alternative patterns that provide consistency without strict coupling or blocking behavior. One of the most effective of these is the Saga pattern.
In this article, we will look at how the Saga pattern works and the pros and cons of various approaches to implement this pattern.
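As a preview of the idea, here is a minimal, hypothetical sketch of an orchestrated saga: each step pairs a forward action with a compensating action, and a failure triggers the compensations for already-completed steps in reverse order. The service steps and failure are invented for illustration:

```python
def run_saga(steps):
    """Execute (action, compensation) pairs in order.
    If any action fails, run compensations for completed steps in reverse."""
    completed = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            for undo in reversed(completed):
                undo()  # best-effort compensation
            return False
        completed.append(compensation)
    return True

log = []

def create_order():   log.append("order created")
def cancel_order():   log.append("order cancelled")
def reserve_stock():  log.append("stock reserved")
def release_stock():  log.append("stock released")
def charge_payment(): raise RuntimeError("payment declined")  # simulated failure
def refund_payment(): log.append("payment refunded")

ok = run_saga([
    (create_order, cancel_order),
    (reserve_stock, release_stock),
    (charge_payment, refund_payment),
])
print(ok)   # False
print(log)  # ['order created', 'stock reserved', 'stock released', 'order cancelled']
```

Note that the compensations undo work semantically (cancel, release) rather than rolling back a database transaction, which is exactly what makes the pattern workable across independent services.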
2025-11-25 00:30:48
AI is only as powerful as the data behind it — but most teams aren’t ready.
We surveyed 200 senior IT and data leaders to uncover how enterprises are really using streaming to power AI, and where the biggest gaps still exist.
Discover the biggest challenges in real-time data infrastructure, the top obstacles slowing down AI adoption, and what high-performing teams are doing differently in 2025.
Download the full report to see where your organisation stands.
Disclaimer: The details in this post have been derived from the details shared online by the Zalando Engineering Team. All credit for the technical details goes to the Zalando Engineering Team. The links to the original articles and sources are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
Zalando is one of Europe’s largest fashion and lifestyle platforms, connecting thousands of brands, retailers, and physical stores under one digital ecosystem.
As the company’s scale grew, so did the volume of commercial data it generated. This included information about product performance, sales patterns, pricing insights, and much more. This data was not just important for Zalando itself but also for its vast network of retail partners who relied on it to make critical business decisions.
However, sharing this data efficiently with external partners became increasingly complex.
Zalando’s Partner Tech division, responsible for data sharing and collaboration with partners, found itself managing a fragmented and inefficient process. Partners needed clear visibility into how their products were performing on the platform, but accessing that information was far from seamless. Data was scattered across multiple systems and shared through a patchwork of methods. Some partners received CSV files over SFTP, others pulled data via APIs, and many depended on self-service dashboards to manually export reports. Each method served a purpose, but together they created a tangled system where consistency and reliability were hard to maintain.
Many partners had to dedicate the equivalent of 1.5 full-time employees each month just to extract, clean, and consolidate the data they received. Instead of focusing on strategic analysis or market planning, skilled analysts spent valuable time performing repetitive manual work.
There was also a serious accessibility issue. The existing interfaces were not designed for heavy or large-scale data downloads. Historical data was often unavailable when partners needed it most, such as during key planning or forecasting cycles. As a result, even well-resourced partners struggled to build an accurate picture of their own performance.
This problem highlighted a critical gap in Zalando’s data strategy. Partners did not just want raw data or operational feeds. They wanted analytical-ready datasets that could be accessed programmatically and integrated directly into their internal analytics tools. In simple terms, they needed clean, governed, and easily retrievable data that fit naturally into their business workflows.
To address this challenge, the Zalando Engineering Team began a multi-year journey to rebuild its partner data sharing framework from the ground up. The result of this effort was Zalando’s adoption of Delta Sharing, an open protocol for secure data sharing across organizations. In this article, we will look at how Zalando built such a system and the challenges they faced.
To solve the problem of fragmented data sharing, the Zalando Engineering Team first needed to understand who their partners were and how they worked with data.
Zalando operates through three major business models:
Wholesale: Zalando purchases products from brands and resells them directly on its platform.
Partner Program: Brands list and sell products directly to consumers through Zalando’s marketplace.
Connected Retail: Physical retail stores connect their local inventory to an online platform, allowing customers to buy nearby and pick up in person.
Each of these models generates unique datasets, and the scale of those datasets varies dramatically. A small retailer may only deal with a few hundred products and generate a few megabytes of data each week. In contrast, a global brand might handle tens of thousands of products and need access to hundreds of terabytes of historical sales data for planning and forecasting.
In total, Zalando manages more than 200 datasets that support a business generating over €5 billion in gross merchandise value (GMV). These datasets are critical to helping partners analyze trends, adjust pricing strategies, manage inventory, and plan promotions. However, not all partners have the same level of technical sophistication or infrastructure to consume this data effectively.
Zalando’s partners generally fall into three categories based on their data maturity. See the table below:
Large enterprise partners often have their own analytics teams, data engineers, and infrastructure. They expect secure, automated access to data that integrates directly into their internal systems. Medium-sized partners prefer flexible solutions that combine manual and automated options, such as regularly updated reports and dashboards. Smaller partners value simplicity above all else, often relying on spreadsheet-based workflows and direct downloads.
Zalando’s existing mix of data-sharing methods (such as APIs, S3 buckets, email transfers, and SFTP connections) worked in isolation but could not scale to meet all these varied needs consistently.
After understanding the different needs of its partner ecosystem, the Zalando Engineering Team began to look for a better, long-term solution. The goal was not only to make data sharing faster but also to make it more reliable, scalable, and secure for every partner, from small retailers to global brands.
The team realized that fixing the problem required more than improving existing systems. They needed to design an entirely new framework that could handle massive datasets, provide real-time access, and adapt to each partner’s technical capability without creating new complexity. To do that, Zalando created a clear list of evaluation criteria that would guide their decision.

First, the solution had to be cloud-agnostic. Zalando’s partners used a variety of technology stacks and cloud providers. Some worked with AWS, others used Google Cloud, Azure, or even on-premise systems. The new system needed to work seamlessly across all these environments without forcing partners to change their existing infrastructure.
Second, the platform had to be open and extensible. This meant avoiding dependence on a single vendor or proprietary technology. Zalando wanted an open-standard approach that could evolve and integrate with different tools, systems, and workflows.
Third, the solution needed strong performance and scalability. With over 200 datasets and some reaching hundreds of terabytes in size, performance could not be an afterthought. The system had to handle large-scale data transfers and queries efficiently while maintaining low latency and high reliability.
Security was another non-negotiable factor. The platform had to support granular security and auditing features. This included data encryption, access control at the table or dataset level, and comprehensive logging for compliance and traceability. Since partners would be accessing sensitive commercial data, robust governance mechanisms were essential to maintain trust.
The next requirement was flexibility in data access patterns. Partners used data in different ways, so the system had to support:
Real-time streaming for partners who needed up-to-the-minute insights
Batch and incremental updates for partners who preferred scheduled or partial data loads
Historical data access for partners who needed to analyze long-term trends
Finally, the solution had to be easy to integrate with the tools that partners were already using. Whether it was business intelligence dashboards, data warehouses, or analytics pipelines, the new system should fit naturally into existing workflows rather than force partners to rebuild them from scratch.
The search for such a system eventually led them to Delta Sharing, an open protocol specifically designed for secure data sharing across organizations. This discovery would go on to transform the way Zalando and its partners collaborate on data.
After months of evaluation and research, the Zalando Engineering Team found a technology that met nearly all of their requirements: Delta Sharing.
Delta Sharing is an open protocol designed specifically for secure, zero-copy data sharing across organizations. This means that partners can access live data directly from its original location without creating separate copies or transferring large files across systems.
The team immediately recognized how well this approach fit their goals. It offered the openness, scalability, and security they needed while being simple enough to integrate into partners’ existing tools and workflows. Key features of Delta Sharing are as follows:
Zero-copy access: Partners can query live datasets directly without needing to download or duplicate them. This eliminates data redundancy and ensures that everyone works with the most up-to-date information.
Open standard: Because Delta Sharing is based on open principles, it works seamlessly with a wide range of tools and platforms. Partners can connect through Pandas, Apache Spark, Tableau, or even Microsoft Excel, depending on their needs.
Granular access control: Data is shared securely using token-based authentication and credential files, which means each partner receives access tailored to their role and data permissions.
Scalable performance: The protocol efficiently handles very large datasets, even those that exceed terabytes in size, while maintaining high reliability and low latency.
Security by design: Features such as encryption, auditing, and logging are built into the system. This ensures that all data access is traceable and compliant with internal governance policies.
While Delta Sharing is available as an open-source protocol, Zalando decided to implement the Databricks Managed Delta Sharing service instead of hosting its own version. This choice was made for several practical reasons:
It integrates tightly with Unity Catalog, Databricks’ governance and metadata layer. This allowed Zalando to maintain a single source of truth for datasets and permissions.
It provides enterprise-grade security, compliance, and auditability, which are essential when dealing with sensitive commercial data from multiple organizations.
It removes the operational overhead of managing and maintaining sharing servers, tokens, and access logs internally.
By using the managed service, the Zalando Engineering Team could focus on delivering value to partners rather than spending time maintaining infrastructure.
Once the Zalando Engineering Team validated Delta Sharing as the right solution, the next challenge was designing a clean and efficient architecture that could be scaled across thousands of partners. Their approach was to keep the system simple, modular, and easy to manage while ensuring that security and governance remained central to every layer.
At its core, the new data-sharing framework relied on three main building blocks that defined how data would be organized, accessed, and distributed:
Delta Share: A logical container that groups related Delta Tables for distribution to external recipients.
Recipient: A digital identity representing each partner within the Delta Sharing system.
Activation Link: A secure URL that allows partners to download their authentication credentials and connect to shared datasets.
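For context, the credential file a partner retrieves through the activation link is a small JSON profile. The field names below follow the open Delta Sharing protocol's profile format; the endpoint and token values are placeholders, not Zalando's actual configuration:

```json
{
  "shareCredentialsVersion": 1,
  "endpoint": "https://sharing.example.com/delta-sharing/",
  "bearerToken": "<partner-specific-token>",
  "expirationTime": "2026-01-01T00:00:00Z"
}
```

Any Delta Sharing client (Pandas, Spark, BI connectors) reads this file to authenticate and locate the sharing server.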
This architecture followed a clear, three-step data flow designed to keep operations transparent and efficient:
Data Preparation and Centralization: All partner datasets were first curated and stored in scalable storage systems as Delta Tables. These tables were then registered in Unity Catalog, which acted as the metadata and governance layer. Unity Catalog provided a single source of truth for data definitions, schema consistency, and lineage tracking, ensuring that every dataset was traceable and well-documented.
Access Configuration: Once datasets were ready, the engineering team created a Recipient entry for each partner and assigned appropriate permissions. Each recipient received an activation link, which allowed them to securely access their data credentials. This setup ensured that partners only saw the data they were authorized to access while maintaining strict access boundaries between different organizations.
Direct Partner Access: When a partner activated their link, they retrieved a credential file and authenticated through a secure HTTPS connection. They could then directly query live data without duplication or manual transfer. Since the data remained centralized in Zalando’s data lakehouse, there were no synchronization issues or redundant copies to maintain.
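To illustrate what that access looks like in practice, the snippet below builds the table locator a Delta Sharing client consumes. The profile file name and the share, schema, and table names are hypothetical; with the open-source `delta-sharing` Python client installed, the actual read would be a single call such as `delta_sharing.load_as_pandas(locator)`:

```python
def table_locator(profile_path: str, share: str, schema: str, table: str) -> str:
    """Delta Sharing addresses a table as <profile-file>#<share>.<schema>.<table>."""
    return f"{profile_path}#{share}.{schema}.{table}"

# Hypothetical names for illustration only.
locator = table_locator("config.share", "zalando_sales", "reporting", "weekly_performance")
print(locator)  # config.share#zalando_sales.reporting.weekly_performance

# With the client library installed, a partner would then load live data, e.g.:
#   import delta_sharing
#   df = delta_sharing.load_as_pandas(locator)
```

Because the locator points at live tables rather than exported files, there is nothing to re-download when the underlying data changes.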
This architecture brought several benefits. Partners now had real-time access to data, partner-specific credentials ensured granular security, and no redundant storage simplified maintenance.
To implement this system in Databricks, Zalando followed a clear operational workflow:
Prepare the Delta Tables and register them in Unity Catalog.
Create a Share to group related datasets.
Add the relevant tables to that share.
Create a Recipient representing each partner.
Grant the appropriate permissions to the recipient.
See the diagram below:
Every step was guided by Databricks’ Delta Sharing API documentation, allowing the team to automate processes where possible and maintain strong governance controls.
Once the new data-sharing architecture was in place, the Zalando Engineering Team understood that technology alone would not guarantee success. For the system to work, partners needed to be able to use it confidently and easily. Usability became just as important as performance or scalability.
To make the onboarding process smooth, Zalando created a range of partner-facing resources. These included step-by-step user guides that explained how to connect to Delta Sharing using tools familiar to most data teams, such as Pandas, Apache Spark, and common business intelligence (BI) platforms. Each guide walked partners through the entire process—from receiving their activation link to successfully accessing and querying their first dataset.
The team also built detailed troubleshooting documentation. This helped partners solve common issues such as expired credentials, connection errors, or authentication problems without needing to contact support. By empowering partners to self-diagnose and fix minor issues, Zalando reduced delays and improved overall efficiency.
In addition, they developed prebuilt connector snippets—small code templates that partners could plug directly into their existing data pipelines. These snippets made it possible to integrate Zalando’s data into existing workflows within minutes, regardless of whether a partner used Python scripts, Spark jobs, or visualization tools.
Together, these efforts dramatically reduced onboarding friction. Instead of days of setup and testing, partners could access and analyze data in a matter of minutes. This ease of use quickly became one of the platform’s strongest selling points.
The success of the Partner Tech pilot did not go unnoticed within Zalando. Other teams soon realized that they faced similar challenges when sharing data with internal or external stakeholders. Rather than allowing every department to build its own version of the solution, Zalando decided to expand the Delta Sharing setup into a company-wide platform for secure and scalable data distribution.
This new platform came with several key capabilities:
Unified recipient management: Centralized control of who receives what data, ensuring consistent governance.
Built-in best practices: Guidelines for preparing datasets before sharing, helping teams maintain high data quality.
Standardized security and governance policies: Every department followed the same data-sharing rules, simplifying compliance.
Cross-team documentation and automation: Shared tools and documentation made it easier for new teams to adopt the platform without starting from scratch.
Looking ahead, Zalando plans to introduce OIDC Federation, a feature that allows partners to authenticate using their own identity systems. This will remove the need for token-based authentication and make access even more secure and seamless.
Zalando’s journey to modernize partner data sharing was both a technical and organizational transformation. By focusing on real partner challenges, the Zalando Engineering Team built a system that balanced openness, governance, and usability—creating long-term value for both the company and its ecosystem.
The key lessons were as follows:
Start with partner needs, not technology. Deep research into partner workflows helped Zalando design a solution that solved real pain points rather than adding complexity.
Design for diversity. A single rigid model could not serve everyone, so the platform was built to support different partner sizes, tools, and technical skills.
Cross-team collaboration is essential. Close cooperation between the Data Foundation, AppSec, and IAM teams ensured consistency, security, and compliance from day one.
Manual processes are acceptable for pilots but not for scale. Early manual steps were valuable for testing ideas, but later became automation goals as the platform grew.
Internal adoption validates external value. When other Zalando teams began using Delta Sharing, it confirmed the platform’s effectiveness beyond its original use case.
Security must be embedded from the start. Integrating encryption, access control, and auditing early prevented rework and established long-term trust.
Documentation is a product feature. Clear guides, troubleshooting steps, and code examples made onboarding fast and self-service for partners.
Managed is better than self-managed. Relying on Databricks’ managed Delta Sharing service gave Zalando operational stability and freed engineers to focus on partner success.
Delta Sharing has fundamentally changed how Zalando exchanges data with its partners. The company moved from fragmented exports to a unified, real-time, and governed data-sharing model. This shift has produced the following impact:
Reduced manual data handling and partner friction.
Enabled faster, data-driven decision-making through consistent access.
Created a scalable foundation for cross-partner analytics and collaboration.
Established a reusable enterprise framework for secure data exchange.
References:
Get your product in front of more than 1,000,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing [email protected].
2025-11-23 00:30:44
Failed checkouts, dropped jobs, timeouts that don’t even throw errors ➡️ these are the issues that slow your team down.
With Sentry Logs, you can trace every step, from the user’s request through your code to the log line. And because Logs connect automatically with your errors, traces, and even replays, all your debugging context lives in one place.
TLDR: Sentry has Logs. More context, fewer tabs, faster fixes, more time shipping.
This week’s system design refresher:
Cloudflare vs. AWS vs. Azure
Popular Backend Tech Stack
HTTP vs. HTTPS
Forward Proxy versus Reverse Proxy
Concurrency is NOT Parallelism
SPONSOR US
Cloudflare is much more than just a CDN and DDoS protection service. Let’s do a quick comparison of Cloudflare, AWS, and Azure.
Cloudflare has rapidly expanded beyond its traditional CDN roots, launching a suite of modern developer-first services like Workers, R2, D1, and so on. These offerings position it as a serious edge-native alternative to other cloud providers.
Here are the key cloud capabilities that Cloudflare supports:
Edge Compute and Serverless
Object and Blob Storage
Relational Databases
Containers
Sandboxes
Workflows
AI Agents SDK
Vector and AI search
Data Connectivity
AI Infrastructure
Content Delivery Network
DNS
Load Balancing
Over to you: Have you used Cloudflare’s new offerings? What are your thoughts on them?
When you open a website, the difference between HTTP and HTTPS decides whether your data travels safely or in plain sight. Here’s what actually happens under the hood:
HTTP:
Sends data in plain text, so anyone on the network can intercept it.
The client and server perform a simple TCP handshake: SYN, SYN-ACK, ACK
Fast but completely insecure. Passwords, tokens, and forms can all be read in transit.
HTTPS (SSL/TLS):
Step 1: TCP Handshake: Standard connection setup.
Step 2: Certificate Check: Client says hello. Server responds with hello and its SSL/TLS certificate. That certificate contains the server’s public key and is signed by a trusted Certificate Authority.
Your browser verifies this certificate is legitimate, not expired, and actually belongs to the domain you’re trying to reach. This proves you’re talking to the real server, not some attacker pretending to be it.
Step 3: Key Exchange: Here’s where asymmetric encryption happens. The server has a public key and a private key. Client generates a session key, encrypts it with the server’s public key, and sends it over. Only the server can decrypt this with its private key.
Both sides now have the same session key that nobody else could have intercepted. This becomes the symmetric encryption key for the rest of the session.
Step 4: Data Transmission: Now every request and response gets encrypted with that session key using symmetric encryption.
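Application code rarely implements the certificate check in Step 2 by hand; TLS libraries enforce it by default. As a small illustration, a default client context from Python's standard `ssl` module enables exactly the verification described above:

```python
import ssl

# A default client context refuses unverified certificates and checks that
# the certificate actually belongs to the hostname being contacted.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate chain must validate
print(ctx.check_hostname)                    # True: cert must match the domain
print(ctx.minimum_version)                   # e.g. TLSVersion.TLSv1_2 on modern Pythons
```

Disabling either of these checks reintroduces the impersonation risk that the handshake is designed to prevent.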
Over to you: What’s your go-to tool for debugging TLS issues, openssl, curl -v, or something else?
A forward proxy sits between clients (users) and the internet. It acts on behalf of users, hiding their identity or filtering traffic before it reaches the external web.
Some applications of a forward proxy are:
Protects users while browsing the internet.
Helps organizations restrict access to certain websites.
Speeds up web browsing by caching frequently accessed content.
A reverse proxy sits between the internet (clients) and backend servers. It acts on behalf of servers, handling incoming traffic.
Some applications of a reverse proxy are:
Distributes traffic across multiple servers to ensure no single server is overwhelmed.
Handles SSL encryption/decryption so backend servers don’t have to.
Helps protect backend servers from DDoS attacks.
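The traffic-distribution role is easy to picture with a toy scheduler. The sketch below shows round-robin selection, one of the simplest strategies a reverse proxy or load balancer can use; the backend addresses are made up:

```python
import itertools

# Hypothetical backend pool behind the reverse proxy.
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
next_backend = itertools.cycle(backends).__next__

# Six incoming requests are spread evenly across the three servers.
assignments = [next_backend() for _ in range(6)]
print(assignments)
```

Real proxies layer health checks, weighting, and session affinity on top of this basic rotation.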
Over to you: What else will you add to understand forward proxy and reverse proxy?
In system design, it is important to understand the difference between concurrency and parallelism.
As Rob Pike (one of the creators of Go) stated: “Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.” This distinction emphasizes that concurrency is more about the design of a program, while parallelism is about the execution.
Concurrency is about dealing with multiple things at once. It involves structuring a program to handle multiple tasks simultaneously, where the tasks can start, run, and complete in overlapping time periods, but not necessarily at the same instant.
Concurrency is about the composition of independently executing processes and describes a program’s ability to manage multiple tasks by making progress on them without necessarily completing one before it starts another.
Parallelism, on the other hand, refers to the simultaneous execution of multiple computations. It is the technique of running two or more tasks or computations at the same time, utilizing multiple processors or cores within a computer to perform several operations concurrently. Parallelism requires hardware with multiple processing units, and its primary goal is to increase the throughput and computational speed of a system.
In practical terms, concurrency enables a program to remain responsive to input, perform background tasks, and handle multiple operations in a seemingly simultaneous manner, even on a single-core processor. It’s particularly useful in I/O-bound and high-latency operations where programs need to wait for external events, such as file, network, or user interactions.
Parallelism, with its ability to perform multiple operations at the same time, is crucial in CPU-bound tasks where computational speed and throughput are the bottlenecks. Applications that require heavy mathematical computations, data analysis, image processing, and real-time processing can significantly benefit from parallel execution.
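The distinction shows up directly in code. The sketch below overlaps two I/O-style waits with threads, which is concurrency: run back-to-back, the tasks would take roughly the sum of their delays, but run concurrently they take roughly the longest single delay:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(delay):
    time.sleep(delay)  # stands in for a network or disk wait
    return delay

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(io_task, [0.2, 0.2]))
elapsed = time.perf_counter() - start

print(results)                        # [0.2, 0.2]
print(f"elapsed ≈ {elapsed:.2f}s")    # close to 0.2s, not 0.4s: the waits overlapped
```

For CPU-bound work, threads in CPython do not give this speedup because of the global interpreter lock; true parallelism there requires multiple processes or cores.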
2025-11-21 00:30:40
Modern applications must stay online around the clock.
When a banking app goes down during business hours or an e-commerce site crashes on Black Friday, the consequences extend far beyond frustrated users. Revenue evaporates, customer trust erodes, and competitors gain ground.
High availability has transformed from a luxury feature into a baseline expectation.
Building systems that remain operational despite failures requires more than just buying expensive hardware or running extra servers. It demands a combination of architectural patterns, redundancy strategies, and operational discipline. In other words, high availability emerges from understanding how systems fail and designing defenses at multiple layers.
In this article, we will understand what availability means and look at some of the most popular strategies to achieve high availability.
2025-11-19 00:31:27
Today’s systems are getting more complex, more distributed, and harder to manage. If you’re scaling fast, your observability strategy needs to keep up. This eBook introduces Datadog’s Observability Maturity Framework to help you reduce incident response time, automate repetitive tasks, and build resilience at scale.
You’ll learn:
How to unify fragmented data and reduce manual triage
The importance of moving from tool sprawl to platform-level observability
What it takes to go from reactive monitoring to proactive ops
Disclaimer: The details in this post have been derived from the details shared online by the Disney+ Hotstar (now JioHotstar) Engineering Team. All credit for the technical details goes to the Disney+ Hotstar (now JioHotstar) Engineering Team. The links to the original articles and sources are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
In 2023, Disney+ Hotstar faced one of the most ambitious engineering challenges in the history of online streaming. The goal was to support 50 to 60 million concurrent live streams during the Asia Cup and Cricket World Cup, events that attract some of the largest online audiences in the world. For perspective, before this, Hotstar had handled about 25 million concurrent users on two self-managed Kubernetes clusters.
To make things even more challenging, the company introduced a “Free on Mobile” initiative, which allowed millions of users to stream live matches without a subscription. This move significantly expanded the expected load on the platform, creating the need to rethink its infrastructure completely.
Hotstar’s engineers knew that simply adding more servers would not be enough. The platform’s architecture needed to evolve to handle higher traffic while maintaining reliability, speed, and efficiency. This led to the migration to a new “X architecture,” a server-driven design that emphasized flexibility, scalability, and cost-effectiveness at a global scale.
The journey that followed involved a series of deep technical overhauls. From redesigning the network and API gateways to migrating to managed Kubernetes (EKS) and introducing an innovative concept called “Data Center Abstraction,” Hotstar’s engineering teams tackled multiple layers of complexity. Each step focused on ensuring that millions of cricket fans could enjoy uninterrupted live streams, no matter how many joined at once.
In this article, we look at how the Disney+ Hotstar engineering team achieved that scale and the challenges they faced.
Disney+ Hotstar serves its users across multiple platforms such as mobile apps (Android and iOS), web browsers, and smart TVs.
No matter which device a viewer uses, every request follows a structured path through the system.
When a user opens the app or starts a video, their request first goes to an external API gateway, which is managed through Content Delivery Networks (CDNs).
The CDN layer performs initial checks for security, filters unwanted traffic, and routes the request to the internal API gateway.
This internal gateway is protected by a fleet of Application Load Balancers (ALBs), which distribute incoming traffic across multiple backend services.
These backend services handle specific features such as video playback, scorecards, chat, or user profiles, and store or retrieve data from databases, which can be either managed (cloud-based) or self-hosted systems.
Each of these layers (from CDN to database) must be fine-tuned and scaled correctly. If even one layer becomes overloaded, it can slow down or interrupt the streaming experience for millions of viewers.
At a large scale, the CDN nodes were not just caching content like images or video segments. They were also acting as API gateways, responsible for tasks such as verifying security tokens, applying rate limits, and processing each incoming request. These extra responsibilities put a significant strain on their computing resources. The system began to hit limits on how many requests it could process per second.
To make matters more complex, Hotstar was migrating to a new server-driven architecture, meaning the way requests flowed and data was fetched had changed. This made it difficult to predict exactly how much traffic the CDN layer would face during a peak event.
To get a clear picture, engineers analyzed traffic data from earlier tournaments in early 2023. They identified the top ten APIs that generated the most load during live streams. What they found was that not all API requests were equal. Some could be cached and reused, while others had to be freshly computed each time.
This insight led to one of the most important optimizations: separating cacheable APIs from non-cacheable ones.
Cacheable APIs included data that did not change every second, such as live scorecards, match summaries, or key highlights. These could safely be stored and reused for a short period.
Non-cacheable APIs handled personalized or time-sensitive data, such as user sessions or recommendations, which had to be processed freshly for each request.

By splitting these two categories, the team could optimize how requests were handled. The team created a new CDN domain dedicated to serving cacheable APIs with lighter security rules and faster routing. This reduced unnecessary checks and freed up computing capacity on the edge servers. The result was a much higher throughput at the gateway level, meaning more users could be served simultaneously without adding more infrastructure.
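The split described above can be pictured as a set of CDN edge behaviors. The sketch below is purely illustrative (the domains, paths, and field names are assumptions, not Hotstar's actual CDN rules): read-heavy APIs get a dedicated domain with a short edge TTL and lighter security processing, while personalized APIs bypass the cache entirely.

```yaml
# Hypothetical CDN behavior config (illustrative; not Hotstar's actual setup).
behaviors:
  - domain: api-cache.example.com      # assumed dedicated domain for cacheable APIs
    pathPattern: "/v1/scorecard/*"
    cache:
      enabled: true
      ttlSeconds: 5                    # short TTL: scorecards tolerate seconds of staleness
    security:
      wafRules: minimal                # fewer edge rules, less compute per request
  - domain: api.example.com
    pathPattern: "/v1/user/*"
    cache:
      enabled: false                   # personalized responses computed fresh per request
    security:
      wafRules: full
```

The key design point is that the cacheable domain can be tuned aggressively (short TTLs, minimal rule evaluation) without loosening security on the personalized path.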
Hotstar also looked at how frequently different features refreshed their data during live matches. For instance, the scorecard or “watch more” suggestions did not need to update every second. By slightly reducing the refresh rate for such features, they cut down the total network traffic without affecting the viewer experience.
Finally, engineers simplified complex security and routing configurations at the CDN layer. Each extra rule increases processing time, and by removing unnecessary ones, the platform was able to save additional compute resources.
Once the gateway layer was optimized, the team turned its attention to the deeper layers of the infrastructure that handled network traffic and computation.
Two key parts of the system required significant tuning: the NAT gateways that managed external network connections, and the Kubernetes worker nodes that hosted application pods.
Every cloud-based application depends on network gateways to handle outgoing traffic.
In Hotstar’s setup, NAT (Network Address Translation) Gateways were responsible for managing traffic from the internal Kubernetes clusters to the outside world. These gateways act as translators, allowing private resources in a Virtual Private Cloud (VPC) to connect to the internet securely.
During pre-event testing, engineers collected detailed data using VPC Flow Logs and found a major imbalance: one Kubernetes cluster was using 50 percent of the total NAT bandwidth even though the system was running at only 10 percent of the expected peak load. This meant that if traffic increased five times during the live matches, the gateways would have become a serious bottleneck.
On deeper investigation, the team discovered that several services inside the same cluster were generating unusually high levels of external traffic. Since all traffic from an Availability Zone (AZ) was being routed through a single NAT Gateway, that gateway was being overloaded.
To fix this, engineers changed the architecture from one NAT Gateway per AZ to one per subnet. In simpler terms, instead of a few large gateways serving everything, they deployed multiple smaller ones distributed across subnets. This allowed network load to spread more evenly.
The next challenge appeared at the Kubernetes worker node level. These nodes are the machines that actually run the containers for different services. Each node has limits on how much CPU, memory, and network bandwidth it can handle.
The team discovered that bandwidth-heavy services, particularly the internal API Gateway, were consuming 8 to 9 gigabits per second on individual nodes. In some cases, multiple gateway pods were running on the same node, creating contention for network resources. This could lead to unpredictable performance during peak streaming hours.
The solution was twofold:
First, the team switched to high-throughput nodes capable of handling at least 10 Gbps of network traffic.
Second, they used topology spread constraints, a Kubernetes feature that controls how pods are distributed across nodes. They configured it so that only one gateway pod could run on each node.
This ensured that no single node was overloaded and that network usage remained balanced across the cluster. As a result, even during the highest traffic peaks, each node operated efficiently, maintaining a steady 2 to 3 Gbps of throughput per node.
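A topology spread constraint of this kind can be expressed directly in the Deployment spec. The sketch below uses assumed names and replica counts; with `maxSkew: 1` and `whenUnsatisfiable: DoNotSchedule` keyed on the node hostname, the scheduler keeps gateway pods spread one per node as long as enough nodes are available.

```yaml
# Sketch of the constraint described above (names and image are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-api-gateway
spec:
  replicas: 50
  selector:
    matchLabels:
      app: internal-api-gateway
  template:
    metadata:
      labels:
        app: internal-api-gateway
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # spread across individual nodes
          whenUnsatisfiable: DoNotSchedule      # hard constraint, not best-effort
          labelSelector:
            matchLabels:
              app: internal-api-gateway
      containers:
        - name: gateway
          image: example/gateway:latest         # placeholder image
```

A pod anti-affinity rule on the same label would achieve a strict one-pod-per-node guarantee as well; topology spread constraints are the more flexible, scheduler-native option.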
Even after the improvements made to networking and node distribution, the team’s earlier setup had a limitation.
The company was running two self-managed Kubernetes clusters, and these clusters could not scale reliably beyond about 25 million concurrent users. Managing the Kubernetes control plane (which handles how workloads are scheduled and scaled) had become increasingly complex and fragile at high loads.
To overcome this, the engineering team decided to migrate to Amazon Elastic Kubernetes Service (EKS), a managed Kubernetes offering from AWS.
This change meant that AWS would handle the most sensitive and failure-prone part of the system (which is the control plane) while the team could focus on managing the workloads, configurations, and optimizations of the data plane.
Once the migration was complete, the team conducted extensive benchmarking tests to verify the stability and scalability of the new setup. The EKS clusters performed very well during tests that simulated more than 400 worker nodes being scheduled and scaled simultaneously. The control plane remained responsive and stable during most of the tests.
However, at scales beyond 400 nodes, engineers observed API server throttling. In simpler terms, the Kubernetes API server, which coordinates all communication within the cluster, began slowing down and temporarily limiting the rate at which new nodes and pods could be created. This did not cause downtime but introduced small delays in how quickly new capacity was added during heavy scaling.
To fix this, the team optimized the scaling configuration. Instead of scaling hundreds of nodes at once, they adopted a stepwise scaling approach. The automation system was configured to add 100 to 300 nodes per step, allowing the control plane to keep up without triggering throttling.
After migrating to Amazon EKS and stabilizing the control plane, the team had achieved reliable scalability for around 25–30 million concurrent users.
However, as the 2023 Cricket World Cup approached, it became clear that the existing setup still would not be enough to handle the projected 50 million-plus users. The infrastructure was technically stronger but still complex to operate, difficult to extend, and costly to maintain at scale.
Hotstar managed its workloads on two large, self-managed Kubernetes clusters built using KOPS. Across these clusters, there were more than 800 microservices, each responsible for different features such as video playback, personalization, chat, analytics, and more.
Every microservice had its own AWS Application Load Balancer (ALB), which used NodePort services to route traffic to pods. In practice, when a user request arrived, the flow looked like this:
Client → CDN (external API gateway) → ALB → NodePort → kube-proxy → Pod
See the diagram below:
Although this architecture worked, it had several built-in constraints that became increasingly problematic as the platform grew.
Port Exhaustion: Kubernetes’ NodePort service type exposes each service on a specific port within a fixed range (typically 30000 to 32767). With more than 800 services deployed, Hotstar was quickly running out of available ports. This meant new services or replicas could not be easily added without changing network configurations.
Hardware and Kubernetes Version Constraints: The KOPS clusters were running on an older Kubernetes version (v1.17) and using previous-generation EC2 instances. These versions did not support modern instance families such as Graviton, C5i, or C6i, which offer significantly better performance and efficiency. Additionally, because of the older version, the platform could not take advantage of newer scaling tools like Karpenter, which automates node provisioning and helps optimize costs by shutting down underused instances quickly.
IP Address Exhaustion: Each deployed service used multiple IP addresses — one for the pod, one for the service, and additional ones for the load balancer. As the number of services increased, Hotstar’s VPC subnets began running out of IP addresses, creating scaling bottlenecks. Adding new nodes or services often meant cleaning up existing ones first, which slowed down development and deployments.
Operational Overhead: Every time a major cricket tournament or live event was about to begin, the operations team had to manually pre-warm hundreds of load balancers to ensure they could handle sudden spikes in traffic. This was a time-consuming, error-prone process that required coordination across multiple teams.
Cost Inefficiency: The older cluster autoscaler used in the legacy setup was not fast enough to consolidate or release nodes efficiently.
To overcome the limitations of the old setup, Disney+ Hotstar introduced a new architectural model known as Datacenter Abstraction. In this model, a “data center” does not refer to a physical building but a logical grouping of multiple Kubernetes clusters within a specific region. Together, these clusters behave like a single large compute unit for deployment and operations.
Each application team is given a single logical namespace within a data center. This means teams no longer need to worry about which cluster their application is running on. Deployments become cluster-agnostic, and traffic routing is automatically handled by the platform.
See the diagram below:
This abstraction made the entire infrastructure much simpler to manage. Instead of dealing with individual clusters, teams could now operate at the data center level, which brought several major benefits:
Simplified failover and recovery: If one cluster faced an issue, workloads could shift to another cluster without changing configurations.
Uniform scaling and observability: Resources across clusters could be managed and monitored as one system.
Centralized routing and security: Rate limiting, authentication, and routing rules could all be handled from a common platform layer.
Reduced management overhead: Engineering teams could focus on applications instead of constantly maintaining infrastructure details.
Some key architectural innovations the team made are as follows:
At the core of Datacenter Abstraction was a new central proxy layer, built using Envoy, a modern proxy and load balancing technology. This layer acted as the single point of control for all internal traffic routing.
Before this change, each service had its own Application Load Balancer (ALB), meaning more than 200 load balancers had to be managed and scaled separately. The new Envoy-based gateway replaced all of them with a single, shared fleet of proxy servers.
This gateway handled several critical functions:
Traffic routing: Directing requests to the right service, regardless of which cluster it was running on.
Authentication and rate limiting: Ensuring that all internal and external requests were secure and properly controlled.
Load shedding and service discovery: Managing temporary overloads and finding the correct service endpoints automatically.
By placing this gateway layer within each cluster and letting it handle routing centrally, the complexity was hidden from developers. Application teams no longer needed to know where their services were running; the platform managed everything behind the scenes.
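A minimal Envoy route configuration gives a feel for how one shared proxy fleet can replace many per-service load balancers. The sketch below is an assumption-laden illustration, not Hotstar's actual config: the hostnames, cluster names, and ports are invented, and real deployments would add rate limiting, auth filters, and dynamic service discovery on top.

```yaml
# Minimal Envoy v3 sketch: route by hostname to a backend cluster.
static_resources:
  listeners:
    - name: internal_gateway
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress
                route_config:
                  virtual_hosts:
                    - name: scorecard
                      domains: ["scorecard.internal.example.com"]  # assumed endpoint pattern
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: scorecard_svc }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: scorecard_svc
      type: STRICT_DNS
      connect_timeout: 1s
      load_assignment:
        cluster_name: scorecard_svc
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: scorecard.default.svc.cluster.local
                      port_value: 80
```

Adding a new service becomes a matter of adding a virtual host and a cluster entry, rather than provisioning and pre-warming a new ALB.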
The Datacenter Abstraction model was built entirely on Amazon Elastic Kubernetes Service (EKS), marking a complete move away from self-managed KOPS clusters.
This transition gave Hotstar several important advantages:
Managed control plane: AWS handled critical control components, reducing maintenance effort and improving reliability.
Access to newer EC2 generations: The team could now use the latest high-performance and cost-efficient instance types, such as Graviton and C6i.
Rapid provisioning: New clusters could be created quickly to meet growing demand.
Unified orchestration: Multiple EKS clusters could now operate together as part of one logical data center, simplifying management across environments.
To simplify how services communicate, the team introduced a unified endpoint structure across all environments. Previously, different teams created their own URLs for internal and external access, which led to confusion and configuration errors.
Under the new system, every service followed a clear and consistent pattern:
Intra-DC (within the same data center): <service>.internal.<domain>
Inter-DC (between data centers): <service>.internal.<dc>.<domain>
External (public access): <service>.<public-domain>
This made service discovery much easier and allowed engineers to move a service between clusters without changing its endpoints. It also improved traffic routing and reduced operational friction.
In the earlier architecture, deploying an application required maintaining five or six separate Kubernetes manifest files for different environments, such as staging, testing, and production. This caused duplication and made updates cumbersome.
To solve this, the team introduced a single unified manifest template. Each service now defines its configuration in one base file and applies small overrides only when needed, for example, to adjust memory or CPU limits.
Infrastructure details such as load balancer configurations, DNS endpoints, and security settings were abstracted into the platform itself. Engineers no longer had to manage these manually.
This approach provided several benefits:
Reduced duplication of configuration files.
Faster and safer deployments across environments.
Consistent standards across all teams.
Each manifest includes essential parameters like ports, health checks, resource limits, and logging settings, ensuring every service follows a uniform structure.
One of the biggest technical improvements came from replacing NodePort services with ClusterIP services using the AWS ALB Ingress Controller.
In simple terms, NodePort requires assigning a unique port to each service within a fixed range. This created a hard limit on how many services could be exposed simultaneously. By adopting ClusterIP, with the ALB routing traffic directly to pod IPs, the need for reserved node port ranges was removed entirely.
This change made the traffic flow more direct and simplified the overall network configuration. It also allowed the system to scale far beyond the earlier port limitations, paving the way for uninterrupted operation even at massive traffic volumes.
By the time the team completed its transformation into the Datacenter Abstraction model, its infrastructure had evolved into one of the most sophisticated and resilient cloud architectures in the world of streaming. The final stage of this evolution involved moving toward a multi-cluster deployment strategy, where decisions were driven entirely by real-time data and service telemetry.
For every service, engineers analyzed CPU, memory, and bandwidth usage to understand its performance characteristics. This data helped determine which services should run together and which needed isolation. For instance, compute-intensive workloads such as advertising systems and personalization engines were placed in their own dedicated clusters to prevent them from affecting latency-sensitive operations like live video delivery.
Each cluster was carefully designed to host only one P0 (critical) service stack, ensuring that high-priority workloads always had enough headroom to handle unexpected spikes in demand. In total, the production environment was reorganized into six well-balanced EKS clusters, each tuned for different types of workloads and network patterns.
The results of this multi-year effort were remarkable.
Over 200 microservices were migrated into the new Datacenter Abstraction framework, operating through a unified routing and endpoint system. The platform replaced hundreds of individual load balancers with a single centralized Envoy API Gateway, dramatically simplifying traffic management and observability.
When the 2023 Cricket World Cup arrived, the impact of these changes was clear. The system successfully handled over 61 million concurrent users, setting new records for online live streaming without major incidents or service interruptions.
References:
Scaling Infrastructure for Millions: From Challenges to Triumphs (Part 1)
Scaling Infrastructure for Millions: Datacenter Abstraction (Part 2)