A CDN (Content Delivery Network) is a network of servers distributed across the globe. These servers store copies of website content and deliver them to users from the nearest location, improving speed and reliability while reducing the strain on the original website's server.
In this post, we will explore and understand the lesser-known but highly significant benefits of CDNs.
CDNs do more than cache static content from the origin server: they act as a layer over the internet that compensates for its design limitations and improves performance.
Benefits of CDNs
CDNs act as an overlay on the internet's underlying network substrate. The internet was not designed with today's performance expectations in mind: its original design measured the cost of a data transfer mainly by the number of hops between systems, and did not account for factors like latency and congestion.
This is where CDNs play a crucial role by reducing latency or response time and increasing bandwidth.
How do CDNs improve latency?
CDNs are placed at multiple geographical locations and are accessed by nearby client systems.
A common method for finding the server closest to a client is global DNS load balancing. The DNS request carries the client's IP address, and the DNS service returns a list of nearby CDN servers along with their health status. (CDNs are essentially reverse proxies; read more about server-side caching in this article.)
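The selection step can be sketched in a few lines. Everything below is illustrative: the server names, coordinates, and health flags are made-up assumptions, and a real CDN's DNS service draws on much richer data (anycast routing, latency probes, current load) than simple geographic distance.

```python
# Minimal sketch of DNS-based server selection: pick healthy edge
# servers, ordered by rough geographic distance to the client.
import math

# Hypothetical edge servers: (name, latitude, longitude, healthy)
EDGE_SERVERS = [
    ("edge-mumbai", 19.08, 72.88, True),
    ("edge-frankfurt", 50.11, 8.68, True),
    ("edge-virginia", 38.95, -77.45, False),  # unhealthy: excluded
]

def nearest_healthy_edge(client_lat, client_lon):
    """Return names of healthy edge servers, nearest first."""
    def distance(server):
        _, lat, lon, _ = server
        # Haversine great-circle distance in km (Earth radius ~6371 km)
        p1, p2 = math.radians(client_lat), math.radians(lat)
        dphi = math.radians(lat - client_lat)
        dlmb = math.radians(lon - client_lon)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))

    healthy = [s for s in EDGE_SERVERS if s[3]]
    return [name for name, *_ in sorted(healthy, key=distance)]

# A client near Delhi (28.6 N, 77.2 E) is directed to the Mumbai edge first.
print(nearest_healthy_edge(28.6, 77.2))  # ['edge-mumbai', 'edge-frankfurt']
```

Note that the unhealthy server never appears in the answer, which is exactly the health-status filtering the DNS response provides.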
CDN servers are also integrated into ISPs (Internet Service Providers). For instance, JioFibre and Airtel are popular ISPs in India.
CDNs use optimized routing algorithms within their overlay networks to select the fastest and least congested paths for data transfer. These algorithms leverage real-time updates on network health, such as traffic levels and server load, ensuring reduced latency and dynamic rerouting when congestion occurs.
For example, if a particular path becomes congested or experiences high latency due to heavy traffic, the routing algorithm dynamically reroutes the data through a faster, less congested path.
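This rerouting behaviour can be illustrated with a toy shortest-path computation over the overlay. The node names and latency figures are assumptions for the example; real CDNs feed continuously measured link metrics into far more sophisticated path selection.

```python
# Toy overlay routing: choose the path with the lowest total measured
# latency, and reroute when a link's latency spikes.
import heapq

def best_path(links, src, dst):
    """Dijkstra over latency-weighted links: (total latency in ms, path)."""
    graph = {}
    for (a, b), ms in links.items():
        graph.setdefault(a, []).append((b, ms))
        graph.setdefault(b, []).append((a, ms))
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical overlay links with measured latencies in ms.
links = {("client", "edge"): 10, ("edge", "origin"): 40,
         ("edge", "relay"): 15, ("relay", "origin"): 20}
print(best_path(links, "client", "origin"))  # relay path wins at 45 ms

# The edge->relay link becomes congested; the algorithm reroutes.
links[("edge", "relay")] = 60
print(best_path(links, "client", "origin"))  # direct path now wins at 50 ms
```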
How do CDNs improve bandwidth?
To understand the bandwidth improvement, consider RTT (Round Trip Time): by serving content from nearby servers, CDNs decrease RTT. You can read about congestion window size and TCP optimization in my previous article.
CDNs leverage TCP-level optimizations. For instance, they can increase the congestion window size, thereby improving bandwidth.
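The combined effect of RTT and window size can be seen with a back-of-the-envelope calculation: TCP throughput is roughly the congestion window size divided by the RTT. The window size and RTT values below are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope TCP throughput: roughly cwnd / RTT.
def throughput_mbps(cwnd_bytes, rtt_ms):
    """Approximate TCP throughput in megabits per second."""
    return (cwnd_bytes * 8 / 1_000_000) / (rtt_ms / 1000)

# Same 64 KiB window, but the CDN edge cuts RTT from 200 ms to 20 ms:
far = throughput_mbps(64 * 1024, 200)   # ~2.6 Mbps to a distant origin
near = throughput_mbps(64 * 1024, 20)   # ~26 Mbps to a nearby edge
print(round(far, 1), round(near, 1))
```

A tenfold drop in RTT yields roughly a tenfold rise in achievable throughput, which is why both shortening the path and enlarging the window pay off.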
Caching as a benefit of CDNs
CDNs cache the static content of the original server's responses. Learn more about server-side caching in my previous article.
This caching is multi-layered. The edge layer, or the first layer, is distributed across several geographical locations.
Naturally, the more edge clusters there are, the more thinly requests for the same content are spread across them, so the lower the likelihood of finding cached data in any specific edge cluster. This likelihood is called the cache hit ratio. A lower cache hit ratio increases the load on the origin server. To address this, additional intermediate layers of clusters are added; these intermediate layers are deployed in fewer geographical locations, so each one aggregates requests from many edge clusters.
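The multi-layered lookup can be sketched as follows. The tier names and content are assumptions for illustration; real CDN cache hierarchies also handle TTLs, invalidation, and eviction.

```python
# Minimal two-tier CDN cache: look in the edge cache, fall back to a
# shared intermediate cache, and only then hit the origin server.
def fetch(url, edge_cache, mid_cache, origin):
    """Return (content, tier it was served from)."""
    if url in edge_cache:
        return edge_cache[url], "edge"
    if url in mid_cache:
        edge_cache[url] = mid_cache[url]  # populate the edge on the way back
        return mid_cache[url], "intermediate"
    content = origin[url]                 # full miss: origin takes the load
    mid_cache[url] = content
    edge_cache[url] = content
    return content, "origin"

origin = {"/logo.png": b"...image bytes..."}
edge, mid = {}, {}
print(fetch("/logo.png", edge, mid, origin)[1])  # "origin"  (first request)
print(fetch("/logo.png", edge, mid, origin)[1])  # "edge"    (now cached)
```

Because the intermediate tier is shared by many edge clusters, an edge miss often still avoids the origin, which is the whole point of the extra layer.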
Conclusion
In summary, CDNs are used not only for caching static content but also for speeding up the delivery of dynamic resources from the origin server.
Here are links to my previous posts, which I publish every Sunday on distributed systems:
- Building Resilient Applications: Insights into Scalability and Distributed Systems
- Understanding Server Connections in Distributed Systems
- How are your connections with web secure and integral?
- Understanding System Models in Distributed Systems
- Ping & Heartbeat in distributed systems
- How Real-Time Editing Works: Understanding Event Ordering in Distributed Systems
- High Availability for Social Media Platforms: Leader-Follower Architecture and Leader Election
- ACID Properties in Single Transactions Explained
- How is Concurrency Control Maintained to Ensure Isolation?
- Ensuring Atomicity in Modern Databases
- HTTP Caching in Distributed Systems
- Asynchronous Transactions in Distributed Systems
Feel free to check them out and share your thoughts!