What is Dynamic Content?
Any modern app or website combines static and dynamic content. Static content such as images, videos and stylesheets can be reused across multiple users, while dynamic content that includes personalized data (think flight search results) cannot. Because static content remains valid for longer, it can be cached closer to the end user to speed up download time; caching dynamic content, on the other hand, makes no sense.
How can Dynamic Content download faster?
Since caching is not an option, speeding up dynamic content relies on a group of techniques characterized as "protocol optimization". These techniques focus on minimizing the number of server round trips and thereby reducing download time.
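As a rough illustration of why round trips dominate, consider a back-of-the-envelope model (the latency and round-trip counts below are illustrative assumptions, not measurements from this article):

```python
# Rough model: network wait time for one request is (round trips x RTT).
# All numbers here are illustrative assumptions.

def fetch_time_ms(rtt_ms: float, round_trips: int) -> float:
    """Time spent waiting on the network for one request."""
    return rtt_ms * round_trips

RTT_MS = 50  # assumed client <-> server latency

# Cold connection: DNS lookup + TCP handshake + TLS handshake + HTTP request
cold = fetch_time_ms(RTT_MS, round_trips=1 + 1 + 2 + 1)
# Reused (keep-alive) connection: only the HTTP request itself remains
warm = fetch_time_ms(RTT_MS, round_trips=1)

print(f"cold: {cold:.0f} ms, warm: {warm:.0f} ms")  # cold: 250 ms, warm: 50 ms
```

Eliminating setup round trips (connection reuse, cached DNS, faster handshakes) cuts the waiting time even though the payload itself is unchanged.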
How does it work?
To accelerate dynamic content, one needs to control both network nodes of a connection and minimize the number of round trips between them (e.g. reduce handshakes, avoid TCP slow start, optimize DNS lookups). In the case of web CDNs this is done between edge servers (the first close to the end user, the second close to the origin server), in what's also known as the "middle mile". This technology, known as DSA (Dynamic Site Acceleration), has been well documented over the years; see these slides for example.
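A toy model of such a path makes the benefit concrete. The segment latencies and round-trip counts below are hypothetical, chosen only to show how keeping the edge-to-edge connection warm removes middle-mile round trips:

```python
# Toy model of a DSA path: client -> edge (last mile) -> edge (middle mile) -> origin.
# Latencies and round-trip counts are illustrative assumptions.

LAST_MILE_RTT_MS = 20    # client <-> nearby edge server
MIDDLE_MILE_RTT_MS = 80  # edge <-> edge across the backbone

def dynamic_fetch_ms(middle_mile_round_trips: int) -> float:
    """One dynamic request: the client talks to the nearby edge,
    which relays the request across the middle mile to the origin side."""
    last_mile = LAST_MILE_RTT_MS * 2  # TCP handshake + request on the last mile
    middle_mile = MIDDLE_MILE_RTT_MS * middle_mile_round_trips
    return last_mile + middle_mile

# Without DSA: the edges open a fresh connection per request (handshake + request).
naive = dynamic_fetch_ms(middle_mile_round_trips=2)
# With DSA: the edges maintain warm, tuned connections, so only the request remains.
optimized = dynamic_fetch_ms(middle_mile_round_trips=1)
print(naive, optimized)  # 200 120
```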
Is Mobile different from Web?
Back in the day the "middle mile" was slow (high latency between servers), but over the past decade, thanks to major infrastructure improvements, this is no longer the case. The major rise in mobile usage shifted the latency bottleneck to the "last mile" (i.e. the distance between the user and the edge server), where in many geographies we measure an average latency of over 100 milliseconds. The challenge of content acceleration - especially dynamic content - over mobile networks has become a more significant one.
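One simple way to get a feel for such latency is to time a TCP three-way handshake, which costs exactly one round trip. The sketch below probes a loopback listener so it is self-contained; pointed at a real edge server's address, the connect time approximates the last-mile RTT:

```python
# Estimate RTT by timing a TCP connect (one round trip for the handshake).
# The loopback server here exists only to make the example self-contained.
import socket
import time

def tcp_connect_ms(host: str, port: int) -> float:
    """Return the wall-clock time of a TCP handshake, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

# Self-contained demo: listen on an ephemeral loopback port and probe it.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
rtt = tcp_connect_ms("127.0.0.1", port)
server.close()
print(f"handshake took {rtt:.3f} ms")
```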
What can be done?
Accelerating dynamic content over the last mobile mile leverages the same protocol-optimization concepts (i.e. downloading more content in fewer round trips). When the technique is applied to the slow mobile mile, however, the impact is much greater, as demonstrated in the chart below.
- Blue bars = Amazon CloudFront acceleration in the middle mile
- Orange bars = PacketZoom acceleration in the last mile
Why is the difference so significant?
Amazon CloudFront reduced the number of round trips in the "middle mile" by optimizing TCP. This trimmed the elapsed time for most users, but unfortunately a fairly long tail remains.
The PacketZoom protocol uses similar techniques to reduce the number of server round trips, but since it operates on the much slower last mobile mile, every round trip it shaves off saves many more milliseconds of total elapsed time.
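The effect can be made concrete with back-of-the-envelope numbers. The middle-mile figure below is an assumption; the last-mile figure follows the 100-millisecond average mentioned earlier:

```python
# Illustrative latencies: the middle-mile value is assumed, while the
# last-mile value reflects the ~100 ms average cited for many geographies.
MIDDLE_MILE_RTT_MS = 10   # well-provisioned backbone link (assumption)
LAST_MILE_RTT_MS = 100    # congested mobile link

def saving_ms(rtt_ms: float, round_trips_removed: int) -> float:
    """Milliseconds saved by eliminating round trips on a given link."""
    return rtt_ms * round_trips_removed

# Removing the same two round trips on each segment:
print(saving_ms(MIDDLE_MILE_RTT_MS, 2))  # 20  -> modest win in the middle mile
print(saving_ms(LAST_MILE_RTT_MS, 2))    # 200 -> much larger win in the last mile
```

The same optimization, applied to a link with 10x the latency, yields 10x the saving, which is why last-mile acceleration moves the needle so much more on mobile.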