By Ravishankar Achanta

If your application has downloadable content and you care about how fast that content is getting delivered to your users, you should accurately measure the throughput seen by your app. Now, this sounds pretty simple in principle. Just take the amount of data transferred and divide it by the time taken. While that principle is true, putting it into practice can be surprisingly tricky when multiple parallel transfers are involved. We'll walk through some examples and try to develop an intuition for the complications involved.

Let us consider a few cases in which we want to measure throughput, and the various pitfalls involved.

Case 1: Single File Transfer

1 transfer of 1000 bytes that finished in 2 seconds.

So here:

D (data transferred in bytes) = 1000 bytes

T (Time in seconds) = 2 seconds

S (speed, or throughput, in bytes per second) = 1000/2 = 500 bytes/s

There is absolutely nothing wrong with this measurement. If your app does exactly one transfer, then this is a correct way to measure. However, we have found that most apps in the real world download multiple files.

Case 1 Throughput = 500 bytes / second
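
In code, this is nothing more than a division. Here is a minimal Python sketch using the numbers from this example:

    # Throughput of a single transfer: bytes transferred divided by elapsed time.
    def throughput(data_bytes, seconds):
        return data_bytes / seconds

    print(throughput(1000, 2))  # 500.0 bytes/s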

Case 2: Serial File Transfers

Two serial transfers of 1000 and 500 bytes that finished in 2 seconds and 1 second respectively. (By serial, we mean the second transfer starts only after the first one ends, as shown in the image below.)

Here, we measure the overall speed. Let N be the number of transfers,

S = (D1+D2+...+Dn)/(T1+T2+...+Tn)

Now, for the above Case 2, we get the following speed

S = (500+1000)/(1+2) = 500 bytes per second.

Here the speed is 500 bytes per second, i.e., 1500 bytes transferred over 3 seconds.

Throughput = 500 bytes / second
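
The same aggregate formula can be sketched in Python, with each transfer represented as a hypothetical (bytes, seconds) pair and the values taken from Case 2:

    # Aggregate throughput across serial transfers:
    # total bytes divided by total time spent transferring.
    def aggregate_throughput(transfers):
        total_bytes = sum(d for d, _ in transfers)
        total_seconds = sum(t for _, t in transfers)
        return total_bytes / total_seconds

    case2 = [(1000, 2), (500, 1)]
    print(aggregate_throughput(case2))  # 500.0 bytes/s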

In both Case 1 and Case 2, we have determined that the throughput of our imaginary system is 500 bytes / second. Let us consider a different case that is very common among many mobile apps.

Case 3: Parallel File Transfers

In this situation, an app needs to download 8 files of 10 KB each from a single server using the HTTP protocol. This would be a common occurrence in an app displaying a screen full of thumbnail images, which is typical of many retail apps. A typical HTTP client-side stack would create 4 simultaneous connections to a server. We can (simplistically) assume that all 4 of these connections start and end at exactly the same time, as shown in the diagram.

S = (D1+D2+...+Dn)/(T1+T2+...+Tn)

= (10+10+10+10+10+10+10+10)/(3+3+3+3+3+3+3+3)

= 80/24

= 3.33 KB/s

As soon as the first 4 transfers are complete, the second batch of 4 transfers will be triggered by the system. For the sake of simplicity, let's assume this entire second batch also finishes at exactly the same time (this would normally be a bad assumption for this case, with 4 competing TCP connections, but that's a topic for a different post).

Let's contrast this approach with one where all 8 transfers were started together.

S = (D1+D2+...+Dn)/(T1+T2+...+Tn)

= (10 X 8)/(6 X 8)

= 80/48

= 1.67 KB/s
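
To make the contrast concrete, here is a small Python sketch that applies the same per-transfer formula to both schemes. The durations (3 seconds per transfer in batches of 4, 6 seconds per transfer when all 8 run at once) are the simplified assumptions from this example:

    # Naive metric: sum of bytes divided by the sum of each transfer's own
    # duration, ignoring that the transfers overlap in time.
    def naive_throughput(transfers):
        return sum(d for d, _ in transfers) / sum(t for _, t in transfers)

    batched = [(10, 3)] * 8       # batches of 4; each transfer takes ~3 s
    all_at_once = [(10, 6)] * 8   # all 8 in parallel; each takes ~6 s

    print(naive_throughput(batched))      # ~3.33 KB/s
    print(naive_throughput(all_at_once))  # ~1.67 KB/s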

Note how the same 8 transfers that completed in 6 seconds in both cases appear to have very different calculated throughputs at different levels of parallelization. The method we used to compute the throughput is common throughout the industry. Commonly used tools like New Relic and others provide exactly this type of measurement. That is, each transfer is measured in isolation, without regard to the context surrounding it.
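
For comparison, a direct wall-clock measurement over the whole interval (total bytes divided by the span from the earliest start to the latest finish) would report the same number for both schemes. A hypothetical Python sketch, using the idealized start and end times from this example:

    # Wall-clock throughput: total bytes over the span from the earliest
    # start to the latest end across all transfers.
    def wall_clock_throughput(transfers):
        # Each transfer is (kilobytes, start_time_s, end_time_s).
        total_kb = sum(d for d, _, _ in transfers)
        span = max(end for _, _, end in transfers) - min(start for _, start, _ in transfers)
        return total_kb / span

    batched = [(10, 0, 3)] * 4 + [(10, 3, 6)] * 4   # two batches of 4
    all_at_once = [(10, 0, 6)] * 8                  # all 8 started together

    print(wall_clock_throughput(batched))      # ~13.33 KB/s
    print(wall_clock_throughput(all_at_once))  # ~13.33 KB/s

Measuring this directly in a real app is another matter, as discussed below.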

The challenge is that it's very hard for app developers to know in advance which of their transfers will happen in parallel. It is impractical, if not impossible, to carry out direct measurements of total bandwidth for an arbitrary interval during an app's lifetime. Most available tools are fine with simply assuming that each network transfer (typically an HTTP request) is completely independent. This leads to highly misleading data being presented to developers and performance engineers, which means plenty of confusion and wasted hours when comparing the results of A/B tests of various optimizations.

The hard question here is: if it's impractical to make direct bandwidth measurements in the presence of an arbitrary number of parallel requests during any given interval, how do we arrive at a satisfactory metric for the network performance of an app?

Faced with the same problem, the PacketZoom engineering team came up with a solution. It is easy to apply even if all you have are the usual per-transfer measurements, along with the start time of each transfer. Since this post is already getting long, I'll discuss the method in detail in part II soon.

