Monitoring 101: Top 5 Ways Applications Waste Bandwidth

If you care about performance, you should care about bandwidth and how your applications make use of it. If you care about how much you pay for your Internet connection – or how much you are costing your end users – you should also care about bandwidth usage. This used to be something that both developers and operations staff cared deeply about. The rise of DSL and other high-bandwidth connections has led some of us to ignore this part of the application delivery picture. That inattention can be expensive.

To start with, let’s explore the reasons that bandwidth is expensive. Most, if not all, of the content on the public internet today is delivered in packets over TCP/IP. Packets have a maximum size, so data that cannot fit within a single packet must be split across more than one for transmission. TCP also requires that every packet of data sent by one side of a connection be matched by an acknowledgement from the other side confirming that the packet arrived successfully. The time it takes for a packet to be sent and acknowledged is called the Round-Trip Time (or RTT) for that packet – a metric that can be averaged over time for a given network. If the network experiences packet loss – i.e., a packet is never acknowledged, either because the packet itself or its acknowledgement was lost – the sender keeps re-sending it until it is acknowledged. The number of extra packets sent because of missing acknowledgements is known as the Retransmission count (or RTX). To keep the data in order, each packet carries a sequence number; if the receiver notices a missing sequence number, it must wait for that packet to arrive before it can process the data in the packets that follow. The number of times this happens is called the Out-Of-Order count (or OOO).
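
If you want a quick feel for RTT on a connection you care about, you can time a TCP handshake yourself. The sketch below is in Node.js/TypeScript; the host and port are placeholders, and timing the handshake is only a rough approximation. Proper RTT, RTX, and OOO figures come from packet captures or kernel statistics (for example, "ss -ti" on Linux).

```typescript
// Rough RTT estimate: time how long the TCP handshake takes to complete.
// This is an approximation only; the host and port below are placeholders.
import * as net from "net";

function estimateRtt(host: string, port: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const socket = net.connect({ host, port }, () => {
      // The "connect" callback fires once the SYN / SYN-ACK / ACK exchange
      // finishes, so the elapsed time is roughly one round trip.
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      socket.end();
      resolve(elapsedMs);
    });
    socket.on("error", reject);
  });
}

estimateRtt("example.com", 443).then((ms) =>
  console.log(`approximate RTT: ${ms.toFixed(1)} ms`)
);
```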

The networks that handle internet traffic vary in their efficiency, as measured by the three metrics we just covered:

  • RTT – the performance delay for transporting each packet. The average for a given TCP connection should be under 100 ms, although I’ve seen it run to several seconds in some cases.
  • RTX – the additional number of packets required to complete the transmission. This should always be “0” on a reliable network. Things are very bad if this is measured at higher than 4 or 5.
  • OOO – the number of occasions where the server wasted time waiting for missing packets. This should always be “0” for a reliable network. A bad network will measure values over 4 or 5.

The potential for inefficient handling of packets is the reason that bandwidth can be expensive. Every packet has a cost. The more data you try to transmit, the more packets it takes to do the job. More packets means more effort spent and more time wasted. So you want to use as few of them as you can.

With this in mind and a nod to Google and Yahoo for their work in defining rules for proper page design, here are the top 5 ways that applications use more packets than they should:

  1. Needless connections – every new connection starts with a TCP handshake involving several packets, so every new connection is expensive. This can be avoided by merging files together during deployment – e.g., CSS files, JS files, etc. These files are often split up by developers for ease of maintenance, but the build and roll-out process can easily merge them for operational efficiency. Since you also want to avoid loading any CSS or JS that you don’t need for a given page, the pre-processing that performs the merge should create merged results that are specific to each page’s needs – e.g., page 1 only needs CSS files A and B, whereas page 2 needs CSS files A, B and C; so the merge process produces a CSS file containing AB, a second CSS file containing ABC, a revised page 1 that refers to AB.css, and a revised page 2 that refers to ABC.css (a build-script sketch of this merge appears after this list). Another source of needless connections is redirects. Redirects are typically introduced during deployment by the operations team in order to avoid breaking links inside old content when new content has moved, but developers are equally guilty of creating programmatic redirects. Each redirect is the equivalent of a child’s taunt to the browser – “ha ha ha, your content is actually over here!” If you listen closely, you can actually hear an annoying giggle echo back over the line to the browser. Why use up yet another connection and more packets just to punk the browser? If you don’t have to do it, don’t do it.
  2. Improper caching – if a piece of content is never going to change – i.e., it is static content like a bullet GIF – it should be requested only once by the browser and then reused after that. This is achieved by setting caching parameters on the content when it is deployed. However, content might change occasionally, so caching parameters can be tweaked to allow the browser to make a conditional request to check whether the content has been updated. These checks are small in size, but they open a new connection and use up packets all the same. Moreover, poor caching configuration often results in a storm of these cache checks. I’ve seen up to 40% of a network’s bandwidth eaten up by useless cache checking. (A sketch of sensible caching headers follows this list.)
  3. Bloated headers – headers contain all kinds of things, and web developers often use them as a dumping ground. I’ve seen huge customized headers containing state information that should simply have been maintained within the server using standard session-tracking or user-tracking techniques. A common culprit is cookies. Introduced by Netscape back in the mid-1990s, cookies have few restrictions and almost no best practices around their use. Although the RFCs suggest an 8k limit on cookie headers (i.e., all cookies in a request or response should be no bigger than 8k in total), I have seen traffic with up to 5 cookies measuring more than 6k each. Also keep in mind that responses use headers too, including cookies. This makes the use of cookies for recording session state especially horrific, since the state can easily be changed by every request. That means the user pays the cost of that cookie twice – once in the request and again in the response as the new state is sent back to the browser. (A sketch of keeping state server-side behind a small session-ID cookie follows this list.) NOTE: I will be writing another blog posting on proper cookie management later.
  4. Bloated data payloads – when defining your payloads, ensure that you are being as efficient as you can. Efficiency of payload size is one of the reasons that JSON is winning the fight for mindshare over XML. However, another insidious source of bloat is POST parameters, which show up in the payload of the request; I have seen POST requests with hundreds of them. Finally, turn on compression in your web servers. It will do wonders for diminishing the number of packets you need for delivery. (A compression sketch follows this list.)
  5. Bloated images – people have an obsession with image quality that ignores common web practice. Most images on the web are squeezed into small view spaces, so their resolution does not have to be that good. If you want users to have a higher-resolution version, make it something they have to click to access (so the high-res content is not loaded as part of the main page) and deploy two image files on your servers – one low-res for the main page and one high-res that is loaded on demand. Apply this rule throughout your site. (A click-to-load sketch follows this list.)
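
For item 1, here is a minimal sketch of the per-page merge step as a Node.js/TypeScript build script. The directory names and the page-to-stylesheet mapping are made up for illustration; a real build would read them from your project configuration.

```typescript
// Build-time sketch: concatenate only the stylesheets each page needs into a
// single bundle, so each page costs one stylesheet request instead of several.
// Directory names and the page-to-CSS mapping below are illustrative only.
import { readFileSync, writeFileSync, mkdirSync } from "fs";
import * as path from "path";

// Hypothetical mapping of pages to the stylesheets they actually use.
const pages: Record<string, string[]> = {
  page1: ["a.css", "b.css"],          // page 1 -> AB.css
  page2: ["a.css", "b.css", "c.css"], // page 2 -> ABC.css
};

mkdirSync("dist", { recursive: true });

for (const [page, cssFiles] of Object.entries(pages)) {
  const merged = cssFiles
    .map((file) => readFileSync(path.join("src/css", file), "utf8"))
    .join("\n");
  const bundleName =
    cssFiles.map((file) => path.basename(file, ".css").toUpperCase()).join("") + ".css";
  writeFileSync(path.join("dist", bundleName), merged);
  console.log(`${page}: ${cssFiles.length} files merged into ${bundleName}`);
}
```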
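
For item 2, this is a sketch of caching headers on a plain Node.js HTTP server: a long max-age so the browser keeps the asset, plus an ETag so that when the browser does check back, an unchanged asset costs only a tiny 304 response. The file name and port are placeholders.

```typescript
// Caching sketch: serve a static asset with a long-lived Cache-Control header
// and answer conditional requests with a 304 when nothing has changed.
// The file name and port are placeholders.
import { createServer } from "http";
import { readFileSync } from "fs";
import { createHash } from "crypto";

const body = readFileSync("bullet.gif");
const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

createServer((req, res) => {
  if (req.headers["if-none-match"] === etag) {
    // The content has not changed: a 304 avoids resending the payload at all.
    res.writeHead(304, { ETag: etag });
    res.end();
    return;
  }
  res.writeHead(200, {
    "Content-Type": "image/gif",
    // Cache for a year; "immutable" tells modern browsers not to bother revalidating.
    "Cache-Control": "public, max-age=31536000, immutable",
    ETag: etag,
  });
  res.end(body);
}).listen(8080);
```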
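
For item 3, here is a sketch of keeping session state on the server and sending only an opaque ID in the cookie. The in-memory Map stands in for whatever session store you actually use, and the cookie parsing is deliberately naive.

```typescript
// Session sketch: the only thing that travels in the cookie is a short ID;
// the actual state stays on the server. The Map is a stand-in for a real
// session store, and the cookie parsing is deliberately naive.
import { createServer } from "http";
import { randomUUID } from "crypto";

const sessions = new Map<string, { cartItems: string[] }>();

createServer((req, res) => {
  const match = /sid=([^;]+)/.exec(req.headers.cookie ?? "");
  let sid = match?.[1];

  if (!sid || !sessions.has(sid)) {
    sid = randomUUID();
    sessions.set(sid, { cartItems: [] });
    // A ~40-byte ID crosses the wire instead of the whole session state.
    res.setHeader("Set-Cookie", `sid=${sid}; Path=/; HttpOnly`);
  }

  const session = sessions.get(sid)!;
  res.end(`items in cart: ${session.cartItems.length}\n`);
}).listen(8080);
```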
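
For item 4, this sketch shows the effect of turning compression on, using Node's zlib directly. In practice you would enable this in your web server, proxy, or framework middleware rather than hand-rolling it; the payload here is fabricated purely to show the size difference.

```typescript
// Compression sketch: gzip a JSON response when the client says it can accept
// it. In real deployments the web server, proxy, or middleware usually handles
// this; the payload here is fabricated to show the savings.
import { createServer } from "http";
import { gzipSync } from "zlib";

const payload = JSON.stringify({
  items: Array.from({ length: 500 }, (_, i) => ({ id: i, name: `item-${i}` })),
});

createServer((req, res) => {
  if ((req.headers["accept-encoding"] ?? "").includes("gzip")) {
    const compressed = gzipSync(payload);
    console.log(`raw ${payload.length} bytes -> gzipped ${compressed.length} bytes`);
    res.writeHead(200, {
      "Content-Type": "application/json",
      "Content-Encoding": "gzip",
    });
    res.end(compressed);
  } else {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(payload);
  }
}).listen(8080);
```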
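
And for item 5, a tiny client-side sketch of the click-to-access pattern: the page ships with the low-res image, and the high-res file is only downloaded when the user asks for it. The element ID and file name are illustrative.

```typescript
// Click-to-load sketch: the page ships with the low-res image; swapping the
// src only when the user clicks means the large file never costs a packet
// for users who don't ask for it. Element ID and file name are illustrative.
const thumb = document.getElementById("product-photo") as HTMLImageElement;

thumb.addEventListener("click", () => {
  thumb.src = "/images/product-highres.jpg";
});
```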

Get your bandwidth usage under control and then watch it like a hawk. The pay-off will be immediate, I promise you.

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.
