The Bandwidth Dilemma: Exceeding Low Expectations

Posted in Tech Topics, 30 January 2012

You know that going to the DMV takes forever. You know that, occasionally, something gets lost in the mail, or that a complicated lunch order has a good chance of coming out wrong. And no matter which mobile provider you use, you know the network is unreliable and will sometimes drop a call. Yet whenever any of these things actually happens, you still get ticked off, because no one likes a bad experience. Even when you expect an inconvenience, you want to be pleasantly surprised. When you’re not, you most likely look for someone to blame.

It’s not a totally rational attitude, but it seems especially common when people interact with what they perceive as sluggish web applications. We discussed relying on expectation management in last week’s post about the ‘Latency is Zero’ fallacy. In part three of our series on the Fallacies of Distributed Computing, ‘bandwidth is infinite’, we’ll again see why user expectations are not a sufficient barometer of success for web apps, and why this is even more applicable to mobile experiences.

Without a doubt, the one area in which networks have dramatically improved since Deutsch drafted the Fallacies in 1994 is bandwidth. Infrastructure providers are constantly working to make their series of tubes bigger and faster. But infinite is really, really big, and no one has quite accomplished that feat yet. Plus, even though bandwidth continues to increase, corresponding performance improvements aren’t always apparent because we’re also cramming bigger and bigger chunks of data through the pipe. Social games. Fantasy football podcasts. Streaming video of baby pandas. That episode of that TV show you missed. You know, important stuff.

We’re also predominantly doing all these things on wireless and cellular networks now. (Cisco estimates mobile data traffic will approach 2.5 million terabytes per month by next year.) Everyone loves mobile connectivity, but it does have the unfortunate side effect of offsetting bandwidth improvements. Wireless connections are far more susceptible to packet loss than ye olde wired LAN, which means requests are more likely to fail regardless of a connection’s total bandwidth potential. Of course, this variable is beyond the control of even the most reliable web and mobile apps.

For developers, designing apps without taking bandwidth limitations into consideration can lead to excruciatingly slow interactions and, in some cases, critical failures that will doom an app’s future. The latter seems to be of particular concern in the mobile environment, where packet loss glitches can cause apps to cycle endlessly without completing a request. This is where user expectations start to come into play.
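One common defense against the endless-cycling problem described above is to bound retries and back off between attempts, so a lossy connection degrades gracefully instead of spinning forever. Here is a minimal sketch of that pattern; the function name, parameters, and use of a generic `request_fn` callable are illustrative assumptions, not code from any app discussed in this post.

```python
import random
import time

def fetch_with_backoff(request_fn, max_attempts=5, base_delay=0.5):
    """Retry a flaky network call a bounded number of times instead of
    cycling endlessly. Delays grow exponentially between attempts, with
    jitter so many clients don't retry in lockstep."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except IOError:
            if attempt == max_attempts - 1:
                raise  # give up and surface the failure to the user
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

The key design point is the bounded loop: after `max_attempts` failures the error is surfaced to the user, which is frustrating but far better than an app that appears hung.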

We’ve mentioned before that the purpose of this series is to examine some of the contrasting viewpoints on the Fallacies’ relevance in contemporary web development, and we’ve been using Tim Bray’s article ‘The Web vs. The Fallacies’ as the antipode. His perspective on the fallacy of infinite bandwidth is similar to his take on the fallacy of zero latency. Through years of using the web, he contends, we’ve gotten used to these ‘networking realities’ and fully expect to encounter them.

With regard to bandwidth, he argues that people know large requests are going to slow down the experience. The tacit suggestion is that it’s not essential for developers to consider bandwidth limitations because users already factor them into their expectations. What Bray does not imply, but may also be true, is that users tend to blame their network providers – and not app developers – for any bandwidth issues that degrade their experiences.

Letting the user’s anticipation of bandwidth congestion influence one’s development decisions is problematic, however. First and foremost, it can discourage developers from addressing foreseeable performance issues that could be prevented; in other words, it induces them to naively and unnecessarily engineer a negative experience. Recalling our initial examples: even if users expect the inconvenience, it’s still going to be unsatisfying, and it will deter them from using the app.

But that’s only half the story. For developers of mobile apps, restrictions imposed by cellular networks and hardware providers’ app distribution channels are another key reason to be mindful of bandwidth. Carbon Five wrote about this in detail, pointing out that Apple’s guidelines limit apps to a download size of 20MB, require bandwidth-defined video streaming options, and prescribe an audio streaming threshold of 5MB per five minutes. Developers who forget the fallacy of infinite bandwidth run a greater risk of being rejected by the App Store before user expectations and experience even matter.
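Limits like the audio threshold above translate directly into a bitrate budget. As a back-of-envelope check, 5MB every five minutes works out to roughly 133 kilobits per second (assuming MB means 10^6 bytes; with 2^20-byte megabytes the ceiling is closer to 140 kbps). A tiny helper makes the arithmetic explicit; the function and its defaults are our own illustration, not part of Apple’s guidelines.

```python
def max_audio_bitrate_kbps(limit_mb=5, window_minutes=5):
    """Bitrate ceiling implied by a 'limit_mb per window_minutes'
    streaming rule. Assumes MB = 10^6 bytes."""
    bits = limit_mb * 1_000_000 * 8        # payload budget in bits
    seconds = window_minutes * 60          # window length in seconds
    return bits / seconds / 1000           # kilobits per second
```

Knowing that number up front lets you pick an audio encoding that fits the budget before submission, rather than discovering the problem during review.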

Under these circumstances, the best approach seems straightforward: take steps to ration the amount of data being transmitted over time. However, as other systems experts have noted, it’s not quite that simple. Accounting for the effects of latency and the aforementioned threat of packet loss would suggest doing the exact opposite – issuing a smaller number of large requests. So once again it comes down to comprehensive testing, conducted in a simulated production environment.
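The tension described above can be made concrete with a crude cost model: every request pays roughly one round trip of latency, and the payload itself moves at the available bandwidth. This sketch (our own simplification, ignoring TCP slow start, pipelining, and packet loss) shows why fewer, larger requests can win on a high-latency link even though each one is heavier.

```python
def transfer_time(total_bytes, n_requests, bandwidth_bps, rtt_s):
    """Rough estimate of total transfer time when total_bytes is split
    across n_requests: each request pays one round trip of latency,
    and the payload moves at bandwidth_bps."""
    return n_requests * rtt_s + (total_bytes * 8) / bandwidth_bps

# 1 MB over a 1 Mbps link with 100 ms round trips: one big request
# costs ~8.1 s, while fifty small ones cost ~13 s -- the payload time
# is identical, but the latency tax is paid fifty times.
```

Of course, a model this simple is only a starting point for intuition; it’s precisely why the post falls back on comprehensive testing in a simulated production environment.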

That’s ultimately the only way to know how much bandwidth your app is using and adjust it accordingly to get within acceptable limits. Carbon Five recommends using Charles for testing purposes, but other utilities are available as well. By taking this extra precaution, you’ll know what kind of performance to expect under various bandwidth conditions – making your app more likely to pass inspection and deliver an optimal experience that doesn’t leave users frustrated.
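If a dedicated proxy isn’t handy, a bandwidth cap can even be simulated directly in test code. The sketch below is a crude stand-in for a throttling proxy, not a substitute for proper tooling: it reads a stream in chunks and sleeps so the average rate stays under a limit. The function name and parameters are our own invention.

```python
import time

def throttled_read(stream, max_bytes_per_sec, chunk_size=4096):
    """Read a stream in chunks, sleeping as needed so the average
    transfer rate stays under max_bytes_per_sec. A crude local
    simulation of a constrained connection for testing."""
    data = bytearray()
    start = time.monotonic()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        data.extend(chunk)
        expected = len(data) / max_bytes_per_sec  # seconds this much data *should* take
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
    return bytes(data)
```

Running your app’s data-loading code against a throttled source like this gives a quick, repeatable feel for how it behaves on a slow link.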

Coming up: There’s no debate when it comes to network security.

leigh@newrelic.com, Marketing Manager, Content

