We are reading, posting, watching, and streaming more than ever before, but cellular network data has not kept pace—in fact, it’s gone backwards (as those 44 percent of AT&T customers clinging to their grandfathered unlimited plans can readily attest). While our need for and use of cellular network data has continued to increase, carriers have opted to manage usage and congestion through data caps, which can take a variety of forms depending on the carrier. For the most part, data caps as they stand now are not about alleviating network congestion as carriers claim; they’re about profit. Carriers know this, consumers know this…heck, even the head of the FCC knows this, judging from FCC Chairman Tom Wheeler’s July 2014 letter to Verizon CEO Dan Mead: “It is disturbing to me that Verizon Wireless would base its ‘network management’ on distinctions among its customers’ data plans, rather than on network architecture or technology.”
Yes, in some cases data caps can help prevent the most excessive overuse of a limited resource, such as the top 1 percent of mobile data subscribers that generate 10 percent of mobile data traffic. Network congestion itself is very real, and with estimates forecasting 15.9 exabytes per month of global data traffic in 2018 (that’s nearly 16 billion GB a month, aka over 6,000 GB being used every second for those playing along at home), it’s an issue that’s only going to grow more important. This alone is why crude approaches to network management like data caps need to change, and quickly. How we use, how we view, and how we’re delivered data are all rapidly changing – monitoring and measurement need to advance at the same pace.
How carriers currently handle network monitoring
In the wake of Sprint’s “double the high-speed data” promotion (and the subsequent responses of AT&T doubling its data, Verizon doubling its data, and Sprint doubling its data again), traditional views on the price of cellular data and the need for data caps are shifting. “Data” in the abstract seems more arbitrary than ever. If data caps were about managing congestion, how were they all able to increase at once? Did network capacity magically double overnight?
Leading rhetorical questions aside, it’s clear data caps are the wrong tool for managing congestion, but what’s interesting is that carriers don’t even have the right tools to begin with. In the ideal scenario, network traffic evaluation would be done in real time at the individual cell sites, slowing down only those users hogging bandwidth at that exact moment. But that is not how carriers handle it.
Verizon, AT&T, T-Mobile, and Sprint claim they only throttle the top 3-5 percent of users (as measured by monthly data consumption), and once you’ve gone over your monthly cap you’re at risk of being throttled if you enter a congested area. While this strategy is great for profits, it’s a fundamentally broken way to manage a network. A cellular network doesn’t care about customers’ past usage patterns or how much data they’ve already used that month; it cares about how many people are trying to access how much data from it right now. Even the carriers’ seldom-used but seemingly more technical approaches of peak time management, concurrent user thresholds, and bandwidth thresholds aren’t up to the task of properly handling network congestion.
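To make the mismatch concrete, here is a minimal sketch (in Python, with hypothetical function names and numbers, not any carrier’s actual implementation) of the cap-plus-congestion rule described above. The decision hinges on a month-old accumulator rather than on what the user or the cell is doing right now:

```python
# Hypothetical sketch of the cap-based throttling policy described above.
# The throttle decision keys off historical monthly usage, not current demand.

def carrier_throttles(monthly_usage_gb: float,
                      monthly_cap_gb: float,
                      cell_is_congested: bool) -> bool:
    """Cap-plus-congestion rule: throttle a user only when they are both
    over their monthly cap AND attached to a congested cell."""
    return monthly_usage_gb > monthly_cap_gb and cell_is_congested

# A user who streamed heavily two weeks ago but is nearly idle now still
# gets throttled the moment their cell reports congestion:
print(carrier_throttles(monthly_usage_gb=28.0, monthly_cap_gb=25.0,
                        cell_is_congested=True))   # True

# ...while a user under the cap who is saturating the cell right now does not:
print(carrier_throttles(monthly_usage_gb=3.0, monthly_cap_gb=25.0,
                        cell_is_congested=True))   # False
```

Note that nothing in the rule measures instantaneous demand—which is exactly the article’s complaint.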
The underlying problem in all three approaches is that they attempt to handle congestion through educated guesses based on network proxies rather than actually monitoring the network itself. Peak time management uses time-of-day guesses; concurrent user thresholds base it on the number of people on the network, not on how much data they’re actually using; and bandwidth thresholds operate under the misguided assumption that link/resource capacity has a fixed maximum, which results in guesses for congestion threshold levels (e.g. “apply management when traffic on this link exceeds 72 Mbps”) rather than dynamically adjusting to the link’s capacity in that moment (e.g. “apply management when this link exceeds 90 percent of its current capacity”).
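The difference between the two bandwidth-threshold rules quoted above can be sketched in a few lines of Python (the function names and the 72 Mbps / 90 percent figures are taken from the example thresholds in the text; everything else is an illustrative assumption):

```python
# Illustrative contrast between a fixed bandwidth threshold and one expressed
# as a fraction of the link's *current* capacity.

def fixed_threshold_congested(traffic_mbps: float,
                              threshold_mbps: float = 72.0) -> bool:
    """Static rule: flag congestion when traffic exceeds a hard-coded level,
    regardless of what the link can actually carry right now."""
    return traffic_mbps > threshold_mbps

def dynamic_threshold_congested(traffic_mbps: float,
                                current_capacity_mbps: float,
                                utilization_limit: float = 0.90) -> bool:
    """Adaptive rule: flag congestion when traffic exceeds 90 percent of the
    capacity the link has at this moment."""
    return traffic_mbps > utilization_limit * current_capacity_mbps

# A cell whose capacity has temporarily dropped to 60 Mbps (e.g. due to radio
# conditions) is already congested at 58 Mbps, but the fixed rule misses it:
print(fixed_threshold_congested(58.0))          # False: still below 72 Mbps
print(dynamic_threshold_congested(58.0, 60.0))  # True: above 90% of 60 Mbps
```

The fixed rule fails in both directions: it also cries congestion at 75 Mbps on a link that momentarily has 200 Mbps of headroom.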
The main obstacle to proper and effective cellular network congestion management is not awareness—carriers understand these methods aren’t the best tools for the job—it’s capability. Legacy solutions were prohibitively expensive, too time-consuming, or unable to work at scale, and as such carriers used the cheaper and easier methods outlined above. And to a degree, those techniques worked back when there was less cellular network traffic and consumers didn’t know what to expect from carriers, but the volume of data and our demands on the network are changing. It’s time the solutions for managing that data did as well.
How carriers should handle network monitoring
Fortunately, monitoring technology has not stood still and today’s cutting-edge solutions are better placed to support a more rational approach to congestion management. The key need is for granular monitoring of dynamic demand patterns from users, and of congestion conditions within the network itself. These requirements couldn’t be met in the days when operators had to rely on coarse-grained observations of total traffic load. But today’s technology enables real-time monitoring of data usage on a per-user basis, at timescales down to seconds or below. Solutions are also available for the problem of accurate real-time congestion measurement, for example by tracking user data as it moves across and between networks, and detecting any long transit times or inadvertent drops due to overloaded bottlenecks. Monitoring systems can even detect particular patterns of usage that are more likely to contribute to congestion than others – a bit like spotting slow ‘platoons’ of cars on the freeway that hold up other drivers. When you put these techniques together it’s clear they give operators more than sufficient visibility to dynamically detect congestion conditions, and react intelligently in a manner that correctly accounts for actual user activity. The days when lack of monitoring capability could be offered as an excuse for clumsy congestion management are drawing to a close.
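One of the measurement techniques mentioned above—detecting congestion from long transit times at overloaded bottlenecks—can be illustrated with a toy sketch. This is not any vendor’s product; the class name, window size, and inflation factor are all assumptions. The idea is simply that queueing delay at a bottleneck shows up as transit times climbing well above the uncongested baseline:

```python
from collections import deque

class TransitTimeMonitor:
    """Toy congestion detector (all names and thresholds are illustrative):
    flag congestion when recent packet transit times climb well above the
    lowest transit time observed, i.e. when queues start to build."""

    def __init__(self, window: int = 100, inflation_factor: float = 2.0):
        self.samples = deque(maxlen=window)  # recent transit times (ms)
        self.baseline_ms = None              # lowest observed = uncongested path delay
        self.inflation_factor = inflation_factor

    def record(self, transit_ms: float) -> None:
        """Record one packet's measured transit time."""
        self.samples.append(transit_ms)
        if self.baseline_ms is None or transit_ms < self.baseline_ms:
            self.baseline_ms = transit_ms

    def congested(self) -> bool:
        """True when the recent average exceeds the baseline by the factor."""
        if not self.samples:
            return False
        recent = sum(self.samples) / len(self.samples)
        return recent > self.inflation_factor * self.baseline_ms

mon = TransitTimeMonitor()
for t in [5.1, 5.3, 5.0, 5.2]:      # quiet network: delays near baseline
    mon.record(t)
print(mon.congested())               # False

for t in [14.0, 16.5, 15.2, 17.8]:  # queueing delay builds at a bottleneck
    mon.record(t)
print(mon.congested())               # True
```

A production system would track per-user and per-link statistics at much finer timescales, but the principle—measure the network’s actual behavior rather than a proxy for it—is the same.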
While cellular network congestion can currently be semi-contained with the sledgehammer approach of data caps, the number of people in the world with connected devices will only continue to grow (to the tune of 500 million more smartphones sold globally in 2018 than now), and with it the need for more refined methods of network monitoring.
When the number of drivers on the road increased, we didn’t say that after driving 100 miles per month you’re capped at school zone speeds or must pay 3x the normal price for gas. Instead, we adjusted our transportation infrastructure, we incorporated technology like live-updating, dynamically priced toll roads, and we worked towards more opportunities and innovation in public transit. It’s time for us to take a similar approach with network congestion.
The solution of tomorrow needs to directly monitor actual network congestion (rather than relying on byproducts of congestion as proxies, or on estimates based on past data) and, more importantly, it needs to do so in real time.
Fergal Toomey is chief scientist and co-founder of Corvil.