– Lori MacVittie, senior technical marketing manager at F5 Networks (www.f5.com), says:
Back in the day, before VoIP was common and we were all chatting over Skype, there was a very real concern about how to ensure the network could support it. Jitter was the most common source of issues making VoIP less than desirable, leading to the conclusion that prioritization of voice over data traffic was an essential component to any VoIP-enabled network.
So we tried using TOS (Type of Service) as a solution. The TOS bits – long since superseded by the Differentiated Services field – specified parameters for the type of service requested. The belief then was that we could use these bits to prioritize traffic along the same lines as we did customers – gold, silver, bronze. Hence the nomenclature, “coloring bits”.
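To make the mechanism concrete, here is a minimal sketch of what “coloring” a packet looks like in practice: setting the TOS byte on a socket so outgoing datagrams carry a DSCP marking. The EF (Expedited Forwarding) code point shown is the one conventionally used for voice; everything else about the example (the socket itself, what it sends) is illustrative.

```python
import socket

# DSCP "Expedited Forwarding" (EF, value 46) is the code point conventionally
# used for voice traffic. The TOS byte carries the 6-bit DSCP shifted left
# by 2 (the low 2 bits are used for ECN).
DSCP_EF = 46
tos_value = DSCP_EF << 2  # 184, i.e. 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_value)

# Datagrams sent on this socket now carry the EF marking in the IP header.
# Whether any router along the path honors it is entirely up to each
# network operator -- which is exactly the problem described below.
```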
The problem wasn’t that this approach didn’t work – it did, as long as every network component in the traffic path honored the bits. And there’s the rub: the Internet is not a single-owner network, so getting backbone providers to agree to honor one another’s prioritization was something of a problem. Quality of service is a differentiator for providers, and prioritizing competitors’ traffic over your own wasn’t exactly going to help you sell on the strength of your network.
Reliant on the Internet for transport, with its stochastic behavior, and having failed to find a means to prioritize traffic across provider control boundaries, we found QoS a continuing source of research and frustration. Prioritization at the network layer had failed to achieve performance nirvana. Not even the adoption of Differentiated Services really solved the problem for the majority of users, because the same restriction applied to it that applied to TOS – it still depended on the honor system.
IT’S ALWAYS ABOUT CONTROL
Even though today’s Internet is much faster and fatter than in the early days of VoIP, there is still a need to prioritize data exchanged between clients and services. What we’re seeing today is a more application-layer focused approach to prioritization that trusts the Internet to deliver data with alacrity and instead focuses on enforcing priority in those pieces of the flow we can control – the application and its supporting infrastructure.
This approach is not a replacement for traditional bandwidth management techniques that address performance issues in the network, but rather the means to address performance issues related directly to capacity and load – processing latency – and in situations where control over the network is not possible or exceedingly difficult. Prioritization of traffic at any layer requires control, something we simply don’t have end-to-end. Thus we leverage other technology to counter that lack of control in conjunction with enforcing priority at the application layer where we have much greater levels of control.
One of the interesting additions to the web comes with SPDY, and specifically its support for prioritization. SPDY allows specific requests to be prioritized so that, say, the server could be instructed to process dynamic content before static content, or requests for streaming objects before images. This allows both the application and the application network infrastructure to manage requests more intelligently, architecturally, to ensure if not a faster application then at least a more consistently performing one.
It’s not unlike network queuing technologies that honor packet-based prioritization, in that when queues begin to fill, packets with higher priority are pushed to the front of the queue. With SPDY, if load or capacity is in question, the application or application network layer can push priority requests first to ensure processing while allowing other requests to be processed in a more leisurely fashion.
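A minimal sketch of that kind of priority-aware dispatch is shown below. It follows SPDY’s convention that lower numbers mean higher priority, but the class and request names are invented for illustration; this is the queuing idea, not SPDY’s actual framing.

```python
import heapq
import itertools

class RequestQueue:
    """Illustrative priority queue for requests; 0 is highest priority,
    mirroring SPDY's convention. Equal priorities dequeue in FIFO order."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker preserves arrival order

    def enqueue(self, priority, request):
        heapq.heappush(self._heap, (priority, next(self._order), request))

    def dequeue(self):
        priority, _, request = heapq.heappop(self._heap)
        return request

q = RequestQueue()
q.enqueue(3, "GET /logo.png")      # static image: low priority
q.enqueue(0, "GET /api/account")   # dynamic content: highest priority
q.enqueue(1, "GET /video/stream")  # streaming object: high priority

# Under load, higher-priority requests are pulled from the queue first:
print([q.dequeue() for _ in range(3)])
# ['GET /api/account', 'GET /video/stream', 'GET /logo.png']
```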
There exists a wide variety of potential architectures based on application layer prioritization, including scalability domains based on priority-based processing. In many ways such an architecture is not unlike the notion of storage tiering, where fast (and more expensive) storage is used for only specified data and slower (and less expensive) storage is used for lower priority data. A tiering-based scalability architecture at the application layer based on request priority enables compute, network, and storage resources to be more effectively provisioned to ensure consistently performing applications.
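The tiering analogy can be sketched in a few lines: requests are routed to different server pools by priority, much as storage tiering places hot data on fast storage. The tier layout, pool names, and selection logic here are all hypothetical placeholders for whatever a real architecture would provision.

```python
# Hypothetical priority-to-tier mapping: premium traffic gets the larger,
# faster pool; best-effort traffic gets the cheapest one.
TIERS = {
    0: ["app-fast-1", "app-fast-2"],  # premium tier
    1: ["app-std-1"],                 # standard tier
    2: ["app-batch-1"],               # best-effort tier
}

def route(priority: int) -> str:
    """Pick a server pool for a request of the given priority."""
    # Unknown priorities fall back to the lowest (best-effort) tier.
    pool = TIERS.get(priority, TIERS[max(TIERS)])
    # Trivial selection for illustration; a real load balancer would use
    # round-robin, least-connections, or similar within the pool.
    return pool[0]

print(route(0))  # app-fast-1
print(route(5))  # app-batch-1
```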
But it requires control – complete control over the application and application network infrastructure, just as its bit-coloring predecessors required control over the entire network path. A lack of control at strategic points along the application exchange path can have adverse effects, up to and including negating the benefits of prioritization entirely. A SPDY-based application hosted in a public cloud environment and fronted by rudimentary application routing (load balancing) techniques will not be able to take advantage of the burgeoning protocol’s prioritization facets, losing much of the benefit of enabling priority in the first place.
As we continue to relinquish control over the lower levels of the networking stack, we will need to harness the flexibility and control over the application layers of the stack more effectively. Taking advantage of application layer prioritization through strategic points of control in the network may be one of the ways in which we can improve application performance without relying on an honor system in an environment where such a system works against itself.