QoS Mechanics

The topic of Quality of Service is absolutely huge. It is an entire universe in itself. I personally don't think I know QoS well enough. That's exactly why it is a good idea to write about it!


The first thing that hits you on the head when you start learning QoS concepts is the plethora of seemingly identical terms, each of which nevertheless represents a slightly different entity. Let's start by trying to provide clear definitions of four important terms in the QoS world:


Interface Speed - a measure of the physical ability of a hardware interface to transmit and receive data. Many people get confused by this term, because everyone knows from school days the formula for speed as distance divided by time, or S=D/T. In data networking, speed has a different meaning, because it is not a measure of how fast an electric signal travels through the cable. That propagation speed is practically a constant, roughly two-thirds the speed of light in a vacuum. Many different factors affect it, but the main thing is that once the interface and the media are put together, the signal's propagation speed is fixed and you can't influence it in any way. For this reason, what is more important to know is what VOLUME of data an interface can transmit and receive in a given time, usually a second.


Bandwidth - this term can be divided into two parts: physical bandwidth and logical bandwidth. Physical bandwidth is pretty much a synonym for interface speed. Logical bandwidth, on the other hand, is an administrative measure of the data transfer rate guaranteed between devices on a link. Logical bandwidth can be lower than physical bandwidth and is usually enforced by administratively limiting the volume of data an interface can transmit or receive. Simply put, if the physical speed of an interface is 100 Mbit/s, it can be limited to 70 Mbit/s by dropping traffic that exceeds that rate. In service provider terms, this is also known as the Committed Information Rate (CIR) or Committed Data Rate.
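That "drop whatever exceeds the limit" behaviour is what a policer does, and the usual way to implement one is a token bucket. Here is a minimal sketch in Python; the class name, parameters and timestamps-as-arguments design are my own inventions for illustration, not any vendor's API:

```python
class TokenBucketPolicer:
    """Toy token-bucket policer: forwards packets that fit within the
    committed rate (CIR), drops the rest. Time is passed in explicitly
    so the behaviour is easy to follow (and to test)."""

    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0          # committed rate in bytes/sec
        self.burst = burst_bytes           # bucket depth: allowed burst size
        self.tokens = float(burst_bytes)   # bucket starts full
        self.last = 0.0                    # timestamp of the last packet

    def conforms(self, pkt_bytes, now):
        # Refill tokens at the committed rate, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True                    # in contract: forward
        return False                       # out of contract: drop
```

Note that the policer never delays anything; a packet either conforms right now or it is dropped. Delaying the excess instead is shaping, which the conveyor-belt game below illustrates.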


Throughput - the total amount of data an interface is capable of processing, across both inbound and outbound directions.


Queuing - a way of delaying packets in a temporary buffer when there is not enough available bandwidth on an interface. This is the term I want to examine in more detail, and to do that, we are going to play a little game. I hope everyone likes games as much as I do.


Let's imagine that we work at the end of a conveyor belt in a match-making factory. Our tiny little matchboxes are bits of data. They can be arranged into bigger packs - packets. Packets can be of different sizes, but they go onto the conveyor through a short security pipe that prevents us from riding on the conveyor belt. Who wouldn't want to?


The belt is 8 boxes wide and moves at a speed of 100 box-lengths a second, making our "interface" speed equal to 800 matchboxes a second. Our little security pipe is our hardware queue. Once packets are in it, they can't be manipulated in any way; they simply move out onto the belt as soon as it becomes available.


We get different packs of matchboxes for different customers: some are more urgent, some are less urgent. Our work table before the security pipe is our software queue. Here we can re-arrange packets and place the more important ones at the front of the queue, leaving the less important ones to wait a little longer.

Now, the QoS policy can dictate that we limit our sending rate to, let's say, 50% of the "interface" speed. Why would that be needed, you may ask? Well, simply because the guy or girl on the other end of our conveyor belt isn't as fast as we are! Because we can't change the belt speed, what we do instead is put the packets through the security pipe at a slower average rate. So, if we have a set of 4 packets of 100 boxes each, we know that it would take exactly half a second to transmit all of them at once. However, as we have a limit of 50%, these packets need to be spread out so that they are sent over a period of one second. In other words, we'll put one packet through, wait a little, then put another one through, wait a little more, and so on. There are of course more factors that determine the sending sequences (burst rate) and pauses, but this analogy gives you a general idea of the concept of queuing.
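The arithmetic above (4 packets of 100 boxes each, shaped to 50% of an 800-box-per-second belt) can be sketched as a tiny scheduling function. This is only an illustration of the averaging idea, not a real shaper; the function name and its parameters are made up:

```python
def shaping_schedule(num_packets, packet_size, line_rate, shape_fraction):
    """Return the start time (in seconds) of each packet when the sending
    rate is shaped down to a fraction of the line rate. packet_size and
    line_rate just need matching units (boxes and boxes/sec here)."""
    shaped_rate = line_rate * shape_fraction      # the average rate we may use
    interval = packet_size / shaped_rate          # gap between packet starts
    return [i * interval for i in range(num_packets)]
```

With the article's numbers, `shaping_schedule(4, 100, 800, 0.5)` spaces the packet starts a quarter of a second apart, so the 400 boxes go out over one second instead of half a second, even though each individual packet still crosses the belt at full speed.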


The worst thing that can happen to a packet in the queue is being dropped. That happens when we run out of room on our work table: we just start binning all the excess packets. But some of them are more important (the priority queue), so we never drop those. Now, to avoid dropping too many excess packets, we can ask the guys who bring them to our work table to slow down before we run out of room. That is our analogue of queue management, i.e. random early drop. When we see that our work table is starting to get a bit crowded, we start dropping the odd packet (one with lower importance), and that signals the guys who bring them to us that they need to slow down; otherwise we'll reach the point where we have to drop all the excess packets.
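That "start dropping the odd packet as the table gets crowded" idea is usually expressed as a drop-probability curve: nothing is dropped below a minimum queue depth, everything is dropped above a maximum, and the probability ramps up linearly in between. A toy sketch of that curve (the threshold values in the usage below are arbitrary examples):

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Random Early Detection (RED) style drop probability for a given
    average queue depth, between a min and max threshold."""
    if avg_queue < min_th:
        return 0.0          # plenty of room on the table: accept everything
    if avg_queue >= max_th:
        return 1.0          # table is full: drop everything (tail drop)
    # In between, the drop probability climbs linearly towards max_p.
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

Real implementations use a smoothed (weighted moving) average of the queue depth rather than the instantaneous one, so short bursts don't trigger drops.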


To distinguish more and less important packets, the guys who pack them up use different colour labels. These are our QoS markings. We then re-arrange the packets on our table in a way that gives each colour (class/queue) a certain amount of belt-time (bandwidth). All of this together represents our QoS policy.
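The colour-to-class sorting, with each class getting its share of belt-time, can be sketched as a weighted round-robin scheduler. The colour names, class names and weights below are made up purely for illustration:

```python
from collections import deque

# Hypothetical marking scheme: colour -> (class name, weight).
# The weight is that class's share of belt-time per service round.
CLASSES = {
    "purple": ("priority", 4),
    "green": ("business", 2),
    "white": ("best-effort", 1),
}

class WeightedQueue:
    """Toy weighted round-robin: one software queue per class, each
    served in proportion to its weight."""

    def __init__(self):
        self.weights = {cls: w for cls, w in CLASSES.values()}
        self.queues = {cls: deque() for cls in self.weights}

    def enqueue(self, packet, colour):
        cls, _ = CLASSES[colour]          # classify by marking
        self.queues[cls].append(packet)

    def service_round(self):
        """Serve up to `weight` packets from each class, most important first."""
        sent = []
        for cls in ("priority", "business", "best-effort"):
            for _ in range(self.weights[cls]):
                if self.queues[cls]:
                    sent.append(self.queues[cls].popleft())
        return sent
```

With these weights, when both queues are backlogged, every round moves four purple packets onto the belt for each white one; unlike strict priority, the less important class is never starved completely.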


Huh! It is quite a job, isn't it? Standing there, fiddling with all the little and large packets, reading their labels, putting some at the front of the queue, dropping some on the floor (what a mess!), counting the number of matchboxes in each packet to do the math. You need a fast brain and fast hands to do all that.


QoS involves a lot of concepts: classifying, marking, queuing, rate-limiting. It is a complex of measures that can be employed to ensure important traffic is protected throughout the network. Another very important point is for the QoS policy to be consistent across the network. Think what would happen if the worker on the other end of the conveyor belt didn't use the same colour scheme as we did for more and less important packets! We would tremble over our purple-coloured packets, knowing that these are matches for Mr. Smith. We'd make sure that the package is intact, that it looks good and that the paper is not creased anywhere. We know Mr. Smith is very picky when it comes to matches! But what if the guy on the other end thought that purple-coloured packets were for Mrs. Baker, who doesn't care even if the matchboxes are all damaged? And what if he were to drop half of them before putting them onto the next conveyor belt? That would be a disaster. Mr. Smith would be very disappointed in our services. We can't afford for that to happen!


I find it simpler when concepts from networking, involving bits, packets, states and so on, are represented as physical objects and the mechanical interactions between them. Hence the name of this article: QoS Mechanics.