You can test it by generating traffic through your Cisco device and observing what happens to it. There are a few open-source tools you can use for this that will let you create different streams, with different payloads and DSCP markings.
Some of them are TRex and Ostinato. Out of those, TRex is my favourite to use.
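Whichever generator you use, the DSCP marking ends up in the upper six bits of the IP TOS/Traffic Class byte. A minimal Python sketch of that mapping (the function names and the socket helper are just illustrative, not part of any particular tool):

```python
import socket

def dscp_to_tos(dscp):
    """DSCP occupies the top 6 bits of the TOS byte; shifting left
    by 2 leaves the two ECN bits clear."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be 0-63")
    return dscp << 2

# Common code points: EF (46), CS1 (8), AF41 (34), best effort (0).
EF, CS1, AF41, BE = 46, 8, 34, 0

def open_marked_udp_socket(dscp):
    """Open a UDP socket whose outgoing packets carry the given DSCP."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(dscp))
    return s
```

For example, EF (DSCP 46) becomes TOS byte 0xB8, which is the value you would expect to see in a capture of the generated stream.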
Thank you, David, for the information.
As far as I know, WRED only takes effect on TCP flows. If you only generate traffic with TRex or Ostinato, it is one-way traffic and doesn't simulate real TCP behaviour.
I set up a topology with one FTP client and one FTP server on two separate laptops.
The two laptops are connected through one Cisco device that has WRED configured on the interface facing the client.
Then, on the client, I download several files from the server at the same time and watch the Ethernet card's throughput graph.
I hope to see something like the following:
- If I don't configure WRED (so the Cisco device uses its default behaviour, tail drop), the graph should be bumpy.
- If I configure WRED on the interface connected to the client, the graph should be smoother.
WRED helps to solve TCP global synchronization, so the Ethernet card throughput graph with WRED should be smoother than it is with tail drop.
(Refer to Cisco Press - DQOS Exam Certification Guide, Chapter 6, Page 432)
But in fact, I don't see a significant difference between the two in my test. Both graphs are bumpy, not smooth.
The global TCP sync problem occurs when you have many flows going through the same box. Port utilisation reaches 100%, and packets start getting tail-dropped indiscriminately. This causes drops across multiple TCP flows going through the box, which makes them all back off at once, dropping utilisation and stopping the tail drops. They then all ramp back up together, causing the classic saw-tooth effect.
Using WRED, as the buffers approach 100%, packets are selectively dropped from the queue. This causes only some TCP sessions to back off, giving a smoother overall traffic flow.
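The mechanism above can be sketched in a few lines. This is a simplified model of the standard WRED calculation (EWMA of queue depth with exponential weight n, plus a linear drop curve between min and max thresholds), not any platform's exact implementation:

```python
def wred_average(avg, current_depth, n=9):
    """Exponentially weighted average queue depth:
    avg += (current - avg) / 2^n. A large n (IOS default 9)
    makes the average react slowly to bursts."""
    return avg + (current_depth - avg) / (2 ** n)

def wred_drop_probability(avg, min_th, max_th, mark_prob_denom):
    """Drop probability as a function of the average queue depth."""
    if avg < min_th:
        return 0.0          # below min-threshold: no drops
    if avg >= max_th:
        return 1.0          # above max-threshold: tail-drop behaviour
    # Between the thresholds the probability rises linearly up to
    # 1/mark_prob_denom (the "mark-probability" in IOS output).
    return ((avg - min_th) / (max_th - min_th)) / mark_prob_denom
```

Note that with min-threshold equal to max-threshold (as in the policy-map output further down, 10/10), the linear region disappears and WRED degenerates into tail drop at depth 10.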
This is probably not something you can easily replicate with simple tools. You certainly need more than just a couple of flows.
Generally, testing QoS means looking at the impact on the various queues of the shapers/policers/buffer sizes etc. you apply to your interfaces, rather than at the actual impact on individual flows through the box.
I need to test the WRED function on the device I am developing, to check whether it operates correctly or not.
I have another problem with my test.
- If I shape bandwidth on the outgoing interface (for example, shaping a Gigabit interface from 1000 Mbps down to 10 Mbps) and use tail drop for congestion avoidance, the Ethernet card throughput graph looks quite flat (around 10 Mbps most of the time).
- If I police on the outgoing interface (for example, CIR = PIR = 10 Mbps) and use tail drop for congestion avoidance, the Ethernet card throughput graph looks bumpy, not smooth.
I know that with policing, violating (and even exceeding) packets are discarded ==> global TCP sync problem ==> the Ethernet card throughput graph looks bumpy, not smooth.
With shaping, packets are queued when congestion occurs, but if the queue fills up, packets are tail-dropped ==> global TCP sync problem.
In my test with shaping, the queue isn't full, is it? I have no idea how to check whether it is full or not.
If it isn't full, how can I make it full? I tried downloading many big files from the server, but the outgoing rate always stays stable at 10 Mbps.
How do I determine the size of each queue?
I have read some QoS documents, but I find them very large and abstract.
I hope for your help.
R1#show policy-map CC
Policy Map CC
bandwidth 10 (%)
packet-based wred, exponential weight 9
dscp         min-threshold    max-threshold    mark-probability
cs1 (8)           10               10               1/10
default (0)       -                -                1/10
You can also check the output below and look at the drops:
In this case:
- Queue limit 64 packets: will this value always be fixed, or will it vary from case to case?
This value can be changed by configuration:
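For example, on IOS the per-class queue limit can typically be set under the policy map with the queue-limit command (the class name and value here are illustrative, not taken from your configuration):

```
policy-map CC
 class class-default
  queue-limit 128 packets
```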
In this case:
- Queue depth: is this the current queue length? If so, then there is no congestion here, right?