interface                Bandwidth (kb/s)   Delay (us)
Ethernet                 10000              1000
Token Ring               16000              630
FDDI                     100000             100
Serial                   1544               20000
  (WIC on 1600/2600/3600 series, sync/async interfaces on 252x,
  sync/async serial modules on 2600/3600, etc.)
                         115                20000
ISDN BRI & PRI           64                 20000
Dialer                   56                 20000
Channelized T1 or E1     n * 64             20000
Async                    tty line speed     100000
Loopback                 8000000            5000
hope it helps
With GRE, the values are inherited from the underlying interface.
An ethernet interface has a delay of 1000 microseconds.
A fast ethernet interface has a delay of 100 microseconds.
A gigabit ethernet interface has a delay of 10 microseconds.
While I don't have a 10G handy, I would assume that a 1 microsecond delay would be seen!
Here is another link about EIGRP GRE delay:
The default delay value for a GRE tunnel interface is 500000 usec; to make one tunnel the backup interface, the delay value was increased to 600000 usec. All traffic traverses the primary tunnel unless the head-end device is unavailable. In the Cisco Enterprise Solutions Engineering lab test, there is only one physical interface and the tunnels are sourced off the physical interface. If there were two physical interfaces per branch, it would be preferable to source the tunnels off loopback interfaces so that both logical tunnels remain up in the event of a branch serial interface failure.
The show interface command displays delay in microsecond units. The delay interface command specifies the delay metric in units of 10 microseconds. EIGRP calculates its metric from the minimum bandwidth in kbps across all links in the path, and the cumulative delay in microseconds across all links in the path.
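That calculation can be sketched in Python, assuming the classic EIGRP composite metric with default K-values (K1 = K3 = 1, K2 = K4 = K5 = 0); the function name is mine, for illustration only:

```python
# Classic EIGRP composite metric with default K-values:
#   metric = 256 * (10^7 / min_bandwidth_kbps + cumulative_delay_usec / 10)
# Bandwidth is the minimum along the path (kbps); delay is the sum along
# the path, converted from microseconds into tens-of-microseconds units.

def eigrp_metric(bandwidths_kbps, delays_usec):
    """EIGRP metric for a path, given per-link bandwidth and delay lists."""
    bw_term = 10**7 // min(bandwidths_kbps)   # scaled inverse of min bandwidth
    delay_term = sum(delays_usec) // 10       # usec -> tens-of-usec units
    return 256 * (bw_term + delay_term)

# A single Ethernet hop (10000 kbps, 1000 usec, from the table above):
print(eigrp_metric([10000], [1000]))   # 256 * (1000 + 100) = 281600
```

Adding a serial hop (1544 kbps, 20000 usec) to the path drops the minimum bandwidth and adds delay, which is why serial links dominate the metric.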
Well, I'm not sure why in that particular example the default delay value equals 500000 us.
As I understand it, 500000 us is not the default delay value for GRE?
The outputs show something different...
Yea, as far as I knew, the default delay I have read for GRE is 500,000 usec. I did learn on my own that administratively defined delay values throw one off unless you read what the command is doing. Apparently the command argument is measured in tens of microseconds, while the output is displayed simply in microseconds. So if one configures a delay of 600,000 on a GRE interface, the actual delay is 6,000,000 usec. That's what I am seeing.
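That unit mismatch is easy to sketch (the function name is mine, not an IOS command):

```python
# The `delay` interface command takes its argument in TENS of microseconds,
# while `show interface` displays the delay in plain microseconds.

def displayed_delay_usec(configured_delay_tens):
    """Delay that `show interface` would display for a given `delay` argument."""
    return configured_delay_tens * 10

# Configuring `delay 600000` therefore shows up as 6,000,000 usec:
print(displayed_delay_usec(600_000))   # 6000000
```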
I was confused because it seemed strange that I couldn't find more direct info (other than this) that the default delay for GRE is 500000 us.
So, with reference to your question, I can safely say that the default GRE delay is 500000 us, and we can add the record below to the "default BW/DLY table":
interface                Bandwidth (kbps)   Delay (us)
GRE tunnel               9                  500000
Very helpful as usual, thanks for that.
However, how is EIGRP going to deal with a delay of 1us? Is the value that EIGRP uses not in tens of us?
So, for a gig interface the delay showing in the "show interface" would be 10us, but EIGRP would then use a value of 1?
And so for a 10gig interface, the delay showing in the "show interface" would be 1us, and EIGRP would then use.... 0?
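If EIGRP really does work in tens-of-microseconds units with integer truncation (that's an assumption on my part, which is exactly what the question is probing), the arithmetic would look like this:

```python
# Converting the delay shown by `show interface` (in usec) into the
# tens-of-usec units EIGRP is said to use. Integer division would
# truncate a 1 usec delay all the way to 0, hence the question above.

def eigrp_delay_units(show_interface_delay_usec):
    """Hypothetical conversion from displayed usec to EIGRP delay units."""
    return show_interface_delay_usec // 10

for iface, usec in [("GigabitEthernet", 10), ("TenGigabitEthernet", 1)]:
    print(iface, eigrp_delay_units(usec))
# GigabitEthernet 1
# TenGigabitEthernet 0
```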
On a 6500 running 12.2(33)SXH5 I saw that the delay was showing up as 100us on a Gig interface.
However, the interface had auto-negotiated to 100Mb/full duplex; would that influence the delay? Or did Cisco spot that delay issue with high-bandwidth interfaces and modify their default values? I couldn't find any specific info about that with Google.