3 Replies Latest reply: Mar 31, 2019 9:58 PM by carlos

    High Latency between N9Ks

    carlos

      Hello,

       

      Has anyone here experienced unusual latency between two directly connected N9Ks? I have two 9504s connected via a 40-Gig link and am seeing 200+ ms.

       

      N9K-A# ping 10.x.x.12

      PING 10.x.x.12 (10.x.x.12): 56 data bytes

      64 bytes from 10.x.x.12: icmp_seq=0 ttl=254 time=200.773 ms

      64 bytes from 10.x.x.12: icmp_seq=1 ttl=254 time=200.925 ms

      64 bytes from 10.x.x.12: icmp_seq=2 ttl=254 time=200.657 ms

      64 bytes from 10.x.x.12: icmp_seq=3 ttl=254 time=200.596 ms

      64 bytes from 10.x.x.12: icmp_seq=4 ttl=254 time=200.709 ms

       

      --- 10.x.x.12 ping statistics ---

      5 packets transmitted, 5 packets received, 0.00% packet loss

       

      What's weird is that pings from my laptop to servers behind N9K-B, the switch in question, come back with normal, much lower latency, so we don't have issues in production. I already spoke with TAC, and the engineer stated that this is normal on N9Ks acting as HSRP Active, something to do with a default CoPP policy. Our other N9Ks don't have this issue, though, so I find that answer quite unacceptable.

        • 1. Re: High Latency between N9Ks
          Micheline

          Hello Carlos. HSRP traffic is considered "important" traffic according to the default CoPP settings. If your CoPP profile is set to "strict", then the default rate for that class is:

           

            class copp-system-p-class-important
            set cos 6
            police cir 3000 pps bc 128 packets conform transmit violate drop


          Your HSRP active router is likely receiving HSRP packets in excess of the allowed CoPP rate, so the excess is dropped, which would contribute to your high latency. You can raise the rate in this configuration and see whether you get better latency.
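
          Before raising anything, it may be worth confirming that the policer is actually dropping. A quick check (exact counters and output format vary by release):

            N9K-B# show policy-map interface control-plane

          In the output, find the copp-system-p-class-important section; a non-zero, incrementing "violated" count there would support the TAC explanation.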


          This is the Security Configuration Guide for NX-OS Release 7.x (not sure which release you're running); there are step-by-step instructions under "Configure Control Plane Policy Map": https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/security/configuration/guide/b_Cisco_Nexus_90….
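
          The system defaults can't be edited in place, so the documented approach is to clone a profile and modify the copy. A rough sketch, assuming the strict profile and the usual prefix naming for the clone (verify the generated names with "show running-config copp"; supported rates vary by platform and release):

            copp copy profile strict prefix custom
            policy-map type control-plane custom-copp-policy-strict
              class custom-copp-class-important
                police cir 4500 pps bc 128 packets conform transmit violate drop
            control-plane
              service-policy input custom-copp-policy-strict

          The 4500 pps here is just an illustrative bump over the 3000 pps default, not a recommendation.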

           

          Hope this helps.  MM

          • 2. Re: High Latency between N9Ks
            Kevin Santillan

            The only time I've encountered this was when dmirror was enabled on a couple of 9Ks at sites we acquired. TAC sometimes enables dmirror as a troubleshooting step to mirror traffic from a specific line card to the CPU so that it shows up in debugs, then forgets to disable it after the session. To verify whether it is enabled, do this:

             

            9K-1# bcm-shell module <MODULE #> "dmirror show"

            xe44: Mirror all to local port cpu0

             

            If the output shows exactly as above, then yes, it is enabled, and it would cause latency since traffic is mirrored and then punted to the CPU. Here is how to disable it, assuming the port your switch sends ICMP replies out of is on module 1:

             

            9K-1# bcm-shell module 1

            Warning: BCM shell access should be used with caution

            Entering bcm shell on module 1

            bcm-shell.0> dmirror xe44 mode=off

            dmirror xe44 mode=off
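
            Once you exit the shell, re-run the earlier check to confirm the change took effect (the assumption here is that a clean module simply no longer lists the "Mirror all to local port cpu0" line for that port):

            9K-1# bcm-shell module 1 "dmirror show"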

            • 3. Re: High Latency between N9Ks
              carlos

              That was it: dmirror was enabled, and the latency normalized when we disabled it. Someone on our team probably enabled it over the weekend. Thanks.

               

              N9K-A# ping 10.X.X.12

              PING 10.X.X.12 (10.X.X.12): 56 data bytes

              64 bytes from 10.X.X.12: icmp_seq=0 ttl=254 time=0.748 ms

              64 bytes from 10.X.X.12: icmp_seq=1 ttl=254 time=0.57 ms

              64 bytes from 10.X.X.12: icmp_seq=2 ttl=254 time=0.565 ms

              64 bytes from 10.X.X.12: icmp_seq=3 ttl=254 time=0.587 ms

              64 bytes from 10.X.X.12: icmp_seq=4 ttl=254 time=0.625 ms