Intent-Based Networks Are Coming – Don’t Be Scared, Be Ready for It!

 

Last year I made a post about Virtualization and the year before that I wrote about Network Terminology. In those posts I discussed a little bit how important it is to understand what you say and that virtualization literally changed everything. This year I wanted to continue along that journey and introduce you to Cisco’s automation solution for the Campus/Access-layer: Software-Defined Access (SD-Access).

 

I also want to highlight that network automation tools require a solid understanding of networks, virtualization techniques, and network terminology. Recently there have been a lot of good discussions about SDN and automation, and when you look around, there is a lot of worry that automation and SDN will make network engineers lose their jobs. Make no mistake: networks are getting more and more complex every day, and we are approaching the breaking point where managing them manually is no longer possible. There are many reasons to automate networks, and making network engineers lose their jobs is not one of them!

 

I have tried to highlight some of the things that I, as a network engineer, need to be aware of even if my intention is to fully automate my network. As you read through this post, you will notice that it leaves out a lot of information about technologies and terminology that a network engineer should be interested in knowing. The purpose of this writing style is to raise questions, because a network engineer needs to know the fundamentals before designing and implementing automation tools!

 

You will also notice that I mention networking terminology that spans the entire lifecycle of a network (design, implementation, operation, troubleshooting). The purpose is to show you that SDN and automation are not going to be the end for network engineers. A solid understanding of networks is still a mandatory requirement for any network automation solution to work as intended. Hopefully you will also see that SDN means you really need to understand networks! The network engineer will be needed, even with SDN!

 

Cisco’s automation solution for the campus/access layer is called Software-Defined Access, so let’s get started!

 

The information in this post is summarized from material that is mostly available to partners and from various Cisco Live sessions (linked at the end of the post).

 

The whitepaper is located here:

https://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/software-defined-access/white-paper-c11-740585.pdf

 

The Cisco Validated Design Guide for SD-Access is available here:

https://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Campus/CVD-Software-Defined-Access-Design-Sol1dot2-2018DEC.pdf

 

You can find almost all of the information in this post by watching the sessions yourself at the following link (session names are listed at the end of the post):

https://ciscolive.cisco.com/on-demand-library/?#/

 

 

Introduction to SD-Access

 

For the past few years, Cisco has tried very hard to promote the concept of “intent-based networking” and the “Cisco Digital Network Architecture." It means that, according to Cisco, there is a demand for networks to be defined by a policy (intent-based networking) that is automatically programmed down to the network infrastructure (automation).

 

For a network to be automated in such a way, there is a need for an architecture that defines how individual components in a network should integrate with each other. This is the Cisco Digital Network Architecture (Cisco DNA).

 

It’s a complicated architecture that requires specific hardware designs (ASICs, programmable flexible hardware, virtualization, processes that run in their own memory space, etc) in combination with a network design that allows it (underlays, overlays, tunnel protocols). I said it was a complicated architecture, but at the end of the day it still uses well-known protocols and products, just to name a few: ISE, LISP, VXLAN, IPv4, IPv6, DNS, DHCP, 802.1X, and so on.

 

When you combine all these things together, it means the end of the typical network design/topology where we use a VLAN/subnet to segment our network. Instead, Cisco introduces the concept of a “fabric” (an L3 routed access design) where every host is presented to the fabric as a /32 host address (L3). Each individual host address is then handled by LISP within the fabric.

 

Within the fabric, the hosts can use the same gateway, which enables mobility across access switches without reconfiguring hosts. This is solved with tunneling protocols: host traffic is tunneled end-to-end via the underlay topology. The L2<-->L3 resolution (ARP) is handled by LISP. The whole process works in much the same way as the familiar DNS system.

 

Clients are connected to the fabric via “edge switches” (access switches). The edge switches then communicate, through the fabric, with a control plane switch. The control plane switches are responsible for handling ARP for clients connected to edge switches. Of course, it’s possible to connect to external networks, and the switches that do this are called “border nodes”. The border nodes also communicate with the control plane switches for external prefixes; for example, the default route and similar prefixes that are not located within the fabric itself.

 

As you can imagine, this new way of thinking creates some issues and requires careful planning around other services. Just to name a few common services that need to be considered:

 

  • Broadcast (both L2 and L3) is a problem, which makes DHCP a problem, etc.
  • WLCs and wireless networks (CAPWAP in particular) need to function properly.
  • Integration with firewalls is a requirement.
  • Connectivity to data centers is a requirement.
  • Internet needs to be reachable.
  • Possibly other SD-Access sites need to be reachable.

 

Cisco defines all the individual components and the architecture that is required to be able to program the intended network policy into all the infrastructure hardware as “Cisco DNA.” The automation solution Cisco provides is called "Cisco DNA Center."

 

This entire concept is what Cisco calls SD-Access. If you have been working in the data center lately, you are probably familiar with ACI. SD-Access is a similar solution for campus/access networks.

 

As I’ve already mentioned, this is a complex technology that involves a whole bunch of other protocols and technologies, and they need to work together to achieve the defined network policy from Cisco DNA Center. Exactly how everything works is explained by watching the sessions I’ve linked to at the end of this post.

 

For the rest of this post, I will try and keep things to the basics so that you can have a brief understanding of what SD-Access is and why it’s important to understand the concepts of SDN in today’s networks!

 

If you look into SDN, and specifically SD-Access, it relies on the following key components to function:

 

  1. A special ASIC specifically designed for SDN. (BRKCRS-2901)
  2. An underlay using a scalable IP routing design and a scalable routing protocol. Cisco recommends IS-IS, but any protocol that provides IP connectivity will work.
  3. An overlay consisting of LISP & VXLAN.
  4. Cisco DNA Center for the automation. This includes automated underlays, overlays, and integrations with DNS, DHCP, ISE, firewalls, data centers, etc.
  5. Some additional information, including wireless integration.

 

 

#1. A special ASIC specifically designed for software-defined networking (SDN)

 

To address the challenges with SDN, Cisco developed their own ASIC. In recent years, IOS (the software of routers and switches) has been upgraded to support a more modern approach. This means the new IOS is more flexible, more scalable, and more modular. Modern IOS is capable of handling individual processes in their own memory space, completely isolated from other processes. This means that it’s possible to restart processes in full production without “any” risk of affecting the other processes. Cisco defines this new IOS design as “Converged OS/Open IOS-XE”. This enables network engineers to finally be able to boot containers on top of the hardware.

 

Why would we want to do that?

 

Imagine yourself troubleshooting something and you need a packet capture. Now all you need to do is boot Wireshark, as a container, and take your packet capture inline with your network traffic. That was just an example, but it enables us to run network applications for management on top of the network infrastructure.

 

This new ASIC also gives Cisco the ability to add/remove new protocols that are hardware-accelerated, and it’s also possible to add/remove features without the need to redesign the ASIC hardware. For the purpose of this post and SD-Access, it means that it can handle IPv4/IPv6 as well as the tunnel-protocols (VXLAN, MPLS etc) in hardware. If there is a new protocol invented, then it’s possible for Cisco to add support for it. In other words, you won’t need new ASICs to support new technology!

 

#2. An underlay that is built with IS-IS (or any other routing protocol of your choice)

 

The only real requirement to be able to install SD-Access is an ordinary infrastructure that routes IP. In other words you’ll need IP connectivity between nodes. There are no real requirements for a “full-mesh” topology, so your physical layer can be cabled in multiple ways. The IP network (the underlay) will handle the redundancy using equal-cost multi-path (ECMP).

 

As with any network design, it’s recommended to consider the pros/cons of various routing protocols in relation to the network topology. Cisco recommends IS-IS for the underlay because it scales to very large networks. In my opinion, you are probably going to be OK with EIGRP or OSPF as well.

 

I have mentioned the term “underlay” a couple of times now. I hope that everybody has heard it before, but just in case you haven’t... The short explanation is that an underlay is a “transport network” (mostly L3, but it can also be L2 with limitations) that is used for connectivity between your nodes. The routing protocol used for the underlay is responsible for the redundancy. For extremely large topologies, IS-IS is recommended, but as mentioned before, in most cases you’ll be fine with EIGRP/OSPF. You can build your underlay manually, but of course you can also automate this through Cisco DNA Center!
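
To make the ECMP part a bit more concrete, here is a minimal Python sketch of per-flow load sharing over equal-cost paths. It is purely illustrative: the addresses are made up and the hash is a generic one, not any vendor’s actual algorithm.

    import hashlib

    def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
        # Hash the 5-tuple so every packet of a flow takes the same path,
        # which avoids reordering while still spreading flows across links.
        flow = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
        digest = hashlib.sha256(flow).digest()
        return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

    # Two equal-cost uplinks installed by the underlay routing protocol
    uplinks = ["10.0.0.1", "10.0.0.5"]
    print(pick_next_hop("172.16.10.20", "172.16.20.30", 49152, 443, "tcp", uplinks))

The point is simply that the routing protocol installs the equal-cost paths, and the hash spreads flows across them; that is where the underlay redundancy comes from.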

 

If you compare SD-Access with Cisco ACI from the DC world, the biggest difference is that there is no recommended topology for SD-Access. ACI uses the “spine & leaf” topology to keep latency very low. This is not needed for SD-Access, but it should be considered if fixed latency is a requirement. Even though the only requirement is that the underlay routes IP packets, you should still be aware of the latency, jitter, convergence, and other performance characteristics that your “overlay” needs to function properly.

 

#3. An overlay built with LISP & VXLAN

 

Now we are getting under the hood of SD-Access, where all the fancy and complex things happen. As I mentioned before, SD-Access uses already well-known technologies and protocols to tunnel traffic end-to-end via the underlay.

 

Make no mistake, this is a very complex process, and my goal with this post is to just touch the surface so you can understand the mechanics of how it works! SD-Access overlay basically works like this:

 

  • A control plane built using LISP
  • A data plane built using VXLAN
  • Security built using CTS

 

A control plane built using LISP

 

I understand that some of you may never have heard about LISP (https://tools.ietf.org/html/rfc6830) before. LISP is a protocol designed to address a fundamental problem with IP networks: a host is identified by its IP address together with the subnet it sits in.

 

This is pretty much how all networks function today: you use the host ID (IP address) and the network ID (subnet mask) to figure out where a host is, or should be, located in the network topology. For this to be successful, your IP addressing needs to be carefully considered during network design, and in most cases the most efficient approach is to deploy IP addressing hierarchically. This is the only design that makes it possible to aggregate multiple subnets in routing advertisements (summary blocks).

 

The key factor to understand is that the configured IP address and subnet mask together identify WHO the host is (the IP address) and WHERE the host is located (the subnet) in the topology. In other words, you have no flexibility when it comes to host mobility; if your host moves around your network topology, it will need a different IP address and subnet mask to identify itself properly in the network.

 

LISP addresses the mobility problem by using the concepts of “endpoint identifier” and “routing locator”. To understand this easily, just think of LISP as building and maintaining a database of which hosts (endpoint IDs, or EIDs) live behind which locations (routing locators, or RLOCs). LISP uses this database to build the L2-L3 mappings that protocols like ARP need. I mentioned earlier that you can think of SD-Access as a “DNS system”: whenever you need an L2-L3 mapping, you ask the control plane node (which maintains the LISP database) for the correct resolution information.
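
If the DNS analogy helps, here is a deliberately tiny Python sketch of the EID-to-RLOC database idea. It is my own simplification with made-up addresses, not how a real LISP map server is implemented.

    class MapServer:
        # Toy LISP map server/resolver: which EID (host) lives behind
        # which RLOC (fabric node) right now.
        def __init__(self):
            self.database = {}                    # EID -> RLOC

        def register(self, eid, rloc):
            # Edge node announces: "this /32 EID is behind me".
            self.database[eid] = rloc

        def resolve(self, eid):
            # Another node asks: "where is this EID right now?"
            return self.database.get(eid)         # None means: not in the fabric

    ms = MapServer()
    ms.register("10.10.10.21/32", "192.168.255.1")   # host behind edge node 1
    ms.register("10.10.10.22/32", "192.168.255.2")   # host behind edge node 2
    print(ms.resolve("10.10.10.21/32"))              # 192.168.255.1

    # The host roams to edge node 2: only the registration changes,
    # the host keeps its address and its gateway.
    ms.register("10.10.10.21/32", "192.168.255.2")
    print(ms.resolve("10.10.10.21/32"))              # 192.168.255.2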

 

This also addresses some scalability issues with the so-called “routed access design”. Instead of having a very large routing database with multiple default gateways, LISP enables us to consolidate this into a smaller database that uses fewer hardware resources. In other words, it’s also more scalable from a design perspective.

 

From the perspective of the hosts, they are completely unaware that any of this is happening in the background. I mentioned in the beginning that this is done by using three different types of nodes for the control plane of the fabric. The next section will briefly cover most of the “WHYs”.

 

SD-Access uses three different types of nodes for the control plane:

 

  • Edge nodes … (connect to the hosts and announce them to the control plane nodes)
  • Border nodes … (connect to all external networks outside the SD-Access fabric)
  • Control plane nodes … (maintain the LISP database of where hosts are located and their L2<-->L3 information)

 

Cisco uses the above terminology, while LISP itself uses a different terminology. That doesn’t matter in the end, but if you have the need to troubleshoot the fabric/overlay using CLI, then the show commands will list the LISP terminology, which can be confusing to grasp. I will just name them here for reference, but don’t make a big deal out of it. The concept is the same, and you will find out it’s very similar to the DNS system.

 

SD-Access terminology: Map Server/Resolver, LISP Tunnel Router (XTR), LISP Proxy Tunnel Router (PXTR).

LISP terminology: Map Server/Resolver, ITR, ETR, PITR, PETR.

 

SD-Access and LISP both use a Map Server/Resolver to request information from the LISP database. To make it more efficient, there is a built-in cache so that you only need to resolve the same information once; after that, it’s in the cache for future lookups. Very similar to how DNS works!
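
Here is a rough sketch of that caching behaviour, much like a DNS resolver keeping answers for their TTL. Again purely illustrative: it reuses the MapServer sketch from above, and the TTL value is made up.

    import time

    class MapCache:
        # Toy map-cache on an edge node: answer locally when possible,
        # otherwise ask the map server and remember the answer for a while.
        def __init__(self, map_server, ttl_seconds=300):
            self.map_server = map_server
            self.ttl = ttl_seconds
            self.cache = {}                          # EID -> (RLOC, expiry)

        def lookup(self, eid):
            entry = self.cache.get(eid)
            if entry and entry[1] > time.time():
                return entry[0]                      # cache hit
            rloc = self.map_server.resolve(eid)      # cache miss: ask control plane
            if rloc is not None:
                self.cache[eid] = (rloc, time.time() + self.ttl)
            return rloc

    edge1 = MapCache(ms)                             # "ms" from the sketch above
    print(edge1.lookup("10.10.10.22/32"))            # resolved via the map server
    print(edge1.lookup("10.10.10.22/32"))            # answered from the local cache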

 

Edge Node

 

  • Provides the first-hop gateway IP for all connected clients. This is the same on all the edge nodes and is achieved through anycast. In other words, clients use anycast gateways!
  • Is responsible for authenticating clients, for example using 802.1X.
  • Is responsible for announcing all the EIDs to the control plane nodes (there can be more than one).
  • Encapsulates and decapsulates data plane traffic (VXLAN) to/from clients connected to the edge node.

 

Here is where a lot of magic happens with SD-Access. What I want you to understand by reading this is that we are going to use the same subnet behind all the edge nodes and that all the clients will be using the same default gateway. This means that mobility between edge nodes will work without reconfiguring hosts.

 

The hosts are not aware of the infrastructure intelligence at all. All of this intelligence is handled by the edge nodes, which ask the control plane nodes for information and then store it in their own local cache for future use. This makes the whole process transparent to the end hosts connected behind the edge nodes. During normal operation, the edge nodes cache where other EIDs are located, and if they don’t know where a specific EID is, all they need to do is ask the map server (the control plane nodes).

 

In other words, the connected clients behind each edge node build their payloads as usual, but when the traffic reaches the edge node’s anycast gateway, the magic happens: packets are encapsulated end-to-end between edge nodes using VXLAN!

 

This is why Cisco calls the overlay a fabric. If you are connected to the fabric, the fabric will take care of the end-to-end tunneling, regardless of where you are located in the topology.

 

Border Node

 

There are three different types of border nodes (internal, external, anywhere), but they have almost the same function: to keep track of the “known and unknown networks” that exist outside the SD-Access fabric. They communicate with the control plane nodes so that all the EIDs can reach these networks. In other words, they act as a gateway between SD-Access and other networks. A typical SDN use case would be the link between your DC, running ACI, and your campus network, running SD-Access.

 

Control Plane Node

 

This is the most important node with SD-Access. It maintains the database by storing information about EIDs (hosts behind nodes) and RLOCs (edge nodes or border nodes) so that the data plane (VXLAN) can function properly.

 

Data Plane built with VXLAN

 

OK, let’s be honest. This is probably where most people will struggle to understand SD-Access. I believe a lot of you have heard about VXLAN; it’s rarer that the engineers I meet have actually experienced the power of using it. If you are familiar with data center technologies, then you know that VXLAN enables a data center to tunnel L2 traffic over L3 (MAC-in-IP tunnels).

 

Mobility and the need to communicate over L2 end-to-end are almost always requirements in a data center. This is because a lot of virtualization technologies still have the technical constraint that they need to be in the same L2 domain to function. Stretched subnets in data centers are very common, but it’s generally considered bad network design to have a large L2 domain. In data centers, it’s common to solve this problem using VXLAN as an enabler, but it’s implemented differently, and that’s another story.

 

What SD-Access and the data center solutions have in common is that they both tunnel L2 over an L3 network. In other words, data centers also use the concept of underlay/overlay networks. VXLAN is used in both cases, and this post is not enough to explain entirely what VXLAN is and how it works. Instead, my intention is to explain what it’s used for in SD-Access!

 

The very short explanation of how VXLAN works is that it will tunnel L2 over L3. If you combine this with the concept of “anycast gateways,” then it’s possible for hosts to communicate L2-L2 via the underlay, regardless of where they are attached at the access layer. They are going to use the same gateway behind all Edge Nodes. This enables the mobility within the fabric.

 

VXLAN has a “tiny” problem if you care about security: there is not enough room in the standard header to carry some important information that “intent-based networking” needs. To solve this, Cisco modified the VXLAN header a little bit and calls the result VXLAN-GPO. It’s almost the same header, but the following fields were added to support policy-based network automation for the data plane:

 

  • VN-ID
  • Segment-ID

 

VN-ID = Maps to which VRF a packet belongs to.

Segment-ID = Maps to which scalable group tag the packet belongs to. (This is confusing and should not be mixed up with secure group tags from Cisco TrustSec)
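
To make the header change a bit more tangible, here is a small Python sketch that packs an 8-byte VXLAN header with the group-policy extension. The field layout follows the public VXLAN group policy draft; the VN-ID and tag values below are made up.

    import struct

    def vxlan_gpo_header(vni, sgt):
        # 8-byte VXLAN header with the group-policy extension:
        #   byte 0    : flags (G = group policy present, I = VNI valid)
        #   byte 1    : reserved
        #   bytes 2-3 : Group Policy ID  -> carries the Segment-ID / SGT
        #   bytes 4-6 : VNI              -> carries the VN-ID (the VRF)
        #   byte 7    : reserved
        flags = 0x80 | 0x08                          # G=1, I=1
        return struct.pack("!BBH", flags, 0, sgt) + vni.to_bytes(3, "big") + b"\x00"

    header = vxlan_gpo_header(vni=4099, sgt=17)      # made-up VN-ID and SGT
    print(header.hex())                              # 8800001100100300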

 

The SGT terminology is used frequently in Cisco’s SDN solutions. Those of you who work with security solutions should be familiar with the concept of Cisco TrustSec (CTS). The difference is that classic CTS carries the tag hop-by-hop or exchanges it with the SXP protocol. With SD-Access, the same concept is used, except the tag is carried end-to-end in the VXLAN header.

 

This means that using VXLAN-GPO, it’s possible to tunnel traffic end-to-end (carrying security information on top of it) using only VXLANs. Like I’ve mentioned before, this is a complicated topic to grasp, but this covers the basics of it!

 

As with all tunneling protocols, the last thing that is needed is to compensate for the additional bytes that the VXLAN encapsulation adds (typically 50-54 bytes). The recommendation is to support jumbo frames (MTU 9100) end-to-end in the underlay to compensate for this.
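
A quick back-of-the-envelope check of where those 50-54 bytes come from, assuming the usual outer headers (the extra 4 bytes appear only if the underlay links carry an 802.1Q tag):

    outer_ethernet = 14      # new outer MAC header
    outer_ipv4     = 20
    outer_udp      = 8
    vxlan_header   = 8
    dot1q_tag      = 4       # only if the underlay link is VLAN tagged

    overhead = outer_ethernet + outer_ipv4 + outer_udp + vxlan_header
    print(overhead)                  # 50
    print(overhead + dot1q_tag)      # 54
    print(1500 + overhead)           # 1550: a full client frame needs at least this much room in the underlay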

 

Security built with CTS (Cisco TrustSec)

 

I have to be honest and say that Cisco TrustSec has had a lot of issues, but don’t worry: with SD-Access, the concept of “secure group tags” is reused and rebranded as “scalable group tags”. Remember that SD-Access carries the tag in the VXLAN header and not via SXP (which was, and is, the source of a lot of the issues with secure group tags/TrustSec).

 

So a very short summary of security with SD-Access: it’s built with VRFs (called virtual networks in SD-Access terminology). Within each VRF/virtual network, we can use SGTs to segment hosts. In other words, we tag hosts with a policy (the SGT) and then allow or deny traffic based on it.

 

Since this tag is carried in the VXLAN encapsulation/decapsulation, it’s possible to roam around the network (mobility) and keep your policy wherever you are located by using SGTs. Once a host is attached to an SGT, you can use it in many other ways with SDN, but that is outside the scope of this post! To give you an idea, you can create network alarms (for monitoring), SPAN ports (for troubleshooting), QoS policies, NetFlow policies, and so on. All traffic is identified with the SGTs.
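
Conceptually, the enforcement ends up looking like a small matrix keyed on the source and destination group. Here is a toy sketch of that idea; the group names, numbers, and actions are made up and have nothing to do with how ISE actually stores its policy.

    # Toy group-based policy: (source SGT, destination SGT) -> action.
    SGT_EMPLOYEES, SGT_IOT, SGT_SERVERS = 10, 20, 30

    policy = {
        (SGT_EMPLOYEES, SGT_SERVERS): "permit",
        (SGT_IOT, SGT_SERVERS):       "deny",
        (SGT_IOT, SGT_EMPLOYEES):     "deny",
    }

    def enforce(src_sgt, dst_sgt, default="deny"):
        # The tags are read straight out of the VXLAN-GPO header on decapsulation.
        return policy.get((src_sgt, dst_sgt), default)

    print(enforce(SGT_EMPLOYEES, SGT_SERVERS))       # permit
    print(enforce(SGT_IOT, SGT_SERVERS))             # deny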

 

Of course, you need to identify the host, classify its security needs (your policy), and then attach an SGT to it. Doing this manually is cumbersome (but possible!). SDN means automation, however, and this is done through integration with Cisco ISE (more on that soon), which uses 802.1X, MAB, or WebAuth and then dynamically assigns the SGT to hosts.

 

#4. Cisco DNA Center for automation of Underlay + Overlay + Integrations with DNS, DHCP, ISE, FW etc.

 

Cisco DNA Center is a product that is capable of many things. In this post, I will focus on how Cisco DNA Center integrates with Cisco ISE to support SD-Access. Do note that Cisco DNA Center is a very capable automation product that can do a lot more, but that’s outside the scope of this post!

 

My personal thought about Cisco DNA Center is that, when you look at the hardware specifications, it is a very powerful server on paper. My experience, however, is that it still feels slow to work with even on those system specifications. It’s just a warning so it doesn’t scare you away; it’s still a very capable automation tool!

 

Cisco DNA Center is basically just a very powerful server that you can deploy as a single server or as an “HA cluster”. I write “HA cluster” because Cisco implemented this cluster a bit differently than most other HA solutions. If you deploy it with HA, some of the services will be spread out across the cluster, while other services run locally on one server with redundancy on the other servers.

 

In order to understand how Cisco DNA Center really works under the hood, we need to split Cisco DNA Center into three components:

 

  • NDP (Network Data Platform)
  • NCP (Network Control Platform)
  • Cisco DNA Center

 

Network Data Platform

 

I haven’t mentioned it before, but Cisco DNA Center collects a lot of information (telemetry data) from the network, and this is enabled by default. Within the concept of SD-Access, you should be aware that it uses common protocols (Syslog, NetFlow, NBAR, SNMP, etc) to collect telemetry data from the network. This data is used in various processes within SDN, and you could use it with SD-Access or somewhere else in your network.

 

Within Cisco DNA Center, all this data is called “Assurance” or the “Network Data Platform”. You have probably heard the term “Assurance Engine”… this is what NDP is all about. Collection is enabled on all the edge nodes, and the data is sent back to the Cisco DNA Center server. Unfortunately, it’s not very flexible in how you can tune this collection, since Cisco DNA Center uses the information for a lot of its troubleshooting.

 

Do note that Cisco DNA Center supports a lot of APIs, and it’s possible to overcome this using other techniques. For example, a use case would be to poll Netflow data from your existing collectors instead of having to use Cisco DNA Center for this.

 

Network Control Platform

 

It sounds complicated, but this is just the piece of Cisco DNA Center that talks to the various systems through APIs. This is typically done using well-known techniques such as YANG, SSH, NETCONF, etc. The policy itself is defined in Cisco DNA Center, and it’s automated through these APIs.
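
To give you a feeling for what “operating through APIs” looks like from the outside, here is a short Python sketch against Cisco DNA Center’s documented REST (Intent) API. The server address and credentials are hypothetical, and you should verify the endpoint paths and field names against the API documentation for your own DNA Center version before relying on them.

    import requests

    DNAC = "https://dnac.example.local"              # hypothetical DNA Center address
    AUTH = ("api-user", "api-password")              # hypothetical credentials

    # Get a token (POST /dna/system/api/v1/auth/token with basic authentication).
    token = requests.post(f"{DNAC}/dna/system/api/v1/auth/token",
                          auth=AUTH, verify=False).json()["Token"]

    # Ask the Intent API for the devices DNA Center manages.
    devices = requests.get(f"{DNAC}/dna/intent/api/v1/network-device",
                           headers={"X-Auth-Token": token}, verify=False).json()

    for device in devices.get("response", []):
        print(device.get("hostname"), device.get("managementIpAddress"))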

 

Cisco DNA Center

 

Finally, we have arrived at the section where we actually create and define a network policy with SD-Access. It’s been quite a few concepts to go through before we reached this point. But without a fundamental understanding of how SD-Access “works” and how the architecture behind it is designed, it wouldn’t make much sense to create a network policy (but in theory you could start here).

 

Within Cisco DNA Center, you define your network policies, and you define your network. You still need to know a little bit about your topology and some design parameters for your network. You need to define border nodes, control plane nodes, edge nodes, IP addressing, and such things. When this is done, Cisco DNA Center will use APIs to automate this policy across your entire infrastructure “end-to-end” to support it within your SD-Access fabric.

 

This is of course a complicated process, and if you are familiar with Cisco ISE, you will feel familiar with the Cisco DNA Center GUI. It follows a similar pattern and structure.

 

If it still isn’t clear, this is the key difference from a traditional network, where we manually configure all the individual components; with SDN it’s automated. Cisco DNA Center will help you automate your campus network. On top of that, Cisco DNA Center comes with a ton of other features that are considered to be SDN. If you’ve made it this far, I hope you have a better understanding now that SDN is not the end of network engineering. I believe it’s going to be even more important for network engineers to understand the complexity of modern networks! SDN will help us piece the puzzle together as long as we understand how it works!

 

One of my favorite features of Cisco DNA Center is the ability to define a “workflow”, which means I can design a step-by-step process that solves a specific need for me and then let anyone run that process in the future. It’s actually very simple to automate using workflows, and since anyone can run the “script” I design, I can hand it to pretty much anyone and they simply follow the workflow I have designed. You can even have workflows triggered by network events (SNMP traps) and so on.

 

In other words, it can be used to automate a lot of things, not just your SD-Access fabric, but also a lot of common troubleshooting issues.

 

#5. Additional information about SD-Access

 

If you made it this far, you might still have a lot of unanswered questions, so I will try and address a couple of those in this section:

 

  • Wireless integration with SD-Access
  • Caveats with SD-Access
  • Where do I go to learn more?
  • How do I get access to a lab?
  • Is SDN the end for network engineers?

 

Wireless APs build VXLAN tunnels with Edge Nodes

 

SD-Access has support for wireless integrations, of course! I didn’t mention it before because with SD-Access, the traffic flow is altered from a traditional WLC (CAPWAP) implementation. The AP will still build a traditional CAPWAP tunnel to the WLC for the control plane. On top of this, the AP will also build a VXLAN tunnel directly to the edge node where it’s connected!

 

If you did pay attention until now…this means that client-to-client traffic is NOT tunneled via the WLC but directly between the AP and the access switch! This is a very big difference for how the data plane normally works with a Cisco WLC! From a client perspective, it means you no longer need to be tunneled through the WLC (in most cases).

 

Another major change when it comes to wireless is that in the GUI for the WLC, you will see that you can configure wireless policies to support “intent-based networking."

 

 

All this talk about SDN, Cisco DNA Center, and Automation… So what are the caveats?

 

  • Since SD-Access is by implementation an L3 routed access design, you need to understand the limitations of broadcast protocols and how they work with this design. For example, to make DHCP work, it’s a mandatory requirement that you have a more advanced DHCP server that can handle Option 82 (there is a small sketch of what Option 82 carries right after this list of caveats).

 

  • SD-Access is a license-based technology, so it’s going to be more expensive than a traditional network. In a lot of networks, the cost for SD-Access is difficult to justify. You need to carefully weigh your operational cost savings from implementing SD-Access against the license cost.

 

  • From a design perspective, it’s fairly difficult to pick hardware for SD-Access. It’s very important to choose the right hardware for the node type you want it to operate as. On the other hand, the limitations for SD-Access are really just the amount of hardware resources available, which makes it difficult to pick the right device. For example, if you want a device to operate as a border node, it needs to support at least two topologies (SD-Access plus the other domain). Edge nodes pretty much only depend on how much TCAM they have; IPv4 has a benefit here since it uses only one TCAM entry per host, while IPv6 can use up to six! And of course the control plane node should be the device with the most hardware resources, due to the important database it needs to maintain!
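
Back to the DHCP caveat at the top of this list: Option 82 (“relay agent information”, RFC 3046) is just a container of sub-options that the relay stamps onto the client’s DHCP packet so the reply can be steered back to the right place. A purely illustrative Python sketch of what that looks like on the wire; the identifiers are made up.

    def suboption(code, value):
        # One relay-agent sub-option: code, length, value (RFC 3046 format).
        return bytes([code, len(value)]) + value

    def option82(circuit_id, remote_id):
        # DHCP option 82 with the two classic sub-options:
        #   1 = Agent Circuit ID, 2 = Agent Remote ID
        payload = suboption(1, circuit_id) + suboption(2, remote_id)
        return bytes([82, len(payload)]) + payload

    # Made-up identifiers: the relay stamps in enough information for the
    # DHCP reply to find its way back to the right switch and port.
    print(option82(b"Gi1/0/10:vlan1021", b"edge-node-1").hex())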

 

Those are some of the caveats that I think most people will come in contact with when investigating SD-Access. Even though this was just a “small” introduction to SD-Access, I really hope it did not scare you away from learning more about it!

 

I am interested in learning more, what should I do next?

 

There is just so much to learn about SDN, intent-based networking, SD-Access, ACI, SD-WAN, and so on. It’s a complex area that requires a solid understanding of networking. It’s a completely different approach to designing, implementing, and operating networks, so you should learn about all the protocols I’ve mentioned in this post as a priority. This will give you a foundation to dig deeper into SD-Access:

 

  • Underlay
  • Overlay
  • VXLANs
  • L3 Routed Access Designs
  • ECMP
  • CTS
  • WLC/CAPWAP
  • EIGRP/OSPF/IS-IS
  • Anycast
  • LISP
  • Cisco ISE

 

 

Once you are ready for that, I would recommend the following sessions that you can view, depending on what you are interested in (note that some of them might not be available to everybody):

https://ciscolive.cisco.com/on-demand-library/?#/

 

  • Cisco SD-Access - A Look Under the Hood - BRKCRS-2810
  • Cisco SD-Access – Assurance and Analytics - BRKCRS-2814
  • Cisco SD-Access - Connecting Multiple Sites in a Single Fabric Domain - BRKCRS-2815
  • Cisco SD-Access - Connecting to the Data Center, Firewall, WAN and More ! - BRKCRS-2821
  • Cisco SD-Access – Integrating with Your Existing Network - BRKCRS-2812
  • Cisco SD-Access - Scaling the Fabric to Hundreds of Sites - BRKCRS-2825
  • Cisco SD-Access Campus Wired and Wireless Network Deployment Using Cisco Validated Designs - BRKCRS-1501
  • Cisco SD-Access Wireless Integration - BRKEWN-2020
  • How to setup an SD Access Wireless fabric from scratch - BRKEWN-2021
  • SD-Access - Troubleshooting the fabric - BRKARC-2020
  • SD-Access technology deepdive - BRKCRS-3810
  • Cisco SD-Access - Design, Deployment, Monitoring, Troubleshooting and Assurance - TECCRS-3810
  • Cisco Silicon - The Importance of Hardware in a Software-Defined World - BRKCRS-2901
  • Design and deployment of Cisco Catalyst 9800 Wireless Controller - TECEWN-2005
  • Next Generation Network Architectures Design, Deployment and Operations - TECSPG-2801
  • The Catalyst 9000 Switch Family - An Architectural View - BRKARC-2035

 

Can I try a Demo of SD-Access anywhere?

 

One of the problems with SDN is that it’s difficult to set up a lab to learn about these solutions. If you are lucky, you work for a Partner that has access to the SD-Access “LAB-Kit”. Most of you are probably not that lucky. But I will take this opportunity to point you in the direction of Cisco’s Public Demo Portfolio “dCloud”:

 

https://dcloud.cisco.com/

 

I will not go through how it works, but it’s possible to try out most of Cisco’s technologies using dCloud. If you are interested in SDN and SD-Access, I strongly encourage you to try it out using dCloud!

 

I am scared that SDN will take over my job, should I be scared?

 

Every industry changes over time, and to stay successful, it’s important to adapt to the changes and accept them. I have already mentioned a couple of times that when networks become more and more complex, there is a need for automation whether we like it or not. The role for a network engineer will be to understand the architecture and the individual components so that you can define the Intent of the network and translate this into a Software-defined policy.

 

This requires very good technical and business skills.

 

So I believe that every network engineer who has an open mind towards SDN and automation will be ready for the future. Network engineers will still be wanted for their understanding of networks. That won’t change; what changes is that they will define the intent of the network using policies instead of CLI commands.

 

Intent-based networks are coming – don’t be scared, be ready for it!