An Offer You Can't Refuse: ACI Contracts...Just for Fun!
By Micheline Murphy
In the law, a Contract is an agreement between two parties to exchange something for something. Two wheat for two stone, for example. In Application-Centric Infrastructure (ACI), a Contract is something altogether different. An ACI Contract is authorization for two groups of endpoints to talk. In this installment of …Just for Fun!, let's take a look at ACI Contracts, see how they are put together, and see how to verify them when they are put together right.
As always, I like to start with a few words about topology. ACI is a bit of a different beast. Physically, the ACI topology is a spine/leaf fabric. But the virtual topology is much different, and frankly more interesting. Virtually, our ACI topology is a big cloud with a bunch of EPGs studded off of it. Kind of like a coronavirus…
For this lab, I’ve got the following EPGs configured.
[Table: VM name and IP for each EPG]
Which EPGs can talk to which other EPGs and how they can talk is dictated by which Contracts are configured. For this document, I’m going to classify Contracts based on the characteristics of the EPGs you want to be able to communicate. I’ll begin with EPGs in the same VRF and Tenant, and then go to EPGs in different VRFs but the same Tenant. After that, we’ll explore Contracts between EPGs in different Tenants, and then round out the day with an exploration of Contracts between EPGs within the ACI fabric and EPGs outside the fabric.
Contracts Between EPGs in the Same VRF
Constructing Contracts in ACI is a little like putting together an engine. There are a bunch of moving parts, and each part has to be connected to the right other parts for the whole thing to work. From this perspective, the simplest Contract is a Contract between two EPGs in the same VRF. There are really only two steps to this operation: (1) build a Contract; and (2) apply it to the EPGs you want.
So, let’s get going by building a Contract. In this example, I’m going to build a Contract that permits ICMP and TCP, but not UDP. To begin, let’s navigate to the Contracts folder. Here’s a screenshot of the Navigation Pane (it’s always on the left of the screen) for the Tenant White_Tree. The Contracts folder has been expanded to show a number of Contracts that have already been built. Look a little farther down, and you can see the Filters folder expanded. Filters are the basic Contract building block, and they actually define the criteria for allowed traffic. You can see below, in the Work Pane (it’s always on the right of the screen), that some filters have a single entry, but two filters have multiple entries. They are the filters named Allow_ICMP_only and Allow_TCP_and_UDP.
Practice Tip: If you have two or more criteria that are always allowed together, create a single filter with both criteria.
I created these filters when I initialized this fabric so that they would be specific to this Tenant, but there are a number of filters that come pre-loaded in ACI. They are found in the Common Tenant, and since they are in the Common Tenant, any Tenant can use them. (If you have access to an APIC, why don’t you take a moment right now to go to the Common Tenant and check them out? We’ll wait for you.)
In our example, there is no filter available that allows both ICMP and TCP, but not other traffic. We can approach this combination problem two ways: we could build a filter with two entries, or we could build a subject with two filters. To create a filter, you right-click the Filters folder to bring up a creation wizard. The Filters creation wizard looks like this:
The + button in the upper-right-hand corner of the Entries table allows you to fill in entries, and as you can see, there are a number of criteria that you can use to specify allowed traffic. Those criteria include:
- Ethertype—for example, IP, FCoE, or ARP
- IP protocol—if you select IP as the Ethertype, you can further refine your filter to an IP protocol. Here is where you would select ICMP or TCP.
- Matching fragments
- Port ranges—if you select TCP or UDP, you can further refine your filter to allow only ports in certain ranges.
Hit the <Update> button to save your entries, and <Submit> to save your filter. When you’re done, the new filter will populate on the Navigation Pane.
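If you prefer the API to the GUI, the same filter can be sketched as a REST payload. This is a minimal sketch, assuming the standard ACI object-model class names (vzFilter and vzEntry); the filter name Allow_ICMP_and_TCP and the helper function are my own, for illustration only.

```python
# Hypothetical REST payload for a single filter with two entries
# (ICMP and TCP), mirroring the GUI wizard above. Class names
# vzFilter/vzEntry follow the standard ACI object model; the
# filter name is illustrative.

def build_filter(name, entries):
    """Wrap a list of vzEntry attribute dicts in a vzFilter."""
    return {
        "vzFilter": {
            "attributes": {"name": name},
            "children": [
                {"vzEntry": {"attributes": e}} for e in entries
            ],
        }
    }

payload = build_filter(
    "Allow_ICMP_and_TCP",
    [
        {"name": "icmp", "etherT": "ip", "prot": "icmp"},
        {"name": "tcp", "etherT": "ip", "prot": "tcp"},
    ],
)

# You would then POST this JSON to the APIC under the Tenant's DN,
# e.g. https://<apic>/api/mod/uni/tn-White_Tree.json
```

Here we only build the payload; posting it requires an authenticated APIC session.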
Given that I’ve already got filters created that individually filter ICMP and TCP, a more efficient way of building this Contract is to create a subject with two filters. For that we right-click the Contracts folder. This causes the APIC to offer a drop-down menu with two choices—create a Contract or export a Contract. We want to create a Contract. As elsewhere in ACI, selecting create causes a creation wizard to pop up. The Contracts creation wizard asks you to name your new Contract and add subjects to it using the + button in the upper-right-hand corner of the Subjects table. The + button causes a second creation wizard to pop up. It looks like this:
You can see here that I’ve already named the subject and added a filter for ICMP, and I’ve selected a second filter for allowing TCP, but it has yet to be updated. Before you hit that <OK> button, let me draw your attention to the two innocuous-looking little check boxes right in the middle of the wizard. By default, both boxes are checked, which allows either EPG to originate traffic (Apply Both Directions) and accounts for the source and destination ports changing depending on the direction of the flow (Reverse Filter Ports). For now, let’s leave them checked.
Now hit the <OK> button. You’ll be returned to the Contracts creation wizard. Hit <Submit> and the Contract will be saved and populated on the Navigation Pane and in the Work Pane, like below.
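The subject-with-two-filters approach can be sketched the same way. Again this is only an illustration, assuming the standard object-model class names (vzBrCP for a Contract, vzSubj for a subject, vzRsSubjFiltAtt for the subject-to-filter relation); the Contract, subject, and filter names are mine, and I understand revFltPorts to correspond to the Reverse Filter Ports check box.

```python
# Sketch of a Contract whose single subject references two existing
# filters. Class names follow the standard ACI object model; the
# Contract, subject, and filter names are illustrative.

def build_contract(name, subject, filter_names, scope="context"):
    """scope='context' is the default, i.e. VRF-wide enforcement."""
    return {
        "vzBrCP": {
            "attributes": {"name": name, "scope": scope},
            "children": [{
                "vzSubj": {
                    # revFltPorts mirrors the Reverse Filter Ports box
                    "attributes": {"name": subject, "revFltPorts": "yes"},
                    "children": [
                        {"vzRsSubjFiltAtt":
                            {"attributes": {"tnVzFilterName": f}}}
                        for f in filter_names
                    ],
                }
            }],
        }
    }

contract = build_contract("Allow_ICMP_and_TCP", "icmp-and-tcp",
                          ["Allow_ICMP_only", "Allow_TCP_only"])
```

Reusing two one-protocol filters like this is exactly the “subject with two filters” shortcut described above.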
OK, step #1 is done! The next step is to apply our new Contract to EPGs. In this environment, EPG-A and EPG-B are both in VRF White_Tree-1. By default, VMs in EPG-A can talk to any other VM in EPG-A, but VMs in EPG-A will not be allowed to talk to any VM in any other EPG without a Contract. That is, the traffic in ACI must be specifically allowed, or white-listed. While this approach is the exact opposite of traditional networking, in which all traffic is generally allowed, the white-list approach is much more secure.
There are actually a couple different ways that you can associate EPGs with a Contract, and for this first Contract, I’m going to show you the easiest way. And that way is from the Topology Pane of the individual application profile. Here’s the topology for APP_PROFILE-1.
You can see that there are two EPGs in this application profile, and each EPG has a VMware VMM domain associated with it. We’re just going to grab a green Contract object from the gray banner and drag-n-drop it onto an EPG. That EPG will become the provider. When you release the green Contract object, a gray arrow will appear, which you then drag to the EPG you want to be the consumer. When you release the gray arrow, a pop-up window will walk you through associating the EPGs with a Contract. It looks like this:
If you fat-fingered the provider or consumer EPG, you can use the down arrow next to each EPG name to drop down a menu and change the EPG. Or, the circular-arrows button next to the Provider EPG field will simply swap the EPGs. Here you can see I’ve chosen to select an existing Contract, and you can see our Contract in the drop-down menu. Select the right Contract and hit <OK>. You will be taken back to the Topology page, where the Contract we selected has been added to the topology connecting the two EPGs. Now for the most important part: hit the <Submit> button! Here’s what the finished Contract looks like.
Mousing over the Contract will cause a little box to pop up with pertinent information about the Contract that’s been implemented… like this:
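For completeness, here is what the drag-and-drop step looks like if you script it instead. This is a sketch only: fvRsProv and fvRsCons are the standard provider/consumer relation classes, but the URL form and the helper function are mine, not official APIC documentation.

```python
# Sketch: attaching an existing Contract to an EPG as provider or
# consumer. fvRsProv / fvRsCons are the standard relation classes;
# the URL form and helper function are illustrative.

def attach_contract(tenant, ap, epg, contract, role):
    """Return the (url, body) pair you would POST to the APIC."""
    cls = "fvRsProv" if role == "provider" else "fvRsCons"
    url = f"/api/mod/uni/tn-{tenant}/ap-{ap}/epg-{epg}.json"
    body = {cls: {"attributes": {"tnVzBrCPName": contract}}}
    return url, body

# EPG-A provides the Contract; EPG-B consumes it.
prov = attach_contract("White_Tree", "APP_PROFILE-1", "EPG-A",
                       "Allow_ICMP_and_TCP", "provider")
cons = attach_contract("White_Tree", "APP_PROFILE-1", "EPG-B",
                       "Allow_ICMP_and_TCP", "consumer")
```

Note that the relation only names the Contract; the Contract object itself lives in the Tenant, which is what makes it reusable across EPGs.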
Now let’s go to the VMs that are attached and test connectivity. We’ve got VM1 (IP: 172.23.11.1) in EPG-A and VM3 in EPG-B (IP: 172.23.12.3). Let’s begin by making sure that both VMs can hit their default gateway.
pod3-white_tree-vm1 ~]$ ping 172.23.11.254 -c 5
PING 172.23.11.254 (172.23.11.254) 56(84) bytes of data.
64 bytes from 172.23.11.254: icmp_seq=2 ttl=63 time=0.273 ms
64 bytes from 172.23.11.254: icmp_seq=3 ttl=63 time=0.236 ms
64 bytes from 172.23.11.254: icmp_seq=4 ttl=63 time=0.268 ms
64 bytes from 172.23.11.254: icmp_seq=5 ttl=63 time=0.184 ms
--- 172.23.11.254 ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.184/0.240/0.273/0.037 ms
pod3-white_tree-vm3 ~]$ ping 172.23.12.254 -c 5
PING 172.23.12.254 (172.23.12.254) 56(84) bytes of data.
64 bytes from 172.23.12.254: icmp_seq=1 ttl=63 time=0.270 ms
64 bytes from 172.23.12.254: icmp_seq=2 ttl=63 time=0.262 ms
64 bytes from 172.23.12.254: icmp_seq=3 ttl=63 time=0.272 ms
64 bytes from 172.23.12.254: icmp_seq=4 ttl=63 time=0.268 ms
64 bytes from 172.23.12.254: icmp_seq=5 ttl=63 time=0.238 ms
--- 172.23.12.254 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.238/0.262/0.272/0.012 ms
OK. So far so good. Can they ping each other now? We expect that they should be able to, since we allowed ICMP traffic.
pod3-white_tree-vm1 ~]$ ping 172.23.12.3 -c 5
PING 172.23.12.3 (172.23.12.3) 56(84) bytes of data.
64 bytes from 172.23.12.3: icmp_seq=1 ttl=61 time=0.568 ms
64 bytes from 172.23.12.3: icmp_seq=2 ttl=61 time=0.355 ms
64 bytes from 172.23.12.3: icmp_seq=3 ttl=61 time=0.373 ms
64 bytes from 172.23.12.3: icmp_seq=4 ttl=61 time=0.333 ms
64 bytes from 172.23.12.3: icmp_seq=5 ttl=61 time=0.381 ms
--- 172.23.12.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.333/0.402/0.568/0.084 ms
Now let’s generate some TCP traffic. Again, we expect this traffic to be able to go through, since we allowed TCP.
pod3-white_tree-vm1 ~]$ iperf3 -c 172.23.12.3
Connecting to host 172.23.12.3, port 5201
[ 4] local 172.23.11.1 port 39668 connected to 172.23.12.3 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.01 GBytes 8.69 Gbits/sec 0 1.01 MBytes
[ 4] 1.00-2.00 sec 1.02 GBytes 8.72 Gbits/sec 143 766 KBytes
[ 4] 2.00-3.00 sec 1.00 GBytes 8.60 Gbits/sec 0 837 KBytes
[ 4] 3.00-4.00 sec 1.01 GBytes 8.69 Gbits/sec 0 851 KBytes
[ 4] 4.00-5.00 sec 1.00 GBytes 8.63 Gbits/sec 0 884 KBytes
[ 4] 5.00-6.00 sec 1.00 GBytes 8.63 Gbits/sec 0 884 KBytes
[ 4] 6.00-7.00 sec 1.02 GBytes 8.79 Gbits/sec 0 898 KBytes
[ 4] 7.00-8.00 sec 1.01 GBytes 8.66 Gbits/sec 0 898 KBytes
[ 4] 8.00-9.00 sec 1.02 GBytes 8.75 Gbits/sec 0 898 KBytes
[ 4] 9.00-10.00 sec 1.01 GBytes 8.67 Gbits/sec 0 898 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 10.1 GBytes 8.68 Gbits/sec 143 sender
[ 4] 0.00-10.00 sec 10.1 GBytes 8.68 Gbits/sec receiver
And now for the moment of truth. UDP traffic should not be allowed through. We specify a UDP flow with the -u flag.
pod3-white_tree-vm1 ~]$ iperf3 -u -c 172.23.12.3
Connecting to host 172.23.12.3, port 5201
iperf3: error - unable to read from stream socket: Resource temporarily unavailable
One important thing I want to note now that we’ve gone through the process of configuring and applying a Contract: we now have a reusable Contract that allows ICMP and TCP only. If in the future we have more EPGs whose traffic we need to restrict to ICMP and TCP, we don’t have to create another Contract. All we need to do is apply the existing Contract to the new EPGs. Here is where we begin to see ACI’s strength over traditional NX-OS-based network management. Let’s see how that plays out when we start configuring traffic between EPGs in different VRFs.
Contracts between EPGs in Different VRFs Same Tenant
A VRF in ACI is the same thing as a VRF in regular networking; that is, it is a separate routing table. By default, VMs in different VRFs can’t reach each other unless you leak routes. In this next section, we’ll go through the process of configuring route leaking in ACI.
As with EPGs in the same VRF, we need a Contract, and we’ll need to apply that Contract to the EPGs we want to be able to talk. With EPGs in different VRFs, there are just three tweaks we need to make before we apply our Contract to the EPGs:
- Move the provider EPG’s Subnet from the BD to the EPG.
- Set the Subnet scope.
- Set the Contract scope.
So, let’s begin with moving the Subnet. If you expand the EPG folder and the Bridge Domain folder, you’ll see that there’s a duplication of the Subnet sub-folder. I’ve pointed the duplicate folders out below with a red arrow.
Basic ACI config guides all direct the user to configure Subnets under a Bridge Domain. BUT, for inter-VRF traffic (whether within the same Tenant or between different Tenants) you need to move the Subnet of the provider EPG so that it is directly attached to the EPG. The easiest way to move the Subnet is to create a duplicate Subnet under the EPG by right-clicking the EPG’s Subnet folder. As everywhere else in ACI, the right-click brings up a creation wizard, and it is exactly the same as the creation wizard you used to create a Subnet under a Bridge Domain. When you have finished creating the EPG Subnet, delete the Subnet object under the Bridge Domain. A right-click on the Bridge Domain Subnet will bring up a drop-down menu with the option to delete the ‘old’ Subnet. Once the Subnet has been moved to the EPG, make sure that a ping from a VM to the default gateway still works.
The next step is to modify the scope of the Subnet. Let’s take another look at the Subnet that we just created. Here is 172.23.11.254/24 for EPG-A.
Do you see the section right in the middle called “Scope”? The scope of a Subnet defines where a route to the Subnet gets advertised. You’ve got three choices:
- Private to a VRF—this Subnet will not get advertised beyond the VRF to which the Subnet belongs.
- Advertised externally—this Subnet will get advertised outside the ACI fabric.
- Shared between VRFs—this Subnet will get advertised to other VRFs within the same ACI fabric.
The default is that the Subnet is private to the VRF, but to get two EPGs in different VRFs to talk to each other, we need to change that to “Shared between VRFs.” Make sure to OK the change.
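In the object model, the three GUI scope choices map onto a single scope string on the fvSubnet object. The sketch below shows the EPG-level Subnet from our example with “Shared between VRFs” added; treat the exact attribute values as my best understanding of the model rather than gospel.

```python
# EPG-level Subnet with the "Shared between VRFs" scope set.
# In the fvSubnet "scope" attribute the GUI choices map roughly to:
#   private -> Private to VRF
#   public  -> Advertised Externally
#   shared  -> Shared between VRFs
# and multiple values are comma-separated.

subnet = {
    "fvSubnet": {
        "attributes": {
            "ip": "172.23.11.254/24",    # gateway for EPG-A's subnet
            "scope": "private,shared",   # private to the VRF, but leaked
        }
    }
}
```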
The last change we need to make is to the Contract scope. Like Subnet scope, Contract scope defines where the Contract is enforced, or to borrow a legal term, what jurisdiction the Contract has. Let’s navigate to the Contracts folder.
In the Work Pane you can see that the scope of each previously configured Contract is listed. The default Contract scope is VRF, which means that the Contract is only enforced within a VRF. VRF scope means that you could have four EPGs—A and B in VRF-1 and C and D in VRF-2—using the same Contract, and A and B would be able to talk. Likewise, C and D could talk. But neither A nor B could talk to C or D. Contracts with a Tenant scope govern any communication within a Tenant, regardless of whether the EPGs are in the same or different VRFs. With Tenant scope, assuming the VRF-1 and VRF-2 were in the same Tenant, A, B, C and D could all talk to each other. Lastly, Global Contracts are enforced within the entire ACI fabric. With a globally scoped Contract, A, B, C, and D could talk to each other even if their VRFs were in different Tenants, so long as they were in the same fabric.
As you might guess, we need to adjust the scope of our Contract from VRF to Tenant. Select the Contract and use the policy tab on the upper right of the Work Pane. Like this.
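Scripted, the scope change is a one-attribute update on the Contract object. Again a sketch only; the scope value strings are those I understand the object model to use.

```python
# Bumping the Contract scope from VRF to Tenant. As I understand it,
# the vzBrCP "scope" attribute takes one of: "context" (VRF),
# "application-profile", "tenant", or "global".

update = {
    "vzBrCP": {
        "attributes": {
            "name": "All_IP_Tenant-wide",
            "scope": "tenant",   # was "context", the default
        }
    }
}
```

For the different-Tenants case later in this article, the same attribute would be set to "global".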
Now that all the fine-tuning has been completed, the only thing left to do is apply the Contract to the EPGs. For this example, I’m going to apply the Contract named All_IP_Tenant-wide to EPG-A in VRF White_Tree-1 and to EPG-C in VRF White_Tree-2. This time we can’t use the handy Application Profile topology, since there’s no single topology that both EPGs belong to. Instead, we are going to have to go to each EPG and configure the Contract.
Let’s first navigate to EPG-A. It’s in APP_PROFILE-1. Once we’re in EPG-A, drill down into the Contracts sub-folder. Right-clicking the Contracts sub-folder will bring up a drop-down menu that offers:
- Add a taboo Contract
- Add a provided Contract
- Add a consumed Contract
- Add a consumed Contract interface
Here’s where the provider/consumer role is important. One EPG needs to be the provider and the other the consumer. This decision is somewhat arbitrary if your Contracts are all written with Apply Both Directions = true. So, in our example, we’re just going to make EPG-A the provider and EPG-C the consumer. Select your option and you get a pop-up window that allows you to select the Contract you want to apply. Hit the <Submit> button and your newly applied Contract will show up in the Work Pane. Repeat the process with EPG-C, keeping in mind that EPG-C needs to be the consumer if EPG-A is the provider.
Now if you navigate to the Contract All_IP_Tenant-wide and look at its topology, you will see that it has EPG-A and EPG-C.
Likewise, the Contract will populate on the topology of each of the involved Application Profiles… like this.
The proof is always in the ping, though, so let’s make sure that VM1 in EPG-A can ping VM4 in EPG-C.
pod3-white_tree-vm1 ~]$ ping 172.23.13.4 -c 5
PING 172.23.13.4 (172.23.13.4) 56(84) bytes of data.
64 bytes from 172.23.13.4: icmp_seq=3 ttl=61 time=0.400 ms
64 bytes from 172.23.13.4: icmp_seq=4 ttl=61 time=0.292 ms
64 bytes from 172.23.13.4: icmp_seq=5 ttl=61 time=0.309 ms
--- 172.23.13.4 ping statistics ---
5 packets transmitted, 3 received, 40% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.292/0.333/0.400/0.051 ms
Contracts Between EPGs in Different Tenants
Now that we have Contracts between EPGs in different VRFs down pat, the jump to Contracts between EPGs in different Tenants isn’t so huge. Basically, you do all the tweaks required for EPGs in different VRFs (You do remember them, don’t you? Move the Subnet to the EPG, then set the Subnet scope and the Contract scope.). You also complete whatever steps are necessary to create the Contract you need; if it’s already created, you catch a break. Don’t forget to set the scope on your Contract! Do you remember what scope you need to set it to?
The last step you need to do is to import/export the Contract between the Tenants. It’s this last step that is really the only new step. For this exercise, I want to import a Contract from White_Tree to Blue_Mountain, so that we can configure EPG-A in White-Tree to be able to talk to 1-APP-EPG in Blue_Mountain. So, let’s navigate to White_Tree’s Contracts folder. Right-clicking the Contracts folder brings up the familiar drop-down menu. This time, instead of creating a Contract, we want to select the “Export Contract” option. ACI offers you a pop-up window that asks you to name the exported Contract, designate which Contract you want to export, and to which Tenant you want to export. Hit <Submit> to export.
Now if we navigate to Blue_Mountain, we can find White_Tree’s imported Contract in <Security Policies><Contracts><Imported Contracts>. Like this:
Now that we have the same Contract in both Tenants, we can apply the Contract to the EPGs we want to be able to talk. In the Tenant that exported the Contract, we apply the Contract as a provider; for our example, that means EPG-A in White_Tree provides the Contract. By now this should be old hat, but if you don’t remember, navigate to EPG-A and right-click. Select “Add a Provided Contract.” The pop-up window will walk you through selecting and applying a Contract.
In the Tenant that imported the Contract, the Contract gets applied as a “Consumed Contract Interface” rather than as a plain-old Consumed Contract. Navigate to Blue_Mountain’s 1-APP-EPG and right-click. This time, select “Add Consumed Contract Interface.” This pop-up window walks you through selecting a Contract, but this window will only offer you Contracts that have been imported. Like this:
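Under the hood, exporting a Contract creates a contract interface object in the importing Tenant, and the consuming EPG points at that interface rather than at the Contract itself. Here’s a sketch, assuming the vzCPIf/vzRsIf and fvRsConsIf class names from the standard object model; the exported-copy name and the DN string are made up for illustration.

```python
# The export side: a contract interface (vzCPIf) lands in the
# importing Tenant and points back at the real Contract through a
# vzRsIf relation. Names and the tDn form are illustrative.
exported = {
    "vzCPIf": {
        "attributes": {"name": "White_Tree_ICMP_TCP"},
        "children": [{
            "vzRsIf": {
                "attributes": {
                    "tDn": "uni/tn-White_Tree/brc-Allow_ICMP_and_TCP"
                }
            }
        }],
    }
}

# The import side: the consuming EPG in Blue_Mountain references the
# contract interface, not the Contract itself.
consume = {
    "fvRsConsIf": {
        "attributes": {"tnVzCPIfName": "White_Tree_ICMP_TCP"}
    }
}
```

This is why the GUI offers “Add Consumed Contract Interface” in the importing Tenant instead of a plain consumed Contract.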
Now if you navigate to the Application Profile topology in Blue_Mountain, you’ll see the imported Contract applied to 1-APP-EPG. Notice that the Contract object is not a smooth green circle, but a frilly green circle. The frilly edge indicates that the Contract was imported from elsewhere.
And the moment of truth… VM1 in White_Tree reaching VM1 in Blue_Mountain.
pod3-white_tree-vm1 ~]$ ping 172.23.1.101 -c 5
PING 172.23.1.101 (172.23.1.101) 56(84) bytes of data.
64 bytes from 172.23.1.101: icmp_seq=1 ttl=61 time=0.403 ms
64 bytes from 172.23.1.101: icmp_seq=2 ttl=61 time=0.289 ms
64 bytes from 172.23.1.101: icmp_seq=3 ttl=61 time=0.378 ms
64 bytes from 172.23.1.101: icmp_seq=4 ttl=61 time=0.305 ms
64 bytes from 172.23.1.101: icmp_seq=5 ttl=61 time=0.289 ms
--- 172.23.1.101 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.289/0.332/0.403/0.053 ms
Did you get all of that? It is a lot to absorb, so let’s summarize and then take a breather.
- Any communication between EPGs requires a Contract applied to the EPGs.
- Communication between EPGs in different VRFs requires additional tweaking:
- Subnet moved from the BD to the EPG.
- Subnet scope adjusted.
- Contract scope adjusted.
- Communication between EPGs in different Tenants requires a Contract import/export.
As always, please feel free to post with comments, questions, or suggestions. Until next time...
 This document was not intended to be a primer on ACI. If you’re new to the technology, I would recommend the Cisco Learning Network’s ACI Training Video series. Also recommended is the Cisco Live session BRKACI-1002, Intro to ACI for Network Admins (Melbourne, 2017). (This technical session is also scheduled for Cisco Live Melbourne, 2018.)
 Fun fact: Coronavirus is a family of viruses, several of which cause the common cold. Another member of the coronavirus family causes SARS, and another causes MERS, both of which can be life-threatening.
 The APIC codes a checked box as “true” and an unselected box as “false”.
 Because our Contracts have Apply Both Directions = true, provider and consumer roles are not as important. But if you do take advantage of Contract directionality, you need to be careful about selecting the correct EPG for provider and consumer.
 Contracts and filters are re-useable. Subjects are not.