The subtype definition was changed in recent VIRL versions. You can find a new dynamic subtype for vEOS at: https://gogs.informatik.hs-fulda.de/srieger/virl-utils-hs-fulda/src/master/create-arista-veos-image/dynamic-subtype-vEOS.json that includes cli_protocol. After updating the subtype, Arista vEOS nodes can be reached using SSH as well as telnet to the serial console.
Thank you for this effort. Using your instructions I was able to get vEOS loaded into my VIRL server.
I do notice that the management interface shows as not connected. How did you fix that?
I see this in the original post but not sure I understand it. -
"If you include the "! ip of ma1 configured on launch" line, it is replaced with the management IP during the boot process, so you should be able to log in using SSH (user: cisco, password: cisco or user: admin, password: arista) by right-clicking on the node in the simulation tab of VM Maestro."
I do get into the console using Telnet - console port but cannot get to the management port.
What did I miss?
Any thoughts around using Cloudvision to automate and manage vEOS in VIRL?
Thanks Michael Wynston
Thanks for the feedback! Is it possible that you did not provide an initial config for the vEOS nodes? As described in the Arista vEOS - VIRL - Dev-Innovate discussion and support community thread, you can get a config template from https://gogs.informatik.hs-fulda.de/srieger/virl-utils-hs-fulda/src/master/create-arista-veos-image/minimal-config-vEOS.…. You need to paste the config into the Node Configuration in VM Maestro:
as you would also do for other nodes and images in VIRL. The important line, which you already referenced, is marked in the picture above. This line ("! ip of ma1 configured on launch") is replaced by a script that I injected into the image to get the IP for ma1 via DHCP.
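Conceptually, the injected script just does a placeholder substitution in the config at boot. A minimal Python sketch of the idea (the placeholder line is from the post above; the function name, the exact `Management1` block, and the prefix length are my illustration, not the actual script in the image):

```python
# Sketch of the boot-time substitution: the placeholder comment in the
# node config is replaced with an actual address block for ma1.
# Only the placeholder text is taken from the original post; the rest
# is an assumed illustration of the mechanism.

PLACEHOLDER = "! ip of ma1 configured on launch"

def inject_ma1_ip(config_text: str, ip: str, prefixlen: int = 24) -> str:
    """Replace the placeholder comment with a Management1 address block."""
    ma1_block = "\n".join([
        "interface Management1",
        f"   ip address {ip}/{prefixlen}",
    ])
    return config_text.replace(PLACEHOLDER, ma1_block)

template = "\n".join([
    "hostname veos-1",
    PLACEHOLDER,
    "end",
])
print(inject_ma1_ip(template, "172.16.1.50"))
```

The real script obtains the IP via DHCP first; here it is simply passed in as a parameter.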
Be sure to disable AutoNetkit for the vEOS nodes! You can do this in the AutoNetkit tab above the Configuration tab:
This way, you prevent the config from being accidentally cleared, e.g., after clicking "Build Initial Configurations" in VM Maestro. This is also described in the VIRL CLUS 2017 tutorial. Currently, I do not see a way to teach AutoNetkit to create configurations for node types other than Cisco. The same limitation applies to third-party images from other vendors in VIRL.
I just checked again with the latest version of VIRL, and as long as the configuration template is supplied, ma1 gets an IP without any problems.
If you want, you can also try one of our vEOS topologies from our git repository. The topology I used for the test above is available at https://gogs.informatik.hs-fulda.de/srieger/git-virl-hs-fulda/src/master/GIT-VIRL-HS-Fulda/Advanced%20Computer%20Network…
Our gitlab server was migrated to Gogs. The old gitlab server is available until the end of October. The new repository for the Arista vEOS image creation script for VIRL can be found at https://gogs.informatik.hs-fulda.de/srieger/virl-utils-hs-fulda/src/master/create-arista-veos-image.
Regarding Cloudvision: interesting point! I've never used it before, but reading the docs I suppose you should be able to configure the nodes in VIRL using CVP as long as you can reach their management port (ma1). Maybe the easiest way to try this would be to change the management network for the simulation to "Shared flat network". This way, all ma1 interfaces get an IP that should be reachable from external networks, where you could then try to deploy Cloudvision. But as I said, I've not worked with Cloudvision before.
Sure! We just used MLAG in one of our Master's courses again this semester. You can find our MLAG scenario here:
To make MLAG work, you need to change the standard MAC prefix on all of your VIRL hosts. To do this, you need to change the "base_mac" line in /etc/neutron/neutron.conf.
# Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be
# randomly generated.
# 3 octet
# base_mac = fa:16:3e:00:00:00
# 4 octet
# base_mac = fa:16:3e:4f:00:00
base_mac = 00:0c:29:00:00:00
The default should be something similar to "fa:16:3e:00:00:00", as shown in the comments above. As you can see, I changed it to "00:0c:29:00:00:00" (this OUI belongs to VMware, which we previously used for our MLAG scenarios). After changing the base_mac in /etc/neutron/neutron.conf, you need to restart the neutron services or simply reboot the host. You can check whether the change was successful by looking at the MAC address prefixes that neutron assigns to the interfaces of nodes in running simulations, e.g., using "show int | grep address" on the vEOS nodes:
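If you want to script the change across several VIRL hosts, the edit itself is a one-line substitution. A small Python sketch (the file path and the target prefix come from the post; the function is my own helper, not part of the image-creation script):

```python
# Sketch: rewrite the (uncommented) base_mac line in a
# neutron.conf-style file. The target prefix 00:0c:29:00:00:00 is
# from the post above; remember to restart the neutron services (or
# reboot the VIRL host) afterwards.
import re

def set_base_mac(conf_text: str, new_mac: str) -> str:
    """Replace the active base_mac assignment, leaving comments intact."""
    return re.sub(r"(?m)^base_mac\s*=.*$", f"base_mac = {new_mac}", conf_text)

conf = "# base_mac = fa:16:3e:00:00:00\nbase_mac = fa:16:3e:00:00:00\n"
print(set_base_mac(conf, "00:0c:29:00:00:00"))
```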
The topology shows the scenario that you can find on our GOGS server referenced above. You can check that MLAG works using "show mlag":
Background information: This output also shows the reason why the default base_mac does not work. vEOS generates the MLAG system-id from the MAC address of the switch. The Python code implementing MLAG in EOS takes the MAC address and changes the second-least-significant bit of the first octet from 0 to 1. Hence, in the screenshot above, the system-id prefix changes from 00:0c:29 to 02:0c:29. This bit controls whether a MAC address is globally unique (bit = 0) or locally generated/administered (bit = 1), so it's not a bug in vEOS. However, the standard MAC prefix of KVM (e.g., fa:16:3e, as described above) correctly already sets this bit to 1, as all these MAC addresses are locally generated. The vEOS MLAG implementation crashes in this case, as it cannot create a unique/new system-id from the MAC address when the bit is already set. I already posted this issue in the vEOS comments, as it prevents setting up MLAG in vEOS on any KVM environment.
I just updated the script to be compatible with VIRL versions >=1.3. There were some timing issues while creating the loop devices in Ubuntu 16.04. As always, the script can be obtained from:
I also added the creation of a default subtype for the image name. So now all dependencies (image, flavor, subtype) needed to use the vEOS image in VIRL are automatically created by the script. The only thing left to do after running the script is to refresh the subtypes in VM Maestro; the vEOS subtype will then be directly usable from the palette. The minimal config for vEOS nodes in VIRL can be found here: srieger/virl-utils-hs-fulda - Gogs: Go Git Service.
Console connection should be possible! I just tested it again, though I typically use the SSH connection via the mgmt LXC. What version of VIRL are you running? What versions of vEOS and Aboot did you use to create the image? I just tried it successfully on VIRL 1.3.156 with an image based on vEOS 4.14.16M with Aboot 2.0.8. It should also work with newer versions. There's a bug in some Aboot images, though, that prevents the serial console from being activated during the boot process. As stated on Arista EOS Central – vEOS – Running EOS in a VM, this should be fixed in Aboot-8.0.0-serial.
Ah! This combination does not work according to Arista. You have to use Aboot-veos-serial-8.0.0 for vEOS >=4.17.0 (see Arista EOS Central – vEOS – Running EOS in a VM). The SSH login as well as the telnet/console login uses the regular combination for VIRL images: username "cisco" and password "cisco". You can change the accounts in the supplied EOS config, though.
There was a problem with the default subtype creation that I introduced in the last version. Maybe you ran into that! Please check that your vEOS subtype has "Protocol for network CLI" set to "ssh".
I also updated the script to reflect that. Please get the new version from srieger/virl-utils-hs-fulda - Gogs: Go Git Service. Delete the vEOS Subtype and recreate the image.
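If you want to verify a subtype definition file before importing it, you can check the cli_protocol field mentioned at the top of this thread. A small sketch (only `cli_protocol` is a field name taken from the posts above; the sample JSON and the helper are assumptions for illustration, not the full subtype schema):

```python
# Quick sanity check for a subtype definition such as
# dynamic-subtype-vEOS.json: make sure the CLI protocol is ssh.
# The full VIRL subtype schema has more fields than shown here.
import json

def cli_protocol_is_ssh(subtype_json: str) -> bool:
    """Return True if the subtype definition sets cli_protocol to ssh."""
    data = json.loads(subtype_json)
    return data.get("cli_protocol") == "ssh"

sample = '{"name": "vEOS", "cli_protocol": "ssh"}'
print(cli_protocol_is_ssh(sample))  # True
```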
When I tried the following:
- Aboot-veos-serial-8.0.0 and vEOS-4.17...2f and your new script: the image is created successfully, but the subtype creation failed.
- Aboot-veos-serial-8.0.0 and vEOS-4.17...2f and your old script: the image is created successfully with image subtypes. I then set the "Protocol for network CLI" to "ssh" manually.
With this I got console connection.
But now I have the following 2 problems:
- Sometimes one of the nodes changes its status from "Active-Reachable" to "Active-unreachable".
- When I try to extract the configuration, it fails with the following:
Node "veos-2" does not support config retrieval.
Thanks for your feedback! Seems like Cisco changed the attributes of subtypes in 1.3.156. You are right! My script did not create the subtype in 1.2.83 after the upgrade. I fixed the script to support subtype creation for VIRL <1.3 as well as 1.3.156 now.
The two problems you described can be explained, I guess.
1. I also had this issue with VIRL 1.2.83 for 4.17.2F, especially when the image had less than 2 GB RAM. Please check that your image gets 2 GB RAM. Also make sure that you have enough resources: when oversubscription of RAM is too high, the same issue can happen. You can see this on the console of the VIRL host, as your KVM instances constantly take 99% CPU. Also, please try vEOS versions 4.14 and 4.15; they ran fine in a lot of simulations carried out by students this semester. I tested the issue of nodes becoming unreachable in VIRL 1.3.156, and it seems to be gone there. I checked recent vEOS 4.14.x, 4.15.x, 4.16.x and 4.17.x without any issues.
2. Extraction of the config from third-party nodes (other than Cisco) is not supported in VIRL. Just do a "show running-config" on the console and copy it into the configuration in VM Maestro. Or get the config file from vEOS, e.g., using SCP.