VIRL 1.3.296 (Aug. 2017 Release)


    IMPORTANT -- Start Here


    The VIRL 1.3.296  release includes major technology updates to the underlying infrastructure on the VIRL server.  While most of these changes are invisible to the user, they all have an impact on every simulation running in VIRL. This release also includes bug fixes, updated Cisco reference platforms, minor enhancements and changes.



    • The VIRL 1.3 release requires a fresh deployment / installation. There is no upgrade path from any previous VIRL release.
    • VIRL 1.3 and later requires the use of new Salt Masters.
    • Due to space constraints, the ISO image only contains a minimum set of VM images (IOSv, IOSv-L2 and the Server image). Additional images can be added using UWM once the system is installed.
    • If deploying on a Cisco UCS C220M4 with Cisco 12G SAS Modular RAID Controller you must enable RAMdisk using the UWM System Configuration pages in order to support IOS XRv images.


    Installation images for VMware Workstation, Player, Fusion Pro, ESXi and bare metal systems (ISO) are available now. All downloads are available via Cisco Learning Network My Account page.

    ATTENTION: All support for VIRL 1.2.6x  and older has ended with the release of 1.3.x.  Please upgrade as soon as possible!


    Online training material is available and is designed to help get you started and be productive quickly. If you are new to VIRL, take a look at our Tutorials section found HERE. The tutorial includes video walkthroughs and step-by-step instructions to get you familiar with the VIRL interface. If the videos fail to start, ensure that your browser supports H.264 video and appropriate plugins are enabled.



    New Features

    New Deployment Option

    VIRL PE is now offered both as a single-network-interface virtual machine and in the traditional five-network-interface deployment.

    • 1int - (One Network Interface) This option is appropriate for deployments where no integration with external networks is desired, or where it is not possible to assign five network interfaces. In the 1int variant, the VIRL VM will only have a management interface enabled, and all other interfaces are 'dummy' interfaces. To increase the interface count after deployment, see How to: Add Interfaces and Networks to VIRL PE.
    • 5int - (Five Network Interfaces) This option is appropriate for deployments where connectivity to external networks is desired. In the 5int variant, the VIRL VM has five network interfaces enabled, and care must be taken during the deployment process. This option requires the configuration of virtual networks on your local or ESXi server instance which will be mapped to the VIRL virtual machine.

    No Graphical Desktop

    VIRL no longer provides the LXDE graphical desktop. This change not only conserves resources (disk space, memory and CPU), but also reduces complexity by removing the need to install additional Ubuntu packages and their dependencies. The text-based console (vty) login now displays the management IP address in use by the system. If no DHCP server is available, no address is displayed and the user is prompted to configure a static IP via virl_setup. For more information about virl_setup, see the CLI Setup Script section below. To get started with web-based system configuration, log into UWM using the IP address displayed in the console. UWM is the only officially supported method for configuring and customizing your VIRL PE server.

    Landing Page Removed

    With VIRL 1.3, the landing page displayed in previous versions has been removed. This page was a historical remnant of the early days of VIRL and users are now redirected to User Workspace Management for configuration and customization of the system.  The links from the old landing page have been integrated into UWM's menus.



    • UWM is now the default landing page
    • Salt Configuration and Status now includes:
      • Export of complete license information
      • Checksum of configuration
      • Import of a new License or previous license configuration
      • Auto-population of Salt Masters now includes EU servers
    • Software download links for VM Maestro Client and Python Libraries are now located under VIRL Server > Download
    • Horizon read-only console link has been moved to VIRL Server > System Tools
    • Community tab added which includes:
      • Direct link to Community Forum
      • Direct link to Online Videos on VIRL Youtube Channel
    • Documentation section expanded to include:
      • Installation and Tutorials
      • Simulation Concepts
    • Remote Server Menu has been removed, with the adoption of Packet's iPXE deployment method. Detailed instructions can be found in Deploying VIRL on Packet.



    The Salt licensing configuration has been improved by providing the ability to download and upload the entire license information as a file. The user is now able to Export and Import configured Salt information for easy transfer to a new VIRL deployment. The user's pem key can now be imported easily with the addition of the Load Config File button.

    Export Keys and ID

    From VIRL Server > Salt Configuration and Status



    The file includes:

    • Checksum to ensure the integrity of the licensing information
    • Complete Salt ID
    • Email Address
    • List of configured Salt masters
    • Salt Master public key
    • User's private key (VIRL License in PEM format)


    Import License Key / Configuration File

    From VIRL Server > Salt Configuration and Status > Reset Keys and ID




    OpenStack Networking

    OpenStack networking has changed in the new Mitaka release. In VIRL, that change affects how the network interfaces are auto-configured and reported. External connectivity for topology simulations is now routed via "bridge" interfaces assigned to each physical Network Interface Card (NIC). As with previous releases, FLAT and FLAT1 networks connect OpenStack (simulation) networks to the outside world.

    In this section, (v)NIC refers to a physical NIC on a bare-metal server or a virtual NIC on the VIRL virtual machine. A (v)NIC is attached to a PCI resource, which may be viewed by running the command lspci. A logical network interface on the VIRL server can be a dummy, tap, or network bridge interface.

    • For people unfamiliar with VIRL and/or virtualization products like VMware ESXi / Fusion or Workstation, we recommend using the 1int version of the product.
    • When using the 5int image, we recommend disconnecting all interfaces which are not immediately needed (typically, INT, SNAT and FLAT1 can be safely disconnected; FLAT can also be disconnected if external connectivity is not needed in the beginning).
    • When adding additional interfaces, make sure not to mix NIC types; otherwise the interface sequence might get out of order. For example, the OVA uses Intel E1000 NICs, but when adding new NICs, VMware might provision VMXNET3 NICs by default.
    • When using a 1int deployment, external connectivity is not possible until additional interfaces have been added and configured on the VIRL host. Alternatively, OpenVPN may be enabled and all simulations set to use shared FLAT for the management network. FLAT connectors and FLAT connectivity from the VIRL host itself will still work (e.g. reaching the VIRL host on its default FLAT address or the default SNAT address of the host).
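    To see how (v)NICs and logical interfaces map on a given Linux host, standard tools can be used. A minimal sketch (it relies only on the standard sysfs layout; actual output differs per system, and lspci may not be installed everywhere):

```shell
#!/bin/sh
# Sketch: classify the logical interfaces visible on a Linux host.
# A (v)NIC sits behind a PCI device (also visible via 'lspci'); bridge,
# dummy and tap interfaces have no backing device directory in sysfs.
command -v lspci >/dev/null && lspci | grep -i ether
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    if [ -d "$dev/bridge" ]; then
        kind="bridge"
    elif [ -e "$dev/device" ]; then
        kind="(v)NIC"
    else
        kind="virtual (dummy/tap/loopback)"
    fi
    printf '%-12s %s\n' "$name" "$kind"
done
```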


    CLI Setup Script

    VIRL now includes a setup script designed for environments where no DHCP server is available or where a static IP address needs to be configured during initial deployment. The administrator can log into the VIRL PE console and run virl_setup to assign the proper network information easily, without needing to know Linux.



    VIRL Personal Edition has been restricted to a single user.  This means that with version 1.3 and newer, only a single project 'guest' containing the single user 'guest' may be used. No other projects or users may be created.




    VIRL Core

    • NX-OSv 9000 is now supported out-of-the-box
    • IOS XRv now uses e1000 interface types
    • The auth_url option in the UWM/STD CLI has been deprecated; the single value configured in the /etc/virl config files is now used for all users and use cases
    • LXC nodes will now stop as expected, using a work-around for the incorrect exit code returned by lxc-stop on stopped nodes



    • Modifying the uwmadmin password is now performed only via the UWM System configuration.

    • Removed buttons for deleting uwmadmin.

    • UWM improvements for system configuration validation and reload.

      • The built-in configuration is truly built-in, not using files that are temporarily unavailable during reinstalls.

      • /etc/virl/virl-core.ini is the new preferred configuration file for all virl-core configuration, replacing and overriding the old common.cfg and virl.cfg.

    • Added node check for traffic capture creation.

    • Refactored support for query interface selection for capture dialog.

    • The UWM navigation sidebar will be hidden when pages are opened as popup window, for example, by the Live Visualization.

    • Changed font color in alert messages when using dark theme.

    • In UWM, the VIRL License Agreement link has been moved from the side menu to the footer.

    • Added links to the UWM navigation sidebar that replace the old "landing page" links.


    VM Maestro

    • Included bundled help content (see Help → Help Contents).

    • Splash image is now displayed on VMM launch (Windows platforms).



    VIRL supports OpenStack clusters. Up to four compute nodes can be added to a controller. Note that there are certain restrictions on configuring the controller and on how this affects the compute nodes. In addition, there are mandatory requirements on the underlay network (multicast capability and jumbo frames) to ensure proper operation of a VIRL cluster. Please refer to the cluster installation instructions for additional information.

    Network Transparency

    Links between nodes in simulated networks are not 'virtual patch cables'; Linux bridges are used to connect nodes. By default, these bridges have MAC address learning enabled and are therefore not necessarily 100% transparent. SPAN (Switch Port Analyzer) does not work properly when learning is enabled on those bridges. The team experimented with disabling learning on those bridges entirely, making them flood every frame and effectively turning each bridge into a hub. However, certain OSs (like the Nexus 9000v or the IOS XR 9000v) in their current versions do not tolerate flooding on their management interfaces. In this release, flooding is therefore turned off by default. This can be controlled by the neutron_bridge_flooding parameter in /etc/virl.ini. Note that changing this requires a re-host of the system (vinstall rehost).
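    The per-port learning state of a Linux bridge can be inspected directly via sysfs. A hedged sketch (standard kernel paths; on a host without any bridges the loop simply prints nothing):

```shell
#!/bin/sh
# Sketch: show the MAC-learning flag for every port of every Linux bridge.
# 1 means learning is enabled (the kernel default), 0 means the port floods.
for f in /sys/class/net/*/brif/*/learning; do
    [ -e "$f" ] || continue              # no bridges present on this host
    printf '%s -> %s\n' "$f" "$(cat "$f")"
done
```

    On a VIRL server, the system-wide behaviour is controlled by the neutron_bridge_flooding parameter in /etc/virl.ini rather than by editing sysfs directly.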


    Infrastructure Changes

    This release includes major technology updates to the underlying infrastructure on the VIRL server.  While most of these changes will be invisible to the user when working with VIRL from VIRL's web-based editor, the UWM, or VM Maestro, they have an impact on every simulation that runs in VIRL.

    • The new release is based on Ubuntu 'Xenial' 16.04.02 instead of Ubuntu 'Trusty' 14.04.
    • The new release uses OpenStack 'Mitaka' instead of OpenStack 'Kilo'.
    • The new release uses KVM/QEMU version 2.5 instead of version 2.2.
    • Updates to how VIRL manages all background processes and web service handlers.
    • Removal of LX desktop environment (LXDE).
    • Additional login banners to indicate IP addresses on the VTY console of the server.



    Performing a New Installation


    Please use the updated VIRL 1.3 installation guides posted on the Get-VIRL site and select the instructions appropriate for your platform. Be sure to review the known issues listed below. Here is a direct link to the complete list of Known Issues and Caveats.


    Known Issue(s)

    Applicable to: VIRL 1.3 5-Interface Deployment

    Problem Description: When deploying VIRL on Workstation or Fusion, or on an ESXi server where DHCP service is available on all port-groups, you may see multiple IP addresses assigned to eth0, similar to this:


    Having multiple IP addresses assigned to your server will not affect performance or simulations. This issue only poses a nuisance and causes the VIRL server to take up additional IP addresses unnecessarily. To perform the workaround you will need to use a text editor like vim, vi, or nano, all of which are preinstalled on your VIRL server.



    Manually edit /etc/network/interfaces, remove dhcp from the eth1 - eth3 interface configurations and replace it with manual as shown below. If you later update your server's management IP address or perform an operation which runs vinstall rehost, you will have to repeat this procedure. Make sure to save the file and then reboot your server. On reboot, a single IP address will be assigned to the management interface.


    • Run:
      • sudo vim /etc/network/interfaces
    • Use the cursor keys to navigate to the eth1 configuration line
    • Remove dhcp and insert manual
      • Repeat for eth2 and eth3
    • Once complete, press:
      • 'esc'
      • ':'
      • Type: wq!
    • Your file is now saved and closed
    • Reboot your server:
      • sudo reboot now


    Example of edited file:

    auto lo
    iface lo inet loopback

    auto lo:1
    iface lo:1 inet loopback

    auto eth0
    iface eth0 inet dhcp

    auto eth1
    iface eth1 inet manual

    auto eth2
    iface eth2 inet manual

    auto eth3
    iface eth3 inet manual

    auto eth4
    iface eth4 inet manual
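    The same edit can be scripted instead of being done by hand in vim. A sketch, run here against a sample copy of the file (on a real server the target is /etc/network/interfaces, the edit must be done as root, and a reboot is still required afterwards):

```shell
#!/bin/sh
# Create a sample file mirroring the relevant part of the default
# configuration (illustrative only).
cat > interfaces.sample <<'EOF'
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet dhcp
auto eth2
iface eth2 inet dhcp
auto eth3
iface eth3 inet dhcp
EOF

# Switch eth1-eth3 to 'manual'; eth0 keeps DHCP for the management IP.
sed -i 's/^iface \(eth[123]\) inet dhcp$/iface \1 inet manual/' interfaces.sample

grep -c 'inet manual' interfaces.sample    # prints 3
```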


    Applicable to: VIRL 1.3 ALL Deployments

    Problem Description: Initial console login may take longer than expected. This is caused by the system checking for specific proxy information which will not be available; the check must time out before login can continue.

    Workaround: None

    Deployment instructions:


    Upgrade VIRL Client (VM Maestro)

    You should update VM Maestro to version 1.3.0-497 or later. Older releases should still work, as there were no changes in the file format or APIs since VIRL 1.2; however, running the latest version is recommended. To download the new VM Maestro client:

    1. Login to the User Workspace Management (UWM).
    2. Click VIRL Server > Download.
    3. Click the appropriate VM Maestro install type to download.
    4. Install VM Maestro.

    Once you have installed VM Maestro, you may want to update the node types shown in the Palette to match any changes on the VIRL server:

    1. Launch VM Maestro
    2. Select File > Preferences > Node Subtypes.
    3. Click the Fetch From Server button.
    4. Click OK.


    VIRL Server Component Versions

    OpenStack                         2.3.1 (Mitaka)
    VIRL CORE                         0.10.32.19
    AutoNetKit                        0.23.5 / 0.23.10
    Topology Visualization Engine     0.17.27
    Live Network Collection Engine    0.11.6
    VM Maestro                        1.3.0-497


    Cisco Platform VMs


    Reference Platform / Image    Version                       Bundled
    IOSv L2                       15.2 (03.2017)                yes
    IOS XRv                       6.1.3
    IOS XRv 9000                  6.0.1                         no *
    CSR 1000v                     16.5.1b (XE based image)      yes
    NXOSv 7k                      7.3.0.1                       yes
    NXOSv 9k                      7.0.3.I6.1                    no *
    Ubuntu Server                 16.04.1 (cloud-init image)    yes

    * Image may be installed via VIRL Software section in UWM.


    Linux Container Images

    Container Name    Version    Type



    Important Notes


    Salt Masters

    VIRL release version 1.3 marks a shift in how Salt is utilized for license authentication and software delivery. Previous salt servers will not work as expected with VIRL version 1.3 and later. We recommend specifying at least two Salt masters, and no more than four, based on your geographical location.


    US Salt-Master

    EU Salt-Master

    Asia Pacific (AP) Salt-Master


    Known Issues and Caveats

    Component    Defect ID    Description



    Do NOT use the built-in package manager in Ubuntu to perform system upgrades. Using the built-in package manager apt-get is not supported and WILL break your server. In most cases a redeployment will be required. All VIRL system upgrades must be completed via UWM > VIRL Server > System Upgrade.

    Workaround: None



    IOSvL2 generated config inconsistent. In IOS (XE, XR, NX-OSv) we have this (or the equivalent):

    username cisco privilege 15 secret cisco
    line vty 0 4
     login local

    This is missing in the IOSvL2 configuration and might cause automation tools to fail if they assume that logging in using 'cisco/cisco' gives them automatic privilege level 15 ('enabled') access.

    Workaround: Add the configuration manually, if needed.


    ANK creates an exception when trying to generate a VRF configuration for NX-OSv and NX-OSv 9000: "Error generating network configurations: 'dict' object has no attribute 'vrf'. More information may be available in the debug log." This is a known limitation in ANK.

    Workaround: None


    The browser based editor might occasionally generate invalid VIRL files. This has been observed with large topologies (hundreds of nodes). In such a case, the VIRL server refuses to start the topology.

    Workaround: Double check the generated links or use VM Maestro.


    Topology does not load in the browser-based editor when the remote .virl file option is selected. Files can be located on Git or at an arbitrary URL.

    Workaround: Save the file locally first.

    STD    VIRLDEV-4877    With VIRL 1.3, the product is restricted to a single project and user. This change manifests in two areas:
    1. Removal of the 'Add' and 'Import' buttons for project and user creation
    2. Removal of the ability to run simulations of multiple projects at the same time
    This change is in line with the positioning of the Personal Edition, where the product is designed to be used by individuals (single user).

    In situations with low free disk space, VIRL core software upgrades might fail. This is also dependent on the size of the configured Cinder file (block storage for VM images). The default for that file is 20GB.

    Workaround: Ensure that enough disk space is available.

    VIRLDEV-5110    Configuration changes on the controller do not properly propagate to cluster members / compute nodes. Please see the cluster installation document.

    Connecting to statically assigned console ports is slow when accessing these ports in fast sequence (as with SecureCRT opening a set of ports at the same time).

    Workaround: Manually open the ports in sequence.


    Customized link parameters are not applied to links of simulation nodes when those are running on compute instances of a VIRL cluster.

    Workaround: None


    Docker image names can not contain upper case letters.

    Workaround: Use lower case image names
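    A name can be normalized to lower case before the image is added, for example (the image name below is made up for illustration):

```shell
#!/bin/sh
# Normalize a Docker image name to lower case before use.
name="My-Router/Image"
printf '%s\n' "$name" | tr '[:upper:]' '[:lower:]'    # prints my-router/image
```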


    The rules governing the effective name of an LXC image are not consistent between creation, modification, and use in the simulation. The name is produced as a combination of the owning project, subtype name, and version suffix set by the user when the image is created. The use of the subtype name can be overridden by the subtype definition's baseline_image attribute, usually to make use of another subtype's installed images.

    Workaround: Do not set this property for custom LXC subtypes. It is also recommended that an LXC image, when being added, is not marked for use by a specific project, and that the Modify container function is not used to alter the suffix of the name.

    VIRLDEV-4819    Machines with a high CPU count might require a different number of database and front-end processes to be able to deal with requests. Symptoms of the issue can be:
    • failed sim starts
    • failed UWM dialogs dealing with node resources or simulations

    By default, the values are set for small machines which should be fine for VIRL personal edition. VIRL has the ability to adjust these settings based on built-in empirical values which are applied whenever a rehost is run.

    Workaround: Run 'salt-call -linfo state.sls openstack.worker_pool' or use the virl_setup script, menu item Maintenance → 3 Reset OpenStack worker pools.


    Changing the IP of the cluster controller causes problems. If a VIRL cluster (or standalone controller) has its IP address on the public interface (typically eth0) changed while UWM/STD is operating, or while simulations are running, UWM and STD will still report the old IP to the user.

    Workaround: Don't change the IP, or prevent dynamic IPs from changing by using sufficiently large DHCP lease times (days, not minutes).


    Changing the internalnet_ip address from its default value and then performing a 'vinstall rehost' operation will break your VIRL system.

    Workaround: None. Changing the internalnet_ip address is NOT supported.


    Docker by default creates SNAT / MASQUERADING iptables entries for the default docker0 bridge. This can interfere with simulation network traffic when the IPv4 networks in use overlap. The default masquerading entry has been removed; however, there is at least one entry left for the Docker registry with a /32 address which cannot be used in any simulation.

    Workaround: Avoid using the affected network whenever possible. Check the masqueraded IP addresses currently in use by the local registry by typing:

    sudo iptables -L -v -t nat

    Sample output:

    0    0 MASQUERADE  tcp  --  any    any  tcp  dpt:5000


    When 'logging console' is configured and neither the Jumphost nor the LXC management node is available (e.g. turned off), configuration extraction may fail due to unexpected output, because the extraction mechanism falls back to using the console. Configuration extraction might also fail when consoles are opened via UWM.

    Workaround: Don't configure any logging on the console and / or don't turn off the management LXC / Jumphost. Don't have consoles open via UWM when extracting configurations.


    When the local time of the host computer that runs the VM is in a timezone > 0 (e.g. east of Greenwich), NTP might step the clock back at system start. This might confuse STD and result in a licensing issue due to invalid time.

    Workaround: Restart STD/UWM using sudo salt-call -linfo state.sls virl.restart


    NTP does not sync under certain circumstances. If this happens, refer to T-Shoot: NTP errors and connectivity on VIRL

    Workaround: Restart NTPd using sudo systemctl restart ntp

    Live Vis    VIRLDEV-5085

    In Live Vis, when selecting the 'Clear Log' option, a pop-up message might include a "Prevent this page from opening additional dialogs" checkbox. If the user selects that option, the browser may prevent the system from collecting further log messages. This is a known issue and it is also browser dependent.




    Workaround: Do not select this option when offered by the browser.


    Downloading the Syslog as CSV from within Live Vis is not working on Safari and results in a page error. This is a known limitation with the Safari browser.




    Workaround: Use a different browser like Firefox or Chrome.


    Extracting configurations from a running topology within Live Visualization is not working as expected when using Safari. The document returned is shown as XML text, rendered in the browser and is not offered to 'save' into the Downloads folder. This is a known limitation.

    Workaround: Use a different browser like Firefox or Chrome or save the resulting XML text manually into a .virl file.


    In Live Visualization, when retrieving the interface table from an NX-OSv device, some entries are duplicated in the output; one displays the IP, and the other displays None. This is only cosmetic and might cause a spurious extra line to be drawn.

    Workaround: None


    Live Visualization might not work properly when changing the UWM port. UWM offers the ability to change its listening port from 19400 to another port; doing so might have side effects in Live Visualization.

    Workaround: Do not change the UWM port.


    The overlay menu button to switch among different overlays might get blocked after updating the Live Vis web server port. In UWM, it is possible to change the Live Vis web server port from 19402 to a different port. This might result in a blocked overlay menu button (not showing any entries). The image below illustrates the correct menu behavior; when broken, the entire drop-down part of the menu is missing.




    Workaround: None


    In Live Visualization, the 'Physical Live' Overlay does not correctly show physical links for XRv nodes. This is a known defect.

    Workaround: None


    IOS XRv 9000 is not fully supported in Live Visualization.

    Workaround: None


    In Live Visualization, the 'Collect Log' action fails for NX-OSv.

    Workaround: None


    Sometimes, BGP VPNv4 links are not being drawn in Live Visualization.

    Workaround: None


    Large (80-300+) topologies might not render properly and perform slower than expected in Live Vis.

    Workaround: None


    Live Vis does not fully support NX-OS 9000v.

    Workaround: None


    UWM does not validate IP address syntax under System Configuration > Networks. If changes are applied to the network IPs for FLAT, FLAT1 or SNAT, the data entered is not validated. This includes missing DNS information, gateways not matching the subnet, etc. If the information entered is inconsistent or wrong, OpenStack services will not be available, leading to an inoperable system. This is a known limitation.

    Workaround: Double-check the information for validity and consistency manually.
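    Since UWM performs no validation, it may help to check by hand that the gateway actually falls inside the subnet before applying changes. A sketch using only POSIX shell and awk (the addresses below are made-up examples, not VIRL defaults):

```shell
#!/bin/sh
# Check that (gateway AND netmask) == subnet; the values are examples only.
subnet=172.16.1.0 mask=255.255.255.0 gateway=172.16.1.254

result=$(awk -v net="$subnet" -v msk="$mask" -v gw="$gateway" '
    # Bitwise AND of two octets, emulated bit by bit for portability
    # (POSIX awk has no built-in AND operator).
    function band(a, b,    r, bit) {
        r = 0
        for (bit = 128; bit >= 1; bit /= 2) {
            if (a >= bit && b >= bit) r += bit
            if (a >= bit) a -= bit
            if (b >= bit) b -= bit
        }
        return r
    }
    BEGIN {
        split(net, n, "."); split(msk, m, "."); split(gw, g, ".")
        ok = 1
        for (i = 1; i <= 4; i++)
            if (band(g[i], m[i]) != n[i]) ok = 0
        if (ok) print "gateway matches subnet"
        else    print "gateway OUTSIDE subnet"
    }')
echo "$result"
```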


    Preview of a topology before launching the simulation does not render properly in certain browsers. This happens with Windows and Internet Explorer; IE is not a supported browser.

    Workaround: Switch to Chrome or Firefox to prevent the issue from showing.


    When importing projects into the system, the uwmadmin password in virl.ini was out of sync with the database. To avoid this issue, the uwmadmin project is skipped when importing projects into the system.

    Workaround: Manually change the password after importing to get the correct password into the system.




    When configuring networks in UWM, OpenStack Neutron networks may not be created correctly when the same name server is configured twice (identical name servers).

    Workaround: You must choose different name servers.

    VM Maestro    VIRLDEV-4434

    In VM Maestro, use an external browser for both ANK Vis and Live Vis. We have had reports of problems when using the internal browser for ANK and Live Visualization from within VM Maestro.



    Workaround: Use an external browser, configured as shown above.


    Terminal preference for detached internal terminals - this function has been deprecated in VM Maestro 1.2.4 onwards.

    Workaround: You can manually 'tear' the terminal pane from the main VM Maestro window. Use this in conjunction with the VM Maestro preference (Cisco terminal) "multiple tabs for one simulation".


    In VM Maestro (1.2.8 build 474), the scroll bar in the 'Preferences > Node Subtypes' dialog doesn't work properly on OS X 10.11 and newer.



    Workaround: Configure scroll bars to 'always show' in the General section of the System preferences as shown above.


    Under rare circumstances, the Telnet menu is missing on a few nodes in the simulation.

    Workaround: None


    When using the Jumphost for out-of-band management, SSH to nodes via the Jumphost in VM Maestro does not work.

    Workaround: Use the LXC management node or UWM instead of the jump host.


    Live packet capture port verification is not properly checked. STD requires ports in the range 10000-16999 (and enforces it), whereas VM Maestro accepts anything from 1025 to 65535.

    Workaround: Use a proper port in the range that is accepted by STD.


    Occasionally, the message 'Unexpected end of input within / between OBJECT entries' can be observed in VM Maestro's Simulations panel. This is mostly cosmetic as it recovers automatically.

    Workaround: None


    While running ANK, after clicking Yes on "View configuration changes", the dialog may disappear, resulting in an 'Unable to connect to Vis Server' error in the console. This can happen when working with large topologies.

    Workaround: None


    When running ASAv nodes using the bundled ASAv image, the console of the ASAv is bound to the serial port. By default, 'vanilla' ASAv images downloaded from CCO have their console bound to the 'VGA' screen, which is accessible in VIRL using the VNC option. However, access to the console is EITHER on the serial port OR on the VGA screen, never both. Since it is not known where the console is bound (serial or VGA), both options are offered to the user, but only one will succeed.

    Workaround: When working with ASAv nodes, you may need to try both the serial console and the VGA connection.

    VIRLDEV-5092    NX-OSv / NX-OSv 9000 nodes remain UNREACHABLE and connection to the monitor port fails.

    NX-OSv (Titanium) nodes might not be available on the management interface. This is a reference platform issue where sometimes the Mgmt0 interface is stuck in 'down/up' state. E.g. the interface is 'admin up' but the link is indicated as 'down'.

    • Workaround: Manually issue a 'shut / no shut' sequence on the management interface of the affected node

    NX-OSv 9000 nodes create unnecessary broadcast traffic on the management interface. This is a reference platform issue where NX-OSv 9000 nodes respond to frames not owned by the node which might result in a broadcast storm of IP redirect packets on the management network.

    • Workaround: Configure 'no ip directed-broadcast' on the management interface of NX-OSv 9000 nodes.

    NX-OSv 9000 nodes occasionally (less than 5 in 100 launches) do not become reachable even with the aforementioned workaround. A restart of the affected node usually resolves this.

    VIRLDEV-4682    The subtype for Ostinato has changed between OpenStack Kilo and OpenStack Mitaka. When upgrading to VIRL 1.3 from 1.2, and thus staying on the OpenStack Kilo release, the Ostinato 0.8.1 image will not be used due to the mismatch in the subtype (lxc-ostinato vs. lxc-ostinato-drone).

    Workaround: Change the subtype of affected nodes to 'lxc-ostinato-drone'.

    IOS XRv 9000 duplicate management IP configuration. IOS XRv 9000 nodes acquire the same IP both for the XR operating system layer with its associated management interface and for an underlying Linux layer which uses the same 'physical' interface.

    Workaround: None

    IOSv 15.6(2)T - On boot-up the following (similar) message may be observed:

    %SYS-3-CPUHOG: Task is running for (1997)msecs, more than (2000)msecs (0/0),process = TTY Background.-Traceback= 114ECF8z 130425z 15E20Ez 15DF30z 15DD3Dz 157D75z 158A2Bz 1589BFz 159B67z 153672z 3C9740Az 3C868CEz 3C89BEFz 5125F91z 491D86Cz 492E540z - Process "Crypto CA", CPU hog, PC 0x00157D2C

    Workaround: This is cosmetic and can be ignored.


    Versions of NX-OSv 9000 older than 7.0.3.I6.1 require 8GB of memory. Starting with the I6 release, the memory footprint has been reduced to 4GB.

    Workaround: When the image is installed, a flavor with 4GB is created. When using an older version, make sure to create and assign an appropriate flavor which allocates 8GB of memory.


    Multiple VIRL PE instances should not be connected to the same FLAT / FLAT1 network segment, as MAC address clashes will occur for IOSv-L2 VMs due to the MAC address handling for those VMs.

    Workaround: None






    Resolved Defects


    Component                 Defect ID     Description
    IOSv L2 15.2 (03.2017)    CSCva46621    Some L2 Protocols Packets Being Tagged Incorrectly
                              CSCva52816    PVLAN Operation Issue if Config Loaded from CVAC/NVRAM
                              CSCva74314    Pings to PVLAN Primary VLAN SVI From PVLAN Host Fail
                              CSCva78232    Add l2trace feature
                              CSCva52846    Vlan Config Not Applied from NVRAM if VTP Transparent/Off Mode
                              CSCva88344    New Fix for Vlan Mgr Error Messages Not Fixed by CSCva52846
                              CSCux37121    Duplex Mismatch Discovered
                              CSCux93767    MAC address-table does not refresh
                              CSCuy92774    Crash on VIRL on Boot With L3 Etherchannels Configured
                              CSCuv77089    day0 configuration only partially saved
                              CSCuz03444    DTP Negotiation Does Not Work Correctly on L2 Etherchannel
                              CSCuz50864    L2 Protocol Tunneled Packets Being Tagged Incorrectly
                              CSCvb35794    Crash when using 'sh mac addr' w/ L2 Port Channel
                              CSCvb35863    port channel does not reliably reflect state
                              CSCvb35899    L2 port-channels: sh run conf / system state inconsistency
                              CSCvc28827    Some Interfaces Boot Up with their RX/TX Disabled
                              CSCvc13178    member interfaces of (PAgP) EtherChannel stuck in I state