VIRL 1.6.65 Release Notes

     

    IMPORTANT -- Start Here

     

    Before you perform an in-place upgrade from VIRL 1.5, you must prepare your system with the following steps:

    1. Shut down the VIRL server and take a snapshot
    2. Start the VIRL server and log in via SSH
    3. From the CLI, copy and paste the following command (an optional check is shown after this list):
      • sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
    4. Log in to UWM and start the in-place upgrade
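
    Optionally, after adding the key in step 3, you can confirm it was imported before starting the upgrade (the exact listing format depends on the apt version in use; the fingerprint should end in 0EBF CD88):

    sudo apt-key fingerprint 0EBFCD88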

     

    To recover from a failed upgrade:

    1. Log in to the VIRL server via SSH
    2. From the CLI, copy and paste the following command:
      • sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
    3. Restart the upgrade by running:
      • sudo vinstall upgrade
      • Wait for the upgrade to complete. This may take several minutes and it will appear to hang. When complete, reboot the VIRL server
      • sudo reboot now
    4. If the upgrade fails or errors are returned, run the following command and post the output in a new thread:
      • cd /var/local/virl/logs ; grep -l 'vinstall upgrade' *.cmd | while read ff ; do tail ${ff/cmd/out} ; done

     

    The VIRL PE 1.6.65 release is an incremental update to VIRL PE 1.5, including bug fixes, updated reference platforms, and some enhancements since the VIRL PE 1.5.145 release.  The major additions in this release are support for IPv6 on the primary and shared networks, and the availability of a set of additional shared networks.  Unlike in the 1.5 release, the changes introduced have not affected the configuration and installation process in significant ways. Therefore, the same set of salt masters as configured for VIRL PE 1.5 can provision the upgrade to any deployed 1.5 instance. For more information, see the sections below on how to migrate to VIRL PE 1.6.

     

    NOTE

    • If you are running VIRL PE 1.2.83 or earlier today, or if you're running a 1.3.x cluster setup, you CANNOT perform an in-place upgrade.  Migrating to VIRL PE 1.6 requires a fresh installation either by deploying the OVA or, on bare metal systems, by installing the ISO image.
    • Due to space constraints, the ISO image only contains a minimum set of VM images (IOSv, IOSv-L2 and the Server image). Additional images can be added using UWM once the system is installed.
    • If deploying on a Cisco UCS C220M4 with Cisco 12G SAS Modular RAID Controller you must enable RAM disk using the UWM System Configuration pages in order to support IOS XRv images.


    Availability

    Installation images for all supported platforms are available for download. Log in to My Account in the Cisco Learning Store and click Download in the VIRL PE subscription box.


    ATTENTION: With the release of v1.6, versions older than v1.5.145 are no longer supported. PLEASE UPGRADE AS SOON AS POSSIBLE! 

     

    Online training material is available to help you get started and become productive quickly: Cisco VIRL Getting Started Tutorial

    NOTE: the tutorial includes some video walkthroughs.  To view the video, ensure that your browser supports H.264 video and any plugins are enabled.

    Enhancements and New Features

    IPv6 access on the primary network

    With the 1.6 release, both automatic and static optional configuration of an IPv6 address is available on the primary (management) network used to access the VIRL deployment. The primary network's addressing can be adjusted at install time, in the startup configuration, or after deployment or an upgrade using the virl_setup tool. All required VIRL services are available via both IPv4 and IPv6. Static IP configuration of cluster compute nodes is available in the UWM System Configuration, or in the virl_setup tool. Automatic configuration on the primary network can be switched from SLAAC to DHCPv6, if required in a particular deployment environment.
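
    As a quick check after enabling IPv6 access (the address 2001:db8::10 below is only an example; substitute the address assigned to your deployment, and note that 19400 is the default UWM port), the services can be probed over IPv6 from a client machine:

    ping6 -c 3 2001:db8::10
    curl -6 --head http://[2001:db8::10]:19400/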

    Additional shared networks

    Up to six shared networks are made available in the UWM System configuration, in addition to the original pair of shared networks, named 'flat' and 'flat1'. The names of these new networks are composed of a fixed prefix 'shared-' and a configurable suffix; the expectation is that this suffix distinguishes the networks by their purpose, or by the equipment connected through them.

    As with the original two shared networks, a physical interface of the main VIRL (controller) host may be selected to connect such a network to the outside lab environment. This is optional; no additional interfaces are required on the host in order to install and use VIRL PE 1.6. Without an external interface, the shared networks can still be used to bridge together nodes from different simulations running in the same deployment.

    Both an IPv4 and an IPv6 subnet may be configured on any of the eight shared networks. The original two shared networks must have an IPv4 subnet configured for compatibility reasons. The new shared networks require at least one subnet to be configured.

    The IPv6 subnet for a shared network is fixed with a /64 prefix. The simulation nodes receive their IPv6 addresses via SLAAC, provided by the built-in facility. Thus, the IPv6 address of each host is computed from its interface MAC address, which can be controlled using the 'static_mac' extension. This extension can be set on a node to control the management interface addressing, or on the L2 FLAT element connected to a data interface used for this shared network.
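
    For example (illustrative values only), an interface with MAC address fa:16:3e:01:02:03 on a shared network configured with the prefix 2001:db8:0:1::/64 derives the address 2001:db8:0:1:f816:3eff:fe01:203 via the standard EUI-64 transformation: the universal/local bit of the first octet is flipped (fa becomes f8) and ff:fe is inserted between the third and fourth octets of the MAC.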

    For Cisco router nodes that don't support SLAAC, notably IOS XRv, the initial configuration may contain a '(no )ipv6 address' line. The correct IPv6 address will be injected into that line when the node starts, using the same mechanism employed for IPv4 addresses on nodes that don't support DHCP.

    If nodes on a shared network are expected to exchange IPv6 traffic with other networks in the outside lab environment, a suitable router must be configured in the outside lab. That router should advertise the same IPv6 prefix to avoid misconfiguration.

    Controlling reported host URLs and IP addresses (in a cluster)

    The location of the VIRL PE host used by individual users may vary in particular scenarios, such as when NAT is employed between the users and the deployed lab, or when the VIRL services are being accessed over a shared network and an OpenVPN connection. The VIRL PE services also do not make assumptions about the presence and validity of a DNS system in the lab where they are deployed - no reverse lookups are used to determine a public hostname for any VIRL host, be it the controller or a compute host in a cluster.

    A configuration option named 'virl_local_ip' was previously used in these situations to tell the VIRL services which hostname or IP to use when referring to themselves. No equivalent option existed for compute hosts in a cluster; as a result, the reported links for serial consoles of nodes deployed to computes used the primary interface IP address of that compute host, which is usually incorrect in such scenarios.

    The generated links in both UWM and the desktop UI now consistently reuse the same host that the client is using, be it a hostname or IP address. Thus, for single-host deployments, and for the controller node in a cluster, there is no longer any need to configure virl_local_ip or anything else in this respect.

    In a cluster deployment, in cases where the primary network address of a compute host should not be used, a different configuration can be applied. Log into the VIRL controller node and issue a command such as sudo crudini --set /etc/virl/virl-core.ini cluster resolve_compute1 "public-hostname-or-ip". For a controller with hostname 'X', a 'resolve_X' entry can likewise force the value regardless of what the user configured; this is not required and may be counterproductive. After the configurations are present for all hosts, use sudo systemctl restart virl-std virl-uwm to apply the changes.
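
    For illustration, assuming a cluster whose compute hosts are named compute1 and compute2 and are reachable by users at the example addresses below, the full sequence on the controller would be:

    sudo crudini --set /etc/virl/virl-core.ini cluster resolve_compute1 "203.0.113.21"
    sudo crudini --set /etc/virl/virl-core.ini cluster resolve_compute2 "203.0.113.22"
    sudo systemctl restart virl-std virl-uwm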

    Improved control over generated MAC addresses

    The MAC addresses of simulation nodes' interfaces are assigned by one of three mechanisms: 1. static_mac - for management interfaces, and for interfaces connected to L2 FLAT or L3 SNAT simulation elements; 2. management addresses are generated using a pattern consisting of a constant prefix and a unique simulation and node number; and 3. all other interfaces use a random MAC address with a constant prefix such as 'fa:16:3e'.

    In cases where shared networks are inter-connected between multiple VIRL PE or OpenStack deployments, the same address might be assigned to two nodes. The UWM system configuration can now be used to avoid such clashes by picking a different prefix for addresses generated by the second and third mechanisms.

    Bare-metal installations use hardware enablement kernels

    The ISO installer contains a newer Linux kernel release, which is run by default on all new deployments. This kernel should allow for certain newer hardware components to be recognized properly when installed on a physical computer. The older kernel can still be selected when booting the server from the bootloader menus.

    Due to performance issues with some newer kernels available by default to the Ubuntu 16.04 (Xenial) system, the new kernel is installed from an Ubuntu 18.04 (Bionic) repository. Only the kernel packages are installed from this repository; all other packages from it are excluded from installation.
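
    To check which kernel a host is currently running (a generic Linux check, not specific to VIRL), use uname; an HWE kernel from the Bionic series typically reports a 4.15.x version, while the stock Xenial kernel reports 4.4.x:

    uname -r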

    The ISO installer still does not support installations in UEFI mode; servers should be configured to Legacy (BIOS) mode before installation, as described in installation instructions.

    Bug fixes and minor enhancements

    • Auto-netkit / Live visualization
      • VIRLDEV-4878 - IOSvL2 configuration for the cisco/cisco account consistent with IOSv
      • VIRLDEV-4738 - Custom configuration for router loopback 0 interface missing from generated configuration
      • VIRLDEV-6561 - Live vis services failed to communicate after port changes
      • VIRLDEV-6165 - Live vis path traces from LXC nodes were failing due to incorrect MTU settings in the LXC
      • VIRLDEV-4922 - Collect log action fails for NX-OSv
      • VIRLDEV-6357 - Connecting to management IP of server and LXC nodes failed
    • VM Maestro
      • VIRLDEV-6375 - Upgrade the bundled java versions
      • VIRLDEV-6242 - Offer current shared networks for L2 FLAT host network setting
      • VIRLDEV-6198 - Allow setting backend service URL using an IPv6 address
      • VIRLDEV-6022 - Remember size of the extension edit window
      • VIRLDEV-6599 - Remove dependency on potentially vulnerable package
      • VIRLDEV-6707 - Sign the Mac OS X GUI application with an official Cisco key
    • UWM   

      • VIRLDEV-5570 - Remember previous values for some system configuration values for proper cleanup

      • VIRLDEV-5571 - Fix order of system configuration actions when multiple system configuration values are changed
      • VIRLDEV-6361 - Background tasks may leak memory and stop updating external state
      • VIRLDEV-5851 - Cluster health was green when active computes were missing
      • VIRLDEV-6180 - Do not report license failure before activation
      • VIRLDEV-6240 - Ensure each compute host is used in cluster system operation check
      • VIRLDEV-6246 - Add support for Git operations via REST API
      • VIRLDEV-6133 - Multiple user creation does not work as expected
      • VIRLDEV-6461 - Salt configuration and status does not show each failing salt master connection
      • VIRLDEV-6307 - Support git source for browser-based editing of a virl file
      • VIRLDEV-6348 - Fix preview of virl file synced from the browser-based editor
      • VIRLDEV-6451 - Fix API wrongly returning 401 unauthorized instead of 403 forbidden if resource operation is prohibited by System configuration controls
      • VIRLDEV-5488 - SSH console can be higher than browser window, failing to display most recent lines
      • VIRLDEV-6523 - Volumes cannot be created
      • VIRLDEV-6284 - Upgrade shall visually distinguish successful and failed steps
      • VIRLDEV-6241 - System operation check fails in maintenance mode
      • VIRLDEV-6164 - OpenVPN configuration file is being cached by browsers unchanged even when modified during system configuration
      • VIRLDEV-6008 - Restrict access to VM Control pages to uwmadmin based on system configuration.
    • VIRL Core
      • VIRLDEV-6079 - allow up to 128 data interfaces in a node subtype. Note that large numbers of interfaces on a node may slow down its deployment and teardown, in addition to potential increased resource requirements by the node itself. All data interfaces from first up to the highest interface present on a node in the topology file must be simulated.
      • VIRLDEV-6072 - Interface count for IOS XRv 9000 was lowered to 8 data interfaces. This matches official documentation of the current IOS XRv 9000 images. Future releases of IOS XRv 9000 may increase the number of available interfaces
      • VIRLDEV-6073 - Interface count for NX-OSv 9000 was increased to 64 data interfaces, and its hw_bios switched to OVMF.fd.
        • Note that operating a NX-OSv 9000 node with more interfaces places increased demands on memory for the node, meaning that a different flavor may be required for such nodes. See also known issue VIRLDEV-6416.
        • On upgrade, existing NX-OSv 9000 images with a hw_bios setting set to n9kbios.bin should be modified in UWM Node Resources > Images to also switch to OVMF.fd.
      • VIRLDEV-4947 - Allow node to set its serial and machine UUID. Due to infrastructure constraints, only one node with a given UUID may be placed in the VIRL system at a time.
      • VIRLDEV-6555 - Change method of connecting static serial ports with node serial consoles directly
      • VIRLDEV-6528 - Allow nodes to specify common names for dummy interface networks. A node extension allows connecting special interfaces between management and data interfaces
      • VIRLDEV-6258 - Initial configuration management/shared/snat IP address injection shall not rewrite 'ip address dhcp'
      • VIRLDEV-6061 - Traffic capture does not start for node in a site element
      • VIRLDEV-6095 - Docker node in a site, or with non-standard name does not start
      • VIRLDEV-6169 - Fix apparent overlap in TCP port ranges
      • VIRLDEV-6105 - Live capture does not report port clashes
      • VIRLDEV-6817 - Increased memory limit in lxc-sshd (and derived subtypes including management LXC) to 32 MB to avoid shutoffs due to limit being reached
      • VIRLDEV-6888 - HWE kernel dropped overlayfs alias for overlay filesystem
    • Infrastructure

      • VIRLDEV-6079 - Support PCI multifunction for interfaces. An image property can control how interface hardware is simulated; may be useful if more than 28 interfaces are simulated on a node
      • VIRLDEV-6098 - Nodes are not proportionately distributed across cluster. The main weight is now placed on the number of unoccupied CPUs on a host
      • VIRLDEV-6102 - OpenStack and other infrastructure services only listen on internal interfaces
      • VIRLDEV-6116 - qemu was generating large amounts of messages in individual VM logs (IOSv nodes) wrt register access
      • VIRLDEV-6100 - add generated password protection to redis service
      • VIRLDEV-6254 - randomize keystone service token
      • VIRLDEV-6159 - generate new OpenVPN keys whenever OpenVPN is re-enabled
      • VIRLDEV-6267 - propagate built bridge module to computes
      • VIRLDEV-6201 - Switch to docker community edition; update coreos image
      • VIRLDEV-6292 - Eliminate vinstall subcommands other than salt and upgrade
      • VIRLDEV-6224 - Compute minion validates MTU on cluster network on startup
      • VIRLDEV-6343 - Do not run Apache on compute hosts
      • VIRLDEV-6338 - Increase timeout of libvirtd waiting on kvm to avoid nodes going into error state on smaller hosts
      • VIRLDEV-6312 - Upgrade from obsolete and vulnerable pycrypto to pycryptodome
      • VIRLDEV-6313 - Upgrade from theoretically vulnerable paramiko versions
      • VIRLDEV-6231 - Change login message of disabled computes
      • VIRLDEV-6187 - Allow changing cluster subnet on non-cluster deployment
      • VIRLDEV-6543 - Disabling cluster in virl_setup does not work as expected
      • VIRLDEV-6317 - Kernel bridge module must be compiled with 'retpoline'-supporting gcc on latest kernels
      • VIRLDEV-6342 - Make system services wait for interfaces they depend on
      • VIRLDEV-6422 - Salt error even if valid salt master is found
      • VIRLDEV-6745 - Allow VM metadata to be downloaded using neutron metadata service
      • VIRLDEV-5457 - Disabled screensaver on baremetal console
      • VIRLDEV-6825 - Upgrade fails on common.grub

     

     

    Migrating to VIRL PE 1.6

    Performing a New Installation

    If you already have a previous version of VIRL PE installed, it may be possible to upgrade it.  See below for more information on in-place upgrades.

    If you do not already have VIRL PE installed, please use the updated VIRL PE 1.6 installation guides posted on the VIRL PE documentation site.

    Select the instructions on that site appropriate for your selected installation option.

    Deployment instructions for:

     

         

    In-place upgrade instructions

     

    NOTE - you must have communication to the current Cisco salt masters and have a valid license key in order to perform the upgrade.

    NOTE - recent releases of both Firefox and Chrome have disabled a feature required by the UWM Upgrade page in releases prior to 1.6. Read the upgrade steps page for workarounds available in both browsers.


    Currently Running VIRL PE version 1.5.x

    An upgrade is available in the UWM VIRL Server > System Upgrade page. Follow the steps of the upgrade process to completion.

    For further information on the individual steps, visit the upgrade steps page.


    Currently Running VIRL PE Version 1.3.x

    If your current VIRL PE instance is VIRL 1.3.x, you may perform an in-place upgrade to the latest release without reinstalling. The instructions for the 1.5 upgrade can be followed, with two major differences.

    First, if you're using DHCP for the primary network configuration, you need to clear Google's DNS nameserver configuration from the /etc/virl.ini configuration.

    Second, the UWM in 1.3.x does not store the result of the individual upgrade commands. As a result, once the newer UWM takes over during the 'vinstall upgrade' step, the previous steps, including vinstall upgrade, may be highlighted in red, indicating a failure. A successful vinstall upgrade command will report "Done. You will need to restart once all subsequent upgrades are completed."

    For further information on the individual steps, visit the upgrade steps for 1.3 page.


    Currently Running VIRL PE Version 1.2.x or Less

    If your current VIRL PE instance is VIRL PE version 1.2.x, 1.1.x, or less, you must perform a full reinstallation to migrate to VIRL PE 1.6.  See the section above for instructions on performing a "new installation."

     

    Currently Running a VIRL PE Cluster

    In-place upgrades are only supported for VIRL PE clusters (a VIRL PE installation that consists of a controller and one or more compute nodes) running version 1.5.145.  If your current VIRL PE installation is a cluster, you may perform an upgrade, which will simultaneously upgrade all active compute hosts along with the controller. The installation procedures and configuration process for a VIRL PE cluster changed significantly in VIRL PE 1.5 to fix limitations in the VIRL PE 1.3 cluster configuration process. The upgrade instructions do not differ between 1.5.145 standalone and cluster deployments. All enabled computes must be running and reported as working properly for the upgrade to succeed.
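
    As a quick pre-upgrade check (assuming the standard computeN minion naming; the UWM cluster health page provides the same information), you can confirm from the controller that all compute hosts respond:

    sudo salt 'compute*' test.ping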

    Upgrade VIRL Client (VM Maestro)

    You should update VM Maestro to the most recent 1.6.0 build. Older releases should still work, since there have been no changes in the file format or APIs since VIRL PE 1.2 or 1.3. However, running the latest version is generally recommended.

    To download the new VM Maestro client:

    1. Open a web browser and navigate to the VIRL host or virtual machine's IP address.

    2. Login to the User Workspace Management (UWM).

    3. Select VIRL Server from the menu that appears on the left.

    4. Select the Download sub-menu.

    5. Select VM Maestro Clients from the list of options.

    6. From the list of files presented, download the VM Maestro client appropriate to your local platform (setup EXE for Windows, DMG for OS X, or zip file for Linux).

    7. Install VM Maestro.

    Once you have installed VM Maestro, you may want to update the node types shown in the Palette to match any changes on the VIRL server:

    1. Launch VM Maestro

    2. Select File > Preferences > Node Subtypes.

    3. Click the Fetch From Server button.
    4. Click OK.

     

    VIRL Server Component Versions

    This release contains the following component versions:

    • OpenStack Mitaka
    • VIRL_CORE 0.10.37.32
    • AutoNetkit 0.24.1/0.23.12
    • Topology Visualization Engine 0.17.28
    • Live Network Collection Engine 0.12.11
    • VM Maestro 1.6.0-534

     

    Cisco Platform VMs

    Bare-metal ISO installers bundle fewer images, which may be added later through UWM.

    • IOSv - 15.7(3)M (New)
    • IOSv L2 - 15.2.1 (06.2018) (New)
    • IOS XRv - 6.1.3 image
    • IOS XRv 9000 - 6.5.1 image (New) (NOT BUNDLED, must be installed from the VIRL Software page in the UWM)
    • CSR 1000v - 16.9.1 XE-based image (New)
    • NX-OSv 7.3.0.1 (Nexus 7000)
    • NX-OSv 9000 9.2.3 (Nexus 9000)  (New) (NOT BUNDLED, must be installed via UWM > VIRL Software)
    • ASAv 9.9.2 (New)
    • CoreOS 1632.2.1 (New)
    • Ubuntu 16.04.3 Cloud-init image

     

    Linux Container Images

    • Ubuntu 16.04 LXC
    • iPerf 2.0.2 LXC
    • Routem 2.1.8 LXC
    • Ostinato-drone 0.8 LXC

    Important Notes

     

    Salt Master Settings

    Once you have installed VIRL, apply for a VIRL license key as per the installation instructions.  You should enter at least two salt masters, picking different numbers between 1 and 4; do not enter the same server twice. You can list up to four salt masters.  When specifying multiple salt masters, separate each one with a comma followed by a space, as shown below.  Update your salt-master list if needed.

     

    US (external only)

    vsm-us-51.virl.info, vsm-us-52.virl.info, vsm-us-53.virl.info, vsm-us-54.virl.info       
     

    EU (external only)

    vsm-eu-51.virl.info, vsm-eu-52.virl.info, vsm-eu-53.virl.info, vsm-eu-54.virl.info

    AP (external only)

    vsm-ap-51.virl.info, vsm-ap-52.virl.info, vsm-ap-53.virl.info, vsm-ap-54.virl.info

     

    Note that in order to maximize availability and redundancy these master names may at times resolve to servers located in adjacent zones.

    The Reset keys and ID dialog within the Salt Configuration and Status page of UWM makes this process easier by providing a default set of Salt masters which can be selected by either clicking the US or the EU button for the respective set of masters.


    VIRL PE System Scaling

    • Do NOT oversubscribe hardware resources at multiple levels.

      • It is possible to oversubscribe CPU and memory resources at both the VIRL PE System Configuration level and at the VMware ESXi level.

      • By default, VIRL PE applies an oversubscription factor of 2.0 for memory resources and 3.0 for CPU resources.

      • The recommended configuration is to use dedicated resources for the VIRL PE VM at the ESXi layer and control the hardware oversubscription from UWM > VIRL Server > System Configuration.

      • System performance should be closely monitored and the following caveats should be taken into account when running large topologies at this scale.

    • The ability to run larger simulations, approaching the node limit or the total CPU and memory capacity of the system, depends strongly on available resources (memory, CPU, I/O speed, networking configuration, etc.).  In particular, node types that are heavier than IOSv might or might not work depending on available memory and CPU resources.
    • Additional features (routing protocols, MPLS, ...) might impact the ability to reach the node limit by using more shared resources of the simulation environment.
    • At this time, when launching large simulations, approaching the node limit or the system memory and CPU capacity, users must stagger the launch manually (see below for instructions on performing a staggered launch).  Most of the Cisco node types place a higher load on the CPU just as the node boots up and loads its configuration.  Cisco node types do not always react well to CPU starvation.  The system generally functions properly with modest CPU oversubscription, but running simulations close to the total hardware capacity, especially with CPU oversubscription, and starting all of the nodes at once can lead to CPU starvation.  A staggered launch will help to avoid this problem.

     

    Staggered Launch of a Topology Simulation

    When launching a large topology simulation (i.e., a topology that approaches the limits of the VIRL installation's hardware), it is recommended to avoid booting up every node at once when the simulation first starts.  Instead, stagger the launch so that only a subset of the nodes is booting up at once.  In the current release, topologies are not automatically staggered during launch.  To perform a staggered launch of a topology simulation, the topology must indicate which nodes to start when the simulation is first launched.  The back end will start the simulation, but it will only boot those nodes.  The remaining nodes will remain in off / ABSENT state until they are manually started.

    In VM Maestro, set the Exclude from Launch setting on nodes that you do not want to boot when the simulation first starts. As a starting point, pick a number of nodes equal to the number of physical cores, N, on your system.  Try setting the “exclude from launch” setting on all nodes except for that “initial set” of nodes.  Note that VM Maestro supports bulk editing: select multiple nodes at once in the topology editor, open the Properties view, and edit the value there to apply or remove the setting on all selected nodes.  Once the Exclude from Launch setting has been applied to all but N nodes of the topology, the topology is ready for a staggered launch.
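
    To determine N on the VIRL server, generic Linux commands can be used; note that nproc reports logical CPUs, which with hyper-threading enabled is typically twice the physical core count:

    nproc
    lscpu | grep -E '^(Socket|Core)'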

    Start the simulation.  Wait for just the initial set of N nodes to boot up and settle down.  The nodes should at least go to the ACTIVE – REACHABLE state, and it’s probably best to leave them for a few minutes even after that to make sure that the configuration is loaded and the initial protocol processing is complete.  In the running simulation view, select another batch of N nodes, right-click and select Start Node.  Wait until that batch finishes booting up.  Then start another batch of N nodes.  Repeat until all of the nodes are booted up, running and ACTIVE – REACHABLE. Note that a REACHABLE state is only achieved if the initial or automatic configuration of a starting node ensures that the management IP address is present on the first interface of the node; the initial configuration as generated by AutoNetkit is designed to accomplish this.

     

     

    Known Issues and Caveats

                                                     

    Known issues are grouped below by component; each entry lists the defect ID and a description, with a workaround where available.

    Component: BBE

    VIRLDEV-5159

    The browser based editor might occasionally generate invalid VIRL files. This has been observed with large topologies (hundreds of nodes). In such a case, the VIRL server refuses to start the topology.


    Workaround: Double check the generated links or use VM Maestro.

     

    VIRLDEV-6522

    Topology is not loading in browser based editor when remote (HTTP) .virl file option is selected. This may be caused by the remote webserver's default configurations, which do not allow loading of content from other domains (here, the VIRL server's domain).


    IMPORTANT: Save a copy of the original file before continuing!

    Workaround: Configure the web server originating the downloaded .virl file to include the header Access-Control-Allow-Origin with a value of "*".
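
    To verify that the header is being returned (the URL below is hypothetical; substitute the location of your .virl file):

    curl -sI http://webserver.example.com/topologies/demo.virl | grep -i access-control-allow-origin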
           
    VIRLDEV-6066

    The Browser-Based Editor (BBE) does not support IOS XRv 9000 nodes, nor custom node subtypes.  If a topology uses an IOS XRv 9000 node, attempting to open the topology in the BBE results in a blank page.  No topology is shown on the canvas.


    Workaround: Use VM Maestro if you need to use IOS XRv 9000 or custom nodes.

    Component: Core

    VIRLDEV-4877

    With VIRL 1.3, the product is restricted to a single project and user. This change manifests in two areas:

    1. removal of the 'Add' and 'Import' buttons for project and user creation

    2. removal of the ability to run simulations of multiple projects at the same time

    This change is in line with the positioning of the 'personal edition', where the product is designed to be used by individuals ('single user').

    VIRLDEV-6951

    In a simulation with 4 or more docker nodes, the coreos node fails to start. An error is shown in the messages console stating:

    (ERROR) [date time] Failed to start simulation "name": Failed to create config file for node "~coreos" in simulation "name": 'NoneType' object is unsliceable

     

    Workaround: All docker nodes must start together with the first (management LXC) and second (coreos) node, in the first batch of nodes to start. This can be achieved by setting the backend's batch size to 2 + number_of_docker_nodes, e.g. 7 when at most five docker nodes need to be started. This number should not be made arbitrarily large, especially if larger VM nodes, such as IOS XRv 9000, are to be started at the same time, even in different simulations. To change the value, use the following commands:

    sudo crudini --set /etc/virl/virl-core.ini orchestration node_op_batch_size 7
    sudo service virl-std restart
    VIRLDEV-5506

    Simulation element naming restrictions are inconsistent. Using names for nodes, networks, custom subtypes and their interfaces, images, users and projects with e.g. non-alphanumeric or accented characters, or with some symbols may lead to failures in various stages of a simulation's run.

    Neither the simulation engine nor UI prevents a launch of such topologies outright. A future version of VIRL PE may place new restrictions on the names of these elements.

    Using non-accented English letters, numbers, simple dash, dot and underscore is generally safe. Symbols like colon, percent-sign, slash and quotes should be avoided. Some platforms may place additional restrictions on file names when downloaded from UWM, such as traffic capture files.

    VIRLDEV-6314

    Simulation nodes cannot be restarted from SHUTOFF state. A node may drop into this state when it stops executing - either by being shut down from within, by crashing its operating system or the virtualization layer, or if the host operating system reboots for any reason.

    There is no command in the UWM or VM Maestro UI to revive such a node. Individual nodes may or may not recover from abrupt stops, but the revival can at least be attempted. Only information committed to the virtual disk drive of the node can be recovered. In case the system is configured to store node virtual drives in memory, the data will not be available when the host shuts down or reboots.


    Workaround: use the following commands on the VIRL server to revive all shutoff LXC nodes and OpenStack VM nodes, respectively:

              sudo lxc-ls --stopped | xargs -rn1 sudo lxc-start -n

              openstack server list --status shutoff --all-projects -f value -c ID | xargs -r openstack server start

    Individual nodes whose ID has been identified from the lists can be started by 'sudo lxc-start -n <id>' and 'openstack server start <id>', respectively. In case there are numerous nodes to revive, it is suggested to revive them in small batches.

     

    VIRLDEV-4468

    In situations with low free disk space, VIRL core software upgrades might fail. This is also dependent on the size of the configured Cinder file (block storage for VM images). The default for that file is 20GB. Workaround: Ensure that enough disk space is available.

     

    VIRLDEV-5042

    Very large topologies (between 100 and 300 nodes) might misbehave when ANK configuration generation parameters are left at their defaults, which produces huge topology files due to generation of a full iBGP mesh. This can manifest in:

    • timeouts while waiting for ANK to generate the topology

    • errors when displaying configuration differences in VM Maestro

    • errors when downloading the resulting topology file in VM Maestro or in UWM

    • runtime errors where nodes might not be coming up or put too much strain on system resources due to unrealistic configurations.

    Example: A 300 node topology with default ANK settings will produce 300x300 = 90,000 iBGP configurations for the topology, which will result in a >>10MB topology file.


    Workaround: If ANK configuration generation is required, it is suggested to use constraints such as multiple autonomous systems, route reflectors, and other means to split the simulation domain into more manageable chunks.

    Note that even if you are not using ANK to generate full configurations, it is still possible to generate IP addresses and default accounts but not full configurations by running ANK in "infrastructure only" mode.  In the UI, click on the topology background, and set the "Infrastructure Only" property to true in the topology's AutoNetkit page in the Properties view before invoking ANK to Build Initial Configurations.

     

    VIRLDEV-5360

    When 'logging console' is configured and neither the Jumphost nor the LXC Management node is available (e.g. powered off), configuration extraction may fail due to unexpected output, as the extraction mechanism falls back to using the console and relies on clean, uninterrupted output from the nodes.

    Also, configuration extraction might fail when consoles are opened via UWM.


    Workaround: Don't configure any logging on the console and / or don't turn off the management LXC / Jumphost. Don't have consoles open via UWM when extracting configurations.

     

    VIRLDEV-4588

    Docker image names can not contain upper case letters.


    Workaround: Use lower-case image names.

     

    VIRLDEV-4710

    The rules governing the effective name of an LXC image are not consistent between creation, modification, and use in a simulation. The name is produced as a combination of the owning project, the subtype name, and a version suffix set by the user when the image is created. The subtype-name portion can be overridden by the subtype definition's baseline_image attribute, usually so that a custom subtype can make use of another subtype's installed images.

    Workaround: Do not set this property for custom LXC subtypes. It is also recommended that an LXC image, when being added, is not marked for use by a specific project, and that the Modify container function is not used to alter the suffix of the name.

    Component: Infrastructure

    VIRLDEV-4819

    Machines with a high CPU count might require a different number of database and front-end processes to be able to deal with requests.

    Symptoms of the issue can be:

    • failed sim starts

    • failed UWM dialogs dealing with node resources or simulations

    By default, the values are set for small machines which should be fine for VIRL personal edition. VIRL has the ability to adjust these settings based on built-in empirical values which are applied whenever a rehost is run.

    Alternatively, run 'salt-call -linfo state.sls openstack.worker_pool' or use the virl_setup script, menu item 'Maintenance → 3 Reset OpenStack worker pools'.

    VIRLDEV-6628

    The virl_setup tool lets an administrator perform system-wide modifications even if simulations are running. It is highly recommended to stop all simulations before running the virl_setup tool to make modifications to the system. The configuration steps may fail in unexpected ways if simulations have deployed artifacts in the system.

    VIRLDEV-5326

    Docker by default creates SNAT / MASQUERADING iptables entries for the default docker0 bridge. This can interfere with simulation network traffic when the IPv4 networks in use overlap.

    The default 172.17.0.0/16 masquerading entry has been removed. However, there is at least one entry left for the Docker registry with a /32 address which can not be used in any simulation.


    Workaround: Avoid using the 172.17.0.0/16 network whenever possible. Check the masqueraded IP addresses currently in use by the local registry by typing sudo iptables -L -v -t nat

    Example (excerpt):

                  0    0 MASQUERADE  tcp  --  any    any    172.17.0.2          172.17.0.2          tcp dpt:5000

    VIRLDEV-6243

    Live snapshot does not create a modified image. The current contents of a running simulation node's hard drive can be captured to form a new image. If the snapshot is taken on a running node, the modifications made to the original image since the node launched may not be captured correctly.

    Note that this functionality is mainly geared towards custom server images, less so for capturing router VM state.


    Workaround: Create the snapshot from a shutdown (not stopped) node. The shutdown should occur from within the node, i.e. by logging into its console and running the 'sudo shutdown -h now' command.

    See notes on VIRLDEV-6314 on how a shutoff node can be restarted.


    Workaround: Enable and install the qemu-guest-agent package prior to snapshotting.

    Before the node that is to be built is started in a simulation, edit the base image to add 'hw_qemu_guest_agent=yes' to its property list. During image preparation, install qemu-guest-agent inside the VM node.

    VIRLDEV-5365

    When the local time of the host computer that runs the VM is in a timezone >0 (e.g. east of Greenwich), NTP might step the clock back at system start, which might confuse STD and therefore result in a licensing issue due to invalid time.

    Workaround: Restart STD/UWM using sudo salt-call -linfo state.sls virl.restart

    VIRLDEV-5387

    NTP does not sync under certain circumstances.


    Workaround: Restart NTPd using

    sudo systemctl restart ntp

    check results:

    ntpq -p

     

    Sometimes it is necessary to kill the ntpd process manually:
              sudo systemctl stop ntp         
              sudo killall ntpd         
              sudo systemctl start ntp
             

    VIRLDEV-6432

    Launching an OpenVPN client in Linux for IPv6-only shared network does not bring up tunnel interface.

    This does not affect Windows nor Mac OS X clients.

     

    Workaround: once the tunnel connection is established, run sudo ip link set tap0 up.

    VIRLDEV-6078

    The APT cache is not ready on a fresh install. The VIRL PE build process clears the database used by Ubuntu to recognize installable packages. An attempt to install a new package will result in an error stating the package is not available. Resolve by running 'sudo apt-get update' prior to package installation.

    VIRLDEV-6524

    VIRLDEV-6542

    IPv6 literals are unsupported as sources for image downloads. The underlying tools used for the actual downloads of VM/LXC packages (curl) and docker packages (docker) cannot use an IPv6 address as the hostname part in URLs.

     

    Workaround: Use a valid DNS name for the image source host. As a last resort, edit the VIRL server's /etc/hosts file to add a custom hostname association for the IP address.
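
    For example (hypothetical hostname and address), such an /etc/hosts association could be added on the VIRL server with:

    echo '2001:db8::50  image-source.example.internal' | sudo tee -a /etc/hosts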

    VIRLDEV-6821

    Baremetal installs create a swap partition of the same size as the available memory.

    In cases where the physical disk is small compared to the amount of RAM on the server, the main partition may become too small to operate larger simulations.

     

    Workaround: In the installer, select manual partitioning instead of automatic; contact support if assistance is required.

    VIRLDEV-6732

    Failed baremetal installation may offer to send an automatic bug report.

    In some cases, if the baremetal installer fails, it may offer to collect troubleshooting information, and upload it to a bug tracker for developer review.

     

    This is not an offer made by Cisco; the tool would not upload the information to any Cisco infrastructure, and it is very unlikely to result in actual support being provided.

    Use the official and appropriate support channels for this product instead.

    Component: LiveVis

    VIRLDEV-5085

    In LiveVis, when selecting the 'Clear Log' option, a pop-up message might include a "Prevent this page from opening additional dialogs" option. If the user selects that option, the system may stop collecting further log messages. This is a known issue and it is also browser dependent.

    virl-1.3-live-vis-clear.png

    Workaround: Do not select this option when offered by the browser.

     

    VIRLDEV-4899

    Downloading the Syslog as CSV from within Live Vis is not working on Safari and results in a page error. This is a known limitation with the Safari browser.

    virl-1.3-live-vis-download-csv.png

    Workaround: Use a different browser like Firefox or Chrome.

     

    VIRLDEV-4898

    Extracting configurations from a running topology within Live Visualization is not working as expected when using Safari. The document returned is shown as XML text, rendered in the browser and is not offered to 'save' into the Downloads folder. This is a known limitation.


    Workaround: Use a different browser like Firefox or Chrome or save the resulting XML text manually into a .virl file.

     

    VIRLDEV-4698

    In Live Visualization, NX-OSv interfaces are listed twice in the interface table, once showing the IP and once showing None. When retrieving the interface table from an NX-OSv device, some entries show up twice in the output. This is only cosmetic and might cause a spurious extra line to be drawn.

     

    VIRLDEV-4223

    Live Visualization might not work properly when changing the UWM port. UWM System configuration offers the ability to change its listening port from 19400 to another port. When doing so, this might have side effects in Live Visualization, specifically with creating packet captures and opening serial console connections, which are performed using UWM.


    Workaround: Do not change the UWM port.

     

    VIRLDEV-4202

    The overlay menu button to switch among different overlays might get blocked after updating Live Vis Web server port. In UWM, it is possible to change the Live Vis Web server port from 19402 to a different port. This might result in a blocked overlay menu button (not showing any entries). The image below illustrates the correct menu behavior whereas when broken, the entire drop down part of the menu is missing.

    virl-1.3-live-vis-physical.png


    Workaround: None

     

    VIRLDEV-4695

    In Live Visualization, the 'Physical Live' Overlay does not correctly show physical links for XRv nodes. This is a known defect.


    Workaround: None

     

    VIRLDEV-4949

    Sometimes, BGP VPNv4 links are not being drawn in Live Visualization.


    Workaround: None

     

    VIRLDEV-4342

    Large (80-300+) topologies might not render properly and perform slower than expected in Live Vis. Workaround: None

    Component: UWM

    VIRLDEV-5061

    Preview of a topology before launching the simulation does not render properly with certain browsers. This happens with Windows and Internet Explorer. IE is not a supported browser.

    Workaround: Switch to Chrome or Firefox to prevent the issue from showing.

     

    VIRLDEV-2167

    When importing projects into the system, the uwmadmin password in virl.ini was out of sync with the database. To avoid this issue, the uwmadmin project is skipped when importing projects into the system.


    Workaround: Manually change the password after importing to get the correct password into the system.

    VIRLDEV-6613

    UWM does not allow specifying a shared network other than 'flat' as the management network of a launched simulation.


    Workaround: see workaround for VIRLDEV-6614 below

    VIRLDEV-6592

    The UWM admin-mode button to 'Stop all simulations' does not ask for confirmation. Do not press this button accidentally.

    VIRLDEV-6261

    In the VIRL Server > System Upgrade page of UWM up to the 1.5.145 release, there is a list of packages to install in the first table.  If you select all on that first table and then press the Upgrade button, the request will be denied with an HTTP 400 error.


    Workaround: This error is triggered by bad behavior of the select all functionality on that table.  Selecting packages individually works as expected.

    VIRLDEV-6838

    In a cluster, System configuration changes to the DNS server settings are not propagated to compute hosts.


    Workaround: After completing the system configuration change (which applies it to the controller), log into the controller and run the CLI command:

    sudo salt "compute*" state.sls common.cluster.compute.config.primary

    Component: VM Maestro

    VIRLDEV-6614

    VM Maestro does not permit setting a different shared management network. Only the original 'flat' network may be selected, even if 'flat1' or other shared networks are available.


    Workaround: Preferably use the 'flat' shared management network. To switch to another shared management network, go to the Extensions tab of the topology properties (shown when nothing is selected in the topology). Edit the 'management_network' extension to use the value 'flat1' or 'shared-X' where X is the name given to the shared network in UWM System Configuration.

                         
             

    VIRLDEV-4434

    In VM Maestro, use an external browser for both ANK vis and Live Vis. We've had reports of problems when using the internal browser for ANK and Live Visualization from within VM Maestro.

    virl-1.3-vmm-web-browser.png

    Workaround: Use an external browser, configured as shown above.

     

    ./.

    Terminal preference for detached internal terminals - this function has been deprecated in VM Maestro 1.2.4 onwards.


    Workaround: You can manually 'tear' the terminal pane from the main VM Maestro window. Use this in conjunction with the VM Maestro preference (Cisco terminal) - "multiple tabs for one simulation".

     

    VIRLDEV-3525

    In VM Maestro (1.2.8 build 474), the scroll bar in the 'Preferences > Node Subtypes' dialog doesn't work properly on OS X 10.11 and newer.

    virl-1.3-vmm-osx-scrolling.png

    Workaround: Configure scroll bars to 'always show' in the General section of the System preferences as shown above.

     

    VIRLDEV-6359

    Telnet/SSH to management does not work in a cluster if the jumphost node is running on a compute host. If a simulation does not enable the management LXC, an alternate 'jumphost' node is used to proxy traffic into the simulation's management network. In a cluster environment, the jumphost may be launched on a compute node. VM Maestro falsely assumes the jumphost is running on the controller host, and thus connections to it fail.


    Workaround: Do not disable the management LXC for simulations; use of the management LXC is strongly preferred over the jumphost.

     

    VIRLDEV-4820

    Occasionally, the message 'Unexpected end of input within/between OBJECT entries' can be observed in VM Maestro's Simulations panel. This is mostly cosmetic, as it recovers automatically.


    Workaround: None

     

    VIRLDEV-5042

    While running ANK, when clicking 'yes' on the "View configuration changes" dialog, the dialog disappears, resulting in an 'Unable to connect to Vis Server' error in the console. This happens for large topologies.


    Workaround: None

    VIRLDEV-6288

    Sometimes, installing a new version of VM Maestro on an OS X system that already has VM Maestro installed leads OS X to report that the application is damaged.

    vmm_is_damaged.png

     

    Workaround:

    The most likely cause of that error on OS X is a combination of OS X’s default security settings, lack of signing of the OS X version, and some terrible error reporting from OS X. Try this:

    • Open the OS X System Preferences
    • Click Security & Privacy
    • Click General.
    • If necessary, click the little lock on the bottom to unlock the page.
    • Switch the option to say “Allow apps downloaded from” > Anywhere.
    • Close the Security & Privacy dialog
    • Now try to launch VM Maestro.
    • If that works, you can now go back and change your Security & Privacy setting back to their defaults.
    Unfortunately, since Sierra, the “Anywhere” option no longer shows in the Security & Privacy preferences. In that case, your options are:
    • Remove the “quarantine” attribute that may be causing the symptoms you’re seeing:
      • Open the Terminal.app.
      • Remove the quarantine attribute:
      • sudo xattr -d -r com.apple.quarantine /Applications/VMMaestro-1.5.0-510/
    • Re-enable the install from “Anywhere” option for Sierra.
    VIRLDEV-6755

    On OS X, webservices cannot be reached over IPv6 in many configurations.

    A likely issue with Java IPv6 support causes the UI application to fail contacting the webservices over IPv6, with a "No route to host" error being reported. This happens even if other applications running on the same device are able to properly route to the backend server. This issue may be affected by the presence and configuration of wired/wireless interfaces on the device.


    Workaround: Use IPv4 when communicating with the webservices via the UI application.

    Component: VMs

    VIRLDEV-3616

    When running ASAv nodes using the bundled ASAv image, the console of the ASAv is bound to the serial port. By default, 'vanilla' ASAv images downloaded from CCO have their console bound to the 'VGA' screen, which is accessible in VIRL using the VNC option. However, access to the console is EITHER on the serial port OR on the VGA screen, never both. Since it is not known where the console is bound (serial or VGA), both options are offered to the user but only one will succeed.

     

    VIRLDEV-5092

    NX-OSv / NX-OSv 9000 nodes remain UNREACHABLE and connecting to the monitor port doesn't work.

    • NX-OSv (Titanium) nodes might not be available on the management interface. This is a reference platform issue where sometimes the Mgmt0 interface is stuck in 'down/up' state. E.g. the interface is 'admin up' but the link is indicated as 'down'.

      Workaround: Manually issue a 'shut / no shut' sequence on the management interface of the affected node

    • NX-OSv 9000 nodes create unnecessary broadcast traffic on the management interface. This is a reference platform issue where NX-OSv 9000 nodes respond to frames not owned by the node which might result in a broadcast storm of IP redirect packets on the management network.

      Workaround: Configure 'no ip directed-broadcast' on the management interface of NX-OSv 9000 nodes.

    • NX-OSv 9000 nodes occasionally (less than 5 in 100 launches) do not become reachable even with the aforementioned workaround. A restart of the affected node usually resolves this.

     

    VIRLDEV-4682

    The subtype for Ostinato has changed between the 1.2 and 1.3 release. When upgrading to VIRL 1.3 from 1.2 and thus staying on the OpenStack Kilo release, the Ostinato 0.8.1 image will not be used due to the mismatch in the subtype (lxc-ostinato vs. lxc-ostinato-drone).


    Workaround: Change subtype for affected nodes to 'lxc-ostinato-drone'

     

    VIRLDEV-4955

    IOS XRv 9000 duplicate management IP configuration. IOS XRv 9000 nodes acquire the same IP for both the XR operating system layer and its associated management interface as well as for an underlying Linux layer which uses the same 'physical' interface.


    Workaround: None

     

    ./.

    IOSv 15.6(2)T - On boot-up the following (similar) message may be observed:%SYS-3-CPUHOG: Task is running for (1997)msecs, more than (2000)msecs (0/0),process = TTY Background.-Traceback= 114ECF8z 130425z 15E20Ez 15DF30z 15DD3Dz 157D75z 158A2Bz 1589BFz 159B67z 153672z 3C9740Az 3C868CEz 3C89BEFz 5125F91z 491D86Cz 492E540z - Process "Crypto CA", CPU hog, PC 0x00157D2C


    Workaround: This is cosmetic and can be ignored.

    VIRLDEV-6297

    IOS XRv cannot use more than 16 interfaces. As a regression from VIRL PE 1.2, IOS XRv fails to pass any traffic shortly after booting whenever more than 16 data interfaces are present on the node. This is currently attributed to non-specific changes in the qemu/kvm software.

    The subtype definition in VIRL PE 1.6 was adjusted to only allow up to 16 data interfaces.


    Workaround: None. Setting up an alternate qemu 2.0 installation alongside the default 2.5 may allow the nodes to run as before.

    ./.

    IOS XRv 9000 release 6.5.1 has a recommended memory requirement of 12 GB; the minimum observed memory required to boot and run the system is 8 to 10 GB. The VM will actually consume this amount of memory. This means that memory reservation, which employs a simple memory overcommit calculation, is at increased risk of allowing memory exhaustion to occur on the simulation hosts.

     

    When IOS XRv 9000 is starting, it writes a large amount of data (at least 3.5GB, up to 7GB with release 6.5.1) into its virtual hard drive. This amount of data is stored separately for each instance of an IOS XRv 9000 on a given host, unlike the base image data which is shared. The largest among the other node subtypes exhibiting similar behavior is NX-OSv 9000, writing about 250 MB on startup.

    In case several such nodes are started on the same host, the amount of data can potentially add up. Make sure the available space on the hosts' hard drive is plentiful. If the system has ramdisk-based storage of nodes' virtual disks enabled, then this space allocation counts towards memory usage, but is not accounted for in memory reservation calculations, potentially also leading to memory exhaustion.

    ./.

    When IOS XRv 9000 is booting, if you enter text at the console, the boot will drop to a CLI setup prompt for the admin user account. This prompt blocks CVAC, causing the initial configuration application to be abandoned.


    Workaround: Do not open a console until the node becomes reachable, or don't enter any text into the console (not even Enter, to check whether any output will be generated), or, for releases of IOS XRv 9000 up to 6.4.1, enter a valid pair of credentials (named neither cisco nor admin) as soon as the prompt appears. The latter workaround may be ineffective in newer IOS XRv 9000 releases.

    In case the initial configuration is not loaded after all, it may still be loaded manually. After setting up the admin account, log in using this new account.
    Then, enter command: run cp /etc/sysconfig/iosxr_config.txt cvac/
    If that command succeeds, go into config mode, and enter: load cvac/iosxr_config.txt
    If the configuration load succeeds, enter commit and then end to exit config mode.
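
    Collected into a single console sequence (the same commands as above; 'configure' enters configuration mode on IOS XR):

    run cp /etc/sysconfig/iosxr_config.txt cvac/
    configure
    load cvac/iosxr_config.txt
    commit
    end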

    VIRLDEV-6074

    VIRL PE has provided support for a reference platform image based on the Nexus 9k code since December 2016.  The subtype for this image is NX-OSv 9000.  Cisco has changed the name of this image to Nexus 9000v.  VIRL PE has no Nexus 9000v subtype.


    Workaround: None.  Just use the NX-OSv 9000 for Nexus 9000v images.

    VIRLDEV-6416

    Default NX-OSv 9000 memory allocation may not be sufficient.  When launching a topology simulation that uses multiple NX-OSv 9000 nodes, some of them may fail to boot.

    The "flavor" used by default for NX-OSv 9000 was changed to use 6 GB of RAM with the release of NX-OSv 9000 version 7.0.3.I6.3, because the previously lowered memory allocation of 4GB has proved to be too small in practice.  We continue to work with the Nexus team to determine optimal default resource settings for NX-OSv 9000 and the other router VM images.

    Increasing the number of allocated interfaces on the NX-OSv 9000 nodes (e.g. above 10 interfaces) will further increase the memory requirements in an unspecified manner. A satisfactory allocation may also depend on the task load on the node, and thus require some level of experimentation for an individual topology in larger configurations.


    Workaround: If your NX-OSv 9000 nodes fail to start properly, or log errors indicating memory issues, we recommend replacing the NX-OSv 9000 flavor with a new flavor that has a larger memory allocation.

    • Login to the UWM as the uwmadmin, and navigate to Node resources > Flavors.
    • Click the Delete icon in the Options column for the NX-OSv 9000 flavor to delete the default flavor.
    • Click the Add button to add a new flavor.
    • On the Create Flavor form, enter the values
      • Name: NX-OSv 9000
      • RAM (MB): 6144 (and larger)
      • Virtual CPUs: 2
      • Disk (GB): 0
    • Click the Create button.

    Simulations that you launch after making this change will use the new flavor and the (minimum) 6 GB memory allocation for all NX-OSv 9000 nodes.

     

     

    END