IMPORTANT - Start Here
The VIRL 1.1.1 release is an upgrade that builds on the VIRL 1.0.x releases.
NOTE - If you have VIRL 1.0.11 or 1.0.26 installed today, you can perform an in-place upgrade.
In-place upgrade from VIRL versions below 1.0.11 is NOT supported. If you are running an earlier release, a new installation image MUST be downloaded and installed. Please refer to http://community.virl.info/t/virl-1-0-11-december15-release-now-available-for-download-and-upgrade/65173 for more information.
Installation images are now available for VMware Workstation, Workstation Player, Fusion Pro, ESXi and Bare-metal systems.
Please see the section below on Self-Service Downloads for instructions on how to obtain the appropriate installation image.
NOTE - SUPPORT FOR VIRL v1.0.11 AND v1.0.26 WILL END ON 13th May. PLEASE UPGRADE AS SOON AS POSSIBLE!
Online training material, designed to help you get started and become productive quickly, is available at http://virl-dev-innovate.cisco.com/tutorial.php. NOTE - this includes video walkthroughs; ensure that your browser supports H.264 video and that any required plugins are enabled.
New Features
NAT Enabled on PC Images
Users who run VIRL on VMware Workstation or Fusion are now able to use the native NAT functionality offered by VMware's 'host networking' setting. The 'l2_network_gateway' and 'l2_network_gateway2' gateway IP address settings, which relate to the Flat and Flat1 networks respectively, are now set to the 172.16.x.2 address. Devices configured with a default route to that address, or using DHCP to obtain a default route, will have Internet access via the NAT function.
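As a quick check of the NAT path, the minimal sketch below pushes a default route towards the gateway from a Linux guest (server VM or LXC) attached to the Flat network and tests outbound reachability. It assumes the default Flat subnet of 172.16.1.0/24 and root privileges on the guest; adjust the gateway to match your own l2_network_gateway setting.

    # Minimal sketch (assumptions noted above): point the Linux guest's default
    # route at the VMware NAT gateway on the Flat network and verify Internet access.
    import subprocess

    GATEWAY = "172.16.1.2"  # assumed l2_network_gateway for the default Flat subnet

    subprocess.run(["ip", "route", "replace", "default", "via", GATEWAY], check=True)
    subprocess.run(["ping", "-c", "3", "8.8.8.8"], check=True)

On Cisco device images the equivalent is simply a static default route towards the gateway, or a DHCP-learned default route, as described above.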
Online Resource Requirements Calculator
The new web page at http://virl.cisco.com/resource/ provides a quick resource calculator for any topology that you may be thinking of using. Simply enter the desired number of nodes in the 'Enter # of Nodes' field to see the estimated hardware requirements needed to run the simulation.
Please note that this tool is for estimation purposes only. The actual hardware requirements to run certain simulations may vary depending on many factors, including the over-commit ratios that you're using on your system.
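To illustrate the kind of arithmetic involved, here is a rough per-subtype estimate. The per-node RAM and vCPU figures are assumptions for illustration only; use the online calculator for real planning numbers.

    # Rough, illustrative resource estimate for a topology.
    # The per-subtype figures below are assumptions, not official sizing data.
    PER_NODE = {
        "IOSv":     {"ram_mb": 512,  "vcpus": 1},
        "IOSvL2":   {"ram_mb": 768,  "vcpus": 1},
        "CSR1000v": {"ram_mb": 3072, "vcpus": 1},
        "NX-OSv":   {"ram_mb": 2048, "vcpus": 1},
    }

    def estimate(topology):
        """topology: mapping of subtype name -> node count."""
        ram_mb = sum(PER_NODE[s]["ram_mb"] * n for s, n in topology.items())
        vcpus = sum(PER_NODE[s]["vcpus"] * n for s, n in topology.items())
        return ram_mb, vcpus

    ram_mb, vcpus = estimate({"IOSv": 10, "CSR1000v": 2})
    print(f"~{ram_mb / 1024:.1f} GB RAM and {vcpus} vCPUs, before over-commit")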
System Software Upgrade Simplification
This version of VIRL introduces a new system software upgrade method. Once you are running VIRL 1.0.x, future upgrades will be driven via the 'System Upgrade' panel in the User Workspace Management (UWM) page.
The System Upgrade page will offer users three modes of upgrade:
- Core - upgrades the key components of the VIRL system as well as the Linux operating system.
- Full - upgrades the key components of the VIRL system and the Linux operating system, and updates all of the available Virtual Machine and Linux Container images.
- Advanced - upgrades the key components of the VIRL system and the Linux operating system, and offers you the ability to choose which VM Maestro packages, Virtual Machine and Linux Container images will be upgraded.
When an upgrade is performed, the function will also manage any necessary reboots and Linuxbridge kernel patches (required for Layer 2 simulation operations) without the need to issue any CLI commands.
Real-Time Traffic Visualization
When a simulation is running, users can log into the UWM page as the user under which the simulation was launched. After clicking 'My Simulations' and selecting the simulation of interest, you will see a new 'Show Traffic' button on the right-hand side of the page, just above the 'Interfaces' table.
Clicking on the button presents the user with a table of all of the interfaces in the simulation, with traffic counters showing the amount of traffic sent and received on each interface.
You can select the subset of interfaces that you wish to graph, resulting in a graph showing the data from the last 1, 5 or 10 minutes, or a 'Live' graph.
NOTE - if you encounter a situation where the RX/TX packet & byte counters report 'loading' and do not populate with values when you have a simulation running, flush your browser's cache.
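The counters are cumulative totals per interface; the throughput plotted on the graphs is essentially the difference between two samples divided by the sampling interval. A minimal sketch of that calculation follows; the counter source here is a placeholder, since the UWM page gathers the real values for you.

    # Minimal sketch: convert two cumulative byte-counter samples into a bit/s
    # rate, which is what the 'Show Traffic' graphs plot over time.
    # get_rx_bytes is a placeholder for whatever supplies the counter value.
    import time

    def sample_rate(get_rx_bytes, interval=5.0):
        first = get_rx_bytes()
        time.sleep(interval)
        second = get_rx_bytes()
        return (second - first) * 8 / interval  # bits per second

    # Example with a dummy counter source standing in for a real interface:
    samples = iter([1_000_000, 1_600_000])
    print(sample_rate(lambda: next(samples)), "bit/s")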
Git Repository Browser Enhancement
If you have a Git repository 'added' to your VIRL server, you're now able to browse, search and select topologies from your Git repository when using the UWM 'Launch new simulation' interface. Log in to UWM as a user such as 'guest' and select 'My Simulations'. Press the 'Launch new simulation' button and then click the radio button to 'Select .virl file from git repository'. The new browser function provides an 'explorer-like' view of the git repository contents as well as a search function.
Syslog Data Export
The Live Visualisation function includes the ability to set up a central syslog server to which all network devices can be configured to send syslog messages. The Syslog function now offers the ability to export the syslog data to a .CSV file that can be downloaded for subsequent use. To set up all network devices to send messages to the syslog server, start the Live Visualisation view, then select 'Setup Syslog' from the 'Action' menu. Messages will now be collected. To export the data, press the 'Syslog' button on the menu bar, press the 'Actions' button on the Syslog panel and select 'Download CSV' to start the download.
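Once downloaded, the export is an ordinary CSV file and is easy to post-process. A minimal sketch is shown below; the file name and the 'host' column are assumptions, so check the header row of your own export and adjust accordingly.

    # Minimal sketch: summarise a downloaded syslog CSV export by originating
    # device. The 'host' column name is an assumption - inspect your export's
    # header row and adjust the field name if it differs.
    import csv
    from collections import Counter

    with open("syslog_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    per_host = Counter(row.get("host", "unknown") for row in rows)
    for host, count in per_host.most_common():
        print(f"{host}: {count} messages")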
Ostinato LXC Name Change
The Ostinato LXC instance has changed name from 'LXC-Ostinato' to 'LXC-Ostinato-drone'. If you have an existing topology using the LXC-Ostinato subtype, you will be able to start your simulation, but you will be advised to update the subtype to 'LXC-Ostinato-drone'. If you run 'Build initial configurations', it too will warn you if it detects an LXC-Ostinato instance. To change the subtype, select the LXC-Ostinato instance in your topology and change the 'subtype' to 'LXC-Ostinato-drone' using the drop-down list in VM Maestro under 'Node properties'.
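If you have many saved topologies to update, the subtype can also be changed outside VM Maestro, since .virl files are XML. The sketch below assumes nodes carry a 'subtype' attribute in the http://www.cisco.com/VIRL namespace; keep a backup copy before rewriting files in place.

    # Minimal sketch: rewrite the Ostinato subtype in a .virl topology file.
    # Assumes the standard VIRL XML namespace and a 'subtype' node attribute;
    # make a backup before overwriting the original file.
    import xml.etree.ElementTree as ET

    VIRL_NS = "http://www.cisco.com/VIRL"
    ET.register_namespace("", VIRL_NS)

    def update_subtype(path, old="LXC-Ostinato", new="LXC-Ostinato-drone"):
        tree = ET.parse(path)
        changed = 0
        for node in tree.getroot().iter(f"{{{VIRL_NS}}}node"):
            if node.get("subtype") == old:
                node.set("subtype", new)
                changed += 1
        if changed:
            tree.write(path, xml_declaration=True, encoding="UTF-8")
        return changed

    print(update_subtype("my_topology.virl"), "node(s) updated")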
VIRL Server Cluster
We're delighted to introduce VIRL running on Openstack clusters! Using Openstack's clustering capability means that you can now run simulations across multiple computers, with a single point of control. In this release, the system supports up to five computers operating within a cluster. The currently supported cluster configuration comprises one controller node (required) and up to four compute nodes. VIRL on Openstack clusters is available on the Packet.net hosting platform as well as being available for local installation on either VMware ESXi or on Bare-metal systems. No special licenses are required in order to operate a VIRL cluster. The maximum number of Cisco VMs that you can run will be subject to the following (a rough capacity sketch follows the list):
- VIRL License key (node count)
- Hardware resources that are available within the set of computers.
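In other words, the practical ceiling is whichever limit you hit first. A minimal sketch of that calculation follows; the per-VM memory figure is an assumption for illustration only.

    # Rough capacity sketch for a cluster: the node ceiling is the lower of the
    # licensed node count and what the pooled hardware can realistically host.
    # The per-VM memory figure is an assumed average, not official sizing data.
    LICENSED_NODES = 30                    # node count on your VIRL license key
    TOTAL_RAM_MB = 5 * 64 * 1024           # e.g. controller + 4 computes with 64 GB each
    ASSUMED_RAM_PER_VM_MB = 512            # varies widely by subtype

    capacity = min(LICENSED_NODES, TOTAL_RAM_MB // ASSUMED_RAM_PER_VM_MB)
    print(f"Practical ceiling: roughly {capacity} Cisco VMs")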
NOTE - Cluster installation and operation have only been tested on VMware ESXi and on Cisco UCS C-series systems. Installation issues MAY be encountered when using other types of hardware.
Clusters on Packet.net - please see the video at https://www.youtube.com/watch?v=PjIGuJzkUbY.
Note, this requires you to have established an account on Packet.net and to have a valid VIRL license key. No additional license keys are required but there will be an increased cost since you're running additional servers on the Packet.net platform.
Clusters - local installation - One computer (bare-metal or virtual machine) will be brought up as a 'controller' node, with up to four additional computers (bare-metal or virtual machines) being brought up and configured as 'compute' nodes. You will need to establish the required network connectivity between all of the computers in your cluster, either using virtual networking as offered by VMware ESXi or using physical networking equipment.
Specific OVA and ISO images must be installed on the compute nodes. Converting an existing 'single-node' VIRL system to operate as a cluster 'controller' is NOT recommended. You should install a fresh image of VIRL on your designated 'controller' node.
Full instructions on Cluster installations can be found at http://virl-dev-innovate.cisco.com/virl.cluster.php
Updated Virtual Machines and Container Images
IOSv 15.6(2)T - An updated IOSv virtual machine is now available and becomes the default IOSv instance.
NX-OSv 7.3.0.1 - An updated NX-OSv virtual machine is now available and becomes the default NX-OSv instance.
ASAv 9.5.2 - An updated ASAv virtual machine is now available and becomes the default ASAv instance.
How to Upgrade
Self-Service Download
Every registered VIRL user is now able to download the OVA and ISO images from https://virl.mediuscorp.com/my-account. The new 'Download VIRL' link on this page will take you through to a self-service selection page where you are able to select the image you would like.
Please note that the downloads are large. The use of a download manager application is strongly recommended.
In-Place Upgrade Instructions
NOTE - you must have communication to a Cisco salt-master and have a valid license key in order to perform the upgrade.
Existing VIRL 1.0.x users are able to upgrade to the latest release by logging into the User Workspace Management (UWM) interface as 'uwmadmin'. From the menu on the left-hand side of the page, select 'VIRL Server' / 'VIRL Software'. After ~60 seconds a list of available images will be presented. New packages will be shown as available for installation with a tick-box present in the 'Install Y/N' column. Select the package and press the 'Start installation' button.
New Virtual machine images are also available from the 'Cisco VM image upgrades' section of the VIRL Software package. Again, check the appropriate tick-box and press the 'Start installation' button.
Performing A New Installation
Please use the installation guides posted at http://virl-dev-innovate.cisco.com/ and select the instructions appropriate for your platform.
Upgrade VIRL Client (VM Maestro) required
You must update VM Maestro to version 1.2.6 Dev-393 or later. Older releases are not supported with VIRL 1.1.1.
Download the new VM Maestro client from http://<your VIRL server IP>/download. Once installed, update the available node types as follows:
- Launch VM Maestro
- Select 'File / Preferences / Node Subtypes'
- Press 'Fetch From Server'
- Press 'Apply'
VIRL Server Component Versions
This release contains the following component versions:
Openstack Kilo
VM Maestro 1.2.6 Build Dev-393 (NEW)
AutoNetkit 0.21.4/0.21.7 (NEW)
Live Network Collection Engine 0.9.5 (NEW)
VIRL_CORE 0.10.24.7 (NEW)
Cisco Platform VMs
IOSv - 15.6(2)T image (NEW)
IOSvL2 - 15.2.4055 DSGS image
IOSXRv - 6.0.0 image
IOS XRv 9000 - 6.0.0 image (NOT BUNDLED)
CSR1000v - 3.17 XE-based image
NX-OSv 7.3.0.1 (NEW)
ASAv 9.5.2 (NEW)
Ubuntu 14.04.2 Cloud-init
Linux Container images
- Ubuntu 14.04.2 LXC
- iPerf LXC
- Routem LXC
- Ostinato-drone LXC
Bare-Metal installation image
Salt Master Settings
Once you have installed VIRL, apply for a VIRL license key as per the installation instructions. Update your salt-master list as follows:
US
us-1.virl.info, us-2.virl.info, us-3.virl.info, us-4.virl.info
EU
eu-1.virl.info, eu-2.virl.info, eu-3.virl.info, eu-4.virl.info
You should enter at least two hosts, picking different numbers between 1 and 4. Do not enter the same number twice! You can list up to four salt-masters. There must be a ',' and a space between each salt-master entry.
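For example, a valid entry for the US region using two different hosts would be:

    us-1.virl.info, us-3.virl.info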
Caveats
- CSR1000v 16.2.1 - this image CANNOT be used with VIRL due to defect CSCuz09110.
There is no workaround.
- VIRLDEV-3525 - Scroll bar on Preferences > Node Subtypes dialog doesn't work on OS X 10.11
When using VM Maestro on Mac OS X 10.11.x, the scroll-bar that should be displayed inside the Node Subtype list panel may be missing.
Workaround: Open the Mac System Preferences pane. Under 'General', select 'Show scroll bars' and set it to 'Always'.
- VIRLDEV-3998 - UWM: Preview is not working after AutoNetkit webserver port change
In the VIRL server's System Configuration panel, if the AutoNetkit webserver port is changed from its default value (19401), then when you subsequently use the UWM interface and select 'My simulations' to start a simulation, the preview function will not work and will instead report "can't establish a connection to the server at x.x.x.x:19401".
Workaround: Use the AutoNetkit protocol visualisation function in VM Maestro to view your topology.
- VIRLDEV-4006 - Link parameters on compute nodes not operating correctly
When using VIRL in cluster mode, if link-parameters (latency, jitter, packet-loss) are applied to a link where the VM is operating on a Compute node (vs the Controller node), the link parameters are not applied.
Workaround: none.
- Bare-metal installation (ISO) - when following the installation instructions and selecting 'LVM' for partition management, the system will report that there is insufficient disk space and that you should increase the size of the /boot partition. THIS MESSAGE CAN BE IGNORED. Press the 'continue' button. The installation will proceed without issue.
- Following an upgrade from 1.0.11 to 1.0.26, the following message may be observed when trying to start up a simulation:
(ERROR) [Feb/04/2016 21:14:51] Failed to start simulation "LXC_demo-clsv9_": local variable 'message' referenced before assignment
Workaround: Reboot your VIRL server and try to start your simulation once more.
- IOSv 15.6(2)T - On boot-up the following message may be observed:
%SYS-3-CPUHOG: Task is running for (1997)msecs, more than (2000)msecs (0/0),process = TTY Background.-Traceback= 114ECF8z 130425z 15E20Ez 15DF30z 15DD3Dz 157D75z 158A2Bz 1589BFz 159B67z 153672z 3C9740Az 3C868CEz 3C89BEFz 5125F91z 491D86Cz 492E540z - Process "Crypto CA", CPU hog, PC 0x00157D2C
Workaround: This is cosmetic and can be ignored.
- IOSv 15.6(1)T / IOSvL2 15.2(4055) DSGS - CSCuv77089 - CVAC: day0 configuration only partially saved
When booting an IOSv or IOSvL2 instance within VIRL, it will insert the bootstrap configuration into running-config and report the following message:
Aug 10 15:06:08.555: %CVAC-4-CONFIG_DONE: Configuration generated from file flash3:/ios_config.txt was applied and saved to NVRAM. See 'show running-config' or 'show startup-config' for more details.
The running-config is fully applied. However, the startup configuration only contains partial content.
Workaround: issuing the command 'copy run start' after the device has fully booted will copy the running-configuration content to the startup-configuration as expected. Note: VIRL's configuration extraction function performs a 'copy run start' operation as part of its execution.
- VIRLDEV-3140 - Live Visualization - ping with 50% packet loss - timeout reported
Configure a link with 50% packet loss and use the 'ping from' / 'ping to' function. The ping 'fails', reporting the following:
ping 192.168.0.6 source 192.168.0.5 Timeout exceeded.
This issue impacts the ping function within the Live Visualisation system but does not impact the regular operation of pings from the VMs themselves.
Workaround: reduce the packet loss on the selected link.
- VIRLDEV-3119 - Rehost operation - changing the internalnet_port IP address from 172.16.10.250 results in a broken system
Changing the internalnet_port IP address from the default (172.16.10.250) value and then performing the 'vinstall rehost' operation results in a VIRL system which is not operational.
Workaround: None. Changing the internalnet_port IP address is NOT supported.
- VM Maestro - terminal preference for detached internal terminals - this function has been deprecated from VM Maestro 1.2.4 onwards.
Workaround: you can manually 'tear' the terminal pane from the main VM Maestro window. Use this in conjunction with the VM Maestro preference (Cisco terminal) "multiple tabs for one simulation".
Customer Defects Resolved
The following customer-found defects are resolved in this version of VIRL:
- VIRLDEV-3610 External user (Cyraga) - Cannot initiate a packet capture to a port from VM Maestro
- VIRLDEV-3657 Improve error message when tcp port is occupied
- VIRLDEV-3858 SALT - openstack.setup failing - Rendering SLS 'base:openstack.nova.keystone' failed: Jinja error: Unable to establish connection to http://127.0.0.1:35357/v2.0/tenants