
    Problems with upgrade from 1.5 to 1.6

    Bruno

      The upgrade seems to have mostly worked, but there are 3 services showing problems in the Health Status.

      What should I do about these problems?

      I've rebooted twice and taken VIRL out of maintenance mode.

       

       

      virl@virl:~$ cd /var/local/virl/logs ; grep -l 'vinstall upgrade' *.cmd | while read ff ; do tail ${ff/cmd/out} ; done

      Summary for local

      ------------

      Succeeded: 6 (changed=4)

      Failed:    0

      ------------

      Total states run:    6

      Total run time:  41.715 s

      Done. You will need to restart once all subsequent upgrades are completed.

      /etc/salt/grains (key None) is valid

       

       

      Summary for local

      ------------

      Succeeded: 4 (changed=2)

      Failed:    2

      ------------

      Total states run:    6

      Total run time:  41.721 s

      Done. You will need to restart once all subsequent upgrades are completed.

      /etc/salt/grains (key None) is valid

        • 1. Re: Problems with upgrade from 1.5 to 1.6
          Karlo Bobiles

          Hello Bruno,

           

          Looping in mirlos for further investigation. Please stand by.

           

          Thank you,

          Karlo Bobiles
          Cisco Learning Network

          • 2. Re: Problems with upgrade from 1.5 to 1.6
            Richard

            Hi,

            Please show me the output from sudo systemctl status rabbitmq-server.service


            -Richard

            • 3. Re: Problems with upgrade from 1.5 to 1.6
              Bruno

              Hi Richard,

               

              Here is the output. Also, the Overview page now shows a red 'x' by the release, where it originally showed a green checkmark after the upgrade. I've attached the screenshot.

               

              Thanks,

              Bruno

               

               

              virl@virl:~$ sudo systemctl status rabbitmq-server.service

              ● rabbitmq-server.service - RabbitMQ Messaging Server

                Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)

                Active: failed (Result: timeout) since Mon 2019-08-19 23:41:33 GMT; 4h 13min ago

                Process: 10200 ExecStartPost=/usr/lib/rabbitmq/bin/rabbitmq-server-wait (code=killed, signal=TERM)

                Process: 10196 ExecStart=/usr/sbin/rabbitmq-server (code=killed, signal=TERM)

              Main PID: 10196 (code=killed, signal=TERM)

               

               

              Aug 19 23:40:03 virl systemd[1]: Starting RabbitMQ Messaging Server...

              Aug 19 23:40:18 virl rabbitmq[10200]: Waiting for rabbit@virl ...

              Aug 19 23:40:18 virl rabbitmq[10200]: pid is 10365 ...

              Aug 19 23:41:33 virl systemd[1]: rabbitmq-server.service: Start-post operation timed out. Stopping.

              Aug 19 23:41:33 virl systemd[1]: Failed to start RabbitMQ Messaging Server.

              Aug 19 23:41:33 virl systemd[1]: rabbitmq-server.service: Unit entered failed state.

              Aug 19 23:41:33 virl systemd[1]: rabbitmq-server.service: Failed with result 'timeout'.

              • 4. Re: Problems with upgrade from 1.5 to 1.6
                Richard

                That red 'x' by the release really bothers me. First we need to fix that. Which salt master(s) do you use?

                 

                [Possible fix for the release version]

                • Go to /admin/salt/reset/
                • Click on the Reset button.
                • Check the release version again.
                • If it still displays the red "x" then show me the output from curl -u <uwm_user>:<password> http://<ip>/rest/overview/versions/
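
                 (For example, with the default uwmadmin UWM account that would look something like curl -u uwmadmin:<password> http://<your VIRL server IP>/rest/overview/versions/ - substitute your own password and IP.)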

                 

                -Richard

                • 5. Re: Problems with upgrade from 1.5 to 1.6
                  Bruno

                  Here are the salt servers I'm using: vsm-us-51.virl.info, vsm-us-52.virl.info, vsm-us-53.virl.info, vsm-us-54.virl.info

                  I successfully performed a salt reset, but the red "x" is still there. Here is the output you wanted.

                   

                  virl@virl:~$ curl -u uwmadmin:password http://10.0.1.9/rest/overview/versions/

                  [{"name": "Release", "avail_version": "1.5.145", "version": "1.6.65"}, {"name": "VIRL-CORE", "avail_version": "0.10.37.33", "version": "0.10.37.33"}, {"name": "AutoNetkit", "avail_version": "0.24.1", "version": "0.24.1"}, {"name": "AutoNetkit-Cisco", "avail_version": "0.23.12", "version": "0.23.12"}, {"name": "Topology Visualization Engine", "avail_version": "0.17.28", "version": "0.17.28"}, {"name": "Live Network Collection Engine", "avail_version": "0.12.11", "version": "0.12.11"}]

                  • 6. Re: Problems with upgrade from 1.5 to 1.6
                    jsicuranza

                    Same issue here. Tried several times, followed all the documents and even the steps for when the upgrade fails. Still dubious as to what I am left with.

                     

                    Long-time VIRL user since its beginning in 2014. It was a miracle that the 1.5.145 upgrade worked.

                     

                    However, true to form for VIRL, to this day this not-ready-for-prime-time junk (as I have mentioned in many past posts) cannot even provide its customers with a simple single-click upgrade.

                     

                    FACT FROM 2014-2019: Network engineers spend more time re-engineering VIRL just to get this **** (yes, kluge ****) to work than USING it as an actual network engineering tool. We have high hopes for V2, which makes sense given it is a complete re-write.

                     

                    Cisco should heavily discount, or provide for free, the first two years of V2 for all of us who believed in this product and have held annual subscriptions since 2014.

                    • 7. Re: Problems with upgrade from 1.5 to 1.6
                      Richard

                      OK, the salt master should be fixed now. Try resetting the salt ID again and re-check the release version on the overview page. If it's green, then follow these steps:

                      • sudo vinstall salt
                      • sudo salt-call state.sls virl.refresh
                      • sudo salt-call state.sls openstack.rabbitmq
                      • sudo reboot
                      • Check the Health Status page (hopefully everything will be green)
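
                      If anything is still red after the reboot, you can also run the health check from the console with:

                      sudo virl_health_status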

                       

                      -Richard

                      • 8. Re: Problems with upgrade from 1.5 to 1.6
                        Bruno

                        Hi Richard,

                         

                        I now have a green checkmark under release and the version is 1.6.65 - that's a win.

                         

                        However, the other 3 services are still red when I do the health check. These are: RabbitMQ, OpenStack services response & STD configuration.

                         

                        Thanks,

                        Bruno

                        • 9. Re: Problems with upgrade from 1.5 to 1.6
                          Richard

                          So those commands didn't solve your problem, am I right? Can you please upload an attachment with these files?

                          • /var/log/rabbitmq/
                          • /var/log/keystone/
                          • /var/log/nova/
                          • /etc/rabbitmq/rabbitmq-env.conf
                          • /etc/rabbitmq/rabbitmq.config
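
                          One way to bundle everything into a single attachment (assuming those paths exist on your install) would be something like:

                          sudo tar czf virl-support-logs.tgz /var/log/rabbitmq /var/log/keystone /var/log/nova /etc/rabbitmq/rabbitmq-env.conf /etc/rabbitmq/rabbitmq.config

                          and then attach virl-support-logs.tgz here.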


                          -Richard

                          • 10. Re: Problems with upgrade from 1.5 to 1.6
                            Bruno

                            Correct, the commands didn't solve the problem.

                             

                            Here are the files you requested.

                            • 11. Re: Problems with upgrade from 1.5 to 1.6
                              Richard

                              Thanks for the logs. I also need the RabbitMQ version and some additional info, so please show me the output from these commands:

                              • apt-cache policy rabbitmq-server
                              • ps aux | grep epmd
                              • virl_health_status


                              -Richard

                              • 12. Re: Problems with upgrade from 1.5 to 1.6
                                Bruno

                                Here you go...

                                 

                                virl@virl:~$ apt-cache policy rabbitmq-server

                                rabbitmq-server:

                                  Installed: 3.5.7-1ubuntu0.16.04.2

                                  Candidate: 3.5.7-1ubuntu0.16.04.2

                                  Version table:

                                *** 3.5.7-1ubuntu0.16.04.2 500

                                        500 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages

                                        500 http://us.archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages

                                        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages

                                        500 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages

                                        100 /var/lib/dpkg/status

                                    3.5.7-1 500

                                        500 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 Packages

                                        500 http://us.archive.ubuntu.com/ubuntu xenial/main i386 Packages

                                 

                                virl@virl:~$ ps aux | grep epmd

                                rabbitmq  15675  0.0  0.0  26304  228 ?        S    Aug21  0:00 /usr/lib/erlang/erts-7.3/bin/epmd -daemon

                                virl      96450  0.0  0.0  14224  940 pts/0    S+  00:43  0:00 grep --color=auto epmd

                                • 13. Re: Problems with upgrade from 1.5 to 1.6
                                  Bruno

                                  virl@virl:~$ sudo virl_health_status

                                  Disk usage:

                                  Filesystem                 Size  Used Avail Use% Mounted on

                                  udev                       4.9G     0  4.9G   0% /dev

                                  tmpfs                      999M   47M  952M   5% /run

                                  /dev/mapper/virl--vg-root   60G   19G   42G  31% /

                                  tmpfs                      4.9G   96K  4.9G   1% /dev/shm

                                  tmpfs                      5.0M     0  5.0M   0% /run/lock

                                  tmpfs                      4.9G     0  4.9G   0% /sys/fs/cgroup

                                  /dev/sda1                  961M  159M  754M  18% /boot

                                  cgmfs                      100K     0  100K   0% /run/cgmanager/fs

                                  tmpfs                      999M     0  999M   0% /run/user/119

                                  tmpfs                      999M     0  999M   0% /run/user/1000

                                   

                                   

                                   

                                   

                                  CPU info:

                                  6 Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz cores

                                  Load: 2.3%, 2.3%, 2.3% for the past 1, 5, and 15 minutes

                                  Overcommitted to 18 cores (multiplier 3)

                                   

                                   

                                  RAM info:

                                  ERROR    2019-08-22 00:46:23,932 virl.openstack.client Failed to resolve versions for network service at http://172.16.10.250:9696: Unable to connect to http://172.16.10.250:9696: HTTPConnectionPool(host='172.16.10.250', port=9696): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1f3e38b490>: Failed to establish a new connection: [Errno 111] Connection refused',))

                                  ERROR    2019-08-22 00:46:23,933 virl.openstack.client Failed to resolve versions for network service at http://172.16.10.250:9696: Unable to connect to http://172.16.10.250:9696: HTTPConnectionPool(host='172.16.10.250', port=9696): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1f3e38be90>: Failed to establish a new connection: [Errno 111] Connection refused',))

                                  ERROR    2019-08-22 00:46:23,934 virl.openstack.client Failed to resolve versions for network service at http://172.16.10.250:9696: Unable to connect to http://172.16.10.250:9696: HTTPConnectionPool(host='172.16.10.250', port=9696): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1f3e38bf10>: Failed to establish a new connection: [Errno 111] Connection refused',))

                                  WARNING  2019-08-22 00:46:23,935 virl.openstack.client Service endpoints are incomplete

                                  INFO     2019-08-22 00:46:23,940 virl.openstack.client node_info GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/servers/detail"

                                  INFO     2019-08-22 00:46:23,996 virl.openstack.client node_info response 200 to GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/servers/detail"

                                  INFO     2019-08-22 00:46:23,998 virl.openstack.client flavor_info GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/flavors/detail"

                                  INFO     2019-08-22 00:46:24,038 virl.openstack.client flavor_info response 200 to GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/flavors/detail"

                                  Total RAM capacity available on host: 9GB

                                  Free RAM available on host: 5GB

                                  Total overcommitted RAM capacity available on host: 19GB (multiplier 2)

                                  RAM capacity required by currently running nodes: 0GB

                                   

                                   

                                  ERROR    2019-08-22 00:46:24,058 virl.common.execute Subprocess "['ps', '-C', 'ntpd', '-o', 'command=']" failed with exit code 1

                                   

                                   

                                  NTP servers:

                                  pool.ntp.org iburst

                                  us.pool.ntp.org iburst

                                   

                                   

                                   

                                   

                                   

                                   

                                  Interface addresses:

                                  1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever

                                  2: eth0    inet 192.168.1.71/24 brd 192.168.1.255 scope global eth0\       valid_lft forever preferred_lft forever

                                  55: br1    inet 172.16.1.250/24 brd 172.16.1.255 scope global br1\       valid_lft forever preferred_lft forever

                                  56: br3    inet 172.16.3.250/24 brd 172.16.3.255 scope global br3\       valid_lft forever preferred_lft forever

                                  57: br2    inet 172.16.2.250/24 brd 172.16.2.255 scope global br2\       valid_lft forever preferred_lft forever

                                  58: br4    inet 172.16.10.250/24 brd 172.16.10.255 scope global br4\       valid_lft forever preferred_lft forever

                                   

                                   

                                   

                                   

                                  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000

                                      link/ether 00:0c:29:60:d6:1c brd ff:ff:ff:ff:ff:ff

                                  7: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether c2:aa:39:da:49:11 brd ff:ff:ff:ff:ff:ff

                                  8: dummy1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UNKNOWN mode DEFAULT group default qlen 1000

                                      link/ether 2e:75:d1:83:f2:c7 brd ff:ff:ff:ff:ff:ff

                                  9: dummy2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master br2 state UNKNOWN mode DEFAULT group default qlen 1000

                                      link/ether 72:07:de:9d:db:83 brd ff:ff:ff:ff:ff:ff

                                  10: dummy3: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master br3 state UNKNOWN mode DEFAULT group default qlen 1000

                                      link/ether 8e:26:21:83:bf:53 brd ff:ff:ff:ff:ff:ff

                                  11: dummy4: <BROADCAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc noqueue master br4 state UNKNOWN mode DEFAULT group default qlen 1000

                                      link/ether f6:38:b1:66:e5:53 brd ff:ff:ff:ff:ff:ff

                                  12: dummy5: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 02:0f:6d:9f:49:4c brd ff:ff:ff:ff:ff:ff

                                  13: dummy6: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether a2:14:c9:4d:5e:1f brd ff:ff:ff:ff:ff:ff

                                  14: dummy7: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 2e:1a:52:a9:34:ae brd ff:ff:ff:ff:ff:ff

                                  15: dummy8: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 3e:14:de:a4:a0:87 brd ff:ff:ff:ff:ff:ff

                                  16: dummy9: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 92:1e:4b:6a:87:8b brd ff:ff:ff:ff:ff:ff

                                  17: dummy10: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 36:0c:0a:5f:28:6b brd ff:ff:ff:ff:ff:ff

                                  18: dummy11: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 52:a1:b6:40:5a:ef brd ff:ff:ff:ff:ff:ff

                                  19: dummy12: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 5e:d2:a2:55:18:e4 brd ff:ff:ff:ff:ff:ff

                                  20: dummy13: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 06:bc:2b:fb:e7:ac brd ff:ff:ff:ff:ff:ff

                                  21: dummy14: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether b6:af:a2:e0:70:9d brd ff:ff:ff:ff:ff:ff

                                  22: dummy15: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

                                      link/ether 1e:d1:04:ec:bb:5a brd ff:ff:ff:ff:ff:ff

                                  55: br1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000

                                      link/ether 2e:75:d1:83:f2:c7 brd ff:ff:ff:ff:ff:ff

                                  56: br3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000

                                      link/ether 8e:26:21:83:bf:53 brd ff:ff:ff:ff:ff:ff

                                  57: br2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000

                                      link/ether 72:07:de:9d:db:83 brd ff:ff:ff:ff:ff:ff

                                  58: br4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000

                                      link/ether f6:38:b1:66:e5:53 brd ff:ff:ff:ff:ff:ff

                                   

                                   

                                  Proxy:

                                  Proxy:

                                  https://github.com: responded with code: 200

                                  https://pypi.org: responded with code: 200

                                  https://registry-1.docker.io: responded with code: 200

                                  Proxy is OK.

                                   

                                   

                                  Hypervisor info:

                                  QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.41), Copyright (c) 2003-2008 Fabrice Bellard

                                   

                                   

                                  INFO: /dev/kvm exists

                                  KVM acceleration can be used

                                   

                                   

                                   

                                   

                                  Bridge patch applied:

                                  Bridge module patch:

                                  For kernel 4.4.0-116-generic: Applied BR_GROUPFWD_RESTRICTED_v2

                                  For kernel 4.4.0-159-generic (running kernel): Applied BR_GROUPFWD_RESTRICTED_v2

                                  For kernel 4.4.0-87-generic: NOT applied!

                                   

                                   

                                  MySQL is reachable on unix socket and IP address

                                   

                                   

                                  Salt ID: 2hJqQoDL.virl.info

                                  Salt Master: [u'vsm-us-51.virl.info', u'vsm-us-52.virl.info', u'vsm-us-53.virl.info', u'vsm-us-54.virl.info']

                                  Salt Ping: Success

                                   

                                   

                                  Host vsm-us-51.virl.info:

                                  DNS: Success - 173.38.221.79

                                  Ping: Success

                                  Port 4505: Success

                                  Port 4506: Success

                                  Connect: Success

                                   

                                   

                                  Host vsm-us-52.virl.info:

                                  DNS: Success - 173.38.221.79

                                  Ping: Success

                                  Port 4505: Success

                                  Port 4506: Success

                                  Connect: Success

                                   

                                   

                                  Host vsm-us-53.virl.info:

                                  DNS: Success - 173.38.221.80

                                  Ping: Success

                                  Port 4505: Success

                                  Port 4506: Success

                                  Connect: Success

                                   

                                   

                                  Host vsm-us-54.virl.info:

                                  DNS: Success - 173.38.221.80

                                  Ping: Success

                                  Port 4505: Success

                                  Port 4506: Success

                                  Connect: Success

                                   

                                   

                                   

                                   

                                  DEBUG    2019-08-22 00:47:10,932 salt.config Reading configuration from /etc/salt/minion

                                  DEBUG    2019-08-22 00:47:10,964 salt.config Including configuration from '/etc/salt/minion.d/extra.conf'

                                  DEBUG    2019-08-22 00:47:10,964 salt.config Reading configuration from /etc/salt/minion.d/extra.conf

                                  DEBUG    2019-08-22 00:47:10,967 salt.config Including configuration from '/etc/salt/minion.d/openstack.conf'

                                  DEBUG    2019-08-22 00:47:10,967 salt.config Reading configuration from /etc/salt/minion.d/openstack.conf

                                  INFO     2019-08-22 00:47:10,971 virl.common.packages Creating new salt caller. Masterless: True

                                  DEBUG    2019-08-22 00:47:11,009 salt.config Reading configuration from /etc/salt/minion

                                  DEBUG    2019-08-22 00:47:11,049 salt.config Including configuration from '/etc/salt/minion.d/extra.conf'

                                  DEBUG    2019-08-22 00:47:11,049 salt.config Reading configuration from /etc/salt/minion.d/extra.conf

                                  DEBUG    2019-08-22 00:47:11,052 salt.config Including configuration from '/etc/salt/minion.d/openstack.conf'

                                  DEBUG    2019-08-22 00:47:11,052 salt.config Reading configuration from /etc/salt/minion.d/openstack.conf

                                  DEBUG    2019-08-22 00:47:11,693 salt.pillar Determining pillar cache

                                  DEBUG    2019-08-22 00:47:11,709 salt.utils.lazy LazyLoaded jinja.render

                                  DEBUG    2019-08-22 00:47:11,711 salt.utils.lazy LazyLoaded yaml.render

                                  DEBUG    2019-08-22 00:47:11,739 salt.utils.lazy LazyLoaded jinja.render

                                  DEBUG    2019-08-22 00:47:11,740 salt.utils.lazy LazyLoaded yaml.render

                                  Salt grains is synchronized

                                   

                                   

                                  RabbitMQ status:

                                  ERROR    2019-08-22 00:47:12,118 virl.common.execute Subprocess "['rabbitmqctl', '-q', 'status']" failed with exit code 2

                                  Error: unable to connect to node rabbit@controller: nodedown

                                   

                                   

                                  DIAGNOSTICS

                                  ===========

                                   

                                   

                                  attempted to contact: [rabbit@controller]

                                   

                                   

                                  rabbit@controller:

                                    * connected to epmd (port 4369) on controller

                                    * epmd reports: node 'rabbit' not running at all

                                                    other nodes on controller: ['rabbitmq-cli-98160']

                                    * suggestion: start the node

                                   

                                   

                                  current node details:

                                  - node name: 'rabbitmq-cli-98160@virl'

                                  - home dir: /var/lib/rabbitmq

                                  - cookie hash: AAV5WI61mgUmciZeAhf/7w==

                                   

                                   

                                   

                                   

                                   

                                   

                                   

                                   

                                  RabbitMQ configured for Glance and Nova is probably down

                                  RabbitMQ configured for Neutron is probably down

                                   

                                   

                                  OpenStack system services:

                                  virl (controller): all required services are running.

                                   

                                   

                                  INFO     2019-08-22 00:47:13,731 virl.openstack.client image_info GET on URL "http://172.16.10.250:9292/v1/images/detail"

                                  INFO     2019-08-22 00:47:13,958 virl.openstack.client image_info response 200 to GET on URL "http://172.16.10.250:9292/v1/images/detail"

                                  INFO     2019-08-22 00:47:13,959 virl.openstack.client node_info GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/servers/detail"

                                  INFO     2019-08-22 00:47:13,993 virl.openstack.client node_info response 200 to GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/servers/detail"

                                  INFO     2019-08-22 00:47:13,993 virl.openstack.client image_info GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/images/detail"

                                  INFO     2019-08-22 00:47:14,054 virl.openstack.client image_info response 200 to GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/images/detail"

                                  INFO     2019-08-22 00:47:14,054 virl.openstack.client network_list GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/os-networks"

                                  INFO     2019-08-22 00:47:14,064 virl.openstack.client network_list invalid response 500 to GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/os-networks"

                                  ERROR    2019-08-22 00:47:14,066 virl.openstack.client Compute2_0.network_list failed args=() kargs={}

                                  Traceback (most recent call last):

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/bin/virl_health_status", line 269, in <module>

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/env.py", line 319, in run

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/bin/virl_health_status", line 231, in run_oldcheck

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/status.py", line 148, in print_old_check

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/__init__.py", line 49, in get

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/openstack.py", line 583, in status

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/openstack.py", line 555, in data_mf

                                    File "</usr/local/lib/python2.7/dist-packages/decorator.pyc:decorator-gen-222>", line 2, in network_list

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/openstack/client.py", line 96, in wrapper

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/openstack/client.py", line 94, in wrapper

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/openstack/client.py", line 5621, in network_list

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/openstack/client.py", line 231, in openstack_result

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/openstack/client.py", line 331, in wrong_http_response

                                  ClientException: OpenStack call to Compute2_0.network_list received invalid response status 500 (Internal Server Error) with error "{"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n<class 'keystoneauth1.exceptions.connection.ConnectFailure'>", "code": 500}} ..."

                                   

                                   

                                  ERROR    2019-08-22 00:47:14,066 virl.openstack.client ===== Request ==========

                                  ===== Headers ===========

                                  Accept: */*

                                  Accept-Encoding: gzip, deflate

                                  Connection: keep-alive

                                  Content-Type: application/json

                                  User-Agent: python-requests/2.18.1

                                  X-Auth-Token: f6b2ca7ba6534eaba6e242f6160cd376

                                  ===== Request body =====

                                  ===== End request ======

                                  ERROR    2019-08-22 00:47:14,067 virl.openstack.client ===== Response ==========

                                  ===== Headers ===========

                                  Connection: keep-alive

                                  Content-Length: 224

                                  Content-Type: application/json; charset=UTF-8

                                  Date: Thu, 22 Aug 2019 00:47:14 GMT

                                  X-Compute-Request-Id: req-3a70c5d4-ad43-4e03-aa8f-2330397a9c85

                                  ===== Response body =====

                                  --- Parsed JSON data ----

                                  {u'computeFault': {u'code': 500,

                                                     u'message': u"Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n<class 'keystoneauth1.exceptions.connection.ConnectFailure'>"}}

                                  ===== End response ======

                                  ERROR    2019-08-22 00:47:14,067 virl.health.openstack Failed network_list in compute

                                  ERROR    2019-08-22 00:47:14,067 virl.openstack.client Cannot find requested service "network" in service catalog on OpenStack server:

                                  {u'identity': {'publicURL': {u'3': u'http://172.16.10.250:5000/v3'}, 'adminURL': {u'3': u'http://172.16.10.250:35357/v3'}}, u'volumev2': {'publicURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}, 'adminURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}}, u'image': {'publicURL': {u'1.1': u'http://172.16.10.250:9292/v1/', u'1.0': u'http://172.16.10.250:9292/v1/', u'2.2': u'http://172.16.10.250:9292/v2/', u'2.3': u'http://172.16.10.250:9292/v2/', u'2.0': u'http://172.16.10.250:9292/v2/', u'2.1': u'http://172.16.10.250:9292/v2/'}, 'adminURL': {u'1.1': u'http://172.16.10.250:9292/v1/', u'1.0': u'http://172.16.10.250:9292/v1/', u'2.2': u'http://172.16.10.250:9292/v2/', u'2.3': u'http://172.16.10.250:9292/v2/', u'2.0': u'http://172.16.10.250:9292/v2/', u'2.1': u'http://172.16.10.250:9292/v2/'}}, u'volume': {'publicURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}, 'adminURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}}, u'network': {}, u'compute': {'publicURL': {u'2': u'http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73'}, 'adminURL': {u'2': u'http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73'}}}

                                  ERROR    2019-08-22 00:47:14,068 virl.health.openstack Failed network_info in network

                                  OpenStack network service for STD is not available

                                  OpenStack identity service for STD is available

                                  OpenStack image service for STD is available

                                  OpenStack compute service for STD is not available

                                   

                                   

                                  OpenStack compute services:

                                  INFO     2019-08-22 00:47:14,068 virl.openstack.client service_info GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/os-services"

                                  INFO     2019-08-22 00:47:14,084 virl.openstack.client service_info response 200 to GET on URL "http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73/os-services"

                                  [

                                    {

                                      "binary": "nova-cert",

                                      "state": "down",

                                      "disabled_reason": null,

                                      "updated_at": "2019-08-15T12:58:58.000000",

                                      "status": "enabled",

                                      "host": "virl",

                                      "id": 9,

                                      "zone": "internal"

                                    },

                                    {

                                      "binary": "nova-consoleauth",

                                      "state": "down",

                                      "disabled_reason": null,

                                      "updated_at": "2019-08-15T12:58:58.000000",

                                      "status": "enabled",

                                      "host": "virl",

                                      "id": 10,

                                      "zone": "internal"

                                    },

                                    {

                                      "binary": "nova-scheduler",

                                      "state": "down",

                                      "disabled_reason": null,

                                      "updated_at": "2019-08-21T17:18:03.000000",

                                      "status": "enabled",

                                      "host": "virl",

                                      "id": 11,

                                      "zone": "internal"

                                    },

                                    {

                                      "binary": "nova-conductor",

                                      "state": "down",

                                      "disabled_reason": null,

                                      "updated_at": "2019-08-15T12:58:48.000000",

                                      "status": "enabled",

                                      "host": "virl",

                                      "id": 12,

                                      "zone": "internal"

                                    },

                                    {

                                      "binary": "nova-compute",

                                      "state": "down",

                                      "disabled_reason": null,

                                      "updated_at": "2019-08-15T12:58:38.000000",

                                      "status": "enabled",

                                      "host": "virl",

                                      "id": 13,

                                      "zone": "nova"

                                    }

                                  ]

                                   

                                   

                                  Service "cert" is down.

                                  Service "consoleauth" is down.

                                  Service "scheduler" is down.

                                  Service "conductor" is down.

                                  Service "compute" is down.

                                   

                                   

                                   

                                   

                                  OpenStack network agents:

                                  ERROR    2019-08-22 00:47:14,085 virl.openstack.client Cannot find requested service "network" in service catalog on OpenStack server:

                                  {u'identity': {'publicURL': {u'3': u'http://172.16.10.250:5000/v3'}, 'adminURL': {u'3': u'http://172.16.10.250:35357/v3'}}, u'volumev2': {'publicURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}, 'adminURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}}, u'image': {'publicURL': {u'1.1': u'http://172.16.10.250:9292/v1/', u'1.0': u'http://172.16.10.250:9292/v1/', u'2.2': u'http://172.16.10.250:9292/v2/', u'2.3': u'http://172.16.10.250:9292/v2/', u'2.0': u'http://172.16.10.250:9292/v2/', u'2.1': u'http://172.16.10.250:9292/v2/'}, 'adminURL': {u'1.1': u'http://172.16.10.250:9292/v1/', u'1.0': u'http://172.16.10.250:9292/v1/', u'2.2': u'http://172.16.10.250:9292/v2/', u'2.3': u'http://172.16.10.250:9292/v2/', u'2.0': u'http://172.16.10.250:9292/v2/', u'2.1': u'http://172.16.10.250:9292/v2/'}}, u'volume': {'publicURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}, 'adminURL': {u'2': u'http://172.16.10.250:8776/v2/0aa03e970e264f8cbd4eaec03542ea73'}}, u'network': {}, u'compute': {'publicURL': {u'2': u'http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73'}, 'adminURL': {u'2': u'http://172.16.10.250:8774/v2/0aa03e970e264f8cbd4eaec03542ea73'}}}

                                  ERROR    2019-08-22 00:47:14,085 virl.health.openstack Cannot find "network" service in service catalog on OpenStack server

                                  Failed to get OpenStack network agents list.

                                  "Cannot find \"network\" service in service catalog on OpenStack server"

                                   

                                   

                                  OpenVPN services:

                                  OpenVPN server:

                                  Disabled

                                  Stopped

                                  Status:

                                  No status information present

                                   

                                   

                                  AutoNetkit services:

                                  virl-vis-webserver listening on port 19402

                                  virl-vis-mux running

                                  ank-cisco-webserver listening on port 19401

                                  virl-vis-processor running

                                   

                                   

                                  VIRL environment priority (lowest->highest): global conf, SHELL env, CLI args

                                  Global config can be defined at "/etc/virl/virl-core.ini"

                                  To set as SHELL ENV var: export NAME=value

                                  To unset as SHELL ENV var: unset NAME

                                  =========================================================

                                  Your global config:

                                  VIRL_STD_PROCESS_COUNT = 20

                                  VIRL_STD_USER_NAME = uwmadmin

                                  VIRL_STD_DIR = /var/local/virl

                                  VIRL_DEBUG = False

                                  VIRL_STD_PORT = 19399

                                  VIRL_STD_HOST = ::

                                  =========================================================

                                  Your SHELL environment:

                                  =========================================================

                                  Used values:

                                  VIRL_STD_PROCESS_COUNT = 20

                                  VIRL_STD_USER_NAME = uwmadmin

                                  VIRL_STD_DIR = /var/local/virl

                                  VIRL_DEBUG = False

                                  VIRL_STD_PORT = 19399

                                  VIRL_STD_HOST = ::

                                  =========================================================

                                   

                                   

                                  STD/UWM is initialized with the following users: uwmadmin,guest

                                  STD server on url http://localhost:19399 is listening, server version 0.10.37.33

                                   

                                   

                                  UWM server on url http://localhost:19400 is listening, server version 0.10.37.33

                                   

                                   

                                  Webmux server on url http://localhost:19403 is listening

                                  Redis server on localhost:6379 is listening

                                  Tap collector is running

                                   

                                   

                                  STD server version:

                                  ERROR    2019-08-22 00:47:17,531 virl.health Failed to pack status of a health check

                                  Traceback (most recent call last):

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/__init__.py", line 56, in get

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/std.py", line 246, in status

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/std.py", line 238, in data_mf

                                    File "</usr/local/lib/python2.7/dist-packages/decorator.pyc:decorator-gen-296>", line 2, in simengine_version

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 51, in wrapper

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/std/client.py", line 292, in simengine_version

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 212, in _get_result

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 269, in _wrong_http_response

                                  ClientException: STD simengine-version request received invalid response: 503 - OpenStack admin user or services are not available

                                  STD simengine-version request received invalid response: 503 - OpenStack admin user or services are not available

                                   

                                   

                                  STD server licensing:

                                  ERROR    2019-08-22 00:47:17,712 virl.health Failed to pack status of a health check

                                  Traceback (most recent call last):

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/__init__.py", line 56, in get

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/std.py", line 274, in status

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/std.py", line 266, in data_mf

                                    File "</usr/local/lib/python2.7/dist-packages/decorator.pyc:decorator-gen-292>", line 2, in simengine_licensing

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 51, in wrapper

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/std/client.py", line 254, in simengine_licensing

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 212, in _get_result

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 269, in _wrong_http_response

                                  ClientException: STD simengine-licensing request received invalid response: 503 - OpenStack admin user or services are not available

                                  STD simengine-licensing request received invalid response: 503 - OpenStack admin user or services are not available

                                   

                                   

                                  STD server autonetkit status:

                                  ERROR    2019-08-22 00:47:17,892 virl.health Failed to pack status of a health check

                                  Traceback (most recent call last):

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/__init__.py", line 56, in get

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/std.py", line 174, in status

                                    File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 91, in __get__

                                      value = self.func(obj)

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/health/std.py", line 166, in data_mf

                                    File "</usr/local/lib/python2.7/dist-packages/decorator.pyc:decorator-gen-384>", line 2, in ank_version

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 51, in wrapper

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/std/client.py", line 1185, in ank_version

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 212, in _get_result

                                    File "/var/jenkins/workspace/VIRL_CORE_build/test-virl-repo/virl/common/client.py", line 269, in _wrong_http_response

                                  ClientException: STD ank-version request received invalid response: 503 - OpenStack admin user or services are not available

                                  STD ank-version request received invalid response: 503 - OpenStack admin user or services are not available

                                  • 14. Re: Problems with upgrade from 1.5 to 1.6
                                    Tomas

                                    Hi Bruno,

                                     

                                    There is a Protocol 'inet_tcp': register/listen error: econnrefused error in the RabbitMQ startup log. The rabbitmq service cannot connect to epmd on port 4369.

                                    Please try to stop the rabbitmq service and check whether the port is occupied (execute these commands with sudo):


                                    systemctl stop rabbitmq-server

                                    ps aux | grep epmd

                                    netstat -tlnup | grep 4369

                                     

                                    If epmd is still running even though the rabbitmq service is stopped, then we have found the problem. Kill all epmd processes and start rabbitmq-server again:

                                     

                                    killall epmd   # or you can try: epmd -kill

                                    systemctl start rabbitmq-server
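
                                    Once it is back up, you can re-check with the same commands used earlier in this thread (again with sudo):

                                    systemctl status rabbitmq-server

                                    rabbitmqctl -q status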


                                    Please let us know if that helped.

                                     

                                    Regards,

                                    Tomas
