
    The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

    Andy

      After a VIRL upgrade I now get: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

       

      Looking at apache2 logs I can see:

      [Tue Oct 01 18:42:07.486720 2019] [proxy:error] [pid 10699:tid 139825202063104] AH00959: ap_proxy_connect_backend disabling worker for (localhost) for 0s

[Tue Oct 01 18:42:07.486729 2019] [proxy_http:error] [pid 10699:tid 139825202063104] [client 10.167.29.188:49892] AH01114: HTTP: failed to make connection to backend: localhost

       

I can SSH into the instance, but there is no HTTPS access.
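
      Since AH00959/AH01114 mean Apache's reverse proxy cannot reach its backend on localhost, one quick sanity check (assuming standard Linux tooling is present; I don't know offhand which port the VIRL web services bind to, so I'm just looking at everything that's listening):

      # List listening TCP sockets and the processes that own them
      sudo ss -tlnp

      # Check Apache itself and list any failed systemd units
      sudo systemctl status apache2
      sudo systemctl --failed

      If the backend service isn't in that list, the proxy errors above would follow directly.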

       

      I've tried running: sudo vinstall upgrade

       

      The errors are:

      === State common.arp.init ===

      Succeeded 2; failed 1

      Duration: 3.28

      ERROR: Some states failed in call "state.sls common.arp.init":

      "file_|-enlarge_arp_cache_|-/usr/local/bin/enlarge_arp_cache_|-managed" (User virl-services is not available Group virl-services is not available)

      Upgrade failed on common.arp.init

       

      === State common.arp.init ===

      Succeeded 2; failed 1

      Duration: 3.03

      ERROR: Some states failed in call "state.sls common.arp.init":

      "file_|-enlarge_arp_cache_|-/usr/local/bin/enlarge_arp_cache_|-managed" (User virl-services is not available Group virl-services is not available)

      Upgrade failed on common.arp.init

      Summary for local

      ------------

      Succeeded: 6 (changed=4)

      Failed:    0

      ------------

      Total states run:     6

      Total run time:  44.424 s

      Done. You will need to restart once all subsequent upgrades are completed.

      /etc/salt/grains (key None) is valid
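
      The failing state says both the virl-services user and group are missing. A minimal check, plus a hedged way to recreate them by hand (the flags below are standard groupadd/useradd usage; whether vinstall expects a particular UID/GID, home directory, or shell is an assumption I can't confirm):

      # Confirm the account and group really are absent
      getent passwd virl-services
      getent group virl-services

      # Recreate them as system accounts (the nologin shell is a guess at
      # what a service account here should use, not confirmed VIRL behaviour)
      sudo groupadd --system virl-services
      sudo useradd --system -g virl-services -s /usr/sbin/nologin virl-services

      After that, re-running sudo vinstall upgrade should show whether common.arp.init still fails.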

       

This is a long-running instance on ESXi.

       

      Can anybody advise?