Logs and troubleshooting (Release THREE)
Known errors are captured in our Technical FAQ. If the error you experienced is not there, you can troubleshoot the different components by following the instructions in this section.
Logs
UI logs
The server-side UI logs can be obtained on the SO-ub container at:
/usr/rift/usr/share/rw.ui/skyquake/err.log
/usr/rift/usr/share/rw.ui/skyquake/out.log
Client side UI logs can be obtained in the Developer Console in a browser.
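If you just need to confirm that the UI server is reachable before digging into browser logs, a quick probe from the OSM host can help. This is a minimal sketch, assuming the default Release THREE UI port 8443 and the SO-ub container IP taken from "lxc list" (adjust both to your deployment):
# Hypothetical IP; replace with the address of your SO-ub container
curl -k -I https://10.143.142.43:8443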
SO logs
SO logs can be obtained on the SO-ub container at:
/var/log/syslog
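Since /var/log/syslog aggregates messages from several processes, it can help to follow it live or filter for SO-related entries. A minimal sketch, run from the OSM host; the "launchpad" filter is an assumption, so adjust it to the tags you actually see in your syslog:
lxc exec SO-ub -- tail -f /var/log/syslog
# Or keep only the most recent launchpad-related entries:
lxc exec SO-ub -- bash -c "grep -i launchpad /var/log/syslog | tail -n 50"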
RO logs
RO logs are written to /var/log/osm/openmano.log on the RO container. You can pull the file from the container to the OSM host with:
lxc file pull RO/var/log/osm/openmano.log .
The log file location and the log level (debug by default) can be set in /etc/osm/openmanod.cfg on the RO container.
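For illustration, the logging-related entries in openmanod.cfg typically look like the sketch below; the exact key names may differ between RO versions, so check the comments in your own copy of the file:
# /etc/osm/openmanod.cfg (indicative excerpt, not a complete file)
log_file:  '/var/log/osm/openmano.log'
log_level: 'DEBUG'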
VCA logs
General Juju and LXC-related logs can be obtained on the Juju controller's container, which runs inside the VCA container:
$ lxc exec VCA bash
$ lxc list
+-----------------+---------+----------------------+------+------------+-----------+
|      NAME       |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------+---------+----------------------+------+------------+-----------+
| juju-ed3163-288 | RUNNING | 10.44.127.188 (eth0) |      | PERSISTENT | 0         |
+-----------------+---------+----------------------+------+------------+-----------+
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         |
+-----------------+---------+----------------------+------+------------+-----------+
$ lxc exec juju-f050fc-0 bash

# General Juju logs
/var/log/juju/logsink.log

# Juju controller logs
/var/log/juju/machine-0.log
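To follow these logs without opening two nested shells, the lxc exec calls can be chained from the OSM host. A minimal sketch, assuming the controller instance is named juju-f050fc-0 as in the listing above (substitute the name from your own lxc list output):
lxc exec VCA -- lxc exec juju-f050fc-0 -- tail -n 100 /var/log/juju/machine-0.log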
Troubleshooting
Troubleshooting UI
UI status
- The status of the UI process can be checked by running the following command in the SO-ub container as root (a one-liner for running the same check from the OSM host is sketched after this list):
forever list
- You should see a status similar to the following, with an uptime:
info:    Forever processes running
data:        uid  command         script                                                                                                                    forever pid   id logfile                    uptime
data:    [0] uivz /usr/bin/nodejs skyquake.js --enable-https --keyfile-path=/usr/rift/etc/ssl/current.key --certfile-path=/usr/rift/etc/ssl/current.cert 21071   21082    /root/.forever/forever.log 0:18:12:3.231
- If the UI server process has issues, you won't see an uptime; instead, the status "STOPPED" appears in that position.
- If the UI server process never started, you'll see the message "No forever processes running".
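As referenced above, the same check can be run directly from the OSM host without opening a shell in the container; a minimal sketch:
lxc exec SO-ub -- forever list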
Restarting UI
- The UI is restarted when the SO module (launchpad) is restarted.
- However, in case just the UI needs to be restarted, you can run the following command in the SO-ub container as root:
forever restartall
Troubleshooting SO
SO status
The SO now starts as a service, but earlier versions ran it as a plain process. To check whether the SO is running as a service, run the following command:
lxc exec SO-ub systemctl status launchpad.service
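If the unit is not found, you are likely on an installation that still runs the SO as a plain process. A minimal sketch for checking that case; the "[l]aunchpad" pattern is a common trick to keep grep from matching its own command line:
lxc exec SO-ub -- bash -c "ps aux | grep -i [l]aunchpad"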
Restarting SO
Note: Restarting the SO also restarts the UI.
When the SO has been started as a service (see SO status section above), use the following commands to restart the SO:
lxc exec SO-ub systemctl stop launchpad.service
lxc exec SO-ub systemctl start launchpad.service
When the SO hasn't been started as a service, use the following commands to restart it:
# Optional; needed if there is an existing running instance of launchpad
lxc restart SO-ub
lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
Troubleshooting RO
RO status
The status of the RO process can be checked by running the following command in the RO container as root:
service osm-ro status
● osm-ro.service - openmano server
   Loaded: loaded (/etc/systemd/system/osm-ro.service; disabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-11-02 16:50:21 UTC; 2s ago
 Main PID: 550 (python)
    Tasks: 1
   Memory: 51.2M
      CPU: 717ms
   CGroup: /system.slice/osm-ro.service
           └─550 python openmanod -c /etc/osm/openmanod.cfg --log-file=/var/log/osm/openmano.log

Nov 02 16:50:21 RO-integration systemd[1]: Started openmano server.
If it is not running, check the most recent entries in /var/log/osm/openmano.log.
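A minimal sketch for inspecting the tail of that log directly from the OSM host:
lxc exec RO -- tail -n 100 /var/log/osm/openmano.log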
Restarting RO
The RO can be restarted on the RO container with:
service osm-ro restart
Troubleshooting VCA
VCA status
lxc exec VCA -- juju status
Check that the Juju account shows a 'green' status in the UI. The following commands can then be used to inspect a deployed application and its actions:
lxc exec VCA -- juju config <app>
lxc exec VCA -- juju list-actions <app>
lxc exec VCA -- juju show-action-status
lxc exec VCA -- juju show-action-output <action-id>
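As a worked example, the usual flow is to list the actions an application exposes, check which actions have run, and then fetch the output of a specific one. This sketch uses a hypothetical application name ("mycharm") and a placeholder action id; substitute values from your own deployment:
lxc exec VCA -- juju status
lxc exec VCA -- juju list-actions mycharm       # "mycharm" is a hypothetical application name
lxc exec VCA -- juju show-action-status         # note the id of the action of interest
lxc exec VCA -- juju show-action-output 3a4b5c  # "3a4b5c" is a placeholder action id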
VCA password
To retrieve the VCA password, install osmclient and run the following:
$ lxc list
+-------+---------+--------------------------------+------+------------+-----------+
| NAME  |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+-------+---------+--------------------------------+------+------------+-----------+
| RO    | RUNNING | 10.143.142.48 (eth0)           |      | PERSISTENT | 0         |
+-------+---------+--------------------------------+------+------------+-----------+
| SO-ub | RUNNING | 10.143.142.43 (eth0)           |      | PERSISTENT | 0         |
+-------+---------+--------------------------------+------+------------+-----------+
| VCA   | RUNNING | 10.44.127.1 (lxdbr0)           |      | PERSISTENT | 0         |
|       |         | 10.143.142.139 (eth0)          |      |            |           |
+-------+---------+--------------------------------+------+------------+-----------+
$ export OSM_HOSTNAME=10.143.142.43
$ osm config-agent-list
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+
| name    | account-type | details                                                                                                              |
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+
| osmjuju | juju         | {u'secret': u'OTlhODg1NmMxN2JhODg1MTNiOTY4ZTk0', u'user': u'admin', u'ip-address': u'10.44.127.235', u'port': 17070} |
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+
Under the details column, the 'secret' value is the admin user password.
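If osmclient is not at hand, the same admin password can usually be read directly from the Juju client inside the VCA container; a minimal sketch, assuming Juju 2.x:
lxc exec VCA -- juju show-controller --show-password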
Restarting VCA
- The Juju controller can be restarted or started in case of a problem.
lxc exec VCA bash
$ lxc list
+-----------------+---------+----------------------+------+------------+-----------+
|      NAME       |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------+---------+----------------------+------+------------+-----------+
| juju-ed3163-288 | RUNNING | 10.44.127.188 (eth0) |      | PERSISTENT | 0         |
+-----------------+---------+----------------------+------+------------+-----------+
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         |  <-- Juju controller instance
+-----------------+---------+----------------------+------+------------+-----------+

# Get the name of the Juju controller instance (e.g., "juju-f050fc-0") and access it
lxc exec juju-f050fc-0 bash

# Stop the process, if needed, by killing the process listening on the default port 17070
# Or, if it is not active, run the following in the background (or exit with Ctrl+C)
/var/lib/juju/init/jujud-machine-0/exec-start.sh
Known error messages in the VCA and their solutions
Software upgrade
You can find instructions in this link: Software upgrade (Release THREE)