Logs and troubleshooting (Release ONE)
Under elaboration
Logs
UI logs
The server-side UI logs can be obtained on the SO-ub container at:
/usr/rift/usr/share/rw.ui/skyquake/err.log
/usr/rift/usr/share/rw.ui/skyquake/out.log
Client side UI logs can be obtained in the Developer Console in a browser.
SO logs
SO logs can be obtained on the SO-ub container at:
/var/log/rift/rift.log
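rift.log can grow large, so a small grep wrapper helps surface the most recent error-level entries. This is only a sketch: the ERROR/CRITICAL keywords are an assumption about the log format, and the so_errors name is illustrative.

```shell
# Show the last 20 error-level lines of a log file.
# Keyword matching (error/critical) is an assumption about rift.log's format.
so_errors() {
  grep -iE 'error|critical' "$1" | tail -n 20
}

# Inside the SO-ub container:
#   so_errors /var/log/rift/rift.log
```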
RO logs
RO logs can be obtained on the RO container at:
/var/log/openmano/openmano.log
The log file path and log level (debug by default) can be configured in the file:
/etc/default/openmanod.cfg
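To lower the verbosity, the log level can be edited in that file and the service restarted. The helper below is a sketch that assumes a YAML-style top-level "log_level:" key (check your openmanod.cfg for the exact syntax before using it).

```shell
# Set the RO log level in a config file.
# Assumes a 'log_level:' key at the start of a line (verify in your openmanod.cfg).
set_log_level() {
  # $1 = config file, $2 = new level (e.g. ERROR, INFO, DEBUG)
  sed -i "s/^log_level:.*/log_level: $2/" "$1"
}

# Inside the RO container:
#   set_log_level /etc/default/openmanod.cfg ERROR
#   service openmano restart
```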
VCA logs
Troubleshooting
Troubleshooting UI
UI status
- The status of the UI process can be checked by running the following command in the SO-ub container as root:
forever list
- You should see a status similar to the following, with an uptime:
info:    Forever processes running
data:        uid  command         script                                                                                                                 forever pid   id logfile                    uptime
data:    [0] uivz /usr/bin/nodejs skyquake.js --enable-https --keyfile-path=/usr/rift/etc/ssl/current.key --certfile-path=/usr/rift/etc/ssl/current.cert 21071   21082    /root/.forever/forever.log 0:18:12:3.231
- In case the UI server process has issues, you won't see an uptime but will instead see a status "STOPPED" in that position.
- In case the UI server process never started, you'll see a status saying "No forever processes running".
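The three states described above can be distilled into a small helper that classifies the output of "forever list". This is a sketch: the ui_state name is illustrative, and the matching relies on the strings shown above appearing in the output.

```shell
# Classify UI health from `forever list` output.
# Matches the three situations described above; anything else is "unknown".
ui_state() {
  case "$1" in
    *"No forever processes running"*) echo "never-started" ;;
    *STOPPED*)                        echo "stopped" ;;
    *uptime*)                         echo "running" ;;
    *)                                echo "unknown" ;;
  esac
}

# Inside the SO-ub container, as root:
#   ui_state "$(forever list)"
```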
Restarting UI
- The UI is restarted when the SO module (launchpad) is restarted.
- However, in case just the UI needs to be restarted, you can run the following command in the SO-ub container as root:
forever restartall
Known error messages in the UI and their solution
Troubleshooting SO
SO status
Restarting SO
To restart the SO module, follow these instructions:
lxc restart SO-ub   # Optional; needed if there is an existing running instance of launchpad
lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
Known error messages in the SO and their solution
Troubleshooting RO
RO status
The status of the RO process can be checked by running the following command in the RO container as root:
service openmano status
● openmano.service - openmano server
   Loaded: loaded (/etc/systemd/system/openmano.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2016-11-02 16:50:21 UTC; 2s ago
 Main PID: 550 (python)
    Tasks: 1
   Memory: 51.2M
      CPU: 717ms
   CGroup: /system.slice/openmano.service
           └─550 python /opt/openmano/openmanod.py -c /opt/openmano/openmanod.cfg --log-file=/opt/openmano/logs/openmano.log

Nov 02 16:50:21 RO-integration systemd[1]: Started openmano server.
If it is not running, check the latest log entries at /var/log/openmano/openmano.log
Restarting RO
RO can be restarted on the RO container with:
service openmano restart
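After a restart it is worth confirming that the service actually came back up. A minimal sketch, keyed on the "Active: active (running)" line of the status output shown earlier (the ro_is_active name is illustrative):

```shell
# Succeed only if the status text reports the service as active (running).
ro_is_active() {
  echo "$1" | grep -q 'Active: active (running)'
}

# Inside the RO container:
#   ro_is_active "$(service openmano status)" && echo "RO up" || echo "RO down"
```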
Known error messages in the RO and their solution
Possible known errors are:
Invalid openstack credentials
SYMPTOM: At deployment, the following error message appears: Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)
CAUSE: OpenStack was not added to openmano with the right credentials.
SOLUTION:
- Follow the steps at OSM_Release_ONE#Openstack_site. If the problem persists, ensure that OpenStack is properly configured.
- Ensure you have access to the OpenStack endpoints: if port 5000 is redirected using iptables, the other endpoint ports must also be redirected.
- Use the "v2" authorization URL; "v3" is in beta.
- If https (instead of http) is used for the authorization URL, ensure the certificate is installed in the RO container, e.g. by putting a .crt (not .pem) certificate at /usr/local/share/ca-certificates and running update-ca-certificates.
Access can be checked using the openstack client, providing the same URL and credentials that you provide to openmano (the --debug option shows more info, including the IPs/ports it tries to access):
sudo apt install python-openstackclient
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug network list
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug host list
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug flavor list
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug server list
Wrong database version
SYMPTOM: The openmano service fails. The openmano logs show:
2016-11-02T17:19:51 CRITICAL openmano openmanod.py:268 DATABASE wrong version '0.15'. Try to upgrade/downgrade to version '0.16' with './database_utils/migrate_mano_db.sh'
CAUSE: openmano has been upgraded to a new version that requires a new database version.
SOLUTION: To upgrade the database version, run ./database_utils/migrate_mano_db.sh and provide credentials if needed (by default the database user is 'mano' and the database password is 'manopw').
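For automation, the current and required database versions can be extracted from the CRITICAL log line shown above, so a wrapper script can decide whether the migration is needed. A sketch, assuming the quoting pattern of that sample message (the db_versions name is illustrative):

```shell
# Print "<current> <required>" parsed from the wrong-version CRITICAL line.
# Assumes the versions appear in single quotes as in the sample log above.
db_versions() {
  echo "$1" | sed -n "s/.*wrong version '\([^']*\)'.*version '\([^']*\)'.*/\1 \2/p"
}
```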
Troubleshooting VCA
VCA status
Restarting VCA
Known error messages in the VCA and their solution
Software upgrade
OSM is upgraded periodically to fix reported bugs. The latest version corresponds to the tag v1.0.4. These guidelines show how to upgrade the different components.
UI upgrade
Note: It is recommended to upgrade both SO and UI modules together.
Execute in the UI container ("lxc exec SO-ub bash" to enter the container, "exit" to leave):
cd /root/UI
git pull --rebase
git checkout tags/v1.0.4
make clean && make -j16 && make install
Follow SO upgrade to upgrade SO and restart launchpad.
SO upgrade
Execute in the SO container ("lxc exec SO-ub bash" to enter the container, "exit" to leave):
cd /root/SO
git pull --rebase
git checkout tags/v1.0.4
make clean && make -j16 && make install
Exit the SO-ub container, then restart the container and the Launchpad:
lxc restart SO-ub
lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
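The restart sequence above can be wrapped in a small host-side helper. A sketch: it dry-runs by default (prints the commands via echo) and only executes for real when an empty first argument is passed, since it needs lxd and the SO-ub container on the host (the restart_so name is illustrative).

```shell
# Restart the SO-ub container and relaunch the Launchpad.
# With no argument the commands are only printed (dry run);
# call `restart_so ""` to actually execute them on an lxd host.
restart_so() {
  run="${1-echo}"
  $run lxc restart SO-ub
  $run lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift \
    -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
}
```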
RO upgrade
Execute in the RO container ("lxc exec RO bash" to enter the container, "exit" to leave):
service openmano stop
#git -C /opt/openmano stash       # required if the original config file has changed
git -C /opt/openmano pull --rebase
git -C /opt/openmano checkout tags/v1.0.4
#git -C /opt/openmano stash pop   # required if the original config file has changed
/opt/openmano/database_utils/migrate_mano_db.sh
service openmano start