Logs and troubleshooting (Release TWO)
Logs
UI logs
The server-side UI logs can be obtained on the SO-ub container at:
/usr/rift/usr/share/rw.ui/skyquake/err.log
/usr/rift/usr/share/rw.ui/skyquake/out.log
Client-side UI logs can be obtained from the Developer Console in a browser.
SO logs
SO logs can be obtained on the SO-ub container at:
/var/log/rift/rift.log
RO logs
RO logs are in the file "/var/log/osm/openmano.log" on the RO container. You can copy it from the container to the OSM host with:
lxc file pull RO/var/log/osm/openmano.log .
The log file location and log level (debug by default) can be set in the file "/etc/osm/openmanod.cfg" on the RO container.
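For reference, the relevant settings look roughly like the fragment below. This is a sketch: the exact key names should be verified against the openmanod.cfg shipped with your RO version.

```yaml
# Logging settings in /etc/osm/openmanod.cfg (sketch; verify key names against your installed file)
log_level: DEBUG                        # global log level (debug by default)
log_file:  '/var/log/osm/openmano.log'  # where openmanod writes its log
```

After editing the file, restart the service (service osm-ro restart) for the change to take effect.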
VCA logs
Troubleshooting
Troubleshooting UI
UI status
- The status of the UI process can be checked by running the following command in the SO-ub container as root:
forever list
- You should see a status similar to following with an uptime:
info:    Forever processes running
data:        uid  command         script                                                                                                                 forever pid  id    logfile                    uptime
data:    [0] uivz /usr/bin/nodejs skyquake.js --enable-https --keyfile-path=/usr/rift/etc/ssl/current.key --certfile-path=/usr/rift/etc/ssl/current.cert 21071       21082 /root/.forever/forever.log 0:18:12:3.231
- In case the UI server process has issues, you won't see an uptime but will instead see a status "STOPPED" in that position.
- In case the UI server process never started, you'll see a status saying "No forever processes running".
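The three states above can also be checked from a script. The following sketch classifies sample `forever list` output; the sample strings are abbreviated illustrations of the outputs described above, not live command output.

```shell
# Abbreviated samples of the three `forever list` outcomes (illustrative text)
running='data: [0] uivz /usr/bin/nodejs skyquake.js ... 21071 21082 /root/.forever/forever.log 0:18:12:3.231'
stopped='data: [0] uivz /usr/bin/nodejs skyquake.js ... 21071 21082 /root/.forever/forever.log STOPPED'
none='info: No forever processes running'

# ui_state classifies one line of `forever list` output
ui_state() {
  case "$1" in
    *"No forever processes running"*) echo "never-started" ;;
    *STOPPED*)                        echo "stopped" ;;
    *[0-9]:[0-9]*)                    echo "running" ;;  # uptime column present
    *)                                echo "unknown" ;;
  esac
}

ui_state "$running"   # -> running
ui_state "$stopped"   # -> stopped
ui_state "$none"      # -> never-started
```

On a live system you would pipe the real output instead, e.g. `forever list | tail -n 1`.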
Restarting UI
- The UI is restarted when the SO module (launchpad) is restarted.
- However, in case just the UI needs to be restarted, you can run the following command in the SO-ub container as root:
forever restartall
Known error messages in the UI and their solution
Troubleshooting SO
SO status
The SO now runs as a service; earlier versions ran it as a plain process. To check whether the SO is running as a service, run the following command:
lxc exec SO-ub systemctl status launchpad.service
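For scripting a health check around this, one option is to match on the "Active:" line of the systemctl output. A minimal sketch against sample text (the sample line is illustrative; on a real host, feed it the live status output):

```shell
# Sample of the relevant line from `systemctl status launchpad.service` (illustrative)
status_output='Active: active (running) since Wed 2017-04-02 16:50:21 UTC; 2s ago'

# so_active prints "up" when the unit reports active (running), "down" otherwise
so_active() {
  case "$1" in
    *"Active: active (running)"*) echo up ;;
    *)                            echo down ;;
  esac
}

so_active "$status_output"              # -> up
so_active 'Active: inactive (dead)'     # -> down
```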
Restarting SO
Note: Restarting the SO also restarts the UI.
When the SO has been started as a service (see SO status section above), use the following commands to restart the SO:
lxc exec SO-ub systemctl stop launchpad.service
lxc exec SO-ub systemctl start launchpad.service
When the SO hasn't been started as a service, follow these instructions to restart it:
lxc restart SO-ub   # Optional: needed if there is an existing running instance of launchpad
lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
Known error messages in the SO and their solution
Troubleshooting RO
RO status
The status of the RO process can be checked by running the following command in the RO container as root:
service osm-ro status
● osm-ro.service - openmano server
   Loaded: loaded (/etc/systemd/system/osm-ro.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-04-02 16:50:21 UTC; 2s ago
 Main PID: 550 (python)
    Tasks: 1
   Memory: 51.2M
      CPU: 717ms
   CGroup: /system.slice/osm-ro.service
           └─550 python openmanod -c /etc/osm/openmanod.cfg --log-file=/var/log/osm/openmano.log

Nov 02 16:50:21 RO-integration systemd[1]: Started openmano server.
If it is not running, check the last entries of the log at /var/log/osm/openmano.log
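To surface only the problematic entries quickly, you can filter the tail of the log for ERROR/CRITICAL lines. The sketch below writes a two-line sample in openmano's log format (taken from the examples on this page) so it is self-contained; on a real system, point tail at /var/log/osm/openmano.log instead:

```shell
# Create a small sample in openmano's log format (illustrative lines)
cat > /tmp/openmano.sample.log <<'EOF'
2016-11-02T17:19:50 INFO     openmano openmanod:210 Starting openmano server
2016-11-02T17:19:51 CRITICAL openmano openmanod:268 DATABASE wrong version '0.15'. Try to upgrade/downgrade to version '0.16' with 'database_utils/migrate_mano_db.sh'
EOF

# Show only ERROR/CRITICAL entries among the last 200 lines
tail -n 200 /tmp/openmano.sample.log | grep -E 'ERROR|CRITICAL'
```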
Restarting RO
RO can be restarted on the RO container with:
service osm-ro restart
Known error messages in the RO and their solution
Possible known errors are:
Invalid openstack credentials
SYMPTOM: At deployment, some of the following error messages appear:
- Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)
- VIM Exception vimconnUnexpectedResponse Unauthorized: The request you have made requires authentication. (HTTP 401) ...
CAUSE: Openstack is not properly added to openmano with the right credentials
SOLUTION:
- See the steps at OSM_Release_TWO#Openstack_site. If the problem persists, ensure that openstack is properly configured.
- Ensure you have access to the openstack endpoints: if port 5000 is redirected using iptables, other endpoint ports must also be redirected.
- Use "v2" authorization URL. "v3" is currently experimental in the master branch and is not recommended.
- If https (instead of http) is used for the authorization URL, you can either use the insecure option at datacenter-create (see Openstack_configuration_(Release_TWO)#Add_openstack_at_OSM), or install the certificate on the RO container, e.g. by placing a .crt (not .pem) certificate in /usr/local/share/ca-certificates and running update-ca-certificates.
Access and credentials can be checked using the openstack client from the RO container:
lxc exec RO bash                          # enter the RO container CLI
apt install -y python-openstackclient    # if not already installed
# Provide the same URL and credentials that you provide to openmano.
# The --debug option shows more info, so you can see the IP/ports it tries to access.
# Run 'openmano datacenter-list <datacenter-name> -vvv' to see the openstack credentials stored by openmano.
openstack --os-project-name <project-name> --os-auth-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug network list
openstack --os-project-name <project-name> --os-auth-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug host list
openstack --os-project-name <project-name> --os-auth-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug flavor list
openstack --os-project-name <project-name> --os-auth-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug server list
Wrong database version
SYMPTOM: The osm-ro service fails. The following appears in the openmano logs:
2016-11-02T17:19:51 CRITICAL openmano openmanod:268 DATABASE wrong version '0.15'. Try to upgrade/downgrade to version '0.16' with 'database_utils/migrate_mano_db.sh'
CAUSE: Openmano has been upgraded to a new version that requires a new database version.
SOLUTION: To upgrade the database version, run database_utils/migrate_mano_db.sh and provide credentials if needed (by default the database user is 'mano' and the database password is 'manopw').
Problems with images
SYMPTOM: The following error appears in the UI or RO logs: Error creating image at VIM 'xxxx': Cannot create image without location
CAUSE: There is a mismatch between image names (and checksum) at OSM VNFD and at VIM. Basically the image is not present at VIM.
SOLUTION: Add the image at the VIM and ensure that it is visible for the VIM credentials provided to RO. At the RO container you can easily list the VIM images using these credentials with the command: "openmano vim-image-list --datacenter <xxxxx>".
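The name comparison can itself be scripted: list the image names your VNFDs reference, list what the VIM offers, and diff them. The sketch below uses hypothetical image names (ubuntu16.04, cirros034, centos7); in practice the second list would come from "openmano vim-image-list --datacenter <xxxxx>":

```shell
# Hypothetical image names referenced by the VNFDs
vnfd_images='ubuntu16.04
cirros034'
# Hypothetical image names available at the VIM
vim_images='cirros034
centos7'

# comm requires sorted input; write each list to a sorted temp file
printf '%s\n' "$vnfd_images" | sort > /tmp/vnfd.txt
printf '%s\n' "$vim_images"  | sort > /tmp/vim.txt

# Images required by the VNFDs but missing at the VIM
comm -23 /tmp/vnfd.txt /tmp/vim.txt   # -> ubuntu16.04
```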
Problems with MTU
SYMPTOM: This is one of the causes of timeouts. The OSM UI shows "...Openmano command timed out". At RO level, some commands from RO to datacenters fail, normally blocking the thread that communicates with that datacenter. The command "openmano vim-xxxx-list" can block openmano.
CAUSE: The MTU of the containers is higher than the MTU of the machine. By default lxc containers are created with MTU 1500; if the interface of the machine where OSM is installed has a lower MTU, some response packets will be lost. This is the case, for example, when OSM is installed in a virtual machine launched by openstack, which offers an MTU of 1496.
SOLUTION: Enter the RO, SO-ub, and VCA containers and set the same MTU as the main machine, using the command "ifconfig eth0 mtu 1496". See LXD_configuration_for_OSM_Release_TWO#MTU for a permanent solution that survives VM or container restarts.
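The fix can be scripted from the host: read the host interface's MTU and apply it to each container. The sketch below embeds a sample `ip link` line so it is self-contained and only prints the lxc commands it would run (a dry run); the interface name ens3 is an assumption:

```shell
# Sample first line of `ip link show ens3` (illustrative; on a real host, capture it live)
link_line='2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1496 qdisc fq_codel state UP mode DEFAULT'

# Extract the MTU value from the line
host_mtu=$(printf '%s\n' "$link_line" | sed -n 's/.*mtu \([0-9][0-9]*\).*/\1/p')
echo "host MTU: $host_mtu"

# Dry run: print the command that would apply the same MTU inside each container
for c in RO SO-ub VCA; do
  echo lxc exec "$c" -- ifconfig eth0 mtu "$host_mtu"
done
```

Dropping the `echo` in the loop would actually apply the change (the containers would still need the permanent fix referenced above).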
Troubleshooting VCA
VCA status
lxc exec VCA -- juju status
Check that juju account is in 'green' status in the UI
lxc exec VCA -- juju config <app>
lxc exec VCA -- juju list-actions <app>
lxc exec VCA -- juju show-action-status
lxc exec VCA -- juju show-action-output <action-id>
VCA password
To retrieve the VCA password, install osmclient and run the following:
$ lxc list
+-------+---------+-----------------------+------+------------+-----------+
| NAME  | STATE   | IPV4                  | IPV6 | TYPE       | SNAPSHOTS |
+-------+---------+-----------------------+------+------------+-----------+
| RO    | RUNNING | 10.143.142.48 (eth0)  |      | PERSISTENT | 0         |
+-------+---------+-----------------------+------+------------+-----------+
| SO-ub | RUNNING | 10.143.142.43 (eth0)  |      | PERSISTENT | 0         |
+-------+---------+-----------------------+------+------------+-----------+
| VCA   | RUNNING | 10.44.127.1 (lxdbr0)  |      | PERSISTENT | 0         |
|       |         | 10.143.142.139 (eth0) |      |            |           |
+-------+---------+-----------------------+------+------------+-----------+
$ export OSM_HOSTNAME=10.143.142.43
$ osm config-agent-list
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+
| name    | account-type | details                                                                                                              |
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+
| osmjuju | juju         | {u'secret': u'OTlhODg1NmMxN2JhODg1MTNiOTY4ZTk0', u'user': u'admin', u'ip-address': u'10.44.127.235', u'port': 17070} |
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+
Under the details column, the 'secret' value is the admin user password.
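If you need the password in a script rather than by eye, the 'secret' field can be extracted from the details column. A sketch using the sample details string shown above:

```shell
# details column from `osm config-agent-list` (verbatim from the sample above)
details="{u'secret': u'OTlhODg1NmMxN2JhODg1MTNiOTY4ZTk0', u'user': u'admin', u'ip-address': u'10.44.127.235', u'port': 17070}"

# Pull out the value of the 'secret' key
vca_password=$(printf '%s\n' "$details" | sed -n "s/.*u'secret': u'\([^']*\)'.*/\1/p")
echo "$vca_password"   # -> OTlhODg1NmMxN2JhODg1MTNiOTY4ZTk0
```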
Restarting VCA
Known error messages in the VCA and their solution
Software upgrade (source code)
OSM is upgraded periodically to fix reported bugs. The latest version corresponds to the tag v1.0.5. These guidelines show you how to upgrade the different components.
- Note: The SW upgrade procedure does not include restoring/migrating configuration state in the upgraded OSM platform.
UI upgrade source code
Note: It is recommended to upgrade both SO and UI modules together.
Execute the following at the UI container ("lxc exec SO-ub bash" to enter the container, "exit" to exit):
cd /root/UI
git pull --rebase
git checkout tags/v2.0.1
make clean && make -j16 && make install
Follow SO upgrade to upgrade SO and restart launchpad.
SO upgrade source code
Execute the following at the SO container ("lxc exec SO-ub bash" to enter the container, "exit" to exit):
cd /root/SO
git pull --rebase
git checkout tags/v2.0.1
make clean    # Clean the previous installation
./BUILD.sh    # Install the new version
Exit the SO-ub container, then restart the container and the launchpad:
lxc restart SO-ub
lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
RO upgrade source code
Assuming that you have installed RO v2.0.x, you need to do the following to upgrade to v2.0.y.
First, enter the RO container ("lxc exec RO bash" to enter, "exit" to exit). Then run these instructions:
# cp /etc/osm/openmanod.cfg openmanod.cfg.bck   # Optional: create a backup of the config file if you made changes
cd RO
git pull --rebase
git checkout tags/v2.0.y
./scripts/install-openmano-service.sh --uninstall
./scripts/install-openmano.sh --noclone --updatedb -q
# cp openmanod.cfg.bck /etc/osm/openmanod.cfg   # Optional: restore the config file
# service osm-ro restart                        # Optional: if you restored the config file, restart the service
Software upgrade (binaries)
Under elaboration