Logs and troubleshooting (Release ONE)


'''Under elaboration'''

__TOC__

=Logs=

==UI logs==

The server-side UI logs can be obtained on the SO-ub container at:

<pre>
/usr/rift/usr/share/rw.ui/skyquake/err.log
/usr/rift/usr/share/rw.ui/skyquake/out.log
</pre>
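To follow both files from the host without opening a shell in the container, a suggested shortcut (assuming the standard lxc and tail tools) is:
<pre>
lxc exec SO-ub -- tail -f /usr/rift/usr/share/rw.ui/skyquake/err.log /usr/rift/usr/share/rw.ui/skyquake/out.log
</pre>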

Client-side UI logs can be obtained in the browser's Developer Console.

==SO logs==

SO logs can be obtained on the SO-ub container at:

<pre>
/var/log/rift/rift.log
</pre>

==RO logs==

RO logs can be obtained on the RO container at:

<pre>/var/log/openmano/openmano.log</pre>

The log file and log level (debug by default) can be set in the file:

<pre>/etc/default/openmanod.cfg</pre>
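Purely as an illustration of what to look for, a sketch of the relevant entries (the key names here are assumptions; check the file itself for the exact options supported by your openmano version):
<pre>
# illustrative excerpt only; the real key names may differ
log_file:  /var/log/openmano/openmano.log
log_level: DEBUG
</pre>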

==VCA logs==
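As an assumption (the VCA is managed by juju), the juju logs can usually be queried from the VCA container:
<pre>
lxc exec VCA -- juju debug-log
</pre>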

=Troubleshooting=

==Troubleshooting UI==

===UI status===

* The status of the UI process can be checked by running the following command in the SO-ub container as root:
<pre>
forever list
</pre>
* You should see a status similar to the following, with an uptime:
<pre>
info:    Forever processes running
data:        uid  command         script                                                                                                                 forever pid   id logfile                    uptime
data:    [0] uivz /usr/bin/nodejs skyquake.js --enable-https --keyfile-path=/usr/rift/etc/ssl/current.key --certfile-path=/usr/rift/etc/ssl/current.cert 21071   21082    /root/.forever/forever.log 0:18:12:3.231
</pre>
* In case the UI server process has issues, you won't see an uptime; instead, the status "STOPPED" appears in that position.
* In case the UI server process never started, you'll see the message "No forever processes running".

===Restarting UI===

* The UI is restarted when the SO module (launchpad) is restarted.
* However, in case just the UI needs to be restarted, you can run the following command in the SO-ub container as root:
<pre>
forever restartall
</pre>

===Known error messages in the UI and their solution===

==Troubleshooting SO==

===SO status===

The SO now starts as a service; earlier versions of the SO ran it as a plain process. To check whether the SO is running as a service, run the following command:

<pre>
lxc exec SO-ub systemctl status launchpad.service
</pre>
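If the service is not found (an older installation where the SO still runs as a plain process), a generic check, given as a suggestion rather than a documented step, is to look for the launchpad process directly:
<pre>
# ps runs inside the container; grep filters its output on the host
lxc exec SO-ub -- ps -ef | grep launchpad
</pre>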


===Restarting SO===

'''Note:''' Restarting the SO also restarts the UI.

When the SO has been started as a service (see the SO status section above), use the following commands to restart the SO:

<pre>
lxc exec SO-ub systemctl stop launchpad.service
lxc exec SO-ub systemctl start launchpad.service
</pre>

When the SO hasn't been started as a service, use the following commands to restart it:

<pre>
lxc restart SO-ub   # Optional: needed if there is an existing running instance of launchpad
lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
</pre>
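Whichever way the SO was restarted, startup progress and errors can be followed in the SO log listed above (a suggested check):
<pre>
lxc exec SO-ub -- tail -f /var/log/rift/rift.log
</pre>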

===Known error messages in the SO and their solution===

==Troubleshooting RO==

===RO status===

The status of the RO process can be checked by running the following command in the RO container as root:

<pre>
 service openmano status
● openmano.service - openmano server
   Loaded: loaded (/etc/systemd/system/openmano.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2016-11-02 16:50:21 UTC; 2s ago
 Main PID: 550 (python)
    Tasks: 1
   Memory: 51.2M
      CPU: 717ms
   CGroup: /system.slice/openmano.service
           └─550 python /opt/openmano/openmanod.py -c /opt/openmano/openmanod.cfg --log-file=/opt/openmano/logs/openmano.log

Nov 02 16:50:21 RO-integration systemd[1]: Started openmano server.
</pre>

In case it is not running, check the last log entries in '''/var/log/openmano/openmano.log'''.

===Restarting RO===

RO can be restarted on the RO container with:

<pre>service openmano restart</pre>

===Known error messages in the RO and their solution===

Possible known errors are:

====Invalid openstack credentials====

SYMPTOM: At deployment, the following error message appears: "Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)"

CAUSE: Openstack has not been properly added to openmano with the right credentials.

SOLUTION:

* See the steps at [[OSM_Release_ONE#Openstack_site]]. If the problem persists, ensure openstack is properly configured (the datacenter registration can be reviewed as shown below).
* Ensure you have access to the openstack endpoints: if port 5000 is redirected using iptables, the other endpoint ports must also be redirected.
* Use the "v2" authorization URL. "v3" is currently experimental in the master branch and is not recommended.
* If https (instead of http) is used for the authorization URL, you can either use the insecure option at datacenter-create (see [[Openstack_configuration_(Release_ONE)#Add_openstack_at_OSM]]), or install the certificate in the RO container, e.g. by putting a .crt (not .pem) certificate at /usr/local/share/ca-certificates and running update-ca-certificates.
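To review how the datacenter was registered in openmano, a suggested check using the openmano client in the RO container (the datacenter name is a placeholder):
<pre>
openmano datacenter-list
openmano datacenter-list <datacenter-name>
</pre>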

Access to openstack can also be checked using the openstack client:

<pre>
sudo apt install python-openstackclient
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug network list
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug host list
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug flavor list
openstack --os-project-name <auth-project-name> --os-url <auth-url> --os-username <auth-username> --os-password <auth-password> --debug server list
# Provide the same URL and credentials that you provide to openmano. The --debug option shows more info, so you can see the IPs/ports it tries to access.
</pre>

====Wrong database version====

SYMPTOM: The openmano service fails. The openmano logs show:

<pre>
2016-11-02T17:19:51 CRITICAL  openmano openmanod.py:268 DATABASE wrong version '0.15'. Try to upgrade/downgrade to version '0.16' with './database_utils/migrate_mano_db.sh'
</pre>

CAUSE: Openmano has been upgraded to a new version that requires a new database version.

SOLUTION: To upgrade the database version, run ./database_utils/migrate_mano_db.sh and provide credentials if needed (by default, the database user is 'mano' and the database password is 'manopw').
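For example, from the host (a suggested invocation; provide the credentials above if the script asks for them):
<pre>
lxc exec RO -- /opt/openmano/database_utils/migrate_mano_db.sh
</pre>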

==Troubleshooting VCA==

===VCA status===

<pre>lxc exec VCA -- juju status</pre>

Check that the juju account shows 'green' status in the UI.

===Restarting VCA===
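As a generic fallback, not documented here, the VCA container itself can be restarted from the host and its status re-checked:
<pre>
lxc restart VCA
lxc exec VCA -- juju status
</pre>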

===Known error messages in the VCA and their solution===

=Software upgrade=

OSM is upgraded periodically to fix reported bugs. The latest version corresponds to the tag v1.0.5. These guidelines show how to upgrade the different components.

==UI upgrade==

'''Note:''' It is recommended to upgrade both SO and UI modules together.

Execute in the UI container ("lxc exec SO-ub bash" to enter the container, "exit" to exit):

<pre>
cd /root/UI
git pull --rebase
git checkout tags/v1.0.5
make clean && make -j16 && make install
</pre>
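To confirm that the expected tag was checked out, a generic git check (not part of the upgrade steps themselves):
<pre>
git -C /root/UI describe --tags
</pre>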

Follow [[Logs_and_troubleshooting_(Release_ONE)#SO_upgrade|SO upgrade]] to upgrade the SO and restart the launchpad.

==SO upgrade==

Execute in the SO container ("lxc exec SO-ub bash" to enter the container, "exit" to exit):

<pre>
cd /root/SO
git checkout v1.0
git pull --rebase
git checkout tags/v1.0.5
make clean && make -j16 && make install
</pre>

Exit the SO container, then restart the SO-ub container and the Launchpad:

<pre>
lxc restart SO-ub
# lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode
</pre>
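After the container restart, the SO can be checked as in the SO status section above:
<pre>
lxc exec SO-ub systemctl status launchpad.service
</pre>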

==RO upgrade==

Execute in the RO container ("lxc exec RO bash" to enter the container, "exit" to exit):

<pre>
service openmano stop
#git -C /opt/openmano stash        #required if the original config file has changed
git -C /opt/openmano pull --rebase
git -C /opt/openmano checkout tags/v1.0.5
#git -C /opt/openmano stash pop    #required if the original config file has changed
/opt/openmano/database_utils/migrate_mano_db.sh
service openmano start
</pre>
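After the upgrade, the service status can be checked as in the RO status section above:
<pre>
service openmano status
</pre>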

==VCA upgrade==