<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://osm.etsi.org/wikipub/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fernandezca</id>
	<title>OSM Public Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://osm.etsi.org/wikipub/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fernandezca"/>
	<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php/Special:Contributions/Fernandezca"/>
	<updated>2026-05-11T03:29:48Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>https://osm.etsi.org/wikipub/index.php?title=How_to_uninstall_OSM&amp;diff=3688</id>
		<title>How to uninstall OSM</title>
		<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php?title=How_to_uninstall_OSM&amp;diff=3688"/>
		<updated>2018-09-27T12:25:55Z</updated>

		<summary type="html">&lt;p&gt;Fernandezca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
This page describes how to leave the environment clean and ready, assuming OSM Release FOUR was previously installed. This should enable successful OSM redeployments in the future.&lt;br /&gt;
&lt;br /&gt;
== Removing the OSM stack ==&lt;br /&gt;
&lt;br /&gt;
Either of the two following procedures should work.&lt;br /&gt;
&lt;br /&gt;
=== Automatic procedure ===&lt;br /&gt;
&lt;br /&gt;
It is possible to clean the environment by providing a flag to the OSM installer script:&lt;br /&gt;
&lt;br /&gt;
  sudo ./install_osm.sh --uninstall&lt;br /&gt;
&lt;br /&gt;
=== Manual procedure ===&lt;br /&gt;
&lt;br /&gt;
Stop the osm docker stack and remove all unused containers, images and volumes:&lt;br /&gt;
&lt;br /&gt;
  docker stack rm osm&lt;br /&gt;
  # Warning: the instruction below will clean all volumes and images, not only the ones used by OSM&lt;br /&gt;
  docker system prune --all --volumes&lt;br /&gt;
&lt;br /&gt;
== Removing packages in the system ==&lt;br /&gt;
&lt;br /&gt;
After removing the OSM stack (either automatically or manually), remove any packages that were left:&lt;br /&gt;
&lt;br /&gt;
  # Warning: the instruction below will remove Docker, which may be used by others&lt;br /&gt;
  sudo apt remove --purge docker-ce&lt;br /&gt;
  # Warning: the instruction below will remove Juju, which may be used by others&lt;br /&gt;
  sudo snap remove juju&lt;br /&gt;
  sudo apt remove --purge osm-devops python-osmclient&lt;/div&gt;</summary>
		<author><name>Fernandezca</name></author>
	</entry>
	<entry>
		<id>https://osm.etsi.org/wikipub/index.php?title=How_to_uninstall_OSM&amp;diff=3686</id>
		<title>How to uninstall OSM</title>
		<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php?title=How_to_uninstall_OSM&amp;diff=3686"/>
		<updated>2018-09-27T11:09:08Z</updated>

		<summary type="html">&lt;p&gt;Fernandezca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
This page describes how to leave the environment clean and ready, assuming OSM Release FOUR was previously installed. This should enable successful OSM redeployments in the future.&lt;br /&gt;
&lt;br /&gt;
== Removing the OSM stack ==&lt;br /&gt;
&lt;br /&gt;
Either of the two following procedures should work.&lt;br /&gt;
&lt;br /&gt;
=== Automatic procedure ===&lt;br /&gt;
&lt;br /&gt;
It is possible to clean the environment by providing a flag to the OSM installer script:&lt;br /&gt;
&lt;br /&gt;
  sudo ./install_osm.sh --uninstall&lt;br /&gt;
&lt;br /&gt;
=== Manual procedure ===&lt;br /&gt;
&lt;br /&gt;
Stop the Docker stack and remove every container:&lt;br /&gt;
&lt;br /&gt;
  docker stack rm osm&lt;br /&gt;
  docker system prune --all --volumes&lt;br /&gt;
&lt;br /&gt;
== Removing packages in the system ==&lt;br /&gt;
&lt;br /&gt;
After removing the OSM stack (either automatically or manually), remove any packages that were left:&lt;br /&gt;
&lt;br /&gt;
  sudo apt remove --purge docker-ce&lt;br /&gt;
  sudo snap remove juju&lt;br /&gt;
  sudo apt remove --purge osm-devops python-osmclient&lt;/div&gt;</summary>
		<author><name>Fernandezca</name></author>
	</entry>
	<entry>
		<id>https://osm.etsi.org/wikipub/index.php?title=How_to_uninstall_OSM&amp;diff=3685</id>
		<title>How to uninstall OSM</title>
		<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php?title=How_to_uninstall_OSM&amp;diff=3685"/>
		<updated>2018-09-27T11:07:59Z</updated>

		<summary type="html">&lt;p&gt;Fernandezca: Created page with &amp;quot;__TOC__  This page intends to provide means to leave the environment clean and ready, assuming OSM release FOUR was installed previously. This should allow successful OSM rede...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
This page describes how to leave the environment clean and ready, assuming OSM Release FOUR was previously installed. This should allow successful OSM redeployments.&lt;br /&gt;
&lt;br /&gt;
== Removing the OSM stack ==&lt;br /&gt;
&lt;br /&gt;
Either of the two following procedures should work.&lt;br /&gt;
&lt;br /&gt;
=== Automatic procedure ===&lt;br /&gt;
&lt;br /&gt;
It is possible to clean the environment by providing a flag to the OSM installer script:&lt;br /&gt;
&lt;br /&gt;
  sudo ./install_osm.sh --uninstall&lt;br /&gt;
&lt;br /&gt;
=== Manual procedure ===&lt;br /&gt;
&lt;br /&gt;
Stop the Docker stack and remove every container:&lt;br /&gt;
&lt;br /&gt;
  docker stack rm osm&lt;br /&gt;
  docker system prune --all --volumes&lt;br /&gt;
&lt;br /&gt;
== Removing packages in the system ==&lt;br /&gt;
&lt;br /&gt;
After removing the OSM stack (either automatically or manually), remove any packages that were left:&lt;br /&gt;
&lt;br /&gt;
  sudo apt remove --purge docker-ce&lt;br /&gt;
  sudo snap remove juju&lt;br /&gt;
  sudo apt remove --purge osm-devops python-osmclient&lt;/div&gt;</summary>
		<author><name>Fernandezca</name></author>
	</entry>
	<entry>
		<id>https://osm.etsi.org/wikipub/index.php?title=Technical_FAQ_(Release_THREE)&amp;diff=2569</id>
		<title>Technical FAQ (Release THREE)</title>
		<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php?title=Technical_FAQ_(Release_THREE)&amp;diff=2569"/>
		<updated>2018-05-10T13:27:00Z</updated>

		<summary type="html">&lt;p&gt;Fernandezca: Mispelled command&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Instantiation failed&amp;quot;, but VMs and networks were successfully created ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q. After trying to instantiate, I got the message that the instantiation failed without much information about the reason. After checking the logs, it seems to be a timeout issue. However, I am seeing that the VMs and networks were created at the VIM.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. First check in the RO that there is an IP address in the management interface of each VNF of the NS.&lt;br /&gt;
 lxc exec RO --env OPENMANO_TENANT=osm -- openmano instance-scenario-list                           # to identify the running scenarios in the RO&lt;br /&gt;
 lxc exec RO --env OPENMANO_TENANT=osm -- openmano instance-scenario-list &amp;lt;id&amp;gt; -vvv | grep ip  # to get verbose information on a specific scenario in the RO&lt;br /&gt;
&lt;br /&gt;
If no IP address is present in the management interface of each VNF, then you are hitting a SO-RO timeout issue. The reason is typically a wrong configuration of the VIM. The way management IP addresses are assigned to the VNFs changes from one VIM to another. In all cases, the recommendation is the following:&lt;br /&gt;
* Pre-provision a management network in the VIM, with DHCP enabled (an example is given right after this list). You can see, for instance, the instructions for the case of Openstack (https://osm.etsi.org/wikipub/index.php/Openstack_configuration_(Release_TWO) ).&lt;br /&gt;
* Then make sure that, at instantiation time, you specify a mapping between the management network in the NS and the VIM network name that you pre-provisioned at the VIM.&lt;br /&gt;
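&lt;br /&gt;
As a hedged illustration (the network name and subnet range below are placeholders, not values taken from this guide), a management network with DHCP could be pre-provisioned in an Openstack VIM as follows:&lt;br /&gt;
 openstack network create mgmt&lt;br /&gt;
 openstack subnet create --network mgmt --subnet-range 192.168.100.0/24 --dhcp mgmt-subnet&lt;br /&gt;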
&lt;br /&gt;
If the IP address is present in the management interface, then you are probably hitting a SO-VCA timeout, caused by the VNF configuration via Juju charms taking too long. To confirm, connect to the VCA container and check &amp;quot;juju status&amp;quot;.&lt;br /&gt;
 lxc exec VCA -- juju status&lt;br /&gt;
&lt;br /&gt;
Then, if you see an error, you should debug the VNF charm or ask the people providing that VNF package.&lt;br /&gt;
&lt;br /&gt;
== &amp;quot;cannot load cookies: file locked for too long&amp;quot; where charms are not loaded ==&lt;br /&gt;
&lt;br /&gt;
Depending on the remainder of the error message, this most likely means that a condition on the server hosting OSM is not allowing the charm to be loaded.&lt;br /&gt;
&lt;br /&gt;
For instance, the following error indicates that the VCA container is full and cannot host new containers for Juju: &amp;quot;Cannot load charms due to &amp;quot;ERROR cannot load cookies: file locked for too long; giving up: cannot acquire lock: open /root/.local/share/juju/cookies/osm.json.lock: no space left on device&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
To solve this, the containers and data created through Juju should be removed. Check the connection to the OSM Juju config agent account: if it is red/unavailable, check whether the service is running on port 17070 inside the Juju controller container (which runs inside the VCA container), and restore it if needed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Access VCA container&lt;br /&gt;
$ lxc exec VCA bash&lt;br /&gt;
&lt;br /&gt;
# Check the Juju status. The command may get stalled if the service is not running in the Juju controller&lt;br /&gt;
root@VCA:~# juju status&lt;br /&gt;
&lt;br /&gt;
# Check if Juju is running on the Juju controller&lt;br /&gt;
# First, check the name for the LXC with the Juju controller&lt;br /&gt;
# In this case, juju_controller_instance_id = 10.44.127.136&lt;br /&gt;
&lt;br /&gt;
root@VCA:~# lxc list | grep &amp;quot;${juju_controller_instance_id}&amp;quot;&lt;br /&gt;
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
&lt;br /&gt;
# Check if the service for the agent account is running. It is not running in this case&lt;br /&gt;
root@VCA:~# lxc exec juju-f050fc-0 -- netstat -apen | grep 17070&lt;br /&gt;
(Not all processes could be identified, non-owned process info&lt;br /&gt;
 will not be shown, you would have to be root to see it all.)&lt;br /&gt;
&lt;br /&gt;
# Check if disk is completely filled in VCA&lt;br /&gt;
root@VCA:~# df -h&lt;br /&gt;
&lt;br /&gt;
# If it is, remove some LXCs belonging to the failed machines&lt;br /&gt;
# Note: keep the Juju controller container! (here, the last row)&lt;br /&gt;
# The IP is available on the configuration section, under &amp;quot;Accounts&amp;quot; in the OSM dashboard&lt;br /&gt;
&lt;br /&gt;
root@VCA:~# lxc list&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
|      NAME       |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-260 | RUNNING | 10.44.127.190 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-269 | RUNNING | 10.44.127.69 (eth0)  |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-272 | RUNNING | 10.44.127.118 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-277 | RUNNING | 10.44.127.128 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-278 | RUNNING | 10.44.127.236 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-282 | RUNNING | 10.44.127.61 (eth0)  |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-283 | RUNNING | 10.44.127.228 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         |   # &amp;lt;- Do not remove the Juju controller!&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
&lt;br /&gt;
# Example: lxc stop juju-ed3163-260; lxc delete juju-ed3163-260&lt;br /&gt;
lxc stop ${name}; lxc delete ${name}&lt;br /&gt;
&lt;br /&gt;
# Clean the status in Juju by removing the machines, units and apps whose LXCs were removed before&lt;br /&gt;
root@VCA:~# juju status&lt;br /&gt;
Model    Controller  Cloud/Region         Version  SLA&lt;br /&gt;
default  osm         localhost/localhost  2.2.2    unsupported&lt;br /&gt;
&lt;br /&gt;
App                      Version  Status       Scale  Charm      Store  Rev  OS      Notes&lt;br /&gt;
flhf-testcf-flhfilter-b           active         0/1  fl7filter  local   20  ubuntu&lt;br /&gt;
flhf-testd-flhfilter-b            active         0/1  fl7filter  local   22  ubuntu&lt;br /&gt;
flhf-testdc-flhfilter-b           active         0/1  fl7filter  local   25  ubuntu&lt;br /&gt;
flhf-va-flhfilter-b               active         0/1  fl7filter  local   21  ubuntu&lt;br /&gt;
ids-test-ac-ids-b                 active         0/1  ids        local    1  ubuntu&lt;br /&gt;
lala-dpi-b                        maintenance    0/1  dpi        local    4  ubuntu&lt;br /&gt;
lhbcdf-lcdfilter-b                maintenance    0/1  l23filter  local   20  ubuntu&lt;br /&gt;
&lt;br /&gt;
Unit                       Workload     Agent   Machine  Public address  Ports  Message&lt;br /&gt;
flhf-testcf-flhfilter-b/0  unknown      lost    260      10.44.127.190          agent lost, see &#039;juju show-status-log flhf-testcf-flhfilter-b/0&#039;&lt;br /&gt;
flhf-testd-flhfilter-b/1   unknown      lost    278      10.44.127.236          agent lost, see &#039;juju show-status-log flhf-testd-flhfilter-b/1&#039;&lt;br /&gt;
flhf-testdc-flhfilter-b/2  unknown      lost    282      10.44.127.61           agent lost, see &#039;juju show-status-log flhf-testdc-flhfilter-b/2&#039;&lt;br /&gt;
flhf-va-flhfilter-b/0      unknown      lost    272      10.44.127.118          agent lost, see &#039;juju show-status-log flhf-va-flhfilter-b/0&#039;&lt;br /&gt;
ids-test-ac-ids-b/0        unknown      lost    277      10.44.127.128          agent lost, see &#039;juju show-status-log ids-test-ac-ids-b/0&#039;&lt;br /&gt;
lala-dpi-b/1               maintenance  failed  283      10.44.127.228          installing charm software&lt;br /&gt;
lhbcdf-lcdfilter-b/0       unknown      lost    269      10.44.127.69           agent lost, see &#039;juju show-status-log lhbcdf-lcdfilter-b/0&#039;&lt;br /&gt;
&lt;br /&gt;
Machine  State  DNS            Inst id          Series  AZ  Message&lt;br /&gt;
260      down   10.44.127.190  juju-ed3163-260  trusty      Running&lt;br /&gt;
269      down   10.44.127.69   juju-ed3163-269  trusty      Running&lt;br /&gt;
272      down   10.44.127.118  juju-ed3163-272  trusty      Running&lt;br /&gt;
277      down   10.44.127.128  juju-ed3163-277  trusty      Running&lt;br /&gt;
278      down   10.44.127.236  juju-ed3163-278  trusty      Running&lt;br /&gt;
282      down   10.44.127.61   juju-ed3163-282  trusty      Running&lt;br /&gt;
283      down   10.44.127.228  juju-ed3163-283  trusty      Running&lt;br /&gt;
&lt;br /&gt;
# Example: juju remove-machine 260&lt;br /&gt;
juju remove-machine ${machine_number} --force      # use the machine number whose IP matches the LXC removed above&lt;br /&gt;
&lt;br /&gt;
# Start Juju in the juju controller&lt;br /&gt;
root@VCA:~# lxc exec juju-f050fc-0 bash&lt;br /&gt;
# Cancel if needed or run in background&lt;br /&gt;
root@juju-f050fc-0:~# /var/lib/juju/init/jujud-machine-0/exec-start.sh &amp;amp;&lt;br /&gt;
^C&lt;br /&gt;
# Verify that the process is running&lt;br /&gt;
root@juju-f050fc-0:~# sudo netstat -apen | grep 17070&lt;br /&gt;
(Not all processes could be identified, non-owned process info&lt;br /&gt;
 will not be shown, you would have to be root to see it all.)&lt;br /&gt;
tcp        0      0 127.0.0.1:58402         127.0.0.1:17070         ESTABLISHED 0          22936276    359/jujud&lt;br /&gt;
tcp        0      0 10.44.127.136:56746     10.44.127.136:17070     ESTABLISHED 0          22936277    359/jujud&lt;br /&gt;
tcp        0      0 10.44.127.136:56740     10.44.127.136:17070     ESTABLISHED 0          22940759    359/jujud&lt;br /&gt;
tcp        0      0 127.0.0.1:58432         127.0.0.1:17070         ESTABLISHED 0          22940767    359/jujud&lt;br /&gt;
tcp6       0      0 :::17070                :::*                    LISTEN      0          22940744    359/jujud&lt;br /&gt;
tcp6       0      0 127.0.0.1:17070         127.0.0.1:58432         ESTABLISHED 0          22930280    359/jujud&lt;br /&gt;
tcp6       0      0 10.44.127.136:17070     10.44.127.136:56740     ESTABLISHED 0          22939104    359/jujud&lt;br /&gt;
tcp6       0      0 10.44.127.136:17070     10.44.127.136:56746     ESTABLISHED 0          22936278    359/jujud&lt;br /&gt;
tcp6       0      0 127.0.0.1:17070         127.0.0.1:58402         ESTABLISHED 0          22940756    359/jujud&lt;br /&gt;
&lt;br /&gt;
# Go to the SO-ub container and restart the SO service to reconnect to the Juju controller&lt;br /&gt;
$ lxc exec SO-ub bash&lt;br /&gt;
root@SO-ub:~# service launchpad restart&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After this process, access the OSM dashboard and check again the connectivity from the &amp;quot;Accounts&amp;quot; tab. It should be green, and now any new NS instantiated should correctly load its associated charm.&lt;br /&gt;
&lt;br /&gt;
== &amp;quot;Instantiation failed&amp;quot; and VMs and network were not created at VIM ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q. After trying to instantiate, I got the message that the instantiation failed without much information about the reason. I connected to the VIM and checked that the VMs and networks were not created.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. You are hitting a SO-RO timeout, caused either by a lack of communication from the RO to the VIM or because the creation of VMs and networks at the VIM takes too long.&lt;br /&gt;
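&lt;br /&gt;
As a simple first check (an illustrative command, assuming the default RO log location /var/log/osm/openmano.log), look at the RO log for the VIM-related error:&lt;br /&gt;
 lxc exec RO -- tail -n 100 /var/log/osm/openmano.log&lt;br /&gt;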
&lt;br /&gt;
== SO connection error: not possible to contact OPENMANO-SERVER (openmanod) ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q&#039;&#039;&#039;. The NS operational data of an instantiated NS in the SO CLI shows &amp;quot;Connection error: not possible to contact OPENMANO-SERVER (openmanod)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. Please check connectivity from SO-ub to RO container. Can you ping the RO IP address (configured in SO) from SO-ub container? If not, then make sure that osm-ro service is up and running on the RO container.&lt;br /&gt;
 $ lxc exec RO -- bash&lt;br /&gt;
 root@RO:~# service osm-ro status&lt;br /&gt;
 root@RO:~# OPENMANO_TENANT=osm openmano datacenter-list&lt;br /&gt;
&lt;br /&gt;
== Deployment fails with the error message &amp;quot;Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)&amp;quot; ==&lt;br /&gt;
&#039;&#039;&#039;Q. During instantiation, I got the following error message: &amp;quot;Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)&amp;quot;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. The cause is that Openstack has not been properly added to openmano with the right credentials.&lt;br /&gt;
&lt;br /&gt;
Go to the RO container:&lt;br /&gt;
 lxc exec RO bash&lt;br /&gt;
&lt;br /&gt;
Install the package python-openstackclient, in case it is not already installed:&lt;br /&gt;
 apt install -y python-openstackclient&lt;br /&gt;
&lt;br /&gt;
Execute the following commands with the appropriate substitutions to check that your Openstack is reachable and that you can perform specific actions:&lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug network list &lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug host list &lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug flavor list &lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug server list &lt;br /&gt;
 # Provide the same URL and credentials that you provide to openmano. The --debug option shows more info, so you can see the IPs/ports it tries to access&lt;br /&gt;
&lt;br /&gt;
This is to ensure that you have access to the openstack endpoints and that you are using the right credentials.&lt;br /&gt;
&lt;br /&gt;
Case 1. If any of the previous commands does not work, then either your connectivity to the Openstack endpoints is broken or you are not using the right parameters. Please check internally and debug until the previous commands work. Check also the guidelines here: [[OSM_Release_TWO#Openstack_site]].&lt;br /&gt;
&lt;br /&gt;
Case 2. If all of them worked, then follow these guidelines:&lt;br /&gt;
* Use &amp;quot;v2&amp;quot; authorization URL. &amp;quot;v3&amp;quot; is currently experimental in the master branch and is not recommended yet.&lt;br /&gt;
* If https (instead of http) is used for the authorization URL, you can either use the insecure option at datacenter-create (see [[Openstack_configuration_(Release_TWO)#Add_openstack_at_OSM]]); or install the certificate in the RO container, e.g. by putting a .crt (not .pem) certificate at /usr/local/share/ca-certificates and running update-ca-certificates (a short example is given after this list).&lt;br /&gt;
* Check the parameters you used to create and attach the datacenter by running the following commands&lt;br /&gt;
 export OPENMANO_TENANT=osm&lt;br /&gt;
 openmano datacenter-list&lt;br /&gt;
 openmano datacenter-list &amp;lt;DATACENTER_NAME&amp;gt; -vvv&lt;br /&gt;
* If all seems right, maybe the password was wrong. Try to detach and delete the datacenter, and then create and attach it again with the right password.&lt;br /&gt;
 openmano datacenter-detach openstack-site&lt;br /&gt;
 openmano datacenter-delete openstack-site&lt;br /&gt;
 openmano datacenter-create openstack-site http://10.10.10.11:5000/v2.0 --type openstack --description &amp;quot;OpenStack site&amp;quot;&lt;br /&gt;
 openmano datacenter-attach openstack-site --user=admin --password=userpwd --vim-tenant-name=admin&lt;br /&gt;
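&lt;br /&gt;
For the https case above, a minimal sketch of installing a CA certificate inside the RO container (the certificate file name is just a placeholder):&lt;br /&gt;
 lxc exec RO bash&lt;br /&gt;
 cp vim-ca.crt /usr/local/share/ca-certificates/&lt;br /&gt;
 update-ca-certificates&lt;br /&gt;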
&lt;br /&gt;
== Deployment fails with the error message &amp;quot;VIM Exception vimconnUnexpectedResponse Unauthorized: The request you have made requires authentication. (HTTP 401)&amp;quot; ==&lt;br /&gt;
Follow the same instructions as for the error &amp;quot;Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== OSM RO service fails to start with a message &amp;quot;DATABASE wrong version&amp;quot; ==&lt;br /&gt;
&#039;&#039;&#039;Q. OSM RO service (osm-ro service in RO container) fails to start and logs show &amp;quot;DATABASE wrong version&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
 2016-11-02T17:19:51 CRITICAL  openmano openmanod:268 DATABASE wrong version &#039;0.15&#039;.&lt;br /&gt;
 Try to upgrade/downgrade to version &#039;0.16&#039; with &#039;database_utils/migrate_mano_db.sh&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. The reason is that the RO has been upgraded to a new version that requires a newer database version. To upgrade the database, run database_utils/migrate_mano_db.sh and provide credentials if needed (by default the database user is &#039;mano&#039; and the database password is &#039;manopw&#039;).&lt;br /&gt;
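&lt;br /&gt;
A minimal sketch (the path to the RO sources below is an assumption and may differ in your installation; provide the database credentials if the script asks for them):&lt;br /&gt;
 lxc exec RO bash&lt;br /&gt;
 cd /opt/openmano/database_utils&lt;br /&gt;
 ./migrate_mano_db.sh&lt;br /&gt;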
&lt;br /&gt;
== Deployment fails with the message &#039;&#039;Error creating image at VIM &#039;xxxx&#039;: Cannot create image without location&#039;&#039;  ==&lt;br /&gt;
&lt;br /&gt;
The reason for the failure is that there is a mismatch between the image names (and checksums) in the OSM VNFD and at the VIM. Basically, the image is not present at the VIM.&lt;br /&gt;
&lt;br /&gt;
To fix it, you should add the image at the VIM and ensure that it is visible with the VIM credentials provided to the RO. In the RO container you can easily list the VIM images using these credentials with the command:&lt;br /&gt;
 openmano vim-image-list --datacenter &amp;lt;xxxxx&amp;gt;&lt;/div&gt;</summary>
		<author><name>Fernandezca</name></author>
	</entry>
	<entry>
		<id>https://osm.etsi.org/wikipub/index.php?title=User:Fernandezca&amp;diff=2556</id>
		<title>User:Fernandezca</title>
		<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php?title=User:Fernandezca&amp;diff=2556"/>
		<updated>2018-04-19T10:23:16Z</updated>

		<summary type="html">&lt;p&gt;Fernandezca: Created page with &amp;quot;  Carolina Fernández   org: i2CAT   slack: carolina.fernandez&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;  Carolina Fernández&lt;br /&gt;
  org: i2CAT&lt;br /&gt;
  slack: carolina.fernandez&lt;/div&gt;</summary>
		<author><name>Fernandezca</name></author>
	</entry>
	<entry>
		<id>https://osm.etsi.org/wikipub/index.php?title=Logs_and_troubleshooting_(Release_THREE)&amp;diff=2555</id>
		<title>Logs and troubleshooting (Release THREE)</title>
		<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php?title=Logs_and_troubleshooting_(Release_THREE)&amp;diff=2555"/>
		<updated>2018-04-18T16:03:07Z</updated>

		<summary type="html">&lt;p&gt;Fernandezca: VCA-related logs and link to newly reported VCA and Juju issue&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Known errors are captured in our [[Technical FAQ]]. If the error you experienced is not there, you can troubleshoot the different components following the instructions in this section.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Logs=&lt;br /&gt;
==UI logs==&lt;br /&gt;
&lt;br /&gt;
The server-side UI logs can be obtained on the SO-ub container at:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/rift/usr/share/rw.ui/skyquake/err.log&lt;br /&gt;
/usr/rift/usr/share/rw.ui/skyquake/out.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Client side UI logs can be obtained in the Developer Console in a browser.&lt;br /&gt;
&lt;br /&gt;
==SO logs==&lt;br /&gt;
&lt;br /&gt;
SO logs can be obtained on the SO-ub container at:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/var/log/syslog&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==RO logs==&lt;br /&gt;
RO logs are in the file &amp;quot;/var/log/osm/openmano.log&amp;quot; on the RO container. You can copy it from the container to the OSM machine with:&lt;br /&gt;
&amp;lt;pre&amp;gt;lxc file pull RO/var/log/osm/openmano.log .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The log file and the log level (&#039;&#039;debug&#039;&#039; by default) can be set in the file &amp;quot;/etc/osm/openmanod.cfg&amp;quot; on the RO container, as illustrated below.&lt;br /&gt;
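&lt;br /&gt;
An illustrative excerpt (the key names are assumptions based on a typical openmanod.cfg; check them against your own file before editing, and restart the osm-ro service afterwards):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
log_level: DEBUG&lt;br /&gt;
log_file:  /var/log/osm/openmano.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;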
&lt;br /&gt;
==VCA logs==&lt;br /&gt;
&lt;br /&gt;
General LXC-related and Juju-related logs can be obtained in the Juju controller&#039;s container, inside the VCA container, at:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ lxc exec VCA bash&lt;br /&gt;
$ lxc list&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
|      NAME       |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-288 | RUNNING | 10.44.127.188 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
&lt;br /&gt;
$ lxc exec juju-f050fc-0 bash&lt;br /&gt;
&lt;br /&gt;
# General Juju logs&lt;br /&gt;
/var/log/juju/logsink.log&lt;br /&gt;
&lt;br /&gt;
# Juju controller logs&lt;br /&gt;
/var/log/juju/machine-0.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Troubleshooting=&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting UI==&lt;br /&gt;
&lt;br /&gt;
===UI status===&lt;br /&gt;
* The status of the UI process can be checked by running the following command in the SO-ub container as root:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
forever list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* You should see a status similar to the following, with an uptime:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
info:    Forever processes running&lt;br /&gt;
data:        uid  command         script                                                                                                                 forever pid   id logfile                    uptime&lt;br /&gt;
data:    [0] uivz /usr/bin/nodejs skyquake.js --enable-https --keyfile-path=/usr/rift/etc/ssl/current.key --certfile-path=/usr/rift/etc/ssl/current.cert 21071   21082    /root/.forever/forever.log 0:18:12:3.231&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In case the UI server process has issues, you won&#039;t see an uptime but will instead see a status &amp;quot;STOPPED&amp;quot; in that position.&lt;br /&gt;
* In case the UI server process never started, you&#039;ll see a status saying &amp;quot;No forever processes running&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
===Restarting UI===&lt;br /&gt;
* The UI is restarted when the SO module (launchpad) is restarted.&lt;br /&gt;
* However, in case just the UI needs to be restarted, you can run the following command in the SO-ub container as root:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
forever restartall&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting SO==&lt;br /&gt;
&lt;br /&gt;
===SO status===&lt;br /&gt;
The SO now starts as a service, but earlier versions of the SO used to run it as a process.&lt;br /&gt;
To check if the SO is running as a service, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lxc exec SO-ub systemctl status launchpad.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Restarting SO===&lt;br /&gt;
Note: Restarting the SO also restarts the UI.&lt;br /&gt;
&lt;br /&gt;
When the SO has been started as a service (see SO status section above), use the following commands to restart the SO:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lxc exec SO-ub systemctl stop launchpad.service&lt;br /&gt;
lxc exec SO-ub systemctl start launchpad.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the SO hasn&#039;t been started as a service, follow these instructions to restart it:&lt;br /&gt;
 lxc restart SO-ub # Optional; needed if there is an existing running instance of launchpad&lt;br /&gt;
 lxc exec SO-ub -- nohup sudo -b -H /usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- ./demos/launchpad.py --use-xml-mode&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting RO==&lt;br /&gt;
&lt;br /&gt;
===RO status===&lt;br /&gt;
The status of the RO process can be checked by running the following command in the RO container as root:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 service osm-ro status&lt;br /&gt;
● osm-ro.service - openmano server&lt;br /&gt;
   Loaded: loaded (/etc/systemd/system/osm-ro.service; disabled; vendor preset: enabled)&lt;br /&gt;
   Active: active (running) since Wed 2017-04-02 16:50:21 UTC; 2s ago&lt;br /&gt;
 Main PID: 550 (python)&lt;br /&gt;
    Tasks: 1&lt;br /&gt;
   Memory: 51.2M&lt;br /&gt;
      CPU: 717ms&lt;br /&gt;
   CGroup: /system.slice/osm-ro.service&lt;br /&gt;
           └─550 python openmanod -c /etc/osm/openmanod.cfg --log-file=/var/log/osm/openmano.log&lt;br /&gt;
&lt;br /&gt;
Nov 02 16:50:21 RO-integration systemd[1]: Started openmano server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If it is not running, check the latest log entries in &#039;&#039;&#039;/var/log/osm/openmano.log&#039;&#039;&#039;&lt;br /&gt;
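&lt;br /&gt;
For instance, a simple illustrative command to see the latest entries:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tail -n 100 /var/log/osm/openmano.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;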
&lt;br /&gt;
===Restarting RO===&lt;br /&gt;
The RO can be restarted on the RO container with:&lt;br /&gt;
&amp;lt;pre&amp;gt;service osm-ro restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting VCA==&lt;br /&gt;
&lt;br /&gt;
===VCA status===&lt;br /&gt;
&amp;lt;pre&amp;gt;lxc exec VCA -- juju status&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the Juju account is in &#039;green&#039; status in the UI&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;lxc exec VCA -- juju config &amp;lt;app&amp;gt; &amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;lxc exec VCA -- juju list-actions &amp;lt;app&amp;gt; &amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;lxc exec VCA -- juju show-action-status &amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;lxc exec VCA -- juju show-action-output &amp;lt;action-id&amp;gt; &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===VCA password===&lt;br /&gt;
&lt;br /&gt;
To retrieve the VCA password, install [[OsmClient|osmclient]] and run the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ lxc list&lt;br /&gt;
+-------+---------+--------------------------------+------+------------+-----------+&lt;br /&gt;
| NAME  |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |&lt;br /&gt;
+-------+---------+--------------------------------+------+------------+-----------+&lt;br /&gt;
| RO    | RUNNING | 10.143.142.48 (eth0)           |      | PERSISTENT | 0         |&lt;br /&gt;
+-------+---------+--------------------------------+------+------------+-----------+&lt;br /&gt;
| SO-ub | RUNNING | 10.143.142.43 (eth0)           |      | PERSISTENT | 0         |&lt;br /&gt;
+-------+---------+--------------------------------+------+------------+-----------+&lt;br /&gt;
| VCA   | RUNNING | 10.44.127.1 (lxdbr0)           |      | PERSISTENT | 0         |&lt;br /&gt;
|       |         | 10.143.142.139 (eth0)          |      |            |           |&lt;br /&gt;
+-------+---------+--------------------------------+------+------------+-----------+&lt;br /&gt;
$ export OSM_HOSTNAME=10.143.142.43&lt;br /&gt;
$ osm config-agent-list&lt;br /&gt;
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+&lt;br /&gt;
| name    | account-type | details                                                                                                              |&lt;br /&gt;
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+&lt;br /&gt;
| osmjuju | juju         | {u&#039;secret&#039;: u&#039;OTlhODg1NmMxN2JhODg1MTNiOTY4ZTk0&#039;, u&#039;user&#039;: u&#039;admin&#039;, u&#039;ip-address&#039;: u&#039;10.44.127.235&#039;, u&#039;port&#039;: 17070} |&lt;br /&gt;
+---------+--------------+----------------------------------------------------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Under the details column, the &#039;secret&#039; value is the admin user password.&lt;br /&gt;
===Restarting VCA===&lt;br /&gt;
&lt;br /&gt;
* The Juju controller can be restarted or started in case of a problem.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lxc exec VCA bash&lt;br /&gt;
&lt;br /&gt;
$ lxc list&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
|      NAME       |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-288 | RUNNING | 10.44.127.188 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         | &amp;lt;-- Juju controller instance&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
&lt;br /&gt;
# Get the name of the Juju controller instance and access (e.g., &amp;quot;juju-f050fc-0&amp;quot;)&lt;br /&gt;
lxc exec juju-f050fc-0 bash&lt;br /&gt;
# Stop the process if needed by killing the process running on default port 17070&lt;br /&gt;
# Or, if not active, run the following in background (or exit with Ctrl+C)&lt;br /&gt;
/var/lib/juju/init/jujud-machine-0/exec-start.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
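&lt;br /&gt;
As an illustrative (not the only) way of locating the process bound to the default port 17070 before stopping it:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
netstat -apen | grep 17070    # note the PID/program in the last column (e.g. 359/jujud)&lt;br /&gt;
kill &amp;lt;pid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;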
&lt;br /&gt;
===Known error messages in the VCA and their solution===&lt;br /&gt;
&lt;br /&gt;
* [https://osm.etsi.org/wikipub/index.php/Technical_FAQ#.22cannot_load_cookies:_file_locked_for_too_long.22_where_charms_are_not_loaded Cannot load charms due to &amp;quot;ERROR cannot load cookies: file locked for too long; giving up: cannot acquire lock: open /root/.local/share/juju/cookies/osm.json.lock: no space left on device&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
=Software upgrade=&lt;br /&gt;
You can find instructions in this link: [[Software upgrade (Release THREE)]]&lt;/div&gt;</summary>
		<author><name>Fernandezca</name></author>
	</entry>
	<entry>
		<id>https://osm.etsi.org/wikipub/index.php?title=Technical_FAQ_(Release_THREE)&amp;diff=2554</id>
		<title>Technical FAQ (Release THREE)</title>
		<link rel="alternate" type="text/html" href="https://osm.etsi.org/wikipub/index.php?title=Technical_FAQ_(Release_THREE)&amp;diff=2554"/>
		<updated>2018-04-18T16:02:10Z</updated>

		<summary type="html">&lt;p&gt;Fernandezca: Solution to fully loaded VCA containers where also Juju controller ceased to run&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Instantiation failed&amp;quot;, but VMs and networks were successfully created ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q. After trying to instantiate, I got the message that the instantiation failed without much information about the reason. After checking the logs, it seems to be a timeout issue. However, I am seeing that the VMs and networks were created at the VIM.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. First check in the RO that there is an IP address in the management interface of each VNF of the NS.&lt;br /&gt;
 lxc exec RO --env OPENMANO_TENANT=osm -- openmano instance-scenario-list                           # to identify the running scenarios in the RO&lt;br /&gt;
 lxc exec RO --env OPENMANO_TENANT=osm -- openmano instance-scenario-list &amp;lt;id&amp;gt; -vvv | grep ip  # to get verbose information on a specific scenario in the RO&lt;br /&gt;
&lt;br /&gt;
If no IP address is present in the management interface of each VNF, then you are hitting a SO-RO timeout issue. The reason is typically a wrong configuration of the VIM. The way how management IP addresses are assigned to the VNFs change from one VIM to another. In all the cases, the recommendation is the following:&lt;br /&gt;
* Pre-provision a management network in the VIM, with DHCP enabled. You can see, for instance, the instructions in the case of Openstack (https://osm.etsi.org/wikipub/index.php/Openstack_configuration_(Release_TWO) ).&lt;br /&gt;
* Then make sure that, at instantiation time, you specify a mapping between the management network in the NS and the VIM network name that you pre-provisioned at the VIM.&lt;br /&gt;
&lt;br /&gt;
If the IP address is present in the management interface, then you are probably hitting a SO-VCA timeout, caused because the VNF configuration via Juju charms takes too long. To confirm, connect to VCA container and check &amp;quot;juju status&amp;quot;.&lt;br /&gt;
 lxc exec VCA -- juju status&lt;br /&gt;
&lt;br /&gt;
Then, if you see an error, you should debug the VNF charm or ask the people providing that VNF package.&lt;br /&gt;
&lt;br /&gt;
== &amp;quot;cannot load cookies: file locked for too long&amp;quot; where charms are not loaded ==&lt;br /&gt;
&lt;br /&gt;
Depending on the remainder of the error message, this most likely means that a condition on the server hosting OSM is not allowing the charm to be loaded.&lt;br /&gt;
&lt;br /&gt;
For instance, the following error indicates that the VCA container is full and cannot host new containers for Juju: &amp;quot;Cannot load charms due to &amp;quot;ERROR cannot load cookies: file locked for too long; giving up: cannot acquire lock: open /root/.local/share/juju/cookies/osm.json.lock: no space left on device&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
To solve that, the containers and data created through Juju should be removed. Check the connection to the OSM Juju config agent account. Is it red/unavailable? Check if the service is running on port 17070, inside the Juju controller container that runs inside the VCA container; otherwise restore it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Access VCA container&lt;br /&gt;
$ lxc exec VCA bash&lt;br /&gt;
&lt;br /&gt;
# Check the Juju status. The command may get stalled if the service is not running in the Juju controller&lt;br /&gt;
root@VCA:~# juju status&lt;br /&gt;
&lt;br /&gt;
# Check if Juju is running on the Juju controller&lt;br /&gt;
# First, check the name for the LXC with the Juju controller&lt;br /&gt;
# In this case, juju_controller_instance_id = 10.44.127.136&lt;br /&gt;
&lt;br /&gt;
root@VCA:~# lxc list | grep &amp;quot;${juju_controller_instance_id}&amp;quot;&lt;br /&gt;
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
&lt;br /&gt;
# Check if the service for the agent account is running. It is not running in this case&lt;br /&gt;
root@VCA:~# lxc exec juju-f050fc-0 -- netstat -apen | grep 17070&lt;br /&gt;
(Not all processes could be identified, non-owned process info&lt;br /&gt;
 will not be shown, you would have to be root to see it all.)&lt;br /&gt;
&lt;br /&gt;
# Check if disk is completely filled in VCA&lt;br /&gt;
root@VCA:~# df -h&lt;br /&gt;
&lt;br /&gt;
# If it is, remove some LXCs belonging to the failed machines&lt;br /&gt;
# Note: keep the Juju controller container! (here, the last row)&lt;br /&gt;
# The IP is available on the configuration section, under &amp;quot;Accounts&amp;quot; in the OSM dashboard&lt;br /&gt;
&lt;br /&gt;
root@VCA:~# lxc list&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
|      NAME       |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-260 | RUNNING | 10.44.127.190 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-269 | RUNNING | 10.44.127.69 (eth0)  |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-272 | RUNNING | 10.44.127.118 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-277 | RUNNING | 10.44.127.128 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-278 | RUNNING | 10.44.127.236 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-282 | RUNNING | 10.44.127.61 (eth0)  |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-ed3163-283 | RUNNING | 10.44.127.228 (eth0) |      | PERSISTENT | 0         |&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
| juju-f050fc-0   | RUNNING | 10.44.127.136 (eth0) |      | PERSISTENT | 0         |   # &amp;lt;- Do not remove the Juju controller!&lt;br /&gt;
+-----------------+---------+----------------------+------+------------+-----------+&lt;br /&gt;
&lt;br /&gt;
# Example: lxc stop juju-ed3163-260; lxc delete juju-ed3163-260&lt;br /&gt;
lxc stop ${name}; lxc delete ${name}&lt;br /&gt;
&lt;br /&gt;
# Clean the status in Juju by removing the machines, units and apps whose LXCs were removed before&lt;br /&gt;
root@VCA:~# juju status&lt;br /&gt;
Model    Controller  Cloud/Region         Version  SLA&lt;br /&gt;
default  osm         localhost/localhost  2.2.2    unsupported&lt;br /&gt;
&lt;br /&gt;
App                      Version  Status       Scale  Charm      Store  Rev  OS      Notes&lt;br /&gt;
flhf-testcf-flhfilter-b           active         0/1  fl7filter  local   20  ubuntu&lt;br /&gt;
flhf-testd-flhfilter-b            active         0/1  fl7filter  local   22  ubuntu&lt;br /&gt;
flhf-testdc-flhfilter-b           active         0/1  fl7filter  local   25  ubuntu&lt;br /&gt;
flhf-va-flhfilter-b               active         0/1  fl7filter  local   21  ubuntu&lt;br /&gt;
ids-test-ac-ids-b                 active         0/1  ids        local    1  ubuntu&lt;br /&gt;
lala-dpi-b                        maintenance    0/1  dpi        local    4  ubuntu&lt;br /&gt;
lhbcdf-lcdfilter-b                maintenance    0/1  l23filter  local   20  ubuntu&lt;br /&gt;
&lt;br /&gt;
Unit                       Workload     Agent   Machine  Public address  Ports  Message&lt;br /&gt;
flhf-testcf-flhfilter-b/0  unknown      lost    260      10.44.127.190          agent lost, see &#039;juju show-status-log flhf-testcf-flhfilter-b/0&#039;&lt;br /&gt;
flhf-testd-flhfilter-b/1   unknown      lost    278      10.44.127.236          agent lost, see &#039;juju show-status-log flhf-testd-flhfilter-b/1&#039;&lt;br /&gt;
flhf-testdc-flhfilter-b/2  unknown      lost    282      10.44.127.61           agent lost, see &#039;juju show-status-log flhf-testdc-flhfilter-b/2&#039;&lt;br /&gt;
flhf-va-flhfilter-b/0      unknown      lost    272      10.44.127.118          agent lost, see &#039;juju show-status-log flhf-va-flhfilter-b/0&#039;&lt;br /&gt;
ids-test-ac-ids-b/0        unknown      lost    277      10.44.127.128          agent lost, see &#039;juju show-status-log ids-test-ac-ids-b/0&#039;&lt;br /&gt;
lala-dpi-b/1               maintenance  failed  283      10.44.127.228          installing charm software&lt;br /&gt;
lhbcdf-lcdfilter-b/0       unknown      lost    269      10.44.127.69           agent lost, see &#039;juju show-status-log lhbcdf-lcdfilter-b/0&#039;&lt;br /&gt;
&lt;br /&gt;
Machine  State  DNS            Inst id          Series  AZ  Message&lt;br /&gt;
260      down   10.44.127.190  juju-ed3163-260  trusty      Running&lt;br /&gt;
269      down   10.44.127.69   juju-ed3163-269  trusty      Running&lt;br /&gt;
272      down   10.44.127.118  juju-ed3163-272  trusty      Running&lt;br /&gt;
277      down   10.44.127.128  juju-ed3163-277  trusty      Running&lt;br /&gt;
278      down   10.44.127.236  juju-ed3163-278  trusty      Running&lt;br /&gt;
282      down   10.44.127.61   juju-ed3163-282  trusty      Running&lt;br /&gt;
283      down   10.44.127.228  juju-ed3163-283  trusty      Running&lt;br /&gt;
&lt;br /&gt;
# Example: juju machine-remove 260&lt;br /&gt;
juju machine-remove ${&amp;quot;machine&amp;quot; whose ip is related to juju_instance_id} --force&lt;br /&gt;
&lt;br /&gt;
# Start Juju in the juju controller&lt;br /&gt;
root@VCA:~# lxc exec juju-f050fc-0 bash&lt;br /&gt;
# Cancel if needed or run in background&lt;br /&gt;
root@juju-f050fc-0:~# /var/lib/juju/init/jujud-machine-0/exec-start.sh &amp;amp;&lt;br /&gt;
^C&lt;br /&gt;
# Verify that the process is running&lt;br /&gt;
root@juju-f050fc-0:~# sudo netstat -apen | grep 17070&lt;br /&gt;
(Not all processes could be identified, non-owned process info&lt;br /&gt;
 will not be shown, you would have to be root to see it all.)&lt;br /&gt;
tcp        0      0 127.0.0.1:58402         127.0.0.1:17070         ESTABLISHED 0          22936276    359/jujud&lt;br /&gt;
tcp        0      0 10.44.127.136:56746     10.44.127.136:17070     ESTABLISHED 0          22936277    359/jujud&lt;br /&gt;
tcp        0      0 10.44.127.136:56740     10.44.127.136:17070     ESTABLISHED 0          22940759    359/jujud&lt;br /&gt;
tcp        0      0 127.0.0.1:58432         127.0.0.1:17070         ESTABLISHED 0          22940767    359/jujud&lt;br /&gt;
tcp6       0      0 :::17070                :::*                    LISTEN      0          22940744    359/jujud&lt;br /&gt;
tcp6       0      0 127.0.0.1:17070         127.0.0.1:58432         ESTABLISHED 0          22930280    359/jujud&lt;br /&gt;
tcp6       0      0 10.44.127.136:17070     10.44.127.136:56740     ESTABLISHED 0          22939104    359/jujud&lt;br /&gt;
tcp6       0      0 10.44.127.136:17070     10.44.127.136:56746     ESTABLISHED 0          22936278    359/jujud&lt;br /&gt;
tcp6       0      0 127.0.0.1:17070         127.0.0.1:58402         ESTABLISHED 0          22940756    359/jujud&lt;br /&gt;
&lt;br /&gt;
# Go to SO-ub container and restart SO service to connect again against the Juju controller&lt;br /&gt;
$ lxc exec SO-ub bash&lt;br /&gt;
root@SO-ub:~# service launchpad restart&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After this process, access the OSM dashboard and check again the connectivity from the &amp;quot;Accounts&amp;quot; tab. It should be green, and now any new NS instantiated should correctly load its associated charm.&lt;br /&gt;
&lt;br /&gt;
== &amp;quot;Instantiation failed&amp;quot; and VMs and network were not created at VIM ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q. After trying to instantiate, I got the message that the instantiation failed without much information about the reason. I connected to the VIM and checked that the VMs and networks were not created.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. You are hitting a SO-RO timeout, caused either by the lack of communication from the RO to the VIM or because the creation of VMs and networks from the RO to the VIMs takes too long.&lt;br /&gt;
&lt;br /&gt;
== SO connection error: not possible to contact OPENMANO-SERVER (openmanod) ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q&#039;&#039;&#039;. The NS operational data of an instantiated NS in the SO CLI shows &amp;quot;Connection error: not possible to contact OPENMANO-SERVER (openmanod)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. Please check connectivity from SO-ub to RO container. Can you ping the RO IP address (configured in SO) from SO-ub container? If not, then make sure that osm-ro service is up and running on the RO container.&lt;br /&gt;
 $ lxc exec RO -- bash&lt;br /&gt;
 root@RO:~# service osm-ro status&lt;br /&gt;
 root@RO:~# OPENMANO_TENANT=osm openmano datacenter-list&lt;br /&gt;
&lt;br /&gt;
== Deployment fails with the error message &amp;quot;Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)&amp;quot; ==&lt;br /&gt;
&#039;&#039;&#039;Q. During instantiation, I got the following error message: &amp;quot;Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)&amp;quot;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. The cause is that Openstack has not been properly added to openmano with the right credentials&lt;br /&gt;
&lt;br /&gt;
Go to the RO container:&lt;br /&gt;
 lxc exec RO bash&lt;br /&gt;
&lt;br /&gt;
Install the package python-openstackclient, in case it does not exist:&lt;br /&gt;
 apt install -y python-openstackclient&lt;br /&gt;
&lt;br /&gt;
Execute the following commands with the appropriate substitution to check that your openstack is reachable and you can do specific actions:&lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug network list &lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug host list &lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug flavor list &lt;br /&gt;
 openstack --os-project-name &amp;lt;project-name&amp;gt; --os-auth-url &amp;lt;auth-url&amp;gt; --os-username &amp;lt;auth-username&amp;gt; --os-password &amp;lt;auth-password&amp;gt; --debug server list &lt;br /&gt;
 #providing the same URL, credentials as you provide to openmano. The (--debug) will show more info so that you will see the IP/ports it tries to access&lt;br /&gt;
&lt;br /&gt;
This is to ensure that you have access to the openstack endpoints and that you are using the right credentials.&lt;br /&gt;
&lt;br /&gt;
Case 1. If any of the previous commands do not work, then either your connectivity to Openstack endpoints is incorrect or you are not using the right parameters. Please check internally and debug until the previous commands work. Check also the guidelines here: [[OSM_Release_TWO#Openstack_site]].&lt;br /&gt;
&lt;br /&gt;
Case 2. If all of them worked, then follow these guidelines:&lt;br /&gt;
* Use &amp;quot;v2&amp;quot; authorization URL. &amp;quot;v3&amp;quot; is currently experimental in the master branch and is not recommended yet.&lt;br /&gt;
* If https (instead of http) is used for authorization URL, you can either use the insecure option at datacenter-create (See [[Openstack_configuration_(Release_TWO)#Add_openstack_at_OSM]]); or install the certificate at RO container, by e.g. putting a .crt (not .pem) certificate at /usr/local/share/ca-certificates and running update-ca-certificates. &lt;br /&gt;
* Check the parameters you used to create and attach the datacenter by running the following commands&lt;br /&gt;
 export OPENMANO_TENANT=osm&lt;br /&gt;
 openmano datacenter-list&lt;br /&gt;
 openmano datacenter-list &amp;lt;DATACENTER_NAME&amp;gt; -vvv&lt;br /&gt;
* If all seems right, maybe the password was wrong. Try to detach and delete the datacenter, and then create and attach it again with the right password.&lt;br /&gt;
 openmano datacenter-detach openstack-site&lt;br /&gt;
 openmano datacenter-delete openstack-site&lt;br /&gt;
 openmano datacenter-create openstack-site http://10.10.10.11:5000/v2.0 --type openstack --description &amp;quot;OpenStack site&amp;quot;&lt;br /&gt;
 openmano datacenter-attach openstack-site --user=admin --password=userpwd --vim-tenant-name=admin&lt;br /&gt;
&lt;br /&gt;
== Deployment fails with the error message &amp;quot;VIM Exception vimconnUnexpectedResponse Unauthorized: The request you have made requires authentication. (HTTP 401)&amp;quot; ==&lt;br /&gt;
Follow the same instructions as for the error &amp;quot;Not possible to get_networks_list from VIM: AuthorizationFailure: Authorization Failed: The resource could not be found. (HTTPS: 404)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== OSM RO service fails to start with a message &amp;quot;DATABASE wrong version&amp;quot; ==&lt;br /&gt;
&#039;&#039;&#039;Q. OSM RO service (osm-ro service in RO container) fails to start and logs show &amp;quot;DATABASE wrong version&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
 2016-11-02T17:19:51 CRITICAL  openmano openmanod:268 DATABASE wrong version &#039;0.15&#039;.&lt;br /&gt;
 Try to upgrade/downgrade to version &#039;0.16&#039; with &#039;database_utils/migrate_mano_db.sh&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A&#039;&#039;&#039;. The reason is that the RO has been upgraded with a new version that requires a new database version. To upgrade the database version, run database_utils/migrate_mano_db.sh and provide credentials if needed (by default database user is &#039;mano&#039;, and database password is &#039;manopw&#039;)&lt;br /&gt;
&lt;br /&gt;
== Deployment fails with the message &#039;&#039;Error creating image at VIM &#039;xxxx&#039;: Cannot create image without location&#039;&#039;  ==&lt;br /&gt;
&lt;br /&gt;
The reason for the failure is that there is a mismatch between the image names (and checksums) in the OSM VNFD and at the VIM. Basically, the image is not present at the VIM.&lt;br /&gt;
&lt;br /&gt;
To fix it, you should add the image at the VIM and ensure that it is visible with the VIM credentials provided to the RO. In the RO container you can easily list the VIM images using these credentials with the command:&lt;br /&gt;
 openmano vim-image-list --datacenter &amp;lt;xxxxx&amp;gt;&lt;/div&gt;</summary>
		<author><name>Fernandezca</name></author>
	</entry>
</feed>