Commit 0efe7639 authored by garciadav's avatar garciadav

Add testing procedure for HA VCA

parent 6bb11a7b


## Production readiness

### \[PR-01\] HA VCA

First, check the number of nodes in your VCA with the following command:

```bash
$ juju status -m controller
Model       Controller  Cloud/Region         Version  SLA          Timestamp
controller  osm         localhost/localhost  2.7.7    unsupported  14:45:51Z

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.16.13.145  juju-ff4c51-0  xenial      Running
```
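Before enabling HA there should be a single controller machine. The check can be scripted; the following is a minimal sketch that counts machine rows in the captured `juju status` output (assumption: every machine row starts with its machine number):

```shell
# Machine section captured from "juju status -m controller" above
status='Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.16.13.145  juju-ff4c51-0  xenial      Running'

# Count rows that begin with a machine number
echo "$status" | grep -c '^[0-9]'   # prints 1
```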

Then, open a `mongo` shell in the MongoDB container and execute the following:

```
> use osm
switched to db osm
> db.admin.find({"_id": "juju"})
{ "_id" : "juju", "api_endpoints" : [ "10.16.13.145:17070" ] }
```

You should see only one `api_endpoint` in the list, and it should match the address shown in the `juju status -m controller` output.
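This comparison can also be scripted. A hedged sketch that counts the `api_endpoints` entries in the document returned above (assumption: every endpoint uses the default Juju API port 17070):

```shell
# Document returned by db.admin.find({"_id": "juju"}) before enable-ha
doc='{ "_id" : "juju", "api_endpoints" : [ "10.16.13.145:17070" ] }'

# Count host:port entries on the Juju API port
echo "$doc" | grep -oE '[0-9.]+:17070' | wc -l   # prints 1
```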

At this point, you can go to [this test](#basic-06-vnf-with-charm) to deploy a proxy charm.

Now we will scale out the Juju controller:

```bash
$ juju enable-ha
maintaining machines: 0
adding machines: 1, 2
```

It takes a while for the cluster to form. Run `watch -c juju status -m controller --color` to monitor the process.

When you see the following output, the machines are ready, but it still takes a couple of minutes for the cluster to be fully configured.

```
Model       Controller  Cloud/Region         Version  SLA          Timestamp
controller  osm         localhost/localhost  2.7.7    unsupported  16:24:48Z

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.16.13.145  juju-ff4c51-0  xenial      Running
1        started  10.16.13.252  juju-ff4c51-1  xenial      Running
2        started  10.16.13.220  juju-ff4c51-2  xenial      Running
```

Now, go back to the `mongo` shell to check the list of endpoints again:

> Note: It can take up to 5 minutes for the new endpoints to appear.

```
> use osm
switched to db osm
> db.admin.find({"_id": "juju"})
{ "_id" : "juju", "api_endpoints" : [ "10.16.13.145:17070", "10.16.13.252:17070", "10.16.13.220:17070" ] }
```
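After `enable-ha`, the same endpoint count (sketched earlier, assuming the default Juju API port 17070) should now be 3:

```shell
# Document returned by db.admin.find({"_id": "juju"}) after enable-ha
doc='{ "_id" : "juju", "api_endpoints" : [ "10.16.13.145:17070", "10.16.13.252:17070", "10.16.13.220:17070" ] }'
echo "$doc" | grep -oE '[0-9.]+:17070' | wc -l   # prints 3
```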

When we deployed the proxy charm, only the first node existed. Now we will stop that container (`juju-ff4c51-0`) and then remove the previously created network service:

```bash
$ lxc stop juju-ff4c51-0
$ juju status -m controller
Model       Controller  Cloud/Region         Version  SLA          Timestamp
controller  osm         localhost/localhost  2.7.7    unsupported  16:32:47Z

Machine  State    DNS           Inst id        Series  AZ  Message
0        down     10.16.13.145  juju-ff4c51-0  xenial      Running
1        started  10.16.13.252  juju-ff4c51-1  xenial      Running
2        started  10.16.13.220  juju-ff4c51-2  xenial      Running
```
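To confirm programmatically which machine went down, one option is to filter the status output; this is a sketch that assumes the second column of a machine row is its state:

```shell
# Machine section captured from "juju status -m controller" after stopping the container
status='Machine  State    DNS           Inst id        Series  AZ  Message
0        down     10.16.13.145  juju-ff4c51-0  xenial      Running
1        started  10.16.13.252  juju-ff4c51-1  xenial      Running
2        started  10.16.13.220  juju-ff4c51-2  xenial      Running'

# Print the machine numbers whose state is "down"
echo "$status" | awk '$2 == "down" { print $1 }'   # prints 0
```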

Now, remove the network service:

```bash
$ osm ns-list
+------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
| ns instance name | id                                   | date                | ns state | current operation | error details |
+------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
| hackfest5        | 779f984b-0448-4ee6-b787-e45caa6e35c4 | 2020-07-01T15:40:50 | READY    | IDLE (None)       | N/A           |
+------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
To get the history of all operations over a NS, run "osm ns-op-list NS_ID"
For more details on the current operation, run "osm ns-op-show OPERATION_ID"
$ osm ns-delete hackfest5
Deletion in progress
$ osm ns-list
+------------------+----+------+----------+-------------------+---------------+
| ns instance name | id | date | ns state | current operation | error details |
+------------------+----+------+----------+-------------------+---------------+
+------------------+----+------+----------+-------------------+---------------+
To get the history of all operations over a NS, run "osm ns-op-list NS_ID"
For more details on the current operation, run "osm ns-op-show OPERATION_ID"
```
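A quick scripted check that the instance is really gone (sketch: data rows in the `osm ns-list` table contain a 36-character UUID, while the header and separator rows do not):

```shell
# Empty table printed by "osm ns-list" after the deletion completes
table='+------------------+----+------+----------+-------------------+---------------+
| ns instance name | id | date | ns state | current operation | error details |
+------------------+----+------+----------+-------------------+---------------+
+------------------+----+------+----------+-------------------+---------------+'

# A data row would carry a UUID; no match means no NS instances remain
if ! echo "$table" | grep -qE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}'; then
  echo "NS removed"
fi
```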

The network service should be removed cleanly, without leaving any Juju models behind.