LXD configuration for OSM Release TWO

== Summary ==
 
LXD is a pure container hypervisor that runs unmodified Linux guest operating systems with VM-style operations at incredible speed and density. This makes it particularly well-suited for developing complex systems. This can be used to install OSM without tainting your host system with its dependencies. OSM modules will be running in LXD containers, thus not affecting your host system.
 
== Configuring LXD ==
 
The current installation is intended to be used with the Ubuntu 16.04 LTS.
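
If you are not sure which Ubuntu release your machine is running, you can check it before proceeding:

  lsb_release -a                   # should report Ubuntu 16.04.x LTS (xenial)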
 
=== Installing LXD ===
 
Along with LXD, we'll install ZFS, to use as LXD's storage backend, for optimal performance.
 
  sudo apt-get update
  sudo apt-get install zfs
  sudo apt install lxd
  newgrp lxd                  # required to log the user in the lxd group if lxd was just installed


Configure LXD to use zfs, with a bridge for networking:
 
  sudo lxd init
   Name of the storage backend to use (dir or zfs) [default=zfs]:
   Create a new ZFS pool (yes/no) [default=yes]?
   Name of the new ZFS pool [default=lxd]:
   Would you like to use an existing block device (yes/no) [default=no]?
   Size in GB of the new loop device (1GB minimum) [default=15]:
   Would you like LXD to be available over the network (yes/no) [default=no]?
   Do you want to configure the LXD bridge (yes/no) [default=yes]?
    Do you want to setup an IPv4 subnet? Yes
      Default values apply for next questions
    Do you want to setup an IPv6 subnet? No
  LXD has been successfully configured.
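
If you want to double-check the result, the following commands give a quick view of the storage backend, the ZFS pool and the new bridge (the pool name lxd is the default accepted above):

  lxc info | grep -i storage       # the server should report zfs as its storage backend
  sudo zfs list                    # the "lxd" pool created by lxd init
  ip address show lxdbr0           # the bridge created by the LXD bridge configuration
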
=== Network Bridge ===
By default, LXD creates a bridge named lxdbr0. You can modify this bridge, for example by changing its MTU, and the changes will be reflected on the interfaces of the containers managed by the host.


Although further customization is possible, default options for LXD bridge configuration will work.
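
With the stock LXD 2.0 packages on Ubuntu 16.04, the bridge settings chosen during lxd init are stored by the lxd-bridge service in /etc/default/lxd-bridge; reviewing that file is an easy way to see (or later adjust) the bridge configuration:

  cat /etc/default/lxd-bridge      # shows LXD_BRIDGE, LXD_IPV4_ADDR, LXD_IPV4_NETWORK, the DHCP range, etc.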


=== MTU ===
 
Check the MTU of the LXD bridge (lxdbr0) and the MTU of the default interface. If they are different, change the MTU of the LXD bridge accordingly to have the same MTU as the default interface.
 
'''Note: In this example, we will assume that the default interface is ens3 and its MTU is 1446'''
 
  lxc list                         # This will drive initialization of lxdbr0
  ip address show ens3             # In case ens3 is the default interface
  ip address show lxdbr0
  sudo ifconfig lxdbr0 mtu 1446    # Use the appropriate MTU value

Delete any previously-made rc.local changes:
 
  sudo sed -i '/lxc list/d' /etc/rc.local
  sudo sed -i '/ifconfig lxdbr0 mtu/d' /etc/rc.local
 
Make the MTU change persistent across reboots:
sudo sed -i "$ i lxc list > /dev/null" /etc/rc.local
  sudo sed -i "$ i ifconfig lxdbr0 mtu 1446" /etc/rc.local
 
After a reboot, rc.local runs before the lxd-bridge service is started. That is why "lxc list" is run in rc.local before the MTU is configured.
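
For reference, this is roughly how /etc/rc.local should end up looking after the two insertions above (assuming the stock Ubuntu 16.04 rc.local, which ends with "exit 0", and 1446 as the example MTU):

  #!/bin/sh -e
  # ... existing contents of rc.local ...
  lxc list > /dev/null        # forces lxd-bridge to be initialized before the MTU is changed
  ifconfig lxdbr0 mtu 1446    # use the appropriate MTU value
  exit 0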
 
== LXD within LXD (optional, only for advanced users) ==
 
While the OSM installer won't install any dependencies in the system, it will add a minimal configuration (NAT rules and routes) to your host system in order to work properly. If you want to avoid even that minimal configuration, you might be interested in using LXD within LXD: you create an LXD container in the host (the host container) which runs LXD again, and the OSM modules run in LXD containers inside the host container. This is also called '''nesting'''.
 
As illustrated below, on your Host System (a laptop, a virtual machine, etc.) you launch the Host Container with nesting enabled. Inside the Host Container, we'll launch the containers for OSM: SO, RO, and VCA.
 
+--------------------------------------------------------------------------------+
|                              Host System                                       |
|                                                                                |
|                              eth0: 192.168.1.173                               |
|                                                                                |
| +----------------------------------------------------------------------------+ |
| |                          +----------v-----------+                          | |
| |                          | Host Container       |                          | |
| |                          |                      |                          | |
| |                          | eth0: 10.0.3.59      |                          | |
| |                          | lxdbr0: 10.143.142.1 |                          | |
| |            +-------------+----------+-----------+------------+             | |
| |            |                        |                        |             | |
| |            |                        |                        |             | |
| | +----------v-----------+ +----------v-----------+ +----------v-----------+ | |
| | | SO-ub                | | RO                   | | VCA                  | | |
| | |                      | |                      | |                      | | |
| | | eth0: 10.143.142.216 | | eth0: 10.143.142.216 | | eth0: 10.143.142.216 | | |
| | |                      | |                      | | lxdbr0: 10.44.127.1  | | |
| | +----------------------+ +----------------------+ +----------------------+ | |
| +----------------------------------------------------------------------------+ |
+--------------------------------------------------------------------------------+
 
Note that the IP addresses used in the diagram above and in the instructions below will vary. Replace these IP addresses with the ones on your system.
 
=== Prepare the host system ===
 
You need to configure LXD in your host system, following the same steps indicated [[#Configuring LXD|above]].
 
=== Launch the Host Container ===
 
Launch a container to host the OSM installation:
 
lxc launch ubuntu:16.04 osmr2 -c security.privileged=true -c security.nesting=true
 
=== Resource Limits ===
 
Setting limits will prevent the container's processes from using an unexpected amount of host resources. Here, we'll set the resource limits to OSM Release 2's recommended minimum resources:
 
lxc config set osmr2 limits.cpu 4
lxc config set osmr2 limits.memory 8GB
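
To confirm that the nesting, privilege and limit settings were applied, you can inspect the container's configuration:

lxc config show osmr2        # should list security.nesting, security.privileged, limits.cpu and limits.memory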
 
=== Configuring the Host Container ===
 
Before we install OSM, we want to make sure LXD is installed and configured ''inside'' the Host Container. On Ubuntu 16.04, we want to enable backports to install the latest stable version of LXD. '''Note''': this is a repeat of the steps above to prepare the Host Machine.
 
lxc exec osmr2 bash
sudo add-apt-repository -u "deb http://archive.ubuntu.com/ubuntu $(lsb_release -cs)-backports main restricted universe multiverse"
sudo apt update
sudo apt upgrade
sudo apt -t xenial-backports install lxd
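
Still inside the Host Container, you can verify that the backports version of LXD is now installed (the exact version depends on what xenial-backports currently provides):

lxd --version                # should report a newer version than the stock 2.0.x
apt-cache policy lxd         # shows the installed version and the xenial-backports candidate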
 
Next, initialize LXD inside the host container. Unless otherwise noted, the defaults will work fine. ZFS won't work inside a container, but that's okay; we've set it up on the host machine.
 
sudo lxd init
  Do you want to configure a new storage pool (yes/no) [default=yes]?
  Name of the new storage pool [default=default]:
  Name of the storage backend to use (dir, btrfs, lvm, zfs) [default=zfs]: '''dir'''
  Would you like LXD to be available over the network (yes/no) [default=no]?
  Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
  Would you like to create a new network bridge (yes/no) [default=yes]?
  What should the new bridge be called [default=lxdbr0]?
  What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?
  What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? '''none'''
  LXD has been successfully configured.
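
Before continuing, you can optionally check that the nested LXD daemon and its bridge are working inside the Host Container:

lxc list                     # should return an empty container list without errors
ip address show lxdbr0       # the bridge created by lxd init inside the Host Container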
 
Continue with the [https://osm.etsi.org/wikipub/index.php/OSM_Release_TWO#Install_OSM OSM Release 2] installation, and return to this page once the installation is complete.
 
=== Routing ===
 
All containers within the osmr2 container will be assigned an IP address from the lxdbr0 interface inside the Host Container. The steps below will route traffic destined for the OSM containers through the osmr2 container's primary interface.
 
==== Get the bridge and network IP of osmr2 ====
 
From the Host Machine, get the two IPv4 addresses of the osmr2 container: one for the lxdbr0 bridge inside the Host Container, and one for its eth0 interface as seen from the Host Machine. These tell us the destination network and the gateway to use when routing traffic into the Host Container.
 
lxc list osmr2
+-------+---------+--------------------------------+------+------------+-----------+
| NAME  |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+-------+---------+--------------------------------+------+------------+-----------+
| osmr2 | RUNNING | 10.143.142.1 (lxdbr0)          |      | PERSISTENT | 0         |
|       |         | 10.0.3.59 (eth0)               |      |            |           |
+-------+---------+--------------------------------+------+------------+-----------+
 
==== Route traffic from the Host Machine to the Host Container ====
 
sudo route add -net 10.143.142.0/24 gw 10.0.3.59
 
You should now be able to connect to the SO in your browser. Open up https://10.143.142.216:8443/ and verify you can reach the OSM Launchpad.
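
If the Launchpad does not load, first verify from the Host Machine that the route is in place and that the SO container is reachable (10.143.142.216 and 10.0.3.59 are the example addresses used above):

ip route show | grep 10.143.142       # the route to 10.143.142.0/24 via 10.0.3.59 should be listed
curl -k https://10.143.142.216:8443/  # should return the Launchpad web page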
 
=== Troubleshooting ===
 
==== Unable to launch nested container ====
 
If the installation fails with an error message similar to this:
 
Creating RO
Starting RO         
error: Error calling 'lxd forkstart RO /var/lib/lxd/containers /var/log/lxd/RO/lxc.conf': err='exit status 1'
  lxc 20170530162611.740 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:234 - No such file or directory - failed to change apparmor profile to lxd-coherent-reptile_</var/lib/lxd>//&:lxd-RO <var-lib-lxd>:
  lxc 20170530162611.740 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170530162611.740 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "RO".
  lxc 20170530162612.281 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170530162612.281 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "RO".
 
Try `lxc info --show-log local:RO` for more info.
 
There is a known issue if the version of LXD on your system is 2.12 or higher, where the default version of LXD (2.0.9) included in the Xenial cloud image conflicts with newer versions of LXD. In this case, you either need to downgrade your host machine's LXD to 2.0.9, or wait for the release of 2.0.10, which fixes this version conflict.
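
To check whether you are affected, compare the LXD version on the Host Machine with the one inside the Host Container:

lxd --version                       # LXD version on the Host Machine
lxc exec osmr2 -- lxd --version     # LXD version inside the Host Container
apt-cache policy lxd                # available versions, in case you need to pin or downgrade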
