LXD configuration for OSM Release FIVE


Summary

LXD is a pure container hypervisor that runs unmodified Linux guest operating systems with VM-style operations at high speed and density, which makes it particularly well-suited for developing complex systems. OSM's VCA (VNF Configuration and Abstraction) uses LXD to deploy proxy charms.

Configuring LXD

LXD will be installed and configured as part of the OSM installation, but you can follow the steps below to install it manually or change its default behaviour.

Removing apt-installed LXD

Some LXD packages may be installed by default and will conflict with the snap-installed version of LXD. If you are working from a clean VM, removing these packages is safe. Otherwise, verify that you don't have any containers running, as they will be destroyed.
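If the lxc client from the apt packages is still available, you can check for running containers before purging; this is a quick sanity check, not part of the removal itself:

lxc list                              # An empty table means no containers will be destroyed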

# Get a list of LXC/LXD packages that are installed via apt
dpkg -l | grep "lx[cd]"
ii  liblxc-common                   3.0.2-0ubuntu1~18.04.1                      amd64        Linux Containers userspace tools (common tools)
ii  liblxc1                         3.0.2-0ubuntu1~18.04.1                      amd64        Linux Containers userspace tools (library)
ii  lxcfs                           3.0.2-0ubuntu1~18.04.1                      amd64        FUSE based filesystem for LXC
ii  lxd                             3.0.2-0ubuntu1~18.04.1                      amd64        Container hypervisor based on LXC - daemon
ii  lxd-client                      3.0.2-0ubuntu1~18.04.1                      amd64        Container hypervisor based on LXC - client

# Remove the packages
sudo apt-get remove --purge lxd lxd-client lxcfs liblxc1 liblxc-common

Installing LXD

Previous releases of OSM installed LXD via apt from the Ubuntu Archives. We now recommend installing from snap.

sudo snap install lxd
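
Depending on your environment, you may also need to add your user to the lxd group so the lxc client can reach the daemon without sudo; log out and back in (or use newgrp) for the change to take effect:

sudo usermod -a -G lxd $(whoami)      # Grant the current user access to the LXD socket
newgrp lxd                            # Apply the new group membership in this shell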

Next, we'll configure LXD to create the lxdbr0 bridge and a ZFS storage pool. ZFS uses copy-on-write, so creating containers is fast.

cat <<EOF | lxd init --preseed
config: {}
networks:
- config:
    ipv4.address: auto
    ipv4.nat: true
    ipv6.address: none
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config:
    size: 30GB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null
EOF

If you get errors, check the Troubleshooting section below.
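
If the preseed applied cleanly, you can verify the resulting network, storage pool and profile with the standard lxc client commands:

lxc network show lxdbr0               # Bridge with the auto-assigned IPv4 subnet
lxc storage show default              # ZFS storage pool
lxc profile show default              # Default profile with eth0 and root devices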

Testing LXD

To test that your LXD installation is correct, try to deploy a container and run 'apt-get update' from inside:

lxc launch ubuntu:16.04 test          # Create a container based on Ubuntu 16.04 with name 'test'
lxc exec test bash                    # Access the container
root@test:~# apt-get update           # Run command 'apt-get update' from inside the container
root@test:~# exit                     # Exit from the container
lxc stop test                         # Stop the container
lxc delete test                       # Delete the container

Troubleshooting

Error: Failed to create network 'lxdbr0': Failed to automatically find an unused IPv4 subnet, manual configuration required

This typically happens when you have a route for 10.0.0.0/8 on your machine, effectively marking the entirety of the 10.0.0.0/8 RFC1918 space as directly attached.
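
You can check whether such a route exists with ip route; the matched line below is only illustrative:

ip route | grep "^10\."               # e.g. '10.0.0.0/8 via 192.168.0.1 dev eth0' claims the whole range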

When we initialize LXD with "ipv4.address: auto", it tries up to 100 random subnets of the form 10.x.y.1/24. A candidate is rejected if it a) already appears in the routing table or b) responds to ping, so a route covering 10.0.0.0/8 causes every candidate to be rejected.

To work around this, determine an unused subnet and edit the LXD preseed YAML above, replacing "ipv4.address: auto" with that subnet, e.g. "ipv4.address: 10.10.10.1/24".
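
For example, the networks section of the preseed would then begin as follows (10.10.10.1/24 is only an illustrative subnet; pick one that is unused on your machine):

networks:
- config:
    ipv4.address: 10.10.10.1/24
    ipv4.nat: true
    ipv6.address: none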

Error: Failed to update network 'lxdbr0': not found

If you get the error "Failed to update network 'lxdbr0': not found", you will have to delete the lxdbr0 bridge manually before continuing with the LXD configuration:

sudo ip link del lxdbr0

Fixing MTU mismatch

An MTU mismatch between the VM running your VNF and the container running its proxy charm can cause packets to be dropped, making configuration over SSH fail.

In cases like this, we can configure LXD's default device profile to set a specific MTU.

lxc profile device set default eth0 mtu 1446

New containers will have the updated MTU. Existing containers will need to be restarted via lxc restart in order for the new MTU to take effect.
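
To confirm that a container picked up the new MTU (the container name 'test' is just an example):

lxc exec test -- ip link show eth0    # The reported mtu should now be 1446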