Talk: LXD configuration for OSM Release TWO
Summary
LXD is a pure container hypervisor that runs unmodified Linux guest operating systems with VM-style operations at incredible speed and density. This makes it particularly well-suited for developing complex systems. We can use it to install OSM without tainting your host system with OSM's dependencies, thanks to nesting: the Host Container can launch further containers within itself.
As illustrated below, on your Host System (a laptop, a virtual machine, etc.), you launch the Host Container with nesting enabled. Inside the Host Container, we'll launch the containers for OSM: SO, RO, and VCA.
+------------------------------------------------------------------------------------+
| Host System                                                                         |
| eth0: 192.168.1.173                                                                 |
|                                                                                     |
| +----------------------------------------------------------------------------------+ |
| | Host Container                                                                  | |
| | eth0:   10.0.3.59                                                               | |
| | lxdbr0: 10.143.142.1                                                            | |
| |                                                                                 | |
| |  +----------------------+  +----------------------+  +----------------------+  | |
| |  |        SO-ub         |  |          RO          |  |         VCA          |  | |
| |  |                      |  |                      |  |                      |  | |
| |  | eth0: 10.143.142.216 |  | eth0: 10.143.142.216 |  | eth0: 10.143.142.216 |  | |
| |  | lxdbr0: 10.44.127.1  |  |                      |  |                      |  | |
| |  +----------------------+  +----------------------+  +----------------------+  | |
| +----------------------------------------------------------------------------------+ |
+------------------------------------------------------------------------------------+
Please note that the IP addresses used in the diagram above and in the instructions below will vary; replace them with the ones on your system.
Prepare the Host System
The current installation is intended to be used with Ubuntu 16.04 LTS (Xenial).
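If you're not sure which release the Host System is running, a quick optional check:

lsb_release -sd    # should print something like "Ubuntu 16.04.x LTS"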
Installing LXD
Along with LXD, we'll install ZFS to use as LXD's storage backend for optimal performance.
sudo add-apt-repository -u "deb http://archive.ubuntu.com/ubuntu $(lsb_release -cs)-backports main restricted universe multiverse"
sudo apt-get update
sudo apt-get install zfsutils-linux         # ZFS userspace tools (package name on Ubuntu 16.04)
sudo apt -t xenial-backports install lxd
newgrp lxd                                  # required to log the user in the lxd group if lxd was just installed
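As an optional sanity check, assuming the packages above installed cleanly, you can confirm the versions and group membership before continuing:

lxd --version                           # LXD version pulled from xenial-backports
dpkg -s zfsutils-linux | grep Version   # ZFS userspace tools are present
groups | grep lxd                       # the current user should be in the lxd group after newgrp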
Configure LXD to use ZFS, with a bridge for networking:
sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? none
LXD has been successfully configured.
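To double-check that lxd init did what we expect, assuming you accepted the defaults above (a ZFS pool named lxd and a bridge named lxdbr0):

sudo zpool list          # the "lxd" ZFS pool backing the containers
ip address show lxdbr0   # the bridge created for container networking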
Network Bridge
By default, LXD creates a bridge named lxdbr0. You can modify this bridge, for example by changing its MTU, and the changes will be reflected on the interfaces of the containers attached to it.
Although further customization is possible, the default LXD bridge configuration will work.
MTU
Check the MTU of the LXD bridge (lxdbr0) and the MTU of the default interface. If they differ, set the LXD bridge to the same MTU as the default interface:
lxc list                                                    # This will drive initialization of lxdbr0
ip address show ens3                                        # In case ens3 is the default interface
ip address show lxdbr0
sudo ifconfig lxdbr0 mtu 1446                               # Use the appropriate MTU value
sudo sed -i '/ifconfig lxdbr0 mtu/d' /etc/rc.local          # Delete any previously set MTU
sudo sed -i "$ i ifconfig lxdbr0 mtu 1446" /etc/rc.local    # Add the MTU so it's persistent across reboots
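If you prefer not to eyeball the values, here is a minimal sketch that copies the default interface's MTU onto lxdbr0. It assumes ens3 is your default interface, so adjust that name to match the output of ip route:

DEFAULT_IF=ens3                                  # replace with your default interface
MTU=$(cat /sys/class/net/${DEFAULT_IF}/mtu)      # read the interface's current MTU
sudo ifconfig lxdbr0 mtu "${MTU}"                # apply the same MTU to the LXD bridge
cat /sys/class/net/lxdbr0/mtu                    # verify it now matches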
Launch the Host Container
Launch a container to host the OSM installation:
lxc launch ubuntu:16.04 osmr2 -c security.privileged=true -c security.nesting=true
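Before going further, it's worth an optional check that the container came up and that the flags were applied:

lxc list osmr2                            # should show osmr2 RUNNING with an IPv4 address on eth0
lxc config show osmr2 | grep security     # security.privileged and security.nesting should be "true"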
Resource Limits
Setting limits will prevent any process from using an unexpected amount of resources. Here, we'll set the container's limits to OSM Release 2's recommended minimums:
lxc config set osmr2 limits.cpu 4
lxc config set osmr2 limits.memory 8GB
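You can read the limits back to confirm they were stored:

lxc config get osmr2 limits.cpu      # expect: 4
lxc config get osmr2 limits.memory   # expect: 8GB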
Configuring the Host Container
Before we install OSM, we want to make sure LXD is installed and configured inside the Host Container.
LXD
On Ubuntu Xenial, we want to enable backports to install the latest stable version of LXD. Note: this is a repeat of the steps above to prepare the Host Machine.
lxc exec osmr2 bash
sudo add-apt-repository -u "deb http://archive.ubuntu.com/ubuntu $(lsb_release -cs)-backports main restricted universe multiverse"
sudo apt update
sudo apt upgrade
sudo apt -t xenial-backports install lxd
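At this point you should still be in the shell opened by lxc exec; a quick optional check that we're in the right place with the right LXD:

lsb_release -sc    # should print "xenial" (we're inside the osmr2 container)
lxd --version      # the LXD version just installed from xenial-backports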
Next, initialize LXD inside the Host Container. Unless otherwise noted, the defaults will work fine. ZFS won't work inside a container, but that's okay; we've set it up on the Host Machine.
sudo lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]?
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, btrfs, lvm, zfs) [default=zfs]: dir
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? none
LXD has been successfully configured.
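Still inside osmr2, you can confirm the nested LXD is ready, assuming the defaults above were accepted and the bridge is named lxdbr0:

lxc list                  # empty table for now; no containers have been created yet
ip address show lxdbr0    # the nested bridge, 10.143.142.1 in the example diagram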
Routing
All containers within the osmr2 container will be assigned an IP address from the lxdbr0 interface inside the Host Container. The steps below will route traffic destined for the OSM containers through the osmr2 container's primary interface.
Get the bridge and network IP of osmr2
The osmr2 container should have two IPv4 addresses: one on the lxdbr0 bridge inside the Host Container, and one assigned by the Host Machine. This tells us where to route traffic from the Host Machine to the network inside the Host Container.
lxc list osmr2
+-------+---------+--------------------------------+------+------------+-----------+
| NAME  | STATE   | IPV4                           | IPV6 | TYPE       | SNAPSHOTS |
+-------+---------+--------------------------------+------+------------+-----------+
| osmr2 | RUNNING | 10.143.142.1 (lxdbr0)          |      | PERSISTENT | 0         |
|       |         | 10.0.3.59 (eth0)               |      |            |           |
+-------+---------+--------------------------------+------+------------+-----------+
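The same two addresses can also be read directly with lxc exec from the Host Machine:

lxc exec osmr2 -- ip address show lxdbr0   # bridge network inside the Host Container (10.143.142.0/24 here)
lxc exec osmr2 -- ip address show eth0     # address of osmr2 as seen from the Host Machine (10.0.3.59 here)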
Route traffic from the Host Machine to the Host Container
sudo route add -net 10.143.142.0/24 gw 10.0.3.59   # run on the Host Machine; substitute your own subnet and osmr2 eth0 address
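To check the route and, if you want it to survive reboots, persist it the same way the MTU was persisted above (the addresses here are the example ones; use your own):

ip route show | grep 10.143.142.0                                                # the new route should be listed
ping -c 1 10.143.142.1                                                           # the Host Container's lxdbr0 should answer
sudo sed -i '/route add -net 10.143.142.0/d' /etc/rc.local                       # delete any previously added route
sudo sed -i "$ i route add -net 10.143.142.0/24 gw 10.0.3.59" /etc/rc.local      # re-add it so it's restored at boot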
And then continue with the OSM Release 2 installation: https://osm.etsi.org/wikipub/index.php/OSM_Release_TWO#Install_OSM
You should now be able to connect to the SO in your browser. Open up https://10.143.142.216:8443/ and verify you can reach the OSM Launchpad.
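If you'd rather check from the command line first (10.143.142.216 is the example SO-ub address; substitute the one shown by lxc list inside osmr2):

curl -k https://10.143.142.216:8443/   # -k skips certificate verification; an HTML response means the Launchpad UI is up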