Talk: LXD configuration for OSM Release TWO
Summary
This is a work-in-progress.
TODO: Why?
LXD is a pure-container hypervisor that runs unmodified Linux guest operating systems with VM-style operations at high speed and density. This makes it particularly well-suited for developing complex systems. Installing OSM inside an LXD container keeps your host system free of OSM's dependencies. Because the OSM installer itself launches LXD containers, this relies on nesting: the host container must be able to launch containers within itself.
As illustrated below, on your Host System (a laptop, a virtual machine, etc.) you launch the Host Container with nesting enabled. Inside the Host Container, we'll launch the containers for OSM: SO, RO, and VCA.
+-----------------------------------+
|                                   |
|            Host System            |
|                                   |
| +-------------------------------+ |
| |                               | |
| |        Host Container         | |
| |                               | |
| |  +------+ +------+ +-------+  | |
| |  |      | |      | |       |  | |
| |  |  SO  | |  RO  | |  VCA  |  | |
| |  |      | |      | |       |  | |
| |  +------+ +------+ +-------+  | |
| +-------------------------------+ |
+-----------------------------------+
Prepare your host system
Installing LXD
In order to run LXD containers, you need to install lxd together with zfs (Ubuntu-only?), which LXD uses for its storage backend.
Ubuntu
sudo apt-get update
sudo apt-get install zfsutils-linux      # provides the zfs tools used by LXD's storage backend on Ubuntu 16.04
sudo apt -t xenial-backports install lxd
newgrp lxd                               # required to log the user into the lxd group if lxd was just installed
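To verify that the install picked up the right packages (a quick sanity check; the exact version string depends on what xenial-backports currently ships):
lxd --version                          # should report the LXD release from xenial-backports
dpkg -l lxd zfsutils-linux | grep ^ii  # both packages should be listed as installed
sudo zfs list                          # confirms the zfs tools work (it is fine if it reports "no datasets available")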
Routing
By default, your containers will be assigned IP addresses from the bridge inside the Host Container.
TODO: Add option(s) for routing the traffic: iptables route from Host System to Container, or a new bridge added to the containers?
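As a sketch of the first option in the TODO above: a DNAT rule inside the host container can forward traffic arriving on its external interface to one of the inner containers. The interface name eth0, the address 10.44.127.10 (standing in for the SO container's IP on the inner bridge) and port 8443 (standing in for its UI port) are placeholders for illustration only; substitute your own values.
# Run inside the host container. eth0, 10.44.127.10 and 8443 are assumed example values.
# IP forwarding (net.ipv4.ip_forward=1) must be enabled; LXD normally takes care of this.
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8443 \
     -j DNAT --to-destination 10.44.127.10:8443
sudo iptables -A FORWARD -p tcp -d 10.44.127.10 --dport 8443 -j ACCEPT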
Advanced
If you want finer-grained control over how LXD is configured, you can omit the `--auto` flag from `lxd init` and answer the prompts to change the default options:
sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
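For reference, the non-interactive path that `--auto` takes can also receive the storage options directly on the command line; a minimal sketch, assuming the flags of the LXD 2.x series shipped in xenial-backports:
# Mirrors the interactive defaults shown above (zfs backend, pool 'lxd', 15 GB loop device)
sudo lxd init --auto --storage-backend=zfs --storage-pool=lxd --storage-create-loop=15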
ZFS
Network Bridge
By default, LXD creates a bridge named lxdbr0. You can modify this bridge, for example to change its MTU, and the changes will be reflected on the interfaces of the containers attached to it.
Although further customization is possible, the default LXD bridge configuration will work.
Check the MTU of the LXD bridge (lxdbr0) and the MTU of the default interface. If they differ, adjust the MTU of the LXD bridge so that both interfaces use the same value:
lxc list                                                   # This will drive initialization of lxdbr0
ip address show ens3                                       # In case ens3 is the default interface
ip address show lxdbr0
sudo ifconfig lxdbr0 mtu 1446                              # Use the appropriate MTU value
sudo sed -i '/ifconfig lxdbr0 mtu/d' /etc/rc.local         # Remove any previous MTU entry so it is not duplicated
sudo sed -i "$ i ifconfig lxdbr0 mtu 1446" /etc/rc.local   # Make the MTU change persistent between reboots. Use the appropriate MTU value.
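If lxdbr0 is a bridge managed by LXD itself (LXD 2.3 or newer), the MTU can alternatively be stored in LXD's network configuration, which persists across reboots without editing /etc/rc.local; a sketch using the same example value:
lxc network set lxdbr0 bridge.mtu 1446   # persisted by LXD; use the appropriate MTU value
lxc network show lxdbr0                  # verify that the bridge.mtu key was applied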
Launch the host container
Launch a container to host the OSM installation:
lxc launch ubuntu:16.04 osmr2 -c security.privileged=true -c security.nesting=true
lxc exec osmr2 bash
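Before continuing, it can be worth checking that the container is running and that its interface inherited the expected MTU; a quick check, assuming the container's interface is named eth0:
lxc list osmr2                        # should show the container RUNNING with an IPv4 address
lxc exec osmr2 -- ip link show eth0   # the MTU should match the one set on lxdbr0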
Prepare your host container
Before we install OSM, we want to make sure LXD is installed and configured.
LXD
sudo add-apt-repository -u "deb http://archive.ubuntu.com/ubuntu $(lsb_release -cs)-backports main restricted universe multiverse"
sudo apt update
sudo apt upgrade
sudo apt -t xenial-backports install lxd
sudo lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]?
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, btrfs, lvm, zfs) [default=zfs]: dir
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? none
LXD has been successfully configured.
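A quick way to confirm the resulting configuration inside the host container before starting the OSM install (the `lxc network` and `lxc storage` subcommands assume the LXD version pulled from xenial-backports, 2.9 or newer):
lxc list                  # should return an empty container list without errors
lxc network show lxdbr0   # the bridge created above: IPv4 auto, IPv6 disabled
lxc storage list          # the 'default' pool using the dir backend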
Resource Limits
Configuration
And then continue with the OSM Release 2 installation: https://osm.etsi.org/wikipub/index.php/OSM_Release_TWO#Install_OSM