Talk: LXD configuration for OSM Release TWO
Revision as of 14:38, 17 May 2017
Summary
This is a work-in-progress.
TODO: Why?
LXD is a pure container hypervisor that runs unmodified Linux guest operating systems with VM-style operations at incredible speed and density. This makes it particularly well-suited for developing complex systems: OSM can be installed without tainting your host system with its dependencies. We do this via nesting, where a host container launches further containers within itself.
As illustrated below, on your Host System (a laptop, a virtual machine, etc.) you launch the Host Container with nesting enabled. Inside the Host Container, we'll launch the containers for OSM: SO, RO, and VCA.
+-----------------------------------+
|                                   |
|            Host System            |
|                                   |
| +-------------------------------+ |
| |                               | |
| |        Host Container         | |
| |                               | |
| |  +------+ +------+ +-------+  | |
| |  |      | |      | |       |  | |
| |  |  SO  | |  RO  | |  VCA  |  | |
| |  |      | |      | |       |  | |
| |  +------+ +------+ +-------+  | |
| +-------------------------------+ |
+-----------------------------------+
Installing LXD
To run LXD containers, you need to install lxd, plus zfs (Ubuntu-only?) for LXD's storage backend.
Ubuntu
sudo apt-get update
sudo apt-get install zfs
sudo apt -t xenial-backports install lxd
newgrp lxd  # required to log the user in the lxd group if lxd was just installed
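To confirm the installation and the group change took effect, a quick check can help. This is a suggested verification, not a step from the original instructions:

```shell
lxd --version                 # prints the installed LXD version
# The current user must be in the lxd group to talk to the daemon:
id -nG | grep -qw lxd && echo "user is in the lxd group"
```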
CentOS
Other
Configuration
LXD
sudo apt update
sudo apt upgrade
sudo apt -t xenial-backports install lxd
sudo lxd init --auto
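After `lxd init --auto`, it can be worth sanity-checking that the daemon responds. These verification commands are a suggestion and not part of the original instructions:

```shell
lxc version              # both the client and server versions should be printed
lxc profile show default # shows the default profile applied to new containers
lxc list                 # an empty container list means the daemon is reachable
```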
Routing
By default, your containers will be assigned IP addresses from the bridge inside the Host Container.
TODO: Add option(s) for routing the traffic: iptables route from Host System to Container, or a new bridge added to the containers?
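For the first TODO option, an iptables rule on the Host System is one possible approach. This is only a sketch of that idea, not a configuration the page has settled on; the address 10.44.127.10 and port 8443 are placeholder values:

```shell
# Sketch only: forward traffic arriving on the Host System to a container
# inside the Host Container. Replace 10.44.127.10:8443 with your
# container's actual address and port.
sudo iptables -t nat -A PREROUTING -p tcp --dport 8443 \
     -j DNAT --to-destination 10.44.127.10:8443
# Allow the forwarded traffic through the FORWARD chain.
sudo iptables -A FORWARD -p tcp -d 10.44.127.10 --dport 8443 -j ACCEPT
# IP forwarding must be enabled on the Host System.
sudo sysctl -w net.ipv4.ip_forward=1
```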
Advanced
If you want finer-grain control of how LXD is configured, you can omit the `--auto` flag and change the default options:
sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
ZFS
Network Bridge
By default, LXD creates a bridge named lxdbr0. You can modify this bridge (for example, to change its MTU), and the changes will be reflected on the interfaces of the containers managed by the host container.
Although further customization is possible, the default options for the LXD bridge configuration will work.
Check the MTU of the LXD bridge (lxdbr0) and the MTU of the default interface. If they differ, adjust the MTU of the LXD bridge to match:
lxc list  # This will drive initialization of lxdbr0
ip address show ens3  # In case ens3 is the default interface
ip address show lxdbr0
sudo ifconfig lxdbr0 mtu 1446  # Use the appropriate MTU value
sudo sed -i '/ifconfig lxdbr0 mtu/d' /etc/rc.local  # To make MTU change persistent between reboots
sudo sed -i "$ i ifconfig lxdbr0 mtu 1446" /etc/rc.local  # To make MTU change persistent between reboots. Use the appropriate MTU value.
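The check-and-fix steps above can also be wrapped in a small helper script. This is only a sketch: the interface name ens3 and the sysfs paths are assumptions about your system, and `ip link set` is used here in place of `ifconfig`.

```shell
#!/bin/sh
# Read an MTU value from a file; takes a path so the logic can also be
# exercised against ordinary files, not just /sys/class/net/*/mtu.
read_mtu() {
    cat "$1"
}

# Succeeds (exit 0) when the two files hold different MTU values.
mtu_differs() {
    [ "$(read_mtu "$1")" != "$(read_mtu "$2")" ]
}

# Copy the default interface's MTU onto lxdbr0 only when they differ.
# ens3 is an assumption; substitute your default interface.
sync_bridge_mtu() {
    if mtu_differs /sys/class/net/ens3/mtu /sys/class/net/lxdbr0/mtu; then
        sudo ip link set dev lxdbr0 mtu "$(read_mtu /sys/class/net/ens3/mtu)"
    fi
}
```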
Launching your host instance
Launch a container to host the OSM installation:
lxc launch ubuntu:16.04 osmr2 -c security.privileged=true -c security.nesting=true
lxc exec osmr2 bash
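Once inside the host container, you can verify that nesting actually works before installing OSM. This is a suggested sanity check, not a step from the original instructions; `nest-test` is just an example name:

```shell
# Run inside the osmr2 container (after `lxc exec osmr2 bash`).
lxd init --auto                 # initialize the nested LXD daemon
lxc launch ubuntu:16.04 nest-test
lxc list                        # nest-test should appear and reach RUNNING
lxc delete --force nest-test    # remove the throwaway container
```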