How to install OSM in Amazon EC2 (Rel THREE)
Configure LXD
LXD setup
Install the lxd package
sudo apt-get update
sudo apt-get install -y lxd
newgrp lxd
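If you want to confirm that the package installed correctly before continuing, you can print the LXD version (optional check):
lxd --version   # Any version string here means the lxd package is in place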
Configure LXD with a bridge for networking
sudo lxd init
Although further customization is possible, these options for LXD bridge configuration will work:
- Name of the storage backend to use (dir or zfs) [default=dir]
- Would you like LXD to be available over the network (yes/no) [default=no]
- Do you want to configure the LXD bridge (yes/no) [default=yes]?
- Do you want to setup an IPv4 subnet? Yes
- Default values apply for the remaining questions
- Do you want to setup an IPv6 subnet? No
Beware that, by default, LXD creates a bridge named lxdbr0.
Check MTU
Check the MTU of the LXD bridge (lxdbr0) and the MTU of the default interface. If they are different, change the default MTU of the containers. This might be required, for instance, when running OSM in a VM under certain conditions.
Note: In this example, we will assume that the default interface is ens3 and its MTU is 1446:
lxc list                                              # This will drive initialization of lxdbr0
ip address show ens3                                  # In case ens3 is the default interface
ip address show lxdbr0
sudo lxc profile device set default eth0 mtu 1446     # Use the appropriate MTU value
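To confirm that the MTU change was applied, you can inspect the default profile (optional; the eth0 device should show the value you just set):
lxc profile show default   # The eth0 device should list 'mtu: 1446' in this example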
Testing LXD
To test that your LXD installation is correct, try to deploy a container and run 'apt-get update' from inside:
lxc launch ubuntu:16.04 test    # Create a container based on Ubuntu 16.04 with name 'test'
lxc exec test bash              # Access the container
root@test:~# apt-get update     # Run command 'apt-get update' from inside the container
root@test:~# exit               # Exit from the container
lxc stop test                   # Stop the container
lxc delete test                 # Delete the container
INSTALL OSM FROM SCRIPT
Download the script, make it executable, and run the installer
wget https://osm-download.etsi.org/ftp/osm-3.0-three/install_osm.sh
chmod +x install_osm.sh
lxc list
./install_osm.sh --lxdimages -R ReleaseTHREE-hackfest2
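Once the installer finishes, it can be useful to confirm that the OSM containers are running. The names below are the ones typically created by the Release THREE installer (your listing may differ slightly):
lxc list   # The RO, VCA and SO-ub containers should appear as RUNNING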
Capture the instructions shown at the end of the installation and update your .bashrc file
You might be interested in adding the following OSM client env variables to your .bashrc file:
export OSM_HOSTNAME=10.126.86.221
export OSM_RO_HOSTNAME=10.126.86.42
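For example, you can append them to your .bashrc and reload it (the IP addresses shown are the ones printed by this particular installation and will differ in yours):
echo 'export OSM_HOSTNAME=10.126.86.221' >> ~/.bashrc
echo 'export OSM_RO_HOSTNAME=10.126.86.42' >> ~/.bashrc
source ~/.bashrc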
Create the AWS VIM
osm vim-create --name aws-site --user xxxxxxx --password yyyyyyyyy --auth_url https://aws.amazon.com \
    --tenant admin --account_type aws \
    --config '{region_name: us-west-2, flavor_info: {t2.nano: {cpus: 1, disk: 100, ram: 512}, t2.micro: {cpus: 1, disk: 100, ram: 1024}, t2.small: {cpus: 1, disk: 100, ram: 2048}, m1.small: {cpus: 1, disk: 160, ram: 1741}}}'
a. Verify vim creation:
i. osm vim-list
ii. lxc exec RO --env OPENMANO_TENANT=osm openmano datacenter-list
Check that the route to reach the Juju controller goes via the VCA and that it was added properly:
i. cat /etc/rc.local
ii. route add -host 10.44.127.207 gw 10.126.86.216
(where .207 is the Juju controller IP and the gateway IP is the VCA IP, which you can get from 'lxc list VCA')
iii. Remove any additional routes
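For persistence across reboots, the route is expected to appear in /etc/rc.local. A minimal sketch of the relevant entry, using the example IPs above (your Juju controller and VCA IPs will differ):
route add -host 10.44.127.207 gw 10.126.86.216   # Juju controller reached via the VCA container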
Test Accessing the Console
- Try visiting the console by accessing the private IP of the instance from another instance within the same subnet.
i. https://172.31.17.232:8443
ii. You should see the OSM console login screen. Log in with the default credentials.
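If you prefer a quick command-line check from an instance in the same subnet before opening a browser, you can probe the console port (a sketch using the example private IP above; -k is passed on the assumption that the UI serves a self-signed certificate):
curl -k https://172.31.17.232:8443   # A response here means the UI is reachable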
ONBOARD/CONFIGURE VNFs
- Upload the VNF and NS packages (see the sketch with placeholder file names after this list)
osm upload-package …vnf..
osm upload-package …ns…
*You can also do this from the UI by dragging and dropping the VNF FIRST and then the NS file*
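For reference, a minimal sketch with hypothetical package file names (substitute your actual packages; the VNF package must be uploaded before the NS package that references it):
osm upload-package my_vnf.tar.gz   # hypothetical VNF package
osm upload-package my_ns.tar.gz    # hypothetical NS package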
- Log in to the UI, go to Catalog, confirm that the packages are onboarded in the catalog, and then instantiate
Go to Catalog and then click on Management in the NSD descriptor
i. Update the “VIM Network” with a reachable AWS subnet (I picked the same subnet in which my OSM instance resides)
Go to Launchpad > Instantiate
i. Click next
ii. Add an “Instance Name” and verify that the “VIM Network” name is the same as the one you updated in the NSD Descriptor
- Verify successful launch
Once deployed, get the public IP address and try to ssh:
i. ssh ubuntu@<IP>
ii. Pwd: c0mpl3xp4ssw0rd
You can also try to run the following from any Linux machine:
i. nslookup www.telefonica.com <IP>
ii. The domain should be properly resolved by the deployed VNF.