Installing and configuring OpenStack (release 0)

Infrastructure

Below is a reference architecture for an Openstack deployment.

Figure: Openstack datacenter infrastructure

Openstack Controller needs:

  • To make its API accessible from the Resource Orchestrator (openmano). That's the purpose of the VIM mgmt network in the figure (a reachability check is sketched after this list).
  • To be connected to all compute servers through a network, the DC infrastructure network in the figure.
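
Once the controller is up, this reachability can be confirmed by querying the Identity API from the openmano host. The controller IP below is a placeholder; port 5000 and the Keystone v2.0 path assume a default OSP7-style deployment:

# Run from the openmano (RO) host, over the VIM mgmt network:
curl http://<controller-vim-mgmt-ip>:5000/v2.0/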

Compute nodes, besides being connected to the DC infrastructure network, must also be connected to two additional networks:

  • Telco/VNF management network, used by the Configuration Manager (Juju Server) to configure the VNFs.
  • Inter-DC network, optionally required to interconnect this datacenter with other datacenters (e.g. in the MWC'16 demo, to interconnect the two sites).

VMs will be connected to these two networks at deployment time if requested by openmano.

VM creation (Openstack controller)

  • Requirements for the Openstack controller (a creation sketch follows this list):
    • 4 vCPU
    • 4GB RAM
    • 40GB disk
    • Interfaces to connect to:
      • VIM mgmt network (to interact with RO)
      • DC infrastructure network (to interact with the compute servers and switches)
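
As a sketch, on a KVM host this VM could be created with virt-install. The VM name, bridge names and installation source below are illustrative assumptions, not part of the reference setup:

virt-install --name openstack-controller \
  --vcpus 4 --memory 4096 \
  --disk size=40 \
  --network bridge=br-vim-mgmt \
  --network bridge=br-dc-infra \
  --location http://<rhel7-install-mirror>/ \
  --graphics none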

Installing Openstack

Installation with packstack

(Detailed here as a reference - example with RHEL OSP7)

yum install -y openstack-packstack
packstack --answer-file=/root/osp7.answers
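
If no answer file exists yet, packstack can generate a template that can then be edited before launching the installation (the path matches the command above):

packstack --gen-answer-file=/root/osp7.answers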

where the answer file contains the following relevant settings:

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex,physnet_eno3:br-eno3
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno0,br-eno3:eno3

The first configuration line associates the tag physnet_eno3 with the bridge br-eno3, and the second line associates the bridge br-eno3 with the interface eno3. Interface eno3 is the interface on the compute nodes that is connected to both the Telco/VNF management network and the interDC network; a different VLAN is used for each of these networks.
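
After installation, the resulting Open vSwitch wiring can be verified on a compute node. The bridge and port names below follow the answer file above:

ovs-vsctl br-exists br-eno3 && echo "bridge br-eno3 present"
ovs-vsctl list-ports br-eno3    # should list eno3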

Configuring

Configure a provider network for the Telco/VNF management network.

  • In MWC'16 demo setup, this provider network uses the physical interface eno3, as specified during Openstack installation, and VLAN 2199. Additionally, a subnet is created, where VNFs will be assigned IP addresses from a range (e.g. 10.250.251.2-249), separate from the ranges used in other datacenters.
neutron net-create net-mgmtOS --provider:network_type=vlan --provider:physical_network=physnet_eno3 --provider:segmentation_id=2199 --shared
neutron subnet-create --name subnet-mgmtOS net-mgmtOS 10.250.251.0/24 --allocation-pool start=10.250.251.2,end=10.250.251.249
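
The created network and subnet can be checked with the same neutron CLI:

neutron net-show net-mgmtOS
neutron subnet-show subnet-mgmtOS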

Configure another provider network for interDC communication. This is required, for instance, to connect VNFs in the Openstack datacenter to VNFs in other datacenters. A similar configuration is required in all datacenters; the physical interconnection of these datacenters is out of the scope of this document.

  • In MWC'16 demo setup, this provider network uses the physical interface eno3, as specified during Openstack installation, and VLAN 108. Additionally, a subnet is created, where VNFs will be assigned IP addresses from a range (e.g. 10.0.4.20-21).
neutron net-create net-corp:108 --provider:network_type=vlan --provider:physical_network=physnet_eno3 --provider:segmentation_id=108 --shared
neutron subnet-create --name subnet-interDC net-corp:108 10.0.4.0/24 --allocation-pool start=10.0.4.20,end=10.0.4.21
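
As an optional sanity check, a test VM can be booted with one NIC on each provider network, mirroring what openmano will request at deployment time. The image and flavor names and the network UUIDs are placeholders:

nova boot --image <image> --flavor m1.small \
  --nic net-id=<net-mgmtOS-uuid> --nic net-id=<net-corp:108-uuid> \
  test-vm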

As a side note, if you plan to test the network scenarios from MWC'16, the names of the networks must be preserved (net-mgmtOS, net-corp:108); if different network names are used, the NS packages must be updated accordingly.

