Installing and configuring OpenStack (release 0)
__TOC__


=Infrastructure=
Below is a reference architecture for an Openstack deployment.


[[File:OpenstackDC.png|500px|Openstack Datacenter infrastructure]]


The Openstack Controller needs:
* To make its API accessible from the Resource Orchestrator (openmano). That is the purpose of the VIM mgmt network in the figure.
* To be connected to all compute servers through a network, the DC infrastructure network in the figure.


Compute nodes, besides being connected to the DC infrastructure network, must also be connected to two additional networks:
*Telco/VNF management network, used by Configuration Manager (Juju Server) to configure the VNFs
*Inter-DC network, optionally required to interconnect this datacenter to other datacenters (e.g. in MWC'16 demo, to interconnect the two sites).


VMs will be connected to these two networks at deployment time if requested by openmano.
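For reference, once the datacenter is up and configured, it can be registered in openmano. A minimal sketch, assuming the release 0 openmano client syntax; the site name, URL, credentials and tenant are placeholders:
 openmano datacenter-create openstack-site http://<controller-ip>:5000/v2.0 --type openstack
 openmano datacenter-attach openstack-site --user=<user> --password=<password> --vim-tenant-name=<tenant>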
=VM creation (Openstack controller)=
* Requirements for Openstack controller:
** 4 vCPU
** 4GB RAM
** 40GB disk
** Interfaces to connect to:
*** VIM mgmt network (to interact with RO)
*** DC infrastructure network (to interact with the compute servers and switches)


=Installing Openstack=


==Installation from packstack==
(Detailed here as a reference - example with RHEL OSP7)
 yum install -y openstack-packstack
 packstack --answer-file=/root/osp7.answers
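If the answer file does not exist yet, packstack can first generate a template to edit (shown here with the same example path as above):
 packstack --gen-answer-file=/root/osp7.answers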
In this example, the answer file has the following relevant information:
 CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex,physnet_eno3:br-eno3
 CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno0,br-eno3:eno3
The first configuration line associates the tag physnet_eno3 with the bridge br-eno3. The second line associates the bridge br-eno3 with the interface eno3. Interface eno3 is the interface on the compute nodes that is connected to the Telco/VNF management network and to the interDC network. A different VLAN will be used for each of these networks.
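As an illustrative sanity check (not part of the original procedure), the resulting bridge wiring can be inspected on a node with the standard Open vSwitch CLI:
 ovs-vsctl show
 ovs-vsctl list-ports br-eno3
The second command should list eno3 as a port of br-eno3 if the mappings above were applied.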


=Configuring=
Configure a provider network for the telco/VNF management network.
*In the MWC'16 demo setup, this provider network uses the physical interface eno3, as specified during Openstack installation, and VLAN 2199. Additionally, a subnet is created, where VNFs will be assigned IP addresses from a range (e.g. 10.250.251.2-249), separate from the ranges used in other datacenters.
 neutron net-create net-mgmtOS --provider:network_type=vlan --provider:physical_network=physnet_eno3 --provider:segmentation_id=2199 --shared
 neutron subnet-create --name subnet-mgmtOS net-mgmtOS 10.250.251.0/24 --allocation-pool start=10.250.251.2,end=10.250.251.249
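The result can be verified with the standard neutron CLI:
 neutron net-show net-mgmtOS
 neutron subnet-show subnet-mgmtOS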
 
Configure another provider network for interDC communication. This is required, for instance, to connect VNFs in the Openstack datacenter to VNFs in other datacenters. A similar configuration is required in all datacenters. The physical interconnection of these datacenters is out of the scope of this document.
*In the MWC'16 demo setup, this provider network uses the physical interface eno3, as specified during Openstack installation, and VLAN 108. Additionally, a subnet is created, where VNFs will be assigned IP addresses from a range (e.g. 10.0.4.20-21).
 neutron net-create net-corp:108 --provider:network_type=vlan --provider:physical_network=physnet_eno3 --provider:segmentation_id=108 --shared
 neutron subnet-create --name subnet-interDC net-corp:108 10.0.4.0/24 --allocation-pool start=10.0.4.20,end=10.0.4.21
 
As a side note, if you plan to test the network scenarios from MWC'16, the names of the networks must be preserved (net-mgmtOS, net-corp:108). If different network names are used, NS packages must be updated accordingly.
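Both networks, and the exact names just mentioned, can be double-checked with:
 neutron net-list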


{{Feedback}}