## Diagram

![magmaHF9](/uploads/c1e07f12824302269ef7d591de8841b0/magmaHF9.png)

## Preparation

### Infrastructure preparation

This example requires a PNF, emulated with a VyOS router (image [here](http://osm-download.etsi.org/ftp/osm-6.0-six/7th-hackfest/images/vyos-1.1.7-cloudinit.qcow2.tgz)), connected to a shared management network (`osm-ext` in this example) and to a shared internal "sgi" network where the slice will be placed.
The `build_infra.sh` script contains examples of what needs to be prepared.

1. Make sure you add a VIM that has a default management network, for example: `osm vim-create ... --config='{management_network_name: <vim mgmt-net name>}'`
1. Add the PDU (emulated by a VyOS VM in this environment) with its YAML descriptor. You can do it with `osm pdu-create --descriptor_file pdu.yaml` (after editing at least the VIM ID).
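The PDU descriptor declares the PDU's interfaces so OSM can match them against the PNF descriptor. A hypothetical sketch is below; the VIM account ID, addresses and network names are placeholders, and the actual `pdu.yaml` in this repo is the reference for the exact fields:

```yaml
# Hypothetical PDU descriptor sketch; replace IDs, IPs and network names
# with the ones from your environment (see pdu.yaml in this repo).
name: vyos-pnf
type: gateway
vim_accounts: [ "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" ]   # your VIM ID
interfaces:
  - name: eth0
    ip-address: 192.168.100.50        # address on the management network
    vim-network-name: osm-ext
    mgmt: true
  - name: eth1
    ip-address: 192.168.239.1         # address on the shared "sgi" network
    vim-network-name: sgi
    mgmt: false
```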

### Packages preparation

1. If you just cloned the repo, make sure you run `git submodule update --init` under the osm-packages folder.
1. Take a look at the cloud-init files of the VDUs. They basically remove the default gateway obtained from DHCP and add static routes back to the management networks. Since they map to the ETSI VIM, they might need to be adjusted for your specific use case. The purpose of removing any default gateway acquired from the management interface is to establish the data path to the Internet through the AGW and PNF.
1. Upload the packages to OSM. The `build_slice.sh` file contains some useful commands, from building to launching.
1. Make sure you have the images for the AGW and the srsLTE emulator, available [here](http://osm-download.etsi.org/ftp/osm-7.0-seven/OSM9-hackfest/images/).
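The route adjustments performed by the cloud-init files can be sketched as follows. This is a hypothetical illustration with placeholder addresses; the actual cloud-init files in the packages are the reference:

```yaml
#cloud-config
# Hypothetical sketch: drop the DHCP default gateway acquired on the
# management interface and add a static route back to the management
# networks, so the default path to the Internet goes through the AGW/PNF.
runcmd:
  - ip route del default || true
  - ip route add 172.21.0.0/16 via 192.168.100.1   # route back to mgmt nets
```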

## Launching the Slice

1. Edit `params.yaml` and set an address for your Magma Orc8r proxy service, which the AGW will connect to. The same IP address should go in both `proxyserviceloadBalancerIP` and `orch_ip`, and should belong to your K8s cluster's MetalLB pool. Make sure you assign an IP address that is not already in use in the target cluster.
1. In the same file, set a name and ID for the first AGW in the parameters `agw_id` and `agw_name` (they need to be different each time you launch a new slice).
1. Launch the slice with:
`osm nsi-create --nsi_name magma_slice --nst_name magma_slice_hackfest_nst --config_file params.yaml --ssh_keys <your_key> --vim_account <vim_account>`
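The parameters mentioned above might look like this. This is a hypothetical sketch with placeholder values; the exact structure of `params.yaml` depends on the NST, so use the file in this repo as the reference:

```yaml
# Hypothetical values; the IP must be free and inside the MetalLB pool.
proxyserviceloadBalancerIP: 172.21.248.100
orch_ip: 172.21.248.100     # same address as above
agw_id: agw01               # must change for every new slice instance
agw_name: AGW01
```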

## Verifying the services

### AGW, eNodeB and Subscriber Registration through Day-1 primitives

1. Visit the Orc8r dashboard at the KNF's nginx-proxy service IP, over HTTPS with credentials admin@magma.test / password1234, and check that your AGW has been registered successfully under the list of Gateways at https://<orc8r-nginx-proxy-ip>/nms/osmnet/gateways (after the proxy charms have finished).
2. Via this same dashboard, check that a test subscriber has been added (as a Day-1 primitive), with these parameters:
    - IMSI: 722070000000008
    - KEY: c8eba87c1074edd06885cb0486718341
    - OPC: 17b6c0157895bcaa1efc1cef55033f5f

3. Check that the eNodeB has registered to the AGW (as a Day-1 primitive that uses juju relations) by accessing the Magma AGW via SSH and running `tail -f /var/log/mme.log`.
   Alternatively, you can go to the Magma Orc8r dashboard and look for the "Connected eNodeB" metrics.

### UE attach through Day-2 primitive

After the eNodeB is connected, UE attachment can be done through the following Day-2 primitive:
`osm ns-action magma_slice.slice_hackfest_nsd_epc --vnf_name MagmaAGWsrsLTE --vdu_id srsLTE-vdu --action_name attach-ue --params '{usim-imsi: "722070000000008", usim-k: "c8eba87c1074edd06885cb0486718341", usim-opc: "17b6c0157895bcaa1efc1cef55033f5f"}'`
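For repeated tests, the primitive call can be scripted. The helper below is a hypothetical convenience (only the `osm ns-action` invocation itself comes from this example); the values are the test subscriber registered at Day-1:

```shell
#!/bin/sh
# Build the --params argument for the attach-ue primitive from IMSI/K/OPC.
build_attach_params() {
    printf '{usim-imsi: "%s", usim-k: "%s", usim-opc: "%s"}' "$1" "$2" "$3"
}

PARAMS=$(build_attach_params 722070000000008 \
    c8eba87c1074edd06885cb0486718341 17b6c0157895bcaa1efc1cef55033f5f)
echo "$PARAMS"
# osm ns-action magma_slice.slice_hackfest_nsd_epc --vnf_name MagmaAGWsrsLTE \
#     --vdu_id srsLTE-vdu --action_name attach-ue --params "$PARAMS"
```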

## Testing traffic

After the UE is attached (on the emulator machine), the `tun_srsue` interface will appear and a default route should be added to it automatically (by a script in the image), pointing to the GTP tunnel endpoint.

Make sure the PNF (VyOS router) is pre-configured to deny all traffic unless the source is explicitly added to the MAGMA_AGW group:

```
set firewall group network-group MAGMA_AGW network 192.168.239.10 # this rule is added by the primitive

set firewall name MAGMA_FW default-action drop
set firewall name MAGMA_FW rule 10 action accept
set firewall name MAGMA_FW rule 10 source group network-group MAGMA_AGW

set interfaces ethernet eth1 firewall in name MAGMA_FW
```

With this, a Day-2 primitive can be executed against the PNF to allow traffic from the specific Magma SGI IP address. For example, if it is 192.168.239.10:

`osm ns-action magma_slice.slice_hackfest_nsd_epc --vnf_name VYOS-PNF --action_name configure-remote --params '{magmaIP: "192.168.239.10"}'`

After the primitive is executed, the UE machine will have access to the Internet through the AGW and then the VyOS PNF.
## Additional tests

### Web Proxy service

A web proxy is available as a KNF with primitives, implemented with Squid through the juju-bundles mechanism.

Deploy this NS:
`osm ns-create --ns_name webcache --nsd_name squid-cnf-ns --vim_account <vim_account> --config '{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ]}'`

The UE's browser can now be configured to navigate using this proxy service. Configure it in the browser's preferences, using the IP exposed by the K8s cluster for this service and port 3128 (for HTTP, HTTPS and FTP requests).
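Command-line tools on the UE can use the same proxy via environment variables. The address below is a placeholder for the LoadBalancer IP your cluster assigns to the squid service:

```shell
# Placeholder address: substitute the IP exposed by your K8s cluster.
SQUID_IP=172.21.248.105
export http_proxy="http://$SQUID_IP:3128"
export https_proxy="http://$SQUID_IP:3128"
export ftp_proxy="http://$SQUID_IP:3128"
echo "$http_proxy"
```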

Primitives are available to add/remove allowed URLs:

`osm ns-action webcache --vnf_name squid-vnf --kdu_name squid-kdu --action_name addurl --params '{application-name: squid, url: wikipedia.org}'`

`osm ns-action webcache --vnf_name squid-vnf --kdu_name squid-kdu --action_name deleteurl --params '{application-name: squid, url: wikipedia.org}'`
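To manage several URLs at once, the primitive can be wrapped in a loop. This is a hypothetical convenience script; it only prints the commands so you can review them before running:

```shell
#!/bin/sh
# Emit one addurl primitive invocation per URL (print only; pipe to sh to run).
emit_addurl_cmds() {
    for url in "$@"; do
        echo "osm ns-action webcache --vnf_name squid-vnf --kdu_name squid-kdu" \
             "--action_name addurl --params '{application-name: squid, url: $url}'"
    done
}

emit_addurl_cmds wikipedia.org etsi.org
```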

### Network Slicing

Additional slice instances can be launched (changing `agw_id` and `agw_name`); you should see that only the AGW+emulator NS is launched, as the Orc8r NS is shared.

### Metrics collection

VIM-level metrics are collected by default and can be observed on the Grafana dashboard.

VNF-level metrics are collected through SNMP using a dedicated "execution environment" pod which includes the Prometheus SNMP Exporter module.
Its specification can be found inside the `helm` folder of the VNF package. The metrics appear in the Prometheus GUI (look for names starting with "if", from the IF-MIB) and can be added manually to Grafana.


### Placement

A second slice, reusing the same Orc8r, can be launched at a different VIM, so that AGWs are launched at remote VIMs. The management networks must be reachable between VIMs, so that the Orc8r's exposed address can be reached from the remote AGWs. The procedure is as follows:

1. [If PLA is not available in your setup] Make sure you add the Placement module to your OSM installation.
1. Prepare the second VIM by ensuring it has the PNF/PDU and the required images.
1. Edit `pil_price_list.yaml` and `vnf_price_list.yaml` as desired, ensuring that it is "less expensive" to launch the VNFs at the second VIM. Examples are available in this repo.
1. Copy the files to the placement folder in PLA. A docker-based example is below; the same can be done with kubectl in a similar way.
   ```
   docker cp vnf_price_list.yaml $(docker ps -qf name=osm_pla):/placement/.
   docker cp pil_price_list.yaml $(docker ps -qf name=osm_pla):/placement/.
   ```
1. Uncomment the placement-engine line and launch as usual! You should see the AGW/srsLTE/PNF subnet being instantiated in the second VIM.

### Auto-scaling

The Magma AGW VDU is configured to auto-scale when CPU usage exceeds a threshold. After scaling, services are not automatically balanced (a possible enhancement for the future).

### SDN Assist

The MagmaAGWsrsLTE VNF has been designed so that there can be an SDN-assisted data plane in the internal S1 VLD, as long as the interfaces are declared as "SR-IOV" in the descriptor instead of PARAVIRT/VIRTIO.

Even though it can be tested with any fabric for which there is an SDN Assist plugin in OSM, this documentation will refer to the case where an OpenFlow-based fabric is used to interconnect the servers, so that ONOS VPLS becomes the ideal plugin to use.

Requirements:
- Each server where an AGW or srsLTE VDU is expected to be launched should have an SR-IOV-enabled port towards the OpenFlow fabric.
- The VIM must have been created with a reference to the physnet(s) to use for SR-IOV (typically something like: `--config '{dataplane_physical_net: physnet2, microversion: 2.32}'`)
- An ONOS SDN controller must be installed and reachable from OSM. For example, you can use the following snippet inside the OSM VM:
```
# On the OSM VM: run ONOS in a container
docker run -t -d --restart always --network host --name onos onosproject/onos

# Inside the container: install SSH to reach the karaf CLI
docker exec -it onos /bin/bash
apt update
apt install -y openssh-server

# Connect to the ONOS karaf CLI (default password: karaf)
ssh -p 8101 -o StrictHostKeyChecking=no karaf@localhost

# These entries may require a Ctrl+C to escape back to the prompt,
# or use the ONOS GUI instead.
onos:app activate org.onosproject.openflow-base
onos:app activate org.onosproject.openflow
onos:app activate org.onosproject.ofagent
onos:app activate org.onosproject.vpls
```
- An "SDN Port Mapping" file must be prepared, listing all the possible PCI ports that the VIM can select on each compute node. An example for the ETSI VIM is included in this repo.
- The VIM user must have admin privileges.

Procedure:
1. Create the SDN Controller in OSM, for example:
`osm sdnc-create --name onos01 --type onos_vpls --url http://172.21.248.19:8181 --user karaf --password karaf`
1. Update the VIM to use the SDN Controller and the port mapping file, for example:
`osm vim-update etsi-openstack --sdn_controller onos01 --sdn_port_mapping sdn_port_mapping.yaml`
1. Instantiate with SR-IOV ports. OpenFlow entries will be created and can be seen in the ONOS GUI (http://onos_ip/onos/ui).