![magmaHF9](/uploads/c1e07f12824302269ef7d591de8841b0/magmaHF9.png)
This example requires a PNF, emulated with a VyOS router (image [here](http://osm-download.etsi.org/ftp/osm-6.0-six/7th-hackfest/images/vyos-1.1.7-cloudinit.qcow2.tgz)), connected to a shared management network (osm-ext in this example) and to a shared internal "sgi" network where the Slice will be placed.
A "build_infra.sh" script is included with some examples of what needs to be prepared.
1. Make sure you add a VIM that has a default management network, for example: `osm vim-create ... --config='{management_network_name: <vim mgmt-net name>}'`
1. Add the PDU with the yaml file (emulated by a VyOS VM in this environment). You can do it with `osm pdu-create --descriptor_file pdu.yaml` (editing at least the VIM ID first)
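As a reference, a pdu.yaml for this setup could look roughly like the following sketch (the PDU type, network names and addresses are illustrative assumptions; the VIM ID must be replaced with your own):

```yaml
# Hypothetical PDU descriptor for the VyOS-emulated PNF.
# All values are placeholders -- adapt them to your environment.
name: vyos-pnf
description: VyOS VM acting as the PNF router
type: gateway
vim_accounts: [ "<vim-account-id>" ]   # edit: ID of the VIM where the PDU lives
shared: true
interfaces:
  - name: eth0
    ip-address: 192.168.100.254        # address on the shared management network
    vim-network-name: osm-ext
    mgmt: true
  - name: eth1
    ip-address: 192.168.239.1          # address on the shared "sgi" network
    vim-network-name: sgi
    mgmt: false
```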
1. Add your K8s Cluster to the VIM.
### Packages preparation
1. If you just cloned the repo, make sure you run `git submodule update --init` under the osm-packages folder.
1. Upload the packages to OSM; the "build_slice.sh" file contains some useful commands, from building to launching.
1. Make sure you have the images for the AGW and the srsLTE emulator, available [here](http://osm-download.etsi.org/ftp/osm-7.0-seven/OSM9-hackfest/images/).
1. Edit the params.yaml file and set an address for your Magma Orc8r-proxy service, which the AGW will connect to. The same IP address should be used for both 'proxyserviceloadBalancerIP' and 'orch_ip', and it should belong to your K8s cluster's MetalLB pool. Make sure the address is not already in use in the target cluster.
1. In the same file, set a name and ID for the first AGW in the agw_id and agw_name parameters (they must be different each time you launch a new slice).
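As a sketch, the relevant entries in params.yaml might look like this (the IP address and values are placeholders; only the four parameter names come from this guide, and the file may contain other entries):

```yaml
# Illustrative params.yaml fragment -- replace every value.
proxyserviceloadBalancerIP: "172.21.250.100"  # must belong to the MetalLB pool
orch_ip: "172.21.250.100"                     # same address as above
agw_id: "agw01"                               # unique per slice instance
agw_name: "AGW01"                             # unique per slice instance
```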
1. Launch the slice:
`osm nsi-create --nsi_name magma_slice --nst_name magma_slice_hackfest_nst --config_file params.yaml --ssh_keys <your_key> --vim_account <vim_account>`
1. Visit the Orc8r dashboard at the KNF's nginx-proxy service IP, over HTTPS, with credentials admin@magma.test / password1234. Then check that your AGW has been registered successfully under the list of Gateways at https://<orc8r-nginx-proxy-ip>/nms/osmnet/gateways (after the proxy charms have finished).
2. Via this same dashboard, check that a test subscriber has been added (as a Day-1 primitive), with these parameters:
- IMSI: 722070000000008
- KEY: c8eba87c1074edd06885cb0486718341
- OPC: 17b6c0157895bcaa1efc1cef55033f5f
3. The emulator is now ready to connect; Day-2 primitives are available for this:
`osm ns-action magma_slice.slice_hackfest_nsd_epc --vnf_name 1 --vdu_id srsLTE-vdu --action_name register --params '{mme-addr: "192.168.100.254", gtp-bind-addr: "192.168.100.10", s1c-bind-addr: "192.168.100.10"}'`
`osm ns-action magma_slice.slice_hackfest_nsd_epc --vnf_name 1 --vdu_id srsLTE-vdu --action_name attach-ue --params '{usim-imsi: "722070000000008", usim-k: "c8eba87c1074edd06885cb0486718341", usim-opc: "17b6c0157895bcaa1efc1cef55033f5f"}'`
After the UE is attached (at the emulator machine), the "tun_srsue" interface will appear, and a default route pointing to the GTP tunnel endpoint should be added to it automatically (by a script included in the image).
Make sure the PNF (VyOS router) is pre-configured to deny all traffic unless the source is explicitly added to the MAGMA_AGW group:
```
set firewall group network-group MAGMA_AGW network 192.168.239.10 # this rule is added by the primitive
set firewall name MAGMA_FW default-action drop
set firewall name MAGMA_FW rule 10 action accept
set firewall name MAGMA_FW rule 10 source group network-group MAGMA_AGW
set interfaces ethernet eth1 firewall in name MAGMA_FW
```
With this in place, a Day-2 primitive can be executed against the PNF to allow traffic from a specific Magma SGI IP address. For example, if it is 192.168.239.10:
`osm ns-action magma_slice.slice_hackfest_nsd_epc --vnf_name 2 --action_name configure-remote --params '{magmaIP: "192.168.239.10"}'`
With this, the UE machine will have access to the Internet through the AGW and then the VyOS PNF.
### Web Proxy service
A Web Proxy is available as a KNF with primitives, implemented with Squid through the juju-bundles mechanism.
Deploy this NS:
`osm ns-create --ns_name webcache --nsd_name squid-cnf-ns --vim_account <vim_account> --config '{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ]}'`
The UE's browser can now be configured to navigate through this proxy service. Configure it in the browser's preferences, using the IP exposed by the K8s cluster for this service, and port 3128 (for HTTP, HTTPS and FTP requests).
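Alternatively, command-line tools on the UE can use the same proxy through the standard proxy environment variables (the IP below is a placeholder for the service address exposed by your cluster):

```shell
# Placeholder address -- replace with the IP exposed for the squid service.
PROXY_IP=172.21.250.101
export http_proxy="http://${PROXY_IP}:3128"
export https_proxy="http://${PROXY_IP}:3128"
export ftp_proxy="http://${PROXY_IP}:3128"
# e.g. run: curl -I http://wikipedia.org   (once the URL has been allowed)
```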
Primitives are available to add/remove allowed URLs:
`osm ns-action webcache --vnf_name squid-vnf --kdu_name squid-kdu --action_name addurl --params '{application-name: squid, url: wikipedia.org}'`
`osm ns-action webcache --vnf_name squid-vnf --kdu_name squid-kdu --action_name deleteurl --params '{application-name: squid, url: wikipedia.org}'`
Additional slice instances can be launched (changing agw_id and agw_name); you should see that only the AGW+emulator NS is launched, as the Orc8r NS is shared.
A second slice, reusing the same Orc8r, can be launched at a different VIM, so that AGWs are launched at remote VIMs. This requires the management networks to be reachable between VIMs, so that the Orc8r's exposed address can be reached from the remote AGWs. The procedure is as follows:
1. [If PLA is not available in your setup] Build the PLA image by cloning the repo and running `docker build . -f docker/Dockerfile -t osm_pla:dev`, then plug it into the OSM network. In Docker swarm this would be `docker run -d --name osm_pla --restart always --network netosm osm_pla:dev`.
1. Prepare the second VIM by ensuring it has the PNF/PDU and the required images.
1. Edit `pil_price_list.yaml` and `vnf_price_list.yaml` as desired, ensuring that it is "less expensive" to launch the VNFs at the second VIM. Examples are available in this repo.
1. Copy the files to the placement folder at PLA:
`docker cp vnf_price_list.yaml $(docker ps -qf name=osm_pla):/placement/.`
`docker cp pil_price_list.yaml $(docker ps -qf name=osm_pla):/placement/.`
1. Uncomment the placement-engine line and launch as usual. You should see the AGW/srsLTE/PNF subnet being instantiated in the second VIM.
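For reference, the line in question is the OSM instantiation parameter that delegates placement to PLA; in the config file it would look like the following sketch:

```yaml
# Delegates VNF placement decisions to the PLA module; with the price
# lists above, the "cheaper" second VIM should be selected.
placement-engine: PLA
```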
VIM-level metrics are collected by default and can be observed at the Grafana dashboard.
The Magma AGW VDU is configured for autoscaling when CPU usage exceeds a threshold. After scaling, services are not automatically balanced (a possible enhancement for the future).
The MagmaAGW+srsLTE VNF has been designed to allow an SDN-assisted data plane in the internal S1 VLD, as long as the interfaces are declared as "SR-IOV" in the descriptor instead of PARAVIRT/VIRTIO.
Although it can be tested with any fabric for which there is an SDN Assist plugin in OSM, this documentation covers the case where an OpenFlow-based fabric interconnects the servers, so ONOS VPLS is the ideal plugin to use.
Requirements:
- Each server where an AGW or srsLTE VDU is expected to be launched should have an SR-IOV-enabled port towards the OpenFlow fabric.
- The VIM must have been created with a reference to the physnet(s) to use for SR-IOV (typically something like: `--config '{dataplane_physical_net: physnet2, microversion: 2.32}'`)
- An ONOS SDN controller must be installed and reachable from OSM. For example, you can use the following snippet inside the OSM VM:
```
docker run -t -d --restart always --network host --name onos onosproject/onos
docker exec -it onos /bin/bash
apt update
apt install -y openssh-server
ssh -p 8101 -o StrictHostKeyChecking=no karaf@localhost
# These entries may require a ctrl+c to escape back to prompt, or rather use the ONOS GUI.
onos:app activate org.onosproject.openflow-base
onos:app activate org.onosproject.openflow
onos:app activate org.onosproject.ofagent
onos:app activate org.onosproject.vpls
```
- An "SDN Port Mapping" file must be prepared, mapping every PCI port that the VIM could select to its corresponding fabric switch port. An example for the ETSI VIM is included in this repo.
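A sketch of the general shape of such a file (hostnames, PCI addresses and switch identifiers are placeholders; check the example in the repo for the exact keys your OSM version expects):

```yaml
# Hypothetical SDN port mapping: each compute node's SR-IOV PCI
# addresses paired with the OpenFlow switch port they are cabled to.
- compute_node: compute-0            # hostname as reported by the VIM
  ports:
    - pci: "0000:5e:00.1"            # candidate SR-IOV PCI address
      switch_id: "of:0000000000000001"
      switch_port: "1"
- compute_node: compute-1
  ports:
    - pci: "0000:5e:00.1"
      switch_id: "of:0000000000000001"
      switch_port: "2"
```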
- The VIM user must have admin privileges.
Procedure:
1. Create the SDN Controller in OSM, for example:
`osm sdnc-create --name onos01 --type onos_vpls --url http://172.21.248.19:8181 --user karaf --password karaf`
1. Update the VIM to use the SDN Controller and the port mapping file, for example:
`osm vim-update etsi-openstack --sdn_controller onos01 --sdn_port_mapping sdn_port_mapping.yaml`
1. Instantiate with SR-IOV ports. OpenFlow entries will be created and can be seen in the ONOS GUI (http://onos_ip/onos/ui).