Commit d952e3eb authored by lavado

new sections migrated from wiki

parent 7c86d948

05-basic-examples.md
# Reference NSD/VNFD & Charms

## Reference NS#1: Testing an endpoint VNF

The following network service captures a simple test setup where a VNF is tested against a traffic generator VNF (or a simple VNF/VM running a basic client application). For simplicity, this network service assumes that the VNF under test is the endpoint of a given service (e.g. DNS, AAA, etc.) and does not require special conditions or resource allocation beyond what is usual in a standard cloud environment.

![Reference NS #1: Testing an endpoint VNF](assets/450px-Example_ns_1.png)

In this example, unless otherwise specified in the description, the following defaults apply:

- CPs are regular para-virtualized interfaces (VirtIO or equivalent).
- VLs provide E-LAN connectivity via regular (overlay) networks provided by the VIM.
- VLs provide IP addressing via DHCP if applicable.
- Mapping between internal and external CPs may be either direct (as aliases) or via an intermediate VL.
- VIM+NFVI can guarantee predictable ordering of guest interfaces' virtual PCI addresses.

In the case of REF_NS_1:

- When deploying the NS, VL1 would typically be mapped to a pre-created VIM network intended to provide management IP addresses to VNFs via DHCP.
- DHCP on VL2 may be optional.
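The mapping of VL1 to a pre-created management network can be expressed at instantiation time. The sketch below assumes hypothetical names throughout (NS name, NSD id, VIM account, VLD name and VIM network name are all examples, not taken from the reference descriptors):

```shell
# Instantiate NS#1, mapping its management VL to an existing VIM network
# named "mgmt" that hands out IP addresses via DHCP (all names are examples).
osm ns-create --ns_name ref_ns_1 --nsd_name ref1_nsd \
    --vim_account myvim \
    --config '{vld: [ {name: vld1, vim-network-name: mgmt} ]}'
```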

### Reference VNF#11: Endpoint VNF

![Reference VNF#11: Endpoint](assets/350px-Ref_vnf_11.png)

#### Description in common language

- Name: Ref_VNF_11
  - Component: Ref_VM1
    - **Memory:** 2 GB
    - **CPU:** 2 vCPU
    - **Storage:** 8 GB
    - **Image:** ref_vm1.qcow2
  - Component: Ref_VM2
    - **Memory:** 4 GB
    - **CPU:** 2 vCPU
    - **Storage:** 16 GB
    - **Image:** ref_vm2.qcow2
  - Internal Virtual Link: VL12
    - No DHCP server is enabled.
    - Static addressing may be used at CP iface11 and CP iface21.

#### OSM VNF descriptor for VNF#11

[VNF11.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref11_vnf/src/ref11_vnfd.yaml)

### Reference VNF#21: Generator 1 port

![Reference VNF#21: Generator 1 port](assets/350px-Ref_vnf_21.png)

#### Description in common language

- Name: Ref_VNF_21
  - Component: Ref_VM5
    - **Memory:** 1 GB
    - **CPU:** 1 vCPU
    - **Storage:** 16 GB
    - **Image:** ref_vm21.qcow2

#### OSM VNF descriptor for VNF#21

[VNF21.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref21_vnf/src/ref21_vnfd.yaml)

### OSM NS descriptor for NS#1

[NS1.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/nsd/ref1_ns/src/ref1_nsd.yaml)

## Reference NS #2: Testing a middle point VNF

![Reference NS #2: Testing a middle point VNF](assets/400px-Example_ns_2.png)

The following network service captures a more advanced test setup where the VNF under test is a middle point in the communication path (e.g. a router or an EPC) and might require the special conditions, resource allocation and connectivity foreseen in the NFV ISG specs. In this case, the traffic generator VNF behaves as both source and sink of traffic and might also require special resource allocation.

In this example, unless otherwise specified in the description, the following applies:

- Same defaults as in NS#1
- vCPUs must be pinned to dedicated physical CPUs, with no oversubscription.
- CPUs, memory and interfaces (if applicable) assigned to a given VM should belong to the same socket (NUMA awareness).
- Memory assigned to VMs should be backed by the host's huge pages memory.
- VL2 and VL3 provide E-Line underlay connectivity. No DHCP is required.

### Reference VNF#12: Middle point VNF

![Reference VNF#12: Middle point](assets/400px-Ref_vnf_12.png)

#### Description in common language

- Name: Ref_VNF_12
  - Component: Ref_VM3
    - **Memory:** 2 GB huge pages
    - **CPU:** 2 vCPU (pinned 1:1 to physical CPUs)
    - **Storage:** 8 GB
    - **Image:** ref_vm3.qcow2
  - Component: Ref_VM4
    - **Memory:** 4 GB
    - **CPU:** 2 vCPU
    - **Storage:** 16 GB
    - **Image:** ref_vm4.qcow2
  - Connection Point: iface42 (west)
    - **Type:** Passthrough
  - Connection Point: iface43 (east)
    - **Type:** SR-IOV

#### OSM VNF descriptor for VNF#12

[VNF12.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref12_vnf/src/ref12_vnfd.yaml)

### Reference VNF#22: Generator 2 ports

![Reference VNF#22: Generator 2 ports](assets/400px-Ref_vnf_22.png)

#### Description in common language

- Name: Ref_VNF_22
  - Component: Ref_VM6
    - **Memory:** 1 GB huge pages
    - **CPU:** 1 vCPU (pinned 1:1 to a physical CPU)
    - **Storage:** 16 GB
    - **Image:** ref_vm22.qcow2
  - Connection Point: iface61 (west)
    - **Type:** Passthrough
  - Connection Point: iface62 (east)
    - **Type:** SR-IOV

#### OSM VNF descriptor for VNF#22

[VNF22.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref22_vnf/src/ref22_vnfd.yaml)

### OSM NS descriptor for NS#2

[NS2.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/nsd/ref2_ns/src/ref2_nsd.yaml)

## Sample VNF Charms

This section is intended to be an index of VNF charms written by members of the OSM community. Please feel free to add links to your own examples below.

### Ansible

Under the scope of an H2020 project, [5GinFIRE](https://5ginfire.eu/) has developed a [charm that enables the configuration of a VNF, instantiated through OSM, using an Ansible playbook](https://github.com/5GinFIRE/mano/tree/master/charms/ansible-charm). The charm builds on the base vnfproxy and ansible-base layers, and provides a template, ready for customization, that supports the execution of an Ansible playbook within the Juju framework used by OSM.

### UbuntuVNF 'Say Hello' Proxy Charm

A single-VDU VNF containing a simple proxy charm that takes a parameter (name) and sends a greeting to all the VM's terminals using the 'wall' command. It serves as an example that can be extended to send any command with parameters to VNFs. Download it from [here](https://github.com/gianpietro1/osmproxycharms).

### Video Transcoder VNFs

Under the scope of an H2020 project, [5GinFIRE](https://5ginfire.eu/) has developed two video transcoding VNFs. The first uses [OpenCV](https://github.com/5GinFIRE/opencv_transcoder_vnf) and the second uses [FFMpeg](https://github.com/5GinFIRE/ffmpeg_transcoder_vnf). Both VNFs use systemd to run the transcoding service, and the systemd services are configured using Juju charms. There is also a small script that builds the VNF and NS packages, which might be useful.

## Resources

The template used to create these NS/VNF diagrams is available at: [Reference_NS-VNF_diagrams.pptx](https://drive.google.com/open?id=0B0IUJnTZzp2iUnJUb1JFSGpBRGs)
07-advanced-charms.md
# Advanced Charm Development

As you create more advanced charms, you'll find tips and tricks here for making it a smoother process. There are a handful of practices that make development and repeated testing of charms less time-consuming.

# Juju

## Faster Deployments

When a charm is deployed, there are several time-consuming steps that are executed by default.

1. Launch an LXD container, downloading or updating the cloud image for the series of the charm being deployed
2. Run *apt-get update && apt-get upgrade*
3. Provision the machine with the Juju machine agent
4. Install the charm (execute its hooks, e.g., install, start)

### Build a custom cloud image

Caveat: This is intended only for use in a development environment, to provide faster iteration between deploying VNFs and charms.

The script below can be taken as-is. We start with the base cloud image that LXD downloads from its [image server](https://us.images.linuxcontainers.org/), update its installed software, and install the packages required by the reactive charm framework.

1. Launch a container using the latest cloud image
2. Run *apt-get update* and *apt-get upgrade*
3. Install extra packages needed by the reactive framework and your charm(s)
4. Publish the container as an image, under the alias *juju/$series/amd64*


**Note**: It's highly recommended to run this script from a nightly or weekly cron job, so that your cached images stay relatively current.

```bash
#!/bin/bash
#
# This script will create trusty, xenial and/or bionic lxd images for use by
# the lxd provider in juju 2.1+. It is intended for local development and
# preinstalls a common set of packages.
#
# This is important, as between them, basenode and layer-basic install ~111
# packages, before we even get to any packages installed by your charm.
#
# It also installs some helpful development tools, and pre-downloads some
# commonly used packages.
#
# This dramatically speeds up the install hooks for lxd deploys. On my slow
# laptop, average install hook time went from ~7min down to ~1 minute.
set -eux

# The basic charm layer also installs all the things. 47 packages.
LAYER_BASIC="gcc build-essential python3-pip python3-setuptools python3-yaml"

# the basic layer also installs virtualenv, but the name changed in xenial.
TRUSTY_PACKAGES="python-virtualenv"
XENIAL_PACKAGES="virtualenv"
BIONIC_PACKAGES="virtualenv"

# Predownload common packages used by your charms in development
DOWNLOAD_PACKAGES=

PACKAGES="$LAYER_BASIC $DOWNLOAD_PACKAGES"

function cache() {
    series=$1
    container=juju-${series}-base
    alias=juju/$series/amd64

    lxc delete $container -f || true
    lxc launch ubuntu:$series $container
    sleep 15  # wait for network

    lxc exec $container -- apt update -y
    lxc exec $container -- apt upgrade -y
    lxc exec $container -- apt install -y $PACKAGES $2
    lxc stop $container

    lxc image delete $alias || true
    lxc publish $container --alias $alias description="$series juju dev image ($(date +%Y%m%d))"

    lxc delete $container -f || true
}

# Uncomment the series you need pre-cached. By default, this script only
# caches xenial.
# cache trusty "$TRUSTY_PACKAGES"
cache xenial "$XENIAL_PACKAGES"
# cache bionic "$BIONIC_PACKAGES"
```
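For example, assuming the script above is saved as `/usr/local/bin/cache-juju-images.sh` (the path and the schedule below are illustrative), a matching cron entry could be:

```
# /etc/cron.d/cache-juju-images -- refresh the cached images nightly at 02:30
30 2 * * * root /usr/local/bin/cache-juju-images.sh
```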

### Disable OS upgrades

Prevent Juju from running *apt-get update && apt-get upgrade* when starting a machine:

```bash
juju model-config enable-os-refresh-update=false enable-os-upgrade=false
```

Please note that any 'juju model-config' command needs to run right after you have switched to the Juju model of your network service in order to take effect.
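As a quick sanity check, the keys can be read back after switching to the model (the model name below is an example):

```shell
# Switch to the model of your network service, then read the settings back.
juju switch mymodel
juju model-config enable-os-refresh-update
juju model-config enable-os-upgrade
```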

### Using a custom Apt repository

You can configure Juju to use a local or regional Apt repository:

```bash
juju model-config apt-mirror=http://archive.ubuntu.com/ubuntu/
```

Please note that any 'juju model-config' command needs to run right after you have switched to the Juju model of your network service in order to take effect.

### Using a proxy server

Due to policy or network bandwidth constraints, you may want to use a proxy server. Juju supports several proxy settings, including:

- http-proxy
- https-proxy
- apt-http-proxy
- apt-https-proxy

```bash
juju model-config apt-http-proxy=http://squid.internal:3128 apt-https-proxy=https://squid.internal:3128
```
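Should a setting need to be reverted later, model-config keys can be reset to their defaults, e.g.:

```shell
# Clear the proxy settings configured above.
juju model-config --reset apt-http-proxy,apt-https-proxy
```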

You can find a complete list of [model configuration](https://docs.jujucharms.com/2.4/en/models-config) keys in the [Juju Documentation](https://docs.jujucharms.com/2.4/en/).

## Debugging

[Debugging Charm Hooks](https://docs.jujucharms.com/2.4/en/developer-debugging) is a good place to start to familiarize yourself with the process and available ways of debugging a charm.

### Debug Logs

It's useful to watch the debug logs while deploying a charm, to confirm which hooks are being run and to catch any exceptions that are raised. By default, *juju debug-log* tails the log for all charms:

```
$ juju debug-log
unit-charmnative-vnf-a-5: 18:12:11 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook start
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook test
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Invoking reactive handler: reactive/native-ci.py:21:test
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook test
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Invoking reactive handler: reactive/native-ci.py:21:test
unit-charmnative-vnf-a-5: 18:12:14 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook testint
unit-charmnative-vnf-a-5: 18:12:14 INFO unit.charmnative-vnf-a/5.juju-log Invoking reactive handler: reactive/native-ci.py:33:testint
unit-charmnative-vnf-a-5: 18:13:17 WARNING juju.worker.uniter.operation we should run a leader-deposed hook here, but we can't yet
unit-charmnative-vnf-a-5: 18:13:18 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook leader-settings-changed
unit-charmnative-vnf-a-5: 18:13:18 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook stop
```
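To follow a single unit instead of every charm, the output can be filtered (the unit name below is taken from the sample output above):

```shell
# Replay and follow the log for one unit only.
juju debug-log --replay --include charmnative-vnf-a/5
```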

### Interactive Debugging

One of the more useful advanced tools is the *juju debug-hooks* command, which lets us interact with the charm in a tmux session inside the container. This allows us to edit code and re-run it, use pdb, and inspect configuration and state. Please refer to the [Developer Debugging](https://docs.jujucharms.com/2.4/en/developer-debugging) docs for more information about how to do this.
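A minimal session could look like the sketch below (the unit name matches the logs in the previous section; the commands shown inside the session are standard Juju hook tools):

```shell
# Open a tmux session on the unit; the next hook that fires is intercepted
# and a window opens inside its execution context.
juju debug-hooks charmnative-vnf-a/5

# Inside that window, for example:
#   config-get        # dump the charm's current configuration
#   hooks/install     # re-run a hook by hand
#   exit              # resume normal hook processing
```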
# Additional References

## Known Issues

### v5.0.5

### v6.0.1

* VDUs that do not support 'MIME multi-part files' and use charms will not be able to receive the cloud-init information provided in their own package. The issue is explained in [this bug](https://osm.etsi.org/bugzilla/show_bug.cgi?id=828)

## Installing Python OSM IM package

To use the devops tools for validating descriptors against the model, the python-osm-im package should be installed.
Follow these steps to install it if needed:

```bash
# Check that the current OSM debian repository is the current stable repo for the release:
grep -h ^deb /etc/apt/sources.list /etc/apt/sources.list.d/* |grep osm-download
#  should be similar to this, and should include IM component:
#    deb [arch=amd64] https://osm-download.etsi.org/repository/osm/debian/ReleaseSIX stable IM osmclient devops

# If missing, add repository with:
curl "https://osm-download.etsi.org/repository/osm/debian/ReleaseSIX/OSM%20ETSI%20Release%20Key.gpg" | apt-key add -
apt-get update && add-apt-repository -y "deb [arch=amd64] https://osm-download.etsi.org/repository/osm/debian/ReleaseSIX stable IM osmclient devops"

# Install/update python-osm-im and its dependencies
apt-get update
apt-get install python-osm-im
sudo -H pip install pyangbind
```

## Migrating old descriptors to current release

If you have Release 1 or Release 2 descriptors, you can convert them to a newer, supported format. Only the files containing the VNFD or NSD descriptors need to be migrated.
Clone the devops repo, run the upgrade utility, and generate the package:

```bash
git clone https://osm.etsi.org/gerrit/osm/devops
./devops/descriptor-packages/tools/upgrade_descriptor_version.py -i <old-descriptor-file> -o <new-descriptor-file>
# generate package following the instructions of previous sections
```

This command fails if the python-osm-im package is not installed.