diff --git a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-01-create-vnf-package.md b/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-01-create-vnf-package.md
deleted file mode 100644
index a5edeec56878284a210ac2dd4f45a26b37bece51..0000000000000000000000000000000000000000
--- a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-01-create-vnf-package.md
+++ /dev/null
@@ -1,94 +0,0 @@
# Creating your own VNF package

This page illustrates how to create your own VNF package. Before you start, it is highly recommended that you gather the internal structure of your VNF: its VDUs, the VDU details (flavor, interfaces, image, etc.), the VNF internal networks, and the external connection points. The presentation in this [link](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/presentations/20180117 OSM Hackfest - Guidelines for VNF builders.pptx) may help you collect all of this information.

## VNF without primitives

### Using the CLI tool

- Clone the devops repo:

```bash
git clone https://osm.etsi.org/gerrit/osm/devops
```

- Run the following command to create a folder with all the files required for a single-VM VNF package:

```bash
./devops/descriptor-packages/tools/generate_descriptor_pkg.sh -t vnfd --image <image-name> -c <package-name>
```

- A folder called `<package-name>_vnfd` will be created with all the files required for a VNF package.
- Edit the descriptor file `<package-name>_vnfd.yaml`.
  - By default, the descriptor is prepared for a single-VM VNF.
  - Add as many VMs as required.
  - Also add internal VLDs as required.
- Add any artifacts needed by the VNF (e.g., charms, icons, images) to the appropriate folders and make sure they are referenced in the descriptor.
- Once done, you can generate the tar.gz VNF package with the command:

```bash
./devops/descriptor-packages/tools/generate_descriptor_pkg.sh -t vnfd -N <package-name>_vnfd
# Note: the -N argument is optional; it keeps the package files after the package has been created
```

## VNF with primitives

You will have to create a proxy charm for the VNF. You can follow the general instructions below:

- Clone the devops repo:

```bash
git clone https://osm.etsi.org/gerrit/osm/devops
```

- Enter the juju-charms folder under devops and follow the instructions to create your own charm: [Creating your own VNF charm](03-02-00-create-vnf-charm.md)

You can then follow the ping-pong example in the OSM descriptor packages to integrate the charm with the VNF's primitives.

## Migrating old descriptors to current release

Only the file containing the VNFD or NSD descriptor needs to be migrated. Clone the devops repo, run the migration utility, and regenerate the package:

```bash
git clone https://osm.etsi.org/gerrit/osm/devops
./devops/descriptor-packages/tools/upgrade_descriptor_version.py -i <input-descriptor-file> -o <output-descriptor-file>
# generate the package following the instructions of the previous sections
```

This command fails if the package python-osm-im is not installed. Follow these guidelines to install it: [Installing_Python_OSM_IM_package](#installing-python-osm-im-package)

## Validate descriptors

This utility lives in the `devops` repository; clone it as shown above. It can be invoked with:

```bash
./devops/descriptor-packages/tools/validate_descriptor.py <descriptor-file>
```

It is also integrated into the devops/descriptor-packages makefile system:

```bash
make test
```

This command fails if the package `python-osm-im` is not installed. Follow these guidelines to install it: [Installing_Python_OSM_IM_package](#installing-python-osm-im-package)
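
If you just want a quick sanity check of a descriptor's overall structure before running the full tooling, the sketch below illustrates the idea. It is not the OSM `validate_descriptor.py` tool: it only assumes PyYAML and the pre-SOL006 key layout used by these packages (a `vnfd:vnfd-catalog` root), and the file name is just an example.

```python
# check_vnfd.py - minimal, illustrative structural check on a VNFD file.
# This is NOT validate_descriptor.py (which validates against the full OSM
# information model via python-osm-im/pyangbind); it only shows the kind of
# sanity check you can script yourself before packaging.
import sys
import yaml  # pip install pyyaml


def check_vnfd(path):
    with open(path) as f:
        doc = yaml.safe_load(f)
    # Old-style OSM descriptors are rooted at 'vnfd:vnfd-catalog' (or 'vnfd-catalog').
    catalog = doc.get('vnfd:vnfd-catalog') or doc.get('vnfd-catalog')
    if catalog is None:
        raise ValueError('{}: no vnfd-catalog root found'.format(path))
    vnfds = catalog.get('vnfd') or catalog.get('vnfd:vnfd') or []
    for vnfd in vnfds:
        for key in ('id', 'name', 'mgmt-interface', 'vdu'):
            if key not in vnfd and 'vnfd:' + key not in vnfd:
                print('WARNING: {} is missing "{}"'.format(path, key))
    print('{}: basic structure looks OK'.format(path))


if __name__ == '__main__':
    check_vnfd(sys.argv[1])
```

Run it as `python3 check_vnfd.py <descriptor-file>`; for real validation, use `validate_descriptor.py` or `make test` as described above.
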
## Installing Python OSM IM package

Follow these steps to install it if needed:

```bash
# Check that the configured OSM debian repository is the stable repo for the release:
grep -h ^deb /etc/apt/sources.list /etc/apt/sources.list.d/* | grep osm-download
# The output should be similar to this, and should include the IM component:
# deb [arch=amd64] https://osm-download.etsi.org/repository/osm/debian/ReleaseSIX stable IM osmclient devops

# If missing, add the repository with:
curl "https://osm-download.etsi.org/repository/osm/debian/ReleaseSIX/OSM%20ETSI%20Release%20Key.gpg" | apt-key add -
apt-get update && add-apt-repository -y "deb [arch=amd64] https://osm-download.etsi.org/repository/osm/debian/ReleaseSIX stable IM osmclient devops"

# Install/update python-osm-im and its dependencies
apt-get update
apt-get install python-osm-im
sudo -H pip install pyangbind
```
diff --git a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-02-00-create-vnf-charm.md b/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-02-00-create-vnf-charm.md
deleted file mode 100644
index 7e01d7abe0b9d8f54600e455a1be545e3089a173..0000000000000000000000000000000000000000
--- a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-02-00-create-vnf-charm.md
+++ /dev/null
@@ -1,290 +0,0 @@
# Creating your own VNF charm

## Creating a VNF proxy charm

### What is a charm

A [charm](https://jujucharms.com/docs/stable/charms) is a collection of scripts and metadata that encapsulate the distilled DevOps knowledge of experts in a particular product. These charms make it easy to reliably and repeatedly deploy applications, then scale them as required with minimal effort.

Driven by [Juju](https://jujucharms.com/docs/stable/about-juju), these charms manage the complete lifecycle of the application, including installation, configuration, clustering, and scaling.

### What is a proxy charm

OSM Release THREE supports a limited type of charm that we call "proxy charms". These charms are responsible for Day 1 configuration. Configurations are mapped to [Juju Actions](https://jujucharms.com/docs/stable/actions), which apply the configuration to the VM instantiated from the VNF's qcow2 image (over SSH, via a RESTful API, etc.).

The diagram below illustrates the OSM workflow:

```
+---------------------+          +---------------------+
|                     <----------+                     |
|      Resource       |          |       Service       |
|  Orchestrator (RO)  +---------->  Orchestrator (SO)  |
|                     |          |                     |
+----------+----------+          +----------+----------+
           |                                |
           |                                |
           |                                |
     +-----v-----+                    +-----v-----+
     |           <--------------------+           |
     |  Virtual  |                    |   Proxy   |
     |  Machine  |                    |   Charm   |
     |           +-------------------->           |
     +-----------+                    +-----------+
```

The SO directs the RO to create a virtual machine using the selected VNF image. When that has successfully completed, the SO will instantiate an LXD container, managed by Juju, with the proxy charm. The proxy charm will then communicate with the VNF's virtual machine to do the Day 1 configuration.
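
To make the role of the proxy charm concrete, the fragment below sketches, in plain Python, what "Day 1 configuration over SSH" boils down to. This is only an illustration: in a real charm the sshproxy/vnfproxy layers introduced below manage the SSH credentials and command execution for you, and the host, user, and command used here are made-up examples.

```python
# Conceptual sketch only: roughly what a proxy charm does for Day 1
# configuration. In a real charm the sshproxy layer handles credentials and
# execution; you never write raw SSH calls like this yourself.
import subprocess


def run_on_vnf(host, user, command):
    """Run a configuration command on the VNF's VM over SSH."""
    return subprocess.check_output(
        ['ssh', '-o', 'StrictHostKeyChecking=no',
         '{}@{}'.format(user, host), command],
        universal_newlines=True)


if __name__ == '__main__':
    # Hypothetical Day 1 step: point the deployed service at its peer.
    print(run_on_vnf('10.0.0.10', 'ubuntu', '/usr/bin/set-server 10.0.0.20 5555'))
```
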
### Creating a proxy charm

#### Setup

We recommend that you run Ubuntu 16.04 or newer, or [install snapd](https://docs.snapcraft.io/core/install) on the Linux distribution of your choice.

Install the *charm* snap, which provides the charm command and the libraries necessary to compile your charm:

```
snap install charm
```

Set up your workspace for writing layers and building charms:

```
mkdir -p ~/charms/layers
export JUJU_REPOSITORY=~/charms
export LAYER_PATH=$JUJU_REPOSITORY/layers
cd $LAYER_PATH
```

#### Layers

Layers are individual components that, when combined, result in a finished product. The diagram below describes what our example *pingpong* charm looks like, followed by a walkthrough of how it is built. The completed charm is available in the [juju-charms](https://osm.etsi.org/gitweb/?p=osm/juju-charms.git;a=summary) repository.

```
+------------------+
|                  |
|      Layers      |
|                  |
|  +------------+  |
|  |            |  |
|  |    Base    |  |
|  |            |  |
|  +------+-----+  |
|         |        |
|  +------v-----+  |
|  |            |  |
|  |  sshproxy  |  |               +-----------------+
|  |            |  |               |                 |
|  +------+-----+  |               |    pingpong     |
|         |        +--------------->                 |
|  +------v-----+  |               |      charm      |
|  |            |  |               |                 |
|  |  vnfproxy  |  |               +-----------------+
|  |            |  |
|  +------+-----+  |
|         |        |
|  +------v-----+  |
|  |            |  |
|  |  pingpong  |  |
|  |            |  |
|  +------------+  |
|                  |
+------------------+
```

Create the layer for your proxy charm:

```
charm create pingpong
cd pingpong
```

This will create a charm layer ready for customization:

```
.
├── config.yaml
├── icon.svg
├── layer.yaml
├── metadata.yaml
├── reactive
│   └── pingpong.py
├── README.ex
└── tests
    ├── 00-setup
    └── 10-deploy
```

Next, modify *layer.yaml* to the following:

```
includes:
  - layer:basic
  - layer:vnfproxy
```

This means that your charm will include the basic layer, required for all charms, and the vnfproxy layer, which has been designed to aid in the development of proxy charms by implementing common functionality.

The *[metadata.yaml](https://jujucharms.com/docs/stable/authors-charm-metadata)* file describes what your charm is and sets certain properties used by Juju:

```
name: pingpong
summary: A service to test latency between machines.
maintainer: Adam Israel
description: |
  The pingpong charm manages the pingpong vnfd deployed by Open Source Mano.
tags:
  - nfv
subordinate: false
series:
  - trusty
  - xenial
```

#### Actions

There are three pieces that make up an action: *actions.yaml*, which defines the actions; the *actions/* directory, where we place a small script that invokes the reactive framework; and the Python code in *reactive/pingpong.py* that performs each action.

In *actions.yaml*, we define the actions we wish to support:

```
set-server:
  description: "Set the target IP address and port"
  params:
    server-ip:
      description: "IP on which the target service is listening."
      type: string
      default: ""
    server-port:
      description: "Port on which the target service is listening."
      type: integer
      default: 5555
  required:
    - server-ip
set-rate:
  description: "Set the rate of packet generation."
  params:
    rate:
      description: "Packet rate."
      type: integer
      default: 5
get-stats:
  description: "Get the stats."
get-state:
  description: "Get the admin state of the target service."
get-rate:
  description: "Get the rate set on the target service."
get-server:
  description: "Get the target server and IP set"
```

Create the *actions/* directory:

```
mkdir actions/
```

For each action, we need to create a script that invokes the reactive framework. This is a boilerplate script that will be used for every action. Create the first action script:

```
cat <<'EOF' >> actions/set-server
#!/usr/bin/env python3
import sys
sys.path.append('lib')

from charms.reactive import main
from charms.reactive import set_state
from charmhelpers.core.hookenv import action_fail, action_name

"""
`set_state` only works here because it's flushed to disk inside the `main()`
loop. remove_state will need to be called inside the action method.
"""
set_state('actions.{}'.format(action_name()))

try:
    main()
except Exception as e:
    action_fail(repr(e))
EOF
```

After this, make the file executable:

```
chmod +x actions/set-server
```

Next, copy this script for the remaining actions:

```
cp actions/set-server actions/set-rate
cp actions/set-server actions/get-stats
cp actions/set-server actions/get-state
cp actions/set-server actions/get-rate
cp actions/set-server actions/get-server
```

The last step is to map each action to the command(s) to be run. To do this, open *reactive/pingpong.py* and add this code:

```
# Add these imports at the top of the file if they are not already there:
from charmhelpers.core.hookenv import action_fail, action_set
from charms.reactive import when, remove_state as remove_flag
import charms.sshproxy


@when('actions.set-server')
def set_server():
    err = ''
    try:
        cmd = ""  # the command to run on the VNF goes here
        result, err = charms.sshproxy._run(cmd)
    except Exception:
        action_fail('command failed:' + err)
    else:
        action_set({'output': result})
    finally:
        remove_flag('actions.set-server')
```

The reactive framework, coupled with the script in the *actions/* directory, maps the SO's invocation of the action to the block of code with the matching *@when* decorator. As demonstrated in the above code, it will execute a command over SSH (configured automatically by the SO). You could replace this with calls to a REST API or any other RPC method. You can also run code against the LXD container running the charm. A sketch of how the action's parameters might be read and turned into a concrete command follows below.
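
The handler above deliberately leaves `cmd` empty. One way it might be filled in, shown here as a hedged sketch rather than the official pingpong implementation, is to read the parameters declared for `set-server` in *actions.yaml* with `action_get()` and build the command from them. The `/usr/bin/set-server` path stands in for whatever binary or API your VNF actually exposes.

```python
# Variant of the set-server handler above: read the action parameters and
# build the command to run on the VNF. Replace /usr/bin/set-server with the
# real entry point exposed by your VNF image.
from charmhelpers.core.hookenv import action_fail, action_get, action_set
from charms.reactive import when, remove_state as remove_flag
import charms.sshproxy


@when('actions.set-server')
def set_server():
    try:
        server_ip = action_get('server-ip')      # declared in actions.yaml
        server_port = action_get('server-port')  # defaults to 5555
        cmd = '/usr/bin/set-server {} {}'.format(server_ip, server_port)
        result, err = charms.sshproxy._run(cmd)
    except Exception as e:
        action_fail('command failed: {}'.format(e))
    else:
        action_set({'output': result})
    finally:
        remove_flag('actions.set-server')
```

The same pattern applies to the other actions; only the parameters and the command change.
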
#### Building

When you're ready, you can create your charm via the *charm build* command:

```
$ charm build
build: Composing into /home/stone/charms
build: Destination charm directory: /home/stone/charms/builds/pingpong
build: Please add a `repo` key to your layer.yaml, with a url from which your layer can be cloned.
build: Processing layer: layer:basic
build: Processing layer: layer:sshproxy
build: Processing layer: layer:vnfproxy
build: Processing layer: pingpong
```

This combines all the layers that you included, and those that they include, into a charm called *pingpong*, located in the *~/charms/builds* directory.

#### VNF Descriptor

In your Virtual Network Function Descriptor (VNFD), you specify the name of the charm as demonstrated below:

```
vnfd:vnfd-catalog:
    vnfd:vnfd:
    -   vnfd:id: rift_pong_vnf
        vnfd:name: pong_vnf
        vnfd:vnf-configuration:
            vnfd:juju:
                vnfd:charm: pingpong
```

Then the compiled charm (from the builds directory) has to be packaged with the descriptor package under the charms directory. So the ping VNF with the charm would be:

```
ping_vnf
├── charms
│   └── pingpong
├── checksums.txt
├── icons
├── images
├── ping_vnfd.yaml
├── README
└── scripts
```
diff --git a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-02-01-examples-vnf-charms.md b/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-02-01-examples-vnf-charms.md
deleted file mode 100644
index 99fd0e7547690a2db31f1c53505ce631fc3b5ffd..0000000000000000000000000000000000000000
--- a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-02-01-examples-vnf-charms.md
+++ /dev/null
@@ -1,17 +0,0 @@
# Example VNF Charms

This page is intended to be an index of VNF charms written by members of the OSM community. Please feel free to add links to your own examples below.

### Ansible

Within the scope of an H2020 project, [5GinFIRE](https://5ginfire.eu/) has developed a [charm that enables the configuration of a VNF, instantiated through OSM, using an Ansible playbook](https://github.com/5GinFIRE/mano/tree/master/charms/ansible-charm). The charm builds on the base vnfproxy and ansible-base layers, and provides a template, ready for customization, that supports the execution of an Ansible playbook within the Juju framework used by OSM.

### UbuntuVNF 'Say Hello' Proxy Charm

A single-VDU VNF containing a simple proxy charm that takes a parameter (name) and sends a greeting to all the VM's terminals using the `wall` command. It serves as an example that can be extended to send any command with parameters to VNFs. Download it from [here](https://github.com/gianpietro1/osmproxycharms).

### Video Transcoder VNFs

Within the scope of an H2020 project, [5GinFIRE](https://5ginfire.eu/) has developed two video transcoding VNFs. The first uses [OpenCV](https://github.com/5GinFIRE/opencv_transcoder_vnf) and the other uses [FFmpeg](https://github.com/5GinFIRE/ffmpeg_transcoder_vnf). Both VNFs use systemd to run the transcoding service, and the systemd services are configured using Juju charms. There is also a small script that builds the VNF and NS packages, which may be useful.
diff --git a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-04-information-to-create-descriptors.md b/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-04-information-to-create-descriptors.md
deleted file mode 100644
index bbc8916ff440d9aebfe8a84130f682a7e053b60e..0000000000000000000000000000000000000000
--- a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/03-04-information-to-create-descriptors.md
+++ /dev/null
@@ -1,135 +0,0 @@
# Reference VNF and NS Descriptors

## Reference NS#1: Testing an endpoint VNF

The following network service captures a simple test setup where a VNF is tested with a traffic generator VNF (or a simple VNF/VM with a basic client application). For simplicity, this network service assumes that the VNF under test is the endpoint of a given service (e.g. DNS, AAA, etc.) and does not require special conditions or resource allocation beyond what is usual in a standard cloud environment.

![Reference NS #1: Testing an endpoint VNF](assets/450px-Example_ns_1.png)

In this example, unless otherwise specified in the description, the following defaults apply:

- CPs are regular para-virtualized interfaces (VirtIO or equivalent).
- VLs provide E-LAN connectivity via regular (overlay) networks provided by the VIM.
- VLs provide IP addressing via DHCP if applicable.
- Mapping between internal and external CPs may be either direct (as aliases) or via an intermediate VL.
- VIM+NFVI can guarantee predictable ordering of guest interfaces' virtual PCI addresses.

In the case of REF_NS_1:

- When deploying the NS, VL1 would typically be mapped to a pre-created VIM network intended to provide management IP addresses to VNFs via DHCP.
- DHCP in VL2 may be optional.

### Reference VNF#11: Endpoint VNF

![Reference VNF#11: Endpoint](assets/350px-Ref_vnf_11.png)

#### Description in common language

- Name: Ref_VNF_11
  - Component: Ref_VM1
    - **Memory:** 2 GB
    - **CPU:** 2 vCPU
    - **Storage:** 8 GB
    - **Image:** ref_vm1.qcow2
  - Component: Ref_VM2
    - **Memory:** 4 GB
    - **CPU:** 2 vCPU
    - **Storage:** 16 GB
    - **Image:** ref_vm2.qcow2
  - Internal Virtual Link: VL12
    - No DHCP server is enabled.
    - Static addressing may be used at CP iface11 and CP iface21.
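
When translating a "common language" description like the one above into (or back from) a descriptor, it can help to dump the VDU resources programmatically. The sketch below is only illustrative: it assumes PyYAML and the pre-SOL006 key layout (`vnfd-catalog` → `vnfd` → `vdu` → `vm-flavor`); adjust the keys if the referenced descriptor files differ.

```python
# vdu_summary.py - print each VDU's resources from an old-style OSM VNFD.
import sys
import yaml  # pip install pyyaml


def g(d, key, default=None):
    """Get a key whether or not it carries the 'vnfd:' prefix."""
    return d.get(key, d.get('vnfd:' + key, default))


def summarize(path):
    with open(path) as f:
        doc = yaml.safe_load(f)
    catalog = g(doc, 'vnfd-catalog', {})
    for vnfd in g(catalog, 'vnfd', []):
        print('VNF: {}'.format(g(vnfd, 'name') or g(vnfd, 'id')))
        for vdu in g(vnfd, 'vdu', []):
            flavor = g(vdu, 'vm-flavor', {})
            print('  VDU {}: {} vCPU, {} MB RAM, {} GB disk, image {}'.format(
                g(vdu, 'id'), g(flavor, 'vcpu-count'),
                g(flavor, 'memory-mb'), g(flavor, 'storage-gb'),
                g(vdu, 'image')))


if __name__ == '__main__':
    summarize(sys.argv[1])
```
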
#### OSM VNF descriptor for VNF#11

[VNF11.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref11_vnf/src/ref11_vnfd.yaml)

### Reference VNF#21: Generator 1 port

![Reference VNF#21: Generator 1 port](assets/350px-Ref_vnf_21.png)

#### Description in common language

- Name: Ref_VNF_21
  - Component: Ref_VM5
    - **Memory:** 1 GB
    - **CPU:** 1 vCPU
    - **Storage:** 16 GB
    - **Image:** ref_vm21.qcow2

#### OSM VNF descriptor for VNF#21

[VNF21.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref21_vnf/src/ref21_vnfd.yaml)

### OSM NS descriptor for NS#1

[NS1.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/nsd/ref1_ns/src/ref1_nsd.yaml)

## Reference NS #2: Testing a middle point VNF

![Reference NS #2: Testing a middle point VNF](assets/400px-Example_ns_2.png)

The following network service captures a more advanced test setup where the VNF under test is a middle point in the communication (e.g. a router or an EPC) and might require the special conditions, resource allocation, and connectivity foreseen in the NFV ISG specs. In this case, the traffic generator VNF behaves as the source and sink of traffic and might also require special resource allocation.

In this example, unless otherwise specified in the description, the following applies:

- Same defaults as in NS#1.
- vCPUs must be pinned to dedicated physical CPUs, with no oversubscription.
- CPUs, memory, and interfaces (if applicable) assigned to a given VM should belong to the same socket (NUMA awareness).
- Memory assigned to VMs should be backed by the host's huge-page memory.
- VL2 and VL3 provide E-Line underlay connectivity. No DHCP is required.

### Reference VNF#12: Middle point VNF

![Reference VNF#12: Middle point](assets/400px-Ref_vnf_12.png)

#### Description in common language

- Name: Ref_VNF_12
  - Component: Ref_VM3
    - **Memory:** 2 GB huge pages
    - **CPU:** 2 vCPU (mapped 1:1 to physical CPUs)
    - **Storage:** 8 GB
    - **Image:** ref_vm3.qcow2
  - Component: Ref_VM4
    - **Memory:** 4 GB
    - **CPU:** 2 vCPU
    - **Storage:** 16 GB
    - **Image:** ref_vm4.qcow2
    - Connection Point: iface42 (west)
      - **Type:** Passthrough
    - Connection Point: iface43 (east)
      - **Type:** SR-IOV

#### OSM VNF descriptor for VNF#12

[VNF12.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref12_vnf/src/ref12_vnfd.yaml)

### Reference VNF#22: Generator 2 ports

![Reference VNF#22: Generator 2 ports](assets/400px-Ref_vnf_22.png)

#### Description in common language

- Name: Ref_VNF_22
  - Component: Ref_VM6
    - **Memory:** 1 GB huge pages
    - **CPU:** 1 vCPU (mapped 1:1 to a physical CPU)
    - **Storage:** 16 GB
    - **Image:** ref_vm22.qcow2
    - Connection Point: iface61 (west)
      - **Type:** Passthrough
    - Connection Point: iface62 (east)
      - **Type:** SR-IOV

#### OSM VNF descriptor for VNF#22

[VNF22.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/vnfd/ref22_vnf/src/ref22_vnfd.yaml)

### OSM NS descriptor for NS#2

[NS2.yaml](https://osm.etsi.org/gitweb/?p=osm/devops.git;a=blob;f=descriptor-packages/nsd/ref2_ns/src/ref2_nsd.yaml)

## Resources

The template used to create these NS/VNF diagrams is available at: [Reference_NS-VNF_diagrams.pptx](https://drive.google.com/open?id=0B0IUJnTZzp2iUnJUb1JFSGpBRGs)

diff --git a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/14-advanced-charm-development.md b/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/14-advanced-charm-development.md
deleted file mode 100644
index c42c6919a6387ab4520f72dea6ec6e0c2955de4c..0000000000000000000000000000000000000000
--- a/TO-BE-MOVED-TO-OTHER-REPOS/VNF Onboarding/14-advanced-charm-development.md
+++ /dev/null
@@ -1,142 +0,0 @@
# Advanced Charm Development

As you create more advanced charms, you'll find tips and tricks here for making it a smoother process. There are a handful of practices that make developing and repeatedly testing charms less time-consuming.

# Juju

## Faster Deployments

When a charm is deployed, there are several time-consuming steps that are executed by default:

1. Launch an LXD container, downloading or updating the cloud image for the series of the charm being deployed
2. Run *apt-get update && apt-get upgrade*
3. Provision the machine with the Juju machine agent
4. Install the charm (execute its hooks, i.e., install, start)

### Build a custom cloud image

Caveat: This is intended only for use in a development environment, to provide faster iteration between deploying VNFs and charms.

The script below can be taken as-is. We start with the base cloud image that LXD downloads from its [image server](https://us.images.linuxcontainers.org/), update its installed software, and install the packages required by the reactive charm framework:

1. Launch a container using the latest cloud image
2. Run *apt-get update* and *apt-get upgrade*
3. Install extra packages needed by the reactive framework and your charm(s)
4. Publish the container as an image, under the alias *juju/$series/amd64*

**Note**: It's highly recommended to place this script into a nightly or weekly cron, so that you have relatively current updates.

```
#!/bin/bash
#
# This script will create trusty, xenial and/or bionic lxd images that will be
# used by the lxd provider in juju 2.1+. It is for use with the lxd provider
# for local development and preinstalls a common set of packages.
#
# This is important, as between them, basenode and layer-basic install ~111
# packages, before we even get to any packages installed by your charm.
#
# It also installs some helpful development tools, and pre-downloads some
# commonly used packages.
#
# This dramatically speeds up the install hooks for lxd deploys. On my slow
# laptop, average install hook time went from ~7min down to ~1 minute.
set -eux

# The basic charm layer also installs all the things. 47 packages.
LAYER_BASIC="gcc build-essential python3-pip python3-setuptools python3-yaml"

# the basic layer also installs virtualenv, but the name changed in xenial.
TRUSTY_PACKAGES="python-virtualenv"
XENIAL_PACKAGES="virtualenv"
BIONIC_PACKAGES="virtualenv"

# Predownload common packages used by your charms in development
DOWNLOAD_PACKAGES=

PACKAGES="$LAYER_BASIC $DOWNLOAD_PACKAGES"

function cache() {
    series=$1
    container=juju-${series}-base
    alias=juju/$series/amd64

    lxc delete $container -f || true
    lxc launch ubuntu:$series $container
    sleep 15  # wait for network

    lxc exec $container -- apt update -y
    lxc exec $container -- apt upgrade -y
    lxc exec $container -- apt install -y $PACKAGES $2
    lxc stop $container

    lxc image delete $alias || true
    lxc publish $container --alias $alias description="$series juju dev image ($(date +%Y%m%d))"

    lxc delete $container -f || true
}

# Uncomment the series you need pre-cached. By default, this will only
# cache xenial.
# cache trusty "$TRUSTY_PACKAGES"
cache xenial "$XENIAL_PACKAGES"
# cache bionic "$BIONIC_PACKAGES"
```

### Disable OS upgrades

Prevent Juju from running *apt-get update && apt-get upgrade* when starting a machine:

```
juju model-config enable-os-refresh-update=false enable-os-upgrade=false
```

### Using a custom Apt repository

You can configure Juju to use a local or regional Apt repository:

```
juju model-config apt-mirror=http://archive.ubuntu.com/ubuntu/
```

### Using a proxy server

Due to policy or network bandwidth, you may want to use a proxy server. Juju supports several types of proxy settings, including:

- http-proxy
- https-proxy
- apt-http-proxy
- apt-https-proxy

```
juju model-config apt-http-proxy=http://squid.internal:3128 apt-https-proxy=https://squid.internal:3128
```

You can find a complete list of [model configuration](https://docs.jujucharms.com/2.4/en/models-config) keys in the [Juju Documentation](https://docs.jujucharms.com/2.4/en/).

## Debugging

[Debugging Charm Hooks](https://docs.jujucharms.com/2.4/en/developer-debugging) is a good place to start to familiarize yourself with the process and the available ways of debugging a charm.

### Debug Logs

It's useful to watch the debug log while deploying a charm, to confirm which hooks are being run and to catch any exceptions that are raised. By default, `juju debug-log` tails the log for all charms:

```
$ juju debug-log
unit-charmnative-vnf-a-5: 18:12:11 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook start
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook test
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Invoking reactive handler: reactive/native-ci.py:21:test
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook test
unit-charmnative-vnf-a-5: 18:12:13 INFO unit.charmnative-vnf-a/5.juju-log Invoking reactive handler: reactive/native-ci.py:21:test
unit-charmnative-vnf-a-5: 18:12:14 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook testint
unit-charmnative-vnf-a-5: 18:12:14 INFO unit.charmnative-vnf-a/5.juju-log Invoking reactive handler: reactive/native-ci.py:33:testint
unit-charmnative-vnf-a-5: 18:13:17 WARNING juju.worker.uniter.operation we should run a leader-deposed hook here, but we can't yet
unit-charmnative-vnf-a-5: 18:13:18 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook leader-settings-changed
unit-charmnative-vnf-a-5: 18:13:18 INFO unit.charmnative-vnf-a/5.juju-log Reactive main running for hook stop
```

### Interactive Debugging

One of the more useful advanced tools we have is the *juju debug-hooks* command, which lets us interact with the charm in a tmux session inside the container. This allows us to edit code and re-run it, use pdb, and inspect configuration and state. Please refer to the [Developer Debugging](https://docs.jujucharms.com/2.4/en/developer-debugging) docs for more information about how to do this.
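
To make the most of `juju debug-log`, it helps if your handlers emit explicit log lines and status updates. The fragment below is only an illustrative pattern (the handler name and messages are made up); `log()` and `status_set()` come from charmhelpers, and their output shows up in the debug log and in `juju status` respectively.

```python
# Illustrative logging pattern for a reactive handler; the handler name and
# messages are examples, not part of any OSM charm.
from charmhelpers.core.hookenv import log, status_set
from charms.reactive import when


@when('config.changed')
def reconfigure():
    status_set('maintenance', 'applying new configuration')
    log('reconfigure: starting')   # appears in `juju debug-log`
    # ... run the actual reconfiguration here ...
    log('reconfigure: done')
    status_set('active', 'ready')
```
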
diff --git a/assets/350px-Ref_vnf_11.png b/assets/350px-Ref_vnf_11.png deleted file mode 100644 index 95710b8c25d5ff04bf4268759b1db067483a2f9d..0000000000000000000000000000000000000000 Binary files a/assets/350px-Ref_vnf_11.png and /dev/null differ diff --git a/assets/350px-Ref_vnf_21.png b/assets/350px-Ref_vnf_21.png deleted file mode 100644 index c1982c52022ceb52821a7a96ffbdc636a4f9a545..0000000000000000000000000000000000000000 Binary files a/assets/350px-Ref_vnf_21.png and /dev/null differ diff --git a/assets/400px-Example_ns_2.png b/assets/400px-Example_ns_2.png deleted file mode 100644 index 52807f25cfb502981393acfad24a0d1ef836f53b..0000000000000000000000000000000000000000 Binary files a/assets/400px-Example_ns_2.png and /dev/null differ diff --git a/assets/400px-Ref_vnf_12.png b/assets/400px-Ref_vnf_12.png deleted file mode 100644 index a1df607b3d69fb22dfd1614d4213325aad964c64..0000000000000000000000000000000000000000 Binary files a/assets/400px-Ref_vnf_12.png and /dev/null differ diff --git a/assets/400px-Ref_vnf_22.png b/assets/400px-Ref_vnf_22.png deleted file mode 100644 index d4a86b1b4c9cc48c45410d110cbf4cd2d70f852d..0000000000000000000000000000000000000000 Binary files a/assets/400px-Ref_vnf_22.png and /dev/null differ diff --git a/assets/450px-Example_ns_1.png b/assets/450px-Example_ns_1.png deleted file mode 100644 index f63af4897cad5de600865b0370e9484357a5719d..0000000000000000000000000000000000000000 Binary files a/assets/450px-Example_ns_1.png and /dev/null differ