As previously discussed, OSM supports the deployment of CNFs/KNFs over existing K8s clusters. This case implies that:
1. The K8s cluster is created upfront by an external entity. This process will happen out-of-band and no interaction with OSM is actually expected.
   - **NOTE:** This also includes the case where OSM itself creates a K8s cluster using a specific VNF package for that purpose.
2. OSM is informed (administratively) that the cluster is available for use in a given location (actually, referring to a VIM target). **This is the step covered in this section**.
- Later on, NS instantiation processes that require the use of a K8s cluster in that location (i.e. VIM target) will deploy their KDUs over that cluster.
- If more than one K8s cluster is available in that location, OSM will choose one by matching the labels specified in the descriptor against those given when the cluster was registered. If more than one cluster in the same VIM meets the VNF requirements, OSM will follow a default order of preference or, if applicable, the preference indicated in the instantiation parameters (similarly to the cases covered for VIMs).
Hence, in order to support this case, OSM's NBI provides a mechanism to be informed about existing K8s clusters. The corresponding CLI client command is the following:
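A sketch of the invocation, assuming the option names used by recent OSM client releases (`--creds`, `--version`, `--vim`, `--k8s-nets`, `--namespace`, `--cni`); check `osm k8scluster-add --help` for the exact syntax of your release:

```bash
osm k8scluster-add <name> --creds <credentials_file.yaml> \
    --version <ver> --vim <VIM_target> \
    --k8s-nets '{"mgmt": "<vim-network-name>"}' \
    [--namespace <namespace>] [--cni <cni>]
```

Where: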
- `name`: the internal name that OSM will use to refer to the cluster.
- `credentials_file.yaml`: credentials to access a given K8s cluster, i.e. valid `.kube/config` information, including:
- Reference to the K8s cluster
- server's URL (`server`)
- CA data (`certificate-authority-data`)
- User with sufficient privileges
- User name
- Secret
  - At least one context. If more than one context is included, an explicit `current-context` must be defined.
- `namespace`: the namespace to be used; by default, `kube-system` is used for this operation.
- `VIM_target`: the VIM where the cluster resides or is attached (bare-metal clusters are simply treated as a particular case here).
- `ver`: K8s version.
- `k8s_nets`: list of VIM networks where the cluster is accessible via L3 routing, in (key, value) format, where:
  - The _key_ will be used to refer to a given cluster's network (e.g. "mgmt", "external", etc.)
  - The _value_ will be used to refer to a VIM network that provides L3 access to that cluster network.
- Optionally:
  - `cni`: list of CNIs used in the cluster.
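For reference, a minimal `credentials_file.yaml` covering the elements listed above could look as follows (all names, the server URL, and the token are placeholders, not values from a real deployment):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                  # reference to the K8s cluster
  cluster:
    server: https://<k8s-api-endpoint>:6443
    certificate-authority-data: <base64-encoded-CA-cert>
users:
- name: osm-admin                   # user with sufficient privileges
  user:
    token: <secret-token>
contexts:
- name: osm-context
  context:
    cluster: my-cluster
    user: osm-admin
current-context: osm-context        # mandatory if more than one context exists
```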
This call triggers several actions in OSM:
1. Save this information in the common database.
2. Trigger the `init_env` call in the cluster.
3. If applicable, make the corresponding `repo_add` operations to add repositories known globally by OSM.
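The sequence above can be sketched as follows (illustrative only; the class and method names here are hypothetical stand-ins, not OSM's internal API):

```python
# Hypothetical stand-ins modelling the three actions triggered by
# a k8scluster-add call; not OSM's actual internal code.

class CommonDB:
    """Minimal stand-in for OSM's common database."""
    def __init__(self):
        self.collections = {}

    def save(self, collection, doc):
        self.collections.setdefault(collection, []).append(doc)


class K8sConnector:
    """Minimal stand-in for OSM's K8s cluster connector."""
    def __init__(self):
        self.calls = []

    def init_env(self, credentials):
        # Prepare the cluster environment for later KDU deployments
        self.calls.append(("init_env", credentials))
        return "env-1"

    def repo_add(self, env_id, name, url):
        # Register a globally known repo in this cluster's environment
        self.calls.append(("repo_add", env_id, name, url))


def register_k8s_cluster(db, connector, cluster_info, global_repos):
    """The three actions triggered when a K8s cluster is added."""
    db.save("k8sclusters", cluster_info)                      # 1. persist it
    env_id = connector.init_env(cluster_info["credentials"])  # 2. init_env
    for repo in global_repos:                                 # 3. repo_add
        connector.repo_add(env_id, repo["name"], repo["url"])
    return env_id
```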
It is also possible to remove a K8s cluster from the list of clusters known by OSM. In that case, the corresponding NBI call can be triggered with this command:
```bash
osm k8scluster-delete <name>
```
In case no CNFs/KNFs are running in the cluster, this call will trigger a `reset` operation and the entry will be removed from the common database. Otherwise, it will report an error.
### Management of K8s repos
OSM may be aware of a list of repos of K8s applications, so that they can be referenced from OSM packages. This avoids the need to embed them in the VNF package and makes usage more convenient in most cases.
In order to add a repo, the user should invoke the following command:
```bash
osm repo-add <name> <URI> --type <chart|bundle>
```
Where the type of repo should be either `chart`, for applications based on Helm charts, or `bundle`, for applications based on Juju bundles for K8s.
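For instance, registering the public Bitnami Helm chart repository could look like this (the repo name and URL are only an illustration; depending on the OSM client release, the type may be expressed as the option `--type chart`):

```bash
osm repo-add bitnami https://charts.bitnami.com/bitnami --type chart
```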
This call will trigger the following actions in OSM:
1. Save this information in the common database.
2. Invoke the `repo_add` operation on the known K8s clusters.
Conversely, a repo can be removed with:
```bash
osm repo-delete <name>
```
Likewise, this operation would:
1. Update the common database accordingly.
2. Invoke the `repo_remove` operation on the known K8s clusters.
At any moment, it is also possible to get the list of repos known by OSM:
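Assuming the standard OSM client verb for listings, the command would be:

```bash
osm repo-list
```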