Support placement of KNFs on specified Kubernetes nodes
Proposer
Federico Facca, federico.facca@martel-innovate.com
Gabriele Cerfoglio, gabriele.cerfoglio@martel-innovate.com
Type
Feature
Target MDG/TF
Unknown.
Description
This feature aims at enabling OSM to place Kubernetes Network Functions (KNFs) on specific nodes of the registered K8s clusters.
While it is currently possible to place functions on specified VIMs, with this method the placement of a KNF is expressed only in terms of which K8s cluster a service is deployed to; there is no way to place a service on particular nodes within that cluster. In a large K8s cluster with many nodes, targeting the cluster as a whole may not be sufficient in certain scenarios, hence the need for placement on individual nodes in addition to placement on the cluster.
OSM currently allows placement on specific nodes only when the placement details are already set up within the workload's Helm chart, either hardcoded or exposed as values to be passed at runtime. If the chart does not expose such placement configuration, there is no way to control placement from OSM itself, as illustrated in the sketch below.
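For illustration, this is roughly what the current workaround requires from the chart author: the chart itself must forward a user-supplied value into the Pod spec. The chart structure below is a hypothetical sketch, not an existing chart:

```yaml
# templates/deployment.yaml (excerpt) of a hypothetical Helm chart.
# Placement works only because the chart pre-wires a nodeSelector value;
# without this, OSM has no handle on node placement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  template:
    spec:
      nodeSelector:
        {{- toYaml .Values.nodeSelector | nindent 8 }}
```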
Let's assume a scenario involving image recognition software split into three modules: edge object detection, edge object classification, and hazard behaviour detection. The first two are less computationally intensive than the third, so they can be deployed on nodes with lower resource availability, although they still require a certain amount of resources and some specific hardware (such as an FPGA card). The third module needs more resources, as it has to process large video stream data. Ultimately, these modules are all part of the same overall function, but have different resource needs that should be taken into account when deploying on a Kubernetes cluster whose nodes can vary in their setup (see the example below).
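To make the differing needs concrete, the two edge modules could request modest resources plus an FPGA-labelled node, while the behaviour-detection module requests substantially more. All label keys, image names and resource figures below are made up for illustration:

```yaml
# Illustrative Pod specs for the scenario; labels and figures are invented.
apiVersion: v1
kind: Pod
metadata:
  name: edge-object-detection
spec:
  nodeSelector:
    hardware/fpga: "true"          # land on an FPGA-equipped edge node
  containers:
  - name: detector
    image: example/detector:latest
    resources:
      requests: {cpu: "500m", memory: "512Mi"}
---
apiVersion: v1
kind: Pod
metadata:
  name: hazard-behaviour-detection
spec:
  containers:
  - name: analyzer
    image: example/analyzer:latest
    resources:
      requests: {cpu: "4", memory: "8Gi"}   # heavy video-stream processing
```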
When it comes to identifying and targeting specific nodes in a cluster, the placement information (e.g. labels that must be present on a K8s node) should be included as part of the instantiation parameters. Including this information in the descriptors would be difficult without clearly defining which placement information is consistently available across all Kubernetes clusters, as opposed to user- or vendor-defined information.
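One possible shape for such instantiation parameters, shown here purely as a design sketch: OSM already allows additionalParamsForKdu to pass values into a KDU's Helm chart, so placement hints could ride the same path. The nodeSelector key below is hypothetical and, today, would only take effect if the chart consumes it:

```yaml
# Hypothetical instantiation parameters for a KNF; the placement-specific
# keys are a sketch, not an existing OSM schema.
additionalParamsForVnf:
- member-vnf-index: "1"
  additionalParamsForKdu:
  - kdu_name: detection-kdu
    additionalParams:
      nodeSelector:
        hardware/fpga: "true"
```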
For instance, the Kubernetes API can provide information about a node's resource availability, as described in the [official documentation](https://kubernetes.io/docs/concepts/architecture/nodes/). This is a defined schema, so it is consistent across all kinds of K8s clusters.
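For reference, the relevant part of the Node object returned by the API looks like this (field names follow the documented schema; the values are illustrative):

```yaml
# Excerpt of `kubectl get node <name> -o yaml`
status:
  capacity:
    cpu: "8"
    memory: 32929612Ki
    pods: "110"
  allocatable:            # capacity minus system reservations
    cpu: 7910m
    memory: 32315212Ki
    pods: "110"
```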
On the other hand, Kubernetes allows attaching arbitrary labels to nodes and defining deployment constraints that match those labels, as described [here](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/). While the labelling mechanism is a universal Kubernetes functionality, the labels themselves are arbitrary.
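For example, once an operator labels a node, a Pod can be constrained to matching nodes with a plain nodeSelector (richer expressions are possible via node affinity). The label key and value here are arbitrary, as the text notes:

```yaml
# First label the node, e.g.: kubectl label nodes worker-1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:
    disktype: ssd        # schedule only on nodes carrying this label
  containers:
  - name: nginx
    image: nginx
```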
For this feature, we would focus primarily on the placement information to be included as instantiation parameters.
How these mechanisms could be leveraged will be discussed in the design phase of this feature.
Demo or definition of done
Given a registered K8s cluster with a series of nodes, a user can successfully deploy a KNF on nodes fulfilling specified constraints. A Robot test will be created and must pass.