kupid

Inject scheduling criteria into target pods orthogonally by policy definition.

Content

Goals

  • Declare and manage many different forms of pod scheduling criteria for pods in a Kubernetes cluster. This includes affinity (node affinity, pod affinity and pod anti-affinity), nodeName, nodeSelector, schedulerName and tolerations.
  • Dynamically inject the relevant maintained pod scheduling criteria into pods during pod creation.
  • Allow pods to declare their own scheduling criteria which would override any declaratively maintained policy in case of conflict.
  • Allow some namespaces and/or pods to be selected (or not selected) as targets for scheduling policies based on the label selector mechanism.
  • Generally, make it possible to cleanly separate (and orthogonally enforce) the concerns of how and where the workload deployed on a Kubernetes cluster should be scheduled from the controllers/operators that manage them.
  • Enable Gardener to deploy such a mechanism to inject pod scheduling criteria orthogonally into seed cluster workloads by deploying kupid as a Gardener extension along with suitable scheduling policy instances. This is especially relevant for supporting dedicated worker pools for shoot etcd workload in the seed clusters.

Non-goals

  • Prevent pods from declaring their own scheduling criteria.
  • Prevent Gardener from supporting seed clusters which do not have any dedicated worker pools or any form of pod scheduling criteria for seed cluster workload.

Development Installation

The steps for installing kupid on a Kubernetes cluster for development and/or trial are given below. These are development installation steps only and are not intended for any kind of production scenario. For anything other than development or trial purposes, please use your favourite CI/CD toolkit.

Building the docker image

The following steps explain how to build a docker image for kupid from the sources. This is optional and can be skipped if the upstream docker image can be used.

  1. Build kupid locally. This step is optional if you are using the upstream container image for kupid.
$ make webhook
  2. Build the kupid container image. This step is optional if you are using the upstream container image for kupid.
$ make docker-build
  3. Push the container image to the container repository. This step is optional if you are using the upstream container image for kupid.
$ make docker-push

Deploying kupid

Please follow the steps below to deploy the kupid resources on the target Kubernetes cluster.

Pre-requisites

The development environment relies on kustomize. Please install it in your development environment.

Using self-generated certificates

Kupid requires TLS certificates to be configured for its validating and mutating webhooks. Kupid optionally supports generating the required TLS certificates and the default ValidatingWebhookConfiguration and MutatingWebhookConfiguration automatically.

Deploy the resources based on config/default/kustomization.yaml which can be further customized (if required) before executing this step.

$ make deploy

Using cert-manager

Alternatively, kupid can be deployed with externally generated TLS certificates and custom ValidatingWebhookConfiguration and MutatingWebhookConfiguration. Below is an example of doing this using cert-manager. Please make sure the target Kubernetes cluster you want to deploy kupid to has a working installation of cert-manager.

Deploy the resources based on config/using-certmanager/kustomization.yaml which can be further customized (if required) before executing this step.

$ make deploy-using-certmanager

Context

The Kubernetes API provides many mechanisms for pods to influence how and where (i.e. on which node) they get scheduled in the Kubernetes cluster. All such mechanisms involve the pods declaring things in their PodSpec. At present, there are five such mechanisms.

affinity

Affinity is one of the more sophisticated ways for a pod to influence where (which node) it gets scheduled.

It has three further sub-mechanisms.

nodeAffinity

NodeAffinity is similar to nodeSelector, but a more sophisticated way to constrain the viable candidate subset of nodes in the cluster as a scheduling target for the pod. An example of how it can be used can be seen here.
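
For illustration, a minimal pod using nodeAffinity to require nodes carrying a particular label might look like this (the label key, values, names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-example
spec:
  affinity:
    nodeAffinity:
      # Require nodes whose label matches one of the listed values.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - eu-west-1a
  containers:
  - name: app
    image: nginx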

podAffinity

PodAffinity is a more subtle way to constrain the viable candidate subset of nodes in the cluster as a scheduling target for the pod. In contrast to nodeAffinity, this is done not by directly identifying the viable candidate nodes by node label selector terms. Instead, it is done by selecting some other already scheduled pods that this pod should be collocated with. An example of how it can be used can be seen here.
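
For illustration, a minimal pod requiring collocation on the same node as already scheduled pods labelled app: cache might look like this (labels, names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-example
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache
        # Collocate on the same node as the selected pods.
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: nginx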

podAntiAffinity

PodAntiAffinity works in the opposite way to podAffinity. It constrains the viable candidate nodes by selecting some other already scheduled pods that this pod should not be collocated with. An example of how it can be used can be seen here.
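
For illustration, a pod that prefers not to be collocated on the same node as other pods labelled app: web (labels, names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pod-anti-affinity-example
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: nginx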

nodeName

NodeName is a very crude mechanism that bypasses the whole pod scheduling machinery: the pod itself declares which node it wants to be scheduled on.
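
A minimal illustration (the node name, pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: node-name-example
spec:
  # Bypasses the scheduler; the pod is bound directly to this node.
  nodeName: worker-node-1
  containers:
  - name: app
    image: nginx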

nodeSelector

NodeSelector is a simple way to constrain the viable candidate nodes for scheduling by specifying a label selector that selects such viable nodes. An example of how it can be used can be seen here.
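
A minimal illustration (the label, pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: node-selector-example
spec:
  # Only nodes carrying this label are viable candidates.
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: nginx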

schedulerName

Kubernetes supports multiple schedulers that can schedule workloads in the cluster. Individual pods can declare which scheduler should schedule them via the schedulerName field. The additional schedulers must, of course, be deployed separately.
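
A minimal illustration (the scheduler name, pod name and image are placeholders; the named scheduler must already be running in the cluster):

apiVersion: v1
kind: Pod
metadata:
  name: scheduler-name-example
spec:
  schedulerName: my-custom-scheduler
  containers:
  - name: app
    image: nginx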

tolerations

Kubernetes supports taints, which allow nodes to declaratively repel pods from being scheduled on them. Pods that want to get scheduled on such tainted nodes need to declare tolerations for those taints. Typically, this functionality is used in combination with other ways of attracting these pods to the tainted nodes, such as nodeAffinity, nodeSelector etc.
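
A minimal illustration combining a toleration with a nodeSelector to attract the pod to the tainted, dedicated nodes (taint key/value, labels, names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: tolerations-example
spec:
  # Tolerate the taint dedicated=etcd:NoSchedule on the dedicated nodes...
  tolerations:
  - key: dedicated
    operator: Equal
    value: etcd
    effect: NoSchedule
  # ...and additionally steer the pod towards those nodes.
  nodeSelector:
    dedicated: etcd
  containers:
  - name: app
    image: nginx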

Problem

  • All the mechanisms for influencing the scheduling of pods described above have to be specified top-down (or in other words, vertically) by the pods themselves (or any higher order component/controller/operator that deploys them).
  • Such a top-down approach forces all the components up the chain to be aware of the details of these mechanisms. I.e. they either make assumptions at some stage about the pod scheduling criteria or expose the flexibility of specifying such criteria all the way up the chain.
  • Specifically, in the Gardener seed cluster, some workloads like etcd might be better off scheduled on dedicated worker pools so that other workloads and the common nodes on which they are scheduled can be scaled up and down by the Cluster Autoscaler more efficiently. This approach might be used for other workloads too for other reasons in the future (pre-emptible nodes for controller workloads?).
  • However, Gardener must not force all seed clusters to always have dedicated worker pools. It should be always possible to use Gardener with plain-vanilla seed clusters with no dedicated worker pools. The support for dedicated worker pools should be optional.

Solution

The proposed solution is to declare the pod scheduling criteria as described above in a CustomResourceDefinition and then inject the relevant specified pod scheduling criteria into pods orthogonally when they are created via a mutating webhook.

Sequence Diagram

[Sequence diagram image]

PodSchedulingPolicy

PodSchedulingPolicy is a namespaced CRD which describes, in its spec, all the pod scheduling criteria described above.

The criteria for selecting target pods on which the PodSchedulingPolicy is applied can be specified in the spec.podSelector.
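
As a sketch only, a PodSchedulingPolicy might look roughly like the following. The apiVersion is taken from the api/v1alpha1 package in this module; all names and labels are placeholders, and the exact schema of the scheduling criteria fields should be verified against the CRD before use:

apiVersion: kupid.gardener.cloud/v1alpha1
kind: PodSchedulingPolicy
metadata:
  name: etcd-scheduling            # placeholder name
  namespace: demo                  # placeholder namespace
spec:
  # Target only pods matching this selector.
  podSelector:
    matchLabels:
      app: etcd                    # placeholder label
  # Scheduling criteria to inject; assumed to mirror the PodSpec fields described above.
  nodeSelector:
    dedicated: etcd
  tolerations:
  - key: dedicated
    operator: Equal
    value: etcd
    effect: NoSchedule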

ClusterPodSchedulingPolicy

ClusterPodSchedulingPolicy is similar to the PodSchedulingPolicy, but it is a non-namespaced (cluster-scoped) CRD which describes, in its spec, all the pod scheduling criteria described above.

The criteria for selecting target pods on which the ClusterPodSchedulingPolicy is applied can be specified in the spec.podSelector.

In addition, it allows specifying the target namespaces to which the ClusterPodSchedulingPolicy is applied via spec.namespaceSelector.

The specified pod scheduling policy is applied only to a pod that matches the spec.podSelector and whose namespace matches the spec.namespaceSelector.

An explicitly specified empty selector would match all objects (i.e. all namespaces and pods respectively).

A nil selector (i.e. one not specified in the spec) will match no objects (i.e. no namespaces and pods respectively).
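
Again as a sketch only (all names and labels are placeholders, and the exact schema should be verified against the CRD), a ClusterPodSchedulingPolicy adds the namespaceSelector on top:

apiVersion: kupid.gardener.cloud/v1alpha1
kind: ClusterPodSchedulingPolicy
metadata:
  name: dedicated-etcd-workers     # placeholder name
spec:
  # An explicitly empty selector ({}) would match all namespaces;
  # leaving namespaceSelector out entirely would match none.
  namespaceSelector:
    matchLabels:
      role: etcd                   # placeholder label
  podSelector:
    matchLabels:
      app: etcd                    # placeholder label
  # Scheduling criteria to inject; assumed to mirror the PodSpec fields described above.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - etcd
  tolerations:
  - key: dedicated
    operator: Equal
    value: etcd
    effect: NoSchedule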

Support for top-down pod scheduling criteria

Pods can continue to specify their scheduling criteria explicitly in a top-down way.

One way to make this possible is to use the spec.namespaceSelector and spec.podSelector judiciously so that the pods that specify their own scheduling criteria do not get targeted by any of the declared scheduling policies.

If any additional declared PodSchedulingPolicy or ClusterPodSchedulingPolicy are applicable for such pods, then the pod scheduling criteria will be merged with the already defined scheduling criteria specified in the pod.

During merging, if there is a conflict between the already existing pod scheduling criteria and the additional pod scheduling criteria that is being merged, then only the non-conflicting part of the additional pod scheduling criteria will be merged and the conflicting part will be skipped.
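
As a hypothetical illustration of this merge behaviour (all values are made up), a pod that declares its own schedulerName keeps it even if an applicable policy specifies a different one, while a non-conflicting toleration from the policy is still injected:

# Pod spec as submitted:
spec:
  schedulerName: my-custom-scheduler
  containers:
  - name: app
    image: nginx
---
# Pod spec after mutation: the policy's schedulerName conflicts and is skipped,
# while its toleration does not conflict and is merged in.
spec:
  schedulerName: my-custom-scheduler
  tolerations:
  - key: dedicated
    operator: Equal
    value: etcd
    effect: NoSchedule
  containers:
  - name: app
    image: nginx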

Gardener Integration Sequence Diagram

[Gardener integration sequence diagram image]

Pros

This solution has the following benefits.

  1. Systems that provision and manage workloads on the clusters, such as CI/CD pipelines, helm charts, operators and controllers, do not have to embed knowledge of the cluster topology.
  2. A cluster administrator can inject cluster topology constraints into the scheduling of workloads, including constraints that are not taken into account by the provisioning systems.
  3. A cluster administrator can enforce some default cluster topology constraints on the workload as a policy.

Cons

  1. Pod creations go through an additional mutating webhook. The scheduling performance impact of this can be mitigated by using the namespaceSelector and podSelector fields in the policies judiciously.
  2. Pods already scheduled in the cluster will not be affected by newly created policies. Pods must be recreated for the new policies to be applied.

Mutating higher-order controllers

Though this document talks about mutating pods dynamically to inject declaratively defined scheduling policies, in principle, it might be useful to mutate the pod templates in higher order controller resources like replicationcontrollers, replicasets, deployments, statefulsets, daemonsets, jobs and cronjobs instead of (or in addition to) mutating pods directly. This is supported by kupid. Which objects are mutated is now controllable in the MutatingWebhookConfiguration.
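
For illustration only, the rules in a MutatingWebhookConfiguration might be extended along these lines to cover both pods and higher-order controllers; the webhook name, service reference and path are placeholders, and the actual configuration generated from the kustomize manifests may differ:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: kupid-mutating-webhook-configuration   # placeholder name
webhooks:
- name: mutator.kupid.example                  # placeholder name
  clientConfig:
    service:
      name: kupid-webhook-service              # placeholder
      namespace: kupid-system                  # placeholder
      path: /mutate                            # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods", "replicationcontrollers"]
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["deployments", "statefulsets", "daemonsets", "replicasets"]
  - apiGroups: ["batch"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["jobs", "cronjobs"]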

Sequence Diagram

[Sequence diagram image]

Gardener Integration Sequence Diagram

[Gardener integration sequence diagram image]

Alternatives

Propagate flexibility up the chain

Expose the flexibility of specifying pod scheduling mechanism all the way up the chain. I.e. in deployments, statefulsets, operator CRDs, helm chart configuration or some other form of configuration. This suffers from polluting many layers with information that is not too relevant at those levels.

Make assumptions

Make some assumptions about the pod scheduling mechanism at some level of deployment and management of the workload. This would not be flexible and will make it hard to change the pod scheduling behaviour.

Prior Art

PodPreset

The standard PodPreset resource limits itself to the dynamic injection of only environment variables, secrets, configmaps, volumes and volume mounts into pods. There is no mechanism to define and inject other fields (especially those related to scheduling) into pods.

Banzai Cloud Spot Config Webhook

The spot-config-webhook limits itself to the dynamic injection of the schedulerName into pods. There is no mechanism to define and inject other fields like affinity, tolerations etc.

OPA Gatekeeper

The OPA Gatekeeper allows defining policies to validate and mutate any Kubernetes resource. Technically, this can be used to dynamically inject anything, including scheduling policy, into pods. But it is too big a component to introduce just to dynamically inject scheduling policy. Besides, defining the policy as code is undesirable in this context because the policy itself would be non-declarative and hard to validate while deploying it.


Directories

Path Synopsis
api
  v1alpha1: Package v1alpha1 contains API Schema definitions for the kupid v1alpha1 API group. +kubebuilder:object:generate=true +groupName=kupid.gardener.cloud
charts
  gardener-extension-kupid: Package chart enables go:generate support for generating the correct controller registration.
pkg
