How to validate a Kubernetes manifest

Dev teams must validate Kubernetes manifests before applying them. Native and third-party tools can help developers navigate validation and the issues that arise.

Kubernetes manifests are crucial to managing Kubernetes clusters and the software running on them.

Manifests are the configuration files for Kubernetes resources -- so any organization using Kubernetes can benefit from understanding more about manifests and how to validate them.

Without validating manifests before applying them, teams run a significant risk of failing a deployment with an invalid manifest. Malformed manifests can cause downtime, as some misconfigurations result in an unreachable or unlaunchable application. In most cases, an invalid manifest causes an error when applied and has no effect on the cluster: the resource isn't created if the apply command fails.

Learn a basic method of validating Kubernetes manifests using kubectl, a command line tool included with Kubernetes. Analyze a typical manifest, see how to validate it and learn how to take validation to the next level with policy as code tools.

Sample Kubernetes manifest

Below is a basic manifest for a deployment that runs two replicas of an nginx container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80

In general, a manifest specifies how the application runs. Different resource types have different schemas that control the different variables of the resource. For example, a deployment has a replicas attribute that specifies how many pod replicas should run as part of the deployment. Getting these attributes right ensures that Kubernetes deploys the application properly.

How to validate a Kubernetes manifest

Fortunately, Kubernetes has a built-in method to validate manifests: the --dry-run=server flag in kubectl. Using kubectl is sensible because it is the same command that applies the manifests after validation. Third-party tools might offer additional style or formatting checks, but kubectl --dry-run=server is the most reliable test of whether a manifest is valid for the cluster the user is currently connected to.

The --dry-run=server flag prompts kubectl to send the manifest to the API server of the Kubernetes cluster, which processes the request as if it were a real creation or update -- but without creating the resources. This way, if the dry-run command succeeds, users can be certain that the manifest is valid for their specific cluster.

For example, the --dry-run=server command generates the following output:

❯ kubectl apply -f nginx-test.yaml --dry-run=server
deployment.apps/nginx-deployment created (server dry run)

The kubectl command ran successfully. The output shows that the resource, deployment.apps/nginx-deployment, was "created". The output also reports that this apply ran as a server dry run, so nothing was actually created.

If the user makes a change to the manifest that isn't valid, the output will include a failure from the kubectl apply command that reports the invalid section.

For example, if the user accidentally creates a mismatch between labels and matchLabels, as in the updated manifest below, the manifest is no longer valid and cannot be applied.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80

When running the kubectl command with the modified manifest above, the following output is generated:

❯ kubectl apply -f nginx-test.yaml --dry-run=server
The Deployment "nginx-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"nginx"}: `selector` does not match template `labels`

Potential validation problems

The most common problems that would cause a Kubernetes manifest to be invalid are the following:

  • YAML syntax errors.
  • Missing mandatory fields.
  • Schema violations.
  • Invalid resource references.

Kubernetes manifests are written in YAML, which, much like Python, uses indentation to delimit structure. Other formats, such as JSON, mark the start and end of a block with brackets; YAML uses whitespace, specifically indents. Tab characters are not valid indentation in a Kubernetes manifest -- indentation must use spaces. Whitespace errors can be tricky to spot, since it's hard to tell visually whether a file contains spaces, tabs or some mix of the two. Most text editors have a setting to show whitespace characters, which renders a distinct symbol for each type of whitespace character present.
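As a quick command-line check, standard Unix tools can find stray tabs before a manifest is ever applied. The sketch below creates a deliberately broken sample file; the file name is illustrative:

```shell
# Create a sample manifest fragment that mistakenly uses a tab to indent.
printf 'spec:\n\treplicas: 2\n' > bad-indent.yaml

# Search for literal tab characters; any match means the YAML parser
# will reject the file. "$(printf '\t')" expands to a tab portably.
if grep -n "$(printf '\t')" bad-indent.yaml; then
  echo "Tab characters found -- replace them with spaces"
fi
```

The same grep works in a CI pipeline as a cheap pre-check before kubectl ever runs.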

Missing mandatory fields are another common problem. Some fields are required, and a manifest that omits any of them will fail to apply to a cluster.
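For example, a Deployment requires spec.selector. The fragment below -- a hypothetical edit of the earlier sample manifest -- omits it, so a server dry run rejects it with a missing-required-field validation error:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  # spec.selector is a mandatory field for a Deployment, but it is
  # missing here -- applying this manifest fails validation.
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
```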

If a value is not the correct type -- such as a string where a number is expected -- the manifest is invalid as it does not match the schema definition for that resource. If there are any misspellings in resource fields, such as apiVerson instead of apiVersion, the manifest will also fail to match the schema definition.
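To illustrate both cases, the fragment below is an intentionally broken variant of the sample manifest, containing a misspelled field and a value of the wrong type:

```yaml
apiVerson: apps/v1   # misspelled: should be apiVersion
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: "two"    # wrong type: replicas must be an integer, such as 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
```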

When using CustomResourceDefinitions, manifests might incorrectly reference a resource type before the CustomResourceDefinition is applied to a cluster. If a resource type is not installed on the cluster, then a manifest that references that resource type will not be valid. This is another reason to use the kubectl --dry-run=server flag: many third-party validation tools only check a manifest's format and style, without connecting to the Kubernetes cluster to confirm the manifest can actually be applied.
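As a sketch of the ordering requirement, the CustomResourceDefinition below must exist on the cluster before any manifest that references its kind can validate; the group and resource names are hypothetical:

```yaml
# The CustomResourceDefinition must be applied first...
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
---
# ...otherwise this manifest fails a server dry run, because the
# cluster does not yet recognize the Widget resource type.
apiVersion: example.com/v1
kind: Widget
metadata:
  name: my-widget
```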

What about policy as code?

Validating manifests and using policy as code tools are related but not one and the same. Validating a manifest in the most basic sense means checking whether the manifest is properly written such that it can be applied to a cluster. Policy as code refers to tools that enforce policies based on configurations stored in version control. An example of a policy as code tool is Kyverno, which is described on its GitHub repo as "a Kubernetes-native policy engine designed for platform engineering teams. It enables security, compliance, automation and governance through policy-as-code."

Using Kyverno, teams can improve manifest validation beyond the built-in tool of Kubernetes. Kyverno policies validate and enforce characteristics such as what images are permitted to run on a cluster or whether a pod manifest meets the proper security standards. A common example of a Kyverno policy is only allowing images from a specific repository to run on a cluster. Enforcing this policy ensures that if an attacker did get access to a cluster, they wouldn't automatically be able to start running any container image they wanted.
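A minimal sketch of such a policy, modeled on Kyverno's published sample policies -- the registry name is a placeholder for an organization's approved registry:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce   # reject non-compliant resources at admission
  rules:
  - name: allowed-registries
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Images may only come from the approved registry."
      pattern:
        spec:
          containers:
          # Wildcard pattern: every container image must start with
          # the approved registry prefix.
          - image: "registry.example.com/*"
```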

Policy as code tools, such as Kyverno, automate the enforcement of best practices and security policies by treating them as version-controlled definitions. They operate by validating, mutating or generating Kubernetes resources based on these policies -- often applied through admission controllers to enforce rules at the time of creation or update. Using Kyverno policies prevents invalid or non-compliant manifests from ever being applied to the cluster, offering an extra layer of shift-left security and governance beyond basic schema validation.

Matt Grasberger is a DevOps engineer with experience in test automation, software development and designing automated processes to reduce work.

Michael Levan is a seasoned engineer and consultant in the Kubernetes and Security space who spends his time working with startups and enterprises around the globe on Kubernetes consulting, training and content creation. He is a trainer, 4x published author, podcast host, international public speaker, CNCF Ambassador and was part of the Kubernetes v1.28 and v1.31 Release Team.
