Kubernetes Certification (CKAD)

This article contains the notes I used to study when I took my CKAD test.
I hope it can be useful for other people studying to get this certification or Kubernetes in general.

Linux Foundation Description of the exam:

The Certified Kubernetes Application Developer exam certifies that users can design, build, configure, and expose cloud native applications for Kubernetes.
A Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes.
The exam assumes working knowledge of container runtimes and microservice architecture.

The successful candidate will be comfortable:

  • working with (OCI-compliant) container images
  • applying Cloud Native application concepts and architectures
  • working with and validating Kubernetes resource definitions

The certification program allows users to demonstrate their competence in a hands-on, command-line environment. The purpose of the Certified Kubernetes Application Developer (CKAD) program is to provide assurance that CKADs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes application developers.

Major Concepts

  1. Designing and Building Applications - We will explore how to use Kubernetes to meet challenges related to how applications are designed to work behind the scenes.
  2. Deployment - We will look into various tools and strategies that can help us get new code into our Kubernetes environment.
  3. Observability and Maintenance - We will examine various ways of gathering information about our applications and troubleshooting issues.
  4. Environment, Configuration, and Security - We will dive into the features in Kubernetes that can help us configure and secure applications.
  5. Services and Networking - We will discuss how Kubernetes facilitates network communication with our application components.

Designing and Building Applications

Pods

What is a pod? - Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.

This is a simple Pod specification:

apiVersion: v1 # Tells Kubernetes which API version to use
kind: Pod # Tells Kubernetes what type of resource to create
metadata:
  name: nginx # The Pod's name
  labels:
    app: nginx # Adds a label to this Pod. Labels make it easier to find and select Pods
spec:
  containers: # Describes the containers in this pod
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
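
You can also generate a similar manifest imperatively and save it for editing, which is often faster during the exam (a sketch):

# kubectl run [pod-name] --image=[image] --port=[port] --dry-run=client -o yaml
kubectl run nginx --image=nginx:1.14.2 --port=80 --dry-run=client -o yaml > pod.yaml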

General commands

kubectl apply -f [filename] # Creates the resources defined in the file
kubectl get pods # Lists pods
kubectl logs [pod name] -c [container name (optional)] # Get logs of a pod

Jobs and CronJobs

What is a job? - Jobs are designed to run a containerized task to completion

General commands

kubectl get jobs # Lists jobs
kubectl logs [job pod name] # Get the logs of a job's pod

A simple Job specification:

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  activeDeadlineSeconds: 10 # Time limit in seconds for the job to run
  backoffLimit: 4 # Number of retries
  template: # Pod definition
    spec:
      restartPolicy: Never # Never restart the container
      containers:
        - name: print
          image: busybox:stable
          command: ["echo", "this is a test"]

What is a CronJob? - CronJobs are Jobs that run periodically according to a schedule

apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/1 * * * *" # cron expression
  jobTemplate: # Job definition
    spec:
      backoffLimit: 4
      activeDeadlineSeconds: 10
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: print
              image: busybox:stable
              command: ["echo", "This is a test!"]

Multi-Container Pods

What is a Multi-container Pod? - Multi-container Pods are pods that include multiple containers that work together

There are 3 main patterns:

  1. Sidecar - An extra container that enhances the main container, e.g. by shipping its logs or files
  2. Ambassador - An extra container that proxies network traffic to and from the main container
  3. Adapter - An extra container that transforms the main container's output, e.g. into a standard format

When should I use multi-container pods? - Only use multi-container Pods when the containers need to be tightly coupled, sharing resources such as network and storage volumes.

A sidecar example, where one container writes a file and the sidecar reads it:

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-test
spec:
  containers:
    - name: writer
      image: busybox:stable
      command: ['sh', '-c', 'echo "The writer wrote this!" > /output/data.txt; while true; do sleep 5; done']
      volumeMounts:
        - name: shared
          mountPath: /output
    - name: sidecar
      image: busybox:stable
      command: ['sh', '-c', 'while true; do cat /input/data.txt; sleep 5; done']
      volumeMounts:
        - name: shared
          mountPath: /input
  volumes:
    - name: shared
      emptyDir: {}

Init Containers

What is an init container? - An init container is a container that runs a task before the main Pod's container starts.

Why use an init container?

  1. Separate Image - You can use a different image with software and config that the main container doesn't need. This can make the main container lighter.
  2. Delay Startup - It can be used to delay startup until some requirements are met.
  3. Security - It can perform sensitive steps like consuming secrets in isolation from the main container.

Creating an init container

apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
    - name: nginx
      image: nginx:stable
  initContainers: # Configuration of your init containers.
    - name: busybox
      image: busybox:stable
      command: ['sh', '-c', 'sleep 60'] #  In this case the main container will start after waiting 60 seconds

Volumes (Container Storage)

What is a Volume? A volume provides external storage for containers, outside the container file system.

There are 2 concepts used to instantiate a volume in Kubernetes:
volumes - Defined at the Pod level; they describe where and how the data is stored
volumeMounts - Defined in the container spec; they attach a volume to a container and specify its path inside the container

What types of Volumes exist?

  1. hostPath - Data is stored on the worker node file system
  2. emptyDir - Data is stored on the worker node file system, but when the Pod is deleted the data is deleted with it.
  3. persistentVolumeClaim - Data is stored using a PersistentVolume.

Using a hostPath Volume

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-volume-test
spec:
  restartPolicy: OnFailure
  containers:
    - name: busybox
      image: busybox:stable
      command: ['sh', '-c', 'cat /data/data.txt']
      volumeMounts:
        - name: host-data # The same name defined under volumes
          mountPath: /data
  volumes:
    - name: host-data
      hostPath:
        path: /etc/hostPath # The path on the worker node
        type: Directory # It can be Directory, DirectoryOrCreate, File or FileOrCreate

Using an emptyDir Volume

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-volume-test
spec:
  restartPolicy: OnFailure
  containers:
    - name: busybox
      image: busybox:stable
      command: ['sh', '-c', 'echo "Writing..." > /data/data.txt; cat /data/data.txt']
      volumeMounts:
        - name: emptydir-vol # The same name defined under volumes
          mountPath: /data
  volumes:
    - name: emptydir-vol
      emptyDir: {}

Using a PersistentVolumeClaim

A PersistentVolume allows you to abstract volume storage details away from Pods and treat storage like a consumable resource. Once the PersistentVolume is configured, you can create a claim for it and mount the claim into a container.
The claim will automatically find a PersistentVolume that matches its criteria (e.g. storage class, size) and bind to it.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  capacity:
    storage: 1Gi # Max amount of space reserved for this volume
  accessModes:
    - ReadWriteOnce
  storageClassName: slow # Any name you want. This will describe what kind of storage it is.
  hostPath:
    path: /etc/hostPath # The path where this volume will be
    type: Directory # It can be Directory, DirectoryOrCreate, File or FileOrCreate

Once created you can see the volume using this command:

kubectl get pv
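
A Pod consumes a PersistentVolume through a PersistentVolumeClaim. A minimal sketch (the name hostpath-pvc is an example):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc
spec:
  storageClassName: slow # Must match the PersistentVolume's storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi # Must fit within the PersistentVolume's capacity

You can check whether the claim got bound with kubectl get pvc.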

Consuming a PersistentVolume is quite similar to the other two:

apiVersion: v1
kind: Pod
metadata:
  name: pv-pod-test
spec:
  restartPolicy: OnFailure
  containers:
    - name: busybox
      image: busybox:stable
      command: ['sh', '-c', 'cat /data/data.txt']
      volumeMounts:
        - name: pv-host-data
          mountPath: /data
  volumes:
    - name: pv-host-data
      persistentVolumeClaim:
        claimName: hostpath-pvc # This must match the PersistentVolumeClaim's name

Deployment

What is a deployment? - A deployment defines a desired state for a set of replica pods. Kubernetes constantly works to maintain the desired state by creating, deleting and replacing those pods.

How does it achieve that? - A deployment manages multiple replica sets using a Pod Template. The pod template is the shared configuration used by the deployment to create new replicas. The deployment also has a property called replicas, which specifies the desired number of Pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # Amount of pods desired
  selector: # This is a "query" used to identify the Pods that belong to this deployment
    matchLabels: # Matches against Pod labels. There is also matchExpressions, which allows more complex queries using In, NotIn, Exists and DoesNotExist
      app: nginx
  template: # Pod template (see below)
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
          - containerPort: 80

This configuration will produce 2 pods like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-deployment-[random-id] # The name is the deployment name (nginx-deployment) + a random identifier
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
      - containerPort: 80

As you can see, the template matches the Pod almost 1:1, the only difference being the generated name.

To scale a deployment up or down, all you need to do is change the replicas number. You can either change the file and reapply it, or run the command:

# kubectl scale deployment/[deployment-name] --replicas=4
kubectl scale deployment/nginx-deployment --replicas=4
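
Tip: for the exam it is often faster to generate a Deployment manifest imperatively and edit it afterwards (a sketch):

# kubectl create deployment [name] --image=[image] --replicas=[count] --dry-run=client -o yaml
kubectl create deployment nginx-deployment --image=nginx:1.14.2 --replicas=2 --dry-run=client -o yaml > deployment.yaml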

Performing rolling updates

What is a rolling update? - It allows you to change a Deployment's Pod template, gradually replacing replicas with zero downtime. You can use rolling update to deploy new code, change configurations and so on.

You don't need to add any extra configuration to your deployment; Kubernetes applies sensible defaults. But you can tune how rolling updates happen:

spec:
  replicas: 3
  paused: false # When true it will prevent changes from triggering rollouts
  revisionHistoryLimit: 10 # specifies the number of old ReplicaSets to retain to allow rollback
  progressDeadlineSeconds: 600 # How many seconds to wait for the Deployment to make progress before it is reported as failed (600 is the default)
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2 # how many pods we can add at a time 
      maxUnavailable: 1 # how many pods can be unavailable during the rolling update

Both maxSurge and maxUnavailable can receive fixed numbers or percentages.
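
A rollout is triggered by any change to the Pod template, for example updating the image (the container name nginx is an assumption here):

# kubectl set image deployment/[deployment-name] [container-name]=[image]
kubectl set image deployment/rolling-deployment nginx=nginx:1.21.1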

To see Kubernetes rolling out the changes you can run the following command:

# kubectl rollout status deployment/[deployment-name]
kubectl rollout status deployment/rolling-deployment

And if you want to roll back a rollout, you can perform the following ones:

# kubectl rollout history deployment/[deployment-name]
kubectl rollout history deployment/app

# kubectl rollout undo deployment/[deployment-name] --to-revision=[revision_number]
kubectl rollout undo deployment/app --to-revision=2

If you don't want rolling updates, you can instead terminate all running instances and then recreate them with the new version. To do that, set the strategy type to Recreate:

spec:
  replicas: 3
  strategy:
    type: Recreate

Deployment strategies

What is a deployment strategy? - A method of rolling out new code that is used to achieve some benefit, such as increased reliability and minimized risk and downtime.

Blue/Green deployment

A blue/green deployment involves using 2 identical production environments, usually called blue and green.

New code is rolled out to the second environment. This environment is confirmed to be stable and working before user traffic is directed to it.

#blue deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluegreen-test
      color: blue # The only differences between the two deployments are the color label and the image
  template:
    metadata:
      labels:
        app: bluegreen-test
        color: blue
    spec:
      containers:
        - name: nginx
          image: linuxacademycontent/ckad-nginx:blue
          ports:
            - containerPort: 80
---
#green deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluegreen-test
      color: green
  template:
    metadata:
      labels:
        app: bluegreen-test
        color: green
    spec:
      containers:
        - name: nginx
          image: linuxacademycontent/ckad-nginx:green
          ports:
            - containerPort: 80

Then we need to create a Service, which directs traffic to the Pods of those deployments. It is responsible for switching the traffic between them.
We will see more about services later, but for now this should be enough.

Create the service:

apiVersion: v1
kind: Service
metadata:
  name: bluegreen-test-svc
spec:
  selector: # That will select only the pods that have those two labels
    app: bluegreen-test
    color: blue # To switch environments we just need to change this to green
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Then execute those commands:

# kubectl get service [service-name]
kubectl get service bluegreen-test-svc

# Get the cluster IP and execute curl
curl 10.103.231.42

As a response you should get "I'm blue", and if you change the service selector to green it should change to "I'm green!"
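
You can flip the selector without editing the manifest, e.g. with kubectl patch (a sketch):

# kubectl patch service [service-name] -p [patch]
kubectl patch service bluegreen-test-svc -p '{"spec":{"selector":{"color":"green"}}}'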

Canary deployment

Just like blue/green, a canary deployment uses 2 environments.
A portion of the user base is directed to the new code in order to expose any issues before the changes are rolled out to everyone else.

#main deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-deployment
spec:
  replicas: 4 # The main deployment should have more replicas
  selector:
    matchLabels:
      app: canary-test
      environment: main # The only differences between the two deployments are the environment label and the image
  template:
    metadata:
      labels:
        app: canary-test
        environment: main
    spec:
      containers:
        - name: nginx
          image: linuxacademycontent/ckad-nginx:1.0.0
          ports:
            - containerPort: 80
---
#canary deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-deployment
spec:
  replicas: 1 # canary environment should have less replicas
  selector:
    matchLabels:
      app: canary-test
      environment: canary # canary environment
  template:
    metadata:
      labels:
        app: canary-test
        environment: canary
    spec:
      containers:
        - name: nginx
          image: linuxacademycontent/ckad-nginx:canary
          ports:
            - containerPort: 80

Create the service:

apiVersion: v1
kind: Service
metadata:
  name: canary-test-svc
spec:
  selector: # This selects the Pods of both deployments, since they share this label
    app: canary-test
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Then execute those commands:

# kubectl get service [service-name]
kubectl get service canary-test-svc

# Get the cluster IP and execute curl
curl 10.103.231.42

In this case requests go to main roughly 80% of the time and to canary 20% of the time, following the 4:1 replica ratio. This is a very simplistic way to do canary deployments; if you need something more sophisticated, you can do it using an Ingress.
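
Since the split follows the replica ratio, you can gradually shift more traffic to the canary by scaling it up:

# kubectl scale deployment/[deployment-name] --replicas=[count]
kubectl scale deployment/canary-deployment --replicas=2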

Helm

What is Helm? - Helm is a package management tool for applications that run in Kubernetes. It simplifies the creation of Kubernetes objects by keeping repetitive configuration in one place and changing it through parameters.

What is a Helm chart? - It is the "package". It contains all of the Kubernetes resource definitions needed to get the application up and running in the cluster.

Where are Helm Charts stored? - They are stored in a Helm Repository. It allows you to store, browse and download those charts.

How to create a Helm Chart? - As mentioned before, a Helm chart is an encapsulation of a Kubernetes configuration. The structure is quite simple; you create the following folder layout:

charts
  - <chart_name>
    - templates
      - <configuration_file_1>
      - <configuration_file_2>
      - <configuration_file_n>
    Chart.yaml
    values.yaml

Chart.yaml - Contains the metadata for the chart (e.g. version, name, description…)

apiVersion: v2
name: s3-proxy
description: A S3 Proxy
type: application # It can be application or library. Applications can be deployed; libraries cannot, they provide reusable utilities and functions
version: 0.1.0 # Chart version
appVersion: "1.16.0" # Application version (Optional)

values.yaml - Sets default values for the chart

name: backoffice-qa

backend:
  image:
    repository: my-image-path
    tag: 16461

templates - Files inside the templates folder are Helm chart templates; they use a template language to inject dynamic values into the configuration files.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }} # This uses the values described on values.yaml or the ones passed as arguments
  namespace: default
  labels:
    name: {{ .Values.name }}
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: {{ .Values.name }}

How to install a package? - To install a new package, use the helm install command. At its simplest, it takes two arguments: A release name that you pick, and the name of the chart you want to install.

# helm install [release-name] [package]
helm install happy-panda bitnami/wordpress

If you want to modify some of the default values, you can pass a file using the -f flag:

# helm install -f [values file] [release-name] [package]
helm install -f values.yaml happy-panda bitnami/wordpress
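
Individual values can also be overridden on the command line with --set (the key name here is just an example):

# helm install --set [key]=[value] [release-name] [package]
helm install --set replicaCount=2 happy-panda bitnami/wordpress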

Helm does not wait until all of the resources are running before it exits. To keep track of a release's state, or to re-read configuration information, you can use helm status:

# helm status [release-name]
helm status happy-panda

How to upgrade a package? - When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command:

# helm upgrade -f [values file] [release-name] [package]
helm upgrade -f panda.yaml happy-panda bitnami/wordpress

How to uninstall a package? - You can execute helm uninstall:

# helm uninstall [release-name]
helm uninstall happy-panda

To confirm that it was deleted you can list all releases and see if the package is not there anymore:

helm list

Probes

What is a probe? - Probes are part of the container spec in Kubernetes. They allow you to customize how Kubernetes detects the state of a container.

There are 3 types of Probes:

  1. Liveness - Check if a container is healthy, so it can be restarted if it becomes unhealthy
  2. Readiness - Determine when a container is fully started up and ready to receive user traffic
  3. Startup - Check a container's health during startup, for slow-starting containers

An example with liveness and readiness probes:

apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.20.1
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 3 # How long to wait before the first check
        periodSeconds: 3 # The interval between checks
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5 # How long to wait before the first check
        periodSeconds: 5 # The interval between checks
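
The example above only covers liveness and readiness. A startup probe uses the same format; while it runs, the other probes are disabled, which protects slow-starting containers. A sketch for the same nginx container:

      startupProbe:
        httpGet:
          path: /
          port: 80
        failureThreshold: 30 # Together with periodSeconds, allows up to 30 * 10 = 300 seconds to start
        periodSeconds: 10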

Ingress

What is an Ingress? - An Ingress is a Kubernetes object that manages access to Services from outside the cluster.

For example, you can use an Ingress to provide your end users with access to your web application running on Kubernetes.

Ingresses are more powerful than NodePort Services; for example, they can provide routing rules, add headers, and so on.
The only caveat is that you need an Ingress controller running in the cluster to actually implement the Ingress functionality.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
spec:
  ingressClassName: nginx
  rules:
    - host: traderepublic.staging.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 80
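
An Ingress can also be generated imperatively; a sketch (the rule syntax is host/path=service:port):

# kubectl create ingress [name] --class=[class] --rule=host/path=service:port
kubectl create ingress ingress-test --class=nginx --rule="traderepublic.staging.com/=frontend-service:80" --rule="traderepublic.staging.com/api=backend-service:80"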

Custom Resource Definition (CRD)

A Custom Resource Definition is an extension of the Kubernetes API. It allows you to define your own custom Kubernetes object types and interact with them just as you would with other Kubernetes objects.

You can create custom controllers that act upon custom resources.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: beehives.acloud.guru # It has to be [plural].[group]
spec:
  group: acloud.guru
  names:
    plural: beehives
    singular: beehive
    kind: Beehive
    shortNames:
      - hive
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                supers:
                  type: integer
                bees:
                  type: integer

To create an instance of it, follow this example:

apiVersion: acloud.guru/v1 # [group]/[version]
kind: Beehive
metadata:
  name: beehive
spec:
  supers: 5
  bees: 100
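
Once applied, you can interact with the custom resources like any built-in object, including through the short name:

kubectl get beehives
kubectl get hive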

Service Accounts

A ServiceAccount allows processes within containers to authenticate with the Kubernetes API server. Service accounts can be assigned permissions via role-based access control (RBAC), just like regular user accounts.

To give permissions to a service account, you create a Role and then bind it to the service account through a RoleBinding. Roles and RoleBindings are namespaced; for non-namespaced equivalents, use ClusterRole and ClusterRoleBinding.

  1. Create a Role

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-pods-role
rules:
  - apiGroups: [""] # "" is the core API group (resources under /api/v1); other groups (e.g. rbac.authorization.k8s.io) are named explicitly
    resources: ["pods"]
    verbs: ["list"]

  2. Create a ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa

  3. Create a RoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: list-pods-rb
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: default
roleRef:
  kind: Role
  name: list-pods-role
  apiGroup: rbac.authorization.k8s.io

  4. Create a Pod that consumes it

apiVersion: v1
kind: Pod
metadata:
  name: sa-pod
spec:
  serviceAccountName: my-sa
  containers:
    - name: my-pod
      image: radial/busyboxplus:curl
      command: ['sh', '-c', 'curl -s --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes/api/v1/namespaces/default/pods']
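
You can verify what the service account is allowed to do with kubectl auth can-i:

# kubectl auth can-i [verb] [resource] --as=system:serviceaccount:[namespace]:[service-account]
kubectl auth can-i list pods --as=system:serviceaccount:default:my-sa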

Admission controller

An admission controller intercepts requests to the Kubernetes API after authentication and authorization, but before any objects are persisted. Admission controllers can be used to validate, deny, or even modify requests. They are configured in the API server manifest at /etc/kubernetes/manifests/kube-apiserver.yaml.
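
On a kubeadm-based cluster, the plugins are enabled through a flag on the API server command in that manifest, for example:

- --enable-admission-plugins=NodeRestriction,ResourceQuota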

Resource requests and limits

Requests - Provide Kubernetes with an idea of how many resources a container is expected to use
Limits - Provide an upper bound on the resources a container is allowed to use. Containers that exceed their limits are terminated

ResourceQuota - A Kubernetes object that sets limits on the resources used within a Namespace. If creating or modifying a resource would exceed the quota, the request is denied. The ResourceQuota admission plugin must be enabled for this to be enforced.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: resources-test-quota
  namespace: resources-test
spec:
  hard:
    requests.memory: 128Mi
    requests.cpu: 500m
    limits.memory: 256Mi
    limits.cpu: "1"
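
With that quota in place, Pods in the namespace must declare requests and limits or they will be rejected. A sketch:

apiVersion: v1
kind: Pod
metadata:
  name: resources-pod
  namespace: resources-test
spec:
  containers:
    - name: busybox
      image: busybox:stable
      command: ['sh', '-c', 'while true; do sleep 10; done']
      resources:
        requests:
          memory: 64Mi
          cpu: 250m
        limits:
          memory: 128Mi
          cpu: 500m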

Config Map and Secrets

A ConfigMap is an object that stores configuration data.
A Secret is an object that stores sensitive data.

There are 2 ways to consume that data in a container:

  1. As environment variables
  2. As files through a mounted volume

Creating a config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  message: Potato
  app.cfg: |
    # A config file
    key1=value1
    key2=value2
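
The same ConfigMap can be created imperatively (--from-file assumes an app.cfg file exists in the current directory):

# kubectl create configmap [name] --from-literal=[key]=[value] --from-file=[file]
kubectl create configmap my-configmap --from-literal=message=Potato --from-file=app.cfg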

Consuming a config map:

apiVersion: v1
kind: Pod
metadata:
  name: cm-pod
spec:
  restartPolicy: Never
  containers:
    - name: my-app
      image: busybox:stable
      command: ['sh', '-c', 'echo $MESSAGE; cat /config/app.cfg']
      volumeMounts:
        - name: config
          mountPath: /config
          readOnly: true
      env:
        - name: MESSAGE
          valueFrom:
            configMapKeyRef:
              name: my-configmap
              key: message
  volumes:
    - name: config
      configMap:
        name: my-configmap
        items:
          - key: app.cfg
            path: app.cfg

Transforming a value into base64:

echo Potato | base64

Creating a secret with the value:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  sensitive.data: UG90YXRvCg==
  passwords.txt: YW5vdGhlciBzZWNyZXQK
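
Secrets can also be created imperatively; in that case kubectl does the base64 encoding for you:

# kubectl create secret generic [name] --from-literal=[key]=[value]
kubectl create secret generic my-secret --from-literal=sensitive.data=Potato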

Consuming the secret:

apiVersion: v1
kind: Pod
metadata:
  name: secrets-pod
spec:
  restartPolicy: Never
  containers:
    - name: my-app
      image: busybox:stable
      command: ['sh', '-c', 'echo $SENSITIVE; cat /config/passwords.txt']
      volumeMounts:
        - name: config
          mountPath: /config
          readOnly: true
      env:
        - name: SENSITIVE
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: sensitive.data
  volumes:
    - name: config
      secret:
        secretName: my-secret # This must match the Secret's name
        items:
          - key: passwords.txt
            path: passwords.txt

Security Context

The security context is part of the Pod and container spec. It allows you to control advanced security-related settings for containers.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod
spec:
  containers:
    - name: main
      image: busybox:stable
      command: ['sh', '-c', 'while true; do echo "Potato"; sleep 10; done;']
      securityContext:
        runAsUser: 300
        runAsGroup: 4000
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
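
You can verify the effective user and group inside the running container:

# kubectl exec [pod-name] -- id
kubectl exec my-pod -- id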

Network Policies

Kubernetes creates a virtual network across the worker nodes. Network Policies are objects that allow you to restrict network traffic to and from Pods within the cluster network.

Non-isolated Pods - Pods that are not selected by any NetworkPolicy. They are open to all incoming and outgoing traffic.
Isolated Pods - Pods that are selected by at least one NetworkPolicy. They only allow traffic that a NetworkPolicy explicitly permits.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-test-a-default-deny-ingress
  namespace: np-test-a
spec:
  podSelector:
    matchLabels:
      app: np-test-server
  policyTypes:
    - Ingress # this policy restricts ingress traffic
  ingress: # rules for allowed ingress
    - from:
        - namespaceSelector:
            matchLabels:
              team: bteam
          podSelector:
            matchLabels:
              app: np-test-client
      ports:
        - protocol: TCP
          port: 80
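
A blanket default-deny policy selects every Pod in the namespace and defines no allow rules; a minimal sketch:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: np-test-a
spec:
  podSelector: {} # selects every Pod in the namespace
  policyTypes:
    - Ingress # with no ingress rules listed, all incoming traffic is denied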

Kubernetes Contexts

If you want to work with multiple Kubernetes clusters, you can create additional contexts.
To list your contexts run the command:

kubectl config get-contexts

To change the context, use:

kubectl config use-context [context]
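
During the exam it also helps to set a default namespace for the current context:

# kubectl config set-context --current --namespace=[namespace]
kubectl config set-context --current --namespace=my-namespace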