In my last two posts I touched on Kubernetes Pods and ReplicaSets (RS). I will now build on that with another controller concept called "Deployment". The main job of a Deployment is to provide declarative updates to both Pods and RS. You declare the state you want in a manifest (YAML file) and the controller makes sure the current state is reconciled to match the desired state. To better understand this, let us see how it actually works, how to define it in a manifest, and finally create a demo deployment.
Why use a Deployment?
In Kubernetes, a Deployment is the recommended way to deploy Pods or RS, simply because of the advanced features it comes with. Below are some of the key benefits.
- Deploy a RS.
- Update Pods (PodTemplateSpec).
- Roll back to older Deployment revisions.
- Scale the Deployment up or down.
- Pause and resume the Deployment.
- Use the status of the Deployment to determine the state of replicas.
- Clean up older RS that you don't need anymore.
- Canary deployments.
Let us now look at some of these features in action.
How to create a Deployment
To demonstrate, we are going to create a simple deployment of nginx with 3 replicas. Like any other object in Kubernetes, the apiVersion, kind and metadata fields are required. The spec section is slightly different. With this basic example, we will see what the manifest for a Deployment looks like and see some of the benefits mentioned above in action.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # Name of our deployment
  labels:
    app: nginx
spec:
  replicas: 3                # Number of pods
  selector:                  # This is how the Deployment knows which Pods to manage
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1   # Image version
        ports:
        - containerPort: 80
$ kubectl create -f deploy.yaml
deployment.apps "example-deployment" created
How to work with a Deployment
We first check to see if our deployment was successfully created by running the "kubectl rollout status" and "kubectl get deployment" commands. The first command simply tells us whether the rollout succeeded or not; the latter shows us the desired number of replicas, how many have been updated, how many replicas of our nginx Pod are running, and how many are actually available to end users.
$ kubectl rollout status deployment example-deployment
deployment "example-deployment" successfully rolled out
$ kubectl get deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-deployment   3         3         3            3           2m
If we run "kubectl get replicaset", we will see that the Deployment created a RS for our nginx Pods. Since the RS is created automatically, we can use the Deployment's status to determine the state of its replicas.
$ kubectl get replicaset
NAME                            DESIRED   CURRENT   READY   AGE
example-deployment-5d4fbdd945   3         3         3       10m
By default, the Deployment appends a pod-template-hash to the name of each RS it creates, for example "example-deployment-5d4fbdd945". Do not change this hash.
If we run "kubectl get pod", we will see the Pods our RS created.
$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-5d4fbdd945-7hmfq   1/1     Running   0          29m
example-deployment-5d4fbdd945-nfwcx   1/1     Running   0          29m
example-deployment-5d4fbdd945-wzrr6   1/1     Running   0          29m
Before we go any further, there is a special flag that you should be aware of, called "--record". Appending this flag to a kubectl command records the command that made the change. We will use this flag from here on to demonstrate.
It is also worth noting that a rollout is triggered if and only if the Deployment's Pod template (.spec.template) is changed, for example when the labels or container images of the template are updated. Existing RS controlling Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down as the new RS is created. Other updates, such as scaling the Deployment, do not trigger a rollout, meaning they will not create a new RS. This is a key concept to understand.
Run the "kubectl set image" command to change the image version of nginx. This command is the same as updating the container image field in the manifest (YAML) and then applying it. We could also just run "kubectl edit deployment" and edit our image version directly.
$ kubectl set image deployment example-deployment nginx=nginx:latest --record
deployment.apps "example-deployment" image updated
If we run “kubectl get replicaset” again, we will see a new RS because of the update we made to the image.
$ kubectl get replicaset
NAME                            DESIRED   CURRENT   READY   AGE
example-deployment-5d4fbdd945   0         0         0       31m
example-deployment-7d9f9876cc   3         3         3       3m
We also see that our first RS was scaled down to 0 while the new RS, running the latest version of nginx, now has 3 replicas. This is another key feature: the old RS is scaled down without us having to do anything manually.
At this point it is worth understanding how a Deployment handles rollouts. By default, it makes sure that at most 25 percent of your Pods are unavailable, which guarantees that our nginx Pods are never all scaled down at the same time. It also makes sure that it does not create more than 25 percent over the desired number of replicas we specified while performing the rollout. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. This is referred to as the "Rolling Update" strategy, another key benefit of using a Deployment.
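These defaults can also be spelled out explicitly in the manifest. As a minimal sketch, the values shown below are the Kubernetes defaults, so adding this section does not change the Deployment's behavior:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most a quarter of the desired Pods may be down during a rollout
      maxSurge: 25%         # at most a quarter above the desired count may be created temporarily
```

Both fields also accept absolute numbers (e.g. maxSurge: 1) instead of percentages, which can be easier to reason about for small replica counts.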
Run "kubectl rollout history deployment example-deployment" to see a record of the changes we made earlier.
$ kubectl rollout history deployment example-deployment
deployments "example-deployment"
REVISION  CHANGE-CAUSE
1         kubectl create --filename=deploy.yaml --record=true
2         kubectl set image deployment example-deployment nginx=nginx:latest --record=true
Here we can see that all our deployment changes are recorded and which command made each change. Pay attention to the revision numbers.
How to rollback the changes
Let's say the new image version was bad and we want to go back to the previous version. We achieve this with the help of the "--record" flag and the revision numbers from above. Once we have the revision number we want to roll back to, we first make sure that revision contains what we want: appending the revision number to the rollout history command shows more information about that revision.
$ kubectl rollout history deployment example-deployment --revision=1
deployments "example-deployment" with revision #1
Pod Template:
  Labels:       app=nginx
                pod-template-hash=1809688501
  Annotations:  kubernetes.io/change-cause=kubectl create --filename=deploy.yaml --record=true
  Containers:
   nginx:
    Image:        nginx:1.7.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Once we confirm this is the right revision, we can then roll back.
$ kubectl rollout undo deployment example-deployment --to-revision=1
deployment.apps "example-deployment"
We can run the status command to make sure the rollback completed successfully. Once we are sure, we run the rollout history command again. Notice how we no longer have revision 1; we now have revisions 2 and 3, because rolling back to a revision re-registers it as the newest one. If we describe the deployment, we will see in the events section how the rollback happened.
Events:
  Type    Reason              Age               From                   Message
  ----    ------              ----              ----                   -------
  Normal  ScalingReplicaSet   13m               deployment-controller  Scaled up replica set example-deployment-7d9f9876cc to 1
  Normal  ScalingReplicaSet   13m               deployment-controller  Scaled down replica set example-deployment-5d4fbdd945 to 2
  Normal  ScalingReplicaSet   13m               deployment-controller  Scaled up replica set example-deployment-7d9f9876cc to 2
  Normal  ScalingReplicaSet   13m               deployment-controller  Scaled down replica set example-deployment-5d4fbdd945 to 1
  Normal  ScalingReplicaSet   13m               deployment-controller  Scaled up replica set example-deployment-7d9f9876cc to 3
  Normal  ScalingReplicaSet   13m               deployment-controller  Scaled down replica set example-deployment-5d4fbdd945 to 0
  Normal  DeploymentRollback  2m                deployment-controller  Rolled back deployment "example-deployment" to revision 1
  Normal  ScalingReplicaSet   2m                deployment-controller  Scaled up replica set example-deployment-5d4fbdd945 to 1
  Normal  ScalingReplicaSet   2m                deployment-controller  Scaled up replica set example-deployment-5d4fbdd945 to 2
  Normal  ScalingReplicaSet   2m                deployment-controller  Scaled down replica set example-deployment-7d9f9876cc to 2
  Normal  ScalingReplicaSet   1m (x2 over 19m)  deployment-controller  Scaled up replica set example-deployment-5d4fbdd945 to 3
  Normal  ScalingReplicaSet   1m                deployment-controller  Scaled down replica set example-deployment-7d9f9876cc to 1
  Normal  ScalingReplicaSet   1m                deployment-controller  Scaled down replica set example-deployment-7d9f9876cc to 0
Change the number of replicas
We can scale our deployment by simply changing the replica count in the manifest and applying it again. We can also just run the "kubectl scale" command with the "--replicas" flag.
$ kubectl scale --replicas=4 deployment example-deployment
deployment.extensions "example-deployment" scaled
$ kubectl get deploy
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-deployment   4         4         4            4           23m
We now have 4 replicas.
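The declarative equivalent is a one-line change in deploy.yaml followed by a kubectl apply. Remember from earlier that scaling does not touch .spec.template, so no new RS is rolled out:

```yaml
spec:
  replicas: 4   # was 3; changing only this field scales without triggering a rollout
```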
Pause and resume our deployment
The "kubectl rollout pause" command allows us to make changes and fixes without triggering a new RS rollout.
$ kubectl rollout pause deploy example-deployment
deployment.apps "example-deployment" paused
Let's change the image again:
$ kubectl set image deployment example-deployment nginx=nginx:1.9.1 --record
deployment.apps "example-deployment" image updated
$ kubectl get replicaset
NAME                            DESIRED   CURRENT   READY   AGE
example-deployment-5d4fbdd945   4         4         4       28m
example-deployment-7d9f9876cc   0         0         0       22m
You can see that the Deployment did not roll out a new RS. If we resume the rollout by running "kubectl rollout resume deploy example-deployment", a new RS will be created.
There are three states a Deployment can be in during its lifecycle: progressing, complete, and failed.
Deployment strategies are used to replace old Pods with new ones. There are two kinds you can use.
- Recreate – all existing Pods are killed and then recreated. This is defined by setting .spec.strategy.type to Recreate in the manifest.
- Rolling update – Pods are updated in a rolling fashion. This is defined by setting .spec.strategy.type to RollingUpdate. We can tune maxUnavailable and maxSurge but, as we mentioned earlier, by default at most 25 percent of Pods may be unavailable, so we don't have to change them unless we need to.
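As a sketch, the Recreate variant is declared like this. It is useful when old and new versions cannot run side by side (for example, because of an incompatible database schema), at the cost of downtime during the switch:

```yaml
spec:
  strategy:
    type: Recreate   # all old Pods are terminated before any new Pods are created
```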
These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.