
In this post I’m going to explain what a Kubernetes pod is, its use cases, how to use it to deploy an example application, and what its lifecycle looks like. This post assumes you understand the purpose of Kubernetes and that you have minikube and kubectl installed.

What is a Pod (as in a pod of whales or a pea pod)? A pod is the smallest building block of the Kubernetes object model. In a Kubernetes cluster, a pod represents a running process. Inside a pod, you can have one or more containers. Those containers all share a network IP address, storage, and any other specification applied to the pod. Another way to think of a pod is as an application-specific “logical host” that holds one or more tightly coupled containers. Say, for example, we have an “app-container” and a “logging-container” in a pod, and the logging container’s only job is to pull logs from the app container. Having them in a pod eliminates a lot of extra setup to get them to talk: they are co-located, so everything is local and they share all the same resources. This is the same thing as being executed on the same physical server in a pre-container world.

Pod model types

There are two model types of pod you can create: “one-container-per-pod” and “multi-container-pod”.

  • One-container-per-pod. This model is the most popular. The Pod acts as a wrapper around a single container, and since the Pod is the smallest object Kubernetes knows about, Kubernetes manages Pods rather than managing the containers directly.
  • Multi-container-pod. In this model a pod holds multiple co-located containers that are tightly coupled and need to share resources. These containers work as a single cohesive unit of service, and the Pod wraps the containers and storage resources together as a single unit. Some example use cases are sidecars, proxies, and logging.

The idea is that each pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple replicas), you should use multiple Pods, one for each instance. Note that this is not the same thing as running multiple containers of the same application in a single pod.

It is worth mentioning that Pods aren’t meant to be durable entities, so a Pod won’t survive a node failure, node maintenance, or similar events. Kubernetes has Controllers that solve this problem for us, and in practice pods are usually created through some type of controller (see the sketch below).
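As a rough illustration of that point (not something we deploy in this post), here is a minimal Deployment manifest; the Deployment controller keeps three replicas of an nginx pod running and replaces them if they die. The names used here are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx                 # hypothetical name for this sketch
spec:
  replicas: 3                    # the controller keeps three pod replicas alive
  selector:
    matchLabels:
      app: my-nginx
  template:                      # the pod template the controller stamps out
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx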

Pods in practice

We have talked about what a pod is in theory, now let’s see what it actually looks like in practice. We will first go over what a simple pod manifest looks like, then we will deploy an example app showing how to work with it.

What does the manifest (YAML) look like?

apiVersion: "api version"              (1)
kind: "object to create"               (2)
metadata:                              (3)
  name: "Pod name"
  labels:
    app: "label value"
spec:                                  (4)
  containers:
  - name: "container name"
    image: "image to use for container"

We will break the manifest down into four parts: apiVersion, kind, metadata, and spec.

  • apiVersion – Which version of the Kubernetes API you’re using to create this object.
  • kind – What kind of object you want to create.
  • metadata – Information that uniquely identifies the object we are creating, such as its name or namespace.
  • spec – This is where we specify the configuration of our pod, e.g., the image name, container name, volumes, etc.

apiVersion, kind, and metadata are required fields and apply to all Kubernetes objects, not just pods. The layout of spec (also required), on the other hand, differs from one object to another. The example manifest above shows what a single-container pod spec looks like.
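If you want to see which fields a given section accepts, kubectl can print the schema for you. For example:

kubectl explain pod                    # top-level fields of the Pod object
kubectl explain pod.spec.containers    # fields available under spec.containers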

OK, now that we understand what the manifest looks like, let’s create a pod with each of the two model types.

Single Container Pod

Our pod-1.yaml is the manifest for our single-container pod. It runs a plain nginx container.

apiVersion: v1
kind: Pod
metadata:
  name: firstpod
  labels:
    app: myapp
spec:
  containers:
  - name: my-first-pod
    image: nginx

Next, we deploy this manifest into our local Kubernetes cluster by running “kubectl create -f pod-1.yaml”. Then we run “kubectl get pods” to confirm that our pod is running as expected.

kubectl get pod
NAME                                          READY     STATUS    RESTARTS   AGE
firstpod                                      1/1       Running   0          45s

As you can see, it is now running. How can we confirm nginx is actually running? Run “kubectl exec firstpod -- service nginx status”. This runs a command inside our pod: everything after the “--” separator (“service nginx status”) is executed in the container. Note that this is very similar to “docker exec” if you are already familiar with Docker.

kubectl exec firstpod -- service nginx status
nginx is running.
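Another quick check, which avoids exec’ing into the pod at all, is to forward a local port to the pod and hit nginx from your own machine (the local port 8080 here is an arbitrary choice):

kubectl port-forward firstpod 8080:80
# in a second terminal
curl http://localhost:8080/

You should get the default nginx welcome page back.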

Cool. Now let’s clean up by running “kubectl delete pod firstpod”.

kubectl delete pod firstpod
pod "firstpod" deleted
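You can also delete the pod by pointing kubectl at the same manifest you used to create it:

kubectl delete -f pod-1.yaml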

Multi Container Pod

In this example, we will deploy something more useful. This time we want to create a pod with multiple containers that work together as one entity: one container writes the current date to a file every 10 seconds, and the other container serves that file for us.
Go ahead and deploy the pod-2.yaml manifest below with “kubectl create -f pod-2.yaml”.

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod # Name of our pod
spec:
  volumes:
  - name: shared-date-logs  # Creating a shared volume for my containers
    emptyDir: {}
  containers:
  - name: container-writing-dates # Name of first container
    image: alpine # Image to use for first container
    command: ["/bin/sh"]
    args: ["-c", "while true; do date >> /var/log/output.txt; sleep 10;done"] # writing date every 10secs
    volumeMounts:
    - name: shared-date-logs
      mountPath: /var/log # Mounting log dir so app can write to it.
  - name: container-serving-dates # Name of second container
    image: nginx:1.7.9 # Image for second container
    ports:
      - containerPort: 80 # Defining what port to use.
    volumeMounts:
    - name: shared-date-logs
      mountPath: /usr/share/nginx/html # Where nginx will serve the written file

It is worth stopping here and briefly touching on “volumes” in pods. The volume in our example provides a way for the containers to share data during the life of the pod. If the pod is deleted and recreated, any data stored in the shared volume is lost (the PersistentVolume object solves this issue so that your data can survive the loss of a pod; a sketch follows below). We are using this multi-container example not only to demonstrate how to create a two-container pod but also to show how both containers share resources.
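As a rough sketch of that persistence idea (the claim name and requested size are hypothetical, and the cluster needs a StorageClass that can satisfy the claim), you could back the shared volume with a PersistentVolumeClaim instead of an emptyDir:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: date-logs-claim        # hypothetical claim name for this sketch
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # arbitrary size for illustration

The pod would then declare the volume as:

  volumes:
  - name: shared-date-logs
    persistentVolumeClaim:
      claimName: date-logs-claim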

kubectl create -f pod-2.yaml
pod "multi-container-pod" created

Then we check to see if it’s really deployed.

kubectl get pod
NAME                                          READY     STATUS    RESTARTS   AGE
multi-container-pod                           2/2       Running   0          1m

Great! It is running. Now let’s make sure things are working as we expect. We need to make sure that our second container is serving the dates.

We check that both containers are in our pod by running “kubectl describe pod multi-container-pod”. This command is a good way to see what the created object looks like.

Containers:
  container-writing-dates:
    Container ID:  docker://e5274fb901cf276ed5d94b625b36f240e3ca7f1a89cbe74b3c492347e98c7a5b
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sh
    Args:
      -c
      while true; do date >> /var/log/output.txt; sleep 10;done
    State:          Running
      Started:      Fri, 16 Nov 2018 11:31:44 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/log from shared-date-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8dl5j (ro)
  container-serving-dates:
    Container ID:   docker://f9c85f3fe398c3197644fb117dc1681635268903b3bba43aa0a1d151fab6ad22
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 16 Nov 2018 11:31:44 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from shared-date-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8dl5j (ro)

Now we see that both containers are in fact running; next, let’s make sure each container is doing its job.
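As an optional quick check (not strictly part of the walkthrough), we can ask the first container to show the last few lines it has written; the -c flag selects which container in the pod the command runs in:

kubectl exec multi-container-pod -c container-writing-dates -- tail -n 5 /var/log/output.txt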

Connect to the serving container by running “kubectl exec -ti multi-container-pod -c container-serving-dates -- bash”. Now we are inside the container.

Next, we run “curl ‘http://localhost:80/output.txt’” inside the container and it should serve our file.

curl 'http://localhost:80/output.txt'
Fri Nov 16 18:31:44 UTC 2018
Fri Nov 16 18:31:54 UTC 2018
Fri Nov 16 18:32:04 UTC 2018
Fri Nov 16 18:32:14 UTC 2018
Fri Nov 16 18:32:24 UTC 2018
Fri Nov 16 18:32:34 UTC 2018
Fri Nov 16 18:32:44 UTC 2018
Fri Nov 16 18:32:54 UTC 2018

If you don’t have curl installed in the container, run “apt-get update && apt-get install curl”, then run “curl ‘http://localhost:80/output.txt’” again.
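Alternatively, if you would rather not install anything, you can skip the interactive shell and read the served file directly from the second container, since the shared volume is mounted at nginx’s web root:

kubectl exec multi-container-pod -c container-serving-dates -- cat /usr/share/nginx/html/output.txt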

There are other things you can do with pods. For example, you can add init containers: an init container runs set-up work and must run to completion before the main containers start, and once its job is done it exits. A small sketch is shown below.
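Here is a minimal sketch of that idea; the busybox image and the echo/sleep command stand in for whatever set-up work the pod actually needs:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-init            # hypothetical name for this sketch
spec:
  initContainers:
  - name: setup                  # must finish before the web container starts
    image: busybox
    command: ["sh", "-c", "echo preparing... && sleep 5"]
  containers:
  - name: web
    image: nginx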

Pod lifecycle

A pod’s status tells us where the pod is in its lifecycle. It is meant to give you a rough idea, not a certainty, so it is good practice to debug further if a pod does not come up cleanly. There are five phases in a pod’s lifecycle (a quick way to query the phase directly is shown after the list).

  • Pending – The Pod has been accepted by the cluster, but one or more of the Container images has not been created yet.
  • Running – The Pod has been bound to a node and all of the Containers have been created. At least one Container is still running, or is in the process of starting or restarting.
  • Succeeded – All Containers in the Pod have terminated in success and will not be restarted.
  • Failed – All Containers in the Pod have terminated, and at least one Container has terminated in failure. The Container exited with non-zero status.
  • Unknown – For some reason the state of the Pod could not be obtained.
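To see just the phase for a pod from the command line, you can use a JSONPath output expression, for example:

kubectl get pod multi-container-pod -o jsonpath='{.status.phase}'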

 

Last updated: 12/11/2018

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


About the author

Toye Idowu

Olatoye is a Certified Kubernetes Administrator and experienced DevOps/Platform engineering Consultant with a demonstrated history of working in the Information Technology and Services industry.