Kubernetes - Units of concern

Kubernetes (K8s) is container orchestration software. This post covers the initial learnings I have gathered on K8s: the units that K8s organises resources into, and the deployment strategies K8s makes available.


Kubernetes’ smallest unit of concern is the pod. A pod, perhaps curiously, can comprise one or more containers.

Why would you place several containers into one pod? Because they absolutely depend on one another, they cannot exist without one another, and they must be in the same physical and logical space as one another.

A Pod has:
- An IP address
- A name
- A port range (note: all containers within a single pod share that range)

Because a Pod’s IP address is only known after the Pod has been provisioned and started, and the IP address can change over time, something is needed to let any other pod know what that pod is, and how to contact it.


That is where Services come in. A service name is bound to an IP and Port, and a service normally refers to a set of pods.
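As a sketch, a Service selecting the pods behind it might look like this (the app: funky-cool-name label and the port numbers are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: funky-cool-name    # the stable name other pods use to reach these pods
spec:
  selector:
    app: funky-cool-name   # assumed label; the service targets any pod carrying it
  ports:
    - port: 80             # port the service exposes
      targetPort: 8080     # assumed port the containers listen on

Other pods can then contact the matching pods via the service name, regardless of individual pod IPs.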


Labels allow you to create logical groupings of pods, and give application identity to them. Think Frontend, and Backend pods on one axis, whilst Customer and Contact pods are on another.
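Labels live under a pod’s metadata; as a sketch, a hypothetical Customer frontend pod could be labelled on both axes:

apiVersion: v1
kind: Pod
metadata:
  name: customer-frontend    # assumed names, for illustration
  labels:
    tier: frontend           # first axis: Frontend vs Backend
    app: customer            # second axis: Customer vs Contact
spec:
  containers:
    - name: customer-frontend
      image: project/customer-frontend:1.0

Such groupings can then be selected, e.g. kubectl get pods -l tier=frontend.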


Annotations, by contrast, aren’t intended for querying; they’re meant for non-searchable metadata: things like build IDs, release IDs, Git branch names, and so on.
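Annotations sit alongside labels in a pod’s metadata; a sketch, with assumed keys and values:

apiVersion: v1
kind: Pod
metadata:
  name: funky-cool-name
  annotations:
    build-id: "1042"            # assumed values, for illustration only
    git-branch: "release/1.0"
spec:
  containers:
    - name: funky-cool-name
      image: project/funky-cool-name:1.0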


Namespaces allow the division of a cluster into a logical pool of resources, providing scope for Kubernetes resources and providing a mechanism to apply policies to subsections of the cluster.

The most common use-case is to divide your cluster up into Dev, Staging/UAT, and Production.

Keep in mind that namespaces provide scope, but not isolation.

Resource Quotas, though, are per namespace.
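A ResourceQuota is itself scoped to one namespace; a sketch, assuming a dev namespace and arbitrary caps:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev          # the quota applies only within this namespace
spec:
  hard:
    pods: "10"            # assumed caps, for illustration
    requests.cpu: "4"
    requests.memory: 8Gi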

Predictable demands

Every container has a unique resource consumption profile. How much CPU, memory, or other resources the container will need has to be known ahead of time.

In order to communicate this, Kubernetes has Resource Profiles.

Kubernetes offers the volume as Pod-level storage that survives container restarts: when a container is stopped, for whatever reason, the data held on the volume persists, and can be shared with the other containers in the pod.

When the pod is removed, or moved from that node, the content held in that storage is lost.

apiVersion: v1
kind: Pod
metadata:
  name: funky-cool-name
spec:
  containers:
    - image: project/funky-cool-name:1.0
      name: funky-cool-name
      volumeMounts:
        - mountPath: "/logs"
          name: log-volume
  volumes:
    - name: log-volume
      persistentVolumeClaim:
        claimName: funky-cool-name-log

Volumes are an example of a runtime dependency that affects what kind of infrastructure a pod can run on, and whether a pod can be scheduled at all.

Container ports, via the hostPort field, are similar; after all, only one container can be assigned a given port on any one node.

Note: hostPort reserves the port on each node in a cluster.

Some of the dependencies that are stipulated can cause scheduling issues: if two pods specify the same hostPort, then two different nodes that satisfy the pods’ other requirements need to be found.
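A hostPort is declared on the container’s port entry; a sketch, with assumed port numbers:

apiVersion: v1
kind: Pod
metadata:
  name: funky-cool-name
spec:
  containers:
    - name: funky-cool-name
      image: project/funky-cool-name:1.0
      ports:
        - containerPort: 8080
          hostPort: 8080    # binds port 8080 on the node; only one pod per node can claim it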

Config Maps

Basically a set of key-value pairs that can be consumed as environment variables, or mounted as files (e.g. YAML), inside a container.

Similar to config maps are Secrets, which are intended for sensitive data such as passwords and tokens.
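A sketch of a ConfigMap and the two ways a pod might pull it in (the names and values are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: funky-cool-config
data:
  LOG_LEVEL: debug     # consumed as an environment variable
  settings.yaml: |     # or mounted as a file via a configMap volume
    feature:
      enabled: true

On the pod side, envFrom with a configMapRef named funky-cool-config would inject the keys as environment variables; a Secret is consumed the same way via secretRef.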

Resource Profiles

Resources can be:
- Compressible: can be throttled, e.g. CPU or network
- Incompressible: cannot be throttled, e.g. memory

If your container consumes too much of a compressible resource, that container will be throttled; if it exceeds an incompressible limit, it will be killed.

The Kubernetes configuration has two fields: requests, which specifies the minimum amount of a given resource that the container requires, and limits, which specifies the maximum.

apiVersion: v1
kind: Pod
metadata:
  name: funky-cool-name
spec:
  containers:
    - image: project/funky-cool-name:1.0
      name: funky-cool-name
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
        limits:
          cpu: 200m
          memory: 100Mi

Quality of Service

There are three levels of service that Kubernetes undertakes to provide:
- Best Effort: the pod has no requests or limits. This is the lowest priority, and as such pods in this group will be the first to be killed.
- Burstable: the pod has requests and limits defined, but the minimums are not equal to the corresponding maximums. These pods will be killed only if no Best Effort pods exist.
- Guaranteed: the pod has requests and limits defined, and the minimums are equal to the corresponding maximums. These pods are the last to be killed.
