Probably the most confusing part for new Kubernetes users is that it has little to no idea about their favourite containerisation technology. That is, Kubernetes does not care how a Docker container is built, where the repository for that container lives, or what incantation of commands is required to build it. Yet the whole idea of Kubernetes is intrinsically linked to containerisation.
To get a real view of what's going on we need to step back. If you are used to developing with Docker on your local machine, then you will understand that `docker build` creates an image that can be seen with `docker images`. A Docker user would then create a running instance of that image, which can be seen with `docker ps`, or `docker ps -a` if that instance is stopped. What's happened is that the image built from a `Dockerfile` has been created and stored by a daemon, e.g. `dockerd` or `containerd`, which then manages the running instances.
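The local workflow just described can be sketched as a short terminal session (the image tag `myapp:1.0` and container name are placeholders, and a running Docker daemon is assumed):

```shell
# Build an image from the Dockerfile in the current directory.
docker build -t myapp:1.0 .

# List locally stored images; myapp:1.0 should appear in the output.
docker images

# Create and start a container (a running instance) from that image.
docker run --name myapp-instance -d myapp:1.0

# List running containers...
docker ps

# ...or all containers, including stopped ones.
docker ps -a
```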
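For readers who have not written one, the `Dockerfile` behind that build can be very small; a minimal, purely illustrative example (the base image and command are assumptions, not anything from this article) might be:

```dockerfile
# Illustrative only: a tiny image that just prints a message when run.
FROM alpine:3.19
CMD ["echo", "hello from a container"]
```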
This is the point where Kubernetes lives: the developer's container registry is as far as Kubernetes can see. A developer creates a container image and pushes it to a registry that Kubernetes knows about; Kubernetes then pulls from that registry as it manages instances of the image in a virtual data centre.
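That registry-centric view shows up directly in a Kubernetes manifest: all Kubernetes is told about a container is an image reference it can pull from a registry, never how the image was built. A minimal sketch (the names and the registry host `registry.example.com` are placeholders):

```yaml
# A minimal Pod manifest: Kubernetes sees only the image reference.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      # Pulled from a registry; "registry.example.com" is a placeholder.
      image: registry.example.com/myteam/myapp:1.0
```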
For local use, the typical registry might run inside the cluster itself (minikube, for example, ships a registry addon), but in the "cloud" typical container registries might be AWS's ECR, Google Cloud's container registry found at gcr.io, or Azure's equivalent, ACR.
A CI/CD system is usually configured to detect changes pushed to a development repository, which then triggers a series of actions, such as running unit tests and end-to-end tests, building a container image, and pushing that image to one of the aforementioned registries.
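As one hedged example of such a pipeline, a GitHub Actions workflow along these lines could express it (the test command, image name, and registry host are placeholders, and registry credentials are assumed to be configured separately):

```yaml
# Sketch of a CI pipeline: test, build, then push to a registry.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests            # placeholder test command
        run: make test
      - name: Build the container image
        run: docker build -t registry.example.com/myteam/myapp:${{ github.sha }} .
      - name: Push to the registry      # assumes credentials already set up
        run: docker push registry.example.com/myteam/myapp:${{ github.sha }}
```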
Once that concept is out of the way, the next step is to look at how Kubernetes is configured and how it views everything, from the bare metal to the containers it manages.