Ubuntu Server Struggles With Kubernetes Post-Docker Installs – The New Stack


I have a bone to pick with someone. Honestly, I don’t know where to direct this anger, but there is now a big problem with using Ubuntu Server as a base for Kubernetes.

Over the past few days, I’ve tried, many times, to get Kubernetes working on Ubuntu Server 22.04, and no matter how many attempts I make, it fails. Mind you, I can install Kubernetes on Ubuntu Server without a problem, as I have done so many times before. The only difference is that now, instead of using Docker, I have to use a runtime like containerd. However, when trying to initialize the cluster, I encounter the same error every time.

It doesn’t matter whether I start from a fresh install or run sudo kubeadm reset first; the initialization times out and never completes. I’ve tried three times (each on a fresh Ubuntu Server 22.04 instance) and it never succeeds.

The issue sent me down a rabbit hole that suggested the latest version of containerd had problems installing on Ubuntu Server, but even after attempting a new deployment with an older version of containerd, I found myself with the same problem.
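For what it’s worth, the most commonly cited culprit for this particular timeout (an assumption on my part, not a confirmed fix for my case) is containerd’s cgroup driver: kubeadm on Ubuntu 22.04 expects containerd to use the systemd cgroup driver, and the default config ships with it disabled. A minimal sketch of the workaround, demonstrated against a scratch copy of the config so it runs without root:

```shell
# Commonly cited workaround sketch (assumption, not a confirmed fix):
# flip containerd's SystemdCgroup option to true. Demoed on a scratch
# copy; on a real host the file is /etc/containerd/config.toml, the
# commands need sudo, and containerd must be restarted afterward with
# `sudo systemctl restart containerd`.
demo=$(mktemp -d)
# Use containerd's generated default config if the binary is present;
# otherwise fall back to a stand-in line so the demo still runs.
containerd config default > "$demo/config.toml" 2>/dev/null \
  || printf 'SystemdCgroup = false\n' > "$demo/config.toml"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$demo/config.toml"
grep 'SystemdCgroup' "$demo/config.toml"
```

On a real controller you would then retry sudo kubeadm reset followed by sudo kubeadm init.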

Suffice it to say, I came away from this little experience frustrated. A Kubernetes cluster on Ubuntu Server 22.04 should be a no-brainer. It’s not. I can run a single instance just fine and even deploy an application with it, but the second I want to go the cluster route, things go seriously wrong.

To give some detail

What’s interesting about this error is that the kubelet is running. However, when running:

sudo systemctl status kubelet

I see errors like this:

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.613305  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.714099  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.814923  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

The next rabbit hole concerned the file ~/.kube/config. Even after rechecking the permissions on that file, I had problems. I fixed what I could and restarted the kubelet with:
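For reference, the standard post-init setup from the kubeadm docs copies the admin config into place and hands ownership to your user. A sketch of that sequence, demoed against scratch paths so it runs without root (on a real controller the source is /etc/kubernetes/admin.conf and the copy needs sudo):

```shell
# The usual kubeconfig setup after `kubeadm init`, demoed with scratch
# paths. On a real controller, substitute /etc/kubernetes/admin.conf as
# the source and $HOME/.kube/config as the destination, and run the cp
# with sudo followed by `sudo chown $(id -u):$(id -g) $HOME/.kube/config`.
demo_home=$(mktemp -d)
printf 'apiVersion: v1\nkind: Config\n' > "$demo_home/admin.conf"  # stand-in for admin.conf
mkdir -p "$demo_home/.kube"
cp "$demo_home/admin.conf" "$demo_home/.kube/config"
chmod 600 "$demo_home/.kube/config"  # kubectl warns if the file is group/world readable
```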

sudo systemctl restart kubelet

Guess what? Now kubelet does not start.

Let the hair pulling begin!

I gave the system a quick reboot to see if that would clear out anything unpleasant. Once the machine restarted, I ran the initialization command again to gather more troubleshooting information:

sudo kubeadm init

Guess what? New errors, such as:

error execution phase wait-control-plane

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1    cmd/kubeadm/app/cmd/phases/workflow/runner.go:235

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll    cmd/kubeadm/app/cmd/phases/workflow/runner.go:421

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run    cmd/kubeadm/app/cmd/phases/workflow/runner.go:207

k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1    cmd/kubeadm/app/cmd/init.go:153

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute    vendor/github.com/spf13/cobra/command.go:856

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC    vendor/github.com/spf13/cobra/command.go:974

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute    vendor/github.com/spf13/cobra/command.go:902

k8s.io/kubernetes/cmd/kubeadm/app.Run    cmd/kubeadm/app/kubeadm.go:50

main.main    cmd/kubeadm/kubeadm.go:25

runtime.main    /usr/local/go/src/runtime/proc.go:250

runtime.goexit    /usr/local/go/src/runtime/asm_amd64.s:1571

Obviously, that’s zero help. And no matter how much time I spend with my friend Google, I can’t find an answer to what ails me.

Back to the drawing board. Another installation and the same results. And, of course, the official Kubernetes documentation is absolutely no help.

So what conclusion is there to draw, and what can you do (when Ubuntu Server is your platform of choice)?

It’s all about Docker

Once upon a time, deploying a Kubernetes cluster on Ubuntu Server was incredibly simple and rarely (if ever) failed. What’s the difference now?

In a nutshell… Docker.

The second Kubernetes removed Docker support (with the dockershim deprecation), deploying a cluster on Ubuntu Server became an absolute nightmare. With that in mind, what can you do? Well, you can still install microk8s via snap with the command:

sudo snap install microk8s --classic --channel=1.24

Of course, as everyone knows, a snap install can take a while, and snap packages aren’t always as responsive as a standard install. However… when in Rome.

Once the installation is complete, add your user to the necessary group with:

sudo usermod -a -G microk8s $USER

Modify the permissions of the .kube directory with:

sudo chown -f -R $USER ~/.kube

Log out and log back in, then run the command:

microk8s status --wait-ready

Big bada boom, everything works. I can deploy an NGINX application with:

microk8s kubectl create deployment nginx --image=nginx

But it is not a cluster. Ah, but microk8s has you covered. On the controller node, issue the command:

microk8s add-node

You will receive a join command to run on any other machine where you have microk8s installed that looks like this:

microk8s join

Ah, but it can also fail you with something like:

Contacting cluster at

Connection failed. Invalid token (500).

I rebooted both machines and ran the add-node command again with the --ignore-check option. No dice.

Make sure you set hostnames for both machines, that those hostnames are mapped in /etc/hosts, and double-check that the time is correct on both machines. I did all of that. No luck.
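A quick sketch of those pre-join checks, for anyone following along. The hostnames and addresses below are made up for illustration (substitute your own), and the demo works against a scratch file since editing the real /etc/hosts requires sudo:

```shell
# Sanity-check sketch for the pre-join requirements. The names and IPs
# here are hypothetical; on a real pair of machines you would edit
# /etc/hosts (with sudo) and set names with `hostnamectl set-hostname`.
hosts=$(mktemp)
printf '192.168.1.10 kubecontroller\n192.168.1.11 kubenode1\n' > "$hosts"
grep -w kubecontroller "$hosts"   # controller name resolves
grep -w kubenode1 "$hosts"        # worker name resolves
date -u                           # eyeball that both machines agree on the time
```

On real hardware, timedatectl is the easier way to confirm NTP synchronization on each node.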

However, after a second restart of both machines, for some reason the node was able to join the controller. The process took much longer than it should have, but I chalk that up to using the snap version of the service.

After running microk8s kubectl get nodes, I can now see both nodes in my cluster.


Why make it difficult?

To those involved: it shouldn’t be this hard. Seriously. Kubernetes is already a difficult platform to use. Making it equally difficult to even get started has me longing for the simplicity of Docker Swarm.

Sure, I could migrate to an RHEL-based server for my Kubernetes deployments, but Ubuntu Server has been my go-to for years. Don’t get me wrong, I don’t mind microk8s, but out of the box it won’t work with the likes of Portainer (which is my go-to for this stuff). To make it work, I need to enable the Portainer addon with:

microk8s enable community

microk8s enable portainer

Wonderful… except not. What’s the problem now? After enabling Portainer, you are prompted to access it through the NodePort, the address of which you can get using the command:

export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[1].nodePort}" services portainer)

But wait… kubectl is not installed because I am using microk8s! I have to modify the command like this:

export NODE_PORT=$(microk8s kubectl get --namespace portainer -o jsonpath="{.spec.ports[1].nodePort}" services portainer)

Then you need to run the commands (again, modifying the following to include microk8s):

export NODE_IP=$(microk8s kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")

echo https://$NODE_IP:$NODE_PORT

The final command will print the address used to access Portainer.

Wouldn’t it be great if it worked? It didn’t.

Guess what? This was all from the official documentation (minus the microk8s portion of the commands, which was conveniently omitted).

To those responsible for these technologies: it should not be this difficult. I realize there may be some bad blood between Kubernetes and Canonical, but when it spills over into userspace, the real frustration lands on the heads of admins and developers.

Please, please, please fix these issues so that those who prefer Ubuntu Server can get their Kubernetes with the same ease as before.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Image by Monique Stokman from Pixabay.

