Having Docker on the Raspberry Pi opens up tons of possibilities for hobbyists and home devices. It also triggered my interest because Kubernetes, one of the Docker orchestrators, can run standalone on a single node using Docker containers. I wrote a post several months ago about doing exactly that with docker-compose. So last weekend I decided to give it a try, running Kubernetes on a Pi using the Hypriot image, which ships with the Docker engine.
Getting etcd to run

The first issue is that Kubernetes currently uses etcd, and you need to run it on ARM. I decided to get the etcd source directly on the Pi and update the Dockerfile to build it there. etcd uses a Golang ONBUILD image, which was causing me grief, so I copied the content of the ONBUILD image and created a new Dockerfile based on hypriot/rpi-golang to build it directly. You can see the Dockerfile. With that you have a Docker container running etcd on ARM.
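The real Dockerfile is in the repository linked at the end; as a rough sketch of the approach, inlining the usual golang ONBUILD steps on top of the ARM base looks something like this (the paths and build step here are illustrative assumptions, not the exact file):

```dockerfile
# Sketch only: replace the golang ONBUILD magic with explicit steps
# on top of the ARM Go base image from Hypriot.
FROM hypriot/rpi-golang

# Copy the etcd source into the image and build it there (paths illustrative).
COPY . /go/src/github.com/coreos/etcd
WORKDIR /go/src/github.com/coreos/etcd
RUN ./build

# Client and peer ports.
EXPOSE 2379 2380
ENTRYPOINT ["/go/src/github.com/coreos/etcd/bin/etcd"]
```

The point is simply that every step runs on the Pi itself, so the resulting binary and image are ARM all the way down.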
Getting the Hyperkube to run on ARM

Now I needed the hyperkube binary to run on ARM. Hyperkube is a single binary that can start all the Kubernetes components. Thankfully there are some binaries already available for ARM. That was handy, because I had struggled to compile Kubernetes directly on the Pi.
With that hyperkube binary on hand, I built an image based on resin/rpi-raspbian:wheezy. Quite straightforward:

FROM resin/rpi-raspbian:wheezy

RUN apt-get update
RUN apt-get -yy -q install iptables ca-certificates

COPY hyperkube /hyperkube
The Kubelet systemd unit

The Kubernetes agent that runs on every node in a cluster is called the Kubelet. The Kubelet is in charge of making sure that all the containers supposed to be running on a node actually run. It can also use a manifest to start some specific containers at startup. There is a good post from Kelsey Hightower about it. Since the Hypriot image uses systemd, I took the systemd unit that creates a Kubelet service directly from Kelsey's post:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kubelet \
  --api-servers=http://127.0.0.1:8080 \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

The kubelet binary is downloaded directly from the same location as hyperkube. The manifest is a Kubernetes Pod definition that starts all the containers needed to get a Kubernetes controller running: etcd, the API server, the scheduler, the controller and the service proxy, all using the hyperkube image built above.
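I won't reproduce the exact manifest here, but a sketch of its shape looks like the following. The container names and flag values are illustrative assumptions on my part; the images, however, are the ones built locally above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller
spec:
  # Share the host network so the components can reach each other on localhost.
  hostNetwork: true
  containers:
  - name: etcd
    image: etcd
    command: ["/etcd", "--data-dir=/var/lib/etcd"]
  - name: apiserver
    image: hyperkube
    command: ["/hyperkube", "apiserver",
              "--etcd-servers=http://127.0.0.1:2379",
              "--service-cluster-ip-range=10.0.0.0/24",
              "--insecure-bind-address=0.0.0.0", "--v=2"]
  - name: controller-manager
    image: hyperkube
    command: ["/hyperkube", "controller-manager", "--master=127.0.0.1:8080", "--v=2"]
  - name: scheduler
    image: hyperkube
    command: ["/hyperkube", "scheduler", "--master=127.0.0.1:8080", "--v=2"]
  - name: proxy
    image: hyperkube
    command: ["/hyperkube", "proxy", "--master=127.0.0.1:8080", "--v=2"]
```

Dropped into /etc/kubernetes/manifests, the Kubelet starts this Pod at boot, which is why kubectl later shows a single kube-controller pod with five containers.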
Now the dirty hack

Kubernetes does something interesting: all containers in a Pod actually share the same IP address. This is done by running a fake container that does nothing at all; the other containers in the Pod just share its network namespace. This is called the pause container. I did not find a way to specify a different image for the pause container in Kubernetes; it seems hard coded to gcr.io/google_containers/pause:0.8.0, which of course is built to run on x86_64.
So the dirty trick consisted of taking the pause Golang code from the Kubernetes source, compiling it on the Pi using hypriot/rpi-golang, sticking the binary in a SCRATCH image, and tagging it locally as gcr.io/google_containers/pause:0.8.0 to avoid the download of the real image that only runs on x86_64. Yeah... right... I told you it was dirty, but that was the quickest way I could think of.
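As a sketch of that trick, packaging the locally compiled ARM pause binary is about the smallest Dockerfile imaginable (these exact lines are my assumption; the part that matters is the tag you build it with):

```dockerfile
# Sketch only: an empty image containing nothing but the ARM pause binary.
FROM scratch
ADD pause /pause
ENTRYPOINT ["/pause"]
```

Building it as docker build -t gcr.io/google_containers/pause:0.8.0 . makes the local tag shadow the real image, so the Kubelet never tries to pull the x86_64 one.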
Putting it all together

Now that you have all the images ready directly on the Pi, plus a Kubelet service, you can start it. The containers will be created and you will have a single-node Kubernetes cluster on the Pi. All that is left is to use the kubectl CLI to talk to it. You can download an ARM version of kubectl from the official Kubernetes releases.
HypriotOS: root@black-pearl in ~
$ docker images
REPOSITORY                       TAG
hyperkube                        latest
gcr.io/google_containers/pause   0.8.0
etcd                             latest
resin/rpi-raspbian               wheezy
hypriot/rpi-golang               latest

HypriotOS: root@black-pearl in ~
$ ./kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
kube-controller-black-pearl   5/5       Running   5          5m

HypriotOS: root@black-pearl in ~
$ ./kubectl get nodes
NAME          LABELS                               STATUS
black-pearl   kubernetes.io/hostname=black-pearl   Ready
Get it

Everything is on GitHub at https://github.com/skippbox/k8s4pi, including a horrible bash script that does the entire build :)