Install Minikube on Ubuntu Server 17.10

I have some experience with Docker and containers, but I had never played with Kubernetes before. I started to explore Kubernetes recently, as I may need a container orchestration solution in upcoming projects. Kubernetes is supported by Azure AKS, and even Docker has announced support for it. It looks like it is going to be the major container orchestration solution in the market for the coming years.

I started by deploying a local Kubernetes cluster with Minikube on an Ubuntu 17.10 server on Azure. Kubernetes has a document on its site about installing Minikube, but it is very brief. So in this post, I will try to document the step-by-step procedure, both for my own future reference and for others who are new to Kubernetes.

Install a Hypervisor

To install Minikube, the first step is to install a hypervisor on the server. On Linux, both VirtualBox and KVM are supported hypervisors. I chose to install KVM and followed the guidance here. The steps are as follows.

  • Make sure VT-x or AMD-V virtualization is enabled. In Azure, this requires a VM size that supports nested virtualization. To double-check, run the command egrep -c '(vmx|svm)' /proc/cpuinfo; if the output is 1 or greater, virtualization is enabled.
  • Install the KVM packages with the following command:
sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
  • Use the following command to add the current user to the libvirt group, then log out and log back in for the change to take effect. Note: in the guidance the group name is libvirtd, but on Ubuntu 17.10 it has been changed to libvirt.
sudo adduser `id -un` libvirt
  • Test if your install has been successful with the following command:
virsh list --all
  • Install virt-manager so that we have a UI to manage VMs
sudo apt-get install virt-manager
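The check in the first step can also be scripted. Note that egrep -c counts matching lines, so on a multi-core machine the output is usually greater than 1; any value of 1 or more means the virtualization flags are present. A minimal sketch:

```shell
#!/bin/sh
# Count the cpuinfo lines that advertise hardware virtualization:
# vmx = Intel VT-x, svm = AMD-V. grep prints 0 and exits non-zero
# when nothing matches, hence the || true.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$count" -ge 1 ]; then
    echo "virtualization enabled"
else
    echo "virtualization not available"
fi
```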

Install kubectl

Follow the instructions here to install kubectl. The commands are as follows:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Install Minikube

Follow the instructions in the Minikube release notes to install it. I used the following command:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

When you finish this step, the installation of Minikube is, according to the official document, complete. But before you can use it, there are several other components which need to be installed as well.

Install Docker, Docker-Machine, and KVM driver

Minikube can run natively on the Ubuntu server without a virtual machine. To do so, Docker needs to be installed on the server. Docker CE has its own installation procedure, and Docker provides a document for it.

Docker Machine can be installed with the following commands:

curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine && \
sudo install /tmp/docker-machine /usr/local/bin/docker-machine

Finally, we need to install a VM driver for Docker Machine. The Kubernetes team ships a KVM2 driver which is supposed to replace the KVM driver created by others. However, I failed to make Minikube work with the KVM2 driver. There is a bug report for this issue, and hopefully the Kubernetes team will fix it soon.

So I installed the KVM driver with the following command:

curl -LO https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-ubuntu16.04
sudo cp docker-machine-driver-kvm-ubuntu16.04 /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Test if Minikube Works

With all of the above steps completed, we can now test Minikube:

minikube start --vm-driver kvm

It will create a VM named minikube in KVM and configure a local Kubernetes cluster on it. With kubectl, you should be able to see the cluster info and node info:

kubectl cluster-info
kubectl get nodes

With that, you can start to explore Kubernetes.

Running Linux Containers on Windows Server 2016

I never thought running Linux containers on Windows Server was a big deal. One reason I run Docker for Windows on my Windows 10 laptop is to run some Linux-based containers. I thought I just needed to install Docker for Windows on a Windows Server 2016 server with the Containers feature enabled, and then I would be able to run both Linux and Windows containers. I didn't realize that was not the case until I tried it yesterday.

It turns out that Linux Containers on Windows (lcow) is a preview feature of both Windows Server, version 1709 and Docker EE. It won't work on versions of Windows Server 2016 older than 1709. As a side effect of digging into this topic, I also learned about the Windows Server Semi-Annual Channel, which is an interesting change.

So here is a summary of how to enable lcow on Windows Server, version 1709.

  1. First of all, you need to get Windows Server, version 1709 up and running. You can get the installation media for Windows Server, version 1709 from here. As I use Azure, I provisioned a server based on the Windows Server, version 1709 with Containers image. Version 1709 is only offered as a Server Core installation; it doesn't have the desktop environment.

  2. Once you have the server up and running, you will have to enable the Hyper-V and Containers features on it and install the Docker EE preview. This can be done with the following PowerShell script.

    As I use the Azure image, the Containers feature and Docker EE are already set up on it, and the Docker daemon has been configured as a Windows service, so I didn't have to run the above script.

  3. Now you can follow the instructions here to configure lcow. Specifically, I used the following script to configure it. I also updated the configuration file at C:\ProgramData\Docker\config\daemon.json to enable the experimental LinuxKit feature when the Docker service starts.

  4. Once you have finished all of the above configuration, lcow is enabled on Windows Server, version 1709. To test it, simply run

docker run --platform linux --rm -ti busybox sh

That is it. If you want, you can also try running Ubuntu containers by following the instructions here.

Dockerfile in Practice

I have been playing with OpenCV recently and, along the way, built a Docker image for OpenCV 3.2.0. The image is based on Ubuntu 16.04, with OpenCV 3.2.0 built from source code and the Python 3 bindings built in as well. It is well suited as a base image for developing and testing server-side programs that use OpenCV. Since it includes almost all of the OpenCV components, the build process is fairly time-consuming and the image is fairly large, so I pushed it to Docker Hub. If you need it, you can pull it down with

docker pull chunliu/docker-opencv

If you want to trim the components, or build a different version of OpenCV, you can modify the Dockerfile and rebuild.

Actually, I hadn't used Docker much before: I had only installed it in a VM, worked through the official Docker tutorial, and skimmed a few of the official docs, nothing more. I roughly understood what a Dockerfile is, but I had never written a complete, complex one. As it turns out, you don't know how hard something is until you have done it, and writing this Dockerfile had quite a few pitfalls.

First, it is hard to write a good Dockerfile with just a plain text editor; a helper tool makes it easier. I used VS Code with Docker support, which provides keyword highlighting and IntelliSense, but nothing more. It would be even better if a tool could do syntax checking, for example checking whether a line-continuation character is missing at the end of a line. In my first few attempts, I only discovered that a line was missing a continuation character after the build failed.
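The missing line-continuation problem is the same one plain shell has, since RUN commands are shell commands. A quick illustration (the package names are just examples):

```shell
#!/bin/sh
# With the trailing backslash, the shell joins the two lines into a
# single command, exactly like a continued RUN line in a Dockerfile:
joined=$(echo apt-get install -y \
    build-essential cmake)
echo "$joined"
# Drop the backslash and "build-essential cmake" becomes a separate
# command of its own, which is the kind of failure only the build reveals.
```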

Also, I haven't found a good way to debug and test a Dockerfile. At first, I would modify the Dockerfile, run the build, and look for the cause when it failed. But the build is time-consuming, so this was not very efficient. Later, I started to run the commands from the Dockerfile one by one inside a container, making sure each command worked before running the build. The problem with this approach is that even if all the commands succeed in a single bash session, that doesn't guarantee the build will succeed once they are organized into RUN instructions in the Dockerfile.

This comes down to how Docker executes RUN. The Docker documentation says each RUN creates a new layer. At first I didn't quite understand what a layer meant, but after working through it I found that a layer is essentially an intermediate container: the commands after RUN are executed in that container, and when they finish, the container is committed to the image and then deleted. The next RUN starts a new container based on the newly committed image.
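You can see the same behavior with plain shells, no Docker needed: each RUN is like a fresh sh -c invocation, so state such as the working directory does not carry over between them. A minimal sketch:

```shell
#!/bin/sh
# Two separate shells, like two separate RUN instructions:
sh -c 'cd /tmp'                  # the cd happens in a child shell and is lost
first=$(sh -c 'pwd')             # a new shell starts in the original directory

# One shell, like a single RUN with the commands chained by &&:
second=$(sh -c 'cd /tmp && pwd')

echo "separate shells end in: $first"
echo "single shell ends in:   $second"
```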

So if two pieces of code need to run in the same bash session, they have to be in the same RUN instruction. For example, when building OpenCV, make is invoked like this:

mkdir build
cd build
cmake ......
make ......

If cd, cmake, and make are split into different RUN instructions, cmake and make will fail because the working directory is wrong. In fact, the working directory of a RUN instruction is set by WORKDIR: each RUN starts in the directory set by the nearest WORKDIR above it. So if you really want to split the code above into different RUN instructions, you can insert WORKDIR between them to set the path, but having the path jump around like that is rather confusing.
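Putting this together, the make steps above can stay in one shell session by chaining them in a single RUN with && and line continuations. A sketch (the cmake and make arguments are elided here just as in the snippet above):

```dockerfile
# One RUN = one intermediate container = one layer,
# so the cd persists for the whole chain.
RUN mkdir build && \
    cd build && \
    cmake ...... && \
    make ......
```

As a bonus, the && stops the build at the first failing step instead of silently carrying on.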

Reading Docker's two official documents carefully helps a lot with avoiding these pitfalls in Dockerfiles.
