Install Minikube on Ubuntu Server 17.10

I have some experience with Docker and containers, but I had never played with Kubernetes before. I started to explore Kubernetes recently, as I may need a container orchestration solution in upcoming projects. Kubernetes is supported by Azure AKS, and even Docker has announced support for it. It looks like it is going to be the major container orchestration solution in the market for the coming years.

I started by deploying a local Kubernetes cluster with Minikube on an Ubuntu 17.10 server on Azure. Kubernetes has a document on its site about installing Minikube, but it is very brief. So in this post, I will document the step-by-step procedure, both for my own future reference and for others who are new to Kubernetes.

Install a Hypervisor

To install Minikube, the first step is to install a hypervisor on the server. On Linux, both VirtualBox and KVM are supported hypervisors. I chose to install KVM and followed the guidance here. The following are the steps.

  • Make sure VT-x or AMD-V virtualization is enabled. In Azure, it is enabled on the VM sizes that support nested virtualization (such as the Dv3 and Ev3 series). To double-check, run the command egrep -c '(vmx|svm)' /proc/cpuinfo; if the output is 1 or greater, virtualization is enabled.
  • Install the KVM packages with the following command:
sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
  • Use the following command to add the current user to the libvirt group, and then log out and log back in to make it take effect. Note: in the guidance the group name is libvirtd, but on Ubuntu 17.10 the name has changed to libvirt.
sudo adduser `id -un` libvirt
  • Test if your install has been successful with the following command:
virsh list --all
  • Install virt-manager so that we have a UI to manage VMs:
sudo apt-get install virt-manager
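The virtualization check in the first step can also be wrapped in a small script with an explicit pass/fail message. This is just a sketch restating the egrep check from above with a friendlier output:

```shell
#!/bin/sh
# Count the CPU flags that indicate VT-x (vmx) or AMD-V (svm) support.
# grep -c prints 0 and exits non-zero when there is no match, hence || true.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)

if [ "$count" -gt 0 ]; then
  echo "hardware virtualization: enabled on $count logical CPUs"
else
  echo "hardware virtualization: not available"
fi
```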

Install kubectl

Follow the instructions here to install kubectl. The following are the commands:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
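The download URL in the first command is platform-specific. The sketch below shows how the pieces fit together, following the release-bucket layout from the kubectl install docs; the pinned version string is only an example, since the real command resolves the latest stable version from stable.txt:

```shell
#!/bin/sh
# Build the platform-specific kubectl download URL.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. "linux" or "darwin"
arch=$(uname -m)
case "$arch" in
  x86_64) arch=amd64 ;;                       # map uname naming to release naming
esac

version="v1.9.0"  # example only; normally resolved from stable.txt
echo "https://storage.googleapis.com/kubernetes-release/release/${version}/bin/${os}/${arch}/kubectl"
```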

Install Minikube

Follow the instructions in the release notes of Minikube to install it. I used the following command:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

When you finish this step, according to the official document, the installation of Minikube is complete. But before you can use it, there are several other components that need to be installed as well.

Install Docker, Docker-Machine, and KVM driver

Minikube can run natively on the Ubuntu server without a virtual machine. To do so, Docker needs to be installed on the server. Docker CE has its own installation procedure, and Docker provides a document for it.

Docker Machine can be installed with the following commands (the release version in the URL, v0.13.0, was the current one at the time of writing):

curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine && \
sudo install /tmp/docker-machine /usr/local/bin/docker-machine

Finally, we need to install a VM driver for the docker machine. The Kubernetes team ships a KVM2 driver which is supposed to replace the KVM driver created by others. However, I failed to make Minikube work with the KVM2 driver. There is a bug report for this issue, and hopefully the Kubernetes team will fix it soon.

So I installed the KVM driver with the following command:

curl -LO https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-ubuntu16.04
sudo cp docker-machine-driver-kvm-ubuntu16.04 /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm
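At this point, a quick sanity check can confirm that every binary installed in the previous sections is on the PATH before starting Minikube. A small sketch:

```shell
#!/bin/sh
# Report whether each tool installed in this post is reachable on the PATH.
for tool in kubectl minikube docker-machine docker-machine-driver-kvm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```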

Test if Minikube Works

With all of the above steps completed, we can now test Minikube:

minikube start --vm-driver kvm

It creates a VM named minikube in KVM and configures a local Kubernetes cluster based on it. With kubectl, you should be able to see the cluster info and node info:

kubectl cluster-info
kubectl get nodes

With that, you can start to explore Kubernetes.

Running Linux Containers on Windows Server 2016

I never thought running Linux containers on Windows Server would be a big deal. One reason I run Docker for Windows on my Windows 10 laptop is to run some Linux-based containers. I thought I just needed to install Docker for Windows on a Windows Server 2016 machine with the Containers feature enabled, and then I would be able to run both Linux and Windows containers. I didn't know that is not the case until I tried it yesterday.

It turns out that Linux Containers on Windows (lcow) is a preview feature of both Windows Server, version 1709 and Docker EE. It won't work on versions of Windows Server 2016 older than 1709. As a side benefit of this topic, I also got some ideas about the Windows Server Semi-Annual Channel. An interesting change.

So here is a summary of how to enable lcow on Windows Server, version 1709.

  1. First of all, you need to get a Windows Server, version 1709 up and running. You can get the installation media of Windows Server, version 1709 from here. As I use Azure, I provisioned a server based on the Windows Server, version 1709 with Containers image. Note that version 1709 is only offered as a Server Core installation; it doesn't have the desktop environment.

  2. Once you have the server up and running, you have to enable the Hyper-V and Containers features on it and install the Docker EE preview, which can be done with a short PowerShell script.
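    The script itself is not preserved in this copy of the post. Based on Docker's lcow preview instructions of that time, it would look roughly like the sketch below; the DockerProvider module and the preview package version are my assumptions from that documentation:

```powershell
# Enable the required Windows features (the server reboots after this).
Install-WindowsFeature Containers
Install-WindowsFeature Hyper-V -Restart

# Install the Docker EE preview from the DockerProvider module.
Install-Module DockerProvider -Force
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview
```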

    As I use the Azure image, the Containers feature and Docker EE are already enabled on it, and the Docker daemon is configured as a Windows service, so I didn't have to run the above script.

  3. Now you can follow the instructions here to configure lcow; I used a script based on them. I also updated the configuration file at C:\ProgramData\Docker\config\daemon.json to enable the experimental LinuxKit feature when the Docker service is started.
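    For reference, the relevant change in daemon.json is just the experimental flag. A minimal sketch; keep any other settings your file already has:

```json
{
  "experimental": true
}
```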

  4. Once you finish all the above configuration, lcow is enabled on Windows Server, version 1709. To test it, simply run:

docker run --platform linux --rm -ti busybox sh

That is it. If you want, you can also try running Ubuntu containers by following the instructions here.

Creating API Management instances in Parallel with Automation Runbook

Provisioning an Azure API Management (APIM) service instance is a somewhat time-consuming task. It usually takes 20 to 30 minutes to get an instance created. In most cases that is fine, because you usually don't need to create many APIM instances. For most customers, 2 or 3 instances are enough for their solutions, and provisioning APIM instances isn't day-to-day work.

But recently I have been preparing a lab environment for an APIM-related lab session that I am going to deliver at an event. Given that provisioning an APIM instance takes 20 to 30 minutes, it is impractical to let attendees create the instances during the lab session; I have to provision an APIM instance for each attendee before the session. As there could be more than 40 attendees, I have to do it with a script rather than by clicking around in the Azure portal.

APIM supports creating instances with PowerShell; it doesn't support Azure CLI at the moment. The cmdlet for instance creation is New-AzureRmApiManagement, and as mentioned in the documentation, this is a long-running operation which can take up to 15 minutes. If I simply created a PowerShell script to run these operations sequentially, it would take tens of hours to get all the APIM instances created, which is not acceptable. I have to run the operations in parallel.

I ended up creating a PowerShell Workflow runbook in Azure Automation to do the task. PowerShell Workflow has several ways to support parallel processing, and Azure Automation provides enough computing resources to run all the operations in parallel.

The following code snippet shows the key part of the workflow.
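The snippet is not preserved in this copy of the post; the sketch below reconstructs its key idea from the description that follows: a foreach -parallel loop in which every branch authorizes itself before calling New-AzureRmApiManagement. The workflow name, resource group, location, SKU, and email are illustrative assumptions:

```powershell
workflow Create-ApimInstances
{
    param([string[]]$AttendeeNames)

    foreach -parallel ($name in $AttendeeNames)
    {
        # Each parallel branch runs in its own process, so it must
        # authorize itself against Azure before doing any work.
        $conn = Get-AutomationConnection -Name "AzureRunAsConnection"
        Add-AzureRmAccount -ServicePrincipal `
            -TenantId $conn.TenantId `
            -ApplicationId $conn.ApplicationId `
            -CertificateThumbprint $conn.CertificateThumbprint

        # Long-running creation (up to ~15 minutes per instance).
        New-AzureRmApiManagement -ResourceGroupName "apim-lab" `
            -Name "apim-$name" -Location "West Europe" `
            -Organization "Contoso" -AdminEmail "admin@contoso.com" `
            -Sku Developer
    }
}
```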

The code is quite straightforward. I need to include the Azure authorization code in each parallel operation because, when the operations run in parallel, each one runs in its own process, so each of them needs to be authorized before it can access the Azure resources.

You can get the complete code from here. To run this workflow runbook in Azure Automation, the AzureRM.ApiManagement module needs to be imported into Azure Automation. That's all.

Why the Surface Book Is My Idea of the Ideal PC

When I traveled to the US last month, I stopped by a Microsoft Store and bought a Surface Book 2. After a few weeks of use, I think it is the best Windows 10 PC I have ever used. Its hybrid of laptop and tablet is, to me, the ideal form of a PC. I don't know what Microsoft is thinking; the Surface Book line is only sold in a handful of countries. Were it not limited by Microsoft's market strategy and the rather high price, it ought to be a best seller.

So what makes the Surface Book good?

First, it is powerful enough, yet light enough. My previous machine was a ThinkPad W540 with an i7, 32GB of RAM, and dual graphics. Very powerful, but too heavy: its charger alone weighs more than some other people's entire laptops, so it is not something to carry around. My Surface Book is the 13-inch model, with an i7, 16GB of RAM, and the base with the discrete GPU. Running Visual Studio, Docker, and the like is no problem at all, and Civilization VI actually feels better than on the W540. The point is that, with comparable performance, the Surface Book weighs less than half of the W540, only a little more than the W540's charger alone, which is definitely kinder to my shoulders.

Second, the Surface Book's base is more stable, and the keyboard feels better. I also used to have a Surface Pro 3; back then I took the Surface Pro out and used the W540 at home. The Surface Pro's mediocre keyboard was the lesser problem. Once, in a customer's server room with no table next to the servers, I sat on a high stool with the Surface Pro open on my lap. The Surface Pro is notoriously top-heavy, and at some careless moment it slid off my lap; I only managed to grab the keyboard. It happened to fall at a bad angle, corner first, so the screen cracked and the touch screen was done for, which is why I had to carry the W540 around afterwards. The Surface Book's screen and keyboard are much better balanced, and the hinge connecting them is far more secure than the Surface Pro's.

Third, there is its support for Windows 10. With the detach key, the tablet part can be removed without powering down. Because the machine has two batteries and the tablet's own battery is small, the tablet is even lighter to hold than a Surface Pro. This is extremely convenient: when I am halfway through some reading and don't feel like sitting at the desk any longer, I can detach the tablet and continue on the sofa, a completely seamless switch. Off the base, the tablet's battery only lasts about two or three hours, but that is enough for light use.

All in all, the Surface Book works really well for me. I hope Microsoft adjusts its strategy and lets it sell the way it deserves.


An OpenCV Docker Image

I have been playing with OpenCV recently, and along the way I built a Docker image for OpenCV 3.2.0. The image is built from the OpenCV 3.2.0 source code on Ubuntu 16.04, with the Python 3 bindings built in as well. It is a good fit as a base image for developing and testing server-side programs based on OpenCV. Since it includes almost all of the OpenCV components, the build process is rather time-consuming and the image is fairly large, so I have pushed it to Docker Hub. If you need it, you can pull it with:
docker pull chunliu/docker-opencv
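Since the image includes the Python 3 bindings, a quick way to verify the pulled image is to import cv2 inside it. A sketch, assuming python3 is on the image's PATH:

```shell
docker run --rm chunliu/docker-opencv python3 -c "import cv2; print(cv2.__version__)"
```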



First, writing a good Dockerfile with nothing but Notepad is hard; an assisted editor makes it easier. I use VS Code with the Docker extension, which provides keyword highlighting and IntelliSense, but that is about it. It would be even better if a tool could do syntax checking, for example checking whether a line-continuation character is missing at the end of a line. Several of my early builds failed before I discovered that one line was missing a continuation character.

Also, I haven't found a good way to debug and test a Dockerfile. At first, I would modify the Dockerfile, run the build, and look for the cause when it failed. But since the build is time-consuming, that is not very efficient. Later, I started to run the commands from the Dockerfile one by one inside a container, making sure each command worked before running the build. The problem with this approach is that even if all the commands succeed in one bash session, it doesn't guarantee the build will still succeed after they are organized into RUN instructions in the Dockerfile.


So, if two pieces of a script need to run in the same bash session, they have to be in the same RUN instruction. For example, when building OpenCV, the make step is done like this:

mkdir build
cd build
cmake ......
make ......
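In a Dockerfile, the same sequence has to live in a single RUN instruction so that the cd still applies when cmake and make run. A sketch; the cmake flags here are placeholders, not the exact ones used for the image:

```dockerfile
# cd only persists within one RUN, so chain the steps with && and continuations
RUN mkdir build && cd build && \
    cmake -D CMAKE_BUILD_TYPE=RELEASE .. && \
    make -j"$(nproc)" && \
    make install
```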




Open Live Writer

Back in the days when blogs were popular, Windows Live Writer from Microsoft was a very popular offline blogging tool. I used WLW for a long time. At first it was installed together with MSN Messenger, mainly to support MSN Spaces. Later Microsoft dropped Spaces, but WLW stayed because it also supported WordPress. After MSN Messenger was retired, WLW could still be installed through the Live Essentials tools. Even after WLW 2012, I kept using the tool for a long time, until its development and support stopped.

After WLW, I never used a desktop editor for blogging again. Mainly I was writing fewer posts, and the occasional one could be handled in the browser; besides, I never found another tool I liked. Then today someone in a discussion group mentioned that WLW now has an open-source version, and that it has even been published in the Windows Store, so I quickly downloaded it to try. The new OLW's interface is identical to WLW's; it doesn't look like a UWP app, more like a desktop app wrapped with the Desktop Bridge. OLW is backed by the .NET Foundation, it has an official site, and the code is open source on GitHub; I have already forked a copy. Its readme also tells a bit of OLW's history, which is quite interesting.


Ubuntu 16.04

A couple of days ago I received a notification that the VM I host on Azure could be upgraded to Ubuntu 16.04. Since I had some spare time, I upgraded it.

Come to think of it, this VM has been through quite a few version upgrades. The OS started out as Ubuntu 13.04, was upgraded to 13.10, and later to 14.04. Each upgrade ran into some issues and took some time to troubleshoot. To avoid the hassle, I didn't bother moving on to 15.10 after reaching 14.04, and stayed there for more than two years.


After backing everything up, I ran the upgrade. It went surprisingly smoothly: apart from the mysql upgrade failing, nothing happened that would break the upgrade. Checking the logs, I found that mysql failed because AppArmor was protecting some paths. During the upgrade I chose to keep all my old configuration files, which left some of the new file paths mysql needs to access protected by AppArmor. After changing the AppArmor settings, the problem was solved. fail2ban hit a similar issue: a jail rule I had modified in the old version contained a bug, which the old version ignored but which prevented the new version from starting. Fixing that bug solved it. None of the other services had any problems; they were available right after the upgrade.



Crazyflie 2.0
My Crazyflie 2.0

Azure Marketplace is running a promotion called Super Human, which promotes the various services in the Azure Marketplace. The campaign offers some virtual labs; if you happen to be interested in those services, the labs teach you how to use them in Azure.

But the virtual labs are not my point here. The point is that if you successfully complete one of the virtual labs, you get a reward: you can choose either a 3-month Azure Pass or a Crazyflie 2.0 drone. A 3-month Azure Pass may be nice, but I imagine everyone would pick the drone, right?


Build a SharePoint Server 2016 Hybrid Lab

SharePoint Server 2016 has been out for a while. One big feature of it is the hybrid configuration with Office 365. To understand how it works, I built a lab environment based on Azure VMs and a trial subscription of Office 365. Here is how I did it.


To build a lab environment for hybrid solutions, you need the following components in place.

  • An Office 365 subscription. A trial is fine.
  • A public domain name. The default <yourcompany>.onmicrosoft.com domain that you get with the O365 subscription won't work in hybrid scenarios. You have to register a public domain if you don't have one.

Configure Office 365

In order to configure the hybrid environment, you must register a public domain with your O365 subscription. The process is roughly this: you go to your O365 subscription and kick off the domain setup process. O365 generates a TXT value; you create a TXT record with that value in the DNS zone at your domain registrar and then ask O365 to verify it. Once the domain is verified, it is successfully registered with your O365 subscription. More details can be found here.

You don't need to create the DNS records for mail exchange, such as MX records, if you just want to test SharePoint hybrid scenarios. You only need to create them if you also want to test the mailbox features.

The next step is to configure AD sync between your on-premises AD and the Azure AD created with your O365 subscription. You can use the Azure AD Connect tool to do it. For a lab environment, AD sync with password sync is good enough. You can also try AD sync with SSO if you have an AD FS to play with.

Before kicking off the AD sync, you might have to do some cleanup of the AD attributes. I changed the following:

  • Add a valid and unique email address to the proxyAddresses attribute.
  • Ensure that each user who will be assigned Office 365 service offerings has a valid and unique value for the userPrincipalName attribute in their user object.
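On a lab AD, both changes can be scripted. The sketch below uses the ActiveDirectory PowerShell module; the OU path and UPN suffix are assumptions you would replace with your own:

```powershell
Import-Module ActiveDirectory

# Give every lab user a routable UPN and a matching primary SMTP address.
Get-ADUser -Filter * -SearchBase "OU=LabUsers,DC=contoso,DC=local" | ForEach-Object {
    $upn = "$($_.SamAccountName)@contoso.com"
    Set-ADUser $_ -UserPrincipalName $upn -Add @{ proxyAddresses = "SMTP:$upn" }
}
```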

With the cleanup done, you can start syncing the AD. You should be able to see the user accounts in the O365 admin center after the sync.

Configure SharePoint Server 2016

Deploy the SharePoint Server 2016 farm. You can try a MinRole deployment if you have multiple servers; in my lab, I deployed just a single server.

The following service applications are required for the hybrid scenarios.

  • Managed Metadata Service
  • User Profile Service with user profile sync and MySite host.
  • App Management Service
  • Subscription Settings Service
  • Search Service for hybrid search scenario

The user profile properties need the following mappings:

  • User Principal Name property is mapped to userPrincipalName attribute.
  • Work email property is mapped to mail attribute.

Configure Hybrid

Once you have O365 and SharePoint Server 2016 ready, you can start configuring hybrid. It is fairly simple with the help of the Hybrid Picker of SharePoint Online: go to the SharePoint admin center in O365, click to configure hybrid, pick a hybrid solution, and follow the wizard. If everything is OK, you will get hybrid configured. Browse to an on-premises site, and you should see the app picker like the screenshot below.

Next Step

The next things to try are configuring the server-to-server trust and the cloud hybrid search. Stay tuned.




These vhds add up to roughly 800GB of disk space. I dug out a WD My Passport drive that had sat unused at home: USB 3.0, 1TB, bought around 2013 and hardly ever used. I thought it would finally come in handy. But when I plugged it into my computer, it turned out to be broken: Windows could detect the disk but could not mount it, and the capacity showed as 0. After searching online for a long time, I found no way to repair it.

A colleague heard about it and lent me a brand-new, unopened drive he had bought on sale and never used. His drive was fine, but I then realized that downloading 800GB from Azure Storage takes far too long, and if I ever needed to rebuild the environment, uploading it again would take even longer. It simply isn't worth it.

In the end, I didn't use an external drive at all. Instead, I backed these vhds up to another Azure Storage account with AzCopy. With AzCopy's asynchronous copy, the backup is fast, and a future rebuild would be easy too. Best of all, storage is almost the cheapest service in Azure, about 2.4 US cents per GB per month, and there is no risk of data loss, which is much safer than downloading everything to an external drive.
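For reference, a server-side asynchronous copy with the Windows AzCopy of that era looks roughly like the sketch below. The account names, container name, and keys are placeholders; leaving out the /SyncCopy option keeps the copy asynchronous on the service side, so no data flows through your own machine:

```bat
AzCopy /Source:https://srcaccount.blob.core.windows.net/vhds /Dest:https://dstaccount.blob.core.windows.net/vhds /SourceKey:<source-key> /DestKey:<dest-key> /S
```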