Spring Boot, Azure Database for MySQL, and Azure App Service – Part 1

I recently played with Java and Azure App Service. What I was trying to find out was what the development experience would look like for Java developers who want to build their applications with Azure App Service and Azure Database for MySQL.

There are some documents on the Microsoft docs site, such as this one. They might be good enough for an experienced Java developer, but for someone like me with limited Java experience, they are not easy to follow, and the sample is too simple to reflect real development. So I decided to try it myself and document my experience here for others to reference. This will be a series of posts, and this is the first one.

Prepare the dev environment

Instead of installing IntelliJ or Eclipse, I chose to use VSCode as my Java IDE. VSCode is already installed on my computer, so according to this tutorial, I just need to install the JDK and Maven. I am a bit lost with Java terms like Java SE, JDK, and JRE and their versions, but I don't want to be bothered by them. I chose to install OpenJDK because the Oracle JDK requires a license. Here are the steps to install OpenJDK.

  1. Download OpenJDK from here. The Windows version of OpenJDK is a zip file. Unzip it to C:\Program Files\Java so that the root folder of the JDK is something like C:\Program Files\Java\jdk-11.0.1.
  2. Add an environment variable JAVA_HOME, set its value to the root of the JDK, for example, C:\Program Files\Java\jdk-11.0.1
  3. Add C:\Program Files\Java\jdk-11.0.1\bin to the system path. 
  4. With the above steps, OpenJDK is fully installed. To test that it works, open a command window and run java -version. It should print the OpenJDK version and runtime information.

Once OpenJDK is installed, you can follow the VSCode tutorial to download and install Maven and the Java Extension Pack for VSCode.

Create a MySQL database

Instead of installing MySQL on my local computer, I chose to create an Azure Database for MySQL instance as the dev database environment. It is easy to provision an Azure Database for MySQL instance, and Azure has a quickstart for it. I also ran the following SQL queries in Azure Cloud Shell to configure the database.

CREATE DATABASE tododb; -- Create a database
CREATE USER 'springuser'@'%' IDENTIFIED BY 'Spring1234'; -- Create a database user
GRANT ALL PRIVILEGES ON tododb.* TO 'springuser'@'%'; -- Grant user permissions to the database
FLUSH PRIVILEGES;

With the above preparation, we have a Java development environment and a MySQL database ready for development. In the next post, I will start creating a Spring Boot REST API app with VSCode. Stay tuned.

Upgrade Ubuntu Server From 16.04 to 18.04.1

I have received several notifications from my Ubuntu server running in Azure asking me to upgrade to Ubuntu 18.04.1. When Ubuntu 18.04 was first released, I didn't upgrade because I was afraid of compatibility issues and didn't want to break the server. With the release of 18.04.1, the version seems stable enough, so I decided to upgrade the server.

Here is what I did.

First of all, I updated the server with apt update && apt upgrade, and then I backed up the server with Azure VM Backup so that I could restore the VM if the upgrade failed.

Then I ran do-release-upgrade to upgrade the server. The OS kernel seemed to upgrade successfully, but the software package upgrade failed with the following output.

authenticate 'bionic.tar.gz' against 'bionic.tar.gz.gpg' 
extracting 'bionic.tar.gz'

 libpython3.6-stdlib:amd64
 python3.6
 python3-apt
 python3
 python3-cffi-backend
 apt-xapian-index
 python3-xapian
 python3-gi
 mailutils
 python3-markupsafe
 python3-systemd
 python3-gdbm:amd64
 python3-lib2to3
 python-apt
 dh-python
 python3-distutils
 libpython3-stdlib:amd64
 python3-yaml
 python3-pycurl
 python3-dbus

Upgrade complete

The upgrade has completed but there were errors during the upgrade
process.

To continue please press [ENTER]

I did some searching on the internet, and it seems to be a common issue. To solve it, I ran the command sudo mv /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service.bak, as mentioned here.

After the issue was fixed, I ran sudo apt-get dist-upgrade to upgrade all packages, choosing to keep all local copies of the configuration files. After that, the upgrade completed successfully, with all software and services running normally.

The CQRS and Event Sourcing Patterns

Over the past couple of days I reread CQRS Journey, which the Microsoft Patterns & Practices team wrote a few years ago. I had come across the book years ago, but my work at the time was not focused on application architecture, so I never finished it. Rereading it now gave me a new understanding of the CQRS and Event Sourcing (ES) patterns. In today's world of cloud computing and microservices, these two patterns are still very valuable references for application architecture. This post is a summary of my reading.

What is the CQRS pattern?

CQRS stands for Command and Query Responsibility Segregation. This involves two definitions: what is a command, and what is a query?

  • A command changes the state of an object but does not return any data.
  • A query is the opposite: it returns data but does not change the state of an object.

If we simplify queries and commands as read and write operations on data, the CQRS pattern means that the modules responsible for reading and writing the model should be separated in the application architecture. This separation is not only a separation of code or logic, but also of the data models, and even of the data stores. This diagram from CQRS Journey shows a typical application of CQRS.
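As a rough sketch of what this separation can look like in code (a made-up Go example, not from the book), the write side and the read side expose different interfaces and different models:

package cqrs

// Commands change the state of the model but return no data (beyond an error).
type TodoCommands interface {
    CreateTodo(title string) error
    CompleteTodo(id string) error
}

// Queries return data but never change state.
type TodoQueries interface {
    ListOpenTodos() ([]TodoView, error)
}

// TodoView is the read model: a denormalized shape optimized for display.
// It can be stored separately from the write model, even in a different database.
type TodoView struct {
    ID    string
    Title string
}

The two interfaces can then be implemented by separate modules, each backed by its own storage.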

Although not required, CQRS is usually used together with Domain-Driven Design (DDD). CQRS is not a global architecture pattern; it only applies to specific bounded contexts. For scenarios with very complex domain models, CQRS can improve the scalability and flexibility of the architecture while reducing the complexity of individual modules. Because the read and write models are separated, each side can be optimized for its own operations and data locking is avoided, which also helps performance. These are the benefits CQRS brings.

At the same time, CQRS has its limitations. First, it is not an easy pattern to implement: because of the separation of read and write responsibilities, it is less intuitive than CRUD. Synchronizing data between the read and write sides and ensuring consistency is also a challenge. A common approach is to use events to propagate the results of write operations to the read-side database, and the Event Sourcing pattern is often used to store and distribute those events. Because event-based data synchronization is actually done asynchronously, eventual consistency has to be considered in a distributed system. In CQRS, the write data is usually fully consistent, while the read data is eventually consistent.

These limitations mean CQRS is not suitable for every scenario or every bounded context. It usually applies only to complex, frequently changing scenarios that involve multiple collaborating parties. For simple business logic, a single operating party, or non-core bounded contexts, using CQRS may add overhead without bringing obvious benefits.

What is the Event Sourcing pattern?

Event Sourcing (ES) is a pattern for storing the state of a domain model. Instead of storing the model's current state directly, it stores the history of changes to that state. When the application wants the current state of the model, it replays the whole history to derive it. A commonly used scenario to explain ES is a bank account.

The intuitive way to store an account balance is to store the balance itself. When the user deposits 100, the balance is 100; when the user then withdraws 10, the balance becomes 90; when the user deposits another 50, the balance becomes 140. When the user queries the balance, the system simply reads the current balance.

In the same scenario with ES, the system does not store the balance itself; it stores the user's actions as events. When the user deposits 100, it stores a "deposited 100" event; when the user withdraws 10, it stores a "withdrew 10" event. When the user queries the balance, the system reads all the events and computes the balance from them. This is actually how financial institutions record transactions. Bitcoin stores the send and receive operations of accounts in the same way and links all the events together with a blockchain.
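To make this concrete, here is a minimal Go sketch (a toy example, not from the book) that derives the balance by replaying the event history:

package main

import "fmt"

// Event records what happened to the account, not the resulting balance.
type Event struct {
    Kind   string // "deposited" or "withdrew"
    Amount int
}

// balance replays the whole history to compute the current state.
func balance(history []Event) int {
    total := 0
    for _, e := range history {
        switch e.Kind {
        case "deposited":
            total += e.Amount
        case "withdrew":
            total -= e.Amount
        }
    }
    return total
}

func main() {
    // The store only ever appends events; it never updates a balance in place.
    history := []Event{
        {"deposited", 100},
        {"withdrew", 10},
        {"deposited", 50},
    }
    fmt.Println(balance(history)) // 140
}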

The benefits of ES are obvious. It simplifies writes: once an event has happened it is immutable, so a write becomes a simple append, avoiding complex locking and conflicts. ES also keeps the full history of state changes, which makes auditing and error correction easy. One problem with ES is that query performance may degrade as the history grows.

Although not required, CQRS and ES are usually used together. CQRS typically uses events to synchronize data between the read and write sides, and storing those events with ES helps with this synchronization. When data becomes inconsistent (which, according to the CAP theorem, is unavoidable in a distributed system), the read side can replay the event history to reach eventual consistency.

Deploying a Service Fabric cluster to run Windows containers

From a container perspective, Service Fabric is a container orchestrator that supports both Windows and Linux containers. In legacy application lift-and-shift scenarios, we usually containerize the legacy application with minimal code changes, and Service Fabric is a good platform to run these containers.

To deploy a Service Fabric cluster on Azure that is suitable for running containers, we can use an ARM template. I created a template with the following special settings:

1 – An additional data disk is attached to the VMs in the cluster to host the downloaded container images. We need this disk because, by default, all container images are downloaded to the C drive of the VMs, which may run out of space if several large images are pulled.

"dataDisks": [
    {
        "lun": 0,
        "createOption": "Empty",
        "caching": "None",
        "managedDisk": {
            "storageAccountType": "Standard_LRS"
        },
        "diskSizeGB": 100
    }
]

2 – A custom script extension is used to run a custom script that formats the data disk and changes the configuration of the dockerd service.

{
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.9",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [

"https://gist.githubusercontent.com/chunliu/8b3c495f7ff0289c19d7d359d9e14f0d/raw/2fdcd207f795756dd94ad7aef4cdb3a97e03d9f8/config-docker.ps1"
            ],
            "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File config-docker.ps1"
        }
    },
    "name": "VMCustomScriptVmExt_vmNodeType0Name"
}

The custom script is the config-docker.ps1 file referenced in the fileUris above, which formats the data disk and updates the dockerd service configuration.

Create authorization header for Cosmos DB with Go

I started a side project to create a client package for the Cosmos DB SQL API with Go so that I can try Go in a real project. My plan is to implement something similar to the .NET Core SDK in Go. As this is a project for learning and practice, I will do it little by little, and there is no timeline for when it will be done.

I am building the project on top of the SQL API REST interface. To access Cosmos DB resources with the SQL API via REST, an authorization header is required on every request. The value of the authorization header has the following format, as mentioned in this document.

type={typeoftoken}&ver={tokenversion}&sig={hashsignature}

In the above string, the values of type and version are simple: type is either master or resource, and the current token version is 1.0. The value of the signature is a bit more complex: it is a hash of several other values, using the Cosmos DB access key as the hash key. The document has all the details, and even better, it has a sample written in C#.

So following the document and the sample, I implemented a Go equivalent. It is a good exercise for trying base64 encoding and HMAC hashing in Go.
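The following is a simplified sketch of that implementation. The function and package names here are illustrative, and the error handling in the actual package may differ, but the signing steps follow the algorithm described in the document:

package cosmosdb

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "fmt"
    "net/url"
    "strings"
    "time"
)

// utcNow formats the current UTC time as an RFC 7231 HTTP-date,
// e.g. "Tue, 01 Nov 1994 08:12:31 GMT", which goes into the x-ms-date header.
func utcNow() string {
    return time.Now().UTC().Format("Mon, 02 Jan 2006 15:04:05 GMT")
}

// authorizationHeader builds the Authorization header value for a request
// signed with the Cosmos DB master key.
func authorizationHeader(verb, resourceType, resourceLink, date, key string) (string, error) {
    // The master key is base64 encoded; decode it to get the raw HMAC key.
    rawKey, err := base64.StdEncoding.DecodeString(key)
    if err != nil {
        return "", err
    }

    // The string to sign uses the lowercased verb, resource type and date.
    stringToSign := strings.ToLower(verb) + "\n" +
        strings.ToLower(resourceType) + "\n" +
        resourceLink + "\n" +
        strings.ToLower(date) + "\n" +
        "" + "\n"

    mac := hmac.New(sha256.New, rawKey)
    mac.Write([]byte(stringToSign))
    signature := base64.StdEncoding.EncodeToString(mac.Sum(nil))

    // The whole header value must be URL encoded.
    return url.QueryEscape(fmt.Sprintf("type=master&ver=1.0&sig=%s", signature)), nil
}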

The date in the signature is required to be in the HTTP-date format defined by RFC 7231. The time package in the Go standard library doesn't seem to support this format out of the box, but it provides a very easy way to create a custom format. The utcNow() function in the above code is what I implemented to format the time in the RFC 7231 format.

Notes on the Go Language

I recently took on a side task: giving colleagues who are interested in Go an introduction to what the language is about.

To be honest, I am not an expert in Go either. I remember first hearing about Go around 2011, when I learned that Google had created a new language. I took a look, and my first impression was that it was ugly. At the time I was busy figuring out how Bitcoin worked and had no time to play with a new language. Around 2013, Go suddenly became popular in China, and many websites started using it for their back ends. Curious about what made it special, I spent some time learning it, but I never had a chance to use it at work, so I can hardly claim mastery. Preparing this tech talk gave me a chance to study Go more deeply.

While preparing the material, I thought about how to introduce Go to people with experience in other languages. As an introduction, I think answering the following three questions well should be good enough.

What is Go?

Unlike languages that depend on a virtual machine, such as C# or Java, or interpreted dynamic languages such as Python, Go is a compiled, statically typed language that is closer to C. In fact, C is one of the languages that directly influenced Go, and looking at the backgrounds of Go's three creators gives you a good idea of its genes. The programming language world has come full circle. When I started working more than a decade ago, C/C++ dominated everything. Then Java and C# became popular, partly to solve memory management problems. Later, as machines got faster and interpreters improved, JavaScript and Python rose in their respective domains. Now, with the spread of cloud computing, statically compiled languages such as Go and Rust are becoming popular again, with the difference that they offer better memory management than C/C++.

Go's most prominent features are that it is compiled; statically typed, with partial type inference; garbage collected, which most other compiled, statically typed languages are not; and built for concurrency based on CSP (Communicating Sequential Processes); among others.
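To give a feel for the CSP-style concurrency, here is a small toy example (not from my talk material) in which goroutines communicate over channels instead of sharing memory:

package main

import "fmt"

// worker receives jobs over one channel and sends results back over another.
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * j
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // Start three workers running concurrently.
    for w := 0; w < 3; w++ {
        go worker(jobs, results)
    }

    // Send the work, then close the channel so the workers know when to stop.
    for i := 1; i <= 5; i++ {
        jobs <- i
    }
    close(jobs)

    // Collect the results.
    for i := 0; i < 5; i++ {
        fmt.Println(<-results)
    }
}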

The Go community has grown quickly in the last couple of years. Reportedly, Go was the language with the fastest user growth on GitHub in 2017, and in Stack Overflow's 2017 developer survey, Go ranked fifth among the most loved languages and third among the most wanted, which shows how hot it is.

Most Wanted vs. Most Loved

Why do we need Go?

But we already have countless programming languages, so why do we need Go? To answer that, we have to start with the problems Go tries to solve. According to Rob Pike, one of Go's creators, Go was designed to solve two problems:

  1. Google's problems: big hardware, big software. Slow compilation; complex dependencies; every programmer having their own style, which makes collaboration hard; lack of documentation; difficult upgrades; constant reinvention of the wheel; and so on.
  2. Making the daily work of Go's designers easier and their lives better.

To this end, Go's design philosophy follows the two core rules below.

  1. Minimalism: a simple, Pascal-like syntax with few keywords; no support for language features such as classes, inheritance, or generics.
  2. Orthogonality: data structures and methods are defined separately and connected through composition rather than inheritance; type abstraction is done through interfaces; both data structures and interfaces can be extended through embedding (see the sketch after this list).
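Here is a rough, made-up sketch of what these two rules look like in code: methods are declared separately from the struct, behavior is abstracted with an interface, and a type is extended by embedding rather than inheritance.

package main

import "fmt"

// Logger describes behavior purely as an interface.
type Logger interface {
    Log(msg string)
}

// ConsoleLogger is a plain data structure; its method is defined separately.
type ConsoleLogger struct {
    Prefix string
}

func (c ConsoleLogger) Log(msg string) {
    fmt.Println(c.Prefix, msg)
}

// Server is extended by embedding ConsoleLogger instead of inheriting from it.
type Server struct {
    ConsoleLogger
    Addr string
}

func main() {
    s := Server{ConsoleLogger{Prefix: "[srv]"}, ":8080"}
    s.Log("listening on " + s.Addr) // method promoted from the embedded type

    var l Logger = s // Server satisfies Logger implicitly; there is no "implements" keyword
    l.Log("interfaces are satisfied structurally")
}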

Different people may list different items when talking about Go's design philosophy, but minimalism and orthogonality are the two most important. They make Go simple to learn and easy to get started with, which is one of the main reasons for its popularity.

But I also think that, because of these two rules, Go (at least Go 1) does not solve all the problems its designers set out to solve very well. For example, dependencies in Go are handled through packages, but that has not solved the dependency problems of large projects. Go has tried several different approaches to this, and it looks like it is starting over yet again: Go 1.11 will bring a new dependency management tool. Another example is the lack of generics, which makes reinventing the wheel unavoidable. These are the things Go gets criticized for even as it enjoys great popularity.

Where is Go used today?

For now, Go is mostly used for server-side back-end programs. In the container world, Go has practically become the standard language: Docker, Kubernetes, and so on are all written in Go. According to the Go 2017 Survey results, other areas where Go is popular include middleware and microservices. Go is not well suited for writing desktop GUI applications.

Install Minikube on Ubuntu Server 17.10

I have some experience with Docker and containers, but I had never played with Kubernetes before. I started to explore it recently, as I may need a container orchestration solution in upcoming projects. Kubernetes is supported by Azure AKS, and even Docker has announced support for it. It looks like it is going to be the major container orchestration solution in the market for the coming years.

I started by deploying a local Kubernetes cluster with Minikube on an Ubuntu 17.10 server on Azure. Kubernetes has a document on its site about installing Minikube, but it is very brief. So in this post, I will document the step-by-step procedure, both for my own future reference and for others who are new to Kubernetes.

Install a Hypervisor

To install Minikube, the first step is to install a hypervisor on the server. On Linux, both VirtualBox and KVM are supported hypervisors. I chose to install KVM and followed the guidance here. The following are the steps.

  • Make sure VT-x or AMD-V virtualization is enabled. In Azure, the VM size needs to support nested virtualization. To double-check, run egrep -c '(vmx|svm)' /proc/cpuinfo; if the output is 1 or more, virtualization is enabled.
  • Install the KVM packages with the following command:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

  • Use the following command to add the current user to the libvirt group, and then log out and log back in for it to take effect. Note that in the guidance the group name is libvirtd, but on Ubuntu 17.10 the name has changed to libvirt.

sudo adduser `id -un` libvirt

  • Test if your install has been successful with the following command:

virsh list --all

  • Install virt-manager so that we have a UI to manage VMs

sudo apt-get install virt-manager

Install kubectl

Follow the instruction here to install kubectl. The following are the commands:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Install Minikube

Follow the instructions in the Minikube release notes to install it. I used the following command:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

When you finish this step, according to the official document, the installation of Minikube is complete. But before you can use it, there are several other components that need to be installed as well.

Install Docker, Docker-Machine, and KVM driver

Minikube can run natively on the Ubuntu server without a virtual machine. To do so, Docker needs to be installed on the server. Docker CE has its own installation procedure, and Docker has a document for it.

Docker Machine can be installed with the following commands:

curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine && \
sudo install /tmp/docker-machine /usr/local/bin/docker-machine

Finally, we need to install a VM driver for Docker Machine. The Kubernetes team ships a KVM2 driver which is supposed to replace the KVM driver created by others. However, I failed to make Minikube work with the KVM2 driver. There is a bug report for this issue, and I hope the Kubernetes team will fix it soon.

So I installed the KVM driver with the following command:

curl -LO https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-ubuntu16.04
sudo cp docker-machine-driver-kvm-ubuntu16.04 /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Test if Minikube Works

With all the above steps completed, we can test Minikube now.

minikube start --vm-driver kvm

It will create a VM named minikube in KVM and configure a local Kubernetes cluster on it. With kubectl, you should be able to see the cluster and node information.

kubectl cluster-info
kubectl get nodes

With that, you can start to explore Kubernetes.

Running Linux Containers on Windows Server 2016

I never thought running Linux containers on Windows Server would be a big deal. One reason I run Docker for Windows on my Windows 10 laptop is to run Linux-based containers. I thought I just needed to install Docker for Windows on a Windows Server 2016 machine with the Containers feature enabled, and then I would be able to run both Linux and Windows containers. I didn't realize that this is not the case until I tried it yesterday.

It turns out that Linux Containers on Windows (lcow) is a preview feature of both Windows Server, version 1709 and Docker EE. It won't work on Windows Server 2016 versions older than 1709. As a side benefit of this topic, I also learned about the Windows Server Semi-Annual Channel, which is an interesting change.

So here is a summary of how to enable lcow on Windows Server, version 1709.

  1. First of all, you need to get a Windows Server, version 1709 machine up and running. You can get the installation media of Windows Server, version 1709 from here. As I use Azure, I provisioned a server based on the Windows Server, version 1709 with Containers image. Version 1709 is only offered as a Server Core installation, so it doesn't have the desktop environment.

  2. Once you have the server up and running, you will have to enable the Hyper-V and Containers features on it and install the Docker EE preview, which can be done with a PowerShell script.

    As I used the Azure image, the Containers feature and Docker EE were already enabled, and the Docker daemon was already configured as a Windows service, so I didn't have to run that script.

  3. Now you can follow the instructions here to configure lcow. Specifically, I used a short script to configure it, and I also updated the configuration file in C:\ProgramData\Docker\config\daemon.json to enable the experimental LinuxKit feature when the Docker service starts.

  4. Once you finish all the above configuration, lcow is enabled on Windows Server, version 1709. To test it, simply run:

docker run --platform linux --rm -ti busybox sh

That is it. If you want, you can also try to run Ubuntu containers by following the instructions here.

Creating API Management instances in Parallel with Automation Runbook

Provisioning an Azure API Management (APIM) service instance is a somewhat time-consuming task: it usually takes 20 to 30 minutes to get an instance created. In most cases this is fine, because you usually don't need to create many APIM instances. For most customers, 2 or 3 instances are enough for their solutions, and provisioning APIM instances is not day-to-day work.

But recently I have been preparing a lab environment for an APIM-related lab session that I am going to deliver at an event. Given that provisioning an APIM instance takes 20 to 30 minutes, it is impractical to let attendees create the instances during the lab session, so I have to provision an APIM instance for each attendee beforehand. As there could be more than 40 attendees, I have to do it with a script rather than manually clicking around in the Azure portal.

APIM supports creating instances with PowerShell; it doesn't support the Azure CLI at the moment. The cmdlet for instance creation is New-AzureRmApiManagement, and as mentioned in the documentation, it is a long-running operation that can take up to 15 minutes. If I simply wrote a PowerShell script that ran this operation sequentially, it would take tens of hours to create all the APIM instances, which is not acceptable. I have to run the operations in parallel.

I ended up creating a PowerShell Workflow runbook in Azure Automation to do the task. PowerShell Workflow has several ways to support parallel processing, and Azure Automation provides enough computing resources to run all the operations in parallel.

The workflow itself is quite straightforward. The one thing to note is that I need to include the Azure authorization code in each of the parallel operations: when the operations run in parallel, each one runs in its own process, so each of them needs to be authorized before it can access Azure resources.

You can get the completed code from here. To run this workflow runbook in Azure Automation, the AzureRM.ApiManagement module needs to be imported into Azure Automation. That's all.

Why the Surface Book Is My Ideal Form of a PC

When I traveled to the US last month, I went to a Microsoft Store and bought a Surface Book 2. After a few weeks of use, I think it is the best Windows 10 PC I have ever used. Its hybrid of laptop and tablet is, to me, the ideal form of a PC. I don't know what Microsoft is thinking, but the Surface Book line is only sold in a handful of countries. If it weren't limited by Microsoft's market strategy and its rather high price, it should be a big seller.

What makes the Surface Book so good?

First, it is powerful enough and light enough. My previous machine was a ThinkPad W540 with an i7, 32 GB of RAM, and dual graphics. It was very powerful but too heavy; its charger alone weighed more than some people's entire machines, so it wasn't something to carry around. My Surface Book is the 13-inch version with an i7, 16 GB of RAM, and the base with the discrete GPU. Running Visual Studio, Docker, and so on is no problem at all, and Civilization VI actually feels better on it than on the W540. The point is that, with comparable performance, the Surface Book weighs less than half of the W540, only a little more than the W540's charger, which is definitely easier on the shoulders.

Second, the Surface Book's base is more stable and its keyboard feels better. I also had a Surface Pro 3; back then I took the Surface Pro out and used the W540 at home. The mediocre keyboard feel of the Surface Pro was the lesser problem. Once I visited a customer's server room where there was no desk next to the servers, so I sat on a high stool with the Surface Pro open on my lap. But the Surface Pro is notoriously top-heavy, and in a careless moment it slipped off my lap and I only managed to grab the keyboard. It fell at exactly the wrong angle, landed on a corner, and the screen cracked, killing the touch screen. That is also why I had to carry the W540 around afterwards. The Surface Book's screen and keyboard are much better balanced, and its hinge is far more secure than the Surface Pro's.

The third point is its support for Windows 10. With the detach key, the tablet part can be removed without powering down. Because it has two batteries and the tablet's battery is the smaller one, the tablet feels even lighter to hold than a Surface Pro, which is extremely convenient. Sometimes, halfway through reading something, I don't feel like sitting at my desk anymore; I can just detach the tablet and keep reading on the sofa, a completely seamless switch. Detached from the base, the tablet's battery only lasts about two or three hours, but that is enough for light use.

In short, the Surface Book has worked very well for me, and I hope Microsoft adjusts its strategy so that it can sell well.