Kubernetes is a platform that abstracts away the litany of operational tasks for applications, automating them and letting you declare your application's needs via YAML files. It is ideal for microservice deployments. In this post, I will walk through creating a simple deployment using Azure AKS, Microsoft's managed Kubernetes offering.
Create the Cluster
In your Azure Portal (you can also do this from the az command line), search for kubernetes and select 'Kubernetes Service'. Creating the cluster is very easy; just follow the steps.
- Take all of the defaults (you can adjust the number of nodes, but I will show you how to cut costs later)
- You want to be using VM Scale Sets (a group of VMs that comprise the nodes in your cluster)
- Make sure RBAC is enabled in the Authentication section of the setup
- Change the HTTP application routing flag to Yes
- It is up to you if you want to link your service into App Insights
Full tutorial here: https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
Cluster creation takes time. Go grab a coffee or a pop tart.
Once creation completes you will notice several new Resource Groups have been created. The one you specified contains the Kubernetes service itself; I consider this the main resource group and will deploy other services into it – the others support the networking needed by the Kubernetes service.
I want to draw your attention to the resource group whose name starts with MC (at least mine does; it will include the region you deployed to). Within this resource group you will find a VM scale set. Assuming you are using this cluster for development, you can shut off the VMs within this scale set to save on cost. Just a word to the wise.
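You can also do this shutdown from the command line with a deallocate call against the scale set. The resource group and scale set names below are placeholders; use the ones from your MC resource group:

```shell
# Deallocate the cluster's node VMs to stop compute charges (dev clusters only)
az vmss deallocate --resource-group <MC resource group name> --name <scale set name>

# Start them back up when you need the cluster again
az vmss start --resource-group <MC resource group name> --name <scale set name>
```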
To see the Cluster in action, proxy the dashboard: https://docs.microsoft.com/en-us/azure/aks/kubernetes-dashboard
Install and Configure kubectl
This post is not an intro to and setup of Kubernetes per se, so I assume that you already have the kubectl tool installed locally. If not: https://kubernetes.io/docs/tasks/tools/install-kubectl
Without going too deep into it, kubectl connects to a Kubernetes cluster via a context. You can see the current context with this command:
kubectl config current-context
This will show you which Kubernetes cluster your kubectl instance is currently configured to communicate with. You can use the command line to list all available contexts, or read the ~/.kube/config file (on Linux) to see everything.
For AKS, you will need to update kubectl to point at your new Kubernetes service as the context. This is very easy.
az aks get-credentials -n <your service name> -g <your resource group name>
Executing this command will create the context information locally and set your default context to your AKS cluster.
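Putting it together, the whole switch looks like this (the service and resource group names are placeholders for your own):

```shell
# Merge the AKS credentials into ~/.kube/config and set the default context
az aks get-credentials -n <your service name> -g <your resource group name>

# Verify kubectl now points at the AKS cluster
kubectl config current-context

# List every context kubectl knows about locally
kubectl config get-contexts
```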
If you don't have the Azure command line tools, I highly recommend downloading them (https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).
Deploy our Microservices
Our example will have three microservices – all of which are simple and contrived to be used to play with our use cases. The code is here: https://github.com/xximjasonxx/MicroserviceExample
Kubernetes runs everything as containers, so before we can start talking about our services we need a place to store the Docker images so Kubernetes can pull them. You could use Docker Hub; I will use Azure Container Registry (ACR), Azure's container registry service, which has very nice integration with the Kubernetes service.
You can create the registry by searching for container in the Azure search bar and selecting 'Container Registry'. Follow the steps to create it; I recommend storing it in the same Resource Group as your Kubernetes service, and you will see why in a moment. Full tutorial: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal
Once this is created we need to attach it to our Kubernetes service so images can be pulled when requested by our Kubernetes YAML spec files. This process is very easy, and documented here: https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration
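At the time of writing, this attachment can be done with a single az aks update call (names are placeholders for your own):

```shell
# Grant the AKS cluster pull access to the container registry
az aks update -n <your service name> -g <your resource group name> --attach-acr <acr name>
```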
We are now ready to actually deploy our microservices as Docker containers running on Kubernetes.
Names API and Movies API
Each of these APIs is structured the same and serves as a source of data for our main service (user-api), which we will talk about next. Assuming you are using the cloned source, you can run the following commands to push these APIs into the ACR:
docker build -t <acr url>/names-api:v1 .
az acr login --name <acr name>
docker push <acr url>/names-api:v1
The commands are the same for movies-api. Notice the call to az acr login, which grants the command line access to the ACR for pushing – normally this would all be done by a CI process like Azure DevOps.
Once the image is in the ACR (you can check via Repositories under the registry in the Azure Portal) you are ready to have Kubernetes call for it. This, again, takes an az command line call. Details are here: https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration
As a personal convention, I store my Kubernetes spec files in a folder called k8s, which enables me to apply all of the files with the following command:
kubectl apply -f k8s/
For this example, I am only using a single spec file that defines the following:
- A namespace for our resources
- A deployment which ensures at least three pods are always active for each of the two APIs
- A service that handles routing to the various pods being used by our service
- An Ingress that enables cleaner pathing for the services via URL pattern matching
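As a sketch of what such a spec file can contain, here is what those four resources might look like for one of the APIs. The names, namespace, image tag, and path below are illustrative assumptions, not the exact values from the example repository:

```yaml
# Illustrative sketch only: resource names, namespace, image, and path
# are assumptions, not the repository's actual values.
apiVersion: v1
kind: Namespace
metadata:
  name: microservice-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: names-api
  namespace: microservice-example
spec:
  replicas: 3                  # keep three pods active at all times
  selector:
    matchLabels:
      app: names-api
  template:
    metadata:
      labels:
        app: names-api
    spec:
      containers:
        - name: names-api
          image: <acr url>/names-api:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: names-api-service
  namespace: microservice-example
spec:
  selector:
    app: names-api             # route to the Deployment's pods
  ports:
    - port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: names-api-ingress
  namespace: microservice-example
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - http:
        paths:
          - path: /names       # URL pattern matched to the service
            backend:
              serviceName: names-api-service
              servicePort: 80
```

The Ingress annotation here assumes you enabled the HTTP application routing add-on during cluster creation, as suggested above.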
If you are not familiar with these resources and their uses, I would recommend reviewing the Kubernetes documentation here: https://kubernetes.io/docs/home/
If you head back to your Kubernetes dashboard, the namespace should appear in the dropdown list (left side). Selecting it will bring up the Overview for the namespace. Everything should be green or Creating (yellow).
Once complete, you can go back into Azure, open the same Resource Group that contains your VM scale set, and look for the Public IP Address. Here are two URLs you can use to see the data coming out of these services:
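As an example of what hitting those endpoints can look like, assuming Ingress paths of /names and /movies (the IP and paths below are placeholders; yours depend on your own Ingress definitions):

```shell
# Placeholders: substitute your cluster's public IP and your Ingress paths
curl http://<public ip>/names
curl http://<public ip>/movies
```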
The URL pathing here is defined by the Ingress resources – you can learn more about Ingress resources here: https://kubernetes.io/docs/concepts/services-networking/ingress. Ingress is one of the most important tools you have in your Kubernetes toolbox, especially when building microservice applications.
The User API service is our main service and will call the other two services we just deployed. Because it calls them, it needs to know their URLs, but I do not want to hard-code these; I want something I can inject. Kubernetes offers the ConfigMap for just this purpose. Here is the YAML I defined for my ConfigMap:
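A minimal sketch of such a ConfigMap follows. The name server-hostnames matches the one referenced below; the namespace, key names, and service URLs are illustrative assumptions:

```yaml
# Sketch: key names, namespace, and values are assumptions;
# the ConfigMap name server-hostnames matches the text below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: server-hostnames
  namespace: microservice-example
data:
  namesApiHost: http://names-api-service
  moviesApiHost: http://movies-api-service
```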
A ConfigMap is a set of key-value pairs under a common name, here server-hostnames. We can then access our values via their respective keys.
How we get these values into our API happens via the Pods which are provisioned for our Deployment. Here is that YAML:
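Here is a sketch of the relevant part of that Deployment; the image, labels, and environment variable names are illustrative assumptions, while the configMapKeyRef entries pull from the server-hostnames ConfigMap above:

```yaml
# Sketch: names and image are assumptions; the env section shows
# how ConfigMap values become container environment variables.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
  namespace: microservice-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      containers:
        - name: user-api
          image: <acr url>/user-api:v1
          env:
            - name: NAMES_API_HOST
              valueFrom:
                configMapKeyRef:
                  name: server-hostnames   # the ConfigMap defined earlier
                  key: namesApiHost
            - name: MOVIES_API_HOST
              valueFrom:
                configMapKeyRef:
                  name: server-hostnames
                  key: moviesApiHost
```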
Note the env section of the YAML. We can load our ConfigMap values into environment variables which are then accessible from within the containers. Here is an example of reading it (C#):
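A minimal sketch of that read in C# (the variable name NAMES_API_HOST matches the assumed env entry above; the HttpClient usage is illustrative):

```csharp
using System;
using System.Net.Http;

// Read the value injected from the ConfigMap via the Deployment's env section
var namesApiHost = Environment.GetEnvironmentVariable("NAMES_API_HOST");

// Use it as the base address for calls to the names-api service
var client = new HttpClient { BaseAddress = new Uri(namesApiHost) };
```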
As with the other two services, you can run a kubectl apply command against the k8s directory to have all of this created for you. Note, though, that if you change namespaces or service names you will need to update the ConfigMap values.
Once deployed, you can access our main endpoint /user off the public URL as before. This will randomly build the Person list with a set of favorite movies.
So this was, as I said, a simple example of deploying microservices to Azure AKS. This is but the first step in the process; up next is handling concepts like retry, circuit breaking, and service isolation (where I define which services can talk to each other). Honestly, this is best handled through a tool like Istio.
I hope to show more of that in the future.