Infrastructure as Code with Terraform

The concept of Infrastructure as Code (IaC) is one of the main pillars of modern DevOps and Cloud Native applications. The general idea is that the software itself should dictate its infrastructure needs and should always be able to deploy quickly and automatically to existing and new environments.

This is important because most applications today tend to use not a single cloud service but many, often configured in varying ways depending on the environment. The risk is that if this configuration lives only in the cloud, then user error or a provider outage can cause this valuable configuration and settings information to be lost. With IaC, a simple rerun of the release script is all that is needed to reprovision your services.

Additionally, doing this mitigates the “vault of knowledge” problem whereby only a small group of people understand how things are set up. If they depart or are otherwise unavailable during an outage, the organization can be at risk. The configuration and settings information for infrastructure is as much a part of your application as any DLL or line of code, and we need to treat it as such.

To show this in action, we will develop a simple NodeJS application that responds to HTTP requests using ExpressJS, containerize it, and then deploy it to Azure using Terraform.

Step 1: Build the application

When laying out the application, I always find it useful to create a separate directory for my infrastructure code files; in this case I will create a directory called terraform. I store my source files under a directory called src.

For this simple application I will use ExpressJS and the default Hello World code from the ExpressJS documentation:

npm install express --save

Create a file index.js and paste the following contents (taken from the ExpressJS Hello World example: https://expressjs.com/en/starter/hello-world.html):

const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

We can run this locally using the following NPM command

npm start

Note, however, that the start script does not come predefined after npm init, so you might have to add it yourself (as shown below). In essence, it is the same as running node index.js at the command line.
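Defining it is just a matter of adding a start entry to the scripts section of package.json, something along these lines:

"scripts": {
  "start": "node index.js"
}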

Step 2: Containerize

Containerization is not actually required for this but, let’s be honest, if you are not using containers at this point you are only depriving yourself of easier, more consistent deployments; in my view it has become a question of when I should NOT use containers rather than whether to use them at all.

Within our src directory we create a Dockerfile. Below are the contents of my Dockerfile, which enables the application from above to be served via a container.

FROM node:jessie
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
ENTRYPOINT [ "npm", "start" ]

We start off by using the node:jessie base image (Jessie refers to the Debian release used inside the container) – you can find additional base images here: https://hub.docker.com/_/node/

Next we set our working directory within the container (where we will execute further commands), in this case /app. Note that you can call this whatever you like.

Next we copy everything from the Dockerfile context directory (by default the directory where the Dockerfile lives). Note that for our example we are not creating a .dockerignore due to its simple nature. If this were more complicated you would want to make sure the node_modules directory was not copied, lest it make your build times progressively longer (a minimal example is shown below).
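For a larger project, a minimal .dockerignore sitting next to the Dockerfile might look like this (entries are illustrative):

node_modules
npm-debug.log
.git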

We then run the npm install command, which populates node_modules with our dependencies. Recall from the previous point that we do not want to copy node_modules over; this is for two reasons:

  1. Often we will have development-environment-specific npm packages which we likely do not want in the container – the goal with the container is ALWAYS to be as small as possible
  2. In accordance with #1, copying from the file system is often slower (especially in the cloud) than simply downloading things – and downloading ensures we only pull in what we actually need

Next comes the EXPOSE instruction, which declares that the container listens on port 3000. If you look at the ExpressJS script, this is the port the server listens on, so this is effectively poking a hole in the container so the server can receive requests.

Finally, Dockerfiles like this end with the ENTRYPOINT instruction. With the source in place, this is the command that gets run when the Docker image is started as a container. For web servers like this, it should be a command that blocks the program from exiting because, when the program exits, the container will shut down as well.

Step 3: Publish to a Registry

When we build the Dockerfile we create a Docker image. An image, by itself, does nothing, as it is merely a template for subsequent containers (we have not run the ENTRYPOINT yet). Images are served from a container registry; this is where they live until they are called on to become containers (instances of execution).

Now, generally, it is a bad idea to use a laptop to run any sort of production services (these days the same is true for development as well), so keeping your images only in a local registry is not a good idea. Fortunately, all of the major cloud providers (and others) provide registries to store your images in:

  • Azure Container Registry (Microsoft Azure)
  • Elastic Container Registry (Amazon)
  • Docker Hub (Docker)
  • Container Registry (Google)

You can create the cloud registries above within the respective provider and publish your Docker images to them; more here: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli

Publishing Docker images to a registry such as this opens them up to being used, at scale, by other services including Kubernetes (though you can also host the registry itself in Kubernetes, but we won’t get into that here).

The command to publish is actually more of a push (from the link above)

docker tag nginx myregistry.azurecr.io/samples/nginx

docker push myregistry.azurecr.io/samples/nginx

With this, we have our image in a centralized registry and we can pull it into App Service, AKS, or whatever.

Step 4: Understand Terraform

At the core of IaC is the idea of using code to provision infrastructure into, normally, Cloud providers. Both Azure and Amazon offer tools to automatically provision infrastructure based on a definition: CloudFormation (Amazon) and Azure Resource Manager (ARM) (Azure).

Terraform, by HashiCorp, is a third-party alternative which can work with both (and many other providers) and has gained immense popularity thanks to its ease of implementation. It can be downloaded from the HashiCorp website.

There are plenty of resources around the web and on HashiCorp’s site to explain how Terraform works at a conceptual level and how it interacts with each supported provider. Here are the basics:

  • We define a provider block that indicates the provider plugin we will use to create resources; this will be specific to our target provider
  • The provider block then governs the additional block types we can define in our HCL (HashiCorp Configuration Language)
  • We define resource blocks to indicate we want to create something
  • We define data blocks to indicate that we wish to query for certain values from existing resources

The distinction between resource and data is important, as some elements are ONLY available as one or the other. One such example is a Container Registry. When you think about it, this makes sense. While we will certainly want to audit and deploy many infrastructure components with new releases, the container registry is not such a component. More likely, we want to be able to read from this component and use its data points in configuring other components, such as Azure App Service (we will see this later).

To learn more about Terraform (we will start covering syntax in the next step) I would advise reading through the HashiCorp doc site for Azure; it is very thorough and makes it pretty easy to make sense of things: https://www.terraform.io/docs/providers/azurerm/index.html

Step 5: Deploy with Terraform

Terraform definition files usually end with the .tf extension. I advise creating these in a separate folder, if only to keep them separate from your application code.

Let’s start with a basic script which creates an Azure Resource Group

provider "azurerm" {
  version         = "=1.22.0"
  subscription_id = ""
}

resource "azurerm_resource_group" "test" {
  name     = "example-group"
  location = "CentralUS"
}

The first block defines the provider we will use (Azure in this case) and the target version we want to use of that provider. I also supply a Subscription Id which enables me to target a personal Azure subscription.

Open a command line (yeah, no GUI that I am aware of), cd to the directory that holds your .tf file, and execute the following command:

terraform init

Assuming the terraform program is in your PATH, this should get things going; you will see it download the provider and provision the .terraform directory, which holds the binary for the provider plugin (and any other plugins you choose to download). You only need to run the init command one time. Now you are ready to create things.

As with any proper IaC tool, Terraform lays out what it will do before it does it and asks for user confirmation. This is known as the plan step in Terraform and we execute the following:

terraform plan

This will analyze the .tf file and (by default) output what it intends to do to the console; you can also provide the appropriate command line argument and write the plan to a file, as shown below.
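For example, you can write the plan to a file and then feed that exact plan to apply, so what runs is precisely what was reviewed:

terraform plan -out=tfplan
terraform apply tfplan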

The goal of this step is to give you (and your team) a chance to review what Terraform will create, modify, and destroy. Very important information.

The final step is to apply the changes, which is done (you guessed it) using the apply command:

terraform apply

By default, this command will also output the contents of plan and you will need to confirm the changes.  After doing so, Terraform will use its information to create our Resource Group in Azure.

Go check out Azure and you should find your new Resource Group created in the CentralUS region (if you used the above code block).  That is pretty cool. In our next step we will take this further and deploy our application.

Step 6: Really Deploy with Terraform

Using Terraform does not excuse you from knowing how Azure works or what you need to provision to support certain resources; in fact, this knowledge becomes even more critical. For our purposes we created a simple API that responds with some text to any request. For that we will need an App Service backed by a container but, before that, we need an App Service Plan, which we can create with Terraform:

resource "azurerm_app_service_plan" "test" {
  name                = "example-plan"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

Here we see one of the many advantages of defining things this way: we can reference back to previous blocks (remember what we created earlier). As when writing code, we want to centralize the definition of things where appropriate.

This basically creates a simple App Service Plan with the bare basics; your SKU needs may vary. Since we are using containers we could also use Windows here, but Linux just feels better and is more readily designed for supporting containers, at least insofar as I have found.

Running the apply at this point will add the App Service Plan to your Resource Group. Next we need to get some information that will enable us to reference the Docker container we published previously.

data "azurerm_container_registry" "test" {
  name                = "HelloWorldTest"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

Here we see an example of a data block, which is a read action – you are pulling in information about an EXISTING resource, an Azure Container Registry in this case. Note that this does NOT have to live in the same Resource Group as everything else; it is a common approach to keep services like this, which transcend environments, in a separate group.

Ok, now we come to it: we are going to define the App Service itself. Before I lay this out, I want to give a shout out to https://pumpingco.de/blog/deploy-an-azure-web-app-for-containers-with-terraform which inspired this approach with App Services.

Here is the block: https://gist.github.com/xximjasonxx/0d0bdda8741ac43197528937f6cec9eb (too long to inline here).
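The gist is the full version; as a rough sketch of its shape (resource names and the image tag below are illustrative, not copied from the gist), the block looks something like this:

resource "azurerm_app_service" "test" {
  name                = "example-app"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  app_service_plan_id = "${azurerm_app_service_plan.test.id}"

  # App Service needs these settings to pull the image from our registry;
  # the values come from the Container Registry data block above
  app_settings = {
    "DOCKER_REGISTRY_SERVER_URL"      = "https://${data.azurerm_container_registry.test.login_server}"
    "DOCKER_REGISTRY_SERVER_USERNAME" = "${data.azurerm_container_registry.test.admin_username}"
    "DOCKER_REGISTRY_SERVER_PASSWORD" = "${data.azurerm_container_registry.test.admin_password}"
  }

  # Points the App Service at a specific image and tag in the registry
  site_config {
    linux_fx_version = "DOCKER|${data.azurerm_container_registry.test.login_server}/samples/nginx:latest"
  }

  identity {
    type = "SystemAssigned"
  }
}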

There is a lot going on here so let’s walk through it. You can see that, as with the App Service Plan definition, we can reference back to other resources to get values such as the App Service Plan Id. Resources allow you not just to create things but to reference their properties (Terraform will ensure things are created in the proper order).

The app_settings block lets us pass values that Azure would otherwise add for us when we configure container support. Notice here, though, that we reference the Container Registry data block we created earlier. This makes it a snap to get the critical values we will need to allow App Service access into the Container Registry.

The last two blocks I got from PumpingCo.de – I knew what linux_fx_version does, though I had never seen it used with App Services, and the same goes for identity.

Step 7: But does it work?

Always the ultimate question. Let’s try it.

  1. Make a change and build your docker image. Tag it and push it to your Azure Container registry – remember the tag you gave it
    • One tip here: You might have to change the port being exposed to 80 since App Service (I think) blocks all other ports
  2. Modify the .tf file so the appropriate image repository name and tag are represented in linux_fx_version. If you want some assurance you have the right values, you can log into the Azure Portal and check out your registry
  3. Run terraform apply – verify and accept the changes
  4. Once complete, try to access your App Service (I am assuming you changed the original and went with port 80)
  5. It might take some time but, if it worked, you should see your updated message being returned from the backend

Conclusion

The main point with IaC is to understand that modern applications are more than just their source code, especially when going to the Cloud. Having your infrastructure predefined can aid in automatic recovery from problems, enable better auditing of services, and truly represent your application.

In fact, IaC is the centerpiece of tools like Kubernetes, which maintains a desired state based on YAML definitions of the abstract infrastructure. Pretty cool stuff if I do say so myself.

Of course, this here is all manual; where this gets really powerful is when you bake it into a CD pipeline. That is a topic for another post, however 🙂

Auth0 and React + Redux

Recently, I have been experimenting with Auth0 inside a side project I have been working on. It is quite interesting and, for the most part, easy to use. However, I have found the Quick Starts for ReactJS lacking, especially with respect to React implementations that utilize Redux. So I thought I would take a crack at it.

Setting up the Application

Head over to auth0.com, create your free account, and set up your application. I will spare you a rehash of this, as the Auth0 tutorials do it far better justice than I would. The important thing is to come away with the following bits of information:

  • domain
  • audience
  • clientID
  • redirectUri
  • scope
  • responseType

Simply choose to Create an Application once logged into Auth0 and you can pick the Quickstart of your choice. For readers here, it is probably going to be ReactJS + Redux.

Assumptions

I am going to assume you have created your Auth0 application and have already configured the various bits of Redux into your ReactJs application (or at least expect that you know how to).

I’ll also assume you are going to use routing to some degree within your ReactJS app; that plays a vital role in ensuring the authentication is done properly.

Let’s get started

If you read through the Auth0 getting started guide you will find it tells you to create a file which holds the bits of information above; do that, but expose it as a modular function. Here is my code:

[Screenshot: the Auth0 client module]
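Since the screenshot does not reproduce well here, below is a rough sketch of the idea, assuming the auth0-js package; the configuration values and the login/processResult method names are illustrative of what my module exposes, not copied from it verbatim.

// auth.js -- a sketch; the values below come from your Auth0 application settings
import auth0 from 'auth0-js';

let webauth = null;

export function getAuthClient() {
  // Create the WebAuth client exactly once; recreating it repeatedly tends to
  // interfere with Redux-driven re-renders.
  if (!webauth) {
    webauth = new auth0.WebAuth({
      domain: 'YOUR_TENANT.auth0.com',
      clientID: 'YOUR_CLIENT_ID',
      redirectUri: 'http://localhost:3000/callback',
      audience: 'YOUR_AUDIENCE',
      responseType: 'token id_token',
      scope: 'openid profile'
    });
  }

  return {
    // Kicks off the Auth0 hosted login flow
    login: () => webauth.authorize(),
    // Parses the result Auth0 appends to the callback URL
    processResult: (callback) => webauth.parseHash(callback)
  };
}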

The instance variable webauth is an instantiation of auth0.WebAuth, which takes our data and allows us to interact with the Auth0 APIs. Notice that I only ever create the client once; this is intentional. The one lesson I learned while doing this was to avoid recreating things unnecessarily, as that tends to interfere with Redux.

By taking this approach I found it much easier to reference the Auth client.

Next…

You need to initiate the login process by calling login on our Auth client. This will initiate the authentication process with Auth0. The important thing to understand is what happens at the end of that process.

One of the values we collected and added to our WebAuth client was a callback (the redirectUri). This callback is where the browser will go once authentication is complete. Within this route you need to be able to parse the result (using our processResult method). This gets tricky with Redux due to state changes: you will find the process succeeding and then failing, often putting your app into a weird state.

To mitigate this, you want to look at your routing. Here is an example of my routing:

[Screenshot: the routing configuration]
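A rough sketch of that routing, assuming react-router-dom and an isLoggedIn flag pulled from the Redux store; the page components and their paths are illustrative placeholders.

// routes.js -- a sketch; HomePage, CallbackPage, and LandingPage are placeholder components
import React from 'react';
import { Switch, Route } from 'react-router-dom';
import HomePage from './pages/HomePage';
import CallbackPage from './pages/CallbackPage';
import LandingPage from './pages/LandingPage';

const Routes = ({ isLoggedIn }) => (
  <Switch>
    <Route exact path="/" component={HomePage} />
    {/* Only register /callback while the user is NOT logged in, so the
        callback component cannot remount after authentication completes */}
    {!isLoggedIn && <Route path="/callback" component={CallbackPage} />}
    <Route path="/home" component={LandingPage} />
  </Switch>
);

export default Routes;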

You can see that we want to intentionally not render the /callback route if the user is logged in. The flow here is interesting. Let’s explain.

Let’s assume you click a button to start the login process. Your application, when complete, will redirect to your callback URL, /callback in our case. Here is what this page looks like for me:

[Screenshot: the callback component]
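Again, a sketch of the shape of this component, assuming the auth module above, a saveLogin Redux action, and push from connected-react-router; the names are illustrative.

// CallbackPage.js -- a sketch
import React from 'react';
import { connect } from 'react-redux';
import { push } from 'connected-react-router';
import { getAuthClient } from '../auth';
import { saveLogin } from '../actions';

class CallbackPage extends React.Component {
  componentDidMount() {
    // Parse the result Auth0 appended to the callback URL
    getAuthClient().processResult((err, authResult) => {
      if (!err && authResult) {
        this.props.saveLogin(authResult);  // Saga persists this to Local Storage
        this.props.push('/home');          // move to the authenticated landing page
      }
    });
  }

  render() {
    return <p>Completing login...</p>;
  }
}

export default connect(null, { saveLogin, push })(CallbackPage);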

As soon as the component mounts we grab our authClient and call processResult. If it works, we initiate our saveLogin action. In my case, this will end up sending the login result into a Redux Saga which saves it into Local Storage. I then use the push method off Connected Router to go to my authenticated landing page.

However, doing this will cause parts of the application to check for changes. This action, I found, causes a second call to the Auth0 API, which fails. This can make it seem like the login is failing when in fact it is working. The answer here is to eliminate the chance that the callback component remounts; for this reason I used isLoggedIn in my routing to remove that routing rule once the user is authenticated.

Conclusion

The important thing to remember with Redux is to limit the number of potential refresh vectors that can cause code to execute again. In this case, we can use selective routing to ensure that certain Route rules are only registered when they are appropriate.

Deploying to Kubernetes with Azure DevOps: A first pass

Kubernetes is all the talk around town these days as the next generation deployment platform for containerized applications. It has a lot of benefits and the full support of the major cloud providers (Google, Amazon, Microsoft) and a whole movement behind it represented by the Cloud Native Computing Foundation (CNCF).

My interest is mostly in how it can be used with respect to DevOps and deployments. Having tinkered considerably with Minikube locally and EKS via labs, I decided to dive into setting up a deployment pipeline with AKS (Azure Kubernetes Service). I found it rather easy, to be honest. I might finally be getting to the point where I can wrap my head around this.

The Setup

So, the prerequisite to this is an application that builds using (preferably) a Docker container. This is the Dockerfile I am using (generated via the Docker extension in VSCode):

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApp.csproj", "./"]
RUN dotnet restore "./WebApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApp.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "WebApp.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApp.dll"]

What needs to be understood is that, when executed, this will produce a Docker image that can be stored in a registry and referenced later by a service supporting containers (Azure App Service, Fargate, etc.). For our purposes we will want to create a DevOps build pipeline that creates this image and stores it in Azure Container Registry.

The main goal of this section is that you have an application (it does not have to be in .NET Core; it could be in Python) that builds to a container.

Create the Build Pipeline

When you are dealing with containerized applications, your build pipeline’s goal (along with integration checking and testing) is to produce an image. The image is the static format of a container. When we start an image in Docker, that running instance is referred to as the container.

Azure DevOps makes this relatively easy but, before we can do that, we need to create a registry, which is a place we can store images. It is from here that Kubernetes will pull them to create the containers that allow the application to be executed.

Easy enough to create an Azure Container Registry through the Azure Portal, just remember to Enable admin mode.

With this created we now have a place to output the builds from our automated processes. The way the registry works is you create a repository. This is really a collection of similar images. You then tag these images to indicate a version (for development I often use the build number and the Git commit hash; for releases I use a semantic version identifier).

To start off, you need to define the source of your pipeline; this is going to be your source repository and the appropriate branch. I am using develop, in accordance with the gitflow approach to source management.

After you select your source you get to pick how you want to start your pipeline. Conveniently, Microsoft has a Docker container template which will get you 90% of what you will need.

You need to check the following fields for each of the two tasks that are created as part of this template:

  • Build an image
    • Docker File – should be the location of your Dockerfile. Default works well here, but I like to be specific (make sure you update Use Default Build Context if you take something other than the default)
    • Image Name – this is the image name with tag that you want to build. More simply, this is what you will save to the registry. Here is an example of what I used for a side project I have:
      • feedservice-dev:$(Build.BuildNumber)-$(Build.SourceVersion)
        • feedservice-dev – this is the name of the repository in ACR that the image will live under (the repository gets created for you)
        • $(Build.BuildNumber)-$(Build.SourceVersion) – this is the tag which will differentiate this image from others. Here I am using build variables to denote the tag.
  • Push an image
    • Image Name
      • This needs to match the image name specified in Build an image

On Using :latest
Within Docker there is a convention of :latest to designate the latest image created. I tend not to use this convention very often for production scenarios, simply because I favor specificity. If you choose to use it, you just need to publish two images instead of one to ACR (overwriting :latest each time).
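If you do want :latest, the push is simply done twice with different tags, along these lines (registry, repository, and version are illustrative):

docker tag myimage myregistry.azurecr.io/feedservice-dev:1.0.0
docker tag myimage myregistry.azurecr.io/feedservice-dev:latest
docker push myregistry.azurecr.io/feedservice-dev:1.0.0
docker push myregistry.azurecr.io/feedservice-dev:latest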

Make sure to select Triggers and turn on Continuous Integration so we get builds that kick off automatically.

If all is working correctly, you should see the build kick off almost instantly after you push to the remote repo for the target branch. Once that build completes you should be able to see the new image in your registry.

Setting up Kubernetes

Azure actually makes this very easy. But first, a quick primer on Kubernetes.

Kubernetes is a container orchestration framework with a focus on maintaining the ideal state. It is composed of nodes which house its various resources. There are three core ones that you should be aware of:

  • Pod – the smallest deployable unit, loosely analogous to a VM; houses a container (or containers). A normal deployment will feature MANY pods depending on the level of scale being targeted
  • Deployment – A group of pods with conditions around ideal state. At the simplest level, we can specify the minimum number of pods that represent a deployment (generally in production you want no less than three here)
  • Service – A resource that enables access to pods either internally or externally. Often serves as a LoadBalancer in cloud based deployments

More information: https://kubernetes.io/

Beneath the pod concept, everything in Kubernetes resides on nodes. These nodes reside within a cluster. You can have as many clusters as you like and they can reside in different environments, both on-premise and in the cloud; you can even federate between them enabling more flexible deployment scenarios.

For the purposes of this tutorial we will create a hosted Kubernetes cluster on Microsoft Azure using their Azure Kubernetes Service (AKS).

It is easy enough, simply create an instance of the Kubernetes Service. Here are some important fields to be aware of as you create the Cluster:

  • Project Details
    • Resource group: I like to specify a separate one for the cluster however, I have noticed that AKS will still create other Resource groups for other components supporting the AKS instance
    • Kubernetes cluster name: Anything you want, no spaces
    • DNS prefix name: Anything you want
  • Authentication
    • Create a new service principal
    • Enable RBAC: Yes
  • Networking
    • HTTP Application Routing: Yes

Take the default for everything else

This process will take a while so go grab a cup of coffee or be social while it is happening.

Once it completes, Azure will provide you with some commands you can run through the Azure CLI (I recommend downloading this if you do not have it installed)

Kubernetes can be administered from ANY machine via kubectl. This is supported via the notion of a context; you can point your kubectl context at any Kubernetes deployment. Azure makes this easy by executing the following command:

sudo az aks install-cli

This will install kubectl (if you don’t have it) and set your context to AKS.

Kubernetes features a nice web dashboard to give you insight into what is happening within a cluster. You need to run a couple commands to get this to work:

az aks get-credentials --resource-group <YourRG> --name <YourClusterName>

Followed by

az aks browse --resource-group <YourRG> --name <YourClusterName>

This will launch your default web browser with the Kubernetes dashboard proxied to your localhost. I have found that sometimes when I start this I get a bunch of warnings around configmaps and other errors. This stems from a permissions problem that requires a role binding be created. Use this command to fix things:

kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

After this runs, rerun the browse command from above and the errors should be gone.

The Kubernetes dashboard offers a wealth of information that can assist with analyzing and debugging problems with deployments. It is one of the best tools at your disposal when dealing with cluster management.

If everything is green you are good to move on to performing the initial deployment.

Perform the initial deployment

As I said above, the easiest way to manage your Kubernetes cluster is via local usage of kubectl with the context pointing at AKS. This will allow you to directly apply changes and perform the initial deployment; note this assumes you have your container image in ACR.

Here is the YAML file we are going to use: https://gist.github.com/xximjasonxx/47151a9274ae732dd063c7f9605365c4
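The gist is the full version; as a rough sketch of what that file contains (names and the image tag here are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: feed-api-deployment
spec:
  replicas: 2                      # ideal state: at least two pods
  selector:
    matchLabels:
      app: feed-api
  template:
    metadata:
      labels:
        app: feed-api
    spec:
      containers:
        - name: giftlist-feedapi
          image: giftlistregistry.azurecr.io/feedservice-dev:sometag
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: feed-api-service
spec:
  type: LoadBalancer               # grants external access; use NodePort locally
  selector:
    app: feed-api
  ports:
    - port: 80
      targetPort: 80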

You will notice that there are two discrete sections defined: a Deployment and a Service. The Deployment here indicates that the ideal state for this “application” has 2 pods at minimum (indicated by replicas). Each Pod should host a container named giftlist-feedapi which is an instance of a feedservice-dev image; here I manually specified a certain image to start from. I am not sure if this is best practice, but it is just what I did through testing.

In the second section, we define the Service which acts as a LoadBalancer to grant external access to the pods. I should point out that LoadBalancer really only works in the cloud, since you won’t get a load balancer locally by default; use NodePort in that case.

Before we can apply our configuration, however, we need to give AKS the ability to talk to ACR so it can pull the images we stored there.  We do this by running the following sequence of commands:

AKS_RESOURCE_GROUP=<Your AKS Resource Group>
AKS_CLUSTER_NAME=<Your Cluster Name>
ACR_RESOURCE_GROUP=<Resource Group with Container Registry>
ACR_NAME=<Your ACR Name>

CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID

The use of environment variables here is done to ease the various commands involved. With this in place we are now ready to update our Cluster to support the various resources that will be needed to host our application.

We apply this configuration by using the apply subcommand of kubectl like so:

kubectl apply -f development.yaml

This command will return almost instantly. If you have the Dashboard still open you can watch everything happen. Once all of the deployments are fulfilled you can flip over to the Services section to see your service being created. This process tends to take a bit of time but should end with a public IP address being available that redirects to a Pod running your containerized service.

Alternatively you can use

kubectl get deployments

and

kubectl get services

To check on the status of the resources that are being deployed. I tend to favor the Dashboard since it also exposes the logs on the web; you can still get the logs from kubectl, but the web makes it easier.

If everything goes right, your Service will eventually yield an External-IP which you can access from a web browser.

Adding a Release Pipeline

Ok, so full disclosure: this part I am still working through, as I am not 100% sure this is the way it should be done. But I figured it is a good place to start.

First step, create a release pipeline in Azure DevOps and make sure the Continuous Integration trigger is set (click the lightning icon as shown below).

[Screenshot: the release pipeline artifact with the Continuous Deployment trigger (lightning icon) enabled]

The artifact should be configured as the output from the build you created previously; shown here is mine:

[Screenshot: the artifact configuration]

With this configuration and enabling the CI trigger we can ensure that ANY build that is processed by our build pipeline also correlates to a release as well.

Next for Stages we will have only one. Make sure the Tasks are configured as such

  • Deployment Process
    • Name: Anything you want but, make it descriptive
  • Agent Job
    • Display name: Anything you want
    • Agent Pool: Ubuntu

Add a single task of type Deploy to Kubernetes and configure it to run the set command with the following arguments (assuming you are following along):

image deployment.v1.apps/feed-api-deployment giftlist-feedapi=giftlistregistry.azurecr.io/feedservice-dev:$(Build.BuildNumber)-$(Build.SourceVersion) --record

Please do not copy the above verbatim, there is some nuance here

Ok, so when you ran apply as we did earlier, you actually created entries in the Kubernetes control plane. We can make changes to these values using the set command, which is what we are doing here.

Basically we are using set to change the image being used for our deployment. This will cause Kubernetes to begin orchestrating an update so, if you run the get deployments command enough, you will see Kubernetes spin up a third Pod while it replaces each existing Pod so as to maintain the ideal state.
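You can also watch the rollout directly from the command line (the deployment name here comes from the set command above):

kubectl rollout status deployment/feed-api-deployment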

Now, if everything works, you should be able to commit a change, see it automatically build which will generate a new image that gets saved to your ACR. Next, it kicks off a Release pipeline which updates your Kubernetes configuration to use the newly created image.

Pretty slick eh?

Closing Thoughts

Kubernetes is nothing more than a new way to deploy code for applications that use containers, which is becoming increasingly common across development teams. The use of K8s is a response to the problem of managing hundreds if not thousands of containers. In this example, I showed a very simple example of applying this process with Azure DevOps but there are many other topics I did not cover such as Helm charting.

The goal though is to understand that Kubernetes is a solid option but is NOT the only option for new applications. Like all things, it has its uses. Though, different from other things, Kubernetes seeks to provide a new platform that houses all environments for your applications so you can get as close to production as possible, if not all the way.

I hope to give more insight into this as I continue to look deeper into ways to use this though the creation of side projects.

Creating an Automated Blazor Deployment in AWS

Blazor is a framework I have written about previously that enables C# developers to build SPA applications similar to Angular and React in C# using WebAssembly. Principally, this showcases the ability for WebAssembly to open the web up to other languages beyond JavaScript without any plugins or extensions; WebAssembly is already widely supported in all major browsers.

In this article, I would like to discuss how we can deploy the output of Blazor to S3 on AWS and host it via a static website; this is a very common pattern for hosting SPAs due to decreased cost and efficient scale. For this process we will use two Amazon services: CodeBuild and CodePipeline.

Creating the Blazor Application

I won’t go into this step other than to say you can use the default app if you so desire, though I recommend the standalone template and not the one that features a WebAPI backend. Our goal here is the deployment of static web assets; an API would not be included in that categorization.

Steps are here: https://blazor.net/

Once you have the source, drop it into GitHub (we will reference this later) or you can use CodeCommit, which is AWS’s Git repository service. In my experience, there is not a significant advantage to using CodeCommit over GH.

Setup CodeBuild

In your AWS Console, look under Developer Tools for CodeBuild

Click Create Build Project – this will launch the wizard

Here are the relevant fields and their associated values:

  • Source
    • Pick GitHub and your repository (you will need to authorize access if you haven’t done so before)
  • Environment – Pick Linux as the OS
    • For Role: name it something logical, you will need to modify it later
  • Buildspec
    • Select Use Buildspec file – AWS does not support dragging and dropping for the creation of its build process. Using this option we will need to create a buildspec.yml file at the root of your application (you can add it now and we will cover the syntax in the next section)
  • Artifacts
    • No Artifacts – this seems weird but, we are going to have CodeBuild do the deploy since CodeDeploy does NOT support deploying a SPA to S3

Click Create Build Project to complete the creation. You can run it if you want to verify it will pull the source but, the build is going to fail as we don’t have a valid buildspec.yml file. Let’s create that next.

Creating the BuildSpec

One of the areas where I knock AWS is its lack of a good visual way to build DevOps pipelines. While it has gotten better, developers are still left to manually define their build processes via YAML or JSON. This is in contrast to Microsoft, which leverages a more visual drag and drop designer.

The first thing is to be aware of the syntax for these files, Amazon Docs are here: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html

For our application the main step we need to be aware of is the build step. This is a simple test application and so it requires only a simple standard build operation. Here is my recommendation:

commands:
  - dotnet restore
  - dotnet publish --no-restore -c Release

All this does is leverage the dotnet command line tool (which will be installed on the container hosting our build) to restore all NuGet packages and then publish. This will end up creating some folders in /bin/Release/netstandard2.0 which will be involved in the post_build step.

The next part is where I can only shake my head at AWS. Since CodeDeploy does not support S3 deploys like this, we need to invoke the aws command line tool to copy the relevant output artifacts into our S3 bucket. Obviously, before we do that we need to create the S3 bucket that will host the website.

Creating the S3 Bucket to host your site

Amazon allows you to serve static web content from S3 buckets for a fraction of the cost of using other services. Best of all, there is no setup, you get automatic scalability, and objects get the same eleven 9s of durability as anything else in S3. It is no wonder this has become the de facto way to serve SPAs. Since the output of a Blazor build is static web content, we can (and will) use S3 to host the application.

Under Storage pick S3. You need to create a bucket. Take the defaults for permissions; I will give you the Bucket Policy JSON at the end that allows objects to be served publicly.

Once the bucket is created, select it and access the Properties tab. Select Static Web Site Hosting and be sure to fill in the index document with index.html (you can also do this from the CLI, as shown below). I am not sure if this is strictly necessary, but I always do it just to be safe. You should also take note of the endpoint, as this is where you will access your website.
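If you prefer the CLI, the same static hosting setting can be applied with a command along these lines (the bucket name is a placeholder):

aws s3 website s3://YourBucketName --index-document index.html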

Now select Permissions and select Bucket Policy. You can use the Policy Generator here if you want but, this is the general JSON that comprises the appropriate policy to enable public access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YourBucketName/*"
    }
  ]
}

Note: This is a very simple policy created for this example. In a real production setting, you will want to lock down the policy as much as possible to prevent nefarious access.

When you save this you should see the orange Public tag appear in the Permissions tab.

With this, your bucket is now accessible. Our next step is to get our content in there.

Updating the Buildspec to copy to S3

As I said earlier, the normal tool used for deployments, AWS CodeDeploy, does not, as of this writing, support static web asset deployment to S3 – a missed opportunity in my opinion. This being the case, we can leverage the aws CLI to copy to our bucket. There are a number of ways to organize this; here is how I did it.

I added a post_build step to the build spec which consolidated everything I was going to copy into a single folder:

commands:
  - mv ./WeatherLookup/bin/Release/netstandard2.0/dist ./artifact
  - cp -R ./WeatherLookup/bin/Release/netstandard2.0/publish/wwwroot/css ./artifact/

You don’t have to do it this way; I just find it easier and more sensible than targeting the files and folders individually with the S3 copy command.

Next, we need to perform the copy to S3. I chose to use the finally substep within post_build to perform this operation:

finally:
  - aws s3 cp ./artifact s3://YourBucketName --recursive

By using --recursive, the CLI will copy ONLY the contents of artifact into our bucket. We do not want the root folder, since that would interfere with our pathing when users access the website. The complete buildspec is shown below.
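Putting the pieces together, the complete buildspec.yml ends up looking something like this (paths such as WeatherLookup and the bucket name come from my example and will differ in yours):

version: 0.2

phases:
  build:
    commands:
      - dotnet restore
      - dotnet publish --no-restore -c Release
  post_build:
    commands:
      # consolidate the publish output into a single folder
      - mv ./WeatherLookup/bin/Release/netstandard2.0/dist ./artifact
      - cp -R ./WeatherLookup/bin/Release/netstandard2.0/publish/wwwroot/css ./artifact/
    finally:
      # copy only the contents of artifact into the bucket
      - aws s3 cp ./artifact s3://YourBucketName --recursive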

If you run your build now it will get farther, but it will break on the final step. The reason is that the role we defined for CodeBuild does NOT have the appropriate permissions to communicate with S3. So we have to update that before things will work.

Updating the Permissions

Within your AWS Console, access IAM from Security, Identity, and Compliance. From this menu access Roles. Look for your role; we will need to attach a policy to it that gives it the ability to perform PutObject against our S3 bucket. There are two ways to do this:

Option 1:
You can apply the AmazonS3FullAccess policy, which will grant the role full access to all S3 buckets in your account. I do NOT recommend this for anything outside a simple test case; it is never a good idea to give this sort of access to a role, as it could be abused.

Option 2:
You can create a custom policy that provides the permissions specifically needed by this role. This is what I chose to do and is what I recommend others do to get into good habits.

For this demonstration we are going to use Option 2. Select Policies from the IAM left hand navigation menu. Select Create Policy. For most cases I would recommend the Visual Editor as it will greatly assist in creating policy documents. For this, I will give you the JSON I used:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:HeadBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YourBucketName",
        "arn:aws:s3:::YourBucketName/*"
      ]
    }
  ]
}

This policy grants the ability for the executor to access our S3 bucket and use the PutObject command.  Click Save after Reviewing the Policy.

Go back into roles, select the role associated with your CodeBuild project and Attach the Policy. When you rerun the build everything should work. Once you get a Completed status, you can browse to your S3 url and see your site.

Automating the Build

Our build works now, but there is a problem: it has to be kicked off manually. Ideally, for any process such as this, whether around integration or deployment, we need the action to start automatically. This is where AWS CodePipeline comes into play.

CodePipeline wraps a source provider, build agent, and deploy agent so that it can operate as an automated pipeline, hence the name. We are going to do the same

From Developer Tools select CodePipeline and on the ensuing menu select Create Pipeline.

On the ensuing page fill in the configuration options, there are two sections of particular note:

  • Service Role – this is the role that is assumed by the Pipeline as it reaches into other services. Notably, it is used to communicate with the various APIs of the supporting services.
    • Provide a service role here, we will not be modifying it at a later step
  • Artifact Store – Pipelines often deal with artifacts that come out of the various steps. Here we specify where those get stored. S3 is a great location. Keep in mind that for our example we will have no artifacts.

Once you press Next you are asked to configure the Pipeline source. Here you will want to specify our GitHub repository (you will need to connect again, but not reauth). This is to allow CodePipeline to register the GitHub webhooks that will be used to tell the Pipeline when a PUSH occurs.

Next comes to Build Provider, select the CodeBuild instance we created earlier.

Next comes Deploy, here we will press Skip and not define a deploy step. For something that would deploy to Lambda, ELB, EC2, ECS, or something like that you would need to select your deploy project. As we stated, we cannot use CodeDeploy with an SPA to S3 deployment.

The final step is to review and initiate the creation of the pipeline. The process is pretty quick. Once complete, select your build from the list. Amazon has done a great job making the pipeline visualization screen look much more appealing.

By convention, CodePipeline will run an initial build. If your CodeBuild step worked before, this should complete in a few minutes.

To test the pipeline, make a change to your local repo and push the change to the remote repo. If all is correct, you will see your pipeline begin executing almost immediately. Once it completes, refresh your S3 website URL and your change should be visible (remember to check your cache if you don’t see it).

Congrats, you have a working Blazor web app deployment in AWS.

Closing Thoughts

S3 is an ideal place to host a static website like an SPA, where API requests are used to get data at run time. The biggest win here is cost, which is going to be substantially less than something like EC2 or ELB.

One of the considerations I did not cover above is the environmentalization of this process: we are always pushing the code to the same bucket. Normally we would have different buckets (perhaps across different Amazon accounts) that hold the specific version of the web code for each environment. This is when you might need to use CodeDeploy to deploy the artifact from CodeBuild to a Lambda that copies the contents into the bucket serving the content.

My goal here was to take a very simple deployment and determine how Amazon’s capabilities compare to those of Azure DevOps. Needless to say, I found Amazon wanting; the DevOps aspect is certainly an area where Microsoft has the advantage. Between not being able to target S3 with their deploy tool (and thus resorting to a copy from the build step) and not supporting an easy visual way to build the sequence of build steps, more work is needed here.

Using Active Directory to Authenticate with Web API from Xamarin

After the week I had, this is a very necessary blog post. I spent the week, among other things, helping my new client set up their Xamarin app and Web API to talk to each other using AD tokens as the validation mechanism. Speaking frankly, Microsoft has A LOT of information out there, not helped by the transition from ADAL to MSAL and the many forms AD on Azure takes (B2C, vanilla, B2B, and I am sure others). It was immensely difficult to bring this all together.

For this we will be using a standard WebAPI backend which leverages the normal ASP.NET Core authentication libraries, and Xamarin.Forms on the front end using the ADAL libraries.

Setup the Web API

Contrary to popular belief, you do NOT need to use the Authentication/Authorization feature of an App Service. You can but, honestly, I found this feature pretty useless; you can accomplish the same thing with straight Azure configuration.

Head into the Azure Active Directory portion of Azure (below) and select App Registrations from the sub navigation.

[Screenshot: the Azure Active Directory blade with App Registrations selected]

We have to register our backend app with Azure AD so that Active Directory can create tokens for that API that will pass validation down the road.

When you Add a new App Registration you need to provide a few values:

  • Name: Whatever you want, it should be something that adequately describes the App
  • Application type: Web app / API (this will govern the next field so make sure this is selected)
  • Sign-on Url: This is the base URL of the API service that will be used. Hint: if you are using Azure App Service it will be something like https://<your name>.azurewebsites.net

The final step is to select your new registration after Creation and go into the Settings -> Reply URLs.

At the top you will see the base Url that you provided above. Modify it so it ends with /.auth/login/aad/callback

Registration is complete. Now let’s add the code that checks the token for us with each request.

It’s coding time – part 1

I am assuming you are using ASP .NET Core 2.1 for this project, if you aren’t, you might want to skip ahead or find a different guide for this portion of the process. Of course, you are welcome to read and perhaps my words will inspire the right path forward 🙂 Maybe

Ok, so in Startup.cs we need to look at the ConfigureServices method. The first thing you will want to do is add the following bit of code:

[Screenshot: registering the Azure AD bearer authentication in ConfigureServices]

Quick note: AddAzureBearer is not something that exists out of the box; you have to create it. What it really does is front the configuration for the JWT (JSON Web Token) bearer handler and pass some very important configuration options to the underlying provider. It actually comes out of the GitHub Azure Samples from Microsoft (hopefully it will find its way into the actual BCL at a later date).

You will also want to make sure you add the associated AzureAdOptions class, which receives the values from our configuration when we call Bind on the AzureAd key. Here is an example:

[Screenshot: the AzureAdOptions class]
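A minimal sketch of that options class; the property names correspond to the configuration values discussed below, and any additional properties from the Azure sample are omitted here.

// A sketch of the options class bound from the "AzureAd" configuration section
public class AzureAdOptions
{
    // The URL prefix taken from the federation metadata document (see below)
    public string Instance { get; set; }

    // The Guid identifying your Azure AD tenant
    public string TenantId { get; set; }

    // The Application Id of the API's App Registration
    public string ClientId { get; set; }
}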

Let’s talk about TWO of these values: Instance and TenantId

If you return to the Azure Active Directory section on the Azure Portal and select the App Registration you will notice there is a button called Endpoints at the top. Select this and you are given your Tenant specific endpoints for common Auth flow operations. Copy the top one (Federation Metadata document).

The Guid in this value is your TenantId so you can copy and paste that into the above configuration.

If you take the same Url you copied from the Federation box and paste it into a browser you will see an XML document. In the very first block you will see entityID. The Url prefix is the Instance value. Also, the Guid here is your TenantId as well. That is where these two values in particular come from.

ClientId always refers to the ApplicationId for the registration in the Azure App Portal

We now have our configuration set, but we need a way to generate a token. You CAN do this through Postman. This blog explains how, though it is a bit convoluted.

Register the Xamarin Application

We now have our Backend set, so let’s turn our attention to the Frontend Xamarin app. First we need to register our app, same as above though we will select Native for the Application Type.

When this change is made the SignOn Url box is replaced with Redirect Uri. At a high level, this is the redirect point within the flow that signals the authentication is complete and control should be handed back to the App.

Use the value https://<your api>.azurewebsites.net/.auth/login/done – yes, it does match the Sign-on Url of the API; that is intentional.

Once the app is created click on Settings and confirm the .login value is present in the Redirect URIs section.

Next go to Required Permissions and use the search feature at the top to find the API app you created previously. After you finish adding it, REMEMBER to hit Grant Permissions so the new permissions take effect.

Congrats, that is all you need to do to register the Xamarin application.

It’s coding time – part 2

Ok, this gets a bit different; really it is pick your poison. Rather than walking through both platforms, here is a sample that has all of the code for this: https://github.com/rcervantes/xamarin-adal

It does rely on the now less used Mobile App application type but it has solid code for how to use ADAL.Net with Xamarin (as of this writing MSAL does not yet work properly).

Here are some notes:

  • The actual logic which handles the Authentication is identical in both Droid and iOS with the exception of what is passed as PlatformParameters. You could easily pass this value to the PCL and centralize all auth logic there
  • Droid is a bit weird and requires override of the OnActivityResult callback for the flow to complete. You can basically copy paste the code from the sample
  • When the sample refers to Resource you can pass the string of your Backend app Application Id Guid
  • Authority is a Url of the format: https://login.windows.net/<TenantId>
  • Auth Redirect Uri format is: https://<Api Base Url>/.auth/login/done

Feed these values into your call to AcquireToken and the app should start the AD auth flow and, at the end, return to you an access token that you can pass up to your API as a Bearer token, and things should work.

So that’s it. I hope this works for you; it was quite the slog to get it to work for myself and there are still some edge cases I want to look at. In particular, if I use a custom domain for my Azure App Service, how does that affect the login flow, or does it use the Azure URLs under the hood anyway?

As always, if you have problems with your token, check out jwt.io. Leave me a comment if you need any additional help. Cheers

Reporting on Unit Tests with VSTS Containerized Apps

I am a purist at heart and when I do something I want to take full advantage of the tools I am using. In the case of Docker, that means emphasizing that ALL of my code should run in the same container as my final product. What is the value otherwise?

To that end, I set up about exploring how I might report on unit tests with a VSTS build. It is not an easy process because, in my view, VSTS and .NET do not naturally lend themselves to the containerized architectures. Microsoft is working hard on changing this and have made great strides but, there are still some issues to work out.

However, in this case the central problem has to do with what Docker creates: an image, which is immutable, meaning that during its construction you cannot read from it, nor would you want to.

Approach 1: Run the Tests before Image Creation

The simplest approach is to run the unit tests before you create the image and add a dependent build phase which only executes if all unit tests pass. While this is simple and would work, it violates, in my mind, the principles of containerization.

Code is run in the same way for all environments

This matters for testing, as it is the ideal spot to find a difference. If someone was using a different version of a library and it worked there, and even worked on the build server, but didn’t work in the container, you would never know until you deployed.

Admittedly this is rare for any experienced development team who would be keeping close tabs on this but, it does happen (it happened at West Monroe when a member of our team insisted on using the Alpha branch while everyone else used Stable for Xamarin).

My goal was to find a way to perform the unit tests in the very same containerized environment the code would run. So, I turned to the God of Wisdom: Google

Approach 2: Docker Compose to the rescue

Docker Compose is one of those tools that was created for one purpose but, I think, ended up fulfilling another. While you can still deploy production code using Compose, the trend right now is towards orchestration with something like Kubernetes. Still, Compose is great for applications that won’t use Kubernetes but still need to mimic production dependencies locally.

In my searching I came across this fantastic article on Medium by a fellow developer who found an ingenious way to accomplish what I was seeking using Docker Compose.

Running your unit tests with VSTS and Compose

The gist is, we can use a Dockerfile which creates a “test” image which has no ENTRYPOINT defined. We can then create a docker-compose file which references that Dockerfile and specifies the ENTRYPOINT in the compose file as the dotnet test command. Here is a sample from my final output.

version: '3'
services:
  myapp.tests:
    build:
      context: .
      dockerfile: MyApp.Tests/Dockerfile
    entrypoint: dotnet test MyApp.Tests/MyApp.Tests.csproj --logger trx -r /results
    volumes:
      - /opt/vsts/work/_temp:/results

As you scan this Compose file it becomes a bit clearer what is happening. VSTS supports the ability to perform a Docker Compose command. We use this to launch our Test Image and mount its results location for the test results to a local folder (last line above). This way when we run our subsequent step to report the results we have access to the files (they are built and stored in the container remember).

Note: I recommend keeping the directory the same since you can be sure it exists
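For completeness, the MyApp.Tests/Dockerfile referenced above is just a build image with no ENTRYPOINT; a minimal sketch (assuming a .NET Core 2.1 test project) might be:

# SDK image so we can restore, build, and run dotnet test
FROM microsoft/dotnet:2.1-sdk
WORKDIR /code
COPY . .
RUN dotnet restore
RUN dotnet build -c Release
# No ENTRYPOINT here -- docker-compose supplies "dotnet test ..." at run time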

Here is the Docker Compose up command we will use from the VSTS task

up --abort-on-container-exit --build

Note: the task will prepend docker-compose for us, so we need only specify the arguments.

The --abort-on-container-exit and --build flags just ensure that we build the container image if it is not cached already and the container is exited when our ENTRYPOINT command finishes.

Finally, we come to publishing our test results; we can use the existing VSTS Publish Test Results task. Point the task at our mounted directory, specify the desired extension as .trx, and set the test type to VSTest (even if you are using a different runner, say NUnit).

Now you should be able to run the build and see your test results. I should point out that, since we are using dotnet test as our entrypoint, the task WILL FAIL if a test does not pass. Keep that in mind so you can create the proper control flow and avoid creating Docker images from builds that do not have passing unit tests.

I hope you got some good information out of this. Be sure to visit the link above and send thanks to Christian; that article really helped me out.


DevOps with a Containerized app in Visual Studio Team Services

With any modern development project, I feel, you need good DevOps if you want a chance to be successful. Luckily, Microsoft has invested heavily in Visual Studio Team Services (VSTS) so that it is a one stop shop for development teams. Among its tools is a cutting edge Build and Release pipeline system.

In this post, I wanted to walk through my approach to handling a CI/CD pipeline with VSTS and containerized builds being deployed using App Services.

By the end you will have two builds: one which performs your typical CI Dev build that runs after each remote push and has a linked Release that deploys the created image to a Dev App Service, and a Release Build that is triggered when a tag is pushed to the remote, building the image and tagging it with the value from the Git tag. Finally, we will create a Staging Deployment whereby users manually create releases and deploy specific versions to higher environments.

This is not a short post so, let’s get started.

Creating the CI Build

One of the most important builds for any development team is the CI, or Continuous Integration, build. For this build, whenever we merge to our develop branch we want to build an image and, if valid, push it to our Azure Container Registry (ACR).

For starters, we need a Dockerfile that can create the image we will deploy to ACR. Here is the Dockerfile I used:

[Screenshot: the multi-stage Dockerfile used for the CI build]

This is what is known as a multi-stage build, where we separate the build and runtime components of our container. This reduces the size of the final image, as SDKs can be rather large and are not needed to actually run the code.

Here are the steps (a reconstructed sketch of the Dockerfile follows the list):

  • Download version 2.1 of the dotnet core SDK and refer to this stage as build
  • Set the working directory on the image to /code
  • Copy everything from the current directory into /code (our current working directory)
  • Run the dotnet restore command to restore our NuGet packages
  • Run dotnet publish to build our application in Debug (it is a Dev build) and send the contents to /artifact
  • Download version 2.1 of the aspnetcore-runtime and name this stage runtime
  • Create your working directory /app
  • Copy all contents from /artifact in the build stage to the current working directory (/app)
  • Expose port 80 on spawned containers
  • Set the ENTRYPOINT for the container to run ContainerTest.Api.dll
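Reconstructed from those steps, the Dockerfile in the screenshot looks roughly like the following. The image tags and the ContainerTest.Api name come from the steps above, but treat this as a sketch rather than the exact file:

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /code
COPY . .
RUN dotnet restore
# Debug configuration because this is the Dev build
RUN dotnet publish -c Debug -o /artifact

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /artifact .
EXPOSE 80
ENTRYPOINT [ "dotnet", "ContainerTest.Api.dll" ]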

We will create a derivative of this for the release build later on.

On VSTS, you will need to enter the Builds and Releases section and click + New; this will open the wizard to create a new pipeline.

The first screen is where you select the source you want to build. We want to use develop, since this is the branch our task and feature branches will ultimately merge into. This build will therefore happen very frequently, acting as a check that changes don't break anything.

On the next screen we pick our base template; Docker Container will be our selection. This will call our Dockerfile and expect to publish the image to a registry. We will use ACR for this, but you could use any registry you desire.

[Screenshot: selecting the Docker Container build template]

Important: You must make sure that the image(s) you build and the image(s) you publish are the same, or this process will fail.

[Screenshot: the Docker build task configuration]

Let’s go through the fields here, all of them are duplicated in the Publish task as well:

  • Azure Container Registry – because I indicated this is where my images are stored, I was asked to select the registry. There is a field above this to select the Azure subscription; I have hidden it here for security
  • Action – the values here will differ between Building and Publishing, for obvious reasons
  • Dockerfile – again, obvious; we can leave the default here
  • Image Name – this is the actual name and tag of the image you will create. In ACR the image name maps to a Repository, and each individual item in that repository will be a tag (see the example after this list)
    • In this case we use a fixed repository name and the BuildId value as the tag. We can update the tag to be whatever we want
    • Ex: $(Build.BuildNumber)-$(Build.SourceVersion)
  • Additional Image Tags – a newline-delimited list, useful if you want to create additional tags within the repo or if your tag structure is long
  • Include Source Tags – will create an image tag for any Git tag that is pushed
  • Include Latest – common practice in Docker; latest refers to the latest build of the image. You can also omit tags entirely and latest will get pushed
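For example, the Image Name field for this CI build might look like the following (containertest is the Dev repository name used later in this post; the ACR login server is taken from the registry selection, so I am assuming it does not need to be repeated here):

containertest:$(Build.BuildId)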

Again, it is critical that we duplicate the image name fields in the publish task so that it can find the image we just built.

Finally, we need to indicate that this build is kicked off when the develop branch is pushed to. To do this, edit the Build Pipeline, select the Triggers tab, and enable Continuous Integration. Make sure you have develop specified as the branch filter; this ensures the build runs whenever develop is modified.

Releasing

Now, oddly enough, even if you create a latest image and set your App Service to use the latest tag, the App Service will not update when you push; it has to be told to update, and that is where the Release pipeline comes in.

First, head to Azure and create an App Service (Web App for Containers). When creating it, be sure to select Container (if you use the Web App for Containers template you won't have a choice).

You will be asked to define a default image, so it is best to do this once one of your CI builds has completed. Be sure to test that it works after the provisioning process finishes.

Returning to VSTS, go to Releases. Release pipelines can do all the same things as Build pipelines, but their targeted purpose is to respond to a completed build or to manually release code selected from completed builds.

When you create a release pipeline you will be met with a side menu asking you to select a template. For this case, we select Azure App Service Deployment.

Our next step is to determine what will be released, and that means selecting an Artifact. There are many options here but, since we want this release to happen whenever the CI build finishes, we select Build. When you do this most of the fields get filled in; the Source Version Alias can be whatever you want, it's just the name of the incoming artifact.

After we select our artifact we need to tell the release what to do. For our case, this is going to be super simple: we are going to deploy the image built in the Build Phase to our Dev environment AppService. Click the Phase link beneath the Environment.

[Screenshot: the Azure App Service deployment task settings]

So, let's go through these settings because they are important to understand:

  • App type: must be set to Linux Web App because our images are Linux based
  • App Service name: I have noticed that if you don't use Web App for Containers the service doesn't seem to be selectable in this menu, hence my suggestion to use that template above
  • Image: the image (repository) you want to target; this is case sensitive
  • Tag: the tag you are deploying. Some of the environment values are carried over from the build; one of them is the BuildId (example values follow this list)
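As an example, my Dev deployment values end up roughly like this (the App Service name is a hypothetical placeholder):

App type: Linux Web App
App Service name: containertest-dev
Image: containertest
Tag: $(Build.BuildId)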

The last thing we need to do is set up our Release trigger. We can trigger releases manually, which will be the case for UAT and Production and, to some extent, QA. But for Dev we want it to have the latest and greatest.

So, once you have this in place, it's time to test our CI build. Make a change and push to develop.

The build should start up and, hopefully, finish successfully (use the Download log on the Build detail to debug failures). After it finishes, switch over to Releases; you should see the linked release start up.

Once that finishes, refresh your App Service endpoint and, after a time, you should see the change. If you get Service Unavailable, it usually means you specified an image tag that does not exist. To confirm this, view the Container Settings for your App Service; if the Tag (or any required field) is blank, the deployment specified the wrong tag. You can further confirm this in the log for the Release.

That completes our first goal: we have a CI build which deploys to our App Service. Up next is QA.

Creating the Release Build

Ideally, I wanted this build to kick off whenever a version tag was pushed to the develop branch. From this, we can tag the generated image with its version and very easily keep a historical listing of the versions that can be used by App Services via the Release pipeline.

Before going any further it's important that we understand how we can automatically invoke a build from a tag push, since it is not immediately obvious.

When you create a tag it is stored at the ref path refs/tags/<tag name>. Most build engines are wired to watch for branch changes using a similar path structure. Knowing this, we can hijack the mechanism to launch our build when a tag is pushed.

Clone the CI build, edit it, and click Triggers. You will need to enable Continuous Integration, as you did for the CI build, but you won't use a branch this time (shown below).

[Screenshot: the Triggers tab with the tag ref path used as the branch filter]
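In text form, the branch filter for that trigger points at the tag ref path instead of a branch, something like the following (refs/tags/* catches every tag; you could narrow it to something like refs/tags/v* if you only want version tags):

refs/tags/*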

That is all there is to it. Now we just need to make some changes to our build process.

Tagging the Image

Simply put, we want to translate our Git tag into the tag for our container image. This value is available to us, oddly enough, through the Build.SourceBranchName variable, so we can use it in our image Build and Push steps to correctly tag and push the right image.

Admittedly, this is a bit weird but, if you remember how we triggered the build, it does make sense. I do hope Microsoft exposes this in a cleaner way moving forward, because it is not obvious you can do this.

[Screenshot: the Build an image task using Build.SourceBranchName as the tag]

The last thing we want to do is make sure that we build our .NET code in Release mode, since this is code that could potentially go into Production. The easiest way to do this is to create a copy of your Dockerfile and update Debug to Release.
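In the Dockerfile sketch from the CI build, that amounts to a one-line change to the publish step:

RUN dotnet publish -c Release -o /artifact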

Also note the -release suffix added to the Image Name. This is so we do not drop these images into our Dev repo (containertest). While there is no harm in doing so, I find this makes it easier to know which builds are releases and prevents mistakes.
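Putting that together, the Image Name field for the release build ends up looking something like this (again, treat the exact repository name as an example):

containertest-release:$(Build.SourceBranchName)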

Methodology

When we create a QA release we should view this as something that MIGHT go to Production. In reality, the vast majority of Release builds will be discarded somewhere along the way, but at least one will/should make it all the way through.

Additionally, in a proper build process we NEVER want to rebuild code that has already been validated by testing, as that opens the chance that a bug slips by. Thus, when we create a Release build, that is the last time the code is compiled. This is where containers really shine versus something like a zip file deploy, as they are specifically designed with this case in mind.

Finally, by separating our Dev and Release builds we are able to have a history and allow for easy rollbacks and deployments. By having this history, we can see a timeline of how an application developed.

Releasing the Release

We can use the same methodology to kick off this release as we did with the CI build: when a new build completes, the release is kicked off.

Go ahead and create a new Release Pipeline; as before we want to use Azure App Service Deployment as our template. For the Artifact, select the Release Build that was created previously. The beauty here is that, since that build is ONLY triggered when a version tag is added, this release pipeline will only ever fire when that release build succeeds; this makes it ideal for deploying to QA environments.

As with the Release Build we created earlier, we need to reference Build.SourceBranchName in the Deployment task so we indicate which image tag we are deploying.

As a tip, when a Release run finishes you can look at its details and click Logs and see a COMPLETE dump of all variables in context. This is VERY helpful for knowing what you have access to; this was more helpful than hours of Googling for me 🙂

Also, a good way to verify that the Release worked (in addition to visiting the URL or checking the Container Settings in the Web App) is to look at the actual image and tag it attempted to deploy (you will not get a failure if the image does not exist, just Service Unavailable).

[Screenshot: the deployment task showing the image and tag being deployed]

To test, create a tag anywhere in your Git commit history and push that tag to your remote. As a warning, when you do a git push it does NOT, by default, push tags. I use GitKraken so I can push tags individually. Just keep that in mind.
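From the command line, creating and pushing a tag looks like this (v1.0.0 is just an example tag name):

git tag v1.0.0
git push origin v1.0.0

You can also push all local tags at once with git push --tags.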

Also, if you are using the free tier of VSTS, it may take a moment to start. You can check Queued Builds if you want to see that the change was detected.

Once the build finishes, flip over to Releases and, again after some waiting, you will see the Release start. When it finishes you can check your AppService. Congrats.

Higher Environment Deployment

As we talked about before, once you build a QA release you are, effectively, creating a build that might go all the way to Production and, as such, rebuilding this code should absolutely be avoided. Using containers makes this much easier than something like a zip deploy.

Because we do not need to build anymore, additional actions take place only in the Release Pipeline. To close out this post, I will create a Staging Deployment where the user indicates what version they are deploying.

In Releases, create a new Release Pipeline; I called mine Staging Deployment. The important thing with this pipeline is that for the Artifact type you select Azure Container Registry (or whatever registry you are using).

Next, go into the Tasks for your App Service Deployment. Make sure you select the right image name (remember it is case sensitive) and use Build.BuildId for the tag. This is weird, I know, but when the user creates the release they will specify a version (from the versions we have created) and it will be surfaced as the BuildId. Here is what mine looks like:

[Screenshot: the Staging Deployment task with the image name and Build.BuildId tag]
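In text form, the relevant fields in that task are roughly (the image name is the example release repository from earlier):

Image: containertest-release
Tag: $(Build.BuildId)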

This is literally it for the configuration of the pipeline. Now, let’s invoke it.

From the Releases main landing screen select the Staging Deployment (or whatever you called it) Pipeline and from the three dots menu select + Release.

A side menu will appear prompting the user for certain details about this release; one of them is the version. When you click the dropdown, a list of available versions from the ACR will appear. Select the one you want. Here is what my screen looks like:

[Screenshot: creating a new release and selecting a version from the ACR]

Click Create and the pipeline will move to a standby state; it won't actually deploy yet, since that is, correctly, a separate step.

FYI, the Refresh on these screens is a bit wonky so, make use of the manual Refresh button in the table’s upper left corner.

Here is what my screen looks like when I drill into this New Release I created.

[Screenshot: the new release detail view, ready to deploy]

Now we click Deploy and wait until the process ends. Mine took about three minutes, though I use the free tier and a local agent built on an agent Docker container (future post for that).

Once it's complete, go verify things and you should be good to go.

Closing

Let me be frank: there is NO REASON not to use containers for applications these days. Orchestration is another matter, but containers should now be the de facto standard for the vast majority of applications.

In the example above, we were able to use Git tags to identify versions and drive our builds but, more than that, there is a consistency here. We have a guarantee that our applications work because they are containerized and have everything they need right inside, regardless of the host OS.