Deploying v2 Azure Functions with Terraform


You would not think this would be super difficult or, at the least, that Terraform's documentation would cover such a common use case. I found neither to be true, so I figured I would share the changes that were needed.

First, some background – I am using a Terraform script to deploy a variety of Azure resources to support an internal bootcamp I will be giving at West Monroe in August.

Here is the completed Terraform block to deploy the V2 Azure Function:
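The original block was shared as an image, so here is a minimal sketch of what an equivalent definition can look like with the azurerm provider of that era (all names and values below are placeholders, not the originals):

resource "azurerm_function_app" "example" {
  name                      = "sample-v2-function"
  location                  = "${azurerm_resource_group.example.location}"
  resource_group_name       = "${azurerm_resource_group.example.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.example.id}"
  storage_connection_string = "${azurerm_storage_account.example.primary_connection_string}"
  version                   = "~2"

  site_config {
    # the piece the docs do not call out; set this to match your runtime
    linux_fx_version = "<value appropriate for your runtime>"
  }
}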

What is not covered in the documentation is that, in addition to specifying version, we also need to include linux_fx_version in site_config.

I do not know if this is necessary if you use a Windows based App Service Plan (I am using Linux and sharing it with my App Service).

I found immense frustration in figuring this out and was annoyed that I could not find anything but V1 examples in the Terraform docs.


An Intro to Kubernetes

Kubernetes is a container orchestration framework from Google that has become a highly popular and widely used tool for developers seeking to orchestrate containers. The reasons for this center around the continued desire by developers and project teams to fully represent all aspects of their application within their codebase, even beyond the source code itself.

At its simplest, Kubernetes is an automated resource management platform that works to maintain a declared ideal state for a system via the use of YAML-based spec files. While the topic of Kubernetes is very deep and encompasses many aspects of architecture and infrastructure, there are five crucial concepts for the beginner to understand.

The Five Concepts

Cluster – the term Cluster essentially refers to a set of resources being managed by Kubernetes. The cluster can span multiple datacenters and cloud providers. Effectively each Cluster has a single control plane.

Node(s) – represents the individual blocks of compute resource that the cluster is managing. Using a cluster like minikube you only get a single node, whereas many production systems can contain thousands of nodes.

Pod – the most basic resource within Kubernetes. Responsible for hosting one or more containers to carry out a workload. In general, you will want to aim for a single container per pod unless using something like the sidecar pattern

Deployment – at a basic level, ensures a minimum number of Pods are running per a given spec. Pod count can expand beyond this level but the replica count ensures a minimum. If the number drops below it, additional Pods are created to ensure the ideal state is maintained.

Service – clusters are, by default, deny all and require services to “punch a hole” into the cluster. In this regard, we can think of a service as a router enabling a load balanced connection to a number of pods that match its declared criteria. Often services are fronted by an Ingress (beyond this post) which enables a cleaner entrance into the cluster for microservice architectures.

Visually, these concepts relate to each other like this:

Options for Deployment

In his blog post, Kelsey Hightower of Google lays out how to set up Kubernetes yourself; it is well beyond most developers, myself included. Therefore, most of us will look toward managed options. In the cloud, all of the major players have managed Kubernetes offerings:

  • Azure Kubernetes Service
  • Google Kubernetes Engine (among other offerings)
  • Elastic Kubernetes Service (AWS)
  • Digital Ocean Kubernetes

Each of these options runs a very recent version of Kubernetes and is already supporting customer deployments. However, one of the advantages Kubernetes has as a resource management platform is the ability to also manage on-prem resources. Because of this we have seen the rise of managed on-prem providers:

There is also minikube, which serves as a prime option for development and localized testing.


Deploying Our application


This is the application we are going to deploy, the pieces are:

  • Price Generator – gets latest stock price for some stock symbols and uses a random number generator to publish price changes to RabbitMQ
  • RabbitMQ – installed via Helm chart – receives price change notifications and notifies subscribers
  • StockPrice API – .NET Core Web API – listens for Price Changes and sends price changes to listening clients via SignalR
  • StockPrice App – ReactJS SPA application receives price changes via SignalR and updates price information in its UI

With the exception of RabbitMQ, each of these pieces is deployed as a Docker container from Docker Hub.  Here is a sample Dockerfile to build the .NET pieces:
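The Dockerfile itself was shared as an image in the original post; a rough sketch of what a multi-stage build for one of the .NET pieces can look like (project and image names are placeholders):

FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish StockPriceApi.csproj -c Release -o /app

FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "StockPriceApi.dll", "--environment", "production"]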

You can ignore the --environment flag – this was something I was trying with regard to specifying environment-level configuration.

Next we push the image to Docker Hub (or whichever registry we have selected) –

For reference, here is the sequence of commands I ran for building and pushing the StockPriceApi container:

docker build -t xximjasonxx/kubedemo-stockapi:v1 .

docker push xximjasonxx/kubedemo-stockapi:v1

Once the images are in a registry we can apply the appropriate spec files to Kubernetes and create the resources.  Here is what the StockAPI spec file looks like:
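The spec itself was embedded as an image; a sketch of a Deployment plus Service along the lines described below (resource names and labels are placeholders, the image is the one pushed above) can look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stockapi-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stockapi
  template:
    metadata:
      labels:
        app: stockapi
    spec:
      containers:
        - name: stockapi
          image: xximjasonxx/kubedemo-stockapi:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: stockapi-service
spec:
  type: LoadBalancer
  selector:
    app: stockapi
  ports:
    - port: 80
      targetPort: 80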

What is defined here is as follows:

  • Define a Deployment that indicates a minimum of three replicas be present
  • Deployment is given a template for how to create Pods
  • Service is defined which includes matching metadata for the Pods created by the Deployment. This means, no matter how many Pods there are, all can be addressed by the service

To apply this change we run the following command:

kubectl apply -f stockapi-spec.yaml

Advantages to using Kubernetes

The main reason orchestrators like Kubernetes exist is the necessity of using automation to manage the large number of containers required to support higher levels of scale. While that is a valid argument, the greater advantage is the predictability, portability, and manageability of applications running in Kubernetes.

One of the big reasons for the adoption of containers is the ability to put developers as close as possible to the code and environment that run in production. Through this, we gain a much higher degree of confidence that our code and designs will execute as expected. Kubernetes takes this a step further and enables us to, in a sense, containerize the application as a whole.

The spec files that I shared can be applied anywhere that supports Kubernetes and the application will run more or less the same. This means we can now see the ENTIRE application as we need it, not just pieces of it. This is a huge boon for developers, especially those working on systems that are inherently difficult to test.

When you start to consider, in addition, tools like Ansible, Terraform, and Puppet and how they can effect configuration changes to spec files, and that clusters can span multiple environments (cloud provider to on-prem, multiple cloud providers, etc.), some interesting scenarios come about.

Source Code is available here:

I will be giving this presentation at Beer City Code on June 1. It is currently submitted for consideration to Music City Code in Nashville, TN.

Pure Containerized Deploy with Terraform on Azure DevOps

In previous posts I have talked about the importance of Infrastructure as Code in creating a more complete solution that keeps with the core tenet of Cloud Native: applications should manage their own architecture. In this post, I will walk through the process of deploying a container to an Azure App Service with Terraform.

Benefits of IaC (Infrastructure as Code)

When we start talking about cloud deployments we must inevitably come to see the configuration and deployment of cloud services as being as much a part of the application as the source code itself. Any cloud application where the configuration for services is stored only in the platform itself is courting disaster. A simple hit of the “Delete” key can leave teams scrambling for hours to restore service.

In addition, using IaC makes it very easy to spin up new environments which can be invaluable for testing. In fact, this is a chief benefit of a tool like Kubernetes (Jenkins X leverages this ability to create new environments per pull request). The end goal of DevOps is to see all environments and code handled in a way that requires a minimal amount of human interaction for management.


Terraform is created by HashiCorp and is billed as an IaC tool which supports all of the major players in cloud and infrastructure. It serves as an alternative to something like CloudFormation or Azure Resource Manager. Files are written in the HCL language and use code to represent the targeted infrastructure.

It can be installed from the Terraform downloads page.

Our application

For this, I am referencing a microservice that I wrote for a side project (ListApp) that returns the Feed of events relevant to a user. At this stage of development, this is nothing more than a hard coded list which gets displayed in the UI.

I have already created the Dockerfile which builds the Docker image that I will use when deploying this image. You will see this referenced in the HCL code later.

Our application will be deployed on Azure.  Reference HashiCorp’s documentation on their Azure provider here to get through the initial steps and get started.

Building the initial pipeline

The way I like to approach Terraform with a .NET Core application is to store my .tf file at the same level as my solution file (or whatever constitutes the root of the application), in a folder called terraform.

Azure DevOps makes it very easy to build pipelines which output Docker images and store them in a registry. But there is a trick to this process if you are going to use Terraform to deploy your code – publishing an artifact.


The reason you need to do this is that Azure DevOps operates on the notion of passing artifacts between pipelines and then operating on those artifacts (usually you build an artifact and then release it). When your artifact is a Docker container, you will not have an artifact per se; rather, the release pipeline often targets the tagged Docker image in a registry somewhere. But in this case, we need the build to ALSO output our Terraform contents so they can be executed in the Release pipeline. Adding this task will accomplish that.
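For reference, if you define the build in YAML rather than the designer, the equivalent step is roughly the following (the artifact name and path are assumptions based on the layout described above):

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/terraform'
    ArtifactName: 'terraform'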

For more information on the actual process of building DevOps pipelines, go here

Before we get into building the release, let's cover what the .tf file needs to look like. I posted an entry previously which describes the .tf file in detail and how you can use it, locally, to deploy a containerized NodeJS application to Azure.

Now let's talk through the changes needed to use it with Azure DevOps.

Backend State

State is a very important aspect of Terraform; it has to know if it created something previously so it knows what to expect when it finds that resource.  A great example is an Azure App Service. Without knowing this state, Terraform may try to create an Azure App Service with the same name as one which already exists, causing a failure.

By default, Terraform stores this state information in a .tfstate file which it references whenever plan and apply is run.  This situation changes when you run in DevOps since you will never have the .tfstate file – builds should always be run from clean environments.

This is where we introduce the concept of “backend state”, where Terraform stores its state in a central location that it can reference during the build.  The docs cover this reasonably well.

In the end, what this amounts to is creating a storage account on Azure in which to store this state information.  Here is mine for Feed service:
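The configuration was shown as a screenshot in the original; the backend block itself looks roughly like this (the names below are placeholders for my actual values):

terraform {
  backend "azurerm" {
    resource_group_name  = "feedservice-rg"
    storage_account_name = "feedservicestate"
    container_name       = "tfstate"
    key                  = "feedservice.terraform.tfstate"
  }
}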

This is relatively easy to understand: I am laying out what resource group, storage account, container, and blob key I want to use when storing the state.

What is missing here is access_key, and very intentionally. The docs lay this out quite nicely.

Basically, as is often the case, we want to avoid committing sensitive credentials to source control, lest they be discovered by others and give access where it was not intended.  We can pass the access_key value when we call init in our Release pipeline.

This is the full .tf file I am going to commit to source control which I will plan and apply in the Release pipeline.

Building the Release Pipeline

Returning to Azure DevOps we can now build our release pipeline.  It's simply a set of four steps:


Step 1: We install Terraform into the container the release pipeline is being executed in
Step 2: We call init, which installs plugins and configures our backend for state storage
Step 3: We plan the deployment, which allows Terraform to work out what changes are needed
Step 4: We apply the changes, which updates our infrastructure as we desire it

So let's talk specifics for each of these steps:

Step 2 – init


Notice the _FeedServiceCIBuild after the DefaultWorkingDirectory – this is the name of your build artifact as it comes out of the Build pipeline. You can find this on the designer screen for the Release pipeline.

We specify the -backend-config option to init, in this case providing a key/value pair for the access_key. I have hidden the actual value behind a pipeline variable. This will initialize Terraform to use my Azure Storage Account to store the state information.
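As a rough example, the init call ends up looking something like this (the pipeline variable name is an assumption; use whatever you named yours):

terraform init -backend-config="access_key=$(StorageAccountKey)"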

Step 3 – plan


Again, notice the use of _FeedServiceCIBuild as the root of where the terraform command will be executed.

We are also specifying the tag for the container created by the build pipeline. Reference the completed .tf file above to see how this is used. This is essential to updating our App Service to utilize the latest and greatest version of our code
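Assuming the .tf file exposes the image tag as an input variable (the variable name here is an assumption), the plan call looks something like:

terraform plan -var "image_tag=$(Build.BuildNumber)"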

Step 4 – apply


If this looks the same as the above, you are not going crazy. apply and plan often look the same.

One Important Note:
If you read other tutorials on using Terraform in CI, they will mention passing a flag to plan and apply to prevent the system from blocking for input. Often they will also recommend outputting a tfplan file for consumption by apply. With Azure DevOps, at least, you don't need to do this. The -auto-approve flag is automatically appended to these commands, which appears to be the new flavor for CI tools to use.

Testing things out

You should now be able to kick off builds (via CI or manual) which will build a container to hold the latest compiled source code. Once this is built a Release process can be kicked off (automatically or manually) to update the Azure App Service (or create it), to reference the new container.

And just like that, you have created a managed build and release that is not only automated but also contains the information for your App Service that would otherwise be stored transiently in the portal. Pretty cool.

Infrastructure as Code with Terraform

The concepts of Infrastructure as Code (IaC) are one of the main pillars to modern DevOps and Cloud Native Applications.  The general idea is, the software itself should dictate its infrastructure needs and should always be able to quickly and automatically deploy to existing and new environments.

This is important because most applications today tend to use not a single cloud service but many, oftentimes configured in varying ways depending on the environment. The risk is, if this configuration lives only in the cloud, then user error or a cloud provider problem can cause this valuable configuration and settings information to be lost. With IaC, a simple rerun of the release script is all that is needed to reprovision your services.

Additionally, doing this mitigates the “vault of knowledge” problem whereby a small group of persons understand how things are set up. If they depart or are otherwise unavailable during an outage the organization can be at risk. The configuration and settings information for infrastructure is as much a part of your application as any DLL or line of code, we need to treat it as such.

To show this in action, we will develop a simple NodeJS application that responds to Http Requests using ExpressJS, Containerize it, and then deploy it to Azure using Terraform.

Step 1: Build the application

When laying out the application, I always find it useful to create a separate directory for my infrastructure code files, in this case I will create a directory called terraform. I store my source files under a directory src.

For this simple application I will use ExpressJS and the default Hello World code from the ExpressJS documentation:

npm install express --save

Create a file index.js and paste in the following contents (taken from the ExpressJS Hello World example):

const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

We can run this locally using the following NPM command

npm start

Note, however, this script does not come predefined after npm init so you might have to define it yourself. In essence, it's the same as running node index.js at the command line.
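If the script is missing, the relevant portion of package.json is a one-liner:

{
  "scripts": {
    "start": "node index.js"
  }
}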

Step 2: Containerize

Containerization is not actually required for this but, let's be honest, if you are not using containers at this point you are only depriving yourself of easier, more consistent deployments; in my view it has become a question of when I do NOT use containers versus the default of using them.

Within our src directory we create a Dockerfile. Below is the contents of my Dockerfile which enable the application from above to be served via a container.

FROM node:jessie
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
ENTRYPOINT [ "npm", "start" ]

We start off by using the node:jessie base image (Jessie is the flavor of Debian Linux inside the container) – you can find additional base images on Docker Hub.

Next we set our directory within the container (where we will execute further commands) – in this case /app – note that you can call this whatever you like

Next we copy everything from the Dockerfile context directory (by default the directory where the Dockerfile lives) into the container. Note that for our example we are not creating a .dockerignore due to the simple nature of the app. If this were more complicated you would want to make sure the node_modules directory was not copied, lest it make your build time progressively longer.

We then run the npm install command which populates node_modules with our dependencies. Recall in the previous point, we do not want to copy node_modules over, this is for two reasons:

  1. Often we will have development environment specific NPM packages which we likely do not want on the container – the goal with the container is ALWAYS to be as small as possible
  2. In accordance with #1, copying from the file system is often slower (especially in the Cloud) than simply downloading things – we also want to make sure we only download what we need (see Point #1)

Next we use the EXPOSE command, which instructs the container to have port 3000 open and accepting traffic. If you look at the ExpressJS script, this is the port it listens on, so this is poking a hole in the container firewall so the server can receive requests.

Finally, every Dockerfile will end with the ENTRYPOINT command. With the source in place, this is the command that gets run when the Docker image is started as a container. For web servers like this, it should be a command that blocks the program from exiting because, when the program exits, the container will shut down as well.

Step 3: Publish to a Registry

When we build the Dockerfile we create a Docker image. An image, by itself, is useless as it is merely a template for subsequent containers [we haven't run ENTRYPOINT yet]. Images are served from a Container Registry; this is where they live until called on to become a container (an instance of execution).

Now, generally, it's a bad idea to use a laptop to run any sort of production services (these days the same is true for development as well), so keeping your images in a local registry is not a good idea. Fortunately, all of the major cloud providers (and others) provide registries to store your images in:

  • Azure Container Registry (Microsoft Azure)
  • Elastic Container Registry (Amazon)
  • Docker Hub (Docker)
  • Container Registry (Google)

You can create the cloud registries above within the corresponding provider and publish your Docker images to them.

Docker images being published to a registry such as this opens them up to being used, at scale, by other services including Kubernetes (though you can also host the registry itself in Kubernetes, but we won't get into that here).

The command to publish is actually more of a push; with a placeholder registry name it looks like this:

docker tag nginx myregistry.azurecr.io/samples/nginx

docker push myregistry.azurecr.io/samples/nginx

With this, we have our image in a centralized registry and we can pull it into App Service, AKS, or whatever.

Step 4: Understand Terraform

At the core of IaC is the idea of using code to provision infrastructure into, normally, Cloud providers. Both Azure and Amazon offer tools to automatically provision infrastructure based on a definition: CloudFormation (Amazon) and Azure Resource Manager (ARM) (Azure).

Terraform, by HashiCorp, is a third-party alternative which can work with either and has gained immense popularity thanks to its ease of implementation. It can be downloaded from HashiCorp's site.

There are plenty of resources around the web and on HashiCorp's site to explain how Terraform works at a conceptual level and how it interacts with each supported provider. Here are the basics:

  • We define a provider block that indicates the provider plugin we will use to create resources; this will be specific to our target provider
  • The provider block then governs the additional block types we can define in our HCL (HashiCorp Configuration Language)
  • We define resource blocks to indicate we want to create something
  • We define data blocks to indicate that we wish to query for certain values from existing resources

The distinction between resource and data is important as some elements are ONLY available as one or the other. One such example is a Container Registry. When you think about it, this makes sense. While we will certainly want to audit and deploy many infrastructure components with new releases, the container registry is not such a component. More likely, we want to be able to read from this component and use its data points in configuring other components, such as an Azure App Service (we will see this later).

To learn more about Terraform (we will start covering syntax in the next step) I would advise reading through the HashiCorp doc site for Azure; it is very thorough and pretty easy to make sense of.

Step 5: Deploy with Terraform

Terraform definition files usually end with the .tf extension. I usually advise creating them in a separate folder, if only to keep them separate from your application code.

Let’s start with a basic script which creates an Azure Resource Group

provider "azurerm" {
  # the version constraint and subscription id values here are placeholders
  version         = "~> 1.28"
  subscription_id = "<your subscription id>"
}

resource "azurerm_resource_group" "test" {
  name     = "sample-resources"
  location = "centralus"
}

The first block defines the provider we will use (Azure in this case) and the target version we want to use of that provider. I also supply a Subscription Id which enables me to target a personal Azure subscription.

Open a command line (yeah, no GUI that I am aware of) and cd to the directory that holds your .tf file, execute the following command:

terraform init

Assuming the terraform program is in your PATH, this should get things going; you will see it download the provider and provision the .terraform directory, which holds the binary for the provider plugin (and other plugins you choose to download). You only need to run the init command one time. Now you are ready to create things.

As with any proper IaC tool, Terraform lays out what it will do before it does it and asks for user confirmation. This is known as the plan step in Terraform and we execute the following:

terraform plan

This will analyze the .tf file and (by default) output what it intends to do to the console; you can also provide the appropriate command line argument here and have the plan written to a file.

The goal of this step is to give you (and your team) a chance to review what Terraform will create, modify, and destroy. Very important information.

The final step is to apply the changes, which is done (you guessed it) using the apply command:

terraform apply

By default, this command will also output the contents of plan and you will need to confirm the changes.  After doing so, Terraform will use its information to create our Resource Group in Azure.

Go check out Azure and you should find your new Resource Group created in the CentralUS region (if you used the above code block).  That is pretty cool. In our next step we will take this further and deploy our application.

Step 6: Really Deploy with Terraform

Using Terraform does not excuse you from knowing how Azure works or what you need to provision to support certain resources; in fact, this knowledge becomes even more critical.  For our purposes we created a simple API that responds with some text to any request. For that we will need an App Service backed by a container but, before that, we need an App Service Plan – we can create this with Terraform:

resource "azurerm_app_service_plan" "test" {
  # name and SKU values here are placeholders
  name                = "sample-appservice-plan"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Basic"
    size = "B1"
  }
}

Here we see one of the many advantages to defining things this way; we can reference back to previous blocks (remember what we created earlier). As when writing code, we want to centralize the definition of things, where appropriate.

This basically creates a simple App Service Plan that uses the bare basics; your SKU needs may vary. Since we are using containers we could also use Windows here as well, but Linux just feels better and is more readily designed for supporting containers, at least insofar as I have found.

Running apply at this point will add the App Service Plan to your Resource Group. Next we need to get some information that will enable us to reference the Docker container we published previously.

data "azurerm_container_registry" "test" {
  # placeholder values – point these at your existing registry
  name                = "sampleRegistry"
  resource_group_name = "shared-resources"
}

Here we see an example of a data node, which is a read action – you are pulling in information about an EXISTING resource, an Azure Container Registry in this case. Note, this does NOT have to live in the same Resource Group as everything else; it is a common approach for services like this, which transcend environments, to live in a separate group.

Ok, now we come to it: we are going to define the App Service itself.  Before I lay this out, I want to give a shout out to PumpingCode, whose post inspired this approach with App Services.

Here is the block: (too long for the blockquote)
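The block was too long to embed inline in the original post; a sketch of what it can look like is below (resource names and the image are placeholders; the app_settings keys are the ones App Service expects for pulling from a private registry):

resource "azurerm_app_service" "test" {
  name                = "sample-feed-api"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  app_service_plan_id = "${azurerm_app_service_plan.test.id}"

  app_settings = {
    "WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
    "DOCKER_REGISTRY_SERVER_URL"          = "https://${data.azurerm_container_registry.test.login_server}"
    "DOCKER_REGISTRY_SERVER_USERNAME"     = "${data.azurerm_container_registry.test.admin_username}"
    "DOCKER_REGISTRY_SERVER_PASSWORD"     = "${data.azurerm_container_registry.test.admin_password}"
  }

  site_config {
    linux_fx_version = "DOCKER|${data.azurerm_container_registry.test.login_server}/feedservice:latest"
    always_on        = "true"
  }

  identity {
    type = "SystemAssigned"
  }
}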

There is a lot going on here so let’s walk through it.  You can see that, as with the App Service Plan definition, we can reference back to other resources to get values such as the App Service Plan Id. Resources allow you to not just create them but reference their properties (Terraform will ensure things are created in the proper order).

The app_settings block lets us pass values that Azure would otherwise add for us when we configure container support. Notice here, though, that we reference the Container Registry data block we created earlier.  This makes it a snap to get the critical values we will need to allow App Service access into the Container Registry.

The last two blocks I got from PumpingCode – I know what linux_fx_version does, though I had not previously seen it used with App Services; the same goes for identity.

Step 7: But does it work?

Always the ultimate question. Let’s try it.

  1. Make a change and build your docker image. Tag it and push it to your Azure Container registry – remember the tag you gave it
    • One tip here: You might have to change the port being exposed to 80 since App Service (I think) blocks all other ports
  2. Modify the .tf file so the appropriate image repository name and tag is represented for linux_fx_version. If you want some assurances you have the right values, you can log into the Azure Portal and check out your registry
  3. Run terraform apply – verify and accept the changes
  4. Once complete, try to access your App Service (I am assuming you changed the original and went with port 80)
  5. It might take some time but, if it worked, you should see your updated message being returned from the backend


The main point with IaC is to understand that modern applications are more than just their source code, especially when going to the Cloud. Having your infrastructure predefined can aid in automatic recovery from problems, enable better auditing of services, and truly represent your application.

In fact, IaC is the centerpiece to tools like Kubernetes as it allows it to maintain a minimum ideal state via YAML definitions for the abstract infrastructure. Pretty cool stuff if I do say so myself.

Of course, this here is all manual; where this gets really powerful is when you bake it into a CD pipeline. That is a topic for another post, however 🙂

Auth0 and React + Redux

Recently, I have been experimenting with Auth0 inside a side project I have been working on. It's quite interesting and, for the most part, easy to use. However, I have found the Quick Starts for ReactJS lacking, especially with respect to React implementations that utilize Redux. So I thought I would take a crack at it.

Setting up the Application

Head over to Auth0, create your free account, and set up your application. I will spare you a rehashing of this as the Auth0 tutorials do it far better justice than I would. The important thing is to come away with the following bits of information:

  • domain
  • audience
  • clientID
  • redirectUri
  • scope
  • responseType

Simply choose to Create an Application once logged into Auth0 and you can pick the Quickstart of your choice. For readers here, it's probably going to be ReactJS + Redux.


I am going to assume you have created your Auth0 application and have already configured the various bits of Redux into your ReactJs application (or at least expect that you know how to).

I'll also assume you are going to use routing to some degree within your ReactJS app; that plays a vital role in ensuring the authentication is done properly.

Let’s get started

If you read through the Auth0 getting started guide you will find it tells you to create a file which holds the bits of information above. Do that, but instead expose it as a modular function.  Here is my code:
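My actual file was shown as a screenshot; here is a rough sketch of the idea using the auth0-js package (the configuration values are placeholders, and login/processResult are the wrapper names referenced below):

// authClient.js - a sketch, not the original file
import auth0 from 'auth0-js';

let webauth = null;

export const getAuthClient = () => {
  // only ever create the WebAuth instance once
  if (!webauth) {
    webauth = new auth0.WebAuth({
      domain: 'your-tenant.auth0.com',
      clientID: 'your-client-id',
      redirectUri: 'http://localhost:3000/callback',
      audience: 'your-api-audience',
      responseType: 'token id_token',
      scope: 'openid profile'
    });
  }

  return {
    login: () => webauth.authorize(),
    processResult: (callback) => webauth.parseHash(callback)
  };
};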


The instance variable webauth is an instantiation of auth0.WebAuth which will take our data and allow us to interact with the Auth0 APIs.  You notice that I only ever create the client once; this is intentional. The one lesson I learned while doing this was to ensure you don't create things repeatedly, as it tends to screw with Redux.

By taking this approach I found it much easier to reference the Auth client.


You need to initiate the login process by calling login on our Auth client. This will initiate the authentication process with Auth0.  The important thing to understand is what happens at the end of that process.

One of the values we collected and added to our WebAuth client was a callback (the redirectUri). This callback is where the browser will go once authentication is complete. Within this route you need to be able to parse the result (using our processResult method).  This gets tricky with Redux due to state changes. You will find the process succeeding and then failing, often putting your app into a weird state.

To mitigate this, you want to look at your routing.   Here is an example of my routing:
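The routing file was also shown as a screenshot; the key idea, sketched with react-router (component names are placeholders), is that the /callback route is only registered while the user is not yet logged in:

<Switch>
  {!isLoggedIn && <Route path="/callback" component={CallbackPage} />}
  <Route path="/home" component={HomePage} />
  <Route exact path="/" component={LandingPage} />
</Switch>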


You can see that we want to intentionally not render the /callback route if the user is logged in. The flow here is interesting. Let’s explain.

Let’s assume you click a button to start the login process. Your application, when complete, will redirect to your callback URL, /callback in our case. Here is what this page looks like for me:
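Since the page itself was an image, here is a sketch of the shape of that component (saveLogin and push are assumed to be mapped in via connect and connected-react-router, as described below):

class CallbackPage extends React.Component {
  componentDidMount() {
    const authClient = getAuthClient();
    authClient.processResult((err, result) => {
      if (!err) {
        // hand the result to Redux; a saga persists it to Local Storage
        this.props.saveLogin(result);
        // navigate to the authenticated landing page
        this.props.push('/home');
      }
    });
  }

  render() {
    return <div>Logging you in...</div>;
  }
}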


As soon as the component mounts we grab our authClient and call processResult. If it works, we initiate our saveLogin action. In my case, this will end up sending the login result into a Redux Saga which saves it into Local Storage. I then use the push method off Connected Router to go to my authenticated landing page.

However, doing this will cause parts of the application to check for changes. This, I found, causes a second call to the Auth0 API, which fails. This can make it seem like the login is failing when in fact it is working. The answer here is to eliminate the chance that the callback remounts; for this reason I used isLoggedIn in my routing to remove that routing rule once the user is authenticated.


The important thing to remember with Redux is to limit the number of potential refresh vectors that can cause code to execute again. In this case, we can use selective routing features to ensure that certain Route rules are only published when they are appropriate.

Deploying to Kubernetes with Azure DevOps: A first pass

Kubernetes is all the talk around town these days as the next generation deployment platform for containerized applications. It has a lot of benefits and the full support of the major cloud providers (Google, Amazon, Microsoft) and a whole movement behind it represented by the Cloud Native Computing Foundation (CNCF).

My interest is mostly in how it can be used with respect to DevOps and deployments. Having tinkered considerably with minikube locally and EKS via labs, I decided to dive into setting up a deployment pipeline with AKS (Azure Kubernetes Service). I found it rather easy, to be honest. I might finally be getting to the point where I can wrap my head around this.

The Setup

So, the prerequisite to this is an application that builds using (preferably) a Docker container. This is the Dockerfile I am using (generated via the Docker extension in VSCode):

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApp.csproj", "./"]
RUN dotnet restore "./WebApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApp.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "WebApp.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApp.dll"]

What needs to be understood is, when executed, this will produce a Docker image that can be stored in a registry and referenced later via a service supporting containers (Azure App Service, Fargate, etc.). For our purposes we will create a DevOps Build Pipeline that creates this image and stores it in Azure Container Registry.

The main goal of this section is that you have an application (does not have to be in .NET Core, could be in Python) that builds to a container.

Create the Build Pipeline

When you are dealing with containerized applications, your build pipeline's goal (along with integration checking and testing) is to produce an image. The image is the static format of a container. When we start an image in Docker, that running instance is referred to as the container.

Azure DevOps makes this relatively easy but, before we can do that, we need to create a registry, which is a place we can store images. It is from here that Kubernetes will pull them to create the containers that allow the application to be executed.

Easy enough to create an Azure Container Registry through the Azure Portal, just remember to Enable admin mode.

With this created we now have a place to output the builds from our automated processes. The way the registry works is you create a repository. This is really a collection of similar images. You then tag these images to indicate a version (for development I often use the build number and the Git commit hash, for releases I use a semantic version identifier).

To start off with, you need to define the source of your pipeline, this is going to be your source repository and appropriate branch, I am using develop in accordance with the gitflow approach to source management.

After you select your source you get to pick how you want to start your pipeline. Conveniently, Microsoft has a Docker container template which will get you 90% of what you will need.

You need to check the following fields for each of the two tasks that are created as part of this template:

  • Build an image
    • Docker File – should be the location of your Dockerfile. Default works well here, but I like to be specific (make sure you update Use Default Build Context if you take something other than the default)
    • Image Name – this is the image name with tag that you want to build. More simply, this what you will save to the registry.  Here is an example of what I used for a side project I have:
      • feedservice-dev:$(Build.BuildNumber)-$(Build.SourceVersion)
        • feedservice-dev – this is the name of the repository in ACR that the image will live under (the repository gets created for you)
        • $(Build.BuildNumber)-$(Build.SourceVersion) – this is the tag which will differentiate this image from others. Here I am using build variables to denote the tag.
  • Push an image
    • Image Name
      • This needs to match the image name specified in Build an image

On Using :latest
Within Docker there is a convention of :latest to designate the latest image created. I tend to not use this convention very often for production scenarios simply because I favor specificity. If you choose to use it, you just need to publish two images instead of one to the ACR (overwriting :latest each time).

Make sure to select Triggers and turn on Continuous Integration so we get builds that kick off automatically.

If all is working correctly, you should see the build kick off almost instantly after you push to the remote repo for the target branch. Once that build completes you should be able to see the new image in your registry.

Setting up Kubernetes

Azure actually makes this very easy. But first, a quick primer on Kubernetes.

Kubernetes is a container orchestration framework with a focus on maintaining the ideal state. It is composed of nodes which house its various resources. There are three core ones that you should be aware of:

  • Pod – equivalent to a VM, houses a container (or containers). A normal deployment will feature MANY pods depending on the level of scale being targeted
  • Deployment – A group of Pods with conditions around ideal state. At the simplest level, we can specify the minimum number of Pods that represent a deployment (generally in production you want no fewer than three here)
  • Service – A resource that enables access to pods either internally or externally. Often serves as a LoadBalancer in cloud based deployments

More information:

Beneath the pod concept, everything in Kubernetes resides on nodes. These nodes reside within a cluster. You can have as many clusters as you like and they can reside in different environments, both on-premise and in the cloud; you can even federate between them enabling more flexible deployment scenarios.

For the purposes of this tutorial we will create a hosted Kubernetes cluster on Microsoft Azure using their Azure Kubernetes Service (AKS).

It is easy enough, simply create an instance of the Kubernetes Service. Here are some important fields to be aware of as you create the Cluster:

  • Project Details
    • Resource group: I like to specify a separate one for the cluster however, I have noticed that AKS will still create other Resource groups for other components supporting the AKS instance
    • Kubernetes cluster name: Anything you want, no spaces
    • DNS prefix name: Anything you want
  • Authentication
    • Create a new service principal
    • Enable RBAC: Yes
  • Networking
    • HTTP Application Routing: Yes

Take the default for everything else

This process will take a while so go grab a cup of coffee or be social while it is happening.

Once it completes, Azure will provide you with some commands you can run through the Azure CLI (I recommend downloading this if you do not have it installed)

Kubernetes can be administered from ANY machine via kubectl. Supporting this is the notion of context; you can change your kubectl context to any Kubernetes deployment. Azure makes this easy with the following command:

sudo az aks install-cli

This will install kubectl if you don't have it already; the get-credentials command below will set your context to AKS.

Kubernetes features a nice web dashboard to give you insight into what is happening within a cluster. You need to run a couple commands to get this to work:

az aks get-credentials --resource-group <YourRG> --name <YourClusterName>

Followed by

az aks browse --resource-group <YourRG> --name <YourClusterName>

This will launch your default web browser with the Kubernetes dashboard proxied to your localhost. I have found that sometimes when I start this I get a bunch of warnings around configmaps and other errors. This stems from a permissions problem that requires a role binding be created. Use this command to fix things:

kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

After this runs, rerun the browse command from above and the errors should be gone.

The Kubernetes dashboard offers a wealth of information that can assist with analyzing and debugging problems with deployments. It is one of the best tools at your disposal when dealing with cluster management.

If everything is green you are good to move on to performing the initial deployment.

Perform the initial deployment

As I said above, the easiest way to manage your Kubernetes cluster is via local usage of kubectl with the context pointing at AKS. This will allow you to directly apply changes and perform the initial deployment; note this assumes you have your container image in ACR.

Here is the YAML file we are going to use:
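The YAML itself was embedded separately; a sketch of its shape, using the names referenced below (the registry host and tag are placeholders), looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: feed-api-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: feed-api
  template:
    metadata:
      labels:
        app: feed-api
    spec:
      containers:
        - name: giftlist-feedapi
          image: <your-acr-name>.azurecr.io/feedservice-dev:<tag>
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: feed-api-service
spec:
  type: LoadBalancer
  selector:
    app: feed-api
  ports:
    - port: 80
      targetPort: 80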

You will notice that there are two discrete sections defined: Deployment and Service. The Deployment here indicates that the ideal state for this “application” has 2 Pods at minimum (indicated by replicas). Each Pod should host a container named giftlist-feedapi which is an instance of a feedservice-dev image; here I manually specified a certain image to start from. I am not sure if this is best practice, but it is just what I did through testing.

In the second section, we define the Service which acts as a LoadBalancer to grant external access to the Pods. I should point out that LoadBalancer really only works in the cloud since you won't get a load balancer locally by default; you need to use NodePort in that case.

Before we can apply our configuration, however, we need to give AKS the ability to talk to ACR so it can pull the images we stored there.  We do this by running the following sequence of commands:

AKS_RESOURCE_GROUP=<Your AKS Resource Group>
AKS_CLUSTER_NAME=<Your Cluster Name>
ACR_RESOURCE_GROUP=<Resource Group with Container Registry>
ACR_NAME=<Your ACR Name>

CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID

The use of environment variables here is done to ease the various commands involved. With this in place we are now ready to update our Cluster to support the various resources that will be needed to host our application.

We apply this configuration by using the apply subcommand of kubectl like so:

kubectl apply -f development.yaml

This command will return almost instantly. If you have the Dashboard still open you can watch everything happen. Once all of the deployments are fulfilled you can flip over to the Services section to see your service being created. This process tends to take a bit of time but should end with a public IP address being available that redirect to a Pod running your containerized service.

Alternatively you can use

kubectl get deployments


kubectl get services

To check on the status of the resources that are being deployed. I tend to favor the Dashboard since it will also expose the logs on the web, you can still get the logs, but the web makes it easier.

If everything goes right, your Service will eventually yield an External-IP which you can access from a web browser.

Adding a Release Pipeline

Ok, so full disclosure, this part I am still working through as I am not 100% sure if this is the way it should be done.  But I figured it's a good place to start.

First step, create a release pipeline in Azure DevOps and make sure the Continuous Integration trigger is set (click the lightning icon as shown below).


This artifact should be configured as the output from the build you created previously, shown here is mine:


With this configuration and enabling the CI trigger we can ensure that ANY build that is processed by our build pipeline also correlates to a release as well.

Next for Stages we will have only one. Make sure the Tasks are configured as such

  • Deployment Process
    • Name: Anything you want but, make it descriptive
  • Agent Job
    • Display name: Anything you want
    • Agent Pool: Ubuntu

Add a single Task of type Deploy to Kubernetes and configure it to run the set command with the following arguments (assuming you are following along):

image deployment.v1.apps/feed-api-deployment giftlist-feedapi=<your-acr-name>.azurecr.io/feedservice-dev:$(Build.BuildNumber)-$(Build.SourceVersion) --record

Please do not copy the above verbatim, there is some nuance here

Ok, so when you run apply as we did earlier you actually create entries in the Kubernetes control plane. We can make changes to these values using the set command, which is what we are doing here.

Basically we are using set to change the image being used by our Deployment. This will cause Kubernetes to begin orchestrating an update so, if you run the get deployments command enough, you will see Kubernetes spin up a third Pod while it replaces each existing Pod so as to maintain the ideal state.

Now, if everything works, you should be able to commit a change, see it automatically build which will generate a new image that gets saved to your ACR. Next, it kicks off a Release pipeline which updates your Kubernetes configuration to use the newly created image.

Pretty slick eh?

Closing Thoughts

Kubernetes is nothing more than a new way to deploy code for applications that use containers, which is becoming increasingly common across development teams. The use of K8s is a response to the problem of managing hundreds if not thousands of containers. In this example, I showed a very simple example of applying this process with Azure DevOps but there are many other topics I did not cover such as Helm charting.

The goal though is to understand that Kubernetes is a solid option but is NOT the only option for new applications. Like all things, it has its uses. Though, different from other things, Kubernetes seeks to provide a new platform that houses all environments for your applications so you can get as close to production as possible, if not all the way.

I hope to give more insight into this as I continue to look deeper into ways to use this though the creation of side projects.

Creating an Automated Blazor Deployment in AWS

Blazor is a framework I have written about previously that enables C# developers to build SPA applications similar to Angular and React in C# using WebAssembly. Principally, this showcases the ability for WebAssembly to open the web up to other languages beyond JavaScript without any plugins or extensions; WebAssembly is already widely supported in all major browsers.

In this article, I would like to discuss how we can deploy the output of Blazor to S3 on AWS and host it via a static website; this is a very common pattern for hosting SPAs due to decreased cost and efficient scale. For this process we will use two Amazon services: CodeBuild and CodePipeline.

Creating the Blazor Application

I won't go into this step other than to say you can use the default app if you so desire, though I recommend the standalone template and not the one that features a WebAPI backend. Our goal here is the deployment of static web assets; an API would not be included in that categorization.

Steps are here:

Once you have the source, drop it into GitHub (we will reference this later) or you can use CodeCommit, which is AWS's hosted Git repository service. In my experience, there is not a significant advantage to using CodeCommit over GitHub.

Setup CodeBuild

In your AWS Console, look under Developer Tools for CodeBuild

Click Create Build Project – this will launch the wizard

Here are the relevant fields and their associated values:

  • Source
    • Pick GitHub and your repository (you will need to authorize access if you haven’t done so before)
  • Environment – Pick Linux as the OS
    • For Role: name it something logical, you will need to modify it later
  • Buildspec
    • Select Use Buildspec file – AWS does not support dragging and dropping for the creation of its build process. Using this option we will need to create a buildspec.yml file at the root of your application (you can add it now and we will cover the syntax in the next section)
  • Artifacts
    • No Artifacts – this seems weird but, we are going to have CodeBuild do the deploy since CodeDeploy does NOT support deploying a SPA to S3

Click Create Build Project to complete the creation.  You can run it if you want to verify it will pull the source, but the build is going to fail as we don't have a valid buildspec.yml file yet. Let's create that next.

Creating the BuildSpec

One of the areas where I knock AWS is its lack of a good visual way to build DevOps pipelines. While it has gotten better, developers are still left to manually define their build processes via YAML or JSON. This is in contrast to Microsoft, which offers a more visual drag and drop designer.

The first thing is to be aware of the syntax for these files, Amazon Docs are here:

For our application the main step we need to be aware of is the build step. This is a simple test application and so it requires only a simple standard build operation. Here is my recommendation:

  - dotnet restore
  - dotnet publish --no-restore -c Release

All this does is leverage the dotnet command line tool (which will be installed on the container hosting our build) to restore all NuGet packages and then publish the application. This will end up creating some folders under /bin/Release/netstandard2.0 which will be involved in the post_build step.

The next part is where I can only shake my head at AWS. Since CodeDeploy does not support S3 deploys like this, we need to invoke the aws command line tool to copy the relevant output artifacts into our S3 bucket. Obviously, before we do that we need to create the S3 bucket that will host the website.

Creating the S3 Bucket to host your site

Amazon allows you to serve static web content from S3 buckets for a fraction of the cost of using other services. Best of all: no setup, automatic scalability, and the same 11 9's durability that objects get from S3. It is no wonder this has become the de facto way to serve SPAs. Since the output of a Blazor build is static web content we can (and will) use S3 to host the application.

Under Storage pick S3. You need to create a bucket. Take the defaults for permissions; I will give you the Bucket Policy JSON at the end that allows objects to be served publicly.

Once the bucket is created select it and access the Properties tab. Select Static Web Site Hosting. Be sure to fill in the Default Document with index.html. I am not sure if this is necessary but, I always do it just to be safe. You should also take note of the endpoint as this is where you will access your website.

Now select Permissions and select Bucket Policy. You can use the Policy Generator here if you want but, this is the general JSON that comprises the appropriate policy to enable public access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YourBucketName/*"
    }
  ]
}

Note: This is a very simple policy created for this example. In a real production setting, you will want to lock down the policy as much as possible to prevent nefarious access.

When you save this you should see the orange Public tag appear in the Permissions tab.

With this, your bucket is now accessible. Our next step is to get our content in there.

Updating the Buildspec to copy to S3

As I said earlier, the normal tool used for deployments, AWS CodeDeploy, does not, as of this writing, support static web asset deployment to S3 – an opportunity missed, in my opinion. This being the case, we can leverage the aws CLI to copy to our bucket. There are a number of ways to organize this; here is how I did it.

I added a post_build step to the build spec which consolidated everything I was going to copy into a single folder:

  - mv ./WeatherLookup/bin/Release/netstandard2.0/dist ./artifact
  - cp -R ./WeatherLookup/bin/Release/netstandard2.0/publish/wwwroot/css ./artifact/

You don't have to do it this way; I just find it easier and more sensible than targeting the files and folders individually with the S3 copy command.

Next, we need to perform the copy to S3. I chose to use the finally substep within post_build to perform this operation:

  - aws s3 cp ./artifact s3://YourBucketName --recursive

By using --recursive, the CLI will handle copying ONLY the contents of artifact into our bucket. We do not want the root folder since that would interfere with our pathing when users access the website.
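Putting the pieces together, the whole buildspec.yml ends up looking roughly like this (folder names are the ones from my project; adjust them to match yours):

version: 0.2

phases:
  build:
    commands:
      - dotnet restore
      - dotnet publish --no-restore -c Release
  post_build:
    commands:
      - mv ./WeatherLookup/bin/Release/netstandard2.0/dist ./artifact
      - cp -R ./WeatherLookup/bin/Release/netstandard2.0/publish/wwwroot/css ./artifact/
    finally:
      - aws s3 cp ./artifact s3://YourBucketName --recursive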

If you run your build now it will get farther, but it will break on the final step. The reason is, the role we defined for CodeBuild does NOT have the appropriate permissions to communicate with S3. So we have to update that before things will work.

Updating the Permissions

Within your AWS Console, access IAM under Security, Identity, and Compliance. From this menu access Roles. Look for your role; we will need to attach a policy to it that gives it the ability to perform PutObject against our S3 bucket. There are two ways to do this:

Option 1:
You can apply the AmazonS3FullAccess policy, which will grant the role full access to all S3 buckets in your account. I do NOT recommend this for anything outside a simple test case. The reason: it is never a good idea to give this sort of access to a role as it could be abused.

Option 2:
You can create a custom policy that provides the permissions specifically needed by this role. This is what I choose to do and is what I recommend others do to get into good habits.

For this demonstration we are going to use Option 2. Select Policies from the IAM left hand navigation menu. Select Create Policy. For most cases I would recommend the Visual Editor as it will greatly assist in creating policy documents. For this, I will give you the JSON I used:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::YourBucketName/*"
      ]
    }
  ]
}

This policy grants the ability for the executor to access our S3 bucket and use the PutObject command.  Click Save after Reviewing the Policy.

Go back into roles, select the role associated with your CodeBuild project and Attach the Policy. When you rerun the build everything should work. Once you get a Completed status, you can browse to your S3 url and see your site.

Automating the Build

Our build works now but, there is a problem: it has to be kicked off manually. Ideally for any sort of process such as this, whether around integration or deployment we need the action to start automatically.  This is where AWS CodePipeline comes into play.

CodePipeline wraps a source provider, build agent, and deploy agent so that it can operate as an automated pipeline, hence the name. We are going to do the same

From Developer Tools select CodePipeline and on the ensuing menu select Create Pipeline.

On the ensuing page fill in the configuration options, there are two sections of particular note:

  • Service Role – this is the role that is assumed by the Pipeline as it reaches into other services. Notably, it is used to communicate with the various APIs of the supporting services.
    • Provide a service role here, we will not be modifying it at a later step
  • Artifact Store – Pipelines often deal with artifacts that come out of the various steps. Here we specify where those get stored. S3 is a great location. Keep in mind that for our example we will have no artifacts.

Once you press Next you are asked to configure the Pipeline source. Here you will want to specify our GitHub repository (you will need to connect again, but not reauth). This is to allow CodePipeline to register the GitHub webhooks that will be used to tell the Pipeline when a PUSH occurs.

Next comes to Build Provider, select the CodeBuild instance we created earlier.

Next comes Deploy, here we will press Skip and not define a deploy step. For something that would deploy to Lambda, ELB, EC2, ECS, or something like that you would need to select your deploy project. As we stated, we cannot use CodeDeploy with an SPA to S3 deployment.

The final step is to review and initiate the creation of the pipeline. The process is pretty quick. Once complete, select your build from the list. Amazon has done a great job making the pipeline visualization screen look much more appealing.

By convention, CodePipeline will run an initial build. If your CodeBuild project worked before, this should complete in a few minutes.

To test the pipeline, make a change to your local repo and push the change to the remote repo. If all is correct, you will see your Pipeline begin executing almost immediately. Once it completes, refresh your S3 website URL and your change should be visible (remember to check your cache if you don't see it).

Congrats, you have a working Blazor web app deployment in AWS.

Closing Thoughts

S3 is an ideal place to host a static website like an SPA where API requests are used to get data for the execution. The biggest win here is cost, which is going to be substantially less than something like EC2 or ELB.

One of the considerations I did not cover above is environmentalization of this process, we are always pushing the code to the same bucket. Normally we would have different buckets (perhaps across different Amazon accounts) that hold the specific version of the web code for that environment. This is when you might need to use CodeDeploy to deploy the artifact from CodeBuild to a Lambda to copy the contents into the bucket serving the content.

My goal here was to take a very simple deployment and determine how Amazon's capabilities compare to those of Azure DevOps. Needless to say, I found Amazon wanting; the DevOps aspect is certainly an area where Microsoft has the advantage. Between not being able to target S3 with their deploy tool (and thus resorting to a copy from the build step) and not supporting an easy visual way to build the sequence of build steps, more work is needed here.