Heading to New Signature

I decided to leave my post at West Monroe after nearly 6 years in their service. My new position will be DevOps Consultant at New Signature, where I will be tasked with helping clients adopt a cloud-first and DevOps-centric mentality. I am absolutely ecstatic for my new role.

That said, leaving WMP is bittersweet. I am excited for the new challenge, but I am also cognizant of just what a great place to work it is. I owe them for helping me recover my career after I went through some hard times. I have rarely told this story, but after working at Centare and having things not pan out at all how I thought they would, I questioned staying in consulting. Combined with a lot of personal pain, I was in a rough spot when they offered me a Senior Consultant position in 2014.

Along with their offer, I had a mobile development company make an offer as well, so it really came down to what I wanted. Luckily my girlfriend, now wife, helped me see clearly and I decided to take the plunge into WMP, a company I thought was going to be a challenge to work for due to its perceived corporate nature. But I did not find that “corporate culture”; instead I found a home. I found great people who were smart and who pushed me, as I pushed them, to be the best we could be.

Over the next 6 years I went through what I can only describe as a career renaissance. Thanks to WMP I regained my vigor in the community and achieved the status of Microsoft MVP (if only for a few years), and I reignited my passion for backend and DevOps, something I now champion heavily and which led me to the job at New Signature. But more than anything, the people I met are something I can never forget, and they are what made leaving so hard.

As I turn to the future, I am nervous. I will be taking my family to Atlanta to build a better life for my son and finally put down roots. This will be a challenge, but I have already been greeted by wonderful people at New Signature and I am very excited for the challenges this new post will offer me.

Big thanks to everyone at West Monroe and I look forward to the day our paths cross again.

Setting up a Microservices Example on AKS

Kubernetes is a platform that abstracts away the litany of operational tasks for applications, automating much of that work and letting you declare application needs via YAML files. It is ideal for microservice deployments. In this post, I will walk through creating a simple deployment using Azure AKS, Microsoft's managed Kubernetes offering.

Create the Cluster

In your Azure Portal (you can do this from the az command line as well) search for kubernetes and select ‘Kubernetes Service’. Creating the cluster is very easy; just follow the steps:

  • Take all of the defaults (you can adjust the number of nodes, but I will show you how to cut cost for this)
  • You want to be using VM Scale Sets (this is a group of VMs that comprise the nodes in your cluster)
  • Make sure RBAC is enabled in the Authentication section of the setup
  • Change the HTTP application routing flag to Yes
  • It is up to you if you want to link your service into App Insights

Full tutorial here: https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough

Cluster creation takes time. Go grab a coffee or a pop tart.

Once complete you will notice several new Resource Groups have been created. The one you specified contains the Kubernetes service itself; I consider this the main resource group that I will deploy other services into – the others support the networking needed by the Kubernetes service.

I want to draw your attention to the resource group whose name starts with MC (or at least mine does; it will include the region you deployed to). Within this resource group you will find a VM scale set. Assuming you are using this cluster for development, you can shut off the VMs within this scale set to save on cost. Just a word to the wise.

To see the Cluster in action, proxy the dashboard: https://docs.microsoft.com/en-us/azure/aks/kubernetes-dashboard

Install and Configure kubectl

This post is not an intro to or setup of Kubernetes per se, so I assume you already have the kubectl tool installed locally. If not: https://kubernetes.io/docs/tasks/tools/install-kubectl

Without going too deep into it, kubectl connects to a Kubernetes cluster via a context. You can see the current context with this command:

kubectl config current-context

This will show you which Kubernetes cluster your kubectl instance is currently configured to communicate with. You can use the command line to see all available contexts or read the ~/.kube/config file (Linux) to see everything.
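
For example, these standard kubectl commands list the available contexts and switch between them:

# list every context kubectl knows about
kubectl config get-contexts

# switch the active context
kubectl config use-context <context name>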

For AKS, you will need to update kubectl to point at your new Kubernetes service as the context. This is very easy.

az aks get-credentials -n <your service name> -g <your resource group name>

Executing this command will create the context information locally and set your default context to your AKS cluster.

If you don't have the Azure command line tools, I highly recommend downloading them (https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).

Deploy our Microservices

Our example will have three microservices – all of which are simple and contrived to be used to play with our use cases. The code is here: https://github.com/xximjasonxx/MicroserviceExample

Kubernetes runs everything as containers, so before we can start talking about our services we need a place to store the Docker images so Kubernetes can pull them. You can use Docker Hub; I will use Azure Container Registry (ACR), Azure's container registry service, which has very nice integration with the Kubernetes service.

You can create the registry by searching for container in the Azure search bar and selecting ‘Container Registry’. Follow the steps to create it; I recommend storing it in the same Resource Group that your Kubernetes service exists in – you will see why in a moment. Full tutorial: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal

Once this is created we need to attach it to our Kubernetes service so images can be pulled when requested by our Kubernetes YAML spec files. This process is very easy and is documented here: https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration

We are now ready to actually deploy our microservices as Docker containers running on Kubernetes.

Names API and Movies API

Each of these APIs is structured the same way and serves as a source of data for our main service (user-api), which we will talk about next. Assuming you are using the cloned source, you can run the following commands to push these APIs into the ACR:

docker build -t <acr url>/names-api:v1 .
az acr login --name <acr name>
docker push <acr url>/names-api:v1

The commands are the same for movies-api. Notice the call to az acr login which grants the command line access to the ACR for pushing – normally this would all be done by a CI process like Azure DevOps.

Once the images are in the ACR (you can check via Repositories under the Registry in the Azure Portal) you are ready to have Kubernetes call for them. This, again, takes an az aks command line call. Details are here: https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration

As a personal convention I store my Kubernetes-related specs in a folder called k8s, which enables me to run all of the files using the following command:

kubectl apply -f k8s/

For this example, I am only using a single spec file that defines the following:

  • A namespace for our resources
  • A deployment which ensures at least three pods are always active for each of the two APIs
  • A service that handles routing to the various pods being used by our service
  • An Ingress that enables cleaner pathing for the services via URL pattern matching
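
As a rough sketch (not the exact file from the repo – image names, namespace, and ingress class are illustrative), the spec for names-api looks something like this; movies-api mirrors it with a /movie path:

apiVersion: v1
kind: Namespace
metadata:
  name: microservice-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: names-api
  namespace: microservice-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: names-api
  template:
    metadata:
      labels:
        app: names-api
    spec:
      containers:
        - name: names-api
          image: <acr url>/names-api:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: names-api
  namespace: microservice-example
spec:
  selector:
    app: names-api
  ports:
    - port: 80
      targetPort: 80
---
# apiVersion and ingress class vary with cluster version and the HTTP application routing add-on
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: names-api
  namespace: microservice-example
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - http:
        paths:
          - path: /name
            backend:
              serviceName: names-api
              servicePort: 80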

If you are not familiar with these resources and their uses, I would recommend reviewing the Kubernetes documentation here: https://kubernetes.io/docs/home/

If you head back to your Kubernetes dashboard, the new namespace should appear in the dropdown list (left side). Selecting it will bring up the Overview for the namespace. Everything should be green or Creating (yellow).

Once complete, you can go back into Azure, access the same Resource Group that contains your VM scale set, and look for the Public IP Address. Here are two URLs you can use to see the data coming out of these services:

http://<your public IP>/movie – returns all source movies
http://<your public IP>/name – returns all source people

The URL pathing here is defined by the Ingress resources – you can learn more about Ingress resources here: https://kubernetes.io/docs/concepts/services-networking/ingress. Ingress is one of the most important tools you have in your Kubernetes toolbox, especially when building microservice applications.

User API

The User API service is our main service and will call the two services we just deployed. Because it will call them, it needs to know their URLs, but I do not want to hard code these; I want them to be something I can inject. Kubernetes offers ConfigMap for just this purpose. Here is the YAML I defined for my ConfigMap:
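
Roughly, it looks like this – the key names and namespace are illustrative; only the server-hostnames name matters for the discussion below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: server-hostnames
  namespace: microservice-example
data:
  # in-cluster DNS names of the two services, assuming they live in the same namespace
  names-api-host: http://names-api
  movies-api-host: http://movies-api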

A ConfigMap is a set of key-value pairs under a common name – here, server-hostnames. We can then access our values via their respective keys.

These values get into our API via the Pods provisioned for our Deployment. Here is that YAML:
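
The relevant fragment of the pod template looks something like this (environment variable names are illustrative):

spec:
  containers:
    - name: user-api
      image: <acr url>/user-api:v1
      env:
        # each variable pulls its value from the server-hostnames ConfigMap
        - name: NAMES_API_HOST
          valueFrom:
            configMapKeyRef:
              name: server-hostnames
              key: names-api-host
        - name: MOVIES_API_HOST
          valueFrom:
            configMapKeyRef:
              name: server-hostnames
              key: movies-api-host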

Note the env section of the YAML. We can load our ConfigMap values into environment variables which are then accessible from within the containers. Here is an example of reading it (C#):
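
Something as simple as this, assuming the variable name matches what the Deployment injects:

// read the injected hostname; returns null if the variable is not set
var namesApiHost = Environment.GetEnvironmentVariable("NAMES_API_HOST");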

As with the other two services you can run a kubectl apply command against the k8s directory to have all of this created for you. Of note though, if you change namespaces or service names you will need to update the ConfigMap values.

Once deployed, you can access our main endpoint /user off the public URL as before. This will randomly build the Person list with a set of favorite movies.

Follow up

So this was, as I said, a simple example of deploying microservices to Azure AKS. It is but the first step in this process; up next is handling concepts like retry, circuit breaking, and service isolation (where I define which services can talk to each other). Honestly, this is best handled through a tool like Istio.

I hope to show more of that in the future.

Dynamic Routing with Nginx Ingress in Minikube

So, this is something I decided to set my mind to: understanding how I can use Ingress as a sort of API Gateway in Kubernetes. Ingress is the main means of enabling applications to access a variety of services hosted within a Kubernetes cluster, and it underpins many of the more sophisticated deployments you will come across.

For my exercise I am going to use minikube to avoid the $8,000 bill Amazon was gracious enough to forgive last year 🙂. In addition, for the underlying service, I am using a .NET Core Web API hosted via OpenFaaS (howto).

Understanding OpenFaaS

Without going too deep into how to set this up (I provided the link above), I created a single controller called calc that has actions for various mathematical operations (add, subtract, multiply, and divide). Each of these actions can be called via the following URL structure:

<of-gateway url>:8080/function/openfaas-calc-api.openfaas-fn/calc/<op_name>

Note: openfaas-calc-api is the name I gave the API in OpenFaaS; yours will likely differ

The goal of our Ingress is to simplify the URI structure, using the IP returned by minikube ip, to the following:

<minikube ip>/calc/<op_name>

Within our Ingress definition we will rewrite this request to match the URL structure shown above.

Create a basic Ingress

Let’s start with the basics first; here is a configuration that is a good starting point:
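
A minimal version looks roughly like this, assuming the standard OpenFaaS gateway service (named gateway, port 8080, in the openfaas namespace); the apiVersion may differ on newer clusters:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: openfaas-ingress
  namespace: openfaas
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          # pass /function/... requests straight through to the OpenFaaS gateway
          - path: /function
            backend:
              serviceName: gateway
              servicePort: 8080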

You can find the schema for an Ingress definition in the Kubernetes documentation here. Ingress is a standard component in Kubernetes that is implemented by a vendor (minikube supports NGINX out of the box; other vendors include Envoy, Kong, Traefik, and others).

If you run a kubectl apply on this file, the following URL will work:

<minikube ip>/function/openfaas-calc-api.openfaas-fn/calc/<op_name>

However, this is not what we want. To achieve the rewrite of our URL we need to use annotations to configure NGINX specifically – we actually used the ingress.class annotation above.

Annotate

NGINX Ingress Controller contains a large number of supported annotations, documented here. For our purposes we are interested in two of them:

  • rewrite-target
  • use-regex

Here is what our updated configuration file looks like:
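
Roughly like this – the same Ingress as before, with the two NGINX annotations added and a regex path:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: openfaas-ingress
  namespace: openfaas
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    # the captured operation name ($1) is appended to the full OpenFaaS function path
    nginx.ingress.kubernetes.io/rewrite-target: /function/openfaas-calc-api.openfaas-fn/calc/$1
spec:
  rules:
    - http:
        paths:
          - path: /calc/(.*)
            backend:
              serviceName: gateway
              servicePort: 8080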

You can see the minutiae we need to pass for OpenFaaS calls has been moved to our rewrite-target. The rewrite-target is the URL that will ultimately be passed to the backend service matched via path (and host, if supplied).

What is interesting here is that we have given a regex pattern to the path value, meaning our rule will apply for ANY URL that matches the /calc/<anything> format. The (.*) is a regex capture group enabling us to extract the value. We can have as many as we like and they get numbered $1, $2, $3, and so on.

In our case, we are only matching one thing – the operation name. When it is found, we use $1 to update our rewrite-target. The result is the correct underlying URL that our service is expecting.

We can now call our service with the following URL and have it respond:

<minikube ip>/calc/<op_name>

Thus we have achieved what we were after.

Additional Thoughts

Ingress is an extremely powerful concept within Kubernetes and it enables a wide array of functionality often seen with PaaS services such as API Gateway (Amazon) and API Management (Azure). Without a doubt it is a piece of the overall landscape developers will want to be well versed in to ensure they can create simple and consistent URLs for REST, gRPC, and other styles of services exposed for external access.

Authenticating JWT Tokens with Azure Functions

So recently, I decided to work on creating some HTTP-exposed Azure Functions that return data if a JWT token is valid and various 4xx response codes otherwise. Needless to say, I did not expect it to be as hard as it turned out to be. I would say that Microsoft has work to do to enable support of full-blown APIs with Azure Functions when they are not held behind an API Management gateway service; this may be what is intended.

How did I create my token?

So, I used JwtSecurityToken from the System.IdentityModel.Tokens.Jwt NuGet package, together with a symmetric security key from Microsoft.IdentityModel.Tokens, to generate a signed token. This was pretty easy – here is my token generation code:
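
A minimal sketch of that generation code (the issuer, audience, key, and claim choices here are placeholders):

using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class TokenGenerator
{
    public static string GenerateToken(string username, string signingKey, string issuer, string audience)
    {
        // symmetric key built from a shared secret; the reader must validate with the same values
        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey));
        var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

        var token = new JwtSecurityToken(
            issuer: issuer,
            audience: audience,
            claims: new List<Claim> { new Claim(ClaimTypes.Name, username) },
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: credentials);

        // serialize to the compact token string handed back to the caller
        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}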

For our purposes we want to be able to decode the token to get some non-confidential information (the username) so we can do some lookup for user-related information. We could also choose to use the UserId here if we so desired (in fact we should, if the user can change their username).

Decrypting the Token

Here is my code for decrypting the token above via a Read service I wrote as a common method for other Microservices:
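
A sketch of what that read/validate service looks like (class and parameter names are assumptions):

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public class ReadTokenService
{
    private readonly string _issuer;
    private readonly string _audience;
    private readonly string _signingKey;

    public ReadTokenService(string issuer, string audience, string signingKey)
    {
        _issuer = issuer;
        _audience = audience;
        _signingKey = signingKey;
    }

    public ClaimsPrincipal ReadToken(string token)
    {
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = _issuer,
            ValidAudience = _audience,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_signingKey)),
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateIssuerSigningKey = true,
            ValidateLifetime = true
        };

        // throws SecurityTokenExpiredException, SecurityTokenInvalidSignatureException, etc. on failure
        return new JwtSecurityTokenHandler().ValidateToken(token, parameters, out SecurityToken _);
    }
}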

The important thing here is that we use the same Issuer, Audience, and Key as during the encryption process. Validate will use these values to check our token – there are a variety of exceptions that can come out of this operation, so you will want the calling code to be ready to catch them for the various error cases: <Docs>

Ok, so that all is actually pretty easy; now let's get into the hard part. Our goal is, when our Azure Function is called, to receive the parsed result from the JWT token so we can centralize this logic and use it across many functions.

Normally, the way you would do this is to create a Filter that checks the request and, if valid, passes the value to some sort of base class that holds our function. Often this requires DI since we are injecting our Read service into the Filter. We support this in normal Web API with ServiceFilter. Unfortunately, Microsoft currently does not support this, or any similar approach that I could find, for Azure Functions. So what do we do?

Introducing Extensions

So, the Function runtime does support custom extensions which can act, in a way, like filters do in .NET Core (Azure Functions do actually support Filters; they are just new and aren't as feature rich as their MVC/WebAPI counterparts).

Using an extension we can make our Azure Function call look like the following:
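
Something along these lines – UserToken is the custom binding attribute and UserTokenResult is a hypothetical result type we will flesh out later:

// usings omitted for brevity; the full function is shown near the end of the post
[FunctionName("GetUserInfo")]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "user/info")] HttpRequest req,
    [UserToken] UserTokenResult tokenResult,
    ILogger log)
{
    // the extension has already pulled the bearer token off the request and decoded it for us
    return new OkObjectResult(tokenResult.Username);
}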

Do you see it? UserToken is our custom extension. Its job is to look at the incoming request, grab the token, decode it, and pass along an object with various bits of claim data. Be careful what you put in claims; you do not want sensitive data there since anyone can head over to jwt.io, decode the token, and see your claims.

Part 1: Create a Value Provider

Extensions are a means to call custom bindings, and bindings exist to provide a value. The Azure Functions host provides the IValueProvider interface that we implement to create our value provider. This class performs the operation relevant to our custom binding. Below are the two pieces of this class that are relevant: the constructor and GetValueAsync.
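
A sketch of that value provider – the result/enum types and the IReadTokenService interface are names I am assuming for illustration:

using System;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host.Bindings;
using Microsoft.IdentityModel.Tokens;

public enum TokenState { Valid, Invalid, Expired, NotPresent }

public class UserTokenResult
{
    public TokenState State { get; set; }
    public string Username { get; set; }
}

public interface IReadTokenService
{
    ClaimsPrincipal ReadToken(string token);
}

public class UserTokenValueProvider : IValueProvider
{
    private readonly string _rawToken;
    private readonly IReadTokenService _readTokenService;

    public UserTokenValueProvider(string rawToken, IReadTokenService readTokenService)
    {
        // the raw token and the read service are handed in by the binding
        _rawToken = rawToken;
        _readTokenService = readTokenService;
    }

    public Type Type => typeof(UserTokenResult);

    public Task<object> GetValueAsync()
    {
        // decode the token; failures are mapped to a TokenState instead of bubbling out as exceptions
        var result = new UserTokenResult();
        if (string.IsNullOrEmpty(_rawToken))
        {
            result.State = TokenState.NotPresent;
        }
        else
        {
            try
            {
                var principal = _readTokenService.ReadToken(_rawToken);
                result.Username = principal.Identity?.Name;
                result.State = TokenState.Valid;
            }
            catch (SecurityTokenExpiredException)
            {
                result.State = TokenState.Expired;
            }
            catch (Exception)
            {
                result.State = TokenState.Invalid;
            }
        }
        return Task.FromResult<object>(result);
    }

    public string ToInvokeString() => string.Empty;
}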

As I mentioned earlier, the Validate method (called by ReadToken) can throw a litany of exceptions depending on problems with the token. Ultimately, the value returned from here is what our Azure Function receives.

The reason I chose to include the constructor was to begin to illustrate how the ReadTokenService is hydrated – you will find that DI is rather limited at this level and requires some odd hacks to get it to work. We will get into it as we unwrap this.

Ok good, this is your value provider; now we need to create the binding which calls it.

Part 2: Create our Binding to call the Value Provider

The binding is the layer between the extension and the value provider. It receives a binding context that gives it information about the incoming request so we can extract information – this is where we get the raw headers that contain our token. Here we implement the IBinding interface. Here are my constructor, ToParameterDescriptor, and BindAsync(BindingContext context):
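
A simplified sketch of that binding; how the request is pulled off the context (the $request binding data entry) and the service-locator calls are assumptions based on what the post describes:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs.Host.Bindings;
using Microsoft.Azure.WebJobs.Host.Protocols;

public class UserTokenBinding : IBinding
{
    private readonly IServiceProvider _services;

    public UserTokenBinding(IServiceProvider services)
    {
        // no DI at this level, so the container itself is handed down and used as a service locator
        _services = services;
    }

    public bool FromAttribute => true;

    public Task<IValueProvider> BindAsync(BindingContext context)
    {
        // the incoming HTTP request is surfaced through the binding data
        var request = context.BindingData["$request"] as HttpRequest;

        string rawToken = null;
        if (request != null && request.Headers.TryGetValue("Authorization", out var values))
            rawToken = values.ToString().Replace("Bearer ", string.Empty);

        // pull the read-token service out of the container and hand everything to the value provider
        var readTokenService = (IReadTokenService)_services.GetService(typeof(IReadTokenService));
        return Task.FromResult<IValueProvider>(new UserTokenValueProvider(rawToken, readTokenService));
    }

    public Task<IValueProvider> BindAsync(object value, ValueBindingContext context) =>
        throw new NotImplementedException();

    public ParameterDescriptor ToParameterDescriptor() => new ParameterDescriptor();
}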

So, the first thing to unpack here is the constructor – technically there is NO DI support within extensions (for some reason). How I got around this was by passing in IServiceProvider, which is our DI container, and extracting dependencies from it via the Service Locator pattern: we extract both our configuration facade and the service to read our token.

Where this comes into play is when we create our ValueProvider – we pass the service to read the token into the constructor as we create it.

The remaining code in BindAsync is our logic for extracting the raw token from the Auth header (if it is present) and passing it, again via the constructor, to our Value Provider.

As for the ParameterDescriptor, I don't really know what it is doing or what it is used for; it doesn't seem to have an impact, positive or negative, on this use case.

Ok, so now we have created a binding which calls our value provider to carry out the operation. We use the Service Locator pattern on the DI container to extract the dependencies that we need. Our next step is to create the binding provider.

Part 3: Create the Binding Provider

Our extension calls a specific binding provider to get the binding that carries out the intended operation for the extension. This is driven by the IBindingProvider interface and its TryCreateAsync method. For our example, this class is very tiny; I show it in its entirety below:
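
A sketch of that tiny class (names follow the earlier sketches):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host.Bindings;

public class UserTokenBindingProvider : IBindingProvider
{
    private readonly IServiceProvider _services;

    public UserTokenBindingProvider(IServiceProvider services)
    {
        _services = services;
    }

    public Task<IBinding> TryCreateAsync(BindingProviderContext context)
    {
        // hand the container down so the binding can resolve the read-token service
        return Task.FromResult<IBinding>(new UserTokenBinding(_services));
    }
}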

Again, you can see I pass IServiceProvider in via the constructor and then hand it to the binding we described in the previous step. I am sure you can see where this is going 🙂

Part 4: Create the Extension Provider

We have finally arrived at the extension provider. This is where we register our extension with the runtime so it can be used within our code. This class implements IExtensionConfigProvider to supply the Initialize method:
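
A sketch of that provider – RegisterDependencies is the shared registration extension method described below, and the configuration setup shown is an assumption:

using System;
using Microsoft.Azure.WebJobs.Description;
using Microsoft.Azure.WebJobs.Host.Config;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[Extension("UserToken")]
public class UserTokenExtensionProvider : IExtensionConfigProvider
{
    private readonly IServiceProvider _services;

    public UserTokenExtensionProvider()
    {
        // no DI here, so the container is built manually using the shared registration method
        var services = new ServiceCollection();
        services.AddSingleton<IConfiguration>(new ConfigurationBuilder()
            .AddEnvironmentVariables()
            .Build());
        services.RegisterDependencies();
        _services = services.BuildServiceProvider();
    }

    public void Initialize(ExtensionConfigContext context)
    {
        // when the runtime sees [UserToken] on a parameter it will call our binding provider
        context.AddBindingRule<UserTokenAttribute>().Bind(new UserTokenBindingProvider(_services));
    }
}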

And this is where we get the IServiceProvider reference that we pass into all of those layers. In truth, since Azure Functions do NOT support DI here, we manually build our container and pass it into the lower levels.

The catch to this approach, though, is that you do NOT want to write your dependency registration twice. To that end, I wrote an extension method called RegisterDependencies so I wouldn't need to duplicate code. Additionally, I had to manually register the IConfiguration facade (this is done for you in the normal startup flow).

The final block here adds the binding rule for our parameter-level attribute so that, when the runtime sees the attribute, it knows to invoke the create method on our binding provider. Here is the code for our attribute:
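
The attribute itself is tiny – roughly:

using System;
using Microsoft.Azure.WebJobs.Description;

// marks a function parameter to be populated by our custom binding
[Binding]
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class UserTokenAttribute : Attribute
{
}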

The one extra thing here is the use of the Binding attribute to denote that this attribute represents a binding that will be used in an Azure Function.

Part 5: Change our Starting Host

So, if you have ever worked with Azure Functions v2.0, you know you are recommended to use the FunctionsStartup class. It is supposed to offer a means to register extensions in a declarative way – a way I could not get to work, though I suspect it involves the steps listed here. Regardless, IFunctionsHostBuilder (the type passed to the Configure method when using that interface) does NOT have a way to register extensions from code. So what to do?

Well, it turns out you can swap IFunctionsHostBuilder for IWebJobsStartup, which is the old way of doing this, and that provides a way to register the extension provider – shown below:
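
A sketch of that startup class (the namespace is a placeholder; RegisterDependencies is the shared registration method from earlier):

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.DependencyInjection;

[assembly: WebJobsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public class Startup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            // register the custom extension provider with the runtime
            builder.AddExtension<UserTokenExtensionProvider>();

            // shared registration keeps the DI setup in one place
            builder.Services.RegisterDependencies();
        }
    }
}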

Again, note the call to RegisterDependencies, which unifies our registration so we do not have duplicate code. I have yet to notice any irregularities with using the web jobs approach vs the Functions host; please comment if you see anything.

Part 6: Handling the Result of our Token Parse

So, Azure Functions do offer the FunctionExceptionFilterAttribute base class, which provides a hook to respond to uncaught exceptions raised by functions. Unfortunately, this hook does not enable you to alter the response, so even if you catch the relevant exception the response code is already written – it seems to be more for logging than handling.

So, the best I could come up with is that each function has to be aware of how to interpret a failed parse result. Here is my complete function that shows this:
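
A sketch of such a function, using the hypothetical UserTokenResult/TokenState types from the earlier sketches:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class GetUserFunction
{
    [FunctionName("GetUser")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "user")] HttpRequest req,
        [UserToken] UserTokenResult tokenResult,
        ILogger log)
    {
        // each function translates the parse result into an HTTP response itself
        switch (tokenResult.State)
        {
            case TokenState.NotPresent:
                return new BadRequestResult();
            case TokenState.Expired:
            case TokenState.Invalid:
                return new UnauthorizedResult();
        }

        return new OkObjectResult(new { username = tokenResult.Username });
    }
}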

You can see that we introduced an enum called TokenState that I pass back from the underlying value provider. This is not an ideal approach since it means each developer writing a function must be aware of the various auth scenarios that can occur for their function. This leads to duplication and creates error-prone code. But it is the best I have found thus far.

Closing

So, honestly, I am disappointed in Microsoft here. I feel like Azure Functions are really designed to be used behind an API Management gateway to alleviate some of these checks, but the DI maturity is abhorrent. I really do hope this is more a case of me missing something than the actual state of things, especially given the rising importance of serverless in modern architectures.

I know I showed a lot of code and I hope you had good takeaways from this. Please leave comments and I will do my best to answer any questions. Or I would love to know if I missed something that makes this easier.

Using Terraform to Deploy Event Grid Subscriptions for Function Apps

I recently set a goal for myself to create a microservice-style application example whereby individual services listen for events coming through Azure Event Grid. As a caveat, I wanted to build everything up, and thus have everything managed, via Terraform scripts. This proved very challenging and I wanted to take time to discuss my final approach.

Terraform, no datasource for Event Grid Topics

So, annoyingly, Terraform does NOT contain a data source for Event Grid topics, meaning in order to reference the properties of a target topic you need to either store the values in a vault or something similar, or grab the outputs from creation and pass them around as parameters; I chose to do the latter, for now.

Capturing the Relevant values from Topic Creation

As part of my environment setup process, I defined the following Terraform HCL to create the relevant topic for the environment; this does mean I will have a single topic per environment, which is often sufficient for most use cases.
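
A sketch of that HCL (resource and variable names are assumptions):

# one topic per environment
resource "azurerm_eventgrid_topic" "env_topic" {
  name                = "evgt-${var.environment}"
  location            = var.location
  resource_group_name = var.resource_group_name
}

# surfaced so the pipeline can hand these values to later stages
output "topic_id" {
  value = azurerm_eventgrid_topic.env_topic.id
}

output "topic_endpoint" {
  value = azurerm_eventgrid_topic.env_topic.endpoint
}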

Key to this are the outputs. I am using Azure DevOps to execute the build pipeline. In ADO, you are able to give tasks names (Output Variables), which then allows you to reference task-level variables for that task from other tasks. I use this bash script to extract the topic_id from the above:
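
A sketch of that script, assuming the Terraform outputs have been written to a JSON file (for example via terraform output -json):

# pull the topic id out of the Terraform outputs with jq
topicId=$(jq -r '.topic_id.value' outputs.json)

# expose it as an output variable so later tasks can reference it (e.g. $(EnvTerraform.topicId))
echo "##vso[task.setvariable variable=topicId;isOutput=true]$topicId"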

If you are following along in Azure DevOps, EnvTerraform is the custom name I gave via Output Variables to the Apply Terraform operation. I am using the jq command line tool to parse the JSON that comes out of the output file.

Finally, we can use an echo command to ask the ADO runtime to set our variable. This variable is scoped, privately, to the task. We use the isOutput parameter to indicate that it should be visible outside the task. And finally we give it the value we wish to set.

The importance of this will become clear soon.

Create the Event Subscription

Event Grid topics contain subscriptions, which hold the routing criteria for messages to various endpoints. Event Grid supports a wide range of endpoints and is, in my view, one of the most useful PaaS components offered by the Azure platform. For our case, we want to route our events to a WebHook which will invoke an Azure Function marked with the EventGridTrigger attribute, available via the relevant NuGet package.

Before we get started there is a problem we must solve for. When subscriptions are created, Azure will send a pulse to the webhook endpoint to ensure it is valid. This endpoint just needs to return a 2xx status code. One issue: in order to even get to our method, we need to get past the enforced Function App authentication. Thus, we need to pass our MasterKey in the subscription to enable Event Grid to actually call our function.

It turns out this is no small task. The only way to get this value is to use the Azure CLI and then, again, expose the value as a Task level variable.

Here is the script I threw inside an Azure CLI task in Azure DevOps:
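
A sketch of that script; the hostname and resource group are assumed to arrive as environment variables on the task, and az rest is one way to call the management API:

# active subscription for the Azure CLI task
subscriptionId=$(az account show --query id -o tsv)

# the function app name is the first label of the passed-in hostname (myapp.azurewebsites.net -> myapp)
appName=$(echo "$FUNC_APP_HOSTNAME" | sed 's/\..*//')

resourceId="/subscriptions/$subscriptionId/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Web/sites/$appName"

# ask the Function App management API for its host keys and grab the master key with jq
masterKey=$(az rest --method post --uri "https://management.azure.com${resourceId}/host/default/listkeys?api-version=2018-11-01" | jq -r '.masterKey')

# expose the key as an output variable for the Terraform step
echo "##vso[task.setvariable variable=funcAppMasterKey;isOutput=true]$masterKey"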

Some notes here:

  • The first execution gets us the subscriptionId for the active subscription being used by the Azure CLI task
  • Next we need the appName – since I want my script to be fairly generalized, I pass the created function app hostname into the script and parse out the name of the function app from the URL using sed
  • Next, we build the resource Id for the Azure Function App – you can also get this from the output of a function app resource or data source; I chose not to as a matter of preference and convenience
  • Next, using the Management API for the Function App, we ask for a list of all keys and again use jq to grab the lone masterKey
  • Finally, we use the echo approach to create our output variable (named funcAppMasterKey) so we can use it later – and we will

In terms of the actual Terraform script to create the Event Subscription, it looks like this:
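
Roughly like this (variable names are assumptions; the topic_id and master key come from the earlier pipeline steps):

resource "azurerm_eventgrid_event_subscription" "func_subscription" {
  name  = "evgs-${var.environment}"
  scope = var.topic_id

  webhook_endpoint {
    # standard Event Grid trigger webhook format, authenticated with the function app master key
    url = "https://${var.function_app_hostname}/runtime/webhooks/eventgrid?functionName=MyEventGridTrigger&code=${var.func_app_master_key}"
  }
}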

One massive tip here: if you specify topic_id for scope, do NOT also specify topic_name. My thought is TF concatenates these values under the hood, but I was never able to get it to work with both set.

For the webhook, we follow a standard format and specify the exact name of our trigger. This is roughly the same Url structure you would use to invoke the trigger method locally for testing.

Finally, notice the use of masterKey at the end of the webhook Url. This is passed in as a parameter variable based on the value we discovered in the Azure CLI task.

Running this

For my purposes, I elected to solve this by breaking my TF scripts into two parts: one which creates all of the normal Azure resources, including the Function App, and a second that specifies the subscriptions I wish to create – there are certainly other ways to approach this.

By splitting things apart I was able to perform my Azure CLI lookup in an orderly fashion.

Deploying Containerized Azure Functions with Terraform

I am a huge fan of serverless and its ability to create simpler deployments with less for me to worry about, while still providing the reliability and scalability I need. I am also a fan of containers and IaC (Infrastructure as Code), so the ability to combine all three is extremely attractive from a technical, operational, and cost optimization standpoint.

In this post, I will go through a recent challenge I completed where I used HashiCorp Terraform to set up an Azure Function App whose backing code is hosted in a Docker container. I feel this is a much better way to handle serverless deployments than the referenced zip file approach I have used in the past.

You need to be Premium

One of the things you first encounter when seeking out this approach is that Microsoft will only allow Function Apps to use Custom Docker Images if they use a Premium or Dedicated App Service Plan (https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=nodejs#create-an-app-from-the-image) on Linux.

For this tutorial I will use the basic Premium plan (SKU P1V2). A quick reminder: your Function App AND its App Service Plan MUST be in the same Azure region. I ran into a problem trying to work with Elastic Premium which, as of this writing, is only available (in the US) in the East and West regions.

Terraform to create the App Service Plan (Premium P1V2)
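
A sketch of that plan (names are assumptions; argument names can shift between azurerm provider versions):

resource "azurerm_app_service_plan" "function_plan" {
  name                = "plan-functions-premium"
  location            = var.location
  resource_group_name = var.resource_group_name

  # containers require a Linux (reserved) plan
  kind     = "Linux"
  reserved = true

  sku {
    tier = "PremiumV2"
    size = "P1v2"
  }
}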

Pretty straightforward. As far as I am aware, container-based hosting is ONLY available on Linux plans – Windows container support is no doubt coming, but I have no idea when it will be available, if ever.

Create the Dockerfile

No surprise that the Docker image has to have a certain internal structure for the function app to be able to use it. Here is the generic Dockerfile you can get using the func helper from the Azure Functions Core Tools (npm):

func init MyFunctionProj --docker

This will start a new Azure Function App project that targets Docker. You can use this as a starting point or just to get the Dockerfile. Below are the contents of that Dockerfile:
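
It looks approximately like this (exact base image tags depend on the tools and runtime version you are on):

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS installer-env

# publish the function project into the folder layout the runtime image expects
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
    mkdir -p /home/site/wwwroot && \
    dotnet publish *.csproj --output /home/site/wwwroot

FROM mcr.microsoft.com/azure-functions/dotnet:3.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]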

At the very least it's a good starting point. I assume that when Azure runs the image it looks to mount into known directories, hence the need to conform as this Dockerfile does.

Push the Image

As with any strategy that involves Docker containers, we need to push the source image somewhere it can be accessed by other services. I won't go into how to do that, but I will assume you are hosting the image in Azure Container Registry.

Deploy the Function App

Back to our Terraform script, we need to deploy our Function App – here is the script to do this:
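
A sketch of that resource; names, the storage reference, and the exact argument set are assumptions that vary with the azurerm provider version:

resource "azurerm_function_app" "function_app" {
  name                      = "func-container-example"
  location                  = var.location
  resource_group_name       = var.resource_group_name
  app_service_plan_id       = azurerm_app_service_plan.function_plan.id
  storage_connection_string = azurerm_storage_account.storage.primary_connection_string
  version                   = "~3"

  app_settings = {
    # do NOT let the runtime look for the app content in storage - the container has it
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = "false"

    # credentials for pulling the image from ACR
    DOCKER_REGISTRY_SERVER_URL      = "https://${var.acr_login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = var.acr_admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = var.acr_admin_password
  }

  site_config {
    # points the app at the container image to run
    linux_fx_version = "DOCKER|${var.acr_login_server}/my-function-app:latest"
  }
}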

The MOST critical app setting here is WEBSITES_ENABLE_APP_SERVICE_STORAGE and its value MUST be false. This tells Azure NOT to look in storage for the function metadata (as is normal). The other all-caps app settings provide access to the Azure Container Registry – I assume these will change if you use something like Docker Hub to host the container image.

Note also the linux_fx_version setting. If you have visited my blog before you will have seen this when deploying Azure App Service instances (not surprising since a Function App is an App Service under the hood).

Troubleshooting Tips

By far the best way I found to troubleshoot this process was to access the Kudu options from Platform Features for the Azure Function App. Once in, you can access the Docker container logs (you have to click a couple of links) and it gives you the Docker output. You can use this to figure out why an image may not be starting.

This is what ultimately led me to discover the APP_SERVICE_STORAGE setting (above) as the reason why, despite the container starting, I never saw my functions in the navigation.

Hope this helps people out. I think this is a very solid way to deploy Azure Functions moving forward, though I do wish a Premium plan were not required.

Using Anchore with Azure DevOps

Anchore is a Container Image scanning tool that is used to validate the security of containers deployed for applications. I recently undertook an effort to build a custom Azure DevOps task to enable integration with this tool; nothing previously existed.

To get a feel for how this process works, this is a high level diagram of the underlying steps:

[Diagram: AnchoreFlow]

Under the covers, we contact the given Anchore Engine server after adding our built image to an accessible registry. This contact is a polling operation which waits for the analysis status to change.

The Setup

For my approach, I elected to stand up an Ubuntu VM in the Azure cloud and run the server using docker-compose. The steps are here: https://docs.anchore.com/current/docs/engine/engine_installation/docker_compose. Despite being listed in the Enterprise documentation (the paid version of Anchore), it does work for the OSS version.
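
Once the compose file from those docs is on the VM, bringing the engine up and checking on it is just:

# start the Anchore Engine services in the background
docker-compose up -d

# confirm the containers are running and the API answers (credentials are whatever the compose file defines)
docker-compose ps
anchore-cli --u admin --p foobar --url http://localhost:8228/v1 system status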

Setup can take a few minutes once started since the engine needs to download the information necessary to carry out scanning-related functions. This actually makes later scans a bit faster since the information will be cached ahead of your scan requests.

Once that is complete you need to add a user. This is done as part of an account. When you perform commands against the API with this username and password, the information is stored relative to that account – meaning two accounts do not share things like images or registered registries.

Link: https://docs.anchore.com/current/docs/engine/usage/cli_usage/accounts

Next, we need to register our registry, though I am not certain you need to do this if using a public repository on Docker Hub – for my purposes I am using Azure Container Registry (ACR). When the ACR is created you will need to enable admin mode to have ACR generate a username and password that can be used for access.

[Screenshot: Acr]

Once you have established these values, you simply need to register the registry using the registry add command:

anchore-cli --u someUser --p somePass --url someUrl registry add SomeAcrLoginServer SomeAcrUsername SomeAcrPassword

Link: https://docs.anchore.com/current/docs/engine/usage/cli_usage/registries

With this, our Anchore Engine is set to go.

I have noticed that if the engine is kept idle for a period of time, it will stop analyzing incoming images. You need only stop and start it using docker-compose. I am not entirely sure why this happens.

Using the Anchore Task

The task is available, publicly, here: https://marketplace.visualstudio.com/items?itemName=Farrellsoft.anchore-task

The task contains a full listing of the various properties that are currently available; it pales in comparison to the full functionality offered by Anchore Engine, but more functionality will be added over time.

In terms of the flow you should be thinking about, here is a diagram that lays out how I see it working:

[Diagram: anchore-task-flow]

One of the key things to note here is the “double push” of the image. We do this because we want to segregate images created with each build from those that are suitable to move on to higher environments. Also, in the event of a rollback, we would not want Ops to have to figure out which images passed the check and which ones did not.

We also do it because Anchore needs to be able to grab the image for scanning and will not be able to access it on our build agent.

Prior to the steps above you might choose to run unit tests on the artifact before it goes into the image. You could also choose to run your unit tests within the container to validate your assumptions within the actual execution environment.

Some Notes

I set up the Anchore Task to only make 100 attempts to check for a status change, with a 5s wait between each attempt. In the event the Anchore Engine server is experiencing problems, I would rather the build fail than have it enter an endless loop. In the future, I intend to make this a configurable flag.

As I am writing this, we are still in the early days of the extension and there will no doubt be more features added. My main focus with this post was to cover setup and general usage.