Reporting on Unit Tests with VSTS Containerized Apps

I am a purist at heart and when I do something I want to take full advantage of the tools I am using. In the case of Docker, that means emphasizing that ALL of my code should run in the same container as my final product. What is the value otherwise?

To that end, I set about exploring how I might report on unit tests with a VSTS build. It is not an easy process because, in my view, VSTS and .NET do not naturally lend themselves to containerized architectures. Microsoft is working hard to change this and has made great strides, but there are still some issues to work out.

However, in this case the central problem has to do with what Docker creates: an image, which is immutable, meaning you cannot read from it during its construction, nor would you want to.

Approach 1: Run the Tests before Image Creation

The simplest approach is to run the unit tests before you create the image and add a dependent build phase which only executes if all unit tests pass. While this is simple and would work, it violates, in my mind, the principles of containerization.

Code is run in the same way for all environments

This matters for testing, as it is the ideal spot where you might find a difference. If someone was using a different version of a library and it worked locally, and even worked on the build server, but didn't work in the container, you would never know until you deployed.

Admittedly this is rare for any experienced development team, who would be keeping close tabs on this, but it does happen (it happened at West Monroe when a member of our team insisted on using the Alpha branch for Xamarin while everyone else used Stable).

My goal was to find a way to perform the unit tests in the very same containerized environment the code would run. So, I turned to the God of Wisdom: Google

Approach 2: Docker Compose to the rescue

Docker Compose is one of those tools that was created for one purpose but, I think, ended up fulfilling another. While you can still deploy production code using Compose, the trend right now is towards orchestration with something like Kubernetes. Still, Compose is great for applications that won't use Kubernetes but still need to mimic their production dependencies locally.

In my searching I came across this fantastic article on Medium by a fellow developer who found an ingenious way to accomplish what I was seeking using Docker Compose.

Running your unit tests with VSTS and Compose

The gist is, we can use a Dockerfile which creates a “test” image which has no ENTRYPOINT defined. We can then create a docker-compose file which references that Dockerfile and specifies the ENTRYPOINT in the compose file as the dotnet test command. Here is a sample from my final output.

version: '3'
services:
  myapp.tests:
    build:
      context: .
      dockerfile: MyApp.Tests/Dockerfile
    entrypoint: dotnet test MyApp.Tests/MyApp.Tests.csproj --logger trx -r /results
    volumes:
      - /opt/vsts/work/_temp:/results

As you scan this Compose file it becomes a bit clearer what is happening. VSTS supports the ability to run a Docker Compose command. We use this to launch our test image and mount the location where the test results are written to a local folder on the build agent (the volumes mapping on the last line above). This way, when we run our subsequent step to report the results, we have access to the files (remember, they are generated and stored inside the container).

Note: I recommend keeping the directory the same since you can be sure it exists
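
For reference, the MyApp.Tests/Dockerfile that the compose file points at is nothing more than an SDK image with the source copied in and no ENTRYPOINT. The exact file is not shown in this post, so treat the following as a rough sketch (the base image tag and paths are assumptions):

FROM microsoft/dotnet:2.1-sdk
WORKDIR /app

# Bring the whole source tree in so dotnet test can build and run the test project
COPY . .
RUN dotnet restore

# Deliberately no ENTRYPOINT here -- the compose file supplies dotnet test as the entrypoint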

Here is the Docker Compose up command we will use from the VSTS task

up --abort-on-container-exit --build

Note: the task will prepend docker-compose for us, so we need only specify the arguments.

The --build flag ensures the image is rebuilt if it is not already cached, and --abort-on-container-exit stops the containers as soon as our ENTRYPOINT command (dotnet test) finishes.

Finally, we come to publishing our test results, and for this we can use the existing VSTS Publish Test Results task. Point the task at our mounted directory, specify the desired extension as .trx, and set the test type to VSTest (even if you are using a different runner, say NUnit).

Now you should be able to run the build and see your test results. I should point out that, since we are using dotnet test as our entrypoint, the task WILL FAIL if a test does not pass. Keep that in mind so you can create the proper control flow and avoid creating Docker images from builds that do not have passing unit tests.

I hope that helps and that you got some good information out of this. Be sure to visit the link above and send thanks to Christian; that article really helped me out.


DevOps with a Containerized app in Visual Studio Team Services

With any modern development project, I feel, you need good DevOps if you want a chance to be successful. Luckily, Microsoft has invested a lot in Visual Studio Team Services so that it is a one stop shop for development teams. Among these tools is a cutting edge Build and Release pipeline system.

In this post, I wanted to walk through my approach to handling a CI/CD pipeline with VSTS and containerized builds being deployed using App Services.

By the end you will have two builds. One performs your typical CI Dev build that runs after each remote push; this will have a linked Release that deploys the created image to a Dev App Service. Similarly, you will have a Release Build that is triggered when a tag is pushed to the remote; it builds the image and tags it with the value from the Git tag. Finally, we will create a Staging Deployment whereby users manually create releases and deploy specific versions to higher environments.

This is not a short post so, let’s get started.

Creating the CI Build

One of the most important builds for any development team is the CI, or Continuous Integration, build. For this build, whenever we merge to our develop branch we want to build an image and, if valid, deploy it to our Azure Container Registry (ACR).

For starters, we need a Dockerfile that can create the image we will deploy to ACR. Here is the Dockerfile I used:

Selection_008
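
In case the screenshot is hard to read, here is a rough reconstruction of that Dockerfile based on the steps listed below; the image tags and the ContainerTest.Api project name come from this sample, so treat it as a sketch rather than an exact copy:

# Build stage: restore and publish the app using the full SDK
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /code
COPY . .
RUN dotnet restore
RUN dotnet publish -c Debug -o /artifact

# Runtime stage: only the ASP.NET Core runtime plus the published output
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /artifact .
EXPOSE 80
ENTRYPOINT ["dotnet", "ContainerTest.Api.dll"]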

This is what is known as a multi-stage build, where we separate the build and runtime components of our container; this reduces the size of the final image, as SDKs can be rather large and are not needed to actually run the code.

Here are the steps:

  • Download version 2.1 of the dotnet core SDK and refer to this stage as build
  • Set the working directory on the image to /code
  • Copy everything from the current directory into /code (our current working directory)
  • Run the dotnet restore command to restore our NuGet packages
  • Run dotnet publish to build our application in Debug (it is a Dev build) and send the contents to /artifact
  • Download version 2.1 of the aspnetcore-runtime and name this stage runtime
  • Create your working directory /app
  • Copy all contents from /artifact from the build stage to the current working directory (/app)
  • Expose port 80 on spawned containers
  • Set the Entrypoint for the container as ContainerTest.Api.dll

We will create a derivative of this for the release build later on.

On VSTS, you will need to enter the Builds and Releases section and click New (+); this will open the wizard to create a new pipeline.

The first screen is for selecting the source you want to build. We want to use develop, since this is the branch our task and feature branches will ultimately come into, so this build will happen very frequently as an attempt to make sure changes don't break anything.

On the next screen we pick our base template; Docker Container will be our selection. This will call our Dockerfile and publish the resulting image to a registry. We will use ACR for this, but you could use any registry you desire.

Selection_009

Important: You must make sure that the image(s) you build and the image(s) you publish are the same, or this process will fail.

Selection_011

Let’s go through the fields here, all of them are duplicated in the Publish task as well:

  • Azure Container Registry – because I indicated that this would be where my images are stored I was asked to select the registry. There is a field above this to select the Azure subscription, I have hidden it here for security
  • Action – this is obvious, the values will differ between Building and Publishing for obvious reasons
  • Dockerfile – again, obvious, we can leave the default here.
  • Image Name – Ok, so this is the actual name and tag of the image you will create. In ACR the image maps to a Repository and each individual item in that repository will be a tag
    • In this case we use the source repository name as the image repo name and the BuildId value as the tag. We can update the tag to be whatever we want
    • Ex: $(Build.BuildNumber)-$(Build.SourceVersion)
  • Additional Image Tags – new line delimited list if you want to create additional tags within the repo, or if your tag structure is long
  • Include Source Tags – will create a tag for any Git tag that is pushed
  • Include Latest – common practice in Docker, latest refers to the latest build for the image. You can also not include any tags and latest will get pushed

Again, it is critical that we duplicate the image name fields in the publish task so that it can find the image we just built.

Finally, we need to indicate that this is a build that is kicked off when the develop branch is pushed to. To do this, we edit our Build Pipeline and select the Trigger tab. Click to Enable Continuous Integration. Make sure you have develop specified. This will ensure the build is kicked off when develop is modified.

Releasing

Now, oddly enough even if you create a latest image and set your App Service to use the latest container it will not update when you push, because the App Service has to be told to update, and that is where the Release pipeline comes in.

First, head to Azure and create an App Service (Web App for Containers); when creating it, be sure to select Container (if you use the Web App for Containers template, you won't have a choice).

Now, you will be asked to define a default image, so it is best to do this once one of your CI builds has completed. Be sure to test that it works after the provisioning process is complete.

Returning to VSTS, go to Releases. Release pipelines can do all the same things as Build pipelines but, their targeted purpose is to respond to a completed build or manually release code selected from completed builds.

When you select to Create a release pipeline you will be met with a side menu that requests selecting a template. For this case, we select Azure App Service Deployment.

Our next step is to determine what will be released, and that means selecting an Artifact. There are many options here but, since we want this release to happen whenever the CI build finishes, we select Build. When you do this, most of the fields will get filled in; the Source Version Alias can be whatever you want, it's just the name of the incoming artifact.

After we select our artifact we need to tell the release what to do. For our case, this is going to be super simple: we are going to deploy the image built in the Build Phase to our Dev environment AppService. Click the Phase link beneath the Environment.

Selection_013

So, let's go through these settings because they are important to understand:

  • App type: Must be set to Linux Web App because the images we built are Linux images
  • App Service name: I have noticed that if you don't use Web App for Containers, the service doesn't seem to be selectable in the menu, hence I mentioned using that template above
  • Image: The image you want to target; this is case sensitive
  • Tag: The tag you are deploying. Some of the environment values are carried over from the build; one of them is the BuildId

The last thing we need to do is set up our Release trigger. We can trigger releases manually, which will be the case for UAT and Production and, to some extent, QA. But for Dev we want it to have the latest and greatest.

So, once you have this in place, it's time to test our CI build. Make a change and push to develop.

The build should start up and, hopefully, finish successfully (use the Download log on the Build detail to debug failures). After it finishes, switch over to Releases. You should see the next build start up.

Once that finishes, refresh your App Service endpoint and, after a time, you should see the change. If you get Service Unavailable, it usually means that you specified an image tag that does not exist. To confirm this, view the Container Settings for your App Service; if Tag (or any required field) is blank, it means the deployment specified the wrong tag. You can further confirm this in the log for the Release.

That completes our first goal: we have a CI build which deploys to our App Service. Up next is QA.

Creating the Release Build

Ideally, I wanted this build to kick off whenever a version tag was pushed to the develop branch. From this, we can tag the generated image with its version and very easily have a historical listing of the versions that can be used by App Services and via the Release pipeline.

Before going any further it's important that we understand how we can automatically invoke a build from a tag push, since it is not immediately obvious.

When you create a tag it is created at the path /refs/tags/<tag name>. Most build engines are wired to look for branch changes using a similar path structure. Knowing this, we can hijack the branch filter to launch our build when a tag is pushed.

Clone the CI build, go into Edit, and click Triggers. You will need to enable Continuous Integration, as you did for the CI build, but you won't use a branch this time (shown below).

Selection_015
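
The screenshot may be hard to make out, but the trick is simply that the branch filter gets a tag path instead of a branch name; in my case the include filter looked something like this (the exact wildcard pattern is up to you):

refs/tags/*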

That is all there is to it. Now we just need to make some changes to our build process.

Tagging the Image

Simply put, we want to translate our Git tag to the tag for our container. This value is available to us, oddly enough through the Build.SourceBranchName environment variable. So we can use this in our Image Build and Push steps to correctly tag and push the right image.

Admittedly, this is a bit weird but, if you remember how we set up the trigger, it does make sense. I do hope Microsoft exposes this in a cleaner way moving forward because it is not obvious you can do this.

Selection_017

The last thing we want to do is make sure that we build our .NET code in Release mode, since this is code that could potentially go into Production. The easiest way to do this is to create a copy of your Dockerfile and update Debug to Release.

Also note the -release suffix added to the Image Name. This is so we do not drop this into our Dev repo (containertest). While there is no harm in doing so, I find this makes it easier to know which builds are releases and prevents mistakes.
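
To make that concrete, the Image Name field in my Build and Push tasks ended up as something along these lines (the repo name is from my sample; yours will differ):

containertest-release:$(Build.SourceBranchName)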

Methodology

When we create a QA release we should view this as something that MIGHT go to Production. In reality, the vast majority of Release builds will be discarded somewhere along the way, but at least one will/should make it all the way through.

Additionally, in a proper build process we NEVER want to rebuild code that has been validated by a testing process as it opens the chance that a bug slips by. Thus, when we create a Release build that is the last time that code is compiled. This is where Containers really shine vs something like ZipFile deploy as they are specifically designed with this case in mind.

Finally, by separating our Dev and Release builds we are able to have a history and allow for easy rollbacks and deployments. By having this history, we can see a timeline of how an application developed.

Releasing the Release

So, we can use the same methodology to kick off this release build as we did with the CI build: when the build completes, the release is kicked off.

Go ahead and create a new Release Pipeline; as before, we want to use Azure App Service Deployment as our template. For the Artifact, select the Release Build that was created previously. The beauty here is that, since that build is ONLY triggered when a version tag is added, this release pipeline will only ever fire when that release build succeeds; this makes it ideal for deploying to QA environments.

As with the Release Build we created earlier, we need to reference Build.SourceBranchName in the Deployment task so we indicate which image tag we are deploying.

As a tip, when a Release run finishes you can look at its details and click Logs and see a COMPLETE dump of all variables in context. This is VERY helpful for knowing what you have access to; this was more helpful than hours of Googling for me 🙂

Also, a good way to verify that the Release worked (in addition to visiting the Url) is to check the Container Settings in the Web App, where you can see the actual image and tag it attempted to deploy (you will not get a failure if the image does not exist, just Service Unavailable).

Selection_018

To test, create a tag anywhere in your Git commit history and push that tag to your remote. As a warning, a plain git push does NOT, by default, push tags (from the command line, git push --tags will send them). I use GitKraken so I can push tags individually. Just keep that in mind.

Also, if you are using the free tier of VSTS, it may take a second to start. You can check Queued Builds if you want to see the change was detected.

Once the build finishes, flip over to Releases and, again after some waiting, you will see the Release start. When it finishes you can check your AppService. Congrats.

Higher Environment Deployment

As we talked about before, once you build a QA release you are, effectively, creating a build that you might potentially release and, as such, rebuilding this code should absolutely be avoided. Using containers makes this much easier than something like Zip deploy.

Because we do not need to build anymore, additional actions take place only in the Release Pipeline. To close out this post, I will create a Staging Deployment where the user indicates what version they are deploying.

In Releases, choose to create a new Release Pipeline, I called mine Staging Deployment. The important thing with this pipeline is that for the Artifact Type you select Azure Container Registry (or whatever registry you are choosing to use).

Next, go into the Tasks for your App Service Deployment task. Make sure you select the right Image Name (remember it is case sensitive) and use Build.BuildId for the tag. This is weird, I know, but when the user creates the release they will specify a version (from the versions we have created) and it will be surfaced as the BuildId. Here is what mine looks like:

Selection_019

This is literally it for the configuration of the pipeline. Now, let’s invoke it.

From the Releases main landing screen select the Staging Deployment (or whatever you called it) Pipeline and from the three dots menu select + Release.

A side menu will appear prompting the user for certain details on this release; one of them is the version. When you click the dropdown, a selection of available versions from the ACR will appear. Select the one you want. Here is what my screen looks like:

Selection_020

Click Create and the pipeline will move to a Standby state; it won't actually deploy yet, as that is, correctly, a separate step.

FYI, the Refresh on these screens is a bit wonky so, make use of the manual Refresh button in the table’s upper left corner.

Here is what my screen looks like when I drill into this New Release I created.

Selection_021

Now, we click Deploy and wait till the process ends. Mine took 3m, though I use the free tier and a local agent built on an Agent Docker Container (future post for that).

Once it's complete, go verify things and you should be good.

Closing

Let me be frank: there is NO REASON not to use containers for applications these days. Orchestration is another matter, but containers should now be the de facto standard for the vast majority of applications.

In the example above, we were able to use Git tags and tagging to identify versions and make our builds but, more than that, there is a consistency here. We have a guarantee that our applications work because they are contained and have everything they need right inside, regardless of the host OS.

Building a Real Time Data Pipeline in Azure – Part 3

Part 1 – here Part 2 – here

In the first two parts of this series we created an Event Hub that we could blast with large amounts of data; this is a common use case one sees with the sort of real time application we are building the backbone for. Next, we showed how to set up a Stream Analytics job to output query results on the data based on a bounded window, which allowed us to send those results to a storage medium.

At the conclusion of Part 2 we had these results streaming into Blob storage, which is good but not overly practical (at least not until Azure has something like Athena). Truthfully, to use this data effectively we need it in a storage medium that supports querying.

Rules of the Land

I am a huge fan of Azure CosmosDB and the various database APIs it offers, including DocumentDB. At the time of this writing, DocumentDB is the ONLY CosmosDB API supported as a Stream Analytics job output destination. For now, this rule must be followed, as you cannot use anything else; if you do not follow it, be prepared for a cryptic error.

Setting the output

Returning to your output screen (and you can have multiple outputs if you want), you can click Add and select Cosmos DB. As with the previous section, you will want to automate as much as possible; I would even recommend having your database prepared ahead of time, but the wizard can create it for you as well.

Testing the process

Once you have all of this in place you can turn on the hose (make sure the Analytics job is started) and wait for data to appear. The truth is, debugging this is simply a matter of checking the flow at each point and seeing where the data stops if it is not making it all the way through.

Your next step is to write an application that queries this data as you need it to provide insights into the data you are creating. Once you have it in the output you can just write your normal queries against it.

Building a Real Time Data Pipeline in Azure – Part 2

Part 1 here

Continuing on with this series, we now turn our attention to how to ingest and process the data collected through the Event Hub. For real time apps, this is where we want to perform our bounded queries to generate data within a window. Remember, traditionally we naturally get bounded queries because we are querying a fixed data set. With this sort of application our data is constantly coming in and so we have to be cognizant of limiting our queries within a given window.

Creating our Stream Analytics Job

Click Add within your Resource Group, search for Stream Analytics, and select Stream Analytics job.

For this demo, you can set Streaming Units to 1. Streaming Units are a measure of the scalability of the job; you should adjust this based on your incoming load. For more information see here.

When we look at this sort of service we concern ourselves with three aspects: Input, Query, and Output. For the job to work we will need to fulfill each of these pieces.

Configure the Input

For Input, we already have this: our Event Hub. We can click on Inputs and select Add Stream Input. From here you can select your Event Hub; you will be prompted to fill in values in the panel on the right side. As a tip, use the Event Hub selection rather than specifying the values manually; I have found that things won't work properly otherwise.

This provides an input into our Analytics job. In the next section we will configure the query.

Configure the Query

As I mentioned before, the thing to remember with this sort of process is you are dealing with unbounded data. Because of this we need to make sure the queries look at the data in a bounded context.

From the Overview for your Stream Analytics Job click the Edit Query which will drop you into the Query editor. Some important things to take note of here:

  • When you created your Input you gave it a name; this serves as the table you will use as the source for your data
  • The query result is automatically directed to your output. There are ways to send multiple outputs, but I won't be covering that here

So here is a sample query that I used to select the sum of shares purchased or sold on a per minute basis:

Selection_004
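
If the screenshot is hard to make out, the query looked roughly like the following; the column names (Symbol, Shares) come from my sample payload, so adjust them to your own schema:

SELECT
    System.Timestamp AS EventTime,
    Symbol,
    SUM(Shares) AS TotalShares
FROM
    StockTransactionsRaw
GROUP BY
    Symbol,
    TumblingWindow(minute, 1)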

There are a couple things to point out here:

  • We use System.Timestamp to produce a timestamp for the event. This will be used on the frontend if you want to graph this data over time. It also serves as a primary key (when combined with symbol) in this case
  • StockTransactionsRaw is the input that we defined for this Stream Analytics job, you can call this whatever
  • TumblingWindow is a feature supported within Analytics jobs to allow for creating a bounded query. In this case, we advance a non-overlapping window on a per minute basis. Here is more information on TumblingWindow – click

One big tip I can give here is using Sample Data to test. Once you have your Input stream going for a bit you can return to the Inputs list and click Sample Data for that specific input. This will generate a file you can download with the data being received through that input.

Once you have this file, you can return to the Edit Query screen and click the Test button, this will let you upload the file you downloaded and will display the query results. I found this a great way to test your query logic.

Once you have things working to your liking you need to move on to the Output.

Configure the Output

For now, we are going to use Blob Storage to handle our data. We will cover hooking this up to Cosmos in Part 3. I want to break this up a bit just so it's not too much being covered in one entry and, well, I like 3 as a number 🙂

Click on Outputs and select Add and choose Blob Storage. Here is what my configuration looks like:

Selection_005

This will drop the raw data into files within your blob storage that you can read back later. Though, the real value here will be the ability to query it from something like Cosmos, which we cover in Part 3.

So, now you have an Event Hub that is able to ingest large amounts of data and it pumps the data into the Analytics Job which runs a query (or queries) against it and it sends the outputs, for now, to our Blob Storage.

Looking to catch you in Part 3.

Building a Real Time Data Pipeline in Azure – Part 1

Previously I walked through the process of using the Amazon Kinesis Service to build a real time analytics pipeline in which a bunch of data was ingested and then processed with the results pushed out to a data store which we could then query at our leisure. You can find the start of this series here.

This same ability is available in Microsoft Azure using a combination of Event Hubs, Stream Analytics, and CosmosDB. Let's walk through creating this pipeline.

Event Hubs

Event Hubs in Azure are similar to Kinesis Firehose in that the aim is to intake a lot of data at scale. This is where it differs from something like Service Bus or Event Grid both of which are more geared towards enabling event driven programming.

To create an Event Hub you need to first create a namespace; it is within these namespaces that you create the hubs you will send data to. To create the namespace, search for and select Event Hubs by Microsoft.

One of the important aspects here is the determination of your throughput units which determines how much load your Hub can accommodate. For a simple demo, you can leave it at 1, for more advanced scenarios you would increase it.

Enable Auto Inflate allows you to specify the minimum number of throughput units but lets Azure automatically increase the number as needed.

Once you have your Namespace, click the local Add button to add your Hub. As a convention, I always suffix these with hub.

When creating the Hub you need to be aware of the Partition count. Partitions are an important part of this sort of big data processing, as they enable data sharding, which lets the system better balance the load.

The number here MUST be in the range of 2 to 32. You should try to think of a logical grouping of your data so that you can balance it. For example, in my sample I only have 10 user Ids, so I create 10 partitions and each one handles an individual user. In a real life scenario I might have far more users, so this would not work.

Regardless, selecting the appropriate partition key is essential for maximizing the processing power of Event Hubs (and other stream services).

This bit of documentation explains this better: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features

Connecting to your Hub

So now that you have a hub up we need to write events to it. This can be accomplished in a variety of ways, but I have not yet found a way to do it via API Management. I sense that this sort of data processing is of a different nature than pure Event Driven programming.

For starters you will need the Microsoft.Azure.EventHubs NuGet package which will make writing to the endpoint much easier.

Within your application you need to create a client via EventHubClient; this is created using your Endpoint and Shared Access Key.

You can get these values in one of two ways: globally or locally.

  • At the Namespace level, you can click on Shared access policies to work with policies that apply to ALL Hubs within that namespace – by default Microsoft expects this is what you will do and so creates the RootManageSharedAccessKey here
  • At the Hub level, you can click on Shared access policies to work with policies that apply ONLY to that Hub – this is what I recommend as I would rather not have my Namespace that open

Regardless of how you do this, a policy can support up to three permissions: Manage, Send, and Listen. In the context of eventing these should be pretty self explanatory.

Returning to the task at hand, we need to select the appropriate policy and copy the Connection String – primary key value. As a note, if you copy this value at the Namespace level you will NOT see the EntityPath included in the connection string, if you copy at the Hub level you will.

EntityPath can be applied via EventHubsConnectionStringBuilder. The value is the name of the Hub you will connect to within the namespace.

Selection_002

Again, if you copy the Connection String from the Hub level you can OMIT the EntityPath specification that I have in the code above. I think that is the better approach anyway.
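
Roughly, the client setup in that screenshot boils down to the following sketch (the connection string and hub name are placeholders):

using Microsoft.Azure.EventHubs;

// Connection string copied at the Namespace level -- note there is no EntityPath in it
var connectionString = "Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=Send;SharedAccessKey=<key>";

// EntityPath identifies the specific Hub within the namespace
var builder = new EventHubsConnectionStringBuilder(connectionString)
{
    EntityPath = "<your-hub-name>"
};

var eventHubClient = EventHubClient.CreateFromConnectionString(builder.ToString());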

Sending Events

Once we have our client up and configured we can start to send events (or messages) to our Hub. We do this by calling SendAsync on the EventHubClient instance. The message can be passed in a variety of ways, but the standard is as a UTF8 byte array. This is easy to do with the Encoding class in .NET.

Selection_003
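
Again, roughly what the sending code looks like; the payload shape here is made up purely for illustration, it assumes the eventHubClient created above, and it runs inside an async method:

using System.Text;
using Newtonsoft.Json;

// Serialize the payload and hand it to the Hub as a UTF8 byte array
var transaction = new { Symbol = "MSFT", Shares = 100, UserId = 7 };
var json = JsonConvert.SerializeObject(transaction);

await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(json)));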

As with any sort of test involving streaming you need to make sure you generate a considerable amount of data so you can see the ingestion rates in the Hub metrics.

Ok so, we now have events streaming from our apps up to the Hub, now we need to do something with our data. In the next section, we will set up a Stream Analytics Job that runs against our incoming data, produces results, and drops the data into a Blob storage container.

Part 2 – here

Getting Started with Azure Event Grid

One of the most interesting scenarios the Cloud gives rise to is better support for building event driven backends. Such backends are necessary as the complexity and scale of applications grow and developers want to decouple service references. Additionally, cloud providers are raising events from their own services and allowing developers to hook into them; this is most noticeable on AWS.

While this trend is not new on Azure (Event Hubs have existed for a while), support for it has lagged compared to AWS. But by introducing Event Grid, Microsoft seems to be positioning itself to support the same sort of event driven programming allowed within Amazon, with the added benefit of additional flexibility.

What is Event Grid?

In the simplest sense, EventGrid is a router. It allows events to be posted to a topic (similar to a Pub/Sub) and can support multiple subscriptions, each with differing configurations, including which eventType to support and which matching subjects will be sent to subscribers.

In effect, EventGrid allows you to define how you want the various services you are using to communicate. In that way, it can potentially provide a greater level of flexibility, I feel, than AWS, whose eventing generally lacks a filtering mechanism.

Creating an Event Topic

So, the first thing to understand is that the concept of a topic will appear twice. The first is the Event Grid resource itself, which is a topic in its own right.

Choose to Add a Resource and search for event grid. Select Event Grid Topic from Microsoft from the returned results.

By default, the name of the topic will match the given name of the Event Grid so, pick wisely if this is for something where the name will need to have meaning.

Once this completes we will need to create a subscription.

Before you create a subscription

When you create a subscription you will be asked to give an endpoint type. This is where events matched to the topic will be sent. If you use a standard Azure service (i.e. Event Hubs) you can select it and go; however, if you use an Azure Function you will need to do something special to ensure the subscription can be created.

If you fail to follow this, you will receive (at the time of this writing) an undefined error in the Portal. The only way to get more information is to use the Azure CLI to create the subscription.

The error is in reference to a validation event that is sent to the prescribed endpoint to make sure it is valid. There is a good guide on doing this here. Essentially, we have to look for a certain event type and return the provided validation code as proof that we own the event endpoint.
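
To make that concrete, here is a simplified sketch of what that handshake can look like in an HTTP-triggered Azure Function, using the Microsoft.Azure.EventGrid models (this is my own rough take on the linked guide, and the function name is arbitrary):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.EventGrid;
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public static class EventGridHandler
{
    [FunctionName("EventGridHandler")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        var body = await new StreamReader(req.Body).ReadToEndAsync();
        var events = JsonConvert.DeserializeObject<EventGridEvent[]>(body);

        foreach (var eventGridEvent in events)
        {
            // The subscription handshake: echo the validation code back to Event Grid
            if (eventGridEvent.EventType == EventTypes.EventGridSubscriptionValidationEvent)
            {
                var data = ((JObject)eventGridEvent.Data).ToObject<SubscriptionValidationEventData>();
                return new OkObjectResult(new SubscriptionValidationResponse
                {
                    ValidationResponse = data.ValidationCode
                });
            }

            // Otherwise this is a real event -- handle eventGridEvent.Data here
        }

        return new OkResult();
    }
}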

I do not believe you have to do this with other Azure service specific endpoints, but you certainly have to do it if your endpoint is going to be of type WebHook.

The destination for this Event Grid subscription is the Azure Function Url.

Regardless of your endpoint type…

You will need to setup the various filtering mechanisms that indicate which events, when received, are passed through your subscription to the endpoint.

Most of these are fairly self explanatory except for the Event Schema field which refers to the structure of the data coming through the topic. It is necessary for the subscription to understand the arrangement so it can be analyzed.

The default is the pre-defined EventGridEvent which looks like this:

[
    {
        "id": "string",
        "eventType": "string",
        "subject": "string",
        "eventTime": "string",
        "data": "object",
        "dataVersion": "string"
    }
]

The key here is the data object which will contain your payload.

I am not sure of the other schemas as I have not explored them, mainly because the .NET EventGrid libraries have good native support for the EventGridEvent schema.

Let’s Send an Event

There are quite a few ways to send events to EventGrids. For any attempt you are going to need your Topic Endpoint which you can find on the Overview section for your Event Grid. This is the Url you will submit new events to for propagation within Azure, it is also the value you will need to get from any Azure service that you want to submit events to the Event Grid.

Disclaimer: At the time of this writing, there was some weirdness in the portal where you needed to use the CLI to route events from Azure services (like Storage) to an EventGrid.

Receiving Events from an Azure Resource
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-event-quickstart?toc=%2fazure%2fevent-grid%2ftoc.json

Within this tutorial you are shown how to use the Azure CLI (via the command shell or through the portal) to create the appropriate subscriptions. Do note that, once created, you will not see this listed in the portal

Send a Custom Event
This is probably the case you came here to read about. Principally, sending custom events involves posting to the Event Grid Topic Url with a schema for the body that matches a subscription.

Doing this is easiest if you use the Microsoft.Azure.EventGrid NuGet library, which exposes the PublishEventsAsync method; here is an example:

Selection_001
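
If the screenshot is hard to read, the call is roughly the following sketch; the topic hostname, key, and event payload are all placeholders, and the call sits inside an async method:

using System;
using System.Collections.Generic;
using Microsoft.Azure.EventGrid;
using Microsoft.Azure.EventGrid.Models;

// The "topic hostname" is the Topic endpoint's host name, without the https:// or the path
var topicHostname = "<your-topic>.westus2-1.eventgrid.azure.net";
var topicCredentials = new TopicCredentials("<your aeg-sas-key>");

var client = new EventGridClient(topicCredentials);

await client.PublishEventsAsync(topicHostname, new List<EventGridEvent>
{
    new EventGridEvent
    {
        Id = Guid.NewGuid().ToString(),
        EventType = "MyApp.Orders.Created",   // should match your subscription's event type filter
        Subject = "orders/123",
        EventTime = DateTime.UtcNow,
        Data = new { OrderId = 123 },
        DataVersion = "1.0"
    }
});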

You can also send to this endpoint manually using the HttpClient library, but I recommend using the NuGet package. To get a better idea of what is in there, here is the source page: https://github.com/Azure/azure-sdk-for-net/tree/psSdkJson6/src/SDKs/EventGrid/DataPlane

Keep in mind that this will send an event using the EventGridEvent schema, which is what I recommend.

How do I test that things are working…

Once you have things in place, I recommend selecting the Azure Function and clicking the Run button. This will enter into a mode where you can see a streaming log at the bottom. This is the stream for the function and is not relegated to just what is passed via testing.

Once you have this up, send an event to your Event Grid Topic and wait for a log message to appear which will tell you things are connected correctly.

What are my next steps?

This gets into a bigger overall point but, in general, I do NOT recommend using the Event Grid Topic Url for Production because it couples you to a Url that cannot be versioned and breaks the identity of your API.

Instead, as should be the case when you use Azure Functions you need to set things up behind an Azure API Management service. This lets you customize what the Url looks like, version, and maintain better control over its usage.

Additionally, in an event driven system you should view events as a way to maximize throughput so users should be posting to this endpoint and the event should fan out.

To put our Event Grid Topic endpoint behind an Azure API Management Route, do this:

  1. Create an API Management service instance (this will take time) and wait for it to activate
  2. Create a Blank API
    • Tip: When you create the API you will be asked to select a Product. Without getting too much into what this is, select Unlimited to make things easy on yourself
  3. Access your API definition via the APIs link navigation bar
  4. Add a POST route –  you can provide whatever values you want to the required fields
  5. Copy the aeg-sas-key from the Access Keys section under your Event Grid Topic and make this a required header for your POST route
  6. Once the route is created, select the Edit icon for the Backend
  7. Select Http(s) endpoint and paste the Event Grid Topic Url, make sure to NOT include the /events at the end of this URL
    Selection_003
  8. Hit Save
  9. Click on your API Operation (POST route) in the selection menu to the left
    Selection_004
  10. Select Inbound Processing
  11. Change the Backend line such that it reads with the POST method and the resource is /events
  12. Hit Save and you can click Test to check this. Building the payload is a bit tedious. Use the source here to understand how the JSON should look for the EventGridEvent that I assume you are sending.

Ok so let’s recap.

The first bit of this is to create an Azure API Management service instance and then create a POST path. Azure API Management is a huge topic that I have covered part of in the past. It is an essential tool when building a MicroService architecture in Azure as it allows for unification of different service types (serverless, containerized, traditional, etc) into what appears to be a single unified API.

All requests to Azure Event Grid require the aeg-sas-key so, we make sure that Azure API Management does not allow any requests to the proxy to come through unless that header is at least present. Event Grid will determine its validity.

Once the general route is created we need to tell it where to forward the request it receives. This is simply the Event Grid Topic endpoint that we can get from the Event Grid Topic Overview page in the Azure Portal. However, we do NOT want to take the entire Url as we will need to use Url Rewriting instead of Url Forwarding. So, I recommend taking everything but the trailing /events which we will use in the next section. Be sure to also check Override to allow for text entry into the Url field.

Ok, now let's complete the circle. You need to click Inbound Processing and change the default processing from URL Forwarding to URL Rewrite. There is a bit of a quirk here where you cannot leave the text field for your backend blank. This is where you will want to drop the /events that we omitted from our backend URL.
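
For reference, if you open the policy editor instead of clicking through the designer, the inbound section for this operation ends up looking roughly like this (the backend URL is a made-up example of a Topic endpoint, minus the trailing /events):

<inbound>
    <base />
    <!-- Reject calls that do not at least carry the Event Grid key header -->
    <check-header name="aeg-sas-key" failed-check-httpcode="401" failed-check-error-message="aeg-sas-key header is required" ignore-case="true" />
    <!-- The backend is the Topic endpoint from the portal, minus the trailing /events -->
    <set-backend-service base-url="https://mytopic.westus2-1.eventgrid.azure.net/api" />
    <!-- Rewrite whatever operation path we exposed to the /events resource Event Grid expects -->
    <rewrite-uri template="/events" />
</inbound>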

You will want to use the Test feature to test this. I provided the source code for the Azure Event Grid EventGridEvent class so you can see how the JSON needs to be in test.

Conclusion

Event Driven Programming is a very important and vital concept in enabling high throughput endpoints. The design is inherently serverless and enables developers to tap into more of Azure’s potential and easily declare n number of things that happen when an endpoint is called.

API Management is a vital tool in creating a unified and consistent-looking interface for external developers to use. It comes with many tools, including header enforcement and event body validation and transformation, and it can be integrated with an authentication mechanism. If you are going to create a Microservice type API, or if you intend to break up a monolith, API Management is a great way to get free versioning and abstraction.

The biggest thing to remember is that Event Grid is geared towards enabling Event Driven scenarios, it is not designed for processing a high number of events, that is the realm of Event Hubs. If you were going to do real time processing, you would feed your data pipeline into Event Hubs and route those events to Event Grid (potentially).

More information here

MVP for another year

It is with great humility that I announce Microsoft’s decision to renew my MVP status for an additional  year. While it is my second renewal, it comes as the first true renewal that I have had since being selected in 2016.

What I mean by that is, after I was selected in 2016 the program's renewal cycle changed and, as part of the change, I was grandfathered into the MVP program for 2017 into 2018. This meant it would be my accomplishments in 2017 that would dictate whether my MVP status would continue.

I spoke and blogged quite a bit in early 2017 but, shut things down around August to focus on my wedding and honeymoon. What’s more, throughout 2017 I was tasked with a large $4mil web project using NodeJS, AWS, and ReactJS for West Monroe. I was worried as this certainly drew my focus away from what got me my MVP. In addition, I decided to also refocus on the web and away from Xamarin (this as part of an overall decision to focus more on the Cloud side of things).

2018 has also not been easy as the AWS project finishes up and I celebrate the birth of my first child, my son Ethan. But I am committed to finding the balance and have already spoken at two conferences (CodeMash and Chicago Code Camp), have been selected to speak at TechBash, and have abstracts out to VSLive.

In the end, my willingness to share my ideas here and the awesome people who have to read what I wrote and even share some of my articles on forums and StackOverflow, helped get that MVP renewal and so, I send out a Thank You to all.