Private Endpoints with Terraform

Warning: This is a fairly lengthy one – if you are just here for the code: jfarrell-examples/private-endpoint-terraform (github.com) – Cheers.

In my previous post I talked about the need for security to be a top consideration when building apps for Azure, or any cloud for that matter. In that post, I offered an explanation of the Private Endpoint feature, which allows Azure services to be reached privately from within a Virtual Network (vnet).

While this is important, I decided to take my goals one step further by leveraging Terraform to create this outside of the portal. Infrastructure as Code (IaC) is a crucial concept for teams that wish to ensure consistency and predictability across environments. While there exists more than one operating model for IaC, the concepts are the same:

  • Infrastructure configuration and definition should be a part of the codebase
  • Changes to infrastructure should ALWAYS be represented in scripts which are run continuously to mitigate drift
  • There should be a defined non-manual action which causes these scripts to be evaluated against reality

Terraform is a very popular tool to accomplish this, for a number of reasons:

  • Its HashiCorp Configuration Language (HCL) tends to be more readable than the formats used by ARM (Azure Resource Manager) or AWS CloudFormation
  • It supports multiple clouds, both in configuration and in practice. This means a single script could manage infrastructure across AWS, Azure, Google Cloud, and others.
  • It is free

However, it is also important to note Terraform's weakness as a third-party product. Microsoft and the other cloud vendors do not officially support the tool and, as such, its providers tend to lag behind the native tooling for each platform. This means certain bleeding edge features may not be available in Terraform. Granted, one can mitigate this by embedding a native script or template within the Terraform file.
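For example, the azurerm provider ships an escape hatch that deploys a raw ARM template from within a Terraform run. The sketch below is purely illustrative; the deployment name and empty template are placeholders, not something from the repo above:

# hedged example: embed a native ARM template for a feature the provider does not yet expose
# assumes a resource group resource named "rg" is defined elsewhere in the script
resource "azurerm_resource_group_template_deployment" "bleeding_edge" {
  name                = "bleeding-edge-feature"   # hypothetical deployment name
  resource_group_name = azurerm_resource_group.rg.name
  deployment_mode     = "Incremental"

  # inline ARM JSON; in practice this usually comes from file("template.json")
  template_content = jsonencode({
    "$schema"      = "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"
    contentVersion = "1.0.0.0"
    resources      = []   # the native resources Terraform cannot yet express would go here
  })
}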

All in all, Terraform is a worthwhile tool to have at one’s disposal given the use cases it can support. I have yet to observe a situation in which there was something a client was relying on that Terraform did not support.

How do Private Endpoints Work?

Understanding how Private Endpoints work in Azure is a crucial step to building them into our Infrastructure as Code solution. Here is a diagram:

In this example, I am using an Azure App Service (Standard S1 SKU), which allows me to integrate with a subnet within the vnet. Once my request leaves the App Service it arrives at a Private DNS Zone which is linked to the vnet (the diagram shows it as part of the vnet; that is not strictly true since it is a global resource, but for the purposes of this article we can think of it as part of the vnet).

Within this DNS Zone we deploy an A record with a specific name matching the resource we are targeting. This gets resolved to the private IP of a NIC (network interface) that effectively represents our service. For its part, the service is not actually in the vnet; rather, it is configured to only allow connections from the private endpoint. In effect, a tunnel is created to the service.

The result of this, as I said in the previous post, is that your traffic NEVER leaves the vnet. This is an improvement over the Service Endpoint offering, which only guarantees traffic will never leave the Azure backbone. That is fine for most things, but Private Endpoints offer an added level of security for your workloads.

Having said all that, let's walk through building this provisioning process in Terraform. For those who want to see code, this repo contains the Terraform script in its entirety. As this material is for educational purposes only, this code should not be considered production ready.


Create a Virtual Network with 3 subnets

Our first step, as it usually is with any secure application, is to create a Virtual Network (vnet). In this case we will give it three subnets. I will take advantage of Terraform's module concept to enable reuse of the definition logic. For the storage and support subnets we can use the first module shown below; for the apps subnet we can use the second, as its configuration is more complicated and I have not taken the time to unify the two definitions.

# normal subnet with service endpoints
# create subnet
resource "azurerm_subnet" "this" {
  name                 = var.name
  resource_group_name  = var.rg_name
  virtual_network_name = var.vnet_name
  address_prefixes     = var.address_prefixes
  service_endpoints    = var.service_endpoints

  enforce_private_link_endpoint_network_policies = true
  enforce_private_link_service_network_policies  = false
}

# output variables
output "subnet_id" {
  value = azurerm_subnet.this.id
}

# delegated subnet, needed for integration with App Service
# create subnet
resource "azurerm_subnet" "this" {
  name                 = var.name
  resource_group_name  = var.rg_name
  virtual_network_name = var.vnet_name
  address_prefixes     = var.address_prefixes
  service_endpoints    = var.service_endpoints

  delegation {
    name = var.delegation_name

    service_delegation {
      name    = var.service_delegation
      actions = var.delegation_actions
    }
  }

  enforce_private_link_endpoint_network_policies = false
  enforce_private_link_service_network_policies  = false
}

# output variables
output "subnet_id" {
  value = azurerm_subnet.this.id
}

Pay very close attention to the enforce properties. These are set in a very specific way to enable our use case. Do not worry though: IF you make a mistake, the error messages reported back from ARM are pretty helpful for making corrections.

Here is an example of calling these modules:

# apps subnet
module "apps_subnet" {
  source             = "./modules/networking/delegated_subnet"
  rg_name            = azurerm_resource_group.rg.name
  vnet_name          = module.vnet.vnet_name
  name               = "apps"
  delegation_name    = "appservice-delegation"
  service_delegation = "Microsoft.Web/serverFarms"
  delegation_actions = [ "Microsoft.Network/virtualNetworks/subnets/action" ]
  address_prefixes   = [ "10.1.1.0/24" ]
}

# storage subnet
module "storage_subnet" {
  source = "./modules/networking/subnet"
  depends_on = [
    module.apps_subnet
  ]
  rg_name           = azurerm_resource_group.rg.name
  vnet_name         = module.vnet.vnet_name
  name              = "storage"
  address_prefixes  = [ "10.1.2.0/24" ]
  service_endpoints = [ "Microsoft.Storage" ]
}

One tip I will give you for building up infrastructure: while the Azure documentation is very helpful, I will often create the resource in the portal and choose the Export Template option. Generally, it's pretty easy to map the ARM syntax to Terraform and glean the appropriate values – I know the above can seem a bit mysterious if you've never gone this deep.

Create the Storage Account

Up next we will want to create our storage account. Our App Service will have a dependency on the storage account, as it will hold the Storage Account primary connection string in its App Settings (this is not the most secure option; we will cover that another time).

I always advise the teams I work with to ensure a Storage Account is set to completely Deny public traffic – there are just too many reports of security breaches which start with a malicious user finding sensitive data on a publicly accessible storage container. Lock it down from the start.

resource "azurerm_storage_account" "this" {
name = "storage${var.name}jx02"
resource_group_name = var.rg_name
location = var.rg_location
account_tier = "Standard"
account_kind = "StorageV2"
account_replication_type = "LRS"
network_rules {
default_action = "Deny"
bypass = [ "AzureServices" ]
}
}
# outputs
output "account_id" {
value = azurerm_storage_account.this.id
}
output "account_connection_string" {
value = azurerm_storage_account.this.primary_connection_string
}
output "account_name" {
value = azurerm_storage_account.this.name
}

One piece of advice, however: make sure you add an IP rule so that your local machine can still communicate with the storage account as you update it – it does support CIDR notation. Additionally, the Terraform documentation notes a property virtual_network_subnet_ids in the network_rules block – you do NOT need this for what we are doing.
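For illustration, here is the same storage definition with such an IP rule added; the CIDR range below is a placeholder for your own public IP or range, not a value from the repo:

# same storage account as above, with a developer IP range allowed through the firewall
resource "azurerm_storage_account" "this" {
  name                     = "storage${var.name}jx02"
  resource_group_name      = var.rg_name
  location                 = var.rg_location
  account_tier             = "Standard"
  account_kind             = "StorageV2"
  account_replication_type = "LRS"

  network_rules {
    default_action = "Deny"
    bypass         = [ "AzureServices" ]
    ip_rules       = [ "203.0.113.0/24" ]  # placeholder CIDR for your local machine or office range
  }
}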

Now that this is created we can create the App Service.

Create the App Service

Our App Service needs to be integrated with our vnet (reference the diagram above) so as to allow communication with the Private DNS Zone we will create next. This is accomplished via a swift connection. Below is the definition used to create an Azure App Service which is integrated with a specific Virtual Network.

# create the app service plan
resource "azurerm_app_service_plan" "this" {
  name                = "plan-${var.name}"
  location            = var.rg_location
  resource_group_name = var.rg_name
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

# create the app service
resource "azurerm_app_service" "this" {
  name                = "app-${var.name}ym05"
  resource_group_name = var.rg_name
  location            = var.rg_location
  app_service_plan_id = azurerm_app_service_plan.this.id

  site_config {
    dotnet_framework_version = "v5.0"
  }

  app_settings = {
    "StorageAccountConnectionString" = var.storage_account_connection_string
    "WEBSITE_DNS_SERVER"             = "168.63.129.16"
    "WEBSITE_VNET_ROUTE_ALL"         = "1"
    "WEBSITE_RUN_FROM_PACKAGE"       = "1"
    "EventGridEndpoint"              = var.eventgrid_endpoint
    "EventGridAccessKey"             = var.eventgrid_access_key
  }
}

# create the vnet integration
resource "azurerm_app_service_virtual_network_swift_connection" "swiftConnection" {
  app_service_id = azurerm_app_service.this.id
  subnet_id      = var.subnet_id
}

Critical here is the inclusion of two app settings shown in the Terraform:

  • WEBSITE_DNS_SERVER set to 168.63.129.16
  • WEBSITE_VNET_ROUTE_ALL set to 1

Reference: Integrate app with Azure Virtual Network – Azure App Service | Microsoft Docs

This information is rather buried in the above link and it took me effort to find it. Each setting has a distinct purpose. WEBSITE_DNS_SERVER indicates where outgoing requests should look for name resolution. You MUST have this value to target the Private DNS Zone linked to the vnet. The WEBSITE_VNET_ROUTE_ALL setting tells the App Service to send ALL outbound calls into the vnet (this may not be practical depending on your use case).

For those eagle-eyed readers, you can see settings for an Event Grid topic here. In fact, the code shows how to integrate Private Endpoints with Azure Event Grid; the technique is similar. We won't cover it as part of this post, but it's worth understanding.

Create the Private DNS Zone

Ok, this is where things start to get tricky, mainly due to certain rules you MUST follow to ensure the connection is made successfully. What is effectively going to happen is that our DNS Zone name is PART of the target hostname we need to match. The match will then resolve to the private IP of our NIC (part of the private endpoint connection).

Here is the definition for the storage DNS Zone. The name of the zone is crucial, as such I have included how the module is called as well.

# create dns zone resource
resource "azurerm_private_dns_zone" "this" {
  name                = var.name
  resource_group_name = var.rg_name
}

# create link to vnet
resource "azurerm_private_dns_zone_virtual_network_link" "this" {
  name                  = "vnet-link"
  resource_group_name   = var.rg_name
  private_dns_zone_name = azurerm_private_dns_zone.this.name
  virtual_network_id    = var.vnet_id
}

# define outputs
output "zone_id" {
  value = azurerm_private_dns_zone.this.id
}

output "zone_name" {
  value = azurerm_private_dns_zone.this.name
}

# how it is called from the main Terraform file
module "private_dns" {
  source = "./modules/networking/dns/private_zone"
  depends_on = [
    module.vnet
  ]
  name    = "privatelink.blob.core.windows.net"
  rg_name = azurerm_resource_group.rg.name
  vnet_id = "/subscriptions/${data.azurerm_subscription.current.subscription_id}/resourceGroups/${azurerm_resource_group.rg.name}/providers/Microsoft.Network/virtualNetworks/${module.vnet.vnet_name}"
}

Ok, there is quite a bit to unpack here; let's start with the name. The name here is mandatory. If your Private Endpoint will target a Storage Account, the name of the DNS Zone MUST be privatelink.blob.core.windows.net. Eagle-eyed readers will recognize this as the standard domain for the Blob service within a Storage Account.

This rule holds true with ANY other service that integrates with Private Endpoint. The full list can be found here: Azure Private Endpoint DNS configuration | Microsoft Docs

A second thing to note in the call is the structure of the value passed to the vnet_id parameter. For reasons unknown, Terraform did NOT resolve this based on context, so I ended up having to build it myself. You can see the usage of the data "azurerm_subscription" block in the source code. All it does is give me a reference to the current subscription so I can get the subscription ID for the resource ID string.
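The data block itself is tiny. Here is a sketch of how the pieces fit together; the locals wrapper is mine for readability, while the script above simply interpolates the string inline:

# reference to the current subscription; all we need from it is the subscription id
data "azurerm_subscription" "current" {}

# assembling the vnet resource id by hand (the same string passed to vnet_id above)
locals {
  vnet_id = "/subscriptions/${data.azurerm_subscription.current.subscription_id}/resourceGroups/${azurerm_resource_group.rg.name}/providers/Microsoft.Network/virtualNetworks/${module.vnet.vnet_name}"
}

An alternative would be to expose the vnet's full resource id as an output from the vnet module, but the inline string is what the script shown above uses.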

Finally, notice that, following the creation of the Private DNS Zone, we are linking our vnet to it via the azurerm_private_dns_zone_virtual_network_link resource. Effectively, this informs the vnet that it can use this DNS Zone when routing calls within the network – this ties back to the flags we set on the Azure App Service.

Now we can create the Private Endpoint resource proper.

Create the Private Endpoint

First, I initially thought you had to create one Private Endpoint per target; however, later reading suggests that might not be the case. I have not had time to test this so, for this section, I will assume it is one per target.

When you create a private endpoint the resource will get added to your resource group. However, it will also prompt the creation of a Network Interface resource. As I have stated, this interface is effectively your tunnel to the resource connected through the Private Endpoint. This interface will get assigned an IP consistent with the CIDR range of the subnet specified to the private endpoint. We will need this to finish configuring routing within the DNS Zone.

Here is the creation block for the Private Endpoint:

# create the resource
resource "azurerm_private_endpoint" "this" {
  name                = "pe-${var.name}"
  resource_group_name = var.rg_name
  location            = var.rg_location
  subnet_id           = var.subnet_id

  private_service_connection {
    name                           = "${var.name}-privateserviceconnection"
    private_connection_resource_id = var.resource_id
    is_manual_connection           = false
    subresource_names              = var.subresource_names
  }
}

# outputs
output "private_ip" {
  value = azurerm_private_endpoint.this.private_service_connection[0].private_ip_address
}

I am theorizing you can specify multiple private_service_connection blocks, thus allowing the private endpoint resource to be shared. However, I feel this might make resolution of the private IP harder. More research is needed.

The private_service_connection block is critical here as it specifies which resource we are targeting (private_connection_resource_id) and which service(s), or group(s), within that resource we specifically want access to. For example, here we are targeting our Storage Account and want access to the blob service – here is the call from the main file:

# create private endpoint
module "private_storage" {
  source = "./modules/networking/private_endpoint"
  depends_on = [
    module.storage_subnet,
    module.storage_account
  ]
  name              = "private-storage"
  rg_name           = azurerm_resource_group.rg.name
  rg_location       = azurerm_resource_group.rg.location
  subnet_id         = module.storage_subnet.subnet_id
  resource_id       = module.storage_account.account_id
  subresource_names = [ "blob" ]
}

The key here is the output variable private_ip which we will use to configure the A record next. Without this value, requests from our App Service being routed through the DNS Zone will not be able to determine a destination.

Create the A Record

The final bit here is the creation of an A Record in the DNS Zone to give a destination IP for incoming requests. Here is the creation block (first part) and how it is called from the main Terraform file (second part).

# create the resources
resource "azurerm_private_dns_a_record" "this" {
  name                = var.name
  zone_name           = var.zone_name
  resource_group_name = var.rg_name
  ttl                 = 300
  records             = var.ip_records
}

# calling from main terraform file
module "private_storage_dns_a_record" {
  source = "./modules/networking/dns/a_record"
  depends_on = [
    module.private_dns,
    module.private_storage
  ]
  name       = module.storage_account.account_name
  rg_name    = azurerm_resource_group.rg.name
  zone_name  = module.private_dns.zone_name
  ip_records = [ module.private_storage.private_ip ]
}

It is that simple. The A record is added to the DNS Zone and it's done. But LOOK OUT, we are back to the naming aspect again. The name here MUST be the name of your service, or at least the unique portion of the URL used when referencing the service. I will explain in the next section.

Understanding Routing

This is less obvious with the storage account than it is with Event Grid or other services. Consider what your typical storage account endpoint looks like:

mystorageaccount.blob.core.windows.net

Now here is the name of the attached Private DNS Zone: privatelink.blob.core.windows.net

Pretty similar, right? Now look at the name of the A record – it will be the name of your storage account. Effectively what happens here is the calling URL becomes mystorageaccount.privatelink.blob.core.windows.net. And yet the code we deploy can still call mystorageaccount.blob.core.windows.net and work fine. Why? The answer is here: Use private endpoints – Azure Storage | Microsoft Docs

Effectively, the typical endpoint gets translated to the private one above, which then gets matched by the Private DNS Zone. The way I further understand it is, if you were calling this from a peered Virtual Network (on-premises or in Azure), you would NEED to use the privatelink endpoint.

Where this got hairy for me was with Event Grid because of the values returned relative to the values I needed. Consider the following Event Grid definition:

resource "azurerm_eventgrid_topic" "this" {
name = "eg-topic-${var.name}jx01"
resource_group_name = var.rg_name
location = var.rg_location
input_schema = "EventGridSchema"
public_network_access_enabled = false
}
output "eventgrid_topic_id" {
value = azurerm_eventgrid_topic.this.id
}
output "event_grid_topic_name" {
value = azurerm_eventgrid_topic.this.name
}
output "eventgrid_topic_endpoint" {
value = azurerm_eventgrid_topic.this.endpoint
}
output "eventgrid_topic_access_key" {
value = azurerm_eventgrid_topic.this.primary_access_key
}

The value of the output variable eventgrid_topic_name is simply the name of the Event Grid instance, as expected. However, if you inspect the value of the endpoint you will see that it incorporates the region into the URL. For example:

https://someeventgrid.eastus-1.eventgrid.azure.net/api/events

Given the REQUIRED name of a DNS Zone for the Event Grid Private Endpoint is privatelink.eventgrid.azure.net, my matched URL would be someeventgrid.privatelink.eventgrid.azure.net, which won't work – I need the name of the A record to be someeventgrid.eastus-1, but this value was not readily available. Here is how I got it:

module "private_eventgrid_dns_a_record" {
source = "./modules/networking/dns/a_record"
depends_on = [
module.private_dns,
module.private_storage,
module.eventgrid_topic
]
name = "${module.eventgrid_topic.event_grid_topic_name}.${split(".", module.eventgrid_topic.eventgrid_topic_endpoint)[1]}"
rg_name = azurerm_resource_group.rg.name
zone_name = module.private_eventgrid_dns.zone_name
ip_records = [ module.private_eventgrid.private_ip ]
}

It is a bit messy but the implementation here is not important. I hope this shows how the construction of the Private Link address through the DNS Zone is what allows this to work, and emphasizes how important the naming of the DNS Zone and A record are.

In Conclusion

I hope this article has shown the power of Private Endpoint and what it can do for the security of your application. Security is often overlooked, especially with the cloud. This is unfortunate. As more and more organizations move their workloads to the cloud, they will have an expectation for security. Developers must embrace these understandings and work to ensure what we create in the Cloud (or anywhere) is secure by default.


Securing Access to Storage with Private Link

Security. Security is one of the hardest and yet most vital pieces to any application we build, whether on-premises or in the cloud. We see evidence of people not taking security seriously all the time in the news, in the form of data leaks, ransomware, and the like. No cloud can automatically safeguard you from attacks, but they do offer some very sophisticated tools to help you more easily bring a strong security posture to your organization.

In Azure, the release of the Private Link feature is a crucial means to ensure we safeguard access to our PaaS (Platform as a Service) deployments. Using Private Link you can create a secure tunnel from your vNet (Virtual Network) to the PaaS service (Azure Storage for example). By using this tunnel, you can ensure that NO TRAFFIC between your assets in the vNet and the dependent services traverse the public internet. Why is this important?

Encryption can definitely help ensure data is kept private while in transit, but it is not perfect, and for some industries (finance, insurance, healthcare, etc.) the privacy of data is not something that can be left to chance.

Defense in Depth

As with any security posture, its strength lies in the layers of protection, whether that is NVAs checking incoming traffic, microsegmentation of networks, and/or IP whitelisting; the key is to never rely solely on one measure to keep you safe.

Private Link, for example, keeps your traffic off the internet but does not protect you from a malicious actor accessing your storage account from a compromised machine. Ensuring you adhere to a defense in depth mentality is the best security, along with assuming and accepting you will have a breach so that the focus is not so much on preventing the breach but, rather, on limiting the damage an attacker can do.

Establish your Private Link endpoint (Storage)

Private Link is available for the following services:

  • Azure Machine Learning
  • Azure Synapse Analytics
  • Azure Event Hub
  • Azure Monitor
  • Azure Data Factory
  • Azure App Configuration
  • Azure-managed Disks
  • Azure Container Registry
  • AKS (Azure Kubernetes Service)
  • Azure SQL Database
  • Azure CosmosDB
  • Azure Database for Postgres
  • Azure Database for MySQL
  • Azure Database for MariaDB
  • Azure Event Grid
  • Azure Service Bus
  • Azure IoT Hub
  • Azure Digital Twins
  • Azure Automation
  • Azure Backup
  • Azure Key Vault
  • Azure Storage (all services)
  • Azure File Sync
  • Azure Batch
  • Azure SignalR Service
  • Azure Web Apps
  • Azure Search
  • Azure Relay
  • Azure Load Balancer (Standard Only)

Source: https://docs.microsoft.com/en-us/azure/private-link/availability

This visual shows what is going on under the hood:

Manage a Private Endpoint connection in Azure | Microsoft Docs

The Private Link provides a specific endpoint within a Private DNS Zone within the vNet, and this DNS zone is updated with the mapping for the endpoint. Therefore, for machines WITHIN the vNet you can use the typical endpoint and connection strings for services over the Private Link just as you would if it did not exist – Private DNS will ensure the address is resolved to the private link IP, which in turn allows communication with the connected PaaS service.

Creating the Initial Resources

The Virtual Network offering from Azure is a way to isolate your resources deployed to Azure, most commonly Virtual Machines and other features deployed through the IaaS (Infrastructure as a Service) model. By using vNets and Network Security Groups (NSGs) you can effectively control the incoming and outgoing traffic for this custom network you have created in Azure.

Here is a good link on creating a Virtual Network: https://docs.microsoft.com/en-us/azure/virtual-network/quick-create-portal

The short story is, for the purposes here, you can take the default CIDR for the Address Space (10.0.0.0/16) and the default Subnet CIDR (10.0.0.0/24). This will work fine for this bit (for those more familiar with CIDR you can adjust this to be more practical).
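If you prefer to script this step rather than click through the portal, a minimal Terraform sketch with those same default ranges might look like the following; the resource group, names, and region are made up for illustration:

resource "azurerm_resource_group" "demo" {
  name     = "rg-privatelink-demo"   # hypothetical resource group
  location = "eastus"
}

resource "azurerm_virtual_network" "demo" {
  name                = "vnet-privatelink-demo"
  resource_group_name = azurerm_resource_group.demo.name
  location            = azurerm_resource_group.demo.location
  address_space       = [ "10.0.0.0/16" ]   # default address space from the portal
}

resource "azurerm_subnet" "default" {
  name                 = "default"
  resource_group_name  = azurerm_resource_group.demo.name
  virtual_network_name = azurerm_virtual_network.demo.name
  address_prefixes     = [ "10.0.0.0/24" ]  # default subnet from the portal
}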

With this Virtual Network now set up, we can create our Storage Account. There is an option on the Networking tab for Private Endpoint. By choosing to Add private endpoint you can select the vNet and the subnet in which this Private Endpoint will be created.

A quick note: Private Link will ONLY allow access to a specific service within the storage account; you don't get unfettered access to all services, it's designed to be a pipe to a specific service. For this demo, I will select blob, as my file upload code will create and retrieve objects from the Azure Storage Blob service.

Keep the default settings for the Private DNS integration section.

For details on creating Azure Storage Accounts, follow this link: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal – please make sure you do not allow anonymous access to containers; this is the number one way data leaks are caused.

Let’s Access Our Blobs

So, right now we have a pretty useless setup. Sure it's secure, but we can't do anything with our Private Endpoint. We could deploy a Virtual Machine running IIS or Nginx (or whatever you prefer), throw some .NET Core code up there, and interact with the endpoint.

Before doing this, consult this link to understand what changes, if any, are needed to allow your application to talk to the Private Endpoint: https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns

That works but I have a different idea. One of the newer features in Azure App Service is the ability to integrate the App Service with a Virtual Network. Since App Service is a PaaS, it also means we are not required to patch and maintain an underlying OS; we can just deploy our code and indicate to Azure how much in the way of resources we need. Let's use this.

Prepare our VNet for App Service Integration

So, disclaimer: I am not a networking expert. Though I know enough to get by, there are people who are far more versed than I am. I say this because one of the requirements for Azure App Service VNet Integration is that the App Service be injected into an empty subnet.

To achieve this, I added a secondary address space to my VNet and created a single subnet within this new Address Space, called Apps. There is likely a better way to do this, but this is what worked for me.

Here is a screen shot of what my VNet Address space and Subnets look like:

VNet Address Space
VNet Subnets

Make sure this is created AHEAD of the next phase where we will create and join an App Service instance to this VNet.

Join an App Service to our Virtual Network

I won't go into how to create an Azure App Service; here is a link explaining it: https://docs.microsoft.com/en-us/azure/app-service/quickstart-dotnetcore?tabs=netcore31&pivots=development-environment-vs

Once the App Service is created, you can upload your code to it. For simplicity, I am posting gists with the code for reading and writing blob data from a Web API, but there are numerous examples you could draw from.

The StorageAccountConnectionString is the value straight from the Azure portal, no changes. This is the beauty of the Private DNS Zone that gets added to the Virtual Network when you add a Private Link. More on that later.

Deploy your application to your Azure AppService and add StorageAccountConnectionString to your Configuration blade, again, the value is verbatim what is listed in the Access Keys blade in the Storage Account.

You can now try to use your application and it should fail. The error, if you look in the logs, will be 403 Forbidden, which makes sense: we have NOT joined the App Service to the VNet yet. This proves our security is working – the storage account is NOT visible to anyone outside of our Virtual Network.

I expect you will want to test things locally with the Storage Account; it is onerous to only do your testing when deployed to Azure. While the Storage Account IS set up to allow connections via the Private Link, you can also poke a hole in the Storage Account firewall to allow specific IP addresses (or ranges). This can be found under the Networking blade of the Storage Account; look for the Firewall section. There is an option to add your current IP to the allowed list.

In the Azure App Service select the Networking blade. The very first option in the listing is VNet Integration – this is what you want to configure. Select the Click here to configure link.

Select the Add VNet option. This will open a panel with your available VNets and their respective Subnets. This is what we are after. You should see your Apps (or whatever you used) Subnet – select this. Press Ok.

The Configuration screen will update with the new details. Give it a few minutes for the stars to align before you test. Not super long but just long enough that if you test right away it might not work.

What are the results?

If everything is working, your App Service can talk to the Storage Account with NO CONFIGURATION changes. Yet, this is very secure since only the App Service can talk to the Storage Account over the web, lessening the chances of a data leak. There are other ways to further secure this, NSGs and Firewall rules being at the top of the list. But, this is a good start to creating a secure application.

Difference with Service Endpoints

You may be aware of another feature in Azure known as Service Endpoints, and make no mistake, they share a lot of similarities with Private Links, but the two are different.

The greatest difference is that Service Endpoints still expose the service's public IP; Private Links do not, they only ever use private IPs. Thus the two address distinctly different use cases. The Service Endpoint can limit access to the underlying service, but traffic is still sent to the service's public endpoint (albeit over the Azure backbone). With Private Link (Endpoints) the data NEVER leaves the VNet and is thus more secure.

MVP Ravi explains this difference in more detail: https://scomandothergeekystuff.com/2020/06/22/azure-service-endpoints-versus-azure-private-links/

Regardless of which you choose I implore you to consider security first and foremost in your applications. There are enough tools and features in Azure to keep security breaches to a minimum.

Single Purpose Events

Over the last few weeks Jeff Fritz and I have been slowly rolling out (darkly) support in KlipTok for tracking the live status of tracked channels. This feature leverages Twitch’s EventSub system that enables a sort of event driven programming for client applications. The integration of EventSub into KlipTok will enable the site to take the next step in terms of functionality and offerings.

Tracking stream status involves receiving callbacks for two different events: stream.online and stream.offline. I implemented support for this using Azure Service Bus with SQL Filters. The filters were organized as such:

EventType = ‘stream.online’ and EventVersion = ‘1’ => Stream is online

EventType = ‘stream.offline’ and EventVersion = ‘1’ => Stream is offline

Initial testing of this logic showed it worked as the events were received. Following the success of these small tests, I worked with Jeff to subscribe all channels that had opted into tracking. Once successful, I began monitoring the channels during control times (when the @csharpfritz stream was active) as well as at random points throughout the day. For the most part I had confirmed the functionality was working properly.

But then I started seeing some irregularities. @wintergaming is a popular StarCraft 2 Twitch channel, but I noticed that the entry never seemed to go offline. Digging deeper into Twitch's documentation I realized that, in fact, there is a difference between a channel being online and a channel being live; @wintergaming, for example, is NEVER offline – rather, he plays reruns when he is not live. According to the documentation, this is still a valid online status, a status which I was not accounting for. As our requirement for KlipTok was to note LIVE channels, a change needed to be made.

Many Options

As I sat down to address this I was faced with a few ways to go about it. A couple approaches which I considered were:

  • I could update the StreamOnlineUpdate subscription such that, if the type was NOT ‘live’ I should treat it as an offline event and delete the entry from the LiveChannel tracking table
  • I could update the StreamOnlineUpdate subscription such that, if the not ‘live’ type was detected the event would be redirected to the Offline subscription

Both of these were, in my mind, bad choices. For starters, taking Option 1 would create duplicative code between the Online and Offline subscriptions. Further, it would obscure the purpose of the OnlineUpdate whose intent is to handle the case of the channel being ‘online’. I decided to not pursue this.

Option 2 is a bit better since it avoids duplicating logic but, when creating event handlers, the intent should be as straightforward and clear as possible. Event redirection like this is foolish and only adds extra processing logic to the handler. It would be different if I was creating a new event to “advance the flow”. But, in this case, I am effectively using the subscriptions as a mechanism of logic checking.

So, I thought about it more deeply and I realized that I was restricting myself based on the naming I had chosen. Recall what I said earlier: "there is a difference between a channel being online and a channel being live". The solution lay in honoring this distinction Twitch was making in our system as well.

Thus, the solution I arrived at is to alter the names of the subscriptions as such:

  • StreamOnlineUpdate => StreamLiveUpdate
  • StreamOfflineUpdate => StreamNotLiveUpdate

By creating this distinction, I could now adjust the Service Bus SQL Filter to only send the message to StreamLiveUpdate if the channel is, in fact, live. In all other cases, the channel is NOT live and thus we should send to StreamNotLiveUpdate.

In effect, this enables the sort of Single Purpose Events which are ideal in complex systems which depend on low amounts of coupling to ensure the sanity of the maintainers.

Making it Work

The SQL Filter syntax of Service Bus works quite well (though I am still partial to what is offered through EventGrid) and enables clear definition of criteria. Unlike EventGrid however, the message itself cannot be analyzed (or if it can I have not found out how). Thus, we rely on the use of the Message class (from Microsoft.Azure.ServiceBus NuGet package) to apply custom UserProperties that we can use for filtering.

We end up defining the following for StreamLiveUpdate

EventVersion = ‘1’ and EventType = ‘stream.online’ and StreamType = ‘live’

EventVersion comes from Twitch so we can distinguish between different formats of the event (this approach is also highly advised for testing and development, to keep test events discrete from events already in play).

The EventType is the Twitch event being received. Our code also, upon knowing it is receiving a stream type event, offers the StreamType value as well, which will contain live, rerun, and other values indicating what the online stream type corresponds to.

For StreamNotLiveUpdate we define the following SQL Filter:

EventVersion = ‘1’ and (EventType = ‘stream.offline’ or (EventType = ‘stream.online’ and StreamType <> ‘live’))

You can see this combines our criteria for the normal case (EventType = ‘stream.offline’) and the exceptional case around an online event that is NOT of type live.
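The post does not cover how the subscriptions and their rules are provisioned, but for illustration, here is a hedged Terraform sketch of the two filters expressed as azurerm subscription rules. The namespace, topic, and resource group names are placeholders, and the argument names assume the 2.x azurerm provider:

# rule for the StreamLiveUpdate subscription: only genuinely live streams
resource "azurerm_servicebus_subscription_rule" "stream_live" {
  name                = "StreamLiveFilter"
  resource_group_name = "rg-kliptok"       # placeholder values below
  namespace_name      = "sb-kliptok"
  topic_name          = "stream-status"
  subscription_name   = "StreamLiveUpdate"
  filter_type         = "SqlFilter"
  sql_filter          = "EventVersion = '1' and EventType = 'stream.online' and StreamType = 'live'"
}

# rule for the StreamNotLiveUpdate subscription: offline events plus online-but-not-live (e.g. reruns)
resource "azurerm_servicebus_subscription_rule" "stream_not_live" {
  name                = "StreamNotLiveFilter"
  resource_group_name = "rg-kliptok"
  namespace_name      = "sb-kliptok"
  topic_name          = "stream-status"
  subscription_name   = "StreamNotLiveUpdate"
  filter_type         = "SqlFilter"
  sql_filter          = "EventVersion = '1' and (EventType = 'stream.offline' or (EventType = 'stream.online' and StreamType <> 'live'))"
}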

Conclusion

Through this approach we ensure our event handlers have but a single purpose, an application of the Single Responsibility Principle from SOLID design. The only time we should have to modify the Live event handler is if the meaning of a live channel changes. We are not redirecting or overloading this handler and obscuring its meaning. Instead we adjusted our understanding to better match the reality of the data we would be receiving. Thus, we are able to maintain a single purpose event pattern and control the complexity of the feature.

Common Misconception #4 – Duplication is bad

Perspectives on duplication are as varied and wide-ranging as any topic in software development. Newer programmers are often fed acronyms such as DRY (Don't Repeat Yourself) and constantly bombarded with warnings from senior programmers about the dangers of duplication. I can even recall getting these lessons when I was still in academia.

So is duplication actually bad? The simple answer is 'Yes', but the real answer is much more nuanced and boils down to one of the most common phrases in software engineering: 'it depends'. But what does it depend on? How can teams prevent themselves from going off the deep end and introducing leaky abstractions or excess complexity, all in the name of adhering to DRY?

It is about the Lifecycle

Whenever a block of code is authored it immediately transitions to something which must be maintained. I once heard a senior engineer remark “there is no such thing as new code. There is only legacy code and code which is not yet written”. So, once code is written we must immediately begin discussing its lifecycle.

Code evolves over time and this evolution should be encouraged – something that is actively pursued through attempts to decouple code and promote cohesion. Too often the reason something is “shared” is because we created a Person class in one system and felt that creating a second Person class in another system would be duplicative. However, in making this assumption, developers will unknowingly increase system coupling resulting in a far greater problem. A problem they could avoid if they considered the “lifecycle” of each Person class.

More prominently, this notion gets applied to business logic and, in that case, it is correct. Certain business logic absolutely has a standard lifecycle. In fact, each line of code you write will have a lifecycle, and this is what you need to use to decide whether duplicating something makes sense.

An example

When I was working as a Principal at West Monroe Partners some years ago, I was assigned to a project in which, through a combination of missteps, a multitude of mistakes had been made which hampered team progress and efficiency. One of these was a rather insane plan to share database entities through a NuGet package.

Database entities, in theory, do not change that often once established, but that is theory. More often, and especially as a system is being actively developed, they change constantly – this is especially true in the case of this project which had three active development teams all using the same database. The result was near constant updates across every project whenever a change was made – and failure to do so would often manifest as an error in a deployed environment since Entity Framework would complain the schema expected did not match.

While the team may have had decent intentions in reducing duplication by sharing entities, it is a high risk move in complex systems. In the best case, you end up with bloated class definitions and API calls where returned objects may or may not have all fields populated. This becomes even more true if you approach system design with a microservice-based mindset – each service should contain its own entities (unless you are sharing the DB, which is a different problem altogether).

Should all code be segregated then?

The short answer is "No". Again, we return to the point on lifecycle. In fact, this relates to the core principle in microservice design where services and their lifecycles are independent of each other. Spelt out: "no service should be reliant on another service in deployment" – if this rule is broken then the advantages of microservices are effectively lost. The lifecycle of each service must be respected.

It is the same with code. Code lifecycles must be understood and respected. Just because you define two Person class definitions does not mean you have created duplication, even if the definitions are the same. You are giving yourself the ability to change each over time according to system needs.

Some code – logging code or perhaps a common set of POCO classes – may need to be shared; this is where I tend to lean on custom NuGet feeds. But generally this is a last resort, as it is easy to go overboard with things like NuGet and fall into the "left-pad" problem – where you decompose everything so much that you wind up with an extensive chain of dependencies which need to be revved for a release. Link.

As with most things, there is a necessary balance to strike here and you should not expect to get it right immediately – frankly, the same lesson applies with microservices, where you never start with microservices; you create new services as needed.

Why is it a misconception?

I find that the concept of DRY is overused and, more often, taken way too literally. I think we can all agree that what is often meant by DRY is to ensure we don't need to update the same logic in multiple places. DRY is not telling us that having two Person classes is bad. It can be bad, but whether that is so is determined by circumstances and is not a hard and fast rule.

The misconception is dangerous because strict adherence to DRY can actually make our code LESS maintainable and sacrifice clarity for reduced keystrokes. As developers, we need to constantly be thinking and evaluating whether centralization and abstraction make sense or if we are doing it because we may be overthinking the problem or taking DRY too literally.

So I made a Stock Data App

I decided to build an Event Driven Stock Price Application using Event Grid, SignalR, and ReactJS. Just a little something to play with as I prepare to join Microsoft Consulting Services. I thought I would recount my experience here. First, here is what the flow looks like:

Figure 1 – Diagram of Stock App

While the diagram may look overbearing it really is quite simple:

  • Producer console app starts with some seed data of stock prices I gathered
  • It adjusts these values using some random numbers
  • The change in price is sent to an Event Grid topic with an EventType declared
  • The EventGrid subscriptions look for events with a matching EventType
  • Those that match will fire their respective Azure function
  • The Azure Function will then carry out its given task

I really prefer Event Grid for my event driven applications. It's fast, cost effective, and has a better interaction experience than Service Bus topics, in my opinion. The subscription filters can get down to analyzing the raw JSON coming through, and it supports the up and coming CloudEvents (cloudevents.io) standard. It also can tie into Azure resource providers and respond to native Azure events, such as blob creation/deletion. All in all, it is one of my favorite Azure services.
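For illustration only, here is a hedged Terraform sketch of what one of those subscriptions could look like, filtering on the EventType and delivering to an Azure Function. The topic reference and function id are placeholders and this is not taken from the StockApp repo:

# hypothetical subscription: only StockPriceChange events reach the persistence function
resource "azurerm_eventgrid_event_subscription" "persist_price_change" {
  name  = "persist-price-change"
  scope = azurerm_eventgrid_topic.this.id   # topic resource assumed defined elsewhere

  included_event_types = [ "EventDrivePoc.Event.StockPriceChange" ]

  azure_function_endpoint {
    # placeholder resource id of the handler function
    function_id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-stockapp/providers/Microsoft.Web/sites/func-stockdata/functions/PersistPriceChange"
  }
}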

So, regarding the application, I chose to approach this in a purely event driven fashion. All price changes are seen as events. The CalculateChangePercent function receives all events and, using the symbol as the partition key, looks up the most recent price stored in the database.

Based on this and the incoming data it determines the change percent and creates a new event. Here is the code for that:

[FunctionName("CalculateChangePercent")]
public void CalculateChangePercent(
[EventGridTrigger] EventGridEvent incomingEvent,
[Table("stockpricehistory", Connection = "AzureWebJobsStorage")] CloudTable stockPriceHistoryTable,
[EventGrid(TopicEndpointUri = "TopicUrlSetting", TopicKeySetting = "TopicKeySetting")] ICollector<EventGridEvent> changeEventCollector,
ILogger logger)
{
var stockData = ((JObject)incomingEvent.Data).ToObject<StockDataPriceChangeEvent>();
var selectQuery = new TableQuery<StockDataTableEntity>().Where(
TableQuery.GenerateFilterCondition(nameof(StockDataTableEntity.PartitionKey), QueryComparisons.Equal, stockData.Symbol)
);
var symbolResults = stockPriceHistoryTable.ExecuteQuery(selectQuery).ToList();
var latestEntry = symbolResults.OrderByDescending(x => x.Timestamp)
.FirstOrDefault();
if (latestEntry != null)
{
var oldPrice = (decimal) latestEntry.Price;
var newPrice = stockData.Price;
var change = Math.Round((oldPrice newPrice) / oldPrice, 2) * 1;
stockData.Change = change;
}
changeEventCollector.Add(new EventGridEvent()
{
Id = Guid.NewGuid().ToString(),
Subject = $"{stockData.Symbol}-price-change",
Data = stockData,
EventType = "EventDrivePoc.Event.StockPriceChange",
DataVersion = "1.0"
});
}

This is basically "event redirection", that is, taking one event and creating one or more events from it. It's a very common approach for handling sophisticated event driven workflows. In this case, once the change percent is calculated the information is ready for transmission and persistence.

This sort of "multi-casting" is at the heart of what makes event driven so powerful and so risky. Here two subscribers will receive the exact same event and take very different actions:

  • Flow 1 – this flow takes the incoming event and saves it to a persistence store. Usually this needs to be something highly available; strong consistency is usually not something we care about.
  • Flow 2 – this flow takes the incoming event and sends it to the Azure SignalR service so we can have a real time feed of the stock data. This approach in turn allows connecting clients to also be event driven since we will “push” data to them.

Let’s focus on Flow 1 as it is the most typical flow. Generally, you will always want a record of the events the system received either for analysis or potential playback (in the event of state loss or debugging). This is what is being accomplished here with the persistence store.

The reason you will often see this as a Data Warehouse or some sort of NoSQL database is that consistency is not a huge worry, and NoSQL databases emphasize the AP portion of the CAP theorem (link) and are well suited to handling high write volumes – this is typical in event heavy systems, especially as you get closer to patterns such as Event Sourcing (link). There needs to be a record of the events the system processed.

This is not to say you should rely on a NoSQL database over an RDBMS (Relational Database Management System); each has its place and there are many other patterns which can be used. I like NoSQL for things like ledgers because they don't enforce a standard schema, so all events can be stored together, which allows for easier re-sequencing.

That said, there are also patterns which periodically read from NoSQL stores and create data into RDBMS – this is often done if data ingestion needs are such that a high volume is expected but the data itself can be trusted to be consistent. This may create data into a system where we need consistency checks for other operations.

Build the Front End

Next on my list was to build a frontend reader to see the data as it came across. I chose to use ReactJS for a few reasons:

  • Most examples seem to use JQuery and I am not particularly fond of JQuery these days
  • ReactJS is, to me, the best front end JavaScript framework and I hadn't worked with it in some time
  • I wanted to ensure I still understood how to implement the Redux pattern and ReactJS has better support than Angular; not sure about Vue.js

If you have never used the Redux pattern, I highly recommend it for front end applications. It emphasizes a mono-directional flow of data built on deterministic operations. Here is a visual:

https://xximjasonxx.files.wordpress.com/2021/05/2821e-1bzq8fpvjwhrbxoed3n9yhw.png

I first used this pattern several years ago when leading a team at West Monroe; we built a task completion engine for restaurants and got pretty deep into the pattern. I was quite impressed.

Put simply, the goal of Redux is that all actions are handled the same way and state is recreated each time a change is made, as opposed to updating state in place. By taking this approach, operations are deterministic, meaning the same result will occur no matter how many times the same action is executed. This meshes very nicely with the event driven model from the backend which SignalR carries to the frontend.

Central to this is the Store, which facilitates subscribing and dispatching events. I won't go much deeper into Redux here; there are much better sources out there such as https://redux.js.org/. Simply put, when SignalR sends out a message it sends an event to listeners – in my case it's the UpdateStockPrice event. I can use a reference to the store to dispatch the event, which allows my reducers to see it and change their state.

Once a reducer changes state, a state updated event is raised and any component which is connected will update, if needed (ReactJS uses a virtual DOM to ensure components only change if they were actually changed). Here is the code which is used (simplified):

// located at the bottom of index.js the application bootstrap
let connection = new HubConnectionBuilder()
    .withAutomaticReconnect()
    .withUrl("https://func-stockdatareceivers.azurewebsites.net/api/stockdata")
    .build();

connection.on('UpdateStockPrice', data => {
    store.dispatch({
        type: UpdateStockPriceAction,
        data
    });
});

connection.start();

// reducers look for actions and make changes. The format of the action (type, data) is standard
// if the reducer is unaware of the action, we return whatever the current state held is
const stockDataReducer = (state = initialState, action) => {
    switch (action.type) {
        case UpdateStockPriceAction:
            const newArray = state.stockData.filter(s => s.Symbol !== action.data.Symbol);
            newArray.push(action.data);
            newArray.sort((e1, e2) => {
                if (e1.Symbol > e2.Symbol)
                    return 1;
                if (e1.Symbol < e2.Symbol)
                    return -1;
                return 0;
            });
            return { stockData: newArray };
        default:
            return state;
    }
};

// the component is connected to the store and will rerender when state change is made
class StockDataWindow extends Component {
    render() {
        return (
            <div>
                {this.props.stockData.map(d => (
                    <StockDataLine stockData={d} key={d.Symbol} />
                ))}
            </div>
        );
    }
};

const mapStateToProps = state => {
    return {
        stockData: state.stockData
    };
};

export default connect(mapStateToProps, null)(StockDataWindow);

This code makes use of the redux and react-redux helper libraries. ReactJS, as I said before, supports Redux extremely well, far better than Angular last I checked. It makes the pattern very easy to implement.

So what happens is:

  • SignalR sends a host of price change events to our client
  • Our client dispatches events for each one through our store
  • The events (actions) are received by our reducer which changes its state
  • This state change causes ReactJS to fire render for all connected components, updating the virtual DOM
  • The virtual DOM is compared against the actual DOM and components update where the virtual DOM differs

This whole process is very quick and is, at its heart, deterministic. In the code above, you notice the array is recreated each time rather than pushing the new price or trying to find the existing index and updating it. This may seem strange but it very efficiently PREVENTS side effects – which often manifest as some of the nastier bugs.

As with our backend, the same action could be received by multiple reducers – there is no 1:1 rule.

Closing

I wrote this application more to experiment with Event Driven programming on the backend and frontend. I do believe this sort of pattern can work well for most applications; in terms of Redux I think any application of even moderate complexity can benefit.

Code is here: https://github.com/jfarrell-examples/StockApp

Happy Coding

Common Misconception #3 – DevOps is a tool

DevOps is a topic very near and dear to me. It's something that I helped organizations with a lot as an App Modernization Consultant in the Cognizant Microsoft Business Group. However, I find that DevOps is ubiquitously misunderstood or misrepresented to organizations.

What is DevOps?

In the simplest sense, DevOps is a culture focused on collaboration that aims to maximize team affinity and organizational productivity. While part of adopting it is the adoption of tools that allow teams to scale effectively, at its core it is a cultural shift to remove team silos and emphasize free and clear communication. One could argue, as Gene Kim does in his book The Phoenix Project, that the full realization of DevOps is the abolishment of IT departments; instead IT is seen as a resource embedded in each department.

From a more complex perspective, the tenets of DevOps mirror the tenets of Agile and focus on small iterations allowing organizations (and the teams within) to adjust more easily to changing circumstances. These tenets (like those of Agile) are rooted in Lean Management, which was born out of the Toyota Production System (TPS) (link), which revolutionized manufacturing and allowed Toyota to keep pace with GM and Ford, despite the latter two being much larger.

The Three Ways

DevOps culture carries forth from TPS the three ways which govern how work flows through the system, how it is evaluated for quality, and how observations upon that work inform future decisions and planning. For those familiar with Agile, this should, again, sound familiar – DevOps and Agile share many similarities in terms of doctrine. A great book for understanding The Three Ways (also authored by Gene Kim) is The DevOps Handbook.

The First Way

The First Way focuses on maximizing the left to right flow of work. For engineers this would be the flow of a change from conception to production. The critical idea to this Way is the notion of small batches. We want teams to consistently and quickly send work through flows and to production (or some higher environment) as quickly as possible. Perhaps contrary to established thought, the First Way stresses that the faster a team moves the higher their quality.

Consider: if a team works on a large batch of changes (say 100), does testing, and then ultimately deploys, the testing and validation is spread out across those 100 changes. Not only are teams at the mercy of a quality process which must be impossibly strict, but if a problem does occur, the team must sort out WHICH of the 100 changes caused the problem. Further, the sheer size of the deployment would likely make rollback very difficult, if not impossible. Thus, the team may also be contending with downtime or limited options to ensure the problem does not introduce bad data.

Now consider, if that same team deployed 2 changes. The QA team can focus on a very narrow set of testing and if something goes wrong, diagnosing is much easier given the smaller size. Further, the changes could likely be backed out (or turned off) to prevent the introduction of bad data into the system.

There is a non-linear relationship between the size of the change and the potential risk of integrating the change – when you go from a ten-line code change to a one-hundred-line code change, the risk of something going wrong is more than 10x higher, and so forth

Randy Shoup, DevOps Manager, Google

Smaller batch sizes can help your teams move faster and get their work in front of stakeholders more efficiently and quickly. Doing so induces better communication between the team and their users, which ultimately helps each side get what they want out of the process. There is nothing worse than going off in a corner for 4 months, building something, and having it fall short of the needs of the business.

The Second Way

Moving fast is great but is only part of the equation. Like so much in DevOps (and Agile) the core learnings are defined in a way that is supplementary to each other. The Second Way emphasizes the need for fast feedback cycles or more directly, is aimed at ensuring that the speed is supported by automated and frequent quality checks.

The Second Way is often tied to a concept in DevOps called shift-left, shown by the graph visual below:

Shift Left in action

It is not uncommon for organizations embracing a siloed approach to Quality Assurance to start QA near the end of a cycle, generally to ensure they can validate the complete picture. While this makes sense, its value is misplaced. I would ask anyone who has built or tested software how often this process ends up being a bottleneck (reasons be damned) in delivery. If you are like most clients I have worked with, the answer is Yes and Always.

The truth is, such a model does not work if we want teams to move with speed and quality. Shift Left therefore emphasizes that it is the people at the LEFT who need to do the testing (in the case of engineering, that would be the developers). The goal is to discover a problem as quickly as possible so that it can be corrected, building on the ubiquitous understanding that the earlier a problem is found, the cheaper it is to fix.

To put it bluntly, teams cannot make changes to systems, teams, or anything else if there is not a sense of validation to know that what they did worked. For engineering, we can only know something is working if we can test that it is working, hence the common rule for high-performing teams that no problem should ever occur twice.

I cannot overstate how important these feedback cycles are, especially in terms of automation. Particularly in engineering, giving developers confidence that IF they make a mistake (and they will) it will get caught before it reaches production is HUGE. Without this confidence, the value provided by The First Way will be limited.

Equally critical to creating these cycles is UNDERSTANDING the means of testing and which use case is best tested by which technique. Here is an image of the Testing Pyramid which I commonly use with clients when explaining feedback cycles for Engineering.

For those wondering where manual testing goes – it is at the very top and has the fewest tests. Manual tests should be transitioned to an automated tool.

A final point I want to share here: DevOps considers QA a strategic resource, NOT a tactical one. That is, high-functioning teams do NOT expect QA persons to do the testing; these individuals are expected to ORGANIZE the testing. From this standpoint, they plan out what tests are needed and ensure the testing is happening. In some cases, they may be called on to educate developers on what tests fit certain use cases. Too often, I have seen teams view QA as the person who must do the testing – this is false and only encourages bottlenecking. Shift-left is very clear that DEVELOPERS need to do the majority of testing since they are closer to a given change than QA.

The Third Way

No methodology is without fault and it would be folly to believe there is a prescriptive approach that fits every team. Thus, The Third Way stresses that we should use metrics to learn about and modify our process. This includes how we work as well as how our systems work. The aim is to create a generative culture that is constantly accepting of new ideas and seeks to improve itself. Teams embracing this Way apply the scientific method to any change in process and work to build high trust. Any failure is seen not as a time to assign blame, but rather as a time to learn and evolve.

“The only sustainable competitive advantage is an organization’s ability to learn faster than the competition”

Peter Senge – Founder of the Society for Organizational Learning

For any organization the most valuable asset is the knowledge of its employees, for only through this knowledge can improvements be made that enable its products to continue to produce value for customers. Put another way:

Agility is not free. Its cost is the continual investment to ensure teams can maintain velocity. I have seen software engineering department leads ask, over and over, why the team is not hitting its pre-determined velocity. Putting aside the fallacy of telling a team what speed it should work at, velocity is not free. If I own a sports car but perform no maintenance on it, soon it will drive the same as a typical consumer sedan. Why?

“.. in the absence of improvements, processes do NOT stay the same. Due to chaos and entropy, processes actually degrade over time”

Mike Rother – Toyota Kata

No organization, least of all engineering, can hope to achieve its goals if it does not continually invest in the process of reaching those goals. Teams which do not perform maintenance on themselves are destined to fail and, depending on the gravity of the failure, the organization could lose more than just money.

In Scrum, teams will use the Sprint Retrospective to call attention to things which should be stopped, started, and continued as a way to ensure they are continually enhancing their process. However, too often, I have seen these same teams shy away from ensuring that, in each sprint, there is time taken to remove technical debt or add some automation, usually because they must hit a target velocity or deliver a certain feature. This completely gets away from the spirit of Agile and DevOps.

It’s about culture

Hopefully, despite my occasional references to engineering, you can see that The Three Ways are about culture and about embracing many lessons learned from manufacturing about how to effectively move work through flows. DevOps is an extremely deep topic that, regrettably, often gets boiled down to the somewhat simplistic question of "Do you have automated builds?". And yes, automation is key to embracing DevOps, but it is less important than establishing the cultural norms to support it. Simply having an automated build means little if all work must pass through central figures, or if certain work is handed off to silos where the timeline is no longer the team's.

Further Reading

The topic of DevOps is well covered, especially if you are a fan of Gene Kim. I recommend these books to help understand DevOps culture better – I list them in order of quality:

  • The DevOps Handbook (Gene Kim) – Amazon
  • The Phoenix Project (Gene Kim et al) – Amazon
  • Effective DevOps (Davis and Daniels) – Amazon
  • The Unicorn Project (Gene Kim) – Amazon
  • Accelerate (Forsgren et al) – Amazon

Thank you for reading

Common Misconception #2 – Serverless is good for APIs

The next entry in this series is something that hits very close to home for me: Serverless for APIs. Let me first start off by saying that I am not stating this as an unequivocal rule. As with anything in technology, there are cases where it makes sense. And, in fact, much of my consternation could be alleviated by using containers. Nevertheless, I still believe the following to be true:

For any non-simple API, a serverless approach is going to be more harmful and limiting, and in some cases more costly, than using a traditional server.

Background

When AWS announced Lambda back in 2014 it marked the first time a major cloud platform had added support for what would become known as FaaS (Function as a Service). The catchphrase was serverless, which did not make a lot of sense to people since there was obviously still a server involved – but marketing people gotta market.

The concept was simple enough: using Lambda I could deploy just the code I wanted to run and pay per invocation (and the cost was insanely cheap). One of the things Lambda enabled was the ability to listen for internal events from things like S3 or DynamoDB and create small bits of code which responded to those events. This enabled a whole new class of event-driven applications as Lambda could serve as the glue between services – EventBridge came along later (a copy of Azure’s EventGrid service) and further elevated this paradigm.

One of the Events is a web request

One of the most common types of applications people write are APIs and so, Lambda made sure to include support for web calls – effectively by listening for a request event coming from outside the cloud. Using serverless and S3 static web content, a company could run a super sophisticated website for a fraction of the cost of traditional serving models.

This ultimately led developers to use Lambda and Azure Functions as a replacement for Elastic Beanstalk or Azure App Service. And this is where the misconception lies. While Lambda is useful to glue services together and provide for simple webhooks, it is often ill-suited for complex APIs.

The rest of this will be in the context of Azure Functions but, conceptually, the same problems exist with Google Cloud Functions and AWS Lambda.

You are responding to an Event

In traditional web server applications, a request is received by the ISAPI (Internet Server Application Program Interface) where it is analyzed to determine its final destination. And this destination can be affected by code, filters, and other mechanisms.

Serverless, however, is purely event driven, which means that once an event enters the system it cannot be cancelled or redirected; it must invoke its handler. Consider the following problem that was encountered with Azure Function filters while developing KlipTok.

On KlipTok there was a need to ensure that each request contained a valid header. In traditional ASP .NET Core, we would write a simple piece of middleware to intercept the request and, if necessary, short-circuit it should it be deemed invalid. While technically possible in Azure Functions, it requires fairly in-depth knowledge and customization to achieve.
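For contrast, here is a minimal sketch of the kind of middleware ASP .NET Core gives you for this – the class and header name are illustrative, not taken from the KlipTok codebase:

using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

public class RequiredHeaderMiddleware
{
    private readonly RequestDelegate _next;

    public RequiredHeaderMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // short-circuit the pipeline when the header is missing - no handler ever runs
        if (!context.Request.Headers.TryGetValue("X-Required-Header", out var value) || string.IsNullOrEmpty(value))
        {
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }

        await _next(context);
    }
}

// registered once in Startup.Configure: app.UseMiddleware<RequiredHeaderMiddleware>();

This sort of short-circuiting is precisely what the event-driven model of Azure Functions does not give you out of the box.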

In the end, we leveraged IFunctionsInvocationFilter (a preview feature) which allowed code to run ahead of the function's execution (no short-circuit allowed) and mark the request. Each function then had to check for this mark. It did allow us to reduce the code, but it certainly was not as clean as a traditional API framework.

The above example is one of many elements which are planned and accounted for in full-fledged API frameworks (being able to plug into the ISAPI being another) but are otherwise lacking in a serverless framework, Azure Functions in this case. While there does exist the ability to supplement some of these features with containerization or third party libraries, I still believe such a play detracts from the intended purpose of serverless: to be the glue in complex distributed systems.

That is not to say you never should

The old saying "never say never" certainly holds true in Software Engineering as much as anywhere. I am not trying to say you should NEVER do this; there are cases where serverless makes sense. This is usually because the API is simple, or the serverless piece is leveraged by a proxy API or represents specific routes within the API. But I have, too often, seen teams leverage serverless as if it were a replacement for Azure App Service or Elastic Beanstalk – it is not.

As with most things, teams need to be aware and make informed decisions with an eye on the likely road of evolution a software product will take. While tempting, Azure Functions have a laundry list of drawbacks you need to be aware of, including:

  • Pricing which will vary with load taken by the server (if using Consumption style plans)
  • Long initial request times as the cloud provider must stand up the infrastructure to support the serverless code – often our methods will go to sleep
  • Difficulties with organization and code reuse. This is certainly much easier in Azure than AWS but, still something teams need to consider as the size of the API grows
  • Diminished support for common and expected API features, e.g. JWT authentication and authorization processing, dependency injection in filters, and the lack of the ability to short-circuit requests.

There are quite a few more but, you get the idea. In general, the aim for a serverless method is to be simple and short.

There are simply better options

In the end, the main point here is: while you can write APIs in serverless, often times you simply shouldn't – there are better options available. A great example is the wealth of features web programmers are used to and expect when building APIs that are simply not available, or not easy to implement, with serverless programming. Further, as project sizes grow, the ability to properly maintain and manage the codebase becomes more difficult with serverless than with traditional programming.

In the end, serverless's main purpose should be to glue your services together, enabling you to easily build a chain like this:

The items in blue represent the Azure Functions this sequence would require (at a minimum). The code here is fairly straightforward thanks to the use of bindings. These elements hold the flow together and support automated retry and fairly robust failure handling right out of the box.

Bindings are the key to using Serverless correctly

I BELIEVE EventBridge in AWS enables something like this but, as is typical, Microsoft has a much more thought-out experience for developers in Azure than AWS has – especially here.

Triggers and bindings in Azure Functions | Microsoft Docs

Bindings in Azure allow Functions to connect to services like Service Bus, EventGrid, Storage, SignalR, SendGrid, and a whole lot more. Developers can even author their own bindings. By using them, the need to write the boilerplate connect-and-listen code is removed so the functions can contain code which is directed at their intended purpose. One of these bindings is a trigger called HttpTrigger, and if you have ever written an Azure Function you are familiar with it. Given what we have discussed, its existence should make more sense to you.

A function is always triggered by an event. And the one everyone loves to listen for is the HttpTrigger: an event delivered to your function app which matches certain criteria defined in the HttpTrigger attribute.
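To make that concrete, here is a minimal sketch of an in-process Azure Function where the HttpTrigger is the inbound event and a Queue output binding acts as the glue to the next step in a flow. The function, route, and queue names are made up for illustration:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class EnqueueOrder
{
    [FunctionName("EnqueueOrder")]
    public static async Task<IActionResult> Run(
        // the HttpTrigger turns an inbound web request into the event that invokes this handler
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "orders")] HttpRequest req,
        // the Queue output binding is the glue to whatever processes the order next
        [Queue("incoming-orders")] IAsyncCollector<string> outputQueue,
        ILogger log)
    {
        var body = await new StreamReader(req.Body).ReadToEndAsync();
        await outputQueue.AddAsync(body);

        log.LogInformation("Order accepted and queued for downstream processing");
        return new AcceptedResult();
    }
}

Notice there is no routing table, no middleware, and no pipeline to plug into – the method is simply the handler for one kind of event.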

So, returning to the main point: everything in serverless is the result of an event, so we want to view the methods we create as event handlers, not full-fledged endpoints. While serverless CAN support an API, it lacks many of the core features which are built into API frameworks and therefore should be avoided for all but simple APIs.

Common Misconception #1 – Entity Framework Needs a Data Layer

This is the first post in what I hope to be a long running series on common misconceptions I come across in my day to day as a developer and architect in the .NET space, though some of the entries will be language agnostic. The goal is to clear up some of the more common problems I find teams get themselves into when building applications.

A little about me: I am a former Microsoft MVP and have been working as a consultant in the .NET space for close to 15 years at this point. One of the common tasks I find myself doing is helping teams develop maintainable and robust systems. This can be from the standpoint of embracing more modern architecture, such as event-driven systems or containers, or it can be modernizing the process to support more efficient workflows that enable teams to deliver more consistent and reliable outcomes while balancing the effort with sustainability.

The first misconception is one which I run across A LOT. And that is the way in which I find teams leveraging Entity Framework.

Entity Framework is a Repository

One of the most common data access patterns right now is the Repository pattern – Link. The main benefit is that it enables developers to embrace the Unit of Work technique, which results in simpler, more straightforward code. However, too often I see teams build their repository and simply create data access methods on the classes – effectively creating a variant of the Active Record or Provider pattern with the name Repository.

This is incorrect and diminishes much of the value the Repository pattern is designed to bring: mainly, that operations can work with data in memory as if they were talking to the database and save their changes at the end. Something like this:
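As a minimal sketch of that idea (assuming, as in the snippets below, that _context is our EF context, Items is a DbSet of SomeItem, and the property names are illustrative):

public async Task UpdateAndAdd(int itemId)
{
    // work against the context as if it were an in-memory copy of the database
    var existing = await _context.Items.FirstOrDefaultAsync(x => x.Id == itemId);
    existing.Name = "Updated name";

    _context.Items.Add(new SomeItem { Name = "Brand new item" });

    // nothing has hit the database yet - this single call persists the whole unit of work
    await _context.SaveChangesAsync();
}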

The Repository pattern works VERY well with web applications and frameworks like ASP .NET because we can SCOPE the database connection (called a Context in Entity Framework) to the request, allowing our application to maximize the connection pool.

In the above flow, we only talk to the database TWO times despite the number of operations; everything is done in memory and the underlying framework handles the details for us. Too often I see code like this:

public async Task<bool> DoWork(IList<SomeItem> items)
{
    // each iteration issues a separate DELETE round trip to the database
    foreach (var item in items.Where(x => x.Id % 2 == 0))
    {
        await _someRepo.DeleteItem(item.Id);
    }

    return true;
}

This looks fairly benign but it is actually quite bad as it machine-guns the database with each Id. In a small, low-traffic application this won't be a problem but, in a larger site with high volume, this is likely to cause bottlenecks, record locking, and other problems. How could this be written better?

// variant 1
public async Task<bool> DoWork(IList<SomeItem> items)
{
    // assume _context is our EF Context
    foreach (var item in items.Where(x => x.Id % 2 == 0))
    {
        // reads are cheap; the deletes are only staged in memory here
        var it = await _context.Items.FirstOrDefaultAsync(x => x.Id == item.Id);
        _context.Remove(it);
    }

    // one round trip persists all of the staged deletes
    await _context.SaveChangesAsync();
    return true;
}

// variant 2
public async Task<bool> DoWork(IList<SomeItem> items)
{
    // assume _context is our EF Context
    var targetIds = items.Where(x => x.Id % 2 == 0).Select(x => x.Id).ToList();

    // EF generates a single query (WHERE Id IN (...)) to load the items in one shot
    var targetItems = await _context.Items
        .Where(x => targetIds.Contains(x.Id))
        .ToListAsync();

    foreach (var item in targetItems)
    {
        _context.Remove(item);
    }

    await _context.SaveChangesAsync();
    return true;
}

In general, reads are less of a problem for locking and throughput than write operations (create, update, delete), so reading the database as in Variant 1 is not going to be a huge problem right away. Variant 2 leans on EF's SQL generation to create a query which gets our items in one shot.

But the key thing to notice in this example is the direct use of the context. Indeed, what I have been finding is that I don't create a data layer at all and instead allow Entity Framework to be the data layer itself. This opens up a tremendous number of possibilities, as we can then take a building block approach to our service layer.

Services facilitate the Operation

The term “service” is horrendously overused in software engineering as it applies to so many things. In my case, I am using it to describe the classes which do the thing. Taking a typical example application, here is how I prefer to organize things:

  • Controller – the controller is the traffic cop determining if the provided data meets acceptable criteria such that we can accept the request. There is absolutely no business logic here; HOWEVER, for simple reads we may choose to inject our Context to perform those reads
  • Service – the guts of the application, this contains a variety of services varying in size and types. I try to stick with the Single Responsibility Principle in defining these classes. At a minimum we have a set of facilitators which facilitate a business process (we will cover this next) and other smaller services which are reusable blocks.
  • Data Layer – this is the EF context. Any custom mapping or definitions are written here

The key feature of a facilitator is the call to SaveChanges, as this marks the end of the Unit of Work. By taking this approach we get a transaction for free, since the code can validate the data as it places it into the context instead of waiting for a SQL exception to indicate a problem.
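Here is a rough sketch of what a facilitator can look like; the service, entity, and method names are illustrative rather than lifted from a real codebase:

public class ProcessPaymentService
{
    private readonly AppDbContext _context;
    private readonly ValidateCardService _validateCard;
    private readonly CreateTransactionService _createTransaction;

    public ProcessPaymentService(
        AppDbContext context,
        ValidateCardService validateCard,
        CreateTransactionService createTransaction)
    {
        _context = context;
        _validateCard = validateCard;
        _createTransaction = createTransaction;
    }

    public async Task<PaymentResult> Process(PaymentRequest request)
    {
        // each sub-service receives the same scoped context and stages its changes in memory
        await _validateCard.Validate(request);
        var transaction = await _createTransaction.Create(request);

        // the facilitator owns the end of the unit of work - one save, one implicit transaction
        await _context.SaveChangesAsync();

        return new PaymentResult { TransactionId = transaction.Id };
    }
}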

By taking this approach, code is broken into reusable modules which can be injected and reused, plus it is VERY testable. This is an example flow I wrote up for a client:

Here the Process Payment Service is the facilitator and calls on the sub-services (shaded in blue). Each of these gets a context injected but, since the context is scoped, each gets the same one. This means everyone gets to work with what is essentially their own copy of the database during the execution run.

The other benefit of this approach is avoiding what I refer to as service wastelands. These are generic service files in our code (PersonService, TransactionService, PaymentService, etc) which become dumping grounds for methods – I have seen some of these files have upwards of 100 methods. Teams need to avoid doing this because the file becomes so long that ensuring uniqueness and efficiency among the methods becomes an untenable task.

Instead, teams should focus on creating purpose driven services which either facilitate a process or contain core business logic that may be reused in the code base. Combined with using Entity Framework as the data layer, code becomes cleaner and more straightforward.

What are the Exceptions?

So, am I saying you should have NO data layer ever? No. As with anything, this is not black and white and there are cases for a data layer of sorts. For example, some queries to the database are too complex to put into a LINQ statement and developers will need to resort to SQL. For these cases, you will want to have a wrapper around the call for both reuse and maintenance.
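As a rough sketch of what such a wrapper can look like (the entity, query, and names here are invented for illustration):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class OverdueInvoiceQuery
{
    private readonly AppDbContext _context;

    public OverdueInvoiceQuery(AppDbContext context) => _context = context;

    public Task<List<Invoice>> Execute(int daysOverdue)
    {
        // the complex, hand-tuned SQL lives in exactly one place; callers never see it
        return _context.Invoices
            .FromSqlInterpolated($"SELECT * FROM Invoices WHERE DATEDIFF(day, DueDate, GETDATE()) > {daysOverdue}")
            .AsNoTracking()
            .ToListAsync();
    }
}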

But do not take that to mean you need a method to ensure you do not rewrite FirstOrDefault in two or more spots. Of course, if you have a particularly complex LINQ query you might choose to hide it. However, keep in mind the MAIN REASON to hide code is to avoid requiring another person to have intimate knowledge of a process to carry out the operation. It is NOT, despite popular opinion, to avoid duplication (that is an entirely separate issue I will discuss later).

Indeed, the reason you should be hiding something is because it is complex in nature and error prone in its implementation such that problems could arise later. A simple Id look up does not fall into this category.

Conclusion

The main point I made here is that Entity Framework IS an implementation of the Repository pattern and so placing a repository pattern around it is superfluous. ASP .NET Core contains methods to ensure the context is scoped appropriately and disposed of at the end of a request. Leverage this: use the context directly in your services and lean on the Unit of Work pattern while treating the Context as your in-memory database. Let Entity Framework take responsibility for updating the database when you are complete.

Manual JWT Validation in .NET Core

Recently, I have been working with Jeff Fritz over at https://www.twitch.tv/csharpfritz as part of his effort to build a TikTok-like site for Twitch, uniquely called KlipTok (https://www.kliptok.com). Mainly my efforts have been on shoring up the backend code in the BackOffice using Azure Functions.

This was one of my first major exposures to the Twitch API. It's fine overall but, oddly, it does not use JWT tokens to communicate state back and forth; rather, an issued string is required for authenticated requests. I wanted to try a different approach to handling token auth and refresh, so I devised the following POC: https://github.com/jfarrell-examples/TwitchTokenPoc.

One of the aspects of the Twitch API is that tokens can expire, and calls should be ready to refresh an access token which enters this state. The trouble is, these are two tokens and I didn't want clients required to send both tokens, nor did I want the client to have to resubmit a request. I decided I would create my own token and store within it, as claims, the access token and refresh token.

Taking this approach would allow the POC to, in effect, make it seem like Twitch is issuing JWT tokens while still allowing the backend to perform the refresh. I decided, for additional security, I would encrypt the token claims in my JWT using Azure Key Vault keys.

Part 1: Creating the Token

This approach hinges on what I refer to as token interception. As part of any OAuth/OIDC flow, there is a callback after the third party site (Twitch in this case) has completed the login. Tokens are sent to this callback for the sole purpose of allowing the caller to store them.

In order to achieve this, I created a method which a client would call at the very start. This contacts Twitch and reissues the active tokens, if they exist, or requests the user to log in again:

public IActionResult Get()
{
    var redirectUri = WebUtility.UrlEncode("https://localhost:5001/home/callback");
    var urlString = @$"https://id.twitch.tv/oauth2/authorize?client_id={_configuration["TwitchClientId"]}"
        + $"&redirect_uri={redirectUri}"
        + "&response_type=code"
        + "&scope=openid";

    return Redirect(urlString);
}

The key here is the redirectUri which redirects the provided response code back to the application. Here we can create the token and send it to the client. You can find this method in the provided GitHub repository, HomeController.

You can find MANY examples of creating a JWT Token on the internet, I will use this one for reference: https://www.c-sharpcorner.com/article/asp-net-web-api-2-creating-and-validating-jwt-json-web-token/

Here is my code which creates the token string with the access token and refresh token as claims:

public async Task<string> CreateJwtTokenString(string accessToken, string refreshToken)
{
    var jwtSigningKey = await _keyVaultService.GetJwtSigningKey();
    var securityKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(jwtSigningKey));
    var signingCredentials = new SigningCredentials(securityKey, SecurityAlgorithms.HmacSha256Signature);

    // the Twitch tokens are encrypted and carried as claims inside our own JWT
    var secToken = new JwtSecurityToken(
        issuer: _configuration["Issuer"],
        audience: _configuration["Audience"],
        claims: new List<Claim>
        {
            new Claim("accessToken", await _cryptoService.Encrypt(accessToken)),
            new Claim("refreshToken", await _cryptoService.Encrypt(refreshToken))
        },
        notBefore: null,
        expires: DateTime.Now.AddDays(1),
        signingCredentials: signingCredentials);

    return new JwtSecurityTokenHandler().WriteToken(secToken);
}

The actual signing key is stored as a secret in Azure Key Vault with access controlled using ClientSecretCredentials; those values are stored in environment variables and are not located in source code. You can find more information on this approach here: https://jfarrell.net/2020/07/14/controlling-azure-key-vault-access/. The one critical point I will make is that ClientSecretCredential is only appropriate for local development – when deploying into Azure, be sure the code is using a Managed Identity driven approach.

I defined a simple method which grabs the encryption key from Azure Key Vault and encrypts (or decrypts) the data.

// getting the key
private KeyClient KeyClient => new KeyClient(
    vaultUri: new Uri(_configuration["KeyVaultUri"]),
    credential: _getCredentialService.GetKeyVaultCredentials());

public async Task<KeyVaultKey> GetEncryptionKey()
{
    var keyResponse = await KeyClient.GetKeyAsync("encryption-key");
    return keyResponse.Value;
}

// usage
public async Task<string> Encrypt(string rawValue)
{
    var encryptionKey = await _keyVaultService.GetEncryptionKey();
    var cryptoClient = new CryptographyClient(encryptionKey.Id, _getCredentialService.GetKeyVaultCredentials());

    // encrypt the raw value with the Key Vault key and return it as a base64 string
    var byteData = Encoding.Unicode.GetBytes(rawValue);
    var encryptResult = await cryptoClient.EncryptAsync(EncryptionAlgorithm.RsaOaep, byteData);
    return Convert.ToBase64String(encryptResult.Ciphertext);
}
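For completeness, the decryption side (used when the claims are read back out of the JWT) is roughly the mirror image of Encrypt; this is a sketch assuming the same CryptographyClient setup as above:

public async Task<string> Decrypt(string encryptedValue)
{
    var encryptionKey = await _keyVaultService.GetEncryptionKey();
    var cryptoClient = new CryptographyClient(encryptionKey.Id, _getCredentialService.GetKeyVaultCredentials());

    // reverse the steps of Encrypt: base64 decode, decrypt with the Key Vault key, decode the bytes
    var decryptResult = await cryptoClient.DecryptAsync(EncryptionAlgorithm.RsaOaep, Convert.FromBase64String(encryptedValue));
    return Encoding.Unicode.GetString(decryptResult.Plaintext);
}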

The beauty of using Azure Key Vault is that NO ONE but Azure is aware of the key. Using this, even if our JWT token is somehow leaked, the data within is not easy to decipher.

Once generated, this token can be passed back to the client either as data or in some header, allowing the client to store it. We can then use the built-in validation to require the token with each call.

Part 2: Validating the Token

Traditionally, tokens are signed by an authority and the underlying system will contact that authority to validate the token. However, in our case, we have no such authority, so we will want to MANUALLY validate the token, mainly its signature.

It turns out this is rather tricky to perform in ASP .NET Core due to the way the validation middleware is implemented. The best way I found to get it to work and stay clean is to adjust the way you register certain dependencies in ConfigureServices, as such:

var keyVaultService = new KeyVaultService(new GetCredentialService(Configuration), Configuration);
var tokenSecurityValidator = new JwtSecurityTokenValidator(Configuration, keyVaultService);

services.AddTransient<CryptoService>()
    .AddTransient<JwtTokenService>()
    .AddSingleton<TwitchAuthService>()
    .AddSingleton(p => keyVaultService)
    .AddTransient(p => tokenSecurityValidator)
    .AddSingleton<GetCredentialService>()
    .AddTransient<TwitchApiService>()
    .AddTransient<GetTokensFromHttpRequestService>()
    .AddTransient<ProcessApiResultFilter>();

// add auth middleware
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.RequireHttpsMetadata = false;

        // our custom validator replaces the normal authority-based validation
        options.SecurityTokenValidators.Add(tokenSecurityValidator);
    });

You can see the keyVaultService and tokenSecurityValidator are defined as concrete dependencies, and we use the provider override syntax for AddSingleton to pass the instances directly. This is done so we can pass the direct instance of tokenSecurityValidator to the options for validating our Bearer token.

This class calls on its dependencies and validates the signature of the token, ensuring it matches our expectations: https://github.com/jfarrell-examples/TwitchTokenPoc/blob/master/JwtSecurityTokenValidator.cs
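As a simplified sketch of what such a validator looks like (the real class in the repo pulls its signing key from Key Vault; the shape of the ISecurityTokenValidator implementation is what matters here):

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.Extensions.Configuration;
using Microsoft.IdentityModel.Tokens;

public class JwtSecurityTokenValidator : ISecurityTokenValidator
{
    private readonly IConfiguration _configuration;
    private readonly KeyVaultService _keyVaultService;

    public JwtSecurityTokenValidator(IConfiguration configuration, KeyVaultService keyVaultService)
    {
        _configuration = configuration;
        _keyVaultService = keyVaultService;
    }

    public bool CanValidateToken => true;
    public int MaximumTokenSizeInBytes { get; set; } = TokenValidationParameters.DefaultMaximumTokenSizeInBytes;

    public bool CanReadToken(string securityToken) =>
        new JwtSecurityTokenHandler().CanReadToken(securityToken);

    public ClaimsPrincipal ValidateToken(string securityToken,
        TokenValidationParameters validationParameters, out SecurityToken validatedToken)
    {
        // rebuild the same symmetric key used to sign the token and check issuer, audience, and signature
        var signingKey = _keyVaultService.GetJwtSigningKey().Result;
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = _configuration["Issuer"],
            ValidAudience = _configuration["Audience"],
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey)),
            ValidateIssuerSigningKey = true
        };

        return new JwtSecurityTokenHandler().ValidateToken(securityToken, parameters, out validatedToken);
    }
}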

The result of adding this (and the appropriate Use methods in the Configure method) is we can fully leverage [Authorize] on our actions and controllers. Users who pass no token or a token that we cannot validate will receive a 401 Unauthorized.

Part 3: Performing the Refresh

The first step with any call is the ability to GET the token for the request so it can be used. There are MANY ways to do this. As I wanted to keep this simple, I elected to use IHttpContextAccessor. This is a special dependency you can have ASP .NET Core inject that lets you access the HttpContext anywhere in the call chain. I wrapped this in a service:
https://github.com/jfarrell-examples/TwitchTokenPoc/blob/master/Services/GetTokensFromHttpRequestService.cs

This class very simply yanks the token from the incoming request and returns the specific claim that represents the token. It also calls the decryption method so the fetched token is ready for immediate use.
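A sketch of the idea (the claim names match the ones used when the token was created; other details may differ from the linked class):

using System.IdentityModel.Tokens.Jwt;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class GetTokensFromHttpRequestService
{
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly CryptoService _cryptoService;

    public GetTokensFromHttpRequestService(IHttpContextAccessor httpContextAccessor, CryptoService cryptoService)
    {
        _httpContextAccessor = httpContextAccessor;
        _cryptoService = cryptoService;
    }

    public Task<string> GetAccessToken() => GetDecryptedClaim("accessToken");
    public Task<string> GetRefreshToken() => GetDecryptedClaim("refreshToken");

    private async Task<string> GetDecryptedClaim(string claimName)
    {
        // pull the raw bearer token off the incoming request
        var bearerValue = _httpContextAccessor.HttpContext.Request.Headers["Authorization"]
            .ToString().Replace("Bearer ", string.Empty);

        // read (validation already happened in the auth middleware) the JWT and grab the claim
        var token = new JwtSecurityTokenHandler().ReadJwtToken(bearerValue);
        var claimValue = token.Claims.First(c => c.Type == claimName).Value;

        // the claim was encrypted with the Key Vault key, so decrypt it before use
        return await _cryptoService.Decrypt(claimValue);
    }
}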

This is by no means a perfect approach; in fact, were I to see this in production code I would comment that it is a violation of the separation of concerns, since a web concern is being accessed in the service layer. More ideally, you would want to use middleware or similar to hydrate a scoped dependency which can be injected into your layers.

The TwitchApiService (https://github.com/jfarrell-examples/TwitchTokenPoc/blob/master/Services/TwitchApiService.cs) houses the logic to request user data from Twitch, which is the call I chose to showcase the refresh functionality.

This code is crucial for the functionality:

client.DefaultRequestHeaders.Add("Client-Id", _configuration["TwitchClientId"]);
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", await _getTokensFromHttpRequestService.GetAccessToken());

var result = new ApiResult<TwitchUser>();
var response = await client.GetAsync($"helix/users?login={loginName}");

if (response.StatusCode == HttpStatusCode.Unauthorized)
{
    // refresh tokens
    var (accessToken, refreshToken) = await _authService.RefreshTokens(
        await _getTokensFromHttpRequestService.GetRefreshToken());

    result.TokensChanged = true;
    result.NewAccessToken = accessToken;
    result.NewRefreshToken = refreshToken;

    // re-execute the request with the new access token
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    response = await client.GetAsync($"helix/users?login={loginName}");
}

if (response.IsSuccessStatusCode == false)
    throw new Exception($"GetUser request failed with status code {response.StatusCode} and reason: '{response.ReasonPhrase}'");

var responseContent = await response.Content.ReadAsStringAsync();

I wrote this in a very heavy fashion: it simply makes the call, checks if it failed with a 401 Unauthorized and, if so, refreshes the tokens using the TwitchAuthService and then makes the same call again.

The result is a return to the caller with the appropriate data (or an error if the request still failed).

Part 4: Notify of new Token

Something you may have noticed in the previous code is the use of a generic ApiResult<T>. This is necessary because JWT tokens are designed to be immutable. This means they cannot be changed once created; it is this aspect which makes them secure. However, in this case, we are creating a token with data that will change (on a refresh) and thus necessitate a regeneration of the token.

The purpose of this ApiResult<T> class is to hold NOT JUST the result but also to tell us if the token needs to change. If it does change, that new version must be passed to the client so it can be saved. This may seem like a drawback to the approach but, in actuality, this is a typical part of any application interacting with an OAuth flow where token refresh is being used.

However, what we DO NOT want to do is require logic in every action to check the result, rebuild the token, and pass it to the caller. Instead, we want to intercept the return result and, in a central spot, strip away the extra data and ensure our new token, if appropriate, is in the response headers.

To that end I created the following ActionFilter:

public class ProcessApiResultFilter : IActionFilter
{
    private readonly JwtTokenService _jwtTokenService;

    public ProcessApiResultFilter(JwtTokenService jwtTokenService)
    {
        _jwtTokenService = jwtTokenService;
    }

    public void OnActionExecuting(ActionExecutingContext context)
    {
        // no action
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        if ((context.Result as OkObjectResult)?.Value is ApiResult result)
        {
            if (result.TokensChanged)
            {
                // the tokens were refreshed mid-request, so issue a new JWT to the caller
                var newTokenString = _jwtTokenService.CreateJwtTokenString(
                    result.NewAccessToken, result.NewRefreshToken).Result;
                context.HttpContext.Response.Headers.Add("X-NewToken", newTokenString);
            }

            // unwrap the ApiResult so the caller only sees the inner payload
            context.Result = new ObjectResult(result.Result);
        }
    }
}

Our ApiResult<T> inherits from ApiResult which gives it the non-generic read only Result property, which is used in the code sample above. The ApiResult<T> includes a setter whose accepted type is T. This allows the application to interact with it in a type-safe way.

Above you can see the Result being sent to the user is altered so it is the inner result. Meanwhile, if the token changes, we regenerate it using our JwtTokenService and store it in the X-NewToken header in the response. Clients can now check for this header when receiving the response and update their stores as needed.

One final thing, I am using Dependency Injection in the filter. To achieve this you must wrap its usage in the ServiceFilterAttribute. Example here: https://github.com/jfarrell-examples/TwitchTokenPoc/blob/master/Startup.cs#L27

And that is it. Let’s walk through the example again.

Understanding what happens

A given client will make its initial page request to /Login, which will return the Twitch login screen OR, if a token is already present, the callback will be called instantly. This callback will generate a token and send it down to the caller (right now it is printed to the screen); generally this would be a page in your client app that will store the token and show the initial page.

When the client makes a request, they MUST pass the custom JWT token given to them; the application will be checking for it as an Authorization Bearer token – failure to pass it will result in a 401 Unauthorized being sent back.

The application, after validating the token, will proceed with its usual call to the Twitch API. Part of this will use whatever access token was passed. If Twitch responds with a 401 Unauthorized, the code will extract the refresh token from the JWT token and refresh the access token. Upon successfully doing this, the call to Twitch will be executed again.

The result is sent back to the caller in a wrapper, ApiResult<T> which, along with carrying the call result, also contains information on whether the token changed. The caller will simply return this result as it would any normal Action call.

We use a special ActionFilter to intercept the response, and rewrite it so the caller returns the expected result in the response body. If the token did change, the new token is written into the response behind the X-NewToken header.

Throughout the process, we never reveal the tokens, and all of the values involved in signing, encryption, and decryption are stored in Azure Key Vault outside of our application. For local dev, we are using an App Registration to govern access to the Key Vault; if we were deployed in Azure we would want to associate our Azure service with a managed identity.

Conclusion

Hopefully, this example has been instructive and helpful. I know I learned quite a bit going through this process. So, if it helps you, drop me a comment and let me know. If something does not make sense feel free to also drop me a comment. Cheers.

Getting Started with KEDA and Queues

One of the limitations inside Kubernetes was the set of metrics supported for scaling a deployment within the cluster. The HorizontalPodAutoscaler, or HPA for short, could only monitor CPU utilization to determine if more Pods needed to be added to support a given workload. As you can imagine, in a queue-based or event-driven system, CPU usage won't accurately tell you whether or not more pods are needed.

Note: The Kubernetes team, realizing this, has added support for custom metrics to the platform: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md

Noticing this, Microsoft engineers began work on a project to address it, called KEDA (Kubernetes Event-Driven Autoscaling), comprised of custom resources which are capable of triggering scaling events based on criteria external to the cluster: queue tail length, message availability, etc. Now at 2.1, the team has added support for MANY popular external products which dictate scaling needs in unique ways.

Here is the complete list: https://keda.sh/docs/2.1/scalers/

For this post, I wanted to walk through how to set up a configuration whereby I could use KEDA to create jobs in Kubernetes based on the tail length of an Azure Storage Queue. As is expected with a newer project, KEDA's documentation still needs work and certain things are not entirely clear, so I view this as an opportunity to supplement the team's work. That being said, this is still very much an alpha product and, as such, I expect future iterations to not work with the steps I lay out here. But as of right now, Feb 2021, they work.

Full source code: https://github.com/jfarrell-examples/keda-queue-test

First step, create a cluster and an Azure Storage Queue

Head out to the portal and be sure to create an AKS cluster (or a Kubernetes cluster in general, it doesn't matter who the provider is) and an Azure Storage account (this one you will need in Azure). Once the storage account is created, create a Queue (shown below) and save the connection string somewhere you can copy from later.

As indicated, you could use GKE (Google Kubernetes Engine) or something else if you wanted. KEDA also supports other storage and event sources outside of Azure but, I am using Azure Queue Storage for this demo, hence why I will assume the Queue Storage is in Azure.

Now, let’s install KEDA

As with anything involving custom resources in Kubernetes, KEDA must be installed for those resources to exist. KEDA has a variety of ways it can be installed, laid out here: https://keda.sh/docs/2.1/deploy/

A quick note on this: BE CAREFUL of the version!! I am using v2.1 for this and that is important since the specification for ScaledJob changes between 2.0 and 2.1. If you use the third approach to deployment, where you run kubectl apply against a remote file, be sure to replace the version in the file name with v2.1.0. I noted, with Helm at least, that I did NOT get v2.1 from the given charts repo.

If you run the third approach, creation of the keda namespace will happen for you; this is where the internals of KEDA will be installed and run from. Your code does NOT need to go in here, and I won't be putting mine there either, just to put you at ease.

Once the installation completes I recommend running the following command to make sure everything is up and running:

kubectl get all -n keda

Note that I used the shorthand -n because I have had it happen where --namespace doesn't copy correctly and you end up with command syntax errors. If you see something like this, KEDA is up and running:

Let’s set up the KEDA Scaler

For starters, we need a secret to hold the connection string for our Queue Storage from earlier. Here is a simple secret definition to create a secret that KEDA can use to monitor the queue tail length. REMEMBER: when you provide the value to the secret it MUST be base64 encoded. I won't show my value as I do not wish to dox myself.
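Since I am not showing my actual value, here is a sketch of what that Secret looks like – the name, namespace, and key are mine and simply need to match what the later definitions reference:

apiVersion: v1
kind: Secret
metadata:
  name: queue-connection-secret
  namespace: default
data:
  # base64 encoded Azure Storage connection string
  connection-string: <base64 encoded value goes here>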

Linux users you can use the built-in base64 command to generate the value for the secret file. Everyone else, you can quickly Google a Base64 encoder and convert your string.

echo -n "your connection string" | base64

Use kubectl apply -f to create the secret. Since the namespace is provided in the file, it will be placed in that namespace for you.

Next, we are going to get into KEDA specific components TriggerAuthentication and ScaledJob. These two resources will be critical to supporting our intended functionality.

First, there is the specification for TriggerAuthentication: https://keda.sh/docs/2.1/concepts/authentication/#re-use-credentials-and-delegate-auth-with-triggerauthentication

As you can see, there are a number of ways to provide authentication; we will be using secretTargetRef. The purpose is to give our trigger a way to authenticate to our Queue Storage such that it can determine the various property values it needs to find out if a scaling action needs to be taken (up or down).

Building on what we did with the creation of our Secret, we add the following definition and apply it via kubectl apply -f:
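Here is a sketch of that TriggerAuthentication, matching the Secret name and key used above (your names may differ):

apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-queue-auth
  namespace: default
spec:
  secretTargetRef:
    # the azure-queue scaler expects its connection string in the "connection" parameter
    - parameter: connection
      name: queue-connection-secret
      key: connection-string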

Comparing the Secret with this file you can see where things start to match up. We are simply telling the trigger it can find the connection string at the appropriate key in a certain secret. Many of the examples on the KEDA website will use podIdentity which, as I have come to understand, refers back to MSI. This is a better approach, albeit more complicated, than what I am showing here. We should always avoid storing sensitive information in our cluster (like connection strings) due to the less than stellar security around Secrets in general – base64 is not in any way secure.

The final piece is the creation of the ScaledJob. KEDA mostly focuses on scaling deployments, which makes a lot of sense, but it can also serve to scale up Kubernetes Jobs as needed to fulfill deferred processing. Effectively, KEDA creates a pseudo deployment around the job and scales the number of jobs up as needed based on the scaling strategy specified.
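Here is a sketch along the lines of the ScaledJob I used – the image, queue name, polling interval, completions, and backoffLimit come from this walkthrough, while the resource names and the environment variable are placeholders you would adjust for your own setup:

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: print-message-scaledjob
  namespace: default
spec:
  # check the queue length every 5 seconds
  pollingInterval: 5
  triggers:
    - type: azure-queue
      metadata:
        queueName: test-queue
        queueLength: "5"
      authenticationRef:
        name: azure-queue-auth
  jobTargetRef:
    parallelism: 1
    completions: 10
    backoffLimit: 1
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: print-message
            image: xximjasonxx/printmessage:v3
            env:
              - name: QUEUE_CONNECTION_STRING
                valueFrom:
                  secretKeyRef:
                    name: queue-connection-secret
                    key: connection-string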

This looks like quite a bit but, when you break it down it has a very straightforward purpose and a structure that is consistent with other Kubernetes objects. Let’s break it down in four parts:

The first part is identification: what we are naming the ScaledJob and where it is going to be stored within the cluster. Notice the apiVersion value keda.sh/v1alpha1; this is a clear indication of the spec being in ALPHA, meaning I fully expect it to change.

The second part is the details for the actual ScaledJob, that is, things which are specific to this instance of the resource. Here we tell the resource to check the length of our queue every 5 seconds and that it should trigger based on an azure-queue with authentication stored in the trigger auth that we defined previously.

The third and fourth parts actually relate to the same thing, which is the configuration of the Kubernetes Job instances that will perform the work – I broke this apart based on my own personal style when constructing YAML files for Kubernetes. To keep things simple we are not going to have the job leverage parallelism, so we leave this at 1, which is also the default.

The last section lays out the template for the Pods that will carry out the work. You will notice the custom image xximjasonxx/printmessage which will grab a message from the queue and print its contents. We are also reusing the Secret here to provide the container with the connection string for the Queue so it can take items off.

All of this is available for reference in the GitHub repo I linked above.

Let’s test it

In the provided source code, I included a command line program that can send messages to our queue in the form of random numbers – SendMessage. To run it, open a command line window in the directory holding the .csproj file and run the following command:

dotnet run "<connection string>" 150

The above command will send 150 messages to the queue – I should note that the queue name in the container is HARD CODED as test-queue. Feel free to download the code and make the appropriate change for your own queue name if need be – you will need to do it for both the Print and Send message programs.

After running the above command you can run the following kubectl command to see the results of your experiment. Should look something like this:

This shows that it is working and, in fact, we can do a kubectl logs on one of the pods and we can see the output message sent to the queue. Or so it appears, let’s take a closer look.

Execute the following command to COUNT how many pods were actually created:

kubectl get po | wc -l

Remember to subtract one as the wc program will also count the header line. If you get something similar to what I got, it will be around 300. But that does not make any sense; we only sent 150 items to our queue. The answer is that, the way printmessage:v3 is written, it contains logic to print that no data was found as the queue becomes empty. While valid, with the 10-completions rule being enforced this will spin up unnecessary pods. Let's change the image used for the job to a special image: printmessage:v3-error. This image will throw an uncaught exception when the queue is empty. The updated definition for ScaledJob is below:
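The only change from the sketch above is the image tag on the job's container:

        containers:
          - name: print-message
            # v3-error throws an uncaught exception when the queue is empty instead of logging a message
            image: xximjasonxx/printmessage:v3-error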

Before running things again I recommend executing these two commands; they assume the ONLY things in the current namespace are jobs and pods related to KEDA. If you are sharing the namespace with other resources you will have to modify these commands.

kubectl delete po --all

kubectl delete job --all

Make sure to run kubectl apply to get the updated ScaledJob definition into your cluster. Run the SendMessage program again. This is what I got:

Notice how, even though we specified the job needs to complete 10 times, none of these did. Your results will likely vary depending on when items were pulled from the queue. But as the queue gets shorter, more jobs will start to fail as the Pods attempt to grab data that does not exist.

The other thing to notice is that the Pods, if they fail, will self-terminate. So, if I run my wc -l check again on the Pods I get a number that makes more sense:

kubectl get po | wc -l

The result should be 151 which, subtracting the header row, gives us the 150 items we sent to the queue.

Why is this happening?

The key value for controlling this behavior is the backoffLimit specified as part of the job spec. It tells a job how many times it should try to restart failing pods under its control. I have set it to 1 which effectively means it will not retry and only accept one failure.

The reason this is so important is that control over resources that scale to match processing workloads is crucial for maintaining healthy resource consumption. We do not want our pods to go crazy, overwhelm the system, and starve other processes.