Understanding Infrastructure as Code (and DevOps)

The rise of IaC as a model for teams aligns naturally with the widespread adoption of the cloud for deploying applications. The ability to provision infrastructure on-demand is essential but, more importantly, it allows teams to define the services and infrastructure their applications use within the same VCS (Version Control System) where the application code resides. Effectively, this allows the team to see the application not just as its source code but as inclusive of the underlying supporting services which allow that code to work.

Difference from Configuration as Code

Infrastructure as Code is a rather broad term. The modern definition is, as I stated above, more aligned with provisioning infrastructure from scripts on-demand. Configuration as Code is more aligned with on-premises deployments where infrastructure cannot be created on-demand. In these environments the fleet is static, and so it is the configuration of the relevant members of the fleet which is important.

It is with Configuration as Code where you commonly see tools like Octopus Deploy (link), Ansible (link), Chef (link), Puppet (link), and others. This is not to say these tools CANNOT spin up infrastructure on-demand; they most certainly can, but it is not their main use case.

In both approaches, and with IaC in general, the central idea is that your application is not merely its code, but also the infrastructure (or configuration of certain infrastructure) which needs to be persisted alongside it.

Why is it important?

Regardless of the flavor, IaC is vitally important to the modern application development team. Why? Well let me ask you a question: Where is the best place to test an application?

The only viable answer here is: Production. But wait, testing in production is risky and can cause all sorts of problems!! You are clearly crazy, Jason!!

Yes, but let me amend the answer: the best place to test is an environment that is Production-like. This answer is the reason we have things like Docker, Kubernetes, and IaC. We need the environments we develop in and run tests in to be as close to production as possible.

Now, I don’t mean that your developers should have an environment with the performance specs or disaster recovery features of Production but, from a configuration standpoint, it should be identical. Above all, the team MUST HAVE the same configurations in development as they do in Production. That means, if developers are planning to deploy their application to .NET 5.0.301 on Windows Server 2019, ideally their development environments should be using that same .NET version on Windows Server 2019 – or at least, when the application is deployed for testing, that environment should be running Windows Server 2019.

Mitigating Drift

The principal goal of placing infrastructure or configuration into VCS as code is to ensure consistency. This helps guarantee that environments are configured in the way that is expected. There is nothing worse than hunting for a flag or setting that someone (who is no longer around) applied three years ago when setting up a second server, while trying to figure out why “it doesn’t work on that machine”.

With proper IaC control, we ensure that EVERY configuration and service is under source control so we can quickly get an accurate understanding of the services involved in supporting an application and the configuration of those services. And the more consistent we are, the lower the chance that a difference between environments allows a bug to manifest in production which cannot be duplicated in any other environment.

Production is still Production

All this being said, it is important to understand that Production is still Production. That means, there will be additional safeguards in place to ensure proper function in case of disaster and the specs are generally higher. The aim of IaC is NOT to say you should run a Premium App Service Plan at the cost of thousands of dollars per month in Development. The aim is to ensure you are aiming for the same target.

That said, one of the other benefits of IaC is the ability to spin up ephemeral environments to perform testing with production-style specs – this can include variants of chaos testing (link). This is usually done ahead of a production release. IaC is vital here as it allows said environment to be created easily and guarantees an EXACT replica of production. Another alternative is blue/green deployments, which conform to the sort of shift-right testing (link) that IaC enables.

Understanding Operating Models

As you begin the IaC journey it is important to have an understanding of the operating models which go along with it. This helps you understand how changes to various parts of your infrastructure should be handled; it is often the toughest concept to grasp for those just getting started.

Shared vs Bespoke Infrastructure

In many cases, there is infrastructure which is shared across all applications and infrastructure which is bespoke to each application. This differentiation is core to selecting the right operating model. It also underpins how IaC scripts are broken apart and how frequently each is run. As an example, when adopting a Hub and Spoke deployment model, the code which builds the hub and the spoke connection points is run FAR less frequently than the code which builds the services that applications in the spoke rely upon.

Once you understand this differentiation and separation you can choose the operating model. Typically there are considered to be three operating models:

  • ManualOps – the IaC scripts in question are run manually, often by an operations team member. The scripts are provided either by the application team or by a central operations team. This approach is commonly used when organizations and teams are just getting started with IaC and may not yet have the time or knowledge to work infrastructure updates into automated pipelines.
  • GitOps – coined by Weaveworks (link), this model centers on kicking off infrastructure updates via operations in Git, usually a merge. While not necessarily driven by a Continuous Integration (CI) process, that is the most common trigger. The key to operating with this model is to ensure ALL changes to infrastructure are performed via an update to source control, thereby guaranteeing that what is in source represents what is deployed.
  • NoOps – a derivation of GitOps which emphasizes a lack of operations involvement per se. Instead of running scripts based on a Git operation or a manual action, they run with EVERY check-in. Through this, application teams take over ownership of their operations responsibilities. This is the quintessential model for teams operating in a DevOps-centric culture.

Which operating model you select is impacted, again, by the nature of the infrastructure being supported, but also by your team’s maturity. DevOps and IaC are a journey, not a destination. Not all teams progress (or need to progress) to the same destination.

Centralized Control for Decentralized Teams

In DevOps, and DevSecOps, the first question is how to involve the necessary disciplines in the application development process such that no specific concern is omitted or delayed – security often gets the short end of the stick. I cannot tell you how many projects I have seen save their security audit for near the end of the project. Rarely does such an audit not yield issues and, depending on the timeline and criticality, some organizations ignore the results and recommendations of these audits at their own peril.

I can recall a project for a healthcare client that I was party to years ago. The project did not go well and encountered many problems throughout its development. As a result, the security audit was pushed to the end of the project. When it happened, the auditor noted that the application did not encrypt sensitive data and was not compliant with many HIPAA regulations. The team took the feedback and concluded it would take 2-3 months to address the problems.

Given where the project was and the relationship with the client, we were told to deliver the application as is. The result was disastrous. The client ended up suing our company. The outcome was not disclosed, but it just goes to show that security must be checked early and often.

DevOps, and DevSecOps, approach this in a couple of key ways:

  1. The use of the Liaison model (popularized by Google) in which the key areas of Infrastructure, Quality, Security, and Operations delegate a representative who works part time with project teams to ensure they have access to the resources and knowledge needed to carry out tasks.
  2. Creation of infrastructure resources is done through shared libraries which are “blessed” by teams to ensure that certain common features are created.

IaC can help teams shore up #2. Imagine if each time a team wanted to create a VM they had to use a specific module that limited which OS images they could use, ensured certain ports were closed, and applied standard monitoring and security settings to the machine. This brings about consistency while still allowing teams to self-service as needed. For operations, the module could require a tag for the created instances so operations can track them centrally. The possibilities are endless.
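
To make that concrete, here is a rough sketch of what calling such a “blessed” module might look like – the module name, source path, and inputs are all hypothetical, but the pattern is the point: teams only supply the values they are allowed to vary.

# hypothetical "blessed" VM module - it would enforce approved images, closed ports,
# monitoring agents, and required tags internally; callers supply only the allowed inputs
module "app_vm" {
  source = "./modules/blessed_vm"

  name        = "vm-orders-app"
  rg_name     = azurerm_resource_group.this.name
  size        = "Standard_D2s_v3"
  os_image    = "2019-Datacenter" # validated against an approved list inside the module
  cost_center = "orders"          # surfaced as a tag so operations can track the instance
}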

This is what is meant by “centralized control for decentralized teams”. Teams can even work with Infrastructure, Operations, and Security to make changes to these libraries in controlled ways. This lets organizations maintain central control while preserving the decentralization necessary to allow teams to operate efficiently.

Using Modules with Terraform

Most IaC tools (if not all) support this modularization concept to some degree, and Terraform (link) is no exception. The use of modules can ensure that the services teams deploy conform to certain specifications. Further, since Terraform modules are simply directories containing code files, they can easily be zipped and deployed to an “artifact” server (Azure Artifacts or GitHub Packages, to name a couple) where other teams can download the latest version or a specific version.

Let’s take a look at what a script that uses modules can look like. This is an example application that leverages Private Endpoint to ensure traffic from the Azure App Service to the Azure Blob Storage Container never leaves the VNet. Further, it uses an MSI (Managed Service Identity) with RBAC (Role Based Access Control) to grant specific rights on the target container to the Identity representing the App Service. This is a typical approach to building secure applications in Azure.

# create the resource group
resource "azurerm_resource_group" "this" {
  name     = "rg-secureapp2"
  location = "eastus2"
}

# create random string generator
resource "random_string" "this" {
  length  = 4
  special = false
  upper   = false
  number  = true
}

locals {
  resource_base_name = "secureapp${random_string.this.result}"
  allowed_ips        = var.my_ip == null ? [] : [ var.my_ip ]
}

# create the private vnet
module "vnet" {
  source = "./modules/virtualnetwork"
  depends_on = [
    azurerm_resource_group.this
  ]

  network_name            = "secureapp2"
  resource_group_name     = azurerm_resource_group.this.name
  resource_group_location = azurerm_resource_group.this.location
  address_space           = [ "10.1.0.0/16" ]

  subnets = {
    storage_subnet = {
      name                          = "storage"
      address_prefix                = "10.1.1.0/24",
      allow_private_endpoint_policy = true
      service_endpoints             = [ "Microsoft.Storage" ]
    }
    apps_subnet = {
      name           = "apps"
      address_prefix = "10.1.2.0/24"
      delegations = {
        appservice = {
          name = "appservice-delegation"
          service_delegations = {
            webfarm = {
              name = "Microsoft.Web/serverFarms"
              actions = [
                "Microsoft.Network/virtualNetworks/subnets/action"
              ]
            }
          }
        }
      }
    }
  }
}

# create storage account
module "storage" {
  source = "./modules/storage"
  depends_on = [
    module.vnet
  ]

  resource_group_name     = azurerm_resource_group.this.name
  resource_group_location = azurerm_resource_group.this.location
  storage_account_name    = local.resource_base_name
  container_name          = "pictures"
  vnet_id                 = module.vnet.vnet_id
  allowed_ips             = local.allowed_ips

  private_endpoints = {
    pe = {
      name              = "pe-${local.resource_base_name}"
      subnet_id         = module.vnet.subnets["storage"]
      subresource_names = [ "blob" ]
    }
  }
}

# create app service
module "appservice" {
  source = "./modules/appservice"
  depends_on = [
    module.storage
  ]

  resource_group_name      = azurerm_resource_group.this.name
  resource_group_location  = azurerm_resource_group.this.location
  appservice_name          = local.resource_base_name
  storage_account_endpoint = module.storage.container_endpoint

  private_connections = {
    pc = {
      subnet_id = module.vnet.subnets["apps"]
    }
  }
}

# assign the identity to the storage account roles
resource "azurerm_role_assignment" "this" {
  scope                = module.storage.storage_account_container_id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = module.appservice.appservice_identity_id
  depends_on = [
    module.appservice,
    module.storage
  ]
}

For this particular script, the modules are all defined locally, so I am not downloading them from a central store, but doing so would be trivial. Terraform modules also give you the ability to hide certain bits of logic from callers. For example, there are a variety of rules which must be followed when setting up a Private Endpoint for an Azure Storage account (creation of the DNS zone, usage of the correct private IP, specific names which must be used), all of which can be encapsulated within the module.
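
To give a rough idea of what pulling a module from a central store could look like, the source argument can point at a remote location instead of a local path. The repository URL, subdirectory, and tag below are placeholders; this is a sketch, not the layout of the accompanying repo.

# hypothetical remote module source - repository URL, subdirectory, and tag are placeholders
module "storage" {
  source = "git::https://github.com/contoso/terraform-modules.git//storage?ref=v1.2.0"

  # remaining inputs omitted for brevity
  resource_group_name  = azurerm_resource_group.this.name
  storage_account_name = local.resource_base_name
}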

There are even validation rules which can be written for module input parameters which, again, allow Infrastructure or Security to enforce their core concerns on the teams using the modules. This is the power of IaC in large organizations. It is not an easy level to achieve but, achieving it helps teams gain efficiencies which were previously difficult, if not impossible, to attain.
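
As an illustration of those validation rules, a module input variable can carry a validation block. The variable name and allowed values here are hypothetical; the syntax is standard Terraform (0.13+).

# hypothetical module input restricted to a "blessed" list of OS images
variable "os_image_sku" {
  type        = string
  description = "OS image SKU for the VM - restricted to approved images"

  validation {
    condition     = contains(["2019-Datacenter", "2022-Datacenter"], var.os_image_sku)
    error_message = "Only approved Windows Server images may be used."
  }
}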

Full source code is available here: https://github.com/jfarrell-examples/SecureAppTF

Lessons to Learn

DevOps is always an interesting conversation with clients. Many managers and organizations are looking for a path to get from Point A to Point B. Sadly, DevOps does not work that way. In many ways, as I often have to explain, it’s a journey, not a destination. The way DevOps is embraced will change from team to team, organization to organization, person to person.

One of the key mistakes I see clients happen upon is “too much, too soon”. Many elements of DevOps take a good amount of time, and pivoting, to get used to. Infrastructure as Code is one such element that can take an especially long time (this is not to imply that DevOps and IaC must go together; IaC is a component of DevOps, yes, but it can equally stand on its own).

It is important, with any transformation, to start small. I have seen organizations hoist upon their teams a sort of mandate to “codify everything” or “automate everything”. While well intentioned, this advice comes from a misunderstanding of DevOps as a “thing” rather than a culture.

Obviously, advice on DevOps transformations is out of scope for this post and is unique to each client situation. But, it is important to be committed to a long term plan. Embracing IaC (and DevOps) is not something that happens overnight and there are conversations that need to take place and, as with any new thing, you will need to address the political ramifications of changing the way work is done – be prepared for resistance.

Securing Configuration with Key Vault

In my previous post (here), I talked about the need to consider security when you build your application and focused mainly on securing network traffic. In keeping with a focus on DevOps, we took an Infrastructure as Code (IaC) approach which used Terraform to represent infrastructure in a script. But, as someone pointed out to me privately, I only covered a part of security, and not even the part which generally leads to more security flaws.

The same codebase as in the aforementioned post is used: jfarrell-examples/SecureApp (github.com)

Configuration Leaking

While securing network access and communication direction is vital, the more likely avenue for an attack tends to be an attacker finding your values in source code or in an Azure configuration section. In the example for Private Endpoints I stored the connection string for the Storage Account and the access key for the Event Grid in the Azure App Service Configuration section.

This is not entirely bad, as Configuration can be secured with RBAC to keep values visible only to certain persons. However, it is still not advised, as you would not be following a “defense in depth” mentality. Defense in Depth calls for us to never rely on ONE mechanism for security but rather force a would-be attacker to conquer multiple layers. For web applications (and most Azure apps) the de facto standard for securing values is Azure Key Vault (Key Vault | Microsoft Azure).

By using Azure Key Vault with Managed Identity, you can omit your sensitive values from the configuration section and use a defined identity for the calling service to ensure only necessary access for the service to request its values. Doing so lessens the chance that you will leak configuration values to users with nefarious intentions.

Understanding Managed Identity

Most services in Azure can make use of managed identities in one of two flavors:

  • System Assigned Identity – this is an identity that will be managed by Azure. It can only be assigned to a SINGLE resource
  • User Managed Identity – this is an identity created and managed by a user. It can be assigned to as many resources as needed. It is ideal for situations involving things like scale-sets where new resources are created and need to use the same identity.

For this example we will use a System Assigned Identity, as an Azure App Service does not span multiple resources within a single region; Azure performs some magic behind the scenes to maintain the same identity for the server farm machines which support the App Service as it scales.

The identity of the service effectively represents a user, or more accurately a service principal. This service principal has an object_id that we can use in a Key Vault access policy. These policies, separate from RBAC settings, dictate what a specific identity can do against that Key Vault.

Policies are NOT specific to certain secrets, keys, and certificates. If you grant the GET secret permission to an identity, it allows that identity to read ANY secret in the vault. This is not always advisable. To improve your security posture, create multiple key vaults to segment access to secret, key, and certificate values.

We can use Terraform to create the Key Vault and an App Service with an identity, and make the association. This is key because doing so allows us to create these secret values through IaC scripts rather than relying on engineers to do it manually.

Create the App Service with a System Identity

Here is code for the App Service, note the identity block:

# create the app service
resource "azurerm_app_service" "this" {
  name                = "app-${var.name}ym05"
  resource_group_name = var.rg_name
  location            = var.rg_location
  app_service_plan_id = azurerm_app_service_plan.this.id

  site_config {
    dotnet_framework_version = "v5.0"
  }

  app_settings = {
    "WEBSITE_DNS_SERVER"       = "168.63.129.16"
    "WEBSITE_VNET_ROUTE_ALL"   = "1"
    "WEBSITE_RUN_FROM_PACKAGE" = "1"
    "EventGridEndpoint"        = var.eventgrid_endpoint
    "KeyVaultEndpoint"         = var.keyvault_endpoint
  }

  identity {
    type = "SystemAssigned"
  }
}

# outputs
output "system_id" {
  value = azurerm_app_service.this.identity.0.principal_id
}

The change from the previous version of this module is that StorageAccountConnectionString and EventGridAccessKey are no longer present. We only provide the endpoints for our Key Vault and Event Grid; the sensitive values are held in Key Vault and accessed using the App Service’s Managed Identity.

Setup the Key Vault

First, I want to show you the creation block for Key Vault, here:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.62.1"
    }
  }
}

variable "name" {
  type = string
}

variable "rg_name" {
  type = string
}

variable "rg_location" {
  type = string
}

variable "tenant_id" {
  type = string
}

variable "secrets" {
  type = map
}

# get current user
data "azurerm_client_config" "current" {}

# create the resource
resource "azurerm_key_vault" "this" {
  name                = "kv-${var.name}"
  resource_group_name = var.rg_name
  location            = var.rg_location
  tenant_id           = var.tenant_id
  sku_name            = "standard"

  # define an access policy for terraform connection
  access_policy {
    tenant_id          = var.tenant_id
    object_id          = data.azurerm_client_config.current.object_id
    secret_permissions = [ "Get", "Set", "List" ]
  }
}

# add the secrets
resource "azurerm_key_vault_secret" "this" {
  for_each = var.secrets

  name         = each.key
  value        = each.value
  key_vault_id = azurerm_key_vault.this.id
}

# outputs
output "key_vault_endpoint" {
  value = azurerm_key_vault.this.vault_uri
}

output "key_vault_id" {
  value = azurerm_key_vault.this.id
}

The important thing to point out here is the definition of access_policy in this module. This is not the access being given to our App Service; it is instead a policy allowing the Terraform connection itself to manage the Key Vault (the Get, Set, and List secret permissions shown above, which it needs in order to write the secret values).

The outputs here are the Key Vault URI (for use as a configuration setting in the App Service) and the Key Vault ID (used later when creating access policies).

Creation of this Key Vault MUST precede the creation of the App Service BUT we cannot create the App Service’s access policy until the App Service is created, since we need the identity’s object_id (see above).

Here is the access policy that gets created to allow the Managed Identity representing the App Service to Get secrets from the Key Vault:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.62.1"
    }
  }
}

variable "key_vault_id" {
  type = string
}

variable "tenant_id" {
  type = string
}

variable "object_id" {
  type = string
}

variable "secret_permissions" {
  type    = list(string)
  default = []
}

variable "key_permissions" {
  type    = list(string)
  default = []
}

variable "certificate_permissions" {
  type    = list(string)
  default = []
}

# create resource
resource "azurerm_key_vault_access_policy" "this" {
  key_vault_id            = var.key_vault_id
  tenant_id               = var.tenant_id
  object_id               = var.object_id
  key_permissions         = var.key_permissions
  secret_permissions      = var.secret_permissions
  certificate_permissions = var.certificate_permissions
}

This policy does need to include the List permission on Key Vault secrets so that the configuration provider can bring all secrets into the configuration context for our App Service. This is the .NET Core code to bring the Key Vault secrets into the IConfiguration instance for the web app:

using System;
using Azure.Extensions.AspNetCore.Configuration.Secrets;
using Azure.Security.KeyVault.Secrets;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((ctx, config) =>
            {
                // build the configuration gathered so far to read the Key Vault endpoint
                var builtConfig = config.Build();
                var keyVaultEndpoint = builtConfig["KeyVaultEndpoint"];

                // use the App Service managed identity to authenticate against Key Vault
                var secretClient = new SecretClient(
                    new Uri(keyVaultEndpoint),
                    new Azure.Identity.ManagedIdentityCredential());

                config.AddAzureKeyVault(secretClient, new KeyVaultSecretManager());
            })
            .ConfigureWebHostDefaults(builder => builder.UseStartup<Startup>());
}

The result here is that the Web App will use the Managed Identity of the Azure App Service to communicate with our Key Vault at the given endpoint and bring our sensitive values into the web app. This gives us a solid amount of security and diminishes the chances that configuration values leak into places where they can be exposed.

Make it more secure

One issue with the above approach is that it requires some fudging, because local development will NOT have a managed identity. Instead, developers will need to use something else, such as InteractiveBrowserCredential or ClientSecretCredential (available in the Azure.Identity NuGet package). These are fine but, aside from requiring a person to authenticate with Azure when they run the app or finding a way to ensure sensitive client authorization values do NOT leak into source, it is a bit onerous.

The way to make our approach more secure is to introduce Azure App Configuration which can integrate with Key Vault in much the same way App Service does. The added benefit is Azure App Configuration can replace your local configuration in Azure and offers numerous features to aid in the management of these values across environments.

Unfortunately, at the time of this writing, Terraform does NOT support managing the keys within Azure App Configuration. Still, while it is not perfectly secure, just using Key Vault is usually an improvement over the existing sensitive data management techniques I typically see organizations and teams using.

Private Endpoints with Terraform

Warning: This is a fairly lengthy one – if you are just here for the code: jfarrell-examples/private-endpoint-terraform (github.com) – Cheers.

In my previous post I talked about the need for security to be a top consideration when building apps for Azure, or any cloud for that matter. In that post, I offered an explanation of the private endpoint feature Azure services can use to communicate with resources inside a Virtual Network (vnet).

While this is important, I decided to take my goals one step further by leveraging Terraform to create this outside of the portal. Infrastructure as Code (IaC) is a crucial concept for teams that wish to ensure consistency and predictability across environments. While there exists more than one operating model for IaC, the concepts are the same:

  • Infrastructure configuration and definition should be a part of the codebase
  • Changes to infrastructure should ALWAYS be represented in scripts which are run continuously to mitigate drift
  • There should be a defined non-manual action which causes these scripts to be evaluated against reality

Terraform is a very popular tool to accomplish this, for a number of reasons:

  • Its HashiCorp Configuration Language (HCL) tends to be more readable than the formats used by ARM (Azure Resource Manager) templates or AWS CloudFormation
  • It supports multiple clouds, both in configuration and in practice. This means a single script could manage infrastructure across AWS, Azure, Google Cloud, and others.
  • It is free

However, it is also important to note Terraform’s weakness as a third-party product. Neither Microsoft nor the other cloud vendors officially support the tool and, as such, its development tends to lag behind the native tooling for specific platforms. This means certain bleeding-edge features may not be available in Terraform. Granted, one can mitigate this in Terraform by importing a native script into the Terraform file.
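
As an example of that mitigation, the azurerm provider can deploy a raw ARM template from within a Terraform run. The template file and resource names below are placeholders; this is a minimal sketch of the idea, not code from the accompanying repo.

# deploy an ARM template for a feature the provider does not yet expose
# (the template file and resource group reference are placeholders)
resource "azurerm_resource_group_template_deployment" "bleeding_edge" {
  name                = "bleeding-edge-feature"
  resource_group_name = azurerm_resource_group.this.name
  deployment_mode     = "Incremental"
  template_content    = file("${path.module}/templates/new-feature.json")
}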

All in all, Terraform is a worthwhile tool to have at one’s disposal given the use cases it can support. I have yet to observe a situation in which there was something a client was relying on that Terraform did not support.

How do Private Endpoints Work?

Understanding how Private Endpoints work in Azure is a crucial step to building them into our Infrastructure as Code solution. Here is a diagram:

In this example, I am using an Azure App Service (Standard S1 SKU) which allows me to integrate with a subnet within the vnet. Once my request leaves the App Service it arrives at a Private DNS Zone which is linked to the vnet (it is shown as part of the vnet; that is not strictly true, as it is a global resource, but for the purposes of this article we can think of it as part of the vnet).

Within this DNS Zone we deploy an A record with a specific name matching the resource we are targeting. This gets resolved to the private IP of a NIC (network interface) that effectively represents our service. For its part, the service is not actually in the vnet; rather, it is configured to only allow connections from the private endpoint. In effect, a tunnel is created to the service.

The result of this, as I said in the previous post, is that your traffic NEVER leaves the vnet. This is an improvement over the Service Endpoint offering, which only guarantees workloads will never leave the Azure backbone. That is fine for most things but Private Endpoints offer an added level of security for your workloads.

Having said all that, let’s walk through building this provisioning process in Terraform. For those who want to see code this repo contains the Terraform script in its entirety. As this material is for education purposes only, this code should not be considered production ready.

Later in this post we will look at the definition for the App Service and the swift connection which supports the integration.

Create a Virtual Network with 3 subnets

Our first step, as it usually is with any secure application, is to create a Virtual Network (vnet). In this case we will give it three subnets. I will take advantage of Terraform’s module concept to enable reuse of the definition logic. For the storage and support subnets we can use the first module shown below; for the apps we can use the second, as its configuration is more complicated and I have not taken the time to unify the definitions.

# normal subnet with service endpoints
# create subnet
resource "azurerm_subnet" "this" {
  name                 = var.name
  resource_group_name  = var.rg_name
  virtual_network_name = var.vnet_name
  address_prefixes     = var.address_prefixes
  service_endpoints    = var.service_endpoints

  enforce_private_link_endpoint_network_policies = true
  enforce_private_link_service_network_policies  = false
}

# output variables
output "subnet_id" {
  value = azurerm_subnet.this.id
}

# delegated subnet, needed for integration with App Service
# create subnet
resource "azurerm_subnet" "this" {
  name                 = var.name
  resource_group_name  = var.rg_name
  virtual_network_name = var.vnet_name
  address_prefixes     = var.address_prefixes
  service_endpoints    = var.service_endpoints

  delegation {
    name = var.delegation_name

    service_delegation {
      name    = var.service_delegation
      actions = var.delegation_actions
    }
  }

  enforce_private_link_endpoint_network_policies = false
  enforce_private_link_service_network_policies  = false
}

# output variables
output "subnet_id" {
  value = azurerm_subnet.this.id
}

Pay very close attention to the enforce properties. These are set in a very specific way to enable our use case. Do not worry though; IF you make a mistake, the error messages reported back from ARM are pretty helpful for making corrections.

Here is an example of calling these modules:

# apps subnet
module "apps_subnet" {
  source = "./modules/networking/delegated_subnet"

  rg_name            = azurerm_resource_group.rg.name
  vnet_name          = module.vnet.vnet_name
  name               = "apps"
  delegation_name    = "appservice-delegation"
  service_delegation = "Microsoft.Web/serverFarms"
  delegation_actions = [ "Microsoft.Network/virtualNetworks/subnets/action" ]
  address_prefixes   = [ "10.1.1.0/24" ]
}

# storage subnet
module "storage_subnet" {
  source = "./modules/networking/subnet"
  depends_on = [
    module.apps_subnet
  ]

  rg_name           = azurerm_resource_group.rg.name
  vnet_name         = module.vnet.vnet_name
  name              = "storage"
  address_prefixes  = [ "10.1.2.0/24" ]
  service_endpoints = [ "Microsoft.Storage" ]
}

One tip I will give you for building up infrastructure: while the Azure documentation is very helpful, I will often create the resource in the portal and choose the Export Template option. Generally, it is pretty easy to map the ARM syntax to Terraform and glean the appropriate values – I know the above can seem a bit mysterious if you have never gone this deep.

Create the Storage Account

Up next we will want to create our storage account. This is due to the fact that our App Service will have a dependency on the storage account, as it will hold the Storage Account primary connection string in its App Settings (this is not the most secure option; we will cover that another time).

I always advise the teams I work with to ensure a Storage Account is set to completely Deny public traffic – there are just too many reports of security breaches which start with a malicious user finding sensitive data on a publicly accessible storage container. Lock it down from the start.

resource "azurerm_storage_account" "this" {
name = "storage${var.name}jx02"
resource_group_name = var.rg_name
location = var.rg_location
account_tier = "Standard"
account_kind = "StorageV2"
account_replication_type = "LRS"
network_rules {
default_action = "Deny"
bypass = [ "AzureServices" ]
}
}
# outputs
output "account_id" {
value = azurerm_storage_account.this.id
}
output "account_connection_string" {
value = azurerm_storage_account.this.primary_connection_string
}
output "account_name" {
value = azurerm_storage_account.this.name
}

One piece of advice, however: make sure you add an IP rule so that your local machine can still communicate with the storage account as you update it – it does support CIDR notation. Additionally, the Terraform documentation notes a property virtual_network_subnet_ids in the network_rules block – you do NOT need this for what we are doing.
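
For reference, that IP rule can live right in the network_rules block shown above. The address below is a placeholder; single IPs and CIDR ranges both work.

# same storage account as above, with an added IP rule for a local machine (placeholder address)
resource "azurerm_storage_account" "this" {
  name                     = "storage${var.name}jx02"
  resource_group_name      = var.rg_name
  location                 = var.rg_location
  account_tier             = "Standard"
  account_kind             = "StorageV2"
  account_replication_type = "LRS"

  network_rules {
    default_action = "Deny"
    bypass         = [ "AzureServices" ]
    ip_rules       = [ "203.0.113.25" ]
  }
}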

Now that this is created we can create the App Service.

Create the App Service

Our App Service needs to be integrated with our vnet (reference the diagram above) so as to allow communication with the Private DNS Zone we will create next. This is accomplished via a swift connection. Below is the definition used to create an Azure App Service which is integrated with a specific Virtual Network.

# create the app service plan
resource "azurerm_app_service_plan" "this" {
  name                = "plan-${var.name}"
  location            = var.rg_location
  resource_group_name = var.rg_name
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

# create the app service
resource "azurerm_app_service" "this" {
  name                = "app-${var.name}ym05"
  resource_group_name = var.rg_name
  location            = var.rg_location
  app_service_plan_id = azurerm_app_service_plan.this.id

  site_config {
    dotnet_framework_version = "v5.0"
  }

  app_settings = {
    "StorageAccountConnectionString" = var.storage_account_connection_string
    "WEBSITE_DNS_SERVER"             = "168.63.129.16"
    "WEBSITE_VNET_ROUTE_ALL"         = "1"
    "WEBSITE_RUN_FROM_PACKAGE"       = "1"
    "EventGridEndpoint"              = var.eventgrid_endpoint
    "EventGridAccessKey"             = var.eventgrid_access_key
  }
}

# create the vnet integration
resource "azurerm_app_service_virtual_network_swift_connection" "swiftConnection" {
  app_service_id = azurerm_app_service.this.id
  subnet_id      = var.subnet_id
}

Critical here is the inclusion of two app settings shown in the Terraform:

  • WEBSITE_DNS_SERVER set to 168.63.129.16
  • WEBSITE_VNET_ROUTE_ALL set to 1

Reference: Integrate app with Azure Virtual Network – Azure App Service | Microsoft Docs

This information is rather buried in the above link and it took me some effort to find. Each setting has a distinct purpose. WEBSITE_DNS_SERVER indicates where outgoing requests should look for name resolution. You MUST have this value to target the Private DNS Zone linked to the vnet. The WEBSITE_VNET_ROUTE_ALL setting tells the App Service to send ALL outbound calls into the vnet (this may not be practical depending on your use case).

For the eagle-eyed readers, you can see settings for an Event Grid here. In fact, the code shows how to integrate Private Endpoints with Azure Event Grid; the technique is similar. We won’t cover it as part of this post, but it’s worth understanding.

Create the Private DNS Rule

Ok, this is where things start to get tricky, mainly due to certain rules you MUST follow to ensure the connection is made successfully. What is effectively going to happen is that our DNS Zone name is PART of the target hostname we need to match. The match will then resolve to the private IP of our NIC (part of the private endpoint connection).

Here is the definition for the storage DNS Zone. The name of the zone is crucial, as such I have included how the module is called as well.

# create dns zone resource
resource "azurerm_private_dns_zone" "this" {
  name                = var.name
  resource_group_name = var.rg_name
}

# create link to vnet
resource "azurerm_private_dns_zone_virtual_network_link" "this" {
  name                  = "vnet-link"
  resource_group_name   = var.rg_name
  private_dns_zone_name = azurerm_private_dns_zone.this.name
  virtual_network_id    = var.vnet_id
}

# define outputs
output "zone_id" {
  value = azurerm_private_dns_zone.this.id
}

output "zone_name" {
  value = azurerm_private_dns_zone.this.name
}

# how it is called from the main Terraform file
module "private_dns" {
  source = "./modules/networking/dns/private_zone"
  depends_on = [
    module.vnet
  ]

  name    = "privatelink.blob.core.windows.net"
  rg_name = azurerm_resource_group.rg.name
  vnet_id = "/subscriptions/${data.azurerm_subscription.current.subscription_id}/resourceGroups/${azurerm_resource_group.rg.name}/providers/Microsoft.Network/virtualNetworks/${module.vnet.vnet_name}"
}

Ok, there is quite a bit to unpack here; let’s start with the name. The name here is mandatory. If your Private Endpoint will target a Storage Account, the name of the DNS Zone MUST be privatelink.blob.core.windows.net. Eagle-eyed readers will recognize this URL as the standard endpoint for the Blob service within a Storage Account.

This rule holds true with ANY other service that integrates with Private Endpoint. The full list can be found here: Azure Private Endpoint DNS configuration | Microsoft Docs

A second thing to note in the call is the structure of the value passed to the vnet_id parameter. For reasons unknown, Terraform did NOT resolve this based on context, so I ended up having to build it myself. You can see the usage of the data “azurerm_subscription” block in the source code. All it does is give me a reference to the current subscription so I can get the ID for the resource Id string.
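
For clarity, here is a minimal sketch of that data block and how the ID string gets assembled – the local value name is mine, but the references mirror the module call above.

# reference to the subscription the current Terraform run is authenticated against
data "azurerm_subscription" "current" {}

locals {
  # build the full resource ID of the vnet by hand
  vnet_resource_id = "/subscriptions/${data.azurerm_subscription.current.subscription_id}/resourceGroups/${azurerm_resource_group.rg.name}/providers/Microsoft.Network/virtualNetworks/${module.vnet.vnet_name}"
}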

Finally, notice that, following the creation of the Private DNS Zone, we are linking our vnet to it via the azurerm_private_dns_zone_virtual_network_link resource. Effectively, this informs the vnet that it can use this DNS Zone when routing calls within the network – this ties back to the flags we set on the Azure App Service.

Now we can create the Private Endpoint resource proper.

Create the Private Endpoint

Initially I thought you had to create one Private Endpoint per connection; however, later reading suggests that might not be the case. I have not had time to test this so, for this section, I will assume it is one per.

When you create a private endpoint the resource will get added to your resource group. However, it will also prompt the creation of a Network Interface resource. As I have stated, this interface is effectively your tunnel to the resource connected through the Private Endpoint. This interface will get assigned an IP consistent with the CIDR range of the subnet specified to the private endpoint. We will need this to finish configuring routing within the DNS Zone.

Here is the creation block for the Private Endpoint:

# create the resource
resource "azurerm_private_endpoint" "this" {
  name                = "pe-${var.name}"
  resource_group_name = var.rg_name
  location            = var.rg_location
  subnet_id           = var.subnet_id

  private_service_connection {
    name                           = "${var.name}-privateserviceconnection"
    private_connection_resource_id = var.resource_id
    is_manual_connection           = false
    subresource_names              = var.subresource_names
  }
}

# outputs
output "private_ip" {
  value = azurerm_private_endpoint.this.private_service_connection[0].private_ip_address
}

I am theorizing you can specify multiple private_service_connection blocks, thus allowing the private endpoint resource to be shared. However, I feel this might make resolution of the private IP harder. More research is needed.

The private_service_connection block is critical here as it specifies which resource we are targeting (private_connection_resource_id) and what service(s) (group(s)) within that resource we specifically want access to. For example, here we are targeting our Storage Account and want access to the blob service – here is the call from the main file:

# create private endpoint
module "private_storage" {
  source = "./modules/networking/private_endpoint"
  depends_on = [
    module.storage_subnet,
    module.storage_account
  ]

  name              = "private-storage"
  rg_name           = azurerm_resource_group.rg.name
  rg_location       = azurerm_resource_group.rg.location
  subnet_id         = module.storage_subnet.subnet_id
  resource_id       = module.storage_account.account_id
  subresource_names = [ "blob" ]
}

The key here is the output variable private_ip which we will use to configure the A record next. Without this value, requests from our App Service being routed through the DNS Zone will not be able to determine a destination.

Create the A Record

The final bit here is the creation of an A Record in the DNS Zone to give a destination IP for incoming requests. Here is the creation block (first part) and how it is called from the main Terraform file (second part).

# create the resources
resource "azurerm_private_dns_a_record" "this" {
  name                = var.name
  zone_name           = var.zone_name
  resource_group_name = var.rg_name
  ttl                 = 300
  records             = var.ip_records
}

# calling from main terraform file
module "private_storage_dns_a_record" {
  source = "./modules/networking/dns/a_record"
  depends_on = [
    module.private_dns,
    module.private_storage
  ]

  name       = module.storage_account.account_name
  rg_name    = azurerm_resource_group.rg.name
  zone_name  = module.private_dns.zone_name
  ip_records = [ module.private_storage.private_ip ]
}

It is that simple: the A record is added to the DNS Zone and it’s done. But LOOK OUT, we are back to the naming aspect again. The name here MUST be the name of your service, or at least the unique portion of the URL used when referencing the service. I will explain in the next section.

Understanding Routing

This is less obvious with the storage account than it is with Event Grid or other services. Consider what your typical storage account endpoint looks like:

mystorageaccount.blob.core.windows.net

Now here is the name of the attached Private DNS Zone: privatelink.blob.core.windows.net

Pretty similar, right? Now look at the name of the A record – it will be the name of your storage account. Effectively what happens here is the calling URL becomes mystorageaccount.privatelink.blob.core.windows.net. And yet, the code we deploy can still call mystorageaccount.blob.core.windows.net and work fine. Why? The answer is here: Use private endpoints – Azure Storage | Microsoft Docs

Effectively, the typical endpoint gets translated to the private one above which then gets matched by the Private DNS Zone. The way I further understand it is, if you were calling this from a peered Virtual Network (on-premise or in Azure) you would NEED to use the privatelink endpoint.

Where this got hairy for me was with Event Grid because of the values returned relative to the values I needed. Consider the following Event Grid definition:

resource "azurerm_eventgrid_topic" "this" {
name = "eg-topic-${var.name}jx01"
resource_group_name = var.rg_name
location = var.rg_location
input_schema = "EventGridSchema"
public_network_access_enabled = false
}
output "eventgrid_topic_id" {
value = azurerm_eventgrid_topic.this.id
}
output "event_grid_topic_name" {
value = azurerm_eventgrid_topic.this.name
}
output "eventgrid_topic_endpoint" {
value = azurerm_eventgrid_topic.this.endpoint
}
output "eventgrid_topic_access_key" {
value = azurerm_eventgrid_topic.this.primary_access_key
}

The value of the output variable eventgrid_topic_name is simply the name of the Event Grid instance, as expected. However, if you inspect the value of the endpoint you will see that it incorporates the region into the URL. For example:

https://someventgridtopic.eastus-1.eventgrid.azure.net/api/events

Given the REQUIRED name of a DNS Zone for the Event Grid Private Endpoint is privatelink.eventgrid.azure.net, my matched URL would be someeventgrid.privatelink.eventgrid.azure.net, which won’t work – I need the name of the A record to be someeventgrid.eastus-1, but this value was not readily available. Here is how I got it:

module "private_eventgrid_dns_a_record" {
source = "./modules/networking/dns/a_record"
depends_on = [
module.private_dns,
module.private_storage,
module.eventgrid_topic
]
name = "${module.eventgrid_topic.event_grid_topic_name}.${split(".", module.eventgrid_topic.eventgrid_topic_endpoint)[1]}"
rg_name = azurerm_resource_group.rg.name
zone_name = module.private_eventgrid_dns.zone_name
ip_records = [ module.private_eventgrid.private_ip ]
}

It is a bit messy but, the implementation here is not important. I hope this shows how the construction of the Private Link address through the DNS zone is what allows this to work and emphasizes how important the naming of the DNS Zone and A Record are.

In Conclusion

I hope this article has shown the power of Private Endpoint and what it can do for the security of your application. Security is often overlooked, especially with the cloud. This is unfortunate. As more and more organizations move their workloads to the cloud, they will have an expectation for security. Developers must embrace these understandings and work to ensure what we create in the Cloud (or anywhere) is secure by default.

Securing Access to Storage with Private Link

Security. Security is one of the hardest and yet most vital pieces to any application we build, whether on-premise or in the cloud. We see evidence of persons not taking security seriously all the time in the news in the form of data leaks, ransomware, etc. No cloud can automatically safeguard you from attacks but, they do offer some very sophisticated tools to aid in helping you more easily bring a strong security posture to your organization.

In Azure, the release of the Private Link feature is a crucial means to ensure we safeguard access to our PaaS (Platform as a Service) deployments. Using Private Link you can create a secure tunnel from your vNet (Virtual Network) to the PaaS service (Azure Storage for example). By using this tunnel, you can ensure that NO TRAFFIC between your assets in the vNet and the dependent services traverse the public internet. Why is this important?

Encryption can definitely help ensure data is kept private while in transit but, it is not perfect, and for some industries (finance, insurance, healthcare, etc.) the privacy of data is not something that can be left to chance.

Defense in Depth

As with any security posture, its strength lies in the layers of protection which exist – whether that is NVAs checking incoming traffic, microsegmentation of networks, and/or IP whitelisting – the key is to never rely solely on one measure to keep you safe.

Private Link, for example, keeps your traffic off the internet but does not protect you from a malicious actor accessing your storage account from a compromised machine. Ensuring you adhere to a defense in depth mentality is the best security, along with assuming and accepting you will have a breach and that the focus is not so much on preventing the breach but, rather, limiting the damage an attacker can do.

Establish your Private Link endpoint (Storage)

Private Link is available for the following services:

  • Azure Machine Learning
  • Azure Synapse Analytics
  • Azure Event Hub
  • Azure Monitor
  • Azure Data Factory
  • Azure App Configuration
  • Azure-managed Disks
  • Azure Container Registry
  • AKS (Azure Kubernetes Service)
  • Azure SQL Database
  • Azure CosmosDB
  • Azure Database for Postgres
  • Azure Database for MySQL
  • Azure Database for MariaDB
  • Azure Event Grid
  • Azure Service Bus
  • Azure IoT Hub
  • Azure Digital Twins
  • Azure Automation
  • Azure Backup
  • Azure Key Vault
  • Azure Storage (all services)
  • Azure FileSync
  • Azure Batch
  • Azure SignalR Service
  • Azure Web Apps
  • Azure Search
  • Azure Relay
  • Azure Load Balancer (Standard Only)

Source: https://docs.microsoft.com/en-us/azure/private-link/availability

This visual shows what is going on under the hood:

Manage a Private Endpoint connection in Azure | Microsoft Docs

The Private Link provides a specific endpoint within a Private DNS Zone within the vNet; this DNS zone is updated with the mapping for the endpoint. Therefore, for machines WITHIN the vNet you can use the typical endpoint and connection strings for services over the Private Link as you would if it did not exist – Private DNS will ensure the address is resolved to the private link IP which in turn allows communication with the connected PaaS service.

Creating the Initial Resources

The Virtual Network offering from Azure is a way to isolate your resources deployed to Azure, most commonly Virtual Machines and other features deployed through the IaaS (Infrastructure as a Service) model. By using vNets and Network Security Groups (NSGs) you can effectively control the incoming and outgoing traffic for this custom network you have created in Azure.

Here is a good link on creating a Virtual Network: https://docs.microsoft.com/en-us/azure/virtual-network/quick-create-portal

The short story is, for the purposes here, you can take the default CIDR for the Address Space (10.0.0.0/16) and the default Subnet CIDR (10.0.0.0/24). This will work fine for this bit (for those more familiar with CIDR you can adjust this to be more practical).
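
If you would rather script this step than click through the portal, a minimal Terraform sketch of the same network might look like the following – the resource names are placeholders and it assumes a resource group named "example" already exists in the configuration.

# a vnet with the default address space and a single subnet (names are placeholders)
resource "azurerm_virtual_network" "example" {
  name                = "vnet-privatelink-demo"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  address_space       = [ "10.0.0.0/16" ]
}

resource "azurerm_subnet" "default" {
  name                 = "default"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = [ "10.0.0.0/24" ]
}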

With the Virtual Network now set up, we can create our Storage Account. There is an option on the Networking tab for Private Endpoint. By choosing to Add private endpoint you can select the vNet and the subnet in which this Private Endpoint will be created.

A quick note: Private Link will ONLY allow access to a specific service within the storage account; you don’t get unfettered access to all services, it’s designed to be a pipe to a specific service. For this demo, I will select blob, as my file upload code will create and retrieve objects from the Azure Storage Blob service.

Keep the default settings for the Private DNS integration section.

For details on creating Azure Storage Accounts, follow this link: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal – please make sure you do not allow anonymous access to containers; this is the number one way data leaks are caused.

Let’s Access Our Blobs

So, right now we have a pretty useless setup. Sure it’s secure, but we can’t do anything with our Private Endpoint. We could deploy a Virtual Machine running IIS or Nginx (or whatever you prefer), throw some .NET Core code up there, and interact with the endpoint.

Before doing this, consult this link to understand what changes, if any, are needed to allow your application to talk to the Private Endpoint: https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns

That works but, I have a different idea. One of the newer features in Azure App Service is the ability to allow the App Service to integrate with a Virtual Network. Since App Service is a PaaS, it also means we are not required to patch and maintain an underlying OS; we can just deploy our code and indicate to Azure how much in the way of resources we need. Let’s use this.

Prepare our VNet for App Service Integration

So, disclaimer: I am not a networking expert. Though I know enough to get by, there are people who are far more versed than I am. I say this because one of the requirements for Azure App Service VNet Integration is that the App Service be injected into an empty subnet.

To achieve this, I added a secondary address space to my VNet and created a single subnet within this new Address Space, called Apps. There is likely a better way to do this, but this is what worked for me.

Here is a screen shot of what my VNet Address space and Subnets look like:

VNet Address Space
VNet Subnets

Make sure this is created AHEAD of the next phase where we will create and join an App Service instance to this VNet.

Join an App Service to our Virtual Network

I won’t go into how to create an Azure App Service; here is a link explaining it: https://docs.microsoft.com/en-us/azure/app-service/quickstart-dotnetcore?tabs=netcore31&pivots=development-environment-vs

Once the App Service is created, you can upload your code to it. For simplicity, I am posting gists with the code for reading and writing blob data from a Web API, but there are numerous examples you could draw from:

The StorageAccountConnectionString is the value straight from the Azure portal, no changes. This is the beauty of the Private DNS Zone that gets added to the Virtual Network when you add a Private Link. More on that later.

Deploy your application to your Azure AppService and add StorageAccountConnectionString to your Configuration blade, again, the value is verbatim what is listed in the Access Keys blade in the Storage Account.

You can now try to use your application and it should fail. The error, if you look in the logs, will be 403 Forbidden, which makes sense: we have NOT joined the App Service to the VNet yet. This proves our security is working – the storage account is NOT visible to anyone outside of our Virtual Network.

I expect you will want to test things locally with the Storage Account; it is onerous to only do your testing when deployed to Azure. While the Storage Account IS set up to allow connections via the Private Link, you can also poke a hole in the Storage Account firewall to allow specific IP addresses (or ranges). This can be found under the Networking blade of the Storage Account; look for the Firewall section. There is an option to add your current IP to the allowed list.

In the Azure App Service select the Networking blade. The very first option in the listing is VNet Integration – this is what you want to configure. Select the Click here to configure link.

Select the Add VNet option. This will open a panel with your available VNets and their respective Subnets. This is what we are after. You should see your Apps (or whatever you used) Subnet – select this. Press Ok.

The Configuration screen will update with the new details. Give it a few minutes for the stars to align before you test. Not super long but just long enough that if you test right away it might not work.

What are the results?

If everything is working, your App Service can talk to the Storage Account with NO CONFIGURATION changes. Yet, this is very secure since only the App Service can talk to the Storage Account over the web, lessening the chances of a data leak. There are other ways to further secure this, NSGs and Firewall rules being at the top of the list. But, this is a good start to creating a secure application.

Difference with Service Connections

You may be aware of another feature in Azure known as Service Connections and, make no mistake, they share a lot of similarities with Private Links, but the two are different.

The greatest difference is that Service Connections expose a Public IP; Private Links do not, they only ever use Private IPs. Thus the two address distinctly different use cases. The Service Connection can limit access to the underlying service but data is still traversing the public web. With Private Link (Endpoints) the data NEVER leaves the VNet and is thus more secure.

MVP Ravi explains this difference in more detail: https://scomandothergeekystuff.com/2020/06/22/azure-service-endpoints-versus-azure-private-links/

Regardless of which you choose I implore you to consider security first and foremost in your applications. There are enough tools and features in Azure to keep security breaches to a minimum.

Single Purpose Events

Over the last few weeks Jeff Fritz and I have been slowly rolling out (darkly) support in KlipTok for tracking the live status of tracked channels. This feature leverages Twitch’s EventSub system that enables a sort of event driven programming for client applications. The integration of EventSub into KlipTok will enable the site to take the next step in terms of functionality and offerings.

Tracking stream status involves receiving callbacks for two different events which can occur: stream.online and stream.offline. I implemented support for this using Azure Service Bus with SQL filters. The filters were organized as such:

EventType = ‘stream.online’ and EventVersion = ‘1’ => Stream is online

EventType = ‘stream.offline’ and EventVersion = ‘1’ => Stream is offline
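
For context, here is a minimal sketch of how subscriptions with these filters might be created. It assumes the Microsoft.Azure.ServiceBus package referenced later in this post, and the topic and subscription names are illustrative, not KlipTok's actual configuration:

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

public static class SubscriptionSetup
{
    // sketch only: "channel-events" and the subscription names are placeholders
    public static async Task CreateSubscriptionsAsync(string serviceBusConnectionString)
    {
        var management = new ManagementClient(serviceBusConnectionString);

        await management.CreateSubscriptionAsync(
            new SubscriptionDescription("channel-events", "StreamOnlineUpdate"),
            new RuleDescription("OnlineRule",
                new SqlFilter("EventType = 'stream.online' AND EventVersion = '1'")));

        await management.CreateSubscriptionAsync(
            new SubscriptionDescription("channel-events", "StreamOfflineUpdate"),
            new RuleDescription("OfflineRule",
                new SqlFilter("EventType = 'stream.offline' AND EventVersion = '1'")));
    }
}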

Initial testing of this logic showed it worked as the events were received. Following these successful small tests I worked with Jeff to subscribe all channels that had opted into tracking. Once that was done, I began monitoring the channels during control times (when the @csharpfritz stream was active) as well as at random points throughout the day. For the most part I had confirmed the functionality was working properly.

But then I started seeing some irregularities. @wintergaming is a popular StarCraft 2 Twitch channel, but I noticed that its entry never seemed to go offline. Digging deeper into Twitch’s documentation I realized that, in fact, there is a difference between a channel being online and a channel being live; @wintergaming, for example, is NEVER offline – he plays reruns when he is not live. According to the documentation, this is still a valid online status, a status which I was not accounting for. As our requirement for KlipTok was to track LIVE channels, a change needed to be made.

Many Options

As I sat down to address this I was faced with a few ways to go about it. A couple approaches which I considered were:

  • I could update the StreamOnlineUpdate subscription such that, if the type was NOT ‘live’, it would treat the event as an offline event and delete the entry from the LiveChannel tracking table
  • I could update the StreamOnlineUpdate subscription such that, if a non-‘live’ type was detected, the event would be redirected to the Offline subscription

Both of these were, in my mind, bad choices. For starters, taking Option 1 would create duplicative code between the Online and Offline subscriptions. Further, it would obscure the purpose of the OnlineUpdate whose intent is to handle the case of the channel being ‘online’. I decided to not pursue this.

Option 2 is a bit better since it avoids duplicating logic but, when creating event handlers, the intent should be as straightforward and clear as possible. Event redirection like this is foolish and only adds extra processing logic to the handler. It would be different if I was creating a new event to “advance the flow”. But, in this case, I am effectively using the subscriptions as a mechanism of logic checking.

So, I thought about it more deeply and realized that I was restricting myself based on the naming I had chosen. Recall what I said earlier: “there is a difference between a channel being online and a channel being live”. The solution lay in honoring, in our system, the same distinction Twitch was making.

Thus, the solution I arrived at is to alter the names of the subscriptions as such:

  • StreamOnlineUpdate => StreamLiveUpdate
  • StreamOfflineUpdate => StreamNotLiveUpdate

With this distinction in place, I could now adjust the Service Bus SQL Filter to only send the message to StreamLiveUpdate if the channel is, in fact, live. In all other cases the channel is NOT live, and thus we send to StreamNotLiveUpdate.

In effect, this enables the sort of Single Purpose Events which are ideal in complex systems that depend on low coupling to preserve the sanity of their maintainers.

Making it Work

The SQL Filter syntax of Service Bus works quite well (though I am still partial to what is offered through EventGrid) and enables clear definition of criteria. Unlike EventGrid, however, the message body itself cannot be analyzed (or if it can, I have not found out how). Thus, we rely on the Message class (from the Microsoft.Azure.ServiceBus NuGet package) to apply custom UserProperties that we can use for filtering.

We end up defining the following SQL Filter for StreamLiveUpdate:

EventVersion = ‘1’ and EventType = ‘stream.online’ and StreamType = ‘live’

EventVersion comes from Twitch so we can distinguish between different formats of the Event (this approach is also highly advised for testing and development, to keep test events separate from events already in play).

The EventType is the Twitch event being received. Our code, upon knowing it is receiving a stream-type event, also adds the StreamType value, which will contain live, rerun, and other values indicating what kind of online stream the event corresponds to.
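
To make that concrete, here is a minimal sketch of how the publishing side might stamp those properties onto the message. The topic name and payload shape are placeholders of mine, not KlipTok’s actual code:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public static class StreamEventPublisher
{
    public static async Task PublishStreamEventAsync(string connectionString, object payload, string eventType, string streamType)
    {
        var client = new TopicClient(connectionString, "channel-events"); // topic name is illustrative

        var message = new Message(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(payload)));

        // these UserProperties are what the subscription SQL filters evaluate
        message.UserProperties["EventVersion"] = "1";
        message.UserProperties["EventType"] = eventType;    // e.g. stream.online
        message.UserProperties["StreamType"] = streamType;  // e.g. live, rerun

        await client.SendAsync(message);
    }
}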

For StreamNotLiveUpdate we define the following SQL Filter:

EventVersion = ‘1’ and (EventType = ‘stream.offline’ or (EventType = ‘stream.online’ and StreamType <> ‘live’))

You can see this combines our criteria for the normal case (EventType = ‘stream.offline’) and the exceptional case around an online event that is NOT of type live.

Conclusion

Through this approach we ensure our event handlers have but a single purpose, an application of the Single Responsibility Principle from SOLID design. The only time we should have to modify the Live event handler is if the meaning of a live channel changes. We are not redirecting or overloading this handler and obscuring its meaning. Instead we adjusted our understanding to better match the reality of the data we would be receiving. Thus, we are able to maintain a single purpose event pattern and control the complexity of the feature.

Common Misconception #4 – Duplication is bad

Perspectives on duplication are as varied and wide-ranging as any topic in software development. Newer programmers are often fed acronyms such as DRY (Don’t Repeat Yourself) and constantly bombarded with warnings from senior programmers about the dangers of duplication. I can even recall getting these lessons when I was still in academia.

So is duplication actually bad? The simple answer is ‘Yes’, but the real answer is much more nuanced and boils down to one of the most common phrases in software engineering: ‘it depends’. But what does it depend on? How can teams prevent themselves from going off the deep end and introducing leaky abstractions or over-complexity, all in the name of adhering to DRY?

It is about the Lifecycle

Whenever a block of code is authored it immediately transitions to something which must be maintained. I once heard a senior engineer remark “there is no such thing as new code. There is only legacy code and code which is not yet written”. So, once code is written we must immediately begin discussing its lifecycle.

Code evolves over time and this evolution should be encouraged – something that is actively pursued through attempts to decouple code and promote cohesion. Too often the reason something is “shared” is because we created a Person class in one system and felt that creating a second Person class in another system would be duplicative. However, in making this assumption, developers unknowingly increase system coupling, resulting in a far greater problem – a problem they could avoid if they considered the “lifecycle” of each Person class.

More prominently, this notion gets applied to business logic and, in that case, it is correct. Certain business logic absolutely has a standard lifecycle. In fact, each line of code you write will have a lifecycle, and this is what you need to use to decide whether duplicating something makes sense.

An example

When I was working as a Principal at West Monroe Partners some years ago, I was assigned to a project in which, through a combination of missteps, a multitude of mistakes had been made that hampered team progress and efficiency. One of these was a rather insane plan to share database entities through a NuGet package.

Database entities, in theory, do not change that often once established, but that is theory. More often, and especially as a system is being actively developed, they change constantly – this was especially true for this project, which had three active development teams all using the same database. The result was near-constant updates across every project whenever a change was made – and failing to update would often manifest as an error in a deployed environment, since Entity Framework would complain the expected schema did not match.

While the team may have had decent intentions in reducing duplication by sharing entities, it is a high-risk move in complex systems. In the best case, you end up with bloated class definitions and API calls where returned objects may or may not have all fields populated. This becomes even more true if you approach system design with a microservice-based mindset – each service should contain its own entities (unless you are sharing the DB, which is a different problem altogether).

Should all code be segregated then?

The short answer is “No”. Again, we return to the point on lifecycle. In fact, this relates to the core principle in microservice design that services and their lifecycles are independent of each other. Spelled out: “no service should be reliant on another service in deployment” – if this rule is broken then the advantage of microservices is effectively lost. The lifecycle of each service must be respected.

It is the same with code. Code lifecycles must be understood and respected. Just because you define two Person class definitions does not mean you have created duplication, even if the definitions are the same. You are giving yourself the ability to change each over time according to system needs.
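
As a purely illustrative sketch (these types are not from any real project), two Person classes that look identical today can still deserve to exist separately, because each will grow with its own module:

using System;

// Billing's view of a person – later gains TaxId, BillingAddress, etc. without touching Scheduling
namespace Billing
{
    public class Person
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }
}

// Scheduling's view of a person – later gains TimeZone, Availability, etc. without touching Billing
namespace Scheduling
{
    public class Person
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }
}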

Some code – logging code, or perhaps a common set of POCO classes – may need to be shared; this is where I tend to lean on custom NuGet feeds. But generally this is a last resort, as it is easy to go overboard with things like NuGet and fall into the “left-pad” problem – where you decompose everything so much that you wind up with an extensive chain of dependencies which need to be revved for a release. Link.

As with most things, there is a necessary balance to strike here and you should not expect to get it right immediately – frankly, the same lesson applies with microservices: you never start with microservices, you create new services as needed.

Why is it a misconception?

I find that the concept of DRY is overused and, more often, taken way too literally. I think we can all agree that what is usually meant by DRY is to ensure we don’t need to update the same logic in multiple places. DRY is not telling us that having two Person classes is bad. It can be bad, but whether that is so is determined by circumstances and is not a hard and fast rule.

The misconception is dangerous because strict adherence to DRY can actually make our code LESS maintainable and sacrifice clarity for reduced keystrokes. As developers, we need to constantly be thinking and evaluating whether centralization and abstraction make sense or if we are doing it because we may be overthinking the problem or taking DRY too literally.

So I made a Stock Data App

I decided to build an Event Driven Stock Price Application using Event Grid, SignalR, and ReactJS. Just a little something to play with as I prepare to join Microsoft Consulting Services. I thought I would recount my experience here. First, here is what the flow looks like:

Figure 1 – Diagram of Stock App

While the diagram may look overwhelming, it really is quite simple:

  • Producer console app starts with some seed data of stock prices I gathered
  • It adjusts these values using some random numbers
  • The change in price is sent to an Event Grid topic with an EventType declared
  • The EventGrid subscriptions look for events with a matching EventType
  • Those that match will fire their respective Azure function
  • The Azure Function will then carry out its given task

I really prefer Event Grid for my event driven applications. It’s fast, cost effective, and has a better interaction experience than Service Bus topics, in my opinion. The subscription filters can get down to analyzing the raw JSON coming through, and it supports the up-and-coming CloudEvents (cloudevents.io) standard. It can also tie into other Azure services and respond to native Azure events, such as blob creation/deletion. All in all, it is one of my favorite Azure services.
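
As an aside, here is roughly what the producer’s publish call might look like. This is a hedged sketch assuming the older Microsoft.Azure.EventGrid package (matching the EventGridEvent type used in the function below); the event type name and topic parameters are placeholders:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventGrid;
using Microsoft.Azure.EventGrid.Models;

public static class PriceChangePublisher
{
    public static async Task PublishPriceChangeAsync(string topicHostname, string topicKey, object priceChange)
    {
        var client = new EventGridClient(new TopicCredentials(topicKey));

        // the EventType is what the Event Grid subscriptions filter on; the name here is a placeholder
        await client.PublishEventsAsync(topicHostname, new List<EventGridEvent>
        {
            new EventGridEvent
            {
                Id = Guid.NewGuid().ToString(),
                EventTime = DateTime.UtcNow,
                Subject = "price-change",
                Data = priceChange,
                EventType = "EventDrivePoc.Event.StockPriceUpdate",
                DataVersion = "1.0"
            }
        });
    }
}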

So, regarding the application, I chose to approach this in a purely event driven fashion. All price changes are seen as events. The CalculateChangePercent function receives all events and, using the symbol as the partition key, looks up the most recent price stored in the database.

Based on this and the incoming data it determines the change percent and creates a new event. Here is the code for that:

[FunctionName("CalculateChangePercent")]
public void CalculateChangePercent(
[EventGridTrigger] EventGridEvent incomingEvent,
[Table("stockpricehistory", Connection = "AzureWebJobsStorage")] CloudTable stockPriceHistoryTable,
[EventGrid(TopicEndpointUri = "TopicUrlSetting", TopicKeySetting = "TopicKeySetting")] ICollector<EventGridEvent> changeEventCollector,
ILogger logger)
{
var stockData = ((JObject)incomingEvent.Data).ToObject<StockDataPriceChangeEvent>();
var selectQuery = new TableQuery<StockDataTableEntity>().Where(
TableQuery.GenerateFilterCondition(nameof(StockDataTableEntity.PartitionKey), QueryComparisons.Equal, stockData.Symbol)
);
var symbolResults = stockPriceHistoryTable.ExecuteQuery(selectQuery).ToList();
var latestEntry = symbolResults.OrderByDescending(x => x.Timestamp)
.FirstOrDefault();
if (latestEntry != null)
{
var oldPrice = (decimal) latestEntry.Price;
var newPrice = stockData.Price;
var change = Math.Round((oldPrice newPrice) / oldPrice, 2) * 1;
stockData.Change = change;
}
changeEventCollector.Add(new EventGridEvent()
{
Id = Guid.NewGuid().ToString(),
Subject = $"{stockData.Symbol}-price-change",
Data = stockData,
EventType = "EventDrivePoc.Event.StockPriceChange",
DataVersion = "1.0"
});
}

This is basically “event redirection”, that is, taking one event and creating one or more events from it. It’s a very common approach for handling sophisticated event driven workflows. In this case, once the change percent is calculated the information is ready for transmission and persistence.

This sort of “multi-casting” is at the heart of what makes event driven architecture so powerful – and so risky. Here two subscribers will receive the exact same event and perform very different operations:

  • Flow 1 – this flow takes the incoming event and saves it to a persistence store. Usually this needs to be something highly available; consistency is usually not something we care about as much.
  • Flow 2 – this flow takes the incoming event and sends it to the Azure SignalR service so we can have a real time feed of the stock data. This approach in turn allows connecting clients to also be event driven since we will “push” data to them.

Let’s focus on Flow 1 as it is the most typical flow. Generally, you will always want a record of the events the system received either for analysis or potential playback (in the event of state loss or debugging). This is what is being accomplished here with the persistence store.

The reason you will often see this as a Data Warehouse or some sort of NoSQL database is that consistency is not a huge worry; NoSQL databases emphasize the AP portion of the CAP theorem (link) and are well suited to handling high write volumes – typical in event heavy systems, especially as you get closer to patterns such as Event Sourcing (link). There needs to be a record of the events the system processed.

This is not to say you should always rely on a NoSQL database over an RDBMS (Relational Database Management System); each has its place and there are many other patterns which can be used. I like NoSQL for things like ledgers because it does not enforce a standard schema, so all events can be stored together, which allows for easier re-sequencing.

That said, there are also patterns which periodically read from NoSQL stores and write data into an RDBMS – this is often done when a high ingestion volume is expected but the data itself can be trusted to be consistent. This can feed data into a system where we need consistency checks for other operations.

Build the Front End

Next on my list was to build a frontend reader to see the data as it came across. I chose to use ReactJS for a few reasons:

  • Most examples seem to use JQuery and I am not particularly fond of JQuery these days
  • ReactJS is, to me, the best front end JavaScript framework and I hadn’t worked with it in some time
  • I wanted to ensure I still understood how to implement the Redux pattern and ReactJS has better support than Angular; not sure about Vue.js

If you have never used the Redux pattern, I highly recommend it for front end applications. It emphasizes a uni-directional flow of data built on deterministic operations. Here is a visual:

https://xximjasonxx.files.wordpress.com/2021/05/2821e-1bzq8fpvjwhrbxoed3n9yhw.png

I first used this pattern several years ago when leading a team at West Monroe; we built a task completion engine for restaurants and got pretty deep into the pattern. I was quite impressed.

Put simply, the goal of Redux is that all actions are handled the same way and state is recreated each time a change is made, as opposed to being updated in place. With this mentality, operations are deterministic, meaning the same result will occur no matter how many times the same action is executed. This meshes very nicely with the event driven model from the backend, which SignalR carries to the frontend.

Central to this is the Store, which facilitates subscribing to and dispatching events. I won’t go much deeper into Redux here – there are much better sources out there, such as https://redux.js.org/. Simply put, when SignalR sends out a message it raises an event to listeners – in my case the UpdateStockPrice event. I can use a reference to the store to dispatch the event, which allows my reducers to see it and change their state.

Once a reducer changes state, a state-updated event is raised and any connected component will update, if needed (ReactJS uses a virtual DOM to ensure components only re-render if they actually changed). Here is the code which is used (simplified):

// located at the bottom of index.js, the application bootstrap
let connection = new HubConnectionBuilder()
  .withAutomaticReconnect()
  .withUrl("https://func-stockdatareceivers.azurewebsites.net/api/stockdata")
  .build();

connection.on('UpdateStockPrice', data => {
  store.dispatch({
    type: UpdateStockPriceAction,
    data
  });
});
connection.start();

// reducers look for actions and make changes. The format of the action (type, data) is standard.
// if the reducer is unaware of the action, we return whatever the current state holds
const stockDataReducer = (state = initialState, action) => {
  switch (action.type) {
    case UpdateStockPriceAction:
      // rebuild the array with the updated symbol rather than mutating the existing state
      const newArray = state.stockData.filter(s => s.Symbol !== action.data.Symbol);
      newArray.push(action.data);
      newArray.sort((e1, e2) => {
        if (e1.Symbol > e2.Symbol)
          return 1;
        if (e1.Symbol < e2.Symbol)
          return -1;
        return 0;
      });
      return { stockData: newArray };
    default:
      return state;
  }
};

// the component is connected to the store and will rerender when a state change is made
class StockDataWindow extends Component {
  render() {
    return (
      <div>
        {this.props.stockData.map(d => (
          <StockDataLine stockData={d} key={d.Symbol} />
        ))}
      </div>
    );
  }
};

const mapStateToProps = state => {
  return {
    stockData: state.stockData
  };
};

export default connect(mapStateToProps, null)(StockDataWindow);

This code makes use of the redux and react-redux helper libraries. ReactJS, as I said before, supports Redux extremely well, far better than Angular last I checked. It makes the pattern very easy to implement.

So what happens is:

  • SignalR sends a host of price change events to our client
  • Our client dispatches events for each one through our store
  • The events (actions) are received by our reducer which changes its state
  • This state change causes ReactJS to re-render the affected components into the virtual DOM
  • The virtual DOM is compared against the actual DOM and components update where the two differ

This whole process is very quick and is, at its heart, deterministic. In the code above, you will notice the array is recreated each time rather than pushing the new price or finding the existing index and updating it. This may seem strange, but it very efficiently PREVENTS side effects – which often manifest as some of the nastier bugs.

As with our backend, the same action could be received by multiple reducers – there is no 1:1 rule.

Closing

I wrote this application more to experiment with Event Driven programming on the backend and frontend. I do believe this sort of pattern can work well for most applications; in terms of Redux I think any application of even moderate complexity can benefit.

Code is here: https://github.com/jfarrell-examples/StockApp

Happy Coding

Common Misconception #3 – DevOps is a tool

DevOps is a topic very near and dear to me. It’s something I helped organizations with a lot as an App Modernization Consultant in the Cognizant Microsoft Business Group. However, I find that DevOps is frequently misunderstood by, or misrepresented to, organizations.

What is DevOps?

In the simplest sense, DevOps is a culture focused on collaboration that aims to maximize team affinity and organizational productivity. While part of embracing it involves adopting tools that allow teams to scale effectively, at its core it is a cultural shift to remove team silos and emphasize free and clear communication. One could argue, as Gene Kim does in his book The Phoenix Project, that the full realization of DevOps is the abolishment of IT departments; instead, IT is seen as a resource embedded in each department.

From a more complex perspective, the tenets of DevOps mirror the tenets of Agile and focus on small iterations, allowing organizations (and the teams within them) to adjust more easily to changing circumstances. These tenets (as with Agile) are rooted in Lean Management, which was born out of the Toyota Production System (TPS) (link) – the system that revolutionized manufacturing and allowed Toyota to keep pace with GM and Ford, despite the latter two being much larger.

The Three Ways

DevOps culture carries forward from TPS The Three Ways, which govern how work flows through the system, how it is evaluated for quality, and how observations on that work inform future decisions and planning. For those familiar with Agile this should, again, sound familiar – DevOps and Agile share many similarities in terms of doctrine. A great book for understanding The Three Ways (also authored by Gene Kim) is The DevOps Handbook.

The First Way

The First Way focuses on maximizing the left-to-right flow of work. For engineers this would be the flow of a change from conception to production. The critical idea of this Way is the notion of small batches. We want teams to consistently and quickly send work through the flow and to production (or at least to some higher environment) as quickly as possible. Perhaps contrary to established thought, the First Way stresses that the faster a team moves, the higher their quality.

Consider: if a team works on a large batch of changes (say 100), does testing, and then ultimately deploys, the testing and validation is spread out across those 100 changes. Not only is the team at the mercy of a quality process which must be impossibly strict but, if a problem does occur, the team must sort out WHICH of the 100 changes caused it. Further, the sheer size of the deployment would likely make rollback very difficult, if not impossible. Thus, the team may also be contending with downtime or limited options to ensure the problem does not introduce bad data.

Now consider if that same team deployed 2 changes. The QA team can focus on a very narrow set of tests and, if something goes wrong, diagnosis is much easier given the smaller size. Further, the changes could likely be backed out (or turned off) to prevent the introduction of bad data into the system.

There is a non-linear relationship between the size of the change and the potential risk of integrating the change – when you go from a ten-line code change to a one-hundred-line code change, the risk of something going wrong is more than 10x higher, and so forth

Randy Shoup, DevOps Manager, Google

Smaller batch sizes help your teams move faster and get their work in front of stakeholders more quickly; doing so induces better communication between the team and their users, which ultimately helps each side get what they want out of the process. There is nothing worse than going off in a corner for 4 months, building something, and having it fall short of the needs of the business.

The Second Way

Moving fast is great, but it is only part of the equation. As with so much in DevOps (and Agile), the core learnings complement one another. The Second Way emphasizes the need for fast feedback cycles or, more directly, is aimed at ensuring that speed is supported by automated and frequent quality checks.

The Second Way is often tied to a concept in DevOps called shift-left, shown by the graph visual below:

Shift Left in action

It is not uncommon for organizations embracing a siloed approach to Quality Assurance to start QA near the end of a cycle, generally to ensure they can validate the complete picture. While this makes sense on paper, its value is misplaced. I would ask anyone who has built or tested software whether this process ends up being a bottleneck (reasons be damned) in delivery. If you are like most clients I have worked with, the answer is “yes, always”.

The truth is, such a model does not work if we want teams to move with speed and quality. Shift Left therefore emphasizes that it is the people at the LEFT who need to do the testing (in the case of engineering, that would be the developers). The goal is to discover a problem as quickly as possible so that it can be corrected, building on the well-established understanding that the earlier a problem is found, the cheaper it is to fix.

To put it bluntly, teams cannot make changes to systems, teams, or anything else if there is no validation to confirm that what they did worked. For engineering, we can only know something is working if we can test that it is working – hence the common rule for high-performing teams that no problem should ever occur twice.

I cannot overstate how important these feedback cycles are, especially in terms of automation. Especially in engineering, giving developers confidence that IF they make a mistake (and they will) it will get caught before it gets to production is HUGE. Without this confidence, the value provided by The First Way will be limited.

Equally critical to creating these cycles is UNDERSTANDING the means of testing and which use case is best tested by which method. Here is an image of the Testing Pyramid which I commonly use with clients when explaining feedback cycles for engineering.

For those wondering where manual testing goes – it sits at the very top and has the fewest tests. Manual tests should, over time, be transitioned to an automated tool.

A final point I want to share here: DevOps considers QA a strategic resource, NOT a tactical one. That is, high-functioning teams do NOT expect QA persons to do the testing; these individuals are expected to ORGANIZE the testing. From this standpoint, they plan out what tests are needed and ensure the testing is happening. In some cases, they may be called on to educate developers on what tests fit certain use cases. Too often, I have seen teams view QA as the person who must do the testing – this is false and only encourages bottlenecking. Shift-left is very clear that DEVELOPERS need to do the majority of testing since they are closer to a given change than QA.

The Third Way

No methodology is without fault and it would be folly to believe there is a prescriptive approach that fits every team. Thus, The Third Way stresses that we should use metrics to learn about and modify our process. This includes how we work as well as how our systems work. The aim is to create a generative culture that is constantly accepting of new ideas and seeks to improve itself. Teams embracing this Way apply the scientific method to any change in process and work to build high trust. Any failure is seen not as a time to blame or punish, but rather as an opportunity to learn and evolve.

“The only sustainable competitive advantage is an organization’s ability to learn faster than the competition”

Peter Senge – Founder of the Society for Organizational Learning

For any organization the most valuable asset is the knowledge of its employees, for only through this knowledge can improvements be made that enable its products to continue to produce value for customers. Put another way:

Agility is not free. Its cost is the continual investment required to ensure teams can maintain velocity. I have seen software engineering department leads ask, over and over, why the team is not hitting its pre-determined velocity. Putting aside the fallacy of telling a team what speed it should work at, velocity is not free. If I own a sports car but perform no maintenance on it, soon it will drive the same as a typical consumer sedan. Why?

“.. in the absence of improvements, processes do NOT stay the same. Due to chaos and entropy, processes actually degrade over time”

Mike Rother – Toyota Kata

No organization, least of all engineering, can hope to achieve its goals if it does not continually invest in the process of reaching those goals. Teams which do not perform maintenance on themselves are destined to fail and, depending on the gravity of the failure, the organization could lose more than just money.

In Scrum, teams use the Sprint Retrospective to call attention to things which should be stopped, started, and continued as a way to ensure they are continually enhancing their process. However, too often I have seen these same teams shy away from ensuring that, in each sprint, time is taken to remove technical debt or add some automation, usually because they must hit a target velocity or deliver a certain feature. This completely gets away from the spirit of Agile and DevOps.

Its about culture

Hopefully, despite my occasional references to engineering, you can see that The Three Ways are about culture and about embracing many lessons learned from manufacturing about how to effectively move work through flows. DevOps is an extremely deep topic that, regrettably, often gets boiled down to the somewhat simplistic question of “Do you have automated builds?”. And yes, automation is key to embracing DevOps, but it is less important than establishing the cultural norms to support it. Simply having an automated build means little if all work must pass through central figures or certain work is handed off to silos where the timeline is no longer the team’s.

Further Reading

The topic of DevOps is well covered, especially if you are a fan of Gene Kim. I recommend these books to help understand DevOps culture better – I list them in order of quality:

  • The DevOps Handbook (Gene Kim) – Amazon
  • The Phoenix Project (Gene Kim et al) – Amazon
  • Effective DevOps (Davis and Daniels) – Amazon
  • The Unicorn Project (Gene Kim) – Amazon
  • Accelerate (Forsgren et al) – Amazon

Thank you for reading

Common Misconception #2 – Serverless is good for APIs

The next entry in this series is something that hits very close to home for me: Serverless for APIs. Let me first start off by saying, I am not stating this as an unequivocal rule. As with anything in technology there are cases where this makes sense. And, in fact, much of my consternation could be alleviated by using Containers. Nevertheless, I still believe the following to be true:

For any non-simple API, a serverless approach is going to be more harmful and limiting, and in some cases more costly, than using a traditional server.

Background

When AWS announced Lambda back in 2014 it marked the first time a major cloud platform had added support for what would become known as FaaS (Function as a Service). The catchphrase was serverless, which did not make a lot of sense to people since there was obviously still a server involved – but marketing people gotta market.

The concept was simple enough: using Lambda I could deploy just the code I wanted to run and pay per invocation (and the cost was insanely cheap). One of the things Lambda enabled was the ability to listen for internal events from things like S3 or DynamoDB and write small bits of code which responded to those events. This enabled a whole new class of event driven applications, as Lambda could serve as the glue between services – EventBridge came along later (a copy of Azure’s EventGrid service) and further elevated this paradigm.

One of the Events is a web request

One of the most common types of applications people write is APIs, and so Lambda made sure to include support for web calls – effectively by listening for a request event coming from outside the cloud. Using serverless plus S3 static web content, a company could run a quite sophisticated website for a fraction of the cost of traditional serving models.

This ultimately led developers to use Lambda and Azure Functions as a replacement for Elastic Beanstalk or Azure App Service. And this is where the misconception lies. While Lambda is useful to glue services together and provide for simple webhooks it is often ill-suited for complex APIs.

The rest of this will be in the context of Azure Functions but, conceptually, the same problems exist with Google Cloud Functions and AWS Lambda.

You are responding to an Event

In traditional web server applications, a request is received by the ISAPI (Internet Server Application Programming Interface) where it is analyzed to determine its final destination. This destination can be affected by code, filters, and other mechanisms.

Serverless, however, is purely event driven, which means that once an event enters the system it cannot be cancelled or redirected – it must invoke its handler. Consider the following problem that was encountered with Azure Function filters while developing KlipTok.

On KlipTok there was a need to ensure that each request contained a valid header. In traditional ASP .NET Core, we would write a simple piece of middleware to intercept the request and, if necessary, short-circuit it should it be deemed invalid. While technically possible in Azure Functions, it requires fairly in-depth knowledge and customization to achieve.

In the end, we leveraged IFunctionInvocationFilter (a preview feature) which allowed code to run ahead of the function’s execution (no short-circuiting allowed) and mark the request. Each function then had to check for this mark. It did allow us to reduce the code, but it certainly was not as clean as a traditional API framework.

The above is one of many examples of elements which are planned and accounted for in full-fledged API frameworks (being able to plug into the ISAPI being another) but are otherwise lacking in serverless frameworks, Azure Functions in this case. While it is possible to supplement some of these features with containerization or third party libraries, I still believe such a play detracts from the intended purpose of serverless: to be the glue in complex distributed systems.

It is not to say you never should

The old saying “never say never” holds true in software engineering as much as anywhere. I am not trying to say you should NEVER do this; there are cases where serverless makes sense. This is usually because the API is simple, or the serverless piece is leveraged behind a proxy API, or it represents specific routes within the API. But I have too often seen teams leverage serverless as if it were a replacement for Azure App Service or Elastic Beanstalk – it is not.

As with most things, teams need to be aware and make informed decisions with an eye on the likely evolutionary road a software product will take. While tempting, Azure Functions have a laundry list of drawbacks you need to be aware of, including:

  • Pricing which will vary with load taken by the server (if using Consumption style plans)
  • Long initial request times (cold starts), as the cloud provider must stand up the infrastructure to support the serverless code – oftentimes our methods will go to sleep
  • Difficulties with organization and code reuse. This is certainly much easier in Azure than AWS but, still something teams need to consider as the size of the API grows
  • Diminished support for common and expected API features. Ex: JWT authentication and authorization processing, dependency injection in filters, lack of ability to short circuit.

There are quite a few more but, you get the idea. In general, the aim for a serverless method is to be simple and short.

There are simply better options

In the end, the main point is this: while you can write APIs in serverless, oftentimes you simply shouldn’t – there are better options available. A great example is the wealth of features web programmers are used to and expect when building APIs that are not available, or not easy to implement, with serverless programming. Further, as project sizes grow, properly maintaining and managing the codebase becomes more difficult with serverless than with traditional programming.

In the end, the main purpose of serverless should be to glue your services together, enabling you to easily build a chain like this:

The items in blue represent the Azure Functions this sequence would require (at a minimum). The code here is fairly straightforward thanks to the use of bindings. These elements hold the flow together and support automated retry and fairly robust failure handling right out of the box.

Bindings are the key to using Serverless correctly

I BELIEVE EventBridge in AWS enables something like this but, as is typical, Microsoft has a much more thought-out experience for developers in Azure than AWS does – especially here.

Triggers and bindings in Azure Functions | Microsoft Docs

Bindings in Azure allow Functions to connect to services like Service Bus, Event Grid, Storage, SignalR, SendGrid, and a whole lot more. Developers can even author their own bindings. By using them, the need to write boilerplate connect-and-listen code is removed, so the functions can contain only code directed at their intended purpose. One of these bindings is a trigger called HttpTrigger and, if you have ever written an Azure Function, you are familiar with it. Given what we have discussed, its existence should make more sense to you.

A function is always triggered by an event. And the one that everyone loves to listen for is the HttpTrigger: an event sent to your Function App which matches certain criteria defined in the HttpTrigger attribute.
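
To illustrate, here is a hedged sketch (not code from the projects discussed here – the route, queue name, and setting names are invented) of an HttpTrigger function paired with a Service Bus output binding:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class SubmitOrder
{
    [FunctionName("SubmitOrder")]
    public static async Task<IActionResult> Run(
        // the HttpTrigger attribute defines the criteria (method, route) the incoming event must match
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "orders")] HttpRequest req,
        // the output binding removes the boilerplate of connecting to Service Bus
        [ServiceBus("orders", Connection = "ServiceBusConnection")] IAsyncCollector<string> orderMessages,
        ILogger log)
    {
        var body = await new StreamReader(req.Body).ReadToEndAsync();
        await orderMessages.AddAsync(body);
        return new AcceptedResult();
    }
}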

So, returning to the main point: everything in serverless is the result of an event, so we want to view the methods we create as event handlers, not full-fledged endpoints. While serverless CAN support an API, it lacks many of the core features which are built into API frameworks and therefore should be avoided for all but simple APIs.

Common Misconception #1 – Entity Framework Needs a Data Layer

This is the first post in what I hope to be a long-running series on common misconceptions I come across in my day-to-day work as a developer and architect in the .NET space, though some of the entries will be language agnostic. The goal is to clear up some of the more common problems I find teams get themselves into when building applications.

A little about me: I am a former Microsoft MVP and have been working as a consultant in the .NET space for close to 15 years at this point. One of the common tasks I find myself doing is helping teams develop maintainable and robust systems. This can be from the standpoint of embracing more modern architecture, such as event driven systems or containers, or it can be modernizing the process to support more efficient workflows that enable teams to deliver more consistent and reliable outcomes while balancing the effort with sustainability.

The first misconception is one which I run across A LOT. And that is the way in which I find teams leveraging Entity Framework.

Entity Framework is a Repository

One of the most common data access patterns right now is the Repository pattern – Link. The main benefit is that it enables developers to embrace the Unit of Work technique, which results in simpler, more straightforward code. However, too often I see teams build their repository and simply create data access methods on the classes – effectively creating a variant of the Active Record or Provider pattern with the name Repository.

This is incorrect and diminishes much of the value the Repository pattern is designed to bring; mainly, that operations can work with data in memory as if they were talking to the database and save their changes at the end. Something like this:

The Repository pattern works VERY well with web applications and frameworks like ASP .NET because we can SCOPE the database connection (called a Context in Entity Framework) to the request, allowing our application to maximize the connection pool.

In the above flow, we only talk to the database TWO times despite the many operations; everything else is done in memory and the underlying framework handles the details for us. Too often I see code like this:

public async Task<bool> DoWork(IList<SomeItem> items)
{
    // each iteration issues a separate DELETE round trip to the database
    foreach (var item in items.Where(x => x.Id % 2 == 0))
    {
        await _someRepo.DeleteItem(item.Id);
    }

    return true;
}

This looks fairly benign but it is actually quite bad, as it machine-guns the database with a call for each Id. In a small, low traffic application this won’t be a problem but, in a larger site with high volume, it is likely to cause bottlenecks, record locking, and other problems. How could this be written better?

// variant 1
public async Task<bool> DoWork(IList<SomeItem> items)
{
    // assume _context is our EF Context
    foreach (var item in items.Where(x => x.Id % 2 == 0))
    {
        var it = await _context.Items.FirstOrDefaultAsync(x => x.Id == item.Id);
        _context.Remove(it);
    }

    // a single SaveChanges issues the deletes as one Unit of Work
    await _context.SaveChangesAsync();
    return true;
}

// variant 2
public async Task<bool> DoWork(IList<SomeItem> items)
{
    // assume _context is our EF Context
    var targetIds = items.Where(i => i.Id % 2 == 0).Select(i => i.Id).ToList();

    // let EF generate a single query that fetches all target rows in one shot
    var targetItems = await _context.Items.Where(x => targetIds.Contains(x.Id)).ToListAsync();
    foreach (var item in targetItems)
    {
        _context.Remove(item);
    }

    await _context.SaveChangesAsync();
    return true;
}

In general, reads are less of a problem for locking and throughput than write operations (create, update, delete), so reading the database as in Variant 1 is not going to be a huge problem right away. Variant 2 leans on EF’s SQL generation to create a query which gets our items in one shot.

But the key thing to notice in this example is the direct use of the context. Indeed, what I have been finding is that I don’t create a data layer at all and instead allow Entity Framework to be the data layer itself. This opens up a tremendous number of possibilities, as we can then take a building-block approach to our service layer.

Services facilitate the Operation

The term “service” is horrendously overused in software engineering as it applies to so many things. In my case, I am using it to describe the classes which do the thing. Taking a typical example application, here is how I prefer to organize things:

  • Controller – the controller is the traffic cop, determining whether the provided data meets acceptable criteria such that we can accept the request. There is absolutely no business logic here; HOWEVER, for simple reads we may choose to inject our Context to perform those reads
  • Service – the guts of the application; this contains a variety of services varying in size and type. I try to stick with the Single Responsibility Principle in defining these classes. At a minimum we have a set of facilitators which facilitate a business process (we will cover this next) and other smaller services which are reusable blocks.
  • Data Layer – this is the EF context. Any custom mapping or definitions are written here

The key feature of a facilitator is the call to SaveChanges, as this marks the end of the Unit of Work. By taking this approach we get a transaction for free, since the code can validate the data as it places it into the context instead of waiting for a SQL exception to indicate a problem.

By taking this approach, code is broken into reusable modules which can be reinjected and reused, plus it is VERY testable. This is an example flow I wrote up for a client:

Here the Process Payment Service is the facilitator and calls on the sub-services (shaded in blue). Each of these gets the context injected but, since the context is scoped, each gets the same instance. This means everyone gets to work with what is essentially their own copy of the database during the execution run.
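
To make the shape of a facilitator concrete, here is a hedged sketch – the class, interface, and method names are illustrative, not from the client engagement. Sub-services stage work against the shared scoped context, and the facilitator owns the single SaveChanges call that closes the Unit of Work:

using System.Threading.Tasks;

public class ProcessPaymentService
{
    private readonly AppDbContext _context;                 // scoped EF context, shared by all sub-services
    private readonly IValidatePaymentService _validator;    // sub-service: validation rules
    private readonly IRecordTransactionService _recorder;   // sub-service: stages transaction entities

    public ProcessPaymentService(AppDbContext context,
        IValidatePaymentService validator,
        IRecordTransactionService recorder)
    {
        _context = context;
        _validator = validator;
        _recorder = recorder;
    }

    public async Task<bool> ProcessAsync(PaymentRequest request)
    {
        if (!await _validator.IsValidAsync(request))
            return false;

        // sub-services add/modify entities on the same scoped context; nothing hits the database yet
        await _recorder.RecordAsync(request);

        // the facilitator closes the Unit of Work – one transaction, one round trip
        await _context.SaveChangesAsync();
        return true;
    }
}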

The other benefit of this approach is avoiding what I refer to as service wastelands. These are generic service files in our code (PersonService, TransactionService, PaymentService, etc.) which become dumping grounds for methods – I have seen some of these files with upwards of 100 methods. Teams need to avoid this because the file becomes so long that ensuring uniqueness and efficiency among the methods becomes an untenable task.

Instead, teams should focus on creating purpose driven services which either facilitate a process or contain core business logic that may be reused in the code base. Combined with using Entity Framework as the data layer, code becomes cleaner and more straightforward.

What are Exceptions?

So, am I saying you should have NO data layer ever? No. As with anything, this is not black and white and there are cases for a data layer of sorts. For example, some queries to the database are too complex to put into a LINQ statement and developers will need to resort to SQL. For these cases, you will want a wrapper around the call for both reuse and maintenance.

But do not take that to mean you need a method to ensure you do not rewrite FirstOrDefault in two or more spots. Of course, if you have a particularly complex LINQ query you might choose to hide it. However, keep in mind the MAIN REASON to hide code is to avoid requiring another person to have intimate knowledge of a process in order to carry out an operation. It is NOT, despite popular opinion, to avoid duplication (that is an entirely separate issue I will discuss later).

Indeed, the reason you should be hiding something is because it is complex in nature and error prone in its implementation such that problems could arise later. A simple Id look up does not fall into this category.

Conclusion

The main point I made here is that Entity Framework IS an implementation of the Repository pattern and so placing a repository pattern around it is superfluous. ASP .NET Core contains methods to ensure the context is scoped appropriately and disposed of at the end of a request. Leverage this: use the context directly in your services, lean on the Unit of Work pattern, and treat the Context as your in-memory database. Let Entity Framework take responsibility for updating the database when you are done.
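
For completeness, this is the sort of registration I have in mind – a minimal sketch assuming a context named AppDbContext and a connection string key of my own choosing:

// inside ConfigureServices(IServiceCollection services) in Startup/Program, with IConfiguration available
// (requires Microsoft.EntityFrameworkCore.SqlServer); AddDbContext registers the context with a Scoped
// lifetime by default, so every service resolved during a request shares the same instance
services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(configuration.GetConnectionString("Default")));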