Durable Functions: Part 4 – Analyze and Download

All code for this series can be found here: https://github.com/jfarrell-examples/DurableFunctionExample

We are here now at the final part of our example (Part 1, Part 2, Part 3), which focuses on what happens after we approve our upload as shown in Part 3. That is, we will leverage Cognitive Services to gather data about the image and store it in the Azure Table Storage we have been using. As to why this part is coming so much later: I moved into a house, so I was rather busy 🙂

In the previous blog posts we built up a Durable Function Orchestrator which is initiated by a blob trigger from the file upload. To this point, we have uploaded the file and allowed a separate HTTP Trigger function to “approve” this upload, thereby demonstrating how Durable Functions support workflows that can advance in a variety of different ways. Our next step will use an ActivityTrigger, which marks a function that is ONLY ever executed by an orchestrator, within the context of an orchestration.

Building the Activity Trigger

ActivityTriggers are identified by their trigger parameter as shown in the code sample below (only the function declaration):

[FunctionName("ProcessFile")]
public static async Task<bool> ProcessFile(
[ActivityTrigger] string fileId,
[Blob("files/{fileId}", FileAccess.Read, Connection = "StorageAccountConnectionString")] Stream fileBlob,
[Table("ocrdata", Connection = "TableConnectionString")] CloudTable ocrDataTable,
ILogger log)
{
}

In this declaration we are indicating this function is called as an activity within an orchestration flow. Further, as with our other functions, we reference the related Blob and, new here, the ocrdata cloud table which will hold the data output from the OCR process (Optical Character Recognition, essentially Computer Vision).

To “call” this method we expand our workflow and add the CallActivityAsync call:

[FunctionName("ProcessFileFlow")]
public static async Task RunOrchestrator(
[OrchestrationTrigger] IDurableOrchestrationContext context,
[Table("metadata", Connection = "TableConnectionString")] CloudTable metadataTable,
ILogger log)
{
var input = context.GetInput<ApprovalWorkflowData>();
var uploadApprovedEvent = context.WaitForExternalEvent<bool>("UploadApproved");
await Task.WhenAny(uploadApprovedEvent);
// run through OCR tools
var ocrProcessTask = context.CallActivityAsync<bool>(nameof(ProcessFileFunction.ProcessFile), input.TargetId);
await Task.WhenAny(ocrProcessTask);
}

This approach enables us to fire “parallel” tasks and further leverage our pool of Azure Functions handlers (up to 200 instances at a time). This can be more effective than trying to run parallel work inside a single Azure Function instance, but always consider how best to approach a problem that needs parallelism.
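To make the fan-out idea concrete, here is a minimal sketch of how the orchestrator could fan out over several files and then fan back in. The FileIds property is hypothetical (our workflow data only carries a single TargetId), so treat this as an illustration of the pattern rather than part of the sample:

// fan-out: start one activity per file without awaiting them individually
var processingTasks = new List<Task<bool>>();
foreach (var fileId in input.FileIds)   // FileIds is a hypothetical property for this sketch
{
    processingTasks.Add(context.CallActivityAsync<bool>(nameof(ProcessFileFunction.ProcessFile), fileId));
}

// fan-in: the orchestrator resumes only once every activity has completed
var results = await Task.WhenAll(processingTasks);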

I am not certain the FunctionName attribute is strictly necessary on the activity since, as you can see, we refer to it by its canonical C# name. We also pass in the Target Id for the Azure Table record so a foreign-key style relationship can exist for this data. This is purely stylistic – in many cases it may make more sense for all of the data to live together, which is one of the strengths of document databases like DocumentDb and Mongo.

Finally, we have our orchestrator “wait” for the activity to complete. This activity, as I indicated, can spawn other activities and use its own function space as needed.

Using Cognitive Services

A discussion on how to set up Cognitive Services within Azure is outside the scope of this article; instead, I would invite you to follow Microsoft’s documentation here: https://docs.microsoft.com/en-us/azure/search/search-create-service-portal

Once you have Cognitive Services set up, update your settings so that your keys and URL match your service, and install the necessary NuGet package:

  • Microsoft.Azure.CognitiveServices.Vision.ComputerVision (link)

As a first step, we need to make sure the OcrData table is created and indicate which bits of the computer vision data we want. To do this efficiently I created the following extension methods:

public static List<OcrResult> AsResultList(this ImageAnalysis analysisResult, string fileId)
{
    var returnList = new List<OcrResult>();

    returnList.AddRange(analysisResult.Adult.AsOcrPairs(fileId, OcrType.ComputerVision));
    returnList.AddRange(analysisResult.Color.AsOcrPairs(fileId, OcrType.ComputerVision));
    returnList.AddRange(analysisResult.ImageType.AsOcrPairs(fileId, OcrType.ComputerVision));

    // Captions may be empty - guard against a null caption before expanding it into pairs
    returnList.AddRange(analysisResult.Description.Captions.FirstOrDefault()?.AsOcrPairs(fileId, OcrType.ComputerVision)
        ?? Enumerable.Empty<OcrResult>());

    //returnList.AddRange(analysisResult.Brands.AsOcrPairs(fileId, OcrType.ComputerVision));
    //returnList.AddRange(analysisResult.Faces.AsOcrPairs(fileId, OcrType.ComputerVision));

    return returnList;
}

static IEnumerable<OcrResult> AsOcrPairs(this object obj, string fileId, OcrType ocrType)
{
    foreach (var propertyInfo in obj.GetType().GetProperties())
    {
        // only emit simple (non-enumerable) properties, with string as the one allowed enumerable
        if (typeof(IEnumerable).IsAssignableFrom(propertyInfo.PropertyType) == false || propertyInfo.PropertyType == typeof(string))
        {
            yield return new OcrResult(fileId)
            {
                KeyName = propertyInfo.Name,
                OcrValue = propertyInfo.GetValue(obj).ToString(),
                OcrType = ocrType
            };
        }
    }
}

All this does is let me pick out specific parent objects in the returned analysis and turn them into name/value pairs that I can more easily insert into the Table Storage schema I am aiming for. Once I have all of these OcrPairs, I use a batch insert operation to update the OcrData table.

var computerVisionResults = await ProcessWithComputerVision(fileBlob, fileId);

// save the batch data
var batchOperation = new TableBatchOperation();
computerVisionResults.ForEach(result => batchOperation.Insert(result));
batchOperation.Insert(new OcrResult(fileId) { KeyName = FileLengthKeyName, OcrValue = fileBlob.Length.ToString(), OcrType = OcrType.None });

var executeResult = await ocrDataTable.ExecuteBatchAsync(batchOperation);
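The ProcessWithComputerVision helper itself is not shown above. Below is a minimal sketch of what it might look like, assuming the CognitiveServicesKey and CognitiveServicesEndpoint settings described in Part 2 and the AsResultList extension shown earlier; depending on your package version the visual feature list may be non-nullable, so check the repository for the exact implementation:

static async Task<List<OcrResult>> ProcessWithComputerVision(Stream fileBlob, string fileId)
{
    // credentials and endpoint come from the settings described in Part 2
    var credentials = new ApiKeyServiceClientCredentials(Environment.GetEnvironmentVariable("CognitiveServicesKey"));
    var client = new ComputerVisionClient(credentials)
    {
        Endpoint = Environment.GetEnvironmentVariable("CognitiveServicesEndpoint")
    };

    // request only the visual features our extension method knows how to flatten
    var features = new List<VisualFeatureTypes?>
    {
        VisualFeatureTypes.Adult,
        VisualFeatureTypes.Color,
        VisualFeatureTypes.ImageType,
        VisualFeatureTypes.Description
    };

    var analysis = await client.AnalyzeImageInStreamAsync(fileBlob, features);

    // flatten the analysis into name/value pairs ready for Table Storage
    return analysis.AsResultList(fileId);
}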

Approve and allow the file to be downloaded

Now that the OCR data has been generated, our Task.WhenAny will allow the orchestrator to proceed. The next step is to wait for an external user to indicate their approval for the data to be downloaded – this is nearly a carbon copy of the step which approved the uploaded file for processing; a sketch follows.
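As a rough sketch of what that looks like in the orchestrator (the event name here is my assumption; the repository has the actual flow), we simply add another external event wait after the OCR activity, while a second HTTP trigger, mirroring ApproveFileUpload, sets ApprovedForDownload on the metadata and raises the event via RaiseEventAsync:

// pause again until a human approves the processed file for download
var downloadApprovedEvent = context.WaitForExternalEvent<bool>("DownloadApproved");
await Task.WhenAny(downloadApprovedEvent);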

Once the approval is given, our user can call the DownloadFile function to retrieve the gathered data and get a tokenized URL to use for the raw download (our blob storage is private and we want to control access to blobs). Here is our code for the download action:

[FunctionName("DownloadFile")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "download/{fileId}")] HttpRequest req,
string fileId,
[Table("metadata", "{fileId}", "{fileId}", Connection = "TableConnectionString")] FileMetadata fileMetadata,
[Table("ocrdata", Connection = "TableConnectionString")] CloudTable fileOcrDataTable,
[Blob("files/{fileId}", FileAccess.Read, Connection = "StorageAccountConnectionString")] CloudBlockBlob fileBlob,
ILogger log)
{
if (!fileMetadata.ApprovedForDownload)
{
return new StatusCodeResult(403);
}
var readQuery = new TableQuery<OcrResult>();
TableQuery.GenerateFilterCondition(nameof(OcrResult.PartitionKey), QueryComparisons.Equal, fileId);
var ocrResults = fileOcrDataTable.ExecuteQuery(readQuery).ToList();
return new OkObjectResult(new DownloadResponse
{
Metadata = ocrResults,
FileId = fileId,
DownloadUrl = GenerateSasUrlForFileDownload(fileBlob, fileId)
});
}
static string GenerateSasUrlForFileDownload(CloudBlockBlob blob, string fileId)
{
var policy = new SharedAccessBlobPolicy()
{
SharedAccessExpiryTime = DateTime.Now.AddHours(1),
Permissions = SharedAccessBlobPermissions.Read
};
return blob.Uri + blob.GetSharedAccessSignature(policy);
}

That is quite a bit of code but, in essence, we are simply gathering all data associated with the entry being requested for download and generating a special URL, good for only one hour, for downloading out of our blob storage. Many more restrictions can be placed on this SAS, so it is an ideal way to give external users temporary and tightly controlled access to blobs.
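For example, the overload of GetSharedAccessSignature that accepts a protocol and an IP range could be used to lock the link down further. A hedged sketch follows; the IP range and start-time skew are illustrative values, not part of the sample:

static string GenerateRestrictedSasUrl(CloudBlockBlob blob)
{
    var policy = new SharedAccessBlobPolicy
    {
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),   // tolerate small clock skew
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
        Permissions = SharedAccessBlobPermissions.Read
    };

    // restrict the token to HTTPS and a single caller IP range (illustrative values)
    var token = blob.GetSharedAccessSignature(
        policy,
        null,                                                     // no header overrides
        null,                                                     // no stored access policy
        SharedAccessProtocol.HttpsOnly,
        new IPAddressOrRange("203.0.113.0", "203.0.113.255"));

    return blob.Uri + token;
}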

And that is it: you can call this function through Postman and it will give you all data collected for this file and a link to download the raw file, with a check up front to ensure the file has been approved for download.

Closing

When I started to explore Durable Functions this was precisely what I was after: event based workflow execution with a minimal amount of code needing to be written and managed.

As I said in Part 1 – for me, event driven programming is the way to go in 95% of cloud based backends; the entire platform is quite literally begging us to leverage its internal events and APIs to reduce the amount of code we need to write while still allowing us to deliver on value propositions. True, going to an event driven approach does create new challenges but, I feel that trade-off is well worth it in most cases.

In one of my training classes I explore how we can write “codeless” applications using API Management by effectively using APIM to “proxy” Azure APIs (Key Vault and Storage notably). Sure, there are cases where we need to support additional business logic but, there are also many cases where we write a service to store data to blob storage when we don’t need to – when we can just store it there and use events to filter and process things.

In the end, the cloud gives you a tremendous amount of options for what you can do and how to solve problems. And that really is the most important thing: having options and realizing the different ways you can solve problems and deliver value.

Durable Functions: Part 3 – Approve the Upload

All code for this series can be found here: https://github.com/jfarrell-examples/DurableFunctionExample

In Part 1 of this series, we explained what we were doing and why, including the aspects of Event Driven Design we are hoping to leverage by using Durable Functions (and indeed Azure Functions) for this task.

In Part 2, we built our file uploader that sent our file to blob storage and recorded a dummy entry in Azure Table Storage that will later hold our metadata. We also explained why we chose Azure Table Storage over DocumentDb (the default Cosmos offering).

Here in Part 3, we will start to work with Durable Functions directly by triggering one based on the aforementioned upload operation (event) and allowing its progression to be driven by a human rather than pure backend code. To that end, we will create an endpoint that enables a human to approve a file by its identifier, which advances the file through the workflow represented by the Durable Function.

Defining Our Workflow

Durable Function workflows are divided into two parts: The Orchestrator Client and the Orchestrator itself.

  • The Orchestrator Client is exactly what it sounds like: the client which launches the orchestrator. Its main responsibility is initializing the long running Orchestrator function and generating an instanceId, which can be thought of as a workflow Id
  • The Orchestrator, as you might expect, represents our workflow in code with the stopping points and/or fan outs that will happen as a result of operations. Within this context you can start subworkflows if desired or (as we will show) wait for a custom event to allow advancement

To that end, I have below the code for the OrchestratorClient that I am using as part of this example.

[FunctionName("ApproveFile_Start")]
public static async Task HttpStart(
[BlobTrigger("files/{id}", Connection = "StorageAccountConnectionString")] Stream fileBlob,
string id,
[Table("metadata", "{id}", "{id}", Connection = "TableConnectionString")] FileMetadata metadata,
[Table("metadata", Connection = "TableConnectionString")] CloudTable metadataTable,
[DurableClient] IDurableOrchestrationClient starter,
ILogger log)
{
// Function input comes from the request content.
string instanceId = await starter.StartNewAsync("ProcessFileFlow", new ApprovalWorkflowData { TargetId = id });
metadata.WorkflowId = instanceId;
var replaceOperation = TableOperation.Replace(metadata);
var result = await metadataTable.ExecuteAsync(replaceOperation);
log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
log.LogInformation("Flow started");
}
Approve Start Function (Github Gist)

First, I want to call attention to the definition block for this function. You can see a number of parameters, most of which have an attribute decoration. The one to focus on is the BlobTrigger as it does two things:

  • It ensures this function is called whenever a new object is written to our files container in the storage account defined by our Connection. The use of these types of triggers is essential for achieving the event driven style we are after, and they yield substantial benefit when used with Azure Functions
  • It defines the parameter id via its binding notation {id}. Through this, we can use the value in other parameters which feature binding (such as if we wanted to output a value to a Queue or something similar)

The Table attributes each perform a separate action:

  • The first parameter (FileMetadata) extracts from Azure Table Storage the row with the provided RowKey/PartitionKey combination (refer to Part 2 for how we stored this data). Notice the use of {id} here – this value is defined via the same notation used in the BlobTrigger parameter
  • The second parameter (CloudTable) brings forth a CloudTable reference to our Azure Storage Table. Table does not support an output operation, or at least not a straightforward one. So, I am using this approach to save the entity from the first parameter back to the table, once I update some values

What is most important for this sort of function is the DurableClient reference (you will need this NuGet package). This is what we will use to start the workflow.

Note the call to StartNewAsync in the code sample. This literally starts an orchestrator to represent the workflow and returns an InstanceId, which we save back to our Azure Table Storage entity. Why? We could technically have the user pass the InstanceId received from IDurableOrchestrationClient but, for this application, that would run contrary to the id they were given after file upload. Instead, we choose to have them send us the file id and perform a look up so we can access the appropriate workflow instance; your mileage may vary.

Finally, since this method is pure backend there is no reason to return anything, though you certainly could. In the documentation here Microsoft lays out a number of architectural patterns that make heavy use of the parallelism offered through Durable Functions.

Managing the Workflow

As noted above, when starting the workflow we name the function we want to start. This function is expected to have one argument of type IDurableOrchestrationContext (note Client vs Context) that is decorated with the OrchestrationTrigger attribute. This denotes that the method is triggered by a DurableClient starting a workflow with the given name (the name here is ProcessFileFlow).

The code for this workflow (at least the initial code) is shown below:

[FunctionName("ProcessFileFlow")]
public static async Task RunOrchestrator(
[OrchestrationTrigger] IDurableOrchestrationContext context,
ILogger log)
{
var uploadApprovedEvent = context.WaitForExternalEvent<bool>("UploadApproved");
await Task.WhenAny(uploadApprovedEvent);
log.LogInformation("File Ready");
}
Workflow Function – Part 1

I feel it is necessary to keep this function very simple and have it only contain code that represents steps in the flow, or any necessary logic for branching. Any updates to the related info elements are kept in the functions themselves.

For this portion of our code base, I am indicating to the Orchestration Context that advancement to the next step can only occur when an external event called UploadApproved is received. This is, of course, an area where we could provide a split or even a timeout concept (so we don’t have any number of workflows sitting and waiting for an event that may never come); a sketch of the timeout idea follows.
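A minimal sketch of that timeout idea, using a durable timer raced against the external event (the 24 hour window is arbitrary and the abandon behaviour is my assumption):

var uploadApprovedEvent = context.WaitForExternalEvent<bool>("UploadApproved");

using (var cts = new CancellationTokenSource())
{
    // give the approver 24 hours before abandoning the workflow
    var timeoutTask = context.CreateTimer(context.CurrentUtcDateTime.AddHours(24), cts.Token);
    var winner = await Task.WhenAny(uploadApprovedEvent, timeoutTask);

    if (winner == uploadApprovedEvent)
    {
        cts.Cancel();   // cancel the timer so the orchestration can complete promptly
        log.LogInformation("File Ready");
    }
    else
    {
        log.LogWarning("Approval never arrived - abandoning workflow");
    }
}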

To raise this event, we need to build a separate function (I will use an HttpTrigger) that can do so. Here is the code I chose to use:

[FunctionName("ApproveFileUpload")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "approve/{fileId}")] HttpRequest req,
[Table("metadata", "{fileId}", "{fileId}", Connection = "TableConnectionString")] FileMetadata fileMetadata,
[Table("metadata", Connection = "TableConnectionString")] CloudTable metadataTable,
[DurableClient] IDurableOrchestrationClient client,
ILogger log)
{
var instanceId = fileMetadata.WorkflowId;
fileMetadata.ApprovedForAnalysis = true;
var replaceOperation = TableOperation.Replace(fileMetadata);
await metadataTable.ExecuteAsync(replaceOperation);
await client.RaiseEventAsync(instanceId, "UploadApproved", fileMetadata.ApprovedForAnalysis);
return new AcceptedResult(string.Empty, fileMetadata.RowKey);
}
Upload Approve Http Function

Do observe that, as this is an example, we are omitting a lot of functionality that would pertain to authentication and authorization of the UploadApprove action – as such this code should not be taken literally and is intended only to illustrate the concept we are driving towards.

Once again, we leverage bindings to simplify our code: based on the fileId provided by the caller we can bring in the FileMetadata reference represented in our Azure Table Storage (we also bring in the CloudTable so the aforementioned entry can be updated to denote that the file upload has been approved).

Using the IDurableOrchestrationClient injected into this function we can use the RaiseEventAsync method with the InstanceId extracted from the Azure Table Storage record to raise the UploadApproved event. Once this event is raised, our workflow advances.

Next Steps

Already we see the potential use cases for this approach, as the ability to combine workflow advancement with code based approaches makes our workflows even more dynamic and flexible.

In Part 4, we will close out the entire sample as we add two more approval steps to the workflow (one code driven and the other user driven) and then add a method to download the file.

I hope this was informative and has given you an idea of the potential durable functions hold. Once again, here is the complete code for reference: https://github.com/jfarrell-examples/DurableFunctionExample

Durable Functions: Part 2 – The Upload

I covered the opening to this series in Part 1 (here). Our overall goal is to create a File Approval flow using Azure Durable Functions and showcase how complex workflows can be managed within this Azure offering. This topic lives within the much larger topic of Event Driven design. Under EDD we aim to build “specialized” components which respond to events. By taking this approach we write only the code we need and alleviate ourselves of boilerplate and unrelated code. This creates a greater degree of decoupling which can help us when change inevitably comes. It also allows us to solve specific problems without generating wasteful logic which can hide bugs and create other problems.

In this segment of the series, we will talk through building the upload functionality to allow file ingestion. Our key focus will be the Azure Function binding model that allows boilerplate code to be neatly extracted away from our function. Bindings also underpin the event driven ways we can work with functions, specifically allowing them to be triggered by an Azure event.

Let’s get started. As I move forward I will be assuming that you are using Visual Studio Code with the Azure Function tools to create your functions. This is highly recommended and is covered in detail in Part 1.

Provision Azure Resources

The first thing we will want to do is set up our Azure resources, which includes:

  • Resource Group
  • Azure Storage Account (create a container)
  • Azure Table Storage (this is the Table option within Cosmos)
  • Cognitive Services (we will use this in Part 3)

As a DevOps professional my recommended approach to deploying infrastructure to any cloud environment is to use a scripted approach, ideally Terraform or Pulumi. For this example, we will not go into that since it is not strictly my aim to extol good DevOps practices as part of this series (we won’t be covering CI/CD either).

For this simple demo, I will leave these resources publicly available; thus, we can update local.settings.json with the relevant connection information as we develop locally. local.settings.json is a special config file that, by default, the template for an Azure Function project created by the VSCode Azure Functions extension excludes from source control. Always be diligent and refrain from checking credentials into source control, especially for environments above Development.

Getting started, you will want to have the following values listed in local.settings.json (a sample of the file follows the list):

  • AzureWebJobsStorage – used by Azure Functions runtime, the value here should be the connection string to the storage account you created
  • FUNCTIONS_WORKER_RUNTIME – set to dotnet; just leave this alone
  • StorageAccountConnectionString – this is the account where our uploads are saved to; it can share the same Storage Account that you previously created
  • TableConnectionString – this is the connection string to the Azure Table Storage instance
  • CognitiveServicesKey – the value of the key given when you create an instance of the Azure Cognitive Services resource
  • CognitiveServicesEndpoint – the value of the endpoint to access your instance of the Azure Cognitive services
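For reference, a local.settings.json carrying these values might look roughly like the following (every value is a placeholder for your own connection strings and keys):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage account connection string>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "StorageAccountConnectionString": "<storage account connection string>",
    "TableConnectionString": "<table storage connection string>",
    "CognitiveServicesKey": "<cognitive services key>",
    "CognitiveServicesEndpoint": "<cognitive services endpoint>"
  }
}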

Here is the complete code for the Azure Function which handles this upload:

public static class UploadFile
{
    [FunctionName("UploadFile")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "file/add")] HttpRequest uploadFile,
        [Blob("files", FileAccess.Write, Connection = "StorageAccountConnectionString")] CloudBlobContainer blobContainer,
        [Table("metadata", Connection = "TableConnectionString")] CloudTable metadataTable,
        ILogger log)
    {
        var fileName = Guid.NewGuid().ToString();

        await blobContainer.CreateIfNotExistsAsync();
        var cloudBlockBlob = blobContainer.GetBlockBlobReference(fileName);
        await cloudBlockBlob.UploadFromStreamAsync(uploadFile.Body);

        await metadataTable.CreateIfNotExistsAsync();
        var addOperation = TableOperation.Insert(new FileMetadata
        {
            RowKey = fileName,
            PartitionKey = fileName,
        });
        await metadataTable.ExecuteAsync(addOperation);

        return new CreatedResult(string.Empty, fileName);
    }
}
Figure 1 – File Upload for Azure Function

The code looks complex but it is actually relatively simple given the heavy use of Azure Function bindings. There are three in use:

  • HttpTrigger – most developers will be familiar with this trigger. Through it, Azure will listen for Http requests to a specific endpoint and route and execute this function when such a request is detected
  • Blob – You will need this Nuget package. This creates a CloudBlobContainer initialized with the given values. It makes it incredibly easy to write data to the container.
  • Table – Stored in the same Nuget as the Blob bindings. This, like the blob, opens up a table connection to make it easy to add data, even under high volume scenarios

Bindings are incredibly useful when developing Azure Functions. Most developers are only familiar with HttpTrigger which is used to respond to Http requests but there is a huge assortment and support for events from many popular Azure resources. Using these removes the need to write boilerplate code which can clutter our functions and obscure their purpose.

Blob and Table can be made to represent an item in their respective collections or a collection of items. The documentation (here) indicates which types the method arguments using these attributes can take. Depending on how you use the attribute it can be a reference to the table itself, a segment of data from that table (using the partition key), or an item itself. The Blob attribute has similar options (here).

One thing to keep in mind is that a “bound parameter” must be declared as part of a trigger binding attribute to be used by other non-trigger bindings. Essentially, it is important to understand that bindings are bound BEFORE the function is run, not after. Understanding this is essential to creating concise workflows using bindings.

Understanding Binding through an example

Referring back to our code sample above:


So here I am creating the unique Id (called fileName) in code. If I wanted to, I could specify {id} in the HttpTrigger as part of the route. This would give me access to the value of {id} in other bindings, or as a parameter to the function called id. In this case, it would amount to relying on the user to give me a unique value, which would not work; a sketch of what that would look like follows.
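Purely to illustrate the mechanics (again, not something we actually want here, since it would rely on the caller for uniqueness), the declaration would look roughly like this:

[FunctionName("UploadFileWithId")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "file/add/{id}")] HttpRequest uploadFile,
    string id,   // bound from the {id} segment of the route
    [Blob("files/{id}", FileAccess.Write, Connection = "StorageAccountConnectionString")] Stream outputBlob,
    ILogger log)
{
    // the same {id} value flows into the Blob binding path and this parameter
    await uploadFile.Body.CopyToAsync(outputBlob);
    return new CreatedResult(string.Empty, id);
}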

I hope that explains the concept; I find understanding it makes it easier and more straightforward to decide how to write your code. If not, there will be other examples of this in later sections and I am happy to explain more in the comments.

The Upload Process

Now that we have covered the binding the code should make a lot more sense if it did not before.

Simply put, we:

  1. Call Guid.NewGuid().ToString() to get a string representation of a new Guid. This is our unique Id for this file upload
  2. The binary stream accepted through the Http Post request is saved to a block blob in our Azure Storage Account
  3. Next, the initial record for our entry is created in the Azure Table Storage (Approval flags are both set to false)
  4. We return a 201 Created response as is the standard for Post operations which add new state to systems

Straightforward and easy to understand – and, thanks to the bindings, all of the heavy lifting was done outside the scope of our function, allowing it to clearly express its intent.

Why Azure Table Storage?

Azure Table Storage is an offering that has existed for a long time in Microsoft Azure; it only recently came under the Cosmos umbrella along with other NoSQL providers. The use of Table Storage here is intentional due to its cost effectiveness and speed. But, it does come with some trade-offs:

  • The Cosmos Core (DocumentDb) offering is designed as a massively scalable NoSQL system. For larger systems with more volume, I would opt for this over Table Storage – though you get what you pay for, it’s not cheap
  • DocumentDb is a document-database meaning the schema is never set in stone and can always be changed as new records are added. This is not the case with Table Storage which will set its schema based on the first record written.

When making this decision it is important to consider not just current requirements but near-term requirements as well. I tend to like Table Storage when the schema is not going to have a lot of variance and/or I want a NoSQL solution that is cheap and still effective. Cosmos Core is the other extreme, where I am willing to pay more for high redundancy and greater performance as well as a document database where my schema can differ insert to insert.

Triggering the Workflow

Reading the upload code you may have wondered where or how the workflow is triggered. By now the answer should not surprise you: a binding. Specifically, a BlobTrigger, which can listen for new blobs being added (or removed) and trigger a function when that case is detected. Here is the declaration of the Azure Durable Function which represents the bootstrapping of our workflow.

[FunctionName("ApproveFile_Start")]
public static async Task HttpStart(
[BlobTrigger("files/{id}", Connection = "StorageAccountConnectionString")] Stream fileBlob,
string id,
[Table("metadata", "{id}", "{id}", Connection = "TableConnectionString")] FileMetadata metadata,
[Table("metadata", Connection = "TableConnectionString")] CloudTable metadataTable,
[DurableClient] IDurableOrchestrationClient starter,
ILogger log)
{
//
}
Figure 2 – Workflow start declaration

As you can see here, we are starting to get a bit nuts with our triggers. Here is a brief summary:

  • We use a BlobTrigger to initiate this function and {id} to grab the name of the newly created blob (this will be the Guid which is generated during upload)
  • The Table attribute is used twice: once to reference the actual Table Storage record represented by the newly created blob and again as a reference to the metadata table where the referenced row exists (we need this to write the row back once it is updated)
  • Finally DurableClient (from this Nuget package) which provides the client that allows us to start the orchestrator that will manage the workflow

I will go into much more depth on this in Part 3, but the one point I do want to call out is that the Table attribute is NOT two way. This means, even if you reference the single item (as we did in our example), changes to that single item are NOT saved back to the table – you must do this manually. This is important as it drives the reason we see some rather creative uses of this attribute.

Closing

We explored some code in this portion of the series; though it was not immediately tied to Durable Functions, it was tied to event driven programming. Using these bindings we can create code that alleviates itself from mundane and boilerplate operations and allows other systems to manage this on our behalf.

Doing this gets us close to the event driven model discussed in Part 1 and allows each function to specialize in what it must do. By cutting out excess and unnecessary code we can remove bugs and complexities that make it more difficult to manage our code base.

In Part 3, we are going to dive deeper and really start to explore Durable Functions, showing how they can be initiated and referenced in subsequent calls, including those that advance the workflow, such as via a human operation.

The complete code for this entire series is here: https://github.com/jfarrell-examples/DurableFunctionExample

Durable Functions: Part 1 – The Intro

No Code in this post. Here we establish the starting point

Event Driven Programming is a popular way to approach complex systems with a heavy emphasis on breaking applications apart and into smaller, more fundamental pieces. Done correctly, taking an event driven approach can make coding more fun and concise and allow for “specialization” over “generalization”. In doing so, we get closer to the purity of code that does only what it needs to do and nothing more, which should always be our aim as software developers.

In realizing this for cloud applications I have become convinced that, with few exceptions, serverless technologies should be employed as the glue for complex systems. The more they mature, the greater the flexibility they offer the architect. In truth, not using serverless can, and should, be viewed in most cases as an anti-pattern. I will note that I am referring explicitly to tooling such as AWS Lambda, Google Cloud Functions, and Azure Functions; I am not speaking to “codeless” solutions such as Azure Logic Apps or similar tools in other platforms – the purpose of such tools is mainly to allow less technical persons to build out solutions. Serverless technologies, such as those mentioned, remain in the domain of the Engineer/Developer.

Very often I find that engineers view serverless functions as more of a “one off” technology, good for that basic endpoint that can run on a Consumption plan. As I have shown before, Azure Functions in particular are very mature and, through the use of “bindings”, can enable highly sophisticated scenarios without the need to write excessive amounts of boilerplate code. Further, offerings such as Durable Functions in Azure (Step Functions in AWS) take serverless a step further and actually maintain a semblance of state between calls – thus enabling sophisticated multi-part workflows that feature a wide variety of inputs for workflow progression. I wanted to demonstrate this in this series.

Planning Phase

As with any application, planning is crucial and our File Approver application shall be no different. In fact, with event driven applications planning is especially crucial because while Event Driven systems offer a host of advantages they also require certain questions to be answered. Some common questions:

  • How can I ensure events get delivered to the components of my system?
  • How do I handle a failure in one component but success in another?
  • How can I be alerted if events start failing?
  • How can I ensure events that are sent during downtime are processed? And in the correct order?

Understandably, I hope, these questions are too big to answer as part of this post, but they are questions I hope you, as an architect, are asking your team when you embark on this style of architecture.

For our application, we will adopt a focus on the “golden path”. That is, the path which assumes everything goes correctly. The following diagram shows our workflow:

Our flow is quite simple and straightforward

  • Our user uploads a file to an Azure Function that operates off an HttpTrigger
  • After receiving this file, the binary data is written to Azure Blob Storage and a related entry is made in Azure Table Storage
  • The creation of the blob triggers Durable Function Orchestration which will manage a workflow that aims to gather data about the file contents and ultimately allow users to download it
  • Our Durable workflow contains three steps, two of which will pause our workflow waiting for human actions (done via Http API calls). The other is a “pure function” that is only called as part of this workflow
  • Once all steps are complete the file is marked available for download. When requested the Download File function will return the gathered metadata for the file AND the generated SAS Token allowing persons to download the file for a period of 1hr

Of course, we could accomplish this same goal with a traditional approach but, that would leave us to write a far more sophisticated solution than I ended up with. For reference, here is the complete source code: https://github.com/jfarrell-examples/DurableFunctionExample

Azure Function Bindings

Bindings are a crucial component of efficient Azure Function design; at present I am not aware of a similar concept in AWS but I do not discount its existence. Using bindings we can write FAR LESS code and make our functions easier to understand, with more focus on the actual task instead of the logic for connecting to and reading from various data sources. In addition, the triggers tie very nicely into the whole Event Driven paradigm. You can find a complete list of ALL triggers here:

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob (Note: this is a direct link to the Blob storage triggers; see the left hand side for a complete list.)

Throughout my code sample you will see references to bindings for Durable Functions, Blobs, Azure Table Storage, and Http. Understanding these bindings is, as I said, crucial to your sanity when developing Azure Functions.

Visual Studio Code with Azure Function Tools

I recommend Visual Studio Code when developing any modern application since it’s lighter and the extensions give you a tremendous amount of flexibility. This is not to say you cannot use Visual Studio – the same tools and support exist – I just find Visual Studio Code (with the right extensions) to be the superior product, YMMV.

Once you have Visual Studio Code you will want to install two separate things:

  • Azure Functions extension for VSCode
  • Azure Function Core Tools (here)

I really cannot say enough good things about Azure Functions Core Tools. It has come a long way from version 1.x and the recent versions are superb. In fact, I was able to complete my ENTIRE example without ever deploying to Azure, using breakpoints all along the way.

The extension for Visual Studio Code is also very helpful for both creating and deploying Azure Functions. Unlike traditional .NET Core applications, I do not recommend using the command line to create the project. Instead, open Visual Studio Code and access your Azure Tools. If you have the Functions extension installed, you will see a dedicated blade – expand it.

The first icon (looks like a folder) enables you to create a Function project through Code. I recommend this approach since it gets you started very easily. I have not ruled out the existence of templates that could be downloaded and used through dotnet new, but this works well enough.

Keep in mind that a Function project is 1:1 with a Function app so, you will want to target an existing directory if you plan to have more than one in your solution. Note that this is likely completely different in Visual Studio; I do not have any advice for that approach.

When you go through the creation process you will be asked to create a function. For now, you can create whatever you like; I will be diving into our first function in Part 2. As you create subsequent functions, use the lightning icon next to the folder. Doing this is not required – it is perfectly acceptable to build your functions up by hand – but using this gets the VSCode settings correct to enable debugging with the Core Tools, so I highly recommend it.

The arrow (third icon) is for deploying. Of course, we should never use this outside of testing since we would like a CI/CD process to test and deploy code efficiently – we won’t be covering CI/CD for Azure Functions in this series, but we certainly will in a future series.

Conclusion

Ok so, now we understand a little about what Durable Functions are and how they play a role in Event Driven Programming. I also walked through the tools that are used when developing Azure Functions and how to use them.

Moving forward into Part 2, we will construct our File Upload portion of the pipeline and show how it starts our Durable Function workflow.

Once again the code is available here:
https://github.com/jfarrell-examples/DurableFunctionExample

Connect to Azure SQL with MSI

Quick: what type of password cannot be cracked? The answer is one that is not known to anyone – you cannot reveal what you do not know. This is why so many people use password managers: we create insanely long passwords that we cannot remember, nor do we need to, and use them – their length and complexity make them very difficult to crack. Plus, by making it easy to create these kinds of passwords we avoid the other problem where the same password is used everywhere.

If you were to guess which password you would LEAST like to be compromised, I am willing to bet many of you would indicate the password your web app uses to communicate with its database. And yet, I have seen so many cases where passwords to databases are stored in web.config and other settings files in plain text for any would-be attacker to read and use at their leisure. So I figured tonight I would tackle one of the easiest and most common ways to secure such a password.

Remember RBAC

If you have been following my blog you know that, particularly of late, I have been harping on security through RBAC (Role Based Access Control). In the cloud especially, it is vital that applications only have access to what they need to carry out their role, such is the emphasis of least privileged security.

In Microsoft Azure, as well as other cloud platforms, we can associate the ability to read and update a database with a particular role and grant our web application an identity that is a member of that role. In doing so, we alleviate ourselves from having to manage a password while still ensuring that the application can only access data relevant to its task and purpose.

Microsoft actually has a doc article that details the steps I am going to take quite well. I will be, more or less, running through it and will enumerate any gotchas. Link: https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi

Creating your API

As I often do, I like to start from the default project template for a .NET Core Web API project. This means I have a basic API setup with the WeatherForecast related assets. The first goal will be to set this up as an EF Core driven application that auto creates its database and seeds with some data – effectively we are going to replace the GET call with a database driven select type operation.

To aid with this, and to remove myself from writing out how to build an API I am providing the source code here: https://github.com/jfarrell-examples/DatabaseMSI. From this point I will only call out certain pieces of this code and shall assume, moving forward, you have an API that you can call an endpoint and it will return data from the database.

Create a Database Admin

For the majority of these steps you will want to have the Azure CLI installed and configured for your Azure instance. You can download it here

Access Azure Active Directory from the Portal and create a new user. You can find this option off the main landing page in the left navigation sidebar. Your user does not need to be anything special, though I recommend setting the password yourself.
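If you prefer the CLI over the Portal, something along these lines should create the user (the display name, password, and principal name are placeholders):

az ad user create --display-name SqlAdminUser --password "<strong password>" --user-principal-name sqladmin@<your-tenant>.onmicrosoft.com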

Once the user is created, open a private window or tab and log in to https://portal.azure.com as that user. You do this to validate the account and reset the password; it shows as Expired otherwise. We are going to use this user as your Azure SQL admin (yes, I assume you already created the Azure SQL instance).

The tutorial linked above provides a command line query to search for your newly created Azure AD User and get its corresponding objectId (userId for the uninitiated). I personally prefer just using the following command:

az ad user list --query "[].{Name: displayName, Id: objectId}" -o table

This will format things nicely and require you only to look for the display name you gave the user. You will want to save the objectId to a shell variable or paste it somewhere you can easily copy it.

az sql server ad-admin create --resource-group <your rg> --server-name <db-server-name> --display-name ADMIN --object-id <ad-user-objectId>

This command will install our user as an admin to the target SQL Server. Replace the values above as shown. You can use whatever you like for display-name.

Congrats, you have now linked the AD user to SQL Server and given them admin rights. We won’t connect as this user, but we need this user to carry out certain tasks.

Configure the WebAPI to Support MSI Login

As a note, the link above also details the steps for doing this with ASP .NET; I won’t be showing that, I will be focusing only on ASP .NET Core.

We need to inform whatever is managing our database connection (EF Core, in my case) that we are going to use MSI authentication. As with most MSI related things, this entails getting an access token from the identity authority within Azure.

Open the DbContext and add the following code as part of your constructor:

public class MyContext : DbContext
{
    private readonly IConfiguration configuration;

    public MyContext(DbContextOptions<MyContext> options, IConfiguration configuration) : base(options)
    {
        this.configuration = configuration;

        if (configuration["Env"] == "Cloud")
        {
            // when running in the cloud, replace the connection's credentials with an MSI access token
            var conn = (Microsoft.Data.SqlClient.SqlConnection)Database.GetDbConnection();
            conn.AccessToken = (new Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProvider())
                .GetAccessTokenAsync("https://database.windows.net/").Result;
        }

        Database.EnsureCreated();
    }
}


For this to work you will need to add the Microsoft.Azure.Services.AppAuthentication NuGet package but the rest of this can be pasted in as a good starting point.

I have also added an AppSettings value, Env, which denotes the present environment. In this case, since I am showing an example, I will only have Local and Cloud. In professional projects the set of allowable values is going to be larger. From a purpose standpoint, this allows the code to use a typical connection method (username and password) locally.

Remember, it is essential that, when developing systems that will access cloud resources, we ensure a solid way for developers to interact with those same resources (or a viable alternative) without having to change code or jump through hoops.

The final bit is to prepare our connection string for use in the Cloud with MSI. In .NET Core this change is absurdly simple; the tutorial shows this, but you will want this connection string to be used in your cloud environments:

"Server=tcp:<server-name>.database.windows.net,1433;Database=<database-name>;"
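For completeness, here is a hedged sketch of how the context might be registered in Startup.ConfigureServices; the connection string name (DefaultConnection) is an assumption for illustration, and the same registration works locally and in the cloud since the MSI token swap happens inside the context constructor:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // the connection string comes from configuration; locally it carries a
    // username/password, in the cloud it is the credential-free string above
    services.AddDbContext<MyContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
}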

With this in place, we can now return to the Cloud and finish our setup.

Complete the managed identity setup

The use of MSI is built upon the concept of identity in Azure. There are, fundamentally, two types: user assigned and system assigned. The latter is the most common as it allows Azure to manage the underlying authentication mechanics.

The enablement of this identity for your Azure Resources is easy enough from the portal but, it can also be done via the Azure CLI using the following command (available in the tutorial linked above):

az webapp identity assign --resource-group <rg-name> --name <app-name>

This will return you a JSON object showing the pertinent values for your Managed Identity – copy and paste them somewhere.

When you use a Managed Identity, by default, Azure will name the identity after the resource for which it applies, in my case this was app-weatherforecast. We need to configure the rights within Azure SQL for this identity – to do that, we need to enter the database.

There are a multitude of ways to do this but, I like how the tutorial approaches it using Cloud Shell. sqlcmd is a program you can download locally but, I always prefer NOT to add additional firewall rules to support external access. Cloud Shell allows me to handle these kinds of operations within the safety of the Azure firewall.

sqlcmd -S <server-name>.database.windows.net -d <db-name> -U <aad-user-name> -P "<aad-password>" -G -l 30

This command will get you to a SQL prompt within your SQL Server. I want to point out that the aad-user-name is the domain username assigned to the user you created earlier in this post. You will need to include the “@mytenant.domain” suffix as part of the username. You are logging in as the ADMIN user you created earlier.

When your application logs into the SQL Server it will do so as a user with the name from the identity given (as mentioned above). To support this we need to do a couple of things:

  • We must create a user within our SQL Server database that represents this identity
  • For the created user we must assign the appropriate SQL roles, keeping in mind the principle of least privileged access

From the referenced tutorial, you will want to execute the following SQL block:

CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
GO
ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
GO
ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
GO
ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
GO

Remember, identity-name here is the name of the identity we created earlier, or the name of your Azure resource if using System Assigned identity.

Your case may vary but, deeply consider what roles your application needs. If your application will only access the database for reads you can forgo adding the datawriter and ddladmin roles.

If the database is already set in stone and you won’t need new tables created by an ORM then you likely will not need the ddladmin role either. Always consider, carefully, the rights given to a user. Remember, our seminal aim as developers is to ensure that, in the event of a breach, we limit what the attacker can do – thus if they somehow spoof our MSI in this case, we would want them to be confined to ONLY this database. If we used a global admin, they would then have access to everything.

Congrats. That is it, you now have MSI authentication working for your application.

Closing Thoughts

Frankly, there are MANY ways to secure the credentials for critical systems like databases in applications, from encryption, to process restrictions, to MSI – all have their place and all address the radically important goal of limiting access.

The reason I like MSI over many of these options is two principle reasons:

  1. It integrates perfectly into Azure and takes advantage of existing features. I always prefer to let someone else do something for me if they are better at it, and Microsoft is better at identity management than I am. Further, since we can associate with roles inside Azure, it’s easier to limit access to the database and other systems the corresponding application accesses
  2. It totally removes the need to store and manage a password. As you saw above, we never referenced a password at any point. This is important since an attacker cannot steal what is never made available.

Attackers are going to find our information, they are going to hack our systems, we can do what we can to prevent this but, it will happen. So, the least we can do is make their efforts useless or limit what they can steal. Keep passwords out of source code, use Key Vault, and leverage good automation pipelines to ensure sensitive values are never exposed or “kept in an Excel somewhere”. Ideally, the fewer people that know these passwords the better.

The database is, for many applications, the critical resources and using MSI can go a long way to protecting our data and ensuring proper access and limit blast radius for attacks.

Thanks.

Customizing the Auth flow with Auth0

Using Auth0 for applications is a great way to offload user management and role management to a third party provider, which can aid in limiting the blast radius of a breach. While I will not yet be getting into truly customizing the login experience (UI related), I do want to cover how we can take control of the process to better hide our values and control the overall experience.

You don’t control Authentication

In previous posts I discussed the difference between Authentication and Authorization and I bring it up here as well. For applications, Authentication information is the information we least want in the hands of an attacker – being able to log into a system legitimately can make it very hard to track down what has been lost and what has been compromised. This is why authentication mechanisms leveraging OAuth or OpenId rely on performing the authentication OUTSIDE of your system.

By performing the authentication outside of your system and returning a token, your site never even sees the user credentials – and you cannot expose what you do not have. Google Pay and other contactless payment providers operate on a similar principle: they grab the credit card number and information for the merchant and pass back a token with the status.

Understanding this principle is very important when designing systems of authentication. Authorization information is less important in terms of data loss but, highly important in terms of proper provisioning and management.

For the remainder of this we will be looking mainly at how to initiate authentication in the backend.

Preparation

For this example, I constructed a simple ASP .NET Core MVC Application and created an Index view with a simple Login button – this could have been a link.

@{
    ViewData["Title"] = "Home Page";
}

<div class="text-center">
    @using (Html.BeginForm("Login", "Home", FormMethod.Post))
    {
        <input type="submit" value="Login" />
    }
</div>


The goal here is to have a way for the user to initiate the login process. This post will not cover how to customize this Login Screen (hoping to cover that in the future).

Let’s get started

Looking at the code sample above you can see that submitting the form goes to a controller called Home and an action called Login. This does not actually submit any credentials because, remember, Auth0 operates as a third party and we want our users to log in and be authenticated there rather than on our site. Our site only cares about the tokens that indicate Auth0 verified the user and their access to the specific application.

Here is the skeleton code for the action that will receive this Post request:

[HttpPost]
public IActionResult Login([FromForm]LoginFormViewModel viewModel)
{
    return Ok();
}


This is where the fun begins.

Auth0 Authentication API

OAuth flows are nothing more than a back and forth of specific URLs which first authorize our application request to log in, then authorize the user based on credentials, after which a token is generated and a callback URL is invoked. We want to own this callback URL.

Install the following NuGet package: Auth0.AuthenticationApi – I am using version 7.0.9. Our first step is to construct the authentication URL which will send us to the Auth0 Lock screen that handles authentication.

[HttpPost]
public IActionResult Login([FromForm]LoginFormViewModel viewModel)
{
    var authClient = new AuthenticationApiClient("<your domain>.auth0.com");

    var authUrl = authClient.BuildAuthorizationUrl()
        .WithClient("<your client id>")
        .WithResponseType(AuthorizationResponseType.Code)
        .WithConnection("Username-Password-Authentication")
        .WithRedirectUrl("https://localhost:5001/home/callback")
        .WithScope("openid offline_access")
        .WithState("1234567890")
        .Build()
        .ToString();

    return Redirect(authUrl);
}


So a few things here:

  • The use of the Client Id indicates to Auth0 for which application we want to authenticate against
  • Response Type is code meaning, we are asking for an authorization code that we can use to get other tokens
  • Connection indicates what connection scope our login can use. Connection scopes indicate where user credential information is stored (you can have multiple). In this case I specify Username-Password-Authentication which will disallow social logins
  • The redirect URI indicates what URL the auth code is passed to. In our case, we want it passed back to us so this URL is for another controller/action combination on our backend. Be sure your Application Settings also specify this URL
  • Scope is the access rights the given User will have. By default we want them to be able to access their information
  • State is a random variable, you will often see it as a value called nonce. This is just a random value designed to make the request unique

After we specify these values, we call Build and ToString to get a URL that we can redirect to. This will bring up the Auth0 Lock login screen to allow our user to present their credentials.

Receive the callback

Our next bit is to define the endpoint that will receive the callback from Auth0 when the login is successful. Auth0 will send us a code in the query string that indicates the login was successful.

[HttpGet]
public async Task<IActionResult> Callback([FromQuery]string code)
{
    return Ok();
}


This is not atypical for applications which use this flow – if you have ever looked at the Angular sample, it too provides a route that handles the callback to receive the code. Once we get the code we can ask for a token. Here is the complete code:

[HttpGet]
public async Task<IActionResult> Callback([FromQuery]string code)
{
    var authClient = new AuthenticationApiClient("<domain>.auth0.com");

    var tokenResponse = await authClient.GetTokenAsync(new AuthorizationCodeTokenRequest
    {
        // must match the redirect URL used when building the authorization URL
        RedirectUri = "https://localhost:5001/home/callback",
        Code = code,
        ClientId = "<your client id>",
        ClientSecret = "<your client secret>"
    });

    CustomContext.IdToken = tokenResponse.IdToken;
    CustomContext.AccessToken = tokenResponse.AccessToken;
    CustomContext.RefreshToken = tokenResponse.RefreshToken;

    return Redirect("CodeView");
}

Here we are asking for authorization to the application, and the response comes with the two pieces of information we want – the access token and the Id token. The former is what you pass to other APIs that you want to access (your permissions are embedded in this token) and the Id token represents your user with their information.

To show what these tokens look like (I won't cover refresh tokens here), I have created this simple custom C# class and Razor view:

namespace Auth0LoginCustom
{
    public static class CustomContext
    {
        public static string AuthCode;
        public static string AccessToken;
        public static string RefreshToken;
        public static string IdToken;
    }
}

@{
    ViewData["Title"] = "Code Page";
}

<div class="text-center" style="width: 500px">
    @CustomContext.AuthCode
    <hr />
    @CustomContext.RefreshToken
    <hr />
    @CustomContext.IdToken
    <hr />
    @CustomContext.AccessToken
</div>

Successfully logging in will eventually land you on this Razor page, where you should see values for everything except AuthCode (it is never set in the code snippet). But something might strike you as odd: why is the access_token so short? In fact, if you run it through jwt.io you may find it lacks any real information (a quick way to inspect the claims locally is sketched below).
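
If you would rather not paste tokens into a website, you can dump the claims locally with JwtSecurityTokenHandler from the System.IdentityModel.Tokens.Jwt package; this is just a quick inspection sketch and the helper name is mine.

using System;
using System.IdentityModel.Tokens.Jwt;

public static class TokenInspector
{
    // Decodes (does not validate) a JWT and writes out its claims
    public static void Dump(string jwt)
    {
        var token = new JwtSecurityTokenHandler().ReadJwtToken(jwt);
        foreach (var claim in token.Claims)
        {
            Console.WriteLine($"{claim.Type}: {claim.Value}");
        }
    }
}

// e.g. TokenInspector.Dump(CustomContext.AccessToken);
// Without an audience you will see little beyond iss, sub, aud, iat, exp and scope.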

Let’s explain.

By default Tokens can only talk to Auth0

In previous posts I have discussed accessing APIs using Auth0 access tokens. Core to that is the definition of an audience. I deliberately left this off when we built our authorization URL as part of login. Without it, Auth0 will only grant access to the userinfo API hosted by Auth0. If we also want that token to be good for our other APIs, we need to register them with Auth0 and indicate our UI application can access them.

Before discussing this further, let's update our auth URL building code as such:

[HttpPost]
public IActionResult Login([FromForm] LoginFormViewModel viewModel)
{
    var authClient = new AuthenticationApiClient("<domain>.auth0.com");
    var authUrl = authClient.BuildAuthorizationUrl()
        .WithClient("<client id>")
        .WithResponseType(AuthorizationResponseType.Code)
        .WithConnection("Username-Password-Authentication")
        .WithRedirectUrl("https://localhost:5001/home/callback")
        .WithScope("openid offline_access")
        .WithState("1234567890")
        .WithAudience("<api identifier>")
        .Build()
        .ToString();

    return Redirect(authUrl);
}

The difference here is the WithAudience call, where we specify our audience. Note that we are NOT allowed to specify multiple audiences, so this can be tricky when designing a microservice API. I want to cover this more in depth in a later post.

With this in place you will see two things if you run through the flow again:

  • Your access token is MUCH longer and will contain relevant information for accessing your API
  • The refresh token will be gone (I cannot speak to this yet)

Your callback will now receive the access token, which you can store and use to access your APIs – a minimal example of using it follows. Congrats.
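
As a quick illustration of what "store and use" looks like, here is a minimal sketch of calling a protected API with the token in the Authorization header; the helper name and endpoint URL are placeholders of my own.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ApiCaller
{
    // Calls a protected endpoint, passing the access token as a Bearer token
    public static async Task<string> GetAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

            // placeholder URL – point this at whatever API the audience identifies
            var response = await client.GetAsync("https://localhost:5001/weatherforecast");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}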

Why use this?

So when would an approach like this be practical? I like it for applications that need more security. With traditional SPA approaches you wind up exposing your client ID and potentially other information that, while not what I would call sensitive, is more than you may want to expose.

Using this approach, all of that information remains in the backend, outside of the user's control or ability to snoop.

Conclusion

In this post, I showed how you can implement the OAuth flow yourself using the Auth0 Authentication API. This is not an uncommon use case and can be very beneficial should your application require tighter control over the process.

Controlling Azure Key Vault Access

Security is the name of the game with cloud applications these days. Gone are the days of willy-nilly handling of passwords and secrets, finding your production database password in web.config files, and storing keys on servers in the earnest hope they are never discovered by an attacker. These days, great care must be given not only to controlling access to sensitive values but also to ensuring proper isolation in the event of a breach to limit the blast radius. For this reason, teams often turn to products like Azure Key Vault as a way to keep sensitive data off services and machines, so would-be attackers must go through yet another hoop to get the data they want.

As I work with teams I still find a lot of confusion around this aspect, in particular since teams often find themselves in what I call the Circle of Life – where they are constantly trying to hide their values, but then must secure the way they are hiding values, and so on and so forth. I want to talk about the principle of isolation and impress upon teams that, rather than trying to prevent a hack outright, they should prevent the hacker from getting everything.

Footnote: Security is a huge topic, especially when it comes to cloud products. I won't be going into everything, or even 25% of everything. When it comes to building applications, the best advice I can give is: try to think like an attacker.

Key Vault

For most software development teams, Key Vault is the most common way to achieve the pattern of keeping sensitive values off services and in a secure location. Key Vault fully encrypts everything and has three offerings, each with different use cases:

  • Keys – The Keys feature is used mostly for encryption and decryption. It operates on the principle "you cannot reveal what you do not know". These are managed keys, which means you, as the developer and accessor, never know the actual raw value. This is very useful for asymmetric encryption, such as what is used with JWT, where the secrecy of the private key is paramount
  • Secrets – For most developers, this will be the feature you interact with the most. It lets you store known values and retrieve them via an HTTP call to the Key Vault. The values are versioned and are soft-deleted when removed. This gives you an ideal place to keep passwords and other sensitive values without revealing them in source code (a minimal usage sketch follows this list)
  • Certificates – This feature lets you store certificates that can be used later. I honestly have not used this feature too much
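
As mentioned above, here is a minimal sketch of the Secrets feature using the Azure.Security.KeyVault.Secrets package; the vault and secret names are placeholders, and DefaultAzureCredential is used only to keep the example short (credential selection is covered later in this post).

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class SecretExample
{
    public static async Task RunAsync()
    {
        // placeholder vault; DefaultAzureCredential picks up whatever identity is available
        var client = new SecretClient(new Uri("https://<my vault name>.vault.azure.net/"), new DefaultAzureCredential());

        // store a known value (creates a new version if the secret already exists)
        await client.SetSecretAsync("DatabasePassword", "not-a-real-password");

        // read it back
        var getSecretResponse = await client.GetSecretAsync("DatabasePassword");
        Console.WriteLine($"{getSecretResponse.Value.Name} = {getSecretResponse.Value.Value}");
    }
}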

A word of advice with Secrets: do not store everything here. Not everything is sensitive, nor does it require an extra lookup from a separate machine. Pick and choose carefully what is secret and what is not.

As an extra boon, you can "link" a Key Vault to a Variable Group in the Azure DevOps Library. This lets you use sensitive values in your pipelines. It's a great way to inject certain values into services. But always remember, what you send to a service has to get stored somewhere.

Role Based Access Control (RBAC)

If you have spent any large amount of time around the cloud in the last 4-5 years you have likely heard the term RBAC thrown around. Modern cloud architectures are built around the notion of roles, and we create specific roles for specific use cases. This enables very tight control over what a user/application can do and, more importantly, what they cannot do. Using roles also lessens the possibility of an application breaking when a login is removed or a password change is forced.

In Microsoft Azure, RBAC is mainly used (from the context of application development) with Managed Service Identity (MSI). In Azure we can either create this identity manually or ask Azure to create it for us (you will often see an Identity option on most Azure resources; this tells Azure it should create a "System Assigned Identity"). Alternatively, you can create a "User Managed Identity". Each of these is referred to by Azure as a Service Principal. It's important, though, to understand the difference between the two, especially how they aid code running locally versus code running in the cloud.

User Managed Identity vs System Assigned Identity

Without getting too deep, identities in Azure are held in the Azure Active Directory service, which every tenant receives. User Managed Identities are created using App Registrations; through this the creator can pick information like the name and, most importantly, the secret or certificate that acts as the password.

System Assigned Identities, on the other hand, are found under Enterprise Applications and most of their data is, as you would expect, managed by Azure. This means the secret is unknown to you, which is ideal for a production scenario: the less you know, the less you can expose. In contrast, the App Registration is ideal for development and for cases where we want to specify the identity explicitly.

When to use?

Let’s return to something I said earlier – “we cannot prevent ourselves from being hacked, we can minimize the blast radius of a breach”. This is often referred to as “defense in depth (or layers)”. Simply put, don't give the attacker something that will work everywhere.

  • If they get a login to a development environment, ensure that login does not work in production
  • If your staging database password gets posted on Reddit, make sure it's not the same one you use for production
  • If the attacker compromises a role, ensure that role has access to only what it needs for the environment that it needs
  • Segregate your environments so they don't talk to each other. If an attacker breaches the virtual machine holding the dev version of your website, they should not be able to access the production servers

There are many more, but hopefully you get the idea. Taking this approach, you can understand that exposing sensitive data in an environment like development is acceptable as long as the information does not grant would-be attackers access to critical data.

Let’s take this approach and develop a simple application that uses Key Vault to return a sensitive value.

Create the Identities for Key Vault Access

Assuming you know how to create a Key Vault in Azure, you need to define Access Policies. Key Vault does not support the notion of "granular access", which means that when you define a permission, that permission applies to everything in that category.

If you enable GET access to Secrets in your Key Vault, the authorized party can retrieve ALL secrets from that Key Vault. Consider this when defining your infrastructure; it is not uncommon for teams to segregate data for apps across multiple Key Vaults to control access.

Before you can grant access you need to define the parties requesting access. Create an Azure App Service. Once created, find the Identity option, switch it On and hit Save. After a time Azure will come back with an ObjectId – this Id represents the new Identity created (and managed) by Azure – the display name will be the same as the name of the service (the Azure App Service in this case).

Next, access the Azure Active Directory for your tenant and select App Registrations. Create a new registration with any name you like (this will be the username of the Service Principal/Identity). Once created, select Certificates & Secrets from the left-hand menu. Create a secret and copy the value somewhere (you won't be able to see it again once you leave) – also take note of the Application (client) ID.

Go back to the top level of Azure Active Directory and select Enterprise Applications – you should be able to search for and find your Managed Identity by its name (remember, it is the name of the Azure App Service). Select it and note its Application (client) ID.

Ok, we have our identities, now we just need to use them.

Code the app

I created a simple WebApi with the following command:

dotnet new webapi -o KeyVaultTest

Install the following Nuget packages:

  • Azure.Security.KeyVault.Secrets (v4.0.3)
  • Azure.Identity (v1.1.1)

Here is the code for my controller:

using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

namespace KeyVaultTest.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class SecretController : ControllerBase
    {
        private readonly IConfiguration _configuration;

        public SecretController(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        [HttpGet("{secretName}")]
        public async Task<IActionResult> Get(string secretName)
        {
            // local default: authenticate with the App Registration's client id/secret
            TokenCredential credential = new ClientSecretCredential("<tenant id>", "<application id>", "<application secret>");

            // when running in Azure (Env == Development) use the managed identity instead
            if (_configuration["Env"] == "Development")
                credential = new ManagedIdentityCredential();

            var client = new SecretClient(vaultUri: new Uri("https://<my vault name>.vault.azure.net/"), credential);
            var getSecretResponse = await client.GetSecretAsync(secretName);

            return Ok(new {
                Name = getSecretResponse.Value.Name,
                Value = getSecretResponse.Value.Value,
                Env = _configuration["Env"]
            });
        }
    }
}

Thanks to Azure.Identity there are a number of ways to get credentials for accessing Azure services (see the credential types documented for that library). Here I am using two; which one applies depends on the context.

  • If the environment is Local (my local machine), I use the ClientSecretCredential and expose the values from the App Registration created earlier. As I have said, I do not care if these values leak, as they will only ever get a person into my Dev Key Vault to read the secrets there. I will NOT use this anywhere else, and other environments will not support this type of access
  • If the environment is Development, I am running on Azure and can use ManagedIdentityCredential, where Azure will interrogate the service (or VM) for the assigned identity. This has the advantage of exposing NOTHING in the environment to a would-be attacker. They can only do what the machine can already do, so the damage is confined.

By taking this approach we enable tighter security in the cloud while still allowing developers to easily do their jobs locally.

I tend to favor the environment-injection approach since it still enables the code to be built one time. Using something like preprocessor directives works, but requires the code to be specially built for each environment.
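
As a sketch of that environment-injection idea, the credential choice can be pushed into Startup so the controller never branches; the "Env" key and values simply mirror the sample above and are an assumption, not a prescribed convention.

// In Startup.ConfigureServices – register a TokenCredential based on configuration
// so the same build runs locally and in Azure.
// (uses Azure.Core, Azure.Identity, Azure.Security.KeyVault.Secrets and
//  Microsoft.Extensions.DependencyInjection)
services.AddSingleton<TokenCredential>(sp =>
{
    var config = sp.GetRequiredService<IConfiguration>();
    return config["Env"] == "Development"
        ? (TokenCredential)new ManagedIdentityCredential()
        : new ClientSecretCredential("<tenant id>", "<application id>", "<application secret>");
});

// SecretClient can then be registered once and injected where needed
services.AddSingleton(sp => new SecretClient(
    new Uri("https://<my vault name>.vault.azure.net/"),
    sp.GetRequiredService<TokenCredential>()));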

Conclusion

Security is vitally important for applications. It is important to do what we can to remove sensitive values from code bases and configuration files. We can use tools like Key Vault to ensure these values are kept safe and are not exposed in the event of a breach. We can further limit exposure by using RBAC to give identities access to only what they need. This leaves fewer values in code and makes our applications easier to manage overall.

Amend to previous – Jwt token mapping

When I write entries on this site I do my best to fully research and solve issues I encounter in a meaningful way, in line with maintainability and cleanliness. Occasionally, I am not as successful in this endeavor as I would like – as was the case with how I referenced the User Id from Auth0 in my previous entry: here

In that post, I expressed frustration at the lack of mapping for what I believe are common values that are part of the JWT standard, which meant a certain amount of digging and kludgy code to extract the needed value. I was wrong.

This evening, while researching something else, I stumbled across the proper way to achieve what I was looking for. It turns out, you can get it to work with a simple definition in the Startup.cs class when you configure the JwtBearer – see below:

services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
    options.Authority = Configuration["Auth0:Domain"];
    options.Audience = Configuration["Auth0:Audience"];
    options.SaveToken = true;
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = ClaimTypes.NameIdentifier
    };
});

You can see what I mean in the TokenValidationParameters block, where NameClaimType is set to ClaimTypes.NameIdentifier. By doing this, you can change the controller code to the following:

[HttpGet]
public async Task<IActionResult> Get()
{
    var rng = new Random();
    var data = Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();

    return Ok(new {
        WeatherMan = (await _userService.GetUserAsync(this.User.Identity.Name)).FullName,
        Data = data
    });
}

Now, instead of casting and digging into the claims to perform matching, we can let .NET Core do it for us and simply use the .Name property on the resolved identity.

Neat, eh. This makes the code much cleaner and more straightforward.

Using the Management API with .NET Core 3.1

As part of my continuing dig into Auth0 I wanted to see how I could access user information for a user represented by a JWT Access Token issued by Auth0. For the uninitiated, Auth0 login will yield, to front end applications, 2 tokens: id_token and access_token.

The id_token is provided from Auth0 to represent the user who logged into the registered application. The access token is provided as a means to make calls to supported audiences. We talked a little about this in the previous post (Authorization with Auth0 in .NET Core 3.1). Thus, following login, we would keep the id_token stored as a way to identify our user to the application and the access_token to enable access to allowed APIs.

However, if you run each of these tokens through jwt.io you will see a large difference. Notably, the access_token contains NO user information, with the exception of the user_id as the subject (sub). The reason for this is to keep the payload small and to reduce the attack surface. The truth is, there is often no reason for the API being called to know anything about the user outside of permissions and the User_Id – passing around name, email, and other information is most often going to be superfluous and thus needs to be avoided.

The question is though, if we do need the user’s information, how can we get it? That is what I want to explore in this entry.

How should we view Auth0?

First and foremost, I want to point out that Auth0 is more than just a way to validate and generate tokens. Honestly, it should almost be viewed as a UserService as a Service. It replaces the service that would traditionally handle user related responsibilities. It even has the capability to integrate with existing accounts and data store mechanisms. As I have continued to use it, I have found that using it for login and user management is a speed boost to development and alleviates my teams from having to build the necessary UI that goes with such a system.

How do we get the UserId from our Token in .NET Core?

Honestly, I was disappointed here – I would have hoped .NET Core was smart enough to map certain well-known claims to ClaimsIdentity properties so you could easily use the this.User property that exists on ControllerBase. Maybe I missed something but, given the popularity of JWT and token-style logins, I would think this would be more readily available. This is how I ended up pulling out the UserId:

((ClaimsIdentity)User.Identity).Claims.First(x => x.Type.EndsWith("nameidentifier")).Value;
The Auth0 sub claim gets mapped to the nameidentifier claim (EndsWith is used since it is a namespaced type). I find this rather messy – I would expect common claims to be mapped to properties automatically or with some configuration, but I was not able to find either.

Getting the User Information

Auth0 offers a standard API to all accounts known as the Management API. This is registered to your account with its own ClientId and Secret. Using a tenant-level endpoint, we can obtain an access_token for the Management API. Using this we can get information about users, the tenant, just about anything.

First, you need to contact your tenant level OAuth endpoint and acquire an access_token that can be used to access the Management API. This call is relatively simple and is shown below – I am using the standard HttpClient class from System.Net.Http.

// requires System.Net.Http and Newtonsoft.Json.Linq (for JObject)
using (var client = new HttpClient())
{
    client.BaseAddress = new System.Uri("https://<your tenant>.auth0.com/");

    // client_credentials grant against the tenant's OAuth endpoint
    var response = await client.PostAsync("oauth/token", new FormUrlEncodedContent(
        new Dictionary<string, string>
        {
            { "grant_type", "client_credentials" },
            { "client_id", "<client_id>" },
            { "client_secret", "<client_secret>" },
            { "audience", "https://<your tenant>.auth0.com/api/v2/" }
        }
    ));

    var content = await response.Content.ReadAsStringAsync();
    var jsonResult = JObject.Parse(content);
}

The client_id and client_secret are from the M2M application created for your API (under Applications). Hint: if you do not see this, access the API in question (WeatherApi for me) and select the API Explorer; this will auto-create the M2M application with the client secret and ID that are needed for this exercise.

The end result of this call is a JSON payload that contains, among other things, an access_token – it is this token we can use to access the Management API.

The best way to access the Management API with .NET is to use the supplied NuGet packages from Auth0:

  • Auth0.Core
  • Auth0.ManagementApi

The rest is very easy: we simply new up an instance of ManagementApiClient and call the appropriate method. Here is the complete code sample:

using (var client = new HttpClient())
{
    client.BaseAddress = new System.Uri("https://<your tenant>.auth0.com/");

    // Step 1: exchange the M2M client credentials for a Management API token
    var response = await client.PostAsync("oauth/token", new FormUrlEncodedContent(
        new Dictionary<string, string>
        {
            { "grant_type", "client_credentials" },
            { "client_id", "<client_id>" },
            { "client_secret", "<client_secret>" },
            { "audience", "https://<your tenant>.auth0.com/api/v2/" }
        }
    ));

    var content = await response.Content.ReadAsStringAsync();
    var jsonResult = JObject.Parse(content);
    var mgmtToken = jsonResult["access_token"].Value<string>();

    // Step 2: use that token with the Management API client to look up the user
    using (var mgmtClient = new ManagementApiClient(mgmtToken, new System.Uri("https://<your tenant>.auth0.com/api/v2")))
    {
        return await mgmtClient.Users.GetAsync(userId);
    }
}

Really pretty straightforward – again, my only complaint was getting the UserId (sub) out of the claims in .NET; I would expect it to be easier.

Final Thoughts

Increasingly, I am starting to see Auth0 more as a service for user management than a simple vendor. If you think about it, having user data in a separate service does make things more secure, since if your DB is breached the passwords are safe as they are physically not there. Further, you can take advantage, to a point, of the scale offered by Auth0 and its ability to integrate with services like Salesforce, Azure AD, Facebook, and other user info sources.

I am personally looking forward to using Auth0 extensively for my next project.

Authorization with Auth0 in .NET Core 3.1

Auth0 (https://auth0.com) remains one of the leaders in handling authentication and user management for sites. While it may seem odd to some to offload such a critical aspect of your application to a third party, the truth is, it's not as far-fetched as you think. Consider the popularity of touch-to-pay systems like Samsung Pay, Google Pay, and Apple Pay. Each of these (along with others) uses a one-time token exchange to allow payment. This enables payment information to be collected elsewhere, lessening the impact in the event of a breach.

For this article, I won't dive too deeply into the intricacies of Auth0; it's a very wide platform with a lot of functionality, much of which I am not an expert on. My goal here is to show how I could use a Google and Username/Password login to access a service requiring a token and, FURTHER, how I could surface the permissions defined in Auth0 to facilitate authorization within our API (which uses .NET Core 3.1).

Authorization vs Authentication: Know the Difference

One of the key things that need to be understood when using Auth0 (or any identity provider) is the difference between Authentication and Authorization, in this case how it relates to APIs.

When we talk about Authentication we talk about someone having access to a system in a general sense. Usually, for an API, this is distinguished by the request carrying a token (JWT or otherwise) that is not expired and is valid for the system. If no token is passed, or the one passed is invalid, the API should return a 401 Unauthorized.

When we talk about Authorization we talk about whether someone CAN call a particular API endpoint. This is delineated through claims that are registered with the token. Information in these claims can be roles, permissions, or other data. When the given token does NOT supply the appropriate claims or data to access an endpoint, the API should return a 403 Forbidden.

Now that we understand that, let’s move on.

Establish a check for Authentication

There are multiple ways to go about this, each with varying degrees of appropriateness. In my previous blog entry I did this through Kong (https://jfarrell.net/2020/06/11/kong-jwt-and-auth0/) and I will show it here in .NET Core as well. What is the difference? It comes down to what you are trying to do.

With the Kong JWT plugin you can require authentication across MANY APIs that are fronted by the Kong Gateway. In general, when you have microservices that require a token, it is very common to place them behind such a gateway that enforces the token being present – this way you do not need to configure and maintain that middleware across multiple APIs that may be in different languages and even managed by disparate teams (or external teams).

.NET Core (as with other API frameworks) supports this for the API itself either via a custom connection or an identity provider, such as Auth0.

To enable this feature you need to add two NuGet packages:

  • System.IdentityModel.Tokens.Jwt
  • Microsoft.AspNetCore.Authentication.JwtBearer

The middleware is configured in the Startup.cs file via the AddAuthentication extension method on IServiceCollection (with app.UseAuthentication enabling it in the request pipeline). The following code does this and instructs the service to contact Auth0 to validate the token's authenticity.

services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
    options.Authority = Configuration["Auth0:Domain"];
    options.Audience = Configuration["Auth0:Audience"];
});

The first thing to notice here is the Auth0:Domain value which is the full URL of your Auth0 tenant (mine is farrellsoft). This domain informs the underlying mechanisms where to look for the OAuth endpoints.

The second thing is Auth0:Audience and this is more specific to the OAuth flow. The audience is, in this context, a validation parameter. That is to say, we want to validate that the token being received can access our API (which is the audience). This will map to some unique value that identifies your API. In my case, this is https://weatherforecast.com. I do not own that URL, it is used as a way to identify this API.

When we authenticate to our frontend application, we specify what audience we want. Assuming the frontend application has access to that audience, we will receive a token that we can pass with API calls.

This is another reason to look into a managed identity provider even beyond security. In high volume systems, the User API is constantly under load and can be a bottleneck if not thought through properly. You can imagine an architecture with MANY microservices, each needing to validate the token and get user information relevant to their task. Netflix has entire talks about this principle – with particular emphasis on the contribution of such services to cascade failures.

The final step to enforce the presence of this token is the Authorize attribute on either the controller or a specific endpoint method. Once you add this, not passing a valid JWT will cause the middleware to return a 401 Unauthorized, as desired.

[ApiController]
[Authorize]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    private readonly ILogger<WeatherForecastController> _logger;

    public WeatherForecastController(ILogger<WeatherForecastController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<WeatherForecast> Get()
    {
        var rng = new Random();
        return Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        })
        .ToArray();
    }
}

(Note: I am using the default controller implementation that comes out of the box with a new WebApi in .NET Core 3.1)

If you now contact https://localhost:5001/weatherforecast in Postman (assuming you have SSL verification turned off in its settings) you will get a 401, provided you do not pass a token.

To generate a valid token, the easiest way is to create an SPA application in your Auth0 tenant and deploy a Quickstart (I recommend the Angular variant). Remember to have the Web Console -> Network tab open as you log in to this sample application – you can extract your token from the token endpoint call.

An additional bit of information: the site jwt.io, given a token, can show you all of the information contained within. Do not worry, nothing sensitive is exposed by default; just ALWAYS be mindful of what claims and properties you add to the token, since they can be viewed there.

Establish a check for Authorization

While the authentication piece is commonly tied to a backend validation mechanism, authorization commonly is not, at least with JWTs. The reason is that we do not want to incur additional round trips if we can safely store that data in the token and read it when the token is decoded.

This is an important aspect of the process because Authorization is ALWAYS the responsibility of the specific backend. There are many ways to accomplish it, but here we are going to use the .NET Core authorization requirements framework. This allows us to inspect the validated token and indicate whether certain requirements have been fulfilled. From those requirements we compose a policy for the identified user, and based on that policy the endpoint either can or cannot be invoked.

For this to work with Auth0 we need to ensure we create permissions and roles in the portal, enable RBAC and indicate we want assigned permissions for a specific audience (API) to be returned in the token.

First, we will need to create permissions:
(Screenshot: the Permissions tab for the API in the Auth0 dashboard)

Here I have created two permissions for my WeatherApi: read:weather and write:weather.

Next, we need to assign these permissions to users. Note, this can also be done by assigning permissions to specific roles and assigning that role to that user. Roles are groups of permissions. This is all done in the Users and Roles section.
(Screenshot: assigning the permissions to a user in the Users and Roles section)

Here you can see the permissions we added can be assigned to this user.

Next, we need to enable RBAC and toggle on the option for Auth0 to include the assigned permissions in the token. This is done from within the API configuration under RBAC (Role Based Access Control) Settings.
(Screenshot: the RBAC settings toggles in the API configuration)

Finally, we have to modify the way we get our token so it is specific to our API (https://weatherforecast.com). That is the trick, you see: the token is SPECIFIC to a particular API – this is what ensures that permissions with the same name do not end up in the same token.

Now, on this you may wonder, well how do I handle it with multiple services that have different names and permissions? To that I will say, there are ways but they get at the heart of what makes designing effective microservice architectures so challenging and are not appropriate to delve into in this entry.

Assuming you are using the Angular Quickstart you only need to ensure that, when the Login request is made, the audience is provided. As of the Quickstart I downloaded today (07/04/2020):

  • Open the file at <root>/src/app/auth/auth.service.ts
  • Insert a new line after Line 18 with the contents: audience: <your API identifier>

Refresh your session either by revoking the user’s access in the Auth0 portal (under Authorized Applications) or simply by logging out. Log back in and recopy the token. Head over to jwt.io and run the token through – if everything is good, you will now see a permissions block in the decoded response.

You can now use this token in Postman (or whatever tool you prefer) to access the API and implement authorization.

Building Authorization in .NET Core 3.1 WebApi

Now that the access token is coming into our backend we can analyze it for the qualities needed to “check” an authorization. In .NET Core this is handled via implementations of IAuthorizationRequirement and AuthorizationHandler<T> which work together to check the token for properties and validate fulfillment of policies.

We will start with implementing IAuthorizationRequirement – this class represents a desired requirement that we want to fulfill. In general it contains bits of information that the related handler will use to determine whether the requirement is fulfilled. Here is a sample of this Handler and Requirement working together:

public class IsWeathermanAuthorizationHandler : AuthorizationHandler<IsWeathermanAuthorizationRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsWeathermanAuthorizationRequirement requirement)
    {
        var permission = context.User?.Claims?.FirstOrDefault(x => x.Type == "permissions" && x.Value == requirement.ValidPermission);
        if (permission != null)
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}

public class IsWeathermanAuthorizationRequirement : IAuthorizationRequirement
{
    public string ValidPermission = "read:weather";
}

Here, the token is passed to our IsWeathermanAuthorizationHandler, which looks for the permission indicated by the requirement. If it finds it, it marks the requirement as fulfilled. You can see the potential here for more sophisticated logic aimed at validating a requirement; one such variation is sketched below.
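
For example (and this is my own variation, not part of the original sample), the requirement could carry the permission it checks so a single handler serves any permission-based policy. The HasPermission names are hypothetical, and the code assumes the same namespaces as the sample above plus System.Linq for Any.

// A requirement that carries the permission it checks for
public class HasPermissionRequirement : IAuthorizationRequirement
{
    public string Permission { get; }

    public HasPermissionRequirement(string permission) => Permission = permission;
}

// One handler can now back any permission-based policy
public class HasPermissionHandler : AuthorizationHandler<HasPermissionRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, HasPermissionRequirement requirement)
    {
        if (context.User?.Claims?.Any(x => x.Type == "permissions" && x.Value == requirement.Permission) == true)
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}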

The final piece is the definition of a policy. A policy is a composition of requirements that must be fulfilled to grant the user that policy (by default ALL must be fulfilled, but overloads enable other behavior). In our example, we have created the IsWeatherman policy as such:

services.AddAuthorization(options =>
{
    options.AddPolicy("IsWeatherman", policy => policy.Requirements.Add(new IsWeathermanAuthorizationRequirement()));
});

services.AddSingleton<IAuthorizationHandler, IsWeathermanAuthorizationHandler>();

Notice the AuthorizationHandler is added via its interface as a singleton. Obviously use the instantiation strategy that makes the most sense. For our IsWeatherman policy to be fulfilled the single requirement (IsWeathermanAuthorizationRequirement) must be marked successful. This then allows the use of this policy via the Authorize attribute:

[ApiController]
[Authorize(Policy = "IsWeatherman")]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{

Pretty neat, eh? Now you can simply decorate the controller class or action method and forgo any authorization logic in the method. If the policy is not fulfilled for the user, a 403 Forbidden is returned.

What about multiple policies? As of right now, based on what I can find, this is NOT supported. But given the flexibility of this implementation, it would not be challenging to create composite policies and go about it that way – a sketch of that idea follows.
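
As a hedged sketch of that idea (the policy name is hypothetical, and it reuses the HasPermissionRequirement from the earlier sketch), a composite policy simply adds more than one requirement, all of which must succeed:

services.AddAuthorization(options =>
{
    // both requirements must be fulfilled for the composite policy to succeed
    options.AddPolicy("IsSeniorWeatherman", policy =>
    {
        policy.Requirements.Add(new HasPermissionRequirement("read:weather"));
        policy.Requirements.Add(new HasPermissionRequirement("write:weather"));
    });
});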

To validate this functionality, our user from earlier should have the permission and, as such, should be able to access your endpoint. To further verify, create a new user in Auth0 with no permissions, sign in, and use the token to access the endpoint; you should get a 403 Forbidden.

Congrats on getting this to work. If you have questions, please leave a comment and I will do my best to answer.