Serverless Microservice: The API

Continuing from our planning in Part 1, the creation of our Upload Azure Function in Part 2, and our backend processing pipeline in Part 3, Part 4 (this post) goes up a level and talks about the API.

This may seem somewhat strange since, technically, when we deployed our Upload function we got a Url, so isn't that our API? Well, yes and no. Let me explain.

The entire purpose behind the Microservice architecture is to spread things out and allow for more parallel development and processing. When things are separated they can scale and follow different release cycles and be managed by different teams. These are all good things, but any organization is still going to want to build an API that makes sense. And when you have a bunch of Azure Functions like this, you can very easily wind up with names or paths you don't want. This is where Azure API Management comes in.

The Azure API Management service lets you lay a layer over any number of microservices and lets users see them as a single API rather than a hodgepodge collection of services with no consistency. You can even combine Azure Functions with traditional services and the end users are none the wiser. On top of that, you get things like rate limiting, token provisioning, and a variety of other features built in natively. Azure API Management is a HUGE topic, so we will be doing a few parts on it.

Let’s set things up

Just to get a clear picture of this, here is a diagram which, at a basic level, shows our setup:

Untitled Diagram

Essentially, you can have any number of services behind the proxy layer, and they can be anything (the proxy is basically using Url rewriting). What the proxy provides is authentication, rate limiting, CORS, and other general API features you are familiar with. For the remainder of this post we will do the following:

  • Create the API Management Service
  • Setup our API endpoints
  • Remove the requirement for a subscription token (basically, allow anonymous access)
  • Access the endpoint via Postman

Creating the API Management (APIM) instance is straightforward, though the provisioning process can take a while; mine took around 20 minutes.

microservice7

Once you begin this process you will, as per usual, be asked to supply details for your new API. The two fields that are important here are Name and Pricing Tier. The former because it ultimately drives the name of the proxy and therefore the URL that users will use to access your service(s). The latter because it drives your SLA. If you are just playing around, I recommend the Developer tier as it is part of the free tier. For kicks, I named my APIM imageService.

Once the provisioning process is completed you can access the APIM instance. Before we talk technical I want to point out two things on the Overview section: Developer Portal and Publisher Portal.

One of the things I like about APIM is what it gives you out of the box. The Developer Portal contains an automatically generated set of documentation that provides multi-language code examples and an interactive API test tool.

The Publisher Portal contains information about the API and includes a way to “request” tokens. Basically, it gives you the sort of site that Twitter, Facebook, and others provide for requesting tokens and learning more about the API. Both portals are fully customizable. There is also a reference to something called Products in the Developer Portal. These are important and we will be talking about them later.

Let’s add an endpoint

So, if you are following this series you should have an endpoint to upload an image. Right now this is coded as a pure Azure Function with its own Url and everything. Adding this to an APIM is fairly straightforward.

We can manage our APIs under the APIs option:

microservice8

For our purposes, we will select Function App. This will be the easiest way to add our endpoint, which is backed by an Azure Function.

Notable fields in the creation dialog:

  • API Url Suffix – this appends a qualifier to the gateway URL, effectively prefixing ALL calls to your API. Very useful if the API also serves web content, as in a traditional MVC setup.
  • Version this API – gives you common versioning options for APIs. Honestly, you should always take this for any serious project because you are going to have to version the API eventually. Better to assume you will and not need to than assume you won't and have to.
  • Products – This is the product association that determines how access is granted. We will discuss Products in more depth later. For now you can select Unlimited.

It is also likely that you will get the following message for your Azure Function during this process:

microservice.warning

In order to understand what endpoints are “there” and thereby automate the mapping, APIM looks for an API Definition, usually a Swagger definition. This is not generated by default, but luckily it's pretty easy to add.

microservice9

As you can see, under Function App Settings there is a section for API Definition. In this section you are able to either copy/paste or define your Swagger definition. There is also a nice preview feature for Azure Functions that will introspectively build your definition for you – use the Generate API definition template button.

As a side note: I have observed that sometimes this doesn't work the first time the button is clicked, but it seems to be fine when you do it a second time.

Once the definition is generated you can head back to APIM and complete the add process.

API Building

So, let's walk through this flow:

microservice10

I am going to enumerate each of these pipeline areas to give you an idea of what is happening.

Frontend

This is the face of your API call: the path that is matched and invokes this “lane”. This is very important to understand because when your path is matched, everything except the host (and the optional API suffix and version identifier) is passed through. Let me use a diagram to better illustrate this:

microservice11

So, in this example look very carefully at the route for the APIM service. Notice that we configured it to use api as the prefix. When it is matched, the APIM layer will forward (rewrite) the request to the Azure Function endpoint you have configured, maintaining the unique portion of the route.

Azure Function endpoints have an interesting property where they MUST have api/ in front of their path (I have yet to find out how to disable this). As you can see, this creates a weird path to the Azure Function. You can easily fix this by adding a rewrite rule to the Inbound stage, but it is something to keep in mind if you have trouble connecting.

So, to summarize, the Frontend is the outward facing endpoint that your APIM exposes. Ultimately it will get rerouted internally to whatever is backing that specific endpoint.

Inbound Processing

In this situation the Frontend has matched the Url and is forwarding the request; this is the point where we can check for certain headers and perform Url rewriting (if needed) for the ultimate destination of our request.

Given this is a processing node, it offers a much wider array of capabilities than the Frontend node. Microsoft does not provide a full-featured editor for this yet, but it does expose the Code View with shortcuts that let you take advantage of the most common features fairly easily.

microservice12
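To make this concrete, here is a rough sketch of what the policy XML behind Code View might look like for the upload operation, assuming we want to prepend the api/ segment the Azure Function host expects (the template value is illustrative):

<policies>
    <inbound>
        <base />
        <!-- hypothetical rewrite so the forwarded Url matches the Azure Function's api/upload path -->
        <rewrite-uri template="/api/upload" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>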

Backend

This layer is pretty easy: it is the node that the request will be routed to. Notice that, in the case of an Azure Function, you point at the Azure Function node and not a specific function. This is where the rewritten route comes in handy, as Azure will use that Url to call into the Azure Function, so the new Url has to match what the Azure Function endpoint supports.

There are some great test tools here, but the most common problem I ran into was, by far, getting the wrong Url out of Inbound processing and getting a 404 from my Azure Function. To correct this, I employed the time-tested approach of validating each component independently so I knew the plumbing was the problem.

You can also drop into Code View here, similar to the Inbound processing section, but take heed: it may not make sense to do certain things here. Also, CORS usually will not be needed here if you are using strictly Azure resources; it only matters at the Frontend node.

Outbound processing

Outbound processing is Inbound processing in reverse. It lets you examine the result of the backend call and perform tasks like adding response headers.

Before you try to query

So, if you are taking most of the defaults here you won't be able to hit the API with Postman; that is because you lack a token. For this post we are going to disable this requirement; later we will discuss how to manage it.

Products

So, the way Microsoft chose to go about this can either make sense to you or be a bit weird; for me it was mostly the latter and some of the former. A “Product” is one or many APIs that a user can “use”. The Developer Portal, if you remember, allows us to drill into these Products, test them, and understand how to use them.

The Publisher Portal, on the other hand, allows us to get tokens (if needed) to allow use of these APIs. These tokens can be used to implement some of APIM's more global features, such as rate limiting and usage tracking.

microservice13

By default, every new APIM instance is given two published Products: Starter and Unlimited. Each of these has its own configuration and can be associated with one or more APIs and access policies. For now, we are going to select Unlimited.

microservice14

There are a number of options here. We can select APIs and see which APIs this Product is associated with. We can click Policies and see the Policy information for this Product. As I have said, Products in APIM is a HUGE topic and not something I want to dive into with this post. For right now, select Settings.

Here, you will want to disable Requires subscription, which will effectively remove the need for a token in the request. Note that this is NOT recommended for a production deployment. I am going this route for simplicity's sake; we will cover Products in more detail in another post.

With that in place you should be able to hit your API with Postman. This won't yet work from a client app (a la Angular) because we have not configured CORS on the Frontend; we will do that next.

 


Serverless Microservice: The Backend

Part 1 – Planning and Getting Started
Part 2 – Upload File

One of the key pieces when utilizing cloud architecture, be it for a microservice or a more traditional monolithic approach, is selecting ready-made components that can alleviate the need for custom code. For serverless this is vitally important, as going serverless inherently implies a heavier reliance on these components.

For our application, we are able to upload images and place them in our blob store. For the next step we would like the following to happen:

  • Upon being added to blob storage, an event is fired which triggers an Azure Function
  • The Azure Function will execute and take the new image and run it through Project Oxford (Microsoft Cognitive Services) and use Computer Vision to gather information about the image
  • Update the existing record in Mongo to have this data

BlobTrigger

As with our HTTP-handling Azure Functions, we rely on Triggers to automatically create the link between our backend cloud components and our Azure Functions. This is one of the huge advantages of serverless: being able to easily listen for events that happen within the cloud infrastructure.

In our case, we want to invoke our Azure Function when a new image is added to our blob storage, so for this we will use the BlobTrigger. Here is an example of it in action:

[FunctionName("ImageAddedTrigger")]
public static async Task Run(
    [BlobTrigger("images/{name}", Connection = "StorageConnectionString")] Stream imageStream,
    string name,
    TraceWriter log)
{
}

As with the HTTP Azure Function, we use the FunctionName attribute to specify the display name our function will use in the Azure portal.

To get this class you will need to add the WindowsAzure.Storage Nuget package, which will give you this trigger.

As with the HTTP Trigger, the “thing” that is invoking the function is passed as the first parameter, which in this case will be the stream of the newly added blob. As a note, there is a restriction on the type the first parameter can be when using the BlobTrigger. Here is a list: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-bindings-storage-blob.md#trigger—usage

Within the BlobTrigger the first parameter is a “route” to the new blob. So, if we dissect this, images is our container within the blob storage and the {name} is the name of the new image within the blob storage. The cool thing is, we can bind this as a parameter in the function, hence the second method parameter called name.

Finally, you will notice the named parameter Connection. This can either be the full connection string to the blob storage that holds your container (images in this case) or a name in your Application Settings which represents the connection string. The latter is preferred as it is more secure and easier to deploy to different environments.

Specifying the Connection

Locally, we can use the local.settings.json file as such:

microservice5

On Azure, as we said, this is something you would want to specify in your Application Settings so it can be environment-specific. The properties under Values are surfaced to the application as settings.
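If the image above is hard to make out, a minimal local.settings.json might look something like this (the values are placeholders; StorageConnectionString matches the name used by the trigger above):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage account connection string>",
    "AzureWebJobsDashboard": "<storage account connection string>",
    "StorageConnectionString": "<storage account connection string>"
  }
}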

Executing Against Cognitive Services

So, with BlobTrigger we get a reference to our new blob as it is added; now we want to do something with it. This next section is pretty standard: it involves including Cognitive Services within our application and calling AnalyzeImageAsync, which will run our image data through the Computer Vision API. For reference, here is the code that I used:

log.Info(string.Format("Target Id: {0}", name));
IVisionServiceClient client = new VisionServiceClient(VisionApiKey, VisionApiUrl);
var features = new VisualFeature[] {
    VisualFeature.Adult,
    VisualFeature.Categories,
    VisualFeature.Color,
    VisualFeature.ImageType,
    VisualFeature.Tags
};

var result = await client.AnalyzeImageAsync(imageStream, features);
log.Info("Analysis Complete");

var image = await DataHelper.GetImage(name);
log.Info(string.Format("Image is null: {0}", image == null));
log.Info(string.Format("Image Id: {0}", image.Id));

if (image != null)
{
    // add return data to our image object<span id="mce_SELREST_start" style="overflow: hidden; line-height: 0;"></span>

    if (!(await DataHelper.UpdateImage(image.Id, image)))
    {
        log.Error(string.Format("Failed to Analyze Image: {0}", name));
    }
    else
    {
        log.Info("Update Complete");
    }
}

I am not going to go into how to get the API key; just use this link: https://azure.microsoft.com/en-us/try/cognitive-services/

To get access to these classes for Computer Vision you will need to add the Nuget package Microsoft.ProjectOxford.Vision.

Limitations of the BlobTrigger

So, BlobTrigger is not a real-time operation. As stated by Microsoft on their GitHub:

NOTE: The WebJobs SDK scans log files to watch for new or changed blobs. This process is not real-time; a function might not get triggered until several minutes or longer after the blob is created. In addition, storage logs are created on a “best efforts” basis; there is no guarantee that all events will be captured. Under some conditions, logs might be missed. If the speed and reliability limitations of blob triggers are not acceptable for your application, the recommended method is to create a queue message when you create the blob, and use the QueueTrigger attribute instead of the BlobTrigger attribute on the function that processes the blob.

https://github.com/Azure/azure-webjobs-sdk/wiki/Blobs#-how-to-trigger-a-function-when-a-blob-is-created-or-updated

What this means is you have to be careful when using BlobTrigger, because if you have a lot of activity you might not get a quick enough response, so the recommendation here is to use QueueTrigger. Queue storage is a fine solution, but I am a huge fan of Service Bus, which also supports queues. So, instead of diving into QueueTrigger I want to talk about ServiceBusTrigger, which I think is a better solution.

Create the ServiceBus Queue

First we need to create the queue we will be listening to. To do that, go back to the portal, click Add, and search for Service Bus.

microservice4

You can take all of the defaults with the create options.

Service Bus is essentially Microsoft's version of SNS and SQS (if you are familiar with AWS); it supports all forms of pub/sub, which is absolutely vital in a microservice architecture so the various services can communicate as state changes occur.

At the top of the screen we can select to Add a Queue. Give the queue a name (any name is fine), just something you will be referencing a bit later.

Once the queue finishes deploying you can access it and select the Shared access policies. Here you can create the policy that permits access to the queue. I generally have a sender and a listener policy. No matter how you do it, you need to make sure you have something that has the rights to read from the queue and write to it.

Once you have created the policy you can select it to get the Connection String; you will need this later so don't navigate away. Ok, let's code.

ServiceBusTrigger

The ServiceBusTrigger is not in the standard SDK Nuget package as the BlobTrigger and HttpTrigger are; for this you will need the Microsoft.Azure.WebJobs.ServiceBus package. Now, as we did with the BlobTrigger, we need to ensure we can specify the connection string for the queue we want the trigger to monitor.

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus#trigger—attributes

We can do this, similar to above, by specifying the connection string in our local.settings.json file; just remember to specify the same value in your Azure Function Application Settings. Here is an example of our backend trigger updated to use ServiceBusTrigger:

microservice6

As you can see, it's roughly the same (apologies for using the image; WordPress failed to parse the code properly). The first parameter to the attribute is the name of the queue we are listening to, which is accessible via the given Connection string.
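For readability, here is a rough sketch of what that function looks like; the queue name and connection setting name are whatever you chose when creating the queue and policy:

[FunctionName("ImageAddedQueueTrigger")]
public static void Run(
    [ServiceBusTrigger("imagequeue", Connection = "ServiceBusConnectionString")] string name,
    TraceWriter log)
{
    // 'name' is the message body written by the upload function: the Mongo ObjectId
    // that doubles as the blob name. The analysis code shown earlier runs from here.
    log.Info(string.Format("Queue message received for: {0}", name));
}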

There is one thing I want to point out before we move on; it has to do with Cognitive Services. I don't know whether it's a bug or not, but when you are dealing with Service Bus your queue entries are simple messages or primitives. In this case, I am writing the name of the new image to the queue, and this trigger will read that name and then download the appropriate blob from Storage.

For whatever reason, this doesn't work as you would expect. Let me show you what I ended up with:

var url = string.Format("https://<blobname>.blob.core.windows.net/images/{0}", name);
var result = await client.AnalyzeImageAsync(url, features);

I actually wanted to read the byte data out of blob storage based on the name; I figured I would be able to put that data into a MemoryStream, pass the stream to AnalyzeImageAsync, and that would be the end of it. Not so; it crashes with no error message when the stream is passed. Since I noticed I can also pass a Url to AnalyzeImageAsync, I just create the direct Url to my blob. Granted, if you want to keep the images private this doesn't work as well. Just something to note if you decide to copy this example.
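If you do need to keep the container private, one possible workaround (a sketch I have not validated in this exact pipeline) is to hand AnalyzeImageAsync a short-lived SAS Url instead of a public one, reusing the client and features from the analysis code above and the WindowsAzure.Storage types from the upload code:

var storageAccount = CloudStorageAccount.Parse(StorageConnectionString);
var container = storageAccount.CreateCloudBlobClient().GetContainerReference("images");
var blob = container.GetBlockBlobReference(name);

// generate a read-only token that expires shortly after the analysis should finish
var sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
});

var result = await client.AnalyzeImageAsync(blob.Uri + sasToken, features);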

The rest of the code here is the same as above where we read back the result and then update the entry inside Mongo.

Writing to the Queue

The final change that has to be made is that when the user uploads an image we want to write a message into the queue, in addition to saving the image to Blob storage. This code is very easy and requires the WindowsAzure.ServiceBus Nuget package.

public static async Task<bool> AddToQueue(string imageName)
{
    var client = QueueClient.CreateFromConnectionString(SenderConnectionString);
    await client.SendAsync(new BrokeredMessage(imageName));

    return true;
}

Pretty straightforward. I am simply sending over the name of the image that was added; remember, the name is the ObjectId that was returned from the Mongo create operation.

QueueTrigger

I didn't cover it here, but there is such a thing as Queue storage, which is effectively a queue using our Storage account. This works similarly to Service Bus but, as I said above, I really view it as a legacy piece and I think Service Bus is the future. Nevertheless, it remains an option when dealing with a scenario where BlobTrigger does not work fast enough.

Conclusion

Ok, so if everything is working you have a system that can take an uploaded image and send it off for processing by Cognitive Services. This is what we call “deferred processing” and it is very common in high-volume systems: systems where there is just not the ability to process things in real time. This model is in widespread use at places like Facebook and Twitter, though in much more complicated forms than our example. It even underpins popular design patterns like CQRS (Command Query Responsibility Segregation).

In short, the reason I like this model and, ultimately, Azure Functions (or the serverless model more specifically) is that it allows us to take advantage of what is already there and not have to write things ourselves. We could write the pieces of this architecture that monitor and process, but why? Microsoft and Amazon have already done so and support a level of scalability that we likely cannot anticipate.

In our next section, we will create the GetAll endpoint and start talking about the API layer using Azure API Management.

Serverless Microservice: Upload File

See Part 1: Getting Started

For the first part of this application we are going to create an Azure Function which allows upload of a file. As this file is uploaded, its binary data will be stored in Azure Blob Storage while other information will be stored in a MongoDB document. The latter will also receive the Cognitive Services analysis result data.

I like to look at each project as a “microservice”, that is, related functions which together fulfill a logical unit of functionality. Later, we will discuss how to connect these together using Azure API Management.

Before we write any code, let us think back to our plan and the overall flow we are going for. We know that the user will, via HTTP, upload a file which we want to save in Azure Storage and create a record for in Mongo.

Preparing Our Components

Logging into the Azure Portal, we will want to, optionally, create a Resource Group to hold everything in; this is my convention as I find it makes finding things easier as well as cleaning up when you need to tear things down.

I won't walk you through how to do these things, but here is a high-level summary of what you will need:

  • A Storage Account with a single container – I called mine “images”
  • A CosmosDB using the Mongo API. Really you could use whatever API you want, but the rest of the examples will be using the MongoDB.Driver Nuget package

For the CosmosDB you don't have to do anything more than create it, as the MongoDB.Driver code will handle creating the database and collection for you.

What you will want to do is copy and hold onto the connection strings for both of these resources:

  • For Storage: Access the newly created Storage Account. In the left navigation bar select Access keys; the Connection String for the Storage Account is located here
  • For Cosmos: Access the newly created Cosmos database. In the left-hand navigation, select Connection String. You will want the Primary Connection String

Let’s write the code

So far I have tried creating Azure Functions with both Visual Studio and Visual Studio Code and for this case I have found Visual Studio to be the better tool, especially with the Azure Functions and Web Jobs Tools extension (download). This will give you access to the Azure Functions project type.

If we use Create New Project in Visual Studio, so long as you have the above extension installed you should be able to make this selection:

microservice2

To help out, the extension provides an interface to select from the most common Azure Function types; by no means is this exhaustive, but it's a great starting point. Here is what each type is:

  • Empty – An empty project allowing you to create things from scratch
  • Http Trigger – An Azure Function that responds to a given path (we will be using this)
  • Queue Trigger – An Azure Function that monitors a Service Bus Queue
  • Timer Trigger – An Azure Function that fires every so often

For this we are going to use an HTTP Trigger since the function we are creating will service HTTP requests. For Access Rights just choose Anonymous; I will not be covering how authentication works with Azure Functions in this series.

Here is what the starting template for HTTP will look like:

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // parse query parameter
        string name = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
            .Value;

        // Get request body
        dynamic data = await req.Content.ReadAsAsync<object>();

        // Set name to query string or body data
        name = name ?? data?.name;

        return name == null
            ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
            : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
    }
}

The notable aspects of this code are:

  • FunctionName – This should parallel the name of the file (it doesn't have to), but this is the name you will see when the function is listed in Azure
  • HttpTrigger – This is an attribute that informs the underlying Azure plumbing to map the first parameter to the incoming HttpRequestMessage. You can provide additional helpers to extract data out of the Url as well as provide for Route matching, the same as in WebAPI.

Let's talk a bit more about HttpTrigger. You will see this sort of pattern throughout Azure Functions. These “bindings” allow us to take the incoming “thing” and cast it to what we need. Depending on the type of binding (HttpTrigger in this case) you can bind it to various things.

Additionally, the HttpTrigger supports verb filtering, so the above would only allow POST and GET calls. The named parameter Route allows you to specify the route that the Azure Function will handle; these are the same concepts from WebAPI. This will also be used with BlobTrigger later on.

Finally, there are no limitations around the return type. Azure only looks for a method called Run; you can return whatever you want. The default is an HttpResponseMessage, but I have used IList<T> and other complex and primitive types.

Here is the code for upload; you will note that I am also using HttpResponseMessage.

[FunctionName("UploadImage")]
public static async Task Run([HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "upload")]HttpRequestMessage req, TraceWriter log)
{
    var provider = new MultipartMemoryStreamProvider();
    await req.Content.ReadAsMultipartAsync(provider);
    var file = provider.Contents.First();
    var fileInfo = file.Headers.ContentDisposition;
    var fileData = await file.ReadAsByteArrayAsync();

    var newImage = new Image()
    {
        FileName = fileInfo.FileName,
        Size = fileData.LongLength,
        Status = ImageStatus.Processing
    };

    var imageName = await DataHelper.CreateImageRecord(newImage);
    if (!(await StorageHelper.SaveToBlobStorage(imageName, fileData)))
        return new HttpResponseMessage(HttpStatusCode.InternalServerError);

    return new HttpResponseMessage(HttpStatusCode.Created)
    {
        Content = new StringContent(imageName)
    };
}

This code is pretty straightforward though, I admit, learning how to read the image data without using the Drawing API was challenging.

Basically, we are reading the contents of the request message as multipart/form-data. We can get some basic information about the image from the processed form data. We then use our helper method to create a record in Mongo, thereby generating the unique ID identifying the document, and we then use this unique ID as the name for the image in blob storage.

Here is the code for creating the Mongo Document:

public static async Task<string> CreateImageRecord(Image image)
{
    var settings = MongoClientSettings.FromUrl(new MongoUrl(MongoConnectionString));
    var mongoClient = new MongoClient(settings);
    var database = mongoClient.GetDatabase("imageProcessor");
    var collection = database.GetCollection<Image>("images");
    await collection.InsertOneAsync(image);

    return image.Id;
}

And the code for adding the image to Azure Storage:

public static async Task<bool> SaveToBlobStorage(string blobName, byte[] data)
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(StorageConnectionString);
    CloudBlobClient client = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference("images");

    var blob = container.GetBlockBlobReference(blobName);
    await blob.UploadFromByteArrayAsync(data, 0, data.Length);

    return true;
}

To complete this code, the following Nuget packages will need to be installed:

  • MongoDB.Driver (latest)
  • Newtonsoft.Json (v9.0.1)
  • WindowsAzure.Storage (latest)

I had to use a different version of Newtonsoft because of differences when adding it to a .NET Standard Class Library.

Finishing Up

So, in this walkthrough we created our first service with our first endpoint. Now we have a way to upload files to our storage. Our next step will be to write the code which, upon adding the file to blob storage, kicks off another Azure Function that runs the image data through the Microsoft Cognitive Services Computer Vision API. We want to collect that data, update our existing Image record with the findings, and then show that to the user. We still have a ways to go, but we made good strides.

Reference: https://drive.google.com/open?id=1lVo_woIZAAiiGyDECKvuKL97SfibwIja – this is the Image.cs class file I created, which supports serialization both to and from Mongo via Bson and to the web using Json.
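In case that link ever goes stale, here is a rough sketch of what such a class might look like; the property names are inferred from the code above, and the Complete status value is an assumption (only Processing appears in the snippets):

// using MongoDB.Bson;
// using MongoDB.Bson.Serialization.Attributes;
// using Newtonsoft.Json;

public enum ImageStatus
{
    Processing,
    Complete    // assumed value for when analysis has finished
}

public class Image
{
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("fileName")]
    public string FileName { get; set; }

    [JsonProperty("size")]
    public long Size { get; set; }

    [BsonRepresentation(BsonType.String)]
    [JsonProperty("status")]
    public ImageStatus Status { get; set; }
}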

 

Serverless Micro Services: Getting Started

Serverless systems are all the rage these days, and why not? They are, in my view, the next evolution of the microservice architecture that has become quite common over the last few years. They offer several benefits: they require no infrastructure deployment and, within most cloud environments, can easily integrate with and respond to the events happening within the cloud system.

The two leaders in this category, as I see it, are Amazon (Lambda) and Microsoft (Azure Functions). I thought it would be useful to walk through creating a simple Image Processing Microservice and discuss the various pieces of the implementation, from planning to completion.

Planning

One of the most critical pieces in software development is planning, and with Cloud this is even more vital as the sheer number of ways to accomplish something can quickly create a sense of paralysis. It is important that, for your chosen Cloud provider, you understand its offerings and have some idea how things can integrate and how difficult that integration may be versus other routes.

This planning is not only for the architecture and flow of the coming application but also a means to facilitate discussion with your team; for example, understanding that serverless may not be the best choice for everything. Serverless functions can take a while to spin up if they are not used often. Further, most serverless providers cap the execution time of a given function. Understanding these and other shortcomings may lead you down the road of a more traditional microservice using Docker and infrastructure, going purely serverless, or a hybrid of the two.

For our Image Processing application, I have concocted a very simple diagram with the pieces I intend to use.

microservice1

In our example, the user will access our API through the API Management feature in Azure. Using this allows us to consolidate our various Azure Function blocks behind a consistent API name and Url. Further, this allows easy definition of rate-limiting rules, as well as other elements that, while important, will not be in our example.

In our application, the user will upload an image which we will store in blob storage, and we will create a record in our MongoDB instance. The addition of the image to blob storage will trigger another Azure Function that is watching for new adds. This function will take the new add and run the image through Azure Cognitive Services, which will tell us more about the image. The additional bits of data are then added to the NoSQL document. Records from Mongo are transmitted to our user as calls to the GetAll function are made.

Closing Thoughts

For this example, I have chosen to use an Azure Function, which means that my method MUST complete in 5 minutes or less or Azure will kill it. Now, the only time that matters is if you are uploading a large file or doing a very intensive operation. For the latter you would not want to tie that to a web operation anyway; you would favor a form of deferred processing, which we actually do here with our call to Cognitive Services.

In a typical application we often want to consider the type of access a service will require, as it plays into the selection of other components. I chose to use a NoSQL database here because, under normal circumstances for this sort of application, I would expect a lot of usage, but I don't necessarily care about consistency as much as availability; this is an essential conversation to have as you plan to build the service.

Finally, I love Azure Functions here because they tie so neatly into the existing Azure services. While it would be trivial to write a polling process or even leverage a Service Bus queue, using Azure Functions that can be configured to respond to blob storage adds means I have less to think about.

Ok, so we have planned our application. Let’s get started by building our Image Upload process in the next section.

Go to Part 2: Create Upload File

 

React, Redux, and Redux Observables

Today I will conclude this series as we dive into the final bit of our example and feature the setup of one of my favorite ways to structure my data access layer.

Part 3: Redux Observables

One of the great things about Redux is how well it organizes your state and how easy it can be to follow state changes. The uni-directional flow works well for this, but where it falls flat is when it comes to asynchronous operations, namely calls to get and manipulate data from a backend server.

This is not a new problem. Sagas, Thunks, and other approaches have been created to solve it. I cannot say that Redux Observables are the best, but I have certainly been finding more and more uses for Reactive approaches in the code I am writing, so I welcome the ability to use RxJS in Redux.

The first step in this process is to update Redux to support this approach. See, Redux expects a plain object to be dispatched as an action, which is fine for synchronous state changes but not realistic for async operations. We need to change the middleware to support other types being returned; for this we need custom middleware, which we get from the redux-observable NPM package.

Most of the setup work for the custom middleware happens in the configureStore method which we created during the Redux portion of this series. Here is a shot of the updated configureStore method:

observable1

It is not at all unusual for this method to gain complexity as your application grows in size and complexity. In this case, we have brought in the applyMiddleware method from redux.

We use this new method to apply the result of createEpicMiddleware which comes from the redux-observable package. The parameter to this call is a listing of our “epics”.
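In rough form, that setup looks something like this, assuming the pre-1.0 redux-observable API this post appears to use (where the root epic is passed directly into createEpicMiddleware) and illustrative file and epic names:

import { createStore, applyMiddleware } from 'redux';
import { createEpicMiddleware, combineEpics } from 'redux-observable';
import rootReducer from '../reducers';
import { fetchItemsEpic, syncItemsEpic } from '../epics';

const rootEpic = combineEpics(fetchItemsEpic, syncItemsEpic);

export default function configureStore() {
    // the epic middleware is what lets Observables flow through the dispatch pipeline
    return createStore(rootReducer, undefined, applyMiddleware(createEpicMiddleware(rootEpic)));
}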

An epic is a new concept that redux-observable introduces. For reference, here is a look at the Redux flow with these Epics included.

observable2

I like to think of epics as “Observable-aware Reducers”, mainly because they sit at the same level and have a similar flow. That being said, I do not look at epics as devices for updating state in most cases; instead I look at them as more specialized aspects of the system. Here is an example of the epic I use to get a list of Todo items from my sample application:

observable3
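Since the screenshot can be hard to read, here is a sketch of roughly what such an epic looks like, using the RxJS 5 syntax current at the time of writing (the action type, action creator, and endpoint are illustrative):

import 'rxjs'; // patches the dot-chained operators (switchMap, map) onto Observable
import { ajax } from 'rxjs/observable/dom/ajax';
import { FETCH_ITEMS, itemsFetched } from '../actions/itemActions';

export const fetchItemsEpic = action$ =>
    action$.ofType(FETCH_ITEMS)
        .switchMap(() =>
            ajax.getJSON('/api/items')              // an Observable of the response body
                .map(items => itemsFetched(items))  // mapped into an action for the reducers
        );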

What is happening here is actually straightforward, however the methods of RxJS can make things a bit hard to understand at first. Essentially, our call above which passed in rootEpic allowed Redux to pass emitted actions into our Epics. You will recall that, in Redux, every action is passed to every reducer, which is why every reducer must have a default (exit) case. Using combineReducers we can mash all of these reducers into one giant one. The call above with rootEpic is doing the same thing for epics.

Unlike Reducers, however, Epics do not need to have an exit case defined. They can safely ignore an action if it does not pertain to them. In this case, we use switchMap to ensure that any pre-existing occurrences of the operation are cancelled to make way for the new one. Full docs: https://www.learnrxjs.io/operators/transformation/switchmap.html

The rule here is that we always return the core object of RxJS: the Observable. Observables are, in many ways, similar to Promises. However, one major difference is that Observables can be thought of as being alive and always listening, whereas Promises exist for their operation alone. This difference enables Observables to very easily carry out advanced scenarios without adding a lot of extra work.

For the above, if fetchItems was called more than once, only one call would ever be in play. This is important because the Observable returned, once the call does complete, sends off an action to a Reducer to add the fetched items into state. As a general rule, on our teams we do not use Epics to carry out changes to state directly; though it is possible, we find that having this separation makes things a bit easier.

To call into an epic, you simply raise the action as you would normally via the dispatcher.

observable4

Here we call loadItems in componentWillMount (essentially when the App loads). This will raise the FETCH action that causes things to happen.

A more advanced scenario

Ok, so now that you have the general idea, let's look at something a bit more complex: forkJoin (https://www.learnrxjs.io/operators/combination/forkjoin.html).

In our example, we allow the user to create new Todo items and update existing ones. When the user is ready they can hit our sync button, which saves all of the changed data to the server. This is an obvious scenario where “we want to do a bunch of discrete things and then, when they are all done, we want to do a finishing action”. This sort of thing was absolutely brutal before Promises.

Since we are using Observables we can do this without Promises, but we will use a similar structure. For us, forkJoin is analogous to Promise.all.

observable5

In this code we do some very basic filtering to find new and existing items which have changed. We want to call two separate endpoints for these two things. Another strategy would have been to send everything up and let the server figure it out; but that is less fun. And this is even easier to do in C#.

The important thing to understand is that our methods createItem and updateItem both return observables (they update the local state to reset dirty tracking flags and, for new items, hydrate the Id field to override the temp Id given).

Here we use mergeMap (https://www.learnrxjs.io/operators/transformation/mergemap.html) to allow the inner Observables to complete and update their state, since that work is separate from the action of indicating the sync is complete. For reference, here is the code for createItem:

observable6
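In sketch form (the endpoint and itemCreated action creator are illustrative; ajax is imported as in the fetch epic above):

export const createItem = item =>
    ajax.post('/api/items', item, { 'Content-Type': 'application/json' })
        .map(response => itemCreated(response.response)); // response.response is the parsed body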

You can see that we use map (https://www.learnrxjs.io/operators/transformation/map.html) here, which is crucial so the observable that is returned can work with forkJoin; we don't want to wait for any internal completion at this level.

So what happens is that when post is called, it returns an Observable which is immediately returned (along with all the others). Internally, when the call does complete it yields our action result; map then wraps this in an observable.

Ok, so this inner observable will be stripped out of the outer one by mergeMap (along with all the others) and will be added to an array of Observables within another one using concat, in addition to two others (syncComplete and snackbarItemUpdate).

So that is crazy complicated. Try to remember that the parameter passed into mergeMap is the array of completed observables (completed in the sense that the web call finished) which contain state changes that need to be applied in addition to actions which hide a busy indicator and show a snackbar.

This is all compressed into a single observable (via concat) and returned to the middleware. The middleware will then dispatch each internal action (which it expects to resolve to a plain object). Each of those will then be checked by other epics and your set of reducers. In our case, the actions will perform state changes before finally signalling to dismiss the busy indicator and show our snackbar.
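Putting that all together, here is a rough sketch of the sync epic as described; every name here is illustrative and the state shape is assumed:

import { Observable } from 'rxjs';
import { SYNC_ITEMS, syncComplete, snackbarItemUpdate } from '../actions/itemActions';
import { createItem, updateItem } from '../data/itemData';

export const syncItemsEpic = (action$, store) =>
    action$.ofType(SYNC_ITEMS)
        .mergeMap(() => {
            const items = store.getState().items;
            const creates = items.filter(i => i.isNew).map(i => createItem(i));
            const updates = items.filter(i => !i.isNew && i.isDirty).map(i => updateItem(i));

            // forkJoin is the Promise.all analogue: wait for every web call to finish,
            // collecting the action each inner observable resolved to
            return Observable.forkJoin(creates.concat(updates))
                .mergeMap(resultActions =>
                    Observable.concat(
                        Observable.of(...resultActions),    // apply each state-changing action
                        Observable.of(syncComplete()),      // dismiss the busy indicator
                        Observable.of(snackbarItemUpdate()) // show the snackbar
                    ));
        });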

I realize that my explanation there was probably very hard to follow; I am also no RxJS expert. However, this approach does enable some very cool scenarios out of the box, and I like it because I believe it offers many advantages over Promises.

Let’s bring it home

So that concludes the series. I am actually giving a presentation based on this material, most recently at Codemash in Sandusky. I really do believe that Observables offer some solid advantages over what I have seen of Thunks and Sagas but, as always, evaluate your needs first and foremost.

Here is the code for the full app used throughout this demo: https://gitlab.com/xximjasonxx/todo-app

Examine the various remote branches to see starting points that you can use to see how well you understand the setup for the various parts.

Cheers.

Codemash

With great humility I accepted the invitation to speak at Codemash for a second year in a row. Last year I spoke on Xamarin.Forms; this year I debuted my new talk based on the experience of a project I have been leading for seven months at West Monroe: a talk on ReactJS, Redux, and Redux-Observables. The talk is a culmination of the lessons learned while using this stack to develop the product for our client.

This Codemash, however, was very different from all other experiences, due mainly to my extended stay at the hotel (I am usually only there for the GA conference) and my fight with severe food poisoning on Wednesday. The latter caused my session to be delayed until 8:30am on the final day of the conference. Thankfully things went well, but throwing up seven times on Wednesday was not at all fun.

But in the end things worked out; I even managed to catch an earlier flight back to Chicago to beat an incoming snowstorm. Throughout this trip I was reminded just how awesome it is to fly with Southwest, as I had to make many changes to my trip and each time it was super easy with no fees. I also discovered that the Kalahari, and probably other similar hotels, is not well set up for people with upset stomachs – it was very difficult to find bland foods on their menu. But their staff was amazing and even had the onsite EMT check me out to make sure I didn't need any additional treatment; I didn't.

As for the talk, I got quite a few people, which, given the reschedule, actually surprised me. It was a good audience with great questions. But I still feel the talk attempts to cover too much despite my best efforts to scale it down; it might well become a two-part talk.

For now, I am resting and enjoying my 35th birthday and heading back to work with no travel on the calendar until March (MVP Summit). Time to find a new apartment in Chicago and start preparing for Ethan’s arrival in July.

React, Redux, and Redux Observables

I think this might be the first time that I have said I was going to create a multi-part series and actually went on to create more than one part. Glad to be getting the new year started well.

Part 2: Redux

State management is hard, in any application, for any reason. Applications today are very complex and have many intricate features that often need to be cross-cutting (that is, affect areas within their scope of responsibility as well as outside it). In JavaScript, this task has been the bane of developers for as long as I can remember. In recent years, smart people have attempted to find a better way to do this. I think they have stumbled onto something with Flux and now Redux.

So, Flux was the first attempt at patterning a meaningful way for applications, particularly SPAs built on React, to tackle this problem. The most notable aspect of the Flux pattern was the “unidirectional flow” of data that emphasized determinism. The concept, simply put, was that if I raise an action, the effect of that action should be deterministic and not based on the current state of the system, i.e. lacking in temporal coupling (http://blog.ploeh.dk/2011/05/24/DesignSmellTemporalCoupling/)

Flux has since fallen out of favor due to the risks of keeping state change business logic in the store itself. Redux has supplanted it because it allows for tighter control and better separation of concerns. That is why, predominantly, we see Redux being used instead of Flux for new applications. YMMV.

Returning to the example at the end of Part 1, we see the use of component state in FormComponent. This is not bad, nor does it represent a code smell. However, if other parts of our application are going to need access to this state, keeping it inside the component will not suffice. This is where Redux comes in, as it allows a global store of state and tight management of that store; a necessary feature as more applications turn towards a sync model rather than a direct save.

Before we dive in, here is the overall flow of a Redux application. We will discuss each piece and how to set things up.

redux

Again, you can see the flow of information is uni-directional. The Container concept is a “connected” React component; we will discuss that in a bit.

The Setup

So, this part can be a bit tricky, and I am going to assume you already have a React application, maybe even the one from Part 1. Your first step, as usual, is to install the appropriate NPM packages:

yarn add redux react-redux

The first thing to understand is the store. The store is a special construct that you will want to be widely available throughout your application; it contains all of your application's data. To facilitate this, the react-redux package provides the Provider element; here is how you use it:
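(The original snippet was lost to formatting; roughly, assuming a configureStore helper like the one described below and an App root component, it looks like this.)

import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import App from './App';
import configureStore from './store/configureStore';

const store = configureStore();

ReactDOM.render(
    <Provider store={store}>
        <App />
    </Provider>,
    document.getElementById('root')
);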

The element works by creating a context-level variable for the store. Without diving too much into what Context is, suffice it to say that our store will be accessible should we need it. The real magic here is what goes on in the configureStore method.

In general, I recommend creating a separate method for this because, depending on the size and scope of your application, store setup can be quite involved, as we will see in Part 3 when we begin to add custom middleware. For now this will seem like overkill, but I do like the separation.

 

redux3

For the store, we are simply giving it a single Reducer which will handle state changes. As a side note, I am using the combineReducers method here from redux. Honestly, if you have only one reducer, using this method is overkill, but it's important to be aware that it exists.
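In code form, that looks something like this (the reducer name and key are illustrative; the undefined second argument is discussed below):

import { createStore, combineReducers } from 'redux';
import todoReducer from '../reducers/todoReducer';

export default function configureStore() {
    // combineReducers is overkill for a single reducer but shows where additional ones plug in
    return createStore(combineReducers({ todo: todoReducer }), undefined);
}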

Reducers are an integral part of Redux because they are charged with replacing state based on events. When an event is raised via an Action, ALL reducers are given the action. By default, if the Reducer does not care about it, it simply returns the unchanged state it was given. If it does care, it replaces its part of state. Here is an example:

redux4

First, note the initialState constant. If you remember, in our configureStore method we passed undefined as the second parameter to createStore; this was the state to be given, initially, to the reducers. I don't personally like providing it there. By passing undefined I can do the above, where the initial state for each reducer is defined in the same file as the reducer.

You see, state = initialState will set state to initialState if undefined is passed for state. In this case, we are stating that the todoReducer only cares about an array called items. So, it is reasonable to expect that, throughout our reducer, the only part of state we will see modified is items; that is generally the smell test for a reducer.

Now, earlier I mentioned that Redux will fire ALL reducers when an action is raised. That is why we do not want to change state, only replace it (note the use of Object.assign above). When a reducer is given an action that it cares about, its changes need to be as minimal as possible. In the above, we are adding an item, so our new state is simply the existing items array plus the new item.

If an action that this reducer did not care about were passed in, it would simply hit the default section of the switch and return the state it was given.
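A sketch of such a reducer, with an illustrative action type name:

import { ADD_ITEM } from '../actions/todoActions';

const initialState = {
    items: []
};

export default function todoReducer(state = initialState, action) {
    switch (action.type) {
        case ADD_ITEM:
            // replace state rather than mutating it
            return Object.assign({}, state, { items: [ ...state.items, action.payload ] });
        default:
            // an action this reducer does not care about: hand back the state untouched
            return state;
    }
}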

Now that you have seen how a reducer plays out, let's talk about actions.

In Redux, actions play the crucial role of informing Redux that the user wishes to change something. For their part, actions are probably the simplest thing in Redux to understand. Here is an example of three actions:

redux5

An action for Redux (and Flux) has only one requirement: it must have a property called type. Additional recommendations include a property payload if more than one piece of data is to be transmitted with the action.

By wrapping these results in functions, the code for dispatching is much cleaner and easier to read. You do not have to have action methods as shown above, but it is the recommended approach.
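For illustration, action creators along these lines (the names are hypothetical):

export const ADD_ITEM = 'ADD_ITEM';
export const UPDATE_ITEM = 'UPDATE_ITEM';
export const REMOVE_ITEM = 'REMOVE_ITEM';

export const addAction = item => ({ type: ADD_ITEM, payload: item });
export const updateAction = item => ({ type: UPDATE_ITEM, payload: item });
export const removeAction = id => ({ type: REMOVE_ITEM, payload: id });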

Ok, so at this point we have gone through most of the core pieces of Redux; now let's fit the pieces together.

Earlier I mentioned the Container concept, or a connected React component. Let’s understand what this means.

When we use the <Provider> tag we are able to pass a reference to our store around in context. A connected component accesses this variable and exposes it. This is done via the connect method from react-redux.

redux6

Notice the usage of LandingComponent in the above code; this export call effectively creates the Landing container. The container wraps the component and provides props which allow access to the store and the Redux dispatcher.

Let’s walk through this code:
connect takes two parameters, both of which are callbacks. mapStateToProps provides us a reference to our state, via the store. Using this variable we can MAP data in state to our component. In the above code, LandingComponent will receive a prop called items which will contain the contents of state.items. Note, however, that if you use combineReducers you will need an additional qualifier after state, since the various slices will be partitioned.

mapDispatchToProps allows us to provide a set of functions as props to our component (LandingComponent in this case) which we can invoke to dispatch actions. In this case, LandingComponent will receive a prop of type func which, when invoked, will dispatch the removeAction.

The dispatch of removeAction will cause a reducer to change the state. Once that change is made, mapStateToProps will be called again and the component will be given new props reflecting the state change. This will trigger a re-render. That render will go through the virtual DOM, which will ensure that all changes are properly and efficiently applied; see Part 1.

What connect() actually returns is another function which takes one parameter: the component to apply the props to, in this case LandingComponent. If we look at LandingComponent we can see that it does not look any different than any other React component, but the props are supplied from the Redux store.
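A sketch of what that container file might contain (names are illustrative):

import { connect } from 'react-redux';
import LandingComponent from '../components/LandingComponent';
import { removeAction } from '../actions/todoActions';

// with the combineReducers key used earlier this is state.todo.items;
// without combineReducers it would simply be state.items
const mapStateToProps = state => ({
    items: state.todo.items
});

const mapDispatchToProps = dispatch => ({
    removeItem: id => dispatch(removeAction(id))
});

export default connect(mapStateToProps, mapDispatchToProps)(LandingComponent);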

redux7

A word of advice on the use of connect: be careful. It can be very easy to misuse and end up with connections everywhere; our teams strive to avoid this and thus only apply connect at the topmost level. Your application's needs may vary; I have yet to find a hard and fast rule for this.

One other piece of advice when it comes to reducers: if you ever find yourself with a “selected*” type property in your state, stop. You are likely doing it wrong. The things being kept in state should be more permanent, not temporary. So if the user can cancel out of an action, use component state to hold the data while it's being edited; only use the store once you want to persist it.

On the topic of persistence, you will notice that Redux does not actually persist anything beyond the lifetime of your session. This is intentional. Redux is about state management, not state persistence. There are multiple ways to store state and Redux certainly makes it easier. In our next part, I intend to look at Redux Observables and how they can be used to make your data layer more flexible and resilient.