Creating an Automated Blazor Deployment in AWS

Blazor is a framework I have written about previously that enables C# developers to build single-page applications (SPAs), similar to Angular and React, using WebAssembly. Principally, this showcases the ability of WebAssembly to open the web up to languages beyond JavaScript without any plugins or extensions; WebAssembly is already widely supported in all major browsers.

In this article, I would like to discuss how we can deploy the output of Blazor to S3 on AWS and host it as a static website; this is a very common pattern for hosting SPAs due to its lower cost and efficient scaling. For this process we will use two AWS services: CodeBuild and CodePipeline.

Creating the Blazor Application

I won't go into this step other than to say you can use the default app if you so desire, though I recommend the standalone template and not the one that features a Web API backend. Our goal here is the deployment of static web assets; an API would not fall into that category.

Steps are here: https://blazor.net/

Once you have the source, push it to GitHub (we will reference this later), or you can use CodeCommit, which is AWS's hosted Git repository service. In my experience, there is not a significant advantage to using CodeCommit over GitHub.

Setting Up CodeBuild

In your AWS Console, look under Developer Tools for CodeBuild

Click Create Build Project – this will launch the wizard

Here are the relevant fields and their associated values:

  • Source
    • Pick GitHub and your repository (you will need to authorize access if you haven’t done so before)
  • Environment – Pick Linux as the OS
    • For the service role, name it something logical; you will need to modify it later
  • Buildspec
    • Select Use a buildspec file – AWS does not offer a drag-and-drop editor for defining its build process. With this option, we will need to create a buildspec.yml file at the root of your application (you can add it now; we will cover the syntax in the next section)
  • Artifacts
    • No Artifacts – this seems odd, but we are going to have CodeBuild do the deploy since CodeDeploy does NOT support deploying an SPA to S3

Click Create Build Project to complete the creation. You can run it if you want to verify it will pull the source, but the build is going to fail as we don't have a valid buildspec.yml file yet. Let's create that next.

Creating the BuildSpec

One of the areas where I knock AWS is its lack of a good visual way to build DevOps pipelines. While it has gotten better, developers are still left to manually define their build processes via YAML or JSON. This is in contrast to Microsoft, which offers a more visual drag-and-drop designer in Azure DevOps.

The first thing to be aware of is the syntax for these files; the Amazon docs are here: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html

For our application, the main step we need to be aware of is the build step. This is a simple test application, so it requires only a standard build operation. Here is my recommendation:

commands:
  - dotnet restore
  - dotnet publish --no-restore -c Release

All this does is leverage the dotnet command-line tool (which is installed on the container hosting our build) to restore all NuGet packages and then publish. This ends up creating folders under /bin/Release/netstandard2.0, which we will work with in the post_build step.
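
For context, these commands live under the build phase in buildspec.yml. A minimal skeleton (assuming the .NET SDK is already present on the build image) looks something like this:

version: 0.2

phases:
  build:
    commands:
      - dotnet restore
      - dotnet publish --no-restore -c Release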

The next part is where I can only shake my head at AWS. Since CodeDeploy does not support S3 deploys like this, we need to invoke the AWS command-line tool to copy the relevant output artifacts into our S3 bucket. Obviously, before we do that, we need to create the S3 bucket that will host the website.

Creating the S3 Bucket to host your site

Amazon allows you to serve static web content from S3 buckets for a fraction of the cost of other services. Best of all, there is no setup, you get automatic scalability, and objects get the same eleven 9s of durability S3 always provides. It is no wonder this has become the de facto way to serve SPAs. Since the output of a Blazor build is static web content, we can (and will) use S3 to host the application.

Under Storage, pick S3 and create a bucket. Take the defaults for permissions; I will give you the bucket policy JSON below that allows objects to be served publicly.

Once the bucket is created, select it and access the Properties tab. Select Static Website Hosting and be sure to fill in the index document with index.html. I am not sure if this is strictly necessary, but I always do it just to be safe. You should also take note of the endpoint, as this is where you will access your website.
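
If you prefer scripting over clicking through the console, the same bucket setup can be done with the AWS CLI. A rough sketch (the bucket name and region are placeholders; buckets outside us-east-1 also need a --create-bucket-configuration LocationConstraint value):

# create the bucket that will host the site
aws s3api create-bucket --bucket your-bucket-name --region us-east-1

# enable static website hosting with index.html as the index (and error) document
aws s3 website s3://your-bucket-name/ --index-document index.html --error-document index.html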

Now select Permissions and then Bucket Policy. You can use the Policy Generator here if you want, but this is the general JSON for a policy that enables public read access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YourBucketName/*"
    }
  ]
}

Note: This is a very simple policy created for this example. In a real production setting, you will want to lock down the policy as much as possible to prevent nefarious access.

When you save this you should see the orange Public tag appear in the Permissions tab.
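
If you saved the JSON above to a local file, you can also apply the policy from the AWS CLI instead of pasting it into the console (the bucket name and file name here are placeholders):

# attach the public-read bucket policy from a local file
aws s3api put-bucket-policy --bucket YourBucketName --policy file://bucket-policy.json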

With this, your bucket is now accessible. Our next step is to get our content in there.

Updating the Buildspec to copy to S3

As I said earlier, the normal tool used for deployments, AWS CodeDeploy, does not, as of this writing, support static web asset deployment to S3, a missed opportunity in my opinion. This being the case, we can leverage the AWS CLI to copy to our bucket. There are a number of ways to organize this; here is how I did it.

I added a post_build step to the build spec which consolidated everything I was going to copy into a single folder:

commands:
  - mv ./WeatherLookup/bin/Release/netstandard2.0/dist ./artifact
  - cp -R ./WeatherLookup/bin/Release/netstandard2.0/publish/wwwroot/css ./artifact/

You don't have to do it this way; I just find it easier and more sensible than targeting the files and folders individually with the S3 copy command.

Next, we need to perform the copy to S3. I chose to use the finally block within the post_build phase to perform this operation:

finally:
  - aws s3 cp ./artifact s3://YourBucketName --recursive

By using --recursive, the CLI copies ONLY the contents of the artifact folder into our bucket. We do not want the folder itself in the bucket, since that would interfere with the paths users request when they access the website.
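
Putting the pieces together, the complete buildspec.yml ends up looking something like this. This is a sketch based on my project layout; the WeatherLookup project name, the output paths, and the bucket name will differ for your application:

version: 0.2

phases:
  build:
    commands:
      - dotnet restore
      - dotnet publish --no-restore -c Release
  post_build:
    commands:
      # consolidate the published static assets into a single artifact folder
      - mv ./WeatherLookup/bin/Release/netstandard2.0/dist ./artifact
      - cp -R ./WeatherLookup/bin/Release/netstandard2.0/publish/wwwroot/css ./artifact/
    finally:
      # copy only the contents of ./artifact into the bucket root
      - aws s3 cp ./artifact s3://YourBucketName --recursive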

If you run your build now it will get farther, but it will break on the final step. The reason is that the role we defined for CodeBuild does NOT have the appropriate permissions to communicate with S3, so we have to update that before things will work.

Updating the Permissions

Within your AWS Console, access IAM under Security, Identity, and Compliance, then go to Roles. Find your role; we will need to attach a policy to it that gives it the ability to perform PutObject against our S3 bucket. There are two ways to do this:

Option 1:
You can attach the AmazonS3FullAccess managed policy, which will grant the role full access to all S3 buckets in your account. I do NOT recommend this for anything outside a simple test case; it is never a good idea to give this sort of access to a role, as it could be abused.

Option 2:
You can create a custom policy that provides the permissions specifically needed by this role. This is what I chose to do and what I recommend others do to get into good habits.

For this demonstration we are going to use Option 2. Select Policies from the IAM left-hand navigation menu, then select Create Policy. For most cases I would recommend the visual editor, as it greatly assists in creating policy documents. Here, I will just give you the JSON I used:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:HeadBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YourBucketName",
        "arn:aws:s3:::YourBucketName/*"
      ]
    }
  ]
}

This policy grants the executor the ability to access our S3 bucket and use the PutObject action. Click Save after reviewing the policy.

Go back into Roles, select the role associated with your CodeBuild project, and attach the policy. When you rerun the build, everything should work. Once the build completes successfully, you can browse to your S3 website endpoint and see your site.
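
Both of those IAM steps (creating the policy and attaching it to the role) can also be scripted with the CLI if you prefer. A rough equivalent, where the policy name, file name, account ID, and role name are all placeholders you would substitute with your own values:

# create the customer managed policy from the JSON file above
aws iam create-policy --policy-name CodeBuildS3Deploy --policy-document file://codebuild-s3-policy.json

# attach it to the role used by the CodeBuild project
aws iam attach-role-policy --role-name YourCodeBuildRole --policy-arn arn:aws:iam::123456789012:policy/CodeBuildS3Deploy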

Automating the Build

Our build works now, but there is a problem: it has to be kicked off manually. Ideally, for any process like this, whether it is around integration or deployment, we want the action to start automatically. This is where AWS CodePipeline comes into play.

CodePipeline wraps a source provider, build agent, and deploy agent so that they operate as an automated pipeline, hence the name. We are going to do the same here.

From Developer Tools select CodePipeline and on the ensuing menu select Create Pipeline.

On the next page, fill in the configuration options; there are two sections of particular note:

  • Service Role – this is the role assumed by the pipeline as it reaches into other services, notably to communicate with the various APIs of the supporting services.
    • Provide a service role here; we will not be modifying it in a later step
  • Artifact Store – Pipelines often deal with artifacts that come out of the various steps. Here we specify where those get stored. S3 is a great location. Keep in mind that for our example we will have no artifacts.

Once you press Next, you are asked to configure the pipeline source. Here you will want to specify our GitHub repository (you will need to connect again, but not reauthorize). This allows CodePipeline to register the GitHub webhook that will tell the pipeline when a push occurs.

Next comes the Build Provider; select the CodeBuild project we created earlier.

Next comes Deploy; here we will press Skip and not define a deploy step. For something that deploys to Lambda, EC2, ECS, or Elastic Beanstalk, you would select your deploy provider here, but as we stated, we cannot use CodeDeploy for an SPA-to-S3 deployment.

The final step is to review and initiate the creation of the pipeline. The process is pretty quick. Once complete, select your pipeline from the list. Amazon has done a great job making the pipeline visualization screen look much more appealing.

By default, CodePipeline will run an initial execution as soon as it is created. If your CodeBuild project worked before, this should complete in a few minutes.

To test the pipeline, make a change to your local repo and push the change to the remote repo. If all is correct, you will see your pipeline begin executing almost immediately. Once it completes, refresh your S3 website URL and your change should be visible (remember to check your browser cache if you don't see it).
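
For example, assuming your remote is named origin and your default branch is master, something as simple as this will trigger the webhook:

# commit a trivial change and push it to kick off the pipeline
git add .
git commit -m "Test CodePipeline trigger"
git push origin master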

Congrats, you have a working Blazor web app deployment in AWS.

Closing Thoughts

S3 is an ideal place to host a static website such as an SPA, where API requests are used to get data at runtime. The biggest win here is cost, which is going to be substantially less than something like EC2 or Elastic Beanstalk.

One consideration I did not cover above is supporting multiple environments with this process; right now we are always pushing the code to the same bucket. Normally we would have different buckets (perhaps across different AWS accounts), each holding the version of the web code for its environment. This is where you might use CodeDeploy to deploy the artifact from CodeBuild to a Lambda function that copies the contents into the bucket serving the content.

My goal here was to take a very simple deployment and determine how Amazon's capabilities compare to those of Azure DevOps. Needless to say, I found Amazon wanting; the DevOps aspect is certainly an area where Microsoft has the advantage. Between not being able to target S3 with its deploy tool (and thus resorting to a copy from the build step) and not offering an easy visual way to define the sequence of build steps, more work is needed here.
