Serverless Proxy Pattern: Part 2

Part 1 available here

In the previous part, I talked through the motivation for building this application and the emphasis on using CloudFormation throughout as a means to enable efficient and consistent setup.

To start the template, we specified our role (AppRole) and talked through some of the decisions and potential improvements with that approach. Next, we set up our S3 buckets and configured the “raw” bucket to send a notification, via SNS, to two (not yet created) Lambda functions which will handle various operations against the uploaded image data.

In this part, we will set up the appropriate permissions for the Lambdas so they can be correctly invoked from SNS. We will also talk through setting up GitOps so that our code and infrastructure can be deployed quickly and automatically, enabling fast development.

Permissions

One thing you learn very quickly working with AWS is the intense focus on security at all levels. This is mainly driven by “policies”, which are attached to things like users and roles and specify what actions the attached entity may take. We talked about this a little in Part 1 and it's worth repeating here – for the sake of getting started, I am using a set of fairly wide-open policies, the goal being to refine these towards the end.

We can see this via the TopicPolicy which enables S3 to publish messages to the SNS Topic. Here it is in YAML format:
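A minimal sketch of what this TopicPolicy looks like – the logical resource name and condition details are my assumptions; only ImageUploadTopic comes from Part 1:

```yaml
# Sketch of a TopicPolicy allowing S3 to publish to the SNS topic.
# Resource names other than ImageUploadTopic are illustrative.
TopicPolicy:
  Type: AWS::SNS::TopicPolicy
  Properties:
    Topics:
      - !Ref ImageUploadTopic
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: s3.amazonaws.com    # S3 is the Principal allowed to publish
          Action: sns:Publish
          Resource: !Ref ImageUploadTopic
```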

Keep in mind that ImageUploadTopic was described at the end of Part 1 and represents the topic we are using to “fan out” bucket notifications.

Here we are declaring a policy and associating it with the ImageUploadTopic resource (via the Resource portion of the policy statement). Effectively, this allows S3 (the Principal) to publish to this topic. We could be more specific here, though I have chosen not to be, about which S3 resources may publish to this topic.

With this in place, our S3 bucket can now publish messages to our SNS topic, but nothing will happen if we test this. Why? Because SNS does NOT have permission to invoke our Lambda functions. Let's set that up next.

Lambda Permissions

Recall from Part 1 this is what our SNS declaration looked like in YAML:
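A sketch of that declaration, reconstructed from the description – the function logical names (CreateThumbnailFunction, AnalyzeImageFunction) are assumptions based on the function names mentioned later in this post:

```yaml
# Sketch of the SNS topic with two Lambda subscriptions (the "fan out").
# Function resource names are illustrative.
ImageUploadTopic:
  Type: AWS::SNS::Topic
  Properties:
    Subscription:
      - Protocol: lambda
        Endpoint: !GetAtt CreateThumbnailFunction.Arn
      - Protocol: lambda
        Endpoint: !GetAtt AnalyzeImageFunction.Arn
```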

Since we have now added our TopicPolicy (above), SNS will attempt to invoke our Lambdas, and will summarily fail. Why? Because, as you might guess at this point, it does not have permission to do so. For that we need to create Lambda permissions for our functions (though you could also create a more general permission if you have many Lambda functions).

Here are the permission declarations in YAML:
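A sketch of these declarations – one AWS::Lambda::Permission per function, with the function resource names assumed to match the topic subscriptions:

```yaml
# Sketch: grant SNS permission to invoke each function.
# Function resource names are illustrative.
CreateThumbnailPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt CreateThumbnailFunction.Arn
    Principal: sns.amazonaws.com
    SourceArn: !Ref ImageUploadTopic   # restrict to our topic

AnalyzeImagePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt AnalyzeImageFunction.Arn
    Principal: sns.amazonaws.com
    SourceArn: !Ref ImageUploadTopic
```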

Here we are specifying that SNS may invoke these functions, which is obviously needed. Next, we will create the actual Lambda functions we will be invoking.

Why are we using SNS?

There may be a question in your mind of why I am choosing to use SNS instead of direct Lambda invocation. The answer is, as I said in Part 1, S3 does NOT support multiple delivery of events, so if you are going to do a sort of “fan out” where the event is received by multiple services, you need to use SNS. Alternatively, you could leverage Step Functions or a single Lambda that calls other Lambdas. For me, this is the most straightforward approach and keeps with the “codeless” theme I am going for as part of my “serverless” architecture.

Infrastructure as Code

CloudFormation stands with other tools as a way to represent infrastructure needs in code. It is a core part of the GitOps movement, which states that all changes to an application should be invoked automatically from source control; source control needs to be the single source of truth for the application.

In general, this is part of a larger movement in the industry around answering “what is an application?”. Traditionally, this was simply code, which would then run on infrastructure provisioned, in some cases, by a totally separate team. As we have moved into the Cloud, though, this has shifted and coalesced to the point where infrastructure and code are intermingled and dependent on each other.

When I speak with teams on these points, I emphasize the notion that the “application” is everything. And since it is everything, your infrastructure definition is just as important as your compiled code. Gone are the days where it is acceptable for your Cloud infra to exist transiently in a console. It is now expected that your application's infrastructure is represented in code and versioned along with the rest of your application. That is why we are using CloudFormation here (we could also use Terraform or similar tools).

GitOps

GitOps simply states that everything about our application exists together and that commits to Git should be the means by which updates to our code or infrastructure are made – in other words, nothing is ever done manually.

For this application, as we enter Lambda territory, we will want a way to compile and deploy our Lambda code AND update our infrastructure via CloudFormation. The best way to do this is to set up a CI/CD pipeline. For this, I will leverage Azure DevOps.

Why not AWS Services?

It may seem odd to use Azure DevOps to deploy AWS infrastructure, but it's not as uncommon as you might think. This is due to the AWS tooling being awful – I really believe Amazon sought simply to say “we can do that” rather than create a viable platform supporting developer productivity. Many of the teams I have seen using AWS use Azure DevOps or, in other cases, will deploy a Jenkins server – rarely do I see people actually using CodePipeline and CodeBuild.

CI/CD Simplified

I will spare you the details of setting up the pipelines in Azure and doing the releases. I will leave you with the YAML file that represents the build pipeline, which builds the Analyze Image and Create Thumbnail functions in tandem while publishing out the CloudFormation template.
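A sketch of what such a build pipeline looks like – the project paths, the .NET Core runtime, and the task versions are all my assumptions; only the infra.yaml name and the two function names come from this post:

```yaml
# Sketch of an Azure DevOps build pipeline (azure-pipelines.yml).
# Paths, runtime, and task versions are illustrative assumptions.
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Build and zip each Lambda function separately
  - task: DotNetCoreCLI@2
    displayName: 'Publish CreateThumbnail'
    inputs:
      command: publish
      publishWebProjects: false
      projects: 'src/CreateThumbnail/CreateThumbnail.csproj'
      arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)/CreateThumbnail'
      zipAfterPublish: true

  - task: DotNetCoreCLI@2
    displayName: 'Publish AnalyzeImage'
    inputs:
      command: publish
      publishWebProjects: false
      projects: 'src/AnalyzeImage/AnalyzeImage.csproj'
      arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)/AnalyzeImage'
      zipAfterPublish: true

  # Publish the CloudFormation template alongside the zip files
  - task: CopyFiles@2
    displayName: 'Stage infra.yaml'
    inputs:
      contents: 'infra.yaml'
      targetFolder: '$(Build.ArtifactStagingDirectory)'

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
```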

The gist of this is simple – we compile each function into a separate zip file and publish that zip file so we can use it in our release pipeline. Additionally, we publish our infra.yaml, which is our CloudFormation template.

In the release pipeline (not shown), we use the S3 Upload task to upload the resulting zip files to an S3 bucket which houses the application artifacts. We then run a Stack Update task with the CloudFormation template. This will replace the code for those Lambdas; here is the YAML:
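A sketch of the relevant template portion – the handler strings, runtime, and artifact bucket name are assumptions; AppRole and the two parameter names come from this series:

```yaml
# Sketch: Lambda declarations whose code location is parameterized.
# Handlers, runtime, and bucket name are illustrative assumptions.
Parameters:
  CreateThumbnailLambdaVersionFile:
    Type: String
  AnalyzeImageLambdaVersionFile:
    Type: String

Resources:
  CreateThumbnailFunction:
    Type: AWS::Lambda::Function
    Properties:
      Role: !GetAtt AppRole.Arn
      Runtime: dotnetcore2.1
      Handler: CreateThumbnail::CreateThumbnail.Function::Handler
      Code:
        S3Bucket: thumbnail-app-artifacts
        S3Key: !Ref CreateThumbnailLambdaVersionFile   # set per release

  AnalyzeImageFunction:
    Type: AWS::Lambda::Function
    Properties:
      Role: !GetAtt AppRole.Arn
      Runtime: dotnetcore2.1
      Handler: AnalyzeImage::AnalyzeImage.Function::Handler
      Code:
        S3Bucket: thumbnail-app-artifacts
        S3Key: !Ref AnalyzeImageLambdaVersionFile      # set per release
```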

The key thing here is the introduction of the CreateThumbnailLambdaVersionFile and AnalyzeImageLambdaVersionFile parameters, which are used as the S3Key value for the Lambdas; this is fed to the template at runtime by the DevOps release pipeline, like this:
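One way to wire this up is with the AWS Toolkit for Azure DevOps stack task, passing the zip keys as inline parameter overrides – the connection name, region, stack name, and key naming scheme below are all assumptions:

```yaml
# Sketch of the Release pipeline's Stack Update step.
# Credentials, region, stack name, and key format are illustrative.
- task: CloudFormationCreateOrUpdateStack@1
  inputs:
    awsCredentials: 'aws-connection'
    regionName: 'us-east-1'
    stackName: 'thumbnail-app'
    templateSource: 'file'
    templateFile: '$(System.DefaultWorkingDirectory)/drop/infra.yaml'
    templateParametersSource: 'inline'
    templateParameters: |
      - ParameterKey: CreateThumbnailLambdaVersionFile
        ParameterValue: CreateThumbnail_$(Build.BuildId).zip
      - ParameterKey: AnalyzeImageLambdaVersionFile
        ParameterValue: AnalyzeImage_$(Build.BuildId).zip
```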

This is what is meant by GitOps – all changes to our application happen via Git operations; we never do anything manually. With a proper automated testing layer in place, this allows us to achieve true continuous delivery of our application.

That is all for this section – in the next part, we will write the actual code for our application, which adds objects to our thumbnail bucket and writes the data to DynamoDB.

As always here is the complete source if you want to skip ahead:
https://github.com/xximjasonxx/ThumbnailCreator/tree/release/version1

See you in Part 3
