Serverless Proxy Pattern: Part 2

Part 1 available here

In the previous part, I talked through the motivation for building this application and the emphasis on using CloudFormation entirely as a means to enable efficient and consistent setup.

To start the template, we specified our role (AppRole) and talked through some of the decisions and potential improvements with the approach. Next, we set up our S3 buckets and configured the “raw” bucket to send a notification, via SNS, to two (not yet created) Lambda functions which will handle various operations against the uploaded image data.

In this part, we will set up the appropriate permissions for the Lambdas so they can be correctly invoked from SNS. We will also talk through setting up GitOps so that our code and infrastructure can be deployed quickly and automatically, enabling fast development.

Permissions

The one thing you learn very quickly working with AWS is the intense focus on security at all levels. This is mainly driven by “policies”, which are attached to things like users and roles and specify what actions the attached entity may take. We talked about this a little in Part 1 and it’s worth repeating here – for the sake of getting started I am using a set of fairly wide-open policies, with the goal of refining them towards the end.

We can see this in the TopicPolicy, which enables S3 to publish messages to the SNS topic. Here it is in YAML format:

AWSTemplateFormatVersion: 2010-09-09
Description: "Creates infrastructure for Thumbnail Creator"
Resources:
  ImageUploadTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Id: S3TopicPublishPolicy
        Version: "2012-10-17"
        Statement:
          - Sid: S3PublishAllowStatement
            Effect: Allow
            Principal:
              Service:
                - s3.amazonaws.com
            Action: sns:Publish
            Resource: !Ref ImageUploadTopic
      Topics:
        - !Ref ImageUploadTopic

Keep in mind that ImageUploadTopic was described at the end of Part 1 and represents the topic we are using to “fan out” bucket notifications.

Here we are declaring a policy and associating it with the ImageUploadTopic resource (via the Resource portion of the policy statement). Effectively, this allows S3 (the Principal) to publish to this topic. We could be more specific here, though I have chosen not to be, about which S3 resources can publish to this topic.
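For example, a common way to tighten this is to add a Condition to the statement so that only our raw bucket can publish. This is only a minimal sketch, assuming the raw bucket is named from the RawBucketName parameter used elsewhere in the template:

Statement:
  - Sid: S3PublishAllowStatement
    Effect: Allow
    Principal:
      Service:
        - s3.amazonaws.com
    Action: sns:Publish
    Resource: !Ref ImageUploadTopic
    # Restrict publishing to notifications originating from the raw bucket
    Condition:
      ArnLike:
        "aws:SourceArn": !Sub "arn:aws:s3:::${RawBucketName}"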

With this in place, our S3 bucket can now publish messages to our SNS topic, but nothing will happen if we test this. Why? Because SNS does NOT have permission to invoke our Lambda functions. Let’s set that up next.

Lambda Permissions

Recall from Part 1 that this is what our SNS declaration looked like in YAML:

AWSTemplateFormatVersion: 2010-09-09
Description: "Creates infrastructure for Thumbnail Creator"
Resources:
  ImageUploadTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: ImageUploadTopic
      Subscription:
        - Endpoint: !GetAtt CreateThumbnailFunction.Arn
          Protocol: lambda
        - Endpoint: !GetAtt AnalyzeImageFunction.Arn
          Protocol: lambda


Since we have now added our TopicPolicy (above), SNS will attempt to invoke our Lambdas, and will summarily fail. Why? Because, as you might guess at this point, it does not have permission to do so. For that we need to create Lambda permissions for our functions (though you could also create a more general permission if you have many Lambda functions).

Here are the permission declarations in YAML:

AWSTemplateFormatVersion: 2010-09-09
Description: "Creates infrastructure for Thumbnail Creator"
Resources:
  CreateThumbnailFunctionSNSInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref CreateThumbnailFunction
      Action: "lambda:InvokeFunction"
      Principal: "sns.amazonaws.com"
  AnalyzeImageFunctionSNSInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref AnalyzeImageFunction
      Action: "lambda:InvokeFunction"
      Principal: "sns.amazonaws.com"

Here we are specifying that SNS may invoke these functions, which we obviously need. As written, these permissions are not scoped to a particular topic; a sketch of a tighter version follows below. Next, we will create the actual Lambda functions we will be invoking.
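If you do want to scope these down, AWS::Lambda::Permission supports a SourceArn property. A minimal sketch for one of the functions, restricting invocation to our topic, might look like this:

CreateThumbnailFunctionSNSInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref CreateThumbnailFunction
    Action: "lambda:InvokeFunction"
    Principal: "sns.amazonaws.com"
    # Only invocations originating from ImageUploadTopic are allowed
    SourceArn: !Ref ImageUploadTopic

The same SourceArn would be added to the AnalyzeImageFunction permission.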

Why are we using SNS?

There may be a question in your mind as to why I am choosing to use SNS instead of straight Lambda invocation. The answer is, as I said in Part 1, that S3 does NOT support delivering a single event to multiple destinations, so if you are going to do a sort of “fan out” where the event is received by multiple services, you need to use SNS. Alternatively, you could leverage Step Functions or a single Lambda that calls other Lambdas. For me, this is the most straightforward approach and keeps with the “codeless” theme I am going for as part of my “serverless” architecture.
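For context, the bucket side of that fan-out is just a notification configuration pointing at the topic. This is a rough sketch of what the raw bucket wiring from Part 1 looks like; the resource name and event filter here are assumptions, not the exact Part 1 definition:

RawImageBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Ref RawBucketName
    NotificationConfiguration:
      TopicConfigurations:
        # A single ObjectCreated event goes to the topic, which then
        # fans out to both Lambda subscriptions
        - Event: "s3:ObjectCreated:*"
          Topic: !Ref ImageUploadTopic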

Infrastructure as Code

CloudFormation stands with other tools as a way to represent infrastructure needs in code. It is a core part of the GitOps movement, which states that all changes to an application should be driven automatically from source control; source control needs to be the single source of truth for the application.

In general, this is part of a larger movement in the industry around the question “what is an application?”. Traditionally, an application was simply code, which would then run on infrastructure provisioned, in some cases, by a totally separate team. As we have moved into the cloud, though, this has shifted and coalesced to the point where infrastructure and code are intermingled and dependent on each other.

When I speak with teams on these points, I emphasize the notion that the “application” is everything. And since it is everything, your infrastructure definition is just as important as your compiled code. Gone are the days where it is acceptable for your cloud infrastructure to exist transiently in a console. It is now expected that your application’s infrastructure is represented in code and versioned along with the rest of your application. That is why we are using CloudFormation here (we could also use Terraform or similar tools).

GitOps

GitOps simply states that everything about our application exists together and that commits to Git should be the means by which updates to our code or infrastructure are made – in other words, nothing is ever done manually.

For this application, as we enter Lambda territory, we will want a way to compile and deploy our Lambda code AND update our infrastructure via CloudFormation. The best way to do this is to set up a CI/CD pipeline. For this, I will leverage Azure DevOps.

Why not AWS Services?

It may seem odd to use Azure DevOps to deploy AWS infrastructure, but it’s not as uncommon as you might think. This is due to the AWS tooling being awful – I really believe Amazon sought simply to say “we can do that” rather than create a viable platform supporting developer productivity. Many of the teams I have seen using AWS use Azure or, in other cases, will deploy a Jenkins server – rarely do I see people actually using CodePipeline and CodeBuild.

CI/CD Simplified

I will spare you the details of setting up the pipelines in Azure and doing the releases. I will leave you with the YAML file that represents the build pipeline, which builds the Analyze Image and Create Thumbnail functions in tandem while publishing the CloudFormation template.

stages:
  - stage: buildlambdafunction
    displayName: Build Lambda Functions
    jobs:
      - job: buildcreatethumbnailfunction
        displayName: Build Create Thumbnail Function
        pool:
          vmImage: ubuntu-latest
          demands:
            - msbuild
            - visualstudio
        steps:
          - task: DotNetCoreCLI@2
            displayName: Restore Dependencies
            inputs:
              projects: CreateThumbnailFunction/src/CreateThumbnailFunction/CreateThumbnailFunction.csproj
              command: restore
          - task: DotNetCoreCLI@2
            displayName: Publish Source
            inputs:
              command: publish
              projects: CreateThumbnailFunction/src/CreateThumbnailFunction/CreateThumbnailFunction.csproj
              arguments: -c Debug -o $(Build.ArtifactStagingDirectory)/publish --no-restore
              publishWebProjects: false
              zipAfterPublish: false
          - task: ArchiveFiles@2
            displayName: Archive Publish Output
            inputs:
              includeRootFolder: false
              archiveType: zip
              rootFolderOrFile: $(Build.ArtifactStagingDirectory)/publish/CreateThumbnailFunction
              archiveFile: $(Build.ArtifactStagingDirectory)/createthumbnail-publish-$(Build.BuildId).zip
          - task: PublishBuildArtifacts@1
            displayName: Publish Artifacts
            inputs:
              PathtoPublish: $(Build.ArtifactStagingDirectory)
              ArtifactName: CreateThumbnail
      - job: buildanalyzefunction
        displayName: Build Analyze Image Function
        pool:
          vmImage: ubuntu-latest
          demands:
            - msbuild
            - visualstudio
        steps:
          - task: DotNetCoreCLI@2
            displayName: Restore Dependencies
            inputs:
              projects: AnalyzeImageFunction/src/AnalyzeImageFunction/AnalyzeImageFunction.csproj
              command: restore
          - task: DotNetCoreCLI@2
            displayName: Publish Source
            inputs:
              command: publish
              projects: AnalyzeImageFunction/src/AnalyzeImageFunction/AnalyzeImageFunction.csproj
              arguments: -c Debug -o $(Build.ArtifactStagingDirectory)/publish --no-restore
              publishWebProjects: false
              zipAfterPublish: false
          - task: ArchiveFiles@2
            displayName: Archive Publish Output
            inputs:
              includeRootFolder: false
              archiveType: zip
              rootFolderOrFile: $(Build.ArtifactStagingDirectory)/publish/AnalyzeImageFunction
              archiveFile: $(Build.ArtifactStagingDirectory)/analyzeimage-publish-$(Build.BuildId).zip
          - task: PublishBuildArtifacts@1
            displayName: Publish Artifacts
            inputs:
              PathtoPublish: $(Build.ArtifactStagingDirectory)
              ArtifactName: AnalyzeImageFunction
      - job: publishinfrastructure
        displayName: Publish Infrastructure
        pool:
          vmImage: ubuntu-latest
          demands:
            - msbuild
            - visualstudio
        steps:
          - task: CopyFiles@2
            displayName: Copy CloudFormation Template
            inputs:
              SourceFolder: infrastructure
              Contents: infra.yaml
              TargetFolder: $(Build.ArtifactStagingDirectory)
          - task: PublishBuildArtifacts@1
            displayName: Publish Artifacts
            inputs:
              PathtoPublish: $(Build.ArtifactStagingDirectory)
              ArtifactName: Infrastructure


The gist of this is simple – we compile each function into a separate zip file and publish that zip file so we can use it in our release pipeline. Additionally, we publish our infra.yaml, which is our CloudFormation template.

In the release pipeline (not shown) we use the S3 Upload task to upload the resulting zip files to an S3 bucket which houses the application artifacts. We then run a Stack Update task with the CloudFormation template, which replaces the code for those Lambdas. Here is the YAML:

AWSTemplateFormatVersion: 2010-09-09
Description: "Creates infrastructure for Thumbnail Creator"
Parameters:
  RawBucketName:
    Type: String
    Default: rawimages
    Description: Enter the name of the Raw Images Bucket
  ThumbnailBucketName:
    Type: String
    Default: thumbnailimages
    Description: Enter the name of the Thumbnail Images Bucket
  CreateThumbnailLambdaVersionFile:
    Type: String
    Description: The file containing the compiled contents for the Lambda
  AnalyzeImageLambdaVersionFile:
    Type: String
    Description: The file containing the compiled contents for the Lambda
Resources:
  CreateThumbnailFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: thumbnailcreator-artifacts
        S3Key: !Ref CreateThumbnailLambdaVersionFile
      Handler: CreateThumbnailFunction::Functions.CreateThumbnailFunction::ExecuteAsync
      Runtime: dotnetcore2.1
      Role: !GetAtt AppRole.Arn
      TracingConfig:
        Mode: Active
      Timeout: 300
      Environment:
        Variables:
          ThumbnailBucketName: !Ref ThumbnailBucketName
  AnalyzeImageFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: thumbnailcreator-artifacts
        S3Key: !Ref AnalyzeImageLambdaVersionFile
      Handler: AnalyzeImageFunction::Functions.AnalyzeImageFunction::ExecuteAsync
      Runtime: dotnetcore2.1
      Role: !GetAtt AppRole.Arn
      TracingConfig:
        Mode: Active
      Timeout: 600


The key thing here is the introduction of the CreateThumbnailLambdaVersionFile and AnalyzeImageLambdaVersionFile parameters, which are used as the S3Key values for the Lambdas; these are fed to the template at runtime by the DevOps release pipeline, like this:

[
  {
    "ParameterKey": "RawBucketName",
    "ParameterValue": "rawimagestc1983"
  },
  {
    "ParameterKey": "ThumbnailBucketName",
    "ParameterValue": "thumbimagestc1983"
  },
  {
    "ParameterKey": "CreateThumbnailLambdaVersionFile",
    "ParameterValue": "createthumbnail-publish-$(Release.Artifacts.PublishedArtifact.BuildId).zip"
  },
  {
    "ParameterKey": "AnalyzeImageLambdaVersionFile",
    "ParameterValue": "analyzeimage-publish-$(Release.Artifacts.PublishedArtifact.BuildId).zip"
  }
]


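To make the release side a bit more concrete, here is a rough sketch of what those two release tasks are effectively doing, written as Azure Pipelines script steps that call the AWS CLI. The stack name, artifact paths, and credential handling are assumptions for illustration; the actual release uses the S3 Upload and Stack Update tasks rather than raw CLI calls:

steps:
  # Push the packaged Lambdas to the artifacts bucket referenced by the template
  - script: |
      aws s3 cp createthumbnail-publish-$(Release.Artifacts.PublishedArtifact.BuildId).zip s3://thumbnailcreator-artifacts/
      aws s3 cp analyzeimage-publish-$(Release.Artifacts.PublishedArtifact.BuildId).zip s3://thumbnailcreator-artifacts/
    displayName: Upload Lambda packages

  # Update the stack so the Lambda Code blocks point at the new zip files.
  # Assumes the release variables inside params.json have already been resolved
  # and that the stack name matches (illustrative here).
  - script: |
      aws cloudformation update-stack \
        --stack-name thumbnail-creator \
        --template-body file://infra.yaml \
        --parameters file://params.json \
        --capabilities CAPABILITY_IAM
    displayName: Update CloudFormation stack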
This is what is meant by GitOps – all changes to our application happen via Git operations; we never do anything manually. This sets us up so that, with a proper automated testing layer, we can achieve true continuous delivery of our application.

That is all for this section – in the next part we will write the actual code for our application, which adds objects to our thumbnail bucket and writes the data to DynamoDB.

As always, here is the complete source if you want to skip ahead:
https://github.com/xximjasonxx/ThumbnailCreator/tree/release/version1

Part 3 is here
