Infrastructure as Code (IaC) is one of the main pillars of modern DevOps and Cloud Native applications. The general idea is that the software itself should dictate its infrastructure needs and should always be able to deploy quickly and automatically to both existing and new environments.
This is important because most applications today use not a single cloud service but many, often configured in varying ways depending on the environment. The risk is that, if this configuration lives only in the cloud, user error or a provider outage can wipe out valuable configuration and settings information. With IaC, a simple rerun of the release script is all that is needed to reprovision your services.
Additionally, doing this mitigates the “vault of knowledge” problem, whereby only a small group of people understands how things are set up. If they depart, or are otherwise unavailable during an outage, the organization is at risk. The configuration and settings for your infrastructure are as much a part of your application as any DLL or line of code, and we need to treat them as such.
To show this in action, we will develop a simple NodeJS application that responds to HTTP requests using ExpressJS, containerize it, and then deploy it to Azure using Terraform.
Step 1: Build the application
When laying out the application, I always find it useful to create a separate directory for my infrastructure code files; in this case I will create a directory called terraform. I store my source files under a directory called src.
For this simple application I will use ExpressJS and the default Hello World code from the ExpressJS documentation:
npm install express --save
Create a file index.js and paste in the following contents (taken from the ExpressJS Hello World example: https://expressjs.com/en/starter/hello-world.html)
const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))
We can run this locally using the following npm command:
npm start
Note, however, that this script does not come predefined after npm init, so you might have to add it yourself. In essence, it is the same as running node index.js at the command line.
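For reference, a minimal package.json with a start script might look like the following (the name, version, and dependency range here are illustrative, not from the original project):

{
  "name": "hello-world",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}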
Step 2: Containerize
Containerization is not actually required for this but, let's be honest, if you are not using containers at this point you are only depriving yourself of easier, more consistent deployments; in my view it has become a question of when I do NOT use containers, with containers as the default.
Within our src directory we create a Dockerfile. Below are the contents of my Dockerfile, which enables the application from above to be served via a container.
FROM node:jessie
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
ENTRYPOINT [ "npm", "start" ]
We start off by using the node:jessie base image (Jessie is the Debian release inside the container) – you can find additional base images here: https://hub.docker.com/_/node/
Next we set our working directory within the container (where we will execute further commands) – in this case /app – note that you can call this whatever you like
Next we copy everything from the Dockerfile context directory (by default, the directory where the Dockerfile lives) into the container. Note that for our example we are not creating a .dockerignore file due to the simple nature of the app. If this were more complicated you would want to make sure the node_modules directory was not copied, lest it make your build times progressively longer
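If the project did grow, a minimal .dockerignore might look like this (the entries are typical suggestions on my part, not from the original post):

node_modules
npm-debug.log
.git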
We then run the npm install command, which populates node_modules with our dependencies. Recall from the previous point that we do not want to copy node_modules over; this is for two reasons:
- Often we will have development-environment-specific NPM packages which we likely do not want in the container – the goal with the container is ALWAYS to be as small as possible
- In accordance with #1, copying from the file system is often slower (especially in the Cloud) than simply downloading things – we also want to make sure we only download what we need (see point #1)
Next we use the EXPOSE command, which instructs the container to have port 3000 open and accepting traffic. If you look at the ExpressJS script, this is the port it listens on; we are poking a hole in the container firewall so the server can receive requests.
Finally, Dockerfiles like this end with the ENTRYPOINT command. With the source in place, this is the command that gets run when the Docker image is started as a container. For web servers like this, it should be a command that blocks the program from exiting because, when the program exits, the container will shut down as well
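To try this out locally, building and running the image might look like the following (the image name hello-world-app is just an illustrative choice):

# build the image from the src directory (where the Dockerfile lives)
docker build -t hello-world-app ./src

# run a container from the image, mapping host port 3000 into the container
docker run -p 3000:3000 hello-world-app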
Step 3: Publish to a Registry
When we build the Dockerfile we create a Docker image. An image, by itself, is useless, as it is merely a template for subsequent containers [ we haven't run ENTRYPOINT yet ]. Images are served from a container registry; this is where they live until called on to become containers ( an instance of execution ).
Now, generally, it's a bad idea to use a laptop to run any sort of production service (these days the same is true for development as well), so keeping your images in the local registry is not a good idea. Fortunately, all of the major cloud providers (and others) provide registries to store your images in:
- Azure Container Registry (Microsoft Azure)
- Elastic Container Registry (Amazon)
- Docker Hub (Docker)
- Container Registry (Google)
You can create any of the cloud registries above within the corresponding provider and publish your Docker images to it; more here: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli
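For Azure specifically, creating and logging into a registry with the Azure CLI might look like this (the group and registry names are placeholders):

# create a container registry in an existing resource group
az acr create --resource-group example-group --name myregistry --sku Basic

# authenticate the local Docker client against the registry
az acr login --name myregistry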
Publishing Docker images to a registry like this opens them up to being used, at scale, by other services, including Kubernetes (though you can also host the registry itself in Kubernetes, but we won't get into that here).
The command to publish is actually more of a push (from the link above):
docker tag nginx myregistry.azurecr.io/samples/nginx
docker push myregistry.azurecr.io/samples/nginx
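For our application the flow is the same; assuming the hello-world-app image from earlier and a registry named myregistry (both placeholder names), it might look like:

docker tag hello-world-app myregistry.azurecr.io/samples/hello-world:v1
docker push myregistry.azurecr.io/samples/hello-world:v1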
With this, we have our image in a centralized registry and we can pull it into App Service, AKS, or whatever else we need.
Step 4: Understand Terraform
At the core of IaC is the idea of using code to provision infrastructure, normally into cloud providers. Both Azure and Amazon offer tools to automatically provision infrastructure based on a definition: CloudFormation (Amazon) and Azure Resource Manager (ARM) templates (Azure).
Terraform, by HashiCorp, is a third party version which can work with either and has gained immense popularity thanks to its ease of implementation. It can be downloaded here.
There are plenty of resources around the web and on HashiCorp's site that explain how Terraform works at a conceptual level and how it interacts with each supported provider. Here are the basics:
- We define a provider block that indicates the provider plugin we will use to create resources; this will be specific to our target provider
- The provider block then governs the additional block types we can define in our HCL (HashiCorp Configuration Language)
- We define resource blocks to indicate we want to create something
- We define data blocks to indicate that we wish to query for certain values from existing resources
The distinction between resource and data is important, as some elements are ONLY available as one or the other. One such example is a Container Registry. When you think about it, this makes sense: while we will certainly want to audit and deploy many infrastructure components with new releases, the container registry is not such a component. More likely, we want to be able to read from this component and use its data points in configuring other components, such as Azure App Service (we will see this later)
To learn more about Terraform (we will start covering syntax in the next step) I would advise reading through HashiCorp's doc site for Azure; it is very thorough and makes it pretty easy to make sense of things: https://www.terraform.io/docs/providers/azurerm/index.html
Step 5: Deploy with Terraform
Terraform definition files usually end with the .tf extension. I usually advise creating them in a separate folder, if only to keep them apart from your application code.
Let's start with a basic script which creates an Azure Resource Group:
provider "azurerm" {
  version         = "=1.22.0"
  subscription_id = ""
}

resource "azurerm_resource_group" "test" {
  name     = "example-group"
  location = "CentralUS"
}
The first block defines the provider we will use (Azure in this case) and the target version of that provider. I also supply a subscription id, which enables me to target a personal Azure subscription.
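If you are unsure of your subscription id, the Azure CLI can report it (this lookup is my suggestion, not part of the original walkthrough):

az account show --query id -o tsv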
Open a command line (yeah, there is no GUI that I am aware of), cd to the directory that holds your .tf file, and execute the following command:
terraform init
Assuming the terraform program is in your PATH, this should get things going; you will see it download the provider and create the .terraform directory, which holds the binary for the provider plugin (and any other plugins you choose to download). You only need to run the init command one time. Now you are ready to create things.
As with any proper IaC tool, Terraform lays out what it will do before it does it and asks for user confirmation. This is known as the plan step in Terraform and we execute the following:
terraform plan
This will analyze the .tf file and (by default) output what it intends to do to the console; you can also provide the appropriate command line argument here and write the plan to a file.
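For example, saving the plan and then applying exactly that plan is a common pattern (the file name tfplan is arbitrary):

terraform plan -out=tfplan
terraform apply tfplan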
The goal of this step is to give you (and your team) a chance to review what Terraform will create, modify, and destroy. Very important information.
The final step is to apply the changes, which is done (you guessed it) using the apply command:
terraform apply
By default, this command will also output the contents of the plan, and you will need to confirm the changes. After doing so, Terraform will use its information to create our Resource Group in Azure.
Go check out Azure and you should find your new Resource Group created in the CentralUS region (if you used the above code block). That is pretty cool. In our next step we will take this further and deploy our application.
Step 6: Really Deploy with Terraform
Using Terraform does not excuse you from knowing how Azure works or what you need to provision to support certain resources; in fact, this knowledge becomes even more critical. For our purposes we created a simple API that responds with some text to any request. For that we will need an App Service backed by a container but, before that, we need an App Service Plan – we can create this with Terraform:
resource "azurerm_app_service_plan" "test" {
  name                = "example-plan"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}
Here we see one of the many advantages of defining things this way: we can reference back to previous blocks (remember what we created earlier). As when writing code, we want to centralize the definition of things, where appropriate.
This creates a simple App Service Plan that uses the bare basics; your SKU needs may vary. Since we are using containers we could also use Windows here, but Linux just feels better and is more readily designed for supporting containers, at least insofar as I have found.
Running the apply at this point will add the App Service Plan to your Resource Group. Next we need to get some information that will enable us to reference the Docker container we published previously.
data "azurerm_container_registry" "test" {
  name                = "HelloWorldTest"
  resource_group_name = "${azurerm_resource_group.test.name}"
}
Here we see an example of a data node, which is a read action – you are pulling in information about an EXISTING resource, an Azure Container Registry in this case. Note that this does NOT have to live in the same Resource Group as everything else; it is a common approach for services like this, which transcend environments, to live in a separate group.
Ok, now we come to it: we are going to define the App Service itself. Before I lay this out, I want to give a shout out to https://pumpingco.de/blog/deploy-an-azure-web-app-for-containers-with-terraform, which inspired this approach with App Services.
Here is the block: https://gist.github.com/xximjasonxx/0d0bdda8741ac43197528937f6cec9eb (too long for the blockquote)
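Since the gist is linked rather than inlined, here is a minimal sketch of the shape such a block can take, pieced together from the provider docs and the PumpingCode post; the resource names and image tag are placeholders, it assumes admin access is enabled on the registry, and the actual gist may differ:

resource "azurerm_app_service" "test" {
  name                = "example-app-service"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  app_service_plan_id = "${azurerm_app_service_plan.test.id}"

  # settings App Service needs to pull the image from our registry
  app_settings {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = "false"
    DOCKER_REGISTRY_SERVER_URL          = "https://${data.azurerm_container_registry.test.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME     = "${data.azurerm_container_registry.test.admin_username}"
    DOCKER_REGISTRY_SERVER_PASSWORD     = "${data.azurerm_container_registry.test.admin_password}"
  }

  # which container image to run (placeholder repository and tag)
  site_config {
    linux_fx_version = "DOCKER|${data.azurerm_container_registry.test.login_server}/samples/hello-world:v1"
  }

  identity {
    type = "SystemAssigned"
  }
}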
There is a lot going on here, so let's walk through it. You can see that, as with the App Service Plan definition, we can reference back to other resources to get values such as the App Service Plan id. Resources allow you to not just create things but reference their properties (Terraform will ensure things are created in the proper order).
The app_settings block lets us pass values that Azure would otherwise add for us when we configure container support. Notice that we reference the Container Registry data block we created earlier. This makes it a snap to get the critical values we will need to allow App Service access into the Container Registry.
The last two blocks I got from PumpingCode – I know what linux_fx_version does, though I had never seen it used with App Services before, and the same goes for identity.
Step 7: But does it work?
Always the ultimate question. Let’s try it.
- Make a change and build your Docker image. Tag it and push it to your Azure Container Registry – remember the tag you gave it
- One tip here: you might have to change the port being exposed to 80, since App Service (I think) blocks all other ports
- Modify the .tf file so the appropriate image repository name and tag are represented in linux_fx_version. If you want some assurance you have the right values, you can log into the Azure Portal and check out your registry
- Run terraform apply – verify and accept the changes
- Once complete, try to access your App Service (I am assuming you changed the original and went with port 80)
- It might take some time but, if it worked, you should see your updated message being returned from the backend (a quick check from the command line is sketched below)
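For that quick check, something like the following works, where the host name is a placeholder for whatever App Service name your .tf file created:

# request the site root; expect your updated message in the response
curl http://example-app-service.azurewebsites.net/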
Conclusion
The main point with IaC is to understand that modern applications are more than just their source code, especially when going to the Cloud. Having your infrastructure predefined can aid in automatic recovery from problems, enable better auditing of services, and truly represent your application.
In fact, IaC is the centerpiece of tools like Kubernetes, as it allows them to maintain a minimum ideal state via YAML definitions of the abstract infrastructure. Pretty cool stuff if I do say so myself.
Of course, all of this is manual; where it gets really powerful is when you bake it into a CD pipeline. That is a topic for another post, however 🙂