Elastic Kubernetes Service (EKS) is Amazon's managed Kubernetes offering, akin to Azure Kubernetes Service and Google Kubernetes Engine. As is often the case, Amazon takes a more infrastructure-heavy approach than Azure, meaning you will need to understand VPCs and subnets when setting things up, since you will be defining them yourself.
The good news is, Amazon offers a quick start tutorial here https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html that also provides the CloudFormation scripts to set up the infrastructure for both the Control Plane and the Data Plane.
Before you get started
Amazon imposes limits on the EC2 instances an account is allowed to create, and these limits are often very low – too low to support EKS. This is done to prevent users from accidentally spinning up resources they cannot pay for. You can see your limits here: https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Limits: (if the link does not work, select EC2 and you will see Limits listed near the top of the left-hand navigation).
The MINIMUM instance size that EKS can be used with is t2.small, so you may have to request a limit increase, which can take a day or two. If you do NOT do this, you will run into errors when you set up the Data Plane. The documentation does not call this out and it can be quite frustrating.
Part 1 – Initial Setup
Part 1 of the tutorial is divided amongst a few prerequisite actions: creation of the VPC and related subnets via CloudFormation, IAM role creation, and verifying installation of the AWS CLI and kubectl.
If you have worked with Amazon before, you will be familiar with the need to use VPCs to contain resources. This helps things stay organized and also provides a minimum level of security. What the tutorial does not call out, however, is the need to raise your EC2 limits before executing it.
At a minimum, you will need to be able to deploy EC2 instances of type t2.small to support the Data Plane; if you are a new console user, you will likely have to request a limit increase. This action does not cost anything – the limit is purely in place to prevent new users from racking up charges for resources they do not need.
I should point out that this limit increase is ONLY needed for the Data Plane, not the Control Plane, so it is possible to get through half of the tutorial without it. However, you will find yourself unable to deploy Kubernetes resources without a working Data Plane. You can view your limits here: https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Limits: (if the link does not work, look for the Limits option under EC2).
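If you prefer the command line, the Service Quotas API can show the same information. The sketch below assumes a current AWS CLI, and the quota code is my assumption for the "Running On-Demand Standard instances" quota – verify it in your own account before relying on it:

```shell
# Sketch: inspect the EC2 On-Demand instance quota via Service Quotas.
# L-1216C47A is assumed to be the "Running On-Demand Standard instances"
# quota code -- confirm it in your account/region first.
aws service-quotas get-service-quota \
  --service-code ec2 \
  --quota-code L-1216C47A

# Request an increase (the desired value here is only an example)
aws service-quotas request-service-quota-increase \
  --service-code ec2 \
  --quota-code L-1216C47A \
  --desired-value 16
```

Either way, the console route described above works fine – this is just an alternative for those who live in the terminal.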
Performing the Steps
Performing the steps is rather easy since most of the nitty-gritty work is handled by CloudFormation. I encourage you to look over the scripts and get a sense of what is being created. For the most part it's a standard VPC with subnets and an internet gateway.
Create the Control Plane
When we talk about managed Kubernetes, what is actually being referred to is a managed Control Plane. The Control Plane monitors and governs everything going on in a cluster, so it must be maintained at all costs. Kubernetes is designed to recover automatically from the loss of resources within the cluster; it can achieve this because the Control Plane responds to and addresses these problems.
Regarding the tutorial, this step is straightforward and should be relatively easy. The one caution I would offer: ensure the user that creates the cluster in the console is the SAME user your AWS CLI is configured to connect as (this is the default). If you fail to do this, you can receive authentication errors unless additional configuration is applied.
Update your kubectl context
The primary way to deploy resources to Kubernetes is via the management API hosted on the Control Plane. This communication is handled via the kubectl command line tool. kubectl operates via a "context" which tells it which cluster commands will be executed against. This is the purpose of the update-kubeconfig command in the next section. If you want to see a list of all your contexts, execute the following command:
kubectl config get-contexts
Each line of output indicates a context you can use to talk to a Kubernetes cluster.
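For reference, the update-kubeconfig step looks like the sketch below (the cluster name and region are placeholders – use your own); it adds a context for the new cluster, which you can then select explicitly if it is not already active:

```shell
# Register the EKS cluster in your kubeconfig ("my-eks-cluster" is a placeholder)
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster

# Show which context kubectl is currently using
kubectl config current-context

# Switch contexts explicitly if needed (name comes from `kubectl config get-contexts`)
kubectl config use-context <context-name>
```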
Execute the final command in this section to verify you can talk to the Kubernetes Control Plane. If you run the following command you will see a host of resources in the Pending state – these will start once a Data Plane is added to the cluster (next section):
kubectl get pods
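One caveat: the pending resources in question live in the kube-system namespace, while a plain `kubectl get pods` only looks at the default namespace (which is likely empty at this point), so you may need to widen the query:

```shell
# System pods (coredns, etc.) live in kube-system, not the default namespace
kubectl get pods --all-namespaces
```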
Create the Data Plane
This next section is where you will be impacted if you have NOT raised your EC2 limits. EKS uses EC2 instances to support the Kubernetes Data Plane. Effectively, Kubernetes is nothing more than a resource scheduler: it schedules resources to run and uses the block of compute provided by the worker nodes (EC2 instances) to host those resources.
On ephemeralism: the concept of being ephemeral is very common within the container ecosystem and Kubernetes. Everything within the cluster (outside the Control Plane) must be treated as ephemeral. This means you do NOT want to persist state anywhere within the cluster, as you can lose it at any time.
I won't go into solutions for this but, when you deploy items that persist state in Kubernetes, you need to be extra aware of where that state is actually being persisted.
Follow the instructions in this section; I recommend keeping the number of nodes to 1 or 2 if this is for development and testing. Remember, in addition to paying for cluster time and the resources related to the VPC, you will also be paying for the EC2 instances – this can add up quickly. I recommend using t2.small for testing purposes as it works out to be the cheapest.
Add Your Nodes into the Cluster
As an extra step, once you create the EC2 instances that will be the worker nodes in the cluster, you need to add them to the cluster yourself. I have yet to find an option that enables auto provisioning (this might be Fargate territory).
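The mechanism the tutorial uses for this is an aws-auth ConfigMap that maps the worker nodes' IAM role into the cluster. A sketch of what that file looks like – the role ARN is a placeholder you fill in from your CloudFormation output:

```yaml
# aws-auth-cm.yaml -- sketch; applied with `kubectl apply -f aws-auth-cm.yaml`
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN from the CloudFormation output>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```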
Once you finish executing the commands, run the following command:
kubectl get pods
With luck, you should now see movement in the status of your pods as the nodes come online (mine went to Running within seconds). Congrats, your cluster is now working. To prove it, let's launch the Kubernetes Cluster Dashboard. Follow the instructions here: https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html
Let’s Deploy Something
Our cluster is pretty useless as-is, so let's deploy an API to it. For this, I wrote a pretty basic .NET Core Web API that does math; here is the source of the main controller:
using CalcApi.Models;
using Microsoft.AspNetCore.Mvc;

namespace CalcApi.Controllers
{
    [Route("api/calc")]
    [ApiController]
    public class CalcController : ControllerBase
    {
        [HttpPost("add")]
        public IActionResult AddNumbers([FromBody]NumbersModel numbersModel)
        {
            return Ok(numbersModel.NumberOne + numbersModel.NumberTwo);
        }

        [HttpPost("subtract")]
        public IActionResult SubtractNumbers([FromBody]NumbersModel numbersModel)
        {
            return Ok(numbersModel.NumberOne - numbersModel.NumberTwo);
        }

        [HttpPost("multiply")]
        public IActionResult MultiplyNumbers([FromBody]NumbersModel numbersModel)
        {
            return Ok(numbersModel.NumberOne * numbersModel.NumberTwo);
        }

        [HttpPost("divide")]
        public IActionResult DivideNumbers([FromBody]NumbersModel numbersModel)
        {
            if (numbersModel.NumberTwo == 0)
                return BadRequest("Divisor cannot be zero");

            return Ok(numbersModel.NumberOne / numbersModel.NumberTwo);
        }
    }
}
Next, I create the Docker image using this Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2.203 as build
WORKDIR /code
COPY . .
RUN dotnet restore
RUN dotnet publish -o output -c Release --no-restore

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 as runtime
WORKDIR /app
COPY --from=build /code/output ./
EXPOSE 8081
EXPOSE 443
ENTRYPOINT [ "dotnet", "CalcApi.dll" ]
I recommend building it in the following way:
docker build -t calc-api:v1 .
Next, you have a decision to make. Assuming you have set up authentication with Docker Hub (via docker login), you can tag the image with your username and push; for me:
docker tag calc-api:v1 xximjasonxx/calc-api:v1
docker push xximjasonxx/calc-api:v1
Or, if you want to take the Amazon route, you can create a repository in Elastic Container Registry (ECR) and push the image there. To do this, simply select ECR from the service options and create a registry. Once that is complete, Amazon will provide you with the appropriate commands.
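As a hedged sketch of what those commands typically look like with a current AWS CLI (account id, region, and repository name below are placeholders; older CLI versions used `aws ecr get-login` for the login step instead):

```shell
# Create a repository for the image
aws ecr create-repository --repository-name calc-api

# Authenticate Docker against your registry (AWS CLI v2 style)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the registry address, then push
docker tag calc-api:v1 <account-id>.dkr.ecr.us-east-1.amazonaws.com/calc-api:v1
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/calc-api:v1
```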
The main point to understand is that Kubernetes will expand and contract the number of Pods hosting your app as needed. To run containers on those Pods, the source images need to be in an accessible location – that is why we use a registry.
Once your image is pushed up, you can apply a spec file to add the resource to Kubernetes. Here is my deployment spec (I am using ECR):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calcapi-deployment
  labels:
    app: calcapi
spec:
  replicas: 3
  selector:
    matchLabels:
      app: calcapi
      type: pod
  template:
    metadata:
      labels:
        app: calcapi
        type: pod
    spec:
      containers:
        - name: calcapi-container
          image: 684925376801.dkr.ecr.us-east-1.amazonaws.com/calc-api:v2
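One optional addition I would suggest: declaring the container port. It is purely informational, but it documents that the container listens on 8081 (matching the Dockerfile's EXPOSE). The fragment below is a sketch of the same containers section with that addition:

```yaml
# Same containers block, with the listening port declared for documentation;
# containerPort does not open or restrict anything by itself
    spec:
      containers:
        - name: calcapi-container
          image: 684925376801.dkr.ecr.us-east-1.amazonaws.com/calc-api:v2
          ports:
            - containerPort: 8081
```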
Run the apply command as such:
kubectl apply -f deployment-spec.yaml
Once the command completes, run the following command and wait for the new pods to enter the Running state:
kubectl get pods --watch
Congrats, you deployed your first application to Kubernetes on EKS. The problem is, this isn't very useful yet because the cluster offers us no way to make our API calls. For this we will create a Service.
Accessing our Pods
When it comes to accessing resources within a cluster there are a couple of options: Services and Ingress. We won't discuss Ingress here – it's a rather large topic. For this simple example a Service will be fine.
Here is the documentation from Kubernetes on Services: https://kubernetes.io/docs/concepts/services-networking/service/
What you need to understand is that Services are, simply, the mechanism by which we address a group of Pods. They come in four flavors: ClusterIP (the default), NodePort, LoadBalancer, and ExternalName.
Locally, I like to use NodePort because I am often using minikube. When you deploy to the cloud, the recommendation is to use LoadBalancer. Doing so will have AWS automatically deploy a load balancer with an external hostname. Here is my service spec:
apiVersion: v1
kind: Service
metadata:
  name: calc-api-service
spec:
  selector:
    app: calcapi
    type: pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
  type: LoadBalancer
Of note here is the selector node. This tells the Service which Pods it is addressing; you can see the app and type values match those from the Deployment spec above.
Execute the following command:
kubectl apply -f service-spec.yaml
Next, use the following command to know when the service is available and addressable:
kubectl get svc --watch
Here is what my final output looked like:
Once this is up, you can use Postman or a similar tool to access the endpoints on your API.
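For example, from the command line (the hostname comes from the EXTERNAL-IP column of `kubectl get svc`; the JSON property names are my assumption based on the NumbersModel used by the controller, whose source is not shown above):

```shell
# Hypothetical call to the add endpoint through the AWS load balancer
curl -X POST http://<elb-external-hostname>/api/calc/add \
  -H "Content-Type: application/json" \
  -d '{"numberOne": 4, "numberTwo": 6}'
```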
Congrats – you have deployed your first application to EKS. Do NOT forget to tear everything down; EKS is not something you want to leave running for a long duration in a personal-use context.
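A rough teardown sequence, in reverse order of creation (stack and cluster names are placeholders; deleting the Service first releases the AWS load balancer it created):

```shell
# Remove the Kubernetes resources first so AWS-side resources are released
kubectl delete -f service-spec.yaml
kubectl delete -f deployment-spec.yaml

# Then delete the worker node stack and the cluster itself
aws cloudformation delete-stack --stack-name <worker-nodes-stack>
aws eks delete-cluster --name <cluster-name>
# Finally, delete the VPC CloudFormation stack from the tutorial's first step
```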
My Thoughts
So, going into this my experience had been more on the Azure side and with minikube than with EKS. Unsurprisingly, I found EKS to be a bit more technical and demanding than AKS, mostly due to the undocumented need for a limit increase and the heavier emphasis on infrastructure, which is typical of many of Amazon's services; in contrast, AKS hides much of this from you.
Overall, the rise of managed Kubernetes services like EKS is very good for the industry and represents a step closer to where I believe applications need to be: not caring about the underlying servers or plumbing, just deployed as code with a definition of what it needs. That is still a ways off but, it is fascinating that so much effort was spent getting to the cloud and now, with Kubernetes, we are trying to make the "which cloud?" question no longer matter.