Securing Configuration with Key Vault

In my previous post (here), I talked about the need to consider security when you build your application and focused mainly on securing network traffic. In keeping with a focus on DevOps, we took an Infrastructure as Code (IaC) approach and used Terraform to represent the infrastructure in a script. But, as someone pointed out to me privately, I only covered a part of security, and not even the part which generally leads to more security flaws.

The same codebase as in the aforementioned post is used: jfarrell-examples/SecureApp (github.com)

Configuration Leaking

While securing network access and communication direction is vital, the more likely avenue for an attack tends to be an attacker finding your values in source code or in an Azure configuration section. In the example for Private Endpoints I stored the connection string for the Storage Account and the Access Key for the Event Grid in the Azure App Service Configuration section.

This is not entirely bad, as Configuration can be secured with RBAC to keep values visible only to certain people. However, this is still not advised, as you would not be following a “defense in depth” mentality. Defense in Depth calls for us to never rely on ONE mechanism for security but rather force a would-be attacker to conquer multiple layers. For web applications (and most Azure apps) the de facto standard for securing values is Azure Key Vault (Key Vault | Microsoft Azure).

By using Azure Key Vault with Managed Identity, you can omit your sensitive values from the configuration section and rely on a defined identity for the calling service, granting that service only the access it needs to request its values. Doing so lessens the chance that you will leak configuration values to users with nefarious intentions.
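
To make that concrete, here is a minimal sketch (not code from the repository) of what the App Service effectively does at runtime using the Azure.Identity and Azure.Security.KeyVault.Secrets packages. The vault URI and the secret name StorageAccountConnectionString are placeholders for illustration only.

    using System;
    using Azure.Identity;
    using Azure.Security.KeyVault.Secrets;

    class SecretReadSketch
    {
        static void Main()
        {
            // the Key Vault URI would normally come from configuration (KeyVaultEndpoint);
            // this literal value is a placeholder
            var vaultUri = new Uri("https://kv-example.vault.azure.net/");

            // ManagedIdentityCredential authenticates as the App Service's system assigned
            // identity, so no password or key ever appears in code or configuration
            var client = new SecretClient(vaultUri, new ManagedIdentityCredential());

            // "StorageAccountConnectionString" is a hypothetical secret name
            KeyVaultSecret secret = client.GetSecret("StorageAccountConnectionString").Value;
            Console.WriteLine($"Retrieved secret '{secret.Name}' (value intentionally not printed).");
        }
    }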

Understanding Managed Identity

Most services in Azure can make use of managed identities in one of two flavors:

  • System Assigned Identity – this is an identity that will be managed by Azure. It can only be assigned to a SINGLE resource.
  • User Assigned Identity – this is an identity created and managed by a user. It can be assigned to as many resources as needed. It is ideal for situations involving things like scale sets, where new resources are created and need to use the same identity (see the sketch after this list for how code targets each flavor).
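
Here is a minimal sketch (an illustration, not code from the repository) of how application code targets each flavor with the Azure.Identity library; the client id GUID is a placeholder.

    using Azure.Identity;

    class IdentitySelectionSketch
    {
        // system assigned: the credential resolves the single identity attached to the resource
        static readonly ManagedIdentityCredential SystemAssigned = new ManagedIdentityCredential();

        // user assigned: pass the client id of the identity you created (placeholder GUID)
        static readonly ManagedIdentityCredential UserAssigned =
            new ManagedIdentityCredential("00000000-0000-0000-0000-000000000000");
    }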

For this example we will use a System Assigned Identity, since an Azure App Service does not span multiple resources within a single region; Azure performs some magic behind the scenes to maintain the same identity for the server farm machines which support the App Service as it scales.

The identity of the service effectively represents a user or, more accurately, a service principal. This service principal has an object_id that we can use in a Key Vault access policy. These policies, separate from RBAC settings, dictate what a specific identity can do against that Key Vault.

Policies are NOT specific to certain secrets, keys, and certificates. If you grant the Get secret permission to an identity, that identity can read ANY secret in the vault. This is not always advisable. To improve your security posture, create multiple key vaults to segment access to secret, key, and certificate values.

We can use Terraform to create the Key Vault and an App Service with an identity, and make the association. This is key because doing so allows us to create these secret values through IaC scripts versus relying on engineers to do it manually.

Create the App Service with a System Identity

Here is the code for the App Service (appservice2.tf); note the identity block:

    # create the app service
    resource "azurerm_app_service" "this" {
      name                = "app-${var.name}ym05"
      resource_group_name = var.rg_name
      location            = var.rg_location
      app_service_plan_id = azurerm_app_service_plan.this.id

      site_config {
        dotnet_framework_version = "v5.0"
      }

      app_settings = {
        "WEBSITE_DNS_SERVER"       = "168.63.129.16"
        "WEBSITE_VNET_ROUTE_ALL"   = "1"
        "WEBSITE_RUN_FROM_PACKAGE" = "1"
        "EventGridEndpoint"        = var.eventgrid_endpoint
        "KeyVaultEndpoint"         = var.keyvault_endpoint
      }

      identity {
        type = "SystemAssigned"
      }
    }

    # outputs
    output "system_id" {
      value = azurerm_app_service.this.identity.0.principal_id
    }

The change from the previous version of this module is that the StorageAccountConnectionString and EventGridAccessKey settings are no longer present. We only provide the endpoints for our Key Vault and Event Grid; the sensitive values are held in Key Vault and accessed using the App Service’s Managed Identity.

Setup the Key Vault

First, I want to show you the creation block for the Key Vault (keyvault.tf):

    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "=2.62.1"
        }
      }
    }

    variable "name" {
      type = string
    }

    variable "rg_name" {
      type = string
    }

    variable "rg_location" {
      type = string
    }

    variable "tenant_id" {
      type = string
    }

    variable "secrets" {
      type = map
    }

    # get current user
    data "azurerm_client_config" "current" {}

    # create the resource
    resource "azurerm_key_vault" "this" {
      name                = "kv-${var.name}"
      resource_group_name = var.rg_name
      location            = var.rg_location
      tenant_id           = var.tenant_id
      sku_name            = "standard"

      # define an access policy for the terraform connection
      access_policy {
        tenant_id          = var.tenant_id
        object_id          = data.azurerm_client_config.current.object_id
        secret_permissions = [ "Get", "Set", "List" ]
      }
    }

    # add the secrets
    resource "azurerm_key_vault_secret" "this" {
      for_each     = var.secrets
      name         = each.key
      value        = each.value
      key_vault_id = azurerm_key_vault.this.id
    }

    # outputs
    output "key_vault_endpoint" {
      value = azurerm_key_vault.this.vault_uri
    }

    output "key_vault_id" {
      value = azurerm_key_vault.this.id
    }

The important thing to point out here is the definition of access_policy in this module. This is not the access being given to our App Service; it is instead a policy that allows the Terraform connection (the current client) to read and write secrets in the Key Vault (the Get, Set, and List permissions shown above).

The outputs here are the Key Vault URI (for use as a configuration setting in the App Service) and the Key Vault id (needed when we create the access policy for the App Service).

Creation of this Key Vault MUST precede the creation of the App Service, BUT we cannot create the App Service’s access policy until the App Service itself exists, because we need the identity’s object_id (see above).

Here is the access policy that gets created to allow the Managed Identity representing the App Service to Get secrets from the Key Vault:

    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "=2.62.1"
        }
      }
    }

    variable "key_vault_id" {
      type = string
    }

    variable "tenant_id" {
      type = string
    }

    variable "object_id" {
      type = string
    }

    variable "secret_permissions" {
      type    = list(string)
      default = []
    }

    variable "key_permissions" {
      type    = list(string)
      default = []
    }

    variable "certificate_permissions" {
      type    = list(string)
      default = []
    }

    # create resource
    resource "azurerm_key_vault_access_policy" "this" {
      key_vault_id            = var.key_vault_id
      tenant_id               = var.tenant_id
      object_id               = var.object_id
      key_permissions         = var.key_permissions
      secret_permissions      = var.secret_permissions
      certificate_permissions = var.certificate_permissions
    }

This policy does need to grant the List permission on Key Vault secrets so that the configuration provider can bring all secrets into the configuration context for our App Service. Here is the .NET Core code (Program.cs) that brings the Key Vault secrets into the IConfiguration instance for the web app:

    using System;
    using Azure.Extensions.AspNetCore.Configuration.Secrets;
    using Azure.Security.KeyVault.Secrets;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.Hosting;

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((ctx, config) =>
                {
                    // build the configuration gathered so far to read the Key Vault endpoint
                    var builtConfig = config.Build();
                    var keyVaultEndpoint = builtConfig["KeyVaultEndpoint"];

                    // authenticate to Key Vault with the App Service's managed identity
                    var secretClient = new SecretClient(
                        new Uri(keyVaultEndpoint),
                        new Azure.Identity.ManagedIdentityCredential());

                    config.AddAzureKeyVault(secretClient, new KeyVaultSecretManager());
                })
                .ConfigureWebHostDefaults(builder => builder.UseStartup<Startup>());
    }

The result is that the Web App will use the Managed Identity of the Azure App Service to communicate with our Key Vault at the given endpoint and bring our sensitive values into the web app. This gives us a solid amount of security and diminishes the chances that configuration values leak into places where they can be exposed.
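
Once the provider runs, secrets behave like any other configuration key. As a hedged illustration (the secret name StorageAccountConnectionString is an assumption, not necessarily what the repository uses), a Startup class might consume them like this:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        private readonly IConfiguration _configuration;

        public Startup(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            // "StorageAccountConnectionString" is a hypothetical secret name; because the
            // Key Vault provider ran in Program.cs, it reads exactly like an app setting
            var storageConnectionString = _configuration["StorageAccountConnectionString"];

            // register whatever services need the secret value here
            services.AddControllers();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseRouting();
            app.UseEndpoints(endpoints => endpoints.MapControllers());
        }
    }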

Make it more secure

One issue with the above approach is that it requires some fudging, because local development will NOT have a managed identity. Developers will need to use something else, such as an InteractiveBrowserCredential or a ClientSecretCredential (both available in the Azure.Identity NuGet package). These are fine but, between requiring a person to authenticate with Azure when they run the app and finding a way to ensure sensitive client authorization values do NOT leak into source, it is a bit onerous.
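
One way to handle the split, sketched below on the assumption that the hosting environment distinguishes local runs from Azure, is to pick the credential at startup. DefaultAzureCredential (also in Azure.Identity) is another option, since it tries managed identity, Visual Studio, and Azure CLI credentials in turn.

    using Azure.Core;
    using Azure.Identity;
    using Microsoft.Extensions.Hosting;

    public static class CredentialFactory
    {
        // a sketch: choose a credential based on where the app is running
        public static TokenCredential Create(IHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                // opens a browser prompt on the developer's machine; nothing sensitive is stored
                return new InteractiveBrowserCredential();
            }

            // in Azure, use the App Service's system assigned identity
            return new ManagedIdentityCredential();
        }
    }

In the Program.cs above, you would pass CredentialFactory.Create(ctx.HostingEnvironment) to the SecretClient instead of constructing the ManagedIdentityCredential directly.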

The way to make our approach more secure is to introduce Azure App Configuration, which can integrate with Key Vault in much the same way App Service does. The added benefit is that Azure App Configuration can replace your local configuration in Azure and offers numerous features to aid in the management of these values across environments.
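
For reference, here is a minimal sketch of what that wiring might look like with the Microsoft.Extensions.Configuration.AzureAppConfiguration package; the AppConfigEndpoint value and the helper class are assumptions for illustration.

    using System;
    using Azure.Identity;
    using Microsoft.Extensions.Configuration;

    public static class AppConfigurationSetup
    {
        // a sketch of adding Azure App Configuration (with Key Vault references) to the
        // configuration pipeline; call this from ConfigureAppConfiguration in Program.cs
        public static void Add(IConfigurationBuilder config, string appConfigEndpoint)
        {
            var credential = new ManagedIdentityCredential();

            config.AddAzureAppConfiguration(options =>
            {
                // connect to App Configuration with the managed identity (no connection string)
                options.Connect(new Uri(appConfigEndpoint), credential)
                       // resolve any Key Vault references using the same identity
                       .ConfigureKeyVault(kv => kv.SetCredential(credential));
            });
        }
    }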

Unfortunately, at the time of this writing, Terraform does NOT support managing the keys within Azure App Configuration. Still, while it is not perfectly secure, just using Key Vault is usually an improvement over the sensitive data management techniques I typically see organizations and teams using.


Getting Started with Kafka and .NET Core on Kubernetes

Real time streaming is at the heart of many modern business critical systems. The ability for data to be constantly streamed can give organizations new insights into their data, and the real time nature means trends can be spotted as they emerge, creating possible value streams for organizations.

With it being so popular, there are many options available, from cloud hosted to self hosted. One of the big ones is Apache Kafka, which provides this sort of “pubsub on steroids” and can scale its data ingest and streaming capabilities to fit the needs of almost any organization. In this post, I want to walk through a basic setup of Apache Kafka using Bitnami’s Helm chart.

Prepare your cluster

Truth be told, I did this using an Azure Kubernetes Service (AKS) cluster that I spin up on occasion, backed by three large VMs. I have found that with things like Kubernetes for Docker, minikube, and others you run into resource limitations that make it hard to deploy. For that reason, either give your local cluster an immense amount of resources or use a cloud managed one. I recommend Azure simply because, by default, the AKS cluster is backed by a scale set that you can deallocate as needed, which saves immensely on cost.

Bitnami Helm Chart

I love Helm because it enables quick deployment of supporting software that would otherwise take a lot of reading, learning, and tweaking. Instead, you can use Helm to deploy this chart: https://github.com/bitnami/charts/tree/master/bitnami/kafka

The instructions above are actually written to target Helm 2; the latest is Helm 3. For the most part it is similar enough, though I love the removal of Tiller in Helm 3. The syntax of the command is a little different; here is what I used:

helm install kafka-release --namespace kube-kafka bitnami/kafka

This creates a release (an instance of a deployment, in Helm terminology) called kafka-release, places the deployed components in a Kubernetes namespace called kube-kafka, and deploys resources based on the bitnami/kafka Helm chart. If you look at the above link, there are many ways to override how things are deployed via this chart.

After running the command, Helm will start deploying resources, which then have to spin up. It will also lay out some instructions for how you can play with the Kafka cluster on your own using a temporary pod. But before you can do this, the deployment must enter the Ready state. It is best to run the following command to know when things are ready:

kubectl get pods -n kube-kafka --watch

This will watch the Pods (which are the things that have to spin up due to container creation) and when both are ready you can safely assume Kafka is ready.

Kafka oversimplified

So, I am still learning the internal architecture of Kafka. I know that it is basically a message broker system that enables multicast eventing to various topics, each of which can have multiple subscribers. But I honestly need to do more to learn its internals; suffice to say, for this exercise, you send a message to a topic in Kafka and all listeners for that topic will receive the message.

The terms Producer and Consumer will be used throughout the remainder of this post. Producers send data to the cluster nodes, and Consumers receive that data.

Create the Producer

Our producer (producer.cs) will be rudimentary and oversimplified, but it shows the sort of structure these types of applications take. Remember, this is not production level code, so copy and paste at your own risk.


    using System;
    using Confluent.Kafka;

    namespace KafkaProducer
    {
        class Program
        {
            static void Main(string[] args)
            {
                var config = new ProducerConfig { BootstrapServers = "kafka-release.kube-kafka.svc.cluster.local:9092" };
                using (var producer = new ProducerBuilder<Null, string>(config).Build())
                {
                    var increment = 0;
                    while (increment < int.MaxValue)
                    {
                        try
                        {
                            var produceResult = producer.ProduceAsync("increment-topic",
                                new Message<Null, string> { Value = $"The number is {increment}" })
                                .GetAwaiter()
                                .GetResult();
                            Console.WriteLine($"Success = {produceResult.Value} to {produceResult.TopicPartitionOffset}");
                        }
                        catch (Exception ex)
                        {
                            Console.WriteLine($"Error {ex.Message}");
                        }
                        increment++;
                    }
                }
                Console.WriteLine("Hello World!");
            }
        }
    }


So to start you need to add the Confluent.Kafka NuGet package (current version as of this writing is 1.4.0).

Next, create a config object and set BootstrapServers; this is the server your code will contact to set up the connection to the message broker and send messages based on where that broker ends up (not sure how all of that works yet). Suffice to say, when you finish running your Helm install, this is the Service DNS name you are given; it follows the standard convention Kubernetes uses to name services.

For our example we cycle over all of the int values available to .NET (all the way up to int.MaxValue) so we can keep our producer going for a long time, if need be. For each iteration our code simply writes a message to the broker indicating the current iteration number.

We use the ProduceAsync method to send this message to Kafka, wrapped in a try/catch to handle any sending errors. Everything is written out to STDOUT via Console.WriteLine.

One of the key arguments to ProduceAsync is the name of the topic to associate our message with. This is what our consumers will listen to, so a single message sent to this topic can be fanned out to as many consumers as are listening. This is the power of this type of architecture: it allows for event based processing with a high degree of decoupling, letting different parts of our application simply respond to the event rather than being part of a longer chain of functions.
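
In this example the topic gets created implicitly the first time a message is produced, which depends on the broker allowing automatic topic creation. If your cluster has auto.create.topics.enable turned off, here is a hedged sketch of creating the topic up front with Confluent.Kafka's admin client (the partition and replication values are assumptions for a small test cluster):

    using System;
    using System.Threading.Tasks;
    using Confluent.Kafka;
    using Confluent.Kafka.Admin;

    namespace KafkaAdmin
    {
        class Program
        {
            static async Task Main(string[] args)
            {
                var config = new AdminClientConfig { BootstrapServers = "kafka-release.kube-kafka.svc.cluster.local:9092" };
                using (var admin = new AdminClientBuilder(config).Build())
                {
                    // create the topic the producer and consumer will use; one partition and a
                    // replication factor of 1 are placeholder values (throws if it already exists)
                    await admin.CreateTopicsAsync(new[]
                    {
                        new TopicSpecification { Name = "increment-topic", NumPartitions = 1, ReplicationFactor = 1 }
                    });
                    Console.WriteLine("Topic created");
                }
            }
        }
    }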

Build the Consumer


    using System;
    using System.Threading;
    using Confluent.Kafka;

    namespace KafkaConsumer
    {
        class Program
        {
            static void Main(string[] args)
            {
                var config = new ConsumerConfig
                {
                    GroupId = "test-consumer-group",
                    BootstrapServers = "kafka-release.kube-kafka.svc.cluster.local:9092",
                    AutoOffsetReset = AutoOffsetReset.Earliest
                };
                using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
                {
                    consumer.Subscribe("increment-topic");
                    CancellationTokenSource cts = new CancellationTokenSource();
                    Console.CancelKeyPress += (_, e) =>
                    {
                        e.Cancel = true; // prevent the process from terminating.
                        cts.Cancel();
                    };
                    try
                    {
                        while (true)
                        {
                            try
                            {
                                var consumeResult = consumer.Consume(cts.Token);
                                Console.WriteLine($"Message: {consumeResult.Message.Value}");
                            }
                            catch (Exception ex)
                            {
                                Console.WriteLine($"Error: {ex.Message}");
                            }
                            Thread.Sleep(100);
                        }
                    }
                    catch (OperationCanceledException)
                    {
                        consumer.Close();
                    }
                }
            }
        }
    }


As with the Producer, our first step in the Consumer (consumer.cs) is to add the Confluent.Kafka NuGet package so we can use the built-in types to communicate with Kafka.

You can see that the Consumer subscribes to our topic (increment-topic in this case). I was a bit surprised this does not use the built-in .NET eventing model, since I think that would make more sense. Instead, we have to create a busy loop that attempts to consume each time and checks whether it gets anything.

From there we just print the message we received. Notice that our BootstrapServers value is the same as it was in the Producer, as it should be, since we are writing to and reading from the same broker.

As for the GroupId, I do not know what this is yet; I need to read up on it, but I set it all the same.

Testing our Example

Our goal is to run against Kafka hosted in Kubernetes, and thus we will want both our Producer and Consumer there as well. You could use kubectl port-forward to expose the Kafka port locally but I didn’t have much luck with that, as the code would immediately try to call a Kubernetes-generated service name which was not exposed. I might tinker with this some more; my main goal with this exercise is to see this working in Kubernetes.

The best way to do this, or at least the simplest, is to create a vanilla pod with the consumer and producer executing as separate containers. Truthfully, you would never want to use a vanilla PodSpec in production (usually you want a Deployment or ReplicaSet), since doing so would not maintain high availability and resiliency (if the Pod dies, everything stops and it does not get recreated). Nevertheless, for this simple example I will be creating a multi-container pod; we can check the log files to ensure things are working.

Here is the PodSpec (podspec.yaml):


    apiVersion: v1
    kind: Pod
    metadata:
      name: kafka-example
      namespace: kube-kafka
    spec:
      containers:
        - name: producer
          image: clusterkuberegistry.azurecr.io/kafkaproducer:v2
        - name: consumer
          image: clusterkuberegistry.azurecr.io/kafkaconsumer:v1


I will assume that you know how to build and push Docker images to a repository like Docker Hub (or Azure Container Registry in this case).

Next, we apply this spec file to our Kubernetes cluster:

kubectl apply -f podspec.yaml

And we can use the following to wait until our pod is running (we should see 2/2 when ready):

kubectl get pods -n kube-kafka

Next, let’s get a sample of the logs from our Producer to ensure things are connecting and our messages are going out. We can run the following command (using the Pod name from the PodSpec above):

kubectl logs kafka-example -n kube-kafka -c producer

If things are working you should see a lot of messages indicating a successful send. If you see errors, double check that you typed everything correctly and, in particular, that your BootstrapServers value is correct (you can use kubectl to query the services in the deployed namespace to ensure you have the right name).

Assuming that is working, we can perform a similar command to see the logs for the Consumer:

kubectl logs kafka-example -n kube-kafka -c consumer

Once again, if all things are working you should see messages being read and displayed in the console.

Congratulations!! You have just completed your first end to end Kafka example (maybe it’s not your first, but congrats all the same).

Why is this important?

I talked about real time applications at the onset. These are generally built around event streaming, whereby as data enters the system it causes events to be generated (often the events are the data) which can be sent all over the system. An event can create records in a database, update metric counters, flip switches, and many other things. This is actually the basis of IoT applications: streaming a tremendous amount of telemetry data and using systems like Kafka to analyze and process that data in real time.

I love these types of applications because they enable so many unique business scenarios and really cool real time charts. In my next post, I hope to take my Kafka knowledge further and do something more useful.

