Custom variable validation, a practical example

Custom variable validation is my new go-to killer feature. Introduced as a language experiment in late 0.12, it is production-ready as of Terraform 0.13! This enables us to write a definition of what we want our input variables to be, and send out a proper warning when they don't match.

At first, you might ask: why bother? If the user inputs something that can't be deployed, wouldn't Terraform fail anyway? Sure, but for that failure to happen we actually have to run the code and wait for the provider to return the error. This takes time, or even worse, it might actually try to deploy and time out, which takes even more time.

Creating an Azure Storage Account

One example that comes to mind is deploying Azure storage accounts. When deploying a storage account, there are some rules for what you can name it. Its name must be unique, be between 3 and 24 characters in length, and may only contain lowercase letters and numbers. The first one Azure will have to check for us, but the others are pretty static.

Here is my example, which can also be found in my Azure examples Git repository.

variable "storage_account_name" {
  type = string

  validation {
    condition = (
      length(var.storage_account_name) > 2 &&
      length(var.storage_account_name) < 25 &&
      can(regex("^[a-z0-9]+$", var.storage_account_name))
    )
    error_message = "Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only."
  }
}

Note, we are using the && (logical AND) operator to chain our conditions. Because && short-circuits, the next condition is only evaluated if the previous one was true. In plain English, you would read this as:

If the length of the string is greater than 2, and the length of the string is less than 25, and the string only has lowercase letters and numbers, return true.

If any one of the tests fails, return the error message you have defined.

In my tests, the error is returned in around 0.7 seconds. This is compared to 5, 6, and even 7 seconds when trying to deploy and getting the error back from Azure.
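As a side note, the same rules can be collapsed into a single anchored regex that enforces both the character set and the length. This is a sketch of an alternative, not the exact expression from the snippet above; the variable name is just for illustration:

```hcl
variable "storage_account_name_alt" {
  type = string

  # One anchored regex: 3 to 24 characters, lowercase letters and numbers only.
  # The ^ and $ anchors force the whole string to match, not just part of it.
  validation {
    condition     = can(regex("^[a-z0-9]{3,24}$", var.storage_account_name_alt))
    error_message = "Storage account names must be 3 to 24 characters of lowercase letters and numbers only."
  }
}
```

Whether you prefer one regex or a chain of && conditions is mostly taste; the chained version lets you read each rule on its own line.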


List all VNet and Subnets across multiple subscriptions

It has happened to everyone: the network sprawl. You might have on-premises networks and virtual networks, maybe even in multiple clouds, and at one point you have simply lost count of your ranges and what they are used for. Usually, these ranges come from someone who is responsible for IP ranges (preferably an IPAM solution), but what if you have a lot of teams creating VNets in a bunch of subscriptions? Well, it can get out of hand quickly.

The script

If you are interested in learning how this script works, we’ll continue the blog post after the code. For those who just want to run the script, here you go:

Get-AzSubscription | Foreach-Object {
    $sub = Set-AzContext -SubscriptionId $_.SubscriptionId
    $vnets = Get-AzVirtualNetwork

    foreach ($vnet in $vnets) {
        [PSCustomObject]@{
            Subscription = $sub.Subscription.Name
            Name         = $vnet.Name
            Vnet         = $vnet.AddressSpace.AddressPrefixes -join ', '
            Subnets      = $vnet.Subnets.AddressPrefix -join ', '
        }
    }
} | Export-Csv -Delimiter ";" -Path "AzureVnet.csv"

This will export the results to CSV; if you don't want that, you can remove the last pipe and the Export-Csv cmdlet.

Note that you need to have the Az-module installed. You also have to be connected to Azure with an account that can at least read all the subscriptions and network resources.

How the script works

We start off by getting all the available subscriptions and running them one by one through a foreach loop. For every subscription, we set the active context to that subscription and populate the variable $vnets with all the virtual networks in that subscription.

We then run through another foreach loop, where we create one new PSCustomObject per VNet in our $vnets variable. This is how we will represent our information, and the first couple of values make sense: we set Subscription to the name of our current subscription, and Name to the name of the VNet.

For our VNet address space and subnets, we could just point to the value from $vnet and be done with it. This works perfectly if you just want the results in the terminal. What I want is to export this as a CSV so I can share it with whoever needs the list. If you try to export a property that holds more than one value, you will not get an IP range but the text System.Collections.Generic.List.

To get around this, we refer to the value we want and use the -join operator to join all the values together, separated by a comma. I also added a space after the comma to make it more readable. Both the VNet address space and the subnets can contain multiple values, so I had to use -join for both of them.


Extract Zip files with PowerShell

Note: I got a lot of feedback about how it is possible to use Expand-Archive for this. While this is true, I wanted to have a solution that didn't rely on any prerequisite except for what comes with .NET. For this tool, I tried to create something that would run flawlessly on any system, even one stripped of core functionality like some of the system modules and libraries. This might not have come across when I originally wrote this, but hopefully the rest of the post will make sense now.

For my module tftools I needed to download Terraform from Hashicorp, which came in a Zip archive. I didn't want to rely on other tools or modules to extract the Zip files, and luckily there is a .NET class called ZipFile in the System.IO.Compression.FileSystem assembly that could be utilized.

Here’s how we can download a Zip file as a temporary file and extract the content.

# Define a temporary file, 
# the URI for the file you want to download,
# and the folder you want to extract to
$tempFile = [System.IO.Path]::GetTempFileName()
$URI      = ""
$OutputFolder = "C:\folder"

# Download the file by using splatting*
$downloadSplat = @{
    Uri             = $URI
    OutFile         = $tempFile
    UseBasicParsing = $true
}
Invoke-WebRequest @downloadSplat

# Load the assembly
Add-Type -AssemblyName System.IO.Compression.FileSystem

# Extract the content
[System.IO.Compression.ZipFile]::ExtractToDirectory($tempFile, $OutputFolder)

# And clean up by deleting the temporary file
Remove-Item -Path $tempFile

*If you haven’t heard of splatting, here is my blogpost about it: PowerShell tricks: Splatting


Version 0.3.5 release of tftools, now available for MacOS

Cross-platform functionality achieved!

As someone who uses PowerShell on two of the three major operating systems, Linux and Windows, having my modules work on all systems is very important to me. Doing this is usually tough, but working with Terraform, which is itself cross-platform, made it relatively easy.

By utilizing a helper function to determine the OS and set that platform's specific settings, and by using Azure DevOps pipelines to run Pester tests on the code, we now have a toolset that works on Linux, Windows, and macOS.

You can install 0.3.5 from the PowerShell Gallery by running

# Install
Install-Module -Name tftools -RequiredVersion 0.3.5
# Update
Update-Module -Name tftools -RequiredVersion 0.3.5

If you face any issues, please open an issue on GitHub. Any feedback is appreciated.


Create more flexible modules with Terraform and for_each loops

Note: When I first was looking into the new for_each loops, I hadn’t used the one inside of a module. So I thought that this was the new feature in Terraform 0.13, but it’s not. The new feature is being able to use for_each on a module block in the root module, not inside the child module like described here.

If you followed a link suggesting that this was completely new, it isn't. But it's still a good example of how to use for_each effectively. I will be posting a lot of examples of the new features in the near future, so stay tuned for that. The rest of the post has been edited to show off just the example of deploying an Azure Kubernetes Service cluster with zero, one, or one hundred additional node pools, depending on how many you define.

In AKS, you have one default node pool with the possibility to add additional pools. Traditionally, you would have to define the number of additional node pools statically. I just finished writing the basis for a new module at work using for_each to dynamically deploy as many node pools as needed. If you pair this up with some validation rules, the user experience of the module is immediately better. I will probably write a bunch about validation rules later, so I'll concentrate on getting the for_each point across.
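To give an idea of what pairing this with a validation rule could look like, here is a minimal sketch for the module's default node pool variable; the node count bounds are my own illustration, not AKS limits:

```hcl
variable "default_node_pool" {
  type = object({
    name       = string
    vm_size    = string
    node_count = number
  })

  # Fail fast at plan time instead of waiting for the provider to reject it.
  validation {
    condition     = var.default_node_pool.node_count >= 1 && var.default_node_pool.node_count <= 100
    error_message = "The default node pool must have between 1 and 100 nodes."
  }
}
```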

Here is some of the code that I wrote today:

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = format("k8s-%s-%s", var.name_prefix, data.azurerm_resource_group.cluster.location)
  location            = data.azurerm_resource_group.cluster.location
  resource_group_name = data.azurerm_resource_group.cluster.name
  dns_prefix          = var.name_prefix

  default_node_pool {
    name       = var.default_node_pool.name
    vm_size    = var.default_node_pool.vm_size
    node_count = var.default_node_pool.node_count
  }

  service_principal {
    client_id     = azuread_service_principal.cluster.application_id
    client_secret = random_password.cluster.result
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "additional_cluster" {
  for_each = { for np in local.additional_node_pools : np.name => np }

  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  name                  = each.key
  vm_size               = each.value.vm_size
  node_count            = each.value.node_count

  tags = each.value.tags
}

The default node pool uses normal input variables, we've got some data source magic referring to a resource group, and we've created a service principal for the cluster to use. However, in the azurerm_kubernetes_cluster_node_pool resource, we have a for_each referring to a local value that we'll look at in a second.

I've tried to find a way to explain the for_each loop here, but I have limited information to go on, and since I'm only a hobby programmer I might be wrong in my interpretation... But still, the way I look at it is this:

for 'each element' in 'local source' 
: (we transform it to a new collection where) 'key' (is used to group) 'collection entries'

Tough train of thought to follow, but if you look at the local value (next code paragraph) you'll see that we have entries in that collection (np) which we can sort by the name key, which is probably the only one that will stay unique, and uniqueness is what you need to create the groups we can loop through. This is why we can refer to name as each.key, because it becomes the root key, if you want to call it that. Writing each.value.name instead would give the exact same result, so if you like to do that to make it easier to read, go right ahead.
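To make the transformation concrete, here is roughly what the for expression produces for the two pools defined later in this post (sketched by hand, not actual Terraform output):

```hcl
# { for np in local.additional_node_pools : np.name => np }
# evaluates to a map shaped like this:
#
# {
#   "pool2" = { name = "pool2", vm_size = "Standard_F2s_v2", node_count = 1, tags = { ... } }
#   "pool3" = { name = "pool3", vm_size = "Standard_F2s_v2", node_count = 3, tags = { ... } }
# }
#
# for_each then creates one resource instance per key, addressable as
# azurerm_kubernetes_cluster_node_pool.additional_cluster["pool2"], and so on.
```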

locals {
  additional_node_pools = flatten([
    for np in var.additional_node_pools : {
      name       = np.name
      vm_size    = np.vm_size
      node_count = np.node_count
      tags       = np.tags
    }
  ])
}

In our local source we have another for expression that goes through a list of values submitted through the input variable additional_node_pools. We don't have to transform this into a map here, because we use flatten to make sure the entries are handled one by one.
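For reference, flatten simply collapses nested lists into a single flat list; a quick sketch of its behavior:

```hcl
# flatten([["a", "b"], ["c"], []])  =>  ["a", "b", "c"]
#
# In our case the for expression already yields a flat list of objects,
# so flatten mostly acts as a safety net if a nested list sneaks in.
```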

variable "additional_node_pools" {
  type = list(object({
    name       = string
    vm_size    = string
    node_count = number
    tags       = map(string)
  }))
}

Our input variable looks like this: a list of objects. This is what we'll refer to when calling our module from a Terraform root module. Let's look at how we use the module:

module "aks" {
  source = "../terraform-azurerm-kubernetes-cluster"

  name_prefix    = "example"
  resource_group = "demo-services-rg"

  default_node_pool = {
    name       = "default"
    vm_size    = "Standard_F2s_v2"
    node_count = 2
  }

  additional_node_pools = [
    {
      name       = "pool2"
      vm_size    = "Standard_F2s_v2"
      node_count = 1
      tags = {
        source = "terraform"
      }
    },
    {
      name       = "pool3"
      vm_size    = "Standard_F2s_v2"
      node_count = 3
      tags = {
        source = "terraform"
        use    = "application"
      }
    }
  ]
}

When referring to our module, we supply some of the other input variables, like how our default node pool should look, but for additional_node_pools we actually send a list of two pools. When run, Terraform will go through the list, flatten it, and then add one node pool resource per entry in our list.

This is all pretty neat, and if you don't need an extra node pool, you just pass an empty list and your module won't create the node pool resource at all:

additional_node_pools = []
