Note: When I was first looking into the new for_each loops, I hadn't used one inside a module. So I thought that this was the new feature in Terraform 0.13, but it's not. The new feature is being able to use for_each on a module block in the root module, not inside the child module as described here.

If you followed a link suggesting that this was completely new, it isn't. But it's still a good example of how to use for_each effectively. I will be posting a lot of examples of the new features in the near future, so stay tuned for that. The rest of the post has been edited to show just the example of deploying an Azure Kubernetes Service cluster with zero, one or one hundred additional node pools, depending on how many you define.

In AKS, you have one default node pool with the possibility to add additional pools. Traditionally, you would have to define the number of additional node pools statically. I just finished writing the basis for a new module at work using for_each to dynamically deploy as many node pools as needed. If you pair this up with some validation rules, the user experience of the module is immediately better. I will probably write a bunch about validation rules later, so I'll concentrate on getting the for_each point across.

Here is some of the code that I wrote today:

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = format("k8s-%s-%s", var.name_prefix, data.azurerm_resource_group.cluster.location)
  location            = data.azurerm_resource_group.cluster.location
  resource_group_name = data.azurerm_resource_group.cluster.name
  dns_prefix          = var.name_prefix

  default_node_pool {
    name       = var.default_node_pool.name
    vm_size    = var.default_node_pool.vm_size
    node_count = var.default_node_pool.node_count
  }

  service_principal {
    client_id     = azuread_service_principal.cluster.application_id
    client_secret = random_password.cluster.result
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "additional_cluster" {
  for_each = { for np in local.additional_node_pools : np.name => np }

  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  name                  = each.key
  vm_size               = each.value.vm_size
  node_count            = each.value.node_count

  tags = each.value.tags
}

The default node pool uses normal input variables, we have some data source magic referring to a resource group, and we've created a service principal for the cluster to use. However, in the azurerm_kubernetes_cluster_node_pool resource, we have a for_each referring to a local value that we'll look at in a second.

I've tried to find a way to explain the for_each loop here, but I have limited information to go on, and since I'm only a hobby programmer I might be wrong in my interpretation. But still, the way I look at it is this:

for 'each element' in 'local source' : (we transform it to a new collection where) 'key' (is used to group) 'collection entries'

Tough train of thought to follow, but if you look at the local value (next code paragraph) you'll see that we have entries in that collection (np) which we can sort by the name key, which is probably the only one that will stay unique, and uniqueness is what you need to create groups that we can go through. This is why we can refer to name as each.key, because this would be a root key, if you want to call it that. Writing each.value.name would give the exact same result, so if you like to do that to make it easier to read, go right ahead.
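To make the transformation concrete, here is a small sketch (the pool names are just placeholders) of what that for expression produces: a map keyed by pool name, which is exactly the shape for_each on a resource expects:

```hcl
# Given a local list like this...
# [
#   { name = "pool2", vm_size = "Standard_F2s_v2", node_count = 1 },
#   { name = "pool3", vm_size = "Standard_F2s_v2", node_count = 3 },
# ]
#
# ...the expression { for np in local.additional_node_pools : np.name => np }
# evaluates to a map keyed by the unique name:
# {
#   pool2 = { name = "pool2", vm_size = "Standard_F2s_v2", node_count = 1 }
#   pool3 = { name = "pool3", vm_size = "Standard_F2s_v2", node_count = 3 }
# }
```

Each map key then becomes part of the resource address, available inside the resource as each.key, with the whole object available as each.value.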

locals {
  additional_node_pools = flatten([
    for np in var.additional_node_pools : {
      name       = np.name
      vm_size    = np.vm_size
      node_count = np.node_count
      tags       = np.tags
    }
  ])
}

In our local source we have another for expression that goes through a list of values submitted through the input variable additional_node_pools. We don't have to transform this into a collection ourselves, because we use flatten to make sure that the entries are handled one by one.
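As a side note on flatten itself: it takes a list that may contain nested lists and produces a single flat list, which is handy when a for expression would otherwise leave you with lists inside lists. A minimal sketch:

```hcl
locals {
  # flatten removes levels of list nesting:
  flat = flatten([["a", "b"], [], ["c"]])
  # local.flat evaluates to ["a", "b", "c"]
}
```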

variable "additional_node_pools" {
  type = list(object({
    name       = string
    vm_size    = string
    node_count = number
    tags       = map(string)
  }))
}

Our input variable looks like this: a list of objects. This is what we'll refer to when calling our module from a Terraform root module. Let's look at how we use the module:

module "aks" {
  source = "../terraform-azurerm-kubernetes-cluster"

  name_prefix    = "example"
  resource_group = "demo-services-rg"

  default_node_pool = {
    name       = "default"
    vm_size    = "Standard_F2s_v2"
    node_count = 2
  }

  additional_node_pools = [
    {
      name       = "pool2"
      vm_size    = "Standard_F2s_v2"
      node_count = 1
      tags = {
        source = "terraform"
      }
    },
    {
      name       = "pool3"
      vm_size    = "Standard_F2s_v2"
      node_count = 3
      tags = {
        source = "terraform"
        use    = "application"
      }
    },
  ]
}

Referring to our module, we supply some of the other input variables, such as how our default node pool should look, but for our additional_node_pools we actually send a list of two pools. When running Terraform, it would then go through the list and flatten it, then add one node pool resource per entry in our list.
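One nice side effect of keying the map by name (rather than using count with list indexes) is that each node pool gets a stable resource address, so removing one pool from the list doesn't shuffle the others. Assuming the example list above, the plan would contain addresses like:

```hcl
# azurerm_kubernetes_cluster_node_pool.additional_cluster["pool2"]
# azurerm_kubernetes_cluster_node_pool.additional_cluster["pool3"]
```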

This is all pretty neat, and if you don't need an extra node pool you just leave the list empty, and your module won't create the node pool resource at all:

additional_node_pools = []