Extract Zip files with PowerShell

Note: I got a lot of feedback about how it is possible to use Expand-Archive for this. While this is true, I wanted a solution that didn’t rely on any prerequisite except what comes with .NET. For this tool, I tried to create something that would run flawlessly on any system, even ones where core functionality like some of the system modules and libraries has been removed. This might not have come across when I originally wrote this, but hopefully the rest of the post will make sense now.

For my module tftools I needed to download Terraform from HashiCorp, which comes in a Zip archive. I didn’t want to rely on other tools or modules to extract the Zip files, and luckily there is a .NET class called ZipFile in the System.IO.Compression.FileSystem assembly that can be utilized.

Here’s how we can download a Zip file as a temporary file and extract the content.

# Define a temporary file, 
# the URI for the file you want to download,
# and the folder you want to extract to
$tempFile = [System.IO.Path]::GetTempFileName()
$URI      = "https://example.org/file.zip"
$OutputFolder = "C:\folder"

# Download the file by using splatting*
$downloadSplat = @{
    Uri             = $URI
    OutFile         = $tempFile
    UseBasicParsing = $true
}
Invoke-WebRequest @downloadSplat

# Load the assembly
Add-Type -AssemblyName System.IO.Compression.FileSystem

# Extract the content
[System.IO.Compression.ZipFile]::ExtractToDirectory($tempFile, $OutputFolder)

# And clean up by deleting the temporary file
Remove-Item -Path $tempFile

*If you haven’t heard of splatting, here is my blogpost about it: PowerShell tricks: Splatting


Version 0.3.5 release of tftools, now available for MacOS


Cross platform functionality achieved!

As someone who uses PowerShell on two of the three major operating systems, Linux and Windows, having my modules work on all of them is very important to me. Achieving this is usually tough, but since Terraform itself is cross-platform, it was relatively easy in this case.

By utilizing a helper function to determine the OS and apply that platform’s specific settings, and by using Azure DevOps pipelines to run Pester tests on the code, we now have a toolset that works on Linux, Windows and Mac.

You can install 0.3.5 from the PowerShell Gallery by running

# Install
Install-Module -Name tftools -RequiredVersion 0.3.5
# Update
Update-Module -Name tftools -RequiredVersion 0.3.5

If you face any issues, please open an issue on GitHub. Any feedback is appreciated.


Create more flexible modules with Terraform and for_each loops


Note: When I first looked into the new for_each loops, I had only used them inside of a module. I thought that this was the new feature in Terraform 0.13, but it’s not. The new feature is being able to use for_each on a module block in the root module, not inside the child module as described here.

If you followed a link suggesting that this was completely new, it isn’t. But it’s still a good example of how to use for_each effectively. I will be posting a lot of examples of the new features in the near future, so stay tuned for that. The rest of the post has been edited to show off just the example of deploying an Azure Kubernetes Service cluster with zero, one or one hundred additional node pools, depending on how many you define.

In AKS, you have one default node pool with the possibility to add additional pools. Traditionally, you would have to define the number of additional node pools statically. I just finished writing the basis for a new module at work using for_each to dynamically deploy as many node pools as needed. If you pair this up with some validation rules, the user experience of the module is immediately better. I will probably write a bunch about validation rules later, so I’ll concentrate on getting the for_each point across.
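Validation rules deserve their own post, but as a quick sketch of what I mean by pairing the two features: Terraform 0.13 lets you attach a validation block to an input variable. The variable shape and the bounds below are made up for illustration, not taken from the actual module.

```hcl
variable "default_node_pool" {
  type = object({
    name       = string
    vm_size    = string
    node_count = number
  })

  # Hypothetical rule: reject node counts outside a sane range
  # before Terraform even plans the deployment
  validation {
    condition     = var.default_node_pool.node_count >= 1 && var.default_node_pool.node_count <= 100
    error_message = "node_count must be between 1 and 100."
  }
}
```

With a rule like this, a user of the module gets a readable error at plan time instead of a failed deployment.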

Here is some of the code that I wrote today:

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = format("k8s-%s-%s", var.name_prefix, data.azurerm_resource_group.cluster.location)
  location            = data.azurerm_resource_group.cluster.location
  resource_group_name = data.azurerm_resource_group.cluster.name
  dns_prefix          = var.name_prefix

  default_node_pool {
    name       = var.default_node_pool.name
    vm_size    = var.default_node_pool.vm_size
    node_count = var.default_node_pool.node_count
  }

  service_principal {
    client_id     = azuread_service_principal.cluster.application_id
    client_secret = random_password.cluster.result
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "additional_cluster" {
  for_each     = { for np in local.additional_node_pools : np.name => np }

  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  name                  = each.key
  vm_size               = each.value.vm_size
  node_count            = each.value.node_count

  tags = each.value.tags
}

The default node pool uses normal input variables, we’ve got some data source magic referring to a resource group, and we’ve created a service principal for the cluster to use. However, in the azurerm_kubernetes_cluster_node_pool resource, we have a for_each referring to a local value that we’ll look at in a second.

I’ve tried to find a way to explain the for_each loop here, but I have limited information to go on, and since I’m only a hobby programmer I might be wrong in my interpretation. But still, the way I look at it is this:

for 'each element' in 'local source' 
: (we transform it to a new collection where) 'key' (is used to group) 'collection entries'

Tough train of thought to follow, but if you look at the local value (next code paragraph) you’ll see that we have entries in that collection (np) which we can group by the name key. The name is probably the only value that will stay unique, which is what you need to create the groups we can iterate through. This is why we can refer to the name as each.key, because it is the key of the resulting map, if you want to call it that. Writing each.value.name would give the exact same result, so if you find that easier to read you can go right ahead.

locals {
  additional_node_pools = flatten([
    for np in var.additional_node_pools : {
      name       = np.name
      vm_size    = np.vm_size
      node_count = np.node_count
      tags       = np.tags
    }
  ])
}

In our local value we have another for expression that goes through a list of values submitted through the input variable additional_node_pools. We don’t have to transform this to a map here, because flatten makes sure the entries are handled one by one.
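To make the two-step transformation concrete, here is a minimal sketch (with made-up pool names) of the kind of map the for expression in the resource builds from the flattened list:

```hcl
locals {
  # Hypothetical flattened input, the shape local.additional_node_pools produces
  example_pools = [
    { name = "pool2", vm_size = "Standard_F2s_v2", node_count = 1 },
    { name = "pool3", vm_size = "Standard_F2s_v2", node_count = 3 },
  ]

  # Same shape of expression as the for_each argument:
  # a map keyed by the unique pool name
  example_map = { for np in local.example_pools : np.name => np }

  # example_map is now roughly:
  # {
  #   pool2 = { name = "pool2", vm_size = "Standard_F2s_v2", node_count = 1 }
  #   pool3 = { name = "pool3", vm_size = "Standard_F2s_v2", node_count = 3 }
  # }
}
```

Inside the resource, each.key is then the pool name and each.value is the whole object.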

variable "additional_node_pools" {
  type = list(object({
    name       = string
    vm_size    = string
    node_count = number
    tags       = map(string)
  }))
}

Our input variable is a list of objects. This is what we’ll refer to when calling our module from a Terraform root module. Let’s look at how we use the module:

module "aks" {
  source = "../terraform-azurerm-kubernetes-cluster"

  name_prefix    = "example"
  resource_group = "demo-services-rg"

  default_node_pool = {
    name       = "default"
    vm_size    = "Standard_F2s_v2"
    node_count = 2
  }

  additional_node_pools = [
    {
      name       = "pool2"
      vm_size    = "Standard_F2s_v2"
      node_count = 1
      tags = {
        source = "terraform"
      }
    },
    {
      name       = "pool3"
      vm_size    = "Standard_F2s_v2"
      node_count = 3
      tags = {
        source = "terraform"
        use    = "application"
      }
    },
  ]
}

Referring to our module, we supply some of the other input variables, like how our default node pool should look, but for additional_node_pools we actually send a list of two pools. When running Terraform, it will go through the list, flatten it, and then add one node pool resource per entry in our list.

This is all pretty neat, and if you don’t need an extra node pool you just pass an empty list and the module won’t create the node pool resource at all.

additional_node_pools = []


tftools - PowerShell module for Terraform version handling

One of the great things about PowerShell is that it’s pretty easy to create your own tools. Due to the nature of Terraform, there are times when you need a specific version of Terraform. For instance, a client that I work with these days has some old code written in 0.11 while also creating new code that uses 0.12 syntax. This can easily happen if you have a big code base, as it’s almost impossible at times to update the entire thing.

While there are other solutions for version handling, these were either platform specific or did not have all the functionality that I wanted. So, I wanted to make a PowerShell module to completely handle Terraform versioning on every platform. Not everyone uses the new open source PowerShell either (shame…) so it needs to work on Windows PowerShell.

I’m happy to announce that for Windows and Linux, I have a functional version. I’ve got a Mac laying around that will be the guinea pig for getting Mac support as well.

edit: As of version 0.3.5, the module now also supports MacOS. Read more about 0.3.5 here.

The module can do the following:

  • Install the version of Terraform that you want, or the latest version
  • Change between versions of Terraform
  • List all versions of Terraform that you have installed in your “library”
  • Delete versions of Terraform

I named it tftools, so that I can expand the feature set if I ever felt like it. For now, having a way to switch between versions of Terraform really helps my workflow and I’m sure this will help others as well.


If you already have Terraform installed by any other means, you will want to remove that before installing tftools.

Installation is pretty simple. The module is published on the PowerShell Gallery, so all you need to do is the following.

Install-Module -Name tftools

Updating the module:

Update-Module -Name tftools

Staying up to date

To keep up to date with this module, you can star the repository on GitHub or follow me on Twitter.


Working with helm charts in Terraform


Doing daily tasks in Kubernetes with Terraform might not be ideal, but when deploying a new cluster you would at least want to have some of your standard applications running right from the start. Using Helm charts to install these is pretty nifty and saves you a lot of time.

I just recently had my first go at setting up Helm charts with Terraform, and it didn’t all go according to plan. I had some issues with setting up the provider, and later with deploying the charts themselves. The latter turned out to be because uninstalling applications through Helm wouldn’t remove everything, so the installation just timed out. That’s a story for another day, though.

The reason I wanted to write down a walkthrough of setting up Helm with Terraform is both so that anyone else can benefit from it, and as an exercise to help me remember how I managed to get it working.

I assume that you already know what Helm is, and that you know how to set up Kubernetes and Terraform. Be aware that I write this in 0.12 syntax, and you will get errors running some of this with Terraform 0.11 and earlier.

Set up the helm provider

First, as always, we have to set up the provider. The documentation gives us two examples of how to authenticate to our cluster: through the normal kubeconfig or by statically defining our credentials. Using the kubeconfig probably works fine, but we wanted to set up the cluster and install Helm charts in the same process. We also wanted this to be able to run through a CI/CD pipeline, so referring to any kind of config file was not going to cut it.

The documentation example looks like this:

provider "helm" {
  kubernetes {
    host     = ""
    username = "ClusterMaster"
    password = "MindTheGap"

    client_certificate     = file("~/.kube/client-cert.pem")
    client_key             = file("~/.kube/client-key.pem")
    cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
  }
}

This looks fine, but we don’t have all of this information or these files until the cluster is created. Since this will be running in the same workflow as the one creating the cluster, we need to refer to the resource element. Also, username and password were optional, so we tried without them first and had no issues there.

provider "helm" {
  version = "~> 0.10.4"
  kubernetes {
    host                   = azurerm_kubernetes_cluster.k8s.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)
  }
}

The code above is from my Terraform and Kubernetes example that I use for my talk on Terraform. Feel free to look at the entire code at Github.

I’ve been working with Azure Kubernetes Service (AKS), so in my case we have created an AKS cluster with the local name of k8s that we can extract the host, client certificate, client key and cluster CA certificate from.

We are now ready to deploy helm charts by using the helm_release resource!

Taking the helm

Oh, the jokes. Pretty naughtical (nautical, get it?) …

Dad jokes aside, it’s time to install something through helm. We do this by using the helm_release resource, which can look a bit like this:

resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"
}

The chart is the official Stable chart from the fine people over at Helm, but anything that is supported through the helm CLI will work here as well.

Most likely, you will want to send some configuration along with your Helm chart. There are two ways of doing this: by defining a values file or by using a set value block. There aren’t any real benefits to one over the other, but I guess that if you only have one setting to pass along, creating an entire values file for it would be unnecessary.

Using our above example, here is how to structure the values file and/or using the set value block.

resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"

  # Values file
  values = [
    file("values.yaml")
  ]

  # Set value block
  set {
    name  = "global.rbac.create"
    value = "false"
  }
}

Other settings worth noting

wait - (Optional) Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as timeout. Defaults to true.

timeout - (Optional) Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to 300 seconds.

recreate_pods - (Optional) Perform pods restart during upgrade/rollback. Defaults to false.

atomic - (Optional) If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used. Defaults to false.
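Pulling those settings together, here is a sketch of how they could be combined on the release above. The values are purely illustrative, not recommendations:

```hcl
resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"

  # Illustrative settings:
  atomic        = true  # purge the chart if the install fails; implies wait
  timeout       = 600   # give slow charts ten minutes instead of the default 300
  recreate_pods = true  # restart pods during upgrade/rollback
}
```

For a chart like prometheus-operator, which can be slow to come up, bumping the timeout while keeping atomic on is a reasonable combination: either everything ends up ready, or the release is rolled back cleanly.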
