Version 0.3.5 release of tftools, now available for MacOS

Cross platform functionality achieved!

As someone who uses PowerShell on two of the three major operating systems, Linux and Windows, having my modules work on all systems is very important to me. Achieving that is usually tough, but since Terraform itself is cross platform, it turned out to be relatively easy.

By utilizing a helper function to determine the OS and apply that platform's specific settings, and by using Azure DevOps pipelines to run Pester tests on the code, we now have a toolset that works on Linux, Windows and macOS.
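
To give an idea of what such a helper does, here is a minimal sketch (hypothetical function name and return values, not the actual tftools code) based on PowerShell's built-in $IsWindows, $IsLinux and $IsMacOS automatic variables:

# Hypothetical helper: pick platform-specific settings based on the OS.
# Windows PowerShell 5.1 lacks the $Is* automatic variables, so anything
# older than PowerShell 6 is treated as Windows.
function Get-TfToolsPlatform {
    if ($PSVersionTable.PSVersion.Major -lt 6 -or $IsWindows) {
        return @{ OS = 'Windows'; TerraformBinary = 'terraform.exe' }
    }
    elseif ($IsMacOS) {
        return @{ OS = 'MacOS'; TerraformBinary = 'terraform' }
    }
    else {
        return @{ OS = 'Linux'; TerraformBinary = 'terraform' }
    }
}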

You can install 0.3.5 from the PowerShell Gallery by running

# Install
Install-Module -Name tftools -RequiredVersion 0.3.5
# Update
Update-Module -Name tftools -RequiredVersion 0.3.5

If you face any issues, please open an issue on GitHub. Any feedback is appreciated.

Create more flexible modules with Terraform and for_each loops

Note: When I was first looking into the new for_each loops, I hadn't used them inside of a module. So I thought that this was the new feature in Terraform 0.13, but it's not. The new feature is being able to use for_each on a module block in the root module, not inside the child module as described here.

If you followed a link suggesting that this is completely new, it isn't. But it's still a good example of how to use for_each effectively. I will be posting a lot of examples of the new features in the near future, so stay tuned for that. The rest of the post has been edited to show just the example of deploying an Azure Kubernetes Service cluster with zero, one or one hundred additional node pools, depending on how many you define.

In AKS, you have one default node pool with the possibility to add additional pools. Traditionally, you would have to define the number of additional node pools statically. I just finished writing the basis for a new module at work using for_each to dynamically deploy as many node pools as needed. If you pair this up with some validation rules, the user experience of the module is immediately better. I will probably write a bunch about validation rules later (there is a small teaser at the end of this post), so I'll concentrate on getting the for_each point across.

Here is some of the code that I wrote today:

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = format("k8s-%s-%s", var.name_prefix, data.azurerm_resource_group.cluster.location)
  location            = data.azurerm_resource_group.cluster.location
  resource_group_name = data.azurerm_resource_group.cluster.name
  dns_prefix          = var.name_prefix

  default_node_pool {
    name       = var.default_node_pool.name
    vm_size    = var.default_node_pool.vm_size
    node_count = var.default_node_pool.node_count
  }

  service_principal {
    client_id     = azuread_service_principal.cluster.application_id
    client_secret = random_password.cluster.result
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "additional_cluster" {
  for_each     = { for np in local.additional_node_pools : np.name => np }

  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  name                  = each.key
  vm_size               = each.value.vm_size
  node_count            = each.value.node_count

  tags = each.value.tags
}

The default node pool uses normal input variables, there is some data source magic referring to a resource group, and we've created a service principal for the cluster to use. However, in the azurerm_kubernetes_cluster_node_pool resource, we have a for_each referring to a local value that we'll look at in a second.

I've tried to find a way to explain the for_each loop here, but I have limited information to go on, and since I'm only a hobby programmer I might be wrong in my interpretation… But still, the way I look at it is this:

for 'each element' in 'local source' 
: (we transform it to a new collection where) 'key' (is used to group) 'collection entries'

Tough train of thought to follow, but if you look at the local value (next code paragraph) you'll see that we have entries in that collection (np) which we can group by the name key, which is probably the only attribute that will stay unique, and uniqueness is what you need to create the groups that we can go through. This is why we can refer to name as each.key, because it acts as a root key, if you want to call it that. Writing each.value.name would give the exact same result, so if you prefer that to make it easier to read you can go right ahead.
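
To make that concrete, here is roughly what the for expression produces, using the two additional pools from the usage example further down (a sketch of the resulting value, trimmed for readability):

{ for np in local.additional_node_pools : np.name => np }

# results in a map keyed by pool name:
{
  "pool2" = { name = "pool2", vm_size = "Standard_F2s_v2", node_count = 1, tags = { source = "terraform" } }
  "pool3" = { name = "pool3", vm_size = "Standard_F2s_v2", node_count = 3, tags = { source = "terraform", use = "application" } }
}

Because the map keys are the pool names, for_each creates one azurerm_kubernetes_cluster_node_pool instance per name, addressable as azurerm_kubernetes_cluster_node_pool.additional_cluster["pool2"] and so on.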

locals {
  additional_node_pools = flatten([
    for np in var.additional_node_pools : {
      name         = np.name
      vm_size      = np.vm_size
      node_count   = np.node_count
      tags         = np.tags
    }
  ])
}

In our local value we have another for expression that goes through the list of values submitted through the input variable additional_node_pools. We don't have to transform this to a map here, because we use flatten to make sure that the entries are handled one by one.
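
If flatten is new to you, it simply collapses nested lists into one flat list. A quick illustration from the Terraform console (not part of the module code):

# > flatten([["a"], [], ["b", "c"]])
# [
#   "a",
#   "b",
#   "c",
# ]

In this module the for expression already produces a flat list of objects, so flatten mostly acts as a safety net if the input is ever built from nested lists.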

variable "additional_node_pools" {
  type = list(object({
    name         = string
    vm_size      = string
    node_count   = number
    tags         = map(string)
  }))
}

Our input variable looks like this: a list of objects. This is what we'll refer to when calling our module from a Terraform root module. Let's look at how we use the module:

module "aks" {
  source = "../terraform-azurerm-kubernetes-cluster"

  name_prefix    = "example"
  resource_group = "demo-services-rg"

  default_node_pool = {
    name       = "default"
    vm_size    = "Standard_F2s_v2"
    node_count = 2
  }

  additional_node_pools = [
    {
      name       = "pool2"
      vm_size    = "Standard_F2s_v2"
      node_count = 1
      tags = {
        source = "terraform"
      }
    },
    {
      name       = "pool3"
      vm_size    = "Standard_F2s_v2"
      node_count = 3
      tags = {
        source = "terraform"
        use    = "application"
      }
    }
  ]
}

Referring to our module, we supply some of the other input variables, like how our default node pool should look, but for additional_node_pools we actually send a list of two pools. When running Terraform, it would then go through the list, flatten it, and add one node pool resource per entry in our list.

This is all pretty neat, and if you don't need an extra node pool you just pass an empty list, and your module won't create the node pool resource at all.

additional_node_pools = []
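
As a small teaser for the validation rules mentioned earlier, here is a minimal sketch of how the additional_node_pools variable could guard against a zero node count. This is my own assumption of how it could look (custom validation requires Terraform 0.13), not code from the module above:

variable "additional_node_pools" {
  type = list(object({
    name         = string
    vm_size      = string
    node_count   = number
    tags         = map(string)
  }))

  # Hypothetical rule: every additional pool needs at least one node
  validation {
    condition     = length([for np in var.additional_node_pools : np if np.node_count < 1]) == 0
    error_message = "Each additional node pool must have a node_count of at least 1."
  }
}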

tftools - PowerShell module for Terraform version handling

One of the great things about PowerShell is that it's pretty easy to create your own tools. Due to the nature of Terraform, there are times when you need a specific version of Terraform. For instance, a client that I work with these days has some old code written in 0.11 while also creating new code that uses 0.12 syntax. This can easily happen if you have a big code base, as it's almost impossible at times to update the entire thing.

While there are other solutions for version handling, these were either platform specific or did not have all the functionality that I wanted. So, I wanted to make a PowerShell module to completely handle Terraform versioning on every platform. Not everyone uses the new open source PowerShell either (shame…) so it needs to work on Windows PowerShell.

Happy to announce that for Windows and Linux, I have a functional version. I've got a Mac laying around that will be the guinea pig for getting Mac support as well.

edit: As of version 0.3.5, the module now also supports MacOS. Read more about 0.3.5 here.

The module can do the following:

  • Install the version of Terraform that you want, or the latest version
  • Change between versions of Terraform
  • List all versions of Terraform that you have installed in your “library”
  • Delete versions of Terraform

I named it tftools, so that I can expand the feature set if I ever feel like it. For now, having a way to switch between versions of Terraform really helps my workflow, and I'm sure this will help others as well.

Installation

If you already have Terraform installed by any other means, you'll want to remove that before installing.

Installation is pretty simple. The module is published on the PowerShell Gallery, so all you need to do is the following.

Install-Module -Name tftools

Updating the module:

Update-Module -Name tftools

Staying up to date

To keep up to date with this module, you can star the repository on GitHub or follow me on Twitter.

Working with helm charts in Terraform

Doing daily tasks in Kubernetes with Terraform might not be ideal, but when deploying a new cluster you would at least want to have some of your standard applications running right from the start. Using Helm charts to install these is pretty nifty and saves you a lot of time.

I just recently had my first go at setting up Helm charts with Terraform, and it didn't go all according to plan. I had some issues with setting up the provider, and later with deploying the charts themselves. The latter turned out to be because even when uninstalling applications through Helm, it wouldn't remove everything, so the installation just timed out. That's a story for another day, though.

The reason I wanted to write down a walkthrough of setting up Helm with Terraform is both so that anyone else can benefit from it and as an exercise to help me remember how I managed to get it working.

I assume that you already know what Helm is, and that you know how to set up Kubernetes and Terraform. Be aware that I write this in 0.12 syntax, and you will get errors running some of this with Terraform 0.11 and earlier.

Set up the helm provider

First, as always, we have to set up the provider. The documentation gives us two examples of how to authenticate to our cluster: through the normal kubeconfig or by statically defining our credentials. Using the kubeconfig probably works fine, but we wanted to set up the cluster and install Helm charts in the same process. We also wanted this to be able to run through a CI/CD pipeline, so referring to any type of local config file was not going to cut it.

The documentation example looks like this:

provider "helm" {
  kubernetes {
    host     = "https://104.196.242.174"
    username = "ClusterMaster"
    password = "MindTheGap"

    client_certificate     = file("~/.kube/client-cert.pem")
    client_key             = file("~/.kube/client-key.pem")
    cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
  }
}

This looks fine, but we don't have all of this information, or these files, until the cluster is created. Since this will be running in the same workflow as the one that creates the cluster, we need to refer to the resource attributes instead. Also, username and password are optional, so we tried without them first and had no issues there.

provider "helm" {
  version = "~> 0.10.4"
  kubernetes {
    host                   = azurerm_kubernetes_cluster.k8s.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)
  }
}

The code above is from my Terraform and Kubernetes example that I use for my talk on Terraform. Feel free to look at the entire code on GitHub.

I've been working with Azure Kubernetes Service (AKS), so in my case we have created an AKS cluster with the local name of k8s that we can extract the host, client certificate, client key and cluster CA certificate from.

We are now ready to deploy helm charts by using the helm_release resource!

Taking the helm

Oh, the jokes. Pretty naughtical (nautical, get it?) …

Dad jokes aside, it’s time to install something through helm. We do this by using the helm_release resource, which can look a bit like this:

resource "helm_release" "prometheus" {
	name            = "prometheus"
	chart           = "prometheus-operator"
	repository      = "https://kubernetes-charts.storage.googleapis.com/"
	namespace       = "monitoring"
}

The chart is the official stable chart from the fine people over at Helm, but anything that is supported through the helm CLI will work here as well.

Most likely, you would want to send some configuration along with your Helm chart. There are two ways of doing this: either by defining a values file or by using a set value block. There aren't any real benefits to one over the other, but I guess that if you only have one setting you want to pass along, creating an entire values file for that would be unnecessary.

Using our above example, here is how to structure the values file and/or using the set value block.

resource "helm_release" "prometheus" {
	name            = "prometheus"
	chart           = "prometheus-operator"
	repository      = "https://kubernetes-charts.storage.googleapis.com/"
	namespace       = "monitoring"
	
	# Values file
	values = [
    file("${path.module}/values.yaml")
  ]
	# Set value block
	set {
	  name        = "global.rbac.create"
    value       = "false"
  }
}

Other settings worth noting

  • wait - (Optional) Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as timeout. Defaults to true.
  • timeout - (Optional) Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to 300 seconds.
  • recreate_pods - (Optional) Perform pods restart during upgrade/rollback. Defaults to false.
  • atomic - (Optional) If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used. Defaults to false.
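
Putting a couple of those together, here is a sketch of my own (not from the provider docs) of a release that waits up to ten minutes and cleans up after itself if the installation fails:

resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"

  # Roll back and purge the chart instead of leaving a half-installed release
  atomic  = true

  # Wait for all resources to become ready, for at most 600 seconds
  wait    = true
  timeout = 600
}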

Ops to DevOps, a move from SysAdmin to DevOps Engineer

Even though the term DevOps has been around for quite some time now, not everyone has shifted over from the old style of IT management to the new one. The big platform providers like the AWSes, Azures and Googles of the world have certainly been doing this for a while, but now medium-sized companies are trying to implement many of the new practices as well. In my experience, this is a great way of improving performance when you have a smaller operations team.

When I started in IT, I totally identified with the SysAdmin part of the spectrum: managing servers and hypervisors, making sure services were up and running. For most of the people I know that work in Ops, the Dev part of DevOps seems like a big and scary thing. I can relate to that, but I've always liked the appeal of development. That might be why I've been more willing to risk exploring “the other side”.

DevOps is more than automation, and weird small computers!

Before moving on, I want to address the fact that DevOps is not only a term for technology. It’s a culture movement, a very necessary one at that. I’ve worked with companies that are stuck in the old ways and I’ve worked with companies that embrace DevOps, and it works.

I suggest you start off by reading The DevOps Handbook. It's an excellently written book that takes you through the story of a company as it moves away from traditional development and, step by step, turns into a DevOps organization. A great starting point for anyone interested in learning about DevOps.

Do I need to be able to program to understand DevOps?

Absolutely not, but it helps. To be precise, what helps is being able to understand the logic of code when you see it. Keep in mind, DevOps doesn't mean you have to be a developer who also does IT operations. You can just as easily be in operations but use tools proven by developers to increase agility. DevOps allows for more fluidity between work tasks, but you still want someone with a high level of network experience to take a look at a network problem, even if that network is deployed through Infrastructure-as-Code.

It's normal for sysadmins to create scripts to automate things. There is an almost derogatory term for a SysAdmin that does not write scripts, a “right click administrator”, someone who only navigates the GUI. I am not saying that's bad, but normally the ones that can automate tasks and write scripts are more efficient. It's easier to keep doing right-click administration in traditional IT, but not in the new paradigm.

If you are hesitant to learn code, don’t be. DevOps is about promoting learning, so just ask a friend or colleague if you’re stuck.

Everything is different, I don’t like it!

Well, as harsh as it sounds… Welcome to IT? Not a day goes by without something changing, and by the end of the month there are at least ten new things that you need to know about. You get used to the idea that everything is in constant motion, but sometimes a new thing comes along and alters the course of how we work in a more profound way.

For people that don't want change, I recommend reading up on growth mindset. Having a growth mindset is essential to getting ahead in any career, but in IT especially. One of the keys to a growth mindset is to always learn new things. In my experience, as long as you keep on learning, your knowledge will compound and accumulate fast.

I suggest that you read this article from Forbes on growth mindset, consider changing your outlook on things, and you will soon see a change in what you comprehend.

What should I be learning about?

Containers and container orchestration

This has been the buzzword for many years, but it really seems to stick. We managed to optimize the server by removing the hardware and virtualizing it. This time we are removing more unnecessary fluff by taking away the OS, leaving just the application we want to run. This is the key to stable and efficient cloud computing.

Containers themselves are great, but they still need to be managed. Container orchestration is the term we use for managing them in as hands-off a way as possible. Through orchestration, you can define that you want a certain container image to be running, and if something happens to it, the orchestrator will remove it and spin up a new one. Orchestrators also support load balancing and clustering.

Docker, one of the first that really pushed the technology into the mainstream, has something called Docker Swarm. However, it seems that Kubernetes is winning the orchestration race and is generally considered the go-to solution for container orchestration.

Resources to learn about containers:

What is a Container? by Docker

Kubernetes: Up and Running: Dive into the Future of Infrastructure by Brendan Burns, Joe Beda & Kelsey Hightower.

Automation, doing it once is enough

As previously mentioned, this is something system administrators are already good at. But instead of writing small scripts and batch files, we are now focusing on automating as much as possible. We're working more and more with Infrastructure-as-Code (IaC), which automates everything related to provisioning infrastructure. We also have something called Configuration-as-Code (CaC), where you automate what your resources do after deployment.

We'll look closer at IaC in the next section, but automation is more than writing IaC. We want to be able to automate everyday tasks, both on our servers and on our workstations. I might be biased, but I prefer to use PowerShell as my daily driver. After going open source, it is available on all platforms, but if you're stuck with Bash or any other shell, there is no problem automating tasks with those as well. After a while you might even take things a step further and solve problems with Python, but creating some shell scripts is a great way to ease into development.
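
To make “automate everyday tasks” concrete, here is a tiny PowerShell example of the kind of chore worth scripting instead of clicking through a GUI (my own illustration, not from the original post):

# Report services that are set to start automatically but are not running
Get-Service |
    Where-Object { $_.StartType -eq 'Automatic' -and $_.Status -ne 'Running' } |
    Select-Object Name, Status, StartType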

Even if it's just scripts, keep them in a repository and work with them like code. It's a good way of learning the ropes before getting to IaC.

Resources to learn about automation:

Learn Windows PowerShell in a Month of Lunches by Don Jones & Jeffrey Hicks

PowerShell for Sysadmins: Workflow Automation Made Easy by Adam Bertram

Wicked Cool Shell Scripts by Dave Taylor & Brandon Perry

Git and Continuous Integration / Continuous Delivery

If you are like most system administrators, this might be a little unfamiliar to you, but it is essential that you learn how to properly handle code. Git has become the de facto way of storing and collaborating on code. Continuous Integration / Continuous Delivery (CI/CD) is central to the DevOps process. First, consider the following diagram of the DevOps process.

(Diagram of the DevOps loop, courtesy of What is DevOps? by Atlassian)

This is the visualization of how a DevOps process should work. It's a continuous work in progress, building the cathedral brick by brick. Usually this works by submitting code to a repository; then a build agent or service takes the code and runs with it.

A practical example might be better. Let’s say that we run Terraform as our IaC tool of choice.

  • By keeping the code in GitHub, we can collaborate on the code, have version control and peer review our code.
  • For this example, we have integrated our GitHub repository with Azure DevOps and by updating our code we trigger a pipeline that runs Terraform Plan (shows us what will happen when running the new code) and waits for our approval.
  • We look at the result of our Terraform Plan and approve the process.
  • Terraform talks to our cloud provider of choice and makes the changes.
  • While we are at it, an integration with Slack lets the team know that Terraform is throwing up some infrastructure.

This automates the entire process, from committing the code to the changes going live in the cloud. The underlying Terraform commands are sketched below.
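
Under the hood, the pipeline steps boil down to the standard Terraform workflow. A rough sketch of the commands a build agent would run (not tied to any specific CI product):

# Download providers and configure the state backend
terraform init

# Produce a plan file and show what will change, then wait for approval
terraform plan -out=tfplan

# After approval, apply exactly the plan that was reviewed
terraform apply tfplan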

Resources to learn about Git, CI/CD and IaC:

What is Infrastructure as Code? by Sam Guckenheimer

What is DevOps? by Atlassian

Infrastructure as Code: Managing Servers in the Cloud by Kief Morris

DevOps Culture

This is perhaps the most important part of DevOps, the culture. Much of what I have written here so far is incorporated into the culture, and we can go on and on about all the aspects of DevOps but here are a few important takeaways.

Smaller teams, more fluidity between team members

Everyone should be competent on most topics, but you don't have to be a subject matter expert on all of them. You might have more experience with networking, others with virtualization, but when you work in a small team the entire baseline gets raised. If you have one member who is brilliant at monitoring, that person can explain why something is the correct decision and heighten the general knowledge of monitoring throughout the team.

Feedback and collaboration

DevOps practitioners rely heavily on standups, short daily meetings where one discusses what happened yesterday and what the focus of today is. Communication tools like Slack, Teams, or self-hosted solutions like Mattermost are used to update your team and colleagues about what is going on.

Resources to learn about DevOps culture:

What is DevOps Culture? by Sam Guckenheimer

Effective DevOps by Jennifer Davis & Ryn Daniels
