tftools - PowerShell module for Terraform version handling

One of the great things about PowerShell is that it’s pretty easy to create your own tools. Due to the nature of Terraform, there are times when you need a specific version of it. For instance, a client I work with these days has some old code written in 0.11 while also creating new code that uses 0.12 syntax. This can easily happen with a big code base, as it’s almost impossible at times to update the entire thing.

While there are other solutions for version handling, these were either platform-specific or lacked some of the functionality that I wanted. So I set out to make a PowerShell module that completely handles Terraform versioning on every platform. Not everyone uses the new open source PowerShell either (shame…), so it needs to work on Windows PowerShell too.

I’m happy to announce that I have a functional version for Windows and Linux. I also have a Mac lying around that will be the guinea pig for adding Mac support.

Edit: As of version 0.3.5, the module also supports macOS. Read more about 0.3.5 here.

The module can do the following:

  • Install the version of Terraform that you want, or the latest version
  • Change between versions of Terraform
  • List all versions of Terraform that you have installed in your “library”
  • Delete versions of Terraform

I named it tftools so that I can expand the feature set if I ever feel like it. For now, having a way to switch between versions of Terraform really helps my workflow, and I’m sure it will help others as well.

Installation

If you already have Terraform installed by any other means, you should remove it before installing the module.

Installation is pretty simple. The module is published to the PowerShell Gallery, so all you need to do is run the following:

Install-Module -Name tftools

Updating the module:

Update-Module -Name tftools

Staying up to date

To keep up to date with this module, you can star the repository on GitHub or follow me on Twitter.


Working with helm charts in Terraform

Doing daily tasks in Kubernetes with Terraform might not be ideal, but when deploying a new cluster you at least want some of your standard applications running right from the start. Using Helm charts to install these is pretty nifty and saves you a lot of time.

I just recently had my first go at setting up Helm charts with Terraform, and it didn’t all go according to plan. I had some issues with setting up the provider, and later with deploying the charts themselves. The latter turned out to be because Helm doesn’t always remove everything when uninstalling applications, so the installation just timed out. That’s a story for another day, though.

The reason I wanted to write down a walkthrough of setting up Helm with Terraform is both so that anyone else can benefit from it and as an exercise to help me remember how I got it working.

I assume that you already know what Helm is, and that you know how to set up Kubernetes and Terraform. Be aware that I write this in 0.12 syntax, and you will get errors running some of this with Terraform 0.11 and earlier.

Set up the Helm provider

First, as always, we have to set up the provider. The documentation gives us two examples of how to authenticate to our cluster: through the normal kubeconfig or by statically defining our credentials. Using the kubeconfig probably works fine, but we wanted to set up the cluster and install Helm charts in the same process. We also wanted this to be able to run through a CI/CD pipeline, so referring to any kind of config file was not going to cut it.

The documentation example looks like this:

provider "helm" {
  kubernetes {
    host     = "https://104.196.242.174"
    username = "ClusterMaster"
    password = "MindTheGap"

    client_certificate     = file("~/.kube/client-cert.pem")
    client_key             = file("~/.kube/client-key.pem")
    cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
  }
}

This looks fine, but we don’t have this information, or these files, until the cluster is created. Since this will be running in the same workflow as the one creating the cluster, we need to refer to the resource element instead. Also, username and password were optional, so we tried without them first and had no issues.

provider "helm" {
  version = "~> 0.10.4"
  kubernetes {
    host                   = azurerm_kubernetes_cluster.k8s.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)
  }
}

The code above is from the Terraform and Kubernetes example that I use for my talk on Terraform. Feel free to look at the entire code on GitHub.

I’ve been working with Azure Kubernetes Service (AKS), so in my case we have created an AKS cluster with the local name k8s, from which we can pull the host, client certificate, client key and cluster CA certificate.

We are now ready to deploy helm charts by using the helm_release resource!

Taking the helm

Oh, the jokes. Pretty naughtical (nautical, get it?) …

Dad jokes aside, it’s time to install something through helm. We do this by using the helm_release resource, which can look a bit like this:

resource "helm_release" "prometheus" {
	name            = "prometheus"
	chart           = "prometheus-operator"
	repository      = "https://kubernetes-charts.storage.googleapis.com/"
	namespace       = "monitoring"
}

The chart is the official stable chart from the fine people over at Helm, but anything that is supported through the Helm CLI will work here as well.

Most likely, you will want to pass some configuration along with your Helm chart. There are two ways of doing this: by defining a values file or by using a set value block. There aren’t any real benefits to one or the other, but if you only have a single setting to pass along, creating an entire values file for it would be unnecessary.

Using our above example, here is how to structure the values file and/or using the set value block.

resource "helm_release" "prometheus" {
	name            = "prometheus"
	chart           = "prometheus-operator"
	repository      = "https://kubernetes-charts.storage.googleapis.com/"
	namespace       = "monitoring"
	
	# Values file
	values = [
    file("${path.module}/values.yaml")
  ]
	# Set value block
	set {
	  name        = "global.rbac.create"
    value       = "false"
  }
}

Other settings worth noting

  • wait - (Optional) Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as timeout. Defaults to true.
  • timeout - (Optional) Time in seconds to wait for any individual Kubernetes operation (like Jobs for hooks). Defaults to 300 seconds.
  • recreate_pods - (Optional) Perform pods restart during upgrade/rollback. Defaults to false.
  • atomic - (Optional) If set, the installation process purges the chart on failure. The wait flag will be set automatically if atomic is used. Defaults to false.
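
Putting a few of these together, here is a minimal sketch of a more defensive release, reusing the prometheus example from above (the timeout value is just an illustration):

resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"

  # Purge the chart if the installation fails, instead of leaving a half-finished release behind.
  # Setting atomic also implies wait, so Terraform blocks until everything is ready or times out.
  atomic  = true
  # Give a big chart like prometheus-operator some headroom beyond the 300 second default
  timeout = 600
}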


Ops to DevOps, a move from SysAdmin to DevOps Engineer

Even though the term DevOps has been around for quite some time now, not everyone has shifted from the old style of IT management to the new one. The big platform providers like AWS, Azure and Google have certainly been doing this for a while, but now medium-sized companies are implementing many of the new practices too. In my experience, this is a great way of improving performance when you have a smaller operations team.

When I started in IT, I totally identified with the SysAdmin part of the spectrum: managing servers and hypervisors, making sure services were up and running. For most of the people I know who work in Ops, the Dev part of DevOps seems like a big and scary thing. I can relate to that, but I’ve always liked the appeal of development. That might be why I’ve been more willing to risk exploring “the other side”.

DevOps is more than automation, and weird small computers!

Before moving on, I want to address the fact that DevOps is not only a term for technology. It’s a culture movement, and a very necessary one at that. I’ve worked with companies that are stuck in the old ways and with companies that embrace DevOps, and it works.

I suggest you start off by reading The DevOps Handbook. It’s an excellently written book that takes you through the story of a company as it moves from traditional development and, step by step, turns into a DevOps organization. A great starting point for anyone interested in learning about DevOps.

Do I need to be able to program to understand DevOps?

Absolutely not, but it helps. To be precise, what helps is being able to understand the logic of code when you see it. Keep in mind, DevOps doesn’t mean you have to be a developer who also does IT operations. You can just as easily be in operations but use tools proven by developers to increase agility. DevOps allows for more fluidity between work tasks, but you still want someone with a high level of networking experience to look at a network problem, even if that network is deployed through Infrastructure-as-Code.

It’s normal for a SysAdmin to create scripts to automate things. There is an almost derogatory term for a SysAdmin who does not write scripts: a “right click administrator”, someone who only navigates the GUI. I’m not saying that’s bad, but normally those who can automate tasks and write scripts are more efficient. You can get away with right click administration in traditional IT, but not in the new paradigm.

If you are hesitant to learn code, don’t be. DevOps is about promoting learning, so just ask a friend or colleague if you’re stuck.

Everything is different, I don’t like it!

Well, as harsh as it sounds… welcome to IT? Not a day goes by without something changing, and by the end of the month there are at least ten new things you need to know about. You get used to the idea that everything is in constant motion, but sometimes a new thing comes along and alters the course of how we work in a more profound way.

For people who don’t want change, I recommend reading up on growth mindset. Having a growth mindset is essential to getting ahead in any career, but in IT especially. One of the keys to a growth mindset is to always learn new things. In my experience, as long as you keep learning, your knowledge compounds fast.

I suggest that you read this article from Forbes on growth mindset. Consider changing your outlook on things, and you will soon see a change in what you comprehend.

What should I be learning about?

Containers and container orchestration

This has been a buzzword for many years, but it really seems to stick. We optimized the server once by removing the hardware and virtualizing it. This time we are removing more unnecessary fluff by taking away the OS, leaving just the application we want to run. This is the key to stable and efficient cloud computing.

Containers themselves are great, but they still need to be managed. Container orchestration is the term we use for managing them as hands-off as possible. Through orchestration, you can declare that you want a certain container image running, and if something happens to it, the orchestrator will remove it and spin up a new one. Orchestrators also support load balancing and clustering.

Docker, one of the first to really push the technology into the mainstream, has something called Docker Swarm. However, Kubernetes seems to be winning the orchestration race and is generally considered the go-to solution for container orchestration.

Resources to learn about containers:

What is a Container? by Docker

Kubernetes: Up and Running: Dive into the Future of Infrastructure by Brendan Burns, Joe Beda & Kelsey Hightower.

Automation, doing it once is enough

As previously mentioned, this is something system administrators are already good at. But instead of writing small scripts and batch files, we now focus on automating as much as possible. We’re working more and more with Infrastructure-as-Code (IaC), which automates everything related to deploying infrastructure. We also have something called Configuration-as-Code (CaC), where you automate what your resources do after deployment.

We’ll look closer at IaC in the next section, but automation is more than writing IaC. We want to automate everyday tasks, both on our servers and our workstations. I might be biased, but I prefer to use PowerShell as my daily driver. After going open source, it is available on all platforms, but if you’re stuck with Bash or any other shell, there is no problem automating tasks with those as well. After a while you might even take things a step further and solve problems with Python, but creating some shell scripts is a great way to start easing into development.

Even if it’s just scripts, keep them in a repository and work with them like code. It’s a good way of learning the ropes before getting to IaC.
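
As a concrete (and entirely hypothetical) example of the kind of everyday task worth scripting and keeping in a repository, here is a small PowerShell sketch that checks a list of services and starts any that have stopped — the service names are placeholders:

# Placeholder list of services we expect to be running
$services = "Spooler", "W32Time"

foreach ($name in $services) {
    # Look the service up without throwing if it doesn't exist
    $svc = Get-Service -Name $name -ErrorAction SilentlyContinue
    if ($null -eq $svc) {
        Write-Warning "$name was not found on this machine"
    }
    elseif ($svc.Status -ne 'Running') {
        Write-Host "Starting $name..."
        Start-Service -Name $name
    }
}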

Resources to learn about automation:

Learn Windows PowerShell in a Month of Lunches by Don Jones & Jeffrey Hicks

PowerShell for Sysadmins: Workflow Automation Made Easy by Adam Bertram

Wicked Cool Shell Scripts by Dave Taylor & Brandon Perry

Git and Continuous Integration / Continuous Delivery

If you are like most system administrators, this might be a little unfamiliar to you, but it is essential that you learn how to properly handle code. Git has become the de facto way of storing and collaborating on code, and Continuous Integration / Continuous Delivery (CI/CD) is central to the DevOps process. First, consider the following diagram of the DevOps process.

Diagram courtesy of What is DevOps? by Atlassian

This is a visualization of how a DevOps process should work. It’s a continuous work in progress, building the cathedral brick by brick. Usually this works by submitting code to a repository, after which a build agent or service takes the code and runs with it.

A practical example might be better. Let’s say that we run Terraform as our IaC tool of choice.

  • By keeping the code in GitHub, we can collaborate on it, have version control and peer review our changes.
  • For this example, we have integrated our GitHub repository with Azure DevOps, and updating our code triggers a pipeline that runs terraform plan (showing us what will happen when the new code is applied) and waits for our approval.
  • We look at the result of the plan and approve the process.
  • Terraform talks to our cloud provider of choice and makes the changes.
  • While we are at it, an integration with Slack lets the team know that Terraform is throwing up some infrastructure.

This automates the entire process from committing the code, to the changes going live in the cloud.

Resources to learn about Git, CI/CD and IaC:

What is Infrastructure as Code? by Sam Guckenheimer

What is DevOps? by Atlassian

Infrastructure as Code: Managing Servers in the Cloud by Kief Morris

DevOps Culture

This is perhaps the most important part of DevOps: the culture. Much of what I have written here so far is incorporated into the culture, and we could go on and on about all the aspects of DevOps, but here are a few important takeaways.

Smaller teams, more fluidity between team members

Everyone should be competent on most topics, but you don’t have to be a subject matter expert on all of them. You might have more experience with networking, someone else with virtualization, but when you work in a small team the entire baseline gets raised. If you have one member who is brilliant at monitoring, that person can explain why something is the correct decision and raise the general knowledge of monitoring throughout the team.

Feedback and collaboration

DevOps practitioners rely heavily on standups: short daily meetings where you discuss what happened yesterday and what the focus of today is. Communication tools like Slack, Teams or self-hosted solutions like Mattermost are used to keep your team and colleagues updated on what is going on.

Resources to learn about DevOps culture:

What is DevOps Culture? by Sam Guckenheimer

Effective DevOps by Jennifer Davis & Ryn Daniels


Correcting hybrid routing addresses after updating SAMAccountName

I just recently got into a situation where a customer with hybrid Exchange had to change many users’ SAMAccountNames, which led to a whole bunch of weird stuff due to hybrid routing addresses. So, here’s a quick script to find the users whose proxyAddresses don’t match SAMAccountName@tenantname.mail.onmicrosoft.com and make sure they get the correct address.

Remember that I’m in no way responsible for what happens if you run this script. You shouldn’t run any script you find in a live environment without knowing what it does.

# Variables: get all users and define your tenant name. Feel free to customize
# the $users query to get a more accurate result for your domain
$users = Get-ADUser -Filter * -Properties proxyaddresses -SearchBase "OU=Users,DC=domain,DC=local"
$tenant = "tenantName"

# Now, for each user...
$users | ForEach-Object {
    ## Build the correct routing address: smtp:samaccountname@tenant.mail.onmicrosoft.com
    $routingMail = "smtp:" + $_.SamAccountName + "@" + $tenant + ".mail.onmicrosoft.com"
    ## The user's proxyaddresses
    $proxy = $_.proxyaddresses
    ## Then, if the user doesn't have the correct routing address, either replace the wrong one or just add a new one if there wasn't one
    if ($proxy -notcontains $routingMail) {
        ### First, we find the bad routing address and make sure we take a note of it
        ### (reset $wrongProxy so we don't carry a value over from the previous user)
        $wrongProxy = $null
        $proxy | ForEach-Object {
            switch -wildcard ($_) {
                "smtp:*@$tenant.mail.*" { $wrongProxy = $_ }
            }
        }
        ### Then, we remove the wrong proxy, but only if one was actually found
        if ($wrongProxy) {
            Write-Host -ForegroundColor Red "Removing $wrongProxy"
            Set-ADUser -Identity $_.SamAccountName -Remove @{proxyAddresses = $wrongProxy}
        }
        ### Lastly, we'll add the correct address
        Write-Host -ForegroundColor Green "Adding $routingMail"
        Set-ADUser -Identity $_.SamAccountName -Add @{proxyAddresses = $routingMail}
    }
}


Working with Azure and Terraform, the basics

There are a couple of ways of running Terraform code against Azure, depending on how your workflow is designed. If you are running Terraform on your local machine, you can connect to Azure through PowerShell or Azure CLI and run the Terraform commands locally. This works fine for demo and development scenarios, but when moving into production it is recommended to use a CI/CD pipeline.

When you run Terraform, you generate a state file that stores the current state of your managed infrastructure, configuration and metadata. You can read up on why Terraform uses a state file here, but the short answer is: it enables Terraform to work with several cloud vendors and makes Terraform perform much better. The file usually resides on the machine you run your code on, but for teams working together it is preferable to store it remotely.
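
As an illustration, here is a minimal sketch of what remote state in an Azure storage account can look like, using the azurerm backend (the resource group, storage account, container and key names are placeholders you would replace with your own):

terraform {
  backend "azurerm" {
    # Placeholder names for an existing storage account dedicated to state
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "terraformstate"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}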

FYI: everything that I’m going to mention is readily available in the official Terraform docs, so if you prefer to learn everything the hard way, or you just don’t like me rambling, feel free to dive straight into the nitty-gritty. I’m also currently reading Terraform: Up and Running, so if you’re into book learning, then that’s the one I recommend on this subject.

Running Terraform in your favorite shell

Now that you’re sold on the idea of using Terraform to manage your infrastructure, you’ll want to cut to the chase and run some code against Azure. But hold on to your claws, Bub! First, we need to authenticate. When you’re starting out, you can get everything up and running by connecting to Azure with Azure CLI in the same terminal window that you are planning to run Terraform in.

You can connect to Azure by running the following Azure CLI command, then follow the instructions:

# Azure CLI
az login

For more information, read up on how to connect to Azure with Azure CLI in the Microsoft Docs.

After you have connected your shell to Azure, you can run your Terraform configuration files directly against Azure.

Working with a Service Principal

So far we have authenticated within a shell to run Terraform, but there comes a time when you have to run Terraform on a shared server, or better yet, through a CI/CD pipeline. When that time comes, you want Terraform to be able to authenticate on its own, so you and the people you work with don’t have to authenticate all the time.

You can define a Service Principal and secret as environment variables, or directly in the configuration file. The first option is highly recommended, as the alternative is storing sensitive information in plain text. You could also use a service principal with a client certificate.
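
For reference, this is roughly what the in-file variant looks like with the azurerm provider; all four values below are placeholders, and the whole block is exactly the kind of plain-text secret you want to keep out of version control:

provider "azurerm" {
  # Placeholder service principal credentials -- avoid committing real values
  subscription_id = "00000000-0000-0000-0000-000000000000"
  client_id       = "00000000-0000-0000-0000-000000000000"
  client_secret   = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "00000000-0000-0000-0000-000000000000"
}

If you instead export the ARM_* environment variables shown at the end of this post, you can leave all of these arguments out and the provider picks the credentials up automatically.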

To set up a service principal, you would need a Client ID (Application ID), Client Secret, your subscription ID, and tenant ID.

There is no reason to reinvent the wheel, so for the setup itself I’ll just refer to the Terraform docs. However, I prefer to use PowerShell for all things, and they tend to use Bash and Azure CLI in their examples, so here are the PowerShell counterparts to the Azure CLI steps they refer to.

# Connect to Azure
Connect-AzAccount

# Connect to Azure if using China, Germany or Government Cloud
Connect-AzAccount -Environment <AzureChinaCloud|AzureGermanCloud|AzureUSGovernment>

# Fetch your subscription, which also gives you the tenant ID where it resides
Get-AzSubscription

# Create Service Principal
New-AzADServicePrincipal -DisplayName "Terraform-Auth" -Role Contributor -Scope "/subscriptions/SUBSCRIPTION_ID"

# After following the steps in the Terraform Docs
# storing the credentials as environment variables
New-Item -Path "Env:\" -Name ARM_CLIENT_ID -Value "00000000-0000-0000-0000-000000000000"
New-Item -Path "Env:\" -Name ARM_CLIENT_SECRET -Value "00000000-0000-0000-0000-000000000000"
New-Item -Path "Env:\" -Name ARM_SUBSCRIPTION_ID -Value "00000000-0000-0000-0000-000000000000"
New-Item -Path "Env:\" -Name ARM_TENANT_ID -Value "00000000-0000-0000-0000-000000000000"
