Publish to PowerShellGallery with GitHub Actions

My next step in automating my PowerShell module development workflow is to have the module deploy to PowerShellGallery when I create a GitHub release. Last time we set up unit testing with Pester; now we want our code to get out into the world.

What I want to accomplish is pretty simple: a one-step release process. With GitHub Actions, we can trigger tasks when a new release is created. On release, we check out our code and run Publish-Module just like we would locally on our machine. All we need beyond that is an API key, which you can find when you log into PowerShellGallery, and that’s about it.

Add the API key as a secret

Under Settings in the repository you want to publish from, you will find the menu item called Secrets. Press that big New secret button to add your secret. Once saved, you can no longer view or edit the secret, but you can replace or delete it.

As you can see from my repo, I have one called PSGALLERY and one called CODECOV, one for each of the respective services.

(Screenshot: the repository Secrets page, showing the PSGALLERY and CODECOV secrets.)

Let’s see how we can set up our workflow and reference that secret!

Creating the workflow

Let us take a look at the code, then I can explain what is going on.


name: PSGallery
on:
  release:
    types: [published]
jobs:
  psgallery_publish:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
        
      - name: Publishing
        run: |
          Publish-Module -Path '...' -NuGetApiKey ${{ secrets.PSGALLERY }}
        shell: pwsh

First of all, we define when this workflow is triggered. We want it to run every time a new release is created and published. The types here can be anything from unpublished to edited, so if you have any special needs, the GitHub Actions documentation covers everything you need to know.

I have created one job called psgallery_publish, which has two steps: one to check out the code, so we have it locally on the agent we’re using, and one to run the line of PowerShell that actually publishes the module. I usually keep the actual module code in a directory with the same name as the module itself; that path goes into the -Path parameter.

For our secret, we can fetch it with the ${{ secrets.PSGALLERY }} expression. This ensures that your actual secret never appears in your public code, and makes it easy to maintain if you ever need to change the key.
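As a small variation (my preference, not a requirement), you can hand the secret to the script through an environment variable instead of interpolating it straight into the run block. A sketch, assuming the module lives in a directory called MyModule:

```yaml
      - name: Publishing
        env:
          # The secret is exposed to this step only, as an environment variable
          PSGALLERY_KEY: ${{ secrets.PSGALLERY }}
        run: |
          Publish-Module -Path './MyModule' -NuGetApiKey $env:PSGALLERY_KEY
        shell: pwsh
```

This keeps the secret out of the inline script text itself, which can make the step easier to read and reuse.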

read more

Using GitHub actions to run automatic Pester tests

Using GitHub actions to run automatic Pester tests

It took me a long time to look into Pester for testing my code. Or, worded differently: for the longest time I was not testing my code. But as soon as I started creating PowerShell modules that were more than just small side projects, I had to step up the production quality. And once I had written some tests, I wanted them to run every time I did a pull request. This helps me catch bugs before publishing the new version of my module, and saves me from a ton of stress.

This is not an explanation on how Pester works. If you want to learn how to write tests for PowerShell you can either check out pester.dev, or this book by Adam Bertram.

A basic example

If we look at the bare essentials you need to create a GitHub action that runs Pester, we get something like this…

name: Pester
on:
  push:
    branches: [ main ]
jobs:
  test-pwsh:
    runs-on: windows-latest
    steps:
    - uses: actions/checkout@v2
    - name: Run Pester tests
      run: |
        Set-PSRepository psgallery -InstallationPolicy trusted
        Install-Module -Name Pester -RequiredVersion 5.0.4 -Confirm:$false -Force
        Invoke-Pester -Path "tests"
      shell: pwsh

Just to break it down, here we have an action that:

  • Runs on every git push to a branch called main (change this to suit your needs)
  • Runs a job called test-pwsh on the latest available Windows image
  • Checks out the repository to our workspace, so we can interact with our code
  • Runs a couple of cmdlets inline in PowerShell 7, as defined by the shell selection at the end
    • Sets the PSGallery as a trusted source
    • Installs Pester, making sure nothing stops it from doing so by using -Confirm:$false and -Force.
    • Runs Pester on the directory containing the test files; in my case it is called tests. Imagination is not my strong suit.

Some of the parameters and switches are probably unnecessary, as the container you get served is completely clean. I just like the false sense of security that defining -Force gives me 😅

Turns out, this works just as you expect it to. However, one flaw here is that we are now running this on PowerShell 7 only, and on Windows only. If you are creating a module that you want to be usable for anyone running PowerShell, you need to run your tests on all available platforms, as well as on Windows PowerShell. Turns out, GitHub Actions has a solution for that.

Running same tests on multiple platforms


name: Pester
on:
  pull_request:
    branches: [ main ]
jobs:
  test-pwsh:
    strategy:
      matrix:
        platform: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.platform }}
    steps:
    - uses: actions/checkout@v2
    - name: Run Pester tests (pwsh)
      run: |
        Write-Host $PSVersionTable.PSVersion.Major $PSVersionTable.PSVersion.Minor
        Set-PSRepository psgallery -InstallationPolicy trusted
        Install-Module -Name Pester -RequiredVersion 5.0.4 -confirm:$false -Force
        Invoke-Pester -Path "tests"
      shell: pwsh
  
  test-posh:
    runs-on: windows-latest
    steps:
    - uses: actions/checkout@v2
    - name: Run Pester tests (PowerShell)
      run: |
        Write-Host $PSVersionTable.PSVersion.Major $PSVersionTable.PSVersion.Minor
        Set-PSRepository psgallery -InstallationPolicy trusted
        Install-Module -Name Pester -RequiredVersion 5.0.4 -Confirm:$false -Force
        Invoke-Pester -Path "tests"
        if ($Error[0].Fullyqualifiederrorid -eq 'PesterAssertionFailed') {exit 1}
      shell: powershell

In this example, we run two jobs: one called test-pwsh and one called test-posh. The first one uses the matrix strategy to run the job once for each of the platforms we have defined. This saves us from manually creating three jobs that run the same code.

We then have to create a second job to run in a different shell. It turns out that even though Pester fails in PowerShell 5.x, the job still gets marked as a success, so I added a little if statement to make sure it runs exit 1 on a failed assertion.
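As an aside: if you would rather not inspect $Error yourself, Pester 5 also has a -CI switch on Invoke-Pester which, among other things, makes the run exit with a non-zero code when tests fail. A sketch of the same Windows PowerShell step using it:

```yaml
    - name: Run Pester tests (PowerShell)
      run: |
        Set-PSRepository psgallery -InstallationPolicy trusted
        Install-Module -Name Pester -RequiredVersion 5.0.4 -Confirm:$false -Force
        # -CI sets the exit code on failure, so no manual $Error check is needed
        Invoke-Pester -Path "tests" -CI
      shell: powershell
```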

Overall, that’s it for this topic. Obviously, we could do a real deep dive here, but for now I just want to make sure that more people are testing their code. Questions, comments or feedback? Feel free to reach out to me on Twitter!

read more

Custom variable validation, a practical example

Custom variable validation is my new go-to killer feature. Introduced as a language experiment late in the 0.12 series, it is production-ready as of Terraform 0.13! It lets us write a definition of what we want our input variables to be, and send out a proper error message when they don’t comply.

At first, you might ask: why bother? If the user inputs something that can’t be deployed, wouldn’t Terraform fail anyway? Sure, but for that failure to happen we actually have to run the code and wait for the provider to return the error. This takes time, or even worse, it might actually try to deploy and time out, which takes even more time.

Creating Azure Storage Account

One example that comes to mind is deploying Azure storage accounts. When deploying storage accounts, there are some rules for what you can name them. The name must be unique, be between 3 and 24 characters in length, and may only contain lowercase letters and numbers. The first rule, Azure will have to check for us, but the others are pretty static.

Here is my example, which also can be found in my Azure examples git-repository.

variable "storage_account_name" {
  type    = string
  validation {
    condition     = (
                    length(var.storage_account_name) > 2 && 
                    length(var.storage_account_name) < 25 && 
                    can(regex("^[a-z0-9]+$", var.storage_account_name))
                    )
    error_message = "Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only."
  }
}

Note that we are using the && operator to chain our conditions. The whole condition is only true if every part is true, and evaluation stops at the first part that fails. In plain English, you would read this as;

If the length of the string is greater than 2, and the length of the string is less than 25, and the string only has lowercase letters and numbers, return true.

If any one of the tests fails, Terraform returns the error message you have defined.

In my tests, the error is returned in around 0.7 seconds. This is compared to 5, 6, and even 7 seconds when trying to deploy and getting the error back from Azure.

https://robstr.dev/images/posts/307-measure-command.png
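To tie it together, here is a sketch of how the validated variable could feed an azurerm_storage_account resource. The resource group, location, and replication settings are placeholder values for illustration only:

```hcl
resource "azurerm_storage_account" "example" {
  # Validation on var.storage_account_name runs before this resource is planned
  name                     = var.storage_account_name
  resource_group_name      = "example-rg"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```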

read more

List all VNet and Subnets across multiple subscriptions

It has happened to everyone: the network sprawl. You might have on-premises networks and virtual networks, maybe even in multiple clouds, and at some point you have simply lost count of your ranges and what they are used for. Usually, these ranges come from someone responsible for IP ranges (preferably an IPAM solution), but what if you have a lot of teams creating VNets in a bunch of subscriptions? Well, it can get out of hand quickly.

The script

If you are interested in learning how this script works, we’ll continue the blog post after the code. For those who just want to run the script, here you go:

Get-AzSubscription | Foreach-Object {
    $sub = Set-AzContext -SubscriptionId $_.SubscriptionId
    $vnets = Get-AzVirtualNetwork

    foreach ($vnet in $vnets) {
        [PSCustomObject]@{
            Subscription = $sub.Subscription.Name
            Name = $vnet.Name
            Vnet = $vnet.AddressSpace.AddressPrefixes -join ', '
            Subnets = $vnet.Subnets.AddressPrefix -join ', '
        }
    }
} | Export-Csv -Delimiter ";" -Path "AzureVnet.csv"

This will export the results to CSV, but if you don’t want that you can remove the last pipe and the cmdlet Export-Csv.

Note that you need to have the Az-module installed. You also have to be connected to Azure with an account that can at least read all the subscriptions and network resources.

How the script works

We start off by getting all the available subscriptions and running them one by one through a foreach loop. For every subscription, we set the active context to that subscription and populate the variable $vnets with all the virtual networks in that subscription.

We then run an inner foreach loop, where we create one new PSCustomObject per VNet in our $vnets variable. This is how we represent our information, and the first couple of values make sense right away: we set Subscription to the name of the current subscription, and the name of the VNet as the Name field.

For our VNet address space and subnets, we could just point to the value from $vnet and be done with it. That works perfectly if you only want the results in the terminal. But I want to export this as a CSV so I can share the list with whoever needs it, and if you export a property that holds more than one value, you will not get the IP ranges but the text System.Collections.Generic.List.

To get around this, we refer to the value we want and use the -join operator to combine all the values, separated by a comma. I also added a space after the comma to make it more readable. Both the VNet address space and the subnets can contain multiple values, so I used -join for both of them.
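To see the difference outside of Azure, here is a minimal, standalone demonstration; the list contents are made up for the example:

```powershell
# A generic list, similar in shape to what AddressPrefixes holds
$prefixes = [System.Collections.Generic.List[string]]@('10.0.0.0/16', '10.1.0.0/16')

# CSV export calls ToString() on the property, which only yields the type name:
$prefixes.ToString()   # System.Collections.Generic.List`1[System.String]

# -join flattens the collection into one readable field:
$prefixes -join ', '   # 10.0.0.0/16, 10.1.0.0/16
```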

read more

Extract Zip files with PowerShell

Note: I got a lot of feedback about how it is possible to use Expand-Archive for this. While that is true, I wanted a solution that didn’t rely on any prerequisites beyond what ships with .NET. For this tool, I tried to create something that would run flawlessly on any system, even one stripped of core functionality like some of the system modules and libraries. This might not have come across when I originally wrote this, but hopefully the rest of the post makes sense now.

For my module tftools I needed to download Terraform from HashiCorp, which comes in a Zip archive. I didn’t want to rely on other tools or modules to extract the Zip files, and luckily there is a .NET class called ZipFile in the System.IO.Compression.FileSystem assembly that can be utilized.

Here’s how we can download a Zip file as a temporary file and extract the content.

# Define a temporary file, 
# the URI for the file you want to download,
# and the folder you want to extract to
$tempFile = [System.IO.Path]::GetTempFileName()
$URI      = "https://example.org/file.zip"
$OutputFolder = "C:\folder"

# Download the file by using splatting*
$downloadSplat = @{
    uri             = $URI
    OutFile         = $tempFile
    UseBasicParsing = $true
}
Invoke-WebRequest @downloadSplat

# Load the assembly
Add-Type -AssemblyName System.IO.Compression.FileSystem

# Extract the content
[System.IO.Compression.ZipFile]::ExtractToDirectory($tempFile, $OutputFolder)

# And clean up by deleting the temporary file
Remove-Item -Path $tempFile

*If you haven’t heard of splatting, here is my blogpost about it: PowerShell tricks: Splatting

read more