Ops to DevOps, a move from SysAdmin to DevOps Engineer

Even though the term DevOps has been around for quite some time now, not everyone has shifted from the old style of IT management to the new one. The big platform providers, the AWSes, Azures and Googles of the world, have certainly been doing this for a while, but now medium-sized companies are trying to implement many of the new practices too. In my experience, this is a great way of improving performance when you have a smaller operations team.

When I started in IT, I totally identified with the SysAdmin part of the spectrum: managing servers and hypervisors, making sure services were up and running. For most of the people I know that work in Ops, the Dev part of DevOps seems like a big and scary thing. I can relate to that, but I’ve always liked the appeal of development. That might be why I’ve been more willing to risk exploring “the other side”.

DevOps is more than automation, and weird small computers!

Before moving on, I want to address the fact that DevOps is not only a term for technology. It’s a cultural movement, and a very necessary one at that. I’ve worked with companies that are stuck in the old ways and I’ve worked with companies that embrace DevOps, and it works.

I suggest you start off by reading The DevOps Handbook. It’s an excellently written book that takes you step by step through what it means to move from traditional development to a DevOps organization. A great starting point for anyone interested in learning about DevOps.

Do I need to be able to program to understand DevOps?

Absolutely not, but it helps. To be precise, what helps is being able to understand the logic of code when you see it. Keep in mind, DevOps doesn’t mean you have to be a developer who also does IT operations. You can just as easily be in operations but use tools proven by developers to increase agility. DevOps allows for more fluidity between work tasks, but you still want someone with a high level of networking experience to take a look at a network problem, even if that network is deployed through Infrastructure-as-Code.

DevOps doesn’t mean you have to be a developer who also does IT operations. You can just as easily be in operations but use tools proven by developers to increase agility.

It’s normal for sysadmins to create scripts to automate tasks. There is an almost derogatory term for a SysAdmin that does not write scripts: a “right click administrator”, someone that only navigates the GUI. I am not saying that’s bad, but normally those who can automate tasks and write scripts are more efficient. It’s easy to keep doing right click administration in traditional IT, but not in the new paradigm.

If you are hesitant to learn to code, don’t be. DevOps is about promoting learning, so just ask a friend or colleague if you’re stuck.

Everything is different, I don’t like it!

Well, as harsh as it sounds… Welcome to IT? Not a day goes by without something changing, and by the end of the month there are at least ten new things that you need to know about. You get used to the idea that everything is in constant motion, but sometimes a new thing comes along and alters the course of how we work in a more profound way.

For people that don’t want change, I recommend reading up on the growth mindset. Having a growth mindset is essential to getting ahead in any career, but in IT especially. One of the keys to a growth mindset is to always be learning new things. In my experience, as long as you keep learning, your knowledge compounds fast.

I suggest that you read this article from Forbes on growth mindset; consider changing your outlook on things, and you will soon see a change in how much you take in.

What should I be learning about?

Containers and container orchestration

Containers have been the buzzword for many years now, but they really seem to stick. We optimized the server once before by removing the hardware and virtualizing it. This time we are removing more unnecessary fluff by taking away the OS, leaving just the application we want to run. This is the key to stable and efficient cloud computing.

Containers themselves are great, but they still need to be managed. Container orchestration is the term we use for managing them as hands-off as possible. Through orchestration, you can declare that you want a certain container image to be running, and if something happens to a container, the orchestrator will remove it and spin up a new one. Orchestrators also support load balancing and clustering.

Docker, one of the first to really push the technology into the mainstream, has something called Docker Swarm. However, Kubernetes seems to be winning the orchestration race and is generally considered the go-to solution for container orchestration.
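
To make the declarative idea concrete, here is a minimal sketch of a Kubernetes Deployment. The names and image are placeholders I made up, but the `replicas: 3` line is what tells the orchestrator to keep three copies running and replace any that die:

```yaml
# Hypothetical Deployment: keep three replicas of a web container running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                  # desired number of running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # the container image to keep alive
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f`, the cluster then continuously reconciles the real state toward this desired state.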

Resources to learn about containers:

What is a Container? by Docker

Kubernetes: Up and Running: Dive into the Future of Infrastructure by Brendan Burns, Joe Beda & Kelsey Hightower.

Automation, doing it once is enough

As previously mentioned, this is something system administrators are already good at. But instead of writing small scripts and batch files, we are now focusing on automating as much as possible. We’re working more and more with Infrastructure-as-Code (IaC), where everything related to infrastructure is defined and deployed as code. We also have something called Configuration-as-Code (CaC), where you automate what your resources do after deployment.

We’ll look closer at IaC in the next section, but automation is more than writing IaC. We want to be able to automate everyday tasks, both on our servers and workstations. I might be biased, but I prefer to use PowerShell as my daily driver. After going open source, it is available on all platforms, but if you’re used to Bash or any other shell, there is no problem automating tasks with those as well. After a while you might even take things a step further and solve problems with Python, but creating some shell scripts is a great way to ease into development.
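
As a sketch of the kind of everyday task worth automating, here is a small Bash example (the paths are made up) that compresses log files older than a week and moves them into an archive folder:

```shell
#!/usr/bin/env bash
# Archive log files older than 7 days (example paths, adjust for your system)
set -euo pipefail

LOG_DIR="/var/log/myapp"          # where the logs live (placeholder)
ARCHIVE_DIR="$LOG_DIR/archive"    # where compressed copies go

mkdir -p "$ARCHIVE_DIR"
# Find week-old .log files in the top of LOG_DIR, gzip each into ARCHIVE_DIR,
# then remove the original
find "$LOG_DIR" -maxdepth 1 -name '*.log' -mtime +7 -print0 |
  while IFS= read -r -d '' file; do
    gzip -c "$file" > "$ARCHIVE_DIR/$(basename "$file").gz"
    rm "$file"
  done
```

Run once by hand, then drop it into cron (or a scheduled task) and you never think about it again; the same task translates almost line for line to PowerShell.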

Even if it’s just scripts, keep them in a repository and work with them like code. It’s a good way of learning the ropes before getting to IaC.
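
Getting a scripts folder under version control takes a minute. A minimal sketch (the folder name and commit message are just examples):

```shell
# Put an existing scripts folder under version control
cd ~/scripts                    # your scripts folder (placeholder path)
git init                        # start tracking this folder with git
git add .                       # stage every script
git commit -m "Initial import of my admin scripts"
git log --oneline               # verify the history exists
```

From here, every change gets a commit message, and pushing to a remote like GitHub gives you backup and peer review for free.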

Resources to learn about automation:

Learn Windows PowerShell in a Month of Lunches by Don Jones & Jeffrey Hicks

PowerShell for Sysadmins: Workflow Automation Made Easy by Adam Bertram

Wicked Cool Shell Scripts by Dave Taylor & Brandon Perry

Git and Continuous Integration / Continuous Delivery

If you are like most system administrators, this might be a little unfamiliar to you, but it is essential that you learn how to properly handle code. Git has become the de facto way of storing and collaborating on code, and Continuous Integration / Continuous Delivery (CI/CD) is central to the DevOps process. First, consider the following diagram of the DevOps process.

Courtesy of What is DevOps? by Atlassian

This is a visualization of how a DevOps process should work. It’s a continuous work in progress, building the cathedral brick by brick. Usually it works by submitting code to a repository, after which a build agent or service takes the code and runs with it.

A practical example might be better. Let’s say that we run Terraform as our IaC tool of choice.

  • By keeping the code in GitHub, we can collaborate on it, keep version history and peer review changes.
  • For this example, we have integrated our GitHub repository with Azure DevOps, so updating our code triggers a pipeline that runs terraform plan (which shows us what will happen when the new code runs) and waits for our approval.
  • We look at the result of the plan and approve the process.
  • Terraform talks to our cloud provider of choice and makes the changes.
  • While we are at it, an integration with Slack lets the team know that Terraform is throwing up some infrastructure.

This automates the entire process from committing the code, to the changes going live in the cloud.
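
The steps above can be sketched as a pipeline definition. This is only a rough outline, assuming Terraform is installed on the build agent; the manual approval itself is configured on the stage or environment in Azure DevOps, not in the YAML:

```yaml
# Hypothetical azure-pipelines.yml for the Terraform flow described above
trigger:
  - main                # run on every push to the main branch

steps:
  - script: terraform init
    displayName: Initialize Terraform
  - script: terraform plan -out=tfplan
    displayName: Show what the new code will change
  # manual approval happens here, configured outside the YAML
  - script: terraform apply tfplan
    displayName: Apply the approved plan
```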

Resources to learn about Git, CI/CD and IaC:

What is Infrastructure as Code? by Sam Guckenheimer

What is DevOps? by Atlassian

Infrastructure as Code: Managing Servers in the Cloud by Kief Morris

DevOps Culture

This is perhaps the most important part of DevOps: the culture. Much of what I have written here so far is incorporated into the culture, and we could go on and on about all the aspects of DevOps, but here are a few important takeaways.

Smaller teams, more fluidity between team members

Everyone should be competent on most topics, but you don’t have to be a subject matter expert on all of them. You might have more experience with networking, someone else with virtualization, but when you work in a small team the entire baseline gets raised. If one member is brilliant at monitoring, that person can explain why something is the correct decision and raise the general knowledge of monitoring throughout the team.

Feedback and collaboration

DevOps practitioners rely heavily on standups, short daily meetings where you discuss what happened yesterday and what the focus of today is. Communication tools like Slack, Teams or self-hosted solutions like Mattermost are used to update your team and colleagues about what is going on.

Resources to learn about DevOps culture:

What is DevOps Culture? by Sam Guckenheimer

Effective DevOps by Jennifer Davis & Ryn Daniels


Correcting hybrid routing addresses after updating SAMAccountName

I recently got into a situation where a customer with hybrid Exchange had to change many users’ SAMAccountNames, which led to a whole bunch of weird stuff due to hybrid routing addresses. So, here’s a quick script to locate the users whose proxyAddresses don’t match SAMAccountName@tenantname.mail.onmicrosoft.com and make sure that they get the correct address.

Remember that I’m in no way responsible for what happens if you run this script. You shouldn’t run any script you find on the internet in a live environment without knowing what it does.

# Variables: get all users and define your tenant name. Feel free to customize the $users query to get a more accurate result for your domain
$users = Get-ADUser -Filter * -Properties proxyAddresses -SearchBase "OU=Users,DC=domain,DC=local"
$tenant = "tenantName"

# Now, for each user...
$users | ForEach-Object {
    ## Build the correct routing address: smtp:SAMAccountName@tenantname.mail.onmicrosoft.com
    $routingMail = "smtp:" + $_.SamAccountName + "@" + $tenant + ".mail.onmicrosoft.com"
    ## The user's proxy addresses
    $proxy = $_.proxyAddresses
    ## If the user doesn't have the correct routing address, either replace the wrong one or just add a new one if there wasn't one
    if ($proxy -notcontains $routingMail) {
        ### First, find any bad routing address and take note of it.
        ### Reset $wrongProxy so a match from the previous user doesn't carry over
        $wrongProxy = $null
        $proxy | ForEach-Object {
            switch -Wildcard ($_) {
                "smtp:*@$tenant.mail.*" { $wrongProxy = $_ }
            }
        }
        ### Then, remove the wrong routing address, but only if one was actually found
        if ($wrongProxy) {
            Write-Host -ForegroundColor Red "Removing $wrongProxy"
            Set-ADUser -Identity $_.SamAccountName -Remove @{proxyAddresses=$wrongProxy}
        }
        ### Lastly, add the correct address
        Write-Host -ForegroundColor Green "Adding $routingMail"
        Set-ADUser -Identity $_.SamAccountName -Add @{proxyAddresses=$routingMail} -ErrorAction SilentlyContinue
    }
}


Working with Azure and Terraform, the basics

There are a couple of ways of running Terraform code against Azure, depending on how your workflow is designed. If you are running Terraform on your local machine, you can connect to Azure through PowerShell or the Azure CLI and run the Terraform commands locally. This works fine for demo and development scenarios, but when moving into production it is recommended to use a CI/CD pipeline.

When you run Terraform, you generate a state file that stores the current state of your managed infrastructure, configuration and metadata. You can read up on why Terraform uses a state file here, but the short answer is: it enables Terraform to work with several cloud vendors and makes Terraform perform much better. The file usually resides on the machine you run your code on, but for teams working together it is preferable to store it remotely.
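
For example, remote state in an Azure storage account is configured with a backend block. This is only a sketch; the resource group, storage account and container are placeholder names for resources you would create beforehand:

```hcl
# Store the Terraform state file in an Azure storage account
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"        # placeholder names
    storage_account_name = "sttfstate12345"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
```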

FYI: everything I’m going to mention is readily available in the official Terraform docs, so if you prefer to learn everything the hard way, or you just don’t like me rambling, feel free to dive straight into the nitty-gritty. I’m also currently reading Terraform: Up and Running, so if you’re into book learning, that’s the one I recommend on this subject.

Running Terraform in your favorite shell

Now that you’re sold on the idea of using Terraform to manage your infrastructure, you want to cut to the chase and run some code against Azure. But hold on to your claws, Bub! First, we need to authenticate. When you’re starting out, you can get everything up and running by connecting to Azure with the Azure CLI in the same terminal window you plan to run Terraform in.

You can connect to Azure by running the following Azure CLI command, then follow the instructions:

# Azure CLI
az login

For more information, read up on how to connect to Azure with Azure CLI in the Microsoft Docs.

After you have connected your shell to Azure, you can run your Terraform configuration files directly against Azure.
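
As a first test, a minimal configuration could look like this (a sketch; the resource group name and region are placeholders):

```hcl
# Minimal Terraform config: the azurerm provider and one resource group
provider "azurerm" {
  features {}   # required block for the azurerm provider, even when empty
}

resource "azurerm_resource_group" "example" {
  name     = "rg-terraform-demo"
  location = "westeurope"
}
```

With the shell authenticated, `terraform init` followed by `terraform plan` and `terraform apply` deploys the resource group.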

Working with a Service Principal

So far we have authenticated within a shell to run Terraform, but there comes a time when you have to run Terraform on a shared server or, better yet, through a CI/CD pipeline. When that time comes, you want Terraform to be able to authenticate on its own, so you and the people you work with don’t have to authenticate all the time.

You can define a Service Principal and secret as environment variables, or directly in the configuration file. The former is highly recommended, as the alternative is storing sensitive information in plain text. You could also use a service principal with a client certificate.

To set up a service principal, you need a client ID (application ID), client secret, subscription ID and tenant ID.

There is no reason to reinvent the wheel, so for the setup itself I’ll just refer to the Terraform docs. However, I prefer to use PowerShell for all things, and they tend to use Bash and the Azure CLI in their examples, so here are the PowerShell counterparts to the Azure CLI steps they refer to.

# Connect to Azure
Connect-AzAccount

# Connect to Azure if using China, Germany or Government Cloud
Connect-AzAccount -Environment <AzureChinaCloud|AzureGermanCloud|AzureUSGovernment>

# Fetch your subscription, which also gives you the tenant ID where it resides
Get-AzSubscription

# Create Service Principal
New-AzADServicePrincipal -DisplayName "Terraform-Auth" -Role Contributor -Scope "/subscriptions/SUBSCRIPTION_ID"

# After following the steps in the Terraform Docs
# storing the credentials as environment variables
New-Item -Path "Env:\" -Name ARM_CLIENT_ID -Value "00000000-0000-0000-0000-000000000000"
New-Item -Path "Env:\" -Name ARM_CLIENT_SECRET -Value "00000000-0000-0000-0000-000000000000"
New-Item -Path "Env:\" -Name ARM_SUBSCRIPTION_ID -Value "00000000-0000-0000-0000-000000000000"
New-Item -Path "Env:\" -Name ARM_TENANT_ID -Value "00000000-0000-0000-0000-000000000000"


PowerShell and how to work with network settings

This one seems confusing for most, as it might be the area where most Windows sysadmins rely on the GUI. If you ask (almost) any sysadmin how to change the IP of a server, they are going to answer by describing how to get to the network adapters in the settings. Things like this seem to be one of the reasons why people are afraid of adopting Server Core.

I figure it’s time to inform the people and make sure that anyone can handle networking, even with access to nothing but the shell. Let’s summarize how to do the most common network related tasks in PowerShell.

Any requests?

Enable and Disable NIC

# List all network adapters
Get-NetAdapter

# Disable a specific network adapter, for instance the Wi-Fi adapter
# First by name, then by piping a specific adapter
Disable-NetAdapter -Name "Wi-Fi"
Get-NetAdapter -InterfaceIndex 5 | Disable-NetAdapter

# Activate a specific network adapter
# Again by name and then by piping a specific adapter
Enable-NetAdapter -Name "Wi-Fi"
Get-NetAdapter -InterfaceIndex 5 | Enable-NetAdapter

Get and set IP address

# Get the IP-address of a specific adapter
Get-NetIPAddress -InterfaceIndex 5

# Get just the IPv4-address
Get-NetIPAddress -InterfaceIndex 5 -AddressFamily IPv4

# Just the address itself
(Get-NetIPAddress -InterfaceIndex 5 -AddressFamily IPv4).IPAddress
# Set IPv4-address, using splatting for better readability
$ipParameter = @{
    InterfaceIndex = 22
    IPAddress = "10.0.0.22"
    PrefixLength = 24
    AddressFamily = "IPv4"
}
New-NetIPAddress @ipParameter

# Set the adapter to DHCP
Set-NetIPInterface -InterfaceIndex 22 -Dhcp Enabled

Set DNS server for NIC and reset DNS Cache

# Set DNS-server addresses on a specific NIC
$dnsParameter = @{
    InterfaceIndex = 5
    ServerAddresses = ("8.8.8.8","8.8.4.4")
}
Set-DnsClientServerAddress @dnsParameter

# Clear DNS cache 
Clear-DnsClientCache


What is Azure Lighthouse

Working at an MSP / CSP has taught me many things. Mainly that keeping track of credentials is a bitch. I jest, but not really. Some features of the Partner Dashboard that Microsoft provides make it easier and let you jump into a customer’s tenant, but it’s flawed: you have to go through the Partner Dashboard, find the customer and then select a link to whatever service you want to manage. Microsoft realized that this is a struggle and has created Azure Lighthouse, a cross-customer management solution. The service itself is free, but if you use other services with it you have to pay for those, obviously.

[Figure: What you get in Azure Lighthouse]

With Azure Lighthouse, any managed service provider can have one view of their entire customer base: monitoring, compliance and security, all in one portal. This makes working with customers much better and, eventually, makes the experience better for end users.

I recommend reading up on the service on the Azure Lighthouse product page, but also take a look at this demonstration video.
