Monday, January 24, 2022

How to back up your Azure infrastructure using automation and how to integrate with Azure Boards.

Dimitris Krallis (https://dkrallis.wordpress.com/)

In this solution we will automate the backup process for all application gateways in our subscription, although the same approach works for VMs, key vaults and other resources.
When a service is created, a configuration file (json) is created along with it. So we will build an automation solution that stores these configuration files in a storage container, running every day at 12:00 midnight.
Next, when the backup process completes, a work item will automatically be created on the Azure DevOps board. That way we have an interaction between our Azure environment and our Azure DevOps environment, and more specifically Boards.
First, let’s have an architectural view of the app gateways:

What we will need to have in place for this solution to be successful:
1) A storage account with a container where the backup files will be stored.
2) A PowerShell script for the backup process.
3) A PowerShell script for creating the work item on the Azure DevOps board.
4) A YAML pipeline with some variables. For security reasons we define these values as variables in the Azure DevOps environment, because we don't want them visible in our scripts: the storage account name, the container name and the subscription id (especially the subscription id!). A sketch of how they reach the scripts follows after the pipeline below.
The backup script reads these variables from its environment.
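If the storage account and container are not in place yet, here is a minimal sketch of creating them with Az PowerShell (the resource group, location, account and container names below are placeholders, not values from this solution):

#Create a storage account and a container for the backup files (all names are placeholder assumptions)
$rg = New-AzResourceGroup -Name "rg-appgw-backups" -Location "westeurope"
$sa = New-AzStorageAccount -ResourceGroupName $rg.ResourceGroupName `
                           -Name "stappgwbackups001" `
                           -Location $rg.Location `
                           -SkuName Standard_LRS
New-AzStorageContainer -Name "appgw-backups" -Context $sa.Context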

Assuming that we already have created a storage account and a container,
let’s start with the YAML pipeline :

trigger: none

schedules:
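# cron syntax: minute hour day-of-month month day-of-week (Azure DevOps evaluates schedules in UTC)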
- cron: "0 0 * * *"
  displayName: Daily Midnight 12:00 AM
  always: true
  branches:
    include:
      - master

variables:
  agent_vmimage: "windows-latest"
  applicationgatewaybackup_artifact: ApplicationGatewayBackup_artifact
  applicationgatewaybackup_path: ApplicationGatewayBackup/

# Stages
stages:
- stage: publish_and_run_artifact
  displayName: Publish & Run artifacts

  jobs:
    # Publish Artifacts
    - job: artifacts
      pool:
        vmImage: $(agent_vmimage)
      continueOnError: false
      workspace:
        clean: outputs

      steps:
        - publish: $(applicationgatewaybackup_path)
          artifact: $(applicationgatewaybackup_artifact)

    # Run backup
    - job: backup_applicationgateway
      dependsOn: artifacts
      pool:
        vmImage: $(agent_vmimage)
      continueOnError: false
      workspace:
        clean: outputs

      steps:
      - checkout: none

      - download: current
        artifact: $(applicationgatewaybackup_artifact)

      - task: AzurePowerShell@4
        displayName: 'az ps: Run backup script'
        inputs:
          azureSubscription:  MyServiceConnection
          scriptPath: "$(Pipeline.Workspace)/$(applicationgatewaybackup_artifact)/Script/BackupScript.ps1"
          azurePowerShellVersion: LatestVersion

    # Create the task on the Azure DevOps board
      - task: AzurePowerShell@4
        displayName: 'az ps: Run Create Work Item script'
        inputs:
          azureSubscription:  MyServiceConnection
          scriptPath: "$(Pipeline.Workspace)/$(applicationgatewaybackup_artifact)/Script/CreateWorkItem.ps1"
          azurePowerShellVersion: LatestVersion

As you can see in the schedules section, I use a cron expression to schedule the pipeline to run automatically, but you can always use the scheduling GUI in the Azure DevOps environment instead.
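One note on those variables: if you store STORAGE_ACCOUNT_NAME, CONTAINER_NAME and SUBSCRIPTION_ID as secret pipeline variables, Azure DevOps does not expose them to scripts as environment variables automatically, so they have to be mapped on the task explicitly. A sketch of what that mapping could look like on the backup task (the $(...) variable names are assumptions for whatever you named the secrets):

      - task: AzurePowerShell@4
        displayName: 'az ps: Run backup script'
        inputs:
          azureSubscription: MyServiceConnection
          scriptPath: "$(Pipeline.Workspace)/$(applicationgatewaybackup_artifact)/Script/BackupScript.ps1"
          azurePowerShellVersion: LatestVersion
        env:
          # map secret pipeline variables into the script's environment
          STORAGE_ACCOUNT_NAME: $(storage_account_name)
          CONTAINER_NAME: $(container_name)
          SUBSCRIPTION_ID: $(subscription_id)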
Now let’s see how the BackupScript.ps1 is structured:

$currentsub = Select-AzSubscription -Subscription ${env:SUBSCRIPTION_ID}
$SubscriptionName = $currentsub.Subscription.Name
$setdate = Get-Date -Format dd-MM-yyyy

#Get all application Gateways
try {
    $allapplicationGateways = Get-AzApplicationGateway -ErrorAction Stop
}
catch {
    throw "Couldn't retrieve the Application Gateways in subscription: $SubscriptionName"
}
#An empty result is not an error, so check explicitly that at least one gateway exists:
if (-not $allapplicationGateways) {
    throw "Couldn't find any Application Gateways in subscription: $SubscriptionName"
}

Write-Host "In the subscription $SubscriptionName, $($allapplicationGateways.Count) Application Gateways were found."

$activeAppGws = 0
$stoppedAppGws = 0
$i = 0

#Check the Operational state for each Application Gateway.
#We use the variables stoppedAppGws and activeAppGws to count the stopped and active Application Gateways and print the totals at the end of the script.
ForEach ($appGateway in $allapplicationGateways) {
    $state = $appGateway.OperationalState
    if ($state -eq "Stopped") {
        $stoppedAppGws++
    }
    else {
        $activeAppGws++
    }

    $i = $i + 1

    #We will store the json files in a container, so we lowercase each file name to keep a consistent naming convention (container names themselves must be lowercase)

    $resourceName = $appGateway.Name
    $resourceName = $resourceName.ToLower()

    #Get the RG name and id from each Application Gateway

    $ResourceGroupName = $appGateway.ResourceGroupName
    $resourceId = $appGateway.Id

    #Publish some details:

    Write-Host 'Application Gateway No:' $i 'is' $resourceName 'in' $ResourceGroupName 'and the Operational state is:' $state

    #Specify the name of the json file and export path
    $exportfileName = "$resourceName-$setdate.json"
    $exportpathlocation = "C:\temp\$exportfileName"

    #export current application gateway to a json file
    try {
        Write-Information "Now Executing export for Application Gateway: $ResourceName"
        Export-AzResourceGroup -ResourceGroupName $ResourceGroupName `
                               -Resource $resourceId `
                               -Path $exportpathlocation `
                               -IncludeParameterDefaultValue `
                               -Confirm:$false `
                               -Force
    } catch {
        throw $_
    }


    #Get Storage account
    try {
        $StorageAccount = Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name ${env:STORAGE_ACCOUNT_NAME} -ErrorAction Stop
        $storageAccountContext = $StorageAccount.Context
        Write-Information "Storage account found: ${env:STORAGE_ACCOUNT_NAME}"
    }
    catch {
        Write-Error "No storage account found with the name: ${env:STORAGE_ACCOUNT_NAME}"
    }

    #Get the Container
    try {
        Get-AzStorageContainer -Name ${env:CONTAINER_NAME} -Context $storageAccountContext -ErrorAction Stop
    }
    catch {
        Write-Output "container not found"
    }


    # Upload backup to Storage Account
    try {
        Write-Information "Now Uploading backup file to Azure..."
        Set-AzStorageBlobContent -File $exportpathlocation `
                                 -Container ${env:CONTAINER_NAME} `
                                 -Blob $exportfileName `
                                 -Context $storageAccountContext 
    } catch {
        throw $_
    }
}
Write-Host 'Application Gateways stopped in total:' $stoppedAppGws
Write-Host 'Application Gateways active in total:' $activeAppGws

As we can see in this PS script, we use environment variables to get the values for the storage account name, container name and subscription id.
We also count the application gateways and check the operational state of each one (for testing purposes the current status is “Stopped”). These details are printed in the log while the pipeline runs, like this:

Afterwards, we apply the naming convention to the json files,
export the application gateways as json files and finally upload them to the container.
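Since Export-AzResourceGroup writes out a plain ARM template, restoring a gateway later is simply a deployment of the saved json file. A minimal sketch, with a placeholder file name and resource group (download the blob from the container first):

#Restore sketch: redeploy an application gateway from its exported template (names are placeholder assumptions)
New-AzResourceGroupDeployment -ResourceGroupName "my-appgw-rg" `
                              -TemplateFile "C:\temp\myappgw-24-01-2022.json" `
                              -Confirm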

In the YAML pipeline we have two tasks:
the first one runs the backup process and the second one creates a work item on the Azure DevOps board. So let's now look at the PS script for creating the work item:

#Authentication in Azure DevOps
$AzureDevOpsPAT = 'skdjfhsjdfhsoifsodjrfoisfoweo234234swdf'
$AzureDevOpsAuthenticationHeader = @{Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($AzureDevOpsPAT)")) }

$OrganizationName = "dimitriskrallis0042"
$UriOrganization = "https://dev.azure.com/$($OrganizationName)/"

#Lists all projects in your organization
$uriAccount = $UriOrganization + "_apis/projects?api-version=5.1"
Invoke-RestMethod -Uri $uriAccount -Method Get -Headers $AzureDevOpsAuthenticationHeader


#Create a work item

$WorkItemType = "task"
$WorkItemTitle = "Application Gateway/s Backup"
$ProjectName = "MyBackupInfraProject"


$uri = $UriOrganization + $ProjectName + "/_apis/wit/workitems/$" + $WorkItemType + "?api-version=5.1"
Write-Output $uri

$body="[
  {
    `"op`": `"add`",
    `"path`": `"/fields/System.Title`",
    `"value`": `"$($WorkItemTitle) completed`"
  }
]"

Invoke-RestMethod -Uri $uri -Method POST -Headers $AzureDevOpsAuthenticationHeader -ContentType "application/json-patch+json" -Body $body
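The json-patch body is not limited to the title; for instance, a description and tags can be set in the same request (System.Description and System.Tags are standard work item fields, the values here are just examples):

$body = "[
  {
    `"op`": `"add`",
    `"path`": `"/fields/System.Title`",
    `"value`": `"$($WorkItemTitle) completed`"
  },
  {
    `"op`": `"add`",
    `"path`": `"/fields/System.Description`",
    `"value`": `"Backup pipeline finished on $(Get-Date -Format dd-MM-yyyy)`"
  },
  {
    `"op`": `"add`",
    `"path`": `"/fields/System.Tags`",
    `"value`": `"backup; automation`"
  }
]"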

As we can see, for this we first need to create a PAT (Personal Access Token) in the Azure DevOps environment and put it in our PS script.
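Hardcoding the PAT is fine for a demo, but a safer option is to store it as a secret pipeline variable and map it into the task environment, just like the other secrets. A sketch, assuming the secret was mapped to an environment variable named AZURE_DEVOPS_PAT:

#Read the PAT from the environment instead of hardcoding it (AZURE_DEVOPS_PAT is an assumed name)
$AzureDevOpsPAT = ${env:AZURE_DEVOPS_PAT}
if (-not $AzureDevOpsPAT) {
    throw "AZURE_DEVOPS_PAT is not set"
}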

So when the pipeline runs at 12:00 midnight, this is what we see in our container:
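You can also verify the upload from PowerShell instead of the portal; a minimal sketch, reusing the container variable and storage context from the backup script:

#List the uploaded backup blobs in the container
Get-AzStorageBlob -Container ${env:CONTAINER_NAME} -Context $storageAccountContext |
    Select-Object Name, LastModified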

and this is what we get in our Azure DevOps Boards:
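The created work items can be checked from a script as well, since the same REST API accepts WIQL queries; a sketch reusing the authentication header from above (the query text is just an example):

#Query recent backup work items via the WIQL endpoint
$wiqlUri = $UriOrganization + $ProjectName + "/_apis/wit/wiql?api-version=5.1"
$wiqlBody = '{ "query": "SELECT [System.Id], [System.Title] FROM WorkItems WHERE [System.Title] CONTAINS ''Backup'' ORDER BY [System.Id] DESC" }'
Invoke-RestMethod -Uri $wiqlUri -Method POST -Headers $AzureDevOpsAuthenticationHeader -ContentType "application/json" -Body $wiqlBody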

Of course, there is always room for improvement or for adding new features to this solution.
I just wanted to show you a simple way to automate backups of your Azure infrastructure.
I hope you enjoyed it as much as I did!

Thank you for reading this post.

Stay safe!
