All posts by Thiyagu

How to Enable additional logs in Azure pipeline execution

One of the most important aspects of the Azure DevOps pipeline development life cycle is having tools and techniques in place to find the root cause of any error that occurs during pipeline execution. In this article, we will learn how to review the logs that help in troubleshooting errors in pipelines by enabling additional logs during Azure pipeline execution.

By default, an Azure DevOps pipeline provides logs with information about the execution of each step. When an error occurs, or when you need more information to debug, the default logs may not help you understand what went wrong during the pipeline run. In those cases, it helps to get more diagnostic logs about each step in the pipeline.

Below are two different techniques to enable additional logs.

Enable System Diagnostics logs for a specific pipeline run

If you would like to get additional logs for a specific pipeline run, all you need to do is select the Enable system diagnostics checkbox as shown in the image below and click the Run button.

Enable System Diagnostics logs for all pipeline runs

If you always want System Diagnostics enabled to capture the diagnostic trace, for example when the pipeline is triggered automatically (Continuous Integration scenarios), then you need to create a variable in the pipeline with the name system.debug and set its value to true as shown below.

System.debug = true (generates diagnostic logs during pipeline execution)
System.debug = false (suppresses diagnostic logs during pipeline execution)
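The same switch can be set directly in the pipeline YAML instead of the UI; a minimal sketch:

```yaml
# Enables diagnostic logs for every run of this pipeline
variables:
  system.debug: true
```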

Once we set the value of system.debug to true in our variable group (which is referenced in our pipeline), the pipeline starts showing additional logs in purple, as shown below.

System.debug = true

System.debug = false

Note: if you would like to view the logs without any colors, you can click the View raw log button, which opens the logs in a separate browser window where you can save them if required.

Download Logs from Azure pipeline:

If you would like to share the logs with other teams who do not have access to your pipeline, you can download them by clicking the Download logs button on the pipeline summary page, as shown below, and share them.

 

Understanding the directory structure created by Azure DevOps tasks

If you are a beginner in Azure DevOps, understanding when and which folders are created and populated by pipeline tasks is one of the first steps in learning how the platform works.

The Azure DevOps agent supports the Windows, Linux/Ubuntu, and macOS operating systems, but in this post we are going to check from a Windows agent machine. Let us try to understand the folder structure by creating a very simple YAML-based Azure DevOps pipeline and adding tasks based on the instructions below.

Let us list all the folders that are created during a pipeline run. This set of folders is called the workspace, and the local folder path can be referenced using a predefined variable called $(Pipeline.Workspace).

In the YAML below, we add a PowerShell task that prints the folder structure inside $(Pipeline.Workspace).

- task: PowerShell@2
  displayName: Show all Folders in $(Pipeline.Workspace) 
  inputs:
    targetType: 'inline'
    pwsh: true
    script: |
      Get-ChildItem -path $(Pipeline.Workspace)

Once your pipeline has executed, you will see folders such as a, b, s, and TestResults in the workspace, as shown in the snapshot below.

Based on the image above, let us now understand these folders and their usage in detail.

Folder Name: a
Referred using: $(Build.ArtifactStagingDirectory) / $(Build.StagingDirectory) / $(System.ArtifactsDirectory)

The artifact staging directory is a predefined location used in build pipelines for storing the artifacts of the solution being built (simply, its output artifacts). In simple terms, it is the output of the build process for any type of solution (.NET, Java, Python, etc.), or it could be as simple as copied files.

The Publish Build Artifacts task creates an artifact from whatever is in this folder. This folder is cleared/purged before each new build.
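As a sketch of how this folder is typically used, a build stage might copy its output into the staging directory and then publish it (CopyFiles@2 and PublishBuildArtifacts@1 are the standard built-in tasks; the folder choices here are illustrative):

```yaml
steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.BinariesDirectory)'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```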

Folder Name: b
Referred using: $(Build.BinariesDirectory)

The binaries directory is a predefined location for storing the output of compiled binaries that are created as part of the compilation process.

Folder Name: s
Referred using: $(System.DefaultWorkingDirectory) / $(Build.SourcesDirectory)

The default working directory is a predefined location that is mostly used to store the source code of the application. $(System.DefaultWorkingDirectory) is used automatically by the checkout step, which downloads the code as shown in the snapshot below. In short, this is the working directory, where your source code is stored.
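A minimal sketch of this behavior: the checkout step populates the s folder, and the next step can list its contents:

```yaml
steps:
- checkout: self  # downloads the repository into $(Build.SourcesDirectory), i.e. the 's' folder
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Get-ChildItem -Path $(Build.SourcesDirectory)
```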

Folder Name: TestResults
Referred using: $(Common.TestResultsDirectory)

The test results directory is a local directory on the agent that can be used for storing test results.

Summary of directory structure created by Azure DevOps tasks

Folder name   | Referred using                                                                               | Purpose
a             | $(Build.ArtifactStagingDirectory) / $(Build.StagingDirectory) / $(System.ArtifactsDirectory) | Build output/artifacts to publish; purged before each build
b             | $(Build.BinariesDirectory)                                                                   | Output of compiled binaries
s             | $(System.DefaultWorkingDirectory) / $(Build.SourcesDirectory)                                | Source code downloaded by the checkout step
TestResults   | $(Common.TestResultsDirectory)                                                               | Test results

Variable Substitution in Config using YAML DevOps pipeline

As a DevOps engineer, you are responsible for developing Azure DevOps pipelines that replace configuration values per environment (DEV/TEST/PREPROD/PROD), since those values change across environments. In this article, we are going to learn how to dynamically change environment-specific values (variable substitution) in Azure DevOps pipelines using an Azure DevOps extension called Replace Tokens.

In my previous article, we discussed that DevOps engineers should keep all secrets inside a key vault instead of using them directly from an Azure DevOps variable group. But not every project makes the same decision; many projects still use a variable group for maintaining secrets and lock them down. This article focuses on that scenario and explains how to manage environment-specific configurations (variable substitution using Replace Tokens, an Azure DevOps Marketplace extension).

Example Config File

Below is the sample config file which we are going to use for variable substitution in YAML Azure DevOps pipelines.

These configuration values are environment specific and have different values in different environments. DevOps engineers have to develop Azure DevOps pipelines that replace these values in the config below. In my case, smtp-host, smtp-username, and smtp-password differ between the lower environments (dev/qa and preprod) and the higher environments.
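The original sample config is shown as an image; a hypothetical .config file using the task's default #{...}# token pattern (the key names here are assumptions based on the values mentioned above) would look something like this:

```xml
<configuration>
  <appSettings>
    <!-- Tokens below are replaced per environment by the Replace Tokens task -->
    <add key="smtp-host" value="#{smtp-host}#" />
    <add key="smtp-username" value="#{smtp-username}#" />
    <add key="smtp-password" value="#{smtp-password}#" />
  </appSettings>
</configuration>
```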

How to use Replace Tokens Extension in Azure YAML pipeline

Here we are going to use the Replace Tokens task to replace tokens in config files with variable values.

The parameters of the task are described below; the YAML name is given in parentheses:

  • Root directory (rootDirectory): the base directory for searching files. If not specified, the default working directory is used. Default: empty string.
  • Target files (targetFiles): the absolute or relative newline-separated paths of the files in which to replace tokens. Wildcards can be used (e.g. **\*.config for all .config files in all subfolders). Default: **/*.config.
  • Token prefix (tokenPrefix): when using a custom token pattern, the prefix of the tokens to search for in the target files. Default: #{.
  • Token suffix (tokenSuffix): when using a custom token pattern, the suffix of the tokens to search for in the target files. Default: }#.

Example 1:  Replace with Target files parameter

- task: replacetokens@5
  displayName: 'replacing token in new config file'
  inputs:
    targetFiles: |
      src/Feature/dotnethelpers.Feature.Common.General.config
      src/Foundation/dotnethelpers.Foundation.SMTP.config
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '{'
    tokenSuffix: '}'
    useLegacyPattern: false
    enableTelemetry: true

Note: the task only works on text files. If you need to replace tokens in an archive file, you will first need to extract the files and then archive them back.

Example 2:  Replace with Root directory and Target files parameter

As you can see, we can also give the target files comma-separated, as in the YAML below.

- task: replacetokens@5
  inputs:
    rootDirectory: 'src/Feature/Forms/code/App_Config/Include/Feature/'
    targetFiles: 'dotnethelpers.Feature.config,dotnethelpers.Foundation.SMTP.config'
    encoding: 'auto'
    tokenPattern: 'default'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    actionOnNoFiles: 'continue'
    enableTransforms: false
    enableRecursion: false
    useLegacyPattern: false
    enableTelemetry: true

Example 3:  Replace with wildcard

Target files (targetFiles): the absolute or relative newline-separated paths of the files in which to replace tokens. Wildcards can be used (e.g. **\*.config for all .config files in all subfolders). Default: **/*.config.

As per the targetFiles path below, Replace Tokens will search all the .config files inside the node folder and replace tokens where applicable.

- task: replacetokens@5
  displayName: replacing token
  inputs:
    targetFiles: |
      node/*.config
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '#{'
    tokenSuffix: '}#'
    useLegacyPattern: false
    enableTelemetry: true

Sample Output: Variable Substitution

Add Tags on an Azure SQL Database Using PowerShell

As system admins/DevOps engineers, during audits we usually need to close findings quickly (by grouping resources for filtering) instead of doing manual work. Similarly, we received a review comment that all resources need to have tags. Completing this task manually would take huge effort, as we maintain a lot of resources, so we decided to automate it. Here we are going to discuss how to add tags on an Azure SQL database using PowerShell.

What Is A Tag In Azure?

It creates a predefined Azure tag or adds values to an existing tag, and it can create or update the entire set of tags on a resource or subscription.

Azure tagging is an excellent feature from Microsoft that helps you logically group your Azure resources and track them. It also helps to automate the deployment of resources and, another important benefit, provides visibility into the resource costs that teams are liable for.

Syntax: New-AzTag [-ResourceId] <String> [-Tag] <Hashtable> [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>]


What are the ways to create a Tags in Azure?

Azure tags are key-value pairs that can be created and assigned to resources in Azure using the Azure portal, PowerShell, Azure CLI, etc.

Note: Tag names are case-insensitive, but tag values are case-sensitive.

Why Use The Azure tag ?

The main intention of Azure tags is to organize the resources in the Azure portal. Organizing the resources properly helps you identify the category each resource belongs to. So basically, Azure tags are name-value pairs that help organize Azure resources in the Azure portal.

For example, when you have many resources in your Azure portal, tags really help you categorize them. Suppose you have 6 virtual machines (VMs) in your Azure subscription: 2 are in our development environment, 2 are for the QA environment, and the remaining 2 belong to our production environment. We can tag them as Environment = Development, Environment = QA, or Environment = Production, and then easily see which resources come under each specific environment.


How to Create Azure Tags Using PowerShell


Step 1: Connect to your Azure account using Connect-AzAccount

Before starting, please gather the service principal secret, AppId, and tenant ID needed to establish the connection to Azure and perform operations against Azure services.

#Converts plain text or encrypted strings to secure strings.
$SecureServicePrinciple = ConvertTo-SecureString -String "rfthr~SSDCDFSDFE53Lr3Daz95WF343jXBAtXADSsdfEED" -AsPlainText -Force
#Assigning the App ID
$AppId = "0ee7e633-0c49-408e-b956-36d62264f644"
#Assigning the Tenant ID
$tenantId= "32cf8ba2-403a-234b-a3b9-63c2f8311778"
#storing a username and password combination securely.
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $AppId, $SecureServicePrinciple
#Connect to Azure with an authenticated account for use with cmdlets from the Az PowerShell modules.
Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $tenantId

Step 2: Define the tags and assign them as a hashtable

Based on my requirement, I added the below tags; you can create any number of tags based on how you segregate your resources.

$tags = @{"Business Unit"="WX_Digital"; "Environment"="PROD"; "Owner"="dotnet-helpers" ; "Project"="webapp"}


Step 3: Get all the SQL databases in the specific resource group

Example 1: Get single SQL database to update

  • The Get-AzResource cmdlet gets all the resources in the subscription, filtered here by the SQL resource type.
  • After executing the script below, the $RESOURCE.Id variable holds the IDs of the databases in the specified resource type, as shown in the snapshot below.
  • -ResourceType: the resource type of the resource(s) to be retrieved, for example Microsoft.Compute/virtualMachines.
#GET the single database by where condition
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" | Where-Object name -Like 'sqlsrvr-dgtl-pprd/sitecore_master' 

$resourceId = $RESOURCE.Id

Example 2: Get all the database to update

#Gets All the database 
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" 

$resourceIds = $RESOURCE.Id

Step 4: Update new tags using the New-AzTag command

The New-AzTag cmdlet creates a predefined Azure tag or adds values to an existing tag; it can create or update the entire set of tags on a resource or subscription.

-ResourceId: the resource identifier for the entity being tagged. A resource, a resource group, or a subscription may be tagged.

Example 1: Add tags to a single database

#Creates or updates the entire set of tags on a resource or subscription.
#The resource identifier for the entity being tagged. A resource, a resource group or a subscription may be tagged.
New-AzTag -ResourceId $resourceId -Tag $tags

Example 2: Add tags to all databases under the resource group

foreach ($resourceId in $resourceIds) {
    Write-Output $resourceId
    # Creates or updates the entire set of tags on a resource or subscription.
    New-AzTag -ResourceId $resourceId -Tag $tags
}

Output:


PowerShell Error Handling with the $Error Variable

In all programming, code will have errors, and troubleshooting those problems can be difficult. Like other programming languages, PowerShell has error handling mechanisms for handling errors in our programs (in this post, we will discuss error handling with the $Error variable).

In PowerShell, errors fall into 2 categories: terminating and non-terminating. As the name implies, a terminating error stops code execution when the error is thrown. With a non-terminating error, the code continues with the next line of execution when an error is raised.

The $Error Variable

$Error is an automatic global variable in PowerShell which always contains an ArrayList of zero or more ErrorRecord objects. As new errors occur, they are added to the beginning of this list, so you can always get information about the most recent error by looking at $Error[0]. Both terminating and non-terminating errors are part of this list.

How does the $Error variable work?

When you start a new PowerShell session, $Error will be empty. Normally, if you run a Windows PowerShell command and an error occurs, the error record is appended to the automatic variable named $Error. We then use $Error[0] to display and access the rest of the information it holds.

The $Error variable holds a collection of information, which is why $Error[0] gets you to the error message objects. $Error[0] holds the most recent error encountered until the PowerShell session ends.

Example #1: Starting a new PowerShell session

For this example, we started with a new PowerShell window session, so the $Error variable is empty as shown below.

$error[0]

Example #2: Executing the below script which had the error

When an error occurs in our code, it is saved to the automatic variable named $Error. The $Error variable contains an array of recent errors, and you can reference the most recent error in the array at index 0.

In the example below, the path does not exist; instead of letting the command throw an error, we included -ErrorAction SilentlyContinue, and on the next line we write out the current error using the $Error variable.

Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction SilentlyContinue
Write-Warning $Error[0]

Getting Members of $Error Variable

We can use Get-Member to expose the members of PowerShell objects. Using the members listed below, we can dig deeper into the $Error[0] object to extract details.
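For example, piping the most recent error record to Get-Member reveals members such as Exception, InvocationInfo, and TargetObject:

```powershell
# List the properties and methods of the most recent ErrorRecord
$Error[0] | Get-Member
```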

Example #3: Getting the detailed Error using $Error variable

In the example below, we dig deeper into the $Error[0] object to extract the line that failed during execution. This assumes that the error information is available in the first element of the $Error array. The InvocationInfo property of the ErrorRecord object contains information about the context in which the error occurred, including the line number.

Keep in mind that if there are multiple errors in the $Error array, you might want to loop through them or access a specific error by its index. Also, note that this information might not be available for all types of errors, depending on how the error was generated

$Error[0].InvocationInfo

#Display the failed code line
Write-Host "Error occurred at line : " $Error[0].InvocationInfo.Line

How to Create and Use PowerShell Modules

What is Module in PowerShell?

As per docs.microsoft.com, a module is a package that contains PowerShell members, such as cmdlets, providers, functions, workflows, variables, and aliases. The members of this package can be implemented in a PowerShell script, a compiled DLL, or a combination of both. These files are usually grouped together in a single directory.

In simple terms, PowerShell modules allow us to organize our functions and use them in other scripts; modules let you combine multiple scripts to simplify code management, accessibility, and sharing. Still, many PowerShell scripters are slow to take the step of building a module. A module allows you to use the same code in many places without copying and pasting it all over the place.

When do I create a PowerShell module?

  • When the same script needs to be used more than once.
  • When we need to break a script into functions because it is getting too complex to stay in a single script.
  • When we need to share the code with others.

In this post, you can learn Step-by-step instructions on creating and using modules.

STEP #1 Starting with a PowerShell Function

A PowerShell module can store any number of functions. To create a new module, we start by creating a PowerShell function. When your scripts get large, you start using more functions. These functions could come from someone else or be functions that you write yourself, and they start collecting at the top of your script.

In the example below, we create a function called Get-BIOSInfo which outputs the BIOS information for a specific system.

function Get-BIOSInfo
{
    param($ComputerName)
    Get-WmiObject -ComputerName $ComputerName -Class Win32_BIOS
}

Get-BIOSInfo -ComputerName localhost

STEP #2 Create a separate Folder for Custom Module 

All custom modules need to be saved under the Modules folder; typically the location is C:\Program Files\WindowsPowerShell\Modules. We need to create a separate folder for our module, so here we create a folder called Get-BIOSInfo as shown below.

STEP #3 Save the Function as Module with .psm1 extension 

Next, we need to save our function under the Get-BIOSInfo folder. The most important thing is that the folder name must match the module name. Now I have the Get-BIOSInfo module saved/created, and I've called it Get-BIOSInfo.psm1, so I can ask my team to use it.

To make our function into a module, the file needs to be saved with the .psm1 extension as shown below.
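As a sketch, the saved Get-BIOSInfo.psm1 simply contains the function from Step 1 (the path shown is the default module location assumed above):

```powershell
# C:\Program Files\WindowsPowerShell\Modules\Get-BIOSInfo\Get-BIOSInfo.psm1
function Get-BIOSInfo
{
    param($ComputerName)
    Get-WmiObject -ComputerName $ComputerName -Class Win32_BIOS
}
```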

STEP #4 Test-Driving Your Module

PowerShell automatically loads your new module and makes all of its commands available. Executing the Get-Module cmdlet shows that your module contains just one function, Get-BIOSInfo. To understand what has just happened, I ran the Get-Module cmdlet and have shown the output below.

STEP #5 Finally, import your module to utilize it in any script

Open a different PowerShell window (console or ISE). Your command Get-BIOSInfo is available immediately! It is now a standard PowerShell command, just like the other commands you use. Importing the module brings all of its functions and variables into each user's PowerShell session.

Note:

  • PowerShell caches modules, so once you have loaded and used a module in a PowerShell session, changes to the module will not take effect. To see the changes, either use the module in a new PowerShell host or force a complete module reload.
  • The module name should not be the name of your function. It should be a generic name, such as a topic, because later you will want to store more functions in the module.
  • Do not use special characters or whitespace in your module name.
  • A PowerShell module can store any number of functions.
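For example, a forced reload after editing the module files might look like this (a sketch; the module name follows the example above):

```powershell
# Force a complete reload of the cached module
Import-Module Get-BIOSInfo -Force

# Or remove it from the session and import it again
Remove-Module Get-BIOSInfo
Import-Module Get-BIOSInfo
```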

Conclusion

Being able to create a module in PowerShell directly is super handy, and it gives us real flexibility in our day-to-day DevOps and other automation tasks.

How to Check SSL Certificate Expiration Date in PowerShell

SSL (Secure Sockets Layer) certificates provide an encrypted connection between server and client and authenticate a website's identity. To keep user-sensitive data secure and provide trust to users, it is very important to check SSL certificate expiration and renew certificates when they are due. The challenge for the support team during a renewal activity is that checking all the domains, each with a different certificate, becomes a critical job. To overcome this challenge, we thought of a PowerShell script to validate all the domains before and after the renewal activity. Let us discuss how to check SSL certificate expiration dates in PowerShell.

In PowerShell, we can use [Net.HttpWebRequest] to make an HTTP web request to the website and get all the properties associated with it, including the certificate details. This helps find the SSL certificate expiration date and other details of the certificate.

System.Net.ServicePoint is the .NET class that represents a connection to an internet resource; ServicePointManager manages the collection of ServicePoint objects and returns the ServicePoint object that contains the information about the internet resource URI.

Check SSL Certificate Expiration Date

Step: 1 Get the URL properties

The PowerShell lines below use [Net.HttpWebRequest] to create an HTTP web request to the website URI and retrieve the URI properties such as Address, ConnectionName, Certificate, etc. into the $webRequest variable.

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create Web Http request to URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)
# Send the request so the certificate details are populated
$null = $webRequest.GetResponse()

Step: 2 Retrieve the certificate start and end dates

As we already have the certificate details in $webRequest, we can retrieve the certificate start and end dates as shown below. $webRequest.ServicePoint.Certificate gets the certificate details such as the issuer, handle, and SSL certificate thumbprint. We can use the GetExpirationDateString() method to check the SSL expiration date for a website using PowerShell.

# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End   = $webRequest.ServicePoint.Certificate.GetExpirationDateString()

Step: 3 Find the no. of remaining days until expiration

# Calculate the no. of days remaining until expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays

Full Code: Check SSL Certificate Expiration Date in PowerShell

The full code below helps check the SSL certificate expiration date in PowerShell for a single domain; if you want to check multiple URLs, place all the domains in a txt file and loop the same code for validation.

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create Web Http request to URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)
# Send the request so the certificate details are populated
$null = $webRequest.GetResponse()
# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End = $webRequest.ServicePoint.Certificate.GetExpirationDateString()
# Calculate the no. of days remaining until expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays
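As a sketch of the multi-domain case, assuming a hypothetical domains.txt file with one URL per line, the same logic can be wrapped in a loop:

```powershell
[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# domains.txt is a hypothetical file containing one URL per line
Get-Content -Path "C:\dotnet-helpers\domains.txt" | ForEach-Object {
    $webRequest = [Net.HttpWebRequest]::Create($_)
    # Send the request so the certificate details are populated
    $null = $webRequest.GetResponse()
    $End = $webRequest.ServicePoint.Certificate.GetExpirationDateString()
    $ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
    Write-Host "$_ expires on $End ($ExpirationDays days left)"
}
```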

OUTPUT:

Import bulk Variables to Variable Group using Azure DevOps CLI

My Scenario:

As a system admin/DevOps engineer, maintaining a variable group is a little tricky, as it is very difficult to track history and changes. In one of our migration projects, we got a requirement to load a large number of pipeline variables whose values were in Excel. It is easy to copy/paste when there are just 20 key-value pairs. However, think about a scenario where you need to repeat this for many variable groups across multiple projects: it is a tedious job, manually creating the new key-values in the variable group takes a lot of time, and there will surely be human error. To overcome this problem, we decided to import bulk variables into a variable group using the Azure DevOps CLI.

What format did we get the Excel in?

Instead of adding them directly from the Azure DevOps portal, we will automate the process of adding the key-value pairs, avoiding any manual data-entry, as we got a huge number of variables.


Prerequisite

Step 1: Retrieve the Variable Group ID:

The variable group needs to be ready before importing the variables from Excel. For this example, I already created a variable group named “mytestvariablegroup” (as shown in the snap below) and noted the variable group ID (this ID is unique for each variable group). In my case, the variable group ID is 1, as shown in the snapshot below. This ID is used in the generated Azure DevOps CLI commands to create the variables dynamically.

Step 2: Generate Azure DevOps CLI commands using Excel Formula

Navigate to the Excel sheet, add another column, and paste the formula below. Columns B and C contain the variable name and variable value. Apply the formula to all the rows; once applied, it should look something like below.

=CONCAT("az pipelines variable-group variable create --group-id 1 --name """,B2,""" --value """,C2,"""")

Import bulk Variables to Variable Group using Azure DevOps CLI

Step 3: Login to Azure DevOps from Command Line

Interactive mode

Before we start with Azure CLI commands, it’s mandatory to authenticate using Azure DevOps credentials. If you are using same account for both Azure and Azure DevOps then you can use the below command to authenticate.

az login

After you press Enter, it opens the browser to authenticate with your login details.

Step 4: Set Default Organization

Run the command below to set the organization where we are going to update the variable group.

az devops configure -d organization=https://dev.azure.com/thiyaguDevops/

Step 5: Set Default Project

Run the below command to set the default Project.

az devops configure -d project=poc

Step 6: Execute the Azure DevOps CLI commands

In step 2, we generated all the commands in Excel. Now it is time to execute them. Copy all the rows containing the generated commands (column D, values only, without any header) and paste them all at once into the command prompt.

Note: there is no need to copy/paste one by one from Excel; copy everything from column D and paste it in a single go, and PowerShell will execute the commands one after another.

Step 7: Review the output

Finally, it is time to view the results in our variable group. Navigate to the variable group and refresh the page to view all the new variables, as shown below.

 

 

How to use the variable group at runtime in Azure YAML Pipeline

When & Where to use?

We received a request to pass the variable group as a runtime parameter, so that whenever I run the pipeline it allows me to select the variable group name as an input, and the pipeline proceeds based on the value selected at runtime. In this article, we will discuss how to use the variable group at runtime in an Azure YAML pipeline.

This can be achieved by using runtime parameters. Runtime parameters let you have more control over what values can be passed to a pipeline.

What are runtime parameters?

You can specify parameters in templates and in the pipeline. Parameters have data types such as number and string, and they can be restricted to a subset of values. The parameters section in a YAML file defines what parameters are available. These runtime parameters allow you to have more control over the parameter values you pass to your pipelines.

Parameters are only available at template parsing time. Parameters are expanded just before the pipeline runs so that values surrounded by ${{ }} are replaced with parameter values. Use variables if you need your values to be more widely available during your pipeline run.

Note: If you are going to trigger the pipeline manually then you can make use of Runtime parameters in the Azure DevOps pipeline.

Unlike variables, runtime parameters have data types and do not automatically become environment variables.

Let us see how to use the variable group at runtime in an Azure YAML pipeline.

Step 1: Define the parameters with a values section

Always set runtime parameters at the beginning of the YAML. This example pipeline accepts the variable group name and then uses its values in the jobs.

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

Step 2: Assign the selected value to the variable group.

After selecting the variable group during a manual run, the selected value is assigned using ${{ parameters.<parameter_name> }}. Once the runtime parameter is assigned, the subsequent stages/jobs can use the values.

variables:
- group: ${{ parameters.variable_group }}

Step 3: Use the values from the selected variable group

Based on the variable group assigned from the runtime parameter, the remaining stages can fetch values from the variable group, such as agentPool.

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

Full YAML Code

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

variables:
- group: ${{ parameters.variable_group }}

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

Output