Category Archives: Devops

How to pass values between Tasks in a Pipeline using task.setvariable Command

Problem?

An Azure DevOps pipeline is a set of Tasks, each of which performs a specific piece of work, and these Tasks run inside an Agent machine (i.e., a Virtual Machine). While a Task is executing, it is allocated some resources, and after the Task execution completes, the allocated resources are de-allocated. This entire allocation / de-allocation process repeats for the other Tasks in the pipeline. It means Task1 cannot directly communicate with Task2 or any other subsequent Task in the pipeline (to pass values between Tasks), as their scopes of execution are completely isolated even though they execute on the same Virtual Machine.

In this article, we are going to learn how you can communicate between Tasks and pass values between Tasks in a pipeline.

How to Pass Values between Tasks?

When you use PowerShell and Bash scripts in your pipelines, it’s often useful to be able to set variables that you can then use in future tasks. Newly set variables aren’t available in the same task. You’ll use the task.setvariable logging command to set variables in PowerShell and Bash scripts.

What is task.setvariable?

task.setvariable is a logging command that can be used to create a variable that can be used across Tasks within the pipeline, whether they are in the same job of a stage or across stages. The vso prefix in the command stands for Visual Studio Online, which is part of Azure DevOps' early roots.

"##vso[task.setvariable variable=myStageVal;isOutput=true]this is a stage output variable"

Example:

- powershell: |
    Write-Host "##vso[task.setvariable variable=myVar;]foo"

- bash: |
    echo "##vso[task.setvariable variable=myVar;]foo"

SetVariable Properties

The task.setvariable command includes properties for setting a variable as secret, as an output variable, and as read only. The available properties include the following (a combined example follows the list):

  • variable = the variable name (required)
  • issecret = set to true to mark the variable as a secret
  • isoutput = set to true to use the variable in another job or in the next stage
  • isreadonly = set to true to make the variable read only, so it can't be overwritten by downstream tasks
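Here is a minimal sketch combining these properties in an inline PowerShell step (the variable names and values are hypothetical):

- powershell: |
    # mySecret is masked in logs; myFixedVar cannot be overwritten by downstream tasks
    Write-Host "##vso[task.setvariable variable=mySecret;issecret=true]s3cr3tValue"
    Write-Host "##vso[task.setvariable variable=myFixedVar;isreadonly=true]fixedValue"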

Share variables between Tasks within a Job

Let's now create a new variable in Task1, assign some value to it, and access the variable in the next Task.

  • Create a variable named Token using the setvariable syntax and assign it some test value (e.g., TestTokenValue).
  • Display the value of the Token variable in the next Task, as shown below (Task name 'Stage1-Job1-Task2').
stages:
- stage: Stage1
  jobs:
  - job: Job1
    steps:
    - task: PowerShell@2
      displayName: 'Stage1-Job1-Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "##vso[task.setvariable variable=token]TestTokenValue"
    - task: PowerShell@2
      displayName: 'Stage1-Job1-Task2'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "the Value of Token : $(token)"

Now, view the output of the variable in Stage1-Job1-Task2, as shown below.


Share variables between Tasks across the Jobs (of the same Stage)

As discussed in the SetVariable properties section, we need to use the isoutput=true flag when we want to use the variable in a Task located in another Job.

pool:
  name: devopsagent-w-pprd01

stages:
- stage: Stage1
  jobs:
  - job: Stage1_Job1
    steps:
    - task: PowerShell@2
      name: 'Stage1_Job1_Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "##vso[task.setvariable variable=token;isoutput=true;]TestTokenValue"

  - job: Stage1_Job2
    dependsOn: Stage1_Job1
    variables:
    - name: GetToken
      value: $[dependencies.Stage1_Job1.outputs['Stage1_Job1_Task1.token']]
    steps:
    - task: PowerShell@2
      displayName: 'Stage1-Job2-Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "the Value of Token : $(GetToken)"
  1. Navigate to Stage1_Job1_Task1 and add the isoutput=true flag to the logging command, which lets us access the value outside the Job.
  2. The Job in which you want to access the variable must depend on the Job that produces the output. Add dependsOn: Stage1_Job1 to Stage1_Job2.
  3. In Stage1_Job2, create a new variable named GetToken and set its value to $[dependencies.Stage1_Job1.outputs['Stage1_Job1_Task1.token']]. This lets you access the variable value produced in the dependent job. You can't use this expression directly in the script; it's mandatory to map the expression into the value of another variable.
  4. Finally, access the new variable in your script.
  5. Once isoutput=true is added, it's important to access the variable by prefixing the Task name; otherwise, it won't work.

OUTPUT:

Below, you can see that Job2 can access the output of Job1.

Share variables between Tasks across Stages

As per the code below, I didn't specify a dependency (using dependsOn) between the Stages, as Stage1 and Stage2 run one after the other. If you would like to access Stage1's variable in Stage3, then Stage3 must explicitly depend on Stage1.

To access a value from one stage in another, we need to use the stageDependencies attribute, whereas between jobs we used dependencies, as shown in the YAML above.

pool:
  name: devopsagent-w-pprd01

stages:
- stage: Stage1
  jobs:
  - job: Stage1_Job1
    steps:
    - task: PowerShell@2
      name: 'Stage1_Job1_Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "##vso[task.setvariable variable=token;isoutput=true;]TestTokenValue"

- stage: Stage2
  jobs:
  - job: Stage2_Job1
    variables:
    - name: getToken
      value: $[stageDependencies.Stage1.Stage1_Job1.outputs['Stage1_Job1_Task1.token']]
    steps:
    - task: PowerShell@2
      displayName: 'Stage2-Job1-Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "the Value of Token from Stage2: $(getToken)"

OUTPUT:

Rebuild index to reduce Fragmentation in SQL Server

Here we will learn how to identify index fragmentation and resolve it by rebuilding indexes in SQL Server. Index fragmentation identification and index maintenance are important parts of the database maintenance task. Microsoft SQL Server keeps updating the index statistics with the Insert, Update, or Delete activity over the table. Index fragmentation is reported as a percentage, which can be fetched through a SQL Server DMV. Based on that fragmentation percentage, users can maintain the indexes with a Rebuild or Reorganize operation.

In SQL Server, both “rebuild” and “reorganize” refer to operations that can be performed on indexes to address fragmentation. However, they are distinct operations with different characteristics. Let’s explore the differences between rebuilding and reorganizing indexes:
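For contrast, both operations are issued through the same ALTER INDEX statement. A minimal sketch, using a hypothetical index IX_Orders_CustomerId on dbo.Orders:

-- REORGANIZE: an online, lighter-weight operation that defragments the leaf level
-- (commonly suggested for moderate fragmentation, roughly 5-30%)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

-- REBUILD: drops and recreates the index from scratch
-- (commonly suggested for heavy fragmentation, roughly above 30%)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;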

Note: Index optimization is one of the maintenance activities that improve query performance and reduce resource consumption. Ensure you plan to perform index maintenance during off-business hours or low-traffic hours (fewer requests to the database).

Advantages of Rebuild index to reduce Fragmentation:

  • Removes both internal and external fragmentation.
  • Reclaims unused space on data pages.
  • Updates statistics associated with the index.

Considerations:

  • Requires more system resources.
  • Locks the entire index during the rebuild process, potentially causing blocking.

How to find the Fragmentation?

Here we execute SQL scripts to check the fragmentation details for the current database; the scripts below report fragmentation as a percentage.

Method 1:

DECLARE @cutoff_date DATETIME = DATEADD(day, -20, GETDATE()); -- cutoff date for optional usage-stats filtering (not applied below)

SELECT OBJECT_NAME(ip.object_id) AS TableName,
       i.name AS IndexName,
       ip.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ip
JOIN sys.indexes i
    ON ip.object_id = i.object_id AND ip.index_id = i.index_id
JOIN sys.dm_db_index_usage_stats ius
    ON ip.object_id = ius.object_id AND ip.index_id = ius.index_id

Method 2:

SELECT
DB_NAME() AS DBName
,OBJECT_NAME(ps.object_id) AS TableName
,i.name AS IndexName
,ips.index_type_desc
,ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ips.avg_fragmentation_in_percent DESC

Rebuild index to reduce Fragmentation:

The REBUILD operation involves recreating the entire index. This process drops the existing index and builds a new one from scratch. During the rebuild, the index is effectively offline, and there can be a period of downtime where the index is not available for queries. Simply put, REBUILD locks the table for the whole operation period (which may be hours or even days if the table is large). The syntax for rebuilding an index is as follows:
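A minimal sketch of the statement, again using the hypothetical index IX_Orders_CustomerId on dbo.Orders:

-- Rebuild a single index
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

-- Or rebuild all indexes on one table
ALTER INDEX ALL ON dbo.Orders REBUILD;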

Rebuilding Full Index on selected database:

After executing the index rebuild on a specific database, you will be able to see that the fragmentation is reduced, as shown in the image below.

-- Rebuild ALL Indexes
-- This will rebuild all the indexes on all the tables in your database.

SET NOCOUNT ON
GO

DECLARE rebuildindexes CURSOR FOR
SELECT table_schema, table_name
FROM information_schema.tables
WHERE TABLE_TYPE = 'BASE TABLE'
OPEN rebuildindexes

DECLARE @tableSchema NVARCHAR(128)
DECLARE @tableName NVARCHAR(128)
DECLARE @Statement NVARCHAR(300)

FETCH NEXT FROM rebuildindexes INTO @tableSchema, @tableName

WHILE (@@FETCH_STATUS = 0)
BEGIN
   SET @Statement = 'ALTER INDEX ALL ON '  + '[' + @tableSchema + ']' + '.' + '[' + @tableName + ']' + ' REBUILD'
   --PRINT @Statement 
   EXEC sp_executesql @Statement  
   FETCH NEXT FROM rebuildindexes INTO @tableSchema, @tableName
END

CLOSE rebuildindexes
DEALLOCATE rebuildindexes
GO
SET NOCOUNT OFF
GO

 

How to Continue Azure Pipeline on failed task

Introduction

Sometimes failing scripts do not fail the task when they should, and sometimes a failing command should not fail the task. How do we handle these situations and continue an Azure Pipeline on a failed task?

Sometimes you may have Pipeline Tasks that depend on an external reference which has a chance to fail at any time. In these scenarios, if the Task fails (or fails intermittently) due to an issue in the external reference, your entire pipeline fails, and you have no insight into when the bug will be fixed. In such cases, you may still want the pipeline to run despite the issue in that Task, and ensure that any future issue with that Task won't lead to a pipeline failure.

This simple technique can be used in scenarios where you have a non-mandatory task that's failing intermittently and you want to continue the execution of the pipeline.

Solution:

In this case, it absolutely makes sense to continue the execution of the next set of Tasks (continue the Azure Pipeline on a failed task). In this post, we are going to learn how to continue the execution of the pipeline if a particular Task has failed, using the continueOnError / failOnStderr / failOnStandardError properties.

Using the continueOnError attribute in a PowerShell/script Task

Let's build a pipeline with a few tasks and simulate an error in one of them, as shown below. Note the following points in the code.

In the task named "continueOnError Task", targetType: 'filePath' was intentionally set while an inline script was supplied instead of a mapped script file, in order to simulate an error. Second, the continueOnError attribute has been added so that any error in this task is ignored while executing the pipeline.

steps:
- task: PowerShell@2
  displayName: "continueOnError Task"
  continueOnError: true
  inputs:
    targetType: 'filePath'   # intentionally set to filePath with no script file mapped, to simulate an error
    script: |
      Write-Host "Continue if any issue here"

- task: PowerShell@2
  displayName: "No Error Task"
  continueOnError: true
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Block if any issue here"

Now, when you run the pipeline, an indication of the error for that Task is shown and the execution carries on, as shown below.

 

As with the PowerShell task, you can continue on error in other tasks as well, as shown below (click the link reference below to learn about other tasks).

Summary:

In this post, we have learnt how to continue the execution of the pipeline in spite of having an error in one of the tasks, that is, how to continue an Azure Pipeline on a failed task. This simple technique can be used in scenarios where you have a non-mandatory task that's failing intermittently and you want to continue the execution of the pipeline.

How to Enable additional logs in Azure pipeline execution

One of the most important aspects of the Azure DevOps pipeline development life cycle is to have tools and techniques in place to find the root cause of any error that occurs during pipeline execution. In this article, we will learn how to review the logs that help in troubleshooting errors in pipelines by enabling additional logs during Azure pipeline execution.

By default, an Azure DevOps pipeline provides logs with information about the execution of each step in the pipeline. In case of an error, or when you need more information to debug, the default logs won't always help you understand what went wrong in the pipeline execution. In those cases, it helps to get more diagnostic logs about each step in the Azure DevOps pipeline.

Below are the two different techniques to enable the feature of getting additional logs.

Enable System Diagnostics logs for a specific execution of the pipeline.

If you would like to get additional logs for a specific pipeline execution, all you need to do is tick the Enable system diagnostics checkbox, as shown in the image below, and click the Run button.

Enable System Diagnostics logs for all executions of the pipeline.

If you always want System Diagnostics enabled, so the diagnostics trace is captured even when the pipeline is executed automatically (Continuous Integration scenarios), then you need to create a variable in the pipeline with the name system.debug and set its value to true, as shown below.

system.debug = true (generates diagnostic logs during the execution of pipelines)
system.debug = false (does not generate diagnostic logs during the execution of pipelines)
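A minimal sketch of declaring this variable directly in the pipeline YAML (as an alternative to setting it in a variable group):

variables:
- name: system.debug
  value: true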

Once we set the value of system.debug to true in our variable group (which is referenced in our pipeline), the pipeline starts showing additional logs in purple, as shown below.

System.debug = true

System.debug = false

Note: If you would like to view the logs without any colors, you can click the View raw log button, which opens the logs in a separate browser window, and you can save them if required.

Download Logs from Azure pipeline:

In case you would like to share the logs with other teams who don't have access to your pipeline, you can download them by clicking the Download logs button on the pipeline summary page, as shown below, and share them.

Understanding the directory structure created by Azure DevOps tasks

If you are a beginner in Azure DevOps, understanding when and which folders are created and populated by the pipeline tasks is one of the first steps in learning the directory structure created by Azure DevOps tasks.

The Azure DevOps Agent supports Windows, Linux/Ubuntu, and macOS operating systems, but in this post we are going to check from a Windows agent machine. Let's try to understand the folder structure by creating a very simple YAML-based Azure DevOps pipeline and adding Tasks based on the instructions below.

Let's understand the directory structure created by Azure DevOps tasks!

We are going to list all the folders (using the YAML below with a PowerShell task) that are created for a given pipeline run. This set of folders is called the Workspace, and its local folder path can be referenced using a pre-defined variable called $(Pipeline.Workspace).

In the YAML below, we add a PowerShell task that prints the folder structure inside $(Pipeline.Workspace):

- task: PowerShell@2
  displayName: Show all Folders in $(Pipeline.Workspace) 
  inputs:
    targetType: 'inline'
    pwsh: true
    script: |
      Get-ChildItem -path $(Pipeline.Workspace)

Once your pipeline has executed, you will be able to see folders like a, b, s, and TestResults available in the workspace, as shown in the snapshot below.

Now, based on the above image, let's understand these folders and their usage in detail.

Folder Name: a
Referred using: $(Build.ArtifactStagingDirectory) / $(Build.StagingDirectory) / $(System.ArtifactsDirectory)

The Artifact Staging Directory is a pre-defined variable used in build pipelines for storing the artifacts of the solution being built (simply, its output artifacts). If you're confused, in simple terms it is the output of the build process for any type of solution (.NET, Java, Python, etc.), or it could be as simple as copied files.

The Publish Build Artifacts task creates an artifact of whatever is in this folder. This folder gets cleared/purged before each new build.
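For illustration, a typical publish step that picks up this folder might look like the sketch below (the artifact name 'drop' is an assumption):

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'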

Folder Name: b
Referred using: $(Build.BinariesDirectory)

The Binaries Directory is a pre-defined variable used for storing the output of compiled binaries that are created as part of the compilation process.

Folder Name: s
Referred using: $(System.DefaultWorkingDirectory) / $(Build.SourcesDirectory)

The Default Working Directory is a pre-defined variable that is mostly used to store the source code of the application. $(System.DefaultWorkingDirectory) is used automatically by the checkout step, which downloads the code automatically, as shown in the snapshot below. In simple terms, this is the working directory, and it is where your source code is stored.
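The checkout step itself is usually implicit, but it can be written explicitly; a minimal sketch:

steps:
- checkout: self   # clones the repository into $(Build.SourcesDirectory), i.e. the "s" folder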

Folder Name: TestResults
Referred using: $(Common.TestResultsDirectory)

The Test Results Directory is a local directory on the agent that can be used for storing test results.
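For example, a test-publishing step could point at this folder; a minimal sketch, assuming JUnit-format result files:

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TEST-*.xml'
    searchFolder: '$(Common.TestResultsDirectory)'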

Summary of directory structure created by Azure DevOps tasks

Folder Name | Referred using | Purpose
a | $(Build.ArtifactStagingDirectory), $(Build.StagingDirectory), $(System.ArtifactsDirectory) | Output artifacts of the build; purged before each new build
b | $(Build.BinariesDirectory) | Output of compiled binaries
s | $(System.DefaultWorkingDirectory), $(Build.SourcesDirectory) | Working directory containing the checked-out source code
TestResults | $(Common.TestResultsDirectory) | Local directory for storing test results

Variable Substitution in Config using YAML DevOps pipeline

As a DevOps engineer, you are responsible for developing Azure DevOps pipelines that replace configuration values based on the environment (DEV/TEST/PREPROD/PROD), since the configuration values can change across environments. In this article, we are going to learn how to dynamically change environment-specific values (variable substitution) in Azure DevOps pipelines using an Azure DevOps extension called Replace Tokens.

In my previous article, we discussed that DevOps engineers need to ensure all secrets are kept inside the Key Vault instead of being used directly from the Azure DevOps variable group. But not all projects make the same decision; many projects still use the variable group for maintaining secrets and lock them down. This article focuses on that case and explains how to manage environment-specific configurations (variable substitution) using Replace Tokens, an Azure DevOps Marketplace extension.

Example Config File

Below is the sample config file which we are going to use for variable substitution in YAML Azure DevOps pipelines.

These configuration values must be environment specific, and they have different values in different environments. DevOps engineers have to develop Azure DevOps pipelines that replace these values in the config below. In my case, the smtp-host, smtp-username, and smtp-password values are different for the lower environments (dev/qa and preprod) and the higher environments.
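A minimal sketch of such a config file, with hypothetical tokenized SMTP settings (the tokens use the #{...}# pattern that the Replace Tokens task replaces):

<configuration>
  <appSettings>
    <add key="smtp-host" value="#{smtp-host}#" />
    <add key="smtp-username" value="#{smtp-username}#" />
    <add key="smtp-password" value="#{smtp-password}#" />
  </appSettings>
</configuration>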

How to use Replace Tokens Extension in Azure YAML pipeline

Here we are going to use the Replace Tokens task to replace tokens in config files with variable values.

The parameters of the task are described below; the YAML name is given in parentheses:

  • Root directory (rootDirectory): the base directory for searching files. If not specified the default working directory will be used. Default is empty string
  • Target files (targetFiles): the absolute or relative newline-separated paths to the files to replace tokens. Wildcards can be used (eg: **\*.config for all .config files in all sub folders). Default is **/*.config
  • Token prefix (tokenPrefix): when using custom token pattern, the prefix of the tokens to search in the target files. Default is #{
  • Token suffix (tokenSuffix): when using custom token pattern, the suffix of the tokens to search in the target files. Default is }#

Example 1:  Replace with Target files parameter

- task: replacetokens@5
  displayName: 'replacing token in new config file'
  inputs:
    targetFiles: |
      src/Feature/dotnethelpers.Feature.Common.General.config
      src/Foundation/dotnethelpers.Foundation.SMTP.config
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '{'
    tokenSuffix: '}'
    useLegacyPattern: false
    enableTelemetry: true

Note: the task only works on text files. If you need to replace tokens in an archive file, you will need to extract the files first and archive them back afterwards.

Example 2:  Replace with Root directory and Target files parameter

As you can see, we can also give the target files comma-separated, as in the YAML below.

- task: replacetokens@5
  inputs:
    rootDirectory: 'src/Feature/Forms/code/App_Config/Include/Feature/'
    targetFiles: 'dotnethelpers.Feature.config,dotnethelpers.Foundation.SMTP.config'
    encoding: 'auto'
    tokenPattern: 'default'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    actionOnNoFiles: 'continue'
    enableTransforms: false
    enableRecursion: false
    useLegacyPattern: false
    enableTelemetry: true

Example 3:  Replace with wildcard

Wildcards can be used in the targetFiles parameter (e.g., **\*.config for all .config files in all subfolders).

As per the targetFiles path below, Replace Tokens will search all the .config files inside the node folder and replace tokens where applicable.

- task: replacetokens@5
  displayName: replacing token
  inputs:
    targetFiles: |
      node/*.config
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '#{'
    tokenSuffix: '}#'
    useLegacyPattern: false
    enableTelemetry: true

Sample Output: Variable Substitution

How To Copy Secrets From KeyVault To Another In Azure

My Scenario:

In my case, we are configuring the application to be available in two regions for high availability. During the configuration, we observed a large number of secrets in region1, and it's very difficult to move them one by one to region2 (i.e., to the key vault in the other region). So we thought to automate this process instead of doing it manually; without the manual effort and errors, we can copy all secrets from one Key Vault to another in Azure. This blog will help you understand how to copy secrets from one Key Vault to another in Azure using a PowerShell script.

To clone a secret between key vaults, we need to perform two steps:

  1. Retrieve/export the secret value from the source key vault.
  2. Import this value into the destination key vault.

You can also refer to the link below to learn how to maintain your secrets in Key Vault and access them in a YAML pipeline.

Step 1: Install Azure AZ module

Use the cmdlet below to install the Azure PowerShell module if it is not already installed.

# Install the Azure PowerShell module if not already installed
  Install-Module -Name Az -Force -AllowClobber

Step 2: Set Source and destination Key Vault name

# Pass both Source and destination Key Vault Name
Param(
    [Parameter(Mandatory)]
    [string]$sourceKvName,
    [Parameter(Mandatory)]
    [string]$destinationKvName
)

Step 3: Connect to Azure to access the Key Vault (non-interactive mode)

As we are automating this, you can't use Connect-AzAccount (which pops up a prompt to authenticate); if you want to execute without any manual intervention, use az login in non-interactive (service principal) mode as shown below.

# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "0ff3664821-0c94-48e0-96b5-7cd6422f46" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"

Step 4: Get all the secret names from the source KV

# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name

Step 5: Copy Secrets From source to destination KV.

The script below loops over the secret names, fetches both the name of each key and its value from the source Key Vault, and sets the same key and value in the destination Key Vault.

# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
    Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
        -SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}

Full code

# Pass both Source and destination Key Vault Name
Param(
    [Parameter(Mandatory)]
    [string]$sourceKvName,
    [Parameter(Mandatory)]
    [string]$destinationKvName
)

# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "422f464821-0c94-48e0-96b5-7cd60ff366" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"

# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name

# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
    Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
        -SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}

 

Import bulk Variables to Variable Group using Azure DevOps CLI

My Scenario:

As a System Admin/DevOps engineer, maintaining a variable group is a little tricky, as it's very difficult to maintain the history and changes. In one of our migration projects, we got a requirement to add a large number of pipeline variables whose values were provided in Excel. It's definitely very easy to copy/paste when there are just 20 key-value pairs, as in this scenario. However, think about a scenario where you need to repeat this for many variable groups across multiple projects: it's definitely a tedious job, manually creating the new key-values in the variable group takes more time, and there will surely be human error. To overcome this problem, we thought to import bulk variables into a variable group using the Azure DevOps CLI.

What format did we get the Excel in?

Instead of adding them directly from the Azure DevOps portal, we will leverage automation to add the key-value pairs without doing any manual data-entry job, as we have a huge number of variables.


Prerequisite

Step 1: Retrieve the Variable Group ID:

The variable group needs to be ready before importing the variables from Excel. For this example, I already created a variable group known as "mytestvariablegroup" (as shown in the snap below) and noted the variable group ID (this ID is unique to each variable group). In my case, the Variable Group ID is 1, as shown in the snapshot below. This ID is used in the generated commands (Step 2) and their execution (Step 6) to dynamically create the variables using Azure DevOps CLI commands.

Step 2: Generate Azure DevOps CLI commands using Excel Formula

Navigate to the Excel sheet, add another column, and paste the formula below. B2 and C2 are the cells containing the variable name and variable value; replace the group ID in the formula with the ID you noted in Step 1. Then apply the formula to all the rows. Once you apply the formula to all the rows, it should look something like the image below.

=CONCAT("az pipelines variable-group variable create --group-id 2 --name """,B2,""" --value """,C2,"""")
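For a hypothetical row (name ConnString, value Server=db1), the formula would emit a command like:

az pipelines variable-group variable create --group-id 2 --name "ConnString" --value "Server=db1"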


Step 3: Login to Azure DevOps from Command Line

Interactive mode

Before we start with the Azure CLI commands, it's mandatory to authenticate with your Azure DevOps credentials. If you are using the same account for both Azure and Azure DevOps, then you can use the command below to authenticate.

az login

After you press Enter, it opens the browser to authenticate the login details.

Step 4: Set Default Organization

Run the command below to set the organization where we are going to update the variables.

az devops configure -d organization=https://dev.azure.com/thiyaguDevops/

Step 5: Set Default Project

Run the below command to set the default Project.

az devops configure -d project=poc

Step 6: Execute the Azure DevOps CLI commands

In Step 2, we generated all the commands in Excel. Now it's time to execute them. Copy all the rows containing the formula results (column D, values only, without the header) and paste them all at once into the command prompt.

Note: There is no need to copy/paste one by one from Excel; copy everything from column D and paste it in a single go, and the shell will take care of executing each line.

Step 7: Review the output.

Finally, it's time to view the results in our variable group. Navigate to the variable group and refresh the page to view all the newly added variables, as shown below.

 

 

Search and Replace String Using the sed Command in Linux/Unix.

My Requirement & solution:

We maintain the application on Linux machines (in AKS pods), and as a DevOps team we got a requirement to replace some config values based on the environment (the values are maintained in AKS environment variables). To manage this, we created a startup script in the Docker image which executes during each new image deployment, where we used the sed command to find and replace config values based on the environment. Based on my experience, I thought to write this article (Search and Replace String Using the sed Command in Linux/Unix) which will be helpful for those who, like me, are new to the Linux operating system and Bash commands.

What Is the Sed Command in Linux?

The sed command in Linux stands for Stream Editor, and it helps with operations like selecting text, substituting text, modifying an original file, adding lines to text, or deleting lines from text. The most common use of sed in UNIX, though, is for substitution, i.e., find and replace.

By using sed you can edit files even without opening them, which is a much quicker way to find and replace something in a file than first opening the file in the vi editor and then changing it.

Syntax: sed OPTIONS... [SCRIPT] [INPUTFILE...]

  • Options control the output of the Linux command.
  • Script contains a list of Linux commands to run.
  • File name (with extension) represents the file on which you’re using the sed command.

Note: We can run a sed command without any options. We can also run it without a filename, in which case the script works on the standard input data.

Replace First Matched String

In the example below, the script replaces the first found instance of the word test1 with test2 in every line of the file:

    sed -i 's/test1/test2/' opt/example.txt

The command replaces the first instance of test1 with test2 in every line, including substrings. The match is exact and case-sensitive (capitalization variations are not matched). -i tells the sed command to write the results back to the file in place instead of to standard output.

Search & Global Replacement (all the matches)

To replace every string match in a file, add the g flag to the script. For example:

    sed -i 's/test1/test2/g' opt/example.txt

The command globally replaces every instance of test1 with test2 in opt/example.txt.

The command consists of the following:

  • -i tells the sed command to write the results to the file instead of standard output.
  • s indicates the substitute command.
  • / is the most common delimiter character. The command also accepts other characters as delimiters, which is useful when the string contains forward slashes.
  • g is the global replacement flag, which replaces all occurrences of a string instead of just the first.
  • The input file is the file where the search and replace happens. The single quotes help avoid meta-character expansion in the shell.

Search and Replace All Cases

To find and replace all instances of a word and ignore capitalization, use the I parameter:

    sed -i 's/test1/test2/gI' opt/example.txt

The command replaces all instances of the word test1 with test2, ignoring capitalization.

Conclusion 

You can check the inputs with conditions like if...else and make the code more dynamic. In this tutorial, I hope you learned how to search and replace strings using the sed command in Linux/Unix.

I hope you found this tutorial helpful. What's your favorite thing you learned from this tutorial? Let me know in the comments!