Category Archives: Devops

How to view the secret variables in Azure DevOps

Today, I will be talking about a technique you can use to view the secret variables in Azure DevOps.

Introduction

Azure DevOps lets us store secrets in variable groups that can be used by pipelines. These secret variables cannot be viewed manually from the portal. Sometimes, however, we may need to see a secret's value to perform some other activity.

Note: The best practice is to keep secrets in Azure Key Vault and read them from the Azure pipeline in a secure way. However, some legacy projects still maintain secrets in Azure DevOps variable groups, so this article focuses on them. You can read this to learn how to use Key Vault to handle secrets.
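For reference, here is a minimal, hedged sketch of reading secrets from Key Vault in a YAML pipeline; the service connection name, vault name, and secret name are assumptions you would replace with your own.

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-arm-service-connection'   # assumption: your ARM service connection name
    KeyVaultName: 'my-keyvault'                      # assumption: your Key Vault name
    SecretsFilter: '*'                               # fetch all secrets (or a comma-separated list)
    RunAsPreJob: true
- powershell: |
    # Each fetched secret becomes a pipeline variable named after the secret
    Write-Host "Key Vault secret is available as a variable, e.g. $(mySecretName)"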

What are Secrets Variables in Azure Pipelines?

Secret variables are placeholders for values that you want to store in encrypted form and use while running a pipeline, such as usernames, passwords, and API keys. They let you use private information in pipelines without exposing the values: secret variables are encrypted at rest with a 2048-bit RSA key and are made available on the agent for tasks and scripts to use.

How to set Secret in Azure Variable group?

You can set secret variables in the pipeline settings UI; secrets set there are scoped to the pipeline in which they are defined, so they are visible only to users with access to that pipeline. You can also set secrets in a variable group. Variable groups follow the library security model, so you can control who can define new items in a library and who can use an existing item.

Let's create a secret variable in a variable group as shown below and make sure you mark it as a secret by locking it.

Once you mark it as a secret (by clicking the open lock icon shown in the image below) and save the variable group, no one, including administrators, will be able to view the secret. Let's now see how to view the secret with the help of an Azure DevOps pipeline.
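If you prefer the command line, the Azure DevOps CLI extension can also create a secret variable in an existing variable group. This is a hedged sketch; the group ID, variable name, and value are placeholders for your own.

# Requires the Azure DevOps CLI extension: az extension add --name azure-devops
# The group ID, variable name, and value below are placeholders
az pipelines variable-group variable create --group-id 42 --name secretkey --value "MySecretValue" --secret true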

View the secret variables from Variable Group

You can create a simple Pipeline which has the below tasks to view the secrets in pipeline execution.

  1. A PowerShell task that writes some text (along with the secret) into a file named ViewSecretValue.txt.
  2. A task that publishes ViewSecretValue.txt to the Azure Pipelines artifacts.

Run the pipeline with the below PowerShell task.

variables:
- group: Demo_VariableGroup
steps:
- powershell: |
    "The secretkey value is : $(secretkey)" | Out-File -FilePath  $(Build.ArtifactStagingDirectory)\ViewSecretValue.txt
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'

Now, click on the ViewSecretValue.txt file to download it. Once downloaded, open it in Notepad; it should look like the output below.

Conclusion

In summary, handling secret variables securely is crucial for maintaining data confidentiality in DevOps processes. Azure DevOps provides built-in features and best practices to keep sensitive data protected, making it a powerful platform for secure CI/CD pipeline management. Integrating with tools like Azure Key Vault can further strengthen your security posture and simplify secret management across multiple projects.

 

How to pass objects between tasks in Azure pipeline

In our previous post, we discussed "How to pass values between Tasks in a Pipeline", where we passed a single value from one task/job/stage to another. But if you want to share an object, rather than a single value, between tasks/jobs/stages, you need a small trick. In this quick post we will discuss a trick I found recently to store an object in an Azure Pipelines variable from a PowerShell script (how to pass objects between tasks in an Azure pipeline).

The problem

Setting a variable in a script for later use in an Azure DevOps pipeline is possible using the task.setvariable logging command, as described in the post below.

This works great for simple variables like below:

steps:
- pwsh: |
    Write-Host "##vso[task.setvariable variable=variableName]variableValue"
- pwsh: |
    Write-Host "Value from previous step: $(variableName)"

But it is a bit trickier for complex variables like objects, arrays, or arrays of objects that you want to share between tasks/stages/jobs in a pipeline.

The solution

As an example, let's try to retrieve the name, type, and resource group of all the resources in an Azure subscription, as shown in the script below, and then see how we can pass the value of $azure_resources through the pipeline.

$azure_resources = Get-AzResource | Select-Object -Property Name,Type,ResourceGroupName

First, store the object in a variable in a PowerShell task. Then serialize it to JSON so it fits into a single variable value; the -Compress flag converts the JSON into a single line.

$azure_resourcesJson = $azure_resources | ConvertTo-Json -Compress

Pass objects between tasks in Azure pipeline

pool:
  name: devopsagent-win-pprd

steps:
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'Azure_Digital'
    azurePowerShellVersion: LatestVersion
    ScriptType: InlineScript
    Inline: |
      $azure_resources = Get-AzResource | Select-Object -Property Name,Type,ResourceGroupName -First 3
      $azure_resourcesJson = $azure_resources | ConvertTo-Json -Compress
      Write-Host "##vso[task.setvariable variable=resources]$azure_resourcesJson"
- pwsh: |
    $resources = '$(resources)' | ConvertFrom-Json
    Write-Host "There are $($resources.Count) resources in the list"
    Write-Host "The resource groups are: $($resources.ResourceGroupName)"

OUTPUT

 

Linux Environment Variables

What Are Linux Environment Variables?

Linux environment variables are dynamic values that the operating system and various applications use to determine information about the user environment. They are essentially variables that can influence the behavior and configuration of processes and programs on a Linux system. These variables are used to pass configuration information to programs and scripts, allowing for flexible and dynamic system management.

These variables, often referred to as global variables, play a crucial role in tailoring the system’s functionality and managing the startup behavior of various applications across the system. On the other hand, local variables are restricted and accessible from within the shell in which they’re created and initialized.

Linux environment variables have a key-value pair structure, separated by an equals sign (=). Variable names are case-sensitive and are conventionally written in uppercase so they are easy to identify.

Key Features of Environment Variables

  • Dynamic Values: They can change from session to session and even during the execution of programs.
  • System-Wide or User-Specific: Some variables are set globally and affect all users and processes, while others are specific to individual users.
  • Inheritance: Environment variables can be inherited by child processes from the parent process, making them useful for configuring complex applications (see the short example below).
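A quick illustration of the inheritance point, as a minimal bash sketch you can paste into any shell:

MY_LOCAL="only in this shell"           # local shell variable, not inherited
export MY_GLOBAL="visible to children"  # exported, so child processes inherit it
bash -c 'echo "local=[$MY_LOCAL] global=[$MY_GLOBAL]"'
# Prints: local=[] global=[visible to children]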

Common Environment Variables

Here are some commonly used environment variables in Linux:

  • HOME: Indicates the current user’s home directory.
  • PATH: Specifies the directories where the system looks for executable files.
  • USER: Contains the name of the current user.
  • SHELL: Defines the path to the current user’s shell.
  • LANG: Sets the system language and locale settings.

Setting and Using Environment Variables

Temporary Environment Variables

You can set environment variables temporarily in a terminal session using the export command. The command below sets an environment variable named MY_VAR to true for the current session. Environment variables are used to store information about the environment in which programs run.

export MY_VAR=true
echo $MY_VAR

Example 1: Setting Single Environment Variable

For example, the following command will set the Java home environment directory.

export JAVA_HOME=/usr/bin/java

Note that you won’t get any response about the success or failure of the command. As a result, if you want to verify that the variable has been properly set, use the echo command.

echo $JAVA_HOME

The echo command will display the value if the variable has been appropriately set. If the variable has no set value, you might not see anything on the screen.

Example 2: Setting Multiple Environment Variables

You can set several environment variables in a single export command by separating the NAME=VALUE pairs with spaces, like this:

<NAME1>=<VALUE1> <NAME2>=<VALUE2> <NAME3>=<VALUE3>

export VAR1="value1" VAR2="value2" VAR3="value3"

Example 3: Setting Multiple Values for a Single Environment Variable

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"

The PATH variable contains a list of directories where the system looks for executable files. Multiple directories are separated by colons.
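In practice you usually append to the existing PATH instead of overwriting it; a small sketch (the extra directory is a placeholder):

# Prepend an extra directory while keeping the current PATH intact
export PATH="/opt/myapp/bin:$PATH"
echo $PATH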

Permanent Environment Variables

To make DOTNET_HOME available system-wide, follow these steps:

The command below appends the line DOTNET_HOME="true" to the /etc/environment file, which is a system-wide configuration file for environment variables. By adding this line, you make the DOTNET_HOME variable available to all users and sessions on the system. The use of sudo ensures that the command has the necessary permissions to modify /etc/environment.

Example 1: Setting a Single Environment Variable for All Users

export DOTNET_HOME=true
echo 'DOTNET_HOME="true"' | sudo tee /etc/environment -a

Example 2: Setting Multiple Values for a Single Environment Variable for All Users

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"
echo 'PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"' | sudo tee -a /etc/environment

Breakdown of the Command

echo 'DOTNET_HOME="true"': This command outputs the string DOTNET_HOME="true". Essentially, echo is used to display a line of text.

| (Pipe): The pipe symbol | takes the output from the echo command and passes it as input to the next command. In this case, it passes the string DOTNET_HOME="true" to sudo tee.

sudo tee /etc/environment -a: sudo: This command is used to run commands with superuser (root) privileges. Since modifying /etc/environment requires administrative rights, sudo is necessary.

tee: The tee command reads from the standard input (which is the output of the echo command in this case) and writes it to both the standard output (displaying it on the terminal) and a file.

/etc/environment: This is the file where tee will write the output. The /etc/environment file is a system-wide configuration file for environment variables.

-a: The -a (append) option tells tee to append the input to the file rather than overwriting its contents. This ensures that any existing settings in /etc/environment are preserved and the new line is simply added to the end of the file.

This command is used to add a new environment variable (DOTNET_HOME) to the system-wide environment variables file (/etc/environment). By appending it, you ensure that the new variable is available to all users and sessions across the entire system.

Delete File or Directory in Linux with Scheduled Azure DevOps Pipeline

In my working environment, we manage several Linux-based agent machines that build solutions and create artifacts, and we needed to clean up the build artifacts regularly and automatically. So we wrote a bash script and scheduled it in a release pipeline (Delete File or Directory in Linux). Based on that automation, I thought I would write a post explaining how to delete a file or directory in Linux with a scheduled Azure DevOps pipeline.


STEP 1: Find the disk space usage

The df -h command is used to display information about disk space usage on a Unix-like system. When you run this command in a terminal, it will show the disk space usage in a human-readable format.

To be clearer, df (disk free) is a powerful utility that provides valuable information on disk space utilization. The df command displays file system disk space usage for the mounted file systems. The -h flag makes the sizes human-readable, using units like KB, MB, GB, etc.

df -h

STEP 2: Get the list of directories/files and assign it to a variable

Before we can remove a folder or directory, we must first know its name. Therefore, we first execute the ls command in the terminal to find the folder or directory, or to view all of the folders. In Linux and other Unix-based operating systems, the ls command is used to list files and folders.

As I am going to clean my agent folder, the path will be /agent/_work.
We assign the output of the command ls /agent/_work/ | grep [0-9] to the variable directorylist. This command lists the contents of the /agent/_work/ directory and filters the results to include only lines that contain numbers (as the agent machine creates work folders with numeric names).

directorylist=$(ls /agent/_work/ | grep [0-9])

STEP 3: Loop through the list of directories and delete them

Next, we loop through the directory list one entry at a time in a while loop, as shown in the script below. while read line is a loop that reads each line of the processed output (this is the bash way to read input line by line in a while loop). The -r option passed to the read command prevents backslash escapes from being interpreted.

  • tr ' ' '\n': one of the uses of the tr command is find and replace; here it replaces spaces with newline characters.
  • The loop body (between do and done) is where you put your processing logic for each line. A simple echo statement is included as an example.
echo $directorylist | tr ' ' '\n' | while read -r line
do
  echo "processing folder $line"
  # ...your logic to delete goes here...
done

STEP 4: Remove the directory/file from the list

We can delete a file or directory in Linux using the rm command; the -rf flags remove it forcefully (and recursively), as shown below.

echo "removing folder $line"
rm -rf /agent/_work/$line

Full code: Delete File or Directory in Linux 

# Find the disk space usage
df -h
echo "Running a disk space clean up"
#Get list of directories/files and assign to variable
directorylist=$(ls /agent/_work/ | grep [0-9])
#Loop through the list of directories and delete them
echo $directorylist | tr ' ' '\n' | while read -r line
do
echo "removing folder $line"
rm -rf /agent/_work/$line
done
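As a hedged alternative that avoids parsing ls output, find can select and delete the numeric work folders directly; the path and the age filter below are assumptions you can adjust or drop.

# Delete numeric agent work folders older than 7 days (adjust or remove -mtime +7 as needed)
find /agent/_work -maxdepth 1 -type d -name '[0-9]*' -mtime +7 -exec rm -rf {} +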

How to run the above delete-directory script on a schedule in an Azure DevOps pipeline?

  1. First, enable the "Scheduled release trigger" in the release pipeline as shown below. In the same pipeline, create a new stage with a Bash task containing the script shown above to delete files or directories in Linux.
  2. Select the stage, click "Pre-deployment conditions", schedule when the pipeline should execute, and save. After this, the pipeline will run at the specified time and execute the cleanup task. (If you use YAML pipelines instead, see the scheduled-trigger sketch after this list.)
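If you use a YAML pipeline instead of a classic release pipeline, a scheduled trigger can be declared directly in YAML. This is a minimal sketch; the cron expression, branch name, and agent pool name are assumptions.

schedules:
- cron: "0 2 * * *"            # assumed schedule: 02:00 UTC daily
  displayName: Nightly agent work-folder cleanup
  branches:
    include:
    - main                     # assumed branch
  always: true                 # run even when there are no code changes

pool:
  name: my-linux-agent-pool    # assumed self-hosted Linux agent pool

steps:
- bash: |
    # cleanup script from the "Full code" section above
    df -h
    directorylist=$(ls /agent/_work/ | grep [0-9])
    echo $directorylist | tr ' ' '\n' | while read -r line
    do
      echo "removing folder $line"
      rm -rf /agent/_work/$line
    done
  displayName: 'Clean agent work folders'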

 

How to Drop SQL database using PowerShell

My Scenario:

As System Admins/DevOps engineers, we received an urgent cost-optimization task that had to be completed in a very short time frame. One of the actions on the cost-optimization list was to clean up unused databases and backup files across all environments. We have 100+ databases in each environment, so manual cleanup is difficult: it takes a lot of time and there is a real chance of deleting a database that is still in use. To avoid this, we decided to automate the database cleanup across all environments. In this post we will discuss how to drop a SQL database using PowerShell.

If you are working with Azure SQL Databases and want to use Azure PowerShell (Az module), you can use the Get-AzSqlDatabase cmdlet to retrieve information about SQL databases in an Azure SQL Server. Here’s an example script to get the list of all SQL database names:

Step 1: Declare SQL and resource details

#Assign the variables
$resourcegroup = "rg-dgtl-strg-prd-we-01"
$dbserverName = "sqlsrvr-dgtl-prd-we"
$username = "sqlprd01"
$password = 'Srdc4$wm2t1F'   # single quotes so PowerShell does not expand the $ sign

Step 2: Connect to the database using Get-AzSqlDatabase cmdlet

The Get-AzSqlDatabase cmdlet is used in Azure PowerShell to retrieve information about SQL databases (as shown in the below snap shot) in an Azure SQL Server. It’s part of the Az module, which is the recommended module for managing Azure resources. Below is a brief explanation of how to use the Get-AzSqlDatabase cmdlet:

SYNTAX:

Get-AzSqlDatabase
[[-DatabaseName] <String>]
[-ExpandKeyList]
[-KeysFilter <String>]
[-ServerName] <String>
[-ResourceGroupName] <String>
[-DefaultProfile <IAzureContextContainer>]
[-WhatIf]
[-Confirm]
[<CommonParameters>]

#Get the all the database for specific SQL server using -ServerName parameter
$SQLdbs = Get-AzSqlDatabase -ServerName $dbserverName -ResourceGroupName $resourcegroup

Step 3: Retrieve all database details using foreach.

In step 2, we retrieved the details of all the databases present in the sqlsrvr-dgtl-prd-we SQL server and, as I mentioned above, there are 100+ databases, so the loop below processes the database details one by one. Below, I get only the database name using the DatabaseName property.

#Loop through the list of databases and check them one by one
$activeDbs = @("db-keep-01", "db-keep-02")   # assumption: names of databases that must be kept
foreach ($SQLdb in $SQLdbs) {
    $dbName = $SQLdb.DatabaseName.ToString()
    if ($activeDbs -contains $dbName -or $dbName -eq "master") {
        Write-Host "Skipping active database: $dbName"
    }
    else {
        Write-Host "Candidate for deletion: $dbName"   # actual deletion is shown in Step 4
    }
}

Step 4: Remove the database using Remove-AzSqlDatabase

The Remove-AzSqlDatabase cmdlet removes an Azure SQL database. This cmdlet is also supported by the SQL Server Stretch Database service on Azure.

SYNTAX :

Remove-AzSqlDatabase
[-DatabaseName] <String>
[-Force]
[-ServerName] <String>
[-ResourceGroupName] <String>
[-DefaultProfile <IAzureContextContainer>]
[-WhatIf]
[-Confirm]
[<CommonParameters>]

#Remove the database based on -DatabaseName parameter
Remove-AzSqlDatabase -ResourceGroupName $resourcegroup -ServerName $dbserverName -DatabaseName $dbName
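Before deleting anything for real, it is worth doing a dry run. -WhatIf (listed in the syntax above) reports what would be removed without removing it, and -Force skips the confirmation prompt once you are sure.

#Dry run: show what would be removed without deleting anything
Remove-AzSqlDatabase -ResourceGroupName $resourcegroup -ServerName $dbserverName -DatabaseName $dbName -WhatIf

#Actual removal without a confirmation prompt
Remove-AzSqlDatabase -ResourceGroupName $resourcegroup -ServerName $dbserverName -DatabaseName $dbName -Force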

Points to remember:

I am running the above script from my jump/AVD machine, so if required, use the -DefaultProfile parameter with Get-AzSqlDatabase / Remove-AzSqlDatabase to authenticate against the SQL server. The -DefaultProfile parameter carries the credentials, tenant, and subscription used for communication with Azure.
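For completeness, a minimal sign-in sketch; the tenant and subscription IDs are placeholders. Once the context is set, the Get-AzSqlDatabase and Remove-AzSqlDatabase calls above run against it.

#Sign in and select the subscription that hosts the SQL server (placeholder IDs)
Connect-AzAccount -Tenant '<tenant-id>'
Set-AzContext -Subscription '<subscription-id>'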

 

Pull and Push Docker Image to Azure Container Registry using Azure Devops pipeline

When you want to develop and deploy a container application in Azure, the first and main step is to build the images and push them into your own private registry (for example, Azure Container Registry). In this post, I will explain how to pull and push a Docker image to Azure Container Registry using an Azure DevOps pipeline.

If your solution uses a base image from a public repository, a DevOps best practice is to pull the trusted public image and push it to ACR; after that, we use that copy in our custom solution build.

What is Azure Container Registry (ACR)?

Azure Container Registry is similar to hub.docker.com but is provided by the Azure cloud. An Azure Container Registry can be private and used by only one team or by users who have access; users with access can push and pull images.

It provides geo-replication, so images pushed to a datacenter in one region get replicated to all the connected, configured datacenters and can be deployed simultaneously to Kubernetes clusters in the respective locations.

Pull and push Docker Image

The purpose of this article is to provide steps to guide how to pull the image from public repository and provide commands to push and pull images from registry using the Azure DevOps pipeline.

There can be two options when you want to push the container images into ACR.

Option 1: Import the pre-existing Docker image from the docker hub (docker.io)/public registry and deploy it to AKS.

Option 2: Create a new custom image for our solution (we can pull an image from another public registry and use it as the base image for our build), push it to ACR, and then deploy it to AKS.

Note: Whether you use the default Azure-hosted agent or your own agent, consider which type of image you are pulling and pushing. If the image is built on Windows, you need a Windows agent for the push and pull; use a Linux agent if the image uses Linux as its base. In my case, I am pulling a Linux-based image from registry.k8s.io into my ACR. After this, we will refer to the same image during the NGINX ingress installation in my AKS cluster.

Push Image to Azure Container Registry

Step 1 : Login to Azure Container Registry with Non-Interactive mode

Syntax:  docker login --username demo --password example

- bash: |
    docker login crdgtlshared02.azurecr.io -u crdgtlshared010 -p rtvGwd6X2YJeeKhernclok=UHRceH7rls
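As a hedged alternative to passing credentials on the command line, the built-in Docker@2 task can log in through an ACR service connection; 'acr-service-connection' below is an assumed connection name.

- task: Docker@2
  displayName: 'Login to ACR via service connection'
  inputs:
    command: login
    containerRegistry: 'acr-service-connection'   # assumed Docker registry service connection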

Step 2 : Pull the image and tag the image with registry prefix

In my case, I need to pull the image from public repository (registry.k8s.io) and from my ACR i need to refer this image during the ingress installation in AKS cluster. To be able to push Docker images to Azure Container Registry, they need to be tagged with the login Server name of the Registry. These tags are used for routing purposes when we push these Docker images to Azure. In simple words, Docker tags convey useful information about a specific image version/variant

Syntax:  docker pull [OPTIONS] NAME[:TAG|@DIGEST]
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

- bash: |
    docker pull registry.k8s.io/ingress-nginx/controller:v1.3.0
    docker tag registry.k8s.io/ingress-nginx/controller:v1.3.0 crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0
  displayName: 'pull and tag nginx base image'
  enabled: false

Push the tagged image to Azure Container Registry

Step 3: Push the image with the registry name prefix

Now that the image is tagged (in step 2), we can use the “docker push” command to push this image to Azure Container Registry;

Syntax:  docker push [OPTIONS] NAME[:TAG|@DIGEST]

- bash: |
    docker push crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0
  displayName: 'push nginx base image'
  enabled: false

This operation might take a few minutes, and you will see the image being uploaded to Azure Container Registry in the console.

Note: To pull images directly in docker-compose or Kubernetes YAML files, use the appropriate logins. Usually in these scenarios, docker login is the first step before docker-compose up is called, so that the images are pulled successfully.

In the example above, I used a separate bash task for each action to explain things step by step, but we can do it all together in a single bash task in the pipeline, as shown below.

Full YAML code for Pipeline

- bash: |
    docker login crdgtlshared02.azurecr.io -u crdgtlshared02 -p gbHdlo6X2YJeeKhaxjnlok=UHRceT9NR

    docker pull registry.k8s.io/ingress-nginx/controller:v1.3.0
    docker tag registry.k8s.io/ingress-nginx/controller:v1.3.0 crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0

    docker push crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0
  displayName: 'pull, tag and push nginx base image'
  enabled: false

How to pass values between Tasks in a Pipeline using task.setvariable Command

Problem ?

An Azure DevOps pipeline is a set of tasks, each performing a specific job, and these tasks run inside an agent machine (i.e., a virtual machine). While a task is executing, it is allocated resources, and after the task completes, those resources are de-allocated. This allocation/de-allocation process repeats for the other tasks in the pipeline. This means Task1 cannot directly communicate with Task2 or any other subsequent task (to pass values between tasks), as their execution scopes are completely isolated even though they run on the same virtual machine.

In this article, we are going to learn about the scenario where you can communicate between Tasks and pass values between Tasks in a Pipeline .

How to Pass Values between tasks ?

When you use PowerShell and Bash scripts in your pipelines, it’s often useful to be able to set variables that you can then use in future tasks. Newly set variables aren’t available in the same task. You’ll use the task.setvariable logging command to set variables in PowerShell and Bash scripts.

what is Task.setvariable?

task.setvariable is a logging command that can be used to create a variable usable across tasks within the pipeline, whether they are in the same job of a stage or across stages. VSO stands for Visual Studio Online, which is part of Azure DevOps' early roots.

“##vso[task.setvariable variable=myStageVal;isOutput=true]this is a stage output variable”

Example:

- powershell: |
    Write-Host "##vso[task.setvariable variable=myVar;]foo"

- bash: |
    echo "##vso[task.setvariable variable=myVar;]foo"

SetVariable Properties

The task.setvariable command includes properties for setting a variable as secret, as an output variable, and as read only. The available properties include (a combined example follows the list):

  • variable = the variable name (required)
  • issecret = true marks the variable as a secret
  • isoutput = true makes the variable available to later jobs and stages
  • isreadonly = true makes the variable read only, so it can't be overwritten by downstream tasks
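A short sketch combining these properties in one PowerShell step (the variable names and values are illustrative only):

- powershell: |
    # issecret masks the value in logs, isoutput exposes it to later jobs/stages,
    # isreadonly prevents downstream tasks from overwriting it
    Write-Host "##vso[task.setvariable variable=apiKey;issecret=true]s3cr3tValue"
    Write-Host "##vso[task.setvariable variable=buildTag;isoutput=true]nightly-build"
    Write-Host "##vso[task.setvariable variable=region;isreadonly=true]westeurope"
  name: SetVars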

Share variables between Tasks within a Job

Let’s now create a new variable in Task1, assign some value to it and access the Variable in next Task. 

  • Create a variable named Token using the setvariable syntax, assign it some test value (eg – TestTokenValue)
  • Display the value of the Token variable in the next task, as shown below (task name 'Stage1-Job1-Task2').
stages:
- stage: Stage1
  jobs:
  - job: Job1
    steps:
    - task: PowerShell@2
      displayName: 'Stage1-Job1-Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "##vso[task.setvariable variable=token]TestTokenValue"
    - task: PowerShell@2
      displayName: 'Stage1-Job1-Task2'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "the Value of Token : $(token)"

Now, view the output of the variable in Stage1-Job1-Task2 as shown below.

Share variables between Tasks across the Jobs (of the same Stage)

As we discussed in the SetVariable properties section, we need to use the isoutput=true flag when we want to use the variable in a task located in another job.

pool:
  name: devopsagent-w-pprd01

stages:
- stage: Stage1
  jobs:
  - job: Stage1_Job1
    steps:
    - task: PowerShell@2
      name: 'Stage1_Job1_Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "##vso[task.setvariable variable=token;isoutput=true;]TestTokenValue"

  - job: Stage1_Job2
    dependsOn: Stage1_Job1
    variables:
    - name: GetToken
      value: $[dependencies.Stage1_Job1.outputs['Stage1_Job1_Task1.token']]
    steps:
    - task: PowerShell@2
      displayName: 'Stage1-Job2-Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "the Value of Token : $(GetToken)"
  1. Navigate to Stage1_Job1_Task1 and add the isoutput=true flag to the logging command, which lets us access the value outside the job.
  2. The job in which you want to access the variable must depend on the job that produces the output. Add dependsOn: Stage1_Job1 to Stage1_Job2.
  3. In Stage1_Job2, create a new variable named GetToken and set its value to $[dependencies.Stage1_Job1.outputs['Stage1_Job1_Task1.token']]. This is how you access a variable value produced in another (dependent) job. You can't use this expression directly in the script; it is mandatory to map the expression into the value of another variable.
  4. Finally, access the new variable in your script.
  5. Once isoutput=true is added, it's important to access the variable by prefixing it with the task name; otherwise, it won't work.

OUTPUT:

The code above shows how Job2 can access the output of Job1.

Share variables between Tasks across Stages

As per the code below, I didn't specify a dependency (using dependsOn) between the stages, since Stage1 and Stage2 run one after the other. If you would like to access a Stage1 variable in Stage3, then Stage2 must depend on Stage1.

To access a value from one stage in another, we need to use the stageDependencies attribute, whereas between jobs we used dependencies, as shown in the YAML above.

pool:
  name: devopsagent-w-pprd01

stages:
- stage: Stage1
  jobs:
  - job: Stage1_Job1
    steps:
    - task: PowerShell@2
      name: 'Stage1_Job1_Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "##vso[task.setvariable variable=token;isoutput=true;]TestTokenValue"
- stage: Stage2
  jobs:
  - job: Stage2_Job1
    variables:
    - name: getToken
      value: $[stageDependencies.Stage1.Stage1_Job1.outputs['Stage1_Job1_Task1.token']]
    steps:
    - task: PowerShell@2
      displayName: 'Stage2-Job1-Task1'
      inputs:
        targetType: 'inline'
        script: |
          Write-Host "the Value of Token from Stage2: $(getToken)"

OUTPUT:

Rebuild index to reduce Fragmentation in SQL Server

Here we will learn how to identify index fragmentation and resolve it by rebuilding indexes in SQL Server. Index fragmentation identification and index maintenance are important parts of database maintenance. Microsoft SQL Server keeps updating index statistics with every insert, update, or delete on a table. Index fragmentation is expressed as a percentage that can be fetched from a SQL Server DMV; based on that percentage, users can maintain indexes with a Rebuild or Reorganize operation.

Introduction:

In SQL Server, both “rebuild” and “reorganize” refer to operations that can be performed on indexes to address fragmentation. However, they are distinct operations with different characteristics. Let’s explore the differences between rebuilding and reorganizing indexes:

Note: Index optimization is a maintenance activity that improves query performance and reduces resource consumption. Make sure you plan to perform index maintenance during off-business hours or low-traffic hours (fewer requests to the database).

Advantages of Rebuild index to reduce Fragmentation:

  • Removes both internal and external fragmentation.
  • Reclaims unused space on data pages.
  • Updates statistics associated with the index.

Considerations:

  • Requires more system resources.
  • Locks the entire index during the rebuild process, potentially causing blocking.

How to find the Fragmentation?

Here we execute a SQL script to check the fragmentation details for a specific database; the result is shown as a percentage.

-- Script gives fragmentation details as a percentage
Method 1:

DECLARE @cutoff_date DATETIME = DATEADD(day, -20, GETDATE());

SELECT OBJECT_NAME(ip.object_id) AS TableName,
i.name AS IndexName,
ip.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ip
JOIN sys.indexes i
ON ip.object_id = i.object_id AND ip.index_id = i.index_id
JOIN sys.dm_db_index_usage_stats ius
ON ip.object_id = ius.object_id AND ip.index_id = ius.index_id

Method 2:

SELECT
DB_NAME() AS DBName
,OBJECT_NAME(ps.object_id) AS TableName
,i.name AS IndexName
,ips.index_type_desc
,ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ips.avg_fragmentation_in_percent DESC

Rebuild index to reduce Fragmentation:

The REBUILD operation recreates the entire index: it drops the existing index and builds a new one from scratch. During the rebuild, the index is effectively offline, and there can be a period where it is not available for queries. In short, REBUILD locks the table for the whole operation, which may take hours or even days if the table is large. The syntax for rebuilding an index is as follows:
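For a single index, the basic statements look like this; IX_MyIndex and dbo.MyTable are placeholder names, and REORGANIZE is shown alongside as the lighter-weight option for low to moderate fragmentation.

-- Rebuild one index (placeholder index and table names)
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;

-- Reorganize instead of rebuild for low to moderate fragmentation
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;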

Rebuilding Full Index on selected database:

After executing the index rebuild on a specific database, you can see that the fragmentation is reduced, as shown in the image below.

-- Rebuild ALL Indexes
-- This will rebuild all the indexes on all the tables in your database.

SET NOCOUNT ON
GO

DECLARE rebuildindexes CURSOR FOR
SELECT table_schema, table_name  
FROM information_schema.tables
	where TABLE_TYPE = 'BASE TABLE'
OPEN rebuildindexes

DECLARE @tableSchema NVARCHAR(128)
DECLARE @tableName NVARCHAR(128)
DECLARE @Statement NVARCHAR(300)

FETCH NEXT FROM rebuildindexes INTO @tableSchema, @tableName

WHILE (@@FETCH_STATUS = 0)
BEGIN
   SET @Statement = 'ALTER INDEX ALL ON '  + '[' + @tableSchema + ']' + '.' + '[' + @tableName + ']' + ' REBUILD'
   --PRINT @Statement 
   EXEC sp_executesql @Statement  
   FETCH NEXT FROM rebuildindexes INTO @tableSchema, @tableName
END

CLOSE rebuildindexes
DEALLOCATE rebuildindexes
GO
SET NOCOUNT OFF
GO

Summary: 

Index fragmentation occurs due to frequent INSERT, UPDATE, and DELETE operations in SQL Server, leading to degraded query performance. Regular index maintenance, including identifying and resolving fragmentation, is crucial for database optimization.

Key Points:

  • Fragmentation Identification:
    • Use DMVs (Dynamic Management Views) like sys.dm_db_index_physical_stats to check fragmentation percentage.
    • Two methods are provided to analyze fragmentation levels across indexes.
  • Rebuild vs. Reorganize:
    • Rebuild: Drops and recreates the index entirely, removing internal and external fragmentation, reclaiming space, and updating statistics. However, it locks the index and consumes more resources.
    • Reorganize: Defragments the index without rebuilding it, suitable for low to moderate fragmentation.
  • When to Rebuild Indexes:
    • Recommended for high fragmentation (typically above 30%).
    • Should be scheduled during off-peak hours to minimize blocking.
  • How to Rebuild Indexes:
    • A script is provided to rebuild all indexes in a database dynamically.
    • Rebuilding reduces fragmentation, improving query performance and resource efficiency.

By regularly monitoring and rebuilding fragmented indexes, database administrators can maintain optimal SQL Server performance.

How to Continue Azure Pipeline on failed task

Introduction

Sometimes failing scripts do not fail the task when they should, and sometimes a failing command should not fail the task. How do we handle these situations and continue an Azure pipeline on a failed task?

Sometimes you may have pipeline tasks that depend on an external reference which can fail at any time. In these scenarios, if the task fails (perhaps intermittently) because of an issue in that external reference, your entire pipeline fails, and you have no insight into when the bug will be fixed. In such cases, you may want the pipeline to keep running despite an issue in that task, and to ensure that any future issue with it won't lead to a pipeline failure.

This simple technique can be used in scenarios where you have a non-mandatory task that’s failing intermittently and you want to continue the execution of the pipeline

Solution : 

In this case, it absolutely makes sense to continue the execution of the next set of tasks (continue the Azure pipeline on a failed task). In this post, we are going to learn how to continue the execution of the pipeline when a particular task has failed, by using the continueOnError/failOnStderr/failOnStandardError properties.

Using the continueOnError attribute in a PowerShell/script task

Let's build a pipeline with a few tasks where we simulate an error in one of them, as shown below. In the code below, note the following points.

The task named "continueOnError Task" has intentionally mismatched inputs (an inline script is supplied where the task expects its script input to be mapped differently), so the task fails and simulates an error. Second, the continueOnError attribute has been added so that any error in this task is ignored and the pipeline keeps executing.

steps:
- task: PowerShell@2
  displayName: "continueOnError Task"
  continueOnError: true
  inputs:
    ScriptType: InlineScript
    Inline: |
      Write-Hosts "Continue if any issue here"

- task: PowerShell@2
  displayName: "No Error Task"
  continueOnError: true
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Block if any issue here"

Now, when you run the pipeline, an indication about the error for the Task is shown and the execution will carry forward as shown below.

 

Like the PowerShell task, you can continue on error in other tasks as well, as shown below (click the reference links below to learn about the other tasks).
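For instance, a hedged sketch of the same attribute on a Bash step, where the failing command is deliberate:

- bash: |
    echo "Simulating a failure"
    exit 1   # deliberately fail this step
  displayName: 'Failing bash step that will not stop the pipeline'
  continueOnError: true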

Summary:

In this post, we have learnt how to continue the execution of the pipeline in spite of an error in one of the tasks, that is, how to continue an Azure pipeline on a failed task. This simple technique can be used in scenarios where you have a non-mandatory task that fails intermittently and you want the pipeline execution to continue.

How to Enable additional logs in Azure pipeline execution

One of the most important aspects of pipeline development in the Azure DevOps life cycle is having tools and techniques in place to find the root cause of any error that occurs during pipeline execution. In this article, we will learn how to review the logs that help troubleshoot errors in pipelines by enabling additional logs during Azure pipeline execution.

By default, an Azure DevOps pipeline provides logs with information about the execution of each step. When there is an error, or when you need more information to debug, the default logs may not help you understand what went wrong. In those cases, it is helpful to get more diagnostic logs for each step in the pipeline.

Below are two different techniques for enabling additional logs.

Enable System Diagnostics logs for a specific execution of the pipeline.

If you would like to get additional logs for a specific pipeline run, all you need to do is enable the Enable System Diagnostics checkbox as shown in the image below and click the Run button.

Enable System Diagnostics logs for all executions of the pipeline.

If you always want System Diagnostics enabled so the diagnostic trace is captured even when the pipeline runs automatically (continuous integration scenarios), then create a variable in the pipeline named system.debug and set its value to true, as shown below.

system.debug = true (generates diagnostic logs during pipeline execution)
system.debug = false (does not generate diagnostic logs during pipeline execution)
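If you define variables in YAML rather than in a variable group, the equivalent is a one-line sketch:

variables:
  system.debug: true   # enables verbose diagnostic logging for every run of this pipeline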

Once we set the value of system.debug to true in our variable group (which is referenced in our pipeline), the pipeline starts showing additional logs in purple, as shown below.

System.debug = true

System.debug = false

Note: If you would like to view the logs without any colors, you can click the View raw log button, which opens the logs in a separate browser window, and you can save them if required.

Download Logs from Azure pipeline:

In case you would like to share the logs with other teams who don't have access to your pipeline, you can download them by clicking the Download Logs button on the pipeline summary page, as shown below, and then share them.