All posts by Thiyagu

Import bulk Variables to Variable Group using Azure DevOps CLI

My Scenario:

As a System Admin/DevOps engineer, maintaining variable groups is tricky because it is difficult to track their history and changes. In one of our migration projects, we received a large number of pipeline variables as key-value pairs in an Excel sheet. Copy/pasting is easy when there are only 20 or so key-value pairs, but think about a scenario where you need to repeat this for many variable groups across multiple projects: it is a tedious job, manually creating the new key-value pairs takes time, and there will surely be human error. To overcome this problem, we decided to import bulk variables into a variable group using the Azure DevOps CLI.

What format we got the excel?

Instead of adding the variables directly from the Azure DevOps portal, we will automate adding the key-value pairs, avoiding manual data entry since we have a huge number of variables.

Note: Copy/pasting is easy when there are only 10 or 20 key-value pairs, but repeating that for many variable groups across multiple projects is very tedious and prone to human error.

Prerequisite

Step 1: Retrieve the Variable Group ID:

The variable group needs to exist before importing the variables from Excel. For this example, I already created a variable group named "mytestvariablegroup" (as shown in the snap below) and noted its variable group ID (this ID is unique per variable group). In my case, the Variable Group ID is 1, as shown in the snapshot below. This ID is used in the Azure DevOps CLI commands generated in Step 2.

Step 2: Generate Azure DevOps CLI commands using Excel Formula

Navigate to the Excel sheet, add another column, and paste the formula below. Here B2 and C2 are the cells containing the variable name and the variable value; adjust the references to match your sheet. Then apply the formula to all the rows; once done, it should look something like the screenshot below.

=CONCAT("az pipelines variable-group variable create --group-id 1 --name """,B2,""" --value """,C2,"""")
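If you can export the sheet to CSV, a small script is an alternative to the Excel formula. The sketch below only prints the commands; the CSV layout, file path, and variable names are assumptions for illustration, and the group ID matches the one from Step 1:

```shell
# two columns, no header assumed: name,value
cat > /tmp/vars.csv <<'EOF'
smtp-host,smtp.example.com
smtp-port,587
EOF

# emit one az CLI command per row; review them, then pipe to a shell to run them
while IFS=',' read -r name value; do
  echo "az pipelines variable-group variable create --group-id 1 --name \"$name\" --value \"$value\""
done < /tmp/vars.csv
```

Printing first and executing second gives you a chance to review the generated commands before they touch the variable group.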


Step 3: Login to Azure DevOps from Command Line

Interactive mode

Before running Azure DevOps CLI commands, you must authenticate with your Azure DevOps credentials. If you use the same account for both Azure and Azure DevOps, you can authenticate with the command below.

az login

After you press Enter, a browser window opens to complete the authentication.

Step 4: Set Default Organization

Run the below command to set the default organization in which we are going to update the variable group.

az devops configure -d organization=https://dev.azure.com/thiyaguDevops/

Step 5: Set Default Project

Run the below command to set the default Project.

az devops configure -d project=poc

Step 6: Execute the Azure DevOps CLI commands

In Step 2, we generated all the commands in Excel. Now it's time to execute them. Copy the entire formula column (column D in this example, values only, without any header) and paste all of the commands at once into the command prompt.

Note: There is no need to copy and paste the rows one by one; copy the whole of column D and paste it in a single go, and PowerShell will take care of running each command in turn.

Step 7: Review the Output

Finally, it's time to view the results in our variable group. Navigate to the variable group and refresh the page; all the new variables should appear, as shown below.

Search and Replace String Using the sed Command in Linux/Unix.

My Requirement & solution:

We maintain an application on Linux machines (in AKS pods), and as a DevOps team we got a requirement to replace some config values based on the environment (the values are maintained in AKS environment variables). To manage this, we created a startup script in the Docker image that runs during deployment of a new image, using the sed command to find and replace config values per environment. Based on that experience, I wrote this article (Search and Replace String Using the sed Command in Linux/Unix), which should help readers who, like me, are new to the Linux operating system and Bash commands.

What Is the Sed Command in Linux?

The sed command in Linux stands for Stream Editor, and it helps with operations like selecting text, substituting text, modifying an original file, adding lines to text, or deleting lines from text. The most common use of sed in Unix, though, is substitution, i.e. find and replace.

By using sed you can edit files even without opening them, which is a much quicker way to find and replace something in a file than first opening it in the vi editor and then changing it.

Syntax: sed OPTIONS... [SCRIPT] [INPUTFILE...]

  • Options control the output of the Linux command.
  • Script contains a list of Linux commands to run.
  • File name (with extension) represents the file on which you’re using the sed command.

Note: We can run a sed command without any option. We can also run it without a filename, in which case the script works on the standard input data.
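For instance, with no filename, sed filters whatever arrives on standard input:

```shell
# no input file: sed edits the text piped in on stdin
echo "hello world" | sed 's/hello/goodbye/'
# prints: goodbye world
```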

Replace First Matched String

In the example below, the script replaces the first instance of the word test1 with test2 in every line of the file:

    sed -i 's/test1/test2/' opt/example.txt

The command replaces the first instance of test1 with test2 in every line, including matches inside longer strings. The match is case-sensitive, so variants such as Test1 or TEST1 are not replaced. -i tells sed to write the results back to the file instead of standard output.

Search & Global Replacement (all the matches)

To replace every match in the file, add the g flag to the script. For example:

    sed -i 's/test1/test2/g' opt/example.txt

The command globally replaces every instance of test1 with test2 in opt/example.txt.

The command consists of the following:

  • -i tells the sed command to write the results to a file instead of standard output.
  • s indicates the substitute command.
  • / is the most common delimiter character. The command also accepts other characters as delimiters, which is useful when the string contains forward slashes.
  • g is the global replacement flag, which replaces all occurrences of a string instead of just the first.
  • "input file" is the file where the search and replace happens. The single quotes help avoid meta-character expansion in the shell.

Search and Replace All Cases

To find and replace all instances of a word and ignore capitalization, use the I parameter:

    sed -i 's/test1/test2/gI' opt/example.txt

The command replaces all instances of the word test1 with test2, ignoring capitalization.
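Putting the three variants together in a quick, self-contained demo (using a throwaway file under /tmp rather than the article's path, and omitting -i so each command prints its result instead of rewriting the file):

```shell
# build a sample file with mixed-case matches
printf 'test1 and test1\nTEST1 here\n' > /tmp/sed-demo.txt

sed 's/test1/test2/'  /tmp/sed-demo.txt  # first match per line only
sed 's/test1/test2/g' /tmp/sed-demo.txt  # every match, case-sensitive
sed 's/test1/test2/gI' /tmp/sed-demo.txt # every match, any case (the I flag is GNU sed)
```

The first command leaves the second test1 on line one untouched, the second replaces both but skips TEST1, and the third replaces everything.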

Conclusion 

In this tutorial, we covered searching and replacing strings using the sed command in Linux/Unix: first-match, global, and case-insensitive replacement.

I hope you found this tutorial helpful. What's your favorite thing you learned from this tutorial? Let me know in the comments!

How to use the variable group at runtime in Azure YAML Pipeline

When & Where to use?

We received a request to pass the variable group as a runtime parameter, so that whenever the pipeline runs, it lets the user select the variable group name as input and proceeds based on the value selected at runtime. In this article, we will discuss how to use a variable group at runtime in an Azure YAML pipeline.

This can be achieved by using runtime parameters. Runtime parameters let you have more control over what values can be passed to a pipeline.

What are runtime parameters?

You can specify parameters in templates and in the pipeline. Parameters have data types such as number and string, and they can be restricted to a subset of values. The parameters section in a YAML pipeline defines which parameters are available. These runtime parameters give you more control over the parameter values you pass to your pipelines.

Parameters are only available at template parsing time. Parameters are expanded just before the pipeline runs so that values surrounded by ${{ }} are replaced with parameter values. Use variables if you need your values to be more widely available during your pipeline run.

Note: If you are going to trigger the pipeline manually then you can make use of Runtime parameters in the Azure DevOps pipeline.

Runtime parameters let you have more control over what values can be passed to a pipeline. Unlike variables, runtime parameters have data types and don’t automatically become environment variables.

Let's see how to use the variable group at runtime in an Azure YAML pipeline.

Step 1: Define the parameter and its allowed values

Always set runtime parameters at the beginning of the YAML. This example pipeline accepts the variable group name as input and then uses the selected value in the run.

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

Step 2: Assign the selected value to the variable group.

After selecting the variable group during a manual run, the selected value is referenced using ${{ parameters.<parameter_name> }}. Once the runtime parameter is assigned, the subsequent stages and jobs can use the values from that group.

variables:
- group: ${{ parameters.variable_group }}

Step 3: Use the values from the selected variable group

Based on the variable group assigned from the runtime parameter, the remaining stages can fetch values from the variable group, such as agentPool.

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

Full YAML Code

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

variables:
- group: ${{ parameters.variable_group }}

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'
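The pipeline references a Prepare_Artifacts.yml template that is not shown in the article. A minimal sketch of what such a job template might look like, given the parameters passed above (the job name and steps are assumptions for illustration, not the author's actual template):

```yaml
parameters:
- name: agentPool
  type: string
- name: TargetFolder
  type: string

jobs:
- job: Prepare
  pool:
    name: ${{ parameters.agentPool }}
  steps:
  - task: CopyFiles@2
    inputs:
      SourceFolder: '$(Build.SourcesDirectory)'
      TargetFolder: '${{ parameters.TargetFolder }}'
  - publish: '${{ parameters.TargetFolder }}'
    artifact: drop
```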

Output

Bash Scripting – If Statement

Bash scripting is nowadays an almost mandatory skill for most system admins and DevOps engineers. Upcoming articles will shed more light on the power and subtlety of the Unix shell; here, I'd like to dive into just one of its many features: the if statement.

When coding, you might need to make decisions based on certain conditions. Conditions are expressions that evaluate to a boolean value (true or false). Statements that execute different code branches based on certain conditions are known as conditional statements, and if...else is one of the most commonly used. Like other programming languages, Bash scripting also supports if...else statements, and we will study them in detail in this blog post.

Put another way, if statements (and the closely related case statements) allow us to make decisions in our Bash scripts. They let us decide whether or not to run a piece of code based on conditions that we set.

SYNTAX

When you are using a single if statement, the syntax is as follows. A basic if statement effectively says: if a particular condition is true, then perform a given set of actions; if it is not true, don't perform them.

The if statement is composed of the if keyword, the conditional phrase, and the then keyword, with the fi keyword at the end of the statement. The COMMANDS get executed if the CONDITION evaluates to true; if the CONDITION returns false, nothing happens and the COMMANDS are ignored. The basic syntax of an if statement is the following:

if [ condition ]
then
    statement/actions
fi

The [ ] in the if statement above is actually a reference to the test command. This means that all of the operators that test allows may be used here as well. When you are using multiple condition checks with an if statement, the syntax is as follows:

if [ condition ] ; then
   statement/actions
elif [ condition ] ; then
   statement/actions
else
   statement/actions
fi
  • if >> Perform a set of commands if a test is true.
  • elif >> If the previous test returned false then try this one.
  • else >> If the test is not true then perform a different set of commands.

Note that the spaces are part of the syntax and should not be removed.

Example: Simple IF statement

Let’s go through an example where we are comparing two numbers to find if the first number is the smaller one.

a=25
b=30

if [ $a -lt $b ]
then
    echo "a value is less than b"
fi

Output: a value is less than b

Example: How to Use the if .. else Statement

Let’s see an example where we want to find if the first number is greater or smaller than the second one. Here, if [ $a -lt $b ] evaluates to false, which causes the else part of the code to run.

a=65
b=35

if [ $a -lt $b ]
then
   echo "a is less than b"
else
   echo "a is greater than b"
fi

Output: a is greater than b

Example: How to Use if..elif..else Statements

To combine comparisons, we can use the AND (-a) and OR (-o) operators inside the test brackets. In this example, we check the conditions across three values:

# sample values (added here so the example runs; adjust as needed)
a=10
b=10
c=20

if [ $a == $b -a $b == $c -a $a == $c ]
then
   echo "All values are equal"

elif [ $a == $b -o $b == $c -o $a == $c ]
then
   echo "May be more than one value is equal"

else
   echo "All numbers are not equal"

fi
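A runnable version of the same three-way check, wrapped in a function so it's easy to try with different inputs (the function name is mine, not from the article; the POSIX `=` comparison is used so it also works outside bash):

```shell
# prints which equality case holds for three values
check_equal() {
  if [ "$1" = "$2" -a "$2" = "$3" -a "$1" = "$3" ]; then
    echo "All values are equal"
  elif [ "$1" = "$2" -o "$2" = "$3" -o "$1" = "$3" ]; then
    echo "May be more than one value is equal"
  else
    echo "All numbers are not equal"
  fi
}

check_equal 5 5 5   # all three match
check_equal 5 5 9   # exactly one pair matches
check_equal 1 2 3   # nothing matches
```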

Conclusion on Bash Scripting – If Statement

You can check inputs against conditions like if...else and make your code more dynamic. In this tutorial, I hope you learned about the Bash if statement.

I hope you found this tutorial helpful.

What’s your favorite thing you learned from this tutorial? Let me know on Twitter!

Using secrets from Azure Key Vault in a pipeline

As a best practice, DevOps engineers need to ensure all secrets are kept inside Key Vault instead of being used directly from an Azure DevOps variable group. So, in this article, we are going to see how to substitute variables from Key Vault in YAML Azure DevOps pipelines (i.e., using secrets from Azure Key Vault in a pipeline).

Config File

Below is a sample config file of the kind we will use for substituting variables from Key Vault in a YAML Azure DevOps pipeline.
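The original post shows the config file as an image, so here is a hypothetical stand-in of the same shape. The #{...}# token names match the secrets fetched in Step 1, but the surrounding element names are purely illustrative:

```xml
<configuration>
  <smtpSettings>
    <host>#{smtp-host}#</host>
    <username>#{smtp-username}#</username>
    <password>#{smtp-password}#</password>
  </smtpSettings>
</configuration>
```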

Step 1: Fetch the Key from Key vault:

Variable substitution can be done with two tasks in Azure DevOps, so let's start. The AzureKeyVault task fetches the latest values of all (or a subset of) secrets from the vault and sets them as variables that can be used in subsequent tasks of the pipeline. The task is Node-based and works with agents on Linux, macOS, and Windows. First, we create the task that connects to Azure Key Vault and fetches the secrets. Since we set RunAsPreJob: false, the secret values are in scope only for the tasks that follow this one.

- task: AzureKeyVault@2
  inputs:
    azureSubscription: 2a28a5af-3671-48fd-5ce1-4c144540aae2
    KeyVaultName: kv-dgtl-dev
    SecretsFilter: 'smtp-host,smtp-username,smtp-password'
    RunAsPreJob: false

Points to remember for variable substitution from Key Vault:

  • RunAsPreJob makes the secrets available to the whole job; the default value is false.
  • When RunAsPreJob is true, the Key Vault task runs before job execution begins and exposes the secrets to all tasks in the job, not just the tasks that follow it.
  • Ensure the agent machine has the required permissions to access the Azure Key Vault.
  • If you want to fetch all the secrets in this task, specify '*' instead of secret names in SecretsFilter.

Step 2: Apply the secrets to config files:

Second, we add the replace tokens task, pointing it at the target files whose variables need to be replaced. Once this task executes, the values fetched from Key Vault are applied to the matching tokens.

- task: replacetokens@5
  inputs:
    rootDirectory: 'src/Feature/Forms/code/App_Config/Include/Feature/'
    targetFiles: 'dotnethelpers.Feature.Forms.SMTP.config,SMTP_external.config'
    encoding: 'auto'
    tokenPattern: 'default'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    actionOnNoFiles: 'continue'
    enableTransforms: false
    enableRecursion: false
    useLegacyPattern: false
    enableTelemetry: true

Point to remember:

  • The token pattern is set to default (so I used #{YOUR_VARIABLE}#); you can define it based on your requirement.
  • The names of the Key Vault secrets need to match the config variables that need substitution. For example, the config has variables like smtp-host, smtp-username, and smtp-password, so the Azure Key Vault secret names need to match them.

How to Find and delete duplicate Files Using PowerShell

Anyone who manages file storage has to keep track of file sizes to ensure there is always enough free space. Documents, photos, backups and more can quickly fill up your shared file resources, especially if you have a lot of duplicates: files with the same content but different names. Duplicate files are often the result of user mistakes, such as double copy actions or incorrect folder transfers. To avoid wasting space and driving up storage costs, you have to analyze your file structure and find and delete the duplicate files, which you can do with PowerShell.

As a result, we end up running out of disk space and get into a situation where we have to sit and find the unnecessary files to reclaim storage space. One of the biggest issues in such a clean-up activity is getting rid of duplicate files. A simple Windows PowerShell script can help you complete this tedious task faster. There are many ways to handle this scenario; we will discuss a few examples here.

Find Duplicate file using Get-FileHash

Do you need to compare two files or make sure a file has not changed? The PowerShell cmdlet Get-FileHash generates hash values for files or streams of data. A hash is simply a function that converts one value into another; sometimes the hash value is smaller to save space, or it is a checksum used to validate a file. A hash will therefore differ if even a single character of the input changes.
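The property this relies on can be sketched quickly on Linux, where sha256sum plays the role of Get-FileHash (the file names here are throwaway examples):

```shell
# identical content yields identical hashes; one extra character changes the hash
printf 'same content\n' > /tmp/hash-a.txt
printf 'same content\n' > /tmp/hash-b.txt
printf 'same content!\n' > /tmp/hash-c.txt

sha256sum /tmp/hash-a.txt /tmp/hash-b.txt /tmp/hash-c.txt
```

The first two files print the same digest; the third prints a different one.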

In this demo, I have 4 text files: 3 of them share the same content under different file names, and the remaining file has unique content, as shown in the image below.

STEP 1: Open the PowerShell window

Open PowerShell: Click on the Start Menu and type “PowerShell” in the search bar. Then, select “Windows PowerShell” from the results.

STEP 2: Set the directory where you want to search for duplicate files:

$filePath = 'C:\Thiyagu Disk\backupFiles\'

STEP 3: Get all child items inside the file path to check for duplicates.

Use the Get-ChildItem cmdlet to find all files in the directory: Type “Get-ChildItem -Recurse -File” to list all files in the current directory and its subdirectories. The “-Recurse” option tells PowerShell to search all subdirectories.

Get-ChildItem -Path $filePath -Recurse

STEP 4: Find duplicate files using Get-FileHash cmdlet.

Get-FileHash generates hash values for files (or streams of data); grouping by the hash value, as shown below, separates the duplicate files from the unique ones.

Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property hash | Where-Object { $_.Count -gt 1 } | ForEach-Object { $_.Group | Select-Object Path, Hash }

Full Code : Find the duplicate files

$filePath = 'C:\backupFiles\'
$group_by_unique_files = Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property hash | Where-Object { $_.Count -gt 1 }
$duplicatefile_details = $group_by_unique_files | ForEach-Object { $_.Group | Select-Object Path, Hash }
$duplicatefile_details
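For comparison, the same group-by-hash idea can be sketched on Linux with coreutils; this is an illustrative alternative, not the article's PowerShell approach, and the paths are throwaway:

```shell
# three files, two of which share content
mkdir -p /tmp/dup-demo
printf 'dup\n'    > /tmp/dup-demo/one.txt
printf 'dup\n'    > /tmp/dup-demo/two.txt
printf 'unique\n' > /tmp/dup-demo/three.txt

# hash every file, sort by hash, keep only repeated hashes (md5 hex is 32 chars wide)
find /tmp/dup-demo -type f -exec md5sum {} + \
  | sort \
  | uniq -w32 --all-repeated=separate
```

Only one.txt and two.txt appear in the output, since they share a hash; three.txt is filtered out.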

Full Code: Find and delete duplicate Files Using PowerShell

$filePath = 'C:\backupFiles\'
$group_by_files = Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property hash | Where-Object { $_.Count -gt 1 }
$group_by_files
$duplicatefile_details = $group_by_files | ForEach-Object { $_.Group | Select-Object Path, Hash }
$duplicatefile_details | Out-GridView -OutputMode Multiple | Remove-Item

After finding the duplicate files, you can move or delete them based on your requirement. If you want to delete through a UI, you can use Out-GridView and select multiple files, as shown below. Select the files to be deleted in the table (to select multiple files, press and hold CTRL) and click OK.

Note: Please be careful while using the Remove-Item cmdlet as it can permanently delete files from your computer. It’s recommended to test this command on a test folder before using it on your actual data.

How to use Vim editor in PowerShell

If you are familiar with Linux or come from a Unix background, you probably know about Vim. For those of us who started and stay mostly in the realm of Windows, however, let's get some exposure to the Vim editor in PowerShell and see what it can do. Windows does not come with Vim the way Unix-based systems do.

Vim is a powerful, widely used text editor for Unix-based systems, including Linux and macOS. It is known for its speed, efficiency, and flexibility, making it a popular choice among programmers, system administrators, and other power users who need to edit text files on a regular basis. Vim is a command-line interface (CLI) application that can be used in a terminal window, and it provides a wide range of commands and keyboard shortcuts for navigating and editing text files.

Why do we need this editor?

Did you ever run a script that reads a text file and need to change something in the config for debugging, or find that the file had several wrong entries? A PowerShell text editor comes in handy in such situations. You don't need to fire up an external editor; instead, you can edit the file without leaving PowerShell. How cool is that?

You can also read: how to check whether a script is running with admin privileges.

How to use vim editor in PowerShell

To edit a text file using the Vim editor in PowerShell, follow the steps below.

Install the Vim editor in PowerShell

STEP 1: Open PowerShell as an Administrator.

Open PowerShell by searching for "PowerShell" in the Start menu, then select "Windows PowerShell" or "Windows PowerShell (x86)" and run it as administrator.

STEP 2: Install Vim editor in PowerShell using Chocolatey

In the PowerShell terminal, execute the following command to install the Vim editor.

choco install vim -y

STEP 3: To verify the Vim version, run the following command

vim --version

Editing and Saving a File using Vim

For this demo, I already have a txt file (in c:\mytestfile) that I am going to edit and save. Now that you have Vim installed, it's time to learn how to edit a file. Before you go any further, you should know that there are different modes in Vim. Each mode behaves differently and affects which actions you can perform inside the editor.

The three commonly-used modes are:

  • Normal – The default mode as soon as you open Vim. This mode allows you to navigate the text file but not add new texts.
  • Insert – This mode is where Vim allows you to edit the file. To enter this mode, press i (case insensitive) on the keyboard. To exit and go back to the normal mode, press ESC.
  • Command – In this mode, Vim lets you invoke commands such as save the file, quit Vim, or view the help document, among others.

STEP 4: Open a file using Vim Command

To open a file, run the vim command followed by the filename. The command below opens the mytestfile.txt file in the PowerShell console, ready to view and edit.

vim "c:\thiyagu disk\mytestfile.txt"

STEP 5: Enable the Insert Mode for the file

Next, enter the insert mode by pressing “i”. As you enter the insert mode, the text — INSERT — appears at the bottom of the editor, as shown in the following image. Now that you are in insert mode edit the file as you wish. The arrow keys will let you move the cursor inside the editor.

For this example, I added a new line, as highlighted by the yellow arrow.

STEP 6: Append changes & Save

After making the necessary changes to the text file, press Esc to return to normal mode, then type the command :wq and press Enter to save and close the file. The w command saves the file while q exits Vim.

Output:

 

How to create new DNS in Azure Private DNS using PowerShell

You have quite a number of options when it comes to resolving names using DNS, and Microsoft Azure DNS is one of them. In this post, we will discuss how to create a new DNS record in Azure Private DNS using PowerShell.

To manage Azure DNS, you can configure it through the Azure Portal UI or command-line tools like the Azure CLI or PowerShell. Often admins need to manage DNS at scale or automate the management of various objects. A great way to do that is not a graphical method like the Azure Portal but a scripting tool like PowerShell, which we can automate.

Azure DNS is a managed DNS solution. We can use it for public DNS records (use the URL for access public) as well as for private DNS records. Using Azure private DNS, we can resolve DNS names in a virtual network. There are many benefits to using Azure private DNS.

  • No additional servers – We do not need to maintain additional servers to run the DNS solution. It is a fully managed service.
  • Automatic Record Update – Similar to Active Directory DNS, we can configure Azure DNS to register/update/delete hostname records for virtual machines automatically.
  • Support common DNS record types – It supports common DNS record types such as A, AAAA, MX, NS, SRV, and TXT.
  • DNS resolution between virtual networks – Azure Private DNS zones can be shared between virtual networks.

As we had many URLs to set up, we decided to automate the creation through an Azure DevOps pipeline.

Using the New-AzPrivateDnsRecordSet cmdlet, we can create a new DNS record in an Azure private DNS zone, and Get-AzPrivateDnsRecordSet lists all the DNS records that were created. The Set-AzPrivateDnsRecordSet cmdlet updates a record set in the Azure Private DNS service from a local RecordSet object; you can pass the RecordSet object as a parameter or through the pipeline operator.

Prerequisites for automating the creation of a record set in a private DNS zone:

  • -Name : The name of the records in this record set (relative to the name of the zone and without a terminating dot).
  • -RecordType : The type of Private DNS records in this record set (values may be A, AAAA, CNAME, MX, PTR, SOA, SRV, TXT)
  • -ZoneName : The zone in which to create the record set (without a terminating dot). In my case, all the domains need to be like .cloud.dotnethelpers.com. for example,
    preprod.cloud.dotnethelpers.com.
  • -ResourceGroupName : The resource group to which the zone belongs.
  • -Ttl : The TTL value of all the records in this record set.
  • -PrivateDnsRecords : The private DNS records that are part of this record set.
  • -Ipv4Address : The IPv4 address for the A record to add. In my case this IP comes from the ingress; in your case it may be your server's address or anything else.

Script: How to create new DNS

New-AzPrivateDnsRecordSet -Name pprd -RecordType A -ZoneName "cloud.dotnethelpers.com" -ResourceGroupName "rg-dgtl-network-pprd" -Ttl 3600 -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address "10.55.161.23")

Script: How to get DNS record details

Get-AzPrivateDnsRecordSet -ResourceGroupName 'rg-dgtl-network-pprd' -ZoneName 'cloud.dotnethelpers.com' -RecordType A

Script: How to delete a DNS record

$RecordSet = Get-AzPrivateDnsRecordSet -Name "cd-ppr" -ResourceGroupName "rg-dgtl-network-pprd" -ZoneName "cloud.dotnethelpers.com" -RecordType A
Remove-AzPrivateDnsRecordSet -RecordSet $RecordSet

Output: 

The final URL will be pprd.cloud.dotnethelpers.com

Points to Remember:

Before running the above scripts, ensure you have installed the required module in PowerShell and connected to the Azure portal to access the resources (connect using the Connect-AzAccount cmdlet). I hope you now have a basic idea of how to create a new DNS record in Azure Private DNS using PowerShell; if you have any queries, please comment so I can answer ASAP.

Trigger Azure DevOps pipeline automatically using PowerShell

In many situations, we need to trigger pipelines automatically or from another pipeline (either a build pipeline or a release pipeline). In my project, I had to trigger the build from the release pipeline; in my case, the build (CI) pipeline is written in YAML and the release (CD) pipeline is configured in the classic editor.

How we can trigger pipelines automatically?

Triggering pipelines automatically can be achieved using Azure tasks or using PowerShell (through the REST API). With this approach, you can trigger a build or release pipeline from another pipeline within the same project or organization, but also in another project or organization.

In this example, we will discuss how to achieve this through PowerShell using the API; in a future post, we can discuss how to achieve it using tasks.

Step: 1 Create the PAT token for Authorization

To get started, a Personal Access Token (PAT) with the appropriate rights to execute pipelines is required. To generate a new Personal Access Token, follow the link.

Step 2: Encode the PAT token

Always encode the PAT before using it in a script, and keep the PAT itself in Key Vault rather than in plain text. For this example, I used it directly here.

$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
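What that PowerShell line produces is a Basic-auth value: an empty username, a colon, and the PAT, Base64-encoded. The same encoding sketched in a shell (the PAT below is a placeholder, not a real token):

```shell
PAT='mysecrettoken'            # placeholder value, not a real PAT
printf ':%s' "$PAT" | base64   # the value for the "Authorization: Basic ..." header
```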

Step: 3 Define the API and assign it to variable

We use the latest API version, 7.0, to trigger the pipeline automatically with PowerShell. As the name implies, the {organization}/{project} names are easy to get, but newcomers to Azure DevOps often struggle to find the {pipelineId}. See the snapshot below for reference: after clicking on the pipeline you want to trigger, the URL contains build?definitionId, and that definition id is the pipelineId.

Syntax : https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=7.0

$url = "https://dev.azure.com/myOrganization/Myproject/_apis/pipelines/4/runs?api-version=7.0"

 

Step 4: Pass the parameters in the body of the API.

This is required because there are many branches in my repo, and the build needs to know from which branch it should be triggered, so I pass the branch name to the pipeline.

$JSON = @'
{
  "resources": { "repositories": { "self": { "refName": "refs/heads/develop" } } }
}
'@

Step: 5 Invoke the API to trigger pipelines automatically

In this example, I use a PowerShell task to execute the script below, as shown in the snapshot, to trigger the pipeline automatically.

 

$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -Body $JSON -ContentType "application/json"

Full Code

$token = '5dfdferedaztxopaqwxkzf7kk4xgfhn5x5akuvgn3tycwsehlfznq'
$url = "https://dev.azure.com/myOrganization/Myproject/_apis/pipelines/4/runs?api-version=7.0"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))

$JSON = @'
{
  "resources": { "repositories": { "self": { "refName": "refs/heads/develop" } } }
}
'@

$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -Body $JSON -ContentType "application/json"

 

Azure KeyVault Set and Retrieve Secrets using Powershell

What is Key Vault?

Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets.

In this example, I am going to create and fetch secrets in Azure Key Vault using a PowerShell task in Azure DevOps. For this, you need to ensure your agent (self-hosted or Microsoft-hosted) has access to the Azure Key Vault.

Note: The Az module is required for performing the operations below.

STEP: 1 Connect to Azure using Connect-AzAccount

After executing the cmdlet below, you will get a pop-up for authentication; after successful authentication, you can continue from Step 2.

Connect-AzAccount

STEP: 2 Convert the Values to Secure String

Before pushing secrets to Azure Key Vault, ensure you convert the plain-text value to an encrypted SecureString.

$captcha_value = ConvertTo-SecureString '5KjciMedTTTTTJObOOpwysZPFDH-M-TOx1OIuDt6' -AsPlainText -Force

STEP: 3 Set the Secrets using Set-AzKeyVaultSecret

Set-AzKeyVaultSecret -VaultName kv-dgtl-dev -Name 'captcha-secret-key' -SecretValue $captcha_value

STEP: 4 Get the Secrets using Get-AzKeyVaultSecret

$captchaSecret = Get-AzKeyVaultSecret -VaultName kv-dgtl-dev -Name 'captcha-secret-key'

To get the value in plain text just use -AsPlainText at the end of the command as shown below

$captchaSecret = Get-AzKeyVaultSecret -VaultName kv-dgtl-dev -Name 'captcha-secret-key' -AsPlainText