All posts by Thiyagu

Add Tags On An AZURE SQL DATABASE Using PowerShell

As system admins/DevOps engineers, we are often asked during audits to close findings quickly (for example, by grouping resources with filters) rather than working through them manually. One such review requirement was that every resource must carry tags. Completing this task by hand would take huge effort because we maintain a large number of resources, so we decided to automate it. Here we discuss how to Add Tags On An Azure SQL Database Using PowerShell.

What Is A Tag In Azure?

In Azure, a tag is a name-value pair that you attach to a resource, resource group, or subscription. The New-AzTag cmdlet creates a predefined Azure tag or adds values to an existing tag, and can create or replace the entire set of tags on a resource or subscription.

Azure tagging is an excellent feature from Microsoft that helps you logically group your Azure resources and track them. It also helps automate resource deployments and, importantly, provides visibility into the resource costs each team is liable for.

Syntax: New-AzTag [-ResourceId] <String> [-Tag] <Hashtable> [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>]


What are the ways to create Tags in Azure?

Azure tags are key-value pairs that can be created and assigned to resources in Azure using the Azure Portal, PowerShell, Azure CLI, etc.

Note: Tag names and Tag Values are case-sensitive in nature.

Why Use Azure Tags?

The main reason to use Azure tags is to organize the resources in the Azure Portal. Organizing resources properly helps you identify the category each resource belongs to. So basically, Azure tags are name-value pairs that help organize your Azure resources in the Azure Portal.

For example, when you have many resources in your Azure Portal, tags really help you categorize them. Suppose you have 6 virtual machines (VMs) in your Azure subscription: 2 belong to the development environment, 2 to the QA environment, and the remaining 2 to production. We can tag them as Environment = Development, Environment = QA, or Environment = Production, and then easily see which resources belong to each environment.


How To Create Azure Tags Using PowerShell


Step 1: Connect to your Azure account using Connect-AzAccount

Before starting, please gather the service principal secret, AppId, and tenant ID needed to connect to Azure and perform operations against the Azure services.

#Converts plain text or encrypted strings to secure strings.
$SecureServicePrinciple = ConvertTo-SecureString -String "rfthr~SSDCDFSDFE53Lr3Daz95WF343jXBAtXADSsdfEED" -AsPlainText -Force
#Assigning the App ID
$AppId = "0ee7e633-0c49-408e-b956-36d62264f644"
#Assigning the Tenant ID
$tenantId= "32cf8ba2-403a-234b-a3b9-63c2f8311778"
#storing a username and password combination securely.
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $AppId, $SecureServicePrinciple
#Connect to Azure with an authenticated account for use with cmdlets from the Az PowerShell modules.
Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $tenantId

Step 2: Define the tags in a hashtable

Based on my requirement I added the tags below; you can create any number of tags based on how you segregate your resources.

$tags = @{"Business Unit"="WX_Digital"; "Environment"="PROD"; "Owner"="dotnet-helpers" ; "Project"="webapp"}


Step 3: Get all the SQL databases in the specific resource group

Example 1: Get single SQL database to update

  • The Get-AzResource cmdlet gets all the resources in the subscription, filtered here by the SQL resource type.
  • After executing the script below, $RESOURCE.Id will hold the resource IDs of all databases of that resource type, as shown in the snapshot below.
  • -ResourceType : the resource type of the resource(s) to retrieve, for example Microsoft.Compute/virtualMachines.
#GET the single database by where condition
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" | Where-Object name -Like 'sqlsrvr-dgtl-pprd/sitecore_master' 

$resourceId = $RESOURCE.Id

Example 2: Get all the databases to update

#Gets All the database 
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" 

$resourceIds = $RESOURCE.Id

Step 4: Apply the tags using the New-AzTag cmdlet

Using the New-AzTag cmdlet we can create a predefined Azure tag or add values to an existing tag; it creates or replaces the entire set of tags on a resource or subscription.

-ResourceId : The resource identifier for the entity being tagged. A resource, a resource group or a subscription may be tagged.

Example 1: Add tags to a single database

#Creates or updates the entire set of tags on a resource or subscription.
#The resource identifier for the entity being tagged. A resource, a resource group or a subscription may be tagged.
New-AzTag -ResourceId $resourceId -Tag $tags

Example 2: Add tags to all databases under the resource group

foreach ($resourceId in $resourceIds) {
    Write-Output $resourceId
    # Creates or updates the entire set of tags on a resource or subscription.
    New-AzTag -ResourceId $resourceId -Tag $tags
}

Output:


PowerShell Error Handling with the $Error Variable

In any programming language, code will have errors, and troubleshooting those problems can be difficult. Like other programming languages, PowerShell has error-handling mechanisms (in this post, we will discuss error handling with the $Error variable).

In PowerShell, errors fall into two categories: terminating and non-terminating. As the name implies, a terminating error stops code execution when the error is thrown. A non-terminating error means the code continues with the next line of execution after the error is thrown.

The $Error Variable

$Error is an automatic global variable in PowerShell that always contains an ArrayList of zero or more ErrorRecord objects. As new errors occur, they are added to the beginning of this list, so you can always get information about the most recent error from $Error[0]. Both terminating and non-terminating errors appear in this list.

How does the $Error variable work?

When you start a new PowerShell session, $Error is empty. Normally, if you run a Windows PowerShell command and an error occurs, the error record is appended to the "automatic variable" named $Error. We then use $Error[0] to display and access the rest of the information it holds.

The $Error variable holds a collection of error information, which is why indexing with $Error[0] gets you to the error message objects. $Error[0] holds the most recent error encountered until the PowerShell session ends.

Example #1: Starting a new PowerShell session

For this example, we tried a new PowerShell window session, so the $Error variable is empty as shown below.

$error[0]

Example #2: Executing the below script which had the error

When an error occurs in our code, it is saved to the Automatic variable named $Error. The $Error variable contains an array of recent errors, and you can reference the most recent error in the array at index 0.

In the example below, the path does not exist; instead of letting the cmdlet throw an error we included -ErrorAction SilentlyContinue, and on the next line we write out the current error using the $Error variable.

Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction SilentlyContinue
Write-Warning $Error[0]

Getting Members of $Error Variable

We can use Get-Member to expose the members of our PowerShell variable objects. Using the members it lists, we can dig deeper into the $Error[0] object to extract additional details.
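As a quick sketch of that idea, piping $Error[0] to Get-Member lists the ErrorRecord members you can drill into, and a couple of commonly inspected properties look like this:

```powershell
# List the members (properties and methods) of the most recent error record
$Error[0] | Get-Member

# Commonly inspected properties of an ErrorRecord
$Error[0].Exception.Message   # the human-readable error message
$Error[0].CategoryInfo        # the error category details
```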

Example #3: Getting the detailed Error using $Error variable

In the example below, we dig deeper into the $Error[0] object to extract the line that failed during execution. This assumes that the error information is available in the first element of the $Error array. The InvocationInfo property of the ErrorRecord object contains information about the context in which the error occurred, including the line number.

Keep in mind that if there are multiple errors in the $Error array, you might want to loop through them or access a specific error by its index. Also, note that this information might not be available for all types of errors, depending on how the error was generated

$Error[0].InvocationInfo

#Display the failed code line
Write-Host "Error occurred at line : " $Error[0].InvocationInfo.Line

How to Create and Use PowerShell Modules

What is Module in PowerShell?

As per docs.microsoft.com, A module is a package that contains PowerShell members, such as cmdlets, providers, functions, workflows, variables, and aliases. The members of this package can be implemented in a PowerShell script, a compiled DLL, or a combination of both. These files are usually grouped together in a single directory.

Simply put, PowerShell modules allow us to organize our functions and reuse them in other scripts. Modules let you combine multiple scripts to simplify code management, accessibility, and sharing, yet many PowerShell scripters are slow to take the step of building a module. A module lets you use the same code in many places without copying and pasting it all over the place.

When do I create a PowerShell module?

  • When the same script needs to be used more than once.
  • When a script is getting too complex to stay in a single file and needs to be broken apart into functions.
  • When we need to share the code with others.

In this post, you can learn Step-by-step instructions on creating and using modules.

STEP #1 Starting with a PowerShell Function

A PowerShell module can store any number of functions. To create a new module, we start by creating a PowerShell function. When your scripts get large, you start using more functions; these could be from someone else or functions that you write yourself, and they start collecting at the top of your script.

In the example below, we create a function called Get-BIOSInfo which outputs the BIOS information for a specific system.

function Get-BIOSInfo
{
    param($ComputerName)
    # Get-WmiObject works in Windows PowerShell; in PowerShell 7+ use Get-CimInstance -ClassName Win32_BIOS instead
    Get-WmiObject -ComputerName $ComputerName -Class Win32_BIOS
}

Get-BIOSInfo -ComputerName localhost

STEP #2 Create a separate Folder for Custom Module 

All custom modules need to be saved under a folder on the PowerShell module path, typically C:\Program Files\WindowsPowerShell\Modules. We need to create a separate folder for our module, so here we create a folder called Get-BIOSInfo as shown below.

STEP #3 Save the Function as Module with .psm1 extension 

Next, we need to save our function under the Get-BIOSInfo folder. The most important thing is that the folder name must match the module name. Now the Get-BIOSInfo module is saved and named Get-BIOSInfo.psm1, and I can ask my team to use it.

To turn our function into a module, the file needs to be saved with the .psm1 extension as shown below.

STEP #4 Test-Driving Your Module

PowerShell automatically loads your new module and makes all of its commands available. Executing the Get-Module cmdlet shows that your module contains just one function, Get-BIOSInfo. To understand what just happened, I ran the Get-Module cmdlet and show the output below.

STEP #5 Finally, import your Module to utilize it in any script

Open a different PowerShell window, or open a new PowerShell (console or ISE). Your command Get-BIOSInfo is available immediately! It is now a standard PowerShell command just like the other commands you use. Importing the module brings all of the functions and variables into each user’s PowerShell session.

Note:

  • PowerShell caches modules, so once you have loaded and used a module in a PowerShell session, changes to the module will not take effect. To see changes, either use the module in a new PowerShell host or force a complete module reload.
  • The module name should not be the name of your function. It should be a generic, topic-level name, because later you will want to store more functions in your module.
  • Do not use special characters or whitespace in your module name.
  • A PowerShell module can store any number of functions.
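Per the caching note above, a cached module can be reloaded with a forced import; a minimal sketch, assuming the module is named Get-BIOSInfo as in this post:

```powershell
# Force PowerShell to discard the cached copy and reload the module from disk
Import-Module Get-BIOSInfo -Force

# Or remove the module from the session entirely and import it again
Remove-Module Get-BIOSInfo -ErrorAction SilentlyContinue
Import-Module Get-BIOSInfo
```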

Conclusion

Having the option to create a module in PowerShell directly is super handy, and it gives us real flexibility in our day-to-day DevOps and other automation tasks.

How to Check SSL Certificate Expiration Date in PowerShell

SSL (Secure Sockets Layer) certificates provide an encrypted connection between server and client and authenticate a website's identity. To keep sensitive user data secure and maintain user trust, it is very important to check SSL certificate expiration and renew certificates that are due. The challenge for the support team during renewal activities is that checking all the domains, each with a different certificate, becomes a critical job. To overcome this, we wrote a PowerShell script to validate all the domains before and after the renewal activity. Let's discuss how to Check SSL Certificate Expiration Date in PowerShell.

In PowerShell, we can use [Net.HttpWebRequest] to make an HTTP web request to the website and get all the properties associated with it, including certificate details. This helps find the SSL certificate expiration date and other details of the certificate.

System.Net.ServicePointManager is the .NET class that manages the collection of ServicePoint objects; it returns the ServicePoint object that contains the information about the internet resource URI.

Check SSL Certificate Expiration Date

Step: 1 Get the URL properties

The PowerShell lines below use [Net.HttpWebRequest] to create an HTTP web request to the website URI and store the URI properties (Address, ConnectionName, Certificate, etc.) in the $webRequest variable.

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create the HTTP web request to the URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)
# Send the request so the service point is populated with the certificate
$webRequest.GetResponse().Dispose()

Step: 2 Retrieve the certificate start and end dates

As we already have the certificate details in $webRequest, we can retrieve the certificate start and end dates as shown below. $webRequest.ServicePoint.Certificate gets certificate details such as the issuer, handle, and SSL certificate thumbprint. We can use the GetExpirationDateString() method to check a website's SSL expiration date in PowerShell.

# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End   = $webRequest.ServicePoint.Certificate.GetExpirationDateString()

Step: 3 Find the no. of Remaining days for expiration

# Calculate the no. of Dates remaining for expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $end).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays

Full Code: Check SSL Certificate Expiration Date in PowerShell

The full code below helps check the SSL certificate expiration date in PowerShell for a single domain; if you have multiple URLs, place all the domains in a text file and loop the same code over them for validation.

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create the HTTP web request to the URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)
# Send the request so the service point is populated with the certificate
$webRequest.GetResponse().Dispose()
# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End = $webRequest.ServicePoint.Certificate.GetExpirationDateString()
# Calculate the no. of Dates remaining for expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $end).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays
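If you maintain multiple domains in a text file as suggested above, the same logic can be wrapped in a loop. This is a sketch only; the file name domains.txt is an assumption:

```powershell
# Read one URI per line from a text file (hypothetical path) and check each certificate
foreach ($uri in Get-Content -Path ".\domains.txt") {
    $webRequest = [Net.HttpWebRequest]::Create($uri)
    # Send the request so the service point is populated with the certificate
    try { $webRequest.GetResponse().Dispose() } catch { }
    $End = $webRequest.ServicePoint.Certificate.GetExpirationDateString()
    $ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
    Write-Host "$uri expires on $End ($ExpirationDays days left)"
}
```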

OUTPUT:

Import bulk Variables to Variable Group using Azure DevOps CLI

My Scenario:

As a System Admin/DevOps engineer, maintaining a variable group is a little tricky, as it is very difficult to track the history of changes. In one of our migration projects we received a large number of pipeline variables, with their values in Excel. It's easy enough to copy/paste when there are just 20 key-value pairs. However, think about a scenario where you need to repeat this for many variable groups across multiple projects: it is a tedious job, manually creating the key-value pairs in the variable group takes a lot of time, and human error is inevitable. To overcome this problem, we decided to import bulk variables to a Variable Group using the Azure DevOps CLI.

What format did we get the Excel data in?

Instead of adding the variables directly from the Azure DevOps Portal, we will automate the process of adding the key-value pairs, avoiding any manual data-entry job, as we have a huge number of variables.


Prerequisite

Step 1: Retrieve the Variable Group ID:

The variable group needs to be ready before importing the variables from Excel. For this example, I already created a variable group known as "mytestvariablegroup" (as shown in the snap below) and noted the variable group ID (this ID is unique per variable group). In my case, the variable group ID is 1, as shown in the snapshot below. This ID is used in the Step 2 formula to dynamically create the variables using Azure DevOps CLI commands.

Step 2: Generate Azure DevOps CLI commands using Excel Formula

Navigate to the Excel sheet, add another column, and paste the formula below. B2 and C2 are the columns containing the variable name and variable value. Apply the formula to all the rows; once applied, it should look something like below.

=CONCAT("az pipelines variable-group variable create --group-id 1 --name """,B2,""" --value """,C2,"""")


Step 3: Login to Azure DevOps from Command Line

Non-interactive mode

Before we start with Azure CLI commands, it’s mandatory to authenticate using Azure DevOps credentials. If you are using same account for both Azure and Azure DevOps then you can use the below command to authenticate.

az login

After you press Enter, it opens the browser to authenticate the login details.

Step 4: Set Default Organization

Run the below command to set the organization where we are going to update the variable.

az devops configure -d organization=https://dev.azure.com/thiyaguDevops/

Step 5: Set Default Project

Run the below command to set the default Project.

az devops configure -d project=poc

Step 6: Execute the Azure DevOps CLI commands

In step 2, we generated all the commands in Excel. Now it's time to execute them. Copy all the rows containing the formula output (column D, values only, without any header) and paste them into the command prompt at once.

Note: there is no need to copy/paste one row at a time from Excel; copy everything from column D and paste it in a single operation, and the shell will execute each command in turn.

Step 7: Review the output

Finally, it's time to view the results in our variable group. Navigate to the variable group and refresh the page to see all the new variables, as shown below.


Search and Replace String Using the sed Command in Linux/Unix.

My Requirement & solution:

We maintain an application on Linux machines (in AKS pods), and as a DevOps team we got a requirement to replace some config values based on the environment (the values are maintained in AKS environment variables). To manage this, we created a startup script in the Docker image that executes during deployment of a new image, where we used the sed command to find and replace config values per environment. Based on that experience, I wrote this article (Search and Replace String Using the sed Command in Linux/Unix), which should help those who, like me, are new to the Linux operating system and Bash commands.

What Is the Sed Command in Linux?

The sed command in Linux stands for Stream Editor, and it supports operations like selecting text, substituting text, modifying an original file, adding lines to text, or deleting lines from text. The most common use of sed in Unix, though, is substitution, i.e. find and replace.

By using sed you can edit files even without opening them, which is a much quicker way to find and replace something in a file than first opening the file in the vi editor and then changing it.

Syntax: sed OPTIONS... [SCRIPT] [INPUTFILE...]

  • Options control the output of the Linux command.
  • Script contains a list of Linux commands to run.
  • File name (with extension) represents the file on which you’re using the sed command.

Note: We can run a sed command without any options. We can also run it without a filename, in which case the script works on the standard input data.
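As a quick illustration of that note, sed can operate on standard input with no file argument; this hypothetical one-liner pipes text through a substitute script:

```shell
# Run a substitute script against standard input; no input file is given,
# so sed reads the piped text and writes the result to standard output.
echo "hello test1 world" | sed 's/test1/test2/'
# → hello test2 world
```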

Replace First Matched String

In the example below, the script replaces the first found instance of the word test1 with test2 in every line of the file:

    sed -i 's/test1/test2/' opt/example.txt

The command replaces the first instance of test1 with test2 in every line, including matches inside substrings. The match is exact and case-sensitive. -i tells sed to write the result back to the file instead of standard output.

Search & Global Replacement (all the matches)

To replace every string match in a file, add the g flag to the script. For example

    sed -i 's/test1/test2/g' opt/example.txt

The command globally replaces every instance of test1 with test2 in opt/example.txt.

The command consists of the following:

  • -i tells the sed command to write the results to a file instead of standard output.
  • s indicates the substitute command.
  • / is the most common delimiter character. The command also accepts other characters as delimiters, which is useful when the string contains forward slashes.
  • g is the global replacement flag, which replaces all occurrences of a string instead of just the first.
  • "input file" is the file where the search and replace happens. The single quotes help avoid meta-character expansion in the shell.
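To illustrate the delimiter point above, here is a small sketch that uses | as the delimiter so a string containing forward slashes can be replaced without escaping them (the paths are made up for illustration):

```shell
# Using | as the s command delimiter avoids escaping the / characters in the paths
echo "PATH=/usr/bin" | sed 's|/usr/bin|/usr/local/bin|'
# → PATH=/usr/local/bin
```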

Search and Replace All Cases

To find and replace all instances of a word and ignore capitalization, use the I parameter:

    sed -i 's/test1/test2/gI' opt/example.txt

The command replaces all instances of the word test1 with test2, ignoring capitalization.

Conclusion 

sed lets you edit files in place and build dynamic find-and-replace steps into your scripts. In this tutorial, I hope you learned how to search and replace strings using the sed command in Linux/Unix.

I hope you found this tutorial helpful. What’s your favorite thing you learned from this tutorial? Let me know on comments!


How to use the variable group at runtime in Azure YAML Pipeline

When & Where to use?

We received a request to pass the variable group as a runtime parameter, so that whenever the pipeline runs, it prompts for the variable group name as input, and the pipeline proceeds based on the value selected at runtime. In this article, we will discuss how to use the variable group at runtime in an Azure YAML pipeline.

This can be achieved by using runtime parameters. Runtime parameters let you have more control over what values can be passed to a pipeline.

What are Runtime parameters?

You can specify parameters in templates and in the pipeline. Parameters have data types such as number and string, and they can be restricted to a subset of values. The parameters section in a YAML defines what parameters are available. These runtime parameters allow you to have more control over the parameter values you pass to your pipelines.

Parameters are only available at template parsing time. Parameters are expanded just before the pipeline runs so that values surrounded by ${{ }} are replaced with parameter values. Use variables if you need your values to be more widely available during your pipeline run.

Note: If you are going to trigger the pipeline manually then you can make use of Runtime parameters in the Azure DevOps pipeline.

Runtime parameters let you have more control over what values can be passed to a pipeline. Unlike variables, runtime parameters have data types and don’t automatically become environment variables.

Let's see how to use the variable group at runtime in an Azure YAML pipeline.

Step 1: Define the parameter and its allowed values

Always set runtime parameters at the beginning of the YAML. This example pipeline accepts a variable group name and then uses the value in the job.

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

Step 2: Assign the selected value to the variable group.

After selecting the variable group during a manual build, the selected value is assigned using ${{ parameters.<parameter_name> }}. Once the runtime parameter is assigned, the subsequent stages/jobs can use the values.

variables:
- group: ${{ parameters.variable_group }}

Step 3: Use the values from the selected variable group

Based on the variable group assigned from the runtime parameter, the remaining stages can fetch values from the variable group, such as agentPool.

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

Full YAML Code

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

variables:
- group: ${{ parameters.variable_group }}

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

Output

Bash Scripting – If Statement

Bash scripting is nowadays a must-have skill for most system admins/DevOps engineers. In upcoming articles we will shed light on the power and subtlety of the Unix shell; here I'd like to dive into just one of its many features: Bash Scripting – If Statement.

When coding, you might need to make decisions based on certain conditions. Conditions are expressions that evaluate to a boolean value (true or false). Statements that execute different code branches based on certain conditions are known as conditional statements, and if...else is one of the most commonly used. Like other programming languages, Bash scripting also supports if...else statements, and we will study them in detail in this blog post.

In another way, If statements (and, closely related, case statements) allow us to make decisions in our Bash scripts. They allow us to decide whether or not to run a piece of code based upon conditions that we may set.

SYNTAX

When you are using a single if statement, the syntax is as follows. A basic if statement effectively says: if a particular condition is true, then perform a given set of actions; if it is not true, then don't perform those actions. It follows the format below:

The if statement is composed of the if keyword, the conditional phrase, and the then keyword, with the fi keyword at the end of the statement. The COMMANDS get executed if the CONDITION evaluates to true; nothing happens if the CONDITION returns false, and the COMMANDS are ignored. The basic syntax of an if statement is the following:

if [ condition ]
then
    statement/actions
fi

The "[ ]" in the if statement above is actually a reference to the test command. This means that all of the operators that test allows may be used here as well. When you are using multiple condition checks with an if statement, the syntax is as follows:

if [ condition ] ; then
   statement/actions
elif [ condition ] ; then
   statement/actions
else
   statement/actions
fi

  • if >> Perform a set of commands if a test is true.
  • elif >> If the previous test returned false, then try this one.
  • else >> If the tests were not true, then perform a different set of commands.

Note that the spaces are part of the syntax and should not be removed.
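Because [ ] is a reference to the test command, the two forms below are equivalent; this is a small sketch to demonstrate the point:

```shell
# The bracket syntax...
if [ 5 -lt 10 ]; then echo "bracket: true"; fi
# ...is equivalent to calling the test command directly
if test 5 -lt 10; then echo "test: true"; fi
```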

Example: Simple with IF statement

Let’s go through an example where we are comparing two numbers to find if the first number is the smaller one.

a=25
b=30

if [ $a -lt $b ]
then
    echo "a value is less than b"
fi

Output: a value is less than b

Example: How to Use the if .. else Statement

Let's see an example where we want to find out whether the first number is greater or smaller than the second one. Here, [ $a -lt $b ] evaluates to false, which causes the else branch of the code to run.

a=65
b=35

if [ $a -lt $b ]
then
   echo "a is less than b"
else
   echo "a is greater than b"
fi

Output: a is greater than b

Example: How to Use if..elif..else Statements

For compound comparisons, we can use the AND (-a) and OR (-o) operators inside the test brackets to check conditions across two or more values.

In this example, we check conditions across three values:

# Sample values (assumed for illustration)
a=10
b=10
c=20

if [ $a == $b -a $b == $c -a $a == $c ]
then
   echo "All values are equal"

elif [ $a == $b -o $b == $c -o $a == $c ]
then
   echo "May be more than one value is equal"

else
   echo "All numbers are not equal"

fi

Conclusion on Bash Scripting – If Statement

You can check inputs with conditions like if..else and make your code more dynamic. I hope this tutorial helped you learn the Bash if statement.

I hope you found this tutorial helpful.

What’s your favorite thing you learned from this tutorial? Let me know on Twitter!

Using secrets from Azure Key Vault in a pipeline

As a best practice, DevOps engineers need to ensure that all secrets are kept inside Key Vault instead of being used directly from an Azure DevOps variable group. So, in this article, we are going to see how to substitute variables from Key Vault in YAML Azure DevOps pipelines (i.e., using secrets from Azure Key Vault in a pipeline).

Config File

Below is the sample config file which we are going to use for substituting variables from Key Vault in YAML Azure DevOps pipelines

Step 1: Fetch the Key from Key vault:

The variable substitution can be done with two tasks in Azure DevOps, so let's start. The AzureKeyVault task fetches the latest values of all or a subset of secrets from the vault and sets them as variables that can be used in subsequent tasks of the pipeline. The task is Node-based and works with agents on Linux, macOS, and Windows. First, we create the task for connecting to Azure Key Vault and fetching the secrets. Because we set RunAsPreJob: false, the values are scoped only to the tasks that follow in this job.

- task: AzureKeyVault@2
  inputs:
    azureSubscription: 2a28a5af-3671-48fd-5ce1-4c144540aae2
    KeyVaultName: kv-dgtl-dev
    SecretsFilter: 'smtp-host,smtp-username,smtp-password'
    RunAsPreJob: false

Points to remember for variable substitution from Key Vault:

  • RunAsPreJob – when set to true, the Key Vault task runs before job execution begins and exposes the secrets to all tasks in the job, not just the tasks that follow it. The default value is false.
  • Ensure the agent machine has the required permissions to access the Azure Key Vault.
  • If you want to fetch all the secrets in this task, specify '*' instead of the secret names in SecretsFilter.
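For instance, to fetch every secret in the vault and make them available to the whole job, the same task could be written like this (subscription and vault names reused from the sample above):

```yaml
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 2a28a5af-3671-48fd-5ce1-4c144540aae2
    KeyVaultName: kv-dgtl-dev
    SecretsFilter: '*'   # fetch all secrets instead of a named subset
    RunAsPreJob: true    # run before the job so every task can use the secrets
```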

Step 2: Apply the secrets to config files:

Second, we use the replacetokens task to specify the target files in which the variables should be replaced. Once this task executes, the values fetched from Key Vault are applied to the matching tokens.

- task: replacetokens@5
  inputs:
    rootDirectory: 'src/Feature/Forms/code/App_Config/Include/Feature/'
    targetFiles: 'dotnethelpers.Feature.Forms.SMTP.config,SMTP_external.config'
    encoding: 'auto'
    tokenPattern: 'default'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    actionOnNoFiles: 'continue'
    enableTransforms: false
    enableRecursion: false
    useLegacyPattern: false
    enableTelemetry: true

Points to remember:

  • The token pattern is set to default, so tokens look like #{YOUR_VARIABLE}#; you can define a different pattern based on your requirements.
  • The names of the Key Vault secrets need to match the config variables to be substituted. For example, the config has variables like smtp-host, smtp-username, and smtp-password, so the Azure Key Vault secret names need to match them exactly.

How to Find and delete duplicate Files Using PowerShell

Anyone who manages file storage has to keep track of file sizes to ensure there is always enough free space. Documents, photos, backups and other files can quickly fill up your shared file resources, especially if you have a lot of duplicates: files with the same content but different names. Duplicate files are often the result of user mistakes, such as double copy actions or incorrect folder transfers. To avoid wasting space and driving up storage costs, you have to analyze your file structure and find and delete duplicate files using PowerShell.

As a result, we end up running out of disk space and then have to sit and hunt for unnecessary files to free up storage.
One of the biggest challenges during such a clean-up is getting rid of duplicate files. A simple Windows PowerShell script can help you complete this tedious task faster. There are many ways to handle this scenario; we will discuss a few examples here.

Find Duplicate file using Get-FileHash

Do you need to compare two files or make sure a file has not changed? The PowerShell cmdlet Get-FileHash generates hash values for files or streams of data. A hash is simply a function that converts one value into another. Sometimes the hash value may be smaller to save space, or the hash value may be a checksum used to validate a file. Therefore, a hash will be different if even a single character in the input is changed.
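Get-FileHash defaults to the SHA-256 algorithm, and this property is easy to demonstrate with any SHA-256 tool. As a quick illustration (using sha256sum in bash here rather than the PowerShell cmdlet itself):

```shell
#!/bin/bash
# Two files with identical content produce identical hashes; changing a single
# character changes the hash completely. (sha256sum stands in for PowerShell's
# Get-FileHash, which also defaults to SHA-256.)
printf 'hello world' > file1.txt
printf 'hello world' > file2.txt   # same content, different name
printf 'hello worlD' > file3.txt   # one character changed

sha256sum file1.txt file2.txt file3.txt
```

The first two files yield identical hashes; the third, differing by one character, yields a completely different one.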

In this demo, I have 4 text files: 3 of them have the same content with different file names, and the remaining file has unique content, as shown in the image below.

STEP 1: Open the PowerShell window

Open PowerShell: Click on the Start Menu and type “PowerShell” in the search bar. Then, select “Windows PowerShell” from the results.

STEP 2: Set the directory where you want to search for duplicate files:

$filePath = 'C:\Thiyagu Disk\backupFiles\'

STEP 3: Get all the child items inside the file path to check for duplicates.

Use the Get-ChildItem cmdlet to find all files in the directory: Type “Get-ChildItem -Recurse -File” to list all files in the current directory and its subdirectories. The “-Recurse” option tells PowerShell to search all subdirectories.

Get-ChildItem -Path $filePath -Recurse

STEP 4: Find duplicate files using Get-FileHash cmdlet.

Get-FileHash generates a hash value for each file; grouping by that hash value, as shown below, separates the duplicate files from the unique ones.

Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property Hash | Where-Object { $_.Count -gt 1 } | ForEach-Object { $_.Group | Select-Object Path, Hash }

Full Code: Find the duplicate files

$filePath = 'C:\backupFiles\'
$group_by_unique_files = Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property Hash | Where-Object { $_.Count -gt 1 }
$duplicatefile_details = $group_by_unique_files | ForEach-Object { $_.Group | Select-Object Path, Hash }
$duplicatefile_details

Full Code: Find and delete duplicate Files Using PowerShell

$filePath = 'C:\backupFiles\'
$group_by_files = Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property Hash | Where-Object { $_.Count -gt 1 }
$group_by_files
$duplicatefile_details = $group_by_files | ForEach-Object { $_.Group | Select-Object Path, Hash }
$duplicatefile_details | Out-GridView -OutputMode Multiple | Remove-Item

After finding the duplicate files, you can move or delete them based on your requirements. If you want to delete through a UI, you can use Out-GridView and delete by selecting multiple files as shown below. A user may select the files to be deleted in the table (to select multiple files, press and hold CTRL) and click OK.

Note: Please be careful while using the Remove-Item cmdlet as it can permanently delete files from your computer. It’s recommended to test this command on a test folder before using it on your actual data.