
How to Continue Azure Pipeline on failed task

Introduction

Sometimes a failing script does not fail the task when it should, and sometimes a failing command should not fail the task. How do you handle these situations and continue an Azure Pipeline on a failed task?

Sometimes you may have pipeline tasks that depend on an external reference which can fail at any time. In these scenarios, if the task fails (or fails intermittently) due to an issue in that external reference, your entire pipeline fails, and you have no insight into when the underlying bug will be fixed. In such cases you may want the pipeline to keep running despite an issue in that task, and to ensure that any future issue in the task will not lead to pipeline failure.

This simple technique can be used in scenarios where you have a non-mandatory task that is failing intermittently and you want to continue the execution of the pipeline.

Solution:

In this case, it absolutely makes sense to continue the execution of the next set of tasks (continue the Azure Pipeline on a failed task). In this post, we are going to learn how to continue the execution of the pipeline when a particular task has failed, using the continueOnError/failOnStderr/failOnStandardError properties.

Using the continueOnError attribute in a PowerShell/script task

Let’s build a pipeline with a few tasks where we simulate an error in one of them, as shown below. In the code below, the following points are to be noted.

In the task named “continueOnError Task”, the script intentionally calls Write-Hosts (a non-existent cmdlet) to simulate an error. Second, the continueOnError attribute has been added so that any error in this task is ignored while executing the pipeline.

steps:
- task: PowerShell@2
  displayName: "continueOnError Task"
  continueOnError: true
  inputs:
    targetType: 'inline'
    script: |
      Write-Hosts "Continue if any issue here"  # Write-Hosts is not a valid cmdlet, so this task fails

- task: PowerShell@2
  displayName: "No Error Task"
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Block if any issue here"

Now, when you run the pipeline, the failed task is flagged with a warning indication and execution carries on to the next task, as shown below.

 

Continue Azure Pipeline on failed task

As with the PowerShell task, you can continue on error in other tasks as well, as shown below (click the link below for reference on the other tasks).
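For the failOnStderr property mentioned earlier, here is a minimal sketch of the inverse scenario (the display name and message are illustrative): with failOnStderr set to true, the task fails whenever the script writes to the error stream.

- task: PowerShell@2
  displayName: "failOnStderr Task"
  inputs:
    targetType: 'inline'
    failOnStderr: true  # fail this task if anything is written to stderr
    script: |
      Write-Error "Writing to the error stream now fails the task"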

Summary:

In this post, we have learnt how to continue the execution of the pipeline in spite of having an error in one of the tasks, that is, how to continue an Azure Pipeline on a failed task. This simple technique can be used in scenarios where you have a non-mandatory task that is failing intermittently and you want to continue the execution of the pipeline.

How to Enable additional logs in Azure pipeline execution

One of the most important aspects of pipeline development in the Azure DevOps life cycle is having tools and techniques in place to find the root cause of any error that occurs during pipeline execution. In this article, we will learn how to review the logs that help in troubleshooting errors in pipelines by enabling additional logs during Azure pipeline execution.

By default, an Azure DevOps pipeline provides logs with information about the execution of each step in the pipeline. In case of an error, or when you need more information to debug, the default logs may not help you understand what went wrong in the pipeline execution. In those cases, it helps to get more diagnostic logs about each step in the pipeline.

Below are two different techniques for enabling additional logs.

Enable System Diagnostics logs for a specific execution of the pipeline

If you would like to get additional logs for a specific pipeline execution, all you need to do is tick the Enable system diagnostics checkbox, as shown in the image below, and click on the Run button.

Enable System Diagnostics logs for all executions of the pipeline

If you always want System Diagnostics enabled, to capture the diagnostics trace even when the pipeline is executed automatically (continuous integration scenarios), then you need to create a variable in the pipeline with the name system.debug and set its value to true, as shown below.

system.debug = true (generates diagnostics logs during the execution of pipelines)
system.debug = false (does not generate diagnostics logs during the execution of pipelines)
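As a minimal sketch, the same switch can also be set directly in the pipeline YAML instead of through the UI or a variable group (system.debug is the real variable name; the surrounding pipeline is assumed):

variables:
  system.debug: true  # emit verbose diagnostic logs for every run of this pipeline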

Once we set the value of system.debug to true in our variable group (which is referenced in our pipeline), the pipeline starts showing additional logs in purple, as shown below.

System.debug = true

System.debug = false

Note: If you would like to view the logs without any colors, you can click on the View raw log button, which opens the logs in a separate browser window, and you can save them if required.

Download logs from the Azure pipeline:

If you would like to share the logs with other teams who don't have access to your pipeline, you can download them by clicking on the Download logs button on the pipeline summary page, as shown below, and share the file.

Understanding the directory structure created by Azure DevOps tasks

If you are a beginner in Azure DevOps, understanding when and which folders are created and populated by the pipeline tasks is one of the first steps in learning, and that is what this post covers: understanding the directory structure created by Azure DevOps tasks.

The Azure DevOps agent supports Windows, Linux/Ubuntu, and macOS operating systems, but in this post we are going to look at a Windows agent machine. Let's try to understand the folder structure by creating a very simple YAML-based Azure DevOps pipeline and adding tasks based on the instructions below.

Let's understand the directory structure created by Azure DevOps tasks here!

We are going to list all the folders (using the YAML below with a PowerShell task) that are created for the current pipeline. This set of folders is called the workspace, and its local folder path can be referred to using a pre-defined variable called $(Pipeline.Workspace).

In the YAML below, we add a PowerShell task that prints the folder structure inside $(Pipeline.Workspace).

- task: PowerShell@2
  displayName: Show all Folders in $(Pipeline.Workspace)
  inputs:
    targetType: 'inline'
    pwsh: true
    script: |
      Get-ChildItem -Path $(Pipeline.Workspace)

Once your pipeline has executed, you will be able to see folders named a, b, s and TestResults in the workspace, as shown in the snapshot below.

Based on the image above, let's now understand these folders and their usage in detail.

Folder Name: a
Referred using: $(Build.ArtifactStagingDirectory), $(Build.StagingDirectory), $(System.ArtifactsDirectory)

The artifact staging directory is a pre-defined variable used in build pipelines for storing the artifacts of the solution being built (simply, its output artifacts). If that is confusing: in simple terms, it is the output of the build process for any type of solution (.NET, Java, Python, etc.), or it could be as simple as copied files.

The Publish Build Artifacts task creates an artifact from whatever is in this folder. This folder is cleared/purged before each new build.

Folder Name: b
Referred using: $(Build.BinariesDirectory)

The binaries directory is a pre-defined variable used for storing the output of compiled binaries that are created as part of the compilation process.

Folder Name: s
Referred using: $(System.DefaultWorkingDirectory), $(Build.SourcesDirectory)

The default working directory is a pre-defined variable that is mostly used to store the source code of the application. $(System.DefaultWorkingDirectory) is used automatically by the checkout step, which downloads the code, as shown in the snapshot below and in the sketch that follows. In simple terms, this is the working directory, where your source code is stored.
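As a minimal sketch (the inline echo task is illustrative), the default checkout step is what populates the s folder:

steps:
- checkout: self  # clones the repository into $(Build.SourcesDirectory), i.e. the 's' folder
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: Write-Host "Sources are checked out in $(Build.SourcesDirectory)"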

Folder Name: TestResults
Referred using: $(Common.TestResultsDirectory)

The test results directory is a local directory on the agent that can be used for storing test results.
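Before the summary, here is a minimal sketch of a task that echoes each of these pre-defined directory variables so you can compare them on your own agent (the display name is illustrative):

- task: PowerShell@2
  displayName: Show predefined directory variables
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "ArtifactStagingDirectory : $(Build.ArtifactStagingDirectory)"
      Write-Host "BinariesDirectory        : $(Build.BinariesDirectory)"
      Write-Host "SourcesDirectory         : $(Build.SourcesDirectory)"
      Write-Host "TestResultsDirectory     : $(Common.TestResultsDirectory)"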

Summary of directory structure created by Azure DevOps tasks

Folder      | Referred using                                                                               | Purpose
a           | $(Build.ArtifactStagingDirectory), $(Build.StagingDirectory), $(System.ArtifactsDirectory)  | Build output artifacts; cleared before each build
b           | $(Build.BinariesDirectory)                                                                   | Compiled binaries
s           | $(System.DefaultWorkingDirectory), $(Build.SourcesDirectory)                                 | Working directory with the checked-out source code
TestResults | $(Common.TestResultsDirectory)                                                               | Test results

Variable Substitution in Config using YAML DevOps pipeline

As a DevOps engineer, you are responsible for developing the Azure DevOps pipelines that replace configuration values per environment (DEV/TEST/PREPROD/PROD), because those values change across environments. In this article, we are going to learn how to dynamically change the environment-specific values (variable substitution) in Azure DevOps pipelines using an Azure DevOps extension called Replace Tokens.

In my previous article, we discussed that DevOps engineers need to ensure all secrets are kept inside Key Vault instead of being used directly from an Azure DevOps variable group. But not every project makes the same decision; many projects still use the variable group for maintaining secrets and lock them down. This article focuses on that setup and explains how to manage environment-specific configurations (variable substitution using Replace Tokens, an Azure DevOps Marketplace extension).

Example Config File

Below is the sample config file which we are going to use for variable substitution in YAML Azure DevOps pipelines.

These configuration values are environment-specific and have different values in different environments. DevOps engineers have to develop the Azure DevOps pipelines that replace these values in the config below. In my case the smtp-host, smtp-username and smtp-password differ between the lower environments (dev/qa and preprod) and the higher environment.
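As a minimal sketch, a hypothetical .config along those lines could look like the following, using the extension's default #{...}# token pattern (the file contents are illustrative; the smtp-* keys come from the description above):

<configuration>
  <appSettings>
    <!-- tokens in the #{...}# pattern are replaced per environment by the Replace Tokens task -->
    <add key="smtp-host" value="#{smtp-host}#" />
    <add key="smtp-username" value="#{smtp-username}#" />
    <add key="smtp-password" value="#{smtp-password}#" />
  </appSettings>
</configuration>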

How to use Replace Tokens Extension in Azure YAML pipeline

Here we are going to use the Replace Tokens task to replace tokens in config files with variable values.

The parameters of the task are described below; the YAML name is given in parentheses:

  • Root directory (rootDirectory): the base directory for searching files. If not specified the default working directory will be used. Default is empty string
  • Target files (targetFiles): the absolute or relative newline-separated paths of the files in which to replace tokens. Wildcards can be used (e.g. **\*.config for all .config files in all subfolders). Default is **/*.config
  • Token prefix (tokenPrefix): when using custom token pattern, the prefix of the tokens to search in the target files. Default is #{
  • Token suffix (tokenSuffix): when using custom token pattern, the suffix of the tokens to search in the target files. Default is }#

Example 1:  Replace with Target files parameter

- task: replacetokens@5
  displayName: 'replacing token in new config file'
  inputs:
    targetFiles: |
      src/Feature/dotnethelpers.Feature.Common.General.config
      src/Foundation/dotnethelpers.Foundation.SMTP.config
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '{'
    tokenSuffix: '}'
    useLegacyPattern: false
    enableTelemetry: true

Note: the task only works on text files; if you need to replace tokens in an archive file, you will need to extract the files first and archive them back afterwards.

Example 2:  Replace with Root directory and Target files parameter

As you can see, the target files can also be given as a comma-separated list, as in the YAML below.

- task: replacetokens@5
  inputs:
    rootDirectory: 'src/Feature/Forms/code/App_Config/Include/Feature/'
    targetFiles: 'dotnethelpers.Feature.config,dotnethelpers.Foundation.SMTP.config'
    encoding: 'auto'
    tokenPattern: 'default'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    actionOnNoFiles: 'continue'
    enableTransforms: false
    enableRecursion: false
    useLegacyPattern: false
    enableTelemetry: true

Example 3:  Replace with wildcard


As per the targetFiles path below, Replace Tokens will search all the .config files inside the node folder and replace tokens where applicable.

- task: replacetokens@5
  displayName: replacing token
  inputs:
    targetFiles: |
      node/*.config
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '#{'
    tokenSuffix: '}#'
    useLegacyPattern: false
    enableTelemetry: true

Sample Output: Variable Substitution

Add Tags on an Azure SQL Database Using PowerShell

As system admins/DevOps engineers, during audit time we usually need to close findings quickly (by grouping resources for filtering) instead of doing manual work. In our case, we received a review comment that all resources need to have tags. Completing this task manually would take huge effort since we maintain a lot of resources, so we decided to automate it. Here we are going to discuss how to add tags on an Azure SQL database using PowerShell.

What Is A Tag In Azure?

The New-AzTag cmdlet creates a predefined Azure tag or adds values to an existing tag, and creates or updates the entire set of tags on a resource or subscription.

Azure tagging is an excellent feature from Microsoft that helps you logically group your Azure resources and track them. It also helps to automate the deployment of resources, and another important benefit is that it provides visibility into the resource costs each group is liable for.

Syntax: New-AzTag [-ResourceId] <String> [-Tag] <Hashtable> [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>]


What are the ways to create a Tags in Azure?

Azure tags are key-value pairs that can be created and assigned to resources in Azure using the Azure Portal, PowerShell, the Azure CLI, etc.

Note: Tag names and Tag Values are case-sensitive in nature.

Why Use The Azure tag ?

The main intention of Azure tags is to organize the resources in the Azure Portal. Organizing resources properly helps you identify the category each resource belongs to. So basically, Azure tags are name-value pairs that help to organize Azure resources in the Azure Portal.

For example, when you have many resources in your Azure Portal, tags really help you categorize them. Suppose you have 6 virtual machines (VMs) in your Azure subscription: 2 are in our development environment, 2 are for the QA environment, and the remaining 2 belong to our production environment. We can tag them as Environment = Development, Environment = QA, or Environment = Production, and then we can easily see which resources come under each specific environment.


How to create Azure tags using PowerShell


Step 1: Connect to your Azure account using Connect-AzAccount

Before starting, please gather the service principal secret, AppId and tenantId needed to connect to Azure and perform operations against the Azure services.

#Converts plain text or encrypted strings to secure strings.
$SecureServicePrinciple = ConvertTo-SecureString -String "rfthr~SSDCDFSDFE53Lr3Daz95WF343jXBAtXADSsdfEED" -AsPlainText -Force
#Assigning the App ID
$AppId = "0ee7e633-0c49-408e-b956-36d62264f644"
#Assigning the Tenant ID
$tenantId= "32cf8ba2-403a-234b-a3b9-63c2f8311778"
#storing a username and password combination securely.
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $AppId, $SecureServicePrinciple
#Connect to Azure with an authenticated account for use with cmdlets from the Az PowerShell modules.
Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $tenantId

Step 2: Define the tags and assign them as a hashtable

Based on my requirement I added the below tags; you can create any number of tags based on how you segregate your resources.

$tags = @{"Business Unit"="WX_Digital"; "Environment"="PROD"; "Owner"="dotnet-helpers" ; "Project"="webapp"}


Step 3: Get the SQL databases in the specific resource group

Example 1: Get single SQL database to update

  • The Get-AzResource cmdlet gets all the resources in the current subscription, here filtered by the SQL database resource type.
  • After executing the script below, the $RESOURCE.Id variable holds the resource IDs of all databases of that resource type, as shown in the snapshot below.
  • -ResourceType: the resource type of the resource(s) to retrieve, for example Microsoft.Compute/virtualMachines.
#GET the single database by where condition
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" | Where-Object name -Like 'sqlsrvr-dgtl-pprd/sitecore_master' 

$resourceIds = $RESOURCE.Id

Example 2: Get all the database to update

#Gets All the database 
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" 

$resourceIds = $RESOURCE.Id

Step 4: Update new tags using new-AzTag command

We can create a predefined Azure tag or add values to an existing tag using the New-AzTag cmdlet, which creates or updates the entire set of tags on a resource or subscription.

-ResourceId: the resource identifier of the entity being tagged. A resource, a resource group or a subscription may be tagged.

Example 1: Add tags to a single database

#Creates or updates the entire set of tags on a resource or subscription.
#The resource identifier for the entity being tagged. A resource, a resource group or a subscription may be tagged.
New-AzTag -ResourceId $resourceIds -Tag $tags

Example 2: Add tags to all databases under the resource group

foreach ($resourceId in $resourceIds) {
    Write-Output $resourceId
    #Creates or updates the entire set of tags on a resource or subscription.
    New-AzTag -ResourceId $resourceId -Tag $tags
}

Output:
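To double-check the result, here is a minimal sketch (assuming the $resourceIds collection from the steps above) that reads the tags back with Get-AzTag:

# Read back the tags on each database to verify the update
foreach ($resourceId in $resourceIds) {
    Get-AzTag -ResourceId $resourceId | Select-Object -ExpandProperty Properties
}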


Powershell Error handling with $ERROR Variable

In all programming, code will have errors, and troubleshooting those problems can be difficult. Like other programming languages, PowerShell has error handling mechanisms for handling errors in our programs; in this post, we will discuss error handling with the $Error variable.

In PowerShell, errors can be categorized in two ways: terminating and non-terminating. As the name implies, a terminating error stops code execution when the error is thrown. A non-terminating error means the code continues with the next line of execution when the error occurs.

The $Error Variable

$Error is an automatic global variable in PowerShell which always contains an ArrayList of zero or more ErrorRecord objects. As new errors occur, they are added to the beginning of this list, so you can always get information about the most recent error by looking at $Error[0]. Both terminating and non-terminating errors are part of this list.

How does the $Error variable work?

When you start a new PowerShell session, $Error is empty. Normally, if you run a Windows PowerShell command and an error occurs, the error record is appended to the automatic variable named $Error. We then use $Error[0] to display it and to access the rest of the information it holds.

The $Error variable holds a collection of error information, which is why $Error[0] gets you to the most recent error message object. $Error[0] holds the last error encountered until the PowerShell session ends.

Example #1: Starting a new PowerShell session

For this example, we start with a new PowerShell window session, so the $Error variable is empty, as shown below.

$error[0]

Example #2: Executing a script that contains an error

When an error occurs in our code, it is saved to the automatic variable named $Error. The $Error variable contains an array of recent errors, and you can reference the most recent error in the array at index 0.

In the example below, the path does not exist; instead of letting it throw an error we include -ErrorAction SilentlyContinue, and on the next line we write out the current error using the $Error variable.

Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction SilentlyContinue
Write-Warning $Error[0]

Getting Members of $Error Variable

We can use Get-Member to expose the members of our PowerShell variable objects; using the members listed below, we can dig deeper into the $Error[0] object to extract detailed error information.
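A minimal sketch of that inspection:

# List the properties and methods available on the most recent ErrorRecord
$Error[0] | Get-Member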

Example #3: Getting the detailed Error using $Error variable

In the example below, we dig into the $Error[0] object to extract the line that failed during execution. This assumes that the error information is available in the first element of the $Error array. The InvocationInfo property of the ErrorRecord object contains information about the context in which the error occurred, including the line number.

Keep in mind that if there are multiple errors in the $Error array, you might want to loop through them or access a specific error by its index. Also note that this information might not be available for all types of errors, depending on how the error was generated.

$Error[0].InvocationInfo

#Display the failed code line
Write-Host "Error occurred at line : " $Error[0].InvocationInfo.Line

How To Copy Secrets From One Key Vault To Another In Azure

My Scenario:

In my case, we were configuring the application to be available in two regions for high availability. During the configuration, we observed that region 1 had a large number of secrets, and it would be very difficult to move them one by one to region 2 (i.e., to the key vault in the other region). So we decided to automate the process instead of doing it manually, so that without manual effort and errors we can copy all secrets from one key vault to another in Azure. This blog will help you understand how to copy secrets from one Key Vault to another in Azure using a PowerShell script.

To clone a secret between key vaults, we need to perform two steps:

  1. Retrieve/export the secret value from the source key vault.
  2. Import this value into the destination key vault.

You can also refer to the link below to learn how to maintain your secrets in Key Vault and access them in a YAML pipeline.

Step 1: Install the Azure Az module

Use the cmdlet below to install the Azure PowerShell module if it is not already installed.

# Install the Azure PowerShell module if not already installed
  Install-Module -Name Az -Force -AllowClobber

Step 2: Set Source and destination Key Vault name

# Pass both Source and destination Key Vault Name
Param(
    [Parameter(Mandatory)]
    [string]$sourceKvName,
    [Parameter(Mandatory)]
    [string]$destinationKvName
)

Step 3: Connect to Azure to access the Key Vault (non-interactive mode)

Since we are automating this, you can't use Connect-AzAccount on its own (it pops up an interactive authentication prompt); if you want to execute without any manual intervention, use az login with a service principal in non-interactive mode, as shown below.

# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "0ff3664821-0c94-48e0-96b5-7cd6422f46" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"

Step 4: Get all the secret names from the source KV

# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name

Step 5: Copy secrets from the source to the destination KV

The script below loops over the secret names, fetches each name and value from the source key vault, and sets the same key and value in the destination key vault.

# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
    Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
        -SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}

Full code

# Pass both Source and destination Key Vault Name
Param(
    [Parameter(Mandatory)]
    [string]$sourceKvName,
    [Parameter(Mandatory)]
    [string]$destinationKvName
)

# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "422f464821-0c94-48e0-96b5-7cd60ff366" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"

# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name

# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
    Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
        -SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}

 

How to Create and Use PowerShell Modules

What is a Module in PowerShell?

As per docs.microsoft.com, a module is a package that contains PowerShell members, such as cmdlets, providers, functions, workflows, variables, and aliases. The members of this package can be implemented in a PowerShell script, a compiled DLL, or a combination of both. These files are usually grouped together in a single directory.

In simple terms, PowerShell modules allow us to organize our functions and use them in other scripts; they let you combine multiple scripts to simplify code management, accessibility, and sharing. Many PowerShell scripters are slow to take the step of building a module, but it allows you to use the same code in many places without copying and pasting it all over the place.

When do I create a PowerShell module?

  • When the same script needs to be used more than once.
  • If we need to break a script apart into functions because it's getting too complex to stay in a single script.
  • If we need to share the code with others. 

In this post, you will learn step-by-step how to create and use modules.

STEP #1 Starting with a PowerShell Function

A PowerShell module can store any number of functions. To create a new module, we start by writing a PowerShell function. When your scripts get large, you start using more functions; these functions could come from someone else or be functions that you write yourself, and they start collecting at the top of your script.

In the example below, we create a function called Get-BIOSInfo which outputs the BIOS information for a given computer.

function Get-BIOSInfo
{
    param($ComputerName)
    Get-WmiObject -ComputerName $ComputerName -Class Win32_BIOS
}

Get-BIOSInfo -ComputerName localhost

STEP #2 Create a separate Folder for Custom Module 

All custom modules need to be saved under the Modules folder, usually located at C:\Program Files\WindowsPowerShell\Modules. We need to create a separate folder for our module, so here we create a folder called Get-BIOSInfo as shown below.

STEP #3 Save the function as a module with the .psm1 extension

Next, we need to save our function under the Get-BIOSInfo folder. Most importantly, the folder name must match the module name. Now I've got the Get-BIOSInfo module saved/created, and I've called it Get-BIOSInfo.psm1, so I can ask my team to use it.

To turn our function into a module, the file needs to be saved with the .psm1 extension as shown below.

STEP #4 Test-Driving Your Module

PowerShell automatically loads your new module and makes all of its commands available. Executing the Get-Module cmdlet shows that your module contains just one function, Get-BIOSInfo. To understand what has just happened, I ran the Get-Module cmdlet below; the output is shown underneath.
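A minimal sketch of that check (the module name matches the folder created in STEP #2):

# Confirm the module is auto-loaded and list the commands it exports
Get-Module Get-BIOSInfo
Get-Command -Module Get-BIOSInfo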

STEP #5 Finally, import your module to utilize it in any script

Open a different PowerShell window, or open a new PowerShell host (console or ISE). Your command Get-BIOSInfo is available immediately! It is now a standard PowerShell command just like the other commands you use. Importing the module brings all of its functions and variables into each user's PowerShell session.

Note:

  • PowerShell caches modules, so once you have loaded and used a module in a PowerShell session, changes to the module will not take effect. To see changes, either use the module in a new PowerShell host or force a complete module reload (see the snippet after this list).
  • The module name should not be the name of your function. It should be a generic name such as a topic because later you will want to store more functions into your module.
  • Do not use special characters or whitespace in your module name.
  • PowerShell Module can store any number of functions.
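A minimal sketch of the forced reload mentioned in the first note above:

# Re-import the module, bypassing the session cache, so recent edits take effect
Import-Module Get-BIOSInfo -Force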

Conclusion

Having the option to create a module in PowerShell directly is super handy, and it gives us real flexibility in our day-to-day DevOps and other automation tasks.

How to Check SSL Certificate Expiration Date in PowerShell

SSL (Secure Sockets Layer) is a digital certificate technology that provides an encrypted connection between server and client and authenticates a website's identity. To keep user-sensitive data secure and maintain user trust, it is very important to check SSL certificate expiration and renew certificates when they are due. The challenge for the support team during a renewal activity is that checking all the domains, each with a different certificate, becomes a critical job. To overcome this, we wrote a PowerShell script to validate all the domains before and after the renewal activity. Let's discuss how to check an SSL certificate expiration date in PowerShell.

In PowerShell, we can use [Net.HttpWebRequest] to make an HTTP web request to the website and get all the properties associated with it, including the certificate details. This helps us find the SSL certificate expiration date and other details of the certificate.

System.Net.ServicePoint is the .NET class for managing collections of ServicePoint objects. The ServicePointManager returns the ServicePoint object that contains the information about the internet resource URI.

Check SSL Certificate Expiration Date

Step: 1 Get the URL properties

In the PowerShell lines below, [Net.HttpWebRequest] creates an HTTP web request to the website URI; once the request is made, the URI properties such as Address, ConnectionName, Certificate, etc. are available through the $webRequest variable.

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create Web Http request to URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)
# Make the request so the service point's certificate gets populated
$webRequest.GetResponse().Close()

Step: 2 Retrieve the certificate start and end dates

As we already have the certificate details in $webRequest, we can retrieve the certificate start and end dates as shown below. $webRequest.ServicePoint.Certificate gets the certificate details such as the issuer, handle, and SSL certificate thumbprint, and we can use its GetExpirationDateString() method to check the SSL expiration date for a website in PowerShell.

# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End   = $webRequest.ServicePoint.Certificate.GetExpirationDateString()

Step: 3 Find the no. of Remaining days for expiration

# Calculate the no. of days remaining until expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays

Full Code: Check SSL Certificate Expiration Date in PowerShell

The full code below helps check the SSL certificate expiration date in PowerShell for a single domain; if you want to check multiple URLs, place all the domains in a txt file and loop the same code over them for validation (a sketch of that loop follows the output).

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create Web Http request to URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)
# Make the request so the service point's certificate gets populated
$webRequest.GetResponse().Close()
# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End = $webRequest.ServicePoint.Certificate.GetExpirationDateString()
# Calculate the no. of days remaining until expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays

OUTPUT:
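For the multi-domain case mentioned above, here is a minimal sketch, assuming a hypothetical domains.txt with one URL per line:

# Check the certificate expiry for every domain listed in domains.txt (hypothetical file name)
[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
Get-Content -Path ".\domains.txt" | ForEach-Object {
    $webRequest = [Net.HttpWebRequest]::Create($_)
    try { $webRequest.GetResponse().Close() } catch { }  # connect so the certificate gets populated
    $End = $webRequest.ServicePoint.Certificate.GetExpirationDateString()
    $ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
    Write-Host "$_ expires on $End ($ExpirationDays days left)"
}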