All posts by Thiyagu

Powershell Error handling with $ERROR Variable

In any programming language, code will produce errors, and troubleshooting those problems can be difficult. Like other programming languages, PowerShell has error handling mechanisms for dealing with errors in our code (in this post, we will discuss error handling with the $Error variable).

In PowerShell, errors fall into two categories: terminating and non-terminating. As the name implies, a terminating error stops code execution when the error is thrown. With a non-terminating error, execution continues with the next line of code after the error is raised.

The $Error Variable

$Error is an automatic global variable in PowerShell that always contains an ArrayList of zero or more ErrorRecord objects. As new errors occur, they are added to the beginning of this list, so you can always get information about the most recent error by looking at $Error[0]. Both terminating and non-terminating errors become part of this list.

How does the $Error variable work?

When you start a new PowerShell session, $Error is empty. Normally, if you run a Windows PowerShell command and an error occurs, the error record is appended to the automatic variable named $Error. We then use $Error[0] to display and access the rest of the information it holds.

The $Error variable holds a collection of error records, which is why $Error[0] gets you to the most recent error object. $Error[0] keeps holding the last error encountered until the PowerShell session ends.

Example #1: Starting a new PowerShell session

For this example, we opened a new PowerShell window session, so the $Error variable is empty as shown below.

$error[0]


Example #2: Executing the below script which had the error

When an error occurs in our code, it is saved to the Automatic variable named $Error. The $Error variable contains an array of recent errors, and you can reference the most recent error in the array at index 0.

In the example below, the path does not exist; instead of letting it throw an error we included -ErrorAction SilentlyContinue, and on the next line we write out the current error using the $Error variable.

Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction SilentlyContinue
Write-Warning $Error[0]

Getting Members of $Error Variable

We can use Get-Member to expose the members of the objects stored in $Error. Using those members, we can dig deeper into the $Error[0] object to extract the details we need.
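For instance, piping the most recent error record to Get-Member lists the properties we can drill into (Exception, InvocationInfo, CategoryInfo, and so on); the path below is only an example:

```powershell
# Produce a sample non-terminating error without stopping the script
Get-Item -Path "C:\path\that\does\not\exist" -ErrorAction SilentlyContinue

# List the members available on the latest ErrorRecord
$Error[0] | Get-Member -MemberType Property

# Commonly used properties
$Error[0].Exception.Message       # the error text
$Error[0].CategoryInfo.Category   # e.g. ObjectNotFound
```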

Example #3: Getting the detailed Error using $Error variable

In the example below, we dig into the $Error[0] object to extract the line that failed during execution. This assumes that the error information is available in the first element of the $Error array. The InvocationInfo property of the ErrorRecord object contains information about the context in which the error occurred, including the line number.

Keep in mind that if there are multiple errors in the $Error array, you might want to loop through them or access a specific error by its index. Also note that this information might not be available for all types of errors, depending on how the error was generated.

$Error[0].InvocationInfo

#Display the failed code line
Write-Host "Error occurred at line : " $Error[0].InvocationInfo.Line
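If several errors have accumulated, a short loop (a minimal sketch) can print the most recent few, newest first:

```powershell
# Show up to the five most recent errors; index 0 is the newest
$max = [Math]::Min(5, $Error.Count)
for ($i = 0; $i -lt $max; $i++) {
    Write-Host ("[{0}] {1}" -f $i, $Error[$i].Exception.Message)
}
```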

How to Create and Use PowerShell Modules

What is Module in PowerShell?

As per docs.microsoft.com, A module is a package that contains PowerShell members, such as cmdlets, providers, functions, workflows, variables, and aliases. The members of this package can be implemented in a PowerShell script, a compiled DLL, or a combination of both. These files are usually grouped together in a single directory.

Simply put, PowerShell modules allow us to organize our functions and reuse them in other scripts. Modules let you combine multiple scripts to simplify code management, accessibility, and sharing, yet many PowerShell scripters are slow to take the step of building one. A module lets you use the same code in many places without copying and pasting it all over the place.

When do I create a PowerShell module?

  • When the same script needs to be used more than once.
  • When a script is getting too complex and needs to be broken apart into functions.
  • When we need to share the code with others.

In this post, you can learn Step-by-step instructions on creating and using modules.

STEP #1 Starting with a PowerShell Function

A PowerShell module can store any number of functions. To create a new module, we start by creating a PowerShell function. When your scripts get large, you start using more functions. These functions could come from someone else or be functions that you write yourself, and they start collecting at the top of your script.

In the example below, we create a function called Get-BIOSInfo which outputs the BIOS information for the specified system.

function Get-BIOSInfo
{
    param($ComputerName)
    Get-WmiObject -ComputerName $ComputerName -Class Win32_BIOS
}

Get-BIOSInfo -ComputerName localhost

STEP #2 Create a separate Folder for Custom Module 

All custom modules need to be saved under the Modules folder; usually the location is C:\Program Files\WindowsPowerShell\Modules. We need to create a separate folder for our module, so here we create a folder called Get-BIOSInfo as shown below.

STEP #3 Save the Function as Module with .psm1 extension 

Next, we save our function under the Get-BIOSInfo folder. Most importantly, the folder name must match the module name. Now that the Get-BIOSInfo module is saved as Get-BIOSInfo.psm1, I can ask my team to use it.

To turn our function into a module, the file needs to be saved with the .psm1 extension as shown below.

STEP #4 Test-Driving Your Module

PowerShell automatically loads your new module and makes all of its commands available. Running the Get-Module cmdlet shows that your module contains just one function, Get-BIOSInfo. To understand what just happened, I ran the Get-Module cmdlet and the output is shown below.

STEP #5 Finally, Import your Module to utilize in any script

Open a different PowerShell window, or open a new PowerShell (console or ISE). Your command Get-BIOSInfo is available immediately! It is now a standard PowerShell command just like the other commands you use. Importing the module brings all of the functions and variables into each user’s PowerShell session.

Note:

  • PowerShell caches modules, so once you have loaded and used a module in a PowerShell session, changes to the module will not take effect. To see the changes, either use the module in a new PowerShell host or force a complete module reload.
  • The module name should not be the name of your function. It should be a generic name such as a topic because later you will want to store more functions into your module.
  • Do not use special characters or whitespace in your module name.
  • PowerShell Module can store any number of functions.
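The forced reload mentioned in the notes can be done with the -Force switch on Import-Module; the module name here matches the Get-BIOSInfo example from this post:

```powershell
# Re-import the module, discarding the cached copy
Import-Module Get-BIOSInfo -Force

# Confirm which commands the module exports
Get-Command -Module Get-BIOSInfo
```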

Conclusion

Being able to create a module in PowerShell directly is super handy, and it gives us real flexibility in our day-to-day DevOps and other automation tasks.

How to Check SSL Certificate Expiration Date in PowerShell

SSL (Secure Sockets Layer) is a digital certificate technology that provides an encrypted connection between server and client and authenticates a website's identity. To keep user-sensitive data secure and maintain user trust, it is very important to check SSL certificate expiration and renew certificates when they are due. The challenge for the support team during a renewal activity is that checking all the domains, each with a different certificate, becomes a critical job. To overcome this, we wrote a PowerShell script to validate all the domains before and after the renewal activity. Let us discuss how to check an SSL certificate expiration date in PowerShell.

In PowerShell, we can use [Net.HttpWebRequest] to make an HTTP request to the website and get all the properties associated with it, including the certificate details. This helps us find the SSL certificate expiration date and other details of the certificate.

System.Net.ServicePoint is the .NET class that manages collections of ServicePoint objects. ServicePointManager returns the ServicePoint object that contains information about the internet resource URI.

Check SSL Certificate Expiration Date

Step: 1 Get the URL properties

The PowerShell lines below use [Net.HttpWebRequest] to create an HTTP request to the website URI and retrieve its properties, like Address, ConnectionName, Certificate, etc., into the $webRequest variable.

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create Web Http request to URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)

Step: 2 Retrieve the Certificate Start and End date

Since we already have the certificate details in $webRequest, we can retrieve the certificate start and end dates as shown below. $webRequest.ServicePoint.Certificate gets the certificate details like issuer, handle, and SSL certificate thumbprint. We can use the GetExpirationDateString() method to check the SSL expiration date for a website using PowerShell.

# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End   = $webRequest.ServicePoint.Certificate.GetExpirationDateString()

Step: 3 Find the no. of Remaining days for expiration

# Calculate the number of days remaining until expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays

Full Code: Check SSL Certificate Expiration Date in PowerShell

The full code below helps check the SSL certificate expiration date in PowerShell for a single domain. If you want to check multiple URLs, place all the domains in a txt file and loop the same code for validation.

[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Create Web Http request to URI
$uri = "https://www.dotnet-helpers.com"
$webRequest = [Net.HttpWebRequest]::Create($uri)
# Get Effective Date of the certificate
$Start = $webRequest.ServicePoint.Certificate.GetEffectiveDateString()
# Get Expiration Date of the certificate
$End = $webRequest.ServicePoint.Certificate.GetExpirationDateString()
# Calculate the number of days remaining until expiration
$ExpirationDays = (New-TimeSpan -Start (Get-Date) -End $End).Days
# Print the required details
Write-Host "Validating for :" $webRequest.Address
Write-Host "Certificate Effective Date :" $Start
Write-Host "Certificate Expiration Date :" $End
Write-Host "No. of days to Expiration :" $ExpirationDays

OUTPUT:
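To cover the multi-domain case mentioned above, the same logic can be wrapped in a loop. This is a sketch only: the urls.txt path is an example, and a GetResponse() call is added to make sure the TLS handshake has happened before the certificate is read.

```powershell
[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# One URL per line; this path is an example only
$urls = Get-Content -Path "C:\temp\urls.txt"

foreach ($uri in $urls) {
    $webRequest = [Net.HttpWebRequest]::Create($uri)
    try { $webRequest.GetResponse().Dispose() } catch { }   # force the TLS handshake
    $cert = $webRequest.ServicePoint.Certificate
    $days = (New-TimeSpan -Start (Get-Date) -End $cert.GetExpirationDateString()).Days
    Write-Host "$uri expires on $($cert.GetExpirationDateString()) ($days day(s) left)"
}
```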

Import bulk Variables to Variable Group using Azure DevOps CLI

My Scenario:

As a system admin/DevOps engineer, maintaining a variable group is a little tricky, as it is very difficult to track history and changes. In one of our migration projects we received a large number of pipeline variables with values in Excel. It is easy enough to copy/paste when there are just 20 key-value pairs; however, think about a scenario where you need to repeat this for many variable groups across multiple projects. That is a tedious job: manually creating the new key-value pairs in the variable group takes a lot of time, and there will surely be human error. To overcome this problem, we decided to import bulk variables into a variable group using the Azure DevOps CLI.

What format did we get the Excel in?

Instead of adding them directly from the Azure DevOps portal, we will automate the process of adding the key-value pairs, avoiding any manual data-entry work since we have a huge number of variables.


Prerequisite

Step 1: Retrieve the Variable Group ID:

The variable group needs to be ready before importing the variables from Excel. For this example, I already created a variable group known as “mytestvariablegroup” (as shown in the snap below) and noted the variable group ID (this ID is unique for each variable group). In my case, the variable group ID is 1, as shown in the screenshot below. This ID is used in the CLI commands generated in the next step.

Step 2: Generate Azure DevOps CLI commands using Excel Formula

Navigate to the Excel sheet, add another column, and paste the formula below. B2 and C2 are the cells containing the variable name and variable value. Apply the formula to all the rows; once done, it should look something like the image below.

=CONCAT("az pipelines variable-group variable create --group-id 1 --name """,B2,""" --value """,C2,"""")


Step 3: Login to Azure DevOps from Command Line

Non-interactive mode

Before we run Azure CLI commands, we must authenticate using our Azure DevOps credentials. If you use the same account for both Azure and Azure DevOps, you can authenticate with the command below.

az login

After pressing Enter, a browser opens to complete the login.

Step 4: Set Default Organization

Run the command below to set the organization in which we are going to update the variable group.

az devops configure -d organization=https://dev.azure.com/thiyaguDevops/

Step 5: Set Default Project

Run the below command to set the default Project.

az devops configure -d project=poc

Step 6: Execute the Azure DevOps CLI commands

In Step 2, we generated all the commands in Excel. Now it's time to execute them. Copy all the rows containing the formula (column D, values only, without any header) and paste them all at once into the command prompt.

Note: There is no need to copy/paste one row at a time from Excel; copy everything from column D and paste it in one go, and PowerShell will take care of the rest.

Step 7: Review the Output

Finally, it's time to view the results in our variable group. Navigate to the variable group and refresh the page to see all the new variables, as shown below.

 

 

How to use the variable group at runtime in Azure YAML Pipeline

When & Where to use?

We received a request to pass the variable group as a runtime parameter, so that whenever the pipeline runs, it lets me select the variable group name as input, and the pipeline proceeds based on the value selected at runtime. In this article, we will discuss how to use a variable group at runtime in an Azure YAML pipeline.

This can be achieved using runtime parameters. Runtime parameters let you have more control over what values can be passed to a pipeline.

What are runtime parameters?

You can specify parameters in templates and in the pipeline. Parameters have data types such as number and string, and they can be restricted to a subset of values. The parameters section in a YAML defines what parameters are available. These runtime parameters allow you to have more control over the parameter values you pass to your pipelines.

Parameters are only available at template parsing time. Parameters are expanded just before the pipeline runs so that values surrounded by ${{ }} are replaced with parameter values. Use variables if you need your values to be more widely available during your pipeline run.

Note: If you are going to trigger the pipeline manually then you can make use of Runtime parameters in the Azure DevOps pipeline.

Runtime parameters let you have more control over what values can be passed to a pipeline. Unlike variables, runtime parameters have data types and don’t automatically become environment variables.

Let us see how to use the variable group at runtime in an Azure YAML pipeline.

Step 1: Define the parameters under the Values section

Always set runtime parameters at the beginning of the YAML. This example pipeline accepts the variable group name and then uses the value in the job.

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

Step 2: Assign the selected value to the variable group.

After selecting the variable group during a manual build, the selected value is assigned using ${{ parameters.<parameter_name> }}. Once the runtime parameter is assigned, the subsequent stages/jobs can use its values.

variables:
- group: ${{ parameters.variable_group }}

Step 3: Use the values from the selected variable group

Based on the variable group assigned from the runtime parameter, the remaining stages can fetch values from the variable group, such as agentPool.

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

Full YAML Code

parameters:
- name: variable_group
  displayName: Variable Group
  type: string
  default: app-sitecore-dev
  values:
  - app-sitecore-dev
  - app-sitecore-qa
  - app-sitecore-pprd
  - app-sitecore-prd
  - app-sitecore-pprd-hotfix

trigger: none # trigger is explicitly set to none

variables:
- group: ${{ parameters.variable_group }}

stages:
- stage: Build_Artifacts
  jobs:
  - template: Prepare_Artifacts.yml
    parameters:
      agentPool: '$(agentPool)'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

Output

Bash Scripting – If Statement

Bash scripting is nowadays a near-mandatory skill for most system admins and DevOps engineers. In upcoming articles we will shed light on the power and subtlety of the Unix shell; here, I'd like to dive into just one of its many features: the if statement.

When coding, you might need to make decisions based on certain conditions. Conditions are expressions that evaluate to a boolean (true or false). Statements that execute different code branches based on conditions are known as conditional statements, and if...else is one of the most commonly used. Like other programming languages, Bash scripting also supports if...else statements, and we will study them in detail in this blog post.

In another way, If statements (and, closely related, case statements) allow us to make decisions in our Bash scripts. They allow us to decide whether or not to run a piece of code based upon conditions that we may set.

SYNTAX

When you use a single if statement, the syntax is as follows. A basic if statement effectively says: if a particular condition is true, then perform a given set of actions; if it is not true, don't perform those actions.

The if statement is composed of the if keyword, the conditional phrase, and the then keyword, with the fi keyword at the end of the statement. The COMMANDS get executed if the CONDITION evaluates to true; nothing happens if the CONDITION returns false, as the COMMANDS are skipped. The basic syntax of an if statement is the following:

if [ condition ]
then
    statement/actions
fi

The "[ ]" in the if statement above is actually a reference to the test command. This means that all of the operators that test allows may be used here as well. When you use multiple condition checks with an if statement, the syntax is as follows:

if [ condition ] ; then
   statement/actions
elif [ condition ] ; then
   statement/actions
else
   statement/actions
fi
  • if >> Perform a set of commands if a test is true.
  • elif >> If the previous test returned false then try this one.
  • else >> If the test is not true then perform a different set of commands.

Note that the spaces are part of the syntax and should not be removed.

Example: Simple with IF statement

Let’s go through an example where we are comparing two numbers to find if the first number is the smaller one.

a=25
b=30

if [ $a -lt $b ]
then
    echo "a value is less than b"
fi

Output: a value is less than b

Example: How to Use the if .. else Statement

Let’s see an example where we want to find if the first number is greater or smaller than the second one. Here, if [ $a -lt $b ] evaluates to false, which causes the else part of the code to run.

a=65
b=35

if [ $a -lt $b ]
then
   echo "a is less than b"
else
   echo "a is greater than b"
fi

Output: a is greater than b

Example: How to Use if..elif..else Statements

For checks across multiple values, we can also use the AND (-a) and OR (-o) operators inside the test brackets.

In this example, we check conditions across 3 values (the assignments are included so the snippet runs as-is):

a=10
b=10
c=20

if [ $a == $b -a $b == $c -a $a == $c ]
then
   echo "All values are equal"

elif [ $a == $b -o $b == $c -o $a == $c ]
then
   echo "May be more than one value is equal"

else
   echo "All numbers are not equal"

fi

Conclusion on Bash Scripting – If Statement

You can check inputs against conditions like if...else and make your code more dynamic. In this tutorial, I hope you learned about the Bash if statement.

I hope you found this tutorial helpful.

What’s your favorite thing you learned from this tutorial? Let me know on Twitter!

Using secrets from Azure Key Vault in a pipeline

As a best practice, DevOps engineers need to ensure all secrets are kept inside Key Vault instead of being used directly from an Azure DevOps variable group. So, in this article, we are going to see how to substitute variables from Key Vault in YAML Azure DevOps pipelines (i.e., using secrets from Azure Key Vault in a pipeline).

Config File

Below is the sample config file which we are going to use for substituting variables from Key Vault in YAML Azure DevOps pipelines
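The actual config appears in the original post as an image; a made-up illustration of its shape, with #{...}# token placeholders for the replace tokens task, might look like this:

```xml
<configuration>
  <settings>
    <!-- Tokens in #{...}# form are replaced at deploy time -->
    <setting name="smtp-host" value="#{smtp-host}#" />
    <setting name="smtp-username" value="#{smtp-username}#" />
    <setting name="smtp-password" value="#{smtp-password}#" />
  </settings>
</configuration>
```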

Step 1: Fetch the Key from Key vault:

The variable substitution is done with 2 tasks in Azure DevOps; let's start. The first task fetches the latest values of all or a subset of secrets from the vault and sets them as variables that can be used in subsequent tasks of a pipeline. The task is Node-based and works with agents on Linux, macOS, and Windows. First, we create the task that connects to Azure Key Vault and fetches the secrets. Since we set RunAsPreJob: false, the values are only scoped to the tasks that follow this one.

- task: AzureKeyVault@2
  inputs:
    azureSubscription: 2a28a5af-3671-48fd-5ce1-4c144540aae2
    KeyVaultName: kv-dgtl-dev
    SecretsFilter: 'smtp-host,smtp-username,smtp-password'
    RunAsPreJob: false

Points to remember for variable substitution from Key Vault:

  • RunAsPreJob – makes secrets available to the whole job; the default value is false.
  • When RunAsPreJob is true, the Key Vault task runs before job execution begins and exposes secrets to all tasks in the job, not just the tasks that follow it.
  • Ensure the agent machine has the required permissions to access the Azure Key Vault.
  • If you want to fetch all secrets in this task, specify '*' instead of secret names in SecretsFilter.

Step 2: Apply the secrets to config files:

Second, we add the replace tokens task, pointing it at the target files whose variables need replacing. Once this executes, the values fetched from the Key Vault are applied to the matching variables.

- task: replacetokens@5
  inputs:
    rootDirectory: 'src/Feature/Forms/code/App_Config/Include/Feature/'
    targetFiles: 'dotnethelpers.Feature.Forms.SMTP.config,SMTP_external.config'
    encoding: 'auto'
    tokenPattern: 'default'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    actionOnNoFiles: 'continue'
    enableTransforms: false
    enableRecursion: false
    useLegacyPattern: false
    enableTelemetry: true

Point to remember:

  • The token pattern is set to default (so I used #{YOUR_VARIABLE}#); it may be defined based on your requirement.
  • The names of the Key Vault secrets need to match the config variables to be substituted. For example, the config has variables like smtp-host, smtp-username, and smtp-password, so the Azure Key Vault secret names need to match these exactly.

How to Find and delete duplicate Files Using PowerShell

Anyone who manages file storage has to keep track of file sizes to ensure there is always enough free space. Documents, photos, backups, and other files can quickly use up your shared file resources, especially if you have a lot of duplicates: files with the same content but different names. Duplicate files are often the result of user mistakes, such as double copy actions or incorrect folder transfers. To avoid wasting space and driving up storage costs, you have to analyze your file structure, then find and delete duplicate files using PowerShell.

As a result, we end up running out of disk space and then have to sit and hunt for unnecessary files to reclaim free storage space. One of the biggest issues during such a clean-up activity is getting rid of duplicate files. A simple Windows PowerShell script can help you complete this tedious task faster. There are many ways to handle this scenario; we will discuss a few examples here.

Find Duplicate file using Get-FileHash

Do you need to compare two files or make sure a file has not changed? The PowerShell cmdlet Get-FileHash generates hash values for files or streams of data. A hash is simply a function that converts one value into another. Sometimes the hash value is smaller to save space, or it serves as a checksum used to validate a file. Either way, a hash will be different if even a single character in the input changes.

In this demo, I have 4 text files: 3 files have the same content with different file names, and the remaining file has unique content, as shown in the image below.

STEP 1: Open the PowerShell window

Open PowerShell: Click on the Start Menu and type “PowerShell” in the search bar. Then, select “Windows PowerShell” from the results.

STEP 2: Set the directory where you want to search for duplicate files:

$filePath = 'C:\Thiyagu Disk\backupFiles\'

STEP 3: Get the all child items inside the file path to check the duplicate.

Use the Get-ChildItem cmdlet to list all files in the directory. The -Recurse switch tells PowerShell to search all subdirectories as well.

Get-ChildItem -Path $filePath -Recurse

STEP 4: Find duplicate files using Get-FileHash cmdlet.

Get-FileHash generates hash values for files or streams of data; grouping by the hash value, as shown below, separates the duplicate files from the unique ones.

Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property Hash | Where-Object { $_.Count -gt 1 } | ForEach-Object { $_.Group | Select-Object Path, Hash }

Full Code : Find the duplicate files

$filePath = 'C:\backupFiles\'
$group_by_unique_files = Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property Hash | Where-Object { $_.Count -gt 1 }
$duplicatefile_details = $group_by_unique_files | ForEach-Object { $_.Group | Select-Object Path, Hash }
$duplicatefile_details

Full Code: Find and delete duplicate Files Using PowerShell

$filePath = 'C:\backupFiles\'
$group_by_files = Get-ChildItem -Path $filePath -Recurse | Get-FileHash | Group-Object -Property Hash | Where-Object { $_.Count -gt 1 }
$group_by_files
$duplicatefile_details = $group_by_files | ForEach-Object { $_.Group | Select-Object Path, Hash }
$duplicatefile_details | Out-GridView -OutputMode Multiple | Remove-Item

After finding the duplicate files, you can move or delete them based on your requirement. If you want to delete through a UI, you can use Out-GridView and select multiple files as shown below. Select the files to delete in the table (press and hold CTRL to select multiple files) and click OK.

Note: Please be careful while using the Remove-Item cmdlet as it can permanently delete files from your computer. It’s recommended to test this command on a test folder before using it on your actual data.

How to use Vim editor in PowerShell

If you are familiar with Linux or come from a Unix background, you probably know about Vim. For those of us who started and stay mostly in the realm of Windows, let's get exposed to the Vim editor in PowerShell and see what it can do. Windows does not come with Vim the way Unix-based systems do.

Vim is a powerful, widely used text editor for Unix-based systems, including Linux and macOS. It is known for its speed, efficiency, and flexibility, making it a popular choice among programmers, system administrators, and other power users who need to edit text files on a regular basis. Vim is a command-line interface (CLI) application that can be used in a terminal window, and it provides a wide range of commands and keyboard shortcuts for navigating and editing text files.

Why we need this editor?

Did you ever run a script that reads a text file and need to change something in the config for debugging, or find that the file had several wrong entries? A PowerShell text editor comes in handy in such situations. You don't need to fire up an external editor, and you may not even have permission to open the file directly; instead, you can edit the file without leaving PowerShell. How cool is that?


How to use vim editor in PowerShell

To edit a text file using the Vim editor in PowerShell, follow the steps below:

Install the Vim editor in PowerShell

STEP 1: Open PowerShell as an Administrator.

Open PowerShell by searching for “PowerShell” in the Start menu, selecting “Windows PowerShell” or “Windows PowerShell (x86)”, and running it as Administrator.

STEP 2: Install Vim editor in PowerShell using Chocolatey

In the PowerShell terminal, execute the following command to install the Vim editor.

choco install vim -y

STEP 3: To verify the Vim version, run the following command

vim --version

Editing and Saving a File using Vim

For this demo, I already have the txt file (in C:\mytestfile) that I am going to edit and save. Now that you have Vim installed by following the steps above, it's time to learn to edit a file. Before you go any further, you should know that there are different modes in Vim. Each mode behaves differently and affects which actions you can perform inside the editor.

The three commonly-used modes are:

  • Normal – The default mode as soon as you open Vim. This mode allows you to navigate the text file but not add new texts.
  • Insert – This mode is where Vim allows you to edit the file. To enter this mode, press i (case insensitive) on the keyboard. To exit and go back to the normal mode, press ESC.
  • Command – In this mode, Vim lets you invoke commands such as save the file, quit Vim, or view the help document, among others.

STEP 4: Open a file using Vim Command

To open the file, run the vim command followed by the filename. The command below opens the mytestfile.txt file in the PowerShell console, ready for viewing and editing.

vim "c:\thiyagu disk\mytestfile.txt"

STEP 5: Enable the Insert Mode for the file

Next, enter insert mode by pressing “i”. As you enter insert mode, the text — INSERT — appears at the bottom of the editor, as shown in the following image. Now that you are in insert mode, edit the file as you wish. The arrow keys let you move the cursor inside the editor.

For this example, I added a new line, as highlighted by the yellow arrow.

STEP 6: Append changes & Save

After making the necessary changes to the text file, press Esc to return to normal mode, then type the command :wq and press Enter to save and close the file. The w command writes (saves) the file, while q quits Vim.
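For quick reference, the common command-mode operations (each typed from normal mode, starting with a colon) are:

```
:w    write (save) the file
:q    quit Vim
:wq   write the file and quit
:q!   quit without saving changes
```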

Output:


How to create new DNS in Azure Private DNS using PowerShell

You have a number of options when it comes to resolving names using DNS, and Microsoft Azure DNS is one of them. In this post, we will discuss how to create a new DNS record in Azure Private DNS using PowerShell.

To manage Azure DNS, you can configure it through the Azure Portal UI or with command-line tools like the Azure CLI or PowerShell. Admins often need to manage DNS at scale or automate the management of various objects. A great way to do that isn’t a graphical method like the Azure Portal but a scripting tool like PowerShell, which lends itself to automation.

Azure DNS is a managed DNS solution. We can use it for public DNS records (names resolvable over the internet) as well as for private DNS records. Using Azure Private DNS, we can resolve DNS names within a virtual network. There are many benefits to using Azure Private DNS:

  • No additional servers – We do not need to maintain additional servers to run the DNS solution; it is a fully managed service.
  • Automatic record updates – Similar to Active Directory DNS, we can configure Azure DNS to register, update, and delete hostname records for virtual machines automatically.
  • Common DNS record types – It supports common DNS record types such as A, AAAA, MX, NS, SRV, and TXT.
  • DNS resolution between virtual networks – Azure Private DNS zones can be shared between virtual networks.

Since we had to set up many URLs, we decided to automate record creation through an Azure DevOps pipeline.

Using the New-AzPrivateDnsRecordSet cmdlet, we can create a new DNS record in an Azure Private DNS zone, and Get-AzPrivateDnsRecordSet lists the DNS records that have been created. The Set-AzPrivateDnsRecordSet cmdlet updates a record set in the Azure Private DNS service from a local RecordSet object. You can pass a RecordSet object as a parameter or via the pipeline operator.
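As a sketch of that update workflow (the record name, zone, and resource group below are the sample values used in this post, and an authenticated Az session is assumed):

```powershell
# Fetch the existing record set from the private zone
$recordSet = Get-AzPrivateDnsRecordSet -Name "pprd" -ZoneName "cloud.dotnethelpers.com" `
    -ResourceGroupName "rg-dgtl-network-pprd" -RecordType A

# Modify the local object, e.g. lower the TTL
$recordSet.Ttl = 600

# Push the updated record set back to Azure Private DNS
Set-AzPrivateDnsRecordSet -RecordSet $recordSet
```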

Parameters required for automating the creation of a record set in a Private DNS zone:

  • -Name : The name of the records in this record set (relative to the name of the zone and without a terminating dot).
  • -RecordType : The type of Private DNS records in this record set (values may be A, AAAA, CNAME, MX, PTR, SOA, SRV, TXT)
  • -ZoneName : The zone in which to create the record set (without a terminating dot). In my case, all the domains fall under cloud.dotnethelpers.com, for example preprod.cloud.dotnethelpers.com.
  • -ResourceGroupName : The resource group to which the zone belongs.
  • -Ttl : The TTL value of all the records in this record set.
  • -PrivateDnsRecords : The private DNS records that are part of this record set.
  • -Ipv4Address : The IPv4 address for the A record to add. In my case, this IP comes from an ingress; in yours, it may be your server or any other endpoint.

Script: How to create new DNS

New-AzPrivateDnsRecordSet -Name pprd -RecordType A -ZoneName "cloud.dotnethelpers.com" -ResourceGroupName "rg-dgtl-network-pprd" -Ttl 3600 -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address "10.55.161.23")
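Since we needed many records, a loop over a simple name-to-IP table can create them all in one go. The names and IPs below are illustrative placeholders; only the pprd entry comes from this post:

```powershell
# Hypothetical name-to-IP map; replace with your own records
$records = @{
    "pprd" = "10.55.161.23"
    "qa"   = "10.55.161.24"
}

foreach ($entry in $records.GetEnumerator()) {
    # Create one A record per entry in the private zone
    New-AzPrivateDnsRecordSet -Name $entry.Key -RecordType A `
        -ZoneName "cloud.dotnethelpers.com" -ResourceGroupName "rg-dgtl-network-pprd" `
        -Ttl 3600 -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address $entry.Value)
}
```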

Script: How to get DNS record details

Get-AzPrivateDnsRecordSet -ResourceGroupName "rg-dgtl-network-pprd" -ZoneName "cloud.dotnethelpers.com" -RecordType A

Script: How to delete a DNS record

$RecordSet = Get-AzPrivateDnsRecordSet -Name "cd-ppr" -ResourceGroupName "rg-dgtl-network-pprd" -ZoneName "cloud.dotnethelpers.com" -RecordType A
Remove-AzPrivateDnsRecordSet -RecordSet $RecordSet

Output: 

The final URL will be pprd.cloud.dotnethelpers.com
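To verify that the new record resolves, you could query it from a machine inside a virtual network linked to the private zone (Resolve-DnsName is part of the Windows DnsClient module; the name below is the one created above):

```powershell
# Resolves only from inside a VNet linked to the private zone
Resolve-DnsName -Name "pprd.cloud.dotnethelpers.com" -Type A
```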

Points to Remember:

Before running the above scripts, ensure you have installed the required Az module in PowerShell and connected to Azure to access the resources (connect using the Connect-AzAccount cmdlet). I hope you now have a basic idea of how to create a new DNS record in Azure Private DNS using PowerShell; if you have any queries, please comment and I will answer as soon as I can.