Category Archives: PowerShell

Exception Handling – Try Catch with Custom Error Message in PowerShell

A terminating error in a PowerShell script stops script execution before it completes. Error handling with try-catch blocks allows you to manage and respond to these terminating errors. In this post, we will cover the basics of try/catch blocks and how to display custom error messages in PowerShell.

Handling errors effectively in scripts can save a lot of troubleshooting time and provide better user experiences. In PowerShell, we have robust options to handle exceptions using try, catch, and finally blocks. Let’s dive into how you can use try-catch to gracefully handle errors and add custom error messages for better feedback.

Why Use Exception Handling in PowerShell?

Scripts can fail for many reasons: missing files, invalid input, or network issues, to name a few. With exception handling, you can capture these issues, inform users in a friendly way, and potentially recover from errors without crashing your script. Using try-catch, you can:

  • Catch specific errors.
  • Display user-friendly messages.
  • Log errors for debugging.

Syntax overview of Try/Catch

As in other programming languages, the try-catch syntax in PowerShell is simple. It consists of two sections enclosed in curly brackets: the first block is the try and the second is the catch block.

try {
    # Functionality within the try block
}
catch {
    # Action to take when an error occurs
}

The main purpose of using a try-catch block is to take control of the error output and make it more friendly for the user.

Example 1:

When the script below is executed without error handling, the full red error text is written to the screen; it takes up a lot of space and the actual problem may not be immediately obvious to the user. A try-catch block lets you replace that output with something friendlier.

Without a Try-Catch block

Get-Content -Path "C:\dotnet-helpers\BLOG\TestFiled.txt"

With a Try-Catch block

In the script below, we added the -ErrorAction parameter with a value of Stop to the command. Not all errors are considered “terminating”, so sometimes we need to add this so that the error becomes terminating and is routed into the catch block.

try {
    Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction Stop
}
catch {
    Write-Warning -Message "Can't read the file, seems there is an issue"
}

Example 2:

Using the $Error Variable

In Example 1, we displayed our own custom message. Instead, you can display the specific error message that occurred, without the entire red exception block. When an error occurs in the try block, it is saved to the automatic variable named $Error. The $Error variable contains an array of recent errors, and the most recent error sits at index 0.

try {
    Get-Content -Path "C:\dotnet-helpers\BLOG\TestFiled.txt" -ErrorAction Stop
}
catch {
    Write-Warning -Message "Can't read the file, seems there is an issue"
    Write-Warning $Error[0]
}

Example 3:

Using Exception Messages

You can also use multiple catch blocks if you want to handle different types of errors. In this example, we handle two different error types and display a different custom message for each. The first catch block handles the case where the path does not exist, and the next catch block handles errors where the drive is not found.

Using try/catch blocks gives you additional power in handling errors in a script, because you can take different actions based on the error type. A catch block is not limited to displaying error messages; it can also contain logic that resolves the error and lets the rest of the script continue.

In this example, the drive in the path (G:\dotnet-helpers\BLOG\TestFiled.txt) does not exist on the machine running the script, so the error is caught by the [System.Management.Automation.DriveNotFoundException] catch block.

try {
    Get-Content -Path "G:\dotnet-helpers\BLOG\TestFiled.txt" -ErrorAction Stop
}
# Executes if the directory in the specified path is not found
Catch [System.IO.DirectoryNotFoundException] {
    Write-Warning -Message "Can't read the file, seems there is an issue"
    Write-Warning $Error[0]
}
# Executes if the specified drive is not found
Catch [System.Management.Automation.DriveNotFoundException] {
    Write-Warning -Message "Custom Message: Specified drive is not found"
    Write-Warning $Error[0]
}
# Executes for unhandled exceptions - this catch block runs if the error does not match any other catch block.
Catch {
    Write-Warning -Message "Oops, an unexpected error occurred"
    # Returns the exception type name for the last error that occurred.
    Write-Host $Error[0].Exception.GetType().FullName
}
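
As a side note before looking at the output: inside a catch block, the current error is also available through the automatic variable $_ (also known as $PSItem), so the same details can be written without indexing into $Error. A minimal sketch using the same non-existent path:

try {
    Get-Content -Path "G:\dotnet-helpers\BLOG\TestFiled.txt" -ErrorAction Stop
}
catch {
    # $_ holds the ErrorRecord for the error that triggered this catch block
    Write-Warning $_.Exception.Message
    Write-Host $_.Exception.GetType().FullName
}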

OUTPUT

 

How to Create Log File using Start-Transcript cmdlet in PowerShell

What is Start-Transcript?

As per MSDN, The Start-Transcript cmdlet creates a record of all or part of a PowerShell session to a text file. The transcript includes all commands that the user types and all output that appears on the console. Starting in Windows PowerShell 5.0, Start-Transcript includes the hostname in the generated filename of all transcripts.

When & Where to use?

As system admins/DevOps engineers, we automate a lot of work on servers, and it is important to capture all the details (errors as well as successful runs) in a log for later analysis. In short, if you run PowerShell scripts automatically, you need a way to log any errors or warnings that occur. Usually we write our own log function (as I did for my previous automation; please refer to my implemented method), but there is an easier way that I came across during a team discussion and thought to share with all of you. This is especially useful when your enterprise’s logging is centralized.

The Start-Transcript cmdlet writes everything that happens during a session to a log file. These are the commands that you enter in a PowerShell session and all output that normally appears in the console.

You can also refer to: Try/Catch, Error Handling, Error Logging

Example 1: Without any parameters (inside our script)

To start the transcript, simply use the Start-Transcript cmdlet, and use Stop-Transcript to stop it. Place whatever script needs to be executed between Start-Transcript and Stop-Transcript.

Without any parameters, the transcript is saved in the user’s documents folder. The filename is generated automatically and consists of the device name and random characters, followed by a timestamp. The default path is great when you are only using PowerShell on your own machine.

Start-Transcript
$destPath = "C:\dotnet-helpers\Destination\FinedMe"
$sourcePath = 'C:\dotnet-helpers\Source\'
Get-content $destPath
Stop-Transcript

Output: 

The transcript log contains all the information that you see in the console, plus a very detailed header with information about the host you used:

Example 2: With Parameters (-path & -Append)

The default path is fine when you are only using PowerShell on your own machine, but most of the time you will want to centralize the log files. There are two options for this: the -Path parameter and the -OutputDirectory parameter.

# Append the transcript to an Error.log file.
Start-Transcript -Path c:\automationLog\Error.log -Append

With the -Path parameter we need to specify the full path, including the file name. This is helpful when you want a single log file for a script and want every transcript collected in that one file. By default, Start-Transcript overwrites any existing content in the file, which is why the example above uses the -Append parameter; use -Append to add to the existing file, or -NoClobber to prevent an existing file from being overwritten at all.

# Use -NoClobber or -Append to prevent overwriting of existing files
Start-Transcript -Path c:\automationLog\Error.log -NoClobber

Example 3: With -OutputDirectory Parameters

You can also use the -OutputDirectory parameter to store the log file in a custom location while letting the cmdlet generate a unique filename.

Start-Transcript -OutputDirectory c:\automationLog\
$destPath = "C:\dotnet-helpers\Destination\FinedMe"
$sourcePath = 'C:\dotnet-helpers\Source\'
Get-content $destPath
Stop-Transcript

Output: 

For this example, I executed the script repeatedly; each execution created a unique log file with random alphanumeric characters appended to the name, as shown in the snapshot below (3of74bj, 5yrpf4R, ...).

Points to remember:

Files that are created by the Start-Transcript cmdlet include random characters in names to prevent potential overwrites or duplication when two or more transcripts are started simultaneously.

The Start-Transcript cmdlet creates a record of all or part of a Windows PowerShell session in a text file. The transcript includes all commands that the user types and all output that appears on the console.

Each transcript starts with a detailed header containing a lot of information. When transcripts are used for logging automated scripts, this header is large and not very useful. Newer PowerShell versions therefore provide the -UseMinimalHeader parameter to shorten it.
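
If you are on a PowerShell version that supports -UseMinimalHeader, a minimal sketch looks like this (the log path is just an example):

# Start the transcript with a short header instead of the full host details
Start-Transcript -Path "C:\automationLog\Job.log" -Append -UseMinimalHeader
# ... script work goes here ...
Stop-Transcript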


How to remove Multiple bindings in IIS using PowerShell script

As you are aware, a large number of unused URLs on a server makes maintenance activities harder, so we need to remove the IIS bindings that are no longer in use. In our project, my team identified a large number of unused URLs (almost 500+) across many servers and was asked to clean them up everywhere. Cleaning up the URLs manually is very hard, prone to mistakes such as accidentally removing URLs that are still in use, and would take many days to complete. So we decided to automate this activity instead of removing the bindings by hand.

To resolve the above scenario, we created a PowerShell script that removes a large number of URLs in a single execution. Let’s discuss the script in detail. To demonstrate this, you’ll first need to either RDP to the web server directly and open a PowerShell console, or use PowerShell remoting to connect to a remote session, as sketched below.
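
If you go the remoting route, a minimal sketch looks like this (the server name is hypothetical; the IIS cmdlets used below come from the WebAdministration module):

# Open an interactive remote session on the web server (hypothetical name)
Enter-PSSession -ComputerName WEBSRV01

# Load the IIS cmdlets (Get-Website, Get-WebBinding, Remove-WebBinding)
Import-Module WebAdministration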

STEP: #1

First, we query the default website using the Get-Website cmdlet, which gets configuration information for an IIS website. Executing the line below returns the configuration for “Default Web Site”.

Get-Website -Name "$('Default Web Site')"

STEP: #2

After executing the above command, the website information is available. The next step is to find the bindings (URLs) that match our criteria.

As you probably already know, a single site can have multiple bindings attached to it. Using the Get-WebBinding cmdlet, you can get the bindings of a specified IIS site, filtering on parameters such as -Protocol, -Port, -HostHeader, and -IPAddress. The lines below get the bindings whose host header matches $siteURL over HTTP on port 80 and HTTPS on port 443.

Get-WebBinding -Protocol "http" -Port 80 -HostHeader $siteURL
Get-WebBinding -Protocol "https" -Port 443 -HostHeader $siteURL

STEP: #3

Finally, we remove the URLs with the help of the Remove-WebBinding cmdlet, which removes a binding from an Internet Information Services (IIS) website.

Remove-WebBinding

Full code (to remove multiple bindings in IIS )

The script reads the list of URLs from a text file and loops through them to remove the bindings from the IIS website.

##############################################################################
#Project : How to remove the IIS binding from server using PowerShell script.
#Developer : Thiyagu S (dotnet-helpers.com)
#Tools : PowerShell 5.1.15063.1155 
#E-Mail : mail2thiyaguji@gmail.com 
##############################################################################

#Get list of URLs from the Text file
$siteURLs = Get-Content -path C:\Desktop\ToBeRemoveURLs_List.txt

#looping the URLs list to remove one by one
foreach($siteURL in $siteURLs)
{

Get-Website -Name "$('Default Web Site')"  | Get-WebBinding -Protocol "http" -Port 80 -HostHeader   $siteURL| Remove-WebBinding

Get-Website -Name "$('Default Web Site')"  | Get-WebBinding -Protocol "https" -Port 443 -HostHeader $siteURL | Remove-WebBinding

}

Quickly Display Files with PowerShell: Understanding Cat and Get-Content

PowerShell offers powerful cmdlets for managing and displaying files, and among them, cat (an alias for Get-Content) and Get-Content are commonly used to read and display file contents. Though they may look like two different commands, understanding the relationship between cat and Get-Content helps you use them more effectively in your scripts and commands.

Understanding Get-Content

Get-Content is a versatile cmdlet that reads the contents of a file and outputs each line as an individual string. It’s useful for working with files line by line, as it returns an array of strings where each element corresponds to a line in the file.
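
A quick sketch of that line-by-line behaviour (the path is just an example):

# Get-Content returns one string per line of the file
$lines = Get-Content -Path "C:\path\to\tmp.txt"

$lines.Count   # number of lines in the file
$lines[0]      # first line
$lines[-1]     # last line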

In PowerShell, Cat is an alias for the Get-Content cmdlet. This alias comes from Unix-like systems, where the cat command is used to concatenate and display file contents. In PowerShell, Cat serves the same purpose but is simply a shorthand for Get-Content.

Apart from cat, there are other aliases for the Get-Content command, which you can find by running the command below. As you can see, gc and type are also aliases of Get-Content, along with cat.
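
# List every alias that points to Get-Content (cat, gc, type)
Get-Alias -Definition Get-Content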

Displaying the Contents of a File with PowerShell Cat

The primary use of the PowerShell cat is showing a file’s contents on the screen. Running cat followed by a filename outputs that file’s contents. Run the command below to read the tmp.txt file and display the data on the screen:

cat "C:\path\to\tmp.txt"

Showing Lines from the Top & Bottom

Reading the first few lines of a file may help identify whether the file is what you need. PowerShell cat lets you display a specific number of lines from the top of a file for a quick look, as shown below (-TotalCount 6 returns the first six lines).

cat tmp.txt -TotalCount 6

To view the contents from the bottom, specify the -Tail parameter or its alias, -Last. This is typical when troubleshooting with log files.

cat tmp.txt -Tail 5

Merging Contents Into a New File

Instead of simply showing the content on the screen, you can redirect the standard output of a command to a new file in PowerShell. Moreover, PowerShell cat can read multiple files at once, which makes merging contents possible. Run the cat command to concatenate File1.txt and File2.txt as follows. The output redirection operator (>) sends the command output to a new file called catMerge.txt.

Method 1:

cat File1.txt,File2.txt > catMerge.txt

Method 2:

cat File1.txt,File2.txt | Out-File Merge1.txt

Appending the Contents of One File to Another

As with the Linux cat command, you can also append the contents of one file to another instead of overwriting the file or creating a new one.

# PowerShell cat with Add-Content
cat File1.txt | Add-Content File2.txt

This command appends the contents of File1.txt to File2.txt.

# PowerShell cat with double redirection symbol (append)
cat File1.txt >> File2.txt

 

Getting Redirected (301/302) URI’s in PowerShell

In my working environment, we manage more than 500 sites. Occasionally site owners redirect to other sites or put up a temporary maintenance page (a redirect to another page) without telling us, so we decided to validate all the sites on a regular basis to identify those changes, which is why I wrote this post on getting redirected (301/302) URIs in PowerShell.

Checking each URL by hand on a regular basis is very difficult; it takes a lot of manual work and may lead to human error, so we decided to automate the task. The result also needs to be written to Excel automatically so we can easily share it with our managers.

One way to get the redirected URL in PowerShell is through the WebRequest class. In this post, we cover how to build a PowerShell function that queries a specific URL to find out whether it is being redirected to another URL, using Invoke-WebRequest.

STEP #1: First, grab the response headers with Invoke-WebRequest:

$request = Invoke-WebRequest -Method Head -Uri $Url

STEP #2: Next, we need to get the Response URL using AbsoluteUri

$redirectedUri = $request.BaseResponse.ResponseUri.AbsoluteUri

Full Code : Getting Redirected (301/302) URI’s in PowerShell 

This is a quick and easy way to pull the redirected URI for a given URI. Putting it all together, we get the function below; in the final code, the results are written to an Excel file (an HTML table saved with an .xls extension).

###########################################################################################
#Project: Getting Redirected (301/302) URI’s in Powershell using Invoke-WebRequest Method
#Developer: Thiyagu S (dotnet-helpers.com)
#Tools : PowerShell 5.1.15063.1155
#E-Mail: mail2thiyaguji@gmail.com
############################################################################################

function Get-RedirectedUrl
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [string]$GetfilePath
    )

    $FormatGenerater = "<HTML><BODY background-color:grey><font color =""black"">
                        <H2>Finding the Redirected URLs</H2>
                        </font><Table border=1 cellpadding=0 cellspacing=0>
                        <TR bgcolor=gray align=center><TD><B>Source URL</B>
                        <TD><B>RedirectedURL</TD></TR>"

    $filePath = $GetfilePath
    $fileContent = Get-Content $filePath
    foreach ($singleURL in $fileContent)
    {
        # Reset so a previous URL's redirect does not leak into this row
        $redirectedURL = ""
        try
        {
            $redirectionrequest = Invoke-WebRequest -Method HEAD $singleURL -ErrorAction Ignore
            if ($null -ne $redirectionrequest.BaseResponse.ResponseUri)
            {
                $FormatGenerater += "<TR bgcolor=#CCFFE5>"
                $redirectedURL = $redirectionrequest.BaseResponse.ResponseUri.AbsoluteUri
                $redirectedURL
            }
            else
            {
                $FormatGenerater += "<TR>"
            }
            $FormatGenerater += "<TD>$($singleURL)</TD><TD>$($redirectedURL)</TD></TR>"
        }
        Catch { }
    }

    $FormatGenerater += "</Table></BODY></HTML>"
    $FormatGenerater | Out-File C:\dotnet-helpers\RedirectedURLs.xls
}

Get-RedirectedUrl "C:\dotnet-helpers\URLsList.txt"

OUTPUT:

Cache Purging in Azure Front Door with Azure PowerShell and CLI

Introduction

Azure Front Door is a global, scalable entry point for fast delivery of your applications. It provides load balancing, SSL offloading, and caching, among other features. One critical task for maintaining optimal performance and ensuring the delivery of up-to-date content is cache purging. This article provides a step-by-step guide to performing cache purging in Azure Front Door using Azure PowerShell and the Azure Command-Line Interface (CLI).

What is Cache Purging?

Cache purging, also known as cache invalidation, is the process of removing cached content from a caching layer. This is essential when the content served to the end users needs to be updated or deleted. In the context of Azure Front Door, purging ensures that the latest version of your content is delivered to users instead of outdated cached versions.

Prerequisites for Cache Purging in Azure Front Door

Step 1: Open Azure PowerShell

Open your preferred PowerShell environment (Windows PowerShell, PowerShell Core, or the PowerShell Integrated Scripting Environment (ISE)).

Step 2: Sign in to Azure

Sign in to your Azure account using the following command:

Connect-AzAccount

Step 3: Select the Subscription

If you have multiple subscriptions, select the appropriate subscription:

Select-AzSubscription -SubscriptionId "your-subscription-id"

Step 4: Cache Purge using PowerShell

Method 1: Using Invoke-AzFrontDoorPurge

Purpose: Invoke-AzFrontDoorPurge is used specifically for purging content from the Azure Front Door caching service.

Usage: This cmdlet is part of the Azure PowerShell module and is used to remove specific cached content from the Azure Front Door service.

Use the Invoke-AzFrontDoorPurge cmdlet to purge the cache. You’ll need the name of your Front Door profile and the list of content paths you want to purge.

Here’s an example:

# prerequisite Parameters

$frontDoorName = "your-frontdoor-name"
$resourceGroupName = "your-resource-group-name"
$contentPaths = @("/path1/*", "/path2/*")

Invoke-AzFrontDoorPurge -ResourceGroupName $resourceGroupName -FrontDoorName $frontDoorName -ContentPath $contentPaths

This command purges the specified paths in your Front Door profile.

When to Use:

  • When you need to remove cached content specifically from Azure Front Door using Azure PowerShell.
  • Ideal for scenarios involving global load balancing and dynamic site acceleration provided by Azure Front Door.

Method 2: Using Clear-AzFrontDoorCdnEndpointContent

Purpose: Clear-AzFrontDoorCdnEndpointContent is used for purging content from Azure CDN endpoints, which might also be linked to an Azure Front Door service. However, it specifically targets the CDN layer.

Usage: This cmdlet clears content from Azure CDN endpoints, which can be part of a solution using Azure Front Door.

$endpointName = "your-cdn-endpoint-name"
$resourceGroupName = "your-resource-group-name"
$contentPaths = @("/path1/*", "/path2/*")

Clear-AzFrontDoorCdnEndpointContent -ResourceGroupName $resourceGroupName -EndpointName $endpointName -ContentPath $contentPaths
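
Note: depending on your Az.Cdn module version, this cmdlet may also require the name of the Front Door Standard/Premium profile that owns the endpoint. A hedged sketch assuming a -ProfileName parameter and a hypothetical profile name:

# Hypothetical profile name; adjust to your environment
$profileName = "your-frontdoor-profile-name"

# Purge the cached paths on an endpoint that belongs to the given profile
Clear-AzFrontDoorCdnEndpointContent -ResourceGroupName $resourceGroupName -ProfileName $profileName -EndpointName $endpointName -ContentPath $contentPaths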

When to Use:

  • When working specifically with Azure CDN endpoints.
  • Useful for content distribution network scenarios where you need to clear cached content from CDN endpoints.

Step 5: Cache Purge using Azure CLI

Method 3: Using az afd endpoint purge

Purpose: az afd endpoint purge is an Azure CLI command used for purging content from Azure Front Door endpoints.

Usage: This command is used within the Azure CLI to purge specific content paths from Azure Front Door.

frontDoorName="your-frontdoor-name"
resourceGroupName="your-resource-group-name"
contentPaths="/path1/* /path2/*"

az afd endpoint purge --resource-group $resourceGroupName --profile-name $frontDoorName --content-paths $contentPaths

When to Use:

  • When you need to purge cached content from Azure Front Door using Azure CLI.
  • Suitable for users who prefer command-line tools for automation and scripting.

Key Differences

Service Targeted:

  1. Invoke-AzFrontDoorPurge: Specifically targets Azure Front Door.
  2. Clear-AzFrontDoorCdnEndpointContent: Specifically targets Azure CDN endpoints.
  3. az afd endpoint purge: Specifically targets Azure Front Door.

Use Case:

  1. Invoke-AzFrontDoorPurge: Best for scenarios involving global load balancing and content delivery with Azure Front Door.
  2. Clear-AzFrontDoorCdnEndpointContent: Best for scenarios involving Azure CDN, which might or might not involve Azure Front Door.
  3. az afd endpoint purge: Best for users comfortable with CLI and needing to purge Azure Front Door content.

Conclusion

Understanding the differences between these commands helps you choose the right tool for your specific needs. Whether you are managing caches at the CDN layer or the Azure Front Door layer, Azure provides flexible and powerful tools to help you maintain optimal performance and up-to-date content delivery.

Exploring Different Ways to Check DNS Resolution in Windows PowerShell

Performing DNS resolution in Windows using PowerShell is a fundamental task for network administrators and IT professionals. Here are several methods to check DNS resolution using PowerShell.

The Domain Name System (DNS) is an essential component of the internet’s infrastructure, translating human-readable domain names (like www.example.com) into machine-readable IP addresses (like 192.0.2.1). Checking DNS resolution is crucial for troubleshooting network issues, ensuring proper domain configurations, and enhancing overall internet performance. This article explores various methods to check DNS resolution, providing insights into tools and techniques available for different operating systems and use cases.

Method 1: Using nslookup

Although nslookup is not a PowerShell cmdlet, it is an external command that can be executed directly from a PowerShell session. This method is handy for those familiar with traditional command-line tools.

nslookup google.com

Output: This command will return the DNS server being queried and the resolved IP addresses for the domain.

Method 2: Using Test-Connection (Ping)

The Test-Connection cmdlet can be used to ping a domain name, which resolves the domain to an IP address. This is a useful method for quickly verifying DNS resolution and connectivity.

Test-Connection google.com

Output: This command will return the resolved IP address along with ping statistics, providing both DNS resolution and connectivity information.

Method 3: Using Test-NetConnection

The Test-NetConnection cmdlet is another versatile tool that can be used for DNS resolution. It provides more detailed network diagnostics compared to Test-Connection.

Test-NetConnection -ComputerName google.com

Output: This command returns comprehensive information including the resolved IP address, ping results, and network adapter status.

Method 4: Using wget Command

In Windows PowerShell, the wget command (an alias for Invoke-WebRequest) can be used to download content from a URL. Although its primary use is retrieving content, it also resolves the domain name in the process.

wget google.com

Output: This command returns the HTTP response (status code, headers, and content), which confirms that the domain name resolved and the site is reachable.
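
The wget alias was removed in PowerShell 6 and later, so calling the cmdlet directly is more portable. A minimal sketch:

# Confirm what wget maps to (Windows PowerShell only)
Get-Alias -Name wget

# Equivalent call that works in both Windows PowerShell and PowerShell 7+
Invoke-WebRequest -Uri "https://google.com"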

Method 5: Using ping

The ping command is a classic network utility used to test the reachability of a host. It also performs DNS resolution.

ping google.com

Output: This command will return the resolved IP address and round-trip time for packets sent to the domain.

Method 6: Parsing DNS Records with Resolve-DnsName

Resolve-DnsName can be used to retrieve specific DNS records like A, AAAA, MX, and TXT records.

Resolve-DnsName -Name "google.com" -Type A

Resolve-DnsName -Name "google.com" -Type MX

Resolve-DnsName -Name "google.com" -Type AAAA

Output: This command will return detailed information about the domain, including IP addresses, aliases, and DNS record types.

PowerShell provides versatile methods for DNS resolution, ranging from the native Resolve-DnsName, Test-Connection, and Test-NetConnection cmdlets to .NET classes and traditional command-line tools like nslookup, ping, and wget. These methods cater to various preferences and requirements, ensuring that DNS resolution can be performed efficiently and effectively in any PowerShell environment.
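
The .NET-class approach mentioned above is not shown in the methods here; a minimal sketch using the System.Net.Dns class:

# Resolve a host name to its IP addresses with the .NET Dns class
[System.Net.Dns]::GetHostAddresses("google.com") | Select-Object AddressFamily, IPAddressToString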

By incorporating these methods into your network management toolkit, you can enhance your ability to diagnose and resolve DNS-related issues seamlessly.

 

How to run PowerShell Script from a Batch File

What is a .bat file?

A batch file is a text file that the Windows cmd.exe command line processor executes as a batch job. It contains a series of line commands in plain text that are executed to perform various tasks, such as starting programs or running maintenance utilities within Windows.

You can also read  – How to create Basic Chart by reading excel using PowerShell and How to remove the Multiple bindings (URLs) from IIS using PowerShell script

Why call my PowerShell script from a batch file

You can’t double-click to run .PS1 files, but you can execute a .BAT file that way. So, we’ll write a batch file to call the PowerShell script from the command line. There are several good reasons for doing this:

  • Non-technical users often struggle with PowerShell.
  • When a user double-clicks a PowerShell script file (*.ps1), by default it opens in an editor rather than executing.
  • Many scripts require admin privileges to run correctly; in that case the user needs to know how to run a PowerShell script as admin without opening a PowerShell console, which is difficult for non-technical users.

So in this post, we are going to discuss how you can call the PowerShell script from a batch file.

STEP #1: Creating sample .ps1 file

Write a simple site-validation PowerShell script and save it as SiteValidationTestThroughBATFile.ps1:

#######################################################################
#Project : Site validation PowerShell script called from a batch file
#Developer : Thiyagu S (dotnet-helpers.com)
#Tools : PowerShell 5.1.15063.1155
#E-Mail : mail2thiyaguji@gmail.com
######################################################################

$_URL = 'https://dotnet-helpers.com'
$request = [System.Net.WebRequest]::Create($_URL)
$response = $request.GetResponse()
if ($response.StatusCode -eq "200") {
    Write-Host "`nSite - $_URL is up (Return code: $($response.StatusCode) - $([int] $response.StatusCode)) `n" -ForegroundColor Green
}
else {
    Write-Host "`n Site - $_URL is down `n" -ForegroundColor Red
}

STEP #2: Creating a batch file with .bat extension

Create a simple text file in Notepad and save it with a .bat extension, as shown below. Here the batch file is named ClickToValidationSite_EXE.bat.

STEP #3: Calling .ps file inside the .bat file

Right-click ClickToValidationSite_EXE.bat and choose Edit; it opens in Notepad. Here we call the PowerShell script, which validates the site and reports its status.

In the first line, the @echo off command disables echoing of the commands to the screen. PowerShell.exe can be called from any CMD window or batch file to launch PowerShell. The -ExecutionPolicy Unrestricted argument bypasses the execution policy so the script runs without restriction, and -Command tells PowerShell what to execute.

@echo off

powershell.exe -ExecutionPolicy Unrestricted -Command ". 'C:\PowerShell\SiteValidation.ps1'"

TIMEOUT /T 10
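
An alternative sketch is to use the -File parameter of powershell.exe instead of -Command, which avoids the dot-sourcing syntax (the script path is the same assumed location as above):

@echo off

powershell.exe -ExecutionPolicy Unrestricted -File "C:\PowerShell\SiteValidation.ps1"

TIMEOUT /T 10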

OUTPUT: Run PowerShell Script from a Batch

 

Export CSV file with Array values using Powershell

One of the best and easiest ways to put data into an easy-to-read format is a CSV (comma-separated values) file. A CSV file has a header line naming each column, followed by the values for each column. Writing a CSV requires positioning the data in that structure, and PowerShell offers a few options for working with structured data. Since a CSV file is just a text file, it can loosely be created with PowerShell’s Add-Content cmdlet. Here I explain how to export a CSV file from array values with the help of the simple Add-Content cmdlet.

We can use either Add-Content or Export-Csv to create and write CSV files. The best approach is Export-Csv when we have structured data/objects; otherwise we can use the Add-Content cmdlet. Add-Content does not natively understand the CSV format, it simply writes lines of text, which is enough to build one.
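
For comparison, a minimal sketch of the structured approach with Export-Csv (the property values here are just examples):

# Build structured objects and let Export-Csv handle the headers and quoting
$employees = @(
    [PSCustomObject]@{ EmpName = 'Alice'; EMPID = 101; EMPemail = 'alice@example.com'; EMPpin = 1234 }
    [PSCustomObject]@{ EmpName = 'Bob';   EMPID = 102; EMPemail = 'bob@example.com';   EMPpin = 5678 }
)
$employees | Export-Csv -Path "C:\PowerShell\EmployeeDetails.csv" -NoTypeInformation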

############################################################
#Project : Read Array values and generate CSV file
#Developer : Thiyagu S (dotnet-helpers.com)
#Tools : PowerShell 5.1.15063.1155 
#E-Mail : mail2thiyaguji@gmail.com 
############################################################
#Declared the File path location with the CSV format.
$FilePathLocation = "C:\PowerShell\EmployeeDetails.csv"
#Declared the Array
$Empdetails =@();
$empCount = 3;
#Declared the Column names
$columnName = '"EmpName","EMPID","EMPemail","EMPpin"'
#Add the column titles to the CSV file
Add-Content -Path $FilePathLocation -Value $columnName
#Loop to get user input 3 times
for ($loopindex=0; $loopindex -lt $empCount; $loopindex++)
{
#Getting the User Input from the console
$Empdetails += Read-Host " Enter EMPName,EMPID,EMPemail,EMPpin by comma separator"
}
#Loop through the employee details and append each as a row in the CSV file
$Empdetails | foreach { Add-Content -Path $FilePathLocation -Value $_ }

Output: 

Export CSV file with Array

What do you think?

I hope you now have an idea of how to read array values and generate a CSV file with PowerShell. I would like to hear from my readers: your valuable feedback, questions, or comments about this article are always welcome.

Add Tags On An AZURE SQL DATABASE Using PowerShell

As system admin/DevOps engineers, during audit time we often need to close findings quickly (for example, by grouping and filtering resources) rather than working through them manually. In our case, the review comment was that all resources must have tags. Completing this task manually would take huge effort because we maintain a lot of resources, so we decided to automate it. Here we discuss how to add tags on an Azure SQL database using PowerShell.

What Is A Tag In Azure?

A tag in Azure is a name-value pair that you can attach to resources, resource groups, and subscriptions. The New-AzTag cmdlet creates a predefined Azure tag or adds values to an existing tag, and it can also create or update the entire set of tags on a resource or subscription.

Azure tagging is an excellent feature from Microsoft that helps you logically group your Azure resources and track them. It also helps automate resource deployments and, importantly, provides visibility into the costs that each group of resources is responsible for.

Syntax: New-AzTag [-ResourceId] <String> [-Tag] <Hashtable> [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>]


What are the ways to create a Tags in Azure?

Azure tags are key-value pairs that can be created and assigned to resources using the Azure Portal, PowerShell, the Azure CLI, etc.

Note: Tag names are case-insensitive, but tag values are case-sensitive.

Why Use The Azure tag ?

The main reason to use Azure tags is to organize the resources in the Azure Portal. Organizing resources properly helps you identify the category that each resource belongs to. So basically, Azure tags are name-value pairs that help organize Azure resources in the Azure Portal.

For example, when you have many resources in your Azure subscription, tags really help you categorize them. Suppose you have 6 virtual machines (VMs): 2 belong to the development environment, 2 to QA, and the remaining 2 to production. You can tag them as Environment = Development, Environment = QA, or Environment = Production, and then easily see which resources belong to each environment.


How To Create Azure Tags Using PowerShell


Step 1: Connect to your Azure account using Connect-AzAccount

Before starting, gather the service principal secret, AppId, and tenant ID needed to connect to Azure and perform operations against the Azure services.

#Converts plain text or encrypted strings to secure strings.
$SecureServicePrinciple = ConvertTo-SecureString -String "rfthr~SSDCDFSDFE53Lr3Daz95WF343jXBAtXADSsdfEED" -AsPlainText -Force
#Assigning the App ID
$AppId = "0ee7e633-0c49-408e-b956-36d62264f644"
#Assigning the Tenant ID
$tenantId= "32cf8ba2-403a-234b-a3b9-63c2f8311778"
#storing a username and password combination securely.
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $AppId, $SecureServicePrinciple
#Connect to Azure with an authenticated account for use with cmdlets from the Az PowerShell modules.
Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $tenantId

Step 2: Define the tags and assign as a array list

Based on my requirements I added the tags below; you can create any number of tags based on how you segregate your resources.

$tags = @{"Business Unit"="WX_Digital"; "Environment"="PROD"; "Owner"="dotnet-helpers" ; "Project"="webapp"}


Step 3: Getting all the SQL database in the specific resources group

Example 1: Get single SQL database to update

  • The Get-AzResource cmdlet gets all the resources in the subscription; here it is filtered to the SQL database resource type.
  • After executing the script below, the $RESOURCE.Id variable holds the resource IDs of the matching databases, as shown in the snapshot below.
  • -ResourceType : The resource type of the resource(s) to be retrieved. For example, Microsoft.Compute/virtualMachines
#Get the single database using a Where-Object filter
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" | Where-Object name -Like 'sqlsrvr-dgtl-pprd/sitecore_master'

$resourceId = $RESOURCE.Id

Example 2: Get all the database to update

#Gets All the database 
$RESOURCE = Get-AzResource -ResourceGroupName "rg-dgtl-pprd" -ResourceType "Microsoft.Sql/servers/databases" 

$resourceIds = $RESOURCE.Id

Step 4: Update new tags using new-AzTag command

With the New-AzTag cmdlet we can create a predefined Azure tag or add values to an existing tag; it creates or updates the entire set of tags on a resource or subscription.

-ResourceId : The resource identifier for the entity being tagged. A resource, a resource group or a subscription may be tagged.

Example 1: Add tags to a single database

#Creates or updates the entire set of tags on a resource or subscription.
#The resource identifier for the entity being tagged. A resource, a resource group or a subscription may be tagged.
new-AzTag -ResourceId $resourceId -Tag $tags

Example 2: Add tags to all databases under the resource group

foreach($resourceId in $resourceIds){

Write-Output $resourceId
#Creates or updates the entire set of tags on a resource or subscription.
new-AzTag -ResourceId $resourceId -Tag $tags

}

Output:
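
To confirm the tags were applied, you can read them back with Get-AzTag (a minimal sketch; -ResourceId returns the tag set of that single resource):

# Show the tags currently applied to a resource
Get-AzTag -ResourceId $resourceId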
