All posts by Thiyagu

Cache Purging in Azure Front Door with Azure PowerShell and CLI

Introduction

Azure Front Door is a global, scalable entry point for fast delivery of your applications. It provides load balancing, SSL offloading, and caching, among other features. One critical task for maintaining optimal performance and ensuring the delivery of up-to-date content is cache purging. This article provides a step-by-step guide to performing cache purging in Azure Front Door using Azure PowerShell and the Azure Command-Line Interface (CLI).

What is Cache Purging?

Cache purging, also known as cache invalidation, is the process of removing cached content from a caching layer. This is essential when the content served to the end users needs to be updated or deleted. In the context of Azure Front Door, purging ensures that the latest version of your content is delivered to users instead of outdated cached versions.

Prerequisites for Cache Purging in Azure Front Door

Step 1: Open Azure PowerShell

Open your preferred PowerShell environment (Windows PowerShell, PowerShell Core, or the PowerShell Integrated Scripting Environment (ISE)).

Step 2: Sign in to Azure

Sign in to your Azure account using the following command:

Connect-AzAccount

Step 3: Select the Subscription

If you have multiple subscriptions, select the appropriate subscription:

Select-AzSubscription -SubscriptionId "your-subscription-id"

Step 4: Cache Purge using PowerShell

Method 1: Using Invoke-AzFrontDoorPurge

Purpose: Invoke-AzFrontDoorPurge is used specifically for purging content from the Azure Front Door caching service.

Usage: This cmdlet is part of the Azure PowerShell module and is used to remove specific cached content from the Azure Front Door service (i.e., cache purging in Azure Front Door).

Use the Invoke-AzFrontDoorPurge cmdlet to purge the cache. You’ll need the name of your Front Door profile and the list of content paths you want to purge.

Here’s an example:

# Prerequisite parameters

$frontDoorName = "your-frontdoor-name"
$resourceGroupName = "your-resource-group-name"
$contentPaths = @("/path1/*", "/path2/*")

Invoke-AzFrontDoorPurge -ResourceGroupName $resourceGroupName -FrontDoorName $frontDoorName -ContentPath $contentPaths

This command purges the specified paths in your Front Door profile.
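If you ever need to flush the whole cache, the same cmdlet accepts a wildcard root path (a minimal sketch mirroring the example above; the variable values are placeholders):

# Hedged sketch: purge everything cached for the profile using the root wildcard
Invoke-AzFrontDoorPurge -ResourceGroupName $resourceGroupName -FrontDoorName $frontDoorName -ContentPath @("/*")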

When to Use:

When you need to remove cached content specifically from Azure Front Door using Azure PowerShell.
Ideal for scenarios involving global load balancing and dynamic site acceleration provided by Azure Front Door.

Method 2: Using Clear-AzFrontDoorCdnEndpointContent

Purpose: Clear-AzFrontDoorCdnEndpointContent is used for purging content from Azure Front Door Standard/Premium endpoints, which are built on the Microsoft.Cdn resource provider. In other words, it targets the endpoint (CDN) layer rather than a classic Front Door profile.

Usage: This cmdlet clears content from CDN-based endpoints, which can be part of a solution using Azure Front Door.

$endpointName = "your-cdn-endpoint-name"
$profileName = "your-profile-name"
$resourceGroupName = "your-resource-group-name"
$contentPaths = @("/path1/*", "/path2/*")

Clear-AzFrontDoorCdnEndpointContent -ResourceGroupName $resourceGroupName -ProfileName $profileName -EndpointName $endpointName -ContentPath $contentPaths

When to Use:

  • When working specifically with Azure CDN endpoints.
  • Useful for content distribution network scenarios where you need to clear cached content from CDN endpoints.

Step 5: Cache Purge using Azure CLI

Method 3: Using az afd endpoint purge

Purpose: az afd endpoint purge is an Azure CLI command used for purging content from Azure Front Door endpoints.

Usage: This command is used within the Azure CLI to purge specific content paths from Azure Front Door.
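If you are not signed in yet, authenticate and select the subscription first (standard Azure CLI commands; the subscription ID is a placeholder):

az login
az account set --subscription "your-subscription-id"

Then set the variables and run the purge: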

profileName="your-frontdoor-profile-name"
endpointName="your-endpoint-name"
resourceGroupName="your-resource-group-name"
contentPaths="/path1/* /path2/*"

az afd endpoint purge --resource-group $resourceGroupName --profile-name $profileName --endpoint-name $endpointName --content-paths $contentPaths

When to Use:

  • When you need to purge cached content from Azure Front Door using Azure CLI.
  • Suitable for users who prefer command-line tools for automation and scripting.

Key Differences

Service Targeted:

  1. Invoke-AzFrontDoorPurge: Specifically targets Azure Front Door.
  2. Clear-AzFrontDoorCdnEndpointContent: Specifically targets Azure Front Door Standard/Premium (CDN-based) endpoints.
  3. az afd endpoint purge: Specifically targets Azure Front Door.

Use Case:

  1. Invoke-AzFrontDoorPurge: Best for scenarios involving global load balancing and content delivery with Azure Front Door.
  2. Clear-AzFrontDoorCdnEndpointContent: Best for scenarios involving CDN-based (Standard/Premium) endpoints, which might or might not be part of a broader Azure Front Door solution.
  3. az afd endpoint purge: Best for users comfortable with CLI and needing to purge Azure Front Door content.

Conclusion

Understanding the differences between these commands helps you choose the right tool for cache purging in Azure Front Door. Whether you are managing caches at the CDN endpoint layer or the Front Door profile layer, Azure provides flexible and powerful tools to help you maintain optimal performance and up-to-date content delivery.

How to Delete a Blob from an Azure Storage using PowerShell

In one of my automations (deleting a blob), I need to delete previously stored reports (each report name is appended with a timestamp) from a specific container in an Azure storage account, daily and in an automated way. So I need to ensure the container is available before starting to delete the report. This article explains in detail how to delete a blob from an Azure Storage Account using PowerShell.

New to storage accounts?

One of the core services within Microsoft Azure is the Storage Account service. Many services utilize Storage Accounts for storing data, such as Virtual Machine disks, diagnostics logs (especially application logs), SQL backups, and others. You can also use the Azure Storage Account service to store your own data, such as blobs or binary data.

As per MSDN, Azure blob storage allows you to store large amounts of unstructured object data. You can use blob storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data.

Delete a Blob from an Azure Storage

Step: 1 Get the prerequisite inputs

In this example, I am going to delete a SQL database backup (exported to storage in .bacpac format) from the container called sql.

## Prerequisite parameters
$resourceGroupName="rg-dgtl-strg-01"
$storageAccountName="sadgtlautomation01"
$storageContainerName="sql"
$blobName = "core_2022110824.bacpac"

Step: 2 Connect to your Azure subscription

In scenarios where you need to automate Azure management tasks or run scripts non-interactively, you can authenticate with a service principal, an identity created for your application or script to access Azure resources securely. Because the rest of this script uses Az PowerShell cmdlets, we sign in with Connect-AzAccount rather than az login (which only authenticates the Azure CLI).

## Connect to your Azure subscription with a service principal
$credential = New-Object System.Management.Automation.PSCredential("210f8f7c-049c-e480-96b5-642d6362f464", (ConvertTo-SecureString "c82BQ~MTCrPr3Daz95Nks6LrWF32jXBAtXACccAV" -AsPlainText -Force))
Connect-AzAccount -ServicePrincipal -Credential $credential -Tenant "cf8ba223-a403-342b-ba39-c21f78831637"

Step: 3 Get the storage account and check whether the container exists

When working with Azure Storage, you may need to verify if a container exists in a storage account or create it if it doesn’t. You can use the Get-AzStorageContainer cmdlet to check for the existence of a container.

## Get the storage account to check whether the container exists
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName

## Get the storage account context
$context = $storageAccount.Context

Step: 4 Check that the container exists before deleting the blob

We use the Remove-AzStorageBlob cmdlet to delete the blob from the Azure storage container.

## Check if the storage container exists
if (Get-AzStorageContainer -Name $storageContainerName -Context $context -ErrorAction SilentlyContinue)
{
    Write-Host -ForegroundColor Green "$storageContainerName : the requested container exists, started deleting the blob"

    ## Remove the blob from the Azure Storage container
    Remove-AzStorageBlob -Container $storageContainerName -Context $context -Blob $blobName
    Write-Host -ForegroundColor Green "$blobName deleted"
}
else
{
    Write-Host -ForegroundColor Magenta "$storageContainerName : the requested container does not exist"
}
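If you also want to confirm the blob itself exists before removing it, Get-AzStorageBlob can be used the same way (a hedged sketch reusing the variables above):

## Optional: check the blob exists before removing it
if (Get-AzStorageBlob -Container $storageContainerName -Context $context -Blob $blobName -ErrorAction SilentlyContinue)
{
    Remove-AzStorageBlob -Container $storageContainerName -Context $context -Blob $blobName
}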

Full Code:

## Delete a Blob from an Azure Storage
## Input Parameters
$resourceGroupName="rg-dgtl-strg-01"
$storageAccountName="sadgtlautomation01"
$storageContainerName="sql"
$blobName = "core_2022110824.bacpac"

## Connect to your Azure subscription with a service principal
$credential = New-Object System.Management.Automation.PSCredential("210f8f7c-049c-e480-96b5-642d6362f464", (ConvertTo-SecureString "c82BQ~MTCrPr3Daz95Nks6LrWF32jXBAtXACccAV" -AsPlainText -Force))
Connect-AzAccount -ServicePrincipal -Credential $credential -Tenant "cf8ba223-a403-342b-ba39-c21f78831637"

## Function to delete the blob from the storage container
Function DeleteBlobFromStorageContainer
{
    ## Get the storage account to check whether the container exists
    $storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName

    ## Get the storage account context
    $context = $storageAccount.Context

    ## Check if the storage container exists
    if (Get-AzStorageContainer -Name $storageContainerName -Context $context -ErrorAction SilentlyContinue)
    {
        Write-Host -ForegroundColor Green "$storageContainerName : the requested container exists, started deleting the blob"

        ## Remove the blob from the Azure Storage container
        Remove-AzStorageBlob -Container $storageContainerName -Context $context -Blob $blobName
        Write-Host -ForegroundColor Green "$blobName deleted"
    }
    else
    {
        Write-Host -ForegroundColor Magenta "$storageContainerName : the requested container does not exist"
    }
}

#Call the function
DeleteBlobFromStorageContainer

Output:

 

How to check Website status on the Linux Server

Maintaining website uptime is essential for a positive user experience, as even short periods of downtime can frustrate users and result in lost business. Automating uptime checks on a Linux machine allows quick detection of issues, enabling faster response times. In this article, we’ll explore simple, effective ways to create a Website Uptime Checker Script in Linux using different commands like curl, wget, ping.

My team and I previously worked on Windows machines and are familiar with PowerShell, but we now work on Linux-based machines, which led me to write articles on the commands we use on a daily basis.

1. Checking Website Uptime with curl

One of the most straightforward ways to check if a website is up is by using curl. The following multi-line bash script checks the specified website and reports its status:

#!/bin/bash
website="https://example.com"

# Check if website is accessible
if curl --output /dev/null --silent --head --fail "$website"; then
  echo "Website is up."
else
  echo "Website is down."
fi

Alternatively, here’s a one-liner with curl:

curl -Is https://dotnet-helpers.com | head -n 1 | grep -q "200" && echo "Website is up." || echo "Website is down."

Explanation:

  • curl -Is sends a HEAD request to retrieve only headers.
  • head -n 1 captures the status line of the HTTP response.
  • grep -q "200" checks whether the status line contains a 200 code (HTTP/2 responses report "HTTP/2 200" without the "OK" text, so we match on the code alone).
    Based on this, the command outputs either "Website is up." or "Website is down."
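To monitor several sites in one pass, the same curl check can be wrapped in a loop (a minimal sketch; replace the URLs with your own):

#!/bin/bash
# Check a list of websites and report each one's status
for website in "https://dotnet-helpers.com" "https://example.com"; do
  if curl --output /dev/null --silent --head --fail "$website"; then
    echo "$website is up."
  else
    echo "$website is down."
  fi
done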

2. Monitoring Uptime with wget

If curl isn’t available, wget can be an alternative. Here’s a multi-line script using wget:

#!/bin/bash
website="https://dotnet-helpers.com"

if wget --spider --quiet "$website"; then
  echo "Website is up."
else
  echo "Website is down."
fi

And the one-liner version with wget:

wget --spider --quiet https://dotnet-helpers.com && echo "Website is up." || echo "Website is down."

Explanation:

  • The --spider option makes wget operate in "spider" mode, checking if the website exists without downloading content.
  • --quiet suppresses the output.

3. Checking Server Reachability with ping

Although ping checks the server rather than website content, it can still verify server reachability. Here’s a multi-line script using ping:

#!/bin/bash
server="example.com"

if ping -c 1 "$server" &> /dev/null; then
  echo "Server is reachable."
else
  echo "Server is down."
fi

And here’s the one-liner with ping:

ping -c 1 dotnet-helpers.com &> /dev/null && echo "Server is reachable." || echo "Server is down."

Summary

By combining these single-line and multi-line commands, you can monitor website availability and server reachability effectively. Monitoring website uptime on a Linux machine is simple with these commands. Choose the single-line or multi-line scripts that best suit your needs, and consider automating them (for example, with a cron job) for consistent uptime checks. Start implementing these methods to ensure your website remains accessible and reliable for your users.

 

Exception Handling – Try Catch with Custom Error Message in PowerShell

An error in a PowerShell script will prevent it from completing execution successfully. Using error handling with try-catch blocks allows you to manage and respond to these terminating errors. In this post, we will discuss the basics of try/catch blocks and how to surface custom error messages in PowerShell.

Handling errors effectively in scripts can save a lot of troubleshooting time and provide better user experiences. In PowerShell, we have robust options to handle exceptions using try, catch, and finally blocks. Let’s dive into how you can use try-catch to gracefully handle errors and add custom error messages for better feedback.

Why Use Exception Handling in PowerShell?

Scripts can fail for many reasons: missing files, invalid input, or network issues, to name a few. With exception handling, you can capture these issues, inform users in a friendly way, and potentially recover from errors without crashing your script. Using try-catch, you can:

  • Catch specific errors.
  • Display user-friendly messages.
  • Log errors for debugging.

Syntax overview of Try/Catch

As in other programming languages, the try-catch block syntax in PowerShell is simple: it is framed with two sections enclosed in curly brackets (the first block is the try and the second is the catch block).

try {
    # Functionality within the try block
}
catch {
    # Action to take on errors
}

The main purpose of using the try-catch block is that we can manipulate the error output and make it friendlier for the user.

Example 1:

Without a try-catch block, executing the script below prints the full error text as output; it occupies space on the screen, and the actual problem may not be immediately visible to the user. You can use a try-catch block to manipulate the error output and make it friendlier.

Without a Try-Catch block

Get-Content -Path "C:\dotnet-helpers\BLOG\TestFiled.txt"

With a Try-Catch block

In the below script, we added the ErrorAction parameter with a value of Stop to the command. Not all errors are considered "terminating", so sometimes we need to add this bit of code for the error to properly terminate into the catch block.

try {
    Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction Stop
}
catch {
    Write-Warning -Message "Can't read the file, seems there is an issue"
}

Example 2:

Using the $Error Variable

In Example 1, we displayed our own custom message; instead, you can display the specific error message that occurred rather than the entire red exception text. When an error occurs in the try block, it is saved to the automatic variable named $Error. The $Error variable contains an array of recent errors, and you can reference the most recent error in the array at index 0.

try {
    Get-Content -Path "C:\dotnet-helpers\BLOG\TestFiled.txt" -ErrorAction Stop
}
Catch {
    Write-Warning -Message "Can't read the file, seems there is an issue"
    Write-Warning $Error[0]
}

Example 3:

Using Exception Messages

You can also use multiple catch blocks in case you want to handle different types of errors. In this example, we are going to handle two different types of errors and display a different custom message for each. The first CATCH handles the case where the path does not exist, and the next CATCH handles errors where the specified drive is not found.

Using try/catch blocks gives you additional power in handling errors in a script, and we can take different actions based on the error type. The catch block is not limited to displaying error messages; it can contain logic that resolves the error and continues executing the rest of the script.

In this example, the drive in the referenced path (G:\dotnet-helpers\BLOG\TestFiled.txt) does not exist on the executing machine, so the error is caught by [System.Management.Automation.DriveNotFoundException] and the corresponding CATCH block is executed.

try {
    Get-Content -Path "G:\dotnet-helpers\BLOG\TestFiled.txt" -ErrorAction Stop
}
# Executes if the specific file is not found in the specified directory
Catch [System.IO.DirectoryNotFoundException] {
    Write-Warning -Message "Can't read the file, seems there is an issue"
    Write-Warning $Error[0]
}
# Executes if the specified drive is not found
Catch [System.Management.Automation.DriveNotFoundException] {
    Write-Warning -Message "Custom Message: Specified drive is not found"
    Write-Warning $Error[0]
}
# Executes for unhandled exceptions - this catch block runs if the error does not match any other catch block exception
Catch {
    Write-Warning -Message "Oops, an unexpected error occurred"
    # Returns the exception type name for the last error that occurred
    Write-Host $Error[0].Exception.GetType().FullName
}

OUTPUT
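Though not shown above, a finally block runs whether or not the catch fires, which makes it a natural place for cleanup (a minimal sketch):

try {
    Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction Stop
}
catch {
    Write-Warning $Error[0]
}
finally {
    # This block runs on both success and failure
    Write-Host "Cleanup or logging can go here"
}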

How to view the secret variables in Azure DevOps

Today, I will be talking about a technique you can use to view secret variables in Azure DevOps.

Introduction

Azure DevOps lets us store secrets within variable groups, which can then be used in pipelines. These secret variables cannot be viewed manually from the portal. Sometimes, though, we may want to view a password to perform some other activity.

Note: The best practice is to keep secrets in Azure Key Vault, from where they can be read and used in an Azure pipeline in a very secure way. Still, some legacy projects maintain their secrets in Azure DevOps variable groups, so this article focuses on them. You can read this post to learn how to use Key Vault to handle secrets.

What are Secrets Variables in Azure Pipelines?

Secret variables are placeholders for values that you want to store in an encrypted format and use while running a pipeline. They are suitable for private information like usernames, passwords, API keys, IDs, and other identifying data that you wouldn't want exposed in a pipeline. Secret variables are encrypted at rest with a 2048-bit RSA key and are made available on the agent for tasks and scripts to use.

How to set Secret in Azure Variable group?

You can set secret variables in the pipeline settings UI; secrets set there are scoped to the pipeline where they are defined, so only users with access to that pipeline can use them. Alternatively, you can set secrets in a variable group. Variable groups follow the library security model: you can control who can define new items in a library and who can use an existing item.

Let’s create a Secret variable in a Variable Group as shown below and make sure that you set it as a secret by locking it.

Once you mark it as a secret (by clicking the open lock icon, as shown in the image below) and save the variable group, no one, including admins, will be able to view the secret. Let's now understand how to view the secret with the help of Azure DevOps Pipelines.

View the secret variables from Variable Group

You can create a simple Pipeline which has the below tasks to view the secrets in pipeline execution.

  1. A PowerShell task which outputs text (along with the secret) into a file named ViewSecretValue.txt
  2. Publish the ViewSecretValue.txt into Azure Pipeline artifacts.

Run the pipeline with the below YAML, which contains the PowerShell task:

variables:
- group: Demo_VariableGroup
steps:
- powershell: |
    "The secretkey value is : $(secretkey)" | Out-File -FilePath  $(Build.ArtifactStagingDirectory)\ViewSecretValue.txt
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'
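Note that secret variables are not automatically exposed as environment variables to scripts; the $(secretkey) macro above is expanded before the script runs. If you prefer an explicit mapping, the secret can be passed into a step's environment instead (a hedged sketch that would sit under the same steps):

- pwsh: |
    "The secretkey value is : $env:SECRET_KEY" | Out-File -FilePath $(Build.ArtifactStagingDirectory)\ViewSecretValue.txt
  env:
    SECRET_KEY: $(secretkey)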

Now, click on the ViewSecretValue.txt artifact file to download it. Once downloaded, open it in Notepad, where the secret value appears in plain text.

Conclusion

In summary, handling secret variables securely is crucial for maintaining data confidentiality in DevOps processes. Azure DevOps provides built-in features and best practices to keep sensitive data protected, making it a powerful platform for secure CI/CD pipeline management. Integrating with tools like Azure Key Vault can further strengthen your security posture and simplify secret management across multiple projects.

 

How to pass objects between tasks in Azure pipeline

In our previous post, we discussed "How to pass values between Tasks in a Pipeline", where we passed a single value from one task/job/stage to another in a pipeline. But if you want to share an object from one task/job/stage to another instead of a single value, you need to perform a small trick. In this quick post, we discuss a trick I found recently to store an object in an Azure Pipelines variable from a PowerShell script (how to pass objects between tasks in an Azure pipeline).

The problem

Setting a variable in a script for later use in an Azure DevOps pipeline is possible using the task.setvariable command, as described in the post below.

This works great for simple variables like below:

steps:
- pwsh: |
    Write-Host "##vso[task.setvariable variable=variableName]variableValue"
- pwsh: |
    Write-Host "Value from previous step: $(variableName)"

But it is a bit trickier to share complex values like objects, arrays, or arrays of objects between tasks/stages/jobs in a pipeline.

The solution

As an example, let's try to retrieve the name, type, and resource group of all the resources in an Azure subscription, as shown in the script below. Then let's see how we can pass the value of $azure_resources through the pipeline.

$azure_resources = Get-AzResource | Select-Object -Property Name,Type,ResourceGroupName

First, you can store an object in an Azure Pipelines variable using the PowerShell task. Next, serialize it to JSON so it fits in a single variable value; the -Compress flag converts the JSON to a single line, like below:

$azure_resourcesJson = $azure_resources | ConvertTo-Json -Compress

Pass objects between tasks in Azure pipeline

pool:
  name: devopsagent-win-pprd

steps:
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'Azure_Digital'
    azurePowerShellVersion: LatestVersion
    ScriptType: InlineScript
    Inline: |
      $azure_resources = Get-AzResource | Select-Object -Property Name,Type,ResourceGroupName -First 3
      $azure_resourcesJson = $azure_resources | ConvertTo-Json -Compress
      Write-Host "##vso[task.setvariable variable=resources]$azure_resourcesJson"
- pwsh: |
    $resources = '$(resources)' | ConvertFrom-Json
    Write-Host "There are $($resources.Count) resources in the list"
    Write-Host "The resource groups are" $resources.ResourceGroupName

OUTPUT
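Going further, the pipeline above shares the object between steps of a single job. To pass the same JSON across jobs or stages, the variable has to be set as an output variable and read through dependencies (a hedged sketch with placeholder names):

jobs:
- job: A
  steps:
  - pwsh: |
      $json = '[{"Name":"demo","ResourceGroupName":"rg-demo"}]'  # placeholder for your serialized object
      Write-Host "##vso[task.setvariable variable=resources;isOutput=true]$json"
    name: setVars
- job: B
  dependsOn: A
  variables:
    resourcesJson: $[ dependencies.A.outputs['setVars.resources'] ]
  steps:
  - pwsh: |
      $resources = '$(resourcesJson)' | ConvertFrom-Json
      Write-Host "There are $($resources.Count) resources in the list"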

 

How to Create Log File using Start-Transcript cmdlet in PowerShell

What is Start-Transcript?

As per MSDN, The Start-Transcript cmdlet creates a record of all or part of a PowerShell session to a text file. The transcript includes all commands that the user types and all output that appears on the console. Starting in Windows PowerShell 5.0, Start-Transcript includes the hostname in the generated filename of all transcripts.

When & Where to use?

As system admins/DevOps engineers, we do a lot of automation on servers, and it's essential to capture all the details (whether a run failed or succeeded) in a log for later analysis. In short, if you run PowerShell scripts automatically, you need a way to log any errors or warnings that occur. Usually, you would create your own log function (the same as I did for my previous automation; please refer to my implemented method), but there is an easier way, which I found during a team discussion and thought to share with all of you. This is especially useful when your enterprise's logging is centralized.

The Start-Transcript cmdlet writes everything that happens during a session to a log file. These are the commands that you enter in a PowerShell session and all output that normally appears in the console.

You can also refer to: Try/Catch, Error Handling, Error Logging

Example 1: Without any parameters (inside our script)

To start a transcript, simply use the Start-Transcript cmdlet, and Stop-Transcript to stop it. Place whatever script needs to be executed between the Start and Stop calls.

Without any parameters, the transcript is saved in the user's Documents folder. The filename is generated automatically and consists of the device name and random characters, followed by a timestamp. The default path is convenient when you are only using PowerShell on your own machine.

Start-Transcript
$destPath = "C:\dotnet-helpers\Destination\FinedMe"
$sourcePath = 'C:\dotnet-helpers\Source\'
Get-Content $destPath
Stop-Transcript

Output: 

The transcript log contains all the information that you see in the console, as well as a very detailed header with information about the host you used:

Example 2: With Parameters (-path & -Append)

The default path is great when you are only using PowerShell on your own machine, but most of the time you want to centralize the log files. There are two options for this: the -Path parameter or the -OutputDirectory parameter.

# Append the transcript to an Error.log file.
Start-Transcript -Path c:\automationLog\Error.log -Append

With the -Path parameter, you need to specify the full path, including the file name. This is helpful when you want a single log file for a script and want to collect all transcripts in one place. By default, Start-Transcript overwrites any existing content in the file; in the example above we used the -Append parameter to add to the end of the file instead. Alternatively, -NoClobber prevents the cmdlet from overwriting an existing file at all.

# Use -NoClobber to prevent overwriting an existing file (an error is raised instead)
Start-Transcript -Path c:\automationLog\Error.log -NoClobber

Example 3: With -OutputDirectory Parameters

You can also use the -OutputDirectory parameter to store the log file in a custom location; the cmdlet then generates a unique filename for each run.

Start-Transcript -OutputDirectory c:\automationLog\
$destPath = "C:\dotnet-helpers\Destination\FinedMe"
$sourcePath = 'C:\dotnet-helpers\Source\'
Get-Content $destPath
Stop-Transcript

Output: 

For this example, I executed the script repeatedly; for each execution, a unique log file is created, with some random alphanumeric characters appended, as shown in the snapshot below (3of74bj, 5yrpf4R, ...).

Points to remember:

Files that are created by the Start-Transcript cmdlet include random characters in names to prevent potential overwrites or duplication when two or more transcripts are started simultaneously.

The Start-Transcript cmdlet creates a record of all or part of a Windows PowerShell session in a text file. The transcript includes all commands that the user types and all output that appears on the console.

Each transcript starts with a fully detailed header containing a lot of information. When you use transcripts for logging automated scripts, this header is large and not of much use, so newer PowerShell versions offer parameters to cut it down (like -UseMinimalHeader).
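For example (a minimal sketch, assuming your PowerShell version supports -UseMinimalHeader):

Start-Transcript -Path c:\automationLog\Error.log -Append -UseMinimalHeader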


How to remove Multiple bindings in IIS using PowerShell script

As you are aware, a large number of unused URLs on servers becomes critical during maintenance activities, so we need to remove the multiple IIS bindings that are no longer in use. In our project, my team identified a large number of unused URLs (almost 500+) across many servers and was asked to clean them up on all of them. Cleaning up the URLs manually is very hard, invites manual errors such as wrongly removing URLs that are still in use, and would take many days to complete. So we decided to automate this activity instead of cleaning up manually.

To resolve the above scenario, we created a PowerShell script to remove a large number of URLs in a single execution. Let's discuss the script in detail. To follow along, either RDP to the web server directly and open a PowerShell console, or use PowerShell remoting to connect to a remote session.

STEP: #1

First, we query the default website using the Get-Website cmdlet, which gets configuration information for an IIS website. Executing the line below returns the configuration information for the "Default Web Site".

Get-Website -Name "Default Web Site"

STEP: #2

After executing the above script, the website information is available; next, we need to find the bindings (URLs) that match our parameters/criteria.

As you probably already know, you can have multiple bindings attached to a single site. Using the Get-WebBinding cmdlet, you can get the bindings of a specified IIS site, filtering on parameters like -Protocol, -Port, -HostHeader, -IPAddress, etc. The lines below get the bindings matching a host header over HTTP/HTTPS on ports 80/443.

Get-WebBinding -Protocol "http" -Port 80 -HostHeader $siteURL
Get-WebBinding -Protocol "https" -Port 443 -HostHeader $siteURL
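To preview what would be affected before removing anything, you can list all current bindings on the site first (a small sketch):

## Optional: review the existing bindings before any removal
Get-Website -Name "Default Web Site" | Get-WebBinding | Select-Object protocol, bindingInformation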

STEP: #3

Finally, we remove the bindings with the help of the Remove-WebBinding cmdlet, which removes a binding from an Internet Information Services (IIS) website.

Remove-WebBinding

Full code (to remove multiple bindings in IIS )

The script reads the list of URLs from a text file and loops over them to remove the bindings from the IIS website.

##############################################################################
#Project : How to remove the IIS binding from server using PowerShell script.
#Developer : Thiyagu S (dotnet-helpers.com)
#Tools : PowerShell 5.1.15063.1155 
#E-Mail : mail2thiyaguji@gmail.com 
##############################################################################

#Get the list of URLs from the text file
$siteURLs = Get-Content -Path C:\Desktop\ToBeRemoveURLs_List.txt

#Loop over the URL list to remove the bindings one by one
foreach ($siteURL in $siteURLs)
{
    Get-Website -Name "Default Web Site" | Get-WebBinding -Protocol "http" -Port 80 -HostHeader $siteURL | Remove-WebBinding

    Get-Website -Name "Default Web Site" | Get-WebBinding -Protocol "https" -Port 443 -HostHeader $siteURL | Remove-WebBinding
}

Linux Environment Variables

What Are Linux Environment Variables?

Linux environment variables are dynamic values that the operating system and various applications use to determine information about the user environment. They are essentially variables that can influence the behavior and configuration of processes and programs on a Linux system. These variables are used to pass configuration information to programs and scripts, allowing for flexible and dynamic system management.

These variables, often referred to as global variables, play a crucial role in tailoring the system's functionality and managing the startup behavior of applications across the system. Local variables, on the other hand, are restricted to the shell in which they are created and initialized.

Linux environment variables have a key-value pair structure, with the key and value separated by an equals (=) sign. Variable names are case-sensitive and are conventionally uppercase for easy identification.

Key Features of Environment Variables

  • Dynamic Values: They can change from session to session and even during the execution of programs.
  • System-Wide or User-Specific: Some variables are set globally and affect all users and processes, while others are specific to individual users.
  • Inheritance: Environment variables can be inherited by child processes from the parent process, making them useful for configuring complex applications (see the sketch below).
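The inheritance behavior in the last point is easy to observe: an exported variable becomes visible inside a child process (a quick sketch):

export MY_VAR="hello"
bash -c 'echo "child process sees: $MY_VAR"'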

Common Environment Variables

Here are some commonly used environment variables in Linux:

  • HOME: Indicates the current user’s home directory.
  • PATH: Specifies the directories where the system looks for executable files.
  • USER: Contains the name of the current user.
  • SHELL: Defines the path to the current user’s shell.
  • LANG: Sets the system language and locale settings.
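You can check the current values of these variables on your own system with printenv (a quick look):

printenv HOME PATH USER SHELL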

Setting and Using Environment Variables

Temporary Environment Variables

You can set environment variables temporarily in a terminal session using the export command. The commands below set an environment variable named MY_VAR to true for the current session and then echo it back:

export MY_VAR=true
echo $MY_VAR

Example 1: Setting Single Environment Variable

For example, the following command will set the Java home environment directory.

export JAVA_HOME=/usr/bin/java

Note that you won’t get any response about the success or failure of the command. As a result, if you want to verify that the variable has been properly set, use the echo command.

echo $JAVA_HOME

The echo command will display the value if the variable has been appropriately set. If the variable has no set value, you might not see anything on the screen.

Example 2: Setting Multiple Environment Variables

You can set multiple variables with a single export command by separating the assignments with spaces, like this:

export <NAME1>=<VALUE1> <NAME2>=<VALUE2> <NAME3>=<VALUE3>

export VAR1="value1" VAR2="value2" VAR3="value3"

Example 3: Setting Multiple Values for a Single Environment Variable

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"

The PATH variable contains a list of directories where the system looks for executable files. Multiple directories are separated by colons.

Permanent Environment Variables

To make DOTNET_HOME available system-wide, append it to the /etc/environment file, as shown below.

The command appends the line DOTNET_HOME="true" to /etc/environment, which is a system-wide configuration file for environment variables. By adding this line, you make the DOTNET_HOME variable available to all users and sessions on the system. The use of sudo ensures that the command has the necessary permissions to modify /etc/environment.

Example 1: Setting Single Environment Variable for all USERS

export DOTNET_HOME=true
echo 'DOTNET_HOME="true"' | sudo tee /etc/environment -a

Example 2: Setting Multiple value for single Environment Variable for all USERS

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"
echo 'PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"' | sudo tee -a /etc/environment

Breakdown of the Command

echo 'DOTNET_HOME="true"': This command outputs the string DOTNET_HOME="true". Essentially, echo is used to display a line of text.

| (Pipe): The pipe symbol | takes the output from the echo command and passes it as input to the next command. In this case, it passes the string DOTNET_HOME="true" to sudo tee.

sudo tee /etc/environment -a: sudo: This command is used to run commands with superuser (root) privileges. Since modifying /etc/environment requires administrative rights, sudo is necessary.

tee: The tee command reads from the standard input (which is the output of the echo command in this case) and writes it to both the standard output (displaying it on the terminal) and a file.

/etc/environment: This is the file where tee will write the output. The /etc/environment file is a system-wide configuration file for environment variables.

-a: The -a (append) option tells tee to append the input to the file rather than overwriting its contents. This ensures that any existing settings in /etc/environment are preserved and the new line is simply added to the end of the file.

This command is used to add a new environment variable (DOTNET_HOME) to the system-wide environment variables file (/etc/environment). By appending it, you ensure that the new variable is available to all users and sessions across the entire system.

Quickly Display Files with PowerShell: Understanding Cat and Get-Content

PowerShell offers powerful cmdlets for managing and displaying files, and among them, cat (an alias for Get-Content) and Get-Content itself are commonly used to read and display file contents. Though they may look like two different commands, understanding how they relate helps you use them more effectively in your scripts and commands.

Understanding Get-Content

Get-Content is a versatile cmdlet that reads the contents of a file and outputs each line as an individual string. It’s useful for working with files line by line, as it returns an array of strings where each element corresponds to a line in the file.
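For instance, the line-array behavior is easy to see (a short sketch; the file path is a placeholder):

$lines = Get-Content "C:\path\to\tmp.txt"   # one string per line
$lines.Count                                # number of lines in the file
Get-Content "C:\path\to\tmp.txt" -Raw       # the whole file as a single string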

In PowerShell, Cat is an alias for the Get-Content cmdlet. This alias comes from Unix-like systems, where the cat command is used to concatenate and display file contents. In PowerShell, Cat serves the same purpose but is simply a shorthand for Get-Content.

Apart from cat, there are other aliases for the Get-Content command, which you can find by running the command below. As you will see, gc and type are also aliases of Get-Content, along with cat.
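Get-Alias -Definition Get-Content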

Displaying the Contents of a File with PowerShell Cat

The primary usage of PowerShell cat is showing a file's contents on the screen. Running cat followed by a filename outputs that file's contents for display. Run the below command to read the tmp.txt file and output the data on the screen:

cat "C:\path\to\tmp.txt"

Showing Lines from the Top & Bottom

Reading the first few lines of a file can help you identify whether it is the one you need. PowerShell cat lets you display a specific number of lines from a file for a quick look, as shown below.

cat tmp.txt -TotalCount 6

To view the contents from the bottom, specify the -Tail parameter or its alias, -Last. This method is typical when troubleshooting log files.

cat tmp.txt -Tail 5

Merging Contents Into a New File

Instead of simply showing the content on the screen, you can redirect the standard output of a command to a new file in PowerShell. Moreover, PowerShell cat can read multiple files at once, which makes merging contents possible. Run the cat command to concatenate File1.txt and File2.txt as follows. The output redirect (>) sends the command output to a new file called catMerge.txt.

Method 1:

cat File1.txt,File2.txt > catMerge.txt

Method 2:

cat File1.txt,File2.txt | Out-File Merge1.txt

Appending the Contents of One File to Another

Another thing you can do, just as with the Linux cat command, is append the contents of one file to another instead of overwriting the file or creating a new one.

# PowerShell cat with Add-Content
cat File1.txt | Add-Content File2.txt

This command appends the contents of File1.txt to File2.txt.

# PowerShell cat with double redirection symbol (append)
cat File1.txt >> File2.txt