All posts by Thiyagu

How to Create Log File using Start-Transcript cmdlet in PowerShell

What is Start-Transcript?

As per the Microsoft documentation, the Start-Transcript cmdlet creates a record of all or part of a PowerShell session in a text file. The transcript includes every command that the user types and all output that appears on the console. Starting in Windows PowerShell 5.0, Start-Transcript includes the hostname in the generated file name of all transcripts.

When & Where to use?

As system admins/DevOps engineers, we automate a great deal of work on our servers, and it is essential to capture the details of every run (success or failure) in a log so we can analyze it later if required. In short, if you run PowerShell scripts automatically, you need a way to log any errors or warnings that occur. Usually we write our own log function (as I did for my previous automation; please refer to my implemented method), but there is an easier way that I came across during a team discussion and wanted to share with all of you. This is especially useful when your enterprise's logging is centralized.

The Start-Transcript cmdlet writes everything that happens during a session to a log file. These are the commands that you enter in a PowerShell session and all output that normally appears in the console.

You can also refer to: Try/Catch, Error Handling, Error Logging

Example 1: Without any parameters (inside our script)

To start the transcript, simply run the Start-Transcript cmdlet, and run Stop-Transcript to stop it. Whatever script needs to be executed goes between the Start-Transcript and Stop-Transcript calls.

Without any parameters, the transcript is saved in the user's Documents folder. The file name is generated automatically and consists of the device name and random characters, followed by a timestamp. The default path is fine when you are only using PowerShell on your own machine.

Start-Transcript
$destPath = "C:\dotnet-helpers\Destination\FinedMe"
$sourcePath = 'C:\dotnet-helpers\Source\'
Get-Content $destPath
Stop-Transcript

Output: 

The transcript log contains all the information that you see in the console, including a very detailed header with information about the host you used:

Example 2: With Parameters (-path & -Append)

The default path is great when you are only using PowerShell on your own machine, but most of the time you want to centralize the log files. There are two options for this: the -Path parameter or the -OutputDirectory parameter.

# Append the transcript to an Error.log file.
Start-Transcript -Path c:\automationLog\Error.log -Append

With the -Path parameter, we need to specify the full path, including the file name. This is helpful when you want a single log file for a script and want every transcript appended to that one file. By default, Start-Transcript overwrites any existing content in the file; to prevent that, use the -Append parameter (to add to the end of the file) or -NoClobber (to refuse to overwrite an existing file).

# Use -NoClobber or -Append to prevent overwriting of existing files
Start-Transcript -Path c:\automationLog\Error.log -NoClobber
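If the script between Start-Transcript and Stop-Transcript throws a terminating error, the transcript can be left running. A minimal sketch (the log path and file read are assumptions) that guarantees the transcript is always closed is to wrap the body in try/finally:

```powershell
Start-Transcript -Path C:\automationLog\Error.log -Append
try {
    # your script body goes here
    Get-Content "C:\dotnet-helpers\Destination\FinedMe"
}
finally {
    # runs even if the body throws, so the transcript is always stopped
    Stop-Transcript
}
```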

Example 3: With -OutputDirectory Parameters

You can also use the -OutputDirectory parameter to store the log file in a custom location; with this parameter, the cmdlet generates a unique file name for each transcript.

Start-Transcript -OutputDirectory c:\automationLog\
$destPath = "C:\dotnet-helpers\Destination\FinedMe"
$sourcePath = 'C:\dotnet-helpers\Source\'
Get-content $destPath
Stop-Transcript

Output: 

For this example, I executed the script repeatedly; for each execution, a log file is created with a unique name that includes some random alphanumeric characters, as shown in the snapshot below (3of74bj, 5yrpf4R, ...).

Points to remember:

Files that are created by the Start-Transcript cmdlet include random characters in names to prevent potential overwrites or duplication when two or more transcripts are started simultaneously.

The Start-Transcript cmdlet creates a record of all or part of a Windows PowerShell session in a text file. The transcript includes all commands that the user types and all output that appears on the console.

Each transcript starts with a fully detailed header containing a lot of information. When you use transcripts to log scripts that run automatically, this header is large and not of much use. From PowerShell 6.2 and higher, the -UseMinimalHeader parameter cuts it down.
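On PowerShell 6.2 or later, the header can be shortened to a single line; a minimal sketch (the log path is an assumption):

```powershell
# Requires PowerShell 6.2+: writes a one-line header instead of the full block
Start-Transcript -Path C:\automationLog\Job.log -UseMinimalHeader
```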


How to remove Multiple bindings in IIS using PowerShell script

As you are aware, a large number of unused URLs on a server makes maintenance activities critical and error-prone, so we need to remove the IIS bindings that are no longer in use. In our project, my team identified a large number of unused URLs (500+ across many servers) and was asked to clean them all up. Cleaning up the URLs manually is very hard, would take many days, and invites manual errors such as accidentally removing URLs that are still in use. So we decided to automate the cleanup instead of removing multiple bindings in IIS by hand.

To handle this scenario, we created a PowerShell script that removes a large number of URLs in a single execution. Let's discuss the script in detail. To follow along, you'll first need to either RDP to the web server directly and open a PowerShell console, or use PowerShell remoting to connect to a remote session.

STEP: #1

First, we query the default website using the Get-Website cmdlet, which gets configuration information for an IIS website. The line below returns the configuration information for the "Default Web Site".

Get-Website -Name "Default Web Site"

STEP: #2

After executing the line above, the website information is available; next we need to find the bindings (URLs) that match our parameters/criteria.

As you probably already know, a single site can have multiple bindings attached. Using the Get-WebBinding cmdlet, you can get the bindings of a specified IIS site, filtered by parameters such as -Protocol, -Port, -HostHeader, and -IPAddress. The lines below get the bindings matching the host header over HTTP on port 80 and HTTPS on port 443.

Get-WebBinding -Protocol "http" -Port 80 -HostHeader $siteURL
Get-WebBinding -Protocol "https" -Port 443 -HostHeader $siteURL

STEP: #3

Finally, we remove the URLs with the help of the Remove-WebBinding cmdlet, which removes a binding from an Internet Information Services (IIS) website.

Remove-WebBinding
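Before removing bindings in bulk, it can be safer to first list exactly which bindings match your filter. A hedged sketch of such a preview step (the host header value here is hypothetical):

```powershell
# Preview which bindings would be removed, without removing anything
$siteURL = "old.dotnet-helpers.com"   # hypothetical host header
Get-Website -Name "Default Web Site" |
    Get-WebBinding -Port 80 -HostHeader $siteURL |
    Select-Object protocol, bindingInformation
```

Once the listed bindings are confirmed to be the unused ones, the same pipeline can be piped to Remove-WebBinding.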

Full code (to remove multiple bindings in IIS )

Read the list of URLs from a text file and loop over it to remove the bindings from the IIS website.

##############################################################################
#Project : How to remove the IIS binding from server using PowerShell script.
#Developer : Thiyagu S (dotnet-helpers.com)
#Tools : PowerShell 5.1.15063.1155 
#E-Mail : mail2thiyaguji@gmail.com 
##############################################################################

#Get list of URLs from the Text file
$siteURLs = Get-Content -path C:\Desktop\ToBeRemoveURLs_List.txt

#looping the URLs list to remove one by one
foreach($siteURL in $siteURLs)
{

Get-Website -Name "$('Default Web Site')"  | Get-WebBinding -Protocol "http" -Port 80 -HostHeader   $siteURL| Remove-WebBinding

Get-Website -Name "$('Default Web Site')"  | Get-WebBinding -Protocol "https" -Port 443 -HostHeader $siteURL | Remove-WebBinding

}

Linux Environment Variables

What Are Linux Environment Variables?

Linux environment variables are dynamic values that the operating system and various applications use to determine information about the user environment. They are essentially variables that can influence the behavior and configuration of processes and programs on a Linux system. These variables are used to pass configuration information to programs and scripts, allowing for flexible and dynamic system management.

These variables, often referred to as global variables, play a crucial role in tailoring the system's functionality and managing the startup behavior of applications across the system. Local variables, on the other hand, are restricted to the shell in which they are created and initialized.

Linux environment variables have a key-value structure, with the key and value separated by an equals (=) sign. Note that variable names are case-sensitive and are conventionally uppercase for easy identification.

Key Features of Environment Variables

  • Dynamic Values: They can change from session to session and even during the execution of programs.
  • System-Wide or User-Specific: Some variables are set globally and affect all users and processes, while others are specific to individual users.
  • Inheritance: Environment variables can be inherited by child processes from the parent process, making them useful for configuring complex applications.
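The inheritance point is easy to see in practice: an exported variable reaches a child process, while a plain assignment stays local. A quick demonstration (assuming bash):

```shell
# export makes a variable visible to child processes;
# a plain assignment stays local to the current shell.
export EXPORTED_VAR="visible"
LOCAL_VAR="hidden"

bash -c 'echo "EXPORTED_VAR=$EXPORTED_VAR LOCAL_VAR=$LOCAL_VAR"'
# prints: EXPORTED_VAR=visible LOCAL_VAR=
```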

Common Environment Variables

Here are some commonly used environment variables in Linux:

  • HOME: Indicates the current user’s home directory.
  • PATH: Specifies the directories where the system looks for executable files.
  • USER: Contains the name of the current user.
  • SHELL: Defines the path to the current user’s shell.
  • LANG: Sets the system language and locale settings.

Setting and Using Environment Variables

Temporary Environment Variables

You can set environment variables temporarily in a terminal session using the export command. The command below sets an environment variable named MY_VAR to true for the current session.

export MY_VAR=true
echo $MY_VAR

Example 1: Setting Single Environment Variable

For example, the following command will set the Java home environment directory.

export JAVA_HOME=/usr/bin/java

Note that you won’t get any response about the success or failure of the command. As a result, if you want to verify that the variable has been properly set, use the echo command.

echo $JAVA_HOME

The echo command will display the value if the variable has been appropriately set. If the variable has no set value, you might not see anything on the screen.

Example 2: Setting Multiple Environment Variables

You can set multiple variables with a single export command by separating the assignments with spaces, like this:

<NAME1>=<VALUE1> <NAME2>=<VALUE2> <NAME3>=<VALUE3>

export VAR1="value1" VAR2="value2" VAR3="value3"

Example 3: Setting Multiple Values for a Single Environment Variable

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"

The PATH variable contains a list of directories where the system looks for executable files. Multiple directories are separated by colons.

Permanent Environment Variables

To make a variable such as DOTNET_HOME available system-wide, follow these steps.

The command in Example 1 below appends the line DOTNET_HOME="true" to the /etc/environment file, which is a system-wide configuration file for environment variables. Adding this line makes the DOTNET_HOME variable available to all users and sessions on the system. The use of sudo ensures that the command has the necessary permissions to modify /etc/environment.

Example 1: Setting a Single Environment Variable for All Users

export DOTNET_HOME=true
echo 'DOTNET_HOME="true"' | sudo tee /etc/environment -a

Example 2: Setting Multiple Values for a Single Environment Variable for All Users

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"
echo PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin" | sudo tee /etc/environment -a

Breakdown of the Command

echo 'DOTNET_HOME="true"': This command outputs the string DOTNET_HOME="true". Essentially, echo is used to display a line of text.

| (pipe): The pipe symbol | takes the output of the echo command and passes it as input to the next command. In this case, it passes the string DOTNET_HOME="true" to sudo tee.

sudo tee /etc/environment -a: sudo: This command is used to run commands with superuser (root) privileges. Since modifying /etc/environment requires administrative rights, sudo is necessary.

tee: The tee command reads from the standard input (which is the output of the echo command in this case) and writes it to both the standard output (displaying it on the terminal) and a file.

/etc/environment: This is the file where tee will write the output. The /etc/environment file is a system-wide configuration file for environment variables.

-a: The -a (append) option tells tee to append the input to the file rather than overwriting its contents. This ensures that any existing settings in /etc/environment are preserved and the new line is simply added to the end of the file.

This command is used to add a new environment variable (DOTNET_HOME) to the system-wide environment variables file (/etc/environment). By appending it, you ensure that the new variable is available to all users and sessions across the entire system.
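The append behaviour of tee -a can be tried safely against a temporary file instead of the real /etc/environment; a small sketch:

```shell
# Practice against a temporary file instead of the real /etc/environment
envfile=$(mktemp)
echo 'FIRST_VAR="1"' > "$envfile"

# -a appends; without it, tee would truncate the file first
echo 'MY_VAR="True"' | tee -a "$envfile" > /dev/null

cat "$envfile"
# FIRST_VAR="1"
# MY_VAR="True"
```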

Quickly Display Files with PowerShell: Understanding Cat and Get-Content

PowerShell offers powerful cmdlets for managing and displaying files, and among them, Cat (an alias for Get-Content) and Get-Content are commonly used to read and display file contents. Though they may seem similar, understanding the relationship between Cat and Get-Content can help you use them more effectively in your scripts and commands.

Understanding Get-Content

Get-Content is a versatile cmdlet that reads the contents of a file and outputs each line as an individual string. It’s useful for working with files line by line, as it returns an array of strings where each element corresponds to a line in the file.

In PowerShell, Cat is an alias for the Get-Content cmdlet. This alias comes from Unix-like systems, where the cat command is used to concatenate and display file contents. In PowerShell, Cat serves the same purpose but is simply a shorthand for Get-Content.

Apart from cat, there are other aliases for the Get-Content command, which you can find by running the command below. As you can see, gc and type are also aliases of Get-Content, along with cat.

Get-Alias -Definition Get-Content

Displaying the Contents of a File with PowerShell Cat

The primary usage of the PowerShell cat alias is showing a file's contents on the screen. Running the cat command followed by the file name tells the command to output the file's contents for display. Run the command below to read the tmp.txt file and output its contents to the screen.

cat "C:\path\to\tmp.txt"

Showing Lines from the Top & Bottom

Reading the first few lines of the file may help identify whether the file is what you need. PowerShell cat allows you to display a specific line or lines from a file to have a quick look as shown below.

cat tmp.txt -TotalCount 6

To view the contents from the bottom, specify the -Tail parameter or its alias -Last. This approach is typical when troubleshooting with log files.

cat tmp.txt -Tail 5

Merging Contents Into a New File

Instead of simply showing the content on the screen, you can redirect the standard out of a command to a new file in PowerShell. Moreover, the PowerShell cat can read multiple files at once, which makes merging contents possible. Run the cat command to concatenate File1.txt and File2.txt as follows. The output redirect (>) sends the command output to the new file called catMerge.txt.

Method 1:

cat File1.txt,File2.txt > catMerge.txt

Method 2:

cat File1.txt,File2.txt | Out-File Merge1.txt

Appending the Contents of One File to Another

Another thing you can do with the PowerShell cat alias, just like the Linux cat command, is append the contents of one file to another instead of overwriting the file or creating a new one.

# PowerShell cat with Add-Content
cat File1.txt | Add-Content File2.txt

This command appends the contents of File1.txt to File2.txt.

# PowerShell cat with double redirection symbol (append)
cat File1.txt >> File2.txt

 

Delete File or Directory in Linux with Scheduled Azure DevOps Pipeline

In my working environment, we manage many Linux-based agent machines that build solutions into artifacts, and we needed to clean up the build artifacts regularly and automatically. So we wrote a bash script and scheduled it in a release pipeline. Based on that automation, I thought I would write a post explaining how to delete files or directories in Linux with a scheduled Azure DevOps pipeline.


STEP 1: Find the disk space usage

The df -h command is used to display information about disk space usage on a Unix-like system. When you run this command in a terminal, it will show the disk space usage in a human-readable format.

To be clearer: disk free, also known as `df`, is a powerful utility that provides valuable information on disk space utilization. The df command displays disk space usage for the mounted file systems. The -h flag makes the sizes human-readable, using units like KB, MB, and GB.

df -h

STEP 2: Get list of directories/files and assign to variable

Before we can remove a folder or directory, we must first know its name. Therefore, we first execute the "ls" command in the terminal to find a folder or directory, or to view all of the folders. In Linux and other Unix-based operating systems, the "ls" command lists files and folders.

As I am going to clean my agent folder, the path will be /agent/_work.
We assign the output of the command ls /agent/_work/ | grep [0-9] to the variable directorylist. This command lists the contents of the /agent/_work/ directory and filters the results to include only lines that contain numbers (the folders on my agent machine are created with numeric names).

directorylist=$(ls /agent/_work/ | grep [0-9])

 STEP 3: Loop the list of directories and delete

Next, we loop over the directory list one item at a time in a while loop, as shown in the script below. while read line is a loop that reads each line of the piped output (the standard way in bash to read input line by line). The -r option passed to the read command prevents backslash escapes from being interpreted.

  • tr ' ' '\n': one use of the tr command is find-and-replace; here it replaces spaces with newline characters.
  • The loop body (between do and done) is where you put your processing logic for each line.
echo $directorylist | tr ' ' '\n' | while read -r line
do
    # ... your logic to delete each directory ...
done

STEP 4: Remove the directory/file from the list

We can delete a file or directory in Linux using the rm command; the -rf flags remove directories recursively and forcefully, as shown below.

echo "removing folder $line"
rm -rf /agent/_work/$line

Full code: Delete File or Directory in Linux 

# Find the disk space usage
df -h
echo "Running a disk space clean up"
#Get list of directories/files and assign to variable
directorylist=$(ls /agent/_work/ | grep [0-9])
#Loop the list of directories and delete
echo $directorylist | tr ' ' '\n' | while read -r line
do
echo "removing folder $line"
rm -rf /agent/_work/$line
done
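A variant of the same cleanup can be written with find, under the assumption that the agent's work folders have purely numeric names; the sketch below runs against a temporary directory standing in for /agent/_work so it can be tried safely:

```shell
# Sketch: a temporary directory stands in for /agent/_work
workdir=$(mktemp -d)
mkdir -p "$workdir/1" "$workdir/2" "$workdir/_tool"   # sample layout

# Delete only the numeric build folders; special folders such as _tool survive
find "$workdir" -mindepth 1 -maxdepth 1 -type d -name '[0-9]*' -exec rm -rf {} +

ls "$workdir"   # only _tool remains
```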

How to run the delete-directory script above on a schedule in an Azure DevOps pipeline

  1. First, enable the "Scheduled release trigger" in the release pipeline as shown below. In the same pipeline, create a new stage with a Bash task containing the script above.
  2. Select the stage, click "Pre-deployment conditions", schedule when the pipeline should execute, and save. After this, the pipeline will run at the specified time and execute the cleanup task.

Different ways to List Environment Variables in Linux

An environment variable is a dynamic object that defines a location to store some value. We can change the behavior of the system and software using an environment variable. Environment variables are very important in computer programming. They help developers to write flexible programs.

There are different ways to list environment variables in Linux: we can use the env, printenv, declare, or set command to list all variables on the system. In this post, we'll explain how to use each of them.

You can also read: A Step-by-Step Guide to Set Environment Variables in Linux

Using printenv Command

The printenv command displays all or specified environment variables. To list all environment variables, simply type:

printenv

We can specify one or more variable names on the command line to print only those specific variables. Or, if we run the command without arguments, it will display all environment variables of the current shell.

For example, we can use the printenv command followed by HOME to display the value of the HOME environment variable:

printenv HOME
/root

In addition, we can specify multiple environment variables with the printenv command to display the values of all the specified environment variables:

Let’s display the values of the HOME and SHELL environment variables:

printenv HOME SHELL
/root
/bin/bash

Using env Command

The env command is similar to printenv but is primarily used to run a command in a modified environment. env is another shell command we can use to print a list of environment variables and their values. Similarly, we can use the env command to launch the correct interpreter in shell scripts.

We can run the env command without any arguments to display a list of all environment variables:

env
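Since env prints one KEY=value pair per line, its output can be filtered with grep to inspect a single variable; a small sketch:

```shell
# Filter env output down to a single variable (PATH here)
env | grep '^PATH='
```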

Using set Command

The set command lists all shell variables, including environment variables and shell functions, so its output is more comprehensive than that of printenv or env. Although set has other uses, running it without any options or arguments displays the names and values of all shell variables in the current shell:

set

Using export -p Command

The export -p command shows all environment variables that are exported to the current shell session:

export -p

Using the declare Command

declare is another built-in command used to declare shell variables and display their values. Running declare without any options prints a list of all shell variables on the system, while the declare -x form lists only exported (environment) variables along with some additional information, similar to export -p:

declare -x

Using the echo Command

echo can also be used to display the value of a shell variable in Linux. For example, let's run the echo command to display the value of the $HOSTNAME variable:

echo $HOSTNAME

Conclusion

There are multiple ways to list and manage environment variables in Linux, ranging from command-line utilities to graphical tools. Each method provides a different level of detail and flexibility, allowing users to choose the one that best fits their needs.


How to Drop SQL database using PowerShell

My Scenario:

As system admins/DevOps engineers, we got an urgent cost-optimization task that had to be completed in a very short time frame. One of the actions on the list was to clean up unused databases and backup files across all environments. We have 100+ databases in each environment, so manual cleanup would take too long and carries a real risk of deleting a database that is still in use. To avoid this, we decided to automate the cleanup across all environments. In this post we will discuss how to drop a SQL database using PowerShell.

If you are working with Azure SQL Databases and want to use Azure PowerShell (Az module), you can use the Get-AzSqlDatabase cmdlet to retrieve information about SQL databases in an Azure SQL Server. Here’s an example script to get the list of all SQL database names:

Step 1: Declare SQL and resource details

#Assign the variables
$resourcegroup = "rg-dgtl-strg-prd-we-01"
$dbserverName = "sqlsrvr-dgtl-prd-we"
$username = "sqlprd01"
$password = "Srdc4$wm2t1F"

Step 2: Connect to the database using Get-AzSqlDatabase cmdlet

The Get-AzSqlDatabase cmdlet is used in Azure PowerShell to retrieve information about SQL databases (as shown in the below snap shot) in an Azure SQL Server. It’s part of the Az module, which is the recommended module for managing Azure resources. Below is a brief explanation of how to use the Get-AzSqlDatabase cmdlet:

SYNTAX:

Get-AzSqlDatabase
[[-DatabaseName] <String>]
[-ExpandKeyList]
[-KeysFilter <String>]
[-ServerName] <String>
[-ResourceGroupName] <String>
[-DefaultProfile <IAzureContextContainer>]
[-WhatIf]
[-Confirm]
[<CommonParameters>]

#Get the all the database for specific SQL server using -ServerName parameter
$SQLdbs = Get-AzSqlDatabase -ServerName $dbserverName -ResourceGroupName $resourcegroup

Step 3: Loop through the database details using foreach

In step 2, we got the details of all databases inside the sqlsrvr-dgtl-prd-we SQL server; as I said above, there are 100+ databases in the server, so the loop below processes them one by one. Here I extract only the database name, using the DatabaseName property.

#Loop the list of databases and check them one by one
foreach ($SQLdb in $SQLdbs){
    $dbName = $SQLdb.DatabaseName.ToString()
    if ($dbName -like "*-active*") {   # example check; match your own active-db naming
        # active database: keep it
    }
    else {
        # logic to delete the non-active databases goes here
    }
}

Step 4: Remove the database using Remove-AzSqlDatabase

The Remove-AzSqlDatabase cmdlet removes an Azure SQL database. This cmdlet is also supported by the SQL Server Stretch Database service on Azure.

SYNTAX :

Remove-AzSqlDatabase
[-DatabaseName] <String>
[-Force]
[-ServerName] <String>
[-ResourceGroupName] <String>
[-DefaultProfile <IAzureContextContainer>]
[-WhatIf]
[-Confirm]
[<CommonParameters>]

#Remove the database based on -DatabaseName parameter
Remove-AzSqlDatabase -ResourceGroupName $resourcegroup -ServerName $dbserverName -DatabaseName $dbName
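Putting the steps together, a minimal sketch might look like the following. The keep-list contents are assumptions for illustration, and -WhatIf (supported by Remove-AzSqlDatabase, as shown in the syntax above) previews the deletions before you commit to them:

```powershell
# Hypothetical list of databases that must be kept
$activeDbs = @("db-orders-prd", "db-users-prd")

$SQLdbs = Get-AzSqlDatabase -ServerName $dbserverName -ResourceGroupName $resourcegroup

foreach ($SQLdb in $SQLdbs) {
    $dbName = $SQLdb.DatabaseName
    if ($dbName -eq "master" -or $activeDbs -contains $dbName) {
        Write-Output "Keeping $dbName"
    }
    else {
        # -WhatIf shows what would be removed; remove it once the output is verified
        Remove-AzSqlDatabase -ResourceGroupName $resourcegroup -ServerName $dbserverName -DatabaseName $dbName -WhatIf
    }
}
```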

Points to remember:

I am running the script above from my jump/AVD machine; if required, use the -DefaultProfile parameter of Get-AzSqlDatabase / Remove-AzSqlDatabase to authenticate to the SQL server. The -DefaultProfile parameter carries the credentials, tenant, and subscription used for communication with Azure.

 

Pull and Push Docker Image to Azure Container Registry using Azure Devops pipeline

When you want to develop and deploy a container application in Azure, the first main step is to build the images and push them into your own private registry (e.g., Azure Container Registry). In this post, I will explain how to pull and push a Docker image to Azure Container Registry using an Azure DevOps pipeline.

If your solution uses a base image from a public repo, it is DevOps best practice to pull the trusted public image, push it to ACR, and then use that copy in your custom solution build.

What is Azure Container Registry (ACR)?

Azure Container Registry is similar to hub.docker.com but is provided by the Azure cloud. An Azure Container Registry can be private and used by only one team or by users who have access; users with access can push and pull images.

It provides geo-replication so that images pushed in one datacenter in one region gets replicated in all the connected configured datacenters and gets deployed simultaneously to Kubernetes clusters in respective locations.

Pull and push Docker Image

The purpose of this article is to provide steps to guide how to pull the image from public repository and provide commands to push and pull images from registry using the Azure DevOps pipeline.

There can be two options when you want to push the container images into ACR.

Option 1: Import the pre-existing Docker image from the docker hub (docker.io)/public registry and deploy it to AKS.

Option 2: Create a new custom image based on our solution (we can use push and pull other public registry and use in our solution as base image to build our solution), push it to ACR, and then deploy it to AKS.

Note: whether you use a Microsoft-hosted agent or your own agent, consider which type of image you are pulling and pushing: a Windows-based image must be pushed and pulled from a Windows agent, and a Linux-based image from a Linux agent. In my case, I am pulling a Linux-based image from registry.k8s.io into my ACR; later we will reference the same image during the nginx ingress installation in my AKS.

Push Image to Azure Container Registry

Step 1 : Login to Azure Container Registry with Non-Interactive mode

Syntax: docker login --username demo --password example

- bash: |
    docker login crdgtlshared02.azurecr.io -u crdgtlshared010 -p rtvGwd6X2YJeeKhernclok=UHRceH7rls

Step 2 : Pull the image and tag the image with registry prefix

In my case, I need to pull the image from public repository (registry.k8s.io) and from my ACR i need to refer this image during the ingress installation in AKS cluster. To be able to push Docker images to Azure Container Registry, they need to be tagged with the login Server name of the Registry. These tags are used for routing purposes when we push these Docker images to Azure. In simple words, Docker tags convey useful information about a specific image version/variant

Syntax:  docker pull [OPTIONS] NAME[:TAG|@DIGEST]
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

- bash: |
    docker pull registry.k8s.io/ingress-nginx/controller:v1.3.0
    docker tag registry.k8s.io/ingress-nginx/controller:v1.3.0 crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0
  displayName: 'push nginx base image'
  enabled: false

Step 3: Push the image with the registry name prefix

Now that the image is tagged (in step 2), we can use the docker push command to push this image to Azure Container Registry:

Syntax:  docker push [OPTIONS] NAME[:TAG|@DIGEST]

- bash: |
    docker push crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0
  displayName: 'push nginx base image'
  enabled: false

This operation might take a few minutes, and you will see the image being uploaded to Azure Container Registry in the console.

Note: to pull images directly in docker-compose or Kubernetes YAML files, use the appropriate logins. Usually in these scenarios, docker login is the first step before docker-compose up is called, so that images are pulled successfully.

In the example above, I used a separate Bash task for each action to explain it step by step, but we can do it all together in a single Bash task in the pipeline, as shown below.

Full YAML code for Pipeline

- bash: |
    docker login crdgtlshared02.azurecr.io -u crdgtlshared02 -p gbHdlo6X2YJeeKhaxjnlok=UHRceT9NR
    docker pull registry.k8s.io/ingress-nginx/controller:v1.3.0
    docker tag registry.k8s.io/ingress-nginx/controller:v1.3.0 crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0
    docker push crdgtlshared02.azurecr.io/nginx-baseimage/controller:v1.3.0
  displayName: 'push nginx base image'
  enabled: false

Getting Redirected (301/302) URI’s in PowerShell

In my working environment, we manage more than 500 sites. Sometimes users will redirect a site to another location or put up a temporary maintenance page (redirecting to another page), and we are not aware of their changes. So we decided to validate all the sites at regular intervals to identify those changes, which is why I wrote this post on getting redirected (301/302) URIs in PowerShell.

Checking each URL manually on a regular basis is very difficult; it takes a lot of manual work and may lead to human error, so we decided to automate this task instead. The result needs to be populated automatically in Excel so we can easily share it with our managers.

One way to get a redirected URL through PowerShell is the WebRequest class. In this post, we're going to build a PowerShell function that queries a specific URL to find out whether that URL is being redirected to some other URL, using Invoke-WebRequest.

STEP #1: First grab the response head from an Invoke-Webrequest:

$request = Invoke-WebRequest -Method Head -Uri $Url

STEP #2: Next, we need to get the Response URL using AbsoluteUri

$redirectedUri = $request.BaseResponse.ResponseUri.AbsoluteUri

Full Code : Getting Redirected (301/302) URI’s in PowerShell 

This is a quick and easy way to pull the redirected URIs for a given list of URIs. Putting it all together, we get the function below. In the final code, I have written the result out to Excel.

 ###########################################################################################
#Project: Getting Redirected (301/302) URI’s in Powershell using Invoke-WebRequest Method
#Developer: Thiyagu S (dotnet-helpers.com)
#Tools : PowerShell 5.1.15063.1155
#E-Mail: mail2thiyaguji@gmail.com 
############################################################################################

function Get-RedirectedUrl
 {
     [CmdletBinding()]
     param
     (
         [Parameter(Mandatory)]
         [ValidateNotNullOrEmpty()]
         [string]$GetfilePath
     )

     $FormatGenerater = "<HTML><BODY background-color:grey><font color =""black"">
                         <H2>Finding the Redirected URLs</H2>
                         </font><Table border=1 cellpadding=0 cellspacing=0>
                         <TR bgcolor=gray align=center><TD><B>Source URL</B></TD>
                         <TD><B>RedirectedURL</B></TD></TR>"

     $fileContent = Get-Content $GetfilePath
     foreach ($singleURL in $fileContent)
     {
        try
        {
            # HEAD keeps the request light; we only need the response metadata
            $redirectionrequest = Invoke-WebRequest -Method Head $singleURL -ErrorAction Ignore
            if ($null -ne $redirectionrequest.BaseResponse.ResponseUri)
            {
                $redirectedURL = $redirectionrequest.BaseResponse.ResponseUri.AbsoluteUri
                $FormatGenerater += "<TR bgcolor=#CCFFE5><TD>$($singleURL)</TD><TD>$($redirectedURL)</TD></TR>"
            }
        }
        catch { }
     }

    $FormatGenerater += "</Table></BODY></HTML>"
    $FormatGenerater | Out-File C:\dotnet-helpers\RedirectedURLs.xls
 }

 Get-RedirectedUrl "C:\dotnet-helpers\URLsList.txt"

OUTPUT:
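For a quick cross-check outside PowerShell, remember that the redirect target is simply the Location header of the 301/302 response. A minimal shell sketch follows; the helper name and the sample headers are made up for illustration (in practice the headers would come from a tool such as curl -sI):

```shell
# Hypothetical helper: pull the redirect target out of raw HTTP response headers.
get_redirect_target() {
  printf '%s\n' "$1" | awk 'tolower($1) == "location:" { print $2; exit }'
}

# Sample 301 response headers, hard-coded for illustration.
headers='HTTP/1.1 301 Moved Permanently
Location: https://dotnet-helpers.com/
Content-Length: 0'

get_redirect_target "$headers"
# prints https://dotnet-helpers.com/
```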

A Step-by-Step Guide to Set Environment Variables in Linux

What Are Environment Variables in Linux?

Environment Variables in Linux are dynamic values that the operating system and various applications use to determine information about the user environment. They are essentially variables that can influence the behavior and configuration of processes and programs on a Linux system. These variables are used to pass configuration information to programs and scripts, allowing for flexible and dynamic system management.

These variables, often referred to as global variables, play a crucial role in tailoring the system’s functionality and managing the startup behavior of various applications across the system. On the other hand, local variables are restricted and accessible from within the shell in which they’re created and initialized.

Linux environment variables have a key-value pair structure, separated by an equal (=) sign. Note that the names of the variables are case-sensitive and, by convention, are written in uppercase for easy identification.

Key Features of Environment Variables

  • Dynamic Values: They can change from session to session and even during the execution of programs.
  • System-Wide or User-Specific: Some variables are set globally and affect all users and processes, while others are specific to individual users.
  • Inheritance: Environment variables can be inherited by child processes from the parent process, making them useful for configuring complex applications.
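The inheritance point can be seen directly in a shell: only exported variables reach a child process. A small sketch:

```shell
# Set one plain shell variable and one exported environment variable.
LOCAL_ONLY="parent only"
export SHARED="visible to children"

# `sh -c` starts a child process; only the exported variable is inherited.
sh -c 'echo "LOCAL_ONLY=[$LOCAL_ONLY] SHARED=[$SHARED]"'
# prints LOCAL_ONLY=[] SHARED=[visible to children]
```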

Common Environment Variables

Here are some commonly used environment variables in Linux:

  • HOME: Indicates the current user’s home directory.
  • PATH: Specifies the directories where the system looks for executable files.
  • USER: Contains the name of the current user.
  • SHELL: Defines the path to the current user’s shell.
  • LANG: Sets the system language and locale settings.
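These can be inspected with echo or printenv; the actual values will differ from machine to machine:

```shell
# Print a few common environment variables; actual values vary by system.
echo "HOME=$HOME"
echo "SHELL=${SHELL:-unknown}"

# PATH is colon-separated; show the first three entries on separate lines.
printenv PATH | tr ':' '\n' | head -n 3
```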

Setting and Using Environment Variables

Temporary Environment Variables in Linux

You can set environment variables temporarily in a terminal session using the export command. The command below sets an environment variable named MY_VAR to true for the current session only; it disappears when the session ends.

export MY_VAR=true
echo $MY_VAR

Example 1: Setting Single Environment Variable

For example, the following command will set the Java home environment directory.

export JAVA_HOME=/usr/bin/java

Note that you won’t get any response about the success or failure of the command. As a result, if you want to verify that the variable has been properly set, use the echo command.

echo $JAVA_HOME

The echo command will display the value if the variable has been appropriately set. If the variable has no set value, you might not see anything on the screen.

OUTPUT :
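A related shell idiom, useful when a variable such as JAVA_HOME may or may not be set, is the ${VAR:-fallback} expansion:

```shell
export JAVA_HOME=/usr/bin/java
# ${VAR:-fallback} expands to the fallback when VAR is unset or empty.
echo "${JAVA_HOME:-not set}"
# prints /usr/bin/java

unset JAVA_HOME
echo "${JAVA_HOME:-not set}"
# prints not set
```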

Example 2: Setting Multiple Environment Variables

You can set multiple variables in a single export command by separating the assignments with spaces, like this:

<NAME1>=<VALUE1> <NAME2>=<VALUE2> <NAME3>=<VALUE3>

export VAR1="value1" VAR2="value2" VAR3="value3"

Example 3: Setting Multiple value for single Environment Variable

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"

The PATH variable contains a list of directories where the system looks for executable files. Multiple directories are separated by colons.
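In practice you rarely overwrite PATH wholesale; the usual pattern is to append to the existing value. A sketch (the directory /opt/mytools/bin below is hypothetical):

```shell
# Append a hypothetical directory to PATH without losing the existing entries.
export PATH="$PATH:/opt/mytools/bin"

# Confirm the append by printing the last entry.
echo "$PATH" | tr ':' '\n' | tail -n 1
# prints /opt/mytools/bin
```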

Permanent Environment Variables in Linux

To make MY_VAR available system-wide, append the line MY_VAR="true" to the /etc/environment file, which is a system-wide configuration file for environment variables.

By adding this line, you make the MY_VAR variable available to all users and sessions on the system.

The use of sudo ensures that the command has the necessary permissions to modify /etc/environment.

Example 1: Setting Single Environment Variable for all USERS

export MY_VAR=true
echo 'MY_VAR="true"' | sudo tee /etc/environment -a

Breakdown of the Command

echo 'MY_VAR="true"': This command outputs the string MY_VAR="true". Essentially, echo is used to display a line of text.

| (Pipe): The pipe symbol | takes the output from the echo command and passes it as input to the next command. In this case, it passes the string MY_VAR="true" to sudo tee.

sudo tee /etc/environment -a: sudo: This command is used to run commands with superuser (root) privileges. Since modifying /etc/environment requires administrative rights, sudo is necessary.

tee: The tee command reads from the standard input (which is the output of the echo command in this case) and writes it to both the standard output (displaying it on the terminal) and a file.

/etc/environment: This is the file where tee will write the output. The /etc/environment file is a system-wide configuration file for environment variables.

-a: The -a (append) option tells tee to append the input to the file rather than overwriting its contents. This ensures that any existing settings in /etc/environment are preserved and the new line is simply added to the end of the file.

This command is used to add a new environment variable (MY_VAR) to the system-wide environment variables file (/etc/environment). By appending it, you ensure that the new variable is available to all users and sessions across the entire system.
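Before touching /etc/environment for real, the append behaviour of tee -a can be tried safely against a throwaway file:

```shell
# Simulate the append against a temporary file instead of /etc/environment.
tmpfile=$(mktemp)
echo 'EXISTING_VAR="1"' > "$tmpfile"

# `tee -a` appends: the existing line is preserved and the new one added.
echo 'MY_VAR="true"' | tee -a "$tmpfile" > /dev/null

cat "$tmpfile"
# prints:
# EXISTING_VAR="1"
# MY_VAR="true"
```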

Example 2: Setting Multiple value for single Environment Variable for all USERS

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export MY_PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin"
echo 'MY_PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin"' | sudo tee /etc/environment -a

OUTPUT :