All posts by Thiyagu

Linux Secrets: How to List Environment Variables (Beginners to Pros)

An environment variable is a named, dynamic value stored by the system that can affect how running processes behave. Using environment variables, we can change the behavior of the system and of individual programs. They are important in computer programming because they help developers write flexible programs.

There are several ways to list environment variables in Linux: the env, printenv, declare, and set commands can all display the variables defined in the current shell. In this post, we'll walk through each of them.

You can also read our step-by-step guide to setting environment variables in Linux.

Using printenv Command

The printenv command displays all or specified environment variables. To list all environment variables, simply type:

printenv

We can specify one or more variable names on the command line to print only those specific variables. Or, if we run the command without arguments, it will display all environment variables of the current shell.

For example, we can use the printenv command followed by HOME to display the value of the HOME environment variable:

printenv HOME
/root

In addition, we can pass multiple variable names to printenv to display all of their values at once. Let's display the values of the HOME and SHELL environment variables:

printenv HOME SHELL
/root
/bin/bash

Using env Command

The env command can also print a list of environment variables and their values, but it is primarily used to run a command in a modified environment. It is likewise used in the shebang lines of shell scripts (e.g. #!/usr/bin/env bash) to launch the correct interpreter.

We can run the env command without any arguments to display a list of all environment variables:

env
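
To see the "modified environment" part in action, env can add or remove variables for a single child command without changing the current shell (the GREETING variable below is just an illustration):

```shell
# Run a child process with an extra variable; the current shell is untouched
env GREETING="hello" sh -c 'echo "$GREETING"'
# prints: hello

# Remove a variable for one command with -u
env -u GREETING sh -c 'echo "${GREETING:-unset}"'
# prints: unset
```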

Using set Command

The set command lists all shell variables, including environment variables and shell functions, so its output is more comprehensive than that of printenv or env. Although set has other uses, simply running it without any options or arguments displays the names and values of all shell variables in the current shell:

set
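
A quick way to see the difference: a shell variable that has not been exported appears in the output of set but not in that of printenv (MYLOCAL is an illustrative name):

```shell
MYLOCAL="42"              # a shell variable, not exported
set | grep '^MYLOCAL='    # listed by set
printenv MYLOCAL          # prints nothing: the variable is not in the environment
```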

Using export -p Command

The export -p command shows all environment variables that are exported to the current shell session:

export -p
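
For example, a newly created shell variable only shows up in export -p after it has been exported (MYVAR is an illustrative name):

```shell
MYVAR="demo"            # shell-local: not yet part of the environment
export MYVAR            # mark it for export to child processes
export -p | grep MYVAR  # now listed among the exported variables
```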

Using the declare Command

declare is another shell built-in used to declare variables and display their values. Run without options, it prints all shell variables in the current shell; run as declare -x, it lists only the exported (environment) variables along with some additional information, similar to export -p:

declare -x

Using the echo Command

The echo command can also display the value of a shell variable in Linux. For example, let's run echo to display the value of the $HOSTNAME variable:

echo $HOSTNAME
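
Because echo simply prints whatever the shell expands, it combines nicely with parameter expansion to supply a fallback when a variable is unset (the variable name here is illustrative):

```shell
# Print a fallback if the variable is unset or empty
echo "${MY_UNSET_VAR:-fallback value}"
# prints: fallback value
```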

Conclusion

There are multiple ways to list environment variables in Linux. Each command provides a different level of detail and flexibility, allowing you to choose the one that best fits your needs.


 

Dockerfile Mastery: Step-by-Step Guide to Building & Deploying Node.js Containers

Introduction

Docker has revolutionized how developers build, ship, and run applications by simplifying dependency management and environment consistency. At the core of Docker’s workflow is the Dockerfile, a script that defines how to assemble a container image. This article walks you through creating a custom Docker image from a local Dockerfile, deploying it as a container, and understanding real-world use cases. Whether you’re new to Docker or refining your skills, this guide offers practical steps to streamline your workflow.

Why Use a Dockerfile?

A Dockerfile automates the creation of Docker images, ensuring repeatability across environments. Instead of manually configuring containers, you define instructions (e.g., installing dependencies, copying files) in the Dockerfile. This approach eliminates “it works on my machine” issues and speeds up deployment.

Create a Docker Image for simple Node.js App

Step 1: Create a Dockerfile

Let’s build a Docker image for a simple Node.js server.

1. Project Setup

Create a directory for your project:

mkdir node-docker-app && cd node-docker-app

2. Add two files:

server.js (a basic Express server): This is the main application file where the Express server is set up. It defines the routes and how the server should respond to requests (e.g., GET / sends “Hello from Docker example from dotnet-helpers !”). It is essential for the application’s functionality.

const express = require('express');  
const app = express();  
app.get('/', (req, res) => res.send('Hello from Docker example from dotnet-helpers !'));  
app.listen(3000, () => console.log('Server running on port 3000'));

package.json (dependencies file): This file is needed to manage the application’s dependencies (in this case, express). It ensures that Docker can install the correct version of the dependencies when the application is built, ensuring the server runs without issues.

{  
  "name": "node-docker-app",  
  "dependencies": {  
    "express": "^4.18.2"  
  }  
}

3. Write the Dockerfile

Create a file named Dockerfile (no extension) with these instructions:

# Use the official Node.js 18 image as a base  
FROM node:18-alpine  
# Set the working directory in the container  
WORKDIR /app  
# Copy package.json and install dependencies  
COPY package.json .  
RUN npm install  
# Copy the rest of the application code  
COPY . .  
# Expose port 3000 for the app  
EXPOSE 3000  
# Command to start the server  
CMD ["node", "server.js"]
  • FROM specifies the base image.
  • WORKDIR sets the container’s working directory.
  • COPY transfers local files to the container.
  • RUN executes commands during image build.
  • EXPOSE documents which port the app uses.
  • CMD defines the command to run the app.

 

Step 2: Build the Docker Image

Run this command in your project directory:

docker build -t node-app:latest .
  • -t tags the image (name:tag format).
  • The . at the end tells Docker to use the current directory as the build context.

Docker executes each instruction sequentially, as shown below, and caches each layer so that subsequent rebuilds are faster.

Step 3: Run the Container from the Image

The docker run command creates and starts a new container from a specified Docker image. It is one of the most fundamental and commonly used Docker commands for launching applications in an isolated environment, essentially bringing a container to life.

Start a container from your image:

#syntax
docker run -d -p 3000:3000 --name <container-name> <image-name>
docker run -d -p 3000:3000 --name my-node-app node-app:latest
  • -d runs the container in detached mode.
  • -p 3000:3000 maps the host’s port 3000 to the container’s port 3000.
  • --name assigns a name to the container.

Verify it's working with curl or in a browser, as shown below.

curl http://localhost:3000
# Output: Hello from Docker example from dotnet-helpers !

Output : Run in the console using curl

Run in the Browser

Step 4: Manage the Container

Stop the container: gracefully stops the running container named my-node-app. Use this when you want to shut down a running container without deleting it, for example to pause an app or to troubleshoot.

docker stop my-node-app

Remove the container: deletes the container (but not the image). Use this after stopping a container you no longer need, for example when cleaning up old containers.

docker rm my-node-app

Delete the image: deletes the Docker image named node-app with the latest tag. Use this to free disk space or remove outdated images. Note that you cannot remove an image while running or stopped containers are still using it; stop and remove those containers first:

docker rmi node-app:latest

If you build a new Docker image and want a running container to use it, Docker doesn't allow you to swap the image directly. Instead, stop and remove the running container, then create a new one from the updated image with the same docker run command.

Optimization Tips

  1. Use .dockerignore
    Prevent unnecessary files (e.g., node_modules, local logs) from being copied into the image.
  2. Leverage Multi-Stage Builds
    Reduce image size by discarding build dependencies in the final image.
  3. Choose Smaller Base Images
    Use -alpine or -slim variants to minimize bloat.
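
As an example of the first tip, a minimal .dockerignore for this Node.js project might look like this (the entries are typical suggestions, not requirements):

```
node_modules
npm-debug.log
*.log
.git
```

With this file in place, docker build never copies node_modules into the image; dependencies are installed by the RUN npm install step instead, which keeps the build context small and the image reproducible.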

Conclusion

Creating Docker images from a Dockerfile standardizes development and deployment workflows, ensuring consistency across teams and environments. By following the steps above, you've packaged a Node.js app into a portable image and run it as a container. The same method applies to any language or framework: Python, Java, or even legacy apps.

Docker's power lies in its simplicity. Once you master Dockerfiles, explore advanced features like Docker Compose for multi-container apps or Kubernetes for orchestration. Start small, automate the basics, and scale confidently.

Exception Handling 101: Stop Script Failures in Their Tracks with Custom Try‑Catch Tricks

A terminating error in a PowerShell script will stop it from completing execution successfully. Error handling with try-catch blocks allows you to manage and respond to these terminating errors. In this post, we will cover the basics of try/catch blocks and how to produce custom error messages in PowerShell.

Handling errors effectively in scripts can save a lot of troubleshooting time and provide better user experiences. In PowerShell, we have robust options to handle exceptions using try, catch, and finally blocks. Let’s dive into how you can use try-catch to gracefully handle errors and add custom error messages for better feedback.

Why Use Exception Handling in PowerShell?

Scripts can fail for many reasons: missing files, invalid input, or network issues, to name a few. With exception handling, you can capture these issues, inform users in a friendly way, and potentially recover from errors without crashing your script. Using try-catch, you can:

  • Catch specific errors.
  • Display user-friendly messages.
  • Log errors for debugging.

Syntax overview of Try/Catch

As in other programming languages, the try-catch syntax is simple: two sections enclosed in curly brackets, the first being the try block and the second the catch block.

try {
    # Functionality within the try block
}
catch {
    # Action to take on errors
}

The main purpose of the try-catch block is to let us reshape the error output and make it friendlier for the user.

Example 1:

Executing the command below prints the full red error text as output. It occupies screen space, and the actual problem may not be immediately visible to the user. You can use a try-catch block to reshape the error output into something friendlier.

Without a try-catch block

Get-Content -Path "C:\dotnet-helpers\BLOG\TestFiled.txt"

With a try-catch block

In the script below, we added the ErrorAction parameter with a value of Stop to the command. Not all errors are considered terminating, so sometimes we need this bit of code to make the error properly terminate into the catch block.

try {
    Get-Content -Path "C:\dotnet-helpers\BLOG\TestFile.txt" -ErrorAction Stop
}
catch {
    Write-Warning -Message "Can't read the file, seems there is an issue"
}

Example 2:

Using the $Error Variable

In Example 1, we displayed our own custom message. Instead, you can display the specific error message that occurred rather than the entire red exception text. When an error occurs in the try block, it is saved to the automatic variable named $Error. The $Error variable contains an array of recent errors, and you can reference the most recent error in the array at index 0.

try {
    Get-Content -Path "C:\dotnet-helpers\BLOG\TestFiled.txt" -ErrorAction Stop
}
catch {
    Write-Warning -Message "Can't read the file, seems there is an issue"
    Write-Warning $Error[0]
}

Example 3:

Using Exception Messages

You can also use multiple catch blocks to handle different types of errors. In this example, we handle two error types and display a different custom message for each: the first catch handles the case where the path does not exist, and the second handles errors related to a drive that is not found.

Using typed catch blocks gives additional power when handling errors in a script, since we can take different actions based on the error type. A catch block is not limited to displaying error messages; it can also contain logic that resolves the error and lets the rest of the script continue.

In this example, the drive referenced in the path (G:\dotnet-helpers\BLOG\TestFiled.txt) does not exist on the machine running the script, so the error was caught by the [System.Management.Automation.DriveNotFoundException] catch block.

try {
    Get-Content -Path "G:\dotnet-helpers\BLOG\TestFiled.txt" -ErrorAction Stop
}
# Runs if the file's directory is not found
catch [System.IO.DirectoryNotFoundException] {
    Write-Warning -Message "Can't read the file, seems there is an issue"
    Write-Warning $Error[0]
}
# Runs if the specified drive is not found
catch [System.Management.Automation.DriveNotFoundException] {
    Write-Warning -Message "Custom Message: Specified drive is not found"
    Write-Warning $Error[0]
}
# Fallback for unhandled exceptions: runs if the error matches no other catch block
catch {
    Write-Warning -Message "Oops, an unexpected error occurred"
    # Print the full exception type name of the last error
    Write-Host $Error[0].Exception.GetType().FullName
}

OUTPUT

 

Step-by-Step Guide: Creating Simple Docker Image from a Dockerfile

Docker has revolutionized how developers build, ship, and run applications by simplifying dependency management and environment consistency. At the core of Docker's workflow is the Dockerfile, a script that defines how to assemble a container image. This article walks you through creating a Docker image from a local Dockerfile, deploying it as a container, and understanding real-world use cases. Whether you're new to Docker or refining your skills, this guide offers practical steps to streamline your workflow.

Why use a Dockerfile?

A Dockerfile is a simple text file containing the series of commands and instructions used to build a Docker image. It's the blueprint for your image, automating the creation process so that your app's environment can be replicated anywhere and ensuring repeatability across environments. Instead of manually configuring containers, you define instructions (e.g., installing dependencies, copying files) in the Dockerfile. This approach eliminates "it works on my machine" issues and speeds up deployment.

Dockerfile commands have a wide range of purposes. Use them to:

  • Install application dependencies.
  • Specify the container environment.
  • Set up application directories.
  • Define runtime configuration.
  • Provide image metadata.

Prerequisites

  1. Command-line access.
  2. Administrative privileges on the system.
  3. Docker installed.

Create Docker Image from Dockerfile

Follow the steps below to create a Dockerfile, build the image, and test it with Docker.

Step 1: Create Project Directory

Creating a Docker image with Dockerfile requires setting up a project directory. The directory contains the Dockerfile and stores all other files involved in building the image.

To keep things simple, create the required Dockerfile inside the project directory, as shown below.

Create the directory by opening a terminal and using the mkdir command (for this example, I used PowerShell):

mkdir dockerapp

Here, dockerapp is the project name; replace it with the name of your own project.

Step 2: Create Dockerfile

The contents of a Dockerfile depend on the image that it describes. The section below explains how to create a Dockerfile and provides a simple example to illustrate the procedure:

1. Navigate to the project directory:

cd <directory>

2. Create a Dockerfile using a text editor of your choice. Here, I created it with a PowerShell cmdlet, as shown below; alternatively, you can create the file manually inside your directory:

New-Item -Path . -Name "Dockerfile" -ItemType "File"

3. Add the instructions for image building. For example, the code below creates a simple Docker image that uses Ubuntu as a base, runs the apt command to update the repositories, and executes an echo command that prints the words Hello World in the output. Place these instructions inside the file created in the previous step:

FROM ubuntu
MAINTAINER test-user
RUN apt update
CMD ["echo", "Hello World"]

Once you finish adding commands to the Dockerfile, save the file and exit.

Note: when you run a container from the image, it prints "Hello World" as output (refer to the last image of this article).

The list below summarizes common Dockerfile instructions:

  • FROM <image>: Specifies an existing image as a base.
  • MAINTAINER <name>: Defines the image maintainer (deprecated; prefer LABEL maintainer=...).
  • RUN <command>: Executes commands at build time.
  • CMD <command> <argument>: Sets the default executable.
  • ENTRYPOINT <command>: Defines a mandatory command.
  • LABEL <key>=<value>: Adds metadata to the image.
  • ENV <key>=<value>: Sets environment variables.
  • ARG <key>[=<default-value>]: Defines build-time variables.
  • COPY <source> <destination>: Copies files into the image.
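
As a sketch of how several of these instructions fit together in one Dockerfile (the values are illustrative, and the deprecated MAINTAINER is replaced by the preferred LABEL form):

```
FROM ubuntu
LABEL maintainer="test-user"   # preferred over the deprecated MAINTAINER
ARG APP_DIR=/opt/app           # build-time variable with a default value
ENV GREETING="Hello World"     # environment variable available at runtime
COPY . ${APP_DIR}              # copy the build context into the image
ENTRYPOINT ["echo"]            # fixed command
CMD ["Hello World"]            # default argument passed to the entrypoint
```

Running a container from this image prints Hello World; the CMD argument can be overridden on the docker run command line, while the ENTRYPOINT stays fixed.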

 

Step 3: Build Docker Image

Use the following procedure to create a Docker image using the Dockerfile created in the previous step.

1. Run the following command to build a docker image, replacing <image> with an image name and <path> with the path to Dockerfile:

docker build -t <image> <path>

The -t option allows the user to provide a name and (optionally) a tag for the new image. When executing the command from within the project directory, use (.) as the path:

docker build -t <image> .

Docker reads the Dockerfile's contents and executes the commands step by step, as shown in the snapshot below.

2. Verify that the new image appears in the list of local images by entering the following command, or check in the Docker dashboard as shown below.

docker images

The output shows the list of locally available images.

Step 4: Test Docker Image

To test the new image, use docker run to launch a new container based on it (the container runs attached, so you will see its output directly):

docker run --name <container> <image>

The example below uses the myfirstapp image to create a new container named myfirstappcontainer:

docker run --name myfirstappcontainer myfirstapp

Docker creates a container and successfully executes the command listed in the image’s Dockerfile.

Conclusion:

Understanding Docker’s core commands, such as docker run --name, is essential for efficiently managing containers. The example provided (docker run --name myfirstappcontainer myfirstapp) illustrates how to launch a container directly tied to a specific image, ensuring the execution of predefined Dockerfile instructions.

This approach streamlines development and deployment by enforcing container-image linkage at runtime. The article reinforces the importance of Docker in modern DevOps practices, offering actionable insights for creating images, handling containers, and integrating these tools into broader development workflows. By mastering these concepts, developers can enhance reproducibility, scalability, and automation in their projects.

How to Use Policy Fragments to Simplify Your Azure API Management Policies

In the evolving landscape of API-driven architectures, Azure API Management (APIM) has emerged as a critical tool for securing, scaling, and streamlining API interactions. At its core, APIM policies empower developers to manipulate requests and responses across the API lifecycle—enforcing security, transforming data, or throttling traffic. But as organizations scale, managing these policies across hundreds of APIs and operations becomes a labyrinth of duplicated code, hidden configurations, and maintenance nightmares.

Introduction: 

Enter policy fragments, a game-changing feature in APIM that re-imagines policy management by breaking monolithic configurations into modular, reusable components. Imagine defining a rate-limiting rule once and applying it seamlessly across all APIs, or centralizing authentication logic to ensure consistency while eliminating redundancy. Policy fragments not only streamline development but also turn maintenance into a single-step process: fix a fragment once, and every API referencing it inherits the update.

When you work with Azure API Management on a regular basis, you probably are familiar with policies. Policies allow you to perform actions or adjustments on the incoming request before it’s sent to the backend API, or adjust the response before returning to the caller.

Policies can be applied at various levels, so-called scopes, and each lower level can inherit the policy of the level above it.

  • Global level => executed for all APIs
  • Product level => executed for all APIs under a product
  • API level => executed for all operations under an API
  • Operation level => executed for this single operation

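
At each scope, the <base /> element controls where the policy inherited from the scope above runs relative to the current scope's own logic. Below is a sketch of an API-scope inbound policy (the set-header values are illustrative):

```xml
<policies>
    <inbound>
        <!-- Run the policy inherited from the product/global scope first -->
        <base />
        <!-- Then apply this API's own logic -->
        <set-header name="x-handled-at" exists-action="override">
            <value>api-scope</value>
        </set-header>
    </inbound>
</policies>
```

Moving <base /> below the API's own elements would reverse that order, so its placement is a deliberate design choice at every scope.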
What is Fragmentation in APIM?

Fragmentation in Azure API Management (APIM) refers to the ability to break down API policies into smaller, reusable components called policy fragments. These fragments can then be applied across multiple APIs or operations within an API Management instance. Each policy fragment typically consists of one or more policy elements that define a set of instructions to be executed within a specific stage of the API request-response lifecycle (inbound, outbound, backend, on-error).

Fragments promote reusability by allowing you to define common sets of policies that can be shared and applied across different APIs or operations.

Fragmentation also improves development efficiency by eliminating the need to duplicate policy configurations across APIs. Changes made to a policy fragment propagate to all APIs where it is applied, reducing maintenance effort.

Why is fragmentation required for policy creation?

The main problems with policies have always been maintenance and reuse. Policy code is quite hidden within the portal (especially with so many scope levels where it can reside), so it's hard to see where a policy is used. When a certain piece of policy is used in multiple places, it's even harder to keep track of where those places are and to keep them consistent. Bug fixing is difficult and cumbersome, as you need to find every place where the fix must be applied.

To overcome these problems, Microsoft introduced policy fragments in APIM.

Benefits of using policy fragments

There are several benefits to using policy fragments in your Azure API Management policies:

Reusability : Policy fragments allow you to create reusable code snippets that can be used in multiple policies. This promotes code reuse and reduces the amount of code you need to maintain.

Modularity : Policy fragments promote modularity by allowing you to create self-contained code snippets that can be added to a policy when needed. This makes it easier to read and understand policies, as well as test and debug them.

Maintainability : Policy fragments make policies more maintainable by allowing you to make changes to the code in a single place, rather than having to update the same code in multiple policies.

Readability : Making it easier for other developers to understand and review your code. With modular and reusable fragments, it’s easier to spot mistakes and ensure consistency across policies.

Use case:

Suppose your organization has three APIs: PaymentAPI, UserAPI, and InventoryAPI. All of them need a rate limit of 100 requests per minute per client to prevent abuse. Instead of duplicating the rate-limiting policy in each API's configuration, you'll create a reusable policy fragment and apply it centrally.

Let's walk through a simple example of APIM fragmentation for rate limiting, a common use case in API management. We'll create a policy fragment for rate limiting and apply it to multiple APIs within an API Management instance.

Understanding the <rate-limit> Policy in Azure API Management

The <rate-limit> policy in Azure API Management (APIM) is a critical tool for controlling API traffic by restricting the number of requests a client can make within a specified time window. This policy helps prevent abuse, manage resource consumption, and ensure fair usage across consumers. Below is a breakdown of its behavior and a practical walkthrough.

Step 1: Go to "Policy fragments"

Go to "Policy fragments" in the left menu panel and click the Create button. You will see a view like the snapshot below.

 

Step 2: Enter the properties for the new fragment

Enter the fragment's name, description, and policy, as shown below.

rate-limit: this policy in Azure API Management (APIM) enforces a request quota per client to prevent overuse or abuse of your API.

<!-- rate_limiting_fragment.xml -->
<rate-limit calls="100" renewal-period="60" />


Step 3: Click the Create button

You can see that our fragment has been created and has 0 references, meaning it has not yet been applied to any policy.

How to add the fragment in the policy

These fragments can be included in the policy of individual APIs or operations using the <include-fragment> element, allowing for modular and reusable policy configurations. If you click on the created fragment, you can see how to include it in a policy (refer to the last snapshot in this post).

<!-- Add the line below to a policy to include the fragment -->
<include-fragment fragment-id="fragment_rate_limit" />
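
Putting it together, an API's policy document that references the fragment might look like this (the section layout follows the standard APIM policy format; the fragment id is the one created above):

```xml
<policies>
    <inbound>
        <base />
        <!-- Pull in the reusable rate-limiting fragment -->
        <include-fragment fragment-id="fragment_rate_limit" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```

Because the limit lives in the fragment, changing calls or renewal-period in one place updates every API that includes it.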

 

Edit a fragment to update it:

You can click Policy editor to open the fragment, make your edits, and save.

Best Practices

  • Naming Conventions: Prefix fragments with fragment_ for easy identification.
  • Documentation: Add <description> tags in fragments to clarify usage.
  • Reference Tracking: Use the References column in the Policy fragments list to identify impacted APIs.

By leveraging policy fragments, APIM becomes a scalable, maintainable solution for enterprise-grade API governance.

Summary 

Azure API Management (APIM) policies enable customization of request/response behavior at four scopes: Global, Product, API, and Operation. However, maintaining and reusing policies across these scopes can be challenging due to hidden configurations and duplication. Policy Fragmentation addresses this by breaking policies into reusable XML components (fragments). These fragments centralize common logic (e.g., rate limiting, authentication) and are included via <include-fragment>, ensuring consistency, reducing redundancy, and simplifying updates. For example, a rate-limiting fragment can be applied across multiple APIs, and changes propagate automatically. Fragments improve maintainability, enforce standardization, and streamline debugging.

How to Copy Files from and to Kubernetes Pods: A Comprehensive Guide for Windows and Linux

Introduction

Azure Kubernetes Service (AKS) is a powerful platform for deploying and managing containerized applications. However, managing files within pods can sometimes be challenging. This article will guide you through copying files from a pod to your local machine and uploading files from your local machine to a pod, covering both Windows and Linux environments. We'll explore practical examples using the kubectl exec and kubectl cp commands, empowering you to manage your AKS deployments effectively.

Prerequisites

Before proceeding, ensure you have the following:

  • Access to an AKS cluster.
  • The kubectl command-line tool installed and configured to interact with your cluster.
  • Basic knowledge of Kubernetes and pod management.

Copying Files to and from Windows Pods

Step 1: Copying Files from a Windows Pod to Local

To copy a file from a Windows pod to your local system, use the kubectl cp command. For instance:

#Syntax :
kubectl cp <pod-name>:<container-path> <local-file-path>

Replace <pod-name> with the name of the pod (optionally prefixed with its namespace, as <namespace>/<pod-name>) and <container-path> with the file path inside the pod.
Replace <local-file-path> with the desired destination path on your local machine.

#Example: 
kubectl cp sitecore-bg/cdnewbooking-upgrade-8875f7d95-pg4xq:/inetpub/wwwroot/web.config web.config

In this example:

sitecore-bg/cdnewbooking-upgrade-8875f7d95-pg4xq is the pod name.
/inetpub/wwwroot/web.config is the file path inside the pod.
web.config is the destination file on your local system.

Step 2: Copying a File from Local to a Windows Pod

To copy a file from your local system to a Windows pod, use the kubectl cp command. For instance:

#Syntax : 
kubectl cp <local-file-path> <pod-name>:<container-path>

Replace <local-file-path> with the path to the file on your local machine.
Replace <pod-name> with the name of the pod.
Replace <container-path> with the desired destination path within the pod’s container.

#Example: 
kubectl cp web.config sitecore-bg/cdnewbooking-upgrade-8875f7d95-pg4xq:/inetpub/wwwroot/web.config

This command uploads the web.config file to the specified path inside the pod.

web.config is the source file on your local system.
sitecore-bg/cdnewbooking-upgrade-8875f7d95-pg4xq is the pod name.
/inetpub/wwwroot/web.config is the file path inside the pod.

Step 3: Entering and Verifying in a Windows Pod

To interact directly with a Windows pod, use the kubectl exec command to access a PowerShell shell inside it:

#Syntax : 
kubectl exec -n <namespace> <pod-name> -it -- powershell

Replace <namespace> with the actual namespace of your pod.
Replace <pod-name> with the unique name of the pod.

#Example : 
kubectl exec -n sitecore cd202404232-657b6c6d87-lj7xp -it -- powershell


Copy Files to and From Linux Pods

Step 1: Copying Files from a Linux Pod to local

Use the kubectl cp command to copy files from the pod to your local machine:

#Syntax :
kubectl cp <pod-name>:<container-path> <local-file-path>

To copy a directory from a Linux pod to your local system, use the following command:

#Example: 
kubectl cp solr/solr-leader-202312186-78b759dc5b-8pkrl:/var/solr/data ./solr_data

Here:

solr/solr-leader-202312186-78b759dc5b-8pkrl is the pod name.
/var/solr/data is the directory path inside the pod.
./solr_data is the destination directory on your local machine.

Step 2: Copying Files to a Linux Pod

Use the kubectl cp command to copy files from your local machine to the pod:

#Syntax :
kubectl cp <local-file-path> <pod-name>:<container-path>

To upload a directory from your local machine to a Linux pod, use:

#Example: 
kubectl cp ./solr_data solr/solr-leader-202312186-78b759dc5b-8pkrl:/var/solr/data

This command copies the local solr_data directory to /var/solr/data inside the specified Linux pod.

Step 3: Entering and Verifying in a Linux Pod

Utilize the kubectl exec command to access the bash shell within your Linux pod:

#Syntax : Replace <namespace> and <pod-name> as described above.
kubectl exec -it <pod-name> -n <namespace> -- bash

Running the example below starts a bash session inside the Linux pod, where you can verify the copied files:

#Example: 
kubectl exec -it solr-leader-202312186-78b759dc5b-8pkrl -n solr -- bash

FAQ

1. How do I find the name of a pod in my cluster?

Run kubectl get pods -n <namespace>. This will list all pods in the specified namespace along with their statuses.

2. Can I copy entire directories between my system and a pod?

Yes, the kubectl cp command supports copying directories. Use the directory path in the source and destination arguments.

3. Why do I get a “permission denied” error when copying files?

This typically happens due to insufficient permissions in the pod’s file system. Verify the access rights of the target directory or file.

4. What happens if I specify an incorrect file path inside the pod?

The kubectl cp command will fail and display an error stating that the specified path does not exist.

5. Can I use kubectl cp with compressed files?

Yes, you can use kubectl cp to transfer compressed files. However, you may need to extract or compress them manually before or after the transfer.

6. Is it possible to copy files between two pods directly?

No, kubectl cp does not support direct pod-to-pod file transfers. Copy the files to your local system first and then upload them to the target pod.

7. How do I check if a file was successfully copied?

After copying, use kubectl exec to enter the pod and verify the file’s existence in the target directory.
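A stronger check than merely confirming the file exists is to compare checksums. The sketch below is a hypothetical helper for Linux pods (it assumes md5sum is available both locally and inside the container):

```shell
# Hypothetical helper: compare a local file's MD5 with the copy inside a pod.
verify_copy() {
  local localfile="$1" ns="$2" pod="$3" remotefile="$4"
  local lsum rsum
  lsum=$(md5sum "$localfile" | awk '{print $1}')
  rsum=$(kubectl exec -n "$ns" "$pod" -- md5sum "$remotefile" | awk '{print $1}')
  if [ "$lsum" = "$rsum" ]; then
    echo "Checksums match: copy verified."
  else
    echo "Checksums differ!"
  fi
}
# Usage (illustrative): verify_copy ./schema.xml solr <pod-name> /var/solr/data/schema.xml
```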

8. Does kubectl cp work with all storage classes?

Yes, kubectl cp works regardless of the underlying storage class since it operates at the pod file system level.

Conclusion

Copying files to and from AKS pods is a vital skill for efficiently managing your Kubernetes environment. By following the examples provided for both Windows and Linux pods, you can streamline your workflow and tackle common tasks with ease. Bookmark this guide for future reference and elevate your Kubernetes management expertise.

With AKS becoming a preferred choice for enterprises in the United States, mastering these commands ensures you’re equipped to handle file operations effortlessly. Have questions or additional tips? Share them in the comments below!

 

How to Use scripts inside configMap in Windows-Based Kubernetes Deployments: A Step-by-Step Guide

If you’re running Windows-based applications on Kubernetes, using ConfigMaps to manage startup scripts ensures consistency, scalability, and easier configuration management. This guide walks you through creating a ConfigMap for a startup script and executing it in a Windows container.

What is a ConfigMap?

A ConfigMap is a Kubernetes object used to store non-confidential configuration data in key-value pairs. It decouples configuration artifacts from container images, making applications portable and easier to manage. ConfigMaps can store:

  • Configuration files (e.g., JSON, XML, YAML).
  • Environment variables.
  • Scripts (e.g., startup scripts).

Use Cases for ConfigMaps

ConfigMaps are ideal for:

  • Storing Application Configuration: Manage settings like database URLs, API endpoints, or feature flags.
  • Injecting Environment Variables: Pass runtime configurations to containers.
  • Managing Startup Scripts: Execute initialization logic during container startup.
  • Sharing Configuration Across Pods: Reuse the same configuration for multiple deployments.

When and Where to Use ConfigMaps

Use ConfigMaps when:

  • You need to externalize configuration from your container images.
  • You want to avoid hardcoding sensitive or environment-specific data.
  • You need to execute scripts during container initialization.
  • You want to centralize configuration for multiple pods or services.

ConfigMaps are particularly useful in Windows-based Kubernetes deployments, where PowerShell scripts are commonly used for setup and initialization.

Step 1: Create the Startup Script

Start by writing a PowerShell script (startup.ps1) for your Windows container. For example:

Write-Host "Starting application setup..."
# Your setup commands here
Start-Sleep -Seconds 5
Write-Host "Application is ready!"

Step 2: Create a ConfigMap from the Script

Next, create a ConfigMap from your startup script file. Use the kubectl create configmap command to generate a ConfigMap that includes your PowerShell script.

This command reads the startup.ps1 file and creates a ConfigMap named win-startup-script.

kubectl create configmap <configmap-name> --from-file=<path/to/script>
kubectl create configmap win-startup-script --from-file=startup.ps1
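If you prefer a declarative workflow, the same ConfigMap can be written as a manifest and applied with kubectl apply -f. The sketch below only writes the manifest to a local file (the script content mirrors the startup.ps1 from Step 1):

```shell
# Write a declarative equivalent of the ConfigMap to a local file.
cat > win-startup-script.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: win-startup-script
data:
  startup.ps1: |
    Write-Host "Starting application setup..."
    Start-Sleep -Seconds 5
    Write-Host "Application is ready!"
EOF

# Apply it with: kubectl apply -f win-startup-script.yaml
grep -c "startup.ps1" win-startup-script.yaml   # sanity check → 1
```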

To verify that the ConfigMap was created correctly with the script, run kubectl describe configmap win-startup-script. This retrieves detailed information about the ConfigMap and is particularly useful for debugging, verifying configurations, or understanding how a ConfigMap is structured.

kubectl describe configmap <configmap-name>
kubectl describe configmap win-startup-script

Step 3: Mount the ConfigMap in Your Deployment

Create a deployment manifest for your Windows container. In your YAML file, reference the ConfigMap so that the startup script is available to your container.

Modify your Kubernetes deployment YAML to:

  1. Mount the ConfigMap as a volume.
  2. Execute the script on container startup.

Example deployment.yaml:

apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: windows-app  
spec:  
  replicas: 1  
  selector:  
    matchLabels:  
      app: windows-app  
  template:  
    metadata:  
      labels:  
        app: windows-app  
    spec:  
      containers:  
      - name: windows-container  
        image: mcr.microsoft.com/windows/servercore:ltsc2019  
        command: ["powershell", "-Command", "C:\\scripts\\startup.ps1"]  
        volumeMounts:  
        - name: startup-script-volume  
          mountPath: "C:\\scripts"  
          readOnly: true  
      volumes:  
      - name: startup-script-volume  
        configMap:  
          name: win-startup-script  
      nodeSelector:  
        kubernetes.io/os: windows

How It Works

ConfigMap as a Volume:

  • The ConfigMap win-startup-script contains a key-value pair where the key is startup.ps1 and the value is the script content.
  • When the pod is created, the ConfigMap is mounted as a volume at C:\scripts inside the container.
  • The script startup.ps1 becomes accessible at C:\scripts\startup.ps1.

Script Execution:

  • The command field in the container specification runs the PowerShell script at startup.
  • The script executes the commands defined in startup.ps1, such as printing messages or performing setup tasks.
  • The use of C:\\scripts (Windows-style path) and the mcr.microsoft.com/windows/servercore:ltsc2019 image ensures compatibility with Windows containers.

Step 4: Apply the Deployment

Deploy your application:

kubectl apply -f deployment.yaml

Step 5: Verify Execution

Check if the script ran successfully:

kubectl logs <pod-name>
kubectl logs windows-app-75b64f6568-gzcmv

Output:

Starting application setup...
Application is ready!

Conclusion

Using ConfigMaps to manage startup scripts in Windows-based Kubernetes deployments is a powerful and efficient way to externalize configuration and ensure consistency across your applications. By following the steps outlined in this guide, you can:

  • Decouple Configuration from Code: Store scripts and configuration data outside your container images, making your applications more portable and easier to manage.
  • Automate Startup Tasks: Execute PowerShell scripts during container initialization to set up environments, configure settings, or perform other necessary tasks.
  • Leverage Kubernetes Features: Use ConfigMaps as volumes to inject scripts into your containers, ensuring they are available at the correct location.

This approach not only simplifies configuration management but also enhances scalability and maintainability. Whether you’re running a single instance or scaling across multiple pods, ConfigMaps provide a centralized and reusable solution for managing startup scripts.

Question & Answer

Below are six frequently asked questions with concise answers based on the article:

  1. What is a ConfigMap and why is it useful?
    A ConfigMap is a Kubernetes object that stores non-confidential configuration data as key-value pairs. It decouples configuration from container images, making applications more portable, easier to manage, and adaptable to different environments.

  2. What are the common use cases for ConfigMaps?
    ConfigMaps are ideal for storing application configurations (e.g., database URLs, API endpoints, feature flags), injecting environment variables at runtime, managing startup or initialization scripts, and sharing configurations across multiple pods.

  3. How do you create a startup script for a Windows container?
    You create a PowerShell script (e.g., startup.ps1) that contains the necessary initialization commands—such as logging messages or executing setup routines—which the Windows container will execute upon startup.

  4. How do you generate a ConfigMap from a startup script?
    Use the kubectl create configmap command with the --from-file option. For example:
    kubectl create configmap win-startup-script --from-file=startup.ps1
    This command reads the script file and stores its content in a ConfigMap.

  5. How do you mount a ConfigMap in your Windows container deployment?
    In your deployment YAML, define a volume that references the ConfigMap and mount it to a directory (e.g., C:\scripts) in your container. Then, specify the container’s command to execute the script from that location using PowerShell.

  6. How can you verify that the ConfigMap and startup script are working correctly?
    After deploying your application, you can verify the ConfigMap with:
    kubectl describe configmap win-startup-script
    And check the container logs with:
    kubectl logs <pod-name>
    The logs should show output confirming that the startup script executed successfully (e.g., “Starting application setup…” and “Application is ready!”).

How to check Website status on the Linux Server

Maintaining website uptime is essential for a positive user experience, as even short periods of downtime can frustrate users and result in lost business. Automating uptime checks on a Linux machine allows quick detection of issues, enabling faster response times. In this article, we’ll explore simple, effective ways to create a Website Uptime Checker Script in Linux using different commands like curl, wget, ping.

My team and I previously worked on Windows machines and were familiar with PowerShell; now that we work on Linux-based machines, we’ve started writing articles about the commands we use on a daily basis.

1. Checking Website Uptime with curl

One of the most straightforward ways to check if a website is up is by using curl. The following multi-line bash script sends an HTTP HEAD request to the specified website and reports its status:

#!/bin/bash
website="https://example.com"

# Check if the website is accessible
if curl --output /dev/null --silent --head --fail "$website"; then
  echo "Website is up."
else
  echo "Website is down."
fi

Alternatively, here’s a one-liner with curl:

curl -Is https://dotnet-helpers.com | head -n 1 | grep -q " 200" && echo "Website is up." || echo "Website is down."

Explanation:

  • curl -Is sends a HEAD request to retrieve only headers.
  • head -n 1 captures the status line of the HTTP response.
  • grep -q " 200" checks whether the status line reports a 200 code. (Matching on "200 OK" alone is fragile: over HTTP/2 the status line is "HTTP/2 200", with no "OK".)
    Based on this, the command outputs either “Website is up.” or “Website is down.”
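The one-liner keys off the status line alone; pulling the numeric code out with curl’s -w '%{http_code}' and classifying it in a small function is a bit more robust (treating any 2xx/3xx as up). This is a sketch, and the function name is illustrative:

```shell
# Map an HTTP status code to an up/down verdict; 2xx and 3xx count as "up".
classify_status() {
  local code="$1"
  if [ "$code" -ge 200 ] && [ "$code" -lt 400 ]; then
    echo "Website is up."
  else
    echo "Website is down."
  fi
}

# In a live check you would feed it curl's result:
#   classify_status "$(curl -o /dev/null -s -w '%{http_code}' https://dotnet-helpers.com)"
classify_status 301   # → Website is up.
classify_status 503   # → Website is down.
```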

2. Monitoring Uptime with wget

If curl isn’t available, wget can be an alternative. Here’s a multi-line script using wget:

#!/bin/bash
website="https://dotnet-helpers.com"

if wget --spider --quiet "$website"; then
  echo "Website is up."
else
  echo "Website is down."
fi

And the one-liner version with wget:

wget --spider --quiet https://dotnet-helpers.com && echo "Website is up." || echo "Website is down."

Explanation:

  • The --spider option makes wget operate in “spider” mode, checking if the website exists without downloading content.
  • --quiet suppresses the output.

3. Checking Server Reachability with ping

Although ping checks the server rather than website content, it can still verify server reachability. Here’s a multi-line script using ping:

#!/bin/bash
server="example.com"

if ping -c 1 "$server" &> /dev/null; then
  echo "Server is reachable."
else
  echo "Server is down."
fi

And here’s the one-liner with ping:

ping -c 1 dotnet-helpers.com &> /dev/null && echo "Server is reachable." || echo "Server is down."

Note that ping takes a hostname or IP address, not a URL, so the scheme (https://) must be omitted.
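The three approaches can be folded into one fallback checker: use curl when present, otherwise wget, otherwise ping the bare hostname. This is a sketch (only the function is defined here; run it against your own site):

```shell
# Fallback website checker: curl, then wget, then ping (hostname only).
check_site() {
  local url="$1"
  local host="${url#*://}"; host="${host%%/*}"   # strip scheme and path for ping
  if command -v curl >/dev/null 2>&1; then
    curl --output /dev/null --silent --head --fail "$url" \
      && echo "Website is up." || echo "Website is down."
  elif command -v wget >/dev/null 2>&1; then
    wget --spider --quiet "$url" \
      && echo "Website is up." || echo "Website is down."
  else
    ping -c 1 "$host" >/dev/null 2>&1 \
      && echo "Server is reachable." || echo "Server is down."
  fi
}
# Usage: check_site https://dotnet-helpers.com
```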

Summary

By combining these single-line and multi-line commands, you can monitor website availability, server reachability, and port status effectively. Monitoring website uptime on a Linux machine is simple and effective with these commands. Choose the single-line or multi-line scripts that best suit your needs, and consider automating them for consistent uptime checks. Start implementing these methods to ensure your website remains accessible and reliable for your users.

 

Mastering Kubernetes Event Logs: Real-Time Debugging, Monitoring & Alerts Made Easy

The teams that manage Kubernetes clusters need to know what’s happening to the state of objects in the cluster, which in turn introduces a requirement to gather real-time information about cluster statuses and changes. This is enabled by Kubernetes events, which give you a detailed view of the cluster and allow for effective alerting and monitoring.

In this guide, you’ll learn how Kubernetes events work, what generates them, and where they’re stored. You’ll also learn to integrate Grafana with your Kubernetes environment to effectively use the information supplied by those events to support your observability strategy.

What are Kubernetes events?

Kubernetes events provide a rich source of information. These objects can be used to monitor your application and cluster state, respond to failures, and perform diagnostics. The events are generated when the cluster’s resources — such as pods, deployments, or nodes — change state.

Whenever something happens inside your cluster, Kubernetes produces an Event object that provides visibility into the change. However, Kubernetes events don’t persist throughout your cluster’s life cycle, as there’s no built-in retention mechanism: they’re short-lived, available for only one hour after the event is generated (by default).

Some of the reasons events are generated:

  • Kubernetes events are automatically generated when certain actions are taken on objects in a cluster, e.g., when a pod is created, a corresponding event is created. Other examples are changes in pod status to pending, successful, or failed. This includes reasons such as pod eviction or cluster failure.
  • Events are also generated when there’s a configuration change. Configuration changes for nodes can include scaling horizontally by adding replicas, or scaling vertically by upgrading memory, disk input/output capacity, or your processor cores.
  • Scheduling or failed scheduling scenarios also generate events. Failures can occur due to invalid container image repository access, insufficient resources, or if the container fails a liveness or readiness probe.

Why Kubernetes Events are Useful

Kubernetes events are a key diagnostic tool because they:

  1. Help detect issues with deployments, services, and pods.
  2. Provide insights into scheduling failures, container crashes, and resource limits.
  3. Track changes and status updates of various objects.
  4. Assist in debugging networking and storage issues.
  5. Support performance monitoring by identifying anomalies.

Types of Kubernetes Events

Kubernetes Events can broadly be categorized into two types:

  • Normal Events: These events signify expected and routine operations in the cluster, like a Pod being scheduled or an image being successfully pulled.
  • Warning Events: Warning events indicate issues that users need to address. These might include failed Pod scheduling, errors pulling an image, or problems with resource limits.

How to Collect Kubernetes Events

Kubectl is a powerful Kubernetes utility that helps you manage your Kubernetes objects and resources. The simplest way to view your event objects is to use kubectl get events.

When working with Kubernetes Events, the volume of data can be overwhelming, especially in large clusters. Efficiently filtering and sorting these events is key to extracting meaningful insights. Here are some practical tips to help you manage this:

To view all Kubernetes events in a cluster:

Add the -A flag to see events from all namespaces.

kubectl get events --all-namespaces
kubectl get events -A

To view events for a specific namespace:

Replace <NAMESPACE_NAME> with the actual namespace. This command filters events to show only those occurring in a specified namespace.

kubectl get events -n <namespace>

Get a detailed view of events

Add the -o wide flag to get a comprehensive view of each event, including additional details not visible in the standard output.

kubectl get events -o wide 

Stream live events

Add the -w flag to stream events in real time. This is particularly useful for monitoring ongoing activities or troubleshooting live issues, as the output updates continuously as new events occur. Use Ctrl+C to terminate the stream.

kubectl get events -w

Use field selectors for precise filtering

Add the --field-selector flag to filter events based on specific field values. Replace <EVENT_TYPE> with the event type you want to filter by. For example, kubectl get events --field-selector type=Warning will only show events of type Warning. This is particularly useful for isolating events related to errors or critical issues.

kubectl get events --field-selector type=<EVENT_TYPE>
#command will return all events of type Warning in the current namespace.
kubectl get events --field-selector type=Warning
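Filtered event listings can also be post-processed with ordinary text tools. In the sketch below, the sample variable stands in for captured kubectl get events output (the live pipeline is shown in a comment), and awk counts the Warning rows:

```shell
# Sample 'kubectl get events' output (stand-in for a live cluster).
sample='LAST SEEN   TYPE      REASON        OBJECT      MESSAGE
2m          Normal    Scheduled     pod/web-1   Successfully assigned pod
5m          Warning   BackOff       pod/web-2   Back-off restarting container
7m          Warning   FailedMount   pod/web-3   MountVolume failed'

# Live equivalent: kubectl get events --no-headers | awk '$2=="Warning"' | wc -l
echo "$sample" | awk 'NR>1 && $2=="Warning" {count++} END {print count " warning event(s)"}'
# → 2 warning event(s)
```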

Sort events by timestamp

Add the --sort-by flag to sort events chronologically. This is useful for tracking the sequence of events and understanding their progression over time.

kubectl get events -n default --sort-by=.metadata.creationTimestamp

Use JSON or YAML output for complex queries

For complex filtering that can’t be achieved with kubectl flags, you can output the events in a structured format like JSON or YAML by adding the -o json and -o yaml flags, respectively. You can then use tools like jq (for JSON) to perform advanced queries and analyses.

kubectl get events -o yaml
kubectl get events -o json
kubectl get events --field-selector type=Warning -o yaml
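As an example of a structured query, the jq line in the comment below extracts just the Warning messages from JSON output. The events_json variable is an abbreviated stand-in for real output so a jq-free grep approximation can be demonstrated without a cluster:

```shell
# Abbreviated stand-in for `kubectl get events -o json` output.
events_json='{"items":[
 {"type":"Normal","message":"Scheduled pod"},
 {"type":"Warning","message":"Back-off restarting failed container"}
]}'

# With jq against a live cluster:
#   kubectl get events -o json | jq -r '.items[] | select(.type=="Warning") | .message'

# A jq-free approximation with grep (fragile; for illustration only):
echo "$events_json" | grep -o '"type":"Warning","message":"[^"]*"'
```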

Summary: How to Collect Kubernetes Events Logs

Kubernetes events are short-lived records (retained for 1 hour) that track state changes in cluster resources like pods, nodes, or deployments. They provide critical insights for monitoring, debugging, and alerting but require proactive collection due to their transient nature. This guide outlines their utility, types, and methods to collect them effectively.

Key Concepts:

Why Events Matter:

  • Detect issues (e.g., failed deployments, resource limits).
  • Track scheduling failures, crashes, or configuration changes.
  • Support diagnostics and performance monitoring.

Event Types:

  • Normal: Routine operations (e.g., pod scheduling, image pulled).
  • Warning: Critical issues (e.g., pod eviction, image pull errors).

Collection Methods Using kubectl:

You can filter event logs in several ways, as shown above: view all events, filter by namespace, request detailed output, stream live events, filter precisely with field selectors, sort chronologically, or emit structured output (JSON/YAML).

How to Restart Pod in Kubernetes with rollout: A Detailed Guide

Kubernetes provides a robust mechanism for managing application deployments, ensuring high availability and smooth rollouts. The kubectl rollout status command is essential for monitoring deployment progress, while various methods exist for refreshing pods to apply updates or troubleshoot issues. In this blog, we’ll explore how to check the rollout status of a deployment, why rollouts are required, when kubectl rollout restart is necessary, and the different ways to restart Pods in a Kubernetes cluster.

Introduction:

In this blog post, we’ll explore three different methods to restart a Pod in Kubernetes. It’s important to note that in Kubernetes, “restarting a pod” doesn’t happen in the traditional sense, like restarting a service or a server. When we say a Pod is “restarted,” it usually means a Pod is deleted, and a new one is created to replace it. The new Pod runs the same container(s) as the one that was deleted.

When to Use kubectl rollout restart

The kubectl rollout restart command is particularly useful in the following cases:

  • After a ConfigMap or Secret Update: If a pod depends on a ConfigMap or Secret and the values change, the pods won’t restart automatically. Running a rollout restart ensures they pick up the new configuration.
  • When a Deployment Becomes Unstable: If a deployment is experiencing intermittent failures or connectivity issues, restarting can help resolve problems.
  • To Clear Stale Connections: When applications hold persistent connections to databases or APIs, a restart can help clear old connections and establish new ones.
  • For Application Performance Issues: If the application is behaving unexpectedly or consuming excessive resources, restarting the pods can help reset its state.

  • During Planned Maintenance or Upgrades: Ensuring all pods restart as part of a routine update helps maintain consistency across the deployment.

Sample Deployment created for testing:

The spec field of the Pod template contains the configuration for the containers running inside the Pod. The restartPolicy field is one of the configuration options available in the spec field; it allows you to control how the Pods hosting the containers are restarted in case of failure. An example Deployment configuration file with a restartPolicy field added to the Pod spec is shown further below.

You can set the restartPolicy field to one of the following three values:

  • Always: Always restart the Pod when it terminates.
  • OnFailure: Restart the Pod only when it terminates with failure.
  • Never: Never restart the Pod after it terminates.

If you don’t explicitly specify the restartPolicy field in a Deployment configuration file, Kubernetes sets it to Always by default (the YAML below sets it explicitly for illustration).

In this file, we have defined a Deployment named demo-deployment that manages a single Pod. The Pod has one container running the alpine:3.15 image.

apiVersion: apps/v1
kind: Deployment
metadata:
 name: demo-deployment
spec:
 replicas: 1
 selector:
   matchLabels:
     app: alpine-demo
 template:
   metadata:
     labels:
       app: alpine-demo
   spec:
     restartPolicy: Always
     containers:
     - name: alpine-container
       image: alpine:3.15
       command: ["/bin/sh","-c"]
       args: ["echo Hello World! && sleep infinity"]

Look for the Pod with a name starting with demo-deployment and ensure that it’s in the Running state. Note that Kubernetes creates unique Pod names by appending unique characters to the Deployment name, so your Pod name will differ from the one shown in the examples.

Restart Kubernetes Pod

In this section, we’ll explore three methods you can use to restart a Kubernetes Pod.

Method 1: Deleting the Pod

One of the easiest methods to restart a running Pod is to simply delete it. Run the following command to see the Pod restart in action:

#Syntax
kubectl delete pod <POD-NAME>
#Example Delete pod
kubectl delete pod demo-deployment-67789cc7db-dw6xz -n default
#To get the status of the deletion
kubectl get pod -n default

After running the commands above, you will receive a confirmation that the Pod has been deleted. The job of a Deployment is to ensure that the specified number of Pod replicas is running at all times; therefore, after you delete the Pod, Kubernetes automatically creates a new one to replace it.

Method 2: Using the “kubectl rollout restart” command

You can restart a Pod using the kubectl rollout restart command without making any modifications to the Deployment configuration. To see the Pod restart in action, run the following command:

#Syntax
kubectl rollout restart deployment/<Deployment-Name>
#Example
kubectl rollout restart deployment/demo-deployment
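The intro mentioned kubectl rollout status; pairing it with the restart gives a script-friendly wrapper that blocks until the rollout finishes (or fails after a timeout so automation doesn’t hang). This is a hypothetical sketch, defined but not executed here:

```shell
# Restart a deployment and wait for the rollout to complete (or time out).
rollout_and_wait() {
  local deploy="$1" ns="${2:-default}" timeout="${3:-120s}"
  kubectl rollout restart "deployment/$deploy" -n "$ns" || return 1
  kubectl rollout status "deployment/$deploy" -n "$ns" --timeout="$timeout"
}
# Usage: rollout_and_wait demo-deployment default 180s
```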

After running the command, the Deployment is restarted. Next, list the Pods by running the kubectl get pods command.

While the rollout is in progress, you may briefly see both the old (Terminating) Pod and its replacement. Once the rollout completes, running kubectl get pods again shows only the new Pod in a Running state.

Any Downtime during Restart Kubernetes Pod?

The Deployment resource in Kubernetes has a default rolling update strategy, which allows for restarting Pods without causing downtime. Here’s how it works: Kubernetes gradually replaces the old Pods with the new version, minimizing the impact on users and ensuring the system remains available throughout the update process.

To restart a Pod without downtime, use the kubectl rollout restart method (Method 2) discussed above. Note that manually deleting a Pod (Method 1) may not be seamless: when you delete a Pod in a Deployment, the old Pod is removed immediately, but the new Pod takes some time to start up, so there can be a brief period of downtime.

Rolling update strategy

You can confirm that Kubernetes uses a rolling update strategy by fetching the Deployment details using the following command:

#Syntax
kubectl describe deployment/<Deployment-Name>
#Example
kubectl describe deployment/demo-deployment

After running the command above, examine the RollingUpdateStrategy field in the output.

The RollingUpdateStrategy field has a default value of 25% max unavailable, 25% max surge. "25% max unavailable" means that during a rolling update, up to 25% of the total number of Pods can be unavailable, while "25% max surge" means the total number of Pods can temporarily exceed the desired count by up to 25%, keeping the application available as old Pods are brought down. These values can be adjusted to suit your application’s traffic requirements.
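To adjust those percentages, you can patch the Deployment’s update strategy. The sketch below only writes an example patch file locally (the 10%/50% values are illustrative, not a recommendation); the kubectl patch line in the comment would apply it:

```shell
# Write an example RollingUpdate strategy patch (values are illustrative).
cat > strategy-patch.yaml <<'EOF'
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%
      maxSurge: 50%
EOF

# Apply with: kubectl patch deployment demo-deployment --patch-file strategy-patch.yaml
grep -c "maxSurge" strategy-patch.yaml   # sanity check → 1
```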

Conclusion

Kubernetes provides multiple methods to restart Pods, ensuring seamless application updates and issue resolution. The best approach depends on the use case:

  1. For minimal disruption and rolling updates, kubectl rollout restart deployment/<Deployment-Name> is the recommended method. It triggers a controlled restart of Pods without causing downtime.
  2. For troubleshooting individual Pods, manually deleting a Pod (kubectl delete pod <POD-NAME>) allows Kubernetes to recreate it automatically. However, this approach may introduce brief downtime.
  3. For configuration updates, restarting Pods after modifying a ConfigMap or Secret ensures that new configurations take effect without redeploying the entire application.

Ultimately, using the rolling update strategy provided by Kubernetes ensures high availability, reducing service disruptions while refreshing Pods efficiently.