Category Archives: DevOps

Dockerfile Mastery: Step-by-Step Guide to Building & Deploying Node.js Containers

Introduction

Docker has revolutionized how developers build, ship, and run applications by simplifying dependency management and environment consistency. At the core of Docker’s workflow is the Dockerfile, a script that defines how to assemble a container image. This article walks you through creating a custom Docker image from a local Dockerfile, deploying it as a container, and understanding real-world use cases. Whether you’re new to Docker or refining your skills, this guide offers practical steps to streamline your workflow.

Why Use a Dockerfile?

A Dockerfile automates the creation of Docker images, ensuring repeatability across environments. Instead of manually configuring containers, you define instructions (e.g., installing dependencies, copying files) in the Dockerfile. This approach eliminates “it works on my machine” issues and speeds up deployment.

Create a Docker Image for a Simple Node.js App

Step 1: Create a Dockerfile

Let’s build a Docker image for a simple Node.js server.

1. Project Setup

Create a directory for your project:

mkdir node-docker-app && cd node-docker-app


2. Add two files:

server.js (a basic Express server): This is the main application file where the Express server is set up. It defines the routes and how the server should respond to requests (e.g., GET / sends “Hello from Docker example from dotnet-helpers !”). It is essential for the application’s functionality.

const express = require('express');  
const app = express();  
app.get('/', (req, res) => res.send('Hello from Docker example from dotnet-helpers !'));  
app.listen(3000, () => console.log('Server running on port 3000'));

package.json (dependencies file): This file is needed to manage the application’s dependencies (in this case, express). It ensures that Docker can install the correct version of the dependencies when the application is built, ensuring the server runs without issues.

{  
  "name": "node-docker-app",  
  "dependencies": {  
    "express": "^4.18.2"  
  }  
}
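
Before containerizing, you can optionally verify the app runs locally. This is a quick sanity check that assumes Node.js and npm are installed on your machine:

# Install dependencies and start the server locally
npm install
node server.js

# In another terminal, check the response
curl http://localhost:3000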

3. Write the Dockerfile

Create a file named Dockerfile (no extension) with these instructions:

# Use the official Node.js 18 image as a base  
FROM node:18-alpine  

# Set the working directory in the container  
WORKDIR /app  

# Copy package.json and install dependencies  
COPY package.json .  
RUN npm install  

# Copy the rest of the application code  
COPY . .  

# Expose port 3000 for the app  
EXPOSE 3000  

# Command to start the server  
CMD ["node", "server.js"]
  • FROM specifies the base image.
  • WORKDIR sets the container’s working directory.
  • COPY transfers local files to the container.
  • RUN executes commands during image build.
  • EXPOSE documents which port the app uses.
  • CMD defines the command to run the app.

 

Step 2: Build the Docker Image

Run this command in your project directory:

docker build -t node-app:latest .
  • -t tags the image (name:tag format).
  • The . at the end tells Docker to use the current directory as the build context.

Docker executes each instruction sequentially and caches each layer, so subsequent builds that reuse unchanged layers are much faster.
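
If you ever want to ignore the cache and rebuild every layer from scratch (for example, to pull in updated base-image packages), you can pass the --no-cache flag; a small sketch:

# Rebuild the image without reusing any cached layers
docker build --no-cache -t node-app:latest .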

Step 3: Run a Container from the Image

The docker run command creates and starts a new container from a specified Docker image. It is one of the most fundamental Docker commands, launching your application in an isolated environment and essentially bringing a container to life.

Start a container from your image:

#syntax
docker run -d -p 3000:3000 --name <container-name> <image-name>

docker run -d -p 3000:3000 --name my-node-app node-app:latest
  • -d runs the container in detached mode.
  • -p 3000:3000 maps the host’s port 3000 to the container’s port 3000.
  • --name assigns a name to the container.

Verify it is working with curl or in a browser, as shown below.

curl http://localhost:3000
# Output: Hello from Docker example from dotnet-helpers !


Step 4: Manage the Container

Stop the container: gracefully stops the running container named my-node-app. Use this when you want to shut down a running container without deleting it, which is useful for pausing an app or troubleshooting.

docker stop my-node-app

Remove the container: deletes the container (but not the image). Use this after stopping a container you no longer need, for example when cleaning up old containers.

docker rm my-node-app

Delete the image: deletes the Docker image named node-app with the latest tag. Use this to free disk space or remove outdated images. Note that you cannot remove an image while running or stopped containers are still using it; stop and remove those containers first.

docker rmi node-app:latest

If you build a new Docker image and want a running container to use it, Docker doesn’t allow you to “swap” the image directly; instead, you stop and remove the old container and create a new one from the updated image, as sketched below.
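
A minimal sketch of that process, reusing the image and container names from the earlier steps:

# Rebuild the image, then replace the running container with one based on the new image
docker build -t node-app:latest .
docker stop my-node-app
docker rm my-node-app
docker run -d -p 3000:3000 --name my-node-app node-app:latest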

Optimization Tips

  1. Use .dockerignore
    Prevent unnecessary files (e.g., node_modules, local logs) from being copied into the image (see the sketch after this list).
  2. Leverage Multi-Stage Builds
    Reduce image size by discarding build dependencies in the final image.
  3. Choose Smaller Base Images
    Use -alpine or -slim variants to minimize bloat.

Conclusion

Creating Docker images from a Dockerfile standardizes development and deployment workflows, ensuring consistency across teams and environments. By following the steps above, you’ve packaged a Node.js app into a portable image and run it as a container. This method applies to any language or framework—Python, Java, or even legacy apps.

Docker’s power lies in its simplicity. Once you master Dockerfiles, explore advanced features like Docker Compose for multi-container apps or Kubernetes for orchestration. Start small, automate the basics, and scale confidently.

Step-by-Step Guide: Creating Simple Docker Image from a Dockerfile

Docker has revolutionized how developers build, ship, and run applications by simplifying dependency management and environment consistency. At the core of Docker’s workflow is the Dockerfile, a script that defines how to assemble a container image. This article walks you through creating a Docker image from a local Dockerfile, deploying it as a container, and understanding real-world use cases. Whether you’re new to Docker or refining your skills, this guide offers practical steps to streamline your workflow.

Why use a Dockerfile?

A Dockerfile is a simple text file containing a series of commands and instructions used to build a Docker image. It’s the blueprint for your image, automating the creation process so that your app’s environment can be replicated anywhere. A Dockerfile automates the creation of Docker images, ensuring repeatability across environments. Instead of manually configuring containers, you define instructions (e.g., installing dependencies, copying files) in the Dockerfile. This approach eliminates “it works on my machine” issues and speeds up deployment.

Dockerfile commands have a wide range of purposes. Use them to:

  • Install application dependencies.
  • Specify the container environment.
  • Set up application directories.
  • Define runtime configuration.
  • Provide image metadata.

Prerequisites

  1. Command-line access.
  2. Administrative privileges on the system.
  3. Docker installed.

Create Docker Image from Dockerfile

Follow the steps below to create a Dockerfile, build the image, and test it with Docker.

Step 1: Create Project Directory

Creating a Docker image with Dockerfile requires setting up a project directory. The directory contains the Dockerfile and stores all other files involved in building the image.

To keep it simple, create the required Dockerfile inside the project directory as shown below.

Create a directory by opening the terminal and using the mkdir command; for this example, I used PowerShell.

mkdir dockerapp

Here, dockerapp is the project directory name; replace it with the name of your project.

Step 2: Create Dockerfile

The contents of a Dockerfile depend on the image that it describes. The section below explains how to create a Dockerfile and provides a simple example to illustrate the procedure:

1. Navigate to the project directory:

cd <directory>

2. Create a Dockerfile using a text editor of your choice. Here I created it using a PowerShell cmdlet, as shown below; alternatively, you can create the file manually inside your directory.

New-Item -Path . -Name "Dockerfile" -ItemType "File"

3. Add the instructions for building the image. For example, the code below creates a simple Docker image that uses Ubuntu as a base, runs the apt command to update the repositories, and executes an echo command that prints the words Hello World. Place these instructions inside the Dockerfile created in the previous step.

FROM ubuntu
MAINTAINER test-user
RUN apt update
CMD ["echo", "Hello World"]

Once you finish adding commands to the Dockerfile, save the file and exit.

Note: When you run a container from this image, it will print "Hello World" as output (see Step 4 below).

Common Dockerfile instructions:

  • FROM <image>: Specifies an existing image as a base.
  • MAINTAINER <name>: Defines the image maintainer.
  • RUN <command>: Executes commands at build time.
  • CMD <command> <argument>: Sets the default executable.
  • ENTRYPOINT <command>: Defines a mandatory command.
  • LABEL <key>=<value>: Adds metadata to the image.
  • ENV <key>=<value>: Sets environment variables.
  • ARG <key>[=<default-value>]: Defines build-time variables.
  • COPY <source> <destination>: Copies files into the image.

 

Step 3: Build Docker Image

Use the following procedure to create a Docker image using the Dockerfile created in the previous step.

1. Run the following command to build a Docker image, replacing <image> with an image name and <path> with the path to the directory containing the Dockerfile:

docker build -t <image> <path>

The -t option allows the user to provide a name and (optionally) a tag for the new image. When executing the command from within the project directory, use (.) as the path:

docker build -t <image> .

Docker reads the Dockerfile’s contents and executes the commands step by step.

2. Verify that the new image is in the list of local images by entering the following command, or check it in the Docker Desktop dashboard.

docker images

The output shows the list of locally available images.

Step 4: Test Docker Image

To test the new image, use docker run to launch a new container based on it. Since we do not pass the -d flag, the container runs attached, so its output appears directly in the terminal.

docker run --name <container> <image>

The example below uses the myfirstapp image to create a new container named myfirstappcontainer:

docker run --name myfirstappcontainer myfirstapp

Docker creates a container and successfully executes the command listed in the image’s Dockerfile.
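
Because the echo command finishes immediately, the container prints Hello World and then exits. A small sketch to confirm it ran:

# List the container (including stopped ones) to confirm it ran and exited
docker ps -a --filter "name=myfirstappcontainer"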

Conclusion:

Understanding Docker’s core commands, such as docker run --name, is essential for efficiently managing containers. The example provided (docker run --name myfirstappcontainer myfirstapp) illustrates how to launch a container directly tied to a specific image, ensuring the execution of predefined Dockerfile instructions.

This approach streamlines development and deployment by enforcing container-image linkage at runtime. The article reinforces the importance of Docker in modern DevOps practices, offering actionable insights for creating images, handling containers, and integrating these tools into broader development workflows. By mastering these concepts, developers can enhance reproducibility, scalability, and automation in their projects.

How to Use Policy Fragments to Simplify Your Azure API Management Policies

In the evolving landscape of API-driven architectures, Azure API Management (APIM) has emerged as a critical tool for securing, scaling, and streamlining API interactions. At its core, APIM policies empower developers to manipulate requests and responses across the API lifecycle—enforcing security, transforming data, or throttling traffic. But as organizations scale, managing these policies across hundreds of APIs and operations becomes a labyrinth of duplicated code, hidden configurations, and maintenance nightmares.

Introduction: 

Policy fragments are a game-changing feature in APIM that re-imagines policy management by breaking monolithic configurations into modular, reusable components. Imagine defining a rate-limiting rule once and applying it seamlessly across all APIs, or centralizing authentication logic to ensure consistency while eliminating redundancy. Policy fragments not only streamline development but also turn maintenance into a single-step process: fix a fragment once, and every API referencing it inherits the update.

When you work with Azure API Management on a regular basis, you are probably familiar with policies. Policies allow you to perform actions or adjustments on the incoming request before it is sent to the backend API, or to adjust the response before returning it to the caller.

Policies can be applied at various levels, so-called scopes, and each lower level can inherit the policy of a higher level.

  • Global level => executed for all APIs
  • Product level => executed for all APIs under a product
  • API level => executed for all operations under an API
  • Operation level => executed for this single operation

What is Fragmentation in APIM?

Fragmentation in Azure API Management (APIM) refers to the ability to break down API policies into smaller, reusable components called policy fragments. These fragments can then be applied across multiple APIs or operations within an API Management instance. Each policy fragment typically consists of one or more policy elements that define a set of instructions to be executed within a specific stage of the API request-response lifecycle (inbound, outbound, backend, on-error).

Fragments promote reusability by allowing you to define common sets of policies that can be shared and applied across different APIs or operations.

Fragmentation also improves development efficiency by eliminating the need to duplicate policy configurations across APIs. Changes made to a policy fragment propagate to all APIs where it is applied, reducing maintenance effort.

Why is fragmentation required for policy creation?

The main problems with policies have always been maintenance and reuse. Policy code is quite hidden within the portal (especially with so many scope levels where it can reside), so it is hard to see where a policy is used. When a certain piece of policy is used in multiple places, it is even harder to keep track of where it is used and to keep the copies consistent. Bug fixing is difficult and cumbersome, because you need to track down every place that needs the fix.

To overcome these problems, Microsoft introduced policy fragments in APIM.

Benefits of using policy fragments

There are several benefits to using policy fragments in your Azure API Management policies:

Reusability: Policy fragments allow you to create reusable code snippets that can be used in multiple policies. This promotes code reuse and reduces the amount of code you need to maintain.

Modularity: Policy fragments promote modularity by allowing you to create self-contained code snippets that can be added to a policy when needed. This makes policies easier to read and understand, as well as to test and debug.

Maintainability: Policy fragments make policies more maintainable by allowing you to make changes to the code in a single place, rather than having to update the same code in multiple policies.

Readability: Modular, reusable fragments make it easier for other developers to understand and review your code, spot mistakes, and ensure consistency across policies.

Use case:

Consider that your organization has three APIs: PaymentAPI, UserAPI, and InventoryAPI. All of them need a rate limit of 100 requests per minute per client to prevent abuse. Instead of duplicating the rate-limiting policy in each API's configuration, you'll create a reusable policy fragment and apply it centrally.

Let's consider a simple example of APIM fragmentation for rate limiting, which is a common use case in API management. We'll create a policy fragment for rate limiting and apply it to multiple APIs within an API Management instance, using a policy that controls the rate of API requests.

Understanding the <rate-limit> Policy in Azure API Management

The <rate-limit> policy in Azure API Management (APIM) is a critical tool for controlling API traffic by restricting the number of requests a client can make within a specified time window. This policy helps prevent abuse, manage resource consumption, and ensure fair usage across consumers. Below is a breakdown of its attributes, behavior, and practical use cases.

Step: 1 Go to "Policy fragments"

Go to "Policy fragments" in the left menu panel and click the Create button. You will see a view like the snapshot below.

 

Step: 2 Enter the properties to create the new fragment

Enter the name of the fragment, its description, and the policy, as shown below.

Note: the rate-limit policy in Azure API Management (APIM) enforces a request quota per client to prevent overuse or abuse of your API.

<!-- rate_limiting_fragment.xml -->
<rate-limit calls="100" renewal-period="60" />


Step: 3 Click the Create button

You can see that our fragment has been created and has 0 references, which means it has not yet been applied to any policy.

How to add the fragment in the policy

These fragments can be included in the policy of individual APIs or operations using the <include-fragment> element, allowing for modular and reusable policy configurations. If you click on the created fragment, you can see how to include it in a policy (refer to the last snapshot in this post).

<!-- Add the line below to a policy to include the fragment -->
<include-fragment fragment-id="fragment_rate_limit" />

 

Edit Fragment for update:

Click Policy editor to open the fragment, make your edits, and save.

Best Practices

  • Naming Conventions: Prefix fragments with fragment_ for easy identification.
  • Documentation: Add <description> tags in fragments to clarify usage.
  • Reference Tracking: Use the References column in the Policy fragments list to identify impacted APIs.

By leveraging policy fragments, APIM becomes a scalable, maintainable solution for enterprise-grade API governance.

Summary 

Azure API Management (APIM) policies enable customization of request/response behavior at four scopes: Global, Product, API, and Operation. However, maintaining and reusing policies across these scopes can be challenging due to hidden configurations and duplication. Policy Fragmentation addresses this by breaking policies into reusable XML components (fragments). These fragments centralize common logic (e.g., rate limiting, authentication) and are included via <include-fragment>, ensuring consistency, reducing redundancy, and simplifying updates. For example, a rate-limiting fragment can be applied across multiple APIs, and changes propagate automatically. Fragments improve maintainability, enforce standardization, and streamline debugging.

How to check Website status on the Linux Server

Maintaining website uptime is essential for a positive user experience, as even short periods of downtime can frustrate users and result in lost business. Automating uptime checks on a Linux machine allows quick detection of issues, enabling faster response times. In this article, we’ll explore simple, effective ways to create a Website Uptime Checker Script in Linux using different commands like curl, wget, ping.

My team and I previously worked on Windows machines and were familiar with PowerShell, but we now work on Linux-based machines, which led me to write articles about the commands we use on a daily basis.

1. Checking Website Uptime with curl

One of the most straightforward ways to check if a website is up is by using curl. The following multi-line bash script checks the specified website and reports its status:

#!/bin/bash
website="https://example.com"

# Check if the website is accessible
if curl --output /dev/null --silent --head --fail "$website"; then
  echo "Website is up."
else
  echo "Website is down."
fi

Alternatively, here’s a one-liner with curl:

curl -Is https://dotnet-helpers.com | head -n 1 | grep -q "200 OK" && echo "Website is up." || echo "Website is down."

Explanation:

  • curl -Is sends a HEAD request to retrieve only headers.
  • head -n 1 captures the status line of the HTTP response.
  • grep -q "200 OK" checks whether the status line contains "200 OK".
    Based on this, the command outputs either "Website is up." or "Website is down." (see the caveat and alternative sketch after this list).

2. Monitoring Uptime with wget

If curl isn’t available, wget can be an alternative. Here’s a multi-line script using wget:

#!/bin/bash
website="https://dotnet-helpers.com"

if wget --spider --quiet "$website"; then
  echo "Website is up."
else
  echo "Website is down."
fi

And the one-liner version with wget:

wget --spider --quiet https://dotnet-helpers.com && echo "Website is up." || echo "Website is down."

Explanation:

  • The --spider option makes wget operate in "spider" mode, checking if the website exists without downloading content.
  • --quiet suppresses the output.

3. Checking Server Reachability with ping

Although ping checks the server rather than website content, it can still verify server reachability. Here’s a multi-line script using ping:

#!/bin/bash
server="example.com"

if ping -c 1 "$server" &> /dev/null; then
  echo "Server is reachable."
else
  echo "Server is down."
fi

And here’s the one-liner with ping:

ping -c 1 dotnet-helpers.com &> /dev/null && echo "Server is reachable." || echo "Server is down."

Summary

By combining these single-line and multi-line commands, you can monitor website availability and server reachability effectively. Monitoring website uptime on a Linux machine is simple with these commands. Choose the single-line or multi-line scripts that best suit your needs, and consider automating them for consistent uptime checks, for example with cron as sketched below. Start implementing these methods to ensure your website remains accessible and reliable for your users.
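
For example, a cron entry like the one below runs the check every five minutes and appends the result to a log file. This is a sketch that assumes you saved one of the scripts above as /usr/local/bin/check-website.sh, made it executable, and have write access to the chosen log path:

# Added via crontab -e
*/5 * * * * /usr/local/bin/check-website.sh >> /var/log/website-uptime.log 2>&1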

 

Understanding Environment Variables in Linux: A Must-Know for DevOps and System Admins

What Are Environment Variables in Linux?

Environment Variables in Linux are dynamic values that the operating system and various applications use to determine information about the user environment. They are essentially variables that can influence the behavior and configuration of processes and programs on a Linux system. These variables are used to pass configuration information to programs and scripts, allowing for flexible and dynamic system management.

These variables, often referred to as global variables, play a crucial role in tailoring the system’s functionality and managing the startup behavior of various applications across the system. On the other hand, local variables are restricted and accessible from within the shell in which they’re created and initialized.

Linux environment variables have a key-value pair structure, separated by an equal (=) sign. Note that the names of the variables are case-sensitive and should be in uppercase for instant identification.

Key Features of Environment Variables

  • Dynamic Values: They can change from session to session and even during the execution of programs.
  • System-Wide or User-Specific: Some variables are set globally and affect all users and processes, while others are specific to individual users.
  • Inheritance: Environment variables can be inherited by child processes from the parent process, making them useful for configuring complex applications.

Common Environment Variables

Here are some commonly used environment variables in Linux:

  • HOME: Indicates the current user’s home directory.
  • PATH: Specifies the directories where the system looks for executable files.
  • USER: Contains the name of the current user.
  • SHELL: Defines the path to the current user’s shell.
  • LANG: Sets the system language and locale settings.

Setting and Using Environment Variables

Temporary Environment Variables in Linux

You can set environment variables temporarily in a terminal session using the export command. The command below sets an environment variable named MY_VAR to true for the current session.

export MY_VAR=true
echo $MY_VAR

Example 1: Setting Single Environment Variable

For example, the following command sets the JAVA_HOME environment variable.

export JAVA_HOME=/usr/bin/java

Note that you won’t get any response about the success or failure of the command. As a result, if you want to verify that the variable has been properly set, use the echo command.

echo $JAVA_HOME

The echo command will display the value if the variable has been appropriately set. If the variable has no set value, you might not see anything on the screen.

Example 2: Setting Multiple Environment Variables

You can set multiple variables in a single command by separating the NAME=VALUE pairs with spaces, like this:

export <NAME1>=<VALUE1> <NAME2>=<VALUE2> <NAME3>=<VALUE3>

export VAR1="value1" VAR2="value2" VAR3="value3"

Example 3: Setting Multiple value for single Environment Variable

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"

The PATH variable contains a list of directories where the system looks for executable files. Multiple directories are separated by colons.
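
A common related pattern is appending a directory to the existing PATH instead of redefining the whole list. The directory used here is just an illustrative placeholder:

# Append a hypothetical directory to PATH for the current session
export PATH="$PATH:/opt/myapp/bin"
echo $PATH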

Permanent Environment Variables in Linux

To make MY_VAR available system-wide, append it to the /etc/environment file, which is the system-wide configuration file for environment variables.

Adding the line MY_VAR="true" to this file makes the MY_VAR variable available to all users and sessions on the system.

The use of sudo ensures that the command has the necessary permissions to modify /etc/environment.

Example 1: Setting Single Environment Variable for all USERS

export MY_VAR=true
echo 'MY_VAR="true"' | sudo tee /etc/environment -a

Breakdown of the Command

echo 'MY_VAR="true"': This command outputs the string MY_VAR="true". Essentially, echo is used to display a line of text.

| (Pipe): The pipe symbol | takes the output from the echo command and passes it as input to the next command. In this case, it passes the string MY_VAR="true" to sudo tee.

sudo tee /etc/environment -a: sudo: This command is used to run commands with superuser (root) privileges. Since modifying /etc/environment requires administrative rights, sudo is necessary.

tee: The tee command reads from the standard input (which is the output of the echo command in this case) and writes it to both the standard output (displaying it on the terminal) and a file.

/etc/environment: This is the file where tee will write the output. The /etc/environment file is a system-wide configuration file for environment variables.

-a: The -a (append) option tells tee to append the input to the file rather than overwriting its contents. This ensures that any existing settings in /etc/environment are preserved and the new line is simply added to the end of the file.

This command adds a new environment variable (MY_VAR) to the system-wide environment variables file (/etc/environment). By appending it, you ensure that the new variable is available to all users and sessions across the entire system. Note that /etc/environment is not a shell script: it takes plain KEY="value" lines (no export keyword), and changes take effect for new login sessions.

Example 2: Setting Multiple value for single Environment Variable for all USERS

You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>

export MY_PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin"
echo MY_PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin" | sudo tee /etc/environment -a

Cross-Subscription Code Integration: How to Access External Azure DevOps Repos Like a Pro

Introduction

Efficiently integrating code from external Azure DevOps repositories is crucial for collaborative projects and streamlined development workflows. This comprehensive guide provides a step-by-step approach to accessing and utilizing external repositories within your Azure DevOps pipelines (Checkout External Repositories). We’ll cover essential steps, including creating Personal Access Tokens (PATs), configuring service connections, and referencing external repositories in your YAML pipelines. By following these instructions, you’ll enhance your development process by seamlessly incorporating code from various sources across different subscriptions.

Accessing an External Azure DevOps Repository Across Subscriptions

Accessing a repository from another Azure DevOps subscription can be essential for projects where resources are distributed across different organizations or accounts. This article provides a step-by-step guide on using a Personal Access Token (PAT) and a service connection to access an external repository within an Azure DevOps pipeline. By following these instructions, you’ll be able to integrate code from another subscription seamlessly.

Where is it required?

In scenarios where you need to access resources (like repositories) that belong to a different Azure DevOps organization or subscription, you need to configure cross-subscription access. This setup is commonly required in the following situations:

  • Shared Repositories Across Teams: Teams working on interconnected projects in different organizations or subscriptions often need to share code. For example, a core library or shared services might be maintained in one subscription and used across multiple other projects.
  • Centralized Code Management: Large enterprises often centralize codebases for specific functionalities (e.g., CRM services, microservices). If your pipeline depends on these centralized repositories, you must configure access.
  • Multi-Subscription Projects: When an organization spans multiple Azure subscriptions, projects from one subscription might need to integrate code or services from another, necessitating secure cross-subscription access.
  • Dependency Management: A project may depend on another repository’s codebase (e.g., APIs, SDKs, or CI/CD templates) that resides in a different Azure DevOps subscription.
  • Separate Environments: Development and production environments might exist in separate subscriptions for security and compliance. For example, accessing a production-ready repository for release from a different subscription’s development repository.

Step-by-Step Guide

Step 1: Create a Personal Access Token (PAT) in External ADO

  • Navigate to the Azure DevOps organization containing the external repository.
  • Click on your profile picture in the top-right corner and select Personal Access Tokens.
  • Click on New Token and:

Provide a name (e.g., External Repo Access).
Set the Scope to Code (Read) (or higher if required).
Specify the expiration date.
Generate the PAT and copy it. Store it securely as you won’t be able to view it again.

Step 2: Create a Service Connection in your ADO

A service connection allows your pipeline to authenticate with the external repository.

  • Go to the Azure DevOps project where you’re creating the pipeline.
  • Navigate to Project Settings > Service Connections.
  • Click on New Service Connection and select Azure Repos/Team Foundation Server.

In the setup form:

Repository URL: Enter the URL of the external repository.
Authentication Method: Select Personal Access Token.
PAT: Paste the PAT you generated earlier.

Give the service connection a name (e.g., CRM Service Connection) and save it.

Step 3: Reference the External Repository in Your Pipeline

The repository keyword lets you specify an external repository. Use a repository resource to reference an additional repository in your pipeline. Add the external repository to your pipeline configuration.

SYNTAX

repositories:
- repository: string #Required as first property. Alias for the repository.
  endpoint: string #ID of the service endpoint connecting to this repository.
  trigger: none | trigger | [ string ] # CI trigger for this repository(only works for Azure Repos).
  name: string #repository name (format depends on 'type'; does not accept variables).
  ref: string #ref name to checkout; defaults to 'refs/heads/main'. The branch checked out by default whenever the resource trigger fires.
  type: string #Type of repository: git, github, githubenterprise, and bitbucket.

Update your pipeline YAML file to include:

resources:
  repositories:
  - repository: externalRepo
    type: git
    name: myexternal_project/myexternal_repo
    ref: external-ProductionBranch #Branch reference
    endpoint: dotnet Service Connection #Service connection name
  • resources.repositories references the external repository.
  • name: the external project and repository name.
  • ref: specifies the branch to check out (external-ProductionBranch).
  • endpoint: the service connection name (dotnet Service Connection).

Step 4: Checkout the External Repository

Include a checkout step in your pipeline: This ensures the external repository is cloned into the pipeline workspace for subsequent tasks.

steps:
- checkout: externalRepo

Step 5: Define the Build Pipeline

Add steps for building and packaging the code. In my case, the external project is .NET Core, so I have added the corresponding build steps as shown below.

- script: |
    dotnet --version
    nuget restore ProjectSrc/dotnethelpers.FunctionApp.csproj
  displayName: 'Restore NuGet Packages'

- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: '**/dotnethelpers.FunctionApp.csproj'
    arguments: '--output $(Build.BinariesDirectory)/publish_output'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)/publish_output'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'

Full YAML

resources:
  repositories:
  - repository: externalRepo
    type: git
    trigger: 
    - external-ProductionBranch
    name: myexternal_project/myexternal_repo
    ref: external-ProductionBranch # Branch reference
    endpoint: dotnet Service Connection # Service connection name

pool:
  vmImage: windows-latest

steps:
- checkout: externalRepo

- task: UseDotNet@2
  displayName: 'Install .NET SDK'
  inputs:
    packageType: 'sdk'
    version: '8.0.x'
    installationPath: $(Agent.ToolsDirectory)/dotnet

- script: |
    dotnet --version
    nuget restore ProjectSrc/dotnethelpers.FunctionApp.csproj
  displayName: 'Restore NuGet Packages'


- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: '**/dotnethelpers.FunctionApp.csproj'
    arguments: '--output $(Build.BinariesDirectory)/publish_output'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)/publish_output'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true
  
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'



Conclusion

Successfully accessing and integrating external Azure DevOps repositories requires careful authentication and configuration. By following the steps outlined in this guide, including creating PATs, establishing service connections, and effectively referencing external repositories within your YAML pipelines, you can seamlessly integrate code from various sources. This streamlined approach fosters enhanced collaboration, improved efficiency, and a more robust development process for your projects.

 

Search and Replace String Using the sed Command in Linux/Unix.

Introduction:

The sed command, a powerful stream editor in Linux/Unix, is a cornerstone for text manipulation. This guide will delve into the intricacies of using sed to search and replace strings within files. We’ll explore various scenarios, from replacing single occurrences to global substitutions, and even handling case-insensitive replacements. Whether you’re a seasoned system administrator or a budding developer, this comprehensive tutorial will equip you with the knowledge to effectively wield the sed command for your text processing needs. We will discuss in detail how to search and replace strings using sed.

My Requirement & solution:

We maintain an application on Linux machines (in AKS pods), and as a DevOps team we got a requirement to replace some config values based on the environment (the values are maintained as AKS environment variables). To manage this, we created a startup script in the Docker image that executes during deployment of a new image, and in it we used the sed command to find and replace config values per environment. Based on that experience, I wrote this article (Search and Replace String Using the sed Command in Linux/Unix), which should help anyone who, like me, is new to the Linux operating system and Bash commands.
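
As a rough illustration of that startup-script idea (the token, variable, and file names below are hypothetical placeholders, not our actual configuration):

#!/bin/bash
# Replace placeholder tokens in the app config with values taken from environment variables
: "${API_BASE_URL:?API_BASE_URL environment variable must be set}"   # fail fast if missing
sed -i "s|__API_BASE_URL__|${API_BASE_URL}|g" /app/appsettings.json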

What Is the Sed Command in Linux?

The SED command in Linux stands for Stream Editor and it helps in operations like selecting text, substituting text, modifying an original file, adding lines to text, or deleting lines from text. The most common use of sed in UNIX is substitution, or find and replace.

By using sed you can edit files without even opening them, which is a much quicker way to find and replace something in a file than opening it in the vi editor and then changing it.

Syntax: sed OPTIONS… [SCRIPT] [INPUTFILE…]

  • Options control the output of the Linux command.
  • Script contains a list of Linux commands to run.
  • File name (with extension) represents the file on which you’re using the sed command.

Note: We can run a sed command without any option. We can also run it without a filename, in which case the script works on the standard input data.

Key Advantages of Using sed

  • Efficiency: sed allows for in-place editing, eliminating the need to manually open and modify files in a text editor.
  • Flexibility: It supports a wide array of editing commands, enabling complex text manipulations.
  • Automation: sed can be easily integrated into scripts for automated text processing tasks.

Search and Replace String Using the sed

Replace First Matched String

In the example below, the script replaces the first found instance of the word test1 with test2 in every line of the file.

    sed -i 's/test1/test2/' opt/example.txt

The command replaces the first instance of test1 with test2 in every line, including substrings. The match is case-sensitive by default. -i tells sed to write the results back to the file instead of standard output.

Search & Global Replacement (all the matches)

To replace every string match in a file, add the g flag to the script to replace all occurrences of the pattern within each line. For example:

    sed -i 's/test1/test2/g' opt/example.txt

The command globally replaces every instance of test1 with test2 in opt/example.txt.

The command consists of the following:

  • -i tells the sed command to write the results to a file instead of standard output.
  • s indicates the substitute command.
  • / is the most common delimiter character. The command also accepts other characters as delimiters, which is useful when the string contains forward slashes (see the sketch after this list).
  • g is the global replacement flag, which replaces all occurrences of a string instead of just the first.
  • The input file is the file where the search and replace happens. The single quotes help avoid meta-character expansion in the shell.
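
For instance, when the search and replacement strings contain forward slashes (such as file paths), using a different delimiter keeps the expression readable. The paths below are hypothetical:

# Use '|' as the delimiter instead of '/' to avoid escaping the slashes in the paths
sed -i 's|/var/www/old-site|/var/www/new-site|g' opt/example.txt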

Search and Replace All Cases

To find and replace all instances of a word and ignore capitalization, use the I parameter:

#I: The case-insensitive flag.    
sed -i 's/test1/test2/gI' opt/example.txt

The command replaces all instances of the word test1 with test2, ignoring capitalization.

Conclusion 

The sed command is an invaluable tool for text manipulation in Linux/Unix environments. By mastering its basic usage and exploring its advanced features, you can streamline your text processing tasks and significantly improve your system administration and development workflows. This tutorial has provided a foundational understanding of sed’s search and replace capabilities. For further exploration, consider delving into more advanced sed scripting techniques and exploring its other powerful features.

I hope you found this tutorial helpful. What's your favorite thing you learned from this tutorial? Let me know in the comments!

How To Copy Secrets From KeyVault To Another KeyVault In Azure

Introduction

Azure Key Vault is a secure cloud service for managing secrets, encryption keys, and certificates. In modern multi-region deployments, ensuring that application secrets are consistently available across regions is essential for high availability and disaster recovery. However, manually copying secrets from one Key Vault to another can be tedious, error-prone, and time-consuming, especially when dealing with numerous secrets.

This blog post demonstrates how to automate the process of copying secrets from one Azure Key Vault to another using a PowerShell script. By following this guide, you can efficiently replicate secrets between regions, ensuring consistency and reducing manual intervention.

Use Case:

In our application setup, we aimed to configure high availability by deploying the application in two Azure regions. The primary Key Vault in region 1 contained numerous secrets, which we needed to replicate to the Key Vault in region 2. Manually moving each secret one by one was impractical and error-prone.

To overcome this, we developed an automated process using PowerShell to copy all secrets from the source Key Vault to the destination Key Vault. This approach eliminates human errors, saves time, and ensures seamless secret replication for high availability.

This blog will help you understand how to copy secrets from one Key Vault to another in Azure using a PowerShell script.

To clone a secret between key vaults, we need to perform two steps:

  1. Retrieve/export the secret value from the source key vault.
  2. Import this value into the destination key vault.


Step 1: Install Azure AZ module

Use the cmdlet below to install the Azure PowerShell module if it is not already installed.

# Install the Azure PowerShell module if not already installed
  Install-Module -Name Az -Force -AllowClobber

Step 2: Set Source and destination Key Vault name

# Pass both Source and destination Key Vault Name
Param( [Parameter(Mandatory)] 
[string]$sourceKvName, 
[Parameter(Mandatory)] 
[string]$destinationKvName )

Step 3: Connect to Azure to access the Key Vault (non-interactive mode)

Since this is automation, you cannot rely on Connect-AzAccount's interactive login popup; to execute without any manual intervention, use a non-interactive sign-in as shown below. Note that the Az PowerShell cmdlets used in the following steps (Get-AzKeyVaultSecret, Set-AzKeyVaultSecret) rely on the Az module's own context, so if they report that no account is signed in, use Connect-AzAccount with a service principal credential for the non-interactive sign-in instead.

# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "0ff3664821-0c94-48e0-96b5-7cd6422f46" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"

Step 4: Get all the secret names from the source Key Vault

# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name

Step 5: Copy Secrets From source to destination KV.

The script below loops over the secret names, fetches each secret's value from the source Key Vault, and sets the same name/value pair in the destination Key Vault.

# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
-SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}

Full code

# Pass both Source and destination Key Vault Name
Param(
[Parameter(Mandatory)]
[string]$sourceKvName,
[Parameter(Mandatory)]
[string]$destinationKvName
)

# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "422f464821-0c94-48e0-96b5-7cd60ff366" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"

# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name

# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
-SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}
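
If you prefer the Azure CLI over Az PowerShell, a roughly equivalent sketch in bash looks like the following. It assumes you are already signed in with az login and have access to both vaults; the vault names are placeholders:

# Copy every secret from the source vault to the destination vault
src="kv-source-01"
dst="kv-destination-01"

for name in $(az keyvault secret list --vault-name "$src" --query "[].name" -o tsv); do
  value=$(az keyvault secret show --vault-name "$src" --name "$name" --query "value" -o tsv)
  az keyvault secret set --vault-name "$dst" --name "$name" --value "$value" --output none
done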

Conclusion

Managing secrets across multiple Azure regions can be challenging but is crucial for ensuring high availability and disaster recovery. Automating the process of copying secrets between Key Vaults not only streamlines the operation but also enhances reliability and reduces the risk of errors.

By following the steps outlined in this blog, you can easily replicate secrets between Azure Key Vaults using PowerShell. This solution ensures that your applications in different regions are configured with consistent and secure credentials, paving the way for robust and scalable deployments.

Implement this process to save time, minimize errors, and focus on scaling your applications while Azure handles secure secret management for you.

 

How to Delete a Blob from an Azure Storage using PowerShell

In one of my automations (deleting a blob), I needed to delete previously stored reports (report names are always appended with a timestamp) from a specific container in an Azure storage account, on a daily basis and in an automated way. So I first needed to ensure the container exists before deleting the report. This article explains in detail how to delete a blob from an Azure storage account using PowerShell.

New to storage account?

One of the core services within Microsoft Azure is the Storage Account service. Many services utilize storage accounts for storing data, such as virtual machine disks, diagnostics logs (especially application logs), SQL backups, and others. You can also use the Azure Storage Account service to store your own data, such as blobs or binary data.

As per MSDN, Azure blob storage allows you to store large amounts of unstructured object data. You can use blob storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data.

Delete a Blob from an Azure Storage

Step: 1 Get the prerequisite inputs

In this example, I am going to delete a SQL database export (a backup imported to the storage account in .bacpac format) stored in the container called sql.

## prerequisite Parameters
$resourceGroupName="rg-dgtl-strg-01"
$storageAccountName="sadgtlautomation01"
$storageContainerName="sql"
$blobName = "core_2022110824.bacpac"

Step: 2 Connect to your Azure subscription

Using the az login command with a service principal is a secure and efficient way to authenticate and connect to your Azure subscription for automation tasks and scripts. In scenarios where you need to automate Azure management tasks or run scripts in a non-interactive manner, you can authenticate using a service principal. A service principal is an identity created for your application or script to access Azure resources securely.

## Connect to your Azure subscription
az login --service-principal -u "210f8f7c-049c-e480-96b5-642d6362f464" -p "c82BQ~MTCrPr3Daz95Nks6LrWF32jXBAtXACccAV" --tenant "cf8ba223-a403-342b-ba39-c21f78831637"

Step: 3 Get the storage account to Check the container exit or not

When working with Azure Storage, you may need to verify if a container exists in a storage account or create it if it doesn’t. You can use the Get-AzStorageContainer cmdlet to check for the existence of a container.

## Get the storage account to check container exist or need to be create
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName

## Get the storage account context
$context = $storageAccount.Context

Step: 4 Check the container exist before deleting the blob

We need to use the Remove-AzStorageBlob cmdlet to delete a blob from the Azure storage container.

## Check if the storage container exists
if (Get-AzStorageContainer -Name $storageContainerName -Context $context -ErrorAction SilentlyContinue)
{
    Write-Host -ForegroundColor Green "$storageContainerName, the requested container exists, started deleting blob"

    ## Remove the blob from the Azure Storage container
    Remove-AzStorageBlob -Container $storageContainerName -Context $context -Blob $blobName
    Write-Host -ForegroundColor Green "$blobName deleted"
}
else
{
    Write-Host -ForegroundColor Magenta "$storageContainerName, the requested container does not exist"
}

Full Code:

## Delete a Blob from an Azure Storage
## Input Parameters
$resourceGroupName="rg-dgtl-strg-01"
$storageAccountName="sadgtlautomation01"
$storageContainerName="sql"
$blobName = "core_2022110824.bacpac"

## Connect to your Azure subscription
az login --service-principal -u "210f8f7c-049c-e480-96b5-642d6362f464" -p "c82BQ~MTCrPr3Daz95Nks6LrWF32jXBAtXACccAV" --tenant "cf8ba223-a403-342b-ba39-c21f78831637"

## Function to delete the blob from the storage container
Function DeleteBlobFromStorageContainer
{
    ## Get the storage account to check whether the container exists
    $storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName

    ## Get the storage account context
    $context = $storageAccount.Context

    ## Check if the storage container exists
    if (Get-AzStorageContainer -Name $storageContainerName -Context $context -ErrorAction SilentlyContinue)
    {
        Write-Host -ForegroundColor Green "$storageContainerName, the requested container exists, started deleting blob"

        ## Remove the blob from the Azure Storage container
        Remove-AzStorageBlob -Container $storageContainerName -Context $context -Blob $blobName
        Write-Host -ForegroundColor Green "$blobName deleted"
    }
    else
    {
        Write-Host -ForegroundColor Magenta "$storageContainerName, the requested container does not exist"
    }
}

# Call the function
DeleteBlobFromStorageContainer
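
If you prefer the Azure CLI, the same deletion can be sketched in a single command, reusing the example values above. This assumes your signed-in identity has data-plane access to the blobs (for example, the Storage Blob Data Contributor role on the account):

az storage blob delete --account-name "sadgtlautomation01" --container-name "sql" --name "core_2022110824.bacpac" --auth-mode login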


How to view the secret variables in Azure DevOps

Today, I will be talking about a technique you can use to view secret variables in Azure DevOps.

Introduction

Azure DevOps lets us store secrets within Azure DevOps variable groups, which can be used in pipelines. These secret variables cannot be viewed manually from the portal. Sometimes, however, we may want to view a value to perform some other activity.

Note: The best practice is to keep secrets in Azure Key Vault and read them in the Azure pipeline in a secure way. However, some legacy projects still maintain secrets in Azure variable groups, so this article focuses on them. You can read the linked article to learn how to use Key Vault to handle secrets.

What are Secrets Variables in Azure Pipelines?

Secret variables are placeholders for values that you want to store in an encrypted format and use while running a pipeline, without exposing their value. They are suited to private information such as usernames, passwords, API keys, IDs, and other identifying data that you would not want exposed in a pipeline. Secret variables are encrypted at rest with a 2048-bit RSA key and are available on the agent for tasks and scripts to use.

How to set Secret in Azure Variable group?

You can set secret variables in the pipeline settings UI; such secrets are scoped to the pipeline where they are set, so they are only visible to users with access to that pipeline. You can also set secrets in a variable group. Variable groups follow the library security model: you can control who can define new items in a library and who can use an existing item.

Let’s create a Secret variable in a Variable Group as shown below and make sure that you set it as a secret by locking it.

Once you mark it as a secret (by clicking the open lock icon shown in the image below) and save the variable group, no one, including an admin, will be able to view the secret from the portal. Let's now understand how to view the secret with the help of an Azure DevOps pipeline.

View the secret variables from Variable Group

You can create a simple Pipeline which has the below tasks to view the secrets in pipeline execution.

  1. A PowerShell task which outputs a text (along with the secret) into a file named ViewSecretValue.txt.
  2. Publish the ViewSecretValue.txt into Azure Pipeline artifacts.

Run the pipeline with the below PowerShell task:

variables:
- group: Demo_VariableGroup
steps:
- powershell: |
    "The secretkey value is : $(secretkey)" | Out-File -FilePath  $(Build.ArtifactStagingDirectory)\ViewSecretValue.txt
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'

Now, click on the ViewSecretValue.txt file in the published artifacts to download it. Open the downloaded file in Notepad and you will see the secret value in plain text.

Conclusion

In summary, handling secret variables securely is crucial for maintaining data confidentiality in DevOps processes. Azure DevOps provides built-in features and best practices to keep sensitive data protected, making it a powerful platform for secure CI/CD pipeline management. Integrating with tools like Azure Key Vault can further strengthen your security posture and simplify secret management across multiple projects.