Maintaining website uptime is essential for a positive user experience, as even short periods of downtime can frustrate users and result in lost business. Automating uptime checks on a Linux machine allows quick detection of issues, enabling faster response times. In this article, we’ll explore simple, effective ways to create a Website Uptime Checker Script in Linux using commands such as curl, wget, and ping.
My team previously worked on Windows machines and was comfortable with PowerShell; now that we work on Linux machines, I’ve started writing articles about the commands we use on a daily basis.
1. Checking Website Uptime with curl
One of the most straightforward ways to check if a website is up is by using curl. The following multi-line bash script sends an HTTP HEAD request to the specified website and reports its status:
#!/bin/bash
website="https://example.com"
# Check if website is accessible
if curl --output /dev/null --silent --head --fail "$website"; then
    echo "Website is up."
else
    echo "Website is down."
fi
Alternatively, here’s a one-liner with curl:
curl -Is https://dotnet-helpers.com | head -n 1 | grep -q " 200" && echo "Website is up." || echo "Website is down."
Explanation:
curl -Is sends an HTTP HEAD request and retrieves only the response headers.
head -n 1 captures the status line of the HTTP response.
grep -q " 200" checks whether the status line contains the 200 status code. Matching on the code alone (rather than "200 OK") also works for HTTP/2 responses, where the reason phrase is omitted from the status line. Based on this, the command outputs either "Website is up." or "Website is down."
2. Monitoring Uptime with wget
If curl isn’t available, wget can be an alternative. Here’s a multi-line script using wget:
#!/bin/bash
website="https://dotnet-helpers.com"
if wget --spider --quiet "$website"; then
echo "Website is up."
else
echo "Website is down."
fi
And the one-liner version with wget:
wget --spider --quiet https://dotnet-helpers.com && echo "Website is up." || echo "Website is down."
Explanation:
The --spider option makes wget operate in "spider" mode, checking if the website exists without downloading content.
--quiet suppresses the output.
3. Checking Server Reachability with ping
Although ping checks the server rather than website content, it can still verify server reachability. Here’s a multi-line script using ping:
#!/bin/bash
server="example.com"
if ping -c 1 "$server" &> /dev/null; then
    echo "Server is reachable."
else
    echo "Server is down."
fi
And here’s the one-liner with ping:
ping -c 1 dotnet-helpers.com &> /dev/null && echo "Server is reachable." || echo "Server is down."
Note that ping takes a hostname or IP address rather than a URL, so the https:// prefix is omitted.
Summary
By combining these single-line and multi-line commands, you can monitor website availability and server reachability effectively. Monitoring website uptime on a Linux machine is simple with these commands. Choose the single-line or multi-line scripts that best suit your needs, and consider automating them for consistent uptime checks, as shown below. Start implementing these methods to ensure your website remains accessible and reliable for your users.
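One simple way to automate these checks is to run the script on a schedule with cron. A minimal sketch, assuming the curl script above is saved as /usr/local/bin/uptime-check.sh (a hypothetical path) and made executable:
# Example crontab entry (add with: crontab -e)
# Run the uptime check every 5 minutes and append the result to a log file
*/5 * * * * /usr/local/bin/uptime-check.sh >> /var/log/uptime-check.log 2>&1
Writing to /var/log may require root privileges; point the log at a user-writable path if you schedule this under a normal account.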
Teams that manage Kubernetes clusters need to know what’s happening to the state of objects in the cluster, and this in turn introduces a requirement to gather real-time information about cluster statuses and changes. This is enabled by Kubernetes events, which give you a detailed view of the cluster and allow for effective alerting and monitoring.
In this guide, you’ll learn how Kubernetes events work, what generates them, and where they’re stored. You’ll also learn to integrate Grafana with your Kubernetes environment to effectively use the information supplied by those events to support your observability strategy.
What are Kubernetes events?
Kubernetes events provide a rich source of information. These objects can be used to monitor your application and cluster state, respond to failures, and perform diagnostics. The events are generated when the cluster’s resources — such as pods, deployments, or nodes — change state.
Whenever something happens inside your cluster, an Event object is produced that provides visibility into the cluster. However, Kubernetes events don’t persist throughout your cluster life cycle, as there’s no built-in retention mechanism; they’re short-lived, available for only one hour after the event is generated.
Some of the reasons events are generated:
Kubernetes events are automatically generated when certain actions are taken on objects in a cluster, e.g., when a pod is created, a corresponding event is created. Other examples are changes in pod status to pending, successful, or failed. This includes reasons such as pod eviction or cluster failure.
Events are also generated when there’s a configuration change. Configuration changes for nodes can include scaling horizontally by adding replicas, or scaling vertically by upgrading memory, disk input/output capacity, or your processor cores.
Scheduling or failed scheduling scenarios also generate events. Failures can occur due to invalid container image repository access, insufficient resources, or if the container fails a liveness or readiness probe.
Why Kubernetes Events are Useful
Kubernetes events are a key diagnostic tool because they:
Help detect issues with deployments, services, and pods.
Provide insights into scheduling failures, container crashes, and resource limits.
Track changes and status updates of various objects.
Assist in debugging networking and storage issues.
Support performance monitoring by identifying anomalies.
Types of Kubernetes Events
Kubernetes Events can broadly be categorized into two types:
Normal Events: These events signify expected and routine operations in the cluster, like a Pod being scheduled or an image being successfully pulled.
Warning Events: Warning events indicate issues that users need to address. These might include failed Pod scheduling, errors pulling an image, or problems with resource limits.
How to Collect Kubernetes Events
Kubectl is a powerful Kubernetes utility that helps you manage your Kubernetes objects and resources. The simplest way to view your event objects is to use kubectl get events.
When working with Kubernetes Events, the volume of data can be overwhelming, especially in large clusters. Efficiently filtering and sorting these events is key to extracting meaningful insights. Here are some practical tips to help you manage this:
To view all Kubernetes events in a cluster:
Add the -A flag to see events from all namespaces.
kubectl get events --all-namespaces
kubectl get events -A
To view events for a specific namespace:
Replace <namespace> with the actual namespace name. This command filters events to show only those occurring in the specified namespace.
kubectl get events -n <namespace>
Get a detailed view of events
Add the -o wide flag to get a comprehensive view of each event, including additional details not visible in the standard output.
kubectl get events -o wide
Stream live events
Add the -w flag to stream events in real time. This is particularly useful for monitoring ongoing activities or troubleshooting live issues, as it updates continuously as new events occur. Use Ctrl+C to terminate the stream.
kubectl get events -w
Use field selectors for precise filtering
Add the --field-selector flag to filter events based on specific field values. Replace <EVENT_TYPE> with the event type you want to filter by. For example, kubectl get events --field-selector type=Warning will only show events of type Warning. This is particularly useful for isolating events related to errors or critical issues.
kubectl get events --field-selector type=<EVENT_TYPE>
#command will return all events of type Warning in the current namespace.
kubectl get events --field-selector type=Warning
Sort events by timestamp
kubectl get event -n default --sort-by=.metadata.creationTimestamp
Add the --sort-by flag to sort events chronologically. This is useful for tracking the sequence of events and understanding their progression over time.
Use JSON or YAML output for complex queries
For complex filtering that can’t be achieved with kubectl flags, you can output the events in a structured format like JSON or YAML by adding the -o json and -o yaml flags, respectively. You can then use tools like jq (for JSON) to perform advanced queries and analyses.
kubectl get events -o yaml
kubectl get events -o json
kubectl get events --field-selector type=Warning -o yaml
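As a sketch of the kind of query jq enables (assuming jq is installed), the following prints the reason, the affected object, and the message for every Warning event:
kubectl get events --field-selector type=Warning -o json | jq -r '.items[] | "\(.reason)\t\(.involvedObject.kind)/\(.involvedObject.name)\t\(.message)"'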
Summary: How to Collect Kubernetes Events Logs
Kubernetes events are short-lived records (retained for 1 hour) that track state changes in cluster resources like pods, nodes, or deployments. They provide critical insights for monitoring, debugging, and alerting but require proactive collection due to their transient nature. This guide outlines their utility, types, and methods to collect them effectively.
Why they’re useful:
Track scheduling failures, crashes, or configuration changes.
Support diagnostics and performance monitoring.
Event Types:
Normal: Routine operations (e.g., pod scheduling, image pulled).
Warning: Critical issues (e.g., pod eviction, image pull errors).
Collection Methods Using kubectl:
You can filter events in several ways, as shown above: viewing all events, namespace-specific filtering, detailed output, live streaming, precise filtering with field selectors, chronological sorting, and structured output (JSON/YAML).
Kubernetes provides a robust mechanism for managing application deployments, ensuring high availability and smooth rollouts. The kubectl rollout status command is essential for monitoring deployment progress, while various methods exist for refreshing pods to apply updates or troubleshoot issues. In this blog, we’ll explore how to check the rollout status of a deployment, why rollouts are required, when kubectl rollout restart is necessary, and the different ways to restart a Pod in a Kubernetes cluster.
Introduction:
In this blog post, we’ll explore three different methods to restart a Pod in Kubernetes. It’s important to note that in Kubernetes, “restarting a pod” doesn’t happen in the traditional sense, like restarting a service or a server. When we say a Pod is “restarted,” it usually means a Pod is deleted, and a new one is created to replace it. The new Pod runs the same container(s) as the one that was deleted.
When to Use kubectl rollout restart
The kubectl rollout restart command is particularly useful in the following cases:
After a ConfigMap or Secret Update: If a pod depends on a ConfigMap or Secret and the values change, the pods won’t restart automatically. Running a rollout restart ensures they pick up the new configuration.
When a Deployment Becomes Unstable: If a deployment is experiencing intermittent failures or connectivity issues, restarting can help resolve problems.
To Clear Stale Connections: When applications hold persistent connections to databases or APIs, a restart can help clear old connections and establish new ones.
For Application Performance Issues: If the application is behaving unexpectedly or consuming excessive resources, restarting the pods can help reset its state.
During Planned Maintenance or Upgrades: Ensuring all pods restart as part of a routine update helps maintain consistency across the deployment.
Sample Deployment created for testing:
The spec field of the Pod template contains the configuration for the containers running inside the Pod. The restartPolicy field is one of the configuration options available in the spec field. It allows you to control how the Pods hosting the containers are restarted in case of failure. Here’s an example of a Deployment configuration file with a restartPolicy field added to the Pod spec:
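A sketch of the relevant part of the Pod template (only the template portion of the Deployment is shown here; a full manifest appears a little further below):
  template:
    spec:
      restartPolicy: Always    # note: a Deployment's Pod template only accepts Always; OnFailure/Never apply to bare Pods and Jobs
      containers:
      - name: demo-container
        image: alpine:3.15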
You can set the restartPolicy field to one of the following three values:
Always: Always restart the Pod when it terminates.
OnFailure: Restart the Pod only when it terminates with failure.
Never: Never restart the Pod after it terminates.
If you don’t explicitly specify the restartPolicy field in a Deployment configuration file (as in the YAML below), Kubernetes sets the restartPolicy to Always by default.
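A minimal manifest matching that description might look like the following (the labels and the container command are illustrative; the command simply keeps the container alive so the Pod stays in the Running state):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: alpine
        image: alpine:3.15
        command: ["sh", "-c", "while true; do sleep 30; done"]
Apply it with kubectl apply -f demo-deployment.yaml.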
In this file, we have defined a Deployment named demo-deployment that manages a single Pod. The Pod has one container running the alpine:3.15 image.
Look for the Pod with a name starting with demo-deployment and ensure that it’s in the Running state. Note that Kubernetes creates unique Pod names by appending unique characters to the Deployment name, so your Pod name will differ from the one shown in the examples below.
Restart Kubernetes Pod
In this section, we’ll explore three methods you can use to restart a Kubernetes Pod.
Method 1: Deleting the Pod
One of the easiest methods to restart a running Pod is to simply delete it. Run the following command to see the Pod restart in action:
#Syntax
kubectl delete pod <POD-NAME>
#Example Delete pod
kubectl delete pod demo-deployment-67789cc7db-dw6xz -n default
#To get the status of the deletion
kubectl get pod -n default
After running the command above, you will receive a confirmation that the Pod has been deleted. The job of a Deployment is to ensure that the specified number of Pod replicas is running at all times. Therefore, after deleting the Pod, Kubernetes will automatically create a new Pod to replace the deleted one.
Method 2: Using the “kubectl rollout restart” command
You can restart a Pod using the kubectl rollout restart command without making any modifications to the Deployment configuration. To see the Pod restart in action, run the following command:
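Syntax and an example (using the sample Deployment created earlier):
#Syntax
kubectl rollout restart deployment <DEPLOYMENT-NAME>
#Example: restart the sample deployment
kubectl rollout restart deployment demo-deployment -n default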
After running the command, you’ll receive a confirmation that the Deployment has been restarted.
Next, list the Pods by running the kubectl get pod command. At first you’ll see the old Pod terminating while its replacement starts, which shows that the rollout is in progress. If you run kubectl get pods again after a few moments, you’ll see only the new Pod in a Running state.
Any Downtime during Restart Kubernetes Pod?
The Deployment resource in Kubernetes has a default rolling update strategy, which allows for restarting Pods without causing downtime. Here’s how it works: Kubernetes gradually replaces the old Pods with the new version, minimizing the impact on users and ensuring the system remains available throughout the update process.
To restart a Pod without downtime, you can choose between the two methods discussed above: relying on a Deployment rollout or using the kubectl rollout restart command. Note that manually deleting a Pod (Method 1) is less effective because there might be a brief period of downtime: when you manually delete a Pod in a Deployment, the old Pod is removed immediately, but the new Pod takes some time to start up.
Rolling update strategy
You can confirm that Kubernetes uses a rolling update strategy by fetching the Deployment details using the following command:
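For example, using the sample Deployment created earlier:
#Syntax
kubectl describe deployment <DEPLOYMENT-NAME>
#Example: inspect the sample deployment's update strategy
kubectl describe deployment demo-deployment -n default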
After running the command above, review the RollingUpdateStrategy field in the output.
The RollingUpdateStrategy field has a default value of 25% max unavailable, 25% max surge. 25% max unavailable means that during a rolling update, up to 25% of the total number of Pods can be unavailable, and 25% max surge means that the total number of Pods can temporarily exceed the desired count by up to 25% to ensure that the application remains available as old Pods are brought down. These values can be adjusted based on the traffic requirements of your application.
Conclusion
Kubernetes provides multiple methods to restart Pods, ensuring seamless application updates and issue resolution. The best approach depends on the use case:
For minimal disruption and rolling updates, kubectl rollout restart deployment/<Deployment-Name> is the recommended method. It triggers a controlled restart of Pods without causing downtime.
For troubleshooting individual Pods, manually deleting a Pod (kubectl delete pod <POD-NAME>) allows Kubernetes to recreate it automatically. However, this approach may introduce brief downtime.
For configuration updates, restarting Pods after modifying a ConfigMap or Secret ensures that new configurations take effect without redeploying the entire application.
Ultimately, using the rolling update strategy provided by Kubernetes ensures high availability, reducing service disruptions while refreshing Pods efficiently.
Performing DNS resolution in Windows using PowerShell is a fundamental task for network administrators and IT professionals. Here are several methods to check DNS resolution using PowerShell.
The Domain Name System (DNS) is an essential component of the internet’s infrastructure, translating human-readable domain names (like www.example.com) into machine-readable IP addresses (like 192.0.2.1). Checking DNS resolution is crucial for troubleshooting network issues, ensuring proper domain configurations, and enhancing overall internet performance. This article explores various methods to check DNS resolution, providing insights into tools and techniques available for different operating systems and use cases.
Method 1: Using nslookup
Although nslookup is not a PowerShell cmdlet, it is an external command-line tool that can be executed directly from a PowerShell session. This method is handy for those familiar with traditional command-line tools.
nslookup google.com
Output: This command will return the DNS server being queried and the resolved IP addresses for the domain.
Method 2: Using Test-Connection (Ping)
The Test-Connection cmdlet can be used to ping a domain name, which resolves the domain to an IP address. This is a useful method for quickly verifying DNS resolution and connectivity.
Test-Connection google.com
Output: This command will return the resolved IP address along with ping statistics, providing both DNS resolution and connectivity information.
Method 3: Using Test-NetConnection
The Test-NetConnection cmdlet is another versatile tool that can be used for DNS resolution. It provides more detailed network diagnostics compared to Test-Connection.
Test-NetConnection -ComputerName google.com
Output: This command returns comprehensive information including the resolved IP address, ping results, and network adapter status.
Method 4: Using wget Command
In Windows PowerShell, wget is an alias for Invoke-WebRequest, which downloads content from a URL. Although its primary use is for retrieving files, it also resolves the domain name in the process.
wget google.com
Output: This command will display the HTTP response details for the specified URL, confirming that the domain name resolved successfully.
Method 5: Using ping
The ping command is a classic network utility used to test the reachability of a host. It also performs DNS resolution.
ping google.com
Output: This command will return the resolved IP address and round-trip time for packets sent to the domain.
Method 6: Parsing DNS Records with Resolve-DnsName
Resolve-DnsName can be used to retrieve specific DNS records like A, AAAA, MX, and TXT records.
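A few example queries (google.com is just an illustrative domain):
# Default lookup (A/AAAA records)
Resolve-DnsName google.com
# Query specific record types
Resolve-DnsName google.com -Type MX
Resolve-DnsName google.com -Type TXT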
Output: This command will return detailed information about the domain, including IP addresses, aliases, and DNS record types.
PowerShell provides versatile methods for DNS resolution, ranging from the native Resolve-DnsName cmdlet to traditional command-line tools like nslookup, ping, Test-Connection, Test-NetConnection, and wget. These methods cater to various preferences and requirements, ensuring that DNS resolution can be performed efficiently and effectively in any PowerShell environment.
By incorporating these methods into your network management toolkit, you can enhance your ability to diagnose and resolve DNS-related issues seamlessly.
Conclusion:
Performing DNS resolution using PowerShell offers multiple approaches, each suited for different scenarios and troubleshooting needs. Whether you prefer traditional command-line tools like nslookup and ping, or more advanced PowerShell cmdlets like Resolve-DnsName and Test-NetConnection, these methods provide flexibility in verifying domain name resolution and diagnosing network issues. By integrating these techniques into your workflow, you can efficiently manage DNS queries, ensure proper domain configurations, and improve overall network reliability.
Azure Kubernetes Service (AKS) empowers you to dynamically scale your applications to meet fluctuating demands. By leveraging CPU and memory-based autoscaling, you can optimize resource allocation, minimize costs, and ensure your applications consistently deliver peak performance. This guide will walk you through the process of configuring and implementing effective autoscaling in Azure Kubernetes Service deployment.
By default, the Horizontal Pod Autoscaler (HPA) in Kubernetes primarily uses CPU utilization as a metric for scaling. However, it is also possible to configure HPA to use memory utilization or custom metrics. Here’s how you can set up HPA to consider memory usage in addition to CPU usage.
What is HPA?
Horizontal Pod Autoscaler (HPA) automatically scales the number of pods in a Kubernetes deployment based on observed metrics such as CPU and memory usage. It ensures your application can handle increased load and conserves resources when demand is low.
“AKS Autoscaling automatically adjusts the number of pods in your deployments, ensuring your applications can seamlessly handle fluctuating workloads.”
Why Monitor Memory and CPU Utilization?
In many applications, both memory and CPU usage are critical metrics to monitor. Memory-intensive applications require additional resources to maintain performance, so scaling based on memory ensures pods are added when usage increases, preventing performance degradation due to memory pressure. Similarly, CPU utilization is essential because high CPU demand can quickly lead to processing bottlenecks. By monitoring and autoscaling based on both memory and CPU, you achieve a more holistic and balanced approach that ensures your applications have the necessary resources to operate optimally under varying workloads.
Step-by-Step Guide to Configure AKS autoscaling
Prerequisites
Before we begin, ensure you have the following:
Azure CLI installed and configured on your machine.
kubectl installed and configured to interact with your AKS cluster.
An AKS cluster up and running.
Step 1: Create a Deployment
First, create a simple NGINX deployment:
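A minimal manifest for this deployment might look like the following (the resource requests and limits are example values, but requests must be set for percentage-based HPA targets to work):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi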
Save this YAML file as nginx-deployment.yaml and apply it using kubectl:
kubectl apply -f nginx-deployment.yaml
This will create a deployment named nginx-deployment with one replica of the NGINX container.
Step 2: Create the HPA with Memory Utilization
To create an HPA that uses both CPU and memory metrics, you need to define the metrics in the HPA configuration (Define an HPA that considers both CPU and memory utilization). Save the following YAML as hpa-nginx.yaml:
To associate the Horizontal Pod Autoscaler (HPA) with the specific deployment created in Step 1 (nginx-deployment), the autoscaling YAML must specify the kind: Deployment and name: nginx-deployment within the scaleTargetRef section, as shown in the example below.
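A sketch of such an HPA using the autoscaling/v2 API (the replica counts and utilization targets are example values):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 75
Apply it with kubectl apply -f hpa-nginx.yaml.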
Step 3: Verify the HPA Configuration
Check the status of the HPA to ensure it includes both CPU and memory metrics. Use kubectl get hpa to confirm the HPA is configured correctly and includes both CPU and memory targets:
kubectl get hpa nginx-hpa
The output should display both CPU and memory utilization targets.
Step 4: Modify the HPA Configuration:
If you need to adjust the scaling parameters (e.g., minReplicas, maxReplicas, CPU/memory utilization targets), edit the hpa-nginx.yaml file, update the values, save it, and re-apply it. For example, to increase the maximum number of replicas:
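A sketch of the change (only the affected fields are shown; the rest of hpa-nginx.yaml stays the same, and the values assume the example HPA above):
spec:
  minReplicas: 1
  maxReplicas: 10   # raised from 5 in the earlier example
Re-apply the file with kubectl apply -f hpa-nginx.yaml for the change to take effect.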
Key Considerations:
Monitor HPA Behavior: Regularly monitor the HPA’s behavior using kubectl describe hpa nginx-hpa. This will provide insights into the scaling activities, current pod count, and the reasons for scaling up or down.
Fine-tune Metrics: Experiment with different CPU and memory utilization targets to find the optimal values for your application’s workload.
Consider Custom Metrics: For more complex scenarios, explore using custom metrics for autoscaling (e.g., request latency, error rates).
Conclusion:
By following these steps, you can effectively update your HPA configuration in AKS to ensure your deployments scale efficiently and effectively based on both CPU and memory utilization. By incorporating memory utilization into your AKS autoscaling strategy, you optimize resource allocation, minimize costs, and enhance application performance. This proactive approach ensures your applications seamlessly handle varying workloads while maintaining high availability and delivering an exceptional user experience. Regularly monitor your HPA metrics and adjust scaling parameters as needed to fine-tune performance and achieve optimal resource utilization.
Environment Variables in Linux are dynamic values that the operating system and various applications use to determine information about the user environment. They are essentially variables that can influence the behavior and configuration of processes and programs on a Linux system. These variables are used to pass configuration information to programs and scripts, allowing for flexible and dynamic system management.
These variables, often referred to as global variables, play a crucial role in tailoring the system’s functionality and managing the startup behavior of various applications across the system. On the other hand, local variables are restricted to, and accessible only from, the shell in which they’re created and initialized.
Linux environment variables have a key-value pair structure, separated by an equals (=) sign. Note that variable names are case-sensitive and are conventionally written in uppercase so they’re easy to identify.
Key Features of Environment Variables
Dynamic Values: They can change from session to session and even during the execution of programs.
System-Wide or User-Specific: Some variables are set globally and affect all users and processes, while others are specific to individual users.
Inheritance: Environment variables can be inherited by child processes from the parent process, making them useful for configuring complex applications.
Common Environment Variables
Here are some commonly used environment variables in Linux:
HOME: Indicates the current user’s home directory.
PATH: Specifies the directories where the system looks for executable files.
USER: Contains the name of the current user.
SHELL: Defines the path to the current user’s shell.
LANG: Sets the system language and locale settings.
Setting and Using Environment Variables
Temporary Environment Variables in Linux
You can set environment variables temporarily in a terminal session using the export command. The following sets an environment variable named MY_VAR to true for the current session and then prints it:
export MY_VAR=true
echo $MY_VAR
Example 1: Setting Single Environment Variable
For example, the following command will set the Java home environment directory.
export JAVA_HOME=/usr/bin/java
Note that you won’t get any response about the success or failure of the command. As a result, if you want to verify that the variable has been properly set, use the echo command.
echo $JAVA_HOME
The echo command will display the value if the variable has been appropriately set. If the variable has no set value, you might not see anything on the screen.
Example 2: Setting Multiple Environment Variables
You can set multiple variables in a single export command by separating the assignments with spaces, like this:
export <NAME1>=<VALUE1> <NAME2>=<VALUE2> <NAME3>=<VALUE3>
export VAR1="value1" VAR2="value2" VAR3="value3"
Example 3: Setting Multiple Values for a Single Environment Variable
You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>
The PATH variable contains a list of directories where the system looks for executable files. Multiple directories are separated by colons.
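For example (the directories added here are purely illustrative):
# Append two extra directories to the existing PATH, separated by colons
export PATH=$PATH:/opt/myapp/bin:$HOME/scripts
echo $PATH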
Permanent Environment Variables in Linux
To make MY_VAR available system-wide, follow these steps:
This command appends the line MY_VAR="true" to the /etc/environment file, which is a system-wide configuration file for environment variables.
By adding this line, you make the MY_VAR variable available to all users and sessions on the system.
The use of sudo ensures that the command has the necessary permissions to modify /etc/environment
Example 1: Setting Single Environment Variable for all USERS
export MY_VAR=true
echo 'MY_VAR="true"' | sudo tee /etc/environment -a
Breakdown of the Command
echo 'MY_VAR="true"': This command outputs the string MY_VAR="true". Essentially, echo is used to display a line of text.
| (Pipe): The pipe symbol | takes the output from the echo command and passes it as input to the next command. In this case, it passes the string MY_VAR="true" to sudo tee.
sudo: This command is used to run commands with superuser (root) privileges. Since modifying /etc/environment requires administrative rights, sudo is necessary.
tee: The tee command reads from the standard input (which is the output of the echo command in this case) and writes it to both the standard output (displaying it on the terminal) and a file.
/etc/environment: This is the file where tee will write the output. The /etc/environment file is a system-wide configuration file for environment variables.
-a: The -a (append) option tells tee to append the input to the file rather than overwriting its contents. This ensures that any existing settings in /etc/environment are preserved and the new line is simply added to the end of the file.
This command is used to add a new environment variable (MY_VAR) to the system-wide environment variables file (/etc/environment). By appending it, you ensure that the new variable is available to all users and sessions across the entire system.
Example 2: Setting Multiple Values for a Single Environment Variable for All Users
You can specify multiple values for a single variable by separating them with colons like this: <NAME>=<VALUE1>:<VALUE2>:<VALUE3>
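A sketch following the same tee approach as Example 1 (the variable name and directory values are illustrative):
echo 'MY_PATHS="/opt/app1/bin:/opt/app2/bin:/usr/local/custom"' | sudo tee /etc/environment -a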
Efficiently integrating code from external Azure DevOps repositories is crucial for collaborative projects and streamlined development workflows. This comprehensive guide provides a step-by-step approach to accessing and utilizing external repositories within your Azure DevOps pipelines (Checkout External Repositories). We’ll cover essential steps, including creating Personal Access Tokens (PATs), configuring service connections, and referencing external repositories in your YAML pipelines. By following these instructions, you’ll enhance your development process by seamlessly incorporating code from various sources across different subscriptions.
Accessing an External Azure DevOps Repository Across Subscriptions
Accessing a repository from another Azure DevOps subscription can be essential for projects where resources are distributed across different organizations or accounts. This article provides a step-by-step guide on using a Personal Access Token (PAT) and a service connection to access an external repository within an Azure DevOps pipeline. By following these instructions, you’ll be able to integrate code from another subscription seamlessly.
Where is it required?
In scenarios where you need to access resources (like repositories) that belong to a different Azure DevOps organization or subscription, you need to configure cross-subscription access. This setup is commonly required in the following situations:
Shared Repositories Across Teams: Teams working on interconnected projects in different organizations or subscriptions often need to share code. For example, a core library or shared services might be maintained in one subscription and used across multiple other projects.
Centralized Code Management: Large enterprises often centralize codebases for specific functionalities (e.g., CRM services, microservices). If your pipeline depends on these centralized repositories, you must configure access.
Multi-Subscription Projects: When an organization spans multiple Azure subscriptions, projects from one subscription might need to integrate code or services from another, necessitating secure cross-subscription access.
Dependency Management: A project may depend on another repository’s codebase (e.g., APIs, SDKs, or CI/CD templates) that resides in a different Azure DevOps subscription.
Separate Environments: Development and production environments might exist in separate subscriptions for security and compliance. For example, accessing a production-ready repository for release from a different subscription’s development repository.
Step-by-Step Guide
Step 1: Create a Personal Access Token (PAT) in External ADO
Navigate to the Azure DevOps organization containing the external repository.
Click on your profile picture in the top-right corner and select Personal Access Tokens.
Click on New Token and:
Provide a name (e.g., External Repo Access).
Set the Scope to Code (Read) (or higher if required).
Specify the expiration date.
Generate the PAT and copy it. Store it securely, as you won’t be able to view it again.
Step 2: Create a Service Connection in your ADO
A service connection allows your pipeline to authenticate with the external repository.
Go to the Azure DevOps project where you’re creating the pipeline.
Navigate to Project Settings > Service Connections.
Click on New Service Connection and select Azure Repos/Team Foundation Server.
In the setup form:
Repository URL: Enter the URL of the external repository.
Authentication Method: Select Personal Access Token.
PAT: Paste the PAT you generated earlier.
Give the service connection a name (e.g., CRM Service Connection) and save it.
Step 3: Reference the External Repository in Your Pipeline
The repository keyword lets you specify an external repository. Use a repository resource to reference an additional repository in your pipeline. Add the external repository to your pipeline configuration.
SYNTAX
repositories:
- repository: string #Required as first property. Alias for the repository.
endpoint: string #ID of the service endpoint connecting to this repository.
trigger: none | trigger | [ string ] # CI trigger for this repository (only works for Azure Repos).
name: string #repository name (format depends on 'type'; does not accept variables).
ref: string #ref name to checkout; defaults to 'refs/heads/main'. The branch checked out by default whenever the resource trigger fires.
type: string #Type of repository: git, github, githubenterprise, and bitbucket.
Update your pipeline YAML file to include:
resources:
repositories:
- repository: externalRepo
type: git
name: myexternal_project/myexternal_repo
ref: external-ProductionBranch #Branch reference
endpoint: dotnet Service Connection #Service connection name
References the external repository under resources.repositories.
name: mention your external project and Repo name
ref: Specifies the branch (external-ProductionBranch)
endpoint: service connection (dotnet Service Connection).
Step 4: Checkout the External Repository
Include a checkout step in your pipeline: This ensures the external repository is cloned into the pipeline workspace for subsequent tasks.
steps:
- checkout: externalRepo
Step 5: Define the Build Pipeline
Add steps for building and packaging the code. In my case, the external project is .NET Core, so I have added the corresponding build steps, as shown below.
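A sketch of those build steps (the .NET SDK version and project glob are assumptions; adjust them to your project layout). These tasks go under the same steps: list as the checkout step above:
- task: UseDotNet@2
  displayName: 'Install .NET SDK'
  inputs:
    packageType: 'sdk'
    version: '6.x'

- task: DotNetCoreCLI@2
  displayName: 'Restore NuGet packages'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'

- task: DotNetCoreCLI@2
  displayName: 'Build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--configuration Release'

- task: DotNetCoreCLI@2
  displayName: 'Publish artifacts'
  inputs:
    command: 'publish'
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'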
Successfully accessing and integrating external Azure DevOps repositories requires careful authentication and configuration. By following the steps outlined in this guide, including creating PATs, establishing service connections, and effectively referencing external repositories within your YAML pipelines, you can seamlessly integrate code from various sources. This streamlined approach fosters enhanced collaboration, improved efficiency, and a more robust development process for your projects.
Azure Kubernetes Service (AKS) simplifies the deployment, management, and scaling of containerized applications using Kubernetes. To interact with an AKS cluster, you need to establish a connection using kubectl, the Kubernetes command-line tool. This guide provides a step-by-step process explaining how to connect to an Azure Kubernetes cluster using kubectl.
Why Kubectl needed?
Connecting to an AKS cluster is an essential step for managing workloads, monitoring performance, and deploying applications. This process is especially critical for:
Monitoring Cluster Health: Using kubectl commands to retrieve performance metrics and check node status.
Application Deployment: Deploying and managing containerized applications in the AKS cluster.
Cluster Administration: Performing administrative tasks like scaling, updating, or debugging resources within the cluster.
Whether you’re a developer or administrator, establishing this connection ensures you can effectively manage your Kubernetes environment.
To connect to an Azure Kubernetes Service (AKS) cluster using kubectl, you will need to perform the following steps:
Prerequisites (install both Azure CLI and kubectl)
Open a command prompt and run the az login command to authenticate the CLI with your Azure account. Once you run this command, you will be prompted to enter your Azure account credentials.
az login
Steps to connect Azure AKS Cluster:
Go to Azure Portal -> Kubernetes Services -> Select the required Cluster -> Overview -> Connect -> to find the entire command for the specific cluster itself or follow the below commands one by one by replacing with subscription Id, cluster name and resource group name.
STEP 4: Set the subscription
To set subscription, run
az account set --subscription 95fe7-8d2c-4297-ad8b-a8eb08322955
STEP 5: Generate the kubeconfig file
Run the az aks get-credentials command to connect to your AKS cluster. The get-credentials command downloads credentials and configures the Kubernetes CLI to use them.
#Syntax:
az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>
Replace <resource-group-name> with the name of the resource group that contains your AKS cluster, and <cluster-name> with the name of your AKS cluster.
az aks get-credentials --resource-group rg-dgtl-pprd-we-01 --name aks-dgtl-pprd-we-01
The above command creates the kubeconfig file in the user's home directory (~/.kube/config by default). To write the kubeconfig file to a specific location, use the --file parameter.
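For example (the output path below is just an illustration):
#Syntax
az aks get-credentials --resource-group <resource-group-name> --name <cluster-name> --file <path-to-kubeconfig>
#Example
az aks get-credentials --resource-group rg-dgtl-pprd-we-01 --name aks-dgtl-pprd-we-01 --file ./aks-dgtl-pprd-we-01.kubeconfig
Point kubectl at that file by setting the KUBECONFIG environment variable or by passing --kubeconfig on each kubectl command.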
STEP 6: Verify the connection
To verify that Kubectl is connected to your AKS cluster, run the Kubectl get nodes command. This command should display a list of the nodes in your AKS cluster. If the command returns the list of nodes, then you have successfully connected to your AKS cluster using Kubectl.
kubectl get nodes
Points to Remember
Go to Azure Portal -> Kubernetes Services -> Select the required Cluster -> Overview -> Connect -> to find the entire command for the specific cluster itself or follow the below commands one by one by replacing with subscription Id, cluster name and resource group name.
First, we need to log in to the Azure account by configuring the Azure subscription and login details. Only after connecting to the Kubernetes cluster can we run kubectl commands.
Conclusion
Connecting to an AKS cluster using kubectl is a fundamental skill for managing Kubernetes workloads in Azure. By following this guide, you can authenticate, configure, and verify your connection to the AKS cluster seamlessly. This enables you to monitor cluster performance, deploy applications, and manage resources effectively.
As Kubernetes continues to be a vital platform for container orchestration, mastering tools like kubectl and Azure CLI is essential for efficient cluster management.
Azure Key Vault is a secure cloud service for managing secrets, encryption keys, and certificates. In modern multi-region deployments, ensuring that application secrets are consistently available across regions is essential for high availability and disaster recovery. However, manually copying secrets from one Key Vault to another can be tedious, error-prone, and time-consuming, especially when dealing with numerous secrets.
This blog post demonstrates how to automate the process of copying secrets from one Azure Key Vault to another using a PowerShell script. By following this guide, you can efficiently replicate secrets between regions, ensuring consistency and reducing manual intervention.
Use Case:
In our application setup, we aimed to configure high availability by deploying the application in two Azure regions. The primary Key Vault in region 1 contained numerous secrets, which we needed to replicate to the Key Vault in region 2. Manually moving each secret one by one was impractical and error-prone.
To overcome this, we developed an automated process using PowerShell to copy all secrets from the source Key Vault to the destination Key Vault. This approach eliminates human errors, saves time, and ensures seamless secret replication for high availability.
This blog will help you understand how to copy secrets from one Key Vault to another in Azure using a PowerShell script.
To clone a secret between key vaults, we need to perform two steps:
Retrieve/export the secret value from the source key vault.
Import this value into the destination key vault.
You can also refer to the linked article to learn how to maintain your secrets in Key Vault and access them in a YAML pipeline.
Step 1: Install the Azure PowerShell module
Use the below cmdlet to install the Azure PowerShell module if it is not already installed.
# Install the Azure PowerShell module if not already installed
Install-Module -Name Az -Force -AllowClobber
Step 2: Set Source and destination Key Vault name
# Pass both Source and destination Key Vault Name
Param( [Parameter(Mandatory)]
[string]$sourceKvName,
[Parameter(Mandatory)]
[string]$destinationKvName )
Step 3: Connect to Azure to access the Key Vault (non-interactive mode)
Since this runs as part of an automation, you can't use Connect-AzAccount (which opens a pop-up to authenticate). To execute without any manual intervention, use az login with a service principal in non-interactive mode, as shown below.
# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "0ff3664821-0c94-48e0-96b5-7cd6422f46" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"
Step 4: Get the all the secrets name from the source KV
# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name
Step 5: Copy Secrets From source to destination KV.
The below script loops through the secret names, fetches each secret's value from the source Key Vault, and sets the same name/value pair in the destination Key Vault.
# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
-SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}
Full code
# Pass both Source and destination Key Vault Name
Param(
[Parameter(Mandatory)]
[string]$sourceKvName,
[Parameter(Mandatory)]
[string]$destinationKvName
)
# Connect to Azure portal (you can also use Connect-AzAccount)
az login --service-principal -u "422f464821-0c94-48e0-96b5-7cd60ff366" -p "XACccAV2jXQrNks6Lr3Dac2B8z95BAt~MTCrP" --tenant "116372c23-ba4a-223b-0339-ff8ba7883c2"
# Get all the Source Secret keys
$secretNames = (Get-AzKeyVaultSecret -VaultName $sourceKvName).Name
# Loop the Secret Names and copy the key/value pair to the destination key vault
$secretNames.foreach{
Set-AzKeyVaultSecret -VaultName $destinationKvName -Name $_ `
-SecretValue (Get-AzKeyVaultSecret -VaultName $sourceKvName -Name $_).SecretValue
}
Conclusion
Managing secrets across multiple Azure regions can be challenging but is crucial for ensuring high availability and disaster recovery. Automating the process of copying secrets between Key Vaults not only streamlines the operation but also enhances reliability and reduces the risk of errors.
By following the steps outlined in this blog, you can easily replicate secrets between Azure Key Vaults using PowerShell. This solution ensures that your applications in different regions are configured with consistent and secure credentials, paving the way for robust and scalable deployments.
Implement this process to save time, minimize errors, and focus on scaling your applications while Azure handles secure secret management for you.
Manipulating text is a common task in any scripting or programming language, and PowerShell is no exception. Whether you need to replace a single word, a pattern, or handle more complex transformations using regular expressions, PowerShell provides two powerful tools for the job: the .Replace() method and the -replace operator.
In this article, we’ll explore the differences between these two approaches, highlight their strengths, and demonstrate practical examples to help you choose the right one for your needs. From simple string substitutions to advanced regex-based replacements, you’ll learn how to effectively use these features to handle text in PowerShell scripts with ease.
By the end of this tutorial, you’ll understand:
When to use the .Replace() method for straightforward replacements.
How the -replace operator leverages the power of regular expressions for more complex scenarios.
Common pitfalls and tips for working with special characters.
Let’s dive into how you can make the most of these string replacement techniques in your PowerShell scripts!
-replace operator and .Replace() method
In this post, you’re going to learn where to use the PowerShell replace() method and PowerShell replace operator. The tutorial will cover the basics and even drive into some regular expressions.
.Replace() is a .NET method and -replace is a PowerShell operator that uses regular expressions. In other words, the .Replace() method comes from the .NET String class, whereas the -replace operator is implemented using System.Text.RegularExpressions.Regex.Replace().
The -replace operator takes a regular expression (regex) replacement rule as input and replaces every match with the replacement string.
When and where to use it?
Like other languages, PowerShell can work with strings and text. One of its useful features is the ability to replace characters, strings, or even text inside files. In PowerShell, the Replace() method and the -replace operator are used to find specified characters and replace them with a new string. To perform simple replacements, you can use the replace() method, but if you need to match and replace anything more advanced, always use the -replace operator.
.Replace Method:
Example 1: Replace characters in strings.
$string = 'hello, dotnet-helpers.com'
In the above code, we defined the string we’d like to modify. From that string value, we would like to replace “hello” with “Welcome”. To do this in PowerShell, you first need to find the matching text; once it’s found, you replace it with the user-defined value.
The replace() method takes two arguments: the string to find and the string to replace it with. As shown below, the “hello” string is replaced with “Welcome”.
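Putting that together (a short sketch using the string defined above):
$string.Replace('hello','Welcome')
# Output: Welcome, dotnet-helpers.com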
Points to Remember:
You can call the replace() method on any string to replace any literal string with another. If the string to be replaced isn’t found, the replace() method simply returns the original string unchanged.
Example 2: Replacing multiple strings
Because the replace() method returns a string, you can append another replace() call to the end to replace a further instance (e.g., .replace('dotnet-helpers','dotnet-helpers.com!!!')). In the previous example we replaced “hello” with “Welcome”; in this example we chain one more .replace() call to replace another string, as shown below.
You can chain together as many replace() method calls as necessary
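A sketch of chaining (using a slightly shorter input string so the chained result reads cleanly):
$string = 'hello, dotnet-helpers'
$string.Replace('hello','Welcome').Replace('dotnet-helpers','dotnet-helpers.com!!!')
# Output: Welcome, dotnet-helpers.com!!!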
-Replace Operator:
The replace operator is similar to the .Replace method (in that you provide a string to find and replace). But, it has one big advantage; the ability to use regular expressions to find matching strings.
Replacing strings in PowerShell with the replace() method works but it’s limited. You are constrained to only using literal strings. You cannot use wildcards or regex. If you’re performing any kind of intermediate or advanced replacing, you should use the replace operator.
The -replace operator takes a regex (regular expression) replacement rule as input and replaces every match with the replacement string. The operator itself is used as shown in the following examples: <input string> -replace <replacement rule>, <replacement string>
Example 1: With Simple Regex
In this example, you can use the expression hello|hi to match both required strings using the regex “or” (|) character. The regex matches a string like “hello” or “hi”, and if a match is found it is replaced with the given string, as you can see below.
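For example:
'hi, dotnet-helpers.com' -replace 'hello|hi', 'Welcome'
# Output: Welcome, dotnet-helpers.com
'hello, dotnet-helpers.com' -replace 'hello|hi', 'Welcome'
# Output: Welcome, dotnet-helpers.com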
As per the below example, you need to replace text in a string. That string contains a couple of regex special characters like a bracket and an exclamation mark. If you try to replace the string [dotnethelpers!] with dotnet-helpers.com as shown below, then it will not work as expected because the characters will have special meaning in regex language.
The problem is you often need to replace “[]”, “!” or other characters that have special meaning in regex language. One way to achieve this is to escape every special character by “\”.
To overcome this problem, you have two options. You can either escape these special characters by prepending a backslash to the front of each character, or use the Escape() method ([regex]::Escape('[dotnethelpers]')).
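A sketch showing the problem and both fixes (the input string is illustrative):
$string = 'Welcome to [dotnethelpers!]'
# Without escaping, the brackets form a regex character class, so individual letters get replaced instead of the bracketed text
$string -replace '[dotnethelpers!]', 'dotnet-helpers.com'
# Escaping the special characters manually
$string -replace '\[dotnethelpers!\]', 'dotnet-helpers.com'
# Or letting .NET escape the pattern for you
$string -replace [regex]::Escape('[dotnethelpers!]'), 'dotnet-helpers.com'
# The last two both return: Welcome to dotnet-helpers.com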
Points to Remember:
If you try to replace any special characters directly from the string using Replace operator then it won’t work correctly as characters will have special meaning in regex language.
Conclusion :
Replacing characters or words in a string with PowerShell is easily done using either the replace() method or the -replace operator. When working with special characters, like [ ], \ or $ symbols, it’s often easier to use the replace() method than the operator variant, because this way you don’t need to escape the special characters.
To perform simple replacements, you can use the replace() method, but if you need to match and replace anything more advanced, always use the -replace operator.