URL Monitoring Using Bash Shell Script

In today’s digital age, ensuring the availability and performance of websites and APIs is crucial for maintaining a seamless user experience. One way to achieve this is through regular monitoring. In this article, we will explore a simple yet powerful method of monitoring URLs and API endpoints using a Bash script in Linux. We’ll break down each step of the script, explain its functionality, and provide insights into its syntax.

Understanding the Bash Script

The Bash script presented in this article is designed to monitor the health of various URLs, covering both websites and API endpoints. It uses the curl command-line tool to send requests and retrieve responses from the specified URLs, then analyzes the HTTP response codes to determine whether each URL is operational or experiencing issues.
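You can try the core check manually before running the full script. Against a reachable site, the command below prints only the status code (200 is a typical result; the exact value depends on the site you test against):

curl -s -o /dev/null -w "%{http_code}\n" -m 10 "http://www.example.com"
200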

Let’s dive into the script step by step:

1. Defining URLs to Monitor:
The script begins by declaring an array named URLS, containing the URLs that need to be monitored. You can customize this list by adding or removing URLs as needed; both HTTP and HTTPS URLs are supported.

URLS=("http://www.example.com" "http://www.example.org" "https://api.example.com/users" "https://api.example.com/orders")

2. Looping Through URLs and Monitoring:

The script then enters a loop that iterates over each URL in the URLS array. It performs the following tasks for each URL:

a. It uses the curl command with specific options to send a request to the URL. The -s option runs curl in silent mode, suppressing the progress meter. The -o /dev/null option discards the response body. The -w "%{http_code}" option tells curl to print only the HTTP response code. The -m 10 option sets a maximum timeout of 10 seconds for the request.

b. The HTTP response code is captured in the RESPONSE variable using command substitution.

for URL in "${URLS[@]}"
do
    RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" -m 10 "$URL")

3. Analyzing the Response and Reporting:

Once the response code is obtained, the script evaluates whether it indicates success (HTTP code 200-399) or a problem (HTTP code 400 or higher, or a code below 200 such as the 000 that curl reports when the connection fails). Depending on the result, the script prints a message indicating the status of the URL.

if [ "$RESPONSE" -ge 200 ] && [ "$RESPONSE" -lt 400 ]
then
    echo "$URL is up and running. Status code: $RESPONSE"
else
    echo "$URL is down. Status code: $RESPONSE"
fi

Let’s combine all the parts into a single script:

#!/bin/bash
# Define URLs to monitor
URLS=("http://www.example.com" "http://www.example.org" "https://api.example.com/users" "https://api.example.com/orders")
# Loop through URLs and monitor with curl
for URL in "${URLS[@]}"
do
    RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" -m 10 "$URL")
    # Check if response code indicates success or an error
    if [ "$RESPONSE" -ge 200 ] && [ "$RESPONSE" -lt 400 ]
    then
        echo "$URL is up and running. Status code: $RESPONSE"
    else
        echo "$URL is down. Status code: $RESPONSE"
    fi
done
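Save the script (the filename url_monitor.sh here is just an example), make it executable, and run it:

chmod +x url_monitor.sh
./url_monitor.sh

The output will look something like this, with the actual status codes depending on the URLs you monitor:

http://www.example.com is up and running. Status code: 200
https://api.example.com/users is down. Status code: 000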

Conclusion

In this article, we’ve explored a Bash script that empowers you to monitor the health of websites and API endpoints effortlessly. By using the curl command-line tool and analyzing HTTP response codes, you can quickly identify potential issues and take timely actions to ensure a smooth user experience. The script’s modular structure and simplicity make it a valuable addition to any developer’s toolkit.

Feel free to customize the URLS array to suit your monitoring needs, whether it’s keeping tabs on critical web pages or tracking the availability of essential API endpoints. Regular monitoring using scripts like this can play a significant role in maintaining the reliability and performance of your online services.
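For truly regular monitoring, you can schedule the script with cron. For example, the following crontab entry (the script and log paths are illustrative) runs the check every five minutes and appends the results to a log file:

*/5 * * * * /usr/local/bin/url_monitor.sh >> /var/log/url_monitor.log 2>&1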

FAQs

1: How do I customize the URLs to be monitored? 

You can modify the URLS array in the script to include the URLs you want to monitor. Simply add or remove URLs within the parentheses, keeping each URL enclosed in double quotes and separated by a space.
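For example, to monitor a hypothetical site and its health endpoint:

URLS=("https://www.mysite.com" "https://api.mysite.com/health")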

2: What does the -s option do in the curl command? 

The -s (silent) option suppresses curl’s progress meter and error messages, so the status code printed by -w is the only output. This makes the script output cleaner and more focused. If you want curl to stay silent but still show error messages, you can combine it with -S (as -sS).

3: What is the purpose of the -o /dev/null option in the curl command? 

The -o /dev/null option directs the output of the curl command to the null device, effectively discarding the response body. Since we’re interested in the HTTP response code, using this option saves resources and makes the script more efficient.
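If you also want basic timing information, curl’s -w format string accepts additional variables. For example, this prints the status code followed by the total request time in seconds:

curl -s -o /dev/null -w "%{http_code} %{time_total}\n" -m 10 "http://www.example.com"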

4: How does the script determine if a URL is down? 

The script checks the HTTP response code obtained from the curl command’s output. If the response code is between 200 and 399, it signifies success and the URL is reported as up. Any other value, whether 400 or higher or a code below 200 (curl prints 000 when the connection fails entirely), causes the script to report the URL as down.
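If you want a more specific message for connection failures, a minimal sketch of an extended check (using the same RESPONSE and URL variables as the script above) might look like this:

# curl reports 000 when no HTTP response was received at all
if [ "$RESPONSE" = "000" ]
then
    echo "$URL is unreachable (connection failed or timed out)."
elif [ "$RESPONSE" -ge 200 ] && [ "$RESPONSE" -lt 400 ]
then
    echo "$URL is up and running. Status code: $RESPONSE"
else
    echo "$URL is down. Status code: $RESPONSE"
fi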

Nitin Kumar

Nitin Kumar is a skilled DevOps Engineer with a strong passion for the finance domain, specializing in automating processes and improving software delivery. With expertise in cloud technologies and CI/CD pipelines, he is adept at driving efficiency and scalability in complex IT environments.