Python URL Checker Script Example
This example shows a simple Python script that checks whether URLs respond successfully.
You will learn how to:
- check one or more URLs from Python
- send an HTTP request to each URL
- read the response status code
- handle errors without stopping the whole script
This is a practical beginner-friendly example. The goal is to understand the script, run it, and then make small improvements.
Quick example
import requests
urls = [
    "https://www.python.org",
    "https://example.com",
    "https://this-url-does-not-exist-12345.com"
]

for url in urls:
    try:
        response = requests.get(url, timeout=5)
        print(url, response.status_code)
    except requests.RequestException as e:
        print(url, "ERROR:", e)
This is the fastest working example. It checks each URL and prints either the status code or an error.
What this example does
This script:
- checks one or more URLs from Python
- sends an HTTP GET request to each URL
- shows the response status code
- handles connection errors without stopping the whole script
A URL checker like this is useful when you want to test links, verify website pages, or learn how Python works with web requests.
What you need before running it
Before you run the script, make sure you have:
- Python installed
- an internet connection
- the requests package installed
- a basic understanding of loops and try-except
If you are new to error handling, see using try-except, else, and finally in Python.
Install the requests package
This example uses the requests package.
Install it with:
pip install requests
Why this matters:
- requests is not part of Python's standard library
- if it is missing, import requests will fail
- you may see a ModuleNotFoundError
You can check whether it is installed with:
pip show requests
If you get an import error, see how to fix ModuleNotFoundError: No module named X.
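You can also check from inside Python rather than with pip. The standard library's importlib can tell you whether a module is available without importing it; this snippet is an extra check for illustration, not part of the original script:

```python
import importlib.util

# find_spec returns None when the module cannot be found
if importlib.util.find_spec("requests") is None:
    print("requests is not installed; run: pip install requests")
else:
    print("requests is available")
```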
How the script works
Here is the same script again:
import requests
urls = [
    "https://www.python.org",
    "https://example.com",
    "https://this-url-does-not-exist-12345.com"
]

for url in urls:
    try:
        response = requests.get(url, timeout=5)
        print(url, response.status_code)
    except requests.RequestException as e:
        print(url, "ERROR:", e)
1. Create a list of URLs
urls = [
    "https://www.python.org",
    "https://example.com",
    "https://this-url-does-not-exist-12345.com"
]
This list stores the URLs you want to check.
Each item is a string. Later, the loop will process them one by one.
2. Loop through the list
for url in urls:
This line goes through the list one URL at a time.
If the list has 3 URLs, the loop runs 3 times.
3. Send a request with requests.get()
response = requests.get(url, timeout=5)
This sends an HTTP GET request to the current URL.
Important parts:
- url is the web address to check
- timeout=5 means "stop waiting after 5 seconds"
Without a timeout, your script might wait too long if a site is slow or not responding.
If you want to learn more about making web requests, see how to make an API request in Python.
4. Read the status code
print(url, response.status_code)
The response.status_code value tells you how the server responded.
Example output:
https://www.python.org 200
https://example.com 200
https://this-url-does-not-exist-12345.com ERROR: HTTPSConnectionPool(...)
5. Catch request errors
except requests.RequestException as e:
    print(url, "ERROR:", e)
This catches network-related problems, such as:
- connection failures
- timeout errors
- invalid domains
- other request problems
The script keeps running even if one URL fails. That is why try-except is useful here.
For more about working with responses, see how to handle API responses in Python.
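If you want to report different kinds of failures differently, requests also lets you catch more specific exceptions before the general one. Timeout and ConnectionError are both subclasses of RequestException, so the order of the except blocks matters. The check function below is a sketch of that idea, not part of the original script:

```python
import requests

def check(url):
    """Return a short label describing what happened for one URL."""
    try:
        response = requests.get(url, timeout=5)
        return f"status {response.status_code}"
    except requests.Timeout:
        # the server did not respond within 5 seconds
        return "timed out"
    except requests.ConnectionError:
        # DNS failure, refused connection, no network, and so on
        return "connection failed"
    except requests.RequestException as e:
        # any other problem requests can raise
        return f"other error: {e}"
```

Because Timeout and ConnectionError inherit from RequestException, the final except block still catches anything the first two miss.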
Understanding common status codes
A status code is a number returned by the server.
Common ones include:
- 200 means success
- 301 or 302 means redirect
- 404 means page not found
- 500 means server error
Example:
import requests
response = requests.get("https://example.com", timeout=5)
print(response.status_code)
Possible output:
200
A status code tells you what happened at a high level, but not always the full reason.
For example:
- 404 means the server was reached, but the page was not found
- this is different from a connection error, where Python could not reach the server at all
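The categories above can be collected into a small helper function. This is just an illustration of how the ranges map to meanings; the original script does not need it:

```python
def describe_status(code):
    """Map an HTTP status code to a rough human-readable category."""
    if 200 <= code < 300:
        return "success"
    if code in (301, 302):
        return "redirect"
    if code == 404:
        return "page not found"
    if 500 <= code < 600:
        return "server error"
    return "other"

print(describe_status(200))  # success
print(describe_status(404))  # page not found
```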
Beginner improvements to the script
Once the basic version works, you can improve it in simple ways.
You could:
- read URLs from a text file instead of hard-coding them
- save results to a CSV file
- mark working and broken URLs clearly
- skip blank lines
- check redirects if needed
Here is a small improved version that labels results more clearly:
import requests
urls = [
    "https://www.python.org",
    "https://example.com",
    "",
    "https://this-url-does-not-exist-12345.com"
]

for url in urls:
    if not url:
        continue
    try:
        response = requests.get(url, timeout=5)
        if response.status_code == 200:
            print(url, "- WORKING")
        else:
            print(url, f"- STATUS {response.status_code}")
    except requests.RequestException as e:
        print(url, "- ERROR:", e)
This version:
- skips blank values
- marks 200 responses as working
- still shows other status codes and errors
If you later want to load URLs from files, the Python os module overview can help when working with file paths.
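As a first step toward loading URLs from a file, a helper like the one below reads one URL per line and skips blank lines. The function name and the filename in the usage comment are made up for illustration:

```python
def load_urls(path):
    """Read URLs from a text file, one per line, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Example usage: urls = load_urls("urls.txt")
```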
Common problems when checking URLs
Here are common issues beginners run into:
- ModuleNotFoundError if requests is not installed
- timeout errors if the site is slow
- connection errors if the domain is wrong
- some websites block automated requests
- a valid domain can still return a 404 page
Also remember:
- the URL should usually include http:// or https://
- no internet connection means the request cannot succeed
- no timeout can make the script wait too long
Useful commands for debugging:
python --version
pip install requests
pip show requests
python your_script.py
ping python.org
What this example does not cover
This example is intentionally simple.
It does not cover:
- advanced HTTP concepts
- asynchronous requests
- authentication
- large-scale website monitoring
That keeps the script easier to understand if you are still learning the basics.
FAQ
Do I need the requests library for a URL checker?
For this example, yes. It makes HTTP requests easier for beginners.
Why does a URL return 404 instead of causing an exception?
Because the server responded successfully. A 404 is still a real HTTP response.
What is the difference between a status code and an error?
A status code comes from the server. An error usually means Python could not complete the request.
Can I check many URLs at once?
Yes. Store them in a list or file and loop through them.